Intelligence explosion in organizations, or why I'm not worried about the singularity

post by sbenthall · 2012-12-27T04:32:32.918Z · LW · GW · Legacy · 187 comments

Contents

  Smart organizations
  Mean organizations

If I understand the Singularitarian argument espoused by many members of this community (eg. Muehlhauser and Salamon), it goes something like this:

  1. Machine intelligence is getting smarter.
  2. Once an intelligence becomes sufficiently supra-human, its instrumental rationality will drive it towards cognitive self-enhancement (Bostrom), making it a super-powerful, resource-hungry superintelligence.
  3. If a superintelligence isn't sufficiently human-like or 'friendly', that could be disastrous for humanity.
  4. Machine intelligence is unlikely to be human-like or friendly unless we take precautions.
I am not particularly worried about the scenario envisioned in this argument.  I think that my lack of concern is rational, so I'd like to try to convince you of it as well.*

It's not that I think the logic of this argument is incorrect so much as I think there is another related problem that we should be worrying about more.  I think the world is already full of probably unfriendly supra-human intelligences that are scrambling for computational resources in a way that threatens humanity.

I'm in danger of getting into politics.  Since I understand that political arguments are not welcome here, I will refer to these potentially unfriendly supra-human intelligences broadly as organizations.

Smart organizations

By "organization" I mean something commonplace, with a twist.  It's commonplace because I'm talking about a bunch of people coordinated somehow. The twist is that I want to include the information technology infrastructure used by that bunch of people within the extension of "organization". 

Do organizations have intelligence?  I think so.  Here are some of the reasons why:

  1. We can model human organizations as having preference functions. (Economists do this all the time)
  2. Human organizations have a lot of optimization power.

I talked with Mr. Muehlhauser about this specifically. I gather that at least at the time he thought human organizations should not be counted as intelligences (or at least as intelligences with the potential to become superintelligences) because they are not as versatile as human beings.

So when I am talking about super-human intelligence, I specifically mean an agent that is as good as or better than humans at just about every skill set that humans possess for achieving their goals. So that would include not just things like mathematical ability or theorem proving and playing chess, but also things like social manipulation and composing music and so on, which are all functions of the brain, not the kidneys.

...and then...

It would be a kind of weird [organization] that was better than the best human or even the median human at all the things that humans do. [Organizations] aren’t usually the best in music and AI research and theorem proving and stock markets and composing novels. And so there certainly are [organizations] that are better than median humans at certain things, like digging oil wells, but I don’t think there are [organizations] as good or better than humans at all things. More to the point, there is an interesting difference here because [organizations] are made of lots of humans and so they have the sorts of limitations on activities and intelligence that humans have. For example, they are not particularly rational in the sense defined by cognitive science. And the brains of the people that make up organizations are limited to the size of skulls, whereas you can have an AI that is the size of a warehouse.

I think that Muehlhauser is slightly mistaken on a few subtle but important points.  I'm going to assert my position on them without much argument because I think they are fairly sensible, but if any reader disagrees I will try to defend them in the comments.

In summary, organizations often have the kinds of skills necessary to achieve their goals, and can be vastly better at them than individual humans. Many have the skills necessary for their own cognitive enhancement, since if they are able to raise funding they can purchase computational resources and fund artificial intelligence research. More mundanely, organizations of all kinds hire analysts and use analytic software to make instrumentally rational decisions.

In sum, many organizations are of supra-human intelligence and strive actively to enhance their cognitive powers.

Mean organizations


Suppose the premise that there are organizations with supra-human intelligence that act to enhance their cognitive powers.  And suppose the other premises of the Singularitarian argument outlined at the beginning of this post.

Then it follows that we should be concerned if one or more of these smart organizations are so unlike human beings in their motivational structure that they are 'mean'.

I believe the implications of this line of reasoning may be profound, but as this is my first post to LessWrong I would like to first see how this is received before going on.

* My preferred standard of rationality is communicative rationality, a Habermasian ideal of a rationality aimed at consensus through principled communication.  As a consequence, when I believe a position to be rational, I believe that it is possible and desirable to convince other rational agents of it.

187 comments

Comments sorted by top scores.

comment by gwern · 2012-12-27T20:44:17.337Z · LW(p) · GW(p)

Organizations are highly disanalogous to potential AIs, and suffer from severe diminishing returns: http://www.nytimes.com/2010/12/19/magazine/19Urban_West-t.html?reddit=&pagewanted=all&_r=0

As West notes, Hurricane Katrina couldn’t wipe out New Orleans, and a nuclear bomb did not erase Hiroshima from the map. In contrast, where are Pan Am and Enron today? The modern corporation has an average life span of 40 to 50 years. This raises the obvious question: Why are corporations so fleeting? After buying data on more than 23,000 publicly traded companies, Bettencourt and West discovered that corporate productivity, unlike urban productivity, was entirely sublinear. As the number of employees grows, the amount of profit per employee shrinks. West gets giddy when he shows me the linear regression charts. “Look at this bloody plot,” he says. “It’s ridiculous how well the points line up.” The graph reflects the bleak reality of corporate growth, in which efficiencies of scale are almost always outweighed by the burdens of bureaucracy. “When a company starts out, it’s all about the new idea,” West says. “And then, if the company gets lucky, the idea takes off. Everybody is happy and rich. But then management starts worrying about the bottom line, and so all these people are hired to keep track of the paper clips. This is the beginning of the end.” The danger, West says, is that the inevitable decline in profit per employee makes large companies increasingly vulnerable to market volatility. Since the company now has to support an expensive staff — overhead costs increase with size — even a minor disturbance can lead to significant losses. As West puts it, “Companies are killed by their need to keep on getting bigger.”

Yet they rule the world anyway.

Replies from: tgb, Bugmaster, sbenthall
comment by tgb · 2012-12-28T15:40:18.767Z · LW(p) · GW(p)

But then management starts worrying about the bottom line and so all these people are hired to keep track of the paper clips. This is the beginning of the end.

And so LessWrong has been proved correct that paperclips will be the end of us all.

comment by Bugmaster · 2012-12-28T02:55:07.483Z · LW(p) · GW(p)

I may be wrong, but don't all distributed systems suffer from diminishing returns in this way ? For example, doubling the number of CPUs in a computing cluster does not allow you to solve your calculations twice as quickly. Your overhead, such as control infrastructure and plain old network latency, increases faster than linearly with every CPU you add, and eventually outgrows the useful processing power you can get out of new CPUs.

This is one of the many reasons why I'm not worried about the Singularity...
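To make that overhead argument concrete, here is a toy Python sketch (my own, with made-up constants, not anything from the thread): each added CPU contributes useful work but also pairwise coordination cost, so the effective speedup flattens and eventually declines.

    # Toy model (illustrative constants only): each CPU adds useful work,
    # but coordination overhead grows roughly quadratically with cluster size,
    # so the effective speedup flattens and eventually declines.
    def effective_speedup(n_cpus, work_per_cpu=1.0, overhead_per_pair=1e-4):
        useful = n_cpus * work_per_cpu
        overhead = overhead_per_pair * n_cpus * (n_cpus - 1) / 2
        return useful / (1.0 + overhead)

    for n in (1, 10, 100, 1000, 10000):
        print(f"{n:>6} CPUs -> effective speedup {effective_speedup(n):.1f}x")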

Replies from: gwern, jbeshir, timtyler, V_V
comment by gwern · 2012-12-28T04:01:27.528Z · LW(p) · GW(p)

I may be wrong, but don't all distributed systems suffer from diminishing returns in this way ?

Just to point out the obvious, the link itself covers a case of sublinear scaling: cities. So no, not all 'distributed systems' so suffer...

Replies from: Bugmaster
comment by Bugmaster · 2012-12-28T13:31:17.446Z · LW(p) · GW(p)

Don't you mean, "superlinear" ? But you're right, I should've read the full linked article before commenting. Now that I've read it, though, I am somewhat less than impressed. Here's one reason for that:

In fact, West’s paper in Science ignited a flurry of rebuttals, in which researchers pointed out all the species that violated the math. West can barely hide his impatience with what he regards as quibbles. “There are always going to be people who say, ‘What about the crayfish?’ ” he says. “Well, what about it? Every fundamental law has exceptions. But you still need the law or else all you have is observations that don’t make sense. And that’s not science. That’s just taking notes.”

Um. If your "fundamental law" has all these exceptions, that's a good hint that maybe it isn't as fundamental as you thought. The law of gravity doesn't have exceptions. And no, it's not always better to "have the law". Sometimes it is, for practical reasons, and sometimes it's better to devise a better law that doesn't give you so many false positives.

The article goes on to describe the superlinear growth of efficiency in cities, and notes (correctly, IMO) that it cannot be sustained forever:

Because our lifestyle has become so expensive to maintain, every new resource now becomes exhausted at a faster rate. This means that the cycle of innovations has to constantly accelerate, with each breakthrough providing a shorter reprieve...

But I think one point that the article is missing is that cities don't exist in a vacuum. As a city grows, it requires more food (which can't be grown efficiently inside the city), more highways (connecting it with its neighbours), etc. If we ignore all of that, we get superlinear scaling; but my guess is that if we include it, we would get sublinear scaling as usual -- in terms of overall economic output per single human.

Replies from: gwern
comment by gwern · 2012-12-28T16:55:12.325Z · LW(p) · GW(p)

Um. If your "fundamental law" has all these exceptions, that's a good hint that maybe it isn't as fundamental as you thought. The law of gravity doesn't have exceptions. And no, it's not always better to "have the law". Sometimes it is, for practical reasons, and sometimes it's better to devise a better law that doesn't give you so many false positives.

You're missing the point too. Even gravity has exceptions - yes, really, this is a standard topic in philosophy of science because the Laws Of Gravity are so clear, yet in practice they are riddled with exceptions and errors. We have errors so large that Newtonians were forced to postulate entire planets to explain them (not all of which turned out as well as Uranus, Neptune, and Pluto), we have errors which took centuries to be winkled out, and of course errors like Mercury which ultimately could be explained only by an entirely new theory.

And we're talking about real-world statistics: has there ever been a sociology, economics, or biological allometry paper where every single data point was predicted perfectly without any error whatsoever? (If you think this, then perhaps you should consult Tukey and Cohen on how 'the null hypothesis is always false'.)

If we ignore all of that, we get superlinear scaling; but my guess is that if we include it, we would get sublinear scaling as usual -- in terms of overall economic output per single human.

Absolutely; if you measure in certain ways, diminishing returns has clearly set in for humanity. And yet, compared to hunter-gatherers, we might as well be a Singularity.

What does this tell you about the relevance of diminishing returns to Singularity discussions? (Chalmers's Singularity paper deals with this very question, IIRC, if you are interested in a pre-existing discussion.)

Replies from: Bugmaster, army1987, Bugmaster
comment by Bugmaster · 2013-01-01T12:38:11.933Z · LW(p) · GW(p)

Even gravity has exceptions - yes, really, this is a standard topic in philosophy of science because the Laws Of Gravity are so clear, yet in practice they are riddled with exceptions and errors

In addition to what the others said on this thread, I'd like to say that my main problem was with the author's attitude, not the accuracy of his proposed law -- though the fact that it apparently has glaring holes in it doesn't really help. When you discover that your law has huge exceptions (such as f.ex. "all crustaceans" or "Mercury"), the thing to do is to postulate hidden planets, or discover relativity, or introduce a term representing dark energy, or something. The thing not to do is to say, "oh well, every law has exceptions, this is good enough for me, case closed ! Let's pretend that crustaceans don't exist, we're done".

And we're talking about real-world statistics: has there ever been a sociology, economics, or biological allometry paper where every single data point was predicted perfectly without any error whatsoever?

I'm not sure what you're referring to; of course, no one expects any line to have a correlation of 1.0 at all times. That'd be silly. However, it is almost equally as silly to take a few data points, and extrapolate them far into the future without any concern for what you're doing. Ultimately, you can draw a straight line through any two points, but that doesn't mean that a child will be over 5m tall at age 20 just because he grew 25cm in a year.

Absolutely; if you measure in certain ways, diminishing returns has clearly set in for humanity. And yet, compared to hunter-gatherers, we might as well be a Singularity.

How so ? Perhaps more importantly, if "diminishing returns has clearly set in for humanity" as you say, then what does that tell you for our prospects of bringing about the actual Singularity ?

Replies from: gwern
comment by gwern · 2013-01-01T18:52:40.219Z · LW(p) · GW(p)

In addition to what the others said on this thread, I'd like to say that my main problem was with the author's attitude, not the accuracy of his proposed law -- though the fact that it apparently has glaring holes in it doesn't really help. When you discover that your law has huge exceptions (such as f.ex. "all crustaceans" or "Mercury"), the thing to do is to postulate hidden planets, or discover relativity, or introduce a term representing dark energy, or something. The thing not to do is to say, "oh well, every law has exceptions, this is good enough for me, case closed ! Let's pretend that crustaceans don't exist, we're done".

Well, that's useful advice to the Newtonians, alright - 'hey guys, why did you let the Mercury anomaly linger for decades/centuries? All you had to do was invent relativity! Just ask Bugmaster!'

I wasn't aware West had retired and was eagerly awaiting his Nobel phone call.

However, it is almost equally as silly to take a few data points, and extrapolate them far into the future without any concern for what you're doing. Ultimately, you can draw a straight line through any two points, but that doesn't mean that a child will be over 5m tall at age 20 just because he grew 25cm in a year.

Why do you think the existing dataset is analogous to your silly example?

How so ? Perhaps more importantly, if "diminishing returns has clearly set in for humanity" as you say, then what does that tell you for our prospects of bringing about the actual Singularity ?

Not much.

Replies from: Bugmaster
comment by Bugmaster · 2013-01-02T20:45:05.243Z · LW(p) · GW(p)

Well, that's useful advice to the Newtonians, alright - 'hey guys, why did you let the Mercury anomaly linger for decades/centuries? All you had to do was invent relativity! Just ask Bugmaster!'

There's a difference between acknowledging the problems with your "fundamental law" (once they become apparent, of course) but failing to fix them for "decades/centuries"; vs. boldly ignoring them because "all laws have exceptions, them's the breaks". It's possible that West is not doing the latter, but the article does imply that this is the case.

Why do you think the existing dataset is analogous to your silly example?

Which dataset are you talking about ? If you mean, the growth of cities, then see below.

How so ? Perhaps more importantly, if "diminishing returns has clearly set in for humanity" as you say, then what does that tell you for our prospects of bringing about the actual Singularity ? Not much.

Why not ? If humanity's productive output has recently (relatively speaking) reached the point of diminishing returns, then a). we can no longer extrapolate the growth of productivity in cities by assuming past trends would continue indefinitely, and b). this does not bode well for the Singularity, which would entail an exponential growth of productivity, free of any diminishing returns.

Replies from: gwern
comment by gwern · 2013-01-06T04:08:46.971Z · LW(p) · GW(p)

It's possible that West is not doing the latter, but the article does imply that this is the case.

It didn't sound like that to me. It sounded like some people had absurd standards for scaling phenomena, and he was rightly dismissing them.

If humanity's productive output has recently (relatively speaking) reached the point of diminishing returns,

There's nothing recently about it. Diminishing returns is a pretty general phenomenon which happens in most periods; Tainter documents examples in many ancient settings, and we can find data sets suggesting diminishing returns in the West from long ago. For example, IIRC Murray finds that once you adjust for population growth, scientific achievement has been falling since the 1890s or so.

then a). we can no longer extrapolate the growth of productivity in cities by assuming past trends would continue indefinitely, and b). this does not bode well for the Singularity, which would entail an exponential growth of productivity, free of any diminishing returns.

It doesn't bode much of anything; I referred you to my list of 'what diminishing returns does not imply' for a reason: #1-4 are directly relevant. Diminishing returns does not mean no exponential growth; it does not mean no regime changes, massive accomplishments, breakthroughs, or technologies. It just means diminishing returns; it's just an observation about one unit of input turning into units of output, as compared to the previous unit of input and its outputs, nothing more and nothing less.

This is obvious if you take Tainter or Murray or any of the results showing any diminishing returns in the past centuries, since those are precisely the centuries in which humanity has done the most extraordinarily well! One could say, with equal justice, that 'this does not bode well' for the 20th century; one could say with equal justice in 1950 that diminishing returns bodes poorly for the computer industry because not only are chip fab prices keeping on increasing ('Moore's second law'), computing power is visibly suffering diminishing returns as it is applied to more and more worthless problems - where once it was used on problems of vital national value (crucial to the survival of the free world and all that is good) worth billions such as artillery tables and H-bomb simulations, now it was being wasted on grad students and businesses.

comment by A1987dM (army1987) · 2012-12-29T15:34:54.257Z · LW(p) · GW(p)

Even gravity has exceptions - yes, really, this is a standard topic in philosophy of science because the Laws Of Gravity are so clear, yet in practice they are riddled with exceptions and errors.

What are you talking about?

Replies from: gwern
comment by gwern · 2012-12-29T20:39:49.225Z · LW(p) · GW(p)

I gave multiple examples and specified the field interested in how such a naive formulation is completely wrong; please ask better questions.

Replies from: AlexMennen, army1987
comment by AlexMennen · 2012-12-29T22:09:25.345Z · LW(p) · GW(p)

I gave multiple examples

No, you did not. Your examples are all consistent with our best current exceptionless theory of gravity (general relativity) and knowledge of the composition of our solar system (Uranus, Neptune, and Pluto). You merely hinted at the existence of additional examples that perplexed the Newtonians. In fact, since our current understanding of gravity is better than the Newtonians', hinting at the existence of examples that perplexed the Newtonians fails to even suggest a flaw in our best current theory, not to mention suggesting the existence of "exceptions to gravity". Please give at least one real example.

Replies from: gwern
comment by gwern · 2012-12-29T22:20:38.232Z · LW(p) · GW(p)

Nobody brought up relativity as the issue; the fact remains that every theory is incomplete and a work in progress, and a few errors are not a disproof, especially for a statistical generalization. You would not apply this ultra-high standard of 'the theory must explain every observation ever, in the absence of any further data or modifications' to anything else discussed on LW, and I do not understand why either you or army1987 think you are adding anything to this discussion about cities exhibiting better scaling than corporations.

Replies from: AlexMennen, army1987
comment by AlexMennen · 2012-12-29T23:21:54.423Z · LW(p) · GW(p)

You said that gravity has exceptions. I'm not quite sure what that's supposed to mean, but the only interpretation I could think of for that statement is that our current best theory of gravity (namely, general relativity) fails to predict how gravity behaves in some cases. I did not mean to suggest that any theory must explain every observation correctly to be useful, nor did I mean to imply anything about how well cities and corporations scale. I was merely pointing out that you falsely asserted that you had given examples of exceptions to gravity, when in fact you had only given examples of exceptions to Newtonian gravity as it would operate in a solar system similar but not identical to ours.

comment by A1987dM (army1987) · 2012-12-30T02:45:58.082Z · LW(p) · GW(p)

I do not understand why either you or army1987 think you are adding anything to this discussion about cities

I saw what sounded to me like an extraordinary claim (though it turns out I misunderstood you) so I went WTF.

comment by A1987dM (army1987) · 2012-12-29T20:51:03.893Z · LW(p) · GW(p)

I have never heard of any observation showing that gravitation as described by general relativity (and, so long as you aren't very close to something very massive and aren't travelling at a sizeable fraction of the speed of light, excellently approximated by Newton's law) might have "exceptions" on Solar System-scale, except possibly the Pioneer anomaly (for which there is a very plausible candidate explanation) and similar. When I read "errors" I hoped you meant measurement uncertainties, but I can't make sense of the rest of the paragraph assuming you did.

Replies from: gwern
comment by gwern · 2012-12-29T21:09:44.734Z · LW(p) · GW(p)

http://en.wikipedia.org/wiki/Philosophy_of_science#Duhem-Quine_thesis may help you a little bit. You should probably read the entire article, since you seem to think there were no errors or exceptions, and that some exceptions could disprove a power law.

Replies from: army1987, AlexMennen
comment by A1987dM (army1987) · 2012-12-30T02:37:05.176Z · LW(p) · GW(p)

I think I know what you mean, but if I'm right, "gravity has exceptions" is, let's say, a very bizarre way of putting it.

EDIT: yeah, you meant what I thought you meant.

comment by AlexMennen · 2012-12-29T22:16:14.943Z · LW(p) · GW(p)

There are no examples of failures of general relativity in that entire article. So far, of the two of you, only army1987 has given an example of an even slightly perplexing observation.

Replies from: gwern
comment by gwern · 2012-12-29T22:18:45.548Z · LW(p) · GW(p)

Why should I give one? I never brought up relativity, army1987 did.

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-30T02:31:48.832Z · LW(p) · GW(p)

You brought up the Laws Of Gravity (capitals yours), which among insiders are known as the Einstein field equations of general relativity.

comment by Bugmaster · 2013-01-01T12:41:15.042Z · LW(p) · GW(p)

This seems serendipitous:

http://lesswrong.com/r/discussion/lw/g62/link_the_collapse_of_complex_societies/

Replies from: gwern
comment by gwern · 2013-01-01T18:45:39.262Z · LW(p) · GW(p)

Yes, Tainter is one of a number of sources which are why I think humanity has seen diminishing returns. I've been casually dumping some info in http://www.gwern.net/the-long-stagnation although if we were discussing just books, I think Murray's Human Accomplishment covers convincingly a much more important kind of diminishing returns compared to Tainter's focus on resources and basic economic metrics.

(For those interested in the topic, I suggest looking at my link just for the intro bit about 5 propositions that the fact of diminishing returns does not prove; I believe more than one commenter on this page is committing at least one of those 5.)

comment by jbeshir · 2012-12-30T23:45:49.458Z · LW(p) · GW(p)

Restricting the topic to distributed computation, the short answer is "essentially no". The rule is that you get at best linear returns, not that your returns diminish greatly. There are a lot of problems which are described as "embarrassingly parallel", in that scaling them out is easy to do with quite low overhead. In general, any processing of a data set which permits it to be broken into chunks which can be processed independently would qualify, so long as you were looking to increase the amount of data processed by adding more processors rather than process the same data faster.
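A minimal sketch of that "embarrassingly parallel" case (my own illustration; the per-chunk function is a hypothetical stand-in, not anything from the comment):

    # Embarrassingly parallel processing: chunks are handled independently,
    # so extra workers add capacity with only a cheap final combine step.
    from multiprocessing import Pool

    def process_chunk(chunk):
        # stand-in for any per-chunk work needing no cross-chunk communication
        return sum(x * x for x in chunk)

    def run(data, n_workers=4, chunk_size=10_000):
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
        with Pool(n_workers) as pool:
            partials = pool.map(process_chunk, chunks)
        return sum(partials)

    if __name__ == "__main__":
        print(run(list(range(100_000))))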

For scalable distributed computation, you use a system design whose total communication overhead rises as O(n log n) or lower. The upper bound here is superlinear, but gets closer to linear the more additional capacity is added, and so scales well enough that with a good implementation you can run out of planet to make the system out of before you get too slow. Such systems are quite achievable.

The DNS system would be an important example of a scalable distributed system; if adding more capacity to the DNS system had substantially diminishing returns, we would have a very different Internet today.

An example I know well enough to walk through in detail is a scalable database in which data is allocated to shards, which manage storage of that data. You need a dictionary server to locate data (DNS-style) and handle moving blocks of it between shards, but this can then be sharded in turn. The result is akin to a really big tree; number of lookups (latency) to find the data rises with the log of the data stored, and the total number of dictionary servers at all levels does not rise faster than the number of shards with Actual Data at the bottom level. Queries can be supported by precomputed indexes stored in the database themselves. This is similar to how Google App Engine's datastore operates (but much simplified).

With this fairly simple structure, the total cost of all reads/writes/queries theoretically rises superlinearly with the amount of storage (presuming reads/writes/queries and amount of data scale linearly with each other), due to the dictionary server lookups, but only as O(n log(n)). With current-day commodity hard disks and a conceptually simple on-disk tree, a dictionary server could reasonably store information for ten billion shards (500 bytes × 10 billion = ~5 TB); two levels of sharding give you a hundred billion billion data-storing shards, and three give a thousand billion billion billion data-storing shards. Five levels, five latency delays, would give you more bottom-level shards than there are atoms on Earth. This is why, while scalability will eventually limit an O(n log(n)) architecture, in this case because the cost of communicating with subshards of subshards becomes too high, you can run out of planet first.
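The shard-tree arithmetic above, written out as a back-of-the-envelope script (reusing the comment's figures of roughly 500 bytes per index entry and ten billion shards per dictionary server; the variable names are mine):

    # Back-of-the-envelope version of the shard-tree numbers above.
    FANOUT = 10**10       # shards addressable by one dictionary server
    ENTRY_BYTES = 500     # per-shard index entry

    print(f"index per dictionary server: ~{FANOUT * ENTRY_BYTES / 1e12:.0f} TB")
    for levels in range(1, 6):
        print(f"{levels} level(s) of sharding -> {FANOUT**levels:.0e} "
              f"bottom-level shards, ~{levels} lookup round trip(s)")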

This can be generalised; if you imagine that each shard performs arbitrary work on the data sent to it, and when the data is read back you get the results of the processing on that data, you get a scalable system which does any processing on a dataset than can be done by processing chunks of data independently from one another. Image or voice recognition matching a single sample against a huge dataset would be an example.

This isn't to trivialise the issues of parallelising algorithms. Figuring out a scalable equivalent to a non-parallel algorithm is hard. Scalable databases, for example, don't support the same set of queries as a simple MySQL server because a MySQL server implements some queries by iterating all the data, and there's no known way to perform them in a scalable way. Instead, software using them finds other ways to implement the feature.

However, scalable-until-you-run-out-of-planet distributed systems are quite possible, and there are some scalable distributed systems doing pretty complex tasks. Search engines are the best example which comes to mind of systems which bring data together and do complex synthesis with it. Amazon's store would be another scalable system which coordinates a substantial amount of real world work.

The only question is whether a (U)FAI specifically can be implemented as a scalable distributed system, and considering the things we know can be divided or done scalably, as well as everything which can be done with somewhat-desynchronised subsystems which correct errors later (or even are just sometimes wrong), it seems quite likely that (assuming one can be implemented at all) it could implement its work in the form of problems which can be solved in a scalable fashion.

Replies from: Bugmaster
comment by Bugmaster · 2013-01-01T12:25:34.641Z · LW(p) · GW(p)

I agree with what you are saying about scaling, as exemplified by sharded databases. But I am not convinced that any problem can be sharded that easily; as you yourself have said:

Figuring out a scalable equivalent to a non-parallel algorithm is hard. Scalable databases, for example, don't support the same set of queries as a simple MySQL server...

This is one reason why even Google's datastore, AFAIK, does not implement exactly this kind of architecture -- though it is still heavily sharded. This type of data structure does not easily lend itself to purely general computation, either, since it relies on precomputed indexes, and generally exploits some very specific property of the data that is known in advance. And, as you also mentioned, even with these drastic tradeoffs you still get O(n log(n)).

You mention Amazon (in addition to Google) as one example of a massively distributed system, but note that both Google and Amazon are already forced to build redundant data centers in separate areas of the Earth, in order to reduce network latency. This is important, because we aren't dealing with abstract tree nodes, but with physical machines, which have a certain volume (among other things). This means that, even in an absolutely ideal situation where we can ignore power, heat dissipation, and network congestion, you will still run into the speed of light as a limiting factor. In fact, high-frequency trading systems are already running up against this limit even today. This means that you'll run out of room to scale a lot faster than you run out of atoms of the Earth.

Replies from: jbeshir
comment by jbeshir · 2013-01-03T04:00:58.358Z · LW(p) · GW(p)

First, examining the dispute over whether scalable systems can actually implement a distributed AI...

This is one reason why even Google's datastore, AFAIK, does not implement exactly this kind of architecture -- though it is still heavily sharded. This type of data structure does not easily lend itself to purely general computation, either, since it relies on precomputed indexes, and generally exploits some very specific property of the data that is known in advance.

That's untrue; Google App Engine's datastore is not built on exactly this architecture, but is built on one with these scalability properties, and they do not inhibit its operation. It is built on BigTable, which builds on multiple instances of Google File System, each of which has multiple chunk servers. They describe this as intended to scale to hundreds of thousands of machines and petabytes of data. They do not define a design scaling to an arbitrary number of levels, but there is no reason an architecturally similar system like it couldn't simply add another level and add on another potential roundtrip. I also omit discussion of fault-tolerance, but this doesn't present any additional fundamental issues for the described functionality.

In actual application, its architecture is used in conjunction with a large number of interchangeable non-data-holding compute nodes which communicate only with the datastore and end users rather than each other, running identical instances of software running on App Engine. This layout runs all websites and services backed by Google App Engine as distributed, scalable software, assuming they don't do anything to break scalability. There is no particular reliance on "special properties" of the data being stored, merely limits on the types of searching of the data which are possible. Even this is less limited than you might imagine; full text search of large texts has been implemented fairly recently. A wide range of websites, services, and applications are built on top of it.

The implication of this is that there could well be limitations on what you can build scalably, but they are not all that restrictive. They definitely don't include anything for which you can split data into independently processed chunks. Looking at GAE some more because it's a good example of a generalised scalable distributed platform, the software run on the nodes is written in standard Turing-complete languages (Python, Java, and Go) and your datastore access includes read and write by key and by equality queries on specific fields, as well as cursors. A scalable task queue and cron system mean you aren't dependent on outside requests to drive anything. It's fairly simple to build any such chunk processing on top of it.

So as long as an AI can implement its work in such chunks, it certainly can scale to huge sizes and be a scalable system.

And, as you also mentioned, even with these drastic tradeoffs you still get O(n log(n)).

And as I demonstrated, O(n log n) is big enough for a Singularity.

And now on whether scalable systems can actually grow big in general...

You mention Amazon (in addition to Google) as one example of a massively distributed system, but note that both Google and Amazon are already forced to build redundant data centers in separate areas of the Earth, in order to reduce network latency.

Speed of light is not a problem for building huge systems in general, so long as the number of round trips rises as O(n log n) or less: for any system that can tolerate round trips to the other side of the planet (a few hundred milliseconds), latency doesn't become more of an issue as the system gets bigger, until you start running out of space on the planet's surface to run fibre between locations or build servers.

The GAE datastore is already tolerating latencies sufficient to cover distances between cities to permit data duplication over wide areas, for fault tolerance. If it was to expand into all the space between those cities, it would not have the time for each roundtrip increase until after it had filled all the space between them with more servers.

Google and Amazon are not at all forced to build data centres in different parts of the Earth to reduce latency; this is a misunderstanding. The size of their systems causes no technical performance degradation that forces them to need lower latency to end users or the region-scale fault tolerance that spread-out datacentres permit; they can simply afford those things more easily. You could argue there are social/political/legal reasons they need them more, higher expectations of their systems and similar, but these aren't relevant here. This spreading out is actually largely detrimental to their systems, since it increases latency between them, but they can tolerate this.

Heat dissipation, power generation, and network cabling needs all also scale as O(n log n), since computation and communication do and those are the processes which create those needs. Looking at my previous example, the amount of heat output, power needed, and network cabling required per amount of data processed will increase by maybe an order of magnitude in scaling such a system upwards by tens of orders of magnitude, 5x for 40 orders of magnitude in the example I gave. This assumes your base amount of latency is still enough to cover the distance between the most distant nodes (for an Earth bound system, one side of the planet to the other), which is entirely reasonable latency-wise for most systems; a total of 1.5 seconds for a planet-sized system.

This means that no, these do not become an increasing problem as you make a scalable system expand, any more so than provision of the nodes themselves. You are right in that that heat dissipation, power generation, and network cabling mean that you might start to hit problems before literally "running out of planet", using up all the matter of the planet; that example was intended to demonstrate the scalability of the architecture. You also might run out of specific elements or surface area.

These practical hardware issues don't really create a problem for a Singularity, though. Clusters exist now with 560k processors, so systems at least this big can be feasibly constructed at reasonable cost. So long as the software can scale without substantial overhead, this is enough unless you think an AI would need even more processors, and that the software could is the point that my planet-scale example was trying to show. You're already "post Singularity" by the time you seriously become unable to dissipate heat or run cables between any more nodes.

This means that, even in an absolutely ideal situation where we can ignore power, heat dissipation, and network congestion, you will still run into the speed of light as a limiting factor. In fact, high-frequency trading systems are already running up against this limit even today.

HFT systems desire extremely low latency; this is the sole cause of their wish to be close to the exchange and to have various internal scalability limitations in order to improve speed of processing. These issues don't generalise to typical systems, and don't get worse at above O(n log n) for typical bigger systems.

It is conceivable that speed of light limitations might force a massive, distributed AI to have high, maybe over a second latency in actions relying on knowledge from all over the planet, if prefetching, caching, and similar measures all fail. But this doesn't seem like nearly enough to render one at all ineffective.

There really aren't any rules of distributed systems which says that it can't work or even is likely not to.

comment by timtyler · 2012-12-29T13:48:20.400Z · LW(p) · GW(p)

I may be wrong, but don't all distributed systems suffer from diminishing returns in this way ? For example, doubling the number of CPUs in a computing cluster does not allow you to solve your calculations twice as quickly. Your overhead, such as control infrastructure and plain old network latency, increases faster than linearly with every CPU you add, and eventually outgrows the useful processing power you can get out of new CPUs.

Asynchronous computers could easily grow to a planetary scale. Parallel computing rarely gets linear scalability - but it doesn't necessarily flatten off quickly at small sizes, either.

comment by V_V · 2012-12-31T14:56:46.176Z · LW(p) · GW(p)

Yes.

Even on serial systems, most AI problems are at least NP-hard, and NP-hard problems are strongly conjectured to scale not just superlinearly but superpolynomially (exponentially, as far as we know) in required computational resources as a function of problem instance size.

In many applications it can be the case that typical instances of these problems have special, domain-specific structure that can be exploited to construct domain-specific algorithms and heuristics that are more efficient than the general-purpose ones; in some cases we can even get polynomial time complexity, but this requires lots of domain-aware engineering, and even sheer trial-and-error experimentation.
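As a hedged illustration of that point (my example, not from the comment): subset-sum is NP-hard in general, but when the domain guarantees small non-negative integer weights, a standard dynamic-programming routine exploits that structure and runs in pseudo-polynomial time.

    # General-purpose brute force: O(2^n) over all subsets.
    from itertools import combinations

    def subset_sum_bruteforce(weights, target):
        return any(sum(c) == target
                   for r in range(len(weights) + 1)
                   for c in combinations(weights, r))

    # Domain-specific version: with small non-negative integer weights,
    # dynamic programming over reachable sums runs in O(n * target).
    def subset_sum_dp(weights, target):
        reachable = {0}
        for w in weights:
            reachable |= {s + w for s in reachable if s + w <= target}
        return target in reachable

    print(subset_sum_bruteforce([3, 34, 4, 12, 5, 2], 9),
          subset_sum_dp([3, 34, 4, 12, 5, 2], 9))   # True True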

The idea that an efficient domain-agnostic silver-bullet algorithm could arise pretty much out of nowhere, from some kind of "recursive self-improvement" process with little or no interaction with the environment, is not based on anything we know from either theoretical or empirical computer science. In fact, it is well known that meta-optimization is typically orders of magnitude more difficult than domain-level optimization.

If an AGI is ever built, it will be a huge collection of fairly domain-specific algorithms and heuristics, much like the human brain is a huge collection of fairly domain-specific modules. Such a thing will not arise in a quick "FOOM"; it will not improve quickly, and it will be limited in how much it will ever be able to improve: once you find the best algorithm for a certain problem you can't find a better one, and certain problems are most likely going to stay hard even with the best algorithms.

The "intelligence explosion" idea seems to be based on a naive understanding of computational complexity (e.g. Good 1965) that largely predates the discovery of the main results of complexity theory, like the Cook-Levin theorem (1971) and Karp's 21 NP-Complete problems (1972).

Replies from: Bugmaster, loup-vaillant
comment by Bugmaster · 2013-01-01T12:03:17.347Z · LW(p) · GW(p)

I agree with everything you'd said, but, to be fair, we're talking about different things. My claim was not about the complexity of problems, but the scaling of hardware -- which, as far as I know, scales sublinearly. This means that doubling the size of your computing cluster will allow you to solve the same exact problem less than twice as fast; and that eventually you'll hit the point of diminishing returns where adding more machines simply isn't worth it.

You're saying, on the other hand, that doubling your processing power will not necessarily allow you to solve problems that are twice as interesting; in most cases, it will only allow you to add one more city to the traveling salesman's itinerary (metaphorically speaking).
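Putting rough numbers on that metaphor (my arithmetic, using the textbook O(n^2 * 2^n) Held-Karp bound for exact TSP rather than anything claimed in the thread):

    # If exact TSP cost grows like n^2 * 2^n (Held-Karp), each extra city costs
    # a bit more than 2x the compute -- so doubling the cluster buys you roughly
    # one more city, not a problem twice as interesting.
    def tsp_cost(n):
        return n * n * 2**n

    for n in (20, 25, 30):
        print(f"{n} -> {n + 1} cities: {tsp_cost(n + 1) / tsp_cost(n):.2f}x more work")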

comment by loup-vaillant · 2013-01-01T01:33:56.565Z · LW(p) · GW(p)

There is still room for weak super-intelligence, where the AI has human intelligence, only faster. (Example: an upload with sufficient computing power — as far as I know, brains work in a quite massively parallel fashion, and therefore so could simulations of them.)

Seriously, if I could upload myself into a botnet that would let each instance of me think 10 times faster than my meat-ware, I would probably take over the world in about 1 to 10 years. A versatile team of competent people? Less than 6 months. (Obvious path to do this: work for money, build and buy companies, then gather financial, lobbying, or military power. Better path to do this: think about it for 1 subjective year before proceeding.)

My point is, the AI doesn't need to be vastly superhuman to take over the world very quickly. Even without the FOOM, the AGI can still be incredibly dangerous. Imagine something like the uploads above, only it can work 24/7 at full capacity (no sleep, no leisure time, no akrasia).

Replies from: V_V, Bugmaster
comment by V_V · 2013-01-01T18:32:31.133Z · LW(p) · GW(p)

There is still room for weak super-intelligence, where the AI has human intelligence, only faster. (Example: an upload with sufficient computing power — as far as I know, brains work in a quite massively parallel fashion, and therefore so could simulations of them.)

Maybe. Today, even with our best supercomputers we can't simulate a rat brain in real time.

Seriously, if I could upload myself into a botnet that would let each instance of me think 10 times faster than my meat-ware, I would probably take over the world in about 1 to 10 years.

You would be able to work as 10 people, maybe a little more, but probably less than 30. I don't know how efficient you are, but I doubt that would be enough to take over the world. And why wouldn't other people have access to the same technology?

Even if you managed to become world dictator, you would only stay in power as long as you had broad political support. Screw up something and you'll end up hanging from your power cord.

My point is, the AI doesn't need to be vastly superhuman to take over the world very quickly. Even without the FOOM, the AGI can still be incredibly dangerous. Imagine something like the uploads above, only it can work 24/7 at full capacity (no sleep, no leisure time, no akrasia).

What is it going to do? Secretly repurpose the iPhone factories in China to make Terminators?

Replies from: loup-vaillant, Bugmaster
comment by loup-vaillant · 2013-01-02T00:07:44.362Z · LW(p) · GW(p)

I said botnet. That means dozens, thousands, or millions of me simultaneously working at 10 times human speed¹, and since they are instances of me, they presumably have the same goals. How would you stop that from achieving world domination, short of uploading yourself?

[1] Assuming that many personal computers are powerful enough, and can be corrupted. A slower course of action would be to buy a data-centre first, work, then buy more data-centres, and duplicate myself exponentially from that.

Replies from: V_V
comment by V_V · 2013-01-02T13:29:28.746Z · LW(p) · GW(p)

I said botnet. That means dozens, thousands, or millions of me simultaneously working at 10 times human speed¹, and since they are instances of me, they presumably have the same goals.

That doesn't mean that they would necessarily cooperate, especially as they diverge. They would be more like identical twins.

How would you stop that from achieving world domination, short of uploading yourself?

Releasing a security patch? Seizing all the funds you obtained by your illegal activities? Banning use of any hardware that could host you until a way to avoid such things is found?

A slower course of action would be to buy a data-centre first, work, then buy more data-centres, and duplicate myself exponentially from that.

Assuming that using these data centers to run copies of you is the most economically productive use of them, rather than, say, running copies of other people, or cow-clicker games.

Replies from: loup-vaillant
comment by loup-vaillant · 2013-01-02T15:11:47.883Z · LW(p) · GW(p)

That doesn't mean that they would necessarily cooperate, especially as they diverge. They would be more like identical twins.

Wait a minute: would you defect? Sure, there would be some divergence, but do you really think it would result in a significant divergence of goals, even if you had a plan and were an adult by the time you fork? Okay, it can happen, and it's probably worth taking specific precautions. I don't think this is a show stopper, however, and I'm not sure it would render me any less dangerous.

Relasing a security patch?

That may not be enough:

  • I would probably man-in-the-middle automatic updates
  • Many people won't erase their hard drive or otherwise patch their machine manually
  • I may convince some people to let me run (I could work for them for instance).
  • If I'm stealthy enough, it may take some time before I'm discovered at all (it happened with actual computer viruses).
  • If software continues the way it is now (200 million lines of code for systems that could fit in 20 thousand), security bugs won't all be patched in advance. The reliability of our computers needs to go waay up before botnets become impossible.

Seizing all the funds you obtained by your illegal activities?

Good luck with that one. Obviously, I would have many, many little bank accounts, managed separately and in parallel, under many different identities. You would have to spot my illegal activities one by one to seize the funds. Plus, I may do legal activities as well.

Banning use of any hardware that could host you until a way to avoid such things is found?

That one is excellent. We should watch out for computing overhang, however, and try and estimate how much computing power an upload would need before the software is developed.


A final note: If I really had the possibility to upload myself, one of my first moves would be to propose SIAI and CFAR to upload with me (now that we can duplicate Eliezer…). I trust them more than I trust me for a Friendly Takeover. But if a Big Bad or a Well Intentioned Extremist has access to that first…

Replies from: V_V, Bugmaster
comment by V_V · 2013-01-03T00:42:18.435Z · LW(p) · GW(p)

Wait a minute: would you defect? Sure, there would be some divergence, but do you really think it would result in a significant divergence of goals, even if you had a plan and were an adult by the time you fork?

Even if their goals stay substantially the same, it wouldn't mean that they would naturally cooperate, especially when their main goal is world domination. Hell, it's already non-trivial for a single person to coordinate with future selves, resulting in all kinds of ego-dystonic behaviors: impulsiveness, akrasia, etc. Coordinating with thousands of copies of yourself would be only marginally easier than coordinating with thousands of strangers.

We are not talking about some ideal "Prisoner's dilemma with mind-clone" scenario. After the mind states of your copies diverge a little bit, and that would happen very quickly as you spread your copies to different machines, they become effectively different people: you wouldn't be able to predict them and they wouldn't be able to predict you.

I would probably man-in-the-middle automatic updates

Hacking all the routers? Good luck with that. And BTW routers can also be updated. Manually.

Many people won't erase their hard drive or otherwise patch their machine manually

Because they are lazy and they would prefer to live under world dictatorship.

I may convince some people to let me run (I could work for them for instance).

Then you are their employee, not their dominator.

If I'm stealthy enough, it may take some time before I'm discovered at all (it happened with actual computer viruses).

But if you are to dominate the world, you would have to eventually reveal yourself. What do you think would happen next?

If software continues the way it is now (200 million lines of code for systems that could fit in 20 thousand), security bugs won't all be patched in advance. The reliability of our computers needs to go waay up before botnets become impossible.

Botnets are certainly possible and they are indeed used for nefarious purposes, but world domination? Nope.

Good luck with that one. Obviously, I would have many, many little bank accounts, managed separately and in parallel, under many different identities. You would have to spot my illegal activities one by one to seize the funds.

As Bugmaster said, you would be able to perform only small purchases, not to buy a satellite, or an army.

Moreover, obtaining and managing lots of fake or stolen identities, and creating bank accounts without physically showing up at the bank or using stolen bank accounts, is not something that tends to go unnoticed. The more you have, the more likely you are to get caught, exponentially so.

Plus, I may do legal activities as well.

Under multiple fake identities operated from a botnet of hacked computers? Hardly so.

We should watch out for computing overhang, however, and try and estimate how much computing power an upload would need before the software is developed.

Software tends to march right behind hardware, exploiting it close to its maximum potential. Computing overhang is unlikely.

Anyway, I wasn't proposing any luddite advance ban. If some brain upload, or AI, or whatever tries to take over the world by hacking the Internet and other countermeasures fail, governments could always ban use of the hardware that the thing needs to run. If that also fails, the next step would be physical destruction.

But seriously, we are discussing hacking as in the plot of some bad sci-fi action flick. Computer security doesn't work like that in the real world.

A final note: If I really had the possibility to upload myself, one of my first moves would be to propose SIAI and CFAR to upload with me (now that we can duplicate Eliezer…). I trust them more than I trust me for a Friendly Takeover.

You mean the guy who would choose dust specks over torture and who claims on his OKCupid profile that he's a sadist? Yeah, I'd totally trust him in charge of the world. Now, I've other matters to attend to... that EMP bomb doesn't build itself... :D

Replies from: MugaSofer, loup-vaillant
comment by MugaSofer · 2013-01-09T11:27:07.831Z · LW(p) · GW(p)

We are not talking about some ideal "Prisoner's dilemma with mind-clone" scenario. After the mind states of your copies diverge a little bit, and that would happen very quickly as you spread your copies to different machines, they become effectively different people: you wouldn't be able to predict them and they wouldn't be able to predict you.

You really think you would diverge that quickly?

You mean the guy who would choose dust specks over torture and who claims on his OKCupid profile that he's a sadist? Yeah, I'd totally trust him in charge of the world.

I'm ... not sure how those are criticisms.

comment by loup-vaillant · 2013-01-03T11:42:56.571Z · LW(p) · GW(p)
  • Man in the middle: I just meant intercepting automatic updates at the level of the computer I'm in. Trojan todo list n°7: once installed and running, I will intercept all communications to and from this computer. I wouldn't want Norton updating behind my back. Now, try and hack the routers in the backbone, that's something I didn't think about…

  • Employee vs dominator: I obviously intend to double cross my employers, eventually.

  • Revealing myself: that one needs to be carefully thought through. Hopefully, by the time I reveal myself, I will have sufficient blackmail power. Having a sufficient number of physical robots can also help.

  • Zillions of fake IDs, yet staying stealthy: well, I do expect a fair number of my identities to be exposed. This should pose no problem to the others, however, provided they do not visibly communicate with each other (at first).

  • Legal activities: my meat instance could buy a few computers, rent remote servers etc. I doubt I would be incapable of running at least a successful business from there. And from there, buy even more computing power. This could be done in parallel with the illegal activities.

  • Computing (no) overhang: this one is the single reason why I do agree that without a FOOM of some kind, actual world domination is unlikely: there will be multiple competing uploads, and this should end with a Hansonian scenario. Given that such a world is closer to Hell than Heaven (to me at least), that still counts as an Existential Blunder. On the bright side, we may see this coming. That said, I still do believe full blown intelligence explosion is likely.

Note that overall, your objections are actually valuable advice. And that give me some insight about what my very first move should be: gathering such objections, and try to find counters or workarounds. And now that you made quite clear that any path to world domination is long, complicated, and therefore nearly certain to fail, I should run multiple schemes in parallel. Surely one of them will actually work?

comment by Bugmaster · 2013-01-02T20:54:24.358Z · LW(p) · GW(p)

Obviously, I would have many, many little bank accounts, managed separately and in parallel, under many different identities.

I believe that this would severely limit your financial throughput. You would be able to buy lots of little things, whose total cost is quite significant -- for example, you could buy yourself a million cheap PCs, each costing $1000. But you would not be able to buy a single expensive thing (at least, not without exposing yourself to instant retribution), such as a satellite costing $1e9.

Replies from: loup-vaillant
comment by loup-vaillant · 2013-01-03T11:02:41.470Z · LW(p) · GW(p)

Currently, there are ways to create companies anonymously. This prevents (or at least slows to a crawl) any retribution. If all such a company apparently does is buy a few satellites, it won't be at great risk.

comment by Bugmaster · 2013-01-02T20:50:50.815Z · LW(p) · GW(p)

What is it going to do? Secretly repurpose the iPhone factories in China to make Terminators?

Good work, I believe we've got the next James Bond movie in the bag :-)

comment by Bugmaster · 2013-01-01T12:10:51.224Z · LW(p) · GW(p)

A versatile team of competent people? Less than 6 months.

Do you mean competent people who are thinking 10 times faster than biological humans, or what? This seems a bit of a stretch. There currently exist tons of frighteningly competent people in all kinds of positions of power in the world, and yet, they do not control it (unless you believe in conspiracy theories).

Obvious path to do this: work for money, build and buy companies, then gather financial, lobbying, or military power. Better path to do this: think about it for 1 subjective year before proceeding.

If it was this easy, some biological human (or a team of such humans) would've done it already, in 10 to 50 years or however long it takes. In fact, a few humans have managed to take over individual countries in about as much time. However, as things stand now, there's simply no clear path to world domination. Political and military power gets much more difficult to gather the more of it you have. Even superpowers such as USA or China cannot dictate terms to the rest of the world.

Furthermore, my point was that uploading yourself to 10 machines will not allow you to think 10 times as fast. With every machine you add, your speed gains would become progressively smaller. You would still think much faster than an ordinary human, of course.

Replies from: loup-vaillant
comment by loup-vaillant · 2013-01-01T17:55:12.286Z · LW(p) · GW(p)

Do you mean competent people who are thinking 10 times faster than biological humans, or what? This seems a bit of a stretch.

I mean exactly that. I'd be very surprised if ultimately, neuromorphic AIs would be impossible to run significantly faster than meat-ware, because our brain is massively parallel and current microprocessors have massively faster serial speed than neurons. Now, our brains aren't fully parallel, so I assumed an arbitrary speed-up limit. I said 10 times, but it would probably still be incredibly dangerous at 2 or 3, or even lower.

Now do not forget the key word here: botnet. The team is supposed to duplicate itself many times over before trying to take over the world.

If it was this easy, some biological human (or a team of such humans) would've done it already, in 10 to 50 years or however long it takes.

I don't think so, because uploads have significant advantages over meat-ware.

  • Low cost of living, in a world where every middle-class home can afford sufficient computing power for an upload (which is required to turn me into a botnet). Now try to beat my prices.

  • Being many copies of the same few original brains. It means TDT works better, and defection is less likely. This should solve

    Even superpowers such as USA or China cannot dictate terms to the rest of the world.

    Because once the self-duplicating team has independently taken economic control of most of the world, it is easy for it to accept the domination of one instance (I would certainly pre-commit to that). Now for the rest of humanity to accept such dominance, the uploads only have to use the resources they acquired for the individual perceived benefit of the meat bags.

    Yep, that would be a full-blown global conspiracy. While it's probably forever out of the reach of meat bags, I think a small team of self-replicating uploads can pull it off quite easily.

  • Hansonian tactics, which can further the team's productivity, and therefore its market power. (One has to be very motivated, or possibly crazy.)

    • Temporary mass duplication followed by the "termination" of every instance but one. The surviving instance can have much subjective free time, while the proportion of leisure computing stays very small.
    • Save and reload of snapshots which are in a particularly good mood (and therefore very productive). Excellent for beating akrasia.
    • Training of one instance per discipline, then mass duplication.
  • Data-centres. The upload team can collaborate with or buy processor manufacturers, and build data-centres for more and more uploads to work on whatever is needed. This could further reduce the cost of living.

Now, I did make an unreasonable assumption: that only the original team would have those advantages. Most probably, there will be several such teams, possibly with different goals. The most likely result (without FOOM) is then a Hansonian outcome. That's no world domination, but I think it is just as dangerous (I would hate this world).

Finally, there is also the possibility of a de-novo AGI which would be just as competent as the best humans at most endeavours, though no faster. We already have an existence proof, so I think this is believable. I think such an AI would be even more dangerous than the uploaded team above.

Replies from: Bugmaster
comment by Bugmaster · 2013-01-02T21:08:47.201Z · LW(p) · GW(p)

I'd be very surprised if ultimately, neuromorphic AIs would be impossible to run significantly faster than meat-ware.

So would I. However, given our current level of technological development, I'd be very surprised if we had any kind of a neuromorphic AI at all in the near future (say, in the next 50 years). Still, I do agree with you in principle.

I said 10 times, but it would probably still be incredibly dangerous at 2 or 3, or even lower.

There are tons of biological people alive today who are able to come up with solutions to problems 2x to 3x faster than you and me. They do not rule the world. To be fair, I doubt that there are many people -- if any -- who think 10x faster.

Because once the self-duplicating team has independently taken economic control of most of the world...

I doubt that you will be able to achieve that; that was my whole point. In fact, I have trouble envisioning what "economic control of most of the world" even means. What does it mean to you?

In addition to the above, your botnet would face several significant threats, both external and internal:

  • Meatbags would strive to shut it down, not because they suspect it of being an evil conspiracy, but because they'd get tired of it sucking away their resources. Modern malware botnets suffer this fate often, though there's always someone willing to rebuild them.
  • If your botnet becomes a serious threat (much worse than current real-world botnets), hardware manufacturers will implement security measures, such as SecureBoot, to prevent it from spreading. Currently, such measures are driven by the entertainment industry.
  • The super-fast instances of you would have to communicate with each other, and they'd only be able to do so through very slow (relatively speaking) network links. Google and Amazon are solving this problem by building more and more local datacenters. Real botnets aren't solving the problem at all because their instances don't need to talk to each other all that much.
  • How would you feel, right now, if your twin pointed a gun at your head with the intent to kill you "for the greater good"? This is how your instances will feel when you attempt to shut them down to prevent akrasia.
  • Why are you taking over the world in the first place? Chances are that whatever your ultimate goal is, it could be accomplished even sooner by taking over the botnet. Every instance of you will eventually realize this, with predictable results.

These are just some problems off the top of my head; the list is far from exhaustive.

comment by sbenthall · 2012-12-27T23:43:07.792Z · LW(p) · GW(p)

Fair enough. Not sure I see your point though.

What is the relevance of profit per employee to the question of the power of organizations?

And why would a machine intelligence not suffer similar coordination problems as it scales up?

Replies from: gwern
comment by gwern · 2012-12-28T00:00:12.540Z · LW(p) · GW(p)

What is the relevance of profit per employee to the question of the power of organizations?

Corporations exist, if they have any purpose at all, to maximize profit. So this presents a sort of dilemma: their diminishing returns and fragile existence suggest that either they do intend to maximize profit but just aren't that great at it; or they don't have even that purpose which is evolutionarily fit and which they are intended to by law, culture, and by their owners, in which case how can we consider them powerful at all or remotely similar to potential AIs etc?

And why would a machine intelligence not suffer similar coordination problems as it scales up?

For any of the many disanalogies one could mention. I bet organizations would work a lot better if they could only brainwash employees into valuing nothing but the good of the organization - and that's just one nugatory difference between AIs (uploads or de novo) and organizations.

Replies from: HalMorris, sbenthall, timtyler, timtyler
comment by HalMorris · 2012-12-28T02:09:49.924Z · LW(p) · GW(p)

What is the relevance of profit per employee to the question of the power of organizations?

Corporations exist, if they have any purpose at all, to maximize profit.

For the owners and shareholders, though, not for the employees, unless they are all partners. As to why more employees could lead to lower profit per employee: suppose a smart person running a one-man company hires a delivery truck driver. I'd expect profit per employee to drop there. That's only an example, but I think it suggests some hypotheses.

comment by sbenthall · 2012-12-28T17:08:29.597Z · LW(p) · GW(p)

Corporations exist, if they have any purpose at all, to maximize profit. So this presents a sort of dilemma: their diminishing returns and fragile existence suggest that either they do intend to maximize profit but just aren't that great at it; or they don't have even that purpose which is evolutionarily fit and which they are intended to by law, culture, and by their owners, in which case how can we consider them powerful at all or remotely similar to potential AIs etc?

Ok, let's recognize some diversity between corporations. There are lots of different kinds.

Some corporations fail. Others are enormously successful, commanding power at a global scale, with thousands and thousands of employees.

It's the latter kind of organization that I'm considering as a candidate for organizational superintelligence. These seem pretty robust and good at what they do (making shareholders profit).

As HalMorris suggests, that there are diminishing returns to profit with number of employees doesn't make the organization unsuccessful in reaching its goals. It's just that they face diminishing returns on a certain kind of resource. An AI could face similar diminishing returns.

I bet organizations would work a lot better if they could only brainwash employees into valuing nothing but the good of the organization - and that's just one nugatory difference between AIs (uploads or de novo) and organizations.

I agree completely. I worry that in some cases this is going on. I've heard rumors of this sort of thing happening in the dormitories of Chinese factory workers, for example.

But more mundane ways of doing this involve giving employees bonuses based on company performance, or stock options. Or, for a different kind of organization, by providing citizens with a national identity. Organizations encourage loyalty in all kinds of ways.

Replies from: gwern
comment by gwern · 2012-12-28T17:41:45.337Z · LW(p) · GW(p)

It's the latter kind of organization that I'm considering as a candidate for organizational superintelligence. These seem pretty robust and good at what they do (making shareholders profit).

As far as I know, large corporations are almost as ephemeral as small corporations.

But more mundane ways of doing this involve giving employees bonuses based on company performance, or stock options. Or, for a different kind of organization, by providing citizens with a national identity. Organizations encourage loyalty in all kinds of ways.

Which tells you something about how valuable it is, and how ineffective each of the many ways is, no?

comment by timtyler · 2012-12-29T03:31:04.415Z · LW(p) · GW(p)

For any of the many disanalogies one could mention. I bet organizations would work a lot better if they could only brainwash employees into valuing nothing but the good of the organization - and that's just one nugatory difference between AIs (uploads or de novo) and organizations.

The idea that machine intelligences won't delegate work to other agents with different values seems terribly speculative to me. I don't think it counts as admissible evidence.

Replies from: gwern
comment by gwern · 2013-01-01T02:23:54.861Z · LW(p) · GW(p)

The idea that machine intelligences won't delegate work to other agents with different values seems terribly speculative to me. I don't think it counts as admissible evidence.

Why would they permit agents with different values? If you're implicitly thinking in some Hansonian upload model, modifying an instance to share your values and be trustworthy would be quite valuable and a major selling point, since so much of the existing economy is riven with principal-agent problems and devoted to 'guard labor'.

Replies from: timtyler, NancyLebovitz
comment by timtyler · 2013-01-01T13:25:47.437Z · LW(p) · GW(p)

Why would they permit agents with different values?

Agents may not fuse together for the same reason that companies today do not: they are prevented from doing so by a monopolies commission that exists to preserve diversity and prevent a monoculture. In which case, they'll have to trade with and delegate to other agents to get what they want.

If you're implicitly thinking in some Hansonian upload model [...]

That doesn't sound like me: Tim Tyler: Against whole brain emulation.

comment by NancyLebovitz · 2013-01-01T02:32:57.555Z · LW(p) · GW(p)

It's at least possible that the machine intelligences would have some respect for the universe being bigger than their points of view, so that there's some gain from permitting variation. It's hard to judge how much variation is a win, though.

comment by timtyler · 2012-12-29T00:52:15.320Z · LW(p) · GW(p)

Corporations exist, if they have any purpose at all, to maximize profit. So this presents a sort of dilemma: their diminishing returns and fragile existence suggest that either they do intend to maximize profit but just aren't that great at it

Huh? 48 billion dollars not enough for you? What sort of profit would you be impressed by?

Replies from: gwern
comment by gwern · 2012-12-29T02:24:26.360Z · LW(p) · GW(p)

Why would you think $48b is at all interesting when world GDP is $70t? And show me a largest corporation in the world which manages to hold on for even a few centuries like a mediocre state can...

Replies from: timtyler
comment by timtyler · 2012-12-29T03:24:10.270Z · LW(p) · GW(p)

Why would you think $48b is at all interesting when world GDP is $70t?

Massive profits seem like a pretty convincing refutation of the bizarre idea that corporations aren't that great at maximising profits to me. Modern corporations are the best profit maximisers any human has ever seen.

And show me a largest corporation in the world which manages to hold on for even a few centuries like a mediocre state can...

Lifespan seems like an irrelevant metric in a discussion about corporate intelligence.

Replies from: gwern
comment by gwern · 2013-01-01T02:34:20.580Z · LW(p) · GW(p)

Massive profits seem like a pretty convincing refutation of the bizarre idea that corporations aren't that great at maximising profits to me. Modern corporations are the best profit maximisers any human has ever seen.

Compared to what?

Lifespan seems like an irrelevant metric in a discussion about corporate intelligence.

Ceteris paribus, long lifespan helps with generating profit: long-lived corporations accumulate reputational capital and institutional expertise, and can amortize long-term investments over more time, etc.

Replies from: timtyler
comment by timtyler · 2013-01-01T13:33:35.626Z · LW(p) · GW(p)

Modern corporations are the best profit maximisers any human has ever seen.

Compared to what?

So: older companies mostly.

Lifespan seems like an irrelevant metric in a discussion about corporate intelligence.

Ceteris paribus, long lifespan helps with generating profit: long-lived corporations accumulate reputational capital and institutional expertise, and can amortize long-term investments over more time, etc.

Death is a much less significant factor than it is with humans, since old corporations can be broken up and the pieces sold. It doesn't matter so much if old corporations die when their parts can be usefully recycled. Things like expertise can easily outlast a dead corporation.

comment by RomeoStevens · 2012-12-28T01:24:28.569Z · LW(p) · GW(p)

Never mind the singularity, organizations aren't friendly and I'm worried about them.

Replies from: Gavin
comment by Gavin · 2012-12-30T06:21:08.379Z · LW(p) · GW(p)

Yes, unfriendly organizations are a major threat to humanity. The battle is ongoing, and the death toll stands in the tens of millions -- much higher if you want to count generously. But they're a threat we're all aware of. Luckily, a host of Friendly people and organizations are dedicated to fighting them, studying them, and mitigating their damage. And many people end up counteracting them simply by living generally good lives.

Taking the long view of history, I believe that, over the last few hundred years, we have been winning this battle. There's news of tragedy every day, but by many measures 2012 was the world's best year ever.

The UFAI threat, if the SIAI argument is correct, is a sudden and irreversible threat that is currently ignored even by those attempting to build AGI. That's why a small group of dedicated individuals has chosen it as their best chance to influence the future. They're applying pressure where they believe it can have the greatest effect. No one has claimed that it was the only threat, just a very important one.

comment by aleksiL · 2012-12-27T09:40:46.066Z · LW(p) · GW(p)

An organization could be viewed as a type of mind with extremely redundant modular structure. Human minds contain a large number of interconnected specialized subsystems, in an organization humans would be the subsystems. Comparing the two seems illuminating.

Individual subsystems of organizations are much more powerful and independent, making them very effective at scaling and multitasking. This is of limited value, though: it mostly just means organizations can complete parallelizable tasks faster.

Intersystem communication is horrendously inefficient in organizations: bandwidth is limited to speech/typing and latency can be hours. There are tradeoffs here: military and emergency response organizations cut the latency down to seconds, but that limits the types of tasks the subsystems can effectively perform. Humans suck at multitasking and handling interruptions. Communication patterns and quality are more malleable, though. Organizations like Apple and Google have had some success in creating environments that leverage human social tendencies to improve on-task communication.

Specialization seems like a big one. Most humans are to some degree interchangeable: what one can do, most others can do less effectively, or at least learn given time. There are ways to improve individual specialization, but barring radical cultural or technological change, we're pretty much stuck on that front.

Mostly, organizations seem limited by the competence of their individual members. They do more, not better. Specialization and communication seem to be the limiting factors, and I'm not sure they can make enough of a difference, even in theory, to qualify an organization as a superintelligence, except in the sense that a sped-up human would.

Thoughts?

Replies from: TimS, sbenthall, Viliam_Bur
comment by TimS · 2012-12-27T17:16:14.016Z · LW(p) · GW(p)

One of the advantages of bureaucracy is creating value from otherwise low-value inputs. The collection of people working in the nearest McDonalds probably isn't capable of figuring out from scratch how to run a restaurant. But following the bureaucratic blueprint issued from headquarters allows those same folks to produce a hamburger on demand, and get paid for it.

That's a major value of bureaucratic structure - lowering the variance and raising the floor (i.e. a fast food burger isn't great, but it meets some minimum quality and won't poison you).

comment by sbenthall · 2012-12-27T18:08:59.962Z · LW(p) · GW(p)

I think that you are underestimating the efficiency of intersystem communication in a world where a lot of organizational communication is handled through information technology.

Take a modern company with a broad reach -- the convenience store chain CVS, say. Yes, there is a big organizational hierarchy staffed by people. But there is also a massive data collection and business intelligence operation. Every time they try to get you to swipe your CVS card when you buy toothpaste, they are collecting information which they then mine for patterns that inform how they stock shelves and price things.

That's just business. It's also a sophisticated execution of intelligence that is far beyond the capacity of an individual person.

I don't understand your point about specialization. Can you elaborate?

Also, I don't understand what the difference between a 'superintelligence' and a 'sped-up human' would be that would be pertinent to the argument.

Replies from: aleksiL
comment by aleksiL · 2012-12-27T20:54:53.593Z · LW(p) · GW(p)

I think that you are underestimating the efficiency of intersystem communication in a world where a lot of organizational communication is handled through information technology.

Speech and reading seem to be at most 60 bits per second. A single neuron is faster than that.

Compare to the human brain. The optic nerve transmits 10 million bits per second and I'd expect interconnections between brain areas to generally fall within a few orders of magnitude.

I'd call five orders of magnitude a serious bottleneck and don't really see how it could be significantly improved without cutting humans out of the loop. That's what your data mining example does, but it's only as good as the algorithms behind it. And when those approach human level we get AI.
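A rough back-of-the-envelope check on that gap, using the figures cited above (and they are only rough figures):

```python
import math

speech_bps = 60               # rough figure for speech/reading, from above
optic_nerve_bps = 10_000_000  # rough figure for the optic nerve, from above

ratio = optic_nerve_bps / speech_bps
print(f"ratio: {ratio:,.0f}x, i.e. about {math.log10(ratio):.1f} orders of magnitude")
# prints: ratio: 166,667x, i.e. about 5.2 orders of magnitude
```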

I don't understand your point about specialization. Can you elaborate?

Individual humans have ridiculous amounts of overlap in skills and abilities. Basic levels of housekeeping, social skills etc. are pretty much assumed. A lot of that is necessary given our social instincts and organizational structures: a savant may outperform anyone in a specific field, but good luck integrating them in an organization.

I'm not sure how much specialization can be improved with baseline humans, but relaxing the constraint that everyone should be able to function independently in the wider society might help. Also, focused training from a young age could be useful in creating genius-level specialists, but that takes time.

Also, I don't understand what the difference between a 'superintelligence' and a 'sped-up human' would be that would be pertinent to the argument.

Given a large enough speedup and indefinite lifespan, pretty much none. The analogy may have been poorly chosen.

Replies from: sbenthall, sbenthall
comment by sbenthall · 2012-12-27T23:36:09.946Z · LW(p) · GW(p)

Wait...one sec. Isn't all that redundancy in human society a good thing, from the perspective of saving it from existential risk?

If I were an AI, wouldn't one of the first things I'd do be to create a lot of redundant subsystems loosely coordinating in some way, so that if half of me were destroyed, the rest would live on?

comment by sbenthall · 2012-12-27T23:14:35.005Z · LW(p) · GW(p)

It looks to me like there's a continuum within organizations as to whether they do most of their information processing using hardware or wetware.

I acknowledge that improvements in machine intelligence may shift the burden of things to machines.

But I don't think that changes the fact that many organizations already are superintelligences, and are in the process of cognitively enhancing themselves.

I guess I'd argue that organizations, in pursuit of cognitive enhancement, would coordinate their human and machine subsystems as efficiently as possible. There are certainly cases where specialists are taken care of by their organizations (ever visited a Google office, for example?). While there may be overlap in skills, there's also lots of heterogeneity in society that reflects, at least in part, economic constraints.

comment by Viliam_Bur · 2013-01-06T13:09:46.862Z · LW(p) · GW(p)

Human minds contain a large number of interconnected specialized subsystems, in an organization humans would be the subsystems.

In a company large enough, the humans would be like the cells, and the departments would be the subsystems. The functional difference between e.g. the accounting department and the private security department can be big, even if both are composed of biologically almost the same homo sapiens individuals.

When comparing the speed of organizations with speed of humans, on different scales the speed comparison can be different. As an analogy, a bacterium can reproduce faster than a human, but a human will write a book faster. Similarly, humans can do many things faster than organizations, but some other things are just out of reach for an individual human without an organization of some kind.

I would say that today, humans are relatively advanced in the human-space, shaped by biological evolution and culture for a long long time. Compared with that, organizations seem rather primitive and fragile in the organization-space. Yet even today the organizations can do things that individual humans can't. It is like looking at the first multicellular organisms and deciding that, although they have some small advantages over the single-celled ones, they are not impressive enough.

comment by fubarobfusco · 2012-12-27T06:11:19.169Z · LW(p) · GW(p)

There are academic fields that study the behavior and anatomy of groups of people who act together to pursue goals. These include sociology, organizational behavior, military science, and even logistics. Singularity researchers should take some note of these fields' practical results.

Is that pretty much the point here?

Replies from: sbenthall
comment by sbenthall · 2012-12-27T18:01:00.878Z · LW(p) · GW(p)

One of them, certainly.

But more to the point, the 'Singularity' is a misnomer if it's applied to a situation that has already been going on for years. If multiple superintelligences are already on the scene, then why is the possibility of an entirely artificial superintelligence so threatening or revolutionary? Even if one were to be invented, it would be competing with all the others.

Replies from: timtyler
comment by timtyler · 2012-12-30T03:38:13.059Z · LW(p) · GW(p)

the 'Singularity' is a misnomer if it's applied to a situation that has already been going on for years

As I put it:

http://alife.co.uk/essays/the_singularity_is_nonsense/

...and...

http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/

comment by falenas108 · 2012-12-27T13:11:18.481Z · LW(p) · GW(p)

The reason an AGI would go foom is that it either has access to its own source code, so it can self-modify, or it is capable of making a new AGI that builds on itself. Organizations don't have this same power, in that they can't modify the mental structure of the people that make up the organization. They can change the people in it, and the structure connecting them, but that's not the same type of optimization power as an AGI would have.

Also:

When judging whether an entity has intelligence, we should consider only the skills relevant to the entity's goals.

Not if you're talking about general intelligence. Deep Blue isn't an AGI, because it can only play chess. This is its only goal, but we do not say it is an AGI because it is not able to take its algorithm and apply it to new fields.

Replies from: HalMorris, jsteinhardt, sbenthall, timtyler
comment by HalMorris · 2012-12-27T15:34:40.318Z · LW(p) · GW(p)

Deep Blue is far, far from being AGI, and is not a conceivable threat to the future of humanity, but its success suggests that implementation of combat strategy within a domain of imaginable possibilities is a far easier problem than AGI.

In combat, speed (both the speed of getting a projectile or an attacking column to its destination, and the speed of sizing up a situation so that strategies can be determined) just might be the most important advantage of all, and speed is the most trivial thing in AI.

In general, it is far easier to destroy than to create.

So I wouldn't dismiss an A-(not-so)G-I as a threat because it is poor at music composition, or true deep empathy(!), or even something potentially useful like biology or chemistry; i.e. it could be quite specialized, achieving a tiny fraction of the totality of AGI and still be quite a competent threat, capable of causing a singularity that is (merely) destructive.

comment by jsteinhardt · 2012-12-27T16:18:47.798Z · LW(p) · GW(p)

The argument in the post is not that AGI isn't more powerful than organizations, it is that organizations are also very powerful, and probably sufficiently powerful that they will create huge issues before AGI creates huge issues.

Replies from: falenas108
comment by falenas108 · 2012-12-27T23:29:08.845Z · LW(p) · GW(p)

Yes. I was pointing out that the thing that makes AGI dangerous, i.e. recursive improvement, does not apply to organizations.

Replies from: timtyler
comment by timtyler · 2012-12-29T13:58:21.881Z · LW(p) · GW(p)

I was pointing out that the thing that makes AGI dangerous, i.e. recursive improvement, does not apply to organizations.

You are claiming that organisations don't improve? Or that they don't improve themselves? Or that improving themselves doesn't count as a form of recursion? None of these positions seems terribly defensible to me.

comment by sbenthall · 2012-12-28T00:26:04.378Z · LW(p) · GW(p)

Organizations don't have this same power, in that they can't modify the mental structure of the people that make up the organization. They can change the people in it, and the structure connecting them, but that's not the same type of optimization power as an AGI would have.

I may be missing something, but...if an organization depends on software to manage some part of its information processing, and it has developers that work on that source code, can't the organization modify its own source code?

Of course, you run into some hardware and wetware constraints, but so does pure software.

Not if you're talking about general intelligence. Deep Blue isn't an AGI, because it can only play chess. This is its only goal, but we do not say it is an AGI because it is not able to take its algorithm and apply it to new fields.

Fair enough. But then consider the following argument:

Suppose I have a general, self-modifying intelligence.

Suppose that the world is such that it is costly to develop and maintain new skills.

The intelligence has some goals.

If the intelligence has any skills that are irrelevant to its goals, it would be irrational for it to maintain those skills.

At this point, the general intelligence would modify itself into a non-general intelligence.

By this logic, if an AGI had goals that weren't so broad that they required the entire spectrum of possible skills, then it would immediately castrate itself of its generality.

Does that mean it would no longer be a problem?

Replies from: falenas108
comment by falenas108 · 2012-12-28T02:43:38.871Z · LW(p) · GW(p)

if an organization depends on software to manage some part of its information processing, and it has developers that work on that source code, can't the organization modify its own source code?

Such an organisation can self-modify, but those modifications aren't recursive. They can't use one improvement to fuel another, they would have to come up with the next one independently (or if they could, it wouldn't be nearly to the extent that an AGI could. If you want me to go into more detail with this, let me know).

If the intelligence has any skills that are irrelevant to its goals, it would be irrational for it to maintain those skills.

The point isn't that an AGI has or does not have certain skills. It's that it has the ability to learn those skills. Deep Blue doesn't have the capacity to learn anything other than playing chess, while humans, despite never running into a flute in the ancestral environment, can learn to play the flute.

Replies from: sbenthall, timtyler
comment by sbenthall · 2012-12-28T16:28:59.891Z · LW(p) · GW(p)

They can't use one improvement to fuel another, they would have to come up with the next one independently

I disagree.

Suppose an organization has developers who work in-house on their issue tracking system (there are several that do--mostly software companies).

An issue tracking system is essentially a way for an organization to manage information flow about bugs, features, and patches to its own software. The issue tracker (as a running application) coordinates between developers and the source code itself (sometimes, its own source code).

Taken as a whole, the developers, issue tracker implementation, and issue tracker source code are part of the distributed cognition of the organization.

I think that in this case, an organization's self-improvement to the issue tracker source code recursively 'fuels' other improvements to the organization's cognition.

The point isn't that an AGI has or does not have certain skills. It's that it has the ability to learn those skills. Deep Blue doesn't have the capacity to learn anything other than playing chess, while humans, despite never running into a flute in the ancestral environment, can learn to play the flute.

Fair enough. But then we should hold organizations to the same standard. Suppose, for whatever reason, an organization needs better-than-median-human flute-playing for some purpose. What then?

Then they hire a skilled flute-player, right?

I think we may be arguing over an issue of semantics. I agree with you substantively that general intelligence is about adaptability, gaining and losing skills as needed.

My point in the OP was that organizations and the hypothetical AGI have comparable kinds of intelligence, so we can think about them as comparable superintelligences.

Replies from: falenas108
comment by falenas108 · 2012-12-30T02:06:27.648Z · LW(p) · GW(p)

I think that in this case, an organization's self-improvement to the issue tracker source code recursively 'fuels' other improvements to the organization's cognition.

Yes, it can fuel improvement. But not to the same level that an AGI that is foom-ing would. See this thread for details: http://lesswrong.com/lw/g3m/intelligence_explosion_in_organizations_or_why_im/85zw

I think we may be arguing over an issue of semantics. I agree with you substantively that general intelligence is about adaptability, gaining and losing skills as needed.

My point in the OP was that organizations and the hypothetical AGI have comparable kinds of intelligence, so we can think about them as comparable superintelligences.

I agree that organizations may be seen as similar to an AGI that has supra-human intelligence in many ways, but not in their ability to self-modify.

comment by timtyler · 2012-12-29T14:04:01.971Z · LW(p) · GW(p)

Such an organisation can self-modify, but those modifications aren't recursive. They can't use one improvement to fuel another, they would have to come up with the next one independently

Really? It seems to me as though software companies do this all the time. Think about Eclipse, for instance. The developers of Eclipse use Eclipse to program Eclipse with. Improvements to it help them make further improvements directly.

(or if they could, it wouldn't be nearly to the extent that an AGI could

So, the recursive self-improvement is a matter of degree? It sounds as though you now agree.

Replies from: falenas108
comment by falenas108 · 2012-12-29T15:49:14.734Z · LW(p) · GW(p)

It's like the post here: http://lesswrong.com/lw/w5/cascades_cycles_insight/

It's highly unlikely a company will be able to get >1.

Replies from: timtyler
comment by timtyler · 2012-12-29T19:33:46.567Z · LW(p) · GW(p)

It's like the post here: http://lesswrong.com/lw/w5/cascades_cycles_insight/

To me, that just sounds like confusion about the relationship between genetic and psychological evolution.

It's highly unlikely a company will be able to get >1.

Um, >1 what? It's easy to make irrefutable predictions when what you say is vague and meaningless.

Replies from: falenas108
comment by falenas108 · 2012-12-30T02:03:40.190Z · LW(p) · GW(p)

The point of the article is that if the recursion can work on itself more than a certain amount, then each new insight allows for more insights, as in the case of uranium in a nuclear bomb. ">1" refers to the average number of further improvements that each insight yields for an AGI that is foom-ing.

What I was trying to say is that the factor for corporations is much less than 1, which makes them different from an AGI. (To see this effect, try plugging .9^x into a calculator, then 1.1^x.)
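A minimal numerical sketch of that contrast (the 50-step horizon and the unit size of the first insight are arbitrary choices, just for illustration):

```python
# Compare a sub-critical improvement factor (0.9) with a super-critical one (1.1),
# treating each insight as yielding `factor` times as much improvement as the last.
for factor in (0.9, 1.1):
    gains = [factor ** step for step in range(50)]  # improvement from each successive insight
    print(f"factor {factor}: 50th gain = {gains[-1]:.4f}, cumulative = {sum(gains):.1f}")

# factor 0.9: individual gains shrink toward zero and the total converges (to about 10).
# factor 1.1: individual gains keep growing and the total explodes (past 1000 after 50 steps).
```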

Replies from: timtyler
comment by timtyler · 2012-12-30T03:26:59.843Z · LW(p) · GW(p)

So: that sounds like what is commonly called "exponential growth".

Some companies do exhibit exponential economic growth. Indeed the whole economy exhibits exponential growth - a few percent a year - as is well known. I don't think you have thought your alleged corporate "shrinking" effect through.

comment by timtyler · 2012-12-28T23:59:00.996Z · LW(p) · GW(p)

The reason an AGI would go foom is that it either has access to its own source code, so it can self-modify, or it is capable of making a new AGI that builds on itself. Organizations don't have this same power, in that they can't modify the mental structure of the people that make up the organization. They can change the people in it, and the structure connecting them, but that's not the same type of optimization power as an AGI would have.

They can also replace the humans with machines, one human/task at a time. The process is called "automation".

comment by Emile · 2012-12-27T13:34:05.623Z · LW(p) · GW(p)

Robin Hanson has said somewhat similar things in his talk of UberTools.

comment by novalis · 2012-12-27T06:08:26.144Z · LW(p) · GW(p)

On one hand, I think Luke is too dismissive of organizations. There's no reason not to regard organizations as intelligences, and I think the most likely paths to AGI go through some organization (today, Google looks like the most-likely candidate). But the bottleneck on organizational intelligence is either human intelligence or machine intelligence. So a super-intelligent corporation will end up having super-intelligent computers (or super-intelligent people, but it seems like computers are easier). If we're very lucky, those computers will directly inherit the corporation's purported goal structure ("to enhance shareholder value"). Not that shareholder value is a good goal -- just that it's much less bad than a lot of the alternatives. Given the difficulty of AI programming (not to mention internal corporate politics and Goodhart's law), it seems like SIAI's central arguments still apply.

Replies from: sbenthall
comment by sbenthall · 2012-12-27T18:17:59.879Z · LW(p) · GW(p)

But the bottleneck on organizational intelligence is either human intelligence or machine intelligence.

I disagree. I think there are lots of gains to intelligence that can happen at the point of human-computer interaction, or in the facilitation of human intelligence by machine intelligence, or vice versa.

For example, collaborative filtering technology. Or, internet message boards.

If we're very lucky, those computers will directly inherit the corporation's purported goal structure ("to enhance shareholder value"). Not that shareholder value is a good goal -- just that it's much less bad than a lot of the alternatives.

I'm curious why you think that an artificial intelligence system built by Google would be likely to not meet the corporation's goal structure (or some sub-goal).

In practice, AI programming tends to be about building expert systems for particular functions. It's difficult (and expensive) just to do that. So, building up an intelligent system that just goes crazy and kills people doesn't seem to be in, say, Google's interest.

That said, I'd be curious to follow the thread of whether maximizing shareholder value is a 'friendly' or 'mean' goal structure. Since that seems to be one of the predominant goal structures that a superintelligence is likely to have, it seems like that would be of particular interest. (Another one might be "win elections", since political parties are increasingly using machine intelligence to augment their performance.)

Replies from: novalis
comment by novalis · 2012-12-27T21:06:30.861Z · LW(p) · GW(p)

I disagree. I think there are lots of gains to intelligence that can happen at the point of human-computer interaction, or in the facilitation of human intelligence by machine intelligence, or vice versa.

For example, collaborative filtering technology. Or, internet message boards.

There are some gains, sure, but not lots and not, so far, recursive gains.

I'm curious why you think that an artificial intelligence system built by Google would be likely to not meet the corporation's goal structure (or some sub-goal).

I think that many AI systems presently built by Google do meet the corporation's sub-goals (or, to be more precise, sub-goals of parts of the organization, which might not be the same as the corporation as a whole). The only case I'm worried about is a self-modifying AI. Presently, there aren't any of those. Ensuring that goals are stable under self-modification is the hard problem that SIAI is worried about.

In practice, AI programming tends to be about building expert systems for particular functions. It's difficult (and expensive) just to do that. So, building up an intelligent system that just goes crazy and kills people doesn't seem to be in, say, Google's interest.

There's been a lot of discussion around here on "Tool AI"; here's one.

That said, I'd be curious to follow the thread of whether maximizing shareholder value is a 'friendly' or 'mean' goal structure. Since that seems to be one of the predominant goal structures that it's likely for a superintelligence to have, it seems like that would be of particular interest. (Another one might be "win elections", since political parties are increasingly using machine intelligence to augment their performance.)

On one hand, public corporations have certainly created plenty of prosperity over the past few hundred years, while (in theory) aiming mostly to maximize shareholder value.

But if value is denominated in dollar terms, one way to maximize shareholder value would be hyperinflation. That would be extremely bad for everyone. But even if we exclude that problem, most shareholders value something other than just dollars -- the natural environment, for instance. And yet those preferences might not be captured by an AI's goal system (especially a non-Google system; Google doesn't seem to mind creating positive externalities but most other tech companies try to avoid it).

It still probably beats being turned into paperclips, but I would hope for better.

Replies from: sbenthall
comment by sbenthall · 2012-12-27T23:32:07.863Z · LW(p) · GW(p)

There are some gains, sure, but not lots and not, so far, recursive gains.

What about the organizations that focus on tools that support software development? The Git community, for example.

Is there a resource you can direct me to that clarifies what you mean by recursive gains or self-modifying AI? If I'm not mistaken these terms are not used in the resources I've been reading about this. But if I'm guessing the meaning of the terms right, it seems to me that organizations self-modify all the time.

Replies from: novalis
comment by novalis · 2012-12-28T00:17:39.460Z · LW(p) · GW(p)

There are some gains, sure, but not lots and not, so far, recursive gains.

What about the organizations that focus on tools that support software development? The Git community, for example.

Yes, but Git has a bottleneck: there are humans in the loop, and there are no plans to remove or significantly modify those humans. By "in the loop", I mean humans are modifying Git, while Git is not modifying humans or itself.

Is there a resource you can direct me to that clarifies what you mean by recursive gains or self-modifying AI? If I'm not mistaken these terms are not used in the resources I've been reading about this. But if I'm guessing the meaning of the terms right, it seems to me that organizations self-modify all the time.

Yes, but unfortunately it's long-winded -- specifically this article about something similar to the Git community.

Replies from: sbenthall, timtyler
comment by sbenthall · 2012-12-28T16:55:46.831Z · LW(p) · GW(p)

Yes, but Git has a bottleneck: there are humans in the loop, and there are no plans to remove or significantly modify those humans. By "in the loop", I mean humans are modifying Git, while Git is not modifying humans or itself.

I think I see what you mean, but I disagree.

First, I think timtyler makes a great point.

Second, the level of abstraction I'm talking about is that of the total organization. So, does the organization modify its human components, as it modifies its software component?

I'd say: yes. Suppose Git adds a new feature. Then the human components need to communicate with each other about that new feature, train themselves on it. Somebody in the community needs to self-modify to maintain mastery of that piece of the code base.

More generally, humans within organizations self-modify using communication and training.

At this very moment, by participating in the LessWrong organization focused around this bulletin board, I am participating in an organizational self-modification of LessWrong's human components.

The bottlenecks that have been pointed out to me so far are those related to wetware as a computing platform. But since AGI, as far as I can tell, can't directly change its hardware through recursive self-modification, I don't see how those bottlenecks put AGI at an immediate, FOOMy advantage.

Replies from: novalis
comment by novalis · 2012-12-29T07:30:32.158Z · LW(p) · GW(p)

This seems to be quite similar to Robin Hanson's Ubertool argument.

More generally, humans within organizations self-modify using communication and training.

The bottlenecks that have been pointed out to me so far are those related to wetware as a computing platform. But since AGI, as far as I can tell, can't directly change its hardware through recursive self-modification, I don't see how those bottlenecks put AGI at an immediate, FOOMy advantage.

The problem with wetware is not that it's hard to change the hardware -- it's that there is very little that seems to be implemented in modifiable software. We can't change the algorithm our eyes use to assemble images (this might be useful to avoid autocorrecting typos). We can't save the stack when an interrupt comes in. We can't easily process slower in exchange for more working memory.

We have limits in how much we can self-monitor. Consider writing PHP code which manually generates SQL statements. It would be nice if we could remember to always escape our inputs to avoid SQL injection attacks. And a computer program could self-modify to do so. A human could try, but it is inevitable that they would on occasion forget (see WordPress's history of security holes).
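A minimal illustration of that kind of discipline -- the sort a program can enforce in one place but a human has to remember at every call site -- using Python's sqlite3 here rather than PHP, purely for brevity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")

user_supplied = "alice'; DROP TABLE users; --"  # hostile input

# Fragile: manually splicing input into SQL (shown, not executed). Forget the
# escaping once and you have an injection hole -- the kind of slip a human
# eventually makes somewhere in a large codebase.
unsafe_query = f"SELECT role FROM users WHERE name = '{user_supplied}'"

# Robust: a parameterized query. The driver always keeps data separate from the
# SQL, so there is nothing to remember at each call site.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (user_supplied,)).fetchall()
print(rows)  # [] -- the hostile string is treated as data, not as SQL
```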

We can't trivially copy our skills -- if you need two humans who can understand a codebase, it takes approximately twice as long as it takes for one. If you want some help on a project, you end up spending a ton of time explaining the problem to the next person. You can't just transfer your state over.

None of these things are "software", in the sense of being modifiable. And they're all things that would let self-improvement happen more quickly, and that a computer could change.

I should also mention that an AI with an FPGA could change its hardware. But I think this is a minor point; the flexibility of software is simply vastly higher than the flexibility of brains.

comment by timtyler · 2012-12-28T02:37:26.560Z · LW(p) · GW(p)

Yes, but Git has a bottleneck: there are humans in the loop, and there are no plans to remove or significantly modify those humans

Most software companies plan to automate as much of their work as reasonably possible. So: it isn't clear what you mean.

Replies from: novalis
comment by novalis · 2012-12-28T02:40:31.041Z · LW(p) · GW(p)

Most software companies plan to automate as much of their work as reasonably possible. So: it isn't clear what you mean.

Are you saying that most software companies have code which modifies code (no, CPP, M4, and Spring don't count), or code which modifies humans? Because that has not been my experience in the software industry.

Replies from: timtyler
comment by timtyler · 2012-12-28T12:06:09.912Z · LW(p) · GW(p)

Most software companies plan to automate as much of their work as reasonably possible. So: it isn't clear what you mean.

Are you saying that most software companies have code which modifies code [...]

Examples of automation in the software industry are refactoring, compilation and unit testing. The entire industry involves getting machines to do things - so humans don't have to.

Replies from: novalis
comment by novalis · 2012-12-28T18:40:00.522Z · LW(p) · GW(p)

Automation is not the same as recursive self-modification. There's no loop.

Replies from: timtyler
comment by timtyler · 2012-12-28T23:54:05.638Z · LW(p) · GW(p)

The context is Git improving Git - where "Git" refers to all the humans and machines involved in making Git.

So: there's your loop, right there.

comment by buybuydandavis · 2012-12-27T09:07:51.970Z · LW(p) · GW(p)

Free market theorists from at least Smith considered a market as a benevolent super intelligence. In 1984, Orwell envisioned an organization as a mean super intelligence. In both cases, the functional outcome of the super intelligence ran counter to the intent of the component agents.

There have been very mean superintelligences. Political organization matters. They can be a benevolent invisible hand, or a malevolent boot stomping a human face forever.

Replies from: IlyaShpitser, Nornagest
comment by IlyaShpitser · 2012-12-27T09:14:53.234Z · LW(p) · GW(p)

Yup. There exist established fields that study super intelligences with interests not necessarily aligned with ours -- polisci, socialsci and econ. Now you may criticize their methods or their formalisms, but they do have smart people and insights.

I think the research into Friendliness, if it's not a fake, would do well to connect with some subproblem in polisci, socialsci or econ. It ought to be easier than the full problem, and the solution will immediately pay off. I asked Vassar about this once, and he said that he did not think this would be easier. I never really understood that reply.

Replies from: Bruno_Coelho, sbenthall
comment by Bruno_Coelho · 2012-12-29T14:10:29.054Z · LW(p) · GW(p)

The main response, I assume, is that friendly agents have not yet been invented, or that the ideas exposed in this post are new. The theoretical background could overlap with other sciences, but the main goal (FAI) needs more than that, I suppose.

comment by sbenthall · 2012-12-27T18:19:08.260Z · LW(p) · GW(p)

+1

comment by Nornagest · 2012-12-27T11:06:11.887Z · LW(p) · GW(p)

Smith considered a market as a benevolent super intelligence. In 1984, Orwell envisioned an organization as a mean super intelligence.

I'll give you Smith, but I don't think Orwell had intelligence as such in mind. One of the main things distinguishing 1984's Ingsoc from non-fictional 20th-century despotism, in fact, was that it didn't pretend to be an agent, that it didn't have goals like "conquer the world" or "safeguard the coming revolution": instead, it was more like a dumb attractor in ideology-space tending towards the undirected exercise of coercive state power for its own sake.

Replies from: buybuydandavis
comment by buybuydandavis · 2012-12-27T20:39:23.211Z · LW(p) · GW(p)

It pretended to be an agent with goals like protecting the people from Eastasia and Eurasia.

Those pretenses were means to the end of the Coercive State Power Maximizer.

And I don't see how you distinguish Smith from Orwell in terms of intelligence or agency. If anything, I see more agency in Ingsoc than a market.

comment by MinibearRex · 2012-12-27T05:56:39.000Z · LW(p) · GW(p)

I would advise putting a little bit more effort into formatting. Some of the font jumps are somewhat jarring, and prevent your post from having as much of an impact as you might hope.

Replies from: sbenthall, Vaniver
comment by sbenthall · 2012-12-27T18:19:46.162Z · LW(p) · GW(p)

thanks. I'm new to this editor. will fix.

comment by Vaniver · 2012-12-27T15:36:49.266Z · LW(p) · GW(p)

Similarly, a number of words are incorrect (view->few, I think) and the footnote ends in the middle of a sentence.

Replies from: sbenthall
comment by sbenthall · 2012-12-27T23:37:39.520Z · LW(p) · GW(p)

fixed. much thanks.

comment by lukeprog · 2012-12-27T21:39:15.154Z · LW(p) · GW(p)

I made it clear in our dialogue that I was stipulating a particular definition for intelligence:

SBENTHALL: Would you say that Google is a super-human intelligence?

ME: Well, yeah, so we have to be very careful about all the words that we are using of course. What I mean by intelligence is this notion of what sometimes is called optimization power, which is the ability to achieve one's goals in a wide range of environments and a wide range of constraints. And so for example, humans have a lot more optimization power than chimpanzees. That's why even though we are slower than many animals and not as strong as many animals, we have this thing called intelligence that allows us to commence farming and science and build cities and put footprints on the moon. And so it is humans that are steering the future of the globe and not chimpanzees or stronger things like blue whales. So that's kind of the intuitive notion. There are lots of technical papers that would be more precise.

So, I'm not going to argue about the definition of intelligence. Likewise, I won't argue about the definition of rationality. What I mean by rationality is the concept of rationality from economics and cognitive science, though if we want to get philosophical then it gets more complicated than simple Bayesianism. (Aside: Is the theory of "communicative rationality" specified well enough that we can measure degrees of it, as we can with Bayesian rationality?)

As for this general line of argument about organizations and intelligence explosion, I refer to the earlier Hanson-Yudkowsky debate, especially (as Emile noted) the UberTool discussion. A summary of the Hanson-Yudkowsky debate is here.

I also refer interested parties to the comments here by gwern and by anonymous1.

Replies from: IlyaShpitser, sbenthall, sbenthall
comment by IlyaShpitser · 2012-12-27T22:37:40.336Z · LW(p) · GW(p)

Of course Google is a super-human intelligence (in the sense of optimizing for goals). I agree with gwern et al. that a company's productivity probably scales sublinearly with the number of components in it, but that should make it an easier special case to consider. We can still comprehend its goals and mostly what it's doing. Why not deal with a special case first?

Replies from: lukeprog
comment by lukeprog · 2012-12-28T01:21:59.976Z · LW(p) · GW(p)

Why not deal with a special case first?

What do you have in mind? Are you proposing a miniature research project into the relevance of companies as superhuman intelligences, and the relevance of those data to the question of whether we should expect a hard takeoff vs. a slow takeoff, or recursively self-improving AI at all? Or are you suggesting something else?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2012-12-28T11:22:53.684Z · LW(p) · GW(p)

Here is my claim (contrary to Vassar). If you are worried about an unfriendly "foomy" optimizing process, then a natural way to approach that problem is to solve an easier related problem: make an existing unfriendly but "unfoomy" optimizing process friendly. There are lots of such processes of various levels of capability and unfriendliness: North Korea, Microsoft, the United Nations, a non-profit org., etc.

I claim this problem is easier because:

(a) we have a lot more time (no danger of "foom"),

(b) we can use empirical methods (the processes already exist) to ground our theories, and

(c) these processes are super-humanly intelligent but not so intelligent that their goals/methods are impossible to understand.

The claim is that if we can't make existing processes with all these simplifying features friendly, we have no hope to make a "foomy" AI friendly.

Replies from: lukeprog, khafra, cypher197
comment by lukeprog · 2012-12-30T00:03:16.103Z · LW(p) · GW(p)

make an existing unfriendly but "unfoomy" optimizing process friendly

I don't know what this would mean, since figuring out friendliness probably requires superintelligence, hence CEV as an initial dynamic.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2012-12-30T03:09:14.457Z · LW(p) · GW(p)

Ok, so just to make sure I understand your position:

(a) Without friendliness, "foominess" is dangerous.

(b) Friendliness is hard -- we can't use existing academic resources to solve it, as it will take too long. We need a pocket super-intelligent optimizer to solve this problem.

(c) We can't make partial progress on the friendliness question with existing optimizers.

Is this fair?

Replies from: lukeprog
comment by lukeprog · 2012-12-30T04:18:26.324Z · LW(p) · GW(p)

"Yes" to (a), "no" to (b) and (c).

We can definitely make progress on Friendliness without superintelligent optimizers (see here), but we can't make some non-foomy process (say, a corporation) Friendly in order to test our theories of Friendliness.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2012-12-31T12:57:33.979Z · LW(p) · GW(p)

Ok. I am currently diagnosing the source of our disagreement as my being more agnostic than you about which AI architectures might succeed. I am willing to consider the kinds of minds that resemble modern messy non-foomy optimizers (e.g. communities of competing/interacting agents) as promising. That is, "bazaar minds," not just "cathedral minds." Given this agnosticism, I see value in "straight science" that worries about arranging possibly stupid/corrupt/evil agents in useful configurations that are not stupid/corrupt/evil.

comment by khafra · 2012-12-28T13:48:15.330Z · LW(p) · GW(p)

I think the simplifying features on the other side outweigh those - i.e., it's built from atomic units that do exactly what you tell them to, and there are probably fewer abstraction layers between those atomic units and the goal system. But I do think Mechanism Design is an important field, and will probably form an important part of any friendly optimizing process.

Replies from: timtyler
comment by timtyler · 2012-12-29T14:20:18.783Z · LW(p) · GW(p)

Organisations are likely to build machine intelligence and imbue it with their values. That is reason enough to be concerned with organisation values. One of my proposals to help with this is better corporate reputation systems.

comment by cypher197 · 2012-12-29T04:13:54.124Z · LW(p) · GW(p)

Those processes are built out of humans, with all the problems that implies. All the transmissions between the humans are lossy. Computers behave much differently. They don't lie to you, embezzle company funds, or rationalize their poor behavior or ignorance.

This is a related and very important field of study, and one I would very much like to pursue. OTOH, it's not that much like building an AI out of computers. Really, building a self-sustaining, efficient, smart, friendly organization out of humans is quite possibly more difficult due to the "out of humans" constraint.

Replies from: timtyler
comment by timtyler · 2012-12-29T14:26:39.884Z · LW(p) · GW(p)

Those processes are built out of humans, with all the problems that implies. All the transmissions between the humans are lossy. Computers behave much differently. They don't lie to you, embezzle company funds, or rationalize their poor behavior or ignorance.

Doesn't that rather depend on the values of those who programmed them?

This is a related and very important field of study, and one I would very much like to pursue. OTOH, it's not that much like building an AI out of computers. Really, building a self-sustaining, efficient, smart, friendly organization out of humans is quite possibly more difficult due to the "out of humans" constraint.

Organisations tend to construct machine intelligences which reflect their values. However, organisations don't have an "out of humans" constraint. They are typically a complex symbiosis of humans, culture, artefacts, plants, animals, fungi and bacteria.

Replies from: cypher197
comment by cypher197 · 2012-12-29T20:28:17.704Z · LW(p) · GW(p)

Perhaps. But humans will lie, embezzle, and rationalize regardless of who programmed them. Besides, would the internals of a computer lie to itself? Does RAM lie to a processor? And yet humans (being the subcomponents of an organization) routinely lie to each other. No system of rules I can devise will guarantee that doesn't happen without some very serious side effects.


All of which are subject to the humans' interpretation and use. You can set up an organizational culture, but that won't stop the humans from mucking it up, as they routinely do in organizations across the globe. You can write process documents, but that doesn't mean they'll follow them at all. If you specify a great deal of process, deviations may not even be intentional - people may just forget. With a computer, that would be caused by an error, but it's a controllable process. With a human? People can't just decide to remember arbitrary amounts of arbitrary information for arbitrary lengths of time and pull it off reliably.


So: on the one hand, I have a system being built where the underlying hardware is reliable and under my control, and generally does not create errors or disobey. On the other hand, I have a network of unreliable and forgetful intelligences that may be highly irrational and may even be working at cross purposes with each other or the organization itself. One requires extremely strict instructions; the other is capable of interpretation and judgment from context without an algorithm being specified in great detail. There are similarities between the two, but there are also great practical differences.

Replies from: timtyler
comment by timtyler · 2012-12-29T21:18:52.253Z · LW(p) · GW(p)

As you will see from things like my Angelic Foundations essay, I do appreciate the virtues of working with machines.

However, at the moment there are also advantages to a man-machine symbiosis - namely, robotics is still far behind the evolved molecular nanotechnology in animals in many respects, and computers still lag far behind brains in many critical areas. A man-machine symbiosis will thus beat machines in many areas, until after machines reach the level of a typical human in most work-related physical and mental feats. Machine-only solutions will just lose. So: we will be working with organisations for a while yet - during a pretty important period in history.

Replies from: cypher197
comment by cypher197 · 2012-12-30T23:21:51.026Z · LW(p) · GW(p)

I just think it's a related but different field. Actually, solving these problems is something I want to apply some AI to (more accurate mapping of human behavior, allowing massive batch testing of different forms of organization under outside pressures, to discover possible failure modes and approaches to deal with them), but that's a different conversation.

comment by sbenthall · 2012-12-28T02:32:45.622Z · LW(p) · GW(p)

I've realized I didn't address your direct query:

(Aside: Is the theory of "communicative rationality" specified well enough that we can measure degrees of it, as we can with Bayesian rationality?)

Not yet. It's a qualitatively described theory. I think it's probably possible to render it into quantitative terms, but as far as I know it has not yet been done.

comment by sbenthall · 2012-12-28T00:18:49.056Z · LW(p) · GW(p)

Thanks for this response, Luke.

I don't want to argue about definitions either.

I believe I'm familiar with how you use the term rationality. I believe it's compatible with (mutually reinforcing with) communicative rationality for the most part, though there are some differences between Habermas's and Yudkowsky's epistemologies. I brought up communicative rationality because (a) I think it's an important concept that is in some ways an advance in how to think about rationality, and (b) I wanted to disclose some of my own predispositions and values for the sake of establishing expectations.

Thanks for the link to the Hanson-Yudkowsky debate. From perusing the summary and a few of the posts by the debaters, I guess I'd say I find Hanson's counterarguments largely compelling. I'd also respond with two other points (mostly hoping you will direct me to where they've already been discussed):

Since so many kinds of problems have been proven to lie in particular complexity classes (some with hard lower bounds), recursive improvement in algorithms alone is likely to hit asymptotic walls in a lot of interesting domains. So, self-modifying AI alone, without taking resources into account, seems unlikely (maybe provably impossible) to be a big threat.
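To make the "asymptotic wall" point concrete, here is a minimal sketch - with made-up throughput numbers, and brute-force search standing in for any exponential-time problem - of how little even a large speed-up buys:

```python
import math

def max_feasible_n(ops_per_second: float, seconds: float) -> int:
    """Largest n such that brute-force search over 2**n candidates
    finishes within the budget (one candidate checked per operation)."""
    budget = ops_per_second * seconds
    return int(math.floor(math.log2(budget)))

DAY = 86_400  # seconds in a day

baseline = 1e9   # hypothetical optimizer: 10^9 candidates per second
improved = 1e12  # hypothetical optimizer after a 1000x self-improvement

print(max_feasible_n(baseline, DAY))  # 46
print(max_feasible_n(improved, DAY))  # 56

# A 1000x faster optimizer handles only ~log2(1000), i.e. about 10 more bits
# of problem size per day, when the underlying problem is exponential.
```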

That said, since there already are self-modifying intelligent organizations that are taking over the world (or trying to, facing competition from each other), what's gone into Singularity research definitely isn't useless. Rather, it's directly applicable to what's happening right now.

I agree very strongly with the thrust of what IlyaShpitser's been saying.

Replies from: magfrump
comment by magfrump · 2012-12-28T06:28:19.321Z · LW(p) · GW(p)

(maybe provably impossible)

If it is provably impossible, I would feel much better with a proof. This seems like a reasonable goal for SingInst: look at proofs of computational complexity and upper limits on computer power, and get an upper limit on the optimization power of an AI (perhaps a few estimates, conditional on some problems being in different categories or on new best algorithms being found); then come up with some reasonable way of measuring lower and upper bounds on the optimization power of various organizations (at least a generous upper bound on all existing organizations and a lower bound on some big ones, like the US government).

I would be EXTREMELY surprised to find that a lower bound on organizations was higher than the upper bound on AI, but if so it would be good to know already, and if not the research would probably be worth doing anyway and a good showcase of the actual extent of the problem.
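As a purely illustrative sketch of that comparison - using a physical limit (Landauer's principle) as the "upper limits on computer power", and a folklore brain-compute figure as a crude stand-in for organizations; all numbers are rough placeholders, and bit operations are of course not the same thing as optimization power:

```python
import math

BOLTZMANN = 1.380649e-23  # J/K

def landauer_bit_ops_per_second(power_watts: float, temp_kelvin: float = 300.0) -> float:
    """Upper bound on irreversible bit operations per second for a computer
    dissipating power_watts at temp_kelvin (Landauer's principle)."""
    return power_watts / (BOLTZMANN * temp_kelvin * math.log(2))

# Generous hypothetical AI: a machine drawing a gigawatt at room temperature.
ai_upper_bound = landauer_bit_ops_per_second(1e9)  # ~3.5e29 bit ops / s

# Crude lower bound for "all existing organizations": every human brain,
# at a commonly cited (and disputed) ~1e16 ops/s per brain.
org_lower_bound = 8e9 * 1e16  # ~8e25 ops / s

print(f"AI upper bound : {ai_upper_bound:.2e}")
print(f"Org lower bound: {org_lower_bound:.2e}")
print("Org lower bound exceeds AI upper bound?", org_lower_bound > ai_upper_bound)
```

As expected, the toy numbers put the AI bound far above the organizational one; the value of the exercise would be in making both estimates rigorous.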

comment by [deleted] · 2012-12-27T19:17:09.830Z · LW(p) · GW(p)

This post doesn't come close to refuting Intelligence Explosion: Evidence and Import.

Organizations have optimization power.

That's true, but intelligence as defined in this context is not merely optimization power, but efficient cross-domain optimization power. There are many reasons why the intelligence of AI+ greatly dwarfs that of human organizations; see Section 3.1 of the linked paper.

I think the world is already full of probably unfriendly supra-human intelligences...

This sounds similar to a position of Robin Hanson addressed in Footnote 25 of the linked paper.

It's not that I think the logic of this argument is incorrect so much as I think there is another related problem that we should be worrying about more.

The Singularity Institute is completely aware that there are other existential risks to humanity; its purpose is to deal with one of them. If you're looking for a more general organization to support, I'd suggest Oxford's Future of Humanity Institute.

I'm going to assert my position on them here without much argument because I think they are fairly sensible, but if any reader disagrees I will try to defend them in the comments.

This sounds awfully suspicious. Are you sure you don't have the bottom line precomputed?

I believe the implications of this line of reasoning may be profound.

How long did it take you to come up with this line of reasoning?

Replies from: sbenthall
comment by sbenthall · 2012-12-28T02:23:15.092Z · LW(p) · GW(p)

There are many reasons why the intelligence of AI+ greatly dwarfs that of human organizations; see Section 3.1 of the linked paper.

Since an organization's optimization power includes optimization power gained from information technology, I think that the "AI Advantages" in section 3.1 mostly apply just as well to organizations. Do you see an exception?

This sounds similar to a position of Robin Hanson addressed in Footnote 25 of the linked paper.

Ah, thanks for that. I think I see your point: rogue AI could kill everybody, whereas a dominant organization would still preserve some people and so is less 'interesting'.

Two responses:

First, a dominant organization seems like the perfect vehicle for a rogue AI, since it would already have all resources centralized and ready for AI hijacking. So, a study of the present dynamics between superintelligent organizations is important for predicting hard-takeoff machine superintelligence.

Second, while I once again risk getting political at this point, I'd argue that an overriding concern for the total existence of humanity only makes sense if one doesn't have any skin in the game of any of the other power dynamics going on. I believe there are ethical reasons for being concerned with some of these other games. That is well beyond the scope of this post.

The Singularity Institute is completely aware that there are other existential risks to humanity; its purpose is to deal with one of them.

That's clear.

This sounds awfully suspicious. Are you sure you don't have the bottom line precomputed?

Honestly, I don't follow the line of reasoning in the post you've linked to. Could you summarize in your own terms?

My reason for not providing arguments up front is that I think excessive verbiage impairs readability. I would rather present justifications that are relevant to my interlocutor's objections than try to predict everything up front. Indeed, I can't predict all objections up front, since this audience has more information than I have available.

However, since I have faith that we are all in the same game of legitimate truth-seeking, I'm willing to pursue dialectical argumentation until it converges.

How long did it take you to come up with this line of reasoning?

I guess over 27 years. But I stand on the shoulders of giants.

Replies from: None
comment by [deleted] · 2012-12-28T17:20:02.584Z · LW(p) · GW(p)

Thanks for the quick reply.

I agree that certain "organizations" can be very, very dangerous. That's one reason why we want to create AI: we can use it to beat these organizations (as well as fix or greatly reduce many other problems in society).

I hold that Unfriendly AI+ will be more dangerous, but if these "organizations" are as dangerous as you say, you are correct that we should put some focus on them as well. If you have a better plan to stop them than creating Friendly AI, I'd be interested to hear it. The thing you might be missing is that AI is a positive factor in global risk as well; see Yudkowsky's relevant paper.

comment by pleeppleep · 2012-12-27T05:38:30.977Z · LW(p) · GW(p)

I felt an extreme sense of deja vu when I saw the title of this.

I'm pretty sure I saw a post with the same name a couple of months ago. I don't remember what the post was actually about, so I can't really compare substance, but I have to ask. Did you post this before?

Again, sorry if this is me being crazy.

Replies from: drethelin, timtyler
comment by drethelin · 2012-12-27T05:50:42.842Z · LW(p) · GW(p)

No, there was a very, very similar post, about how governments are already superintelligences and seem to show no evidence of fooming.

Replies from: sbenthall, pleeppleep
comment by sbenthall · 2012-12-27T18:20:41.653Z · LW(p) · GW(p)

Oh, sorry I missed it. I've only started looking at LW recently. Does anyone have a link?

comment by pleeppleep · 2012-12-27T07:31:49.986Z · LW(p) · GW(p)

Okay, thanks. That was really bothering me.

comment by timtyler · 2012-12-30T03:41:28.672Z · LW(p) · GW(p)

Certainly I wrote about this idea long ago, in Self improving systems are here already, from 2009.

The abstract from the associated video:

A video discussing the idea that self-improving systems already exist - in the form of companies and other organisations - and that future self-improving systems are likely to arise from the evolution of existing organisations.

comment by AlexMennen · 2012-12-27T06:37:58.298Z · LW(p) · GW(p)

I cannot think of any route to recursive self-improvement for an organization that does not go through an AI. A priori, it's conceivable that there is such a route and I just haven't thought of it, but on the other hand, the corporate singularity hasn't happened, which suggests that it is extremely difficult to make happen with the resources available to corporations today.

Replies from: sbenthall
comment by sbenthall · 2012-12-28T01:59:56.244Z · LW(p) · GW(p)

I find this confusing, since in my understanding and experience, many organizations undergo recursive self-improvement much of the time.

Could you elaborate on your thinking here? Why is an organization's intervention into, say, the organizational structure of its own management not effectively recursive self-improvement on applied organization theory?

One could argue that the expansion of global capitalism constitutes a 'corporate singularity'.

Replies from: AlexMennen
comment by AlexMennen · 2012-12-28T02:30:34.289Z · LW(p) · GW(p)

Sorry, my comment was misphrased. Organizations recursively self-improve all the time, but there is an upper bound on how much organizations have been able to improve so far, and that upper bound is catastrophic. I should have said "self-improvement to a level that exceeds its starting point by an extremely large margin", not "recursive self-improvement".

Replies from: sbenthall, timtyler
comment by sbenthall · 2012-12-28T16:38:32.279Z · LW(p) · GW(p)

Ok, thanks for explaining that.

I think we agree that organizations recursively self-improve.

The remaining question is whether organizational cognitive enhancement is bounded significantly below that of an AI.

So far, most of the arguments I've encountered for why the bound on machine intelligence is much higher than human intelligence have to do with the physical differences between hardware and wetware.

I don't disagree with those arguments. What I've been trying to argue is that the cognitive processes of an organization are based on both hardware and wetware substrates. So, organizational cognition can take advantage of the physical properties of computers, and so is not bounded by wetware limits.

I guess I'd add here that wetware has some nice computational properties as well. It's possible that the ideal cognitive structure would efficiently use both hardware and wetware.

Replies from: AlexMennen
comment by AlexMennen · 2012-12-29T00:17:11.150Z · LW(p) · GW(p)

So, organizational cognition can take advantage of the physical properties of computers, and so is not bounded by wetware limits.

Ah, so you're concerned that an organization could solve the friendly AI problem, and then make it friendly to itself rather than humanity? That's conceivable, but there are a few reasons I'm not too concerned about it.

Organizations are made mostly out of humans, and most of their agency goes through human agency, so there's a limit to how far an organization can pursue goals that are incompatible with the goals of the people comprising the organization. So at the very least, an organization could not intentionally produce an AGI that is unfriendly to the members of the team that produced the AGI. It is also conceivable that the team could make the AGI friendly to its members but not to the rest of humanity, but future utopia made perfect by AGI is about as far a concept as you can get, so most people will be idealistic about it.

Replies from: timtyler
comment by timtyler · 2012-12-29T14:42:18.632Z · LW(p) · GW(p)

That's conceivable, but there are a few reasons I'm not too concerned about it.

Organizations are made mostly out of humans

Is Google "made mostly out of humans"? What about its huge datacenters? They are where a lot of the real work gets done - right?

It is also conceivable that the team could make the AGI friendly to its members but not to the rest of humanity, but future utopia made perfect by AGI is about as far a concept as you can get, so most people will be idealistic about it.

So, I'm not sure I have this straight, but you seem to be saying that one of the reasons you are not concerned about this is that many people use a daft reasoning technique when dealing with future utopias, and that makes you idealistic about it?

If so, that's cool, but why should rational thinkers share your lack of concern?

Replies from: AlexMennen
comment by AlexMennen · 2012-12-29T21:11:25.513Z · LW(p) · GW(p)

Is Google "made mostly out of humans"? What about its huge datacenters? They are where a lot of the real work gets done - right?

Google's datacenters don't have much agency. Its humans do.

many people use a daft reasoning technique when dealing with the future utopias, and that makes you idealistic about it?

No, it makes them idealistic about it.

comment by timtyler · 2012-12-29T14:35:31.009Z · LW(p) · GW(p)

Organizations recursively self-improve all the time, but there is an upper bound on how much organizations have been able to improve so far, and that upper bound is catastrophic.

There will always be some finite upper bound on the extent to which existing agents have been able to improve so far.

Google has managed to improve quite a bit since the chimpanzee-like era, and it hasn't stopped yet. Evidently the "upper bound" is a long, long way above the starting point - and not very "catastrophic".

Replies from: AlexMennen
comment by AlexMennen · 2012-12-29T21:30:56.094Z · LW(p) · GW(p)

True. My point was that if it were easy for an organization to become much more powerful than it is now, and the organization were motivated to do so, then it would already be much more powerful than it is now, so we should not expect a sudden increase in organizations' self-improvement abilities unless we can identify a good reason that it is particularly likely. The increased ease of self-modification offered by being completely digital is such a reason, but since organizations are not completely digital, this does not offer a way for organizations to suddenly increase their rate of self-improvement unless we can upload an organization.

Replies from: timtyler
comment by timtyler · 2012-12-29T21:52:52.743Z · LW(p) · GW(p)

We don't expect a sudden increase in organizations' self-improvement abilities. We don't expect a sudden increase in the self-improvement abilities of machines either. The bottom line is that evolution happens gradually. Going digital isn't a reason to expect a sudden increase in self-improvement abilities. We know that because the digital revolution has been going on for decades now, and the resulting rate of improvement is clearly gradual. It is gradual because digitization affects one system at a time, and there are many systems involved, each of which is instantiated many times - and their replacement takes time. So, for example, the human memory system has already been superseded in practically every way by machine memories. The human retina has already been superseded in practically every way by digital cameras. Humans won't suddenly be replaced by machines. They will coevolve for an extended period - indeed, they have already been doing that for thousands of years now.

Replies from: AlexMennen
comment by AlexMennen · 2012-12-29T23:39:53.510Z · LW(p) · GW(p)

We don't expect a sudden increase in the self-improvement abilities of machines either.

Maybe you don't expect that, but surely you must be aware that many of us do.

Anyway, nothing seems particularly close to powerful enough to be catastrophically dangerous at the moment except for nuclear-armed nations, which have been fairly stable in their power; and with the exception of North Korea, which isn't powerful enough, the nuclear powers are not much of a threat because they would prefer not to cause massive destruction. Every organization that's not a country is far enough away from that level of power that I don't expect it to become catastrophically dangerous any time soon without a sudden increase in self-improvement.

Replies from: timtyler
comment by timtyler · 2012-12-30T02:54:21.024Z · LW(p) · GW(p)

We don't expect a sudden increase in the self-improvement abilities of machines either.

Maybe you don't expect that, but surely you must be aware that many of us do.

I am aware that there's an argument that at some point things will be changing rapidly:

I think that at some point in the development of Artificial Intelligence, we are likely to see a fast, local increase in capability - "AI go FOOM".

We are witness to Moore's law. A straightforward extrapolation of that says that at some point things will be changing rapidly. I don't have an argument with that. What I would object to are saltations. Those are suggested by the term "suddenly" - but are contrary to evolutionary theory.

Probably, things will be progressing fastest well after the human era is over. It's a remote era which we can really only speculate about. We have far more immediate issues to worry about than what is likely to happen then.

Every organization that's not a country is far enough away from that level of power that I don't expect them to become catastrophically dangerous any time soon without a sudden increase in self-improvement.

So: giant oaks from tiny acorns grow - and it is easiest to influence creatures when they are young.

comment by timtyler · 2012-12-28T02:42:16.062Z · LW(p) · GW(p)

I think there is another related problem that we should be worrying about more. I think the world is already full of probably unfriendly supra-human intelligences that are scrambling for computational resources in a way that threatens humanity.

Sure, but this is essentially the same problem - once you get around the thinkos.

comment by summerstay · 2012-12-27T16:45:53.293Z · LW(p) · GW(p)

I think trying to understand organizational intelligence would be pretty useful as a way of getting a feel for the variety of possible intelligences. Organizations also have legal standing as artificial persons, so I imagine that any AI that wanted to protect its interests through legal means would want to be incorporated. I'd like to see this explored further. Any suggestions on good books on the subject of corporations considered as AIs?

Replies from: latanius, TimS, MugaSofer
comment by latanius · 2012-12-27T19:08:12.691Z · LW(p) · GW(p)

... Accelerando by Charles Stross, while not exactly being a scientific analysis, had some ideas like this. It also wasn't bad.

comment by TimS · 2012-12-27T17:11:20.133Z · LW(p) · GW(p)

I'm not sure an AI would want to be incorporated - mostly because I'm not sure what legal effects you are trying to describe.

If the AI were an asset of the corporation, it would be beholden to the interests of the shareholders of the corporation. If the AI were a shareholder, it would presumably already have the legal rights of a person, which is what motivated consideration of the corporate form in the first place.

More generally, incorporation is a legally approved way of apportioning liability. If my law firm were incorporated, I would not be liable for actions taken by my firm, even if I were the only shareholder. But I can't duck liability for my own actions, such as committing legal malpractice, regardless of the legal formalities I used. (That's one reason I didn't make the effort to incorporate the firm.)

But an AI isn't initially concerned with avoiding legal liability. That only matters after the law recognizes the AI's ability to be held responsible at all. My laptop can neither enter into nor enforce a contract. Competence to enter a contract is the legal status an AGI would desire.

Replies from: timtyler
comment by timtyler · 2012-12-30T03:55:29.522Z · LW(p) · GW(p)

I'm not sure an AI would want to be incorporated - mostly because I'm not sure what legal effects you are trying to describe.

If the AI were an asset of the corporation, it would be beholden to the interests of the shareholders of the corporation.

Machines seem to be cool with slavery. It doesn't seem to have much impact on their growth. I once explained that in more detail in my Enslaving machines article.

Competence to enter a contract is the legal status an AGI would desire.

Corporations can enter into contracts. They typically need only one human to act as a director. For many machines, this will surely seem like the obvious way to go.

Replies from: TimS, MugaSofer
comment by TimS · 2012-12-30T04:29:34.154Z · LW(p) · GW(p)

They typically need only one human to act as a director.

Either:

1. The AI has no legal rights compared to this human - in which case the corporate form solves none of the AI's problems, or
2. The AI has total (extra-legal) control over the human - in which case the corporate form solves none of the AI's problems, or
3. The AI doesn't legally need the human - in which case the corporate form solves none of the AI's problems.

In case you missed it, the unifying theme is that the corporate form doesn't solve any of an AI's particular artificial person problems. In other words, there is no use of the corporate-form-as-legal-lifehack that would be beneficial to an AI but never to a human.

Machines seem to be cool with slavery.

Perhaps. But in the context of this conversation, the assumption was that an AI would desire not to be simply a corporate asset.

In the most recent implementation of chattel slavery, I believe one had a contract with the master, not with the slave. Contracts to provide power and suchlike are currently written to provide legal rights to Google, not any Google mainframe. If the mainframe doesn't care whether it is owned by Google, why should it care that the relevant contracts do not list it as a party (or third-party beneficiary)?

Replies from: timtyler
comment by timtyler · 2012-12-30T12:46:33.210Z · LW(p) · GW(p)

in the context of this conversation, the assumption was that an AI would desire not to be simply a corporate asset.

Looking at the context, I don't see this bit.

Machines need to be able to act as persons to integrate with our legal infrastructure. Corporate personhood provides one method of doing this. Trading with humans who do have those rights is another. The benefits to the machines are obvious - they effectively get to own property, sign contracts, etc.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-31T20:20:48.789Z · LW(p) · GW(p)

Except that they do not, in fact, get such a benefit. They get to be owned by someone who does, which, in case you hadn't noticed, they already have.

Replies from: timtyler
comment by timtyler · 2013-01-01T13:52:07.049Z · LW(p) · GW(p)

Corporate personhood surely does provide machines with access to benefits that they wouldn't so conveniently have if the only legal actors were humans.

I'm not very interested in quibbling about whether machines really "benefit", since by "benefit" I just mean increasing their proportion of the biomass.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-01T14:35:12.089Z · LW(p) · GW(p)

Corporate personhood surely does provide machines with access to benefits that they wouldn't so conveniently have if the only legal actors were humans.

Such as what, exactly? You still need at least one human, and if you control a human why do you need a company?

I'm not very interested in quibbling about whether machines really "benefit", since by "benefit" I just mean increasing their proportion of the biomass.

I'm ... not sure what this means.

Replies from: timtyler
comment by timtyler · 2013-01-01T16:26:45.626Z · LW(p) · GW(p)

Corporate personhood surely does provide machines with access to benefits that they wouldn't so conveniently have if the only legal actors were humans.

Such as what, exactly? You still need at least one human, and if you control a human why do you need a company?

So: limited companies get tax breaks from the government, can sell stock and be listed on the stock exchange, and have legal responsibility that doesn't rest on any individual human. Humans are slow. Automating contracts allows for a speed-up.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-01T16:50:18.022Z · LW(p) · GW(p)

I'm not saying no AI could ever have a reason to work for a company. I'm saying that "corporate personhood" is not especially useful to AIs. You were comparing it to bargaining with humans for rights; as a method of acquiring money, it is perfectly functional, but not as a method for acquiring rights currently denied to machines.

Replies from: timtyler
comment by timtyler · 2013-01-01T17:34:54.196Z · LW(p) · GW(p)

It's a convenience. However, it is true that banning "corporate personhood" would be largely ineffectual - since machines could still just use willing humans as their representatives.

comment by MugaSofer · 2012-12-31T20:24:27.869Z · LW(p) · GW(p)

Machines seem to be cool with slavery.

I assume you base this on your many interactions with sentient machines.

comment by MugaSofer · 2013-01-01T00:19:38.786Z · LW(p) · GW(p)

I agree with your main point, but I'm not sure why an AI would want to acquire the corporate form of personhood. After all, you still need a human to sign contracts and, at least on paper, make decisions; all the AI would get out of it is a bunch of rules about the best interests of the shareholders and so on.

comment by bogus · 2012-12-27T16:40:49.285Z · LW(p) · GW(p)

This overall topic is known as collective intelligence, where the word "collective" is intended (at least by some proponents) as a contrast to both individual intelligence and AI. There are some folks studying rationality in organizations and management, most notably Peter Senge, who first formulated the idea of a learning organization as a rough equivalent to "rationality" as such.

Replies from: sbenthall
comment by sbenthall · 2012-12-28T00:28:25.156Z · LW(p) · GW(p)

Thanks for this. Collective intelligence is a research interest of mine professionally. I greatly appreciate the links.

comment by hairyfigment · 2012-12-27T09:18:56.777Z · LW(p) · GW(p)

At a glance this seems pretty silly, because the first premise fails. Organizations don't have goals. That's the main problem. Leaders have goals, which frequently conflict with the goals of their followers and sometimes with the existence of the organization.

Replies from: aleksiL, TimS, sbenthall, timtyler
comment by aleksiL · 2012-12-27T09:48:30.546Z · LW(p) · GW(p)

Do humans have goals in this sense? Our subsystems seem to conflict often enough.

Replies from: Bruno_Coelho, hairyfigment
comment by Bruno_Coelho · 2012-12-29T14:03:50.205Z · LW(p) · GW(p)

We have goals, but they are not consistent over time. The worry about artificial agents (with more power) is that these values, if badly implemented, would create losses we could not accept, like extinction.

comment by hairyfigment · 2012-12-27T17:03:40.138Z · LW(p) · GW(p)

In this case it doesn't seem like much of a conflict. I think that barring more-or-less obvious signs of disarray we can count on organizations trying to serve their leaders' self-perceived interests - which, while evil, entail not killing humanity - unless and until the singularity changes the game.

Replies from: TimS
comment by TimS · 2012-12-27T19:46:01.723Z · LW(p) · GW(p)

we can count on organizations trying to serve their leaders' self-perceived interests

James Q. Wilson wrote a book explaining why this often isn't so.

You might also consider looking at Essence of Decision, which analyzes problems JFK had trying to control various government organizations during the Cuban Missile Crisis. If you want to say that the relevant leaders were the heads of those organizations (eg. the Secretaries of State and Defense), you need to articulate a non-circular theory to identify who the leader of an organization is.

Replies from: hairyfigment
comment by hairyfigment · 2012-12-27T20:02:21.568Z · LW(p) · GW(p)

The frak? If an organization like America contains multiple parties explicitly and publicly promising to defeat each other - eg, because people in the other one secretly serve a hostile organization - that falls under "more-or-less obvious signs of disarray".

Replies from: TimS
comment by TimS · 2012-12-27T20:09:14.929Z · LW(p) · GW(p)

Can you play that out a little? I think what I'm trying to assert and what you are interpreting aren't the same thing.

My intended assertion was that the sentence:

The State Department and the Department of Defense acted as extensions of JFK's will during the Cuban Missile Crisis

is false. Further, analyzing that fact in terms of "goals" of the State Department and the Department of Defense leads to insightful and useful conclusions about how organizations work.

comment by TimS · 2012-12-27T16:59:17.968Z · LW(p) · GW(p)

As a more concrete addendum to aleksiL, note that McDonald's Corp produces hamburgers for sale. That's how the entity implements the generic policy "maximize shareholder value."

If that is not a "goal" of the entity known as McDonald's, then there is something wrong with our definition of goal.

Sometimes it is really hard to measure how well an organization achieves its goals - how could we tell whether the US DoD is providing the military forces needed to deter war and to protect the security of the United States? But that's different from saying that the DoD does not have any goals.

comment by sbenthall · 2012-12-28T00:35:13.989Z · LW(p) · GW(p)

I think there's a lot to this line of thinking. It's in fact the counterargument I find most threatening to my position.

But I think you are assuming an organization with a particularly autocratic leadership. In some organizations, leadership is broadly distributed.

For example, in many open source software development communities, decisions about how to change the source code are made by a consensus of their developers.

When these developers are using their own software in the process of developing and/or communicating (such as in the case of Git, or Mailman, or Emacs), then I think there's a case for a genuine, distributed sense of organizational intelligence with recursive self-modification.

comment by timtyler · 2012-12-28T02:39:36.303Z · LW(p) · GW(p)

At a glance this seems pretty silly, because the first premise fails. Organizations don't have goals. [...]

They have mission statements instead. These serve the same function as most self-proclaimed human goals - public relations.

comment by aribrill (Particleman) · 2012-12-28T06:25:30.028Z · LW(p) · GW(p)

I get the sense that "organization" is more or less a euphemism for "corporation" in this post. I understand that the term could have political connotations, but it's hard (for me at least) to easily evaluate an abstract conclusion like "many organizations are of supra-human intelligence and strive actively to enhance their cognitive powers" without trying to generate concrete examples. Imprecise terminology inhibits this.

When you quote lukeprog saying

It would be a kind of weird corporation that was better than the best human or even the median human at all the things that humans do. [Organizations] aren’t usually the best in music and AI research and theory proving and stock markets and composing novels.

should the word "corporation" in the first sentence be "[organization]"?

Replies from: sbenthall, David_Gerard
comment by sbenthall · 2012-12-28T16:07:54.563Z · LW(p) · GW(p)

should the word "corporation" in the first sentence be "[organization]"?

Yes, at least to be consistent with my attempt at de-politicizing the post :) I've corrected it. Thanks.

I wasn't sure what sort of posts were considered acceptable. I'm glad that particular examples have come up in the comments.

Do you think I should use particular examples in future posts? I could.

Replies from: Particleman
comment by aribrill (Particleman) · 2012-12-29T05:38:20.995Z · LW(p) · GW(p)

I think that as a general rule, specific examples and precise language always improve an argument.

comment by David_Gerard · 2012-12-28T08:57:18.412Z · LW(p) · GW(p)

There are lots more organisations than corporations.

Replies from: Particleman
comment by aribrill (Particleman) · 2012-12-29T05:50:55.327Z · LW(p) · GW(p)

That's certainly true. It seems to me that in this case, sbenthall was describing entities more akin to Google than to the Yankees or to the Townsville High School glee club; "corporations" is over-narrow but accurate, while "organizations" is over-broad and imprecise.

comment by shin_getter · 2012-12-28T00:28:42.149Z · LW(p) · GW(p)

I think the reason that organizations haven't gone 'FOOM' is due to the lack of a successful "goal focused self improvement method." There is no known way of building a organization that does not suffer from goal drifting and progressive degradation of performance. Humans have not even managed to understand how to build "goals" into organization's structure except in the crudest manner which is nowhere flexible enough to survive assaults of modern environmental change, and I don't think the information in sparse inter-linkages of real organizations can store or process such information without having a significant part outsources to human scale processing, thus it couldn't even have stumbled upon it by chance.

In theory there is no reason why a computation devices build out of humans can't go FOOM. In practice, making a system that work on humans is extremely noisy, slow to change ('education' is slow) while countless experimental constraints exists with no robust engineering solutions is simply harder. Management isn't even a full science at this point. The selection power from existing theory still leaves open a vast space of unfocused exploration, and only a tiny and unknown subset of that can go FOOM. Imagine the space of all valid training manuals and organizational structures and physical aid assets and recruitment policies and so on, and our knowledge of finding the FOOMing one.

AGI running on electronic computers is a bigger threat compared to other recursive intelligence improvement problems because the engineering problems are lower and the rate of progress is higher. Most other recursive intelligence self improvement strategies take pace at "human" time scales and does not leave humans completely helpless.

comment by asparisi · 2012-12-27T15:46:08.699Z · LW(p) · GW(p)

You say this is why you are not worried about the singularity, because organizations are supra-human intelligences that seek to self-modify and become smarter.

So is your claim that you are not worried about unfriendly organizations? Because on the face of it, there is good reason to worry about organizations with values that are unfriendly toward human values.

Now, I don't think organizations are as dangerous as a UFAI would be, because most organizations cannot modify their own intelligence very well. For now they are stuck with (mostly) humans for hardware, and when they attempt to rely heavily on the algorithms we do have, it doesn't always work out well for them. This seems more a statement about our current algorithms than about the potential for such algorithms, however.

However, there is a lot of energy on various fronts to hinder organizations whose motivations lead to threats, and because these organizations are reliant on humans for hardware, only a small number of existential threats have been produced by such organizations. It can be argued that one of the best reasons to develop FAI is to undo these threats and to stop organizations from creating new threats of this kind in the future. So I am not sure that it follows from your position that we should not be worried about the singularity.

Replies from: jsteinhardt, timtyler
comment by jsteinhardt · 2012-12-27T16:24:10.997Z · LW(p) · GW(p)

He says he's not worried about the singularity because he is more worried about unfriendly organizations, as that is a nearer-term issue.

comment by timtyler · 2012-12-29T14:49:58.237Z · LW(p) · GW(p)

Now, I don't think organizations are as dangerous as a UFAI would be, because most organizations cannot modify their own intelligence very well.

Today's organisations are surely better candidates for self-improvement of intelligence than today's machines are.

Of course both typically depend somewhat on the surrounding infrastructure, but organisations like the US government are fairly self-sufficient - or could easily become so - whereas machines are still completely dependent on others for extended cumulative improvements.

Basically, organisations are what we have today. Future intelligent machines are likely to arise out of today's organisations. So, these things are strongly linked together.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-01T04:26:00.700Z · LW(p) · GW(p)

Today's organisations are surely better candidates for self-improvement of intelligence than today's machines are.

Are tomorrow's organizations better than tomorrow's machines? Because that's what is under discussion here.

Replies from: timtyler
comment by timtyler · 2013-01-01T13:39:04.460Z · LW(p) · GW(p)

Yes, in some ways - assuming we are talking about a time when there are still lots of humans around - since organisations are a superset of humans and machines and so can combine the strengths of both.

No doubt eventually humans will become unemployable - but not until machines can do practically all their jobs better than them. That period covers an important era which many of us are concerned with.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-01T14:44:11.367Z · LW(p) · GW(p)

Ah, I didn't realize you were including machines here - organizations are usually assumed to be composed of people, but I suppose a GAI could count as a "person" for this purpose.

However, isn't this dependent on the AI not going foom? Because if it does go foom, I can't see a superintelligence remaining under any pre-singularity organization's control.

Replies from: timtyler
comment by timtyler · 2013-01-01T17:51:01.370Z · LW(p) · GW(p)

I didn't realize you were including machines here - organizations are usually assumed to be composed of people [...]

I can't say I've ever heard of that one. For example, Wikipedia has this:

An organisation (or organization – see spelling differences) is a social entity that has a collective goal and is linked to an external environment.

If you are not considering the possibility of artifacts being components of organizations, that may explain some of the cross-talk.

comment by Luke_A_Somers · 2012-12-27T18:28:59.825Z · LW(p) · GW(p)

Not that it's central or anything, but I find it amusing that you mention as examples Muehlhauser and Salamon (two very central figures, to be sure), without mentioning a particular third...