Singularity the hard way

post by CCC · 2012-12-12T19:07:53.470Z · LW · GW · Legacy · 30 comments


So far, we only have one known example of the development of intelligent life; and that example is us. Humanity. That means that we have only one mechanism that is known to be able to produce intelligent life; and that is evolution. But by far the majority of life that is produced by evolution is not intelligent. (In fact, by far the majority of life produced by evolution appears to be bacteria, as far as I can tell. There are also a lot of beetles.)

Why did evolution produce such a steep climb in intelligence in humans, but not in other creatures? That, I suspect, is at least partially because we humans are no longer competing against other creatures. We are competing against each other.

Also, once we managed to start writing things down and sharing knowledge, we shifted off the slow, evolutionary timescale and onto the faster, technological timescale. As technology improves, we find ourselves being more right, less wrong; our ability to affect the environment continually increases. Our intellectual development, as a species, speeds up dramatically.

And I believe that there is a hack that can be applied to this process: a mechanism by which the total intelligence of humanity can be rather dramatically increased. (It will take time.) The process is simple enough in concept.

These thoughts were triggered by an article about some Ethiopian children who were given tablets by OLPC. They were chosen specifically on the basis of illiteracy (throughout the whole village) and were given no teaching, aside from the teaching apps on the tablets (some instruction on how to use the solar chargers was also given to the adults). In fairly short order, they taught themselves basic literacy, and had modified the operating system to customise it and re-enable the camera.

My first thought was that this gives an upper bound on the minimum cost of world literacy: at most the cost of one tablet per child (plus a bit for transportation).

In short, we need world literacy. World literacy will allow anyone and everyone to read up on that which interests them. It will allow a vastly larger number of people to start thinking about certain hard problems (such as any hard problem you care to name). It will allow more eyes to look at science; more experiments to be done and published; more armour-piercing questions which no-one has yet thought to ask because there simply are not enough scientists to ask them.

World literacy would accelerate the technological progress of humanity and would probably, after enough generations, result in a humanity that we would, by today's standards, consider superhumanly intelligent. (This may or may not necessitate direct brain-computer interfaces.)

The aim, therefore, is to allow humanity, and not some human-made AI, to go *foom*. It will take some significant amount of time - following this plan means that our generation will do no more than continue a process that began some millions of years ago - but it does have this advantage: if it is humanity that goes *foom*, then the resulting superintelligences are practically guaranteed to be human-Friendly, since they will be human. (For the moment, I discard the possibility of a suicidal superintelligence.)

It also has this advantage: the process is likely to be slow enough that a significant fraction of humanity will be enhanced at the same time, or close enough to the same time that none will be able to stop any of the others' enhancements. This drastically reduces the probability of being trapped by a single Unfriendly enhanced human.

The main disadvantage is the time taken; this will take centuries at the least, perhaps millennia. It is likely that, along the way, a more traditional AI will be created.

30 comments

Comments sorted by top scores.

comment by jimrandomh · 2012-12-12T19:45:45.692Z · LW(p) · GW(p)

According to the CIA world factbook, the world literacy rate is 83.7%, so increasing this to 100% is only a 20% increase in the number of literate people. That's equivalent to about 18 years of population growth at the current rate of 1.1%/yr. World literacy is a good and desirable thing, but we already got most of the way there (and collected the benefits) in the 20th century; the remaining benefits are humanitarian, not technological.
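
A minimal sketch of that arithmetic, assuming the 83.7% and 1.1%/yr figures above (the year count depends on whether growth is treated as simple or compounded):

```python
# Back-of-the-envelope check, using the quoted figures:
# 83.7% world literacy and 1.1%/yr population growth.
import math

literacy_rate = 0.837
growth_rate = 0.011

# Relative increase in the number of literate people if literacy reached 100%.
increase = 1 / literacy_rate - 1
print(f"Increase in literate population: {increase:.1%}")  # roughly 20%

# Years of population growth that would add the same proportion of people.
years_simple = increase / growth_rate                                 # simple: ~18 years
years_compound = math.log(1 + increase) / math.log(1 + growth_rate)  # compounded: ~16 years
print(f"Equivalent years of growth: {years_simple:.0f} (simple), {years_compound:.0f} (compounded)")
```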

Replies from: NancyLebovitz, CCC
comment by NancyLebovitz · 2012-12-13T03:52:57.248Z · LW(p) · GW(p)

I think the breakpoint is access to computers, not literacy.

comment by CCC · 2012-12-13T03:30:42.836Z · LW(p) · GW(p)

There's another effect as well. Humans compete with each other; at the moment, all literate people can claim a legitimate advantage over the illiterate people (and, in the case of some, this may be an excuse to stop self-improving). Once there are no illiterates, that excuse falls away.

Replies from: PaulS
comment by PaulS · 2012-12-14T05:22:59.853Z · LW(p) · GW(p)

Most potential scientists don't view illiterate children in Third World countries as their competitors.

comment by Alicorn · 2012-12-12T19:41:29.992Z · LW(p) · GW(p)

Humans aren't Friendly. Whatever gave you that idea?

Replies from: BerryPick6
comment by BerryPick6 · 2012-12-12T20:08:25.670Z · LW(p) · GW(p)

Don't we mean 'friendly to humans and their goals' when we say 'Friendly' in the context of AI? I'm pretty sure that would make us at least moderately Friendly (or, at least, more so than an Unfriendly AI would be).

Replies from: Alicorn, DaFranker, CCC
comment by Alicorn · 2012-12-12T20:13:40.935Z · LW(p) · GW(p)

We are Friendlier than a paperclip maximizer, but we're not just-plain-Friendly. We can be led to do nasty things for all kinds of reasons in all kinds of ways, we are subject to goal distortion and various interfering biases even insofar as our goals are correct, and our goals aren't fully transparent to us to allow explicit unambiguous pursuit anyway.

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-13T13:30:46.931Z · LW(p) · GW(p)

I think most of us are Friendly “enough”, but those who aren't tend to have a disproportionate impact on world history (Hitler would be one of the most extreme examples).

comment by DaFranker · 2012-12-12T20:22:30.481Z · LW(p) · GW(p)

An Unfriendly AI would only be bad because it becomes ridiculously hard for us to stop, and it doesn't care about us. If an ufAI is exactly as powerful and smart as an average human, and cannot ever get better, it's not all that much of a threat, and is really only as dangerous as your average socio/psycho/something-path.*

May I point at the various instances of systematic slavery in human history, or even right now across the world? Imagine if the slavers had double or triple the intelligence they had/have. What makes you think that these superintelligent slaver humans would be "Friendly" even at the basic level, let alone would be the Safe kind of Friendly under self-modification? (supposing they manage to modify or enhance themselves in some way)

The assumption that all humans foom, AND all do so at the same time, AND all do so at the same (or insignificant difference) rate, AND (Remain Safe under self-modification OR never find a way to self-modify), AND are human-Friendly by default... is a very far-fetched combined assumption to be making here, IMO.

* Yes, that's anthropomorphizing it a bit, but I'm assuming that it would need its own set of heuristics to replace humans' biases and heuristics, otherwise it'd probably be thinking very slowly and pose even less of a threat. If those heuristics aren't particularly better optimized than our own, then it's still only so much of a threat, probably equivalent to a particularly unpredictable psychopath.

Replies from: CCC, mwengler
comment by CCC · 2012-12-13T16:21:28.833Z · LW(p) · GW(p)

The assumption that all humans foom, AND all do so at the same time, AND all do so at the same (or insignificant difference) rate, AND (Remain Safe under self-modification OR never find a way to self-modify), AND are human-Friendly by default... is a very far-fetched combined assumption to be making here, IMO.

The assumptions that I make are that the humanity-fooming would be both very slow, and generally available in some way (I'm not entirely sure how, but brain-computer interfaces are a possibility). That all humans foom, at more-or-less the same time, and at more-or-less the same rate, then follows on (especially in the case of the brain-computer interface, in which case the speed of the foom is controlled by the speed of technological development).

I don't think that all of the fooming people would be Friendly, but I do think that under those circumstances, any Unfriendly ones would be outnumbered by roughly-equivalently-intelligent Friendly ones, resulting in a by-and-large Friendly consensus.

comment by mwengler · 2012-12-13T14:45:11.461Z · LW(p) · GW(p)

I believe OP was referring to a single FOOM of humanity collectively.

Replies from: DaFranker, CCC
comment by DaFranker · 2012-12-13T15:45:42.320Z · LW(p) · GW(p)

I believe OP was referring to a single FOOM of humanity collectively.

Yes, so was I:

The assumption that all humans foom, AND all do so at the same time, AND all do so at the same (or insignificant difference) rate, AND (Remain Safe under self-modification OR never find a way to self-modify), AND are human-Friendly by default... is a very far-fetched combined assumption to be making here, IMO.

Replies from: mwengler
comment by mwengler · 2012-12-13T21:01:31.786Z · LW(p) · GW(p)

Hmm, I see from OP's response that he is thinking that EACH human will have a doubling of IQ per decade once we can all read. I certainly can't see where he'd get that from. It seems most likely that high-literacy, high-wealth countries would be near the limits of individual IQ achievable from good nutrition, education, and pervasive literacy.

I thought, incorrectly apparently, he was referring to a collective intelligence of humanity.

It seems clear enough to me that humanity functions as a Searlian "Chinese room" style intelligence at least. In that sense, the infrastructure, the technology available to that room to integrate the individuals in the room, as well as the total number of individuals available to be installed in the room, limits the effective intelligence of that room.

If you don't like the metaphor of the Searlian "Chinese room," think of a multiprocessor where each core is a human, and the communications and shared memory and other linkages are internet, written documents, and so on.

Then turning the last 1/6 of humanity literate (world literacy rate currently about 5/6) might give a 16ish % boost in total intelligence, plus a bit more since excess capacity over what is available for pure survival is what we get to contribute to the total, and presumably illiterate people are working at close to breakeven (just effectively smart enough to stay alive).
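
A minimal sketch of where a figure in that range comes from, under the simplifying assumption above that literate people contribute roughly equally and illiterate people contribute close to nothing beyond survival:

```python
# Literacy is currently about 5/6 of humanity (the figure used above).
literate_fraction = 5 / 6

# Share of humanity that would become newly contributing.
newly_literate_share = 1 - literate_fraction            # about 16.7% of everyone
# Relative growth of the pool of contributors.
contributor_pool_growth = 1 / literate_fraction - 1     # about 20% more contributors

print(f"Newly literate, as a share of humanity: {newly_literate_share:.1%}")
print(f"Growth of the contributing pool: {contributor_pool_growth:.1%}")
```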

But the idea that individual intelligence will change because the literacy rate goes from 84% to 99+%, I don't get that at all.

comment by CCC · 2012-12-13T15:50:53.934Z · LW(p) · GW(p)

Yes, exactly. A slow foom, one in which we take maybe a decade or longer for each doubling of IQ, so that there's time for everyone to keep up.

comment by CCC · 2012-12-13T15:42:45.363Z · LW(p) · GW(p)

Don't we mean 'friendly to humans and their goals' when we say 'Friendly' in the context of AI?

That is how I was using the term, yes.

comment by RomeoStevens · 2012-12-12T21:40:50.593Z · LW(p) · GW(p)

$50 smartphones are accomplishing more than OLPC ever did.

comment by D_Malik · 2012-12-12T20:26:40.784Z · LW(p) · GW(p)

On OLPC: I became slightly less enthusiastic about it after reading this; there was also some discussion of this here.

It also has this advantage: the process is likely to be slow enough that a significant fraction of humanity will be enhanced at the same time, or close enough to the same time that none will be able to stop any of the others' enhancements. This drastically reduces the probability of being trapped by a single Unfriendly enhanced human.

Well, our offensive technology is far ahead of our defensive technology. Despite being on the whole human-friendly, people keep running around trying to kill each other; sometimes we stop them and sometimes we do not.

One of the reasons people give for trying to develop AGI quickly is that, if done right, it would protect us from all the other things that are trying to kill us.

comment by TrE · 2012-12-12T19:47:50.348Z · LW(p) · GW(p)

Considering that every new human more or less starts off as a blank slate (or rather, that which we do start with doesn't improve with each generation) and there is only so much a human can learn within a lifetime (unless we consider genetic engineering or control over death/prolonging life), I'd expect that progress becomes slower and slower over time. I don't see how a takeoff (where progress speeds up again, and drastically so) could be achieved without either mind uploads, anti-death measures which greatly prolong life, or genetic engineering/breeding of humans. Or major breakthroughs in education, to the point where you don't need brains to observe and pattern-match, but can teach them directly.

A data point: Even today, in highly explored areas such as mathematics, it takes a significant fraction of a normal human life to acquire the skillset needed to tackle the hard problems.

Replies from: CCC
comment by CCC · 2012-12-13T03:38:23.365Z · LW(p) · GW(p)

Ancillary devices (like computers) do improve, however. With time, education will likely shift away from memorising facts, and people will place greater reliance on handheld computing systems, which can include, for example, automated theorem proving software (which already exists).

Brain-computer direct interfaces will take time to develop, but are a continuation of this trend.

comment by Tenoke · 2012-12-12T20:58:42.272Z · LW(p) · GW(p)

Some people would argue that we don't want to reach the superintelligent level the really slow way because more and more people are dying and there is more tragedy the longer we take.

Replies from: pleeppleep
comment by pleeppleep · 2012-12-12T22:39:28.790Z · LW(p) · GW(p)

And, more importantly, we'd all be dead by that time so it wouldn't help us very much.

comment by [deleted] · 2012-12-14T00:24:14.116Z · LW(p) · GW(p)

Humans will not go about fooming until we can doctor our brains well enough that we might as well run the entire civilization in upload space. Even then I and four associates suspect that modifying your own brain is dangerous and can lead to UFAI.

comment by A1987dM (army1987) · 2012-12-13T13:27:35.356Z · LW(p) · GW(p)

World literacy will allow anyone and everyone to read up on that which interests them.

There are plenty of people who know perfectly well how to read but can't be arsed to, so long as they can eat every day and watch TV on their couch.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-12-13T13:41:04.501Z · LW(p) · GW(p)

No doubt, but what's important isn't so much the people who don't do much (and I don't think there's any superiority from reading ordinary stuff vs. watching ordinary stuff), as the people who could be doing something interesting or important but are blocked because of lack of literacy.

comment by timtyler · 2012-12-13T00:54:35.553Z · LW(p) · GW(p)

The aim, therefore, is to allow humanity, and not some human-made AI, to go foom. It will take some significant amount of time [...]

And so it will be beaten by machines, which are doubling in power every year or so. Brains have lost, long live computers.

Replies from: CCC, NancyLebovitz, TimS
comment by CCC · 2012-12-13T03:53:09.943Z · LW(p) · GW(p)

Machines are doubling in raw computational speed, but have yet to show signs of intelligence. In fact, it's pretty obvious by now that intelligence is not merely a function of raw computational speed; there's some other, as yet unknown, factor which is necessary. Once this factor is identified and can be artificially created, then yes, machines will quickly outpace humanity... but what is known about this factor is that it is very, very hard to identify.

Though it seems unlikely that AI will fail to be invented over the next century, that outcome nonetheless has a non-negligible probability.

comment by NancyLebovitz · 2012-12-13T04:07:26.206Z · LW(p) · GW(p)

By one measure, the computers have won. My name is difficult for many human beings to process. This is not nearly as serious a problem as having a name which computers find difficult to process.

Replies from: Desrtopa
comment by Desrtopa · 2012-12-13T04:23:30.606Z · LW(p) · GW(p)

And of course, one can also run into problems with words that computers could process, but refuse to.

comment by TimS · 2012-12-13T02:17:03.033Z · LW(p) · GW(p)

Moore's Law is a socio-economic observation, not a physical law like the Second Law of Thermodynamics.

Replies from: Desrtopa
comment by Desrtopa · 2012-12-13T03:36:52.995Z · LW(p) · GW(p)

On the other hand, when I was in middle school, I was taught that the human brain was hundreds of thousands of times more powerful than any supercomputer. These days, our top supercomputers have a whole lot more processing power than the human brain. So it's more or less true that human brains have already lost, even if we haven't yet reached the point where everyone can own a computer more powerful than a human brain.