Why safety is not safe

post by rwallace · 2009-06-14T05:20:55.442Z · LW · GW · Legacy · 118 comments


June 14, 3009

Twilight still hung in the sky, yet the Pole Star was visible above the trees, for it was a perfect cloudless evening.

"We can stop here for a few minutes," remarked the librarian as he fumbled to light the lamp. "There's a stream just ahead."

The driver grunted assent as he pulled the cart to a halt and unhitched the thirsty horse to drink its fill.

It was said that in the Age of Legends, there had been horseless carriages that drank the black blood of the earth, long since drained dry. But then, it was said that in the Age of Legends, men had flown to the moon on a pillar of fire. Who took such stories seriously?

The librarian did. In his visit to the University archive, he had studied the crumbling pages of a rare book in Old English, itself a copy, a mere few centuries old, of a text from the Age of Legends itself; a book that laid out a generation's hopes and dreams, of building cities in the sky, of setting sail for the very stars. Something had gone wrong - but what? That civilization's capabilities had been so far beyond those of his own people. Its destruction should have taken a global apocalypse of the kind that would leave an unmistakable record, both historical and archaeological, and yet there was no trace. Nobody had anything better than mutually contradictory guesses as to what had happened. The librarian intended to discover the truth.

Forty years later he died in bed, his question still unanswered.

The earth continued to circle its parent star, whose increasing energy output could no longer be compensated by falling atmospheric carbon dioxide concentration. Glaciers advanced, then retreated for the last time; as life struggled to adapt to changing conditions, the ecosystems of yesteryear were replaced by others new and strange - and impoverished. All the while, the environment drifted further from that which had given rise to Homo sapiens, and in due course one more species joined the billions-long roll of the dead. For what was by some standards a little while, eyes still looked up at the lifeless stars, but there were no more minds to wonder what might have been.



Were I to submit the above to a science fiction magazine, it would be instantly rejected. It lacks a satisfying climax in which the hero strikes down the villain, for it has neither hero nor villain. Yet I ask your indulgence for a short time, for it may yet possess one important virtue: realism.

The reason we relate to stories with villains is easy enough to understand. In our ancestral environment, if a leopard or an enemy tribesman escaped your attention, failure to pass on your genes was likely. Violence may or may not have been the primary cause of death, depending on time and place; but it was the primary cause that you could do something about. You might die of malaria, you might die of old age, but there was little and nothing respectively that you could do to avoid these fates, so there was no selection pressure to be sensitive to them. There was certainly no selection pressure to be good at explaining the distant past or predicting the distant future.

Looked at that way, it's a miracle we possess as much general intelligence as we do; and certainly our minds have achieved a great deal, and promise more. Yet the question lurks in the background: are there phenomena, not in distant galaxies or inside the atomic nucleus beyond the reach of our eyes but in our world at the same scale we inhabit, nonetheless invisible to us because our minds are not so constructed as to perceive them?

In search of an answer to that question, we may ask another one: why is this document written in English instead of Chinese?

As late as the 15th century, this was by no means predictable. The great civilizations of Europe and China were roughly on par, the former having almost caught up over the previous few centuries; yet Chinese oceangoing ships were arguably still better than anything Europe could build. Fleets under Admiral Zheng He reached as far as East Africa. Perhaps China might have reached the Americas before Europeans did, and the shape of the world might have been very different.

The centuries had brought a share of disasters to both continents. War had ravaged the lands, laying waste whole cities. Plague had struck, killing millions, men, women and children buried in mass graves. Shifts of global air currents brought the specter of famine. Civilization had endured; more, it had flourished.

The force that put an end to the Chinese arc of progress was deadlier by far than all of these together, yet seemingly as intangible as metaphysics. By the 16th century, the fleets had vanished, the proximate cause political; to this day there is no consensus on the underlying factors. It seems what saved Europe was its political disunity. Why was that lost in China? Some writers have blamed flat terrain, which others have disputed; some have blamed rice agriculture and its need for irrigation systems. Likely there were factors nobody has yet understood; perhaps we never will.

An entire future that might have been, was snuffed out by some terrible force compared to which war, plague and famine were mere pinpricks - and yet even with the benefit of hindsight, we still don't truly understand what it was.

Nor is this an isolated case. From the collapse of classical Mediterranean civilization to the divergent fates of the US and Argentina, whose prospects looked so similar as recently as the early 20th century, we find that the forces more terrible than any war or ordinary disaster are those which operate unseen in plain sight and are only dimly understood even after the fact.

The saving grace has always been the outside: when one nation, one civilization, faltered, another picked up the torch and carried on; but with the march of globalization, there may soon be no more outside.

Unless of course we create a new one. Within this century, if we continue to make progress as quickly as possible, we may develop the technology to break our confinement, to colonize first the solar system and then the galaxy. And then our kind may truly be immortal, beyond the longest reach of the Grim Reaper, and love and joy and laughter be not outlived by the stars themselves.

If we continue to make progress as quickly as possible.

Yet at every turn, when risks are discussed, ten voices cry loudly about the violence that may be done with new technology for every one voice that quietly observes that we cannot afford to be without it, and we may not have as much time as we think we have. It is not that anyone is being intentionally selfish or dishonest. The critics believe what they are saying. It is that to the human mind, the dangers of progress are vivid even when imaginary; the dangers of its lack are scarcely perceptible even when real.

There are many reasons why we need more advanced technology, and we need it as soon as possible. Every year, more than fifty million people die for its lack, most in appalling suffering. But the one reason above all others is that the window of opportunity we are currently given may be our only chance to pass the last step of the Great Filter; we cannot know when it will close, or whether, once closed, it will ever open again.

Less Wrong is about bias, and the errors to which it leads us. I present, then, what may be the most lethal of all our biases: that we react instantly to the lesser death that comes in blood and fire, but the greater death that comes in the dust of time is to our minds invisible.

And I ask that you remember this, the next time you contemplate the alleged dangers of technology.

 

118 comments

Comments sorted by top scores.

comment by Z_M_Davis · 2009-06-14T07:15:58.782Z · LW(p) · GW(p)

I hardly think many here would object to love, joy, and laughter being not outlived by the stars themselves: as you say, the critics are not dishonest. As steven points out, any disagreement would seem to stem from differing assessments of the probabilities of stagnation risk and existential risk. If the future is going to be dominated by a hard takeoff Singularity, then it is incredibly important to make sure to get that first AGI exactly, perfectly right at the expense of all else. If the future is to be one of "traditional" space colonization and catastrophic risk from AI, MNT, &c. is negligible, then it's incredibly important to develop techs as quickly as possible. While the future does depend on what "we" decide to do now (bearing in mind that there is no unitary we), this is largely an empirical issue: how does the tech tree actually look? What does it take to colonize the stars? Is hard takeoff possible, and what would that take? &c. I think that these are the sorts of questions we need to be asking and trying to answer, rather than pledging ourselves to the "pro-safety" or "pro-technology" side. Since we all want more-or-less the same thing, it's in all of our best interests to try to reach the most accurate conclusions possible.

Replies from: Roko
comment by Roko · 2009-06-14T14:27:51.051Z · LW(p) · GW(p)

how does the tech tree actually look? What does it take to colonize the stars? Is hard takeoff possible, and what would that take?

The tech tree for the future? Oh that's easy. It looks like this, and comes with handy lists of what technologies enable what weapons, armour types and chassis.

In all seriousness, we haven't got a clue. I upvoted ZM's comment because this is an empirical issue. It's just that I find it highly unlikely that I could refute or confirm RWallace's position by any earthly method of future prediction.

I think that we might do better to play a mixed strategy where we prepare as well as possible in this relatively safe, comfortable time period for a variety of scenarios that reality might throw at us.

Stagnation and/or resource depletion (as alluded to by RWallace) is a strong possible threat. But so is a negative singularity. I doubt anyone can come up with the data necessary to support focusing on only one of these threats.

Replies from: asciilifeform
comment by asciilifeform · 2009-06-14T15:14:16.743Z · LW(p) · GW(p)

resource depletion (as alluded to by RWallace) is a strong possible threat. But so is a negative singularity.

Resource depletion is as real and immediate as gravity. You can pick up a pencil and draw a line straight through present trends to a horse-and-cart world (or the smoking, depopulated ruins from a cataclysmic resource war). The negative singularity, on the other hand, is an entirely hypothetical concept. I do not believe the two are at all comparable.

Replies from: wuwei, Vladimir_Nesov, Roko
comment by wuwei · 2009-06-14T18:16:32.924Z · LW(p) · GW(p)

Would you bet on resource depletion?

Replies from: asciilifeform
comment by asciilifeform · 2009-06-14T22:06:59.495Z · LW(p) · GW(p)

What do you expect the loser of the bet to repay me with? Ammo?

comment by Vladimir_Nesov · 2009-06-14T15:26:51.734Z · LW(p) · GW(p)

That is your present position, not a good argument for it. It could be valuable as a voice of dissent, if many other people shared your position but hesitated to voice it.

My position, for example, is that resource depletion isn't really an issue: it may lead to some temporary hardship, but not to something catastrophic and civilization-stopping, while negative AGI is a very real show-stopper. Does my statement change your mind? If not, what's the message of your own statement for people who disagree?

Replies from: Roko
comment by Roko · 2009-06-14T15:35:46.417Z · LW(p) · GW(p)

is that resource depletion isn't really an issue, it may only lead to some temporary hardship

I think that this is overconfident. I would say that resource depletion - especially of fossil fuels - combined with war, famine, etc could permanently put us back in the stone age.

Replies from: asciilifeform, Vladimir_Nesov
comment by asciilifeform · 2009-06-14T15:52:41.826Z · LW(p) · GW(p)

permanently put us back in the stone age

Exactly. The surface-accessible minerals are entirely gone, and pre-modern mining will have no access to what remains. Even meaningful landfill harvesting requires substantial energy and may be beyond the reach of people attempting to "pick up the pieces" of a totally collapsed civilization.

Replies from: Vladimir_Nesov, CronoDAS
comment by Vladimir_Nesov · 2009-06-14T15:57:38.287Z · LW(p) · GW(p)

Even thrown back to a stone age, the second arc of development doesn't need to repeat the first. There are plenty of ways to systematically develop technology by other routes, especially if you don't implement mass production for the planetary civilization in the process, instead working only on improving technology on a small scale, up to the point of overcoming the resource problem.

Replies from: Drahflow
comment by Drahflow · 2009-06-14T17:59:17.039Z · LW(p) · GW(p)

Technological advance is strongly dependent on "mass production for the planetary civilization", because otherwise most people are busy doing agriculture and don't have time to become PhDs.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-14T18:16:27.839Z · LW(p) · GW(p)

That's only because the power of technology wasn't realized until industry was already well under way. Roughly speaking, you can always tax everyone 10%, and have 10% of the population do science.

Replies from: Aurini, asciilifeform
comment by Aurini · 2009-06-15T21:22:05.105Z · LW(p) · GW(p)

You're assuming that 90% of the population can spare 10%. If things were to revert to subsistence-level farming that might not be possible.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-15T21:46:21.200Z · LW(p) · GW(p)

It's always possible to spare, if not 10%, then 5% or 2%, which is still enough to sustain a considerable number of people. I don't see the point of your argument.

comment by asciilifeform · 2009-06-14T22:54:47.042Z · LW(p) · GW(p)

have 10% of the population do science

Do you actually believe that 10% of the population are capable of doing meaningful science? Or that post-collapse authority figures will see value in anything we would recognize as science?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-14T23:00:48.729Z · LW(p) · GW(p)

This addresses the wrong issue: the question I answered was about the capability of a pre-industrial society to produce enough surplus for enough people to think professionally, while your nitpick is that a number clearly intended to serve as a feasible upper bound is too high.

See also: Least convenient possible world.

Replies from: asciilifeform
comment by asciilifeform · 2009-06-14T23:08:46.956Z · LW(p) · GW(p)

Thank you for the link.

I concede that a post-collapse society might successfully organize and attempt to resurrect civilization. However, what I have read regarding surface-mineral depletion and the mining industry's forced reliance on modern energy sources leads me to believe that if our attempt at civilization sinks, the game may be permanently over.

Replies from: Strange7, Vladimir_Nesov
comment by Strange7 · 2010-03-26T03:02:53.891Z · LW(p) · GW(p)

Why would we need to mine for minerals? It's not as though iron or copper permanently stop being usable as such when they're alloyed into structural steel or semiconductors. The wreckage of an industrial civilization would make better ore than any natural stratum.

Replies from: Clippy
comment by Clippy · 2010-03-26T03:12:17.507Z · LW(p) · GW(p)

No, once a technological civilization has used the minerals, they're too scattered and worn to be efficiently gathered. When the minerals are still in the planet, you can use geological knowledge to predict where they are and find them in concentrated form. Once Sentients start using them for various purposes, they lose the order and usefulness they once had.

In short, the entropy of the minerals massively increases, because the information about their distribution is destroyed. Therefore, it requires greater energy to convert them back into useful form, almost certainly needing a higher energy expenditure per unit of useful mineral obtained (otherwise, humans would currently be mining modern middens (aka landfills) for metals).

Replies from: Nick_Tarleton, MugaSofer
comment by Nick_Tarleton · 2010-03-26T03:27:23.887Z · LW(p) · GW(p)

OTOH, when large concentrations of metal (buildings, vehicles) are disposed of, they're almost always recycled. Many such large concentrations would survive a collapse. I'm not sure how long it would take for iron/steel buildings to mostly rust away, or how much steel would be buried safe from rust.

comment by MugaSofer · 2013-04-25T13:30:32.113Z · LW(p) · GW(p)

We do. It's called "recycling".

Replies from: Clippy
comment by Clippy · 2013-04-29T21:22:54.308Z · LW(p) · GW(p)

You should recycle.

Replies from: MugaSofer
comment by MugaSofer · 2013-05-01T18:40:19.082Z · LW(p) · GW(p)

I do. So do a lot of other people. Because it is, in fact, a good idea. IIRC, it's more efficient than mining, what with all the easily-accessible minerals already mined out.

comment by Vladimir_Nesov · 2009-06-14T23:14:47.925Z · LW(p) · GW(p)

Possible, but again your reply doesn't contain an argument, it can't change anyone's beliefs.

comment by CronoDAS · 2009-06-15T05:15:50.119Z · LW(p) · GW(p)

I agree with the opinion presented in this comment.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-25T13:25:26.424Z · LW(p) · GW(p)

This is why upvoting was invented.

Replies from: CronoDAS
comment by CronoDAS · 2013-04-26T03:14:32.307Z · LW(p) · GW(p)

http://lesswrong.com/lw/3h/why_our_kind_cant_cooperate/

Replies from: MugaSofer
comment by MugaSofer · 2013-04-26T12:08:24.568Z · LW(p) · GW(p)

I disagree with this comment.

comment by Vladimir_Nesov · 2009-06-14T15:50:08.362Z · LW(p) · GW(p)

Of the consequences of resource depletion, catastrophic social collapse seems to require something like famine (which it should be possible to avoid, given a bit of planning), and even then stability may be achieved once some of the population has died out. Quality of life might drop significantly, due to inability to make all the stuff with present technology. But meanwhile, you don't need a lot of resources to continue (or renew) R&D, laboratories won't take too much stuff, only efficient services that come with economy of scale. In time, new technologies will appear that allow better and better quality of life on the resources that were inaccessible before. (And of course, at one point, AGI gets developed, leaving these issues irrelevant, at least for the survival of civilization, for either outcome.)

Replies from: Roko, asciilifeform
comment by Roko · 2009-06-14T16:02:38.947Z · LW(p) · GW(p)

Quality of life might drop significantly, due to inability to make all the stuff with present technology. But meanwhile, you don't need a lot of resources to continue (or renew) R&D, laboratories won't take too much stuff, only efficient services that come with economy of scale. In time, new technologies will appear

Consider the following chain of events: resource depletion (in particular of fossil fuels) causes countries to fight over resources, and populations to elect more aggressive leaders, leading to global nuclear war in 2050. Worldwide famine, Gigadeath, and a long new dark age follow.

Suppose this is followed by a massive famine which kills, say, 90% of the world's population. This could easily cause every single advanced society in the world to collapse and revert to medieval-style fiefdoms circa 2075. Such societies will not have access to fossil fuels, and therefore they won't have modern agriculture. Population density therefore plummets to medieval levels, with people feeding themselves using subsistence agriculture. You then have a medieval population without fossil fuels, and therefore it is harder to get the industrial revolution going again. Though, on balance, it seems more likely than not that technological civilization would rise again eventually; and on this second take, perhaps humanity would be more careful, taking care to see where they went wrong the first time. The huge amount of data and artefacts stored around the world would surely allow archaeologists of the 23rd or 24th century to piece together what happened.

I suppose that my true objection to this scenario is that I would die, almost for certain.

In fact, such a scenario seems to me to be a very likely route to eventual human colonization of the stars.

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2009-06-14T19:03:29.226Z · LW(p) · GW(p)

Resources don't become scarce overnight. It happens a little more slowly: the price of the scarce resource rises, people find ways to use it more efficiently, and they find or invent substitutes.

Nor would unprecedented levels of resource scarcity be likely to lead to a war between major powers. Our political systems may be imperfect, but the logic of mutually assured destruction would be clear and compelling even to the general public.

Replies from: Roko, asciilifeform
comment by Roko · 2009-06-15T16:32:04.496Z · LW(p) · GW(p)

Nor would unprecedented levels of resource scarcity be likely to lead to a war between major powers. Our political systems may be imperfect, but the logic of mutually assured destruction would be clear and compelling even to the general public.

This statement lacks evidence to support it; we have been close to war between nuclear powers in the past; I don't think that either of us knows whether resource shortage would lead to nuclear war. Severe resource shortage makes people desperate and makes them do silly things that aren't really in their interest, just as a shortage of sci-fi future progress is making asciilifeform desperate and leading him/her to advocate doing things that aren't really in his/her interest.

I'm not saying it would, but I'm not saying it wouldn't, and I don't have a good way of assigning probabilities.

Nuclear war might escalate in a way that those who started it didn't predict.

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2009-06-15T17:51:42.829Z · LW(p) · GW(p)

I feel that in this argument, the onus is on you to provide evidence that most people are ignoring a serious risk.

I don't have a good way of assigning probabilities either, but I feel obliged to try.

I estimate the probability that scarcity of natural resources leads to a fifty percent decline in real GDP per capita in many wealthy countries in the next fifty years is less than one percent.

Conditional on my being wrong, I would be very confused about the world, but at this point would still assign less than a five percent risk of large scale nuclear war between major powers which are both aware there is a very high risk of mutual destruction.

I don't think I'm looking at the world through rose colored glasses. I think the risk of small scale nuclear conflict, or threats such as the use of a bioengineered virus by terrorists, is much greater.

Replies from: Roko
comment by Roko · 2009-06-15T19:36:50.864Z · LW(p) · GW(p)

I estimate the probability that scarcity of natural resources leads to a fifty percent decline in real GDP per capita in many wealthy countries in the next fifty years is less than one percent.

Ok. What's your expertise? Are you just generally well informed, or are you a business person or economist etc?

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2009-06-15T19:55:53.697Z · LW(p) · GW(p)

I haven't earned a great deal of authority on these topics. I am a PhD student in sociology who reads a lot of economics. As far as I can tell, economists don't seem to think we should worry that scarcity of natural resources could lead to such major economic problems. I'd be interested to know what the market "thinks." What investment strategy would profit in such a scenario?

Replies from: Roko
comment by Roko · 2009-06-15T19:58:20.346Z · LW(p) · GW(p)

What investment strategy would profit in such a scenario?

I don't know.

comment by asciilifeform · 2009-06-14T22:58:14.282Z · LW(p) · GW(p)

the logic of mutually assured destruction would be clear and compelling even to the general public

When was the last time a government polled the general public before plunging the nation into war?

Now that I think about it, the American public, for instance, has already voted for petrowar: with its dollars, by purchasing SUVs and continuing to expand the familiar suburban madness which fuels the cult of the automobile.

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2009-06-15T02:00:41.593Z · LW(p) · GW(p)

I encourage you to write more serious comments... or find some other place to rant.

Replies from: asciilifeform, MugaSofer
comment by asciilifeform · 2009-06-15T02:19:13.609Z · LW(p) · GW(p)

Please attack my arguments. I truly mean what I say. I can see how you might have read me as a troll, though.

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2009-06-15T02:59:34.867Z · LW(p) · GW(p)

In the next century I think it is unlikely that

  1. resource scarcity will dramatically lower economic growth across the world, or
  2. competition for resources will lead to devastating war between major powers, e.g. U.S. and China, because each country has too much to lose.

I believe my opinions are shared by most economists, political scientists, and politicians. Do you agree that you hold a small minority opinion?

Do you have any references where the arguments are spelled out in greater detail?

Replies from: asciilifeform
comment by asciilifeform · 2009-06-15T03:30:36.347Z · LW(p) · GW(p)

Do you agree that you hold a small minority opinion?

Yes, of course.

Do you have any references where the arguments are spelled out in greater detail?

I was persuaded by the writings of one Dmitry Orlov. His work focuses on the impending collapse of the U.S.A. in particular, but I believe that much of what he wrote is applicable to the modern economy at large.

comment by MugaSofer · 2013-04-25T13:32:18.545Z · LW(p) · GW(p)

Thankfully, it's not up to you.

For the record, what about that comment struck you as unserious?

comment by asciilifeform · 2009-06-14T16:00:22.483Z · LW(p) · GW(p)

catastrophic social collapse seems to require something like famine

Not necessarily. When the last petroleum is refined, rest assured that the tanks and warplanes will be the very last vehicles to run out of gas. And bullets will continue to be produced long after it is no longer possible to buy a steel fork.

R&D... efficient services... economy of scale... new technologies will appear

Your belief that something like present technological advancement could continue after a cataclysmic collapse boggles my mind. The one historical precedent we have - the Dark Ages - teaches the exact opposite lesson. Reversion to barbarism - and a barbarism armed with the remnants of the finest modern weaponry, this time around - is the more likely outcome.

Replies from: Tyrrell_McAllister, MichaelBishop, Roko
comment by Tyrrell_McAllister · 2009-06-14T17:04:01.490Z · LW(p) · GW(p)

Your belief that something like present technological advancement could continue after a cataclysmic collapse boggles my mind. The one historical precedent we have - the Dark Ages - teaches the exact opposite lesson.

IIRC, Robert Wright argued in his book NonZero that technological development had stagnated when the Roman Empire reached its apex, and that the dark ages actually brought several important innovations. These included better harnesses, better plows, and nailed iron horse shoes, all of which increased agricultural yield. The Dark Ages also saw improvements to water-wheel technology, which led to much wider use of it.

He also makes the case that all the fractured polities led to greater innovations in the social and economic spheres as well.

comment by Mike Bishop (MichaelBishop) · 2009-06-14T19:11:56.371Z · LW(p) · GW(p)

The key is that some group would set up some form of government. My best guess is that governments which established rule of law, including respect for private property, would become more powerful relative to other governments. Technological progress would begin again.

Also, see what I just wrote to Roko about why resource scarcity is unlikely to be as a great a problem as you think and why wars and famines are unlikely to affect wealthy countries as a result of resource scarcity.

comment by Roko · 2009-06-14T16:18:32.517Z · LW(p) · GW(p)

belief that something like present technological advancement could continue after a cataclysmic collapse boggles my mind.

It could - and most probably would - rise up again, eventually. Rising up from the half-buried wreckage of modern civilization is easier than building it from scratch.

But I don't go as far as Vladimir and say it's virtually guaranteed. One scenario is that the survivors could all fall to a new religion that preached that technology itself was evil. This religion might suppress technological development for longer than Christianity suppressed it in the dark ages - which was 1000 years. I still think it is likely that technology would eventually make it through, but perhaps it would be used to create a global totalitarian state?

Replies from: CronoDAS, Vladimir_Nesov
comment by CronoDAS · 2009-06-15T05:09:33.746Z · LW(p) · GW(p)

It could - and most probably would - rise up again, eventually. Rising up from the half-buried wreckage of modern civilization is easier than building it from scratch.

Not necessarily. To be blunt, we've basically exhausted practically all the useful non-renewable natural resources (ores, etc.) that a civilization could access with 1200s-level technology. They'd have to mine our ruins for metals and such - and much of it is going to be locked up in forms that are completely useless.

comment by Vladimir_Nesov · 2009-06-14T17:28:57.680Z · LW(p) · GW(p)

Of course it's nowhere near guaranteed -- notice, for example, that I excluded all other catastrophic risks from consideration in that scenario, such as crazy wars over scraps, and looked only at the effects of resource shortages stopping much of industry because of dependencies that weren't secured.

comment by Roko · 2009-06-14T15:33:58.226Z · LW(p) · GW(p)

You can pick up a pencil and draw a line straight through present trends to a horse-and-cart world

Sure, this is a fair point.

The negative singularity, on the other hand, is an entirely hypothetical concept.

I think this is unfair. Resource depletion is also a hypothetical concept, because it hasn't happened yet.

Both resource depletion and the technological singularity are based upon trend extrapolation and uncertain theoretical arguments.

It is also the case that resource depletion is being addressed with billions of dollars of research money.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-06-14T08:20:30.454Z · LW(p) · GW(p)

So basically you're saying that when Leo Szilard wanted to hide the true neutron cross section of purified graphite and Enrico Fermi wanted to publish it, you'd have published it.

Replies from: None, asciilifeform, cousin_it
comment by [deleted] · 2009-06-14T09:01:34.703Z · LW(p) · GW(p)

I think rwallace is saying both men were right to continue their research.

comment by asciilifeform · 2009-06-14T15:11:40.838Z · LW(p) · GW(p)

Would you have hidden it?

You cannot hide the truth forever. Nuclear weapons were an inevitable technology. Likewise, whether or not Eurisko was genuine, someone will eventually cobble together an AGI. Especially if Eurisko was genuine, and the task really is that easy. The fact that you seem persuaded of the possibility of Lenat having danced on the edge of creating hard takeoff gives me more interest than ever before in a re-implementation.

Reading "value is fragile" almost had me persuaded that blindly pursuing AGI is wrong, but shortly after, "Safety is not Safe" reverted me back to my usual position: stagnation is as real and immediate a threat as ever there was, vastly dwarfing any hypothetical existential risks from rogue AI.

For instance, bloat and out-of-control accidental complexity have essentially halted all basic progress in computer software. I believe that the lack of quality programming systems will lead (and may already have led) directly to stagnation in other fields, such as computational biology. The near-term future appears to resemble Windows Vista rather than HAL. Engelbart's Intelligence Amplification dream has been lost in the noise. I thus expect civilization to succumb to Natural Stupidity in the near term future, unless a drastic reversal in these trends takes place.

Replies from: Eliezer_Yudkowsky, hrishimittal, Roko
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-06-15T01:26:52.028Z · LW(p) · GW(p)

Would you have hidden it?

I hope so. It was the right decision in hindsight, since the Nazi nuclear weapons program shut down when the Allies, at the cost of some civilian lives, destroyed their source of deuterium. If they'd known they could've used purified graphite... well, they probably still wouldn't have gotten nuclear weapons in this Everett branch, but they might have somewhere else.

Before 2001 I would probably have been on Fermi's side, but that's when I still believed deep down that no true harm could come to someone who was only faithfully trying to do science. (I.e. supervised universe thinking.)

comment by hrishimittal · 2009-06-14T17:41:53.856Z · LW(p) · GW(p)

stagnation is as real and immediate a threat as ever there was, vastly dwarfing any hypothetical existential risks from rogue AI.

How is blindly looking for AGI in a vast search space better than stagnation?

How does working on FAI qualify as "stagnation"?

Replies from: asciilifeform
comment by asciilifeform · 2009-06-14T17:53:26.107Z · LW(p) · GW(p)

How is blindly looking for AGI in a vast search space better than stagnation?

No amount of aimless blundering beats deliberate caution and moderation (see the 15th-century China example) for maintaining technological stagnation.

How does working on FAI qualify as "stagnation"?

It is a distraction from doing things which are actually useful in the creation of our successors.

You are trying to invent the circuit breaker before discovering electricity; the airbag before the horseless carriage. I firmly believe that all of the effort currently put into "Friendly AI" is wasted. The bored teenager who finally puts together an AGI in his parents' basement will not have read any of these deep philosophical tracts.

Replies from: Z_M_Davis, hrishimittal, Vladimir_Nesov
comment by Z_M_Davis · 2009-06-14T18:45:11.070Z · LW(p) · GW(p)

The bored teenager who finally puts together an AGI in his parents' basement will not have read any of these deep philosophical tracts.

AGI is a really hard problem. If it ever gets accomplished, it's going to be by a team of geniuses who have been working on the project for years. Will they be so immersed in the math that they won't have read the deep philosophical tracts?---maybe. But your bored teenager scenario makes no sense.

Replies from: asciilifeform
comment by asciilifeform · 2009-06-14T22:42:33.685Z · LW(p) · GW(p)

AGI is a really hard problem

It has successfully resisted solution thus far, but I suspect that it will seem laughably easy in retrospect when it finally falls.

If it ever gets accomplished, it's going to be by a team of geniuses who have been working on the project for years

This is not how truly fundamental breakthroughs are made.

Will they be so immersed in the math that they won't have read the deep philosophical tracts?

Here is where I agree with you - anyone both qualified and motivated to work on AGI will have no time or inclination to pontificate regarding some nebulous Friendliness.

But your bored teenager scenario makes no sense.

Why do you assume that AGI lies beyond the capabilities of any single intelligent person armed with a modern computer and a sufficiently unorthodox idea?

Replies from: Z_M_Davis
comment by Z_M_Davis · 2009-06-15T05:56:50.115Z · LW(p) · GW(p)

This is not how truly fundamental breakthroughs are made.

Hmm---now that you mention it, I realize my domain knowledge here is weak. How are truly fundamental breakthroughs made? I would guess that it depends on the kind of breakthrough---that there are some things that can be solved by a relatively small number of core insights (think Albert Einstein in the patent office) and some things that are big collective endeavors (think Human Genome Project). I would guess furthermore that in many ways AGI is more like the latter than the former, see below.

Why do you assume that AGI lies beyond the capabilities of any single intelligent person armed with a modern computer and a sufficiently unorthodox idea?

Only about two percent of the Linux kernel was personally written by Linus Torvalds. Building a mind seems like it ought to be more difficult than building an operating system. In either case, it takes more than an unorthodox idea.

Replies from: Vladimir_Nesov, asciilifeform
comment by Vladimir_Nesov · 2009-06-15T09:04:06.567Z · LW(p) · GW(p)

Only about two percent of the Linux kernel was personally written by Linus Torvalds. Building a mind seems like it ought to be more difficult than building an operating system.

There is no law of Nature that says the consequences must be commensurate with their cause. We live in an unsupervised universe where a movement of a butterfly's wings can determine the future of nations. You can't conclude that simply because the effect is expected to be vast, the cause ought to be at least prominent. This knowledge may only be found by a more mechanistic route.

Replies from: Z_M_Davis
comment by Z_M_Davis · 2009-06-15T14:42:33.718Z · LW(p) · GW(p)

You're right in the sense that I shouldn't have used the words ought to be, but I think the example is still good. If other software engineering projects take more than one person, then it seems likely that AGI will too. Even if you suppose the AI does a lot of the work up to the foom, you still have to get the AI up to the point where it can recursively self-improve.

comment by asciilifeform · 2009-06-15T14:01:09.836Z · LW(p) · GW(p)

How are truly fundamental breakthroughs made?

Usually by accident, by one or a few people. This is a fine example.

ought to be more difficult than building an operating system

I personally suspect that the creation of the first artificial mind will be more akin to a mathematician's "aha!" moment than to a vast pyramid-building campaign. This is simply my educated guess, however, and my sole justification for it is that a number of pyramid-style AGI projects of heroic proportions have been attempted and all failed miserably. I disagree with Lenat's dictum that "intelligence is ten million rules." I suspect that the legendary missing "key" to AGI is something which could ultimately fit on a t-shirt.

Replies from: Z_M_Davis
comment by Z_M_Davis · 2009-06-15T14:21:07.181Z · LW(p) · GW(p)

I personally suspect that the creation of the first artificial mind will be more akin to a mathematician's "aha!" moment than to a vast pyramid-building campaign. [...] my sole justification [...] is that a number of pyramid-style AGI projects of heroic proportions have been attempted and failed miserably.

"Reversed Stupidity is Not Intelligence." If AGI takes deep insight and a pyramid, then we would expect those projects to fail.

Replies from: asciilifeform
comment by asciilifeform · 2009-06-15T14:34:23.974Z · LW(p) · GW(p)

Fair enough. It may very well take both.

comment by hrishimittal · 2009-06-14T18:22:12.247Z · LW(p) · GW(p)

The bored teenager who finally puts together an AGI in his parents' basement will not have read any of these deep philosophical tracts.

That truly would be a sad day.

Are you seriously suggesting hypothetical AGIs built by bored teenagers in basements are "things which are actually useful in the creation of our successors"?

Is that your plan against intelligence stagnation?

Replies from: asciilifeform
comment by asciilifeform · 2009-06-14T22:34:36.345Z · LW(p) · GW(p)

Is that your plan against intelligence stagnation?

I'll bet on the bored teenager over a sclerotic NASA-like bureaucracy any day. Especially if a computer is all that's required to play.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-14T22:45:59.184Z · LW(p) · GW(p)

This is an answer to a different question. A plan is something implemented to achieve a goal, not something that is just more likely to work (especially against you).

Replies from: asciilifeform
comment by asciilifeform · 2009-06-14T23:03:43.335Z · LW(p) · GW(p)

I view the teenager's success as simultaneously more probable and more desirable than that of a centralized bureaucracy. I should have made that more clear. And my "goal" in this case is simply the creation of superintelligence. I believe the entire notion of pre-AGI-discovery Friendliness research to be absurd, as I already explained in other comments.

Replies from: Vladimir_Nesov, MugaSofer
comment by Vladimir_Nesov · 2009-06-14T23:10:53.023Z · LW(p) · GW(p)

You are using the wrong terminology here. If the consequences of whatever AGI gets developed are seen as positive, if you are not dead as a result, then it is already almost FAI; that is how it's defined: by the effect being positive. Deeper questions turn on what it means for the effect to be positive, and how one can be wrong about considering a certain effect positive even though it's not, but let's leave that aside for the moment.

If the teenager implemented something that has a good effect, it's FAI. The argument is not that whatever ad-hoc tinkering leads to is not within a strange concept of "Friendly AI", but that ad-hoc tinkering is expected to lead to disaster, however you call it.

Replies from: asciilifeform
comment by asciilifeform · 2009-06-14T23:16:07.537Z · LW(p) · GW(p)

if you are not dead as a result

I am profoundly skeptical of the link between Hard Takeoff and "everybody dies instantly."

ad-hoc tinkering is expected to lead to disaster

This is the assumption which I question. I also question the other major assumption of Friendly AI advocates: that all of their philosophizing and (thankfully half-hearted and ineffective) campaign to prevent the "premature" development of AGI will lead to a future containing Friendly AI, rather than no AI plus an earthbound human race dead from natural causes.

Ad-hoc tinkering has given us the seed of essentially every other technology. The major disasters usually wait until large-scale application of the technology by hordes of people following received rules (rather than an ab initio understanding of how it works) begins.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-14T23:30:00.381Z · LW(p) · GW(p)

ad-hoc tinkering is expected to lead to disaster

This is the assumption which I question.

To discuss it, you need to address it explicitly. You might want to start from here, here and here.

I also question the other assumption of Friendly AI advocates: that all of their philosophizing and (thankfully half-hearted and ineffective) campaign to prevent the "premature" development of AGI will lead to a future containing Friendly AI, rather than no AI plus an earthbound human race dead from natural causes.

That's the wrong way to see it: the argument is simply that the lack of a disaster is better than a disaster (note that the scope of this category is separate from the first issue you raised; that is, if it's shown that ad-hoc AGI is not disastrous, by all means go ahead and do it). Suicide is worse than pending death from "natural" causes. That's all. Whether it's likely that a better way out will be found, or even possible, is almost irrelevant to this position. But we ought to try to do it, even if it seems impossible, even if it is quite improbable.

Ad-hoc tinkering has given us the seed of essentially every other technology.

True, but if you expect a failure to kill civilization, the trial-and-error methodology must be avoided, even if it's otherwise convenient and almost indispensable, and has proven itself over the centuries.

comment by MugaSofer · 2013-04-25T13:16:27.178Z · LW(p) · GW(p)

You consider the creation of an unFriendly superintelligence a step on the road to understanding Friendliness?

comment by Vladimir_Nesov · 2009-06-14T18:07:35.229Z · LW(p) · GW(p)

I firmly believe that all of the effort currently put into "Friendly AI" is wasted. The bored teenager who finally puts together an AGI in his parents' basement will not have read any of these deep philosophical tracts.

Earlier:

The negative singularity, on the other hand, is an entirely hypothetical concept. I do not believe the two are at all comparable.

In other words, Friendly AI is an ineffective effort even compared to something entirely hypothetical.

comment by Roko · 2009-06-14T16:26:41.469Z · LW(p) · GW(p)

I thus expect civilization to succumb to Natural Stupidity

What do you mean by this?

Replies from: asciilifeform
comment by asciilifeform · 2009-06-14T16:45:33.677Z · LW(p) · GW(p)

I am convinced that resource depletion is likely to lead to social collapse - possibly within our lifetimes. Barring that, biological doomsday-weapon technology is becoming cheaper and will eventually be accessible to individuals. Unaugmented humans have proven themselves to be catastrophically stupid as a mass and continue in behaviors which logically lead to extinction. In the latter I include not only ecological mismanagement, but, for example, our continued failure to solve the protein folding problem, create countermeasures to nuclear weapons, and to create a universal weapon against virii. Not to mention our failure of the ultimate planetary IQ test - space colonization.

Replies from: hrishimittal, Roko, MugaSofer
comment by hrishimittal · 2009-06-14T17:36:54.839Z · LW(p) · GW(p)

I am convinced that resource depletion is likely to lead to social collapse - possibly within our lifetimes.

What convinced you and how convinced are you?

Replies from: asciilifeform
comment by asciilifeform · 2009-06-14T17:47:10.326Z · LW(p) · GW(p)

Dmitry Orlov, and very.

Replies from: cousin_it
comment by cousin_it · 2009-06-14T22:27:08.204Z · LW(p) · GW(p)

Oh. It might be too late, but as a Russian I feel obliged to warn you: when reading texts written by Russians, try to ignore the charm of darkness and depression. We are experts at this.

comment by Roko · 2009-06-14T16:49:45.060Z · LW(p) · GW(p)

Unaugmented humans have proven themselves to be catastrophically stupid as a mass and continue in behaviors which logically lead to extinction. In the latter I include not only ecological mismanagement, but, for example, our continued failure to solve the protein folding problem, create countermeasures to nuclear weapons, and to create a universal weapon against virii.

So you, like me are a "Risk transhumanist" - someone who thinks that existential risk motivates the enhancement of the intelligence of those humans who do the substantial information processing in our society (i.e. politicians, economists, scientists, etc).

I completely agree with this position.

However, creating an uFAI doesn't make things any better.

How about thinking about ways to enhance human intelligence?

Replies from: asciilifeform
comment by asciilifeform · 2009-06-14T17:01:03.720Z · LW(p) · GW(p)

How about thinking about ways to enhance human intelligence?

I agree entirely. It is just that I am quite pessimistic about the possibilities in that area. Pharmaceutical neurohacking appears to be capable of at best incremental improvements, often at substantial cost. Our best bet was probably computer-aided intelligence amplification, and it may be a lost dream.

If AGI even borders on being possible with known technology, I would like to build our successor race. Starting from scratch appeals to me greatly.

Replies from: Roko
comment by Roko · 2009-06-14T17:50:03.060Z · LW(p) · GW(p)

I would like to build our successor race. Starting from scratch appeals to me greatly.

Dying doesn't appeal to me, hence the desire to build an FAI.

Replies from: asciilifeform
comment by asciilifeform · 2009-06-14T17:56:00.744Z · LW(p) · GW(p)

Dying is the default.

I maintain that there will be no FAI without a cobbled-together-ASAP (before petrocollapse) AGI.

Replies from: Roko, hrishimittal, Nick_Tarleton
comment by Roko · 2009-06-14T18:05:21.658Z · LW(p) · GW(p)

I maintain that there will be no FAI without a cobbled-together-ASAP (before petrocollapse) AGI.

but when do you think the petrocollapse is?

Personally, I don't think that the end of oil will be so bad; we have nuclear, wind, solar and other fossil fuels.

Also, look at the incentives: each country is individually incentivized to develop alternative energy sources.

Replies from: asciilifeform
comment by asciilifeform · 2009-06-14T22:50:21.703Z · LW(p) · GW(p)

we have nuclear, wind, solar and other fossil fuels

Petrocollapse is about more than simply energy. Much of modern industry relies on petrochemical feedstock. This includes the production and recycling of the storage batteries which wind/solar enthusiasts rely on. On top of that, do not forget modern agriculture's non-negotiable dependence on synthetic fertilizers.

Personally I think that the bulk of the coming civilization-demolishing chaos will stem from the inevitable cataclysmic warfare over the last remaining drops of oil, rather than from direct effects of the shortage itself.

Replies from: Roko
comment by Roko · 2009-06-14T23:30:33.062Z · LW(p) · GW(p)

You can synthesize petrol from water and CO2 given a large energy input. One way to do this is by first turning water into hydrogen, then heating the hydrogen and CO2 to make alkenes, etc. Chemists, please feel free to correct.
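For concreteness, a rough sketch of one standard route, as I understand it (corrections welcome): electrolysis of water, then the reverse water-gas shift, then Fischer-Tropsch synthesis of hydrocarbons, with most of the energy going into the electrolysis step.

$$2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2} \qquad \text{(electrolysis; energy in)}$$

$$\mathrm{CO_2} + \mathrm{H_2} \rightarrow \mathrm{CO} + \mathrm{H_2O} \qquad \text{(reverse water-gas shift)}$$

$$n\,\mathrm{CO} + (2n+1)\,\mathrm{H_2} \rightarrow \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O} \qquad \text{(Fischer-Tropsch; alkenes follow similarly)}$$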

But I repeat: when do you think the petrocalypse is? How soon? When you say ASAP for AGI, we need numbers.

Replies from: SilasBarta
comment by SilasBarta · 2009-06-14T23:40:44.865Z · LW(p) · GW(p)

Yes, the US military is extensively researching how to convert nuclear energy + atmospheric CO2 + water (all of which are in no short supply) into traditional fuel. New York Times article about it. The only thing holding it back from use is that it costs more than making the fuel from ordinary fossil fuels, but when you account for existing taxes in most countries, if this method weren't taxed while other taxes remained in place, "nuclear octane" would be cost-competitive.

Replies from: CronoDAS, Roko
comment by CronoDAS · 2009-06-15T05:58:04.048Z · LW(p) · GW(p)

Well, one way to convert nuclear energy into hydrocarbons is fairly common, if rather inefficient.

Replies from: SilasBarta
comment by SilasBarta · 2009-06-15T22:00:35.961Z · LW(p) · GW(p)

Well, one way to exploit the properties of air to fly is fairly common, if rather inefficient ;-)

Replies from: CronoDAS
comment by CronoDAS · 2009-06-15T22:05:32.150Z · LW(p) · GW(p)

Indeed. It's a hard resource to exploit, that one, but it has been done. ;)

It's harder to hitch a ride on a bird than it is to turn plants into car fuel, though. On a less silly note, the fact that so much fertilizer comes from petrochemicals and other non-renewable sources seriously limits the long-term potential of biofuels.

comment by Roko · 2009-06-15T15:08:09.259Z · LW(p) · GW(p)

But I repeat: when do you think the petrocalypse is? How soon? When you say ASAP for AGI, we need numbers.

Replies from: SilasBarta
comment by SilasBarta · 2009-06-15T17:12:49.880Z · LW(p) · GW(p)

I'm not asciilifeform and am not suggesting there will be a petrocalypse.

comment by hrishimittal · 2009-06-14T18:15:28.104Z · LW(p) · GW(p)

You make a lot of big claims in this thread. I'm interested in reading your detailed thoughts on these. Could you please point to some writings?

Replies from: asciilifeform
comment by asciilifeform · 2009-06-14T22:25:03.737Z · LW(p) · GW(p)

The intro section of my site (Part 1, Part 2) outlines some of my thoughts regarding Engelbartian intelligence amplification. For what I regard as persuasive arguments in favor of the imminence of petrocollapse, I recommend Dmitry Orlov's blog and dead-tree book.

As for my thoughts regarding AGI/FAI, I have not spoken publicly on the issue until yesterday, so there is little to read. My current view is that Friendly AI enthusiasts are doing the equivalent of inventing the circuit breaker before discovering electricity. Yudkowsky stresses the importance of "not letting go of the steering wheel" lest humanity veer off into the maw of a paperclip optimizer or similar calamity. My position is that Friendly AI enthusiasts have invented the steering wheel and are playing with it - "vroom, vroom" - without having invented the car.

The history of technology provides no examples of a safety system being developed entirely prior to the deployment of "unsafe" versions of the technology it was designed to work with. The entire idea seems arrogant and somewhat absurd to me.

I have been reading Yudkowsky since he first appeared on the Net in the 90's, and remain especially intrigued by his pre-2001 writings - the ones he has disavowed, which detail his theories regarding how one might actually construct an AGI. It saddens me that he is now a proponent of institutionalized caution regarding AI. I believe that the man's formidable talents are now going to waste. Caution and moderation lead us straight down the road of 15th century China. They give us OSHA and the modern-day FDA. We are currently aboard a rocket carrying us to pitiful oblivion rather than a glorious SF future. I, for one, want off.

Replies from: wuwei
comment by wuwei · 2009-06-14T23:18:19.581Z · LW(p) · GW(p)

You seem to think an FAI researcher is someone who does not engage in any AGI research. That would certainly be a rather foolish researcher.

Perhaps you are being fooled by the fact that a decent FAI researcher would tend not to publicly announce any advancements in AGI research.

Replies from: asciilifeform
comment by asciilifeform · 2009-06-14T23:23:06.225Z · LW(p) · GW(p)

the fact that a decent FAI researcher would tend not to publicly announce any advancements in AGI research

Science as priestcraft: a historic dead end, the Pythagoreans and the few genuine finds of the alchemists notwithstanding. I am astounded by the arrogance of people who consider themselves worthy of membership in such a secret club, believing themselves more qualified than "the rabble" to decide the fate of all mankind.

Replies from: Vladimir_Nesov, MugaSofer
comment by Vladimir_Nesov · 2009-06-14T23:41:04.522Z · LW(p) · GW(p)

This argument mixes up the question of the factual utilitarian efficiency of science, the claim of overconfidence in science's efficiency, and a moral judgment about breaking the egalitarian attitude based on said confidence in efficiency. Also, the argument is for some reason about science in general, and not just about the controversial claim regarding hypothetical FAI researchers.

comment by MugaSofer · 2013-04-25T13:28:43.355Z · LW(p) · GW(p)

Science as priestcraft: a historic dead end

Name three.

Not being rhetorical, genuinely curious here.

comment by Nick_Tarleton · 2009-06-14T18:07:49.862Z · LW(p) · GW(p)

i.e. you think we can use AGI without a Friendly goal system as a safe tool? If you found Value Is Fragile persuasive, as you say, I take it you then don't believe hard takeoff occurs easily?

comment by MugaSofer · 2013-04-25T13:18:25.206Z · LW(p) · GW(p)

That doesn't make guaranteed destruction any better. It just makes FAI harder, because the time limit is closer.

Also, excellent example with the "planetary IQ test" thing.

comment by cousin_it · 2009-06-14T10:44:33.615Z · LW(p) · GW(p)

As a Usenet discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-06-15T01:37:54.058Z · LW(p) · GW(p)

Huh. Looking back, it actually seems that I already wrote up the complete reply to this post and it is "Raised in Technophilia" (Sep 17 '08).

Replies from: whpearson
comment by whpearson · 2009-06-15T10:08:56.611Z · LW(p) · GW(p)

My position is: yes, technology can kill us all. But a lack of technologies can also get us all killed.

The ability to create intelligence is one we need in the long run. I don't think ultra-safe self-improving AI is a coherent concept, but I understand my thinking might be wrong. My problem is: how can we move on from following that path if it is a dead end?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-15T10:30:16.180Z · LW(p) · GW(p)

Here it is again: there is no requirement that FAI be "ultra-safe", or that anything less is unacceptable. That is a strawman. The requirement is that there be any chance at all that the outcome is good (preferably a greater chance). Then there is a separate conjecture: that to have any chance at all, AI needs to be deeply understood.

If you think that being careful is unnecessary, that an ad-hoc approach can be used to good effect, you are not disputing the need for Friendliness in AGI. You are disputing the conjecture that Friendliness requires care. That is not a normative question; it is a factual one.

The normative question is whether to think about the consequences of your actions at all, which far too many people who think they are working on AGI either decide against or dismiss as trivial.

Replies from: whpearson
comment by whpearson · 2009-06-15T11:08:19.876Z · LW(p) · GW(p)

I got the impression from "do the impossible" that Eliezer was going for definitely-safe AI, and that might-be-safe was not good enough. Edit: Oh, and the sequence on fun theory suggested that scenarios where humanity merely survived were not good enough either.

I think we are so far away from having the right intellectual framework for creating AI, or even for thinking about its likely impact on the future, that the ad-hoc approach might be valuable for pushing us in the right direction or telling us what the important structure in the human brain is going to look like.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-15T11:16:19.319Z · LW(p) · GW(p)

I got the impression from "do the impossible" that Eliezer was going for definitely-safe AI, and that might-be-safe was not good enough.

The hypothesis here is that if you are unsure whether AGI is safe, it's not, and when you are sure it is, it's still probably not. Therefore, to have any chance of success, you have to be sure that you understand how the success is achieved. This is a question of human bias, not of the actual probability of success. See also: Possibility, Antiprediction.

I also thought that ad-hoc brings insight, but after learning more I changed my mind.

Replies from: whpearson
comment by whpearson · 2009-06-15T11:51:45.716Z · LW(p) · GW(p)

The hypothesis here is that if you are unsure whether AGI is safe, it's not, and when you are sure it is, it's still probably not.

I really didn't get that impression... Why worry about whether the AI will separate humanity if you think it might fail anyway? Surely you'd spend more time making sure it doesn't fail...

comment by Mycroft65536 · 2009-06-14T06:32:56.191Z · LW(p) · GW(p)

"I do not fear [technology]. I fear the lack of them. " -Isaac Azimov

comment by Houshalter · 2015-07-10T06:31:50.687Z · LW(p) · GW(p)

I don't agree with your conclusion or the connection to AI research. But the segment about civilizations collapsing for unknown reasons is brilliant and well written, and really stands on its own.

comment by BenFRayfield · 2014-11-10T04:35:55.196Z · LW(p) · GW(p)

I wish people were more scared of the dangers that can't yet be measured, like the chance that a very large gamma-ray burst could hit Earth for a short time and then be aimed somewhere else. How do we know major extinctions in the past weren't related to unknown behaviors of spacetime from outside the region we can measure? Or maybe the "constants" in the wave equations of physics sometimes vary. Is it really a good deal to let individual businesses hold the pieces of this knowledge to themselves instead of putting all our knowledge together to figure out what's possible?

Replies from: Lumifer
comment by Lumifer · 2014-11-10T04:46:10.880Z · LW(p) · GW(p)

I wish people were more scared of the dangers that can't yet be measured

Why?

Replies from: BenFRayfield
comment by BenFRayfield · 2014-12-04T23:57:31.135Z · LW(p) · GW(p)

The map is not the territory, but most people are happy to take the lack of dangers on their map as evidence of the safety of the territory, so they don't update their maps.

Replies from: NxGenSentience, Lumifer
comment by NxGenSentience · 2014-12-05T09:30:49.274Z · LW(p) · GW(p)

It's nice to hear a quote from Wittgenstein. I hope we can get around to discussing its deeper meaning, which applies to all kinds of things... most especially the process by which each kind of creature (bats, fish, Homo sapiens, and potential embodied artifactual (n.1) minds -- and also minds not embodied in the contemporaneously most common sense of the term; Watson was not embodied in that sense) constructs its own ontology (or ought to, by virtue of being imbued with the right sort of architecture).

That latter sense, and the incommensurability of competing ontologies across competing creatures (where a 'creature' is defined as a hybrid, an N-tuple, of cultural legacy constructs, an endemic evolutionarily bequeathed physiological sensorium, and its individual autobiographical experience...) -- though not, in my view and in the theory I am developing, opaque to enlightened translatability, since the conceptual scaffolding for translation involves the nature, purpose, and boundaries, both logical and temporal, of the "specious present", the quantum Zeno effect, and other considerations, so it is more subtle than meets the eye -- is more of what Wittgenstein was thinking about, considering Kant's answer to skepticism, and lots of other issues.

Your more straightforward point has merit, however. Most of us have spent a good deal of our lives battling not the opacity of the issues so much as human opacity to new, expanded, revised, or unconventional ideas.

Note 1: By the way, I occasionally write 'artifactual' as opposed to 'artificial' because, as products of nature ourselves, everything we do -- including building AIs -- is ipso facto a product of nature, and hence 'artificial' is an adjective we should be careful about.

comment by Lumifer · 2014-12-05T05:05:27.926Z · LW(p) · GW(p)

most people are happy to take the lack of dangers on their map as evidence of the safety of the territory

I believe they are mostly correct in that. What other evidence should they consider?

so they don't update their maps

That's a non sequitur. There are strong natural selection forces against this kind of behaviour.

comment by NancyLebovitz · 2010-10-31T23:06:20.951Z · LW(p) · GW(p)

Actually, there is a science fiction story very similar to your opening section. I'm putting the author and title in rot13 because the story got much of its effect from my reading it under normal science fiction protocols. Surely humanity would eventually get into space -- but it doesn't, and dies out. Well, then aliens will manage -- but there aren't any.

On the other hand, that's just one story in a large field, and I think it's only been reprinted once.

Zhecul'f Unyy ol Cbhy Naqrefba.

comment by Mike Bishop (MichaelBishop) · 2009-06-14T19:40:54.858Z · LW(p) · GW(p)

I agree that groups/societies get stuck for fairly long periods of time, and that independence and competition between groups/societies is often beneficial. But I think stagnation is unlikely unless we end up with a totalitarian world government. See Bryan Caplan's essay.

comment by steven0461 · 2009-06-14T06:18:17.065Z · LW(p) · GW(p)

But the one reason above all others is that the window of opportunity we are currently given may be the last step in the Great Filter, that we cannot know when it will close or if it does, whether it will ever open again.

Rather than running harder toward this window of yours, we should take special care to check what floor it's on.

Replies from: Mycroft65536
comment by Mycroft65536 · 2009-06-14T06:34:04.071Z · LW(p) · GW(p)

You might be taking the metaphor too far.

Replies from: steven0461
comment by steven0461 · 2009-06-14T06:39:27.173Z · LW(p) · GW(p)

Not sure how else I should reply to something as vague as the original post. Not developing a specific technology might marginally increase risks of stagnation, but it might decrease risks from that technology, and depending on the technology in question, the latter effect might far outweigh the former. Without some reason why it wouldn't, there's no real case, just attempts at poetry.

Or if you need me to unpack grandparent, "it can be bad to worry too much about a window of opportunity possibly closing if there's a possibility that the most obvious way to take the opportunity leads to disaster, and if one may be able to find other, better opportunities instead".

comment by MugaSofer · 2013-04-25T13:04:10.604Z · LW(p) · GW(p)

This seems to apply more to the space program than to any field where progress is impeded by safety concerns, or where such impediments are advocated here. Yes, we should have Mars colonies by now. That's not a shocking revelation; it's a near-universal belief among nerdy types, including LWers. But since we don't, we need to minimize existential risks to life on Earth until we do, and we will always need to minimize existential risks capable of crossing interplanetary distances (i.e., uFAI, and maybe nanotech or memetic plagues, although we don't know enough to even know whether those are a danger).