Raised in Technophilia

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-09-17T02:06:26.000Z · LW · GW · Legacy · 33 comments

My father used to say that if the present system had been in place a hundred years ago, automobiles would have been outlawed to protect the saddle industry.

One of my major childhood influences was reading Jerry Pournelle's A Step Farther Out, at the age of nine.  It was Pournelle's reply to Paul Ehrlich and the Club of Rome, who were saying, in the 1960s and 1970s, that the Earth was running out of resources and massive famines were only years away.  It was a reply to Jeremy Rifkin's so-called fourth law of thermodynamics; it was a reply to all the people scared of nuclear power and trying to regulate it into oblivion.

I grew up in a world where the lines of demarcation between the Good Guys and the Bad Guys were pretty clear; not an apocalyptic final battle, but a battle that had to be fought over and over again, a battle where you could see the historical echoes going back to the Industrial Revolution, and where you could assemble the historical evidence about the actual outcomes.

On one side were the scientists and engineers who'd driven all the standard-of-living increases since the Dark Ages, whose work supported luxuries like democracy, an educated populace, a middle class, the outlawing of slavery.

On the other side, those who had once opposed smallpox vaccinations, anesthetics during childbirth, steam engines, and heliocentrism:  The theologians calling for a return to a perfect age that never existed, the elderly white male politicians set in their ways, the special interest groups who stood to lose, and the many to whom science was a closed book, fearing what they couldn't understand.

And trying to play the middle, the pretenders to Deep Wisdom, uttering cached thoughts about how technology benefits humanity but only when properly regulated—claiming in defiance of brute historical fact that science of itself was neither good nor evil—setting up solemn-looking bureaucratic committees to make an ostentatious display of their caution—and waiting for their applause.  As if the truth were always a compromise.  And as if anyone could really see that far ahead.  Would humanity have done better if there'd been a sincere, concerned, public debate on the adoption of fire, and committees set up to oversee its use?

When I entered into the problem, I started out allergized against anything that pattern-matched "Ah, but technology has risks as well as benefits, little one."  The presumption-of-guilt was that you were either trying to collect some cheap applause, or covertly trying to regulate the technology into oblivion.  And either way, ignoring a historical record that weighed immensely in favor of technologies people had once worried about.

Today, Robin Hanson raised the topic of slow FDA approval of drugs approved in other countries.  Someone in the comments pointed out that Thalidomide was sold in 50 countries under 40 names, but that only a small amount was given away in the US, so that there were 10,000 malformed children born globally, but only 17 children in the US.

But how many people have died because of slow approval in the US of drugs more quickly approved in other countries—all the drugs that didn't go wrong?  And I ask that question because it's what you can try to collect statistics about—this says nothing about all the drugs that were never developed because the approval process is too long and costly.  According to this source, the FDA's longer approval process prevents 5,000 casualties per year by screening off medications found to be harmful, and causes at least 20,000-120,000 casualties per year just by delaying approval of those beneficial medications that are still developed and eventually approved.

So there really is a reason to be allergic to people who go around saying, "Ah, but technology has risks as well as benefits".  There's a historical record showing over-conservativeness, the many silent deaths of regulation being outweighed by a few visible deaths of nonregulation.  If you're really playing the middle, why not say, "Ah, but technology has benefits as well as risks"?

Well, and this isn't such a bad description of the Bad Guys.  (Except that it ought to be emphasized a bit harder that these aren't evil mutants but standard human beings acting under a different worldview-gestalt that puts them in the right; some of them will inevitably be more competent than others, and competence counts for a lot.)  Even looking back, I don't think my childhood technophilia was too wrong about what constituted a Bad Guy and what was the key mistake.  But it's always a lot easier to say what not to do, than to get it right.  And one of my fundamental flaws, back then, was thinking that, if you tried as hard as you could to avoid everything the Bad Guys were doing, that made you a Good Guy.

Particularly damaging, I think, was the bad example set by the pretenders to Deep Wisdom trying to stake out a middle way; smiling condescendingly at technophiles and technophobes alike, and calling them both immature.  Truly this is a wrong way; and in fact, the notion of trying to stake out a middle way generally, is usually wrong; the Right Way is not a compromise with anything, it is the clean manifestation of its own criteria.

But that made it more difficult for the young Eliezer to depart from the charge-straight-ahead verdict, because any departure felt like joining the pretenders to Deep Wisdom.

The first crack in my childhood technophilia appeared in, I think, 1997 or 1998, at the point where I noticed my fellow technophiles saying foolish things about how molecular nanotechnology would be an easy problem to manage.  (As you may be noticing yet again, the young Eliezer was driven to a tremendous extent by his ability to find flaws—I even had a personal philosophy of why that sort of thing was a good idea.)

The nanotech stuff would be a separate post, and maybe one that should go on a different blog.  But there was a debate going on about molecular nanotechnology, and whether offense would be asymmetrically easier than defense.  And there were people arguing that defense would be easy.  In the domain of nanotech, for Ghu's sake, programmable matter, when we can't even seem to get the security problem solved for computer networks where we can observe and control every one and zero.  People were talking about unassailable diamondoid walls.  I observed that diamond doesn't stand off a nuclear weapon, that offense has had defense beat since 1945 and nanotech didn't look likely to change that.

And by the time that debate was over, it seems that the young Eliezer—caught up in the heat of argument—had managed to notice, for the first time, that the survival of Earth-originating intelligent life stood at risk.

It seems so strange, looking back, to think that there was a time when I thought that only individual lives were at stake in the future.  What a profoundly friendlier world that was to live in... though it's not as if I were thinking that at the time.  I didn't reject the possibility so much as manage to never see it in the first place.  Once the topic actually came up, I saw it.  I don't really remember how that trick worked.  There's a reason why I refer to my past self in the third person.

It may sound like Eliezer1998 was a complete idiot, but that would be a comfortable out, in a way; the truth is scarier.  Eliezer1998 was a sharp Traditional Rationalist, as such things went.  I knew hypotheses had to be testable, I knew that rationalization was not a permitted mental operation, I knew how to play Rationalist's Taboo, I was obsessed with self-awareness... I didn't quite understand the concept of "mysterious answers"... and no Bayes or Kahneman at all.  But a sharp Traditional Rationalist, far above average...  So what?  Nature isn't grading us on a curve.  One step of departure from the Way, one shove of undue influence on your thought processes, can repeal all other protections.

One of the chief lessons I derive from looking back at my personal history is that it's no wonder that, out there in the real world, a lot of people think that "intelligence isn't everything", or that rationalists don't do better in real life.  A little rationality, or even a lot of rationality, doesn't pass the astronomically high barrier required for things to actually start working.

Let not my misinterpretation of the Right Way be blamed on Jerry Pournelle, my father, or science fiction generally.  I think the young Eliezer's personality imposed quite a bit of selectivity on which parts of their teachings made it through.  It's not as if Pournelle didn't say:  The rules change once you leave Earth, the cradle; if you're careless sealing your pressure suit just once, you die.  He said it quite a bit.  But the words didn't really seem important, because that was something that happened to third-party characters in the novels—the main character didn't usually die halfway through, for some reason.

What was the lens through which I filtered these teachings?  Hope. Optimism.  Looking forward to a brighter future.  That was the fundamental meaning of A Step Farther Out unto me, the lesson I took in contrast to the Sierra Club's doom-and-gloom.  On one side were rationality and hope; on the other, ignorance and despair.

Some teenagers think they're immortal and ride motorcycles.  I was under no such illusion and quite reluctant to learn to drive, considering how unsafe those hurtling hunks of metal looked.  But there was something more important to me than my own life:  The Future.  And I acted as if that was immortal.  Lives could be lost, but not the Future.

And when I noticed that nanotechnology really was going to be a potentially extinction-level challenge?

The young Eliezer thought, explicitly, "Good heavens, how did I fail to notice this thing that should have been obvious?  I must have been too emotionally attached to the benefits I expected from the technology; I must have flinched away from the thought of human extinction."

And then...

I didn't declare a Halt, Melt, and Catch Fire.  I didn't rethink all the conclusions that I'd developed with my prior attitude.  I just managed to integrate it into my worldview, somehow, with a minimum of propagated changes.  Old ideas and plans were challenged, but my mind found reasons to keep them.  There was no systemic breakdown, unfortunately.

Most notably, I decided that we had to run full steam ahead on AI, so as to develop it before nanotechnology.  Just like I'd been originally planning to do, but now, with a different reason.

I guess that's what most human beings are like, isn't it?  Traditional Rationality wasn't enough to change that.

But there did come a time when I fully realized my mistake.  It just took a stronger boot to the head.  To be continued.

33 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Carl_Shulman · 2008-09-17T04:57:51.000Z · LW(p) · GW(p)

What did you think about engineered plagues, use of nuclear weapons to induce extreme climate change, and robotic weapons advanced enough to kill off humanity but too limited to carry on civilization themselves?

comment by Phil_Goetz4 · 2008-09-17T05:22:08.000Z · LW(p) · GW(p)

Carl: None of those would (given our better understanding) be as bad as great plagues that humanity has lived through before.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-09-17T06:04:55.000Z · LW(p) · GW(p)

I noticed engineered plagues after noticing nanotech. Neither nuclear weapons nor automated robotic weapons struck me as probable total extinction events.

comment by Tim_Tyler · 2008-09-17T07:48:44.000Z · LW(p) · GW(p)

Nanotechnology will rather obviously wipe out existing protein-DNA organisms - by replacing them with something much better.

However, ending life or civilization doesn't look at all likely. It wouldn't be permitted by those in charge. The whole "oops-apocalypse" scenario seems implausible to me - our descendants simply won't be so stupid and incompetent as to fumble on that scale.

Replies from: wizzwizz4
comment by wizzwizz4 · 2020-03-27T17:30:46.304Z · LW(p) · GW(p)

Why something better? Why not just grey goo?

comment by billswift · 2008-09-17T09:05:34.000Z · LW(p) · GW(p)

In the late 1990s I figured roughly even odds of a doomsday catastrophe with nanotech. A mistake with a weapon seems much more likely than a gray-goo accident, though. I also think that the risk goes up with the asymmetry of capability in nano; that is, the closer to a monopoly on nano that exists, the more likely a doomsday scenario becomes. Multiple strands of development both act as a deterrent to would-be abusers and provide at least some hope of combating an actual release.

comment by Brandon_Reinhart · 2008-09-17T09:06:11.000Z · LW(p) · GW(p)

Tim: Eh, you make a big assumption that our descendants will be the ones to play with the dangerous stuff and that they will be more intelligent for some reason. That seems to acknowledge the intelligence / nanotech race condition that is of so much concern to singularitarians.

comment by Brian_Jaress2 · 2008-09-17T09:06:20.000Z · LW(p) · GW(p)

When I read these stories you tell about your past thoughts, I'm struck by how different your experiences with ideas were. Things you found obvious seem subtle to me. Things you discovered with a feeling of revelation seem pedestrian. Things you dismissed wholesale and now borrow a smidgen of seem like they've always been a significant part of my life.

Take, for example, the subject of this post: technological risks. I never really thought of "technology" as a single thing, to be judged good or bad as a whole, until after I had heard a great deal about particular cases, some good and some bad.

When I did encounter that question, it seemed clear that it was good because the sum total of our technology had greatly improved the life of the average person. It also seemed clear that this did not make every specific technology good.

I don't know about total extinction, but there was a period ending around the time I was born (I think we're about the same age) when people thought that they, their families, and their friends could very well be killed in a nuclear war. I remember someone telling me that he started saving for retirement when the Berlin Wall fell.

With that in mind, I wonder about the influence of our experiences with ideas. If two people agree that technology is good overall but specific technologies can be bad, will they tend to apply that idea differently if one was taught it as a child and the other discovered it in a flash of insight as an adult? That might be one reason I tend to agree with the principles you lay out but not the conclusions you reach.

comment by billswift · 2008-09-17T09:11:41.000Z · LW(p) · GW(p)

Drexler was worried about just those sorts of problems, so he put off writing up his ideas until he realized that developments in multiple fields were heading in the direction of nanotech without any realistic criticism; that's when he wrote "Engines of Creation". He also made the point that there is no really practical way of preventing the development of molecular nanotech; there are too many reasons for developments leading in that direction. If one nation outlaws it, or regulates it too heavily, it will just be developed elsewhere, maybe even underground, since advancing technology is making it easier and cheaper to do small-scale R&D.

Replies from: elityre
comment by Eli Tyre (elityre) · 2019-07-08T00:06:28.555Z · LW(p) · GW(p)

Anyone have a citation for Drexler's motivations?

comment by Tim_Tyler · 2008-09-17T10:57:19.000Z · LW(p) · GW(p)

Re: you make a big assumption that our descendants will be the ones to play with the dangerous stuff and that they will be more intelligent for some reason.

I doubt it. You are probably misinterpreting what I mean by "our" or "descendants". Future living organisms will be descended from existing ones - that's about all I mean.

Re: That seems to acknowledge the intelligence / nanotech race condition that is of so much concern to singularitarians.

I figure we will have AI before we have much in the way of nanotechnology - if that's what you mean.

Building minds is much easier than building bodies. For one thing, you only need a tiny number of component types for a mind.

However, rather obviously the technologies will feed off each other - mutually accelerating each other's development.

comment by Carl_Shulman · 2008-09-17T11:55:15.000Z · LW(p) · GW(p)

"I noticed engineered plagues after noticing nanotech. Neither nuclear weapons nor automated robotic weapons struck me as probable total extinction events." What was the probability threshold below which extinction and astronomical waste concerns no longer drew attention?

comment by Zubon · 2008-09-17T12:17:25.000Z · LW(p) · GW(p)

The whole "oops-apocalypse" scenario seems implausible to me - our descendants simply won't be so stupid and incompetent as to fumble on that scale.

"if you're careless sealing your pressure suit just once, you die." We have come very close to fumbling on that scale already. Petrov Day is next week.

comment by Tim_Tyler · 2008-09-17T12:38:28.000Z · LW(p) · GW(p)

Ah yes, the sunlight reflected off clouds end-of-civilisation scenario. Forgive me for implicitly not giving that more weight.

comment by steven · 2008-09-17T13:11:33.000Z · LW(p) · GW(p)

One disturbing thing about the Petrov issue that I don't think anyone mentioned last time, is that by praising nuclear non-retaliators we could be making future nuclear attacks more likely by undermining MAD.

Replies from: PetjaY
comment by PetjaY · 2016-11-20T19:29:49.211Z · LW(p) · GW(p)

Petrov wasn't (probably) a non-retaliator; he just wanted to be more sure there was something to retaliate against. That is something we want to praise.

comment by steven · 2008-09-17T13:17:05.000Z · LW(p) · GW(p)

If groups with MNT have first-strike capability, then you'd expect the winners of WW3 to remain standing at least. I'm not sure how much of a consolation that is.

comment by Thom_Blake · 2008-09-17T14:02:17.000Z · LW(p) · GW(p)

Several places in the US did have regulations protecting the horse industry from the early automobile industry - I'm not sure what "the present system" refers to as opposed to that sort of thing.

comment by Cyan2 · 2008-09-17T15:49:21.000Z · LW(p) · GW(p)

One disturbing thing about the Petrov issue that I don't think anyone mentioned last time, is that by praising nuclear non-retaliators we could be making future nuclear attacks more likely by undermining MAD.

Petrov isn't praised for being a non-retaliator. He's praised for doing good probable inference -- specifically, for recognizing that the detection of only 5 missiles pointed to malfunction, not to a U.S. first strike, and that a "retaliatory" strike would initiate a nuclear war. I'd bet counterfactually that Petrov would have retaliated if the malfunction had caused the spurious detection of a U.S. first strike with the expected hundreds of missiles.

comment by Nominull3 · 2008-09-17T16:13:16.000Z · LW(p) · GW(p)

"If you're careless sealing your pressure suit just once, you die" to me seems to imply that proper pressure suit design involves making it very difficult to seal carelessly.

comment by Lara_Foster2 · 2008-09-17T16:45:14.000Z · LW(p) · GW(p)

I understand that there are many ways in which nanotechnology could be dangerous, even to the point of posing extinction risks, but I do not understand why these risks seem inevitable. I would find it much more likely that humanity will invent some nanotech device that gets out of hand, poisons a water supply, kills several thousand people, and needs to be contained/quarantined, leading to massive nano-tech development regulation, rather than a nano-tech mistake that immediately depressurizes the whole space suit, is impossible to contain, and kills us all.

A recursively improving, superintelligent AI, on the other hand, seems much more likely to fuck us over, especially if we're convinced it's acting in our best interest for the beginning of its 'life,' and problems only become obvious after it's already become far more 'intelligent' than we are.

comment by Zubon · 2008-09-17T18:43:37.000Z · LW(p) · GW(p)

Lara Foster, to get what people are worried about, extrapolate the danger of recursive self-improving intelligence to self-reproducing nanotechnology. We want what it can provide, we spread nanomachines, and from there you can calculate how many doublings would be necessary to convert all the molecules on the surface of the planet to nano-assemblers. Ten doublings is a factor of 1024, so we probably would not realize how over-powered we were until far too late.

As you say, this is not the most likely extinction event. Losing Eurasia and Africa to a sign error would be a bad thing, but not a full extinction event. The downside of being a nanomachine is that trans-Atlantic swimming is hard with 2nm-long legs.

But if a nano-assembler can reproduce itself in 6 minutes, you have one thousand in an hour, one million the next hour, one billion the next hour... not a lot of time for regulation.
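A minimal back-of-the-envelope sketch of that doubling arithmetic, in Python, assuming the hypothetical 6-minute replication time above and a single starting assembler:

    # Illustrative only: exponential growth of self-replicating assemblers,
    # assuming a 6-minute doubling time (the figure used in the comment above).
    doubling_time_minutes = 6
    doublings_per_hour = 60 // doubling_time_minutes  # 10 doublings per hour

    count = 1  # start from a single hypothetical assembler
    for hour in range(1, 4):
        count *= 2 ** doublings_per_hour  # x1024 per hour, roughly "x1000"
        print(f"after hour {hour}: ~{count:,} assemblers")

    # after hour 1: ~1,024
    # after hour 2: ~1,048,576
    # after hour 3: ~1,073,741,824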

The only one who can react to a problem that big in that timespan is that recursively self-improving AI we have been keeping in the box over there. Guess it's time to let it out. (Say, who is responsible for that first nano-assembler anyway?)

comment by Nick_Tarleton · 2008-09-17T20:09:57.000Z · LW(p) · GW(p)

I find rapidly self-replicating manufacturing (probably, but maybe not necessarily, MNT) leading to genocidal conventional and nuclear war on a previously impossible scale, much more likely than any use or accidental outbreak of replicators in the field.

comment by Tim_Tyler · 2008-09-17T20:39:42.000Z · LW(p) · GW(p)

Note that the non-nucleic replicators are already on the loose - they are commonly known as memes.

comment by Alex_Montgomery · 2008-09-17T20:55:27.000Z · LW(p) · GW(p)

Nick: Why?

comment by Lara_Foster3 · 2008-09-17T21:00:01.000Z · LW(p) · GW(p)

Zubon, your model assumes that these 'nano-assemblers' will be able to reproduce themselves using any nearby molecules and not some specific kind of molecule/substance. It would seem obviously unwise to invent something that could eat away any matter you put near it for the sake of self-reproduction. Why would we ever design such a thing? Even Kurt Vonnegut's hypothetical Ice-Nine could only crystallize water, and only at certain temperatures; creating something that essentially crystallizes EVERYTHING does not seem trivial, easy, or advisable to anyone. Maybe you should be clamouring for regulation of who can use nano-scale design technology so madmen don't do this to deliberately destroy everything. Maybe this should be a top national-security issue. Heck, maybe it IS a top national security issue and you just don't know it. Changing security opinions still seems safer and easier than initiating a self-recursively improving general AI.

The scenario you propose is, as I understand it, "Grey Goo," and I was under the impression that this was not considered a primary extinction risk (though I could be wrong there).

comment by Will_Pearson · 2008-09-17T21:18:02.000Z · LW(p) · GW(p)

I find Freitas one of the best writers about the various goos. See this article for example.

comment by Zubon · 2008-09-18T12:09:49.000Z · LW(p) · GW(p)

Lara Foster, since you agree on the important points, that argument seems resolved. On the materials question, please note the Freitas article cited, particularly that many nanotech plans involve using carbon. As a currently carbon-based lifeform, I am more concerned about those molecules than any others.

comment by Tim_Tyler · 2008-09-18T20:19:16.000Z · LW(p) · GW(p)

Re: It would seem obviously unwise to invent something that could eat away any matter you put near it for the sake of self-reproduction.

Like a bacterium that could digest anything? It would be a neat trick. What happens if it meets another creature of its own type?

Note that the behaviour of replicators is not terribly different from the way AIs tend to slurp up space/time and mass/energy and convert them into utility.

comment by FrancesH · 2011-03-13T02:13:18.378Z · LW(p) · GW(p)

I suspect that people raised with the idea of global warming have an advantage in knowing that the human race might well one day die out, that it is not necessarily immortal.

On the other hand, perhaps not. I remember learning about global warming. I don't remember the specific details of what I learned, or even if it was at all accurate, but I do remember learning about it. And I thought something along the lines of, "There's a fair chance everyone's going to die if we don't all do something about this."

And I looked around.

And even the people I knew who believed in global warming--which, considering my social circles, consisted of pretty much everyone--seemed not to really see this. Even the ones who learned the exact same things I did, from the exact same place (that is to say, school) just seemed to assume that everything would, by necessity, just turn out all right.

After a while of this, I just gave up.

comment by wumpus · 2011-06-06T19:36:04.677Z · LW(p) · GW(p)

Just couldn't let this bit go without comment:

"According to this source, the FDA's longer approval process prevents 5,000 casualties per year by screening off medications found to be harmful, and causes at least 20,000-120,000 casualties per year just by delaying approval of those beneficial medications that are still developed and eventually approved."

I haven't examined the source or the methodology by which they can come up with these numbers, but it seems to me that an entire category is missing: the number of 'casualties' per year prevented by having a regulatory process at all. How many quacks and scam artists don't bother to bring snake oil medications to market because they know they can't possibly make it through the regulatory process?

Without the regulatory process, how does the average patient/consumer (or doctor/administrator) tell what is effective and what isn't and what sort of side effects things have, etc? (And this is hardly just a problem of medications, either. I need to put my money into some sort of retirement account - how do I know who is telling the truth about their products without becoming an accountant/broker/economist myself?) It's not that technologies are good or bad - it's that people are good and/or bad.

I suppose that with the above statement I've fallen into your 'Deep Wisdom' bogeyman category, but that's another thing I don't understand about your reasoning: you note that technology does, in fact, have both risks and benefits. And then you assert that the proper stance towards anyone who notes such a fact is mistrust? Should people lie about either the risks or the benefits instead? Would that make them more worthy of trust? Really, if the scientists/engineers aren't telling you about the risks of their technologies, then they shouldn't be calling themselves scientists.

comment by Idan Arye · 2020-10-21T11:51:29.391Z · LW(p) · GW(p)

My father used to say that if the present system had been in place a hundred years ago, automobiles would have been outlawed to protect the saddle industry.

Maybe not outright outlawed, but automobiles used to be regulated to the point of uselessness: https://en.wikipedia.org/wiki/Red_flag_traffic_laws

comment by Emiya (andrea-mulazzani) · 2020-12-09T11:43:43.638Z · LW(p) · GW(p)

It was Pournelle's reply to Paul Ehrlich and the Club of Rome, who were saying, in the 1960s and 1970s, that the Earth was running out of resources and massive famines were only years away.  It was a reply to Jeremy Rifkin's so-called fourth law of thermodynamics; it was a reply to all the people scared of nuclear power and trying to regulate it into oblivion.

The Club of Rome report talked about disasters decades away; specifically, in the first decades of the twenty-first century. It was dismissed as unreliable because there was a petrol crisis in the year following its publication, and when that was resolved people misrepresented its content as if it had been talking of a disaster only years away.

That was the fundamental meaning of A Step Farther Out unto me, the lesson I took in contrast to the Sierra Club's doom-and-gloom.  On one side was rationality and hope, the other, ignorance and despair

Given the current situation, and what science is saying about the current state of the planet, it seems to me that they got things amazingly right.

 

But how many people have died because of the slow approval in the US, of drugs more quickly approved in other countries—all the drugs that didn't go wrong?  And I ask that question because it's what you can try to collect statistics about—this says nothing about all the drugs that were never developed because the approval process is too long and costly.  According to this source, the FDA's longer approval process prevents 5,000 casualties per year by screening off medications found to be harmful, and causes at least 20,000-120,000 casualties per year just by delaying approval of those beneficial medications that are still developed and eventually approved.

It's a huge mistake to generalise the cost/benefits of regulation regarding medicine to technology as a whole.

So there really is a reason to be allergic to people who go around saying, "Ah, but technology has risks as well as benefits".  There's a historical record showing over-conservativeness, the many silent deaths of regulation being outweighed by a few visible deaths of nonregulation.  If you're really playing the middle, why not say, "Ah, but technology has benefits as well as risks"?

The historical record can't possibly take into consideration the rising destructive potential of technology and the abysmal conditions of life we started in. Worst case, if you allowed an unsafe steam engine in the 1800s, it could blow up, start a fire, and kill dozens of people.

 

I feel the reasoning on the costs and benefits of regulation and industrialisation is still really shallow compared to everything else in the sequences. The risks coming from regular technology aren't even close to an extinction-level threat, but they are pretty real, and there's a lot of damage that could be cut down without any drawback.