Individuals angry with humanity as a possible existential risk?
post by InquilineKea · 2010-11-29T09:13:54.335Z · LW · GW · Legacy · 36 comments
Comments sorted by top scores.
comment by Emile · 2010-11-29T09:40:37.668Z · LW(p) · GW(p)
This is not an argument against technology - I'm a transhumanist after all, and I completely embrace technological developments.
If technology brings more harm than good, we should want to believe that technology does more harm than good - group affiliation is a very bad guide for epistemic rationality.
Replies from: FAWS, Vladimir_Nesov, anon47, red75
↑ comment by FAWS · 2010-11-29T10:32:41.328Z · LW(p) · GW(p)
The parent in no way deserved to be voted down, and the fact that it was looks like a bad sign about the health of this community to me. Note that believing that technology does more harm than good does not equal advocating unfeasible or counterproductive countermeasures.
Replies from: Emile
↑ comment by Emile · 2010-11-29T12:45:37.578Z · LW(p) · GW(p)
I didn't downvote the parent (and it seems to be back to 0 now). Short-term karma can fluctuate quite a bit.
Note that believing that technology does more harm than good does not equal advocating unfeasible or counterproductive countermeasures.
Agreed. I was just reacting to something that could be read as implying that group affiliation carries as much or more weight than arguments.
To my mind, Inquiline's phrase sounds a bit like something you sometimes hear among Christians, "if Evolution is true, then Christianity is wrong", which is used as an argument from one Christian to another to reject evolution.
Replies from: FAWS
↑ comment by FAWS · 2010-11-29T15:04:25.709Z · LW(p) · GW(p)
I didn't downvote the parent (and it seems to be back to 0 now). Short-term karma can fluctuate quite a bit.
I was referring to your comment being voted down. The funny thing is I originally wrote "this comment" and edited to "the parent" to avoid ambiguity.
Replies from: Emile
↑ comment by Vladimir_Nesov · 2010-11-29T15:38:05.810Z · LW(p) · GW(p)
Downvoted the post specifically for making this glaring error. I hope the author will engage this question.
Edit in response to downvoting of this comment: What?
Replies from: Perplexed
↑ comment by Perplexed · 2010-11-29T16:15:01.671Z · LW(p) · GW(p)
I am the second downvote.
You hope the author will engage the question how? By abjectly apologizing? By disagreeing? If a simple response of "Good point, thanks" would be sufficient, then what was the point of your comment?
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2010-11-29T16:23:48.295Z · LW(p) · GW(p)
If a simple response of "Good point, thanks" would be sufficient, then what was the point of your comment?
It's a big first step to actually make that "simple response". It's even more important to recognize the problem if you are not inclined to agree.
Replies from: Normal_Anomaly
↑ comment by Normal_Anomaly · 2010-11-30T02:49:23.677Z · LW(p) · GW(p)
Vladimir: I upvoted your comment, because I didn't think it was that bad. Applying the principle of charity to the OP: maybe they meant, "I don't think this is enough of a threat to make technology a net negative, so it isn't meant as a knockdown argument against transhumanism."
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2010-11-30T05:34:09.244Z · LW(p) · GW(p)
"Principle of charity" conflicts with principle of Tarski.
Replies from: Normal_Anomaly
↑ comment by Normal_Anomaly · 2010-11-30T14:08:23.373Z · LW(p) · GW(p)
I'm not sure what you mean here. I was proposing an alternate interpretation of the OP's phrasing. I'm not sure what they actually meant. I agree that if they were making a mistake I want to believe they were making a mistake. If technology is bad, I want to believe that too. Can you clarify what you think is the specific problem?
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2010-12-18T23:14:40.374Z · LW(p) · GW(p)
I'm not sure what they actually meant. I agree that if they were making a mistake I want to believe they were making a mistake.
This was my point. There is no power in the "principle of charity", since it ought not to shift your level of belief about whether the author intended the correct meaning or the incorrect one.
↑ comment by anon47 · 2010-12-01T07:38:16.714Z · LW(p) · GW(p)
You seem to be taking a statement of the form (to my reading):
"X appears to imply Y, but it doesn't (assertion). In fact, Y is false (separate assertion)."
and reading it as:
"X appears to imply Y, but I'm a Y-disbeliever (premise). Therefore, Y is false (inference from premise)."
Basically, it seems like you're reading "I'm a transhumanist" as a statement about InquilineKea from which they fallaciously draw a conclusion about reality, while I'm reading it as a disguised direct statement about reality, semantically equivalent to "pursuing the right technologies has positive expected value" (or whatever).
A more charitable interpretation of your post is that you're arguing against belief-as-identity in general, and using "I'm a transhumanist" as an example of it, but if so that's not clear to me.
Replies from: Emile
comment by David_Gerard · 2010-11-29T16:06:08.206Z · LW(p) · GW(p)
Is a sociopathic intelligent individual deliberately doing humanity harm a greater risk than a reasonable and sincere intelligent individual making a terrible mistake, or an organisation of reasonable and sincere intelligent individuals making a terrible mistake? The population of the last two groups is much larger than that of the first group.
Replies from: timtyler
↑ comment by timtyler · 2010-11-29T17:42:02.972Z · LW(p) · GW(p)
Mistakes are small but numerous - e.g. car accidents.
Evil individuals are rare, but are sometimes highly destructive - e.g. Hitler, Stalin, Mao.
Humanity as a whole probably has more to fear from the latter category.
Replies from: nerzhin, Daniel_Burfoot
↑ comment by nerzhin · 2010-11-30T02:25:20.117Z · LW(p) · GW(p)
Hitler, Stalin, and Mao weren't just evil individuals. Somehow they were connected to a structure, a society, that enabled the evil.
Replies from: David_Gerard
↑ comment by David_Gerard · 2010-11-30T13:08:44.776Z · LW(p) · GW(p)
Don't forget the power of sincerity combined with stupidity. Hitler was ridiculously incompetent - e.g., setting his organisations at each other's throats in wartime? - and World War II only went as well as it did for him because he had excellent generals. Mao was a successful revolutionary, an inspiring leader and relentlessly terrible at actually running a country - his successors carefully backed out of most of his ideas even while maintaining his personality cult. Stalin was, I suggest, less existentially dangerous because he cared about maintaining power more than about perpetuating an ideology per se.
The danger Tim describes is one of stupid politicians with reasonable power bases doing dangerous things with great sincerity - not a wish to burn everything down.
↑ comment by Daniel_Burfoot · 2010-11-30T22:48:42.242Z · LW(p) · GW(p)
Evil individuals are rare, but are sometimes highly destructive - e.g. Hitler, Stalin, Mao.
This suggests a kind of Black Swan effect: truly evil people are rare, but their impact is disproportionately large.
This can cause a subtle form of bias. Most people never meet an evil person (or don't realize it if they do) so it is hard for them to truly understand or visualize what evil is. They might believe in evil in some abstract sense, but it remains a theoretical concept detached from any personal experience, like black holes or the ozone layer.
comment by HonoreDB · 2010-11-29T16:54:53.667Z · LW(p) · GW(p)
James Halperin's The Truth Machine long ago converted me to the idea that the best way to deal with this is to abandon privacy and the right to privacy as a societal ideal, and hope that our ability to thwart terrorists races their increase in power. Even an opt-in total surveillance system would help a lot by reducing the number of suspects.
I should probably make the case against privacy in a top-level post at some point, but pretty much everything I'll say will be taken from that book. For example, I bet Amanda Knox and Raffaele Sollecito are currently cursing the fact that they don't have a government-timestamped video of themselves at the time of Meredith Kercher's murder.
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2010-11-30T00:40:37.254Z · LW(p) · GW(p)
On the other hand, the recent policies of the American Transportation Security Administration demonstrate how easy it is to implement policies that infringe on privacy without getting any corresponding reduction in risk.
comment by Scott Alexander (Yvain) · 2010-11-29T16:24:37.751Z · LW(p) · GW(p)
I think the standard community answer to this question is "Have FAI before then."
Replies from: magfrump
comment by NancyLebovitz · 2010-11-29T14:05:41.437Z · LW(p) · GW(p)
I've thought of this from the angle of the Fermi paradox. Afaik, Fermi thought war was a major filter. Spam is a minor indicator that individual sociopathy could be another filter as individual power increases. How far are we from home build-a-virus kits?
The major hope [1] I can see is that any of the nano or bio tech which could be used to destroy the human race will have a run-up period, and there will be nano and bio immune systems which might be good enough that the human race won't be at risk, even though there may be large disasters.
[1] Computer programs seem much more able to self-optimize than nano and bio systems. Except that, of course, a self-optimizing AI would use nano and bio methods if they seemed appropriate.
This is not a cheering thought. I think the only reasonably popular ideology which poses a major risk is the "humanity is a cancer on the planet" sort of environmentalism-- it seems plausible that a merely pretty good self-optimizing AI tasked with eliminating the human race for the sake of other living creatures would be a lot easier to build than an FAI, and it might be possible to pull together a group of people to work on it.
Replies from: Strange7, Eugine_Nier
↑ comment by Strange7 · 2010-11-29T14:36:08.516Z · LW(p) · GW(p)
"Planet-cancer" environmentalists don't own server farms or make major breakthroughs in computer science, unless they're several standard deviations above the norm in both logistical competence and hypocrisy. Accordingly, they'd be working with techniques someone else developed. It's true that a general FAI would be harder to design than even a specific UFAI, but an AI with a goal along the lines of 'restore earth to it's pre-Humanity state and then prevent humans from arising, without otherwise disrupting the glorious purity of Nature' probably isn't easier to design than an anti-UFAI with the goal 'identify other AIs that are trying to kill us all and destroy everything we stand for, then prevent them from doing so, minimizing collateral damage while you do so,' while the latter would have more widespread support and therefore more resources available for it's development.
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2010-11-29T21:43:42.304Z · LW(p) · GW(p)
You're adding constraints to the "humanity is a cancer" project which make it a lot harder. Why not settle for "wipe out humanity in a way that doesn't cause much damage and let the planet heal itself"?
The idea of an anti-UFAI is intriguing. I'm not sure it's much easier to design than an FAI.
I think the major barrier to the development of a "wipe out humans" UFAI is that the work would have to be done in secret.
Replies from: Baughn
↑ comment by Baughn · 2010-11-29T23:07:55.914Z · LW(p) · GW(p)
It seems to me that an anti-UFAI that does not also prevent the creation of FAIs would, necessarily, be just as hard to make as an FAI. Identifying an FAI without having a sufficiently good model of what one is that you could make one seems implausible.
Am I wrong?
Replies from: NancyLebovitz, Strange7
↑ comment by NancyLebovitz · 2010-11-30T00:13:12.930Z · LW(p) · GW(p)
You're at least plausible.
↑ comment by Strange7 · 2010-12-02T01:10:04.988Z · LW(p) · GW(p)
An anti-UFAI could have terms like "minimal collateral damage" in its motivation that would cause it to prioritize stopping faster or more destructive AIs over slower or friendlier ones, voluntarily limit its own growth, accept ongoing human supervision, and cleanly self-destruct under appropriate circumstances.
An FAI is expected to make the world better, not just keep it from getting worse, and as such would need to be trusted with far more autonomy and long-term stability.
↑ comment by Eugine_Nier · 2010-11-29T19:43:18.515Z · LW(p) · GW(p)
I'd also be worried about:
depressed microbiologists
religious fanatics who have too much trust that 'God will protect them' from their virus
Buddhists who lose their memetic immune system and start taking the "material existence is inherently undesirable" aspect of their religion seriously, or for that matter a practitioner of an Abrahamic religion who takes the idea of heaven seriously.
↑ comment by NancyLebovitz · 2010-11-29T21:45:58.000Z · LW(p) · GW(p)
Buddhists don't seem to go bad that way. I'm not sure that "material existence is undesirable" is a fair description of the religion-- what people seem to conclude from meditation is that most of what they thought they were experiencing is an illusion.
comment by Nic_Smith · 2010-12-01T02:53:20.077Z · LW(p) · GW(p)
"At the moment, you still need to be a fairly well informed terrorist in order to do any serious damage. But what happens when any disgruntled Induhvidual can build a weapon of mass destruction by ordering the parts through magazines?" - Scott Adams, The Dilbert Future, 1997
Thirteen years on, I don't think there's a good answer to that question yet.
comment by Mitchell_Porter · 2011-02-28T10:29:18.373Z · LW(p) · GW(p)
In seeking to prevent such outcomes, you should focus much more on the technology than on the psychology, because the technology is the essential ingredient in these end-of-the-world scenarios and the specific psychology you describe is not an essential ingredient. Suppose there is a type of nanoreplicator which could destroy all life on Earth. Yes, it might be created and released by a suitably empowered angry person; but it might also be released for some other reason, or even just as an accident.
Sometimes this scenario comes up because someone has been imagining a world where everyone has their own desktop nanofactory, and then they suddenly think, what about the sociopaths? If anyone can make anything, that means anyone can make a WMD, which means a small minority will make and use WMDs - etc. But this just means that the scenario of "first everyone gets a nanofactory, then we worry about someone choosing to end the world" is never going to happen. The possibility of human extinction has been part of the nanotech concept from almost the beginning. This was diluted once you had people hoping to get rich by marketing nanotech, and further still once "nanotech" just became a sexy new name for "chemistry", but the feeling of peril has always hovered over the specific concept of replicating nanomachines, especially free-living ones, and any person or organization who begins to seriously make progress in that direction will surely know they are playing with fire.
There simply never will be a society of free wild-type humans with lasting open access to advanced nanotechnology. It's like giving a box of matches to every child in a kindergarten: the place would burn down very quickly. And maybe that is where we're headed anyway, not because some insane idiot really will give everyone on earth a desktop WMD-factory, but because the knowledge is springing up in too many places at once.
Ordinary monitoring and intervention (as carried out by the state) can't be more than a temporary tactic - it might work for a period of years, but it's not a solution that can define a civilization's long-term response to the challenge of nanotechnology, because in the long run there are just too many ways in which the deadly threat might materialize - designed in secret by a distributed process, manufactured in a similar way.
As with Friendly AI, the core of the long-term solution is to have people (and other intelligent agents) who want to not end the world in this way - so "psychology" matters after all - but we are talking about a seriously posthuman world order then, with a neurotechnocracy which studies your brain deeply before you are given access to civilization's higher powers, or a ubiquitous AI environment which invasively studies and monitors the value systems and real-time plans of every intelligent being. You're a transhumanist, so perhaps you can deal with such scenarios, but all of them are on the other side of a singularity and cannot possibly define a practical political or technical pre-singularity strategy for overcoming this challenge. They are not designed for a world in which people are still people and in which they possess the cognitive privacy, autonomy, and idiosyncrasy that they naturally have, and in which there are no other types of intelligent actor on the scene. Any halfway-successful approach for forestalling nanotechnological (and related) doomsdays in that world will have to be a tactical approach (again, tactical means that we don't care about it being a very long-term solution, it's just crisis management, a holding pattern) which focuses first on the specificities of the technology (what exactly would make it so dangerous, how can that be neutralized), and only secondarily on social and psychological factors behind its potential misuse.
comment by jsteinhardt · 2010-11-30T02:15:14.883Z · LW(p) · GW(p)
I agree wholeheartedly with your concern. I think a more practical way of reducing risk than "develop FAI" (which seems certainly 50+ years out, and probably 100+) is to actually take the War on Terror seriously. Sure, angry individuals are bad, but angry organizations are much, much worse, especially competent ones like Al Qaeda.
I suspect biologists should also care much more about bioterrorism than they currently do, as part of their social responsibility.