[LINK] Terrorists target AI researchers

post by RobertLumley · 2011-09-15T14:22:53.780Z · score: 24 (27 votes) · LW · GW · Legacy · 35 comments

Something a number of LWers should probably be cautious of.



Comments sorted by top scores.

comment by Quirinus_Quirrell · 2011-09-15T15:32:37.767Z · score: 44 (52 votes) · LW · GW


comment by jimrandomh · 2011-09-15T17:20:57.043Z · score: 30 (32 votes) · LW · GW

You're paranoid. We're only speculating on the motives, identity, and whereabouts of a serial killer, in a public forum. What could possibly go wrong?

comment by Hyena · 2011-09-18T09:08:50.763Z · score: 8 (10 votes) · LW · GW

In general, you would be advised not to say anything on the Internet unless you have thought about it for at least five minutes.

comment by Clippy · 2011-09-16T20:46:57.209Z · score: 8 (20 votes) · LW · GW

Why not? You just did. I'm going to post here with my name even if it does draw negative attention from a fringe group of terrorists.

comment by Incorrect · 2011-09-16T01:26:02.973Z · score: 2 (4 votes) · LW · GW

Why not? (This is a serious question. I don't know why not.)

comment by JoshuaZ · 2011-09-16T01:42:03.790Z · score: 24 (26 votes) · LW · GW

There are two primary issues.

First, regular identities can be linked to actual people. If someone talks about how they support AI and nanotech research in this specific context, it could draw the attention of the group in question.

Second, people in this thread may be tempted to discuss whether there is any actual legitimacy to the viewpoints in question. In general, Less Wrong commentators are probably more oblivious than most people to how frank discussions can lead to bad results, even when the ideas are being discussed in a highly hypothetical fashion. For example, having the SIAI associated with even marginal, theoretical support of terrorist activity in this day and age could lead to bad results.

comment by Quirinus_Quirrell · 2011-09-16T13:21:49.574Z · score: 9 (11 votes) · LW · GW

One Quirrell point to JoshuaZ for getting both of the reasons, rather than stopping after just one like jimrandomh did.

(I'm going to stop PGP signing these things, because when I did that before, it was a pain working around Markdown, and it ended up having to be in code-format mode, monospaced and not line broken correctly, which was very intrusive. A signed list of all points issued to date will be provided on request, but I will only bother if a request is actually made.)

comment by Multiheaded · 2011-10-27T13:34:42.395Z · score: -1 (1 votes) · LW · GW

Heh. If a poster of one of these comments later disappears from LW for any amount of time, this might well become a local meme akin to the Bas-

comment by khafra · 2011-09-15T17:07:10.542Z · score: 30 (36 votes) · LW · GW

I remarked elsewhere that, if someone media-savvy could use this to show the USA's voters that the terrorists hate our Science as well as our freedoms, we might get all manner of space telescopes and stem cell therapies funded.

comment by Paul Crowley (ciphergoth) · 2011-09-16T07:19:05.420Z · score: 34 (34 votes) · LW · GW

I like it, but I couldn't really say that the belief that terrorists hate our freedom led to a great increase in freedom.

comment by Jack · 2011-09-16T01:48:43.636Z · score: 10 (10 votes) · LW · GW

Not just terrorists, Mexican, illegal immigrant terrorists.

comment by wedrifid · 2011-09-16T10:03:34.818Z · score: 1 (5 votes) · LW · GW

If we hadn't already been warned by Quirrell I might start offering advice to anyone who cares about US scientific funding...

comment by Quirinus_Quirrell · 2011-09-15T16:44:39.914Z · score: 29 (33 votes) · LW · GW

A while back, I claimed the Less Wrong username Quirinus Quirrell, and started hosting a long-running, approximate simulation of him in my brain. I have mostly used the account trivially - to play around with crypto-novelties, say mildly offensive things I wouldn't otherwise, and poke fun at Clippy. Several times I have doubted the wisdom of hosting such a simulation. Quirrell's values are not my own, and the plans that he generates (which I have never followed) are mostly bad when viewed in terms of my values. However, I have chosen to keep this occasional alter-identity, because he sees things that would otherwise be invisible to me.

Tor and a virtual machine sandbox are strongly recommended for following all links in this comment. Malware is highly probable and intelligence agencies take notice.

All of the primary source documents from this group are in Spanish. The blog "War on Society" has a translation of one of ITS's manifestos here, plus links to an earlier manifesto, a photo of one of the assembled package bombs, and the original publication in Spanish on the blog Liberacion Total here. Liberacion Total has been accused of being affiliated with ITS for publishing the manifesto, but they put up a notice saying they merely received it by mail. A few interesting observations:

  • The basic thesis of ITS's writing is "technology is bad". It shuffles between different types of technology and different objections, bringing up gray goo, artificial intelligence, animal testing, and environmental contamination.
  • It is focused almost exclusively on Mexico and Mexican universities.
  • The original documents replace o/a->x in many words. I saw this in "lxs cientificxs" ("the scientists") several times, and thought it was meant to be threatening; but on further inspection, I think this is more like those novelty gender-neutral pronouns ("ey") you sometimes see in English. If you want to use automated translation, you will have to undo this first.
  • The blogs War on Society, Liberacion Total and culmine appear to be sympathetic.
  • There are names of specific people and organizations in those documents. Those people should take notice (and probably already have).
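The o/a→x preprocessing step mentioned above can be sketched in a few lines. This is a rough heuristic, not part of the original comment: the function name `degender_x` and the substitute-masculine-"o" choice are my own assumptions, and real Spanish words ending in "x" (e.g. "fénix") would be mangled, so treat it strictly as a machine-translation preprocessing hack.

```python
import re

def degender_x(text: str) -> str:
    """Rewrite activist gender-neutral 'x' endings (e.g. 'lxs cientificxs')
    to the masculine form so automated translators can parse the words.

    Lossy by design: the 'x' stands for o/a, and we pick 'o'.
    """
    # Replace an 'x' at the end of a word (optionally before a plural 's')
    # whenever it follows another letter.
    return re.sub(r"(?<=[a-záéíóúüñ])x(s?)\b", r"o\1", text)

print(degender_x("lxs cientificxs"))  # -> los cientificos
```

Applied to the slogan quoted further down the thread, `"Libertad por lxs pressxs politicxs"` comes out as `"Libertad por los pressos politicos"`, which Google Translate handles without trouble.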

SingInst gets one mention on this page, in the middle of some ranting about Facebook being a mind-control tool.

Siguiendo con el tema de la informática, las famosas redes sociales y específicamente una que es Facebook se ha convertido en el centro de atención de la sociedad tecnoindustrial, pues en ella el sistema ve un aliado importante para el total control del comportamiento humano, que es en si, un factor sumamente amenazante para el orden establecido dentro de la Civilización.

Uno de los tres líderes de Facebook es Peter Thiel, un empresario estadounidense quien se ha propuesto la eliminación total del mundo real o natural y la imposición del mundo digital, así como se oye lo ha dicho. Analizando esto, podemos ver que Facebook no es una simple red comunicacional inofensiva, sino que es un experimento social de control mental que el Sistema Tecnológico Industrial está usando con gran efectividad para excluir a la Naturaleza del contacto humano, es decir, desarrolla en gran medida la alienación total de lxs individuos a la Tecnología.

Pero este empresario pervertido no se ha quedado quieto, además de ser uno de lxs principales contribuyentes de la mencionada herramienta de control mental, ha invertido millonarias ganancias en investigación de inteligencia artificial y nuevas tecnologías capaces de alargar la vida del hombre por medio de la ciencia. Para esto tiene de aliado al Singularity Institute for Artificial Intelligence y al inglés gerontólogo biomédico Aubrey de Grey, quien se encarga específicamente en desarrollar por medio de una tecnología altamente avanzada que el periodo de vida de un ser humano se alargue de manera indefinida, y así, el humano hecho maquina ha sido creado!

Which translates to:

Continuing with the topic of computing: the famous social networks, and specifically Facebook, have become the center of attention of techno-industrial society, since in it the system sees an important ally for the total control of human behavior, which is in itself an extremely threatening factor for the established order within Civilization.

One of the three leaders of Facebook is Peter Thiel, a US businessman who has set himself the total elimination of the real or natural world and the imposition of the digital world; yes, you heard right, he has said so. Analyzing this, we can see that Facebook is not a simple, harmless communication network, but a social experiment in mind control that the Industrial Technological System is using with great effectiveness to exclude Nature from human contact; that is, it greatly advances the total alienation of individuals into Technology.

But this perverted businessman has not stood still: besides being one of the main contributors to the aforementioned mind-control tool, he has invested millions in profits into artificial intelligence research and into new technologies capable of extending human life by means of science. For this he has as allies the Singularity Institute for Artificial Intelligence and the English biomedical gerontologist Aubrey de Grey, who is specifically engaged in making, by means of highly advanced technology, the lifespan of a human being extend indefinitely; and thus, the human made machine has been created!

There are some clues in there that could be useful for figuring out who this is. I'm not sure how uncommon the 'x' thing is, but it's probably in his real-name writings too, and it's easy to search for. His rantings about Facebook indicate he probably had an account at one point but abandoned it. On priors, he's almost certainly a loner, and the same rant seems to back that up. His understanding of technology seems pretty shallow, which means the manifestos might've been sent through insufficiently-anonymized means (though Liberacion Total probably isn't keen on helping unmask him).

comment by Jack · 2011-09-16T02:48:32.590Z · score: 6 (6 votes) · LW · GW

I'm not sure how uncommon the 'x' thing is,

Common enough it seems. "Libertad por lxs pressxs politicxs" is a thing (a facebook group even) and from what I gather, a common graffiti slogan.

comment by gwern · 2011-09-15T15:04:05.186Z · score: 9 (9 votes) · LW · GW

The group praises Theodore Kaczynski, the Unabomber, whose anti-technology crusade in the United States in 1978–95 killed three people and injured many others.

More than that, he specifically targeted CS researchers like Gelernter.

comment by RichardKennaway · 2011-09-15T15:00:24.507Z · score: 5 (17 votes) · LW · GW

On the other hand, the mission of the SIAI is founded on the belief that if anyone succeeds at AGI without solving the Friendliness problem, they will destroy the world. Eliezer has said in an interview a year or two back that he does not think that anyone currently working on AGI has any chance of succeeding. But if not now, then some day the question will have to be faced:

What do you do if you really believe that someone's research has a substantial chance of destroying the world?

comment by Humbug · 2011-09-15T15:30:06.422Z · score: 13 (13 votes) · LW · GW

What do you do if you really believe that someone's research has a substantial chance of destroying the world?

Go batshit crazy.

comment by Dr_Manhattan · 2011-09-15T15:09:26.623Z · score: -3 (21 votes) · LW · GW

What do you do if you really believe that someone's research has a substantial chance of destroying the world?

If you really believe it, and have compensated for biases by all means available, and you are a good consequentialist, ... fat man ... 5 workers ...

I hear SIAI was looking for martial arts skilled people, lol.

comment by khafra · 2011-09-15T17:02:27.947Z · score: 5 (5 votes) · LW · GW

Somebody mentioned Aleister Crowley's quotes on LW a little while ago; so:

There seems to be much misunderstanding about True Will ... The fact of a person being a gentleman is as much an ineluctable factor as any possible spiritual experience; in fact, it is possible, even probable, that a man may be misled by the enthusiasm of an illumination, and if he should find apparent conflict between his spiritual duty and his duty to honour, it is almost sure evidence that a trap is being laid for him and he should unhesitatingly stick to the course which ordinary decency indicates ... I wish to say definitely, once and for all, that people who do not understand and accept this position have utterly failed to grasp the fundamental principles of the Law of Thelema.

-- Magical Diaries of Aleister Crowley : Tunisia 1923 (1996), edited by Stephen Skinner p.21

comment by Scott Alexander (Yvain) · 2011-09-15T17:24:29.354Z · score: 9 (9 votes) · LW · GW

If one is skeptical of the existence of Thelema or of the validity of these spiritual experiences, then this sounds a lot like religious leaders who say "Sure, believe in Heaven. But don't commit suicide to get there faster. Or commit homicide to get other people there faster. Or do anything else that contradicts ordinary decency."

Part of the fun of being right is that when your system contradicts ordinary decency, you get to at least consider siding with your system.

(although hopefully if your system is right you will choose not to, for the right reasons.)

comment by Nornagest · 2011-09-15T22:32:45.738Z · score: 6 (6 votes) · LW · GW

My Crowley background is pretty spotty, but I read that as him generalizing over ethical intersections with religious experience and then specializing to his own faith. It's not entirely unlike some posts I've read here, in fact; the implication seems to be that if some consequence of your religious (i.e. axiomatic; we could substitute decision-theoretic or similarly fundamental) ethics seems to suggest gross violations of common ethics, then it's more likely that you've got the wrong axioms or forgot to carry the one somewhere than that you need to run out and (e.g.) destroy all humans. Which is very much what I'd expect from a rationalist analysis of the topic.

comment by Dr_Manhattan · 2011-09-15T18:06:33.511Z · score: 5 (11 votes) · LW · GW

Here is an intuition pump: you see a baby who has gotten hold of his dad's suitcase nuke and is about to destroy the city. Do you prevent him from pushing the button, even by lethal means? If the answer is yes, then consider Richard's original question, and check whether the differences between the two situations are enough to reverse your decision.

comment by khafra · 2011-09-15T18:10:37.814Z · score: 0 (4 votes) · LW · GW

On the one hand, yes; on the other hand, I do think I take the risks from UFAI seriously, and have some relevant experience and skill, but still wouldn't participate in a paramilitary operation against an AGI researcher.

edit: On reflection, this is due to my confidence in my ability to correctly predict the end of the world, and the problem of multiplying low probabilities by large utilities.

comment by Dr_Manhattan · 2011-09-16T20:47:01.100Z · score: 1 (3 votes) · LW · GW

On reflection, this is due to my confidence in my ability to correctly predict the end of the world, and the problem of multiplying low probabilities by large utilities.

You mean lack of confidence, right?

comment by Dr_Manhattan · 2011-09-15T17:17:02.996Z · score: 4 (8 votes) · LW · GW

unhesitatingly stick to the course which ordinary decency indicates

Extraordinary situations call for extraordinary decency

comment by [deleted] · 2011-09-15T16:51:43.988Z · score: 2 (8 votes) · LW · GW

There is a problem that can occur when you are attempting to check all of your biases when contemplating a serious crime.

The risk is that, while checking your biases, you are exposing yourself to people who could then help law enforcement turn you in for that serious crime. And you would presumably be aware that you can't afford to be captured, because then there would be other things you planned to blow up to save the world that you never got to, simply because you weren't secretive enough.

This means that by checking all of your biases you are boosting the chance of the world being destroyed, if it turns out you weren't biased. And it's easy to convince yourself that you can't risk that, so you can't talk to other people about your plans.

But you can't thoroughly check your biases by consulting yourself and no one else. It is entirely possible for you to be heavily deluding yourself, having gotten brain damage or gone insane.

So you're left with the conflicting demands of "I need to talk with other people to verify this is accurate." and "I need to keep this a secret, so I can implement it if it is accurate."

As a side question, does it feel like this has a few points that are oddly similar to Pascal's mugging to anyone else?

As an example, they both seem to have that aspect of "But you simply MUST do this, because the consequences are simply too great not to do it, even after accounting for the probabilities."

comment by James_Miller · 2011-09-16T23:51:21.582Z · score: 8 (8 votes) · LW · GW

A Catholic priest couldn't turn you in, and a smart one probably knows a lot about some kinds of human biases.

comment by Kevin · 2011-09-22T14:06:14.146Z · score: -3 (3 votes) · LW · GW

That's not true about the confidentiality of priests... a priest has the same legal obligation as a therapist to turn in someone who is a danger to themselves or others.

comment by pedanterrific · 2011-09-22T15:03:01.514Z · score: 4 (4 votes) · LW · GW

Doubt it. The Code of Canon Law states:

Can. 983 §1. The sacramental seal is inviolable; therefore it is absolutely forbidden for a confessor to betray in any way a penitent in words or in any manner and for any reason.

Can. 1388 §1. A confessor who directly violates the sacramental seal incurs a latae sententiae excommunication reserved to the Apostolic See; one who does so only indirectly is to be punished according to the gravity of the delict.

comment by atorm · 2011-09-20T14:46:49.266Z · score: 2 (2 votes) · LW · GW

If you are convinced that, barring any biases, your calculated course of action is the right one, you could talk to anyone you trusted to be similarly convinced by your arguments. Either they will point out your errors and convince you that you shouldn't act, or they will not discover any errors and agree to help you with your plans.

comment by TwistingFingers · 2011-09-16T02:13:07.310Z · score: 0 (0 votes) · LW · GW

Screaming and bleeding and gnashing of teeth; little AI researchers can't fall asleep ; )

comment by TwistingFingers · 2011-09-16T02:08:40.993Z · score: -1 (1 votes) · LW · GW

Screaming and bleeding and gnashing of teeth; little AI researchers can't fall asleep : )

comment by Vladimir_Nesov · 2011-09-15T15:02:11.803Z · score: -1 (5 votes) · LW · GW

(Since the linked article doesn't at a first glance talk about AI researchers, the title should be justified.)

comment by Humbug · 2011-09-15T15:34:37.966Z · score: 12 (12 votes) · LW · GW

In statements posted on the Internet, the ITS expresses particular hostility towards nanotechnology and computer scientists. It claims that nanotechnology will lead to the downfall of mankind, and predicts that the world will become dominated by self-aware artificial-intelligence technology. Scientists who work to advance such technology, it says, are seeking to advance control over people by 'the system'.

comment by Vladimir_Nesov · 2011-09-15T15:44:26.823Z · score: 3 (3 votes) · LW · GW