A big Singularity-themed Hollywood movie out in April offers many opportunities to talk about AI risk

post by chaosmage · 2014-01-07T17:48:11.363Z · LW · GW · Legacy · 85 comments


There's a big Hollywood movie coming out with an apocalyptic Singularity-like story, called Transcendence. (IMDB, Wiki, official site) With an A-list cast and big budget, I contend this movie is the front-runner to be 2014's most significant influence on discussions of superintelligence outside specialist circles. Anyone hoping to influence those discussions should start preparing some talking points.

I don't see anybody here agreeing with me on this. The movie was briefly discussed on LW when it was first announced in March 2013, but since then only the trailer (out since December) has been mentioned. MIRI hasn't published a word about it. This amazes me. We have three months until millions of people who never considered superintelligence start thinking about it - is nobody crafting a response to the movie yet? Shouldn't there be something that lazy journalists, given the job of writing about this movie, can find?

Because if there isn't, they'll dismiss the danger of AI like Erik Sofge already did in an early piece about the movie for Popular Science, and nudge their readers to do so too. And that'd be a shame, wouldn't it?

85 comments

Comments sorted by top scores.

comment by lukeprog · 2014-01-07T23:54:36.895Z · LW(p) · GW(p)

I've been planning to try to watch it early, or at least on launch day, and then write up a blog post about it, something like "Transcendence vs. Superintelligence Theory", to compare it with e.g. the view in Bostrom's forthcoming book (which I've read).

Replies from: Dr_Manhattan, Ander
comment by Dr_Manhattan · 2014-01-08T01:04:07.050Z · LW(p) · GW(p)

My guess from the trailer is that there are some http://en.wikipedia.org/wiki/Zendegi influences: a sick person whose only way out is an upload, and concerns about the fidelity of the upload (both from the trailer). The Super(intelligence) meme comes from somewhere else.

comment by Ander · 2014-01-08T00:35:15.958Z · LW(p) · GW(p)

I look forward to reading your comparison! Hopefully it will also let me know whether it's worth watching the movie.

comment by ChristianKl · 2014-01-11T16:40:43.609Z · LW(p) · GW(p)

A proposal for a step-by-step MIRI PR strategy:

(1) Decide on a person who will speak about the film to the public on MIRI's behalf - ideally a person who's going to be comfortable in front of a TV camera.

(2) Email the producers of Transcendence. Basically, tell them you are MIRI, a nonprofit that works on the issue of unfriendly AI risk, and that you didn't like that Erik Sofge dismissed AI risk in his Popular Science article.

You want to speak to the press about the film, but you want to know what the film is actually about, so it would be nice if the producers of Transcendence would show you the film before the release date. Offer to fly to whatever location they might want to show the film at. Mention that you will attempt to bring along a journalist for an exclusive story about MIRI's reaction to the film.

This proposal should be a no-brainer for someone who produces a film and wants more PR for it.

(3) Once you have that agreement, message Wired about an exclusive opportunity to come along to the screening and cover the reaction of MIRI's spokesperson to it.

Again, I think it should be a no-brainer for Wired to send a journalist for such a purpose if you pitch it right. In case Wired declines, ask around at other places: tech outlets and places like Business Insider and Forbes.

(4) Once the article is published (it might not be immediate), do all you can to draw attention to it. That probably also includes writing an in-depth article on the MIRI blog that arms readers with various important talking points.

(5) Wait for the press to send queries for interviews. Here you have to decide which interview requests you want to take. If the press sees you as a general expert on technology, there might be a bunch of requests that don't have anything to do with MIRI's purpose and that you don't want to take.

Replies from: chaosmage
comment by chaosmage · 2014-01-13T11:14:28.219Z · LW(p) · GW(p)

Ben Goertzel (former MIRI director of research) has worked with the Chinese distributor of Transcendence, so maybe he's seen the movie or knows whom to talk to there?

comment by JenniferRM · 2014-01-07T22:16:07.772Z · LW(p) · GW(p)

Something to be aware of is that, as with the novel Zendegi (which had the "benign superintelligence bootstrap project" and "overpowering falsehood dot com"), there are likely to be some specific allusions to transhumanist communities, although the visual medium supports more allusive mechanisms based on sounds, appearances, and emotional/social gestalts. The allusions in Zendegi were quite ungenerous. I'm not sure what kind of critical or positive reception would be good in terms of expected world outcomes, but I imagine that being able to respond to such allusions could be important for the organization and for people.

Off the top of my head, I can see stuff just in the trailer and the brief summary.

The protagonist's name is "Will Caster", which resonates a bit with the way futurists semi-often give themselves names that can function as priming/identity hacks, like Max More, Will Newsome, FM-2030, etc.

There will almost certainly be scenes that try to replicate the vibe of a Singularity Summit. The trailer has some "guy with big wavy hair standing on a stage speaking to a packed audience" visuals but I don't know how much other stuff might come through.

The uploadee's love interest will be played by Rebecca Hall, who reminds me a bit of Julia Galef.

I wonder if memes pairing an image from the movie and an image from RL with a caption would tend to be "good" or "bad"?

Replies from: RobbBB, VAuroch
comment by Rob Bensinger (RobbBB) · 2014-01-09T03:08:50.601Z · LW(p) · GW(p)

In-jokes are probably things most people will miss or be indifferent to, so fixating on them may be counterproductive. Being forced to give a serious response to satire is usually a losing conversation, because it looks petty, humorless, and negative.

The parallels you suggest ('this movie features a person with a funny name, features a woman with long brown hair, and features an on-stage speech') don't sound especially telling or nonrandom to me, though. We'll have to wait and see.

comment by VAuroch · 2014-01-09T08:26:30.473Z · LW(p) · GW(p)

The guy talking on a stage is probably much more TED (and, to me, reminiscent of the movie Sneakers) than any domain-specific conference.

comment by Dr_Manhattan · 2014-01-07T17:58:05.102Z · LW(p) · GW(p)

I agree this movie will have an impact. My guess is it will be polarizing, which is not the worst thing - right now the area of AI risk suffers more from lack of attention than from specific opposing opinions. Having these themes enter the public memesphere, even through entertainment, seems useful.

As far as MIRI commenting on it, I think it's too early and would seem to an intelligent observer like bandwagon-jumping attention whoring - the movie is not even out yet. I imagine after the release there will be a flurry of PopSci-type articles, at which point weighing in might be appropriate and well received.

Replies from: Richard_Kennaway, chaosmage, passive_fist
comment by Richard_Kennaway · 2014-01-07T18:35:48.865Z · LW(p) · GW(p)

[MIRI] would seem to an intelligent observer like bandwagon-jumping attention whoring

You say that like it's a bad thing.

I imagine after the release there will be a flurry of PopSci-type articles, at which point weighing in might be appropriate and well received.

That has to be prepared for (in advance -- that is what preparation is). If a journalist asks MIRI for comment, they need to have a comment ready.

Replies from: Dr_Manhattan, buybuydandavis
comment by Dr_Manhattan · 2014-01-08T01:06:20.505Z · LW(p) · GW(p)

[MIRI] would seem to an intelligent observer like bandwagon-jumping attention whoring

Status-wise it's a bad thing.

Replies from: Richard_Kennaway, chaosmage
comment by Richard_Kennaway · 2014-01-08T09:32:05.701Z · LW(p) · GW(p)

Status-wise it's a bad thing.

In what alternate reality? Every prominent politician, and every substantial business or other organisation, has people whose whole job is what you scorn as "attention whoring". It's more usually called something like "publicity", "press department", or "outreach", and I hope MIRI spends a significant number of man-hours on it. Telling people about yourself is a fundamental prerequisite for people knowing about you and whatever cause or business purpose you are trying to pursue. (There are ways of doing this badly, but the surest way of doing it badly is to be resentful at having to do it at all.)

So, MIRI needs to have more than just a comment ready. They need to be able to supply anyone who asks with a whole position paper relating to the film, and where relevant, work references to it into their publicity material, at such time as the actual content of the movie becomes clear. (And there are ways of doing this badly, but etc.)

The journalist might never come knocking, but when opportunity knocks, it is too late to prepare for it. Not doing this for fear of "attention whoring" and people thinking them "low status" would be shooting themselves in the foot. And why would that journalist come knocking? Because the publicity department of the production company has been publicising the film months in advance, and because MIRI has made itself prominent enough to be known to at least one journalist as having something to say on the subject.

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2014-01-08T13:05:15.283Z · LW(p) · GW(p)

I agree with you that it's important to be prepared; the attention whoring referred specifically to commenting on the movie before it comes out and its plot and "statement" (if there is one) become clear.

comment by chaosmage · 2014-01-08T08:51:41.239Z · LW(p) · GW(p)

What's the solution to that? Does MIRI need an attention-whoring low-status little sister?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-01-09T02:16:58.244Z · LW(p) · GW(p)

Isn't that called "the PR arm?"

comment by buybuydandavis · 2014-01-08T10:38:19.155Z · LW(p) · GW(p)

You say that like it's a bad thing.

Ha! Me and my sister always say that! I think it was her canonical answer to "You always have an answer, don't you?"

comment by chaosmage · 2014-01-07T18:08:10.536Z · LW(p) · GW(p)

I agree it would be early to comment now, but much gets written during the marketing run-up to release. An easily findable comment published during that run-up, not only after release, can have a lot of impact. Compare how Wikileaks' comments on "The Fifth Estate", published before the release, influenced what got written about that movie.

And whoever writes something pre-release will be findable when people look for somebody to weigh in post-release.

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2014-01-07T18:32:51.713Z · LW(p) · GW(p)

Well, one idea is to write something on your blog and link it to MIRI; PageRank will do the rest :)

Another idea - organize a public movie event with a panel afterwards. It should attract significant sci-fi fan interest and maybe get some coverage (maybe even use it as a fundraiser?). Louie Helm did this for "Our Final Invention". Will ping him.

Replies from: thelomen
comment by thelomen · 2014-01-10T12:19:39.393Z · LW(p) · GW(p)

The panel idea, especially if it can be done early, has a lot of value; it could boost discussion and also provide a possible influx of people here.

comment by passive_fist · 2014-01-07T22:09:37.519Z · LW(p) · GW(p)

Are you sure being a polarizing subject would be much better than suffering from lack of attention? At least now, MIRI and others are able to work in relative peace.

comment by CoffeeStain · 2014-01-08T07:28:09.742Z · LW(p) · GW(p)

The film's trailer strikes me as being aware of the transhumanist community in a surprising way, as it includes two themes that are otherwise not connected in the public consciousness: uploads and superintelligence. I wouldn't be surprised if a screenwriter found inspiration in the characters of Sandberg, Bostrom, or of course Kurzweil. Members of the Less Wrong community itself have long struck me as ripe for fictionalization... Imagine if a Hollywood writer actually visited.

comment by Locaha · 2014-01-07T18:57:50.034Z · LW(p) · GW(p)

I suspect THE POWER OF LUVVVVV will come at some point as something only humans can do, as opposed to heartless machines.

Replies from: knb, Vulture, ChristianKl
comment by knb · 2014-01-08T01:53:36.000Z · LW(p) · GW(p)

Most of the recent depictions of AI I can think of show them totally capable of emotion. The AIs in the Matrix experience emotions, the little boy from AI experiences emotion, GLaDOS in Portal experiences emotions, David in Prometheus experiences emotion, etc.

I'm sure there are recent examples where general AIs are incapable of emotion, but none come to mind.

comment by Vulture · 2014-01-07T19:04:51.274Z · LW(p) · GW(p)

I doubt it. That trope is pretty thoroughly discredited by now.

Replies from: Locaha
comment by Locaha · 2014-01-07T21:00:47.085Z · LW(p) · GW(p)

Is it really? In Hollywood?

Replies from: ESRogs
comment by ESRogs · 2014-01-08T01:58:16.128Z · LW(p) · GW(p)

Well at least now there is Her.

Replies from: buybuydandavis
comment by buybuydandavis · 2014-01-08T10:50:17.852Z · LW(p) · GW(p)

Scarlett Johansson was a great pick for the voice.

There are certain moments in movies where people use their voices to convey passion in an amazing way - where it just strikes home. She's on that list for me. In Iron Man: "I'd do whatever I wanted to do. With whoever I wanted to do it with."

Also see Katharine Ross in The Stepford Wives "But she won't take pictures, and she won't be me."

comment by ChristianKl · 2014-01-08T12:14:06.700Z · LW(p) · GW(p)

I doubt it, given that the AI is an upload. There's no reason why the upload shouldn't be able to love - especially given that the trailer speaks of computers with the full range of human emotions.

comment by timtyler · 2014-01-08T11:27:32.532Z · LW(p) · GW(p)

Uploads first? It just seems silly to me.

The movie features a Luddite group assassinating machine-learning researchers - not a great meme to spread around, IMHO :-(

Slightly interestingly, their actions backfire, and they accelerate what they seek to prevent.

Overall, I think I would have preferred Robopocalypse.

comment by V_V · 2014-01-08T11:00:57.590Z · LW(p) · GW(p)

The premise doesn't sound particularly original:

http://tvtropes.org/pmwiki/pmwiki.php/Main/AIIsACrapshoot
http://tvtropes.org/pmwiki/pmwiki.php/Main/BrainUploading
http://tvtropes.org/pmwiki/pmwiki.php/Main/ImmortalityImmorality

Replies from: VAuroch
comment by VAuroch · 2014-01-09T08:35:00.623Z · LW(p) · GW(p)

Yes, but based on the trailer it looks substantially more grounded in current best predictions than previous treatments of the same idea.

comment by Ander · 2014-01-07T23:32:06.424Z · LW(p) · GW(p)

"Because if there isn't, they'll dismiss the danger of AI like Erik Sofge already did in an early piece about the movie for Popular Science, and nudge their readers to do so too. And that'd be a shame, wouldn't it?"

I would much rather see someone dismiss the dangers of AI than misrepresent them by making a movie in which Johnny Depp plays "a seemingly megalomaniacal AI researcher". This gives the impression that what we should worry about is a "mad scientist" type who creates an "evil" AI that takes over the world. Eliezer's posts do a great job of explaining the actual dangers of unfriendly AI, more along the lines of "the AI neither loves you, nor hates you, but you are composed of matter it can use for other things". That is, if we create a powerful AI (or an AI who creates an AI who creates an AI who creates a powerful AI) whose values and morals do not align with what we humans would "want", it will probably result in something terrible. (And not even in a way that provides us the silver lining of "well, the AIs wiped out humanity, but at least the AI civilization is highly advanced and interesting!" More like: now the entire planet Earth is Grey Goo/Paperclips/whatever.) Or even just the danger of us biological humans losing relevance in a world with superintelligent entities.

While I would love to see a great, well done, well thought out movie about Transhumanism, it seems pretty likely that this movie is just going to make me angry/annoyed. I really hope I am wrong, and that this movie is actually great.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-08T12:11:05.468Z · LW(p) · GW(p)

Eliezer's posts do a great job of explaining the actual dangers of unfriendly AI, more along the lines of "the AI neither loves you, nor hates you, but you are composed of matter it can use for other things".

I'm not sure that's true. In the beginning stages, when an AI is vulnerable, it might very well use violence to prevent itself from being destroyed.

Replies from: RobbBB, timtyler
comment by Rob Bensinger (RobbBB) · 2014-01-09T02:03:12.610Z · LW(p) · GW(p)

Hurricanes act with 'violence' in the sense of destructive power, but hurricanes don't hate people. The idea is that an AGI, like an intelligent hurricane, can be dangerous without bearing any special animosity for humans, indeed without caring or thinking about humans in any way whatsoever.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-09T12:44:07.208Z · LW(p) · GW(p)

The idea is that an AGI, like an intelligent hurricane, can be dangerous without bearing any special animosity for humans, indeed without caring or thinking about humans in any way whatsoever.

No. That's not what he said. There's a difference between claiming that A can be dangerous without X and describing a scenario in which A is dangerous due to X.

There's more than one plausible UFAI scenario. We do have discussions about boxing AIs, and in those cases it's quite useful to model the AI as trying to act against humans to get out.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2014-01-09T20:38:29.261Z · LW(p) · GW(p)

There's a difference between claiming that A can be dangerous without X and describing a scenario in which A is dangerous due to X.

If intelligent hurricanes loved you, they might well avoid destroying you. So it can indeed be said that intelligent hurricanes' indifference to us is part of what makes them dangerous.

We do have discussions about boxing AIs, and in those cases it's quite useful to model the AI as trying to act against humans to get out.

"the AI neither loves you, nor hates you" is compatible with 'your actions are getting in the way of the AI's terminal goals'. We don't need to appeal to interpersonal love and hatred in order to model the fact that a rational agent is competing in a zero-sum game.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-09T21:48:54.938Z · LW(p) · GW(p)

We don't need to appeal to interpersonal love and hatred in order to model the fact that a rational agent is competing in a zero-sum game.

There's a difference between "need to appeal" and something being a possible explanation.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2014-01-09T22:31:21.921Z · LW(p) · GW(p)

Sure, but love and hate are rather specific posits. Empirically, the vast majority of dangerous processes don't experience them. Empirically, the vast majority of agents don't experience them. Very plausibly, the vast majority of possible intelligent agents also don't experience them. "the AI neither loves you, nor hates you" is not saying 'it's impossible to program an AI to experience love or hate'; it's saying that most plausible uFAI disaster scenarios result from AGI disinterest in human well-being rather than from AGI sadism or loathing.

comment by timtyler · 2014-01-10T00:18:08.586Z · LW(p) · GW(p)

Also, competition between humans (with machines as tools) seems far more likely to kill people than a superintelligent runaway. However, it's (arguably) not so likely to kill everybody. MIRI appears to be focussing on the "killing everybody" case. That is because - according to them - that is a really, really bad outcome.

The idea that losing 99% of humans would be acceptable losses may strike laymen as crazy. However, it might appeal to some of those in the top 1%. People like Peter Thiel, maybe.

comment by kokotajlod · 2014-01-07T21:56:10.561Z · LW(p) · GW(p)

I agree. Here's a quick brainstormed statement, just to get the ball rolling:

"This film portrays an implausible runaway unfriendly AI scenario, trivializing what is actually a serious issue. For depictions of much more plausible runaway unfriendly AI scenarios, visit [website], where the science behind these depictions is also presented."

Replies from: passive_fist, ChristianKl
comment by passive_fist · 2014-01-07T22:14:26.627Z · LW(p) · GW(p)

Perhaps 'trivializing' is not the best word, as it might make the casual reader (who has absolutely no idea of the real dangers of AI risk) think we're taking ourselves too seriously. Consider this revised statement:

"The film is an entertaining look at a runaway AI scenario. While the film's story is probably implausible, it is plausible that a runaway unfriendly AI scenario could occur in real life. An in-depth discussion of this issue is given on [website]."

Replies from: kokotajlod
comment by kokotajlod · 2014-01-10T04:27:10.654Z · LW(p) · GW(p)

Yep, I think that's an improvement. What do you think about ChristianKl's objection to putting a link in the statement?

Replies from: passive_fist
comment by passive_fist · 2014-01-10T04:30:54.794Z · LW(p) · GW(p)

"This is our statement. If you wish to learn more, refer to [website]"

comment by ChristianKl · 2014-01-08T12:13:07.920Z · LW(p) · GW(p)

That's a statement that doesn't work as far as the media are concerned. A journalist has the job of writing an article; he doesn't want to spend significant time reading your website.

comment by lukeprog · 2014-01-10T18:12:35.047Z · LW(p) · GW(p)

With an A-list cast and big budget, I contend this movie is the front-runner to be 2014's most significant influence on discussions of superintelligence outside specialist circles

Probably. It's not entirely true that the public's awareness of and concern about asteroid risk were created by Armageddon, but I bet it's 4/10 true.

Bonus link: Could Bruce Willis Save the World?

comment by Manfred · 2014-01-08T07:38:24.266Z · LW(p) · GW(p)

I'd guess it's going to go all Mother of Storms at the end, which is happy as endings go.

comment by polymathwannabe · 2014-02-19T12:23:27.370Z · LW(p) · GW(p)

Mr. Sofge's review says this,

"Being smart doesn’t guarantee malice, or a callous urge to enslave or destroy less-capable beings. Those are human traits, assigned to the idea of intelligent machines with the kind of narcissism only humans can muster."

which sounds like an accusation of the typical mind fallacy when we warn that AI may turn unfriendly.

But then he says this,

"The machines become smarter, but not superior. They’re the ultimate intellectuals—far too busy with discourse and theory to even consider something as superfluous as enslaving or supplanting their creators."

which sounds like he doesn't really get what rationality is about.

comment by ChristianKl · 2014-01-08T13:15:49.781Z · LW(p) · GW(p)

I started out thinking that one should wait till the film comes out. Now I think that's a bad idea. By the time the film comes out, relevant journalists will read the articles that have already been published. If those contain quotations from a MIRI person, then that person is going to get contacted for further interviews.

It might also be worth thinking about whether one can place a quote from a MIRI person in the Wikipedia article for the film.

Replies from: David_Gerard, Viliam_Bur, chaosmage
comment by David_Gerard · 2014-01-08T16:54:32.220Z · LW(p) · GW(p)

It turns out that putting promotional spam for your organisation on Wikipedia is considered not a great idea.

Please don't do this.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-08T17:15:46.770Z · LW(p) · GW(p)

I'm not talking about paid advocacy.

Replies from: David_Gerard
comment by David_Gerard · 2014-01-08T17:27:50.622Z · LW(p) · GW(p)

Spam is spam. If you want MIRI to be primarily known to Wikipedia as spammers ...

comment by Viliam_Bur · 2014-01-10T09:22:48.609Z · LW(p) · GW(p)

It might also be worth thinking about whether one can place a quote from a MIRI person in the Wikipedia article for the film.

Inserting your quote into Wikipedia, whether directly or through a sockpuppet -- wrong.

Making a quote famous enough (outside of Wikipedia) so that other people can't resist putting it into Wikipedia -- right.

So, the right approach (compatible with Wikipedia's rules) would be e.g. to give an interview to a newspaper shortly after the film comes out. As a side benefit, people who read the newspaper would also get the info. Another possible method is to discuss the movie in a published paper.

In other words, MIRI should publish some texts when the movie comes out, but should not publish them on Wikipedia directly. The texts will get to Wikipedia if they become visible enough outside of Wikipedia. And becoming visible outside of Wikipedia is also important.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-10T12:16:19.315Z · LW(p) · GW(p)

So, the right approach (compatible with Wikipedia's rules) would be e.g. to give an interview to a newspaper shortly after the film comes out. As a side benefit, people who read the newspaper would also get the info. Another possible method is to discuss the movie in a published paper.

I agree. But it might help to ask a trustworthy Wikipedian you know to insert the quote and make the case that it should be in the article on its merits.

Almost by definition, someone who understands enough about the topic of UFAI to be qualified to decide which quotes should be in a relevant article has an interest in getting the topic represented the right way.

The idea that the interests that come with being qualified on a topic should disqualify one from editing a relevant article is just wrong. Followed consistently, it leads to articles that contain factual errors, because the editors don't know what they are talking about and just copy what some other source wrote - and newspapers tend to be error-ridden and are published without peer review.

The texts will get to Wikipedia if they become visible enough outside of Wikipedia.

There are plenty of texts out there that are very visible outside of Wikipedia and that have business being cited in Wikipedia, but aren't. The Wikipedia system doesn't work in a way that effectively identifies all suitable texts. I don't think there is anything wrong with helping it along in an area where you know the landscape.

As far as the topic goes, I have no financial ties to MIRI, and a lot of people reading here don't either. The only interest I have that could disqualify me is that I don't want humanity to die. The idea that this is the same thing as commercial spam, and that it is an irrelevant motivation for making an edit to Wikipedia, is to me wrong on a fundamental level.

I'm surprised that, given the amount of utilitarianism on LessWrong, that sentiment doesn't get a better reception here. Then again, I guess it's easy to argue in the abstract that one should push the fat man but hard to make such ethical decisions in real life.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-01-10T13:55:38.382Z · LW(p) · GW(p)

The idea is that if you can't get your quote published outside of Wikipedia, you shouldn't put it in Wikipedia. Preferably, "outside" shouldn't be your own blog, but something more respectable.

The conflict of interest part is (this is my opinion, not Wikipedia policy) just a simple heuristic to prevent most of the "it's not anywhere else, but I insist it should be in Wikipedia" edits.

Yes, it is extremely annoying if a newspaper prints false information, and some Wikipedia editor insists on adding it to the article, and calls all expert explanations "original research". But still, the heuristic in general is useful -- it would be much worse on average to instead have hundreds of crackpots claiming expertise and "correcting" information from newspapers and books. It is easier to find a third-party volunteer to verify whether X really was published in a newspaper, than to verify whether X is true. -- And you need a lot of volunteers who have no life and only care about the Wikipedia rules, because if you remove all such non-experts from the game, and leave only the experts and the crackpots, the crackpots will obviously win by their numbers and dedication.

If a newspaper contains an error, the long-term strategy to fix it is to find or create another newspaper article or a book that will correct the error. In general, the battles should be fought outside of Wikipedia, and Wikipedia should just announce the winners.

The only interest I have that could disqualify me is that I don't want humanity to die.

Problem is, there are many people who believe the same. Think not only of spam, but also of religions, cults, homeopathy, etc. Even if you are right and they are wrong, the difference is only seen by those who agree that you are right. For everyone else, this is just one of literally thousands of different causes that want to be promoted via Wikipedia. The Wikipedia immune system will evaluate you as a threat.

The Wikipedia system doesn't work in a way that effectively identifies all suitable texts.

Well, this is the place where you have the chance to add the text successfully. The more important and relevant the source of the text, the higher your chance of success.

I'm surprised that, given the amount of utilitarianism on LessWrong, that sentiment doesn't get a better reception here.

Okay, let's talk about consequences. You add a MIRI quote to Wikipedia, someone deletes it. You add it again, someone deletes it again and quotes some Wikipedia rule. You add it again and perhaps even say that you consider Wikipedia rules irrelevant in this specific case. The quote is removed again, and now you have a group of people who have no life watching the page all day ready to remove your quote if you add it again. Also, you have increased the probability of other MIRI quotes being removed in the future, even if they are moderately well sourced.

Strategy B: Get the quote into some high-status source. Then add it to the article, somewhere at the bottom (obviously it doesn't belong under the plot or the characters and cast, but maybe under criticism), with a reference to the source. The probability that it stays there is much higher.

I guess it's easy to argue in the abstract that one should push the fat man

But in real life the fat man has a dozen deontologist bodyguards ready to stop you. So instead you listen to the bodyguards; they tell you they only obey the wisdom from newspapers, so you bring them the newspaper article recommending pushing the man, and they will happily push him themselves.

Replies from: NancyLebovitz, ChristianKl
comment by NancyLebovitz · 2014-01-10T18:00:34.593Z · LW(p) · GW(p)

Maybe instead of focusing on details of quoting in Wikipedia, we should be looking at how to write things which are sufficiently sharp and interesting that they keep getting quoted.

comment by ChristianKl · 2014-01-10T14:43:29.038Z · LW(p) · GW(p)

The idea is that if you can't get your quote published outside of Wikipedia, you shouldn't put it in Wikipedia. Preferably, "outside" shouldn't be your own blog, but something more respectable.

Yes, having it published outside would be the start. I live in a world where it's easy for me to get things about Quantified Self published in relevant sources. I just have trouble placing something about my father there to get his Wikipedia article factually correct. While he lived, he wanted our press stories without entanglement.

It might be that I underrate the difficulty that MIRI has with getting something published 'outside'. I would expect that it should be easy to find a Wired journalist who is happy to write such a story.

But even if you cannot find an actual journalist, write the article yourself. Most newspapers do publish meaningful op-eds. I would be surprised if you couldn't find a newspaper willing to publish it. It's free content for them, and MIRI is sort of authoritative, so there's no reason not to publish the article provided it's well written.

Getting something published in the Guardian's Comment is free section is also really easy and might be enough that most Wikipedia editors consider it 'outside'.

Okay, let's talk about consequences. You add a MIRI quote to Wikipedia, someone deletes it. You add it again, someone deletes it again and quotes some Wikipedia rule. You add it again and perhaps even say that you consider Wikipedia rules irrelevant in this specific case.

I probably wouldn't engage in an edit war. I nowhere argued that you should be stupid about making the edit.

Yes, I do agree that you would want to go down the road of getting the quote into some newspaper before you edit the Wikipedia article. Given that the article is about a topic with a lot of interest, that's what you need to do to make the edit stick.

Replies from: David_Gerard, Viliam_Bur
comment by David_Gerard · 2014-01-13T10:31:16.914Z · LW(p) · GW(p)

CiF might actually be a good place to get an op-ed placed. Note that they happily put a stupid headline on (its motto might as well be "Trolling is Free, Clicks are Sacred") and hack up the text, all while putting your picture on, not paying you, and bringing on the faeces-flinging monkeys in the comments (which one should never, ever read). But it might be of interest to them.

comment by Viliam_Bur · 2014-01-10T18:38:35.794Z · LW(p) · GW(p)

I live in a world where it's easy for me to get things about Quantified Self published in relevant sources.

How much of that is because of the relevance of QS to those sources, and how much is your skill? I mean, if your skill plays an important role, perhaps you could volunteer for MIRI or CFAR as a media person. For example, they would give you the materials they produced, and you would try to get them into the media (not Wikipedia, for the beginning).

It might be that I underrate the difficulty that MIRI has with getting something published 'outside'. I would expect that it should be easy to find a Wired journalist who is happy to write such a story.

I don't know such details about MIRI; I am on the other side of the planet. You would have to ask them whether they are satisfied with their media output. Maybe they are, maybe they are not. Maybe they consider it a better use of their time to focus on something else (AI research), but would appreciate it if someone else pushed their material to the media. This is just my guess, but I think it's worth asking. (Specifically: ask lukeprog.)

But even if you cannot find an actual journalist, write the article yourself.

This is another way you could be helpful. Again, ask them. But I think that having a volunteer who pushes your material to media, and is good at doing it, is a great help for any organization.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-10T20:08:50.602Z · LW(p) · GW(p)

How much of that is because of the relevance of QS to those sources, and how much is your skill? I mean, if your skill plays an important role, perhaps you could volunteer for MIRI or CFAR as a media person.

It's difficult to judge your own skill. I was in the right place at the right time and was therefore the first QS person to be featured in the German news media. I spoke in a way that was interesting, and from there other journalists continued to contact me.

MIRI's PR goals are also very different from those of QS. QS basically wins if you can motivate individuals to do QS and come to QS meetups; it's not necessary to convince the existing medical system that QS is good. MIRI, on the other hand, wins to the extent that it can convince AI researchers to change their ways and to the extent that it gets funders who donate money to increase its output.

MIRI PR has to take care to avoid antagonizing existing AI researchers. If I do QS PR, I don't want to associate with Big Pharma, and I can say things that might antagonize people.

I could imagine contributing something to CFAR PR if CFAR operated in Germany, but currently that's not the case. CFAR probably benefits from telling the story that it's the hot new thing that's much better than the awful status quo.

Should CFAR organise an event in Berlin, I could try to get a journalist to cover it.

But I think that having a volunteer who pushes your material to media, and is good at doing it, is a great help for any organization.

It's not really a matter of pushing, but a matter of framing it in a way that the media want. Authenticity matters a great deal, and if a journalist got the feeling that I was just pushing someone else's statements, the kind of work I did wouldn't work as well.

The mindset is much more that you have something they want and they have something you want.

A while ago I heard Jeff Hawkins, who started Palm, say that the best way to get VC funding is to play hard to get. The same thing might be true with regard to the media.

In this case I think the film provides a good opportunity for setting up such a relationship for MIRI. Start by being visible at the beginning as someone authoritative who has something interesting to say about the film.

Afterwards I would expect journalists to reach out to MIRI, and MIRI can provide them stuff that they want. That's different from MIRI trying to push something on journalists.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-01-10T22:15:07.352Z · LW(p) · GW(p)

Should CFAR organise an event in Berlin, I could try to get a journalist to cover it.

They are planning to do a workshop or a few of them in Europe. I don't know more details, though.

Covering the event would be useful for future workshops and for the local LW meetups.

comment by chaosmage · 2014-01-08T14:13:14.999Z · LW(p) · GW(p)

A Wikipedia page for an A-list science-fiction movie can get 10,000 views per day before release, will peak immediately after release, and then slowly taper off (Example) until it flatlines at around 1,000/day (Example). For comparison, the MIRI page gets about 50/day and Technological singularity gets about 2,000/day.

So yeah, that'd be an excellent place to link to lukeprog's comment from.

I would expect the Wikipedia page to be tightly monitored by the film's marketers, so any critical comment would have to fully meet the Wiki's relevance criteria in order to survive a series of edits, and a bunch of us would have to keep putting it back in if it gets removed.

Replies from: V_V
comment by V_V · 2014-01-08T16:02:23.668Z · LW(p) · GW(p)

Please don't use Wikipedia for advertisement/propaganda.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-08T16:36:15.645Z · LW(p) · GW(p)

There's a fine line between propaganda and adding meaningful content that refers the people who read the article to the right resources.

Replies from: David_Gerard
comment by David_Gerard · 2014-01-08T16:53:05.398Z · LW(p) · GW(p)

Wikipedia:Conflict of interest

Please don't do this.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-08T17:18:39.675Z · LW(p) · GW(p)

Could you make the case on the basis of utilitarian morals?

By the way, I substantially disagree with the Wikipedia policy as it stands. It prevents me from removing mistakes in cases where I have better information than some news reporter who writes something that's simply wrong. I think Citizendium's policy on the matter was better.

Replies from: David_Gerard
comment by David_Gerard · 2014-01-08T17:28:38.632Z · LW(p) · GW(p)

Could you make the case on the basis of utilitarian morals?

All spammers can justify spamming to themselves.

I think Citizendium's policy on the matter was better.

Funnily enough, one of these works and one is dead.

Replies from: ChristianKl, ChristianKl
comment by ChristianKl · 2014-01-08T17:49:00.684Z · LW(p) · GW(p)

Funnily enough, one of these works and one is dead.

If you claim that Wikipedia works in the sense that it effectively prevents interested parties from editing articles, I think you are wrong.

I think Wikipedia invites interested parties to edit it by providing no open means for them to get errors corrected.

Replies from: army1987
comment by A1987dM (army1987) · 2014-01-08T18:32:32.675Z · LW(p) · GW(p)

If you claim that Wikipedia works in the sense that it effectively prevents interested parties from editing articles, I think you are wrong.

I think he means that Wikipedia, unlike Citizendium, has managed to create a usable encyclopaedia.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-10T12:24:45.348Z · LW(p) · GW(p)

I think he means that Wikipedia, unlike Citizendium, has managed to create a usable encyclopaedia.

By making it easy for people to spam it. There are various reasons why Citizendium failed; I'm not claiming that it was perfect overall.

comment by ChristianKl · 2014-01-08T17:49:44.799Z · LW(p) · GW(p)

All spammers can justify spamming to themselves.

That's no utilitarian argument. I don't see why it should convince me at all.

Take it as a trolley problem. There are important issues where people die, and there are issues where one just acts out of tribal loyalty. In this case I see no good reason for tribal loyalty, given what's at stake.

Replies from: Lumifer, David_Gerard
comment by Lumifer · 2014-01-08T18:02:23.258Z · LW(p) · GW(p)

There are important issues where people die

Like attempting to do a PR campaign for a non-profit via Wikipedia by piggybacking onto a Hollywood big-budget movie..?

Replies from: ChristianKl
comment by ChristianKl · 2014-01-09T00:29:49.956Z · LW(p) · GW(p)

Like attempting to do a PR campaign for a non-profit via Wikipedia by piggybacking onto a Hollywood big-budget movie..?

I do consider the effect of shifting public perception on an existential risk issue by a tiny bit to be worth lives. UFAI is on the road to killing people. I do think you are failing to multiply if you think that isn't worth lives.

Replies from: gjm, Lumifer
comment by gjm · 2014-01-09T12:47:52.624Z · LW(p) · GW(p)

It looks as if you're assuming that the overall PR effect of having MIRI or MIRI supporters add links from the Wikipedia article about Transcendence to comments from MIRI would be positive, or at least that it's more likely to be positive than negative.

I don't think that is a safe assumption.

As David says, one quite likely outcome is that a bunch of people start to see MIRI as spammers and their overall influence is less rather than more.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-09T13:24:10.219Z · LW(p) · GW(p)

It looks as if you're assuming that the overall PR effect of having MIRI or MIRI supporters add links from the Wikipedia article about Transcendence to comments from MIRI would be positive, or at least that it's more likely to be positive than negative.

I agree that this is a question that deserves serious thought. But the issue of violating Wikipedia policy doesn't factor much into the calculation.

As David says, one quite likely outcome is that a bunch of people start to see MIRI as spammers and their overall influence is less rather than more.

It's quite natural behavior to add relevant quotations to a Wikipedia article. I wouldn't do it with an account that has no prior benign history, or through anonymous edits.

If you are a good citizen of the web, you probably do fix Wikipedia errors when you notice them, so you should have an account that doesn't look spammy. If you don't, then you probably leave the task to someone else who has a better grasp of Wikipedia.

Replies from: David_Gerard, gjm
comment by David_Gerard · 2014-01-09T15:22:13.548Z · LW(p) · GW(p)

It's quite natural behavior to add relevant quotations to a Wikipedia article. I wouldn't do it with an account that has no prior benign history, or through anonymous edits.

Good thing you're not discussing it in a public forum, then, where screencaps are possible.

comment by gjm · 2014-01-09T18:05:45.282Z · LW(p) · GW(p)

But the issue of violating Wikipedia policy doesn't factor much into the calculation.

The fact that the issue violates Wikipedia policy is an essential part of why doing as you propose would be likely to have a negative impact on MIRI's reputation.

(For the avoidance of doubt, I don't think this is the only reason not to do it. If you use something that has policies, you should generally follow those policies unless they're very unreasonable. But since ChristianKl is arguing that an expected-utility calculation produces results that swamp that (by tweaking the probability of a good/bad singularity), I think it's important to note that expected-utility maximizing doesn't by any means obviously produce the conclusions he's arguing for.)

comment by Lumifer · 2014-01-09T00:52:26.775Z · LW(p) · GW(p)

I do consider the effect of shifting public perception on an existential risk issue by a tiny bit to be worth lives.

So you are ready to kill people in order to shift the public perception of an existential risk issue by a tiny bit?

Replies from: ChristianKl
comment by ChristianKl · 2014-01-09T12:13:04.329Z · LW(p) · GW(p)

I never claimed to be a complete utilitarian. For that matter, I wouldn't push fat men off bridges.

As far as the Wikipedia policy goes, it's a policy that just doesn't matter much in the grand scheme of things. For what it's worth, for a long time I never touched the German Quantified Self article that contained a paragraph with my name.

I do, however, have personal reasons for opposing the Wikipedia policy: Wikipedia gets the cause of death of my father wrong, and I can't easily correct the issue because Wikipedia cites a news article with wrong information as its source.

Should a good opportunity arise, I will place the information somewhere citable and correct that error, and I won't feel bad about it.

The Wikipedia policy is designed in a way that encourages interested parties to edit anonymously, and I do think that Wikipedia deserves the edits from interested parties that it gets, until it adopts a more reasonable policy that allows interested parties to correct factual errors without planting the information somewhere and then editing against policy.

Replies from: Lumifer
comment by Lumifer · 2014-01-09T15:41:53.558Z · LW(p) · GW(p)

I am not talking about Wikipedia's policies.

You said "worth lives" -- what did you mean by that?

comment by David_Gerard · 2014-01-08T21:08:51.211Z · LW(p) · GW(p)

People are not going to die if you refrain from deliberately spamming Wikipedia. There should be a Godwin-like law about this sort of comparison. (That's quite apart from your failure to calculate the damage to MIRI's reputation if they become known as spammers.)

Instead, see if you can get organic coverage going. Can MIRI get press coverage about the issue, if they feel it's to their benefit to do so? (This should probably be something directed from MIRI itself.) Get journalists seriously talking about the Friendly AI issue? Should be able to be swung.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-09T00:26:05.360Z · LW(p) · GW(p)

Having the wrong experts on AI risk cited in the article at a critical juncture, when the public is developing an understanding of the issue, can result in people getting killed.

If it shifts the probability of a UFAI disaster by even 0.001%, that equals over a thousand lives saved in expectation. That's probably a bigger effect than the five people you save by pushing the fat man.
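
(To spell out the multiplication - a rough sketch, assuming a world population of about seven billion: 0.001% × 7×10^9 = 10^-5 × 7×10^9 = 70,000 expected lives, which is indeed well over a thousand.)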

The moral cost you pay by pushing the fat man is higher than the moral cost of violating Wikipedia norms. The benefit of getting the narrative right in the article about AI risk is probably much more valuable than the handful of people you save in the trolley example.

Replies from: private_messaging
comment by private_messaging · 2014-01-10T22:24:29.369Z · LW(p) · GW(p)

If it shifts the probability of a UFAI disaster by even 0.001%, that equals over a thousand lives saved in expectation. That's probably a bigger effect than the five people you save by pushing the fat man.

That kind of makes me wonder what you would do in a situation like the one depicted in the movie (and even if you wouldn't do anything, the more radical elements here, who do not discuss their ideas online any more, would).

There's even a chance that the terrorists in the movie are led by an uneducated, fear-mongering crackpot who primes them with invalid expected-utility calculations and trolley problems.

Having the wrong experts on AI risk cited in the article at a critical juncture, when the public is developing an understanding of the issue, can result in people getting killed.

The world's better at determining who the right experts are when conflict-of-interest rules are obeyed.