Guardian coverage of the Summit [link]

post by Dr_Manhattan · 2011-11-03T03:17:13.585Z · LW · GW · Legacy · 9 comments

http://www.guardian.co.uk/commentisfree/belief/2011/nov/02/ai-gods-singularity-artificial-intelligence?newsfeed=true


Comments sorted by top scores.

comment by betterthanwell · 2011-11-03T11:27:50.702Z · LW(p) · GW(p)

Singularitarians believe artificial intelligence will be humanity's saviour.
But they also assume AI entities will be benevolent.

Reading this, I wanted to scream: [CITATION NEEDED]!

Replies from: dbaupp
comment by dbaupp · 2011-11-04T01:57:07.308Z · LW(p) · GW(p)

I agree, but the people who are actively thinking about benevolence or otherwise are a proper subset of all "singularitarians".

Replies from: JoshuaZ
comment by JoshuaZ · 2011-11-04T02:13:52.870Z · LW(p) · GW(p)

Not necessarily a proper subset. I consider a Singularity to be unlikely, but P(Bad Singularity Occurs | Singularity Occurs) seems high to me. I'm disturbed by the fact that the smartest Singularitarians seem to be people who agree with this assessment, and that our primary disagreement is just on P(Singularity). I doubt I'm a representative sample, though, so to a close approximation your assessment does seem correct.

comment by JoshuaZ · 2011-11-03T03:38:24.770Z · LW(p) · GW(p)

Hmm, interesting how the last part does touch on AI risks but treats that as a counterargument against the Singularitarians. It seems the reporter didn't realize that there's a segment of the Singularitarians who see absolutely eye-to-eye with the writer on that. And the Hitchhiker's analogy is definitely an amusing one.

Replies from: Nectanebo
comment by Nectanebo · 2011-11-04T03:21:47.402Z · LW(p) · GW(p)

It's funny that she wrote about AI risks in the manner she did. I was a bit miffed at first because she did seem to be misrepresenting, or maybe over-generalising, those at the summit, especially since AI risk is something that many take very seriously. Her portrayal of this obliviousness via her analogy was kinda annoying for me because of its inaccuracy; it sure doesn't represent a lot of this community.

However, by taking this kind of stance while still mentioning the AI risk, could she be bringing more validity to the possibility of the singularity?

I mean, it's possible that a message of "these people are kinda weird and crazy, and even if they get their stuff to work it might not end well" gives the idea that the singularity has some kind of possibility of success, at least in comparison to a simple "these people are crazy, look at their crazy ideas and their weird ideals".

I'm not sure if I'm making sense, but what I'm saying is that this lady is at least getting people who might agree with the general tone of the rest of her article to also consider the possibility of a singularity, or at least the AI risk posed in the last paragraph or so. This could grant these concepts more validity than the average Guardian reader would give them if they had read about them somewhere else.

Replies from: JoshuaZ, lessdazed
comment by JoshuaZ · 2011-11-04T03:24:54.151Z · LW(p) · GW(p)

I mean, it's possible that a message of "these people are kinda weird and crazy, and even if they get their stuff to work it might not end well" gives the idea that the singularity has some kind of possibility of success, at least in comparison to a simple "these people are crazy, look at their crazy ideas and their weird ideals".

I can see how one might think that, but it reads a bit differently to me. It reads closer to an attempt to simply end on an interesting note, or alternatively as an example of belief overkill/arguments-as-soldiers, where the author is simply trying to marshal as many arguments as possible that sound like they go against the entire Singularity idea.

Replies from: Nectanebo
comment by Nectanebo · 2011-11-04T03:29:59.640Z · LW(p) · GW(p)

I would say that either of those was probably her intention; perhaps I'm being optimistic in hoping that she might have accidentally said something that gives even the tiniest amount of validity to an idea that I feel more people should care about.

comment by lessdazed · 2011-11-04T13:34:41.672Z · LW(p) · GW(p)

If one argues against a position one misunderstands, one might be arguing for the actual position.

comment by [deleted] · 2011-11-03T10:49:48.315Z · LW(p) · GW(p)

There's also some discussion of this in the link exchange thread.