What are YOU doing against risks from AI?

post by XiXiDu · 2012-03-17T11:56:44.852Z · LW · GW · Legacy · 38 comments

Contents

  Why are you not doing more?
38 comments

This is directed at those who agree with SIAI but are not doing everything they can to support its mission.

Why are you not doing more?

Comments where people proclaim that they have contributed money to SIAI are upvoted 50 times or more. 180 people voted for 'unfriendly AI' as the most fearsome risk.

If you are one of those people and are not fully committed to the cause, I am asking you, why are you not doing more?

38 comments

Comments sorted by top scores.

comment by orthonormal · 2012-03-17T18:57:13.463Z · LW(p) · GW(p)

Less Wrong needs better contrarians.

Replies from: Thomas, XiXiDu
comment by Thomas · 2012-03-17T20:03:23.432Z · LW(p) · GW(p)

A contrarian is never good enough. When he is, he is no longer a contrarian. Or you've become one of his kind.

Replies from: wedrifid
comment by wedrifid · 2012-03-18T12:41:33.763Z · LW(p) · GW(p)

A contrarian is never good enough. When he is, he is no longer a contrarian. Or you've become one of his kind.

I don't believe you. Is it really true that it is not possible to be a contrarian and be respected?

Replies from: Thomas
comment by Thomas · 2012-03-18T13:11:55.297Z · LW(p) · GW(p)

You can be respected for properties other than your contrarianism, if all those other attributes prevail against your funny belief, whatever it was.

You don't respect somebody who claims that some centuries were artificially inserted into the official history and never actually happened. If that is all you know about him, you can hardly respect him, unless you are inclined to believe it too.

When you learn it is Kasparov, you probably still think highly of him.


Replies from: wedrifid
comment by wedrifid · 2012-03-18T13:24:00.604Z · LW(p) · GW(p)

Let's consider "only possible to be respected for completely different fields" to be a falsification of my position. I'll demarcate the kind of respect required as "just this side of Will_Newsome". I can't quite consider my respect for Will to fit into specific respect within the lesswrong namespace, due to disparities in context-relevant beliefs beyond a threshold. But I can certainly imagine a contrarian who is slightly less extreme and who is respect-worthy even at the local level.

I think part of the problem with identifying contrarians who can be respected is that people who disagree because they are correct, or because they have thought well but differently about a specific issue (rather than being merely contrary in nature), will seldom also disagree on most other issues. We then end up with many people who are contrarian about a few things but mainstream about most, and those people don't usually get called contrarians. If they did, I could claim to be one myself.

comment by XiXiDu · 2012-03-18T12:05:34.669Z · LW(p) · GW(p)

Less Wrong needs better contrarians.

Like who? Robin Hanson, who is told that he makes no sense? Or multifoliaterose who is told that he uses dark arts? Or Ben Goertzel, who is frequently ridiculed? Or some actual researcher? Not even Douglas Hofstadter seems good enough.

The community doesn't seem to be able to stand even minor disagreement. Take, for example, timtyler, who agrees with most of what Eliezer Yudkowsky says, including about risks from AI as far as I can tell. Here is what Eliezer Yudkowsky has to say:

I think that asking the community to downvote timtyler is a good deal less disruptive than an outright ban would be. It makes it clear that I am not speaking only for myself, which may or may not have an effect on certain types of trolls and trolling.

or

Timtyler is trolling again, please vote down.

Or what about PhilGoetz, a top contributor?

I declined to answer because I gave up arguing with Phil Goetz long before there was a Less Wrong, back in the SL4 days - I'm sorry, but there are some people that I don't enjoy debating.

There are many more examples. It often only takes the slightest criticism for people to be called trolls or accused of using dark arts.

Replies from: orthonormal, wedrifid, Rain
comment by orthonormal · 2012-03-18T15:56:07.397Z · LW(p) · GW(p)

What wedrifid said: just because someone on Less Wrong got angry at a contrarian and called them something doesn't mean Less Wrong holds them in low regard or ignores them.

Multifoliaterose angered a bunch of people (including me) with his early posts, because he thought he could show major logical flaws in reasoning he'd never encountered. But after a while, he settled down, understood what people were actually saying, and started writing criticisms that made sense to his intended audience. He is what I'd consider a good contrarian: I often end up disagreeing with his conclusions, but it's worth my while to read his posts and re-think my own reasoning.

Robin Hanson is another good contrarian: although I can point to where I fundamentally disagree with him on the possibility of hard takeoff (and on many other things), I still read everything he posts because he often brings up ideas I wouldn't have thought of myself.

Phil Goetz is kind of a special case: I find he has interesting things to say unless he's talking about metaethics or decision theory.

One thing that these three have in common is that they read and understood the Sequences, so they're criticizing what Less Wrong contributors actually think rather than straw-man caricatures. By contrast, you wrote a post accusing us of failing to Taboo "intelligence" that shows complete ignorance of the fact that Eliezer explicitly did that in the Sequences. A good contrarian would have looked to see whether it was already discussed, and if they still had a critique after reading the relevant discussion, they would have explained why the previous discussion was mistaken.

I wish that the better contrarians would post more often, and I wish that you would develop the habit of searching for past discussion before assuming that our ideas come from nowhere. And while I'm at it, I wish for a jetpack.

comment by wedrifid · 2012-03-18T12:37:48.695Z · LW(p) · GW(p)

Like who? Robin Hanson, who is told that he makes no sense? Or multifoliaterose who is told that he uses dark arts? Or Ben Goertzel, who is frequently ridiculed? Or some actual researcher?

All three links from me? I am flattered that single comments of mine can be considered representative of the position of the entire lesswrong community. I personally don't consider single cherry-picked comments representative of even the entirety of my own position, but if it means I get to be representative of an entire community, I might relent.

Someone less biased might interpret my comments as replies to a specific post or comment and the faults therein, and not at all as objections to the authors being contrarian. For example, they might look at multifoliaterose's earlier, more direct threads and find me supporting multi and giving Eliezer a good scolding. Finding wedrifid citing Robin Hanson positively would be even easier.

On the other hand, you didn't manage to find any links of wedrifid ridiculing Ben Goertzel's work, the one case where you would have been representing me correctly. I'm almost certain I have criticized something he said in the past, since I hold his work in low esteem and in particular recall his few contributions made directly to lesswrong being substandard.

It often only takes the slightest criticism for people to be called trolls or accused of using dark arts.

Alternately it could be said that it takes only the slightest criticism of a particular argument or piece of work for people to cry 'conspiracy'.

comment by Rain · 2012-03-18T13:33:12.432Z · LW(p) · GW(p)

"Troll" is not a property of a person; it is an activity, one of many which a person may exhibit over time.

comment by jimrandomh · 2012-03-17T15:38:50.080Z · LW(p) · GW(p)

It took all of sixty seconds (starting from the link in your profile) to find:

Whenever I'm bored or in the mood for word warfare, I like to amuse myself by saying something inflammatory at a few of my favorite blogospheric haunts. -- Sociopathic Trolling by Sean the Sorcerer

Please leave.

comment by ArisKatsaris · 2012-03-17T12:35:09.504Z · LW(p) · GW(p)

So again you said that you'll log out and try not to come back for years, and yet you return less than a day later with yet another post filled with implicit bashing of the LessWrong community. Sometimes it may be hard not to respond to another's words, but is it truly so hard not to make new posts?

If you're going to talk about the discrepancy between stated thoughts and actual deeds, why don't you ask about your own?

comment by jimrandomh · 2012-03-17T18:51:20.288Z · LW(p) · GW(p)

tl+troll;dr

comment by Grognor · 2012-03-17T12:05:47.946Z · LW(p) · GW(p)

It's surprisingly hard to motivate yourself to save the world.

Edit: highly related comment by Mitchell Porter.

comment by play_therapist · 2012-03-17T21:22:36.858Z · LW(p) · GW(p)

The only thing I've done recently is send money to the Singularity Institute. I did, however, give birth to and raise a son who is dedicated to saving the world. I'm contemplating changing my user name to Sarah Connor. :)

Replies from: MileyCyrus
comment by MileyCyrus · 2012-03-18T07:47:40.161Z · LW(p) · GW(p)

Congratulations!

I'm at that point in life where I'm thinking about whether I should have kids in the future. It's good to know there are people who have managed to reproduce and still find money to donate.

comment by BrandonReinhart · 2012-03-18T00:08:12.150Z · LW(p) · GW(p)

Carl Shulman has convinced me that I should do nothing directly (in terms of labor) on the problem of AI risks, and should instead become successful elsewhere and then direct resources toward the problem as I am able.

However, I believe I should:

  • continue to educate myself on the topic
  • try to learn to be a better rationalist, so that when I do have resources I can direct them effectively
  • work toward being someone who can gain access to more resources
  • find ways to better optimize my lifestyle

At one point I seriously considered running off to San Francisco to be in the thick of things, but I now believe that would have been a strictly worse choice. Sometimes the best thing you can do is what you already do well, and hope to direct the proceeds toward helping people, even when that feels selfish, disengaged, or remote.

comment by WrongBot · 2012-03-17T19:22:58.134Z · LW(p) · GW(p)

Right now I'm on a career path that will, with reasonable probability, lead to me making lots of money. I intend to give at least 10% of my income to existential risk reduction (FHI or SI, depending on the current finances of each) for the foreseeable future.

I wish I could do more. I'm probably smart/rational enough to contribute to FAI work directly in at least some capacity. But while that work is extremely important, it doesn't excite me, and I haven't managed to self-modify in that direction yet, though I'm working on it. Historically, I've been unable to motivate myself to do unexciting things for long periods of time (and that's another self-modification project).

I'm not doing more because I am weak. This is one of the primary motivations for my desire to become stronger.

comment by Giles · 2012-03-17T17:38:09.013Z · LW(p) · GW(p)

Also, since the title and the post seem to be asking completely different questions, I'll answer the other question too.

  • Donating (not much though - see my list of reasons)
  • Started Toronto LW singularity discussion group
  • Generally I try to find time to understand the issues as best I can
  • I hang out on LW and focus particularly on the AI discussions

No significant accomplishments so far though.

comment by Brihaspati · 2012-03-17T13:45:38.742Z · LW(p) · GW(p)

I think it may be time for Less Wrongers to begin to proactively, consciously ignore this troll. Hard.

comment by Giles · 2012-03-17T17:32:06.268Z · LW(p) · GW(p)

My reasons, roughly sorted with most severe at top:

  • Personal reasons which I don't want to disclose right now
  • Akrasia
  • Communities of like-minded people are either hard to find or hard to get into
  • Not knowing what I should be doing (relates to "communities")
  • Finding time (relates to "personal" and "akrasia")
comment by Rain · 2012-03-17T16:03:23.747Z · LW(p) · GW(p)

Because I am soooooo lazy.

Seriously. I've got a form of depression which manifests as apathy.

Particularly ironic since I'm the one linked to as an example of doing a lot. Though I got more than twice as many upvotes for a pithy quote, which has also been the top comment on LessWrong for more than a year.

Replies from: Tripitaka
comment by Tripitaka · 2012-03-17T18:22:12.467Z · LW(p) · GW(p)

For what it's worth, I especially remembered you because of this comment of yours, which reflects my thoughts on the matter completely, and also because of a comment by Eliezer_Yudkowsky which I cannot find right now. There he says something like: "nobody here should feel too good about themselves, because they spend only an insignificant portion of their income; user Rain is one of the few exceptions, he is allowed to." That was in the aftermath of the Singularity Challenge. (In the event that my account of said comment is grossly wrong, I apologize in advance to everybody who feels wronged by it.)

Replies from: Rain
comment by Rain · 2012-03-17T20:07:32.291Z · LW(p) · GW(p)

You're likely thinking of this comment.

comment by Nectanebo · 2012-03-17T16:51:56.450Z · LW(p) · GW(p)

This is a very important question, and one I have wanted to ask Less Wrongers for a while as well.

Personally, I am still not entirely convinced by the general idea, and I have a niggling feeling that keeps me very cautious about doing anything in response.

This is because of the mismatch between the magnitude of the idea's supposed importance and how few people are interested in it. Yes, this is a fallacy, but dammit, why not!?

So I bring attention to Less Wrong and the technological singularity at least as much as I can. I want to know whether this truly is as important as it supposedly is. I am gathering information and withholding a decision for the moment (perhaps irrationally).

But I genuinely think that if I am eventually convinced beyond some threshold, I will start being much more proactive about this matter (or at least I hope so). And for those people in a similar boat to me, I suggest you do the same.

comment by Will_Newsome · 2012-03-19T08:21:22.553Z · LW(p) · GW(p)

If you are one of those people and are not fully committed to the cause, I am asking you, why are you not doing more?

To some extent because I am busy asking myself questions like: What are the moral reasons that seem as if they point toward fully committing myself to the cause? Do they actually imply what I think they imply? Where do moral reasons in general get their justification? Where do beliefs in general get their justification? How should I act in the presence of uncertainty about how justification works? How should I act in the presence of structural uncertainty about how the world works (both phenomenologically and metaphysically)? How should I direct my inquiries about moral justification and about the world in a way that is most likely to itself be justified? How should I act in the presence of uncertainty about how uncertainty itself works? How can I be more meta? What causes me to provisionally assume that being meta is morally justified? Are the causes of my assumption normatively justifiable? What are the properties of "meta" that make it seem important, and is there a wider class of concepts that "meta" is an example or special case of?

(Somewhat more object-level questions include:) Is SIAI actually a good organization? How do I determine goodness? How do baselines work in general? Should I endorse SIAI? What institutions/preferences/drives am I tacitly endorsing? Do I know why I am endorsing them? What counts as endorsement? What counts as consent? Specifically, what counts as unreserved consent to be deluded? Is the cognitive/motivational system that I have been coerced into or have engineered itself justifiable as a platform for engaging in inquiries about justification? What are local improvements that might be made to said cognitive/motivational system? Why do I think those improvements wouldn't have predictably-stupid-in-retrospect consequences? Are the principles by which I judge the goodness or badness of societal endeavors consistent with the principles by which I judge the goodness or badness of endeavors at other levels of organization? If not, why not? What am I? Where am I? Who am I? What am I doing? Why am I doing it? What would count as satisfactory answers to each of those questions, and what criteria am I using to determine satisfactory-ness for answers to each of those questions? What should I do if I don't have satisfactory answers to each of those questions?

Et cetera, ad infinitum.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-19T16:13:47.134Z · LW(p) · GW(p)

I'm loving the fact that "How do I determine goodness?" and "What counts as consent?" are, in this context, "somewhat more object-level questions."

comment by hairyfigment · 2012-03-17T22:51:05.329Z · LW(p) · GW(p)

I don't know what you believe I can do. I'm currently studying for the actuary exam on probability, so that I can possibly get paid lots of money to learn what we're talking about. (The second exam pertains to decision theory.) This career path would not have occurred to me before Less Wrong.

comment by drethelin · 2012-03-17T18:16:18.188Z · LW(p) · GW(p)

laziness

comment by snarles · 2012-03-18T00:55:40.336Z · LW(p) · GW(p)

I'm still in the exploration phase of the exploration/exploitation dichotomy, in which information is more important than short-term utility gains. Donating to SIAI is not expected to yield much information.
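
As a toy sketch of the regime I mean (everything here is invented for illustration; the "arms" and payoff estimates are not a real model of the decision), a high exploration rate in a bandit-style chooser is exactly the "information over short-term utility" phase:

```python
import random

def epsilon_greedy(estimates, epsilon):
    """With probability epsilon, pick a random arm to gather
    information (explore); otherwise pick the arm with the best
    current estimate (exploit)."""
    if random.random() < epsilon:
        return random.choice(list(estimates))
    return max(estimates, key=estimates.get)

# Made-up estimates of long-run value per unit of effort.
arms = {"donate": 0.1, "study the issues": 0.5, "build a career": 0.4}

# Exploration phase: epsilon is high, so most choices are made to
# gain information rather than to maximize short-term utility.
print(epsilon_greedy(arms, epsilon=0.9))
```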

comment by Thomas · 2012-03-17T18:49:56.190Z · LW(p) · GW(p)

I still hold that the risks caused by the absence of a superintelligence are greater than those induced by the presence of one.

So, if you want to do something good, you should maybe act according to this probable fact.

The question for me is: what am I doing to bring about the technological singularity?

Not enough, that's for sure.

comment by Dmytry · 2012-03-17T12:16:21.519Z · LW(p) · GW(p)

Nothing. The arguments for any particular course of action have very low external probabilities (which I assign when I see equally plausible but contradictory arguments), resulting in very low expected utilities, even if the bad AI is presumed to do some drastically evil stuff versus a good AI doing some drastically nice stuff. There are many problems for which efforts have a larger expected payoff.
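
To sketch the arithmetic (all numbers are made up for illustration): when contradictory arguments are equally plausible, the probabilities I assign to acting helping or hurting are both tiny and roughly cancel, so even astronomical stakes yield a small expected utility next to mundane problems with near-certain payoffs:

```python
def expected_utility(p_good, u_good, p_bad, u_bad):
    """Expected utility of acting, given small probabilities that the
    act brings about the good or the bad outcome."""
    return p_good * u_good + p_bad * u_bad

p_helps = p_hurts = 0.001            # equally plausible, opposing arguments
u_utopia, u_catastrophe = 1e9, -1e9  # drastically nice vs. drastically evil

print(expected_utility(p_helps, u_utopia, p_hurts, u_catastrophe))  # 0.0
print(expected_utility(0.9, 1e3, 0.1, 0.0))  # 900.0: a mundane effort
# with a near-certain modest payoff has the larger expected value.
```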

edit:

I do subscribe to the school of thought that the irregular connectionist AIs (neural networks, brain emulations of various kinds, and the like) are the ones least likely to engage in a highly structured effort like maximizing some scary utility to the destruction of everything else. I'm very dubious that such an agent could have foresight good enough to decide that humans are not worth preserving, given general "gather more interesting information" heuristics.

Meanwhile, the design space near FAI is a minefield of monster AIs, and a bugged FAI represents a worst-case scenario. There is a draft of my article on the topic. Note: I am a software developer, and I am very sceptical about our ability to write an FAI that is not bugged, as well as about our ability to detect substantial problems in an FAI's goal system, since regardless of its goal system the FAI will do all it can to pretend to be working correctly.

Replies from: Alex_Altair
comment by Alex_Altair · 2012-03-17T12:33:34.400Z · LW(p) · GW(p)

There is a draft of my article on the topic.

I can't see this draft. I think only those who write them can see drafts.

Replies from: Dmytry
comment by Dmytry · 2012-03-17T12:35:44.696Z · LW(p) · GW(p)

Hmm, weird. I thought the hide button would hide it from the public, and the un-hide button would unhide it. How do I make it public as a draft?

Replies from: Multiheaded
comment by Multiheaded · 2012-03-17T14:33:54.608Z · LW(p) · GW(p)

Just post it to Discussion and immediately use "Delete". It'll still be readable and linkable, but not seen in the index.

Replies from: Dmytry
comment by Dmytry · 2012-03-17T15:04:06.902Z · LW(p) · GW(p)

Hmm, can you see it now? (I of course kept a copy of the text on my computer, in case you were joking, so I do have the draft reposted as well.)

Replies from: Rain
comment by Rain · 2012-03-17T16:51:11.807Z · LW(p) · GW(p)

It is now readable at the previous link, yes.

comment by Thomas · 2012-03-17T12:48:09.759Z · LW(p) · GW(p)

I am glad you are staying around, really. Though I don't agree with you, I don't agree with SIAI either, and one CAN discuss things with you.