Suggest alternate names for the "Singularity Institute"

post by lukeprog · 2012-06-19T04:42:04.892Z · LW · GW · Legacy · 159 comments

Once, a smart potential supporter stumbled upon the Singularity Institute's (old) website and wanted to know if our mission was something to care about. So he sent our concise summary to an AI researcher and asked if we were serious. The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves. (Even though SI researchers tend to be Moore's Law agnostics, and our concise summary says nothing about accelerating change.)

Of course, the 'singularity' we're talking about at SI is intelligence explosion, not accelerating change, and intelligence explosion doesn't depend on accelerating change. The term "singularity" used to mean intelligence explosion (or "the arrival of machine superintelligence" or "an event horizon beyond which we can't predict the future because something smarter than humans is running the show"). But with the success of The Singularity is Near in 2005, most people know "the singularity" as "accelerating change."

How often do we miss out on connecting to smart people because they think we're arguing for Kurzweil's curves? One friend in the U.K. told me he never uses the word "singularity" to talk about AI risk because the people he knows think the "accelerating change" singularity is "a bit mental."

LWers are likely to have attachments to the word 'singularity,' and the term does often mean intelligence explosion in the technical literature, but neither of these is a strong reason to keep the word 'singularity' in the name of our AI Risk Reduction organization. If the 'singularity' term is keeping us away from many of the people we care most about reaching, maybe we should change it.

Here are some possible alternatives, without trying too hard:

  • The Center for AI Safety
  • The I.J. Good Institute
  • Beneficial Architectures Research
  • A.I. Impacts Research

We almost certainly won't change our name within the next year, but it doesn't hurt to start gathering names now and do some market testing. You were all very helpful in naming "Rationality Group". (BTW, the winning name, "Center for Applied Rationality," came from LWer beoShaffer.)

And, before I am vilified by people who have as much positive affect toward the name "Singularity Institute" as I do, let me note that this was not originally my idea, but I do think it's an idea worth taking seriously enough to bother with some market testing.

159 comments

Comments sorted by top scores.

comment by betterthanwell · 2012-06-19T17:10:52.767Z · LW(p) · GW(p)

So I read this, and my brain started brainstorming. None of the names I came up with were particularly good. But I did happen to produce a short mnemonic for explaining the agenda and the research focus of the Singularity Institute.

A one word acronym that unfolds into a one sentence elevator pitch:

Crisis: Catastrophic Risks in Self Improving Software

  • "So, what do you do?"
  • "We do CRISIS research, that is, we work on figuring out and trying to manage the catastrophic risks that may be inherent to self improving software systems. Consider, for example..."

Lots of fun ways to play around with this term, to make it memorable in conversations.

It has some urgency to it, it's fairly concrete, it's memorable.
It compactly combines goals of catastrophic risk reduction and self improving systems research.

Bonus: You practically own this term already.

An incognito Google search gives me no hits for "Catastrophic Risks In Self Improving Software" when searched in quotes. Without quotes, top hits include the Singularity Institute, the Singularity Summit, and intelligencexplosion.com. Nick Bostrom and the Oxford group are also in there. I don't think he would mind too much.

Replies from: Jack, Michelle_Z, thomblake, Epiphany
comment by Jack · 2012-06-19T19:26:56.091Z · LW(p) · GW(p)

This is clever but sounds too much like something out of Hollywood. I'd prefer bland but respectable.

Replies from: betterthanwell, betterthanwell
comment by betterthanwell · 2012-06-22T12:26:18.105Z · LW(p) · GW(p)

This is clever but sounds too much like something out of Hollywood. I'd prefer bland but respectable.

I don't entirely disagree, but I do think Catastrophic Risks In Self-Improving Systems can be useful in pointing out the exact problem that the Singularity Institute exists to solve. I'm not at all sure that it would make a good name for the organisation itself. But I do perhaps think it would raise fewer questions, and be less confusing than The Singularity Institute for Artificial Intelligence or The Singularity Institute.

In particular, there would be little chance of confusion stemming from familiarity with Kurzweil's accelerating-change singularity.

There are lessons to be learned from Scientists are from Mars, the Public is from Earth, and first impressions are certainly important. That said, this description is less exaggerated than it may seem at first glance. The usage can be qualified in that the technical meanings of these words are established, mutually supportive and applicable.

Looking at the technical meaning of the words, the description is (perhaps surprisingly) accurate:

Catastrophe theory: Small changes in certain parameters of a nonlinear system can cause equilibria to appear or disappear, or to change from attracting to repelling and vice versa, leading to large and sudden changes of the behaviour of the system.

Risk is the potential that a chosen action or activity (including the choice of inaction) will lead to a loss (an undesirable outcome). The notion implies that a choice having an influence on the outcome exists (or existed).
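
A minimal worked illustration of the "equilibria appear or disappear" point (a sketch added for concreteness, not part of the quoted definitions): in the simplest fold catastrophe, with state $x$ and control parameter $a$,

\[ \dot{x} = a + x^{2}, \qquad x^{*} = \pm\sqrt{-a}, \]

the two equilibria $x^{*}$ exist only for $a \le 0$, one attracting and one repelling; at $a = 0$ they collide, and for $a > 0$ there are none. An arbitrarily small change in $a$ across zero therefore makes both equilibria vanish, and the system's behaviour changes abruptly and by a large amount.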

Is the CRISIS mnemonic / acronym overly dramatic?

Crisis: From Ancient Greek κρίσις (krisis, “a separating, power of distinguishing, decision, choice, election, judgment, dispute”), from κρίνω (krinō, “pick out, choose, decide, judge”)

A crisis is any event that is, or is expected to lead to, an unstable and dangerous situation affecting an individual, group, community or whole society. Crises are deemed to be negative changes in the security, economic, political, societal or environmental affairs, especially when they occur abruptly, with little or no warning. More loosely, it is a term meaning 'a testing time' or an 'emergency event'.

Usage: crisis (plural crises)

  • A crucial or decisive point or situation; a turning point.
  • An unstable situation, in political, social, economic or military affairs, especially one involving an impending abrupt change.
  • A sudden change in the course of a disease, usually at which point the patient is expected to recover or die.
  • (psychology) A traumatic or stressful change in a person's life.
  • (drama) A point in a drama at which a conflict reaches a peak before being resolved.

Perhaps CRISIS is overly dramatic in the common usage. But one would quite easily be able to explain how the use of this term is qualified, and this in itself gives an attractive angle to journalists. In the process they would, inadvertently in a sense, explain what the Singularity Institute does and why their work is important.

comment by betterthanwell · 2012-06-19T19:55:33.833Z · LW(p) · GW(p)

I don't entirely disagree. I think Catastrophic Risks In Self-Improving Systems could be useful in pointing out the exact problem that the Singularity Institute exists to solve. I'm not at all sure that it would make a good name for the organisation itself. But I do perhaps think it would raise fewer questions, and be less confusing than The Singularity Institute for Artificial Intelligence.

comment by Michelle_Z · 2012-06-19T18:14:57.574Z · LW(p) · GW(p)

I agree. That doesn't sound bad at all.

Replies from: betterthanwell
comment by betterthanwell · 2012-06-19T19:23:20.356Z · LW(p) · GW(p)

After thinking this over while taking a shower:

The CRISIS Research Institute — Catastrophic Risks In Self-Improving Systems
Or, more akin to the old name: Catastrophic Risk Institute for Self-Improving Systems

Hmm, maybe better suited as a book title than the name of an organization.

Replies from: faul_sname
comment by faul_sname · 2012-06-20T06:01:04.475Z · LW(p) · GW(p)

It would make an excellent book title, wouldn't it.

comment by thomblake · 2012-06-20T13:54:21.256Z · LW(p) · GW(p)

That's brilliant.

comment by Epiphany · 2012-08-21T03:52:51.293Z · LW(p) · GW(p)

Center for Preventing a C.R.I.S.I.S. A.I.

C.R.I.S.I.S. A.I. could be a new term also.

comment by Richard_Kennaway · 2012-06-19T21:10:48.748Z · LW(p) · GW(p)

LessDoomed.

Replies from: ChrisHallquist, MarkusRamikin
comment by ChrisHallquist · 2012-06-20T02:25:53.310Z · LW(p) · GW(p)

Upvoted for funny, but probably not a great name for a non-profit.

comment by MarkusRamikin · 2012-06-21T13:08:41.814Z · LW(p) · GW(p)

Clippy's Bane Institute.

comment by John_Maxwell (John_Maxwell_IV) · 2012-06-19T05:55:45.744Z · LW(p) · GW(p)

It's worth noting that your current name has advantages too; people who are interested in the accelerating change singularity will naturally run into you guys. These are people, some pretty smart, who are at home with weird ideas and like thinking about the far future. Isn't this how Louie found out about SI?

Maybe instead of changing your name, you could spin out yet another organization (with most of your current crew) to focus on AI risk, and leave the Singularity Institute as it is to sponsor the Singularity Summit and so on. My impression is that SI has a fairly high brand value, so I would think twice before discarding part of that. Additionally, I know at least one person assumed the Singularity Summit was all you guys did. So having the summit organized independently of the main AI risk thrust could be good.

Replies from: Alex_Altair, negamuhia
comment by Alex_Altair · 2012-06-19T06:25:28.841Z · LW(p) · GW(p)

The spin-off sounds a little appealing to me too, but the problem is that the Summit provides a lot of their revenue.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-06-19T23:12:07.429Z · LW(p) · GW(p)

Good point. Maybe this could continue to happen though with sufficiently clever lawyering.

comment by negamuhia · 2012-08-28T16:33:03.509Z · LW(p) · GW(p)

I agree. You should change the name iff your current name-brand is irreparably damaged. Isn't that an important decision procedure for org rebrands? I forget.

EDIT: Unless, of course, the brand is already irreparably damaged...in which case this "advice" would be redundant!

comment by faul_sname · 2012-06-19T05:12:52.719Z · LW(p) · GW(p)

Center for AI Safety most accurately describes what you do.

To be honest, the I. J. Good Institute sounds the most prestigious.

Beneficial Architectures Research makes you sound like you're researching earthquake safety or something similar. I don't think you necessarily need to shy away from the word "AI."

AI Impacts Research sounds incomplete, though I think it would sound good with the word "society," "foundation," or "institute" tacked onto either end.

Replies from: radical_negative_one
comment by radical_negative_one · 2012-06-19T05:32:23.611Z · LW(p) · GW(p)

IJ Good Institute would make me think that it was founded by IJ Good.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-06-19T11:18:54.638Z · LW(p) · GW(p)

I would suspect that it means "The Good Institute", something related to either philanthropy or religion, with a waving hand and smiling face the webmaster failed to mark properly as a Wingdings font. :D

comment by Jack · 2012-06-19T14:19:12.087Z · LW(p) · GW(p)

I really like Center for AI Safety.

The AI Risk Reduction Center

Center for AI Risk Reduction

Institute for Machine Ethics

Center for Ethics in Artificial Intelligence

And I favor this kind of name change pretty strongly.

Replies from: NancyLebovitz, Bugmaster
comment by NancyLebovitz · 2012-06-19T14:44:18.159Z · LW(p) · GW(p)

"Risk Reduction" is very much in the spirit of "Less Wrong".

comment by Bugmaster · 2012-06-19T17:11:52.538Z · LW(p) · GW(p)

I like "Institute for Machine Ethics", though some people could find the name a bit pretentious.

Replies from: Kaj_Sotala, Alex_Altair
comment by Kaj_Sotala · 2012-06-20T09:34:39.776Z · LW(p) · GW(p)

Machine Ethics is more associated with narrow AI, though.

comment by Alex_Altair · 2012-06-19T23:03:40.842Z · LW(p) · GW(p)

I think the word "machine" is too reminiscent of robots.

comment by wedrifid · 2012-06-19T18:26:37.528Z · LW(p) · GW(p)
  • Center for Helpful Artificial Optimizer Safety (CHAOS)
  • Center for Slightly Less Probable Extinction
  • Friendly Optimisation Of the Multiverse (FOOM)
  • Yudkowsky's Army
  • The Center for World Domination
  • Pinky and The Brain Institute
  • Cyberdyne Systems
Replies from: JGWeissman, Zetetic, thomblake, nshepperd, roll, Michelle_Z
comment by JGWeissman · 2012-06-19T18:33:13.162Z · LW(p) · GW(p)

The Center for World Domination

We prefer to think of it as World Optimization.

comment by Zetetic · 2012-06-19T20:21:07.909Z · LW(p) · GW(p)

Winners Evoking Dangerous Recursively Improving Future Intelligences and Demigods

Replies from: wedrifid
comment by wedrifid · 2012-06-19T20:22:09.124Z · LW(p) · GW(p)

I commit to donating $20k to the organisation if they adopt this name! Or $20k worth of labor, whatever they prefer. Actually, make that $70k.

Replies from: Zetetic
comment by Zetetic · 2012-06-19T21:50:34.750Z · LW(p) · GW(p)

You can donate it to my startup instead, our board of directors has just unanimously decided to adopt this name. Paypal is fine. Our mission is developing heuristics for personal income optimization.

comment by thomblake · 2012-06-20T13:56:14.675Z · LW(p) · GW(p)

Cyberdyne Systems

There's already a Cyberdyne making robotic exoskeletons and stuff in Japan.

comment by nshepperd · 2012-06-24T16:30:50.346Z · LW(p) · GW(p)

The Sirius Cybernetics Corporation?

comment by roll · 2012-06-21T04:47:59.973Z · LW(p) · GW(p)

Center for Helpful Artificial Optimizer Safety

What concerns me is the lack of research into artificial optimizers in general... Artificial optimizers are commonplace already; they are algorithms to find optimal solutions to mathematical models, not to optimize the real world in the manner that SI is concerned with (correct me if I am wrong). Furthermore, the premise is that such optimizers would 'foom', and I fail to see how foom is not a type of singularity.

Replies from: None
comment by [deleted] · 2012-06-26T09:29:12.946Z · LW(p) · GW(p)

Recent published SI work concerns AI safety. They have not recently published results on AGI, to whatever extent that is separable from safety research, for which I am very grateful. Common optimization algorithms do apply to mathematical models, but that doesn't limit their real world use; an implemented optimization algorithm designed to work with a given model can do nifty things if that model roughly captures the structure of a problem domain. Or to put it simply, models model things. SI is openly concerned with exactly that type of optimization, and how it becomes unsafe if enough zealous undergrads with good intentions throw this, that, and their grandmother's hippocampus into a pot until it supposedly does fantastic venture capital attracting things. The fact that SI is not writing papers on efficient adaptive particle swarms is good and normal for an organization with their mission statement. Foom was a metaphorical onomatopoeia for an intelligence explosion, which is indeed a commonly used sense of the term "technological singularity".

Replies from: roll
comment by roll · 2012-07-05T15:25:19.696Z · LW(p) · GW(p)

SI is openly concerned with exactly that type of optimization, and how it becomes unsafe

Any references? I haven't seen anything that is in any way relevant to the type of optimization that we currently know how to implement. SI is concerned with the notion of some 'utility function', which appears very fuzzy and incoherent - what is it, a mathematical function? What does it take as input and what does it produce as output? The number of paperclips in the universe is given as an example of a 'utility function', but you can't have 'the universe' as the input domain of a mathematical function. In an AI the 'utility function' is defined on the model rather than the world, and lacking a 'utility function' defined on the world, the work of ensuring correspondence between the model and the world is not an instrumental sub-goal arising from maximization of the 'utility function' defined on the model. This is a rather complicated, technical issue, and to be honest the SI stance looks indistinguishable from the confusion that would result from an inability to distinguish a function of the model from a property of the world, and the subsequent assumption that correspondence between model and world is an instrumental goal of any utility maximizer. (Furthermore, that sort of confusion would normally be expected as a null hypothesis when evaluating an organization so far outside the ordinary criteria of competence.)

edit: also, by the way, it would improve my opinion of this community if, when you think that I am incorrect, you would explain your thought rather than click the downvote button. While you may want to signal to me that "I am wrong" by pressing the vote button, that, without other information, is unlikely to change my view on the technical side of the issue. Keep in mind that one cannot be totally certain of anything, and while this may be a normal discussion forum that happens to be owned by an AI researcher who is being misunderstood due to poor ability to communicate the key concepts he uses, it might also be a support ground for pseudoscientific research, and the norm of substance-less disagreement would seem to be more probable in the latter than in the former.

comment by Michelle_Z · 2012-06-19T18:51:35.804Z · LW(p) · GW(p)

Creative and amusing, at least. :]

comment by shokwave · 2012-06-19T05:26:54.913Z · LW(p) · GW(p)

The obvious change, if "Singularity" has been co-opted, is the Institute for Artificial Intelligence (but IAI is not a great acronym).

Institute for Artificial Intelligence Safety lets you keep the S, but it's in the wrong spot. Safety Institution for Artificial Intelligence is off-puttingly incorrect.

The Institute for Friendly Artificial Intelligence (pron. eye-fay) is IFAI... maybe?

If you go with the Center for Friendly Artificial Intelligence you get CFAI, sort of parallel to CFAR (if that's what you want).

Oh! If associating with CFAR is okay, then what's really lovely is the Center for Friendly Artificial Intelligence Research, acronym as CFAIR. (You could even get to do cute elevator pitches asking people how they'd program their obviously well-defined "fairness" into an AI.)

Edit: I do agree that "Friendly" is not, on the whole, desirable. I prefer "Risk Reduction" to "Safety", because I think Safety might bring a little bit of the same unsophistication that Friendly would bring.

Replies from: wedrifid, daenerys, Multiheaded
comment by wedrifid · 2012-06-19T11:09:16.442Z · LW(p) · GW(p)

Center for Friendly Artificial Intelligence Research

Including "Friendly" is good for those that understand that it is being used as a jargon term with a specific meaning. Unfortunately it could give an undesirable impression of unsophisticated to the naive audience (which is the target).

Replies from: Dorikka
comment by Dorikka · 2012-06-19T11:50:14.807Z · LW(p) · GW(p)

I also strongly object to 'Friendly' being used in the name -- it's a technical term that I think people are very likely to misunderstand.

Replies from: RichardHughes
comment by RichardHughes · 2012-06-19T20:45:54.488Z · LW(p) · GW(p)

Agreed that people are very likely to misunderstand it - however, even the obvious, naive reading still creates a useful approximation of what it is you guys actually do. I would consider that misreading to be a feature, not a flaw, because the layman's reading produces a useful layman's understanding.

Replies from: Dorikka
comment by Dorikka · 2012-06-19T22:08:00.056Z · LW(p) · GW(p)

The approximation might end up being 'making androids to be friends with people', or some kind of therapy-related research. Seriously. Given that even many people involved with AGI research do not seem to understand that Friendliness is a problem, I don't think that the first impression generated by that word will be favorable.

It would be convenient to find some laymen to test on, since our simulations of a layman's understanding may be in error.

Replies from: RichardHughes
comment by RichardHughes · 2012-06-25T19:04:49.638Z · LW(p) · GW(p)

I have no ability to do any actual random selection, but you raise a good point - some focus group testing on laymen would be a good precaution to take before settling on a name.

comment by daenerys · 2012-06-19T05:42:45.151Z · LW(p) · GW(p)

upvoted for CFAIR

Replies from: MarkusRamikin
comment by MarkusRamikin · 2012-06-19T07:43:45.082Z · LW(p) · GW(p)

I hate CFAIR.

Replies from: tgb
comment by tgb · 2012-06-21T00:35:07.251Z · LW(p) · GW(p)

But then Eliezer and co. could be called CFAIRers!

Replies from: gwern
comment by gwern · 2012-06-21T01:02:21.650Z · LW(p) · GW(p)

As long as they don't pledge themselves or emulated instances of themselves for 10 billion man-years of labor.

comment by Multiheaded · 2012-06-19T08:25:17.280Z · LW(p) · GW(p)

So far I like IFAI best; it's concise and sounds like a logical update of SIAI.

"At first they were just excited about all kinds of singularities, now they've decided how to best get to one" is what someone who only ever heard the name "IFAI (formerly SIAI)" would think.

comment by siodine · 2012-06-19T18:05:07.933Z · LW(p) · GW(p)

Paraphrasing, I believe it was said by an SIer that "if uFAI wasn't the most significant and manipulable existential risk, then the SI would be working on something else." If that's true, then shouldn't its name be more generic? Something to do with reducing existential risk...?

I think there are some significant points in favor of a generic name.

  • Outsiders will more likely see your current focus (FAI) as the result of pruning causes rather than leaping toward your passion -- imagine if GiveWell were called GiveToMalariaCauses.

  • By attaching yourself directly with reducing existential risk, you bring yourself status by connecting with existing high status causes such as climate change. Moreover, this creates debate with supporters of other causes connected to existential risk -- this gives you acknowledgement and visibility.

  • The people you wish to convince won't be as easily mind-killed by research coming from "The Center for Reducing Existential Risk" or such.

Is it worth switching to a generic name? I'm not sure, but I believe it's worth discussing.

Replies from: shokwave, private_messaging
comment by shokwave · 2012-06-19T19:45:22.427Z · LW(p) · GW(p)

Is it worth switching to a generic name?

I feel like you could get more general by using the "space of mind design" concept....

Like an Institute for Not Giving Immense Optimisation Power to an Arbitrarily Selected Point in Mindspace, but snappier.

comment by private_messaging · 2012-06-19T19:17:20.062Z · LW(p) · GW(p)

Outsiders will more likely see your current focus (FAI) as the result of pruning causes rather than leaping toward your passion -- imagine if GiveWell were called GiveToMalariaCauses.

If it was pruning, that was a helluva lot of pruning in very little time early in SI history: it must be likely that an AI would be created that would have a properly grounded material goal (nobody knows how to do that nor has a need to ground goals), the extreme foom has to be possible (hyper-singularity), and the FAI has to be the only solution (I haven't seen SI working on figuring out how to use wireheading as a failsafe, or how to use lack of symbol grounding for safety for that matter, despite examples of both the theorem prover that wireheads and the AIXI that doesn't symbol ground and won't see shutdown of its physical hardware as resulting in lack of reward to its logical structure).

comment by ScottMessick · 2012-06-19T18:04:14.136Z · LW(p) · GW(p)

I have direct experience of someone highly intelligent, a prestigious academic type, dismissing SI out of hand because of its name. I would support changing the name.

Almost all the suggestions so far attempt to reflect the idea of safety or friendliness in the name. I think this might be a mistake, because for people who haven't thought about it much, this invokes images of Hollywood. Instead, I propose having the name imply that SI does some kind of advanced, technical research involving AI and is prestigious, perhaps affiliated with a university (think IAS).

Center for Advanced AI Research (CAAIR)

Replies from: None, roll
comment by [deleted] · 2012-06-21T03:50:01.438Z · LW(p) · GW(p)

This name might actually sound scary to people worried about AI risks.

comment by roll · 2012-06-21T04:40:38.382Z · LW(p) · GW(p)

Hmm, what do you think would have happened with that someone if the name had been more attractive and that person had spent more time looking into SI? Do you think that person wouldn't ultimately dismiss it? Many of the premises here seem more far-fetched than the singularity. I know that from our perspective it'd be great to have feedback from such people, but it wastes their time and it is unclear if that is globally beneficial.

comment by wedrifid · 2012-06-19T08:56:57.271Z · LW(p) · GW(p)

The Center for AI Safety

Like it. What you actually do.

The I.J. Good Institute

Eww. Pretentious and barely relevant. Some guy who wrote a paper in 1965. Whatever. Do it if for some reason you think prestigious sounding initials will give enough academic credibility to make up for having a lame irrelevant name. Money and prestige are more important than self respect.

Beneficial Architectures Research

Architectures? Word abuse! Why not go all the way and throw in "emergent"?

A.I. Impacts Research

Not too bad.

Replies from: None
comment by [deleted] · 2012-06-26T08:22:24.904Z · LW(p) · GW(p)

How is it word abuse? "Architecture" is much more informative than "magic" or "thingy"; it conveys that they investigate how putting together algorithms results in optimization. That differentiates them from Givewell, The United Nations First Committee, the International Risk Governance Council, The Cato Institute, ICOS, Club of Rome, the Svalbard Global Seed Vault, the Foresight Institute, and most other organizations I can think of that study global economic / political / ecological stability, x-risk reduction, or optimal philanthropy.

comment by James_Miller · 2012-06-19T05:28:41.829Z · LW(p) · GW(p)

Sell the naming rights.

Replies from: Jack, LucasSloan
comment by Jack · 2012-06-19T14:23:10.289Z · LW(p) · GW(p)

If you could sell it to a prestigious tech firm... "The IBM Institute for AI Safety" actually sounds pretty fantastic.

comment by LucasSloan · 2012-06-19T06:42:21.829Z · LW(p) · GW(p)

I think this comment is the first that I couldn't decide whether to upvote or downvote, but definitely didn't want to leave a zero.

Replies from: Manfred
comment by Manfred · 2012-06-19T08:22:38.036Z · LW(p) · GW(p)

Don't worry, I'll fix it.

comment by yli · 2012-06-19T12:30:16.364Z · LW(p) · GW(p)

The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves. (Even though SI researchers tend to be Moore's Law agnostics, and our concise summary says nothing about accelerating change.)

Worse, when you try to tell someone who already mainly associates the idea of the singularity with accelerating change curves about the distinctions between different types of singularity, they can, somewhat justifiably from their perspective, dismiss it as just a bunch of internal doctrinal squabbling among those loony people who obsess over technology curves, squabbling that it's really beneath them to investigate too deeply.

comment by NancyLebovitz · 2012-06-19T08:35:09.328Z · LW(p) · GW(p)

The Center for AI Safety-- best of the bunch. It might be clearer as The Center for Safe AI.

The I.J. Good Institute-- I have no idea what the IJ stands for.

Beneficial Architectures Research-- sounds like an effort to encourage better buildings.

A.I. Impacts Research-- reads like a sentence. It might be better as Research on AI Impacts.

Replies from: pjeby, Jayson_Virissimo, Douglas_Knight
comment by pjeby · 2012-06-19T17:10:04.056Z · LW(p) · GW(p)

It might be clearer as The Center for Safe AI

Indeed - it better implies that you're actually working towards safe AI, as opposed to just worrying about whether it's going to be safe, or lobbying for OSHA-like safety regulations.

comment by Jayson_Virissimo · 2012-06-19T08:46:02.804Z · LW(p) · GW(p)

The I.J. Good Institute-- I have no idea what the IJ stands for.

Irving John ("Jack").

I would guess that exactly zero of my non-Less Wronger friends have ever heard of I. J. Good.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2012-06-19T09:50:09.456Z · LW(p) · GW(p)

I would guess that exactly zero of my non-Less Wronger friends have ever heard of I. J. Good.

Which is fine; to everyone else, it's some guy's name, with moderately positive affect. I'd be less in favour of this scheme if the idea of intelligence explosion had first been proposed by noted statistician I J Bad.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-06-19T12:38:03.508Z · LW(p) · GW(p)

Now I have Johnny C Bad playing in my head.

(Well, not really, but it made for a fun comment.)

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2012-06-19T16:11:06.208Z · LW(p) · GW(p)

Better than Johnny D Ugly.

comment by Douglas_Knight · 2012-06-19T15:31:25.536Z · LW(p) · GW(p)

The I.J. Good Institute-- I have no idea what the IJ stands for.

Did you not understand that "I.J. Good" is a person's name? (Note that in this thread ciphergoth asserts that everyone recognizes the form as a name, despite your comment which I read as a counterexample.)

Replies from: NancyLebovitz, pjeby, TheOtherDave
comment by NancyLebovitz · 2012-06-19T16:11:54.657Z · LW(p) · GW(p)

At this point, I'm not sure what I was thinking. It's plausible that knowing what the initials meant would be enough to identify the person.

I'm pretty sure I was thinking "ok, I. J. Good founded a foundation, but who cares?".

comment by pjeby · 2012-06-19T17:08:25.040Z · LW(p) · GW(p)

Did you not understand that "I.J. Good" is a person's name?

Until I read the comment thread, I thought maybe it was facetious and stood for "It's Just Good".

comment by TheOtherDave · 2012-06-19T16:23:25.228Z · LW(p) · GW(p)

I can imagine, upon discovering that the "I.J.Good Institute" is interested in developing stably ethical algorithms, deciding that the name was some sort of pun... that it stood for "Invariant Joint Good" or some such thing.

comment by novalis · 2012-06-23T03:04:45.727Z · LW(p) · GW(p)

You are worried that the SIAI name signals a lack of credibility. You should be worried about what its people do. No, it's not the usual complaints about Eliezer. I'm talking about Will Newsome, Stephen Omohundro, and Ben Goertzel.

Will Newsome has apparently gone off the deep end: http://lesswrong.com/lw/ct8/this_post_is_for_sacrificing_my_credibility/6qjg The typical practice in these cases, as I understand it, is to sweep these people under the rug and forget that they had anything to do with the organization. This might not be the most intellectually honest thing to do, but it's more PR-minded than leaving them listed, and more polite than adding them to a hall of shame.

And, while the Singularity Institute is announcing that it is absolutely dangerous to build an AGI without proof of friendliness, two of its advisors, Omohundro and Goertzel, are, separately, attempting to build AGIs. Of course, this is only what I have learned from http://singularity.org/advisors/ -- maybe they have since changed their minds?

Replies from: wedrifid, Halfwit
comment by wedrifid · 2012-06-23T03:11:03.037Z · LW(p) · GW(p)

And, while the Singularity Institute is announcing that it is absolutely dangerous to build an AGI without proof of friendliness, two of its advisors, Omohundro and Goertzel, are, separately, attempting to build AGIs. Of course, this is only what I have learned from http://singularity.org/advisors/ -- maybe they have since changed their minds?

Goertzel is still there? I'm surprised.

comment by Halfwit · 2013-01-11T01:53:10.959Z · LW(p) · GW(p)

And now there are three: http://singularityhub.com/2013/01/10/exclusive-interview-with-ray-kurzweil-on-future-ai-project-at-google/

Replies from: novalis
comment by novalis · 2013-01-14T17:55:37.642Z · LW(p) · GW(p)

Does Kurzweil have anything to do with the Singularity Institute? Because I don't see him listed as a director or advisor on their site.

Replies from: Halfwit
comment by Halfwit · 2013-01-15T01:44:15.108Z · LW(p) · GW(p)

He was an adviser. But I see he no longer is. Retracted.

comment by Bugmaster · 2012-06-19T07:23:15.833Z · LW(p) · GW(p)

...but neither of these is a strong reason to keep the word 'singularity' in the name of our AI Risk Reduction organization.

Why not just call it that, then ? "AI Risk Reduction Institute".

comment by Vladimir_Nesov · 2012-06-19T12:53:45.447Z · LW(p) · GW(p)

"Safe" is a wrong word for describing a process of rewriting the universe.

(An old tweet of mine; not directly relevant here.)

comment by IlyaShpitser · 2012-06-19T16:13:39.417Z · LW(p) · GW(p)

I think something about "Machine ethics" sounds best to me. "Machine learning" is essentially statistics with a computational flavor, but it has a much sexier name. You think statistics and you think boring tables, you think "machine learning" and you think Matrix or Terminator.

Joke suggestions: "Mom's friendly robot institute," "Institute for the development of typesafe wishes" (ht Hofstadter).

Replies from: i77, ChrisHallquist, thomblake
comment by i77 · 2012-06-19T17:49:40.623Z · LW(p) · GW(p)

Singularity Institute for Machine Ethics.

Keep the old brand, add clarification about flavor of singularity.

comment by ChrisHallquist · 2012-06-20T02:28:02.603Z · LW(p) · GW(p)

I like this one a lot. Term that has a clear meaning in the existing literature.

comment by thomblake · 2012-06-20T13:51:07.395Z · LW(p) · GW(p)

But Machine Ethics generally refers to narrow AI - I think it's too vague (but then, "AI" might have the same problem).

comment by CommanderShepard · 2012-06-19T13:19:26.143Z · LW(p) · GW(p)

Cerberus

Replies from: None
comment by [deleted] · 2012-06-21T18:02:59.197Z · LW(p) · GW(p)

Ah yes, "Paperclip Maximizers..."

comment by knb · 2012-06-19T10:15:03.076Z · LW(p) · GW(p)

I think a name change is a great idea. I can certainly imagine someone being reluctant to associate their name with the "Singularity" idea even if they support what SIAI actually does. I think if I was a famous researcher/donor, I would be a bit reluctant to be strongly associated with the Singularity meme in its current degraded form. Yes, there are some high-status people who know better, but there are many more who don't.

Here is a suggestion: Center for Emerging Technology Safety. This name affiliates with the high-status term "emerging technology", while terms with "Singularity" and even "AI" often (unfairly, in my opinion) strike people as being crackpot/kooky. Admittedly, this is less descriptive than some other possible names (but more descriptive than "The I.J. Good Institute"), but descriptiveness isn't the most important factor. Rather, you should consider what kind of organization potential donors or (high-status) employees would like to brag about to their non-LW-reading friends/family at dinner parties.

Replies from: Plasmon
comment by Plasmon · 2012-06-19T18:17:28.751Z · LW(p) · GW(p)

I understand that the original name can be taken as overly techno-optimistic/Kurzweilian. IMHO this name errs on the other side; it sets off Luddite-detecting heuristics.

comment by David_Gerard · 2012-06-19T07:35:30.543Z · LW(p) · GW(p)

"Singularity Institue? Oh, Kurzweil!" It's as if he has a virtual trademark on the word. Yeah.

Replies from: private_messaging
comment by private_messaging · 2012-06-21T16:51:06.379Z · LW(p) · GW(p)

Come to think of it, the SIAI name worked in favour of my evaluation of SI. I sort of mixed up EY with Kurzweil, thought that EY had created some character recognition software and whatnot. Kurzweil is pretty low status, but it's not zero. What I see instead is a person who, by the looks of it, likely wouldn't even be able to implement belief propagation with loops in the graph, or at least never considered what's involved (as evident from the rationality/bayesianism stuff here, the Bayes vs science stuff, and so on). You know, if I were preaching rationality, I'd make a Bayes belief propagation applet with nodes and lines connecting them, for demonstration of possible failure modes too (and investigation of how badly incompleteness of the graph breaks it, as well as demonstration of NP-completeness in certain cases). I can do that in a week or two. edit: actually, perhaps I'll do that sometime. Or actually, I think there are such applications for medical purposes.

Replies from: David_Gerard
comment by David_Gerard · 2012-06-21T21:56:46.908Z · LW(p) · GW(p)

A simple open-source one would be an actually useful thing to show people failure modes and how not to be stupid.

Replies from: private_messaging
comment by private_messaging · 2012-06-22T00:13:26.109Z · LW(p) · GW(p)

Well, it won't be useful for making a glass-eyed 'we found truth' cult, because it'd actually kill the confidence, in the Dunning-Kruger way where the more competent are less confident.

The guys here haven't even wondered how exactly you 'propagate' when A is evidence for B and B is evidence for C and C is evidence for A (or when you only see a piece of a cycle, or several cycles intersecting). Or when there are unknown nodes. Or what happens outside the nodes that were added based on reachability or importance, or selected to be good for the wallet of the dear leader. Or how badly it breaks if some updates land on the wrong nodes. Or how badly it breaks when you ought to update on something outside the (known) graph but pick the closest-looking something inside. Or how low the likelihood of correctness gets when there's some likelihood of such errors. Or how difficult it is to ensure sane behaviour on partial graphs. Or how all kinds of sloppiness break the system entirely, making it arrive at spurious very high and very low probabilities.

People go into such stuff for immediate rewards - the "now I feel smarter than others" kind of stuff.
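
For concreteness, here is a minimal sketch (not anything SI or the commenters above have published) of the first failure mode mentioned: sum-product belief propagation run on a small graph containing a cycle, compared against brute-force marginals. The three-variable cycle, the potentials, and every name in the code are assumptions made up for illustration; on a tree this message-passing scheme is exact, but on the loop its beliefs generally drift away from the exact answer.

```python
# Illustrative only: loopy belief propagation on a 3-variable cycle vs. exact marginals.
# All potentials and names here are assumptions made up for this sketch.
import itertools
import numpy as np

coupling = 2.0                                        # neighbouring variables "like" to agree
psi = np.array([[coupling, 1.0], [1.0, coupling]])    # pairwise potential psi(x_a, x_b)
phi = np.array([[3.0, 1.0], [1.0, 1.0], [1.0, 1.0]])  # unary evidence, biased on variable 0
edges = [(0, 1), (1, 2), (2, 0)]                      # the cycle A -> B -> C -> A
directed = edges + [(b, a) for a, b in edges]

def exact_marginals():
    """Brute-force marginals by summing over all 2^3 joint states."""
    p = np.zeros((3, 2))
    for x in itertools.product([0, 1], repeat=3):
        w = np.prod([phi[v, x[v]] for v in range(3)])
        for a, b in edges:
            w *= psi[x[a], x[b]]
        for v in range(3):
            p[v, x[v]] += w
    return p / p.sum(axis=1, keepdims=True)

def loopy_bp(iters=100):
    """Sum-product messages m[a -> b](x_b); exact on trees, only approximate on this loop."""
    msgs = {e: np.ones(2) for e in directed}
    for _ in range(iters):
        new = {}
        for a, b in directed:
            incoming = phi[a].copy()                  # evidence at a ...
            for c, d in directed:
                if d == a and c != b:                 # ... times messages into a, except from b
                    incoming = incoming * msgs[(c, d)]
            m = psi.T @ incoming                      # sum over x_a of psi(x_a, x_b) * incoming(x_a)
            new[(a, b)] = m / m.sum()
        msgs = new
    beliefs = phi.copy()
    for a, b in directed:
        beliefs[b] = beliefs[b] * msgs[(a, b)]
    return beliefs / beliefs.sum(axis=1, keepdims=True)

print("exact marginals:\n", exact_marginals())
print("loopy BP beliefs:\n", loopy_bp())
```

Running it prints the enumerated marginals and the loopy-BP beliefs side by side; on a tree-structured graph the two would coincide, which is exactly the distinction such an applet could make visible.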

comment by JonathanLivengood · 2012-06-19T20:50:24.422Z · LW(p) · GW(p)

Semi-serious suggestions:

  • Intelligence Explosion Risk Research Group
  • Foundation for Obviating Catastrophes of Intelligence (FOCI)
  • Foundation for Evaluating and Inhibiting Risks from Intelligence Explosion (FEIRIE)
  • Center for Reducing Intelligence Explosion Risks (CRIER)
  • Society for Eliminating Existential Risks (SEERs) of Intelligence Explosion
  • Center for Understanding and Reducing Existential Risks (CURER)
  • Averting Existential Risks from Intelligence Explosion (AERIE) Research Group (or Society or ...)
comment by gwern · 2012-06-19T13:53:28.316Z · LW(p) · GW(p)

'A.I. Impact Institute', although that leads to the unfortunate acronym AIII...

Replies from: faul_sname, Risto_Saarelma
comment by faul_sname · 2012-06-19T15:09:05.329Z · LW(p) · GW(p)

Though it is a remarkably accurate imitation of the reactions of those first hearing about it.

comment by Risto_Saarelma · 2012-06-19T16:58:21.761Z · LW(p) · GW(p)

You might get away with using AI3.

comment by Pavitra · 2012-06-22T06:22:23.068Z · LW(p) · GW(p)

Do we actually have rigorous evidence of a need for name change? It seems that we're seriously considering an expensive and risky move on the basis of mere anecdote.

comment by Arran_Stirton · 2012-06-19T17:01:28.337Z · LW(p) · GW(p)

It's quite likely you can solve the problem of people misassociating SI with "accelerating change" without having to change names.

The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves.

What if the AI researcher read (or more likely, skimmed) the concise summary before responding to the potential supporter? At least this line in the first paragraph, “artificial intelligence beyond some threshold level would snowball, creating a cascade of self-improvements,” doesn’t necessarily make it obvious enough that SI isn’t about “accelerating change”. (In fact, it sounds a lot like an accelerating-change-type idea.)

In my opinion at least, you need to get any potential supporter/critic to make the association between the name "Singularity Institute" and what SI actually does (and its goals) as soon as possible. While changing the name could do that, "Singularity Institute" has many useful aesthetic qualities that a replacement name probably won't have.

On the other hand doing something like adding a clear tag-line about what SI does (e.g. “Pioneering safe-AI research”) to the header, would be a relatively cheap and effective solution. Perhaps rewriting the concise summary to discuss the dangers of a smarter-than-human AI before postulating the possibility of an intelligence explosion would also be effective; seeing as a smarter-than-human AI would need to be friendly, intelligence explosion or no.

comment by ChrisHallquist · 2012-06-19T09:17:42.457Z · LW(p) · GW(p)

AI Impacts Research seems to me the best of the bunch, because it's pretty easy to understand. People who know nothing about Eliezer's work can see it and think, "Oh, duh AI will have an impact, it's worth thinking about that." On the other hand:

  • Center for AI Safety: not bad, but people who don't know Eliezer's work might wonder why we need it (same thing with a name involving "risk")
  • The I.J. Good Institute: Sounds prestigious, but gives no information to someone who doesn't know who I.J. Good is.
  • Beneficial Architectures Research: meaningless to 99% of the population, agree with whoever said people will think it's about bridge design.

Somehow, "AI Impact Research" sounds better than "Impact," perhaps to avoid the "reads as a sentence" thing, or just because my brain thinks of "impact" as one (possibly very complex) thing. Also agree that "society" or "institute" could go in there somewhere.

Replies from: JonathanLivengood
comment by JonathanLivengood · 2012-06-19T15:45:52.963Z · LW(p) · GW(p)

The I.J. Good Institute: Sounds pretigious, but gives no information to someone who doesn't know who I.J. Good is.

And gives potentially wrong information to someone who does know who I.J. Good is but doesn't know about his intelligence explosion work.

comment by crazy88 · 2012-06-19T08:32:51.108Z · LW(p) · GW(p)

I actually suspect that the word "Singularity" serves as a way of differentiating you from the huge number of academic institutes to do with AI, so I'm not necessarily endorsing a change.

However, if you do change, I vote for something to do with the phrase "AI Risk" - your marketing spiel is about reducing risk and I think your name will attract more donor attention if people can see a purpose rather than a generic name. As such, I vote against "I.J. Good Institute".

I also think "Beneficial Architectures Research" is too opaque a name and suspect (though with less certainty) that suggestions to do with "Friendly AI" are also too opaque (the name might seem cuddly but I don't think it will have deeper meaning to those who don't already know what you do).

I think something like "The Center for AI Safety" or "The AI Risk Institute" (TARI) would be your best bet (if you did decide a change was a good move).

Clearly, though, that's simply a list of one person's opinions on the matter.

comment by beoShaffer · 2012-06-19T19:58:19.282Z · LW(p) · GW(p)

A.I. Safety Foundation

Center for existential risk reduction

Friendly A.I. Group

A.I. Ethics Group

Institute for A.I. ethics

comment by MarkusRamikin · 2012-06-19T05:01:42.425Z · LW(p) · GW(p)

Why did the "AI" part get dropped from "SIAI" again?

Replies from: VincentYu
comment by VincentYu · 2012-06-19T08:19:38.815Z · LW(p) · GW(p)

Zack_M_Davis on this:

(Disclaimer: I don't speak for SingInst, nor am I presently affiliated with them.)

But recall that the old name was "Singularity Institute for Artificial Intelligence," chosen before the inherent dangers of AI were understood. The unambiguous for is no longer appropriate, and "Singularity Institute about Artificial Intelligence" might seem awkward.

I seem to remember someone saying back in 2008 that the organization should rebrand as the "Singularity Institute For or Against Artificial Intelligence Depending on Which Seems to Be a Better Idea Upon Due Consideration," but obviously that was only a joke.

Replies from: wedrifid, Normal_Anomaly
comment by wedrifid · 2012-06-19T08:58:12.563Z · LW(p) · GW(p)

So essentially the problem with "SIAI" is the letter "f" in the middle.

comment by Normal_Anomaly · 2012-06-20T01:23:41.559Z · LW(p) · GW(p)

The Singularity Institute was for AI before it was against it! :P

comment by incariol · 2012-06-27T11:07:15.222Z · LW(p) · GW(p)

Mandate

"The Mandate is a Gnostic School founded by Seswatha in 2156 to continue the war against the Consult and to protect the Three Seas from the return of the No-God.

... [it] also differs in the fanaticism of its members: apparently, all sorcerers of rank continuously dream Seswatha's experiences of the Apocalypse every night ...

...the power of the Gnosis makes the Mandate more than a match for schools as large as, say, the Scarlet Spires."

No-God/UFAI, Gnosis/x-rationality, the Consult/AGI community? ;-)

Replies from: Multiheaded
comment by Multiheaded · 2012-06-27T11:33:47.684Z · LW(p) · GW(p)

Haha, we're gonna see a lot more of such comparisons as the community grows.

comment by Crux · 2012-06-20T04:25:13.277Z · LW(p) · GW(p)

Does this mean it's too late to suggest "The Rationality Institute for Human Intelligence" for the recent spin-off, considering the original may no longer run parallel to that?

Seriously though, and more to the topic, I like "The Center for AI Safety", not only because it sounds good and is unusually clear as to the intention of the organization, but also because it would apparently, well, run parallel with "The Center for Modern Rationality" (!), which is (I think) the name that was ultimately (tentatively?) picked for the spin-off.

comment by [deleted] · 2012-06-20T03:57:38.049Z · LW(p) · GW(p)

Center for AI Safety sounds excellent actually.

comment by metaweta · 2012-06-19T20:34:06.444Z · LW(p) · GW(p)

AI Ballistics Lab? You're trying to direct the explosion that's already underway.

comment by SarahSrinivasan (GuySrinivasan) · 2012-06-19T18:27:44.615Z · LW(p) · GW(p)

Center for General Artificial Intelligence Readiness Research

comment by [deleted] · 2012-06-19T18:15:37.129Z · LW(p) · GW(p)

The Last Organization.

comment by MarkusRamikin · 2012-06-19T09:59:44.931Z · LW(p) · GW(p)

Come to think of it, SI have a bigger problem than the name: getting a cooler logo than these guys.

/abg frevbhf

comment by JoshuaFox · 2012-06-19T06:19:52.275Z · LW(p) · GW(p)

More than that, many people in SU-affiliated circles use the word "Singularity" by itself to mean Singularity University ("I was at Singularity"), or else next-gen technology; and not any of the three definitions of the Singularity. These are smart, innovative people, but some may not even be familiar with Kurzweil's discussion of the Singularity as such.

I'd suggest using the name change as part of a major publicity campaign, which means you need some special reason for the campaign, such as a large donation (see James Miller's excellent idea).

comment by roll · 2012-06-21T05:05:10.362Z · LW(p) · GW(p)

A suggestion: it may be a bad idea to use the phrase 'artificial intelligence' in the name without qualifiers, since to serious people in the field

  • 'artificial intelligence' has a much, much broader meaning than what SI is concerning itself with

  • there is very significant disdain for the commonplace/'science fiction' use of 'artificial intelligence'

comment by patrickscottshields · 2012-06-19T16:38:16.800Z · LW(p) · GW(p)

I like "AI Risk Reduction Institute". It's direct, informative, and gives an accurate intuition about the organization's activities. I think "AI Risk Reduction" is the most intuitive phrase I've heard so far with respect to the organization.

  • "AI Safety" is too vague. If I heard it mentioned, I don't think I'd have a good intuition about what it meant. Also, it gives me a bad impression because I visualize things like parents ordering their children to fasten their seatbelts.
  • "Beneficial Architectures" is too vague. It's not clear it's AI-related.
  • "AI Impacts Research" is too vague and non-prescriptive. Unlike "AI Risk Reduction", it's ambiguous in its intentions.
comment by RobertLumley · 2012-06-19T15:02:00.986Z · LW(p) · GW(p)

I'll focus on "The Center for AI Safety", since that seems to be the most popular. I think "safety" comes across as a bit juvenile, but I don't know why I have that reaction. And if you say the actual words Artificial Intelligence, "The Center for Artificial Intelligence Safety" it gets to be a mouthful, in my opinion. I think a much better option is "The Center for Safety in Artificial Intelligence", making it CSAI, which is easily pronounced See-Sigh.

Replies from: mwengler
comment by mwengler · 2012-06-19T15:24:35.901Z · LW(p) · GW(p)

On the one hand, "The Center for AI Safety" really puts me off. Who would want to associate with a bunch of people who are worried about the safety of something that doesn't even exist yet? Certainly you want to be concerned with Safety, but it should be subsidiary to the more appealing goal of actually getting something interesting to work.

On the other hand, if I weren't trying to have positive karma, I would have zero or negative karma, suggesting I am NOT the target demographic for this institute. And if I am not the target demographic, changing the name is a good idea because I like SIAI.

Replies from: private_messaging
comment by private_messaging · 2012-06-21T16:42:39.948Z · LW(p) · GW(p)

Well, the AI exists, just not in the science fictional sense. E.g. there could be a centre working on the safety of self-driving cars and similar technology. I'm actually kind of curious how the 'change of direction' pans out; when people can't immediately classify this as crackpot based on the name, there will be concise summaries of the topic online created by those who wasted time looking into it.

comment by Stuart_Armstrong · 2012-06-19T14:29:45.568Z · LW(p) · GW(p)

You could reuse the name of the coming December conference, and go for AI Impacts (no need to add "institute" or "research").

comment by VincentYu · 2012-06-19T07:14:24.814Z · LW(p) · GW(p)

Retaining the meaning of 'intelligence explosion' without the word 'singularity':

comment by [deleted] · 2012-06-21T03:51:51.933Z · LW(p) · GW(p)

Center for AI Ethics Research

Center for Ethical AI

Singularity Institute for Ethical AI

comment by Nic_Smith · 2012-06-20T17:13:37.993Z · LW(p) · GW(p)

The Good Future Research Center

A wink to the earlier I.J. Good Institute idea, it matches the tone of the current logo while being unconfining in scope.

comment by A1987dM (army1987) · 2012-06-20T14:18:36.047Z · LW(p) · GW(p)

Institute for Friendly Artificial Intelligence (IFAI).

comment by Shmi (shminux) · 2012-06-19T18:38:45.676Z · LW(p) · GW(p)

It would be nice if the name reflected the SI's concern that the dangers come not just from some cunning killer robots escaping a secret government lab or a Skynet gone amok, or a Frankenstein monster constructed by a mad scientist, but from recursive self-improvement ("intelligence explosion") of an initially innocuous and not-very smart contraption.

I am also not sure whether the qualifier "artificial" conveys the right impression, as the dangers might come from an augmented human brain suddenly developing the capacity for recursive self-improvement, or from some other creation that does not look like a collection of silicon gates.

If I understand it correctly, SI wants to ensure "safe recursive self-improvement" of an intelligence of any kind, "safe" for the rest of the (human?) intelligences existing at that time, though not necessarily for the self-improver itself.

Of course, a name like "Society For Safe Recursive Self-Improvement" is both unwieldy and unclear to an outsider. (And the acronym sounds like parseltongue.) Maybe there is a way to phrase it better.

Replies from: wedrifid
comment by wedrifid · 2012-06-19T19:04:47.619Z · LW(p) · GW(p)

I am also not sure whether the qualifier "artificial" conveys the right impression, as the dangers might come from an augmented human brain suddenly developing the capacity for recursive self-improvement, or some other creation that does not look like a collection of silicon gates.

The Singularity Institute (folks) does consider the dangers to be from the "artificial" things. They don't (unless I am very much mistaken) consider a human brain to have the possibility to recursively self-improve. Whole Brain Emulation FOOMing would fall under their scope of concern but that certainly qualifies as "artificial".

comment by thomblake · 2012-06-19T15:13:13.587Z · LW(p) · GW(p)

I agree that something along the lines of "AI Safety" or "AI RIsk Reduction" or "AI Impacts Research" would be good. It is what the organization seems to be primarily about.

As a side-effect, it might deter folks from asking why you're not building AIs, but it might make it harder to actually build an AI.

I'd worry about funding drying up from folks who want you to make AI faster, but I don't know the distribution of reasons for funding.

comment by Gastogh · 2012-06-19T12:33:52.399Z · LW(p) · GW(p)

I'd prefer AI Safety Institute over Center for AI Safety, but I agree with the others that that general theme is the most appropriate given what you do.

comment by [deleted] · 2015-09-18T15:21:49.191Z · LW(p) · GW(p)

Going by the google suggest principle, how about the AI Safety Syndicate (ASS)

Replies from: gjm
comment by gjm · 2015-09-18T15:50:33.302Z · LW(p) · GW(p)

Leaving aside the facts that (1) they already changed their name and (2) they probably don't want to be called "ASS" and (3) that webpage looks as sketchy as all hell ... what principle exactly are you referring to?

The "obvious" principle is this: if you start typing something that possible customers might start typing into the Google search box, and one of the first autocomplete suggestions is your name, you win. But if I type "ai safety" into a Google search box, "syndicate" is not one of the suggestions that come up. (Not even if I start typing "syndicate".)

(Perhaps you mean that having a name that begins with "ai safety" is a good idea if people are going to be searching for "ai safety", which is probably true but has nothing to do with Google's search suggestions. And are a lot of people actually going to be searching for "ai safety"?)

comment by Halfwit · 2012-12-07T18:45:13.599Z · LW(p) · GW(p)

The Centre for the Development of Benevolent Goal Architectures

comment by [deleted] · 2012-06-21T22:18:27.668Z · LW(p) · GW(p)

Once, a smart potential supporter stumbled upon the Singularity Institute's (old) website and wanted to know if our mission was something to care about. So he sent our concise summary to an AI researcher and asked if we were serious. The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves. (Even though SI researchers tend to be Moore's Law agnostics, and our concise summary says nothing about accelerating change.)

For what it's worth, my instinct would be to send back a message (if I had the opportunity) saying, "Yes, I agree completely; I don't believe that Kurzweil's accelerating change argument has merit. In fact, I believe that most Singularity Institute researchers feel the same way. If you'd like to hear an argument in favor of FAI that does have merit, I'd suggest reading such-and-such."

Replies from: JGWeissman
comment by JGWeissman · 2012-06-21T22:36:19.119Z · LW(p) · GW(p)

That misses the point that SIAI only gets the chance to respond in such a way if the potential supporter actually contacts them and tells them the story. It makes you wonder how many potential supporters they never heard from because the supporter themselves, or someone the supporter asked for advice, rejected a misunderstanding of what SIAI is about.

comment by Rain · 2012-06-20T14:54:48.734Z · LW(p) · GW(p)

Can a moderator please deal with private_messaging, who is clearly here to vent rather than provide constructive criticism?

You currently have 290 posts on LessWrong and Zero (0) total Karma.

I don't care about opinion of a bunch that is here on LW.

Others: please do not feed the trolls.

comment by shokwave · 2012-06-20T06:45:32.811Z · LW(p) · GW(p)

Heh. It's a pretty rare organisation that does Research in Artificial Intelligence Risk Reduction.

(Artificial Intelligence Risk Reduction by itself might work.)

Replies from: thomblake
comment by thomblake · 2012-06-20T14:20:10.443Z · LW(p) · GW(p)

That name reminds me eerily of RAINN.

comment by [deleted] · 2012-06-20T03:48:47.573Z · LW(p) · GW(p)

Replies from: wedrifid
comment by wedrifid · 2012-06-20T05:00:21.183Z · LW(p) · GW(p)

"They Who Must Not Be Named"? Like it.

comment by lsparrish · 2012-06-20T02:28:49.417Z · LW(p) · GW(p)

Here are a few:

  • Protecting And Promoting Humanity In The Future
  • Society For An Ethical Post-Humanity
  • Studies For Producing A Positive Impact Through Deep Recursion
  • Rationality Institute For Self-Improving Information Technology

comment by Zaine · 2012-06-20T00:44:22.950Z · LW(p) · GW(p)

  • Remedial Investigation [or Instruction] of Safety Kernel for AI [or: 'for AGI'; 'for Friendly AI'; 'for Friendly AGI'; 'for AGI Research'; etc.] (RISK for AI; RISK for Friendly AI)
  • Friendly Architectures Research (FAR)
  • Sapiens Friendly Research (SFR - pronounced 'Safer')
  • Sapiens' Research Foundation (SRF)
  • Sapiens' Extinction [or Existential] Risk Reduction Cooperative [or Conglomerate] (SERRC)
  • Researchers for Sapiens Friendly AI (RSFAI)

comment by Jay_Schweikert · 2012-06-19T16:39:27.223Z · LW(p) · GW(p)

While the concise summary clearly associates SI with Good's intelligence explosion, nowhere does it specifically say anything about Kurzweil or accelerating change. If people really are getting confused about what sort of singularity you're thinking about, would it be helpful as a temporary measure to put some kind of one-sentence disclaimer in the first couple paragraphs of the summary? I can understand that maybe this would only further the association between "singularity" and Kurzweil's technology curves, but if you don't want to lose the word entirely, it might help to at least make clear that the issue is in dispute.

Also, on a separate subject, I notice that the summary presently has a number of "??" marks, presumably as a kind of formatting error. Just a heads-up. :)

comment by blogospheroid · 2012-06-19T10:36:46.647Z · LW(p) · GW(p)

Ok.

The Center for AI Safety and the Centre for Friendly Artificial Intelligence Research sound the most correct as of now.

If you wanted to aim for a more creative name, here are some:

Centre for Coding Goodness

Man's Best Friend Group (If the slightly implied sexism of "Man's" is Ok..)

The Artificial Angels Institute / Centre for Machine Angels - The word "angels" directly conveys goodness and superiority over humans, but due to its Christian origins and other associated imagery, it might be walking a tightrope.

Replies from: wedrifid
comment by wedrifid · 2012-06-19T13:19:18.830Z · LW(p) · GW(p)

Man's Best Friend Group (If the slightly implied sexism of "Man's" is Ok..)

You're naming your research institute after a pet-dog reference, and it's the non-gender-neutral word that seems like the problem?

Replies from: blogospheroid
comment by blogospheroid · 2012-06-20T09:21:48.361Z · LW(p) · GW(p)

They'll come for the dogs, they'll stay for the AI. :)

comment by RomeoStevens · 2012-06-19T05:52:49.594Z · LW(p) · GW(p)

Wasn't this discussed before?

Center for Applied Rationality Education?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2012-06-19T07:33:09.574Z · LW(p) · GW(p)

You're thinking of the CfAR naming. CfAR has been spun out as a separate organisation from SI.

Replies from: RomeoStevens
comment by RomeoStevens · 2012-06-19T07:59:55.282Z · LW(p) · GW(p)

ah yes.

comment by daenerys · 2012-06-19T05:51:12.084Z · LW(p) · GW(p)

the Best Ever Association For Rational AI Development. ;)

comment by level · 2012-06-19T14:34:46.333Z · LW(p) · GW(p)

Kurzweil Institute

comment by private_messaging · 2012-06-19T15:29:12.792Z · LW(p) · GW(p)

What you are doing here is called psychopathic behaviour outside utilitarian circles (and even utilitarians can see utility in crackpottery being easy to identify). Please stop helping to strip a crackpot research organization of all the telltale signs that help to inexpensively identify it as crackpot. The world is a better place when the researchers themselves choose names and the names are indicative of the likelihood of crankery.

It is not OK to replace the goal of AI safety (with its subgoal of 'we need more money if our work helps improve AI safety, and we need to dissolve otherwise') with the sub-goal of collecting more money, the conditional having been lost. That leads to behaviour identical to your unfriendly paperclip maximizer, which once made paperclips for the purpose of helping African schoolchildren or something.

edit: Furthermore, you guys do nothing about the safety of e.g. self-driving cars, nor plan to do anything of that kind. You're no "Center for AI Safety" (or anything of the sort); that much is a fact. You're a "Centre for preventing a risk few believe is real, exclusively via making a solution (FAI) that even fewer believe will work" (and not even a centre but more like a fringe). Changing your name to a more general one that would seem to imply you are contributing to e.g. the safety of self-driving cars (or any other non-singularity AI scenario) is nothing short of lying.

Replies from: drethelin
comment by drethelin · 2012-06-19T16:20:22.393Z · LW(p) · GW(p)

Despite the source and tone, these comments make sense to me. It looks suspicious when an organization tries to change its name to a more euphemistic version of its old one, and for most people AI refers to narrow AI and not singularity-causing AI.

Replies from: shminux, private_messaging
comment by Shmi (shminux) · 2012-06-20T16:54:05.497Z · LW(p) · GW(p)

I don't see this as an attempt to mislead. The Technological Singularity is currently a halo-tainted sensationalist term inextricably tied to Kurzweil. The mission of SI is to mitigate the potential x-risk from a recursively self-improving intelligence, which is rather different from what Vinge and Kurzweil had in mind, and rather more mundane. While I am not sold on everything SI says or does, I can see how a name better reflecting what SI is actually about can be useful.

Replies from: private_messaging
comment by private_messaging · 2012-06-21T11:22:32.141Z · LW(p) · GW(p)

The mission of SI is to mitigate the potential x-risk from a recursively self-improving intelligence, which is rather different from what Vinge and Kurzweil had in mind and rather more mundane.

In which way? Recall the EY vs Hanson foom debate. The foom is a singularity by an AI in a basement on commodity hardware, a couple of weeks to extremely superhuman, and all the 'work' done here assumes something so strongly superhuman that you don't need to be concerned with algorithmic complexity or anything; only the goals matter (and the goals are somehow physical, like the number of paperclips or the number of staples). The idea is taken from Vinge, by the way.

If anything, the name "Singularity Institute" is already too broad, as the organization is only concerned with a particularly extreme form of technological singularity.

There's a crucial difference between the strongly anti-social process of changing the string to maximize donations and a reasonably good-willed name change: in a reasonable name change, first you redefine the goals and make some plan, then you pick a name reflecting the goals (and what you are actually doing). E.g. you change your mind and decide to dedicate some of the work to something like self-driving car safety. Then, in light of this broader focus, you come up with the new name "AI safety institute" or something similar. You keep what you're actually planning on doing in sight and you try to summarize it. In the anti-social process, you sit and model society's response, and you look at how society responded, and you try to come up with the best string, typically hiding any specifics behind euphemisms because, ultimately, being specific lets people know more about you quicker.

comment by private_messaging · 2012-06-19T17:23:34.944Z · LW(p) · GW(p)

They sit together in a circle and think - okay, we lost a donor because of the name, let's change the name, let's come up with a good name. Never mind descriptiveness, never mind not trying to mislead, never mind that the cause of the loss was the name being descriptive. That's called swindling people out of their money. Especially if you go ahead and try to interfere with how it is to be evaluated, to eliminate the possibility that 'if the researchers are cranks we won't get money because the researchers will demonstrate themselves to be cranks'. If anyone asks me whether it's worth donating there, I'll say no, it's just a bunch of sociopaths who sat in a circle and thought about how to improve their appearance, but haven't done anything technical that they could have failed at if they lacked technical ability, haven't even sat and worked on something technical to improve their appearance. I won't even say 'it's probably cranks'. It's beyond honest crankery now.

edit: Or maybe it is actually a good thing. Call yourselves "Centre for AI Safety"; then it is easily demonstrated that you don't work on self-driving car safety or anything of that kind, ergo, a bunch of fraudsters.

Replies from: Rain
comment by Rain · 2012-06-19T20:02:35.442Z · LW(p) · GW(p)

You currently have 290 posts on LessWrong and Zero (0) total Karma.

This is a poor way to accomplish your goal.

Replies from: army1987, private_messaging
comment by A1987dM (army1987) · 2012-06-20T15:20:16.712Z · LW(p) · GW(p)

Negative total karma scores are displayed as 0.

Replies from: Rain
comment by Rain · 2012-06-20T15:55:02.655Z · LW(p) · GW(p)

Yes, I know; he's -51 for the last 30 days.

comment by private_messaging · 2012-06-20T04:46:40.820Z · LW(p) · GW(p)

I don't care about opinion of a bunch that is here on LW. Also, that goal was within that particular thread. At this point I am expressing my opinion about this whole anti-social activity of sitting, looking at how a string was processed, and making another string as to maximize donations (and the general enterprise of looking at "why people think we're cranks" and changing just the appearance). Centre for AI safety, huh. No one there has ever done anything that doesn't rely on the extreme singularity scenario (FOOM), and yet it's a centre for AI safety, something that from the name ought to work on the safety of self-driving cars. (You may not care about my opinion, which is totally fine.)

Replies from: Rain, faul_sname
comment by Rain · 2012-06-20T14:42:43.928Z · LW(p) · GW(p)

You currently have 290 posts on LessWrong and Zero (0) total Karma.

I don't care about opinion of a bunch that is here on LW.

I suppose it's too much to ask that a moderator get involved with someone who is clearly here to vent rather than provide constructive criticism.

comment by faul_sname · 2012-06-20T06:39:26.201Z · LW(p) · GW(p)

And do you think this "activity of sitting, looking at how a string was processed, and making another string as to maximize donations" works to increase donations?

Replies from: private_messaging
comment by private_messaging · 2012-06-20T07:54:41.894Z · LW(p) · GW(p)

I dunno if it works; it ought to work if you are rational, but it can easily backfire in many ways. It is unfriendly to society at large in much the same way that a paperclip maximizer is unfriendly, sans the power.

Replies from: Rain
comment by Rain · 2012-06-20T14:52:03.524Z · LW(p) · GW(p)

Can a moderator please deal with private_messaging, who is clearly here to vent rather than provide constructive criticism?

You currently have 290 posts on LessWrong and Zero (0) total Karma.

I don't care about opinion of a bunch that is here on LW.

Others: please do not feed the trolls.