Posts

[LINK] Inferring the rate of psychopathy from roadkill experiment 2012-07-20T20:10:23.755Z

Comments

Comment by JaneQ on Uploading: what about the carbon-based version? · 2012-07-27T11:44:07.496Z · LW · GW

Legally, a mind upload differs from any other medical scan only in quantity, and a simulation of a brain is only quantitatively different from any other data processing - just as cryopreservation is only a form of burial.

Furthermore, while it would seem better to magically have mind uploading completely figured out without any experimentation on human mind uploads, we aren't writing a science fiction/fantasy story; we are actually building the damn thing in the real world, where things tend to go wrong.

edit: also, a rather strong point can be made that it is more ethical to experiment on a copy of yourself than on a copy of your cat or of any other not-completely-stupid mammal. Consent matters.

Comment by JaneQ on Uploading: what about the carbon-based version? · 2012-07-26T13:59:32.869Z · LW · GW

With regard to animal experimentation before the first upload and so on: a running upload is nothing but fancy processing of a scan of, most likely, a cadaver brain - legally no different from displaying that brain on a computer - and doesn't require any sort of FDA-style stringent functionality testing on animals. Not that such testing would help much for a brain that is much bigger, with different neuron sizes and with failure modes that are highly non-obvious in animals. Nor is such regulation even necessary: a scanned upload of a dead person, functional enough to recognize his family, is a definite improvement over being completely dead, and preventing it would be like mercy-killing accident victims who have a good prospect of full recovery, to spare them the mere discomfort of being sick.

Gradual progress on humans is pretty much a certainty, once one drops the wide-eyed optimism bias. There are enough people who would bite the bullet, and it is not human experimentation - it is mere data processing - and it might only come to count as human experimentation decades after the first functional uploads.

Comment by JaneQ on Uploading: what about the carbon-based version? · 2012-07-26T13:54:16.420Z · LW · GW

Heh. Well, xenon itself is not radioactive; radon is. Xenon is inert, but it dissolves in cell membranes, changing their electrical properties.

Comment by JaneQ on Uploading: what about the carbon-based version? · 2012-07-26T13:47:50.344Z · LW · GW

That was more of a note on Dr_Manhattan's comment.

With regard to 'economic advantage': the advantage has to outgrow overall growth for the condition of the carbon originals to decline. Also, you may want to read Accelerando by Charles Stross.

Comment by JaneQ on [LINK] Nick Szabo: Beware Pascal's Scams · 2012-07-25T19:34:34.291Z · LW · GW

will be capable of learning from feedback.

My comment on the Dunning-Kruger effect is roughly the second-highest-ranked comment in my post history.

Also: this thread is too weird.

Comment by JaneQ on Work on Security Instead of Friendliness? · 2012-07-25T10:38:10.499Z · LW · GW

If we have an AGI, it will figure out what problems we need solved and solve them.

Only a friendly AGI would. The premise for funding SI is not that they will build a friendly AGI. The premise is that there is an enormous risk that someone else will, for no particular reason, add this whole 'valuing the real world' thing into an AI without adding any friendliness, actually restricting its generality when it comes to doing anything useful.

Ultimately, the SI position is: input from us, the idea guys with no achievements (outside philosophy), is necessary for a team competent enough to build a full AGI not to kill everyone, and therefore you should donate. (Previously, the position was that you should donate so that we build FAI before someone builds UFAI, but Luke Muehlhauser has been generalizing to non-FAI solutions.) That notion is rendered highly implausible when you pin down the meaning of AGI, as we did in this discussion. For a UFAI to happen and kill everyone, a team potentially vastly more competent and intelligent than SI has to fail spectacularly.

Only if his own thing isn't also your own thing.

That will require either a simulation of me or a brain implant that effectively makes it an extension of me. I do not want the former, and the latter is IA.

Comment by JaneQ on Uploading: what about the carbon-based version? · 2012-07-24T12:25:16.193Z · LW · GW

Or what if the 'mountain people' are utterly microscopic mites on a tiny ball hurtling through space? Oh wait, that's reality.

Sidenote: I doubt mind uploads scale all the way up, and it appears quite likely that amoral mind uploads would be unable to get along with their own copies, so I am not very worried about the first upload having any sort of edge. The first upload will probably be crippled and on the brink of insanity, suffering from hallucinations and otherwise broken thought (after massively difficult work just to get it to be conscious rather than go into a simulated seizure). From that you might progress to sane but stupefied uploads, with a very significant IQ drop. Get a whiff of xenon to see what a small alteration to the electrical properties of neurons amounts to. It will take a lot of gradual improvement before there are well-working uploads, and even then I am pretty sure that nearly anyone would be utterly unable to massively self-improve in any meaningful way on their own without supervision, rather than just screwing themselves into insanity; a sane person shouldn't even attempt it, because if your improvement makes things worse then the next improvement will make things worse still, and one needs external verification.

Comment by JaneQ on Work on Security Instead of Friendliness? · 2012-07-24T08:36:02.900Z · LW · GW

The 'predicted effects on external reality' is a function of prior input and internal state.

The idea of external reality is not incoherent. The idea of valuing external reality with a mathematical function is.

Note, by the way, that valuing the 'wire in the head' is also a type of 'valuing external reality' - not 'external' in the sense of the wire being outside the box that runs the AI, but external in the sense of the wire being outside the AI's algorithm. When that point is discussed here, SI seems to magically acquire an understanding of the distinction between outside an algorithm and inside an algorithm, in order to argue that wireheading won't happen. The confusion between model and reality appears and disappears at the most convenient moments.

Comment by JaneQ on Work on Security Instead of Friendliness? · 2012-07-24T08:18:16.275Z · LW · GW

nor highly useful (in a singularity-inducing sense).

I'm not clear on what we mean by 'singularity' here. If we had an algorithm that worked on well-defined problems, we could solve practical problems. edit: like improving that algorithm, mind uploading, etc.

Building an AGI may not be feasible. If it is, it will be far more effective than a narrow AI,

Effective at what? Would it cure cancer sooner? I doubt it. An "AGI" with a goal of its own, resisting any control, is a much narrower AI than the AI that basically solves systems of equations. Whom would I rather hire: an impartial math genius who solves the tasks you specify for him, or a brilliant murderous sociopath hell-bent on doing his own thing? The latter's usefulness (to me, that is) is incredibly narrow.

and far more dangerous.

Besides being effective at being worse than useless?

That's why it's primarily what SIAI is worried about.

I'm not quite sure there is a 'why' or a 'what' in that 'worried'.

Comment by JaneQ on [LINK] Nick Szabo: Beware Pascal's Scams · 2012-07-24T06:02:58.895Z · LW · GW

Both an argument and its opposite cannot lead to the same conclusion unless the argument is completely irrelevant to the conclusion.

It's not an argument and its opposite. One of the assumptions in either argument is 'opposite'; that could make the distinction between those two assumptions irrelevant, but the arguments themselves remain very relevant.

I take as 'other alternatives' everyone who could have worked on AI risk but didn't, because I consider not working on AI risk now to be an alternative. Some other people take as 'other alternatives' only those working on precisely the kind of AI risk reduction that SI works on. In which case the absence of alternatives - under that meaning of 'alternatives' - is evidence against SI's cause, against the idea that one should work on such AI risk reduction now. There should be no way to change the meanings of words - against the same world - and arrive at a different conclusion; that only happens if you are exercising rationalization and rhetoric. In electromagnetism, if you change the right-hand rule to the left-hand rule, every conclusion stays the same; in reasoning, if you wiggle what counts as an 'alternative', that should not change the conclusion either.

This concludes our discussion. Pseudo-logic derived from formal maxims and employing the method of collision (as in this case, colliding 'assumption' with 'argument') is too annoying.

Comment by JaneQ on [LINK] Nick Szabo: Beware Pascal's Scams · 2012-07-23T07:45:15.196Z · LW · GW

What is this, the second coming of C.S. Lewis and his trilemma? SI must either be completely right and demi-gods who will save us all or they must be deluded fools who suffer from some psychological bias - can you really think of no intermediates between 'saviors of humanity' and 'deluded fools who cannot possibly do any good', which might apply?

No, it comes out of what SI members claim about themselves and their methods - better than science, 'we are more rational', etc. That really drives down the probability of anything in the middle between the claimed excellence and the incompetence compatible with making such claims (you need sufficient incompetence to claim extreme competence). If they didn't want that sort of dichotomy, they should have kept their extreme arrogance from surfacing. (Or, alternatively, they wanted this dichotomy, to drive some people into fallacies of politeness.)

Alternatives would also be evidence against donating, too, since what makes you think they are the best one out of all the alternatives? Curious how either way, one should not donate!

Do you have a disagreement besides fairly stupid rhetoric? The lack of alternatives is genuinely evidence against SI's cause, whereas the presence of alternatives would genuinely make it unlikely that any particular one of them is necessary. Yep, it's very curious, and very inconvenient for you. The logic is sometimes impeccably against what you like. Without some seriously solid evidence in favour of SI, it is a Pascalian wager, as the chance of SI making a difference is small.

Comment by JaneQ on Work on Security Instead of Friendliness? · 2012-07-23T07:32:42.319Z · LW · GW

Actually, this is an example of something incredibly irritating about this entire singularity topic: verbal sophistry of no consequence. What you call 'powerful' has absolutely zero relation to anything. A powerful drill doesn't tend to do something significant regardless of how you stop it. Neither does a powerful computer. Nor should a powerful intelligence.

Comment by JaneQ on [LINK] Nick Szabo: Beware Pascal's Scams · 2012-07-22T12:34:16.445Z · LW · GW

What is the reasonable probability you think I should assign to the proposition, by some bunch of guys (with at most some accomplishments in the highly non-gradable field of philosophy) led by a person with no formal education, no prior job experience, and no quantifiable accomplishments, that they should be given money to hire more people to develop their ideas on how to save the world from a danger they are most adept at seeing? The prior here is so laughably low that you can hardly find a study so flawed it wouldn't be a vastly better explanation for SI's behavior than its mission statement taken at face value, even if we do not take SI's prior record into account.

So you're just engaged in reference class tennis. ('No, you're wrong because the right reference class is magicians!')

The reference class is not up for grabs. If you want a narrower reference class, you need to substantiate why it should be so narrow.

edit: Actually, sorry, that comes across as unnecessarily harsh. But do you recognize that SI genuinely has a huge credibility problem?

Donations to SI only make sense if we assume SI has an extremely rare ability to improve survival against the technological risks. Low priors for extremely rare anything are a tautology, not an opinion. The lack of other alternatives is evidence against SI's cause.

Comment by JaneQ on Work on Security Instead of Friendliness? · 2012-07-22T11:44:06.054Z · LW · GW

the idea of a system that attempts to change its environment so as to maximize the prevalence of some X remains a useful idea.

The prevalence of X is defined how?

And if I extend the aspects of its environment that the system can manipulate to include its own hardware or software, or even just its own tuning parameters, it seems to me that there exists a perfectly crisp, measurable distinction between a system A that continues to increase the prevalence of X in its environment, and a system B that instead manipulates its own subsystems for measuring X.

In A, you confuse your model of the world with the world itself: in your model of the world you have a possible item, 'paperclip', and you can therefore easily imagine maximization of the number of paperclips inside your model of the world, complete with the AI necessarily trying to improve its understanding of the 'world' (your model). With B, you construct a falsely singular alternative of a rather broken AI, and see a crisp distinction between two irrelevant ideas.

The practical issue is that the 'prevalence of some X' cannot be specified without a model of the world; you cannot have a function without specifying its input domain, and 'reality' is never the input domain of a mathematical function; the notion is not only incoherent but outright nonsensical.
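A minimal sketch of that point, with hypothetical names: the only thing a "count of X" function can actually be defined over is some explicit model data structure, not "reality" itself.

```python
# Minimal sketch (hypothetical names): a "prevalence of X" value can only be
# written as a function over some explicit model of the world - its input
# domain is a data structure, never "reality".

from dataclasses import dataclass

@dataclass
class WorldModel:
    """A toy internal model: just an inventory of objects the AI represents."""
    objects: dict[str, int]  # e.g. {"paperclip": 3, "cat": 1}

def paperclip_prevalence(model: WorldModel) -> int:
    # Defined over WorldModel instances; whether any WorldModel corresponds
    # to the actual world is a separate, unformalized question.
    return model.objects.get("paperclip", 0)

belief = WorldModel(objects={"paperclip": 3, "cat": 1})
print(paperclip_prevalence(belief))  # 3 - a count inside the model, not in reality
```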

If any part of that is as incoherent as you suggest, and you're capable of pointing out the incoherence in a clear fashion, I would appreciate that.

The incoherence of such poorly defined concepts cannot be demonstrated when no attempt has been made to make the notions specific enough to rationally assert their coherence in the first place.

Comment by JaneQ on Work on Security Instead of Friendliness? · 2012-07-22T08:22:06.859Z · LW · GW

Be specific about what the input domain of the 'function' in question is.

And yes, there is a difference: one is well defined and is what AI research works towards; the other is part of an extensive AI-fear rationalization framework, where it is confused with the notion of generality of intelligence, so as to presume that practical AIs will maximize "somethings", followed by the notion that pretty much all "somethings" would be dangerous to maximize. Utility is a purely descriptive notion; an AI that decides on actions is a normative system.

edit: To clarify, intelligence is defined here as a 'cross-domain optimizer' that would therefore be able to maximize something vague without it having to be coherently defined. It is similar to knights of the round table worrying that an AI would literally search for the Holy Grail, because to said knights the abstract and ill-defined goal of the Holy Grail appears entirely natural; meanwhile, for systems more intelligent than said knights, such a confused goal, due to its incoherence, is impossible to define.

Comment by JaneQ on Work on Security Instead of Friendliness? · 2012-07-22T07:02:31.085Z · LW · GW

It's valuing external reality. Valuing sensory inputs and mental models would just result in wireheading.

Mathematically, any value that an AI can calculate from anything external is a function of its sensory input.

'Vague' presumes a level of precision that is not present here. It is not even vague. It's incoherent.
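One minimal way to write that claim down, in my own notation rather than anyone's formal definition: whatever "external" quantity the AI is said to value, the number it can actually compute at time t is some function g of its percept history and internal state.

```latex
% My notation, not a formal definition from any source: s_1..s_t are the
% sensory inputs received so far, m_t is the internal state/model.
% Any "value of external reality" the AI evaluates must factor through g.
\[
  \hat{U}_t = g(s_1, s_2, \dots, s_t,\; m_t)
\]
```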

Comment by JaneQ on [LINK] Nick Szabo: Beware Pascal's Scams · 2012-07-21T20:50:39.295Z · LW · GW

The Dunning-Kruger effect is likely a product of some general deficiency in the meta-reasoning faculty, leading both to failure of the reasoning itself and to failure of the evaluation of that reasoning; extremely relevant to people who proclaim themselves more rational, more moral, and so on than anyone else, but do not seem to accomplish above-mediocre performance at fairly trivial yet quantifiable things.

Seriously. In no area of research, medicine, engineering, or whatever, the first group to tackle a problem succeeded? Such a world would be far poorer and still stuck in the Dark Ages than the one we actually live in. I realize this may be a hard concept, but sometimes, the first person to tackle a problem - succeeds!

Hmm. He said the first people to take money, not the first people to tackle the problem.

The first people to explain the universe (and take contributions for doing so) produced something of negative value; nearly all of medicine until the last couple of hundred years was not merely ineffective but actively harmful, and so on.

If you look at very narrow definitions, of course, the first to tackle building a nuclear bomb did succeed - but the first to tackle the general problem of weapons of mass destruction were various shamans sending curses. If saving people from AI is an easy problem, then we'll survive without SI; if it's a hard problem, in any case SI doesn't start with a letter from Einstein to the government, it starts with a person with no quantifiable accomplishments cleverly employing himself. As far as I am concerned, there's literally no case for donations here; the donations happen via a sort of decision noise, similar to how NASA has spent millions on various antigravity devices, power companies have spent millions on putting electrons into hydrogen orbitals below the ground level (see Mills' hydrinos), and millions were invested in Steorn's magnetic engine.

Comment by JaneQ on [LINK] Nick Szabo: Beware Pascal's Scams · 2012-07-20T07:15:07.597Z · LW · GW

Yes. I am sure Holden is being very polite, which is generally good, but I've been getting the impression that the point he was making did not fully carry across the same barrier that produced the above-mentioned high opinion of their own rationality, despite a complete lack of results for which rationality would be a better explanation than irrationality (and the presence of results which set a rather low ceiling on that rationality). The 'resistance to feedback' is an even stronger point, suggesting that the belief in their own rationality is, at least to some extent, combined with an expectation that it won't pass a test, and the subsequent avoidance (rather than seeking) of tests; as when psychics believe in their powers but avoid any reliable test.

Comment by JaneQ on [LINK] Nick Szabo: Beware Pascal's Scams · 2012-07-19T07:54:59.150Z · LW · GW

SI being the only one ought to lower your probability that this whole enterprise is worthwhile in any way.

With regard to the 'message', I think you grossly overestimate the value of a rather easy insight that anyone who has watched Terminator could have. With regard to "rationally discussing", what I have seen so far here is pure rationalization and very little, if any, rationality. What SI has on its track record is, once again, a lot of rationalization and not enough rationality to even have had an accountant through its first 10 years and its first two-million-plus dollars of other people's money.

Comment by JaneQ on [LINK] Nick Szabo: Beware Pascal's Scams · 2012-07-18T17:02:57.501Z · LW · GW

All the other people and organizations that are no less capable of identifying the preventable risks (if those exist) and addressing them have to be unable to prevent the destruction of mankind without SI. Just as in Pascal's original wager, Thor and the other deities are to be ignored by omission.

As for how SI does not look good: well, it does not look good to Holden Karnofsky, or to me for that matter. 'Resistance to feedback loops' is an extremely strong point of his.

On the rationality movement, here's a quote from Holden.

Apparent poorly grounded belief in SI's superior general rationality. Many of the things that SI and its supporters and advocates say imply a belief that they have special insights into the nature of general rationality, and/or have superior general rationality, relative to the rest of the population. (Examples here, here and here). My understanding is that SI is in the process of spinning off a group dedicated to training people on how to have higher general rationality.

Yet I'm not aware of any of what I consider compelling evidence that SI staff/supporters/advocates have any special insight into the nature of general rationality or that they have especially high general rationality.

Comment by JaneQ on [LINK] Nick Szabo: Beware Pascal's Scams · 2012-07-18T15:06:31.037Z · LW · GW

It is not just their chances of success. For the donations to matter, you need SI to succeed where, without SI, there is failure. You need to get a basket of eggs and have all the good-looking eggs be rotten inside while the one fairly rotten-looking egg is fresh. Even if a rotten-looking egg is somewhat more likely to be fresh inside than one would believe, that is a highly unlikely situation.

Comment by JaneQ on [LINK] Nick Szabo: Beware Pascal's Scams · 2012-07-18T14:50:09.454Z · LW · GW

It also has to be probable that their work averts those risks, which seems incredibly improbable by any reasonable estimate. If an alternative Earth were to adopt a strategy of ignoring prophetic groups of 'idea guys' similar to SI, and ignore their pleas for donations to hire competent researchers to pursue their ideas, I do not think such a decision would increase the risk by more than a minuscule amount.

Comment by JaneQ on Exploiting the Typical Mind Fallacy for more accurate questioning? · 2012-07-18T09:12:02.238Z · LW · GW

But are you sure that you are not now falling for the typical mind fallacy?

The very premise of your original post is that it is not all signaling - that there is a substantial number of honest-and-naive folk who not only don't steal but assume that others don't either.

Comment by JaneQ on Exploiting the Typical Mind Fallacy for more accurate questioning? · 2012-07-18T08:25:28.654Z · LW · GW

I do not see why you are even interested in asking that sort of question if you hold such a view - surely, under such a view, you would steal if you were sure you would get away with it, just as you would, e.g., try to manipulate and lie. edit: I.e., the premise seems incoherent to me. You need honesty to exist for your method to be of any value; and you need honesty not to exist for your method to be harmless and neutral. If language is all signaling, the most your method will do is weed you out at the selection process - you have yourself already sent the wrong signal, that you believe language to be all deception and signaling.

Comment by JaneQ on Exploiting the Typical Mind Fallacy for more accurate questioning? · 2012-07-17T22:14:37.417Z · LW · GW

It too closely approximates the way the unfriendly AI proposed here would reason - pick a goal (predict stealing on the job, for example) and then proceed to solve it, oblivious to the notion of fairness. I've seen several other posts over time that rub me in exactly the same wrong way, but I do not remember the exact titles. By the way, what if I were to use the same sort of reasoning as in this post on people who have a rather odd mental model of artificial minds (or mind-uploaded fellow humans)?

Comment by JaneQ on Exploiting the Typical Mind Fallacy for more accurate questioning? · 2012-07-17T21:54:39.904Z · LW · GW

Good point. More intricate questions like this, with 'nobody could resist' wording, are also much fairer. Questions about what the person believes the natural human state to be are more dubious.

Comment by JaneQ on Exploiting the Typical Mind Fallacy for more accurate questioning? · 2012-07-17T21:26:33.531Z · LW · GW

I'm not quite sure what the essence of your disagreement is, or what relation 'honest people already being harmed' has to the argument I made.

I'm not sure what you think my disagreement should have focused on - the technique outlined in the article can be effective and is used in practice, and there is no point to be made that it is bad for the people employing it; I cannot make an argument that would convince an antisocial person not to use this technique. However, this technique depletes the common good that is normal human communication; it is an example of the tragedy of the commons - as long as most of us refrain from using this sort of approach, it can work for the few who do not understand the common good or do not care. Hence the Dutch dike example. This method is a social wrong, not necessarily a personal wrong - it may work well in ordinary circumstances. Furthermore it is, in a sense that is normally quite obvious, unfair. (edit: It is nonetheless widely used, of course - even with all the social mechanisms in the human psyche, the human brain is a predictive system that will use any correlations it encounters - and people often do signal pretend non-understanding. I suspect this silly game significantly discriminates against people on the Asperger's spectrum.)

edit: For a more conspicuous example of how predictive methods are not in general socially acceptable, consider that if I were to train a predictor of criminality on profile data complete with a photograph, the skin-albedo estimate from the photograph would be a significant part of the predictor, assuming the data originates in North America. As a matter of fact, I have some experience with predictive software of the kind that processes interview answers. Let me assure you, my best guess from briefly reading your profile is that you are not in the category of people who benefit from software that simply uses all the correlations it finds - pretty much everyone who has non-standard interests and does not answer questions in the standard ways is penalized, and I do not think it would be easy to fake the answers beneficially without access to the model being used.

Comment by JaneQ on Exploiting the Typical Mind Fallacy for more accurate questioning? · 2012-07-17T06:34:53.295Z · LW · GW

However, imagine if the typical mind fallacy was correct. The employers could instead ask "what do you think the percentage of employees who have stolen from their job is?"

To be honest, this is a perfect example of what is so off-putting about this community. This method is simply socially wrong - it works against both the people who have stolen and the people who have had something stolen from them, who get penalized for the honest answer and, if such methods become more widely employed, are now inclined to second-guess and say "no, I don't think anyone steals" (and yes, this method is already employed to some extent, subconsciously at least). The idea parasitizes the social contract that is human language, with the honest naivete of the asocial. It's as if a Dutch town were building a dike and someone suggested that anyone who needs materials for repairing their house should just take them from that weird pile in the sea. The only reason such a method can work is that others have been losing a little here and there to maintain the trust necessary for effective communication.

Comment by JaneQ on Reply to Holden on The Singularity Institute · 2012-07-16T14:07:09.557Z · LW · GW

I'm not sure why you think that such writings should convince a rational person that you have the relevant skill. If you were an art critic, even a very good one, that would not convince people you are a good artist.

This is not, in any way shape or form, the same skill as the ability to manage a nonprofit.

Indeed, but you are asking me to assume that the skills you display in writing your articles are the same as the skills relevant to directing the AI effort.

edit: Furthermore, when it comes to works on rationality as the 'applied math of optimization', the most obvious way to evaluate those writings is to look for some great success attributable to them - some highly successful businessman saying how much the article on such-and-such fallacy helped him succeed, that sort of thing.

Comment by JaneQ on Reply to Holden on The Singularity Institute · 2012-07-14T09:29:50.339Z · LW · GW

I think it is fair to say the Earth was doing the "AI math" before computers existed. Extending that to today - there is a lot of mathematics to be done for a good, safe AI - but how are we to know that SI has the actionable effort-planning skills required to correctly identify and fund research in such mathematics?

I know that you believe you have the required skills; but note that in my model such a belief results both from the presence of extraordinary effort-planning skill and from the absence of effort-planning skill. The prior probability of extraordinary effort-planning skill is very low. Furthermore, as effort planning is to some extent a cross-domain skill, the prior inefficacy (which Holden criticized) seems to be fairly strong evidence against extraordinary skill in this area.

Comment by JaneQ on What Is Optimal Philanthropy? · 2012-07-13T14:20:30.304Z · LW · GW

Great article. However, there is a third important option, which is 'request proof and then, if it passes, donate' (Holden seems to have opted for this in his review of S.I., but it is broadly applicable in general).

For example, suppose there is a charity promising to save 10 million people using a method X that is not very likely to work, but is very cheap - a Pascal's Wager-like situation. In this situation, even if this charity is presently the best in terms of expected payoff, it may be better still, rather than paying the full sum, to pay only enough for a basic test of method X - one which method X would be unlikely to pass if it is ineffective - and then donate if the test passes. This decreases the expected cost in proportion to the improbability of X's efficacy and the specificity of the test.
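A minimal numeric sketch of that point; the probabilities and costs below are made-up illustrative assumptions, not estimates of any real charity.

```python
# Illustrative numbers only - made-up probabilities and costs.

p_effective = 0.01            # prior probability that method X actually works
p_pass_if_ineffective = 0.05  # chance an ineffective X still passes the basic test
p_pass_if_effective = 0.95    # chance an effective X passes the test
full_donation = 100_000.0
test_cost = 5_000.0

# Strategy 1: donate the full sum up front.
cost_donate_now = full_donation

# Strategy 2: fund only the test, then donate the full sum iff the test passes.
p_pass = (p_effective * p_pass_if_effective
          + (1 - p_effective) * p_pass_if_ineffective)
cost_test_then_donate = test_cost + p_pass * full_donation

print(f"donate now:        expected cost = {cost_donate_now:,.0f}")
print(f"test, then donate: expected cost = {cost_test_then_donate:,.0f}")
# With these numbers the test gate cuts the expected cost from 100,000 to
# roughly 10,900, while an effective X would still be funded 95% of the time.
```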

Comment by JaneQ on Reply to Holden on The Singularity Institute · 2012-07-13T14:04:02.280Z · LW · GW

It seems to me that 100 years ago (or more) you would have had to consider pretty much any philosophy and mathematics relevant to AI risk reduction, as well as to the reduction of other potential risks, and attempts to select the work particularly conducive to AI risk reduction would not have been able to succeed. Effort planning is the key to success.

On a somewhat unrelated note: reading the publications and this thread, there is a point of definitions that I do not understand: what exactly does S.I. mean when it speaks of a "utility function" in the context of an AI? Is it a computable mathematical function over a model, such that the 'intelligence' component computes the action that maximizes that function over the world state resulting from the action?
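To pin down the interpretation the question describes (this is the reading being asked about, not S.I.'s actual definition; all names below are hypothetical):

```python
# The interpretation asked about, not S.I.'s definition; names are hypothetical.
from typing import Callable, Iterable, TypeVar

State = TypeVar("State")
Action = TypeVar("Action")

def choose_action(
    current: State,
    actions: Iterable[Action],
    predict: Callable[[State, Action], State],  # model: predicted next world state
    utility: Callable[[State], float],          # computable function over model states
) -> Action:
    # The 'intelligence' component: pick the action whose predicted resulting
    # world state (inside the model) maximizes the utility function.
    return max(actions, key=lambda a: utility(predict(current, a)))
```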

Comment by JaneQ on Reply to Holden on The Singularity Institute · 2012-07-12T07:18:33.823Z · LW · GW

It seems to me that the premise of funding SI is that people smarter (or more appropriately specialized) than you will then be able to make discoveries that otherwise would be underfunded or wrongly-purposed.

But then SI has to have a dramatically better idea of what research needs to be funded to protect mankind than every other group of people capable of either performing such research or employing people to perform it.

Muehlhauser has stated that SI should be compared to alternatives in the form of other organizations working on AI risk mitigation, but that seems like an overly narrow choice, reliant on the presumption that not working on AI risk mitigation now is not itself an alternative.

For example, 100 years ago it would seem to have been too early to fund work on AI risk mitigation; that may still be the case. As time goes on, one would naturally expect opinions to form a distribution, and the first organizations offering AI risk mitigation to pop up earlier than the time at which such work becomes effective. When we look into the past through the goggles of notoriety, we don't see all the failed early starts.