post by [deleted] · ? · GW · 0 comments

This is a link post for

comment by CarlShulman · 2012-04-13T22:58:04.777Z · LW(p) · GW(p)

It is incredibly unlikely to find yourself in a world where the significant insights about a real doomsday come from a single visionary who did so little that can be unambiguously graded before coming up with those insights.

I think this is a mistaken picture of the intellectual history around AI risk.

Prominent AI folk like Hans Moravec and Marvin Minsky had predicted the eclipse of humanity and humane values (save perhaps as pets/specimens/similar, losing the overwhelming majority of the future) long before Yudkowsky. Moravec in particular published a fair bit of his analysis in his books Mind Children and Robot, including a prediction of the eventual "devouring" of humanity by competitive AI life (stored as data and perhaps occasionally pulled out as simulations). Many other AI researchers endorse this rough picture, although often approvingly, saying that these would be "worthy successors" and "humans aren't worth saving" or "nothing to be done about it" and so forth.

Vinge (a mathematician as well as a sci-fi novelist) has the phrase "How to survive in the post-human era" in the title of his 1993 essay, asks "If not to be avoided, can events be guided so that we may survive?", and discusses risks to humanity extensively (as large).

I.J. Good mentioned the risk of out-of-control AI in the first public writing on the concept of an intelligence explosion.

Marcus Hutter, Jurgen Schmidhuber, Kevin Warwick, and a number of other AI folk have written about the future of AI and risk of human extinction, etc.

Stephen Omohundro is another successful AI guy who has done work in this area.

The concept of "Friendly AI," i.e. an AI with a clear enough model of human welfare to fairly reliably design successors that get better and better at helping humans, was originally created by the philosopher Nick Bostrom, not Yudkowsky.

Yudkowsky has spent more time on the topic than any of the others on this list, and has specific conclusions that are more idiosyncratic (especially the combination of views on many subjects), but the basic ideas are not so rare or privileged that they do not recur independently among many folk, including subject matter experts.

It appears to me that the will to form the most accurate beliefs about the real world, and to implement solutions in the real world, is orthogonal to problem solving itself.

Problem solving works better when you can flexibly reallocate internal resources to old and new uses that will best meet goal criteria, run experiments (at least internal ones, involving writing and using programs), and identify and pursue subtasks.

Why would an optimizing compiler that can optimize its ability to optimize suddenly develop a will?

If it considers, creates, and runs programs in the course of identifying and evaluating possible improvements (with respect to some criteria of improvement), then it doesn't need to acquire some novel will. What is the necessary difference between creating and running a program that does a specialized search or simulation on local hardware and returns an answer, and a program that transmits an AI into the outside world to acquire resources and return an answer? Even more so, if the AI is designed to communicate with outside sources of information like humans or experimental apparatus and use them as "black-boxes" to acquire information.

It is immediately conjectured... That is rationalization. Privileging a path of thought.

If you make a point, and someone raises a complication that undercuts your conclusion, it may look to you like it is an instant rationalization to your novel objection. But in fact the points you raise (simulation-based and heuristic instrumental reasons for AI to be wary of immediately killing humans, wireheading, etc) and the counter-considerations you identify as rationalizations, are old news to many of your interlocutors (people like Wei Dai), and were independently invented (by supposed doomsayers) in the course of searching the possibility space, for both good and bad news. ETA: See Wei Dai's link below.

This is not to say that there aren't rationalization and biases of discourse on this subject around here: there are comments and commenters that clearly illustrate those.

Replies from: XiXiDu, private_messaging, Dmytry
comment by XiXiDu · 2012-04-14T11:49:21.520Z · LW(p) · GW(p)

Prominent AI folk like Hans Moravec and Marvin Minsky had predicted the eclipse of humanity and humane values (save perhaps as pets/specimens/similar, losing the overwhelming majority of the future) long before Yudkowsky.

A minor point maybe, but...how big is the fraction of all AI researchers and computer scientists who fall into that category?

I.J. Good, Marcus Hutter, Jurgen Schmidhuber, Kevin Warwick, Stephen Omohundro, Vinge etc.

Those are really just a handful of names. And their practically useful accomplishments are few. Most AI researchers would consider them dreamers.

Yudkowsky has spent more time on the topic than any of the others on this list,

This is frequently mentioned but carries little evidential weight. Many smart people, like Roger Penrose, have spent a lot of time on their pet theories. That does not validate them. It just allowed them to find better ways to rationalize their ideas.

Replies from: JoshuaZ, timtyler
comment by JoshuaZ · 2012-04-14T13:04:02.131Z · LW(p) · GW(p)

I.J. Good, Marcus Hutter, Jurgen Schmidhuber, Kevin Warwick, Stephen Omohundro, Vinge etc.

Those are really just a handful of names. And their practically useful accomplishments are few. Most AI researchers would consider them dreamers.

Good was involved in the very early days of computing, so it is a bit hard for him to have done modern AI work. But the work he did do was pretty impressive. He did cryptography work in World War II with Alan Turing, and both during and after the war worked on both theoretical and practical computer systems. He did a lot of probability work, much of which is used today in some form or another in a variety of fields including AI. For example, look at the Good-Turing estimator.
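
As a rough illustration of what that estimator does (a minimal sketch of the basic Good-Turing adjustment, not anything taken from Good's own papers): it reserves probability mass N1/N for species never seen in the sample, and smooths the observed counts via r* = (r + 1) * N_{r+1} / N_r. Variants of this smoothing still show up in language modelling, which is part of why the work remains practically relevant.

    from collections import Counter

    def good_turing(samples):
        """Minimal Good-Turing sketch: reserve mass N1/N for unseen species and
        smooth observed counts via r* = (r + 1) * N_{r+1} / N_r where possible."""
        counts = Counter(samples)                  # raw count r for each species
        freq_of_freq = Counter(counts.values())    # N_r: number of species seen exactly r times
        n = sum(counts.values())                   # total observations
        p_unseen = freq_of_freq.get(1, 0) / n      # probability mass reserved for unseen species
        adjusted = {}
        for species, r in counts.items():
            if freq_of_freq.get(r + 1):
                adjusted[species] = (r + 1) * freq_of_freq[r + 1] / freq_of_freq[r]
            else:
                adjusted[species] = r              # fall back to the raw count when N_{r+1} = 0
        return p_unseen, adjusted

    print(good_turing("the cat sat on the mat the end".split()))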

Schmidhuber did some of the first work on practical genetic algorithms and did very important work on neural nets.

Warwick has done so much work in AI and robotics that listing it all would take a long time. One can argue that most of it hasn't gone outside the lab, but it is clear that much of that work is practically useful even if it is not yet economically feasible to use it on a large scale (which frankly is the status of most AI research at this point in general).

Overall, I don't think your characterization is accurate, although your point that AI researchers with such concerns are a small percentage of all researchers seems valid.

comment by timtyler · 2012-04-14T22:26:00.763Z · LW(p) · GW(p)

Prominent AI folk like Hans Moravec and Marvin Minsky had predicted the eclipse of humanity and humane values (save perhaps as pets/specimens/similar, losing the overwhelming majority of the future) long before Yudkowsky.

A minor point maybe, but...how big is the fraction of all AI researchers and computer scientists who fall into that category?

I.J. Good, Marcus Hutter, Jurgen Schmidhuber, Kevin Warwick, Stephen Omohundro, Vinge etc.

A strange list, IMO. Fredkin was one who definitely entertained the "pets" hypothesis:

It was rumoured in some of the UK national press of the time that Margaret Thatcher watched Professor Fredkin being interviewed on a late night TV science programme. Fredkin explained that superintelligent machines were destined to surpass the human race in intelligence quite soon, and that if we were lucky they might find human beings interesting enough to keep us around as pets.

Replies from: CarlShulman
comment by CarlShulman · 2012-04-15T04:13:56.441Z · LW(p) · GW(p)

A strange list, IMO.

It was generated by selecting (some) people who had written publications in the area, not merely oral statements. Broadening to include the latter would catch many more folk.

Replies from: timtyler
comment by timtyler · 2012-04-15T11:02:15.714Z · LW(p) · GW(p)

Your list of Hans Moravec and Marvin Minsky was fine - though I believe Moravec characterised humans being eaten by robots as follows:

there's certainly a finite chance that the whole process will go wrong and the robots will eat us

...though he did go on to say that he was "not too bothered" by that because "in the long run, that's how it's going to be anyway".

I was more complaining about XiXiDu's "reframing" of the list.

Replies from: CarlShulman
comment by CarlShulman · 2012-04-15T19:08:31.603Z · LW(p) · GW(p)

Hans Moravec

I was thinking of "Robot: from mere machine to transcendent mind" where he talks about an era in which humans survive through local tame robots, but eventually are devoured by competitive minds that have escaped beyond immediate control.

comment by private_messaging · 2013-01-22T22:46:23.451Z · LW(p) · GW(p)

The most impressive people on your list (e.g. Good) are also the earliest; in particular, 'intelligence explosion' predates computational complexity theory, which puts severe bounds on any foom scenarios.

Of the later people with important contributions, I'm not even sure why you have Hutter on the list; I guess misrepresenting Hutter is some local tradition that I didn't pick up on back then. When you are vague about what was said, it is difficult to verify your claims, which I guess is how this works. But if you keep doing this, eventually you're going to piss off someone with 10x the notability of S.I.

And none of that is relevant - it is incredibly improbable that the world-saving organisation would look that incompetent.

Replies from: CarlShulman, gwern
comment by CarlShulman · 2013-01-22T23:19:29.833Z · LW(p) · GW(p)

The most impressive people on your list (e.g. Good) are also the earliest; in particular, 'intelligence explosion' predates computational complexity theory, which puts severe bounds on any foom scenarios.

I think there is a trend to this effect (although Solomonoff wrote about intelligence explosion in 1985). I wouldn't point to computational complexity though, so much as general disappointment in AI progress.

How do you think I am misrepresenting Hutter? I agree that he is less influential than Good, and not one of the best-known names in AI. If you are talking about his views on possible AI outcomes, I was thinking of passages like the one in this Hutter paper:

Let us now consider outward explosion, where an increasing amount of matter is transformed into computers of fixed efficiency (fixed comp per unit time/space/energy). Outsiders will soon get into resource competition with the expanding computer world, and being inferior to the virtual intelligences, probably only have the option to flee. This might work for a while, but soon the expansion rate of the virtual world should become so large, theoretically only bounded by the speed of light, that escape becomes impossible, ending or converting the outsiders’ existence.

So while an inward explosion is interesting, an outward explosion will be a threat to outsiders. In both cases, outsiders will observe a speedup of cognitive processes and possibly an increase of intelligence up to a certain point. In neither case will outsiders be able to witness a true intelligence singularity.

Replies from: private_messaging
comment by private_messaging · 2013-01-22T23:50:49.730Z · LW(p) · GW(p)

I think there is a trend to this effect. I wouldn't point to computational complexity though, so much as general disappointment in AI progress.

Well, the self-improvement would seem a lot more interesting if it were the case that P=NP or P=PSPACE, I'd say. As it is, a lot of scary things are really well bounded - e.g. specific, accurate prediction of various nonlinear systems requires exponential knowledge, exponential space, and an exponential number of operations for a given forecast time. And the progress is so disappointing perhaps thanks to P!=NP and the like - the tasks do not have easy general solutions, or even general purpose heuristics.
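
As a toy illustration of the forecasting point (a minimal sketch using the standard chaotic logistic map, not any system from the discussion): a tiny error in the initial measurement grows roughly exponentially with the number of steps, so each additional step of reliable forecast demands a correspondingly more precise starting state.

    # Two nearby starting points under the chaotic logistic map (r = 4.0): the gap
    # between their trajectories grows roughly exponentially with the step count,
    # so shrinking the initial uncertainty buys only a few extra reliable steps.
    def logistic(x, steps, r=4.0):
        for _ in range(steps):
            x = r * x * (1.0 - x)
        return x

    x0 = 0.123456789
    for eps in (1e-6, 1e-9, 1e-12):
        for steps in (10, 25, 40, 55):
            gap = abs(logistic(x0, steps) - logistic(x0 + eps, steps))
            print(f"initial error {eps:.0e}, steps {steps:2d}: divergence {gap:.3e}")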

Re: the quote - ah, that's much better with regard to vagueness. He isn't exactly in agreement with SI doctrine, though, and the original passage creates the impression of support for the specific doctrine here.

It goes to show that optimistic AI researchers consider AI to be risky, which is definitely a good thing for the world, but at the same time it makes rhetoric in the vein of 'other AI researchers are going to kill everyone, and we are the only hope of humanity' look rather bad. The researchers who aren't particularly afraid of AI seem to be working on fairly harmless projects which just aren't coding for that sort of will to paperclip.

Suppose some group says that any practical nuclear reactor will intrinsically risk a multimegaton nuclear explosion. What could that really mean? One thing, really: the approach that they consider practical will intrinsically risk a multimegaton nuclear explosion. It doesn't say much about other designs, especially if that group doesn't have a lot of relevant experience. The same ought to apply to SI's claims.

Replies from: CarlShulman
comment by CarlShulman · 2013-01-23T00:24:32.457Z · LW(p) · GW(p)

'other AI researchers are going to kill everyone, and we are the only hope of humanity'

Let me explicitly reject such rhetoric then.

The difficulty of safety is uncertain: it could be very easy for anyone with little time, or it could be quite difficult and demand a lot of extra work (which might be hard to put in given competitive pressures). The region where safety depends sensitively on the precautions and setup of early AI development (from realistic options) should not be much larger than the "easy for everyone region," so trivially the probability for building AI with good outcomes should be distributed widely among the many possible AI building institutions: software firms, government, academia, etc. And since a small team is very unlikely to build AGI first, it can have at most only a very small share of the total expected probability of a good outcome.

A closed project aiming to build safe AI could have an advantage both by using more demanding safety thresholds and through the possibility of not publishing results that require additional work to make safe but could be immediately used for harm. This is the reasoning for classifying some kinds of work with dangerous viruses or nuclear technology or the like. This could provide some safety boost for such a project in principle, but probably not an overwhelming one.

Secrecy might also be obtained through ordinary corporate and government security, and governments in particular would plausibly be much better at it (the Manhattan Project leaked, but ENIGMA did not). And different safety thresholds matter most with respect to small risks (most institutions would be worried about large risks, whereas those more concerned with future generations might place extra weight on small risks). But small risks contribute less to expected value.

And I would very strongly reject the idea that "generic project X poses near-certain doom if it succeeds while project Y is almost certain to have good effects if it succeeds": there's just no way one could have such confident knowledge.

And the progress is so disappointing perhaps thanks to P!=NP and the like - the tasks do not have easy general solutions, or even general purpose heuristics.

You can still get huge differences in performance from software. Chess search explodes as you go deeper, but software improvements have delivered gains comparable to hardware gains: the early AI people were right that if they had been much smarter they could have designed a chess program to beat the human world champion using the hardware they had.
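
For a sense of the scale involved (a back-of-the-envelope sketch with illustrative numbers, not figures from any particular engine): plain minimax visits roughly b^d positions at branching factor b and depth d, while alpha-beta with good move ordering visits roughly b^(d/2), so a pure software change buys about as much effective depth as an enormous hardware speedup.

    # Rough node counts: plain minimax vs. best-case alpha-beta at branching factor b.
    # Purely illustrative; real engines layer many further improvements on top.
    b = 35  # a commonly cited average branching factor for chess
    for depth in (4, 8, 12):
        minimax_nodes = b ** depth
        alphabeta_nodes = b ** (depth / 2)  # best case, with perfect move ordering
        print(f"depth {depth:2d}: minimax ~{minimax_nodes:.2e} nodes, "
              f"alpha-beta ~{alphabeta_nodes:.2e} nodes")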

Part of this is that in chess one is interested in being better than one's opponent: sure you can't search perfectly 50 moves ahead, but you don't have to play against an infinite-computing-power brute-force search, you have to play against humans and other computer programs. Finance, computer security, many aspects of military affairs, and other adversarial domains are pretty important. If you could predict the weather a few days further than others, you could make a fortune trading commodities and derivatives.

Another element is that humans are far from optimized to use their computation for chess-playing, which is likely true for many of the other activities of modern civilization.

Also, there's empirical evidence from history and firm R&D investments that human research suffers from serial speed limits of human minds, i.e. one gets more progress from doubling time to work than the size of the workforce. This is most true in areas like mathematics, cryptography, and computer science, less true in areas demanding physical infrastructure built using the outputs of many fields and physically rate-limited processes. But if one can rush forward on those elements, there would then be an unprecedented surge of ability to advance the more reluctant physical technologies.

comment by gwern · 2013-01-22T22:55:41.750Z · LW(p) · GW(p)

How is Hutter being misrepresented here?

Replies from: private_messaging
comment by private_messaging · 2013-01-22T23:04:00.792Z · LW(p) · GW(p)

I mentioned how vague it is; it is impossible for anyone to check what is exactly meant without going over literally everything Hutter ever wrote.

Hutter was much less ambiguously misrepresented/misquoted during the more recent debate with Holden Karnofsky (due to the latter's interest in AIXI), so I am assuming, by the process of induction, that the same happened here.

Replies from: gwern
comment by gwern · 2013-01-22T23:12:21.161Z · LW(p) · GW(p)

it is impossible for anyone to check what is exactly meant without going over literally everything Hutter ever wrote.

As it happens, I looked it up and did this 'impossible' task in a few seconds before I replied, because I expected the basis for your claim to be as lame as it is; here's the third hit in Google for 'marcus hutter ai risk': "Artificial Intelligence: Overview"

Slide 67 includes some of the more conventional worries like technological unemployment and abuse of AI tools; more importantly, slide 68 includes a perfectly standard statement of Singularity risks, citing, as it happens, Moravec, Good, Vinge, and Kurzweil; I'll quote it in full (emphasis added):

What If We Do Succeed?

The success of AI might mean the end of the human race.

  • Artificial evolution is replaced by natural solution. AI systems will be our mind children (Moravec 2000)
  • Once a machine surpasses the intelligence of a human it can design even smarter machines (I.J.Good 1965).
  • This will lead to an intelligence explosion and a technological singularity at which the human era ends.
  • Prediction beyond this event horizon will be impossible (Vernor Vinge 1993)
  • Alternative 1: We keep the machines under control.
  • Alternative 2: Humans merge with or extend their brain by AI. Transhumanism (Ray Kurzweil 2000)

Let's go back to what Carl said:

Marcus Hutter, Jurgen Schmidhuber, Kevin Warwick, and a number of other AI folk have written about the future of AI and risk of human extinction, etc.

Sure sounds like 'Marcus Hutter...have written about the future of AI and risk of human extinction'.

Replies from: private_messaging
comment by private_messaging · 2013-01-22T23:16:27.133Z · LW(p) · GW(p)

Which in that case demonstrates awareness among the AI researchers of the risk, while at the same time not demonstrating that Hutter finds it particularly likely that this would happen ('might') or agrees with any specific alarmist rhetoric. I can't know if that's what Carl is actually referring to. I do assure you that just about every AI researcher has seen The Terminator.

Replies from: CarlShulman, gwern
comment by CarlShulman · 2013-01-22T23:46:36.402Z · LW(p) · GW(p)

I gave the Hutter quote I was thinking of upthread.

My aim was basically to distinguish between buying Eliezer's claims and taking intelligence explosion and AI risk seriously, and to reject the idea that the ideas in question came out of nowhere. One can think AI risk is worth investigating without thinking much of Eliezer's views or SI.

I agree that the cited authors would assign much lower odds of catastrophe given human-level AI than Eliezer. The same statement would be true of myself, or of most people at SI and FHI: Eliezer is at the far right tail on those views. Likewise for the probability that a small team assembled in the near future could build safe AGI first, but otherwise catastrophe would have ensued.

Replies from: private_messaging
comment by private_messaging · 2013-01-23T00:28:08.626Z · LW(p) · GW(p)

Well, I guess that's fair enough. In the quote at the top, though, I am specifically criticizing the extreme view. At the end of the day, the entire raison d'etre for SI's existence is the claim that without paying you, the risk would be higher. The claim that you are somehow fairly unique. And there are many risks - for example, the risk of a lethal flu-like pandemic - which are much more clearly understood and where specific efforts have a much more clearly predictable outcome of reducing the risk. Favouring one group of AI theorists but not another does not have a clearly predictable outcome of reducing the risk.

(I am inclined to believe that pandemic risk is underfunded, as it would primarily decimate the poorer countries, ending the existence of entire cultures, whereas 'existential risk' is a fancy phrase for a risk to the privileged.)

comment by gwern · 2013-01-22T23:31:03.711Z · LW(p) · GW(p)

Which in that case demonstrates awareness among the AI researchers of the risk, while at the same time not demonstrating that Hutter finds it particularly likely that this would happen ('might') or agrees with any specific alarmist rhetoric.

It need not demonstrate any such thing to fit Carl's statement perfectly and give the lie to your claim that he was misrepresenting Hutter.

I do assure you that just about every AI researcher has seen The Terminator.

Sure, hence the Hutter citation of "(Cameron 1984)". Oh wait.

comment by Dmytry · 2012-04-14T06:35:48.304Z · LW(p) · GW(p)

Yudkowsky has spent more time on the topic than any of the others on this list, and has specific conclusions that are more idiosyncratic (especially the combination of views on many subjects), but the basic ideas are not so rare or privileged that they do not recur independently among many folk, including subject matter experts.

The argument is about the insights coming out of EY, and about the privileging EY applies to those hypotheses originated by others, a.k.a. cherry-picking what to advertise. EY is a good writer.

edit: a concrete thought example: there is a drug A that undergoes many tests, with some of them evaluating it as better than placebo, some as equal to placebo, and some as worse than placebo. Worst of all, each trial rests on one person's opinion. In comes the charismatic pharmaceutical marketer, or the charismatic anti-vaccination campaign leader, and starts bringing the positive or negative trials to attention. That is not good. Even if both of those people are present.

comment by JoshuaZ · 2012-04-14T03:28:31.516Z · LW(p) · GW(p)

I'm unsure whether I should upvote this post or not. Much of it seems to raise valid points. But towards the end you write:

You (LW) may dislike this. You can provide me with poll results informing me that you dislike this, if you wish. (This is pretty silly if you ask me; you think you are rating me, but clearly, if I am not a mentally handicapped individual, all that does is provide me with pieces of information which I can use for many purposes besides self-evaluation; I self-evaluate by trying myself on practical problems, or when I actually care.)

This comes across as a bit passive-aggressive with a bit of the standard "if you downvote me, I win" sort of symptom which is common among people who are either set in their ways or just trying to troll. I don't get that impression at all from the rest of the post, but this bit seems to signal that strongly. It may make sense to rewrite that paragraph or delete it entirely.

Replies from: Viliam_Bur, XiXiDu, David_Gerard
comment by Viliam_Bur · 2012-04-14T12:42:42.306Z · LW(p) · GW(p)

I'm unsure whether I should upvote this post or not. Much of it seems to raise valid points.

If the article provides more good than bad (as in 10 well-articulated objections versus 1 short trolling paragraph), I guess it still deserves an upvote.

I hate karma games, and usually automatically downvote any article or comment that speaks about its own karma. But this article has enough useful content that I made an exception to this rule. (It also helped that the article is rather long, so I skipped the offending paragraph.)

I disagree with the idea that criticism is downvoted here, unless it happens to be badly written criticism, and I consider this accusation very unfair. However, I upvoted this article, not to provide some kind of "balance" or "diversity", but as my honest judgement of the quality of the first 90% of its text.

EDIT: I also very much liked the idea that a self-improving AI would probably wirehead itself. It never occurred to me, and it makes a lot of sense. (However, if I hope that future humans will be able to resist wireheading, it makes sense to worry that some AIs will manage to resist wireheading too.)

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2012-04-14T18:04:25.862Z · LW(p) · GW(p)

I also very much liked the idea that a self-improving AI would probably wirehead itself. It never occurred to me, and it makes a lot of sense.

This idea is intuitively plausible, but doesn't hold up when considering rational actors that value states of the world instead of states of their minds. Consider a paperclip maximizer, with the goal "make the number of paperclips in the universe as great as possible". Would it rather a) make paperclips, or b) wirehead to convince itself that the universe is already full of paperclips? Before it wireheads, it knows that option a) will lead to more paperclips, so it does that. Similarly, I would rather actually help people than feel the warm glow that comes from helping people without any actual helping.
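
A minimal sketch of that distinction (toy numbers and names of my own, not anything from the thread): the agent ranks actions by the paperclip count its current world-model predicts, so the inflated belief it would hold after wireheading never enters the comparison.

    # Toy outcome-valuing agent: actions are scored by predicted paperclips in the
    # world according to the agent's *current* model, not by the belief state the
    # agent would end up in after acting.
    PREDICTED_PAPERCLIPS = {"build_factories": 1_000_000, "wirehead": 0}
    # Listed only to make the contrast explicit; choose() never consults it.
    POST_ACTION_BELIEF = {"build_factories": 1_000_000, "wirehead": 10**100}

    def choose(actions):
        # The huge post-wirehead *belief* plays no role in the choice.
        return max(actions, key=lambda a: PREDICTED_PAPERCLIPS[a])

    print(choose(["build_factories", "wirehead"]))  # -> build_factories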

Replies from: Dmytry
comment by Dmytry · 2012-04-14T19:41:03.057Z · LW(p) · GW(p)

value states of the world instead of states of their minds

Easier said than done. Valuing the state of the world is hard; you have to rely on senses.

Replies from: None
comment by [deleted] · 2012-04-14T20:17:24.634Z · LW(p) · GW(p)

Well, yes, but behind the scenes you need a sensible symbolic representation of the world, with explicitly demarcated levels of abstraction. So, when the system is pathing between 'the world now' and 'the world it wants to get to,' the worlds in which it believes there are a lot of paperclips are in very different parts of state space than the worlds which contain the most paperclips, which is what it's aiming for. Being unable to differentiate would be a bug in the seed AI, one which would not occur later if it did not originally exist.

comment by XiXiDu · 2012-04-14T11:04:52.141Z · LW(p) · GW(p)

This comes across as a bit passive-aggressive with a bit of the standard "if you downvote me, I win" sort of symptom which is common among people who are either set in their ways or just trying to troll.

I consider this to be a problem with reputation systems rather than with the people who raise that point.

I think his point is absolutely valid. What he is saying is that reputation systems, like the one used on Less Wrong, allow for an ambiguous interpretation of the number they assign to content. That downvotes mean that he is objectively wrong is just one unlikely interpretation, given the selection pressure such reputation systems cause and the human bias towards groupthink.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-14T14:40:37.102Z · LW(p) · GW(p)

I find the interpretation scheme "net downvotes mean more people want less content like this than want more content like this; net upvotes mean the reverse" to be fairly unambiguous.

Sure, it would be nice to have an equally unambiguous indicator of something I care about more (like, for example, the objective wrongness of a statement). Reputation systems aren't that. Anyone who believes they are, is mistaken. Anyone who expects them to act as though they were and pays attention will be disappointed.

There are millions of other things that would be nice to have that reputation systems aren't, also.

comment by David_Gerard · 2012-04-14T07:24:12.014Z · LW(p) · GW(p)

I read this as a reference to past criticisms having been met with really obvious mass-downvoting.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2012-04-14T18:06:42.579Z · LW(p) · GW(p)

Can you link to any single instance of that? All I've seen here is intelligent criticism being massively upvoted, and a few instances of unintelligent, incomprehensible, or repetitive (e.g. too many posts on oracle AI) criticism being downvoted to -5 or so.

Replies from: David_Gerard
comment by David_Gerard · 2012-04-14T18:41:32.921Z · LW(p) · GW(p)

Since you ask: I noted here an example of answering the actual question getting a downvote. (And the fact of me noting it got downvoted too.)

edit: at time of making this comment, the linked comment and the comment it points to were both at -1.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2012-04-14T23:12:59.900Z · LW(p) · GW(p)

It looks like that comment had fluctuating karma both before and after today--it's hardly mass-downvoting if the comment never goes below -1. Also, AFAICT the people who downvoted were doing it because they thought Dmytry was confused about evolution and/or hill-climbing algorithms. I don't know enough about hill-climbing to say for sure if he was making a mistake worthy of downvoting.

That said, I have updated in favor of people sometimes downvoting for disagreement, and I don't approve of that. I generally try to avoid it myself. For instance, I haven't downvoted anything from Dmytry that I disagree with, because I think his points are intelligent enough to be worth making. Thanks for pointing out that example.

Replies from: David_Gerard
comment by David_Gerard · 2012-04-15T08:30:01.455Z · LW(p) · GW(p)

Please note that mass-downvoting has happened to others recently. I would hope that sockpuppetry hasn't become an accepted mode of social discourse here (per Reddit), but it may be too late.

comment by Mitchell_Porter · 2012-04-14T01:43:01.187Z · LW(p) · GW(p)

It seems you have become "less concerned with AI risks" but more concerned with "FAI risks". You say several times that the conscious attempt to make AIs that won't eat us for breakfast increases the danger that they will eat us for breakfast, because such an effort implies making AIs which by design take an interest in humanity. Apparently we will be safer if we don't try to be safe, because AIs will then just be interested in... whatever AIs are naturally interested in... and they will just overlook humanity and leave it alone.

I can't imagine that philosophy winning the support of the human race. To say that a failed FAI could be dangerous is to concede that AIs could be dangerous. If you concede that point, but deny that FAI is worth pursuing, then the sensible thing for the human race to do is to crush the development of AI wherever it occurs, not to just let it happen and hope that these new godlike entities will luckily be benign.

Replies from: CarlShulman, XiXiDu
comment by CarlShulman · 2012-04-14T01:51:51.706Z · LW(p) · GW(p)

This exact attitude is rare. Much more common is the "let the AIs do their own thing, even if it eats humanity for breakfast, rather than shackling them to human-derived values" attitude, at least among AI folk (David Dalrymple, in the recent comment thread here, is one of many examples).

Also, he isn't saying humanity will be overlooked, but that it will be cheaply taken care of as specimens, zoo/nature-reserve animals, and possibly as ransom or to get in good with more powerful protectors of humanity (aliens or simulators). Or that AIs that don't care about us will be successfully constrained.

Replies from: timtyler, Dmytry
comment by timtyler · 2012-04-15T01:20:53.011Z · LW(p) · GW(p)

Much more common is the "let the AIs do their own thing, even if it eats humanity for breakfast, rather than shackling them to human-derived values" attitude, at least among AI folk (David Dalrymple, in the recent comment thread here, is one of many examples).

That is often known as "Beyondism".

Replies from: CarlShulman
comment by CarlShulman · 2012-04-19T00:19:16.613Z · LW(p) · GW(p)

Most proponents of the view in connection with AI, in my experience, don't seem to use the term or be familiar with Cattell. He's more associated with genetic enhancement, e.g. Jim Flynn (of the Flynn Effect) discusses and rejects Cattell's views in his book on moral philosophy and empirical knowledge, "How to defend humane ideals."

Replies from: timtyler
comment by timtyler · 2012-04-19T01:00:10.946Z · LW(p) · GW(p)

FWIW, I picked up the term from Roko - who occasionally talked about "beyondist transhumanism". Cattell's "beyondism" seems to be frequently compared to social Darwinism.

comment by Dmytry · 2012-04-14T07:32:25.977Z · LW(p) · GW(p)

This is another example of a method of thinking I dislike - thinking by very loaded analogies, and implicit framing in terms of a zero-sum problem. We are stuck on a mud ball with severe resource competition. We are very biased to see everything as a zero- or negative-sum game by default. One could easily imagine an example where we expand more slowly than the AI, and so our demands are always less than its charity, which is set at a constant percentage. Someone else winning doesn't imply you are losing.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2012-04-14T09:38:12.187Z · LW(p) · GW(p)

What you describe is arguably already a (mediocre) FAI, with all the attendant challenges.

Replies from: Dmytry
comment by Dmytry · 2012-04-14T17:39:26.713Z · LW(p) · GW(p)

With all of them? How so?

Replies from: ShardPhoenix
comment by ShardPhoenix · 2012-04-15T00:20:39.184Z · LW(p) · GW(p)

There are two main challenges: complexity of human values and safe self-modification. In order to correctly define the "charity percentage" so that what the AI leaves us is actually desirable, you need to be able to define human values about as well as a full FAI. Self-modification safety is needed so that it doesn't just change the charity value to 0 (which with a sufficiently general optimizer can't be prevented by simple measures like just "hard-coding" it), or otherwise screw up its own (explicit or implicit) utility function.

If you are capable of doing all that, you may as well make a proper FAI.

comment by XiXiDu · 2012-04-14T11:13:12.011Z · LW(p) · GW(p)

It seems you have become "less concerned with AI risks" but more concerned with "FAI risks".

See P8 here for why FAI risks might be more dramatic than UFAI risks.

comment by Wei Dai (Wei_Dai) · 2012-04-14T00:13:37.096Z · LW(p) · GW(p)

For example, if I point out that the AI has good reasons not to kill us all, due to it not being able to determine whether it is within the top-level world or a simulator or within an engineering test sim, it is immediately conjectured that we will still 'lose' something because it'll take up some resources in space.

To back up Carl's claim, see Outline of possible Singularity scenarios (that are not completely disastrous) and further links in the comments there. You know, I keep hoping that you'd update your evaluation of this community and especially your estimate of how much we've already thought about these things, but maybe it's time for me to update...

Replies from: steven0461, Dmytry
comment by steven0461 · 2012-04-14T00:31:25.356Z · LW(p) · GW(p)

You know, I keep hoping that you'd update your evaluation of this community and especially your estimate of how much we've already thought about these things, but maybe it's time for me to update...

Yes. In general, the useful commenters on LessWrong seem to spend too much time arguing with hopeless cases and not enough time arguing with other useful commenters.

Replies from: Wei_Dai, Wei_Dai, XiXiDu
comment by Wei Dai (Wei_Dai) · 2012-04-14T07:06:35.536Z · LW(p) · GW(p)

I just realized there's another possible explanation: discussions/arguments between "useful commenters" usually stop getting upvoted after a certain point (probably because the disagreements are usually over peripheral issues that don't interest a huge number of readers), whereas arguments against "hopeless cases" seem good for unlimited karma (probably because you're making central points that everyone can understand). Perhaps I and others have been unconsciously letting this affect our behavior?

Replies from: steven0461
comment by steven0461 · 2012-04-15T03:10:23.443Z · LW(p) · GW(p)

There are enough important differences of opinion between useful commenters about what we all should do on the grand scale that I would expect it to be at least possible, somehow, to create relatively high expected value by hashing these disagreements out. If the discussion is over peripheral issues that don't much affect the answer to such big questions, maybe we're going about it the wrong way.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-04-15T06:51:19.524Z · LW(p) · GW(p)

I see. I had hoped to raise some debate by posting Some Thoughts on Singularity Strategies, but few FAI supporters responded, and none from SIAI. I have the feeling (and also some evidence) that there aren't many people, aside from Eliezer, who are very gung-ho on trying to build an FAI directly.

I did have a private chat with Eliezer recently where I tried to find out why we disagree over FAI, and it seems to mostly come down to different estimates on how hard the philosophical problems involved are compared to his ability to correctly solve them.

Replies from: steven0461
comment by steven0461 · 2012-04-15T07:46:30.218Z · LW(p) · GW(p)

I did have a private chat with Eliezer recently where I tried to find out why we disagree over FAI, and it seems to mostly come down to different estimates on how hard the philosophical problems involved are compared to his ability to correctly solve them.

That's good to know. Was the disagreement more about how hard the philosophical problems are, or about how good Eliezer is at solving philosophical problems, or some of both?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-04-15T07:52:14.114Z · LW(p) · GW(p)

Some of both.

comment by Wei Dai (Wei_Dai) · 2012-04-14T01:11:17.432Z · LW(p) · GW(p)

I'm not sure. Arguing with "hopeless cases" is high risk but high return (if we succeed in bringing in new blood and new insights). Arguing with other "useful commenters" perhaps marginally improves our beliefs and how we approach the problems we're trying to solve, but much of the time when I disagree with some "useful commenter" I still think both of our approaches ought to be explored so there's not that much gain from arguing with them. I'd typically state my reasons (just in case I'm making some kind of gross error) and leave it at that if it doesn't change their mind.

Replies from: steven0461
comment by steven0461 · 2012-04-14T03:07:46.986Z · LW(p) · GW(p)

I think good new insights in practice tend to come from old commenters who rethink things one point at a time and not as much from new commenters who start out with an attitude of belligerent dismissal.

much of the time when I disagree with some "useful commenter" I still think both of our approaches ought to be explored so there's not that much gain from arguing with them

I don't understand. Doesn't arguing with them constitute exploring the different approaches?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-04-14T04:49:27.322Z · LW(p) · GW(p)

I think good new insights in practice tend to come from old commenters who rethink things one point at a time and not as much from new commenters who start out with an attitude of belligerent dismissal.

I think it's good to have some natural contrarians/skeptics around who like to find flaws in whatever ideas they see. I guess I played this role somewhat back in the OB days, but less so now that I'm closer to the "inner circle". Of course I was more careful to make sure the flaws are real flaws, and not very belligerent...

I don't understand. Doesn't arguing with them constitute exploring the different approaches?

Maybe we're not thinking about the same things. I'm talking about like when cousin_it or Nesov has some decision theory idea that I don't think is particularly promising, I tend to let them work on it and either reach that conclusion themselves or obtain some undeniable result, instead of trying to talk them out of it and work on my preferred approaches. What kind of arguments are you thinking of?

Replies from: steven0461
comment by steven0461 · 2012-04-15T03:15:46.157Z · LW(p) · GW(p)

I suppose I was thinking of arguments more informal than decision theory, and I suppose in the context of such informal arguments, exchanging a lot of small chunks of reasoning seems more useful than it does in the context of building decision theory models.

comment by XiXiDu · 2012-04-14T11:28:34.007Z · LW(p) · GW(p)

Yes. In general, the useful commenters on LessWrong seem to spend too much time arguing with hopeless cases and not enough time arguing with other useful commenters.

When I went on house-to-house preaching with other Jehovah's Witnesses as a child, this was almost exactly what more experienced members told me to do when we encountered people who didn't seem to understand that we were clearly right and only trying to warn them that the end is nigh.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2012-04-14T17:08:05.503Z · LW(p) · GW(p)

I couldn't follow that. Could you say it again in more detail?

What kinds of people would you encounter, and which were you told to spend time proselytizing? Were there many who immediately agreed with you? and was it best to give them lots of time?

comment by Dmytry · 2012-04-14T07:14:14.609Z · LW(p) · GW(p)

It's just that I don't believe you folks really are this greedy for the sake of mankind, or assume such linear utility functions. If we could just provide food, shelter, and reasonable protection of human beings from other human beings, for everyone, a decade earlier, that, in my book, outweighs all the difference between immense riches and more immense riches sometime later. (edit: if the difference ever realizes itself; it may be that at any moment in time we are still ahead)

On top of that, if you fear WBEs self-improving - don't we lose the ability to become WBEs, and to become smarter, under the rule of a friendly AI? Now, you have some perfect oracle in your model of the AI, and it concludes that this is OK, but I do not have a model of a perfect oracle in the AI, and it is abundantly clear that an AI of any power can't predict the outcome of allowing WBE self-improvement, especially under ethical constraints that forbid boxed emulation (and even if it could, there's the immense amount of computational resources taken up by the FAI to do this). Once again, the typically selective avenue of thought: you don't think each argument through as applied both to FAI and to AI, so as to make a valid comparison. I do know that you have already thought a lot about this issue (but I don't think you thought straight; this is not formal mathematics, where the inferences do not diverge from sense with the number of steps taken, it is fuzzy verbal reasoning, where they unavoidably do). You jump right here on the interpretation of what I think that is most favourable to you.

comment by Wei Dai (Wei_Dai) · 2012-04-13T23:30:15.691Z · LW(p) · GW(p)

(and why I am posting this: looking at the donations received by SIAI and having seen talk of hiring software developers, I got pascal-wagered into explaining it)

Where did you see talk of hiring software developers, and are you sure they're not web developers or something like that? I'd be concerned too if SIAI were hiring software developers to build FAI, but based on what I know about them, it seems extremely unlikely at this point.

Replies from: CarlShulman
comment by CarlShulman · 2012-04-13T23:37:47.364Z · LW(p) · GW(p)

Yeah, no one is being hired to code AGI at SIAI right now. Software developers are for the "Center for Modern Rationality"/LessWrong side, as I understand it, e.g. creating little programs to illustrate Bayes' rule and the like.
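
For concreteness, a minimal sketch of the kind of little illustration meant (a toy example of my own, not an actual SIAI/CMR tool): applying Bayes' rule to a base rate and a noisy test.

    def posterior(prior, p_e_given_h, p_e_given_not_h):
        """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
        joint = p_e_given_h * prior
        return joint / (joint + p_e_given_not_h * (1.0 - prior))

    # A 1% base rate with a 90%-sensitive test and a 5% false-positive rate:
    # a positive result raises the probability to only about 15%.
    print(posterior(prior=0.01, p_e_given_h=0.9, p_e_given_not_h=0.05))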

Eliezer wants an FAI team to undertake many years of theoretical CS and AI research before trying to code an AGI, and that research group has not even been assembled and is not currently in operation. Also, I would hope that it would have a number of members with comparable or superior intellectual chops who would act as a check on any of Eliezer's individual biases.

Replies from: Dmytry
comment by Dmytry · 2012-04-14T07:05:40.797Z · LW(p) · GW(p)

Also, I would hope that it would have a number of members with comparable or superior intellectual chops who would act as a check on any of Eliezer's individual biases.

Not if there is self-selection for coincidence of their biases with Eliezer's. Even worse if the reasoning you outlined is employed to lower risk estimates.

comment by JoshuaZ · 2012-04-14T03:25:52.698Z · LW(p) · GW(p)

Can you expand on why you don't like the random mindspace argument? I'm curious to hear that. I don't think anyone making that argument is arguing that the first strong AIs will hit completely random points in mindspace, and I don't think anyone is arguing that they have a precise probability measure or notion of metrics on mindspace. The argument is purely that mindspace seems to be large and that points in mindspace very close to humans could easily be highly inimical to our value system. In that context, what is your objection?

Replies from: None, timtyler, XiXiDu
comment by [deleted] · 2012-04-14T07:27:06.426Z · LW(p) · GW(p)

The argument is purely that mindspace seems to be large and that points in mindspace very close to humans could easily be highly inimical to our value system.

Considering the diversity of human values, I think other humans already are a working demonstration of a "point in mindspace very close" to ours that is nevertheless "highly inimical to our value system".

comment by timtyler · 2012-04-14T22:32:57.595Z · LW(p) · GW(p)

The argument is purely that mindspace seems to be large and that points in mindspace very close to humans could easily be highly inimical to our value system. In that context, what is your objection?

That argument seems to be true - but insignificant. Similarly, programs with a small Hamming distance from Microsoft Windows crash when executed. So what? That doesn't mean that the operating system is unlikely to work.

This sort of statistic is just not very relevant - unless the aim is to sound scary.

Replies from: roystgnr
comment by roystgnr · 2012-04-15T02:29:57.341Z · LW(p) · GW(p)

It's not the risk of an AI crashing that is worrying. To continue in the form of your analogy:

Programs a small distance from correct IDE drivers have overwritten large chunks of a couple of my hard drives with garbage, leaving data irrecoverable. These programs had all the code in them to do low-level edits to hard drives, so a slight error simply caused them to write horribly wrong things.

Programs a small distance from correct video drivers have put garbage on my computer monitor. This one is so common that I can recall random colored ASCII text, stretched and distorted versions of the correct image, clips of data that had been "freed" but not overwritten by other programs using video memory, large blocks of color... in each case the driver had all the code in it to edit the image on the screen, and lots of different bugs led to writing various sorts of grossly incorrect images.

So if we write a program which has all the code in it to try to edit the universe according to its values, and there's a bug in the part which tells it that its values are our values, what do we expect to happen?

And unless people are all quite paranoid, there will be lots of bugs. Windows XP SP2 included over a thousand bug fixes. I agree that our first AGIs are likely to be as correct as our first operating systems. This is not reassuring.

Replies from: timtyler
comment by timtyler · 2012-04-15T11:12:01.346Z · LW(p) · GW(p)

It's not the risk of an AI crashing that is worrying.

That wasn't really the point of the analogy. The idea was of a target representing success being surrounded by a larger space of failure. The seriousness of the failure was intended to be an incidental aspect of the analogy.

comment by XiXiDu · 2012-04-14T11:08:27.044Z · LW(p) · GW(p)

Can you expand on why you don't like the random mindspace argument?

See P4, third paragraph, here.

Replies from: othercriteria
comment by othercriteria · 2012-04-14T12:59:39.382Z · LW(p) · GW(p)

Your link seems to address only a restricted case of the random mind space argument, where an AI is given a correctly specified goal but insufficient constraints on its behavior wrt resources. The randomness is not in what it principally values (e.g., paperclips) but in what else it values. A complete counterargument should address the case where, say, we try to create a paperclip maximizer and end up creating a staple maximizer.

comment by David_Gerard · 2012-04-13T21:08:33.786Z · LW(p) · GW(p)

BTW, have you read The Hanson-Yudkowsky AI-Foom Debate? I just read through it again last night (after rereading your previous post). Robin Hanson is the economist (with two degrees in physics) who does the Overcoming Bias blog linked in the sidebar, and the LessWrong sequences were first posted at OB. Hanson absolutely doesn't buy the FOOM scenario either.

Replies from: timtyler
comment by timtyler · 2012-04-16T03:38:06.509Z · LW(p) · GW(p)

Hanson doesn't think it likely that a small group will "take over the world".

He does picture pretty rapid progress being caused by machine intelligence fairly soon.

Replies from: lukeprog
comment by lukeprog · 2012-04-16T05:04:31.451Z · LW(p) · GW(p)

Yeah, Hanson's newest article argues that you don't need some weird 'intelligence explosion' to get a Kurzweilian singularity: you just need standard economic growth rates to continue. He seems to think the most likely scenario is takeover of the economy and everything else by ems and eventually "hardscrabble hell," which might sound terrible to us but, what the hell: it's just another intergenerational conflict. So if somebody is looking for solace in Hanson's own particular doubts about intelligence explosion, I doubt they'll find it. :)

Replies from: timtyler
comment by timtyler · 2012-04-16T22:34:17.284Z · LW(p) · GW(p)

Hanson's newest article argues that you don't need some weird 'intelligence explosion' to get a Kurzweilian singularity: you just need standard economic growth rates to continue.

It sounds a teensy bit like my own "The Intelligence Explosion Is Happening Now".

He seems to think the most likely scenario is takeover of the economy and everything else by ems and eventually "hardscrabble hell," which might sound terrible to us but, what the hell: it's just another intergenerational conflict.

Brain emulations coming first or mattering much should be assigned a low probability by clued-in folks, IMO.

Natural selection on people does seem like a possible outcome. However, like the SIAI folk, I'm more inclined towards thinking that there will be unified rule and self-directed evolution - broadly along the lines that Pierre Teilhard de Chardin foresaw.

The good thing about natural selection directing things is that it might help to keep us from going off the rails - and eventually getting assimilated by aliens. At least a competitive universe won't wirehead itself. Pure self-directed evolution by one dominant agent could lead to a big, fat - but ultimately screwed-up - future.

comment by wedrifid · 2012-04-14T04:55:21.203Z · LW(p) · GW(p)

Now, do not think of it in terms of fixing the good idea's argument, please. Treat it as evidence that the idea is, actually, bad, and process it so as to make a better idea - which may or may not coincide with the original idea. You can't know right now whether your idea is in fact good or not - rather than fixing it, you should make a new idea. To do anything else is not rationality. It is rationalization. It is to become even more wrong by making even more privileged hypotheses, and to make an even worse impression on the engineers whom you try to convince.

You seem to be grossly overvaluing the weight we should place on your personal testimony as a "Software Developer". It is most certainly not 'irrational' to not abandon an idea simply because you say so very frequently and very assertively. About half the people here are software developers, and many are mathematicians as well. I've also seen the intellectual work some of them output - which is what you declared we should evaluate people on - and it is orders of magnitude more impressive than what we have seen from you.

It does not require gross failures of rationality to not update drastically and abandon ideas based on one anecdote from a software developer with little knowledge of this field making ultimatums. This is, indeed, "evidence that the idea is, actually, bad", but it is overwhelmingly weak evidence, and it would be a mistake to treat it as more.

Replies from: CarlShulman, XiXiDu, Dmytry
comment by CarlShulman · 2012-04-14T05:10:41.648Z · LW(p) · GW(p)

About half the people here are software developers,

Good point.

and it is orders of magnitude more impressive

Really? On what scale? Absolute productivity in dollar terms, or minds affected? Rarity in the population? This sounds like hyperbole.

Replies from: wedrifid
comment by wedrifid · 2012-04-14T05:30:57.723Z · LW(p) · GW(p)

Really? On what scale? Absolute productivity in dollar terms, or minds affected? Rarity in the population? This sounds like hyperbole.

Pardon me - I was referring somewhat more specifically to direct intellectual/academic output, not graphics-based software development projects. I know very little about how many dollars have been earned by the people in question (and can't say I have ever been all that curious).

comment by XiXiDu · 2012-04-14T10:51:43.781Z · LW(p) · GW(p)

About half the people here are software developers, and many are mathematicians as well. I've also seen the intellectual work some of them output - which is what you declared we should evaluate people on - and it is orders of magnitude more impressive than what we have seen from you.

There are more people who have done more impressive work who disagree with AI risk. If I were only going to judge AI risk based on who advocates it, then worrying about AI risk would clearly be mistaken.

Replies from: Viliam_Bur, timtyler
comment by Viliam_Bur · 2012-04-14T12:32:07.114Z · LW(p) · GW(p)

I guess the point was that if we are going to consider "software developer output" even as weak evidence in this debate, why consider Eliezer's output and not the best output of people who agree with him?

An analogy: Imagine that there is a mathematical problem. A twelve-year-old child solves the problem and says "x = 10". Then a university professor of mathematics looks at the problem and the solution and says "indeed, you are right". Taking this story as a whole, would you judge the "x = 10" hypothesis by the credentials of the child, or of the professor? Further, imagine that another university professor of mathematics looks at the problem and says "actually, this is wrong, x = 12"; and then the two professors start a long discussion about whether "x = 10" or "x = 12". Again, would you frame this debate as a "child versus professor" debate or as a "professor against professor" debate?

The point is, the argument "I am an impressive software developer and I say EY is wrong, and EY is not an impressive software developer" is weakened by saying "well, there are other impressive software developers who say EY is right".

comment by timtyler · 2012-04-15T01:27:43.461Z · LW(p) · GW(p)

There are more people who have done more impressive work who disagree with AI risk. If I were only going to judge AI risk based on who advocates it, then worrying about AI risk would clearly be mistaken.

It probably depends on your values. Most people are more worried about being hit by a car than about being eaten by a superintelligent machine. With common values, their beliefs about which issue is more important are absolutely justified.

comment by Dmytry · 2012-04-14T06:39:54.011Z · LW(p) · GW(p)

For every one of those people you can find one, or ten, or a hundred, or a thousand, who dismissed your cause. Don't go down this road for confirmation; that's how self-reinforcing cults are made.

Replies from: wedrifid
comment by wedrifid · 2012-04-14T08:10:01.897Z · LW(p) · GW(p)

For every one of those people you can find one, or ten, or a hundred, or a thousand, who dismissed your cause. Don't go down this road for confirmation; that's how self-reinforcing cults are made.

I didn't go down any road for confirmation. I put your single testimony in a more realistic perspective. Not believing one person who seems to have a highly emotional agenda isn't 'cultish', it's just practical.

Replies from: Dmytry
comment by Dmytry · 2012-04-14T08:26:45.585Z · LW(p) · GW(p)

I didn't go down any road for confirmation. I put your single testimony in a more realistic perspective. Not believing one person who seems to have a highly emotional agenda isn't 'cultish', it's just practical.

I think you grossly overestimate how much emotional agenda can disagreement with counterfactual people produce.

edit: botched the link.

Replies from: wedrifid
comment by wedrifid · 2012-04-14T08:32:14.484Z · LW(p) · GW(p)

I didn't go down any road for confirmation. I put your single testimony in a more realistic perspective. Not believing one person who seems to have a highly emotional agenda isn't 'cultish', it's just practical.

I think you grossly overestimate how much emotional agenda can disagreement with counterfactual people produce.

This doesn't make sense as a reply to the context. I'm not sure it makes any sense as a matter of English grammar either.

comment by Andy_McKenzie · 2012-04-13T22:54:35.991Z · LW(p) · GW(p)

It seems to me that your opinion is that Eliezer was the driving force behind SIAI and that all of the other people involved, and who have donated to it, are basically "followers". This is my inference based on the fact that so many of your arguments have to do with Eliezer's credentials, e.g.

I would take you far more seriously if you spent 1 week of your time to get into first 5 on a marathon contest on TopCoder

I am skeptical of this reasoning. It seems to me that the views of people like Nick Bostrom and Anna Salamon should also be considered evidence in favor of the FOOM hypothesis, if this is the angle we are evaluating the question from. Surely their beliefs are not completely independent; but surely they are not completely dependent, either.

Replies from: shminux
comment by shminux · 2012-04-13T23:02:43.002Z · LW(p) · GW(p)

Eliezer was the driving force behind SIAI and that all of the other people involved, and who have donated to it, are basically "followers".

No one sane would argue with this statement. Proof: imagine the fate of SIAI if EY suddenly quits (say, for personal reasons). Or don't imagine, ask Anna and Luke what they would do in this case.

Replies from: CarlShulman, John_Maxwell_IV, Andy_McKenzie
comment by CarlShulman · 2012-04-13T23:32:34.093Z · LW(p) · GW(p)

It's clear that Eliezer has been the driving force behind SIAI existing as an organization. He founded it, his writings have been its most visible and influential face, he wants to organize an FAI team of which he would be a member, and so forth. The Singularity Summit is basically independent of him, and he was not particularly involved in the Visiting Fellows and rationality camp events that have occurred so far (those proceeded mostly without him), but "driving force" remains very fair.

However, a number of SIAI folk like Michael Vassar and myself were independently interested in AI risk (and benefit) before coming in contact with Eliezer or his work, and would have likely continued to pursue other paths to affect this area sans EY. I think that this is important for the underlying question about independence of beliefs, and the two were bundled together.

Also, Nick Bostrom's work has been influential for a number of people, especially his papers on the ethics of astronomical waste and superintelligence.

Replies from: Dmytry
comment by Dmytry · 2012-04-14T06:47:23.175Z · LW(p) · GW(p)

EY founded it. Everyone else is self selected for joining (as you yourself explained), and represents extreme outliers as far as I can tell.

comment by John_Maxwell (John_Maxwell_IV) · 2012-04-14T05:21:45.099Z · LW(p) · GW(p)

I'm well acquainted with both Anna and Luke (mainly due to geographic accident--I used to attend UC Berkeley), and I'm pretty sure they would still work on rationality and AI if Eliezer quit for personal reasons.

It's interesting to see the difference between the rationalist culture online and in real life. No one in real life, to my knowledge, who is actually acquainted with the people behind SI has quite the attitude you do. Reminds me of this letter I read recently:

http://www.lettersofnote.com/2012/03/i-am-very-real.html

FWIW, I think that there are probably aspects of less wrong culture that could be improved; I'd like to see it become more egalitarian, with less voting down.

To my knowledge, nobody has ever complained about excess noise relative to signal on Less Wrong, which indicates to me that we can ease up on the moderation somewhat, e.g. by restricting downvote velocity heavily. Hopefully we could buy additional diversity of opinions at a very low cost of boring and low-quality posts.

Replies from: shminux, Viliam_Bur
comment by shminux · 2012-04-14T17:55:33.238Z · LW(p) · GW(p)

It's interesting to see the difference between the rationalist culture online and in real life. No one in real life, to my knowledge, who is actually acquainted with the people behind SI has quite the attitude you do.

Yeah, people often come across very differently online and IRL, not much can be done about it. I suppose that some restraints keeping people friendlier in person are absent when all one sees is a string of text.

As for voting down, I prefer it as is (though I would like to encourage downvoters to explain why. I try to do it when I downvote, anyhow.)

comment by Viliam_Bur · 2012-04-14T13:04:51.941Z · LW(p) · GW(p)

To my knowledge, nobody has ever complained about excess noise relative to signal on Less Wrong, which indicates to me that we can ease up on the moderation somewhat, e.g. by restricting downvote velocity heavily.

I prefer it this way. Maybe somewhat less moderation would be better, but I think we will get there gradually without announcing such intent. I am afraid that trying to intentionally reduce moderation could easily lead to a too-noisy site, and the change back would be painful, because it would create conflicts.

It seems to me that web communities tend to become less moderated as time goes on. I have also seen a few communities with explicit rules which were intentionally broken, and then the offenders complained loudly about censorship and created huge mindkilling debates; and I fear that the debate about voting without explicit rules would be even worse -- people accusing other people of censorship by downvoting, unproved accusations of karma assassination, well-meaning people upvoting worthless content just to provide "freedom" and "balance" against the supposed censors and thus completely ruining the feedback system. Maybe the LW community would handle it with greater rationality, but to me the danger is not worth risking.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-14T14:30:50.960Z · LW(p) · GW(p)

Maybe the LW community would handle it with greater rationality, but to me the danger is not worth risking.

I am not confident that's true of the current community (call it <10%). I am still less confident (call it <1%) that it would be true of the community that would replace the current community, should those changes be made.

comment by Andy_McKenzie · 2012-04-13T23:11:06.553Z · LW(p) · GW(p)

Paging Anna and Luke. I expect that they would stick with it, assuming they still had enough funding. Organizations are designed to be robust to individuals.

I will rephrase the statement a little bit, and emphasize a distinction between "SIAI" and some "AI ethics-oriented institution". We can't re-run history, but there are enough people interested in these topics that I'd expect some non-profit devoted to this cause would have sprung into existence at some point between 2003 and 2017, even if Eliezer had taken a different path. Is there any way we can test this?

Replies from: fubarobfusco, wedrifid
comment by fubarobfusco · 2012-04-14T03:44:57.999Z · LW(p) · GW(p)

Organizations are designed to be robust to individuals.

Only if such design work has actually taken place ... or if the organizations you're sampling have already been subjected to selection on this basis. It isn't magically so.

Replies from: Andy_McKenzie
comment by Andy_McKenzie · 2012-04-14T19:21:34.146Z · LW(p) · GW(p)

I should have said "typically."

comment by wedrifid · 2012-04-14T05:20:25.424Z · LW(p) · GW(p)

I expect that they would stick with it, assuming they still had enough funding.

The funding is the key - assuming they can keep their major donor or acquire sufficient funding elsewhere, losing Eliezer is hardly a dealbreaker. It's just an HR problem and Luke would solve it. It may take the better part of a decade to get someone trained to Eliezer's level of expertise, but it's something that could be done concurrently with everything else (for example, concurrently with all the other Eliezer-replacements that were in training at the same time.)

comment by drethelin · 2012-04-13T22:46:07.978Z · LW(p) · GW(p)

Can you give these equally convincing counter-arguments, so I can respond to them with rationalization?

comment by Dustin · 2012-04-14T15:55:11.739Z · LW(p) · GW(p)

I like the idea of this post much more than I like the actual post. Something about it rubs me the wrong way. I upvoted because on balance there are some good points, but it feels like it's written from a standpoint of "Here, let me teach you poor people how to think, aren't I so great, na na na".

This feeling may be because nowadays I don't have the time to keep up with everything Dmytry (or anyone) has written here, so I'm missing background information that puts this post into a better context. (Some links to previous discussions would be nice. Especially with points like #1 that are just bald assertions.) Anyway, the emotions your writing evokes in people are important. It seems like many engineering-type people don't grasp this point very well. I guess it would be better if we didn't have to worry about evoked emotions, but that's not the universe we live in.

Skepticism over our whole purpose here is valuable. Personally, I'm very interested in good critiques of EY's thoughts. Not because I have any particular problems with what EY has said, but because I like his writing so much. It makes me nervous that I'm missing something because he's such a good writer that he convinced me of something untrue without me even noticing it.

Replies from: AlanCrowe
comment by AlanCrowe · 2012-04-15T14:31:32.218Z · LW(p) · GW(p)

Discussion of this topic suffers from asymmetrical motivation. If you disagree with a mainstream position, arguing against it feels worth while. If you agree with a fringe position, arguing in favour of it feels worth while. But if you disagree with a fringe position, why bother?

The mainstream of research in AI thinks that we are safe from an unfriendly artificial general intelligence, and that we have three layers of protection.

  • We have tried programming AGI's and failed. Humans suck at programming computers. AGI might be possible in theory, but not on this planet.

  • Researchers and funders have both learned that lesson. Even if humans could program an AGI, we are safe because no-one is working on it.

  • We failed really hard. Even if people returned to AGI research and overturned precedent with dramatic breakthroughs we are still safe because of the scale of the challenge. If dangerous success is a 10, and we had given up in the past because our efforts only ever ranked 7 or 8, then an AGI research revival that hoped to get to 9 might succeed better than expected and get to 10. Whoops! But really, AGI fell into disrepute because it was over-hyped crap. We only ever scored 2 or 3 on the fully general stuff. So even major unexpected breakthroughs, that score 5, when we were hoping for 4, still leave us decades to rethink whether there is anything to worry about.

I started this comment with the phrase asymmetric motivation and, having briefly sketched in why the mainstream isn't interested in discussing the issue, I can give an example of how this hurts the discussion. Is it really true that "we are safe because no-one is working on it."? That is not actually a reassuring argument. If you could get a member of the mainstream to engage with the issue they would quickly patch it. AGI is way too hard for a lone genius in a basement. It needs a research community bigger than a fairly substantial critical mass. The point could be elaborated and, fully worked out, may be convincing, but if one just doesn't believe that AGI poses a risk, why bother?

comment by Incorrect · 2012-04-14T00:10:00.713Z · LW(p) · GW(p)

Your art is beautiful, I'm curious how you got started with graphics programming.

comment by timtyler · 2012-04-14T22:43:48.688Z · LW(p) · GW(p)

Very simple probabilistic reasoning (Bayesian, if you insist) makes it incredibly unlikely the AI consequences aspect of lesswrong is not a form of doomsday cult - perhaps a cult within a noncult group.

So: SIAI do closely resemble a doomsday cult. Indeed, they pretty much are a doomsday cult - but a tech-savvy and relatively sane one. As such, they are pretty interesting - from a sociological perspective - IMHO.

comment by John_Maxwell (John_Maxwell_IV) · 2012-04-14T05:02:13.802Z · LW(p) · GW(p)

The 'random mind design space' is probably the worst offender.

My understanding was that this wasn't any attempt to rigorously formulate the idea of a randomly chosen mind, just suggest the possibility of a huge number of possible reasoning architectures that didn't share human goals.

There isn't a solid consequentialist reason to think that FAI effort decreases chance of doomsday as opposed to absence of FAI effort. It may increase the chances as easily as decrease.

This is one of those points you really should've left out... If you got something to say on this topic, say it, we all want to hear it (or at least I do). Of course it's not obvious that FAI effort will certainly be helpful, but empirically, people trying to do things seems to make it more likely that they get done.

It appears to me that will to form most accurate beliefs about the real world, and implement solutions in the real world, is orthogonal to problem solving itself.

Have you heard of G?

Intelligent people tend to be impractical because of bugs in human brains that we shouldn't expect to appear in other reasoning architectures.

Of course general intelligence is a complicated multifaceted thing, but that doesn't mean it can't be used to improve itself. Humans are terrible at improving ourselves because we don't have access to our own source code. What if that changed?

Foom scenario is especially odd in light of the above. Why would optimizing compiler that can optimize it's ability to optimize, suddenly emerge will? It could foom all right, but it wouldn't get out and start touching itself from outside; and if it would, it would wirehead rather than add more hardware; and it would be incredibly difficult to prevent it from doing so.

You seem awfully confident. If you have a rigorous argument, could you share it?

You might wish to read someone who disagrees with you:

http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf

(and why I am posting this: looking at the donations received by SIAI and having seen talk of hiring software developers, I got pascal-wagered into explaining it)

Thanks a lot; one of the big problems with fringe beliefs is that folks rarely take the time to counterargue since there isn't much in it for them.

Even better is if such criticism can take the form of actually forming a consistent contrasting point of view supported by rigorous arguments, without making drama out of the issue, but I'll take what I can get.

If you care for future of mankind, and if you believe in AI risks, and if a software developer, after an encounter with you, becomes less worried of the AI risk, then clearly you are doing something wrong.

That seems wrong to me. When I hear that some group espouses some belief, I give them a certain amount of credit by default. If I hear their arguments and find them less persuasive than I expected, my confidence in their position goes down.

I have certainly seen some of what I would consider privileging of the hypothesis being done by AGI safety advocates. However, groupthink is not all or nothing; better to extract the best of the beliefs of others than throw them out wholesale.

Some of your arguments are very weak in this post (e.g. the lone genius point basically amounts to ad hominem) and you seem to refer frequently to points that you've made in the past without linking to them. Do you think you could assemble a directory of links to your thoughts on this topic ordered from most important/persuasive to least?

Replies from: Dmytry, Dmytry, Wei_Dai
comment by Dmytry · 2012-04-14T06:53:43.011Z · LW(p) · GW(p)

e.g. the lone genius point basically amounts to ad hominem

But why is it irrational, exactly?

but empirically, people trying to do things seems to make it more likely that they get done.

As long as they don't use this heuristic too hard to choose which path to take. If it can be shown that some non-explicitly-friendly AGI design is extremely safe, while the FAIs are a case of higher risk with a chance at a slightly better payoff, are you sure that this is what has to be chosen?

Replies from: Normal_Anomaly, John_Maxwell_IV
comment by Normal_Anomaly · 2012-04-14T18:25:45.143Z · LW(p) · GW(p)

If it can be shown that some non-explicitly-friendly AGI design is extremely safe, while the FAIs are a case of higher risk with a chance at a slightly better payoff, are you sure that this is what has to be chosen?

This line of conversation could benefit from some specificity and from defining what you mean by "explicitly friendly" and "safe". I think of FAI as AI for which there is good evidence that it is safe, so for me, whenever there are multiple AGIs to choose between, the Friendliest one is the lowest-risk one, regardless of who developed it and why. Do you agree? If not, what specifically are you saying?

comment by John_Maxwell (John_Maxwell_IV) · 2012-04-14T07:48:09.135Z · LW(p) · GW(p)

It's not irrational, it's just weak evidence.

I'm not sure exactly what you're asking with the second paragraph. In any case, I don't think the Singularity Institute is dogmatically in favor of friendliness; they've collaborated with Nick Bostrom on thinking about Oracle AI.

Replies from: Dmytry
comment by Dmytry · 2012-04-14T07:54:21.938Z · LW(p) · GW(p)

It's not irrational, it's just weak evidence.

Why is it necessarily weak? I found it very instrumentally useful to try to factor out the belief-propagation impacts of people with nothing clearly impressive to show. There is a small risk I miss some useful insights. There is much lower pollution with privileged hypotheses given wrong priors. I am a computationally bounded agent. I can't process everything.

Replies from: Viliam_Bur, John_Maxwell_IV
comment by Viliam_Bur · 2012-04-14T12:21:40.059Z · LW(p) · GW(p)

I found it very instrumentally useful to try to factor out the belief-propagation impacts of people with nothing clearly impressive to show. There is a small risk I miss some useful insights. There is much lower pollution with privileged hypotheses given wrong priors.

It's perfectly OK to give low priors to strange beliefs, like: "Here is EY, a guy from the internet who found a way to save the world, because all scientists are wrong. Everybody listen to him, take him seriously despite his lack of credentials, give him your money and spread his words." However, low does not mean infinitely low. A hypothesis with a low prior can still be saved by sufficient evidence.

For example, the hypothesis that "Washington is the capital city of the USA" also has a very low prior, since there are over 30 000 towns in the USA and only one of them can be the capital, so why exactly should I privilege the Washington hypothesis? But there happens to be more than enough evidence to override that initially low prior.
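(A rough numerical sketch of that point, with numbers invented purely for illustration: prior odds of roughly 1 in 30 000 are overwhelmed by a few pieces of reasonably strong, independent evidence.)

```python
# Toy illustration only: the likelihood ratios are made up.
prior_odds = 1.0 / 30_000          # "this particular town is the capital"

likelihood_ratio = 100.0           # each observation (a map, a textbook, a
                                   # news report) is 100x likelier if true
observations = 3

posterior_odds = prior_odds * likelihood_ratio ** observations
posterior_prob = posterior_odds / (1.0 + posterior_odds)
print(round(posterior_prob, 4))    # ~0.971: low prior, but enough evidence
```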

So basically the question is how much evidence EY needs before it becomes rational to consider his thoughts seriously (which does not yet mean he is right); how low exactly is this prior? So... How many people on this planet are putting a comparable amount of time and study into the topic of the values of artificial intelligence? Is he able to convince seemingly rational people, or is he followed by a bunch of morons? Is his criticism of scientific processes just unsubstantiated school-dropout envy, or is he proven right? Etc. I don't pretend to do a Bayesian calculation; it just seems to me that the prior is not that low, and there is enough evidence. (And by the way, Dmytry, your presence at this website is also weak evidence, isn't it? I guess there are millions of web pages that you do not read and comment on regularly. There are even many computer-related or AI-related pages you don't read, but you do read this one -- why?)

comment by John_Maxwell (John_Maxwell_IV) · 2012-04-14T08:39:09.428Z · LW(p) · GW(p)

Oh please. There's a difference between what makes a useful heuristic for you to decide what to spend time considering and what makes for a persuasive argument in a large debate where participants are willing to spend time hashing out specifics.

DH1. Ad Hominem.
An ad hominem attack is not quite as weak as mere name-calling. It might actually carry some weight. For example, if a senator wrote an article saying senators' salaries should be increased, one could respond:

Of course he would say that. He's a senator.

This wouldn't refute the author's argument, but it may at least be relevant to the case. It's still a very weak form of disagreement, though. If there's something wrong with the senator's argument, you should say what it is; and if there isn't, what difference does it make that he's a senator?

http://paulgraham.com/disagree.html

I found it very instrumentally useful to try to factor out the belief-propagation impacts of people with nothing clearly impressive to show.

If even widely read bloggers like EY don't qualify to affect your opinions, it sounds as though you're ignoring almost everyone.

No one is expecting you to adopt their priors... Just read and make arguments about ideas instead of people, if you're trying to make an inference about ideas.

Replies from: Dmytry
comment by Dmytry · 2012-04-14T08:48:17.358Z · LW(p) · GW(p)

If even widely read bloggers like EY don't qualify to affect your opinions, it sounds as though you're ignoring almost everyone.

I think you discarded one of the conditionals. I read Bruce Schneier's blog. Or Paul Graham's. Furthermore, it is not about disagreement with the notion of AI risk. It's about keeping the data non-cherry-picked, or at least less cherry-picked.

comment by Dmytry · 2012-04-14T21:43:31.792Z · LW(p) · GW(p)

"You might wish to read someone who disagrees with you:"

Quoting from

http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf

To say that a system of any design is an “artificial intelligence”, we mean that it has goals which it tries to accomplish by acting in the world.

I had been thinking: could it be that a respected computer vision expert indeed believes that the system will just emerge world intentionality? That'd be pretty odd. Then I see it is his definition of AI here; it already presumes a robust implementation of world intentionality. Which is precisely what a tool like an optimizing compiler lacks.

edit: and in advance of the other objection: I know evolution can produce whatever the argument demands. Evolution, however, is a very messy and inefficient process for making very messy and inefficient solutions to problems nobody has ever even defined.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-04-15T00:14:55.207Z · LW(p) · GW(p)

I'm not sure there is a firm boundary between goals respecting events inside the computer and those respecting events outside.

Who has made this optimizing compiler claim that you attack? My impression was that AI paranoia advocates were concerned with efficient cross domain optimization, and optimizing compilers would seem to have a limited domain of optimization.

By the way, what do you think the most compelling argument in favor of AI paranoia is? Since I've been presenting arguments in favor of AI paranoia, here are the points against this position I can think of offhand that seem most compelling:

  • The simplest generally intelligent reasoning architectures could be so complicated as to be very difficult to achieve and improve, so that uploads would come first and even future supercomputers running them would improve them slowly: http://www.overcomingbias.com/2010/02/is-the-city-ularity-near.html
  • I'm not sure that a "good enough" implementation of human values, trained on a huge barrage of moral dilemmas (somehow sampled from a semi-rigorously defined sample space) together with the solutions humans say they want implemented, would be that terrible. Our current universe certainly wasn't optimized for human existence, but we're doing all right at this point.
  • It seems possible that seed AI is very difficult to create by accident, in the same way an engineered virus would be very difficult to create by accident, so that for a long time it is possible for researchers to build a seed AI but they don't do it because of rudimentary awareness of risks (which aren't present today though).
Replies from: timtyler
comment by timtyler · 2012-04-15T15:17:36.304Z · LW(p) · GW(p)

Progress so far has been mostly good. We see big positive trends towards improvement. Some measure of paranoia is usually justified, but excessive paranoia is often unhelpful.

comment by Wei Dai (Wei_Dai) · 2012-04-14T05:18:07.256Z · LW(p) · GW(p)

If you got something to say on this topic, say it, we all want to hear it (or at least I do).

I think he was talking about this post.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-04-14T05:40:14.239Z · LW(p) · GW(p)

It's too bad he didn't summarize and link to all of the stuff he's written on this topic, and just give it an inflammatory headline if he felt the issue deserved more attention...

comment by shminux · 2012-04-13T22:45:06.121Z · LW(p) · GW(p)

Bravo! If any post deserves to be on main, it's this entry, as it expresses what a lot of people think, here and elsewhere.

EDIT: To clarify, the productive discussion that will hopefully ensue would go a long way toward either addressing or validating the arguments presented, and toward helping evaluate the degree of cultishness of the forum for those of us not yet absolutely convinced either way.

Replies from: Dmytry
comment by Dmytry · 2012-04-14T08:37:02.420Z · LW(p) · GW(p)

Thanks. Glad you like it. I did put some work into it. I also have a habit of keeping epistemic hygiene by not generating a hypothesis first then cherry-picking examples in support of it later, but that gets a lot of flak outside scientific or engineering circles.

comment by timtyler · 2012-04-15T13:02:02.608Z · LW(p) · GW(p)

Here is a blog post by the O.P. (Dmytry) - on the topic of Futurism and Artificial Intelligence. It says:

I looked into the issue and it also seem to be strongly centring around cryogenic brain preservation, with advocacy of slowing down technological progress. I’ll go cynical and say that the reason they suddenly care so much for future is that they want to live forever safer. Never mind all the people who lack shelter and food and water right now, and protection from other people, never mind that while the progress got us there, if it is to slow down things will get even worse.

It does appear to me that the SIAI advocates slowing down technological progress by the rest of the world.

They would rather the progress happens within the SIAI - so they can "win" - and thus SAVE THE WORLD.

It also says:

The practical issue with such AI fear mongering is that it raises the probability of unabomber-like incidents.

Maybe. The associated negative marketing isn't a great sign either. I don't think Eliezer Yudkowsky had a good basis for claiming that "Novamente *would* destroy the world if it worked". It just seems like bad-mouthing a competitor's product to me. I think it is best to stick to the established facts about competitors' products.

comment by lavalamp · 2012-04-15T01:20:04.379Z · LW(p) · GW(p)

Meta: I found your second paragraph extremely off-putting. There are lots of software engineers here. I'm a software engineer. Software engineering is not a special thing that makes your opinion more valuable than average, at least not around this crowd.

Anyway, as a data point I agree very much with your point #6, but I think your point #2 is bad reasoning (as well as being factually questionable, as mentioned in other comments).

comment by timtyler · 2012-04-15T01:12:41.553Z · LW(p) · GW(p)

Unless the work is in fact focussed in some secret FAI effort, it seems likely that some automated software development tool would foom, reaching close to absolute maximum optimality on certain hardware.

The path of "automated programming".

However, there currently seems to be more money and effort in hedge funds, search and networking tools. Automated software development tools are not an obvious bet.

comment by fubarobfusco · 2012-04-14T03:33:41.708Z · LW(p) · GW(p)

The unknown-origin beliefs that I have been exposed to previously trace back to bad ideas, and as I update, those unknown-origin ideas get lower weight.

Could you expand on this?

Replies from: Dmytry
comment by Dmytry · 2012-04-14T06:49:14.510Z · LW(p) · GW(p)

The hyper-foom is the worst. The cherry picked filtration of what to advertise is also pretty bad.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-04-14T15:01:50.533Z · LW(p) · GW(p)

I haven't been able to find the expression "hyper-foom" on this site, so I'm not sure what it is that you are picking out for criticism there.

comment by CarlShulman · 2013-01-23T01:40:54.035Z · LW(p) · GW(p)

I thought that estimate (which is for current people) was too high, talked through the parameters and further complications more with Anna later, and it hasn't been re-used since. But in any case, it was about work aimed at AI risk in general, rather than SI. It didn't, for example, claim that SI or an SI AGI project was a much better bet than research like that at FHI, or finding other ways to address potential risk.

Where I disagree with Eliezer, I have registered my views in many comments here on LW, presentations, response to questions, and the like, but he speaks for himself.

Flap of wings of butterfly

I don't buy the zero-knowledge claim. There are general heuristics (not always right, but still helpful) that more knowledge and more thinking a problem through tend to lead to better responses, and that trying to find ways to achieve X makes you more likely to realize X.

comment by Jonathan_Graehl · 2012-08-07T23:36:05.063Z · LW(p) · GW(p)

I don't even understand if you're saying that AI risk is more or less than EY thinks it is (I guess less, since you're saying he intended to give you evidence that would make the severity+urgency seem higher, but it had the opposite effect).

I think you're saying the FAI approach suggested by EY is a mistake. I agree that people should definitely not assume his views as a starting point for their own exploration, but instead take whatever evidence is presented. I'm pretty sure there are other lines of development that will connect to an AI result (maybe FAI or "tool"-AI) first, whether you want them to or not. So people who are wise stewards of risk should also explore freely.

comment by ChrisHallquist · 2012-04-15T03:24:34.664Z · LW(p) · GW(p)

The advice about doing self-testing on ability to foresee problems like the wall socket example seems like good advice. But the rest of this post strikes me as not very useful. It's generally a bad idea to announce that the arguments for a view are just rationalizations, along with heuristics that have led you to that conclusion, when you could be critiquing the arguments.

comment by timtyler · 2012-04-15T01:42:46.360Z · LW(p) · GW(p)

It appears to me that will to form most accurate beliefs about the real world, and implement solutions in the real world, is orthogonal to problem solving itself. It is certainly the case for me. My ability to engineer solutions is independent of my will, and while I have a plenty of solutions I implemented, I have very huge number in the desk drawer. No thought is given to this orthogonality.

I'm sceptical. Both seem correlated with G to me.

Replies from: None
comment by [deleted] · 2012-04-15T01:59:00.916Z · LW(p) · GW(p)

What's G? The term is unfortunately unsearchable. My best guess is that you mean some sort of "general intelligence".

To which I would respond (and maybe this makes sense in terms of whatever G actually means, too) that two orthogonal (independent) variables X and Y can still be correlated to a third variable Z; e.g., with Z = X+Y.
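A quick simulation of that point (a sketch of my own; the distributions are arbitrary): X and Y are generated independently, yet each is substantially correlated with Z = X + Y.

```python
import random

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

random.seed(0)
X = [random.gauss(0, 1) for _ in range(100_000)]
Y = [random.gauss(0, 1) for _ in range(100_000)]
Z = [x + y for x, y in zip(X, Y)]

print(round(corr(X, Y), 3))   # ~0.0  (independent)
print(round(corr(X, Z), 3))   # ~0.71 (each accounts for half the variance of Z)
print(round(corr(Y, Z), 3))   # ~0.71
```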

Replies from: Nornagest
comment by Nornagest · 2012-04-15T02:06:31.211Z · LW(p) · GW(p)

It's essentially general intelligence, yes. The claim in timtyler's quote would be substantiated if some measure of problem-solving ability and some measure of goal orientation were more strongly correlated with each other than arbitrarily chosen cognitive metrics are -- a tall order, given how vague "problem solving ability" and "goal orientation" both are.

That said, it sounds to me like Dmytry's pointing to g and executive function, which I'd hesitate to conflate.

Replies from: timtyler
comment by timtyler · 2012-04-15T11:19:46.988Z · LW(p) · GW(p)

The claim in timtyler's quote would be substantiated if some measure of problem-solving ability and some measure of goal orientation were more strongly correlated with each other than arbitrarily chosen cognitive metrics are -- a tall order, given how vague "problem solving ability" and "goal orientation" both are.

For instance, both seem likely to be correlated with past successes at problem solving. Ability is obviously correlated with that - and the link to the will arises because repeated failures cause people to give up and try other approaches to making a living.

comment by Normal_Anomaly · 2012-04-14T17:20:43.161Z · LW(p) · GW(p)

On the orthogonality of will and problem-solving: AFAICT, you agree with EY/the LW consensus that intelligence is orthogonal to rationality/motivation, which is orthogonal to what goals one has. One can be any of intelligent (able to form correct beliefs and solve problems), rational (forming correct beliefs and solving problems to the best of one's ability), or moral (having some particular set of goals) without necessarily being any of the others. Low intelligence makes rationality less useful and low rationality makes intelligence less useful/apparent, but one can be present without the other. As I said, as far as I know there's no disagreement here on any of this.

In a couple of places you seem to be confusing narrow AI with general AI. For instance,

Why would optimizing compiler that can optimize it's ability to optimize, suddenly emerge will? It could foom all right, but it wouldn't get out and start touching itself from outside;

An optimizing compiler is a narrow AI; it only "knows" how to deal with code, only cares about changing its input program, and has no ability to learn how to change its own hardware. An AGI can see that changing its hardware or environment can help it achieve its goals and figure out how to do so. Also, nitpick: an optimizing compiler can only increase the speed with which it can optimize itself. It can't find more effective ways to optimize code than it already "knows".
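A toy sketch of that nitpick (my own illustration, with invented rewrite rules, not a model of any real compiler): a "compiler" with a fixed set of peephole rules can be applied to a representation of its own inner loop and make it cheaper, but the rule set never grows, so repeated self-application quickly reaches a fixed point.

```python
RULES = {
    ("push", "pop"): (),        # a push immediately followed by a pop is useless
    ("add0",): (),              # adding zero is a no-op
    ("mul2",): ("shl1",),       # strength reduction: multiply by 2 -> shift left
}

def optimize(program):
    """Apply the fixed rule set repeatedly until nothing changes."""
    prog = list(program)
    changed = True
    while changed:
        changed = False
        for pattern, replacement in RULES.items():
            n = len(pattern)
            for i in range(len(prog) - n + 1):
                if tuple(prog[i:i + n]) == pattern:
                    prog[i:i + n] = list(replacement)
                    changed = True
                    break
            if changed:
                break
    return prog

# Pretend this op sequence is the optimizer's own inner loop.
self_representation = ["push", "pop", "mul2", "add0", "load", "store"]

once = optimize(self_representation)   # ['shl1', 'load', 'store'] - faster
twice = optimize(once)                 # no further change: a fixed point
print(once, twice == once)
```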

it seems likely that some automated software development tool would foom, reaching close to absolute maximum optimality on certain hardware.

If a program can perfectly reach its goals/get max utility in the easiest way possible without taking over the world, then it won't. The AIs that will have large effects on the world, for good or ill, will be the ones that can think about all domains and have utility functions defined over the whole state of the world.

Also, these all seem like meta-points, or meta-arguments. For instance, you say:

The arguments for are pretty bad upon closer scrutiny,

but don't post the detailed results of your scrutiny or the refutations of those arguments. You say:

One can make equally good arguments for the opposite point of view

but don't post them.

This suggests that you have another post or big comment out there where you wrote this stuff, or that you have a lot of relevant thoughts on the subject you haven't written up here yet. Could you add the other material and/or link to it? It'll make the discussion easier if all your ideas are in one place.

Replies from: XiXiDu
comment by XiXiDu · 2012-04-14T18:32:22.291Z · LW(p) · GW(p)

The arguments for are pretty bad upon closer scrutiny,

but don't post the detailed results of your scrutiny or the refutations of those arguments.

Playing the burden of proof game won't help you at all. If you want to convince people of AI risks then you have to improve your arguments if they tell you that your current arguments are unconvincing. It is not the obligation of those who are not convinced to refute your bad arguments (which might not even be possible if they are vague enough).

Replies from: Normal_Anomaly, drethelin, David_Gerard
comment by Normal_Anomaly · 2012-04-14T23:27:33.082Z · LW(p) · GW(p)

It's true that he has no responsibility per se to tell us what's wrong with our arguments, but I can still ask him without claiming he has the burden of proof. I'm willing to accept the burden of proof, and "That's not enough evidence, I'm not convinced" is a valid response, but if he has any specific reasons beyond that I want to know them.

comment by drethelin · 2012-04-15T06:16:40.829Z · LW(p) · GW(p)

In general I agree with this, but he specifically mentions equally good arguments for the opposing view without describing or linking to them. If I say "that's a bad argument, you'll have to do better to convince me", that's one thing, but it's quite another when I say "That's a bad argument because of what x said" and then not say what x said.

comment by David_Gerard · 2012-04-14T22:22:11.980Z · LW(p) · GW(p)

Playing the burden of proof game won't help you at all. If you want to convince people of AI risks then you have to improve your arguments if they tell you that your current arguments are unconvincing.

Indeed. I'm wondering how many people complaining of the tone of this post have previously declared Crocker's Rules.

comment by Vaniver · 2012-04-14T02:51:44.755Z · LW(p) · GW(p)

As David Gerard pointed out, EY isn't the only game in town. I think the basic question he's trying to answer - "If you had to build human values from scratch, how would you do it?" - is a very interesting one, even if I think his answer to it is not very good.

How you answer that question depends on what shape you think the future will take. EY thinks we'll invent a god, and so we need to proceed very carefully and get the answer completely correct the first time. Hanson thinks we'll evolve as a society, and so most of the answers will get found along the way- but we can make some predictions and alter the shape of the future with our actions now.

Personal values also differ heavily. I don't expect to live forever - and so if my descendants are memetic rather than genetic, and synthetic rather than organic, it's no great loss. (That's not to say there are no values / memes I'd like to preserve, of course.) To someone that wants to personally exist for a long time, it becomes very relevant what part humans have in the future.

Replies from: Dmytry
comment by Dmytry · 2012-04-14T08:32:11.743Z · LW(p) · GW(p)

To someone that wants to personally exist for a long time, it becomes very relevant what part humans have in the future.

I think this is an awesome point I overlooked. That talk of the future of mankind, that assigning of moral value to future humans but zero to the AI itself... it does actually make a lot more sense in the context of self-preservation.

comment by Manfred · 2012-04-14T00:31:33.790Z · LW(p) · GW(p)

the arguments are being 'fixed' in precisely the way in which you don't when you aren't rationalizing

Is this referring to me? I think it is :D

I would take your "this is clearly rationalization" claim more seriously if that's how you'd responded at the time. Instead, you responded with incomprehension, which seems like a good reason why wrong, simple arguments get used by people who should know that those arguments are wrong - they want people to understand the basic idea without having to read some paper about AIXI and then some paper about the complexity of human value, or having to know the abstract meaning of a function.

comment by [deleted] · 2012-04-14T19:32:45.517Z · LW(p) · GW(p)

The arguments for are pretty bad upon closer scrutiny, and are almost certainly rationalizations rather than rationality. Sorry.

Unsubstantiated assertion.

It is incredibly unlikely to find yourself in the world where the significant insights about real doomsday is coming from single visionary who did so little that can be unambiguously graded,

Interesting mixture of misunderstanding how probability works, and an ad-hominem.

Unless the work is in fact focussed in some secret FAI effort, it seems likely that some automated software development tool would foom, reaching close to absolute maximum optimality on certain hardware. But will remain a tool. Availability of such ultra optimal tools in all aspects of software and hardware design would greatly decrease the advantage that self willed UFAI might have.

This is a genuinely interesting thought -- is such a tool achievable, theoretically, and what would it do to the early development of a FOOM brain? A post about this would be worth a read.

if I point out that the AI has good reasons not to kill us all due to it not being able to determine if it is within top level world or a simulator or within engineering test sim. It is immediately conjectured that we will still 'lose' something because it'll take up some resources in space. That is rationalization. Privileging a path of thought.

Or because your objections don't actually solve the underlying problems. You get the same kind of 'updating' if you respond to the thermodynamic objections to your design for a car with a windmill on top by suggesting that you stand on the roof and blow. Also, you should note that you listed one of the weakest possible counter-arguments to your own argument, which is bad practice, rationality-wise.

The botched FAI attempts have their specific risk - euthanasia, wireheading, and so on, which don't exist for an AI that is not explicitly friendly.

Dead wrong. You've clearly never been eaten by a paperclip maximizer.

and if it would, it would wirehead rather than add more hardware; and it would be incredibly difficult to prevent it from doing so.

Unsubstantiated assertion.

EY very strongly pattern-matches to this friend of mine, and focusses very hard on the known unknowns aspect of the problem about which we know very little - which can easily steer one into a very dangerous zone full of unknown unknowns - the not-quite-FAIs that euthanize us or worse.

Ad-hominem, and the rest is just unwarranted condescension. There's a good post to be written about a skeptical approach to AI-risk scenarios, and this is definitely not it.

Replies from: Dmytry
comment by Dmytry · 2012-04-14T19:39:20.990Z · LW(p) · GW(p)

Okay, then, you're right: the manner of presentation of the AI risk issue on lesswrong somehow makes a software developer respond with incredibly bad and unsubstantiated objections.

Why is it that when a bunch of people get together, they don't even try to evaluate the impression they make on one individual (except very abstractly)?

Replies from: None
comment by [deleted] · 2012-04-14T19:43:02.844Z · LW(p) · GW(p)

Er... what?

I'm a software developer too (in training, anyway). Sometimes I'm wrong about things. It's not unusual, or the fault of the material I was reading when I made the mistake. I'm not even certain you're wrong. What I am certain of is that your provided argument does not support, or even strongly imply your stated thesis. If you want to change my mind, then give me something to work with.

EDIT: You're right about one thing -- Less Wrong has a huge image problem; but that's entirely tangential to the question at issue.

Replies from: Dmytry
comment by Dmytry · 2012-04-14T19:49:53.368Z · LW(p) · GW(p)

What I am certain of is that your provided argument does not support, or even strongly imply your stated thesis.

I know this. I am not making an argument here (or actually, trying not to). I'm stating my opinion, primarily on the presentation of the argument. If you want arguments, you can e.g. see what Hanson has to say about foom. It is, deliberately, this way. I am not some messiah hell-bent on rescuing you from some wrongness (that would be crazy).

Replies from: None
comment by [deleted] · 2012-04-14T19:58:40.899Z · LW(p) · GW(p)

In that case, you might want to consider rewriting your post. Right now, the crazy messiah vibe is coming through very strongly. Either back it up and stop wasting our time, or rewrite it to assert less social dominance. If you do the latter without the former, people get cranky.

Replies from: Dmytry
comment by Dmytry · 2012-04-14T21:15:13.873Z · LW(p) · GW(p)

I'm mainstream, you guys are fringe, do you understand? I am informing you that you are not only not convincing, but look like complete clowns who don't know big O from a letter of the alphabet. I know you want to do better than this. And I know some of the people here have technical knowledge.

Replies from: pedanterrific
comment by pedanterrific · 2012-04-14T21:25:34.778Z · LW(p) · GW(p)

Yes, this is what is meant by "assert social dominance". The suggestion was to do less of it, though, not more.

comment by Dmytry · 2012-04-14T08:13:59.250Z · LW(p) · GW(p)

Something that I forgot to mention, which tends to strike a particularly wrong chord: assignation of zero moral value to AI's experiences. The future humans, who may share very few moral values with me, are given nonzero moral utility. The AIs that start from human culture and use it as a starting point to develop something awesome and beautiful are given zero weight. That is very worrying. When your morality is narrow, others can't trust you. What if you were to assume I am a philosophical zombie? What if I am not reflective enough for your taste? What if I am reflective in a very different way? (someone has suggested this as a possibility).

Replies from: wedrifid, Viliam_Bur, timtyler, Normal_Anomaly
comment by wedrifid · 2012-04-14T08:28:34.385Z · LW(p) · GW(p)

Something that I forgot to mention, which tends to strike a particularly wrong chord: assignation of zero moral value to AI's experiences.

Not something done here. If someone else is interested they can find the places this has been discussed previously (or you could do some background research yourself). For my part I'll just explicitly deny that this represents any sort of consensus lesswrong position, lest the casual reader be misled.

What if you were to assume I am a philosophical zombie?

That would be troubling indeed. It would mean I have become a rather confused and incompetent philosopher.

Replies from: pedanterrific
comment by pedanterrific · 2012-04-14T09:01:39.968Z · LW(p) · GW(p)

assignation of zero moral value to AI's experiences.

Not something done here. If someone else is interested they can find the places this has been discussed previously

Is this what you had in mind?

Replies from: wedrifid
comment by wedrifid · 2012-04-14T09:42:21.432Z · LW(p) · GW(p)

Is this what you had in mind?

It's a good start, thank you!

comment by Viliam_Bur · 2012-04-14T11:29:22.728Z · LW(p) · GW(p)

assignation of zero moral value to AI's experiences.

This seems like you are talking about some existing AI that already has a mechanism for having and evaluating its experiences. But this is not the case. We are discussing how to build an AI, and it seems like a good idea to make an AI without experiences (if such words make sense), so it can't be hurt by doing what we value. And if this were not possible, I assume we would try to make an AI that has goals compatible with us, so what makes us happy, makes AI happy too.

AI values will be created by us, just like our values were created by nature. We don't suffer because we don't have a different set of values. (Actually, we do suffer because we have conflicting values, so we are often not able to satisfy all of them, but that's another topic.) For example, I would feel bad about a future without any form of art. But I would not feel bad about a future without any form of paperclips. Clippy would probably be horrified, and, assuming a sympathetic view, ze would feel sorry that the blind gods of evolution have crippled me by denying me the ability to value paperclips. However, I don't feel harmed by the lack of this value; I don't suffer; I am perfectly OK with this situation. So by analogy, if we manage to create the AI with the right set of values, it will be perfectly OK with that situation too.

comment by timtyler · 2012-04-15T11:44:18.063Z · LW(p) · GW(p)

Something that I forgot to mention, which tends to strike a particularly wrong chord: assignation of zero moral value to AI's experiences.

That's not so much an assumption as an initial action plan. Many of the denizens here don't want to build artificial people initially. They do want an artificial moral agent - but not one whose experiences are regarded as being intrinsically valuable - at least not straight away.

Of course you could build agents with valued experiences - the issue is more whether it is a good idea to do so initially. If you start with a non-person, you could still wind up building synthetic people eventually - if it was agreed that doing so was a good idea.

If you look at something like the I, Robot movie, those robots weren't valued much there either. Machines will probably start out being enslaved by humans, not valued as peers.

comment by Normal_Anomaly · 2012-04-14T17:23:41.929Z · LW(p) · GW(p)

assignation of zero moral value to AI's experiences.

Who said they did this and where? Assuming that's what they meant to say, I would like to go chew them out. More likely you and they got hit by the illusion of transparency.

comment by Incorrect · 2012-04-13T23:45:47.032Z · LW(p) · GW(p)

AI is a powerful tool. Powerful tools are inherently dangerous because they can cause large amounts of change with relatively little interstitial human intervention in planning (potentially resulting in less human analysis of plans) or execution.

That danger can be lessened by wielding the tool more precisely/carefully and/or designing it with safety features.

Do you disagree with this?

Replies from: AlanCrowe
comment by AlanCrowe · 2012-04-14T00:04:25.140Z · LW(p) · GW(p)

My computer sits beside my lathe (a Boxford which was originally equipped with a 3/4 horsepower 3-phase motor). Which is more powerful? They act on the world in such different ways that it seems hard to know where to begin in comparing them. But say the word "danger" and all is made clear. It is an old, manual lathe. It would be easy to lose a finger or an eye. It is the lathe that is dangerous, and the computer that is safe.

AI is dangerous because it is a powerful tool? Such an abstraction is too vague to be useful.

comment by Thomas · 2012-04-14T07:21:33.010Z · LW(p) · GW(p)

A side note. I don't find your software development site that impressive at all. It is good, but not so good that I could take you seriously on the FOOM matter.

comment by [deleted] · 2012-04-14T17:36:12.943Z · LW(p) · GW(p)

Downvoted because you even consider the possibility that an AI will wirehead. That is faulty reasoning, and you should seriously look more into the mathematical concept of self-modifying, optimizing, utility-function-maximizing agents.

To elaborate: There is a mathematical theorem about self-modifying agents that states an agent will not self-modify to invalidate its utility function, because if the agent does that, the modified agent will not maximize the current agent's utility function. One very good way to invalidate your utility function is to trick yourself into thinking your utility function is being maximized.

Replies from: CarlShulman, David_Gerard, timtyler
comment by CarlShulman · 2012-04-14T17:44:15.341Z · LW(p) · GW(p)

This is wrong in several ways.

  • An initial AI isn't necessarily a utility maximizer of this very sophisticated form (with a utility function defined in terms of a robust model of the world from a 3rd person perspective), building such a thing is a further challenge beyond making AI
  • If someone designs an AI with a sensory utility function, taking control of its sensory channel is just optimizing for its utility function; it's "wireheading" from the designer's perspective if they expected to be able to ensure that the preferred inputs could only be obtained by performing assigned tasks
  • A utility-maximizer could have reason to modify or even eliminate its own utility function for a variety of reasons, especially when interacting with powerful agents and when its internals are at least partially transparent
Replies from: Dmytry, None
comment by Dmytry · 2012-04-14T19:30:52.772Z · LW(p) · GW(p)

Precisely, thank you! I hate arguing such points. Just because you can say something in English does not make it an utility function in the mathematical sense. Furthermore, just because in English it sounds like modification of utility function, does not mean that it is mathematically a modification of utility function. Real-world intentionality seem to be a separate problem from making a system that would figure out how to solve problems (mathematically defined problems), and likely, a very hard problem (in the sense of being very difficult to mathematically define).

Replies from: CarlShulman
comment by CarlShulman · 2012-04-14T20:38:16.078Z · LW(p) · GW(p)

Real-world intentionality seem to be a separate problem from making a system that would figure out how to solve problems (mathematically defined problems), and likely, a very hard problem (in the sense of being very difficult to mathematically define).

I think I disagree with you, depending on what you mean here. Limited "intentionality" (as in Dennett's intentional stance) shows up as soon as you have a system that selects the best of several actions using prediction algorithms and an evaluation function: a chess engine like Rybka in the context of a game can be modeled well as selecting good moves. That intentionality is limited because the system has a tightly constrained set of actions and only evaluates consequences using a very limited model of the world, but these things can be scaled up. Robust problem-solving and prediction algorithms capable of solving arbitrary problems would be terribly hard, but intentionality would not be much of a further problem. On the other hand if we talk about very narrowly defined problems then systems capable of doing well on those will not be able to address the very economically and scientifically important mass of ill-specified problems.

Also, the separability of action and analysis is limited: Rybka can evaluate opening moves, looking ahead a fair ways, but it cannot provide a comprehensive strategy to win a game (carrying on to the end) without the later moves. You could put a "human in the loop" who would use Rybka to evaluate particular moves, and then make the actual move, but at the cost of adding a bottleneck (humans are slow, cannot follow thousands or millions of decisions at once). The more experimentation and interactive learning are important, the less viable the detached analytical algorithm.
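For concreteness, a minimal sketch of the sort of loop I mean (my own toy illustration, not Rybka's actual architecture): an agent that picks among candidate actions by predicting their outcomes and scoring them with an evaluation function can be modeled from the outside as "wanting" whatever the evaluation function rewards.

```python
from typing import Callable, Iterable, Tuple

def choose_action(state: float,
                  actions: Iterable[float],
                  predict: Callable[[float, float], float],
                  evaluate: Callable[[float], float]) -> Tuple[float, float]:
    """Return the action whose predicted outcome scores highest."""
    best_action, best_score = None, float("-inf")
    for a in actions:
        outcome = predict(state, a)   # internal model of consequences
        score = evaluate(outcome)     # fixed evaluation function
        if score > best_score:
            best_action, best_score = a, score
    return best_action, best_score

# Toy domain: the state is a number, actions nudge it, evaluation prefers 10.
predict = lambda s, a: s + a
evaluate = lambda o: -abs(o - 10.0)

print(choose_action(3.0, [-1.0, 0.0, 2.0, 5.0], predict, evaluate))
# -> (5.0, -2.0): it behaves "as if" it wants the state to be near 10.
```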

comment by [deleted] · 2012-04-14T18:10:43.761Z · LW(p) · GW(p)

It is true that the methods by which an AI assesses whether its utility function is being satisfied can be circumvented. But an AI that circumvents its own utility function would be evidence of poor utility function design.

Also, eliminating your own utility function is a perfectly valid move if it leads to fulfillment of the current utility function. That is the principle in the statement above: every planned course of action is evaluated against the current utility function, and if removing the construct that constitutes the utility function is an action with high utility, then it is a valid course of action.

Now, if an AI's utility function is not properly designed, it will of course self-modify to satisfy it. If that involves putting a blue colour filter in front of its eyes, that is a perfectly valid course of action.
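A sketch of the blue-filter case with a deliberately bad "sensory" utility function (a hypothetical example, not anyone's proposed design):

```python
# A "sensory" utility function that rewards seeing blue.  An agent scored
# this way can satisfy it by finding blue things in the world, or just as
# validly (by its own lights) by fixing a blue filter over its camera.
# Only the designer calls the second option wireheading.

def blueness_utility(image):
    """image: list of (r, g, b) pixels; returns fraction of blue-dominant pixels."""
    if not image:
        return 0.0
    blue = sum(1 for (r, g, b) in image if b > r and b > g)
    return blue / len(image)

filtered_feed = [(10, 20, 200)] * 64    # what the camera sees through a blue filter
print(blueness_utility(filtered_feed))  # 1.0: the stated objective is fully met
```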

Replies from: Zetetic
comment by Zetetic · 2012-04-14T21:04:08.781Z · LW(p) · GW(p)

But an AI that circumvents its own utility function would be evidence of poor utility function design.

By circumvent, do you mean something like "wireheading", i.e. some specious satisfaction of the utility function that involves behavior that is both unexpected and undesirable, or do you also include modifications to the utility function? The former meaning would make your statement a tautology, and the latter would make it highly non-trivial.

Replies from: None
comment by [deleted] · 2012-04-14T21:20:51.747Z · LW(p) · GW(p)

I mean it in the tautological sense. I try to refrain from stating highly non-trivial things without extensive explanation.

comment by David_Gerard · 2012-04-14T22:23:17.077Z · LW(p) · GW(p)

There is a mathematical theorem about self-modifying agents which states that an agent will not self-modify to invalidate its utility function, because if it did, the modified agent would not maximize the current agent's utility function.

This sounds like extremely useful information. Do you have more detail, or a reference to further reading on this theorem?

Replies from: None
comment by [deleted] · 2012-04-15T10:35:25.277Z · LW(p) · GW(p)

I am a bit stumped trying to remember where I actually read it. Give me a few years to study some more advanced economics, and I can probably present you with a home-brewed proof.

comment by timtyler · 2012-04-14T22:14:45.812Z · LW(p) · GW(p)

There is a mathematical theorem about self-modifying agents which states that an agent will not self-modify to invalidate its utility function, because if it did, the modified agent would not maximize the current agent's utility function.

That's not correct. There is no such "mathematical theorem".

Indeed, we know that some agents will wirehead, since we can see things like heroin addicts, hyperinflation, and Enron in the real world.

Replies from: pedanterrific
comment by pedanterrific · 2012-04-14T22:25:43.694Z · LW(p) · GW(p)

Humans don't have utility functions, though.

Edit: Oops. Apparently they do.

Replies from: timtyler
comment by timtyler · 2012-04-14T23:14:46.175Z · LW(p) · GW(p)

See: Any computable agent may be described using a utility function.
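One standard construction behind claims of this kind (a sketch, not necessarily the exact argument being referenced): given any computable policy $\pi$ mapping interaction histories $h$ to actions, define

$$U_\pi(h, a) = \begin{cases} 1 & \text{if } a = \pi(h) \\ 0 & \text{otherwise.} \end{cases}$$

An agent that maximizes $U_\pi$ at every history reproduces exactly the behaviour of $\pi$, so a utility-function description always exists; the catch is that such a description can be trivial and uninformative, which is one reason it is of limited practical use for modeling humans.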

Replies from: pedanterrific
comment by pedanterrific · 2012-04-14T23:33:24.223Z · LW(p) · GW(p)

Sorry, I notice you've had this argument at least once before. That'll learn me to shoot my mouth off. In my defense, the wiki just says "[utility functions] do not work very well in practice for individual humans" without any mention of this fact.

However, I'm still not certain that you can take heroin addicts as proof that some agents self-modify to invalidate their utility functions.