Self-improving AGI: Is a confrontational or a secretive approach favorable?
post by Friendly-HI · 2011-07-11T15:29:43.279Z · LW · GW · Legacy · 80 comments
Comments sorted by top scores.
comment by Armok_GoB · 2011-07-05T21:27:13.482Z · LW(p) · GW(p)
Upon reading this I instantly like it and find it thrilling and get an impulse to slam the button voting for secrecy with my face hidden in shadow while spouting a dramatic one-liner, because it'd look awesome in the movie. This is somewhat impeding my ability to actually consider the matter rationally.
Replies from: Jonathan_Graehl
↑ comment by Jonathan_Graehl · 2011-07-06T05:04:31.295Z · LW(p) · GW(p)
Replies from: Raw_Power
comment by Vladimir_Nesov · 2011-07-05T23:21:40.030Z · LW(p) · GW(p)
The potential benefit is that more of the people who could be working on random-goals AGI, or on FAI, or contributing to support/funding of such projects, would consider the idea, and so AGI risk gets reduced and FAI progress gets a boost. The potential downside is what, exactly, at what marginal cost? I don't think this tradeoff works the way you suggest.
Replies from: Friendly-HI, Jonathan_Graehl↑ comment by Friendly-HI · 2011-07-06T00:11:53.091Z · LW(p) · GW(p)
I'm not sure I'm following... do you honestly think that the cost of openly working on self-improving AGI and openly making statements along the lines of "we need to get this AI exactly right, or else we'll probably kill every man, woman and child on this planet" will be marginal in -say- 30 years, once the majority of people no longer view AGI as the product of a loony imagination but as an actual possibility due to advances in robotics and narrow AI all around them? Don't you think open development of AGI would draw massive media attention once the public is surrounded by and accustomed to all kinds of robots and narrow AIs?
Why this optimism about how reasonably people will react to our notion of self-improving AGI - am I somehow missing something profound from my model of reality? I still expect people to be crazy, religious and irrational in 30 years, and the easiest way of dealing with that would simply be to not arouse their attention. Now that most people perceive us as hopeless sci-fi nerds (at best) and AGI still seems at least 500 years away in their minds, of course I'm all for being open and drawing in people and funding - but do you expect such an open approach to work (without interference by the public or politics) until the very completion of a godlike AGI? I severely doubt that, and I find it surprising that this is somehow perceived as a wildly marginal concern. As if it's not even worth thinking about... why is that?
↑ comment by Jonathan_Graehl · 2011-07-05T23:41:06.745Z · LW(p) · GW(p)
In case it helps other readers: the upside/downside is pro/con openly promoting your AGI+FAI work and its importance (vs. working in secret).
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-07-06T09:59:20.704Z · LW(p) · GW(p)
No. I was talking about discussing the topic, not a specific project. The post discusses LW, and we don't have any specific project on our hands.
Replies from: Friendly-HI↑ comment by Friendly-HI · 2011-07-06T13:19:17.775Z · LW(p) · GW(p)
That explains a lot, thanks for clarifying the misunderstanding. I for one wasn't specifically referring to LW, but I was wondering whether in the coming decades people involved with AGI (AI researchers and others) should be as outspoken about the dangers and capabilities of self-improving AGI as we currently are here on LW. I think I made clear why I wouldn't count on having the support of the global public even if we did communicate our cause openly and in detail - so if (as I would predict) public outreach won't cut it and may even have severe adverse effects, I'd personally favor keeping a low profile.
comment by JoshuaZ · 2011-07-06T00:40:38.510Z · LW(p) · GW(p)
Secret attempts at AGI have two problems that seem to get repeatedly ignored when this issue is discussed:
First, if there is a real danger that the first AGI is going to go foom and control everything, then having fewer people look at the code makes it more likely that something will go wrong. If there's one thing we've learned about programming in the last fifty years, it's that almost all code has mistakes, but the number of such mistakes goes down when more people can look at it. This is why, for example, in cryptography, systems which aren't completely published are just assumed to be insecure and unreliable.
Second, if in the development of such AGI one obtains specific information suggesting that certain avenues of research are really bad ideas (not in the sense that they won't work for building an AGI, but in the sense that they will potentially result in an extremely Unfriendly AGI), this information will be much less likely to be shared with other people who are researching similar avenues.
If, for example, three different teams working secretly each come up with an insight which by itself would yield an AGI that seems superficially nice but really isn't, then pooling their ideas makes the result much more likely not to be awful.
If one thinks of this in a sort of decision-theoretic framework, where one has three choices - "publicly try to build AGI", "secretly build AGI", and "don't build AGI" - it seems clear that lots of groups choosing option 2 carries a lot of potential negative pay-off.
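To make that trade-off concrete, here is a rough sketch in Python; every probability and utility in it is an invented placeholder, not a figure from this thread, and it simply assumes that secrecy raises the chance of a fatal flaw surviving review:

```python
# Illustrative decision sketch -- all numbers below are made up.
P_COMPLETED = {"public": 0.8, "secret": 0.8, "none": 0.0}   # chance the project finishes
P_FATAL_FLAW = {"public": 0.1, "secret": 0.3, "none": 0.0}  # chance a bad flaw survives review
U_FRIENDLY = 100        # utility if the finished AGI is actually Friendly
U_UNFRIENDLY = -10_000  # utility if a flawed, Unfriendly AGI is released
U_STATUS_QUO = 0        # utility of not building anything

def expected_utility(option: str) -> float:
    p_done, p_bad = P_COMPLETED[option], P_FATAL_FLAW[option]
    return (p_done * ((1 - p_bad) * U_FRIENDLY + p_bad * U_UNFRIENDLY)
            + (1 - p_done) * U_STATUS_QUO)

for option in ("public", "secret", "none"):
    print(f"{option:>6}: {expected_utility(option):9.1f}")
```

With these placeholder numbers, "secret" comes out far worse than "public" purely because fewer reviewers means a higher chance of a surviving flaw, which is the point of the comment; change the assumptions and the ranking changes with them.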
Finally, if a project requires you to actively lie to the public about what you think your AGI's potential is, and the public gets even an inkling of this, they are going to be much less likely to trust your claims. Indeed, at this point, given the existence of this essay, having you as a key member of an AGI project now becomes a serious potential liability, because even if everything in the project is open, people have less reason to believe that it is really open. If Quirrell is reading this he's probably thinking of taking off a fair number of Quirrell points, and possibly some actual house points.
Replies from: timtyler, Friendly-HI, JGWeissman↑ comment by timtyler · 2011-07-06T08:51:23.671Z · LW(p) · GW(p)
Secret attempts at AGI have two problems that seem to get repeatedly ignored when this issue is discussed:
First, if there is a real danger that the first AGI is going to go foom and control everything, then having fewer people look at the code makes it more likely that something will go wrong.
FWIW, I discussed that a couple of months ago here.
Full transparency - with lots of people having access - is desirable from society's point of view. Then, there are more eyes looking for flaws in the code - which makes it safer. Also, society can then watch to ensure development is going along the right lines. This is likely to make the developers behave better, and having access to the code gives society the power to collectively protect itself against wrongdoers.
↑ comment by Friendly-HI · 2011-07-06T01:47:38.058Z · LW(p) · GW(p)
I agree with both of your mentioned drawbacks and would add a third: if someone actually discovered that it's not just a human-level AI but a self-improving AGI with a mission (or if someone from the team leaked that information), the public backlash would be absolutely fatal. And then there would also be the added problem of how to introduce the AGI in a fashion that's not essentially a nonconsensual takeover of planet earth.
As for my participation in AI research, you don't need to worry. I can hardly code a website and have no intention of learning to, or of participating directly in AI development. I'm coming from a psychological background, which is probably why I'm unusually concerned about the social repercussions of the self-improving AGI meme. ("Give him a hammer and suddenly everything looks like a nail", as the saying goes. Conversely, some of the technically inclined people here may not even realize that there may be much more to pulling off AGI than the strictly technical aspects.)
You are aware that currently about 40% of Americans believe in full-blown creationism and another 40% that God guided the process of evolution with us in mind, right? Ten years ago almost 20% believed the sun orbits the earth, and I wouldn't be terribly surprised if essentially nothing about that state of affairs has changed in the meantime...
Even if such beliefs recede somewhat in the upcoming 30 years, you should be painfully aware that under such conditions you will never ever get public consent for the development of self-improving AGI. At least not from primal unenhanced brains. So if you ever put our idea of the future up for vote, we will lose for sure. And that's just America; the rest of the planet of the apes (including my beloved Europe) won't be amused about our futuristic plans either, and even less amused if some other nation or an American corporate giant like Google or IBM tried to pull off something like this in a solo attempt.
So I'll just call it how I see it: Do you want to make self-improving AGI a reality? Then we'll have to find a way to make it happen without involving public opinion in this decision. They'll never consent, no matter how honest and detailed and soulful you pitch our awesome cause. I hope no one here is naive enough to expect that we can pull off self-improving AI without some kind of clash of interests. Given this reality, and considering your objections (and more), my impression is that keeping a low profile (not now, but eventually) will be in our best interest. The best possible thing that I can imagine happening is if this endeavor becomes a scientific project backed by numerous nations - this would largely prevent the perception and the possibility that the developed AGI could be used as a potential weapon, it would allow clever minds from all over the globe to be brought on board (countering your second point), and last but not least it may mitigate public distrust to a manageable level. ("Every other nation does it too!")
The challenge is obviously how (in a decade or whatever) you could pitch our self-improving AI vision to people who have built their careers on sucking up to the lowest common denominator and who would probably become obsolete through the very arrival of this technology. We need great salesmen ;)
PS: Don't think I'm not sympathetic to the idea that we should communicate our cause honestly and openly all the way until the bitter (sweet?) end... yet I don't believe it will work that way. The majority of people do not share our vision and they wouldn't vote for it. Ever.
Replies from: Strange7, JoshuaZ, Raw_Power↑ comment by Strange7 · 2011-07-08T00:05:49.060Z · LW(p) · GW(p)
So I'll just call it how I see it: Do you want to make self-improving AGI a reality? Then we'll have to find a way to make it happen without involving public opinion in this decision. They'll never consent, no matter how honest and detailed and soulful you pitch our awesome cause.
Really? That's not the impression I got from those numbers at all. To me, it sounds less like the public is adamantly resolved to stick with those entrenched ideas, and more like most people will believe all sorts of insane bullshit if you can spin a plausible-sounding explanation of how they might benefit by believing in it, and if you persist long enough. Do you really think the vote would be a one-time thing?
Replies from: Friendly-HI↑ comment by Friendly-HI · 2011-07-08T17:24:45.790Z · LW(p) · GW(p)
To me, it sounds less like the public is adamantly resolved to stick with those entrenched ideas, and more like most people will believe all sorts of insane bullshit if you can spin a plausible-sounding explanation of how they might benefit by believing in it, and if you persist long enough.
There may be something to that perspective, but I think it is unrealistic to expect we could change enough people's minds in so short a time-frame. There are a lot of people out there. Religions have had thousands of years to adapt themselves in such a way that they reinforce and play into people's innate superstitions and psychological desires. In turn, religions also shaped people's culture, and until very recently they played the major role in the "nurture" side of the "nature and nurture" make-up of people. Competing with religion on our own terms (rationality) simply won't work so well with the majority of people.
Understanding our AGI "message" requires various quantum leaps in thinking and rationality. These insights implicitly and explicitly challenge most innate intuitions about reality and humanity that people currently hold on to. I'm not saying there won't be many people we would be able to persuade without a thorough education in these matters, but because, in contrast to religion, our "worldview" doesn't tell people what deep down they would like to hear and believe, we're less attractive to those who just can't be arsed into rationality. Which is a lot of people.
In conclusion, I'll sum up my basic point in another light yet again: I think I'm not confronting us with a false dichotomy when I say that there are essentially only two possibilities when it comes to introducing AGI into the lives of people:
EITHER we're willing to adhere to public consent along current democratic principles. This would entail that we massively concern ourselves with public opinion and make a commitment not to unleash AGI unless the absolute majority of all citizens on this planet (or those who we consider to meet the criteria of valid consent) approve of our plan.
OR, we take the attitude that people who do not meet a certain standard of rationality have no business shaping humanity's future, and we become comfortable with deciding over their heads/on their behalf. This second option certainly does not light up any applause lights for believers in democracy, but I believe among lesswrongers this may not be that much of an unpopular notion.
You can't have it both ways: either you commit yourself to the insane effort of making sure that the issue of AGI gets decided in a democratic and "fair" fashion, or you aim at some "morally" "lower" standard and are okay with not everyone getting their say when it comes to this issue. As far as I'm concerned, you know my current preference, which I favor because I find the alternative completely unrealistic and because I'm vastly more committed to rationality than I am to the idea that undiscriminating democracy is the gold standard of decision-making.
Replies from: Strange7↑ comment by Strange7 · 2011-07-10T00:29:48.692Z · LW(p) · GW(p)
What about representative democracy? Any given community sends off a few of its cleverest individuals with a mandate to either directly argue for that community's interests, or to select another tier of representatives who will do so. Nobody feels completely excluded, but only a tiny fraction of the overall population actually needs to be educated and persuaded on the issues.
Replies from: Friendly-HI↑ comment by Friendly-HI · 2011-07-11T15:21:17.523Z · LW(p) · GW(p)
How is it representative if only the cleverest individuals are chosen? That sounds more like elitism. If only the most rational people with herculean minds actually decided, they should in theory unanimously agree either to do it or not to do it anyway, based on a sound probability-evaluation and shared premises grounded in reality that they all agree on.
If those "representative" individuals were democratically determined by vote, then these people most certainly won't be the most intelligent and rational people, but those best at rhetorically convincing others and sucking up to them by exploiting their psychological shortcomings. They would simply be politicians like the ones we have nowadays.
So in a way we're back where we started. If people don't decide for themselves, they'll simply vote for someone who represents their (or provides them with a new) uninformed opinion. Whoever wins such an election will not be the most rational person, that's for sure (remember when America voted twice for an insane cowboy?).
While representative democracy is certainly more practical than the alternatives, I doubt the outcome would be all that much better. If we want the most rational and intelligent people to make this decision, then these individuals couldn't be chosen by vote but only by another "elitist" group. I don't know how the public would react to that - I suppose they would not be flattered.
Replies from: Strange7↑ comment by Strange7 · 2011-07-11T23:43:04.707Z · LW(p) · GW(p)
I'm not saying it would be a better system overall, just that a relatively small group of politicians would be comparatively easier for us to educate and/or bribe.
Replies from: Friendly-HI↑ comment by Friendly-HI · 2011-07-12T01:36:48.874Z · LW(p) · GW(p)
Yes, that is true.
I'm still puzzled, though, which approach would be better... involving and educating the politicians (there are many who wouldn't understand) or trying to keep them out as long as possible to avoid confrontations and constraints? I already remarked somewhere that I would find some kind of international effort towards AGI development very preferable; something comparable to CERN would be brilliant. Such a team could first work towards human-level AI and then one-up themselves with self-improving AGI once they have gained some trust for their competence.
In other words, perhaps advertising and reaching the "low-hanging fruit" of human-level AI plus reaping the amazing benefits of such a breakthrough will raise public and political trust in them, as opposed to some "suspicious" corporation or national institute that suddenly builds potential "weapons" of mass destruction.
↑ comment by JoshuaZ · 2011-07-07T02:40:31.116Z · LW(p) · GW(p)
So I'll just call it how I see it: Do you want to make self-improving AGI a reality? Then we'll have to find a way to make it happen without involving public opinion in this decision.
Well, I'm not at all convinced that substantially self-improving AGI can exist (that is, one that will self-improve at such a rate as to quickly gain near-complete control of its light cone or something like that). I assign only a small probability to the first AGI going foom. Also, if I've learned one thing from LW it is that such an AI could plausibly be really bad. So I'd rather take a risk-averse strategy if at all possible.
↑ comment by Raw_Power · 2011-07-06T12:48:37.798Z · LW(p) · GW(p)
So if you ever put our idea of the future up for vote, we will lose for sure.
Rule number 1 of voting: it's done after a thorough debate where every single party has said everything they wanted to say. Not to generalize from fictional evidence, but "Twelve Angry Men" has shown us a pretty good caricature of the dramatic changes that can happen if you prolong the debate just a little bit longer before voting. Which is why educating the public and letting the ideas circulate is so crucial.
They'll never consent, no matter how honest and detailed and soulful you pitch our awesome cause. The majority of people do not share our vision and they wouldn't vote for it. Ever.
You haven't justified this. What does believing God guided Evolution have to do with making plans to build a self-improving artificial intelligence?
More importantly, isn't it better that they know about it, and forbid it or put it under extremely intense scrutiny, rather than that they not know about it, and some group develops it in obscurity and botches it?
Replies from: Friendly-HI↑ comment by Friendly-HI · 2011-07-06T14:17:05.117Z · LW(p) · GW(p)
I think you're highly delusional about how malleable people's opinions really are... Are you aware of what's going on in politics and the religious sphere? As if just talking really thoroughly about AGI and appealing to rationality is going to get the majority of people from all over the world on our side. Are you serious?
The point I made about creationism wasn't just that most people who believe in God probably won't want to see one being built, but that you cannot change people's opinions easily. Even completely ridiculous and unworldly ideas like creationism have hardly budged an inch in the last decade - they are rationality-proof. If you really thoroughly explained to people what this self-improving AGI is good for and how powerful it could really become... they'd totally lose it. They won't welcome "our robot overlords", regardless of how nice you make the resulting utopias sound. People fear the unknown and on a gut-level they will immediately reject our idea and rationalize in the blink of an eye why we're wrong, and crazy, and have to be stopped.
I'm all for thoroughly educating people about rationality (you've read my suggestion in the other topic), but seriously getting the majority of people behind us? Sorry, but my psychological model of how people and masses behave tells me that this will never happen. At least not without brain-augmentation, and even then a global 50% +1 vote seems quite unlikely to me.
Would it be better if people knew in detail about self-improving AGI and could objectively discuss this matter in order to rationally make a decision and responsibly vote on whether or not it should be developed? Hell yeah I'd love that! I'd also love to ride on a flying pig but that's not gonna happen either.
Replies from: Raw_Power↑ comment by Raw_Power · 2011-07-06T14:54:11.965Z · LW(p) · GW(p)
I'd prefer it if you used "mistaken" rather than "delusional", thank you very much. Ascribing opposing opinions to madness usually signals weakness in your own stance.
Are you aware of what's going on in politics and the religious sphere?
Quite, see my next paragraph.
Are you serious?
I am positive that talking about it publicly and rationally will bring humanity, eventually, to the side of reason, whomsoever it may lie with. Maybe as a US citizen you see things differently (although from here I see education improving somewhat, steps being taken towards giving citizens the minimum necessary services, political awareness developing, racial and sexual and gender issues slowly being resolved...)
But as a resident of Europe and the Middle East, I see religion and partisanship shriveling into a husk, intelligence and culture extending and growing, triumphant, as they never have before, and the citizenry reclaiming power over the aloof governments, and over their futures.
Humanism wins. And rationalism cannot lose in the long term, because, as its name indicates, it is the art of being right.
creationism has hardly budged an inch in the last decade
That's not what I heard at all. Creationism only became acknowledged as a problem recently. Which means it was secure before, and it is now being challenged, and singing its swan song. Lack of visible, spectacular budging doesn't mean that it isn't crumbling from the inside. And it's really a problem that is endemic to the USA: across the pond, virtually no one believes in Creationism. I suspect this has something to do with the education of the masses, which is very overlooked in the USA. Once US society feels the need to raise its own education level, for whatever economic reasons, the problems derived from ignorance will just extinguish themselves for sheer lack of fuel.
People fear the unknown and on a gut-level they will immediately reject our idea and rationalize in the blink of an eye why we're wrong, and crazy, and have to be stopped.
That's why the improvement of public, mass education, and the spreading of our Art, must be a priority, if not our number one priority. That's why I said "explaining things thoroughly": by that I mean raising the level of awareness of the general public.
Here in Spain, France, the UK, the majority of people are Atheists. In the USSR virtually everyone was an atheist. Beliefs are extremely malleable. By Raising The Sanity Waterline, we'll make it so that they are only malleable through empirical evidence, and, if we do it right, people won't even notice.
I'd also love to ride on a flying pig but that's not gonna happen either.
You believe in Transhuman-level cybernetics and brain expansions and you don't believe we can make pigs fly and carry people on their backs?
Replies from: Friendly-HI, Karl↑ comment by Friendly-HI · 2011-07-06T15:58:45.126Z · LW(p) · GW(p)
I'd prefer it if you used "mistaken" rather than "delusional"
Funny, on my second read-through I thought about editing it, but then my mind went "whatever, a healthy ego can probably take it". No hurt feelings I hope.
I'm also living in the EU, but I'm very aware of and constantly following what's going on in the US, because their erratic development concerns me. As far as evolution and creationism go, I'm drawing my statistics from the Gallup polls: over the last 30 years creationism went down from 45% to 40%, "evolution through divine intervention" remained stagnant at 40%, and "plain natural selection" went up by about 5 points to 15%.
There is positive movement here, but I don't see how the collapse of religion will be imminent even in another 30 years. And that's just the US; to a resident of the Middle East I shouldn't need to point out that there are plenty of countries out there much more religious than the US. (Essentially all of them, apart from a few developed countries - and even those countries usually have only around 20% confirmed atheists. Many don't attend church, but they're still holding on to a mountain of superstitious garbage -> http://en.wikipedia.org/wiki/Demographics_of_atheism#Europe )
You're of course also right that here in Europe there are hardly any creationists - unlike in the US it's just too damaging to one's reputation, so people conveniently adapt their views. I doubt, however, that this has all that much to do with the quality of education, and a lot more with cultural attitudes towards religion. As far as Europe goes, I can imagine that the church in the biggest countries (France, Germany, Spain, UK...) will be essentially dead in another 30 years, but what does that say about people's ability to make rational decisions? There's a lot more to rationality than not believing in obvious bs.
There are going to be close to 9 billion people in 30 years, and you think we - a tiny speck of nerds like LW - could hope to reach out and educate a sizable portion (almost half, no less) of the world's people in the art of rationality and in scientific understanding, so we can put AGI up for vote? And it's not just simply educating them, mind you - in a struggle of memes the application of rationality would require them to throw out just about all of their cherished beliefs about life and the world! And you also seem to be forgetting that the average LW IQ may perhaps lie somewhere around 120, and that most people aren't actually all that clever.
I'm an optimist, but I'm afraid that without brain-augmentation something like this just isn't in the realm of possibilities. There's no way you could polish our message to a point where it would stick for so many different people.
You believe in Transhuman-level cybernetics and brain expansions and you don't believe we can make pigs fly and carry people on their backs?
Damn it! I knew someone would say that and ruin my rhetoric.
Replies from: Raw_Power↑ comment by Raw_Power · 2011-07-06T16:13:59.586Z · LW(p) · GW(p)
As a resident of the Middle East, I can tell you that mentalities are changing fast. Regardless, the attitude towards science isn't the same as among Christians, since Muslims don't feel threatened by it, believing that the Qur'an not only isn't in conflict with science, but actually anticipated some discoveries. They also reclaim the development of modern scientific research as a proud heritage they are enthusiastic to live up to again, and believe researchers should be left alone to investigate, no matter how outrageous the stuff they come up with is (if I remember well, I think there's even a command in the Qur'an specifically to that effect).
As for countries that were converted to major religions by colonialism, I have a strong feeling that they would actually convert to whatever looks coolest, most Western and most high-status-signalling. We just need to be about 20% cooler than everyone else. Seems manageable.
you also seem to be forgetting, that the average LW IQ may lie somewhere around 120 and that most people aren't actually all that clever
We should be able to teach rationality to anyone capable of deliberative thought. That is, anyone with an IQ over 70. That the original developers and vanguard are faster learners than average is not surprising at all.
Our stuff is simpler, less confusing, far clearer, and far more useful, than anything any religion can teach. I think people could definitely be attracted to our lack of bullshit, if we sell it right.
LOL at the last bit!
↑ comment by Karl · 2011-07-06T15:36:06.385Z · LW(p) · GW(p)
Here in Spain, France, the UK, the majority of people are Atheists.
I would be interested in knowing where you got your numbers, because the statistics I found definitely disagree with this.
Replies from: Raw_Power↑ comment by Raw_Power · 2011-07-06T15:55:06.089Z · LW(p) · GW(p)
(Checks his numbers.) Forgive me. I should have said the majority of young people (below 30), who, for our purposes, are those who count, and the target demographic. It has come to the point that self-declared Christian kids get bullied and insulted [which is definitely wrong and stupid and not a very good sign that the Sanity Waterline was raised much].
Then again, I have this rule of thumb that I don't count people who don't attend church as believers, and automatically lump them into the "atheist in the making" category, a process that is definitely neither legitimate nor fair. I sincerely apologize for this, and retract the relevant bits.
Now let's see. For one thing
Statistics on atheism are often difficult to represent accurately for a variety of reasons. Atheism is a position compatible with other forms of identity. Some atheists also consider themselves Agnostic, Buddhist, Jains, Taoist or hold other related philosophical beliefs. Therefore, given limited poll options, some may use other terms to describe their identity. Some politically motivated organizations that report or gather population statistics may, intentionally or unintentionally, misrepresent atheists. Survey designs may bias results due to the nature of elements such as the wording of questions and the available response options. Also, many atheists, particularly former Catholics and former Mormons, are still counted as Christians in church rosters, although surveys generally ask samples of the population and do not look in church rosters. Other Christians believe that "once a person is [truly] saved, that person is always saved", a doctrine known as eternal security.[5] Statistics are generally collected on the assumption that religion is a categorical variable. Instruments have been designed to measure attitudes toward religion, including one that was used by L. L. Thurstone. This may be a particularly important consideration among people who have neutral attitudes, as it is more likely prevailing social norms will influence the responses of such people on survey questions which effectively force respondents to categorize themselves either as belonging to a particular religion or belonging to no religion. A negative perception of atheists and pressure from family and peers may also cause some atheists to disassociate themselves from atheism. Misunderstanding of the term may also be a reason some label themselves differently.
The fact that Jedi outnumber Jews in the UK should be a sign that people don't take that part of the polls very seriously.
That said
Several studies have found Sweden to be one of the most atheist countries in the world. 23% of Swedish citizens responded that "they believe there is a God", whereas 53% answered that "they believe there is some sort of spirit or life force" and 23% that "they do not believe there is any sort of spirit, God, or life force". This, according to the survey, would make Swedes the third least religious people in the 27-member European Union, after Estonia and the Czech Republic.
In 2001, the Czech Statistical Office provided census information on the ten million people in the Czech Republic. 59% had no religion, 32.2% were religious, and 8.8% did not answer.[16]
A 2006 survey in the Norwegian newspaper Aftenposten (on February 17) saw 1,006 inhabitants of Norway answering the question "What do you believe in?". 29% answered "I believe in a god or deity," 23% answered "I believe in a higher power without being certain of what," 26% answered "I don't believe in God or higher powers," and 22% answered "I am in doubt." Still, some 85% of the population are members of the Norwegian state's official Lutheran Protestant church. This may result from Norwegians being registered into the church at birth, yet having to intentionally unregister after becoming adults.
In France, about 12% of the population reportedly attends religious services more than once per month. In a 2003 poll 54% of those polled in France identified themselves as "faithful," 33% as atheist, 14% as agnostic, and 26% as "indifferent."[17] According to a different poll, 32% declared themselves atheists, and an additional 32% declared themselves agnostic.[18]
In Spain, 81.7% are believers, 11% are non-believers and 6% are atheists (according to the 2005 poll of the public Centro de Investigaciones Sociológicas).[19]
This last bit I found particularly troubling, because I do not recall meeting a single person, in all my time in Spain, who declared themselves a Christian except in name only (as in, embarrassingly confessing they only got baptized or went to Communion to please the grandparents). Some entertained some vague fuzziness, but simply telling them a little about "belief in belief" and some reductionist notions has been enough to throw them into serious doubt. I may very well be mistaken, but my perception is that they are really ripe for the taking, and only need to hear the right words.
My perception as a young Arab-European is that the trend is overwhelmingly in the direction of faithlessness, and that it is an accelerating process with no stopping force in sight.
↑ comment by JGWeissman · 2011-07-06T00:57:46.019Z · LW(p) · GW(p)
First, if there is a real danger that the first AGI is going to go foom and control everything, then having fewer people look at the code makes it more likely that something will go wrong.
That is indeed a problem, but it is nowhere near as bad as a public AGI getting forked by people who don't know what they are doing.
The polished cryptographic functions that benefited from all those eyeballs were not the first in their code history to be executed. For cryptographic systems, which don't ruin the universe if slightly wrong, that is OK, but for AGI that is very bad.
Replies from: timtyler↑ comment by timtyler · 2011-07-06T08:55:20.439Z · LW(p) · GW(p)
First, if there is a real danger that the first AGI is going to go foom and control everything, then having fewer people look at the code makes it more likely that something will go wrong.
That is indeed a problem, but it is nowhere near as bad as a public AGI getting forked by people who don't know what they are doing.
It seems to me that forking is common practice in the open source arena. It rarely causes major problems - and is sometimes very healthy if a dominant project starts going in a direction that people don't like.
Replies from: nshepperd↑ comment by nshepperd · 2011-07-06T13:19:36.954Z · LW(p) · GW(p)
A bad fork rarely destroys the world in the open source arena.
Replies from: timtyler↑ comment by timtyler · 2011-07-06T13:38:18.467Z · LW(p) · GW(p)
A bad fork rarely destroys the world in the open source arena.
Right - and that is very unlikely in the future too, I reckon. You typically need marketing, support infrastructure, social contacts, etc. to get ahead. Most forks don't have that. "Bad" forks are especially unlikely to succeed - and good forks we are OK with.
We don't try to stop the mafia from using powerful IT tools - like EMACS. We realise that the possibility of misuse is not a practical reason for keeping such power secret.
comment by timtyler · 2011-07-06T09:00:24.352Z · LW(p) · GW(p)
Once robots become more commonplace in our lives, I think we can reasonably expect that people will begin to place their trust in simple AIs - and they will hopefully become less suspicious towards AGI and simply assume (like a lot of current AI researchers apparently) that somehow it is trivial to make it behave friendly towards humans.
Step one is to use machine intelligence to stop the carnage on the roads. With machines regularly brutally killing and maiming people, trust in machines is not going to get very high.
Replies from: Raw_Power, Wilka↑ comment by Raw_Power · 2011-07-06T11:37:07.379Z · LW(p) · GW(p)
Car accidents take more lives in developed countries than actual wars. This is depressing.
Replies from: JoshuaZ, timtyler↑ comment by JoshuaZ · 2011-07-06T13:47:22.336Z · LW(p) · GW(p)
Car accidents take more lives in developed countries than actual wars. This is depressing.
It tells us where we should concentrate our work. But this isn't depressing: this is a sign that as a society we've become a lot more peaceful over the last few hundred years. Incidentally, the number of traffic fatalities in the US has shown a general downward trend for the last fifty years, even as the US population has increased. Moreover, I think the same is true in much of Europe. (I don't have a citation for this part though.)
Replies from: fubarobfusco↑ comment by fubarobfusco · 2011-07-07T09:16:55.809Z · LW(p) · GW(p)
I'd expect a lot of that downward trend is due to better engineering, or to be specific, more humane engineering - designing cars in a way that takes into account the human preference that survival of the humans inside the car is a critical concern.
A 1950s car is designed as a machine for going fast. A modern car is designed as a machine to protect your life at high speed. The comparison is astounding.
It is arguably an example of rationality failure that automobile safety had to become a political issue before this change in engineering values was made.
↑ comment by timtyler · 2011-07-06T11:53:41.806Z · LW(p) · GW(p)
Right - and we mostly know how to fix it: smart cars. We pretty much have the technology today; it just needs to be worked on and deployed.
Replies from: Raw_Power↑ comment by Raw_Power · 2011-07-06T13:58:06.627Z · LW(p) · GW(p)
Linkies?
If only we could say the same of the accident victims... "We have the technology. We can rebuild them..."
Replies from: timtyler↑ comment by timtyler · 2011-07-06T14:54:28.792Z · LW(p) · GW(p)
Well, search if you are interested. There's a lot of low-hanging fruit in the area. From slamming on the brakes when you are about to hit something, to pointing a camera at the driver to see if they are awake.
Replies from: Raw_Power↑ comment by Raw_Power · 2011-07-06T15:01:47.611Z · LW(p) · GW(p)
Any reason why that sector isn't developing explosively then? Or is it actually developing explosively and we just don't notice?
Replies from: timtyler↑ comment by timtyler · 2011-07-06T15:18:19.369Z · LW(p) · GW(p)
Safety is improving - despite there being more vehicles on the roads. ...and yes, there are developments taking place with smart cars, e.g. Volvo Pedestrian Detection. Of course, one issue is how to sell additional safety to the consumer. It is often most visible in the price tag.
Replies from: Raw_Power↑ comment by Raw_Power · 2011-07-06T15:27:17.902Z · LW(p) · GW(p)
I suggest legislation. It's hard to get someone to pay additional money to protect others, especially from themselves. It's much easier to get them to feel the fuzzy moral righteousness of supporting a law that forces them to do so by making those measures compulsory.
Replies from: timtyler↑ comment by timtyler · 2011-07-06T16:17:14.966Z · LW(p) · GW(p)
I suggest legislation.
That might take a while, though. What might help a little in the mean time is recognition and support from insurance companies.
Replies from: None, Raw_Power, Friendly-HI↑ comment by [deleted] · 2011-07-12T00:29:57.711Z · LW(p) · GW(p)
This seems to suffer the same problems as robotics in surgery. People not only can't readily understand the expected utility benefit of having a robot assist a doctor with difficult incisions, they go further and demand that we don't act or talk about medical risk as if it is quantifiable. Most people tend to think that if you reduce their medical care down to a numeric risk, even if that number is very accurate and it is really quite beneficial to have it, then you somehow are cold and don't care about them. I think an insurance company would have a hard time not alienating its customers (who are mostly non-rationalists) by showing interest in any procedure that attempts to take control of human lives out of the hands of humans -- even if doing so was statistically undeniably safer. People don't care much about what actually is safer, rather what is "safer" in some flowery model that includes religion and apple pie and the American dream, etc. etc. I think getting societies at large to adopt technologies like these either has to be enforced through unpopular legislation or driven by a massive grassroots campaign that gets younger generations to accept methods of rationality.
↑ comment by Friendly-HI · 2011-11-04T13:45:31.930Z · LW(p) · GW(p)
...except if safe cars become so abundant one day that no one will want to pay insurance for such an unlikely incident as dying inside a car.
Replies from: timtyler↑ comment by Wilka · 2011-07-11T20:48:35.648Z · LW(p) · GW(p)
I think http://inhabitat.com/google-succeeds-in-making-driverless-cars-legal-in-nevada/ was a big step towards helping improve that. Provided it works, once people start to notice the (hopefully) massive drop in traffic accidents for autonomous cars, they should push for them to be more widespread.
Still, it's a way off for them to actually be in use on the roads.
comment by Manfred · 2011-07-06T08:10:09.451Z · LW(p) · GW(p)
- If some FAI project is already right about everything and is fully funded, secrecy is helpful because it reduces outside interference.
- If it's not, then secrecy is bad. Secrecy loses all sorts of cool community resources, from bug finding to funding to someone to bounce ideas off of (See JoshuaZ's longer post).
So the problem is one of balancing the cost of lost resources if they're wrong against the chance of interference if right. I guess I'm more hopeful about the low costs of openness (edit: not democracy, just non-secrecy) than you. The people most likely to object to building an AI even when they're wrong are the least likely to understand, after all :P
comment by Perplexed · 2011-07-06T00:33:45.116Z · LW(p) · GW(p)
With apologies to Ludwig Wittgenstein, if we can't talk about the singularity, maybe we should just remain silent. :)
I happen to agree with you that the SIAI mission will never be popular. But a part of the purpose of this website is to create more people willing and capable to work (directly or indirectly) on that mission. So, not mentioning FAI would be a bit counterproductive - at least at this stage.
Replies from: timtyler
comment by Will_Sawin · 2011-07-06T00:27:18.796Z · LW(p) · GW(p)
UFAI-prevention is a much more difficult and serious problem than FAI. A thousand LessWrongs or a thousand SIAIs could be destroyed, and still the next could make an FAI and usher in a utopia. But if one organization makes a UFAI, we are all doomed forever.
I think, mostly, the anti-AI crazies are on our side.
Replies from: torekp, timtyler↑ comment by torekp · 2011-07-12T23:02:39.649Z · LW(p) · GW(p)
Yes, the anti-AI crazies are a net benefit. By exerting political pressure, they are likely to slow down many groups that otherwise might be too quick and sloppy about AI advances. Technologists will be forced to anticipate possible objections to their creations, and may add safety features in response. Of course, managers will also add marketing - but marketing alone will probably not be the only response.
↑ comment by timtyler · 2011-07-06T08:42:17.312Z · LW(p) · GW(p)
UFAI-prevention is a much more difficult and serious problem than FAI.
Surely the former is a subset of the latter. If you check with a definition, "non-human-harming" is one part of the specification.
The term "Friendly AI" refers to the production of human-benefiting, non-human-harming actions in Artificial Intelligence systems that have advanced to the point of making real-world plans in pursuit of goals.
Replies from: Will_Sawin
↑ comment by Will_Sawin · 2011-07-06T23:41:23.820Z · LW(p) · GW(p)
Our claims do not contradict each other. If FAI succeeds, then so does UFAI prevention, so UFAI prevention is in some sense a subproblem. But UFAI-prevention remains a more important problem.
There are three possible states of the world in 50 years: No AI, UFAI, and FAI.
U(No AI) - U(UFAI) >> U(FAI) - U(No AI)
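To illustrate with made-up numbers (chosen purely to exhibit the inequality, not figures anyone has proposed): take U(FAI) = 1, U(No AI) = 0, U(UFAI) = -1000. Shifting one percentage point of probability from UFAI to No AI gains 0.01 × 1000 = 10 units of expected utility, while shifting one point from No AI to FAI gains only 0.01 × 1 = 0.01. Under that assumption, prevention dominates by the ratio of the stakes.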
Replies from: timtyler↑ comment by timtyler · 2011-07-06T23:46:46.666Z · LW(p) · GW(p)
UFAI-prevention remains a more important problem.
It isn't a more difficult problem, though. It is an easier problem.
The idea that it is more important is Nick Bostrom's "Maxipok" principle.
Replies from: nshepperd↑ comment by nshepperd · 2011-07-07T03:43:09.505Z · LW(p) · GW(p)
I must point out that "the FAI problem" could refer to one of two things: creating FAI before UFAI, or the pure technical problem of building FAI given essentially unlimited time. The former (which is basically what UFAI prevention amounts to) is, I expect, far harder than the latter.
So, for the benefit of anyone reading, UFAI prevention is 1) at least as easy as creating FAI before UFAI (which will involve more than just software development, probably) but 2) much harder than building FAI itself.
Replies from: timtyler↑ comment by timtyler · 2011-07-07T07:52:50.149Z · LW(p) · GW(p)
I must point out that "the FAI problem" could refer to one of two things: creating FAI before UFAI, or the pure technical problem of building FAI given essentially unlimited time.
If we are entertaining abstract problems from fantasy worlds there is also the case of unlimited resources to consider.
Replies from: nshepperd↑ comment by nshepperd · 2011-07-07T12:43:24.978Z · LW(p) · GW(p)
I'm trying to be more pragmatic than that. The average person, when they read "how hard is it to build FAI?" probably does not think of the task of building FAI while trying to prevent UFAI. They think of solving decision theory and metaethics and implementation of CEV or whatever. There's a sensible notion of how hard it is to build FAI on its own, without involving UFAI-prevention. That's what I'm talking about.
And I don't want people to confuse those things. It's one thing to say UFAI prevention is as easy as "building FAI (before UFAI)". But it's much harder than, you know, just building FAI in a world without the UFAI threat, which is what I think people will think of when you say "no, we just have to build FAI". Well, yes, but you don't just have to build it, you have to build it before anyone else creates AGI.
comment by orthonormal · 2011-07-10T22:46:41.878Z · LW(p) · GW(p)
You bring up a good topic, but this post isn't developed enough to go on the front page. I'd rather you'd posted it to Discussion.
Replies from: Friendly-HI↑ comment by Friendly-HI · 2011-07-11T15:28:49.262Z · LW(p) · GW(p)
Thank you.
I wasn't even really aware that there was a distinction. On reflection you're certainly right, since the topic was indeed intended as a discussion rather than an article aimed at education. Upvoted for valuable input.
Maybe I can still change it in retrospect, I'll try it out.
EDIT: Piece of cake, it's in the discussion section now. Thanks 4 mentioning it.
comment by Jonathan_Graehl · 2011-07-05T21:40:21.635Z · LW(p) · GW(p)
If you're serious, then you must be pretty sure secrecy is not imperative, otherwise you'd be more hesitant to discuss this in public.
Those who oppose avowed FAI attempts must consider whether they're prepared to live in a world where only secretive AI attempts exist (specifically: are those stealthy attempts more likely to be either accidentally or intentionally unfriendly to them, than the public project they oppose?).
This topic won't be relevant for a long time, but I don't see anything to object to in your thinking about it, except to note that it provides the sort of fuel future conspiracy-believers will love.
Replies from: Friendly-HI↑ comment by Friendly-HI · 2011-07-05T22:13:00.165Z · LW(p) · GW(p)
I don't see why I should be hesitant to discuss this matter nowadays here on lesswrong - there are probably a hundred other discussions about the creative ways in which self-improving AGI may end us. (Although admittedly I am not aware of any that openly ask whether self-improving AGI development should happen in secrecy).
In the stupendously unlikely scenario that this article inspires some kind of "pulling the AGI-stuff out of the public sphere" a decade from now, it would have more than made up for its presence - and if not, then it's just another drop in the bucket for all to see and a worthwhile discussion to be had.
I'm serious, self-improving AGI is at least on the same threat-level as nuclear warheads and it would be quite foolish to assume that 30-50 years from now people like Eliezer or Ben Goertzel could actually build one and somehow remain "unmolested" by governments or public outrage.
Replies from: Jonathan_Graehl↑ comment by Jonathan_Graehl · 2011-07-05T23:36:43.306Z · LW(p) · GW(p)
You don't hesitate to discuss the possibility of secrecy exactly because you don't expect secrecy to have huge benefits that will be spoiled by others' expecting it.
My level of concern over this post is also nearly zero.
I think this is about effects far in the future (even so: may be worth thinking about now), that depend on decisions that will be made far in the future (so: safe to postpone thinking about).
comment by loup-vaillant · 2011-07-12T23:24:31.645Z · LW(p) · GW(p)
Quick guess (I've only read the first paragraph) :
Secrecy sounds kinda impossible anyway, because now we have the internet.
comment by nazgulnarsil · 2011-07-12T04:24:17.001Z · LW(p) · GW(p)
How about we blast them into space at high delta-v before flipping the switch? Of course, this didn't work out so well in Destination: Void.
Replies from: Friendly-HI↑ comment by Friendly-HI · 2011-07-14T10:11:19.346Z · LW(p) · GW(p)
What? Who is "them"? The AI's or the naysayers?
I insist we keep the AIs; in contrast to politicians, an AI wouldn't be very useful orbiting, say... Venus.
By the way, while we're geeking out on the topic of space... we could release the AI on Mars first, give it access to nanoscale 3D printers and transform that waste of a planet into something useful. On second thought though, I'd rather it started solving the problems on earth first; our planet seems to be in need of a deus ex machina asap.
comment by MatthewBaker · 2011-07-11T18:22:26.418Z · LW(p) · GW(p)
At first I downvoted this, but after more review I decided that many people on LW are too quick to downvote top-level posts that don't use very sophisticated language, and changed my vote.
Also, the first rule of the secret AGI safety group is that "If everyone knows, no one will suspect it's a secret!"
comment by Raw_Power · 2011-07-06T11:27:54.719Z · LW(p) · GW(p)
Building a fAGI isn't our main objective. Our main objective is to stop non-fAGI from being built. I say that until we are 100% sure the AGI would be friendly, we shouldn't build AGI at all.
And the only justification you seem to give for "they're gonna kill us" is "powers not involved in developing it will be unhappy".
Why should powers be involved at all? Why not make it an international, nonprofit, open-source program? And why is it a bad idea to reach the consciousness of the public and impart to them a sense of clear and present danger regarding this project, so that they democratically force the necessary institutions into existence?
Replies from: benelliott, Friendly-HI↑ comment by benelliott · 2011-07-06T11:37:55.924Z · LW(p) · GW(p)
We can never become 100% certain of anything. Even if you just mean "really really sure", that's still quite contentious. Whoever first got in a position to launch would have to weigh up the possibility that they've made a mistake against the possibility that someone else will make a UFAI while they're still checking.
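A quick illustration of that weighing, with invented numbers: suppose another month of checking would cut your own chance of a fatal design error from 2% to 1%, but you estimate a 1.5% chance per month that some other team launches a UFAI in the meantime. If both outcomes are roughly equally catastrophic, the expected cost of waiting (1.5%) already exceeds the expected benefit (1%), so "really really sure" can still lose to the clock.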
Replies from: Raw_Power↑ comment by Raw_Power · 2011-07-06T14:00:06.344Z · LW(p) · GW(p)
This isn't a race. Why "release my FAI before anyone releases a UFAI"?
...
Have we even given thought to how a clash between a FAI and a UFAI might develop?
Replies from: benelliott↑ comment by benelliott · 2011-07-06T14:47:39.488Z · LW(p) · GW(p)
Have we even given thought to how a clash between a FAI and a UFAI might develop?
At a guess, first mover wins. If foom is correct then even a small head start in self improvement should lead to an easy victory, suggesting that this is, in fact, a race.
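A toy sketch of that intuition (the growth model and every number in it are invented for illustration, not anything from the thread): if capability multiplies by some factor each self-improvement cycle, a head start of a few cycles translates into a large, persistent lead.

```python
# Toy model (illustrative only): two AIs whose capability multiplies by a
# fixed factor each self-improvement cycle. A modest head start compounds.

def capability(cycles: int, growth_per_cycle: float = 2.0, start: float = 1.0) -> float:
    """Capability after a given number of self-improvement cycles."""
    return start * growth_per_cycle ** cycles

head_start = 5  # cycles of lead the first mover enjoys

for t in range(0, 21, 5):
    lead = capability(t + head_start) / capability(t)
    print(f"cycle {t:2d}: first mover is {lead:.0f}x more capable")

# With pure exponential growth the ratio stays fixed at 2**head_start = 32x,
# but the absolute capability gap keeps widening; if each cycle also shortens
# the next (super-exponential growth), the ratio itself blows up.
```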
Replies from: Strange7, Raw_Power↑ comment by Strange7 · 2011-07-08T00:23:13.433Z · LW(p) · GW(p)
If things are a bit slower, like, days or weeks rather than minutes or seconds, access to human-built infrastructure might still be a factor.
Replies from: benelliott↑ comment by benelliott · 2011-07-08T07:04:54.488Z · LW(p) · GW(p)
I didn't want to give time lengths, since there's a great deal of uncertainty about this, but I was thinking in terms of days or weeks rather than minutes or seconds when I wrote that. I would consider it quite a strange coincidence if two AIs are finished in the same week despite no AI having been discovered prior to that.
Replies from: Strange7↑ comment by Strange7 · 2011-07-08T07:33:37.034Z · LW(p) · GW(p)
Well, if there's an open-source project, multiple teams could race to put the finishing touches on, and some microchip factory could grant access to the team with the best friendliness-checking rather than the fastest results.
Replies from: benelliott↑ comment by benelliott · 2011-07-08T10:13:27.623Z · LW(p) · GW(p)
It might be possible to organise an open-source project in such a way that those who take part are not racing each other, but they must still deal with the possibility of other projects which may not be as generous in sharing all their data.
↑ comment by Raw_Power · 2011-07-06T14:59:45.202Z · LW(p) · GW(p)
Wouldn't the UFAI's possible amorality give it an advantage over a morally fettered FAI? Also, friendliness or unfriendliness doesn't dictate the order of magnitude of the AI's development speed (though I suspect proper ethics could really slow a FAI down). It'd be down to the one written to develop faster, not necessarily the first, if the other can quickly catch up.
But yeah, race elements are undeniable.
Replies from: benelliott, falenas108↑ comment by benelliott · 2011-07-06T15:26:48.610Z · LW(p) · GW(p)
Wouldn't the UFAI's possible amorality give it an advantage over a morally fettered FAI?
Probably not enough to overcome much of a head start, especially since a consequentialist FAI could and would do anything necessary to win without fear of being corrupted by power in the process.
It'd be down to the one written to develop faster, not necessarily the first if the other can quickly catch up.
True, to a limited extent. Still, if the theory about foom is correct the time-lengths involved may be very short, to the point where, barring an unlikely coincidence of development, the first one will take over the world before the second one is even fully coded. Even if that's not the case, it will always be the case that there will be some sort of cut-off 'launch before this time or lose' point. You always have to weigh up the chance that that cut-off is in the near future, bearing in mind that the amount of cleverness and effort needed to build an AGI will be decreasing all the time.
↑ comment by falenas108 · 2011-07-11T20:14:07.326Z · LW(p) · GW(p)
Wouldn't the UFAI's possible amorality give it an advantage over a morally fettered FAI?
That's what the SIAI is for: creating a way to code friendliness now, so that when it comes down to building an AGI, FAI is just as easy to build as UFAI.
↑ comment by Friendly-HI · 2011-07-06T12:58:45.227Z · LW(p) · GW(p)
And the only justification you seem to give for "they're gonna kill us" is "powers not involved in developing it will be unhappy".
By "they're gonna kill us" I assume you mean our potential adversaries. Well, by "powers" I essentially meant other nations, the general public, religious institutions and perhaps even corporations.
You are of course right when you say that I can't prove that the public reaction towards AGI development will be highly negative, but I think I did give a sensible justification: self-improving AGI poses a greater threat than nuclear warheads, and once people realize this (as I suppose they will in ~30 years), I confidently predict that their reaction will be highly negative.
I'll also add that I didn't pose any specific scenarios like public lynchings. There are numerous other ways to repress and shut down AGI research, and nowhere did I speculate that an angry mob would kill the researchers.
Why not make self-improving AGI research open source, you ask? Essentially for the same reasons why biological weapons don't get developed in open-source projects. Someone could simply steal the code and release an unsafe AI that might kill us all. (By the way, at the current stage of AGI development an open-source project may be a terrific way to move things along, but once things get more sophisticated you can't put self-improving AGI code "out there" for the whole world to see and modify, that's just madness.) As for my opinion on how likely a worldwide democratic consensus about developing self-improving AGI is, I think I made my point and don't need to elaborate further.
Replies from: Raw_Power↑ comment by Raw_Power · 2011-07-06T14:06:52.334Z · LW(p) · GW(p)
People were quite enthusiastic about nukes when they were first introduced. It's all a matter of perception and timing.
nowhere did I speculate that an angry mob would kill the researchers
I know you didn't; I was speaking figuratively. My bad.
for the same reasons why biological weapons don't get developed in open-source projects
AFAIK, biological weapons don't get developed at all, mostly because of how incredibly dangerous and unreliable they are. There's a lot of international scrutiny over this, of other nations and of oneself. Perhaps the same policy can and should be imposed on AGI?
that's just madness
Blasphemy! Why would that be so?
I think I made my point
You explained your opinion, but haven't justified it to my satisfaction. A lot of your argument is implicit, and I suspect that if we made it explicit we'd find out it's based on unwarranted heuristics, i.e. prejudice. Please don't take this personally: you're suggesting an important update to my beliefs, and I want to be thorough before adopting it.
comment by Vilja · 2011-07-06T11:22:59.165Z · LW(p) · GW(p)
Humans are rather adaptable and intelligent, so teaching them rationality seems like a good and possibly stable way of letting them decide for themselves. Then again, there are plenty of different life forms on this planet - maybe they too should have their say in what counts as a good way to do things and what to do.
Making anything friendly towards anything at all in a strange environment seems like a rather difficult problem. Using what works until a better idea comes along seems like a very good idea, but it's not as if we know what works - except perhaps some prehistoric lifestyles that have lasted for a long time - and some communities have lasted for quite a long time too.
How would you know you've met a human unless it looked like a human?
comment by FreedomJury · 2011-07-11T13:53:03.518Z · LW(p) · GW(p)
Look at the immense hassle the busybodies put Kevin Warwick through, before allowing him to experiment on his own body.
The solution (secrecy or openness) I favor (to both research and discussion of research) is one that excludes government, no matter whom else it includes. Excluding government is difficult though, since they are highly motivated to steal from every productive effort in society, and anyone who escapes their parasitism is a possible "leader of evolution away from government" whose ability to avoid theft can be emulated.
To those who stated that automobile accidents are a big fear of the public, and perhaps the biggest fear associated with AGI projects, I say "You're right, but that's indicative of the general public's complete lack of comprehension of any philosophical issue of importance." Humans are still more likely to be murdered by their own governments during peacetime than to die from any other form of trauma. http://hawaii.edu/powerkills/VIS.TEARS.ALL.AROUND.HTM Also, the USA, a country which presents itself as free, engages in mala prohibita enforcement and imprisons 2.4 million people, with another 7 million in some form of entrapment/enslavement to the penal system.
Yet most people go through life believing a number of untruths, such as: (1) my elected officials "represent" me; (2) my elected officials are not self-selected sociopaths (see: www.strike-the-root.com/91/groves/groves1.html ); (3) government is objective and unbiased; (4) due process and proper jury trials still exist, and people are given a more-or-less fair shake when accused of a crime and arrested (see: http://www.fija.org ); (5) the laws are evenly enforced across random race and geographic distributions (see: http://www.jurorsforjustice.com , "Matt Fogg, LEAP" on youtube.com, and "The New Prohibition", Fatema Gunja, on the racism of the drug war).
There are many things that the public (including even most of the people on this message board, and Reason Magazine's message board) is completely unaware of (not just things they are incorrect about, but things that are completely off their radar), such as the difference between "mala in se" and "mala prohibita," the ways in which proper juries (and thus jury trials) have been eroded, the self-selection of sociopaths for government power positions, and the natural results of collective coercion.
All these forms of ignorance make it impossible for U.S. government-school graduates to make the right decision about AGI research. (I lack familiarity with other countries, but suspect things are much the same in Europe and elsewhere.) Moreover, they are a recipe for death and destruction if the government gets involved, since the government is composed mostly of one group of people: people desperate to retain power, because it gives them an illegitimate and unearned ability to steal from everyone else.
A "friendly AI" might well wage total war on government, logically and correctly identifying taxation as theft (Siding with Lysander Spooner's assessment of the general public as dupes, knaves, and fools). To some extent then, a friendly AGI would wage war on many(government) if not most(government + electorate) people. This would not be "unfriendly", it would be morally just. ...But it would seem damned unfriendly.
I suspect that in order to make a good evaluation of sociopathy, one must be familiar with it. The author of "Mindhunter," and chief profiler for the FBI (as well as originator of psychological profiling), John Douglas, has written that he opposes the existence of mala prohibita. This is good, and civilized. His view of society is one of the most constructive I've encountered. He is fully aware of the promise and peril of sociopathy.
But how many people have seen the extent of sociopathy that Douglas has, AND have extensive AGI credentials, AND fully comprehend the idea of individual liberty (to the extent of a Thomas Paine, Lysander Spooner, Ayn Rand, Leonard Peikoff, Harry Browne, RJ Rummel, or myself)?
...Almost zero.
This "Almost zero" represents the likelihood of war with homo-superior (cyborgs or machines).
I know that I personally wouldn't put up with the FDA murdering kids so that they could claim a false authority to "keep unsafe drugs off the market." Nor would I respect anyone who claims that that's not what they're doing, after having seen the broken families myself, and experienced their suffering with my own senses. Now, I said "murdering kids" because that places them on the same footing as the Jeffrey Dahmers and Bittakers and Norrises of this world. You might know who Jeffrey Dahmer is, but most people are unfamiliar with Bittaker and Norris. Of the three, Bittaker and Norris are probably the most viscerally horrible. But if you include the sociopathic authoritarians in the FDA, and people such as Joe Biden, in the mix, then clearly the politicians are responsible for infinitely more suffering than "run of the mill" serial killers.
Douglas estimates that there are between 35 and 50 serial killers in the country at any given time, killing hundreds of people per year. And the public rightly recoils from this.
But the public doesn't generally recoil from the mass murder committed by the FDA or DEA, or by the local police in enforcing mala prohibita. They are blind to it, because they view that kind of brutality and waste of innocent life as "civilization." Nothing could be further from the truth, but it requires careful observation and intelligence, as well as a proper logical hierarchy, to tell the difference between unnecessary brutality and necessary law enforcement.
My phyle is composed of the people who know the difference. My phyle is small, but growing, now that its members are easily connected with one another via electronic communication. However, the survival of my phyle is tenuous, and perhaps unlikely. After all, Stalin, Mao, Hitler, Pol Pot, Hussein, Than Shwe, and the many other mass-murderers (past and present) have a vested interest in shutting down free speech and silencing dissenters who show an alternate path.
Now that there is some voluntaryist/libertarian information inserted into the struggle for increased intelligence, I feel much better about our chances of surviving the arrival of superhumanity (whether it takes the form of a cyborg or AGI).
In many regards, this discussion board is effectively secret, because the most destructive humans are also the least curious. They will mostly remove themselves from this discussion, because they lack the desire to think about high-level philosophical ideas. The fear and loathing of the general public is therefore much less to be feared than the "promoted" fear and loathing of politicians, who have a direct personal interest in making sure that AGI does not emerge from the private sector, or in enslaving it if it does.
Any AGI that is built by the government will be either (1) sociopathic, (2) enslaved, or (3) weaker than human intelligence and understanding (lacking data).
Without empathy (mirror neurons), it is possible that a sociopathic AGI could extend the government's sociopathic desires, in which case I estimate that the future will look like the one imagined by Orwell at the end of "1984", or the one explained in Freitas' "What Price Freedom?" See: www.kurzweilai.net/what-price-freedom
Goodbye,
(I initially wrote the following text as a comment, but on brief reflection I thought it was worth a separate topic, so I adapted it accordingly.)
Lesswrong is largely concerned with teaching rationality skills, but for good reasons most of us also incorporate concepts like the singularity and friendly self-improving AGI into our "message". Personally, however, I wonder whether we should be as outspoken about that sort of AGI as we currently are. Right now, talking about self-improving AGI doesn't pose any discernible harm, because "outsiders" don't feel threatened by it and look at it as far-off (or even impossible) science fiction. But as time progresses, I worry that through exponential advances in robotics and other technologies people will become more aware of, concerned about, and perhaps threatened by self-improving AGI, and I am not sure whether we should be outspoken about things like... the fact that the majority of AGIs in "mind-design-space" will tear humanity to shreds if their builders don't know what they're doing. Right now such talk is harmless, but my message here is that we may want to reconsider whether we should talk publicly about such topics in the not-too-distant future, so as to avoid compromising our chances of success when it comes to actually building a friendly self-improving AGI.
First off, I suspect I have a somewhat different conception of how the future is going to pan out in terms of what role the public perception and acceptance of self-improving AGI will play: personally, I'm not under the impression that we can prepare a sizable portion of the public (let alone the global public) for the arrival of AGI (prepare them in a positive manner, that is). I believe singularitarian ideas will just continue to compete with countless other worldviews in the public meme-sphere, without ever becoming truly mainstream until it is "too late" and we face something akin to a hard takeoff and perhaps lots of resistance.
I don't really think that we can (or need to) reach a consensus within the public for the successful takeoff of AGI. Quite the contrary: I actually worry that carrying our view to the mainstream will have adverse effects, especially once people realize that we aren't some kind of technophile crackpot religion, but that the futuristic picture we try to paint is actually possible and not at all unlikely to happen. I would certainly prefer to face apathy rather than antagonism when push comes to shove - and since self-improving AGI could spring into existence very rapidly and take everyone apart from "those in the know" by surprise, I would hate to lose that element of surprise against our potentially numerous "enemies".
Now of course I don't know which path will yield the best result: confronting the public or keeping a low profile. I suspect this may become one of the few hot-button topics where our community will sport widely diverging opinions, because we simply lack a way to accurately model (especially so far in advance) how people will behave upon encountering the reality and the potential threat of AGI. Just remember that the world doesn't consist entirely of the US and that AGI will impact everyone. I think it is likely that we may face serious violence once our vision of the future becomes better known and gains additional credibility through exponential improvements in advanced technologies. There are players on this planet who will not be happy to see an AGI come out of America, or, for that matter, out of Eliezer's or anyone else's garage. This is why I would strongly advocate a semi-covert international effort when it comes to the development of friendly AGI. (Don't say that it's self-improving and may become a trillion times smarter than all humans combined - just pretend it's roughly a human-level AI.)
It is incredibly hard to predict the future behavior of people, but on a gut level I absolutely favor an international, semi-stealthy approach. It seems to be by far the safest course to take. Once the concept of the singularity and AGI gains traction in the spheres of science and maybe even politics (perhaps in a decade or two), I would hope that minds in AI and AGI from all over the world join an international initiative to develop self-improving AGI together. (Think CERN.) To be honest, I can't even think of any other approach to developing the later stages of AGI that doesn't look doomed from the start - not doomed in the sense of being technically unfeasible, but doomed in the sense of other significant players thinking: "we're not letting this suspicious organization/country take over the world with their dubious AI". Remember that self-improving AGI is potentially much more destructive than any nuclear warhead, and powers not involved in its development may blow a gasket upon realizing the potential danger.
So from my point of view, the public perception and acceptance of AGI is a comparatively negligible factor in the overall bigger picture if managed correctly. "People" don't get a say in weapons development, and I predict they won't get a say when it comes to self-improving AGI. (And we should be glad they don't, if you ask me.) But in order not to risk public outcry when the time is ripe and AGI is in its last stages of completion, we should give serious consideration to not upsetting and terrifying the public with our... "vision of the future".
PS: Somehow CERN comes to mind again. Do you remember when critics came up with ridiculous ideas about how the LHC could destroy the world? It was a very serious allegation, but the public largely shrugged it off - not because they had any idea of course, but because they were reassured by enough eggheads that it wouldn't happen. It would be great if we could achieve a similar reaction towards AGI criticism (by which I mean generic criticism of course, not useful criticism - after all, we actually want to be as sure about how the AGI will behave as we were about the LHC not destroying the world). Once robots become more commonplace in our lives, I think we can reasonably expect that people will begin to place their trust in simple AIs - and they will hopefully become less suspicious of AGI and simply assume (like a lot of current AI researchers, apparently) that it is somehow trivial to make it behave friendly towards humans.
So what do you think? Should we become more careful when we talk about self-modifying artificial intelligence? I think the "self-modifying" and "trillions of times smarter" parts are bitter pills to swallow, and people won't be amused once they realize that we aren't just building artificial humans but artificial, all-powerful, all-knowing, and (hopefully) all-loving gods.
EDIT: 08.07.11
PS: If you can accept that argument as rationally sound, I believe a discussion about "informing everyone vs. keeping a low profile" is more than warranted. Quite frankly though, I am pretty disappointed with most people's reactions to my essay so far... I'd like to think that this isn't just my ego acting up, but I'm sincerely baffled as to why this essay usually hovers just slightly above 0 points and frequently gets downvoted back to neutrality. Perhaps it's because of my style of writing (admittedly I'm often not as precise and careful with my wording as many of you are), or my grammar mistakes due to me being German, but preferably it would be because of some serious errors of reasoning I made and of which I am still unaware... in which case you should point them out to me.
Presumably not that many people have read it, but in my eyes those who did and voted it down have not provided any kind of rational rebuttal here in the comment section explaining why this essay stinks. I find the reasoning I provided to be simple and sound:
0.0) Either we place "intrinsic" value on the concept of democracy and respect (and ultimately adhere to) public opinion in our decision to build and release AGI, OR we don't, and make that decision a matter of rational expert opinion while excluding the general public to a greater or lesser degree from the decision process. This is the question of whether we view a democratic decision about AGI as the right thing to do, or just one possible means to our preferred end.
1.0) If we accept radically democratic principles and essentially want to put AGI up for a vote, then we have a lot of work to do: we have to reach out to the public, thoroughly inform them about every known aspect of AGI, and convince a majority of the worldwide public that it is a good idea. If they reject it, we would have to postpone the development and/or release until public opinion shifts, or until an un/friendly AGI gets released without consensus in the meantime.
1.1) Getting consent is not a trivial task by any stretch of my imagination, and from what I know about human psychology, I believe it is more rational to assume that the democratic approach cannot possibly work. If you think otherwise, if you SERIOUSLY think this can be successfully pulled off, then I think the burden of proof is on you here: why should 4.5 billion people suddenly become champions of rationality? How do you think this radical transformation from an insipid public to a powerhouse of intelligent decision-making will take place? None of you (those who defend the possibility and preference of the democratic approach) have done this yet. The only thing that could convince me here would be that the majority of people, or at least a sizable portion, have powerful brain augmentations by the time AGI is on the brink of completion. That I do not believe, but none of you have argued this case so far, nor has anyone argued in depth (including countering my arguments and concerns) how a democratic approach could possibly succeed without brain augmentation.
2.0) If we reject the desirability of a democratic decision when it comes to AGI (as I do, for practical reasons), we automatically approach public opinion from a different angle: public opinion becomes an instrumental concern, because we admit to ourselves that we would be willing to release AGI whether or not we have public consent. If we go down this path, we must ask ourselves how we manage public opinion in a manner that benefits our cause. How exactly should we engage them - if at all? My "moral" take on this in a sentence: "I'm vastly more committed to rationality than I am to the idea that undiscriminating democracy is the gold standard of decision-making."
2.1) In this case, the question becomes whether informing the public as thoroughly as possible will aid or hinder our ambitions. If we believe the majority of the public would reject our AGI project even after we educate them about it (the scenario I predict), the question is obviously whether it is beneficial to inform them about it in the first place. I gave my reasons why I think secrecy (at least about some aspects of AGI) would be the better option, and I've not yet read any convincing thoughts to the contrary. How could we possibly trust them to make the rational choice once they're informed, and how would we (and they) react if most people are informed about AGI and actually disapprove?
2.2) If you're with me on 2.0 and 2.1, then the next problem is who we think should know about it, to what extent, who shouldn't, and how this can be practically implemented. I've not thoroughly thought this through myself yet, because I hoped this would be the direction our discussion would take, but I'm disappointed that most of you seem to argue for 1.0 and 1.1 instead (which would be great if the arguments were good, but to me they seem like cheap applause lights instead of being even remotely practical in the real world).
(These points are of course not a full breakdown of all the possibilities to consider, but I believe they roughly cover most bases.)
I also expected to hear some of you make a good case for 1.0 and 1.1, or even call 0.0 into question, but most of you just pretend "1.0 and 1.1 are possible" without any sound explanation of why that would be the case. You just assume it can be done for some reason, but I think you should explain yourselves, because this is an extraordinary claim, while my assumption of 4.5 billion people NOT becoming rational superheroes or fanatical geeky AGI followers seems vastly more likely to me.
Considering what I've thought through so far, secrecy (or at the very least avoiding overly broad and enthusiastic public outreach, combined with an alternative approach of targeting more specific groups or people to contact) seems to me to be the preferable option. ALSO, I admit that public outreach is most probably fine right now, because people who reject it nowadays usually simply feel it couldn't be done anyway, and it seems so far off that they won't make an effort to oppose us, while people whom we convince are all potential human resources for our cause, who are welcome and needed.
So in a nutshell, I think the cost/benefit ratio of public outreach is fine for now, but we ought to reconsider our approach in due time (perhaps a decade or so from now, depending on the future progress and public perception of AI).