FAI, FIA, and singularity politics

post by Mitchell_Porter · 2012-11-08T17:11:10.674Z · LW · GW · Legacy · 11 comments

In discussing scenarios of the future, I speak of "slow futures" and "fast futures". A fast future is exemplified by what is now called a hard takeoff singularity: something bootstraps its way to superhuman intelligence in a short time. A slow future is a continuation of history as we know it: decades pass and the world changes, with new politics, culture, and technology. To some extent the Hanson vs Yudkowsky debate was about slow vs fast; Robin's future is fast-moving, but along the way there is never an event in which some single "agent" becomes all-powerful by getting ahead of all others.

The Singularity Institute does many things, but I take its core agenda to be about a fast scenario. The theoretical objective is to design an AI which would still be friendly if it became all-powerful. There is also the practical objective of ensuring that the first AI across the self-enhancement threshold is friendly. One way to do that is to be the one who makes it, but that's asking a lot. Another way is to put enough FAI design and FAI theory out there that the people who do win the mind race will know about it and will have taken it into consideration. Then there are mixed strategies, such as working on FAI theory while liaising with known AI projects that are contenders in the race and whose principals are receptive to the idea of friendliness.

I recently criticised a lot of the ideas that circulate in conjunction with the concept of friendly AI. The "sober" ideas and the "extreme" ideas have a certain correlation with slow-future and fast-future scenarios, respectively. The sober future is a slow one where AIs exist and posthumanity expands into space, but history, politics, and finitude aren't transcended. The extreme future is a fast one where one day the ingredients for a hard takeoff are brought together in one place, an artificial god is born, and, depending on its inclinations and on the nature of reality, something transcendental happens: everyone uploads to the Planck scale, our local overmind reaches out to other realities, we "live forever and remember it afterwards".

Although I have criticised such transcendentalism, saying that it should not be the default expectation of the future, I do think that the "hard takeoff" and the "all-powerful agent" would be among the strategic considerations in an ideal plan for the future, though in a rather broader sense than is usually discussed. The reason is that if one day Earth is being ruled by, say, a coalition of AIs with a particular value system, with natural humans reduced to the status of wildlife, then the functional equivalent of a singularity has occurred, even if these AIs have no intention of going on to conquer the galaxy; and I regard that as a quite conceivable scenario. It is fantastic (in the sense of mind-boggling), but it's not transcendental. All the scenario implies is that the human race is no longer at the top of the heap; it has successors and they are now in charge.

But we can view those successors as, collectively, the "all-powerful agent" that has replaced human hegemony. And we can regard the events, whatever they were, that first gave the original such entities their unbeatable advantage in power, as the "hard takeoff" of this scenario. So even a slow, sober future scenario can issue in a singularity where the basic premises and motivations of existing FAI research apply. It's just that one might need to be imaginative in anticipating how they are realized.

For example, perhaps hegemonic superintelligence could emerge, not from a single powerful AI research program, but from a particular clique of networked neurohackers who have the right combination of collaborative tools, brain interfaces, and concrete plans for achieving transhuman intelligence. They might go on to build an army of AIs, and subdue the world that way, but the crucial steps which made them the winners in the mind race, and which determined what they would do with their victory, would lie in their methods of brain modification, enhancement, and interfacing, and in the ends to which they applied those methods.

In such a scenario, we could speak of "FIA" - friendly intelligence augmentation. A basic idea of existing FAI discourse is that the true human utility function needs to be determined, and then the values that make an AI human-friendly would be extrapolated from that. Similar thinking can be applied to the prospect of brain modification and intelligence increase in human beings. Human brains work a certain way, modified or augmented human brains will work in specifically different ways, and we should want to know which modifications are genuinely enhancements, which modifications stabilize values and which destabilize them, and so on.

If there were a mature and sophisticated culture of preparing for the singularity, then there would be FAI research, FIA research, and a lot of communication between the two fields. (For example, researchers in both fields need to figure out how the human brain works.) Instead, the biggest enthusiasts of FAI are a futurist subculture with a lot of conceptual baggage, and FIA is nonexistent. However, we can at least start thinking and talking about how this broader culture of research into "friendly minds" could take shape.

Despite its flaws, the Singularity Institute stands alone as an organization concerned with the fast future scenario, the hard takeoff. I have argued that a sober futurology, while forecasting a slowly evolving future for some time to come, must ultimately concern itself with the emergence of a posthuman power arising from some cognitive technology, whether that is AI, neurotechnology, or a combination of these. So I have asked myself who, among "slow futurists", is best equipped to develop an outlook and a plan which is sober and realistic, yet also visionary enough to accommodate the really overwhelming responsibility of designing the architecture of friendly posthuman minds capable of managing a future that we would want.

At the moment, my favorites in this respect are the various branches, scattered around the world, of the Longevity Party that was started in Russia a few months ago. (It shouldn't be confused with "Evolution 2045", a big-budget rival backed by an Internet entrepreneur, which especially promotes mind uploading. For some reason, transhumanist politics has begun to stir in that country.) If the Singularity Institute falls short of the ideal, then the "longevity parties" are even further away from living up to their ambitious agenda. Outside of Russia, they are mostly just small Facebook groups; the most basic issues of policy and practice are still being worked out; and no one involved has much of a history of political achievement.

Nonetheless, if there were no prospect of a singularity but science and technology were otherwise advancing as they are, the agenda here looks just about ideal. People age and decline until it kills them; an extrapolation of biomedical knowledge suggests this is not a law of nature but just a sign of primitive technology; and the Longevity Party exists to rectify this situation. It's visionary, and despite the current immaturity and growing pains, an effective longevity politics must arise one day, simply because the advance of technology will force the issue on us! The human race cannot currently muster enough will to live to make rejuvenation an explicit political goal, but the incremental pursuit of health and well-being is taking us in that direction anyway.

There's a vacuum of authority and intention in the realm of life extension, and transhuman technology generally, and these would-be longevity politicians are stepping into that vacuum. I don't think they are ready for all the issues that transhuman power entails, but the process has to start somewhere. Faced with the infinite possibilities of technological transformation, the basic affirmation of the desire to live as well as reality permits can serve as a founding principle against which to judge attitudes and approaches for all the more complicated "issues" that arise in a world where anyone can become anything.

Maria Konovalenko, a biomedical researcher and one of the prime movers behind the Russian Longevity Party, wrote an essay setting out her version of how the world ought to work. You'll notice that she manages to include friendly AI on her agenda. This is another example, a humble beginning, of the sort of conceptual development which I think needs to happen. The sort of approach to FAI that Eliezer has pioneered needs a context, a broader culture concerned with FIA and the interplay between neuroscience and pure AI, and we need realistic yet visionary political thinking which encompasses both the shocking potentials of a slow future, above all rejuvenation and the conquest of aging, and the singularity imperative.

Unless there is simply a catastrophe, one day someone, some thing, some coalition will wield transhuman power. It may begin as a corporation, or as a specific technological research subculture, or as the peak political body in a sovereign state. Perhaps it will be part of a broader global culture of "competitors in the mind race" who know about each other and recognize each other as contenders to be first across the line. Perhaps there will be coalitions in the race: contenders who agree on the need for friendliness and the form it should take, and others who are pursuing private power, or who are just pushing AI ahead without too much concern for the transformation of the world that will result. Perhaps there will be a war as one contender begins to visibly pull ahead, and others resort to force to stop them.

But without a final and total catastrophe, however much slow history there remains ahead of us, eventually someone or something will "win", and after that the world will be reshaped according to its values and priorities. We don't need to imagine this as "tiling the universe"; it should be enough to think of it as a ubiquitous posthuman political order, in which all intelligent agents are either kept so powerless as to not be a threat, or managed and modified so as to be reliably friendly to whatever the governing civilizational values are. I see no alternative to this if we are looking for a stable long-term way of living in which ultimate technological powers exist; the ultimate powers of coercion and destruction can't be left lying around, to be taken up by entities with arbitrary values.

So the supreme challenge is to conceive of a social and technological order where that power exists, and is used, but it's still a world that we want to live in. FAI is part of the answer, but so is FIA, and so is the development of political concepts and projects which can encompass such an agenda. The Singularity Institute and the Longevity Party are fledgling institutions, and if they live they will surely, eventually, form ties with older and more established bodies; but right now, they seem to be the crucial nuclei of the theoretical research and the political vision that we need.

11 comments

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-11-08T22:14:55.673Z · LW(p) · GW(p)

Somewhat off-topic, but: Do we believe the Russian allegedly-rich transhumanist movements actually have any backing? It's much easier to claim that than to have it, and I've seen no signs of actual large spending by them.

comment by turchin · 2012-11-09T18:23:33.691Z · LW(p) · GW(p)

I am one of the founders of the Russian Longevity Party. I don't think Russian transhumanists are very rich. Also, there is a big difference between what anyone has and what he invests in projects. For example, I live in an apartment that costs half a million dollars and I own it, but I am not going to sell it or even rent it out, for various reasons (I have a large art collection which I keep here). I decided that I will invest time, but not money, in the Russian Longevity Party. Another cofounder of our party is Michael Batin, and he owns a lot of land, but it is an illiquid asset. Batin spent a lot on a Macchiarini organ regeneration operation, for example. Also, Russian transhumanists are known for a history of ineffective spending, mostly on life-extension projects. Last year I spent 10k USD to go to NY for the Singularity Summit, mostly on expensive hotels; if my social skills were better I could have found free accommodation. I don't think Itskov is a billionaire. I estimate his fortune at 10 million USD.

comment by Mitchell_Porter · 2012-11-09T01:06:34.640Z · LW(p) · GW(p)

The Longevity Party is grassroots; it emerged from the Russian transhumanist movement and has no powerful sponsor.

Evolution 2045 is the project of a rich guy (Dmitry Itskov) who's a futurist nationalist and a Kurzweil fan. The proposal that everyone should have a chance to upload is part of a broader program which also includes more conventional affirmations of national development, interfaith outreach to show that religion isn't threatened by these ideas, etc. I think it still exists mostly on the level of conferences, networking, and mission statements, and that he hasn't spent much money yet.

I would welcome a better answer from someone in Russia who actually knows the facts on the ground!

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-11-09T01:24:22.394Z · LW(p) · GW(p)

Do we believe Dmitry Itskov is a billionaire? Google shows MSM outlets repeating the claim, but it doesn't look very researched and there are no hits except to the 2045 project.

comment by lukeprog · 2012-11-09T03:15:49.832Z · LW(p) · GW(p)

Note that Itskov spoke at the 2011 Singularity Summit.

comment by Mitchell_Porter · 2012-11-09T02:50:26.623Z · LW(p) · GW(p)

I changed the description to "entrepreneur". He's president of the company that owns some major Russian portals, that much is verifiable.

comment by Shmi (shminux) · 2012-11-09T01:48:18.723Z · LW(p) · GW(p)

Consider taking a course in writing summaries; your point is hard to distill from this stream.

comment by aaronde · 2012-11-08T18:34:49.161Z · LW(p) · GW(p)

I endorse this idea, but have a minor nitpick:

In such a scenario, we could speak of "FIA" - friendly intelligence augmentation. A basic idea of existing FAI discourse is that the true human utility function needs to be determined, and then the values that make an AI human-friendly would be extrapolated from that.

This certainly gets proposed a lot. But isn't it lesswrongian consensus that this is backwards? That the only way to build a FAI is to build an AI that will extrapolate and adopt the humane utility function on its own? (since human values are too complicated for mere humans to state explicitly).

comment by timtyler · 2012-11-08T23:41:08.882Z · LW(p) · GW(p)

Life extension? I don't see how that could possibly keep humans competitive. It seems pretty irrelevant to what is likely to happen - through being too slow and too insignificant.

comment by AlphaOmega · 2012-11-08T22:38:00.965Z · LW(p) · GW(p)

I can conceive of a social and technological order where transhuman power exists, but you may or may not want to live in it. This is a world where there are god-like entities doing wondrous things, and humanity lives in a state of awe and worship at what they have created. To like living in this world would require that you adopt a spirit of religious submission, perhaps not so different from modern-day monotheists who bow five times a day to their god. This may be the best post-Singularity order we can hope for.