Do Earths with slower economic growth have a better chance at FAI?
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-12T19:54:07.143Z · LW · GW · Legacy · 175 comments
I was raised as a good and proper child of the Enlightenment who grew up reading The Incredible Bread Machine and A Step Farther Out, taking for granted that economic growth was a huge in-practice component of human utility (plausibly the majority component if you asked yourself what was the major difference between the 21st century and the Middle Ages) and that the "Small is Beautiful" / "Sustainable Growth" crowds were living in impossible dreamworlds that rejected quantitative thinking in favor of protesting against nuclear power plants.
And so far as I know, such a view would still be an excellent first-order approximation if we were going to carry on into the future by steady technological progress: Economic growth = good.
But suppose my main-line projection is correct and the "probability of an OK outcome" / "astronomical benefit" scenario essentially comes down to a race between Friendly AI and unFriendly AI. So far as I can tell, the most likely reason we wouldn't get Friendly AI is the total serial research depth required to develop and implement a strong-enough theory of stable self-improvement with a possible side order of failing to solve the goal transfer problem. Relative to UFAI, FAI work seems like it would be mathier and more insight-based, where UFAI can more easily cobble together lots of pieces. This means that UFAI parallelizes better than FAI. UFAI also probably benefits from brute-force computing power more than FAI. Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done. I have sometimes thought half-jokingly and half-anthropically that I ought to try to find investment scenarios based on a continued Great Stagnation and an indefinite Great Recession where the whole developed world slowly goes the way of Spain, because these scenarios would account for a majority of surviving Everett branches.
Roughly, it seems to me like higher economic growth speeds up time and this is not a good thing. I wish I had more time, not less, in which to work on FAI; I would prefer worlds in which this research can proceed at a relatively less frenzied pace and still succeed, worlds in which the default timelines to UFAI terminate in 2055 instead of 2035.
I have various cute ideas for things which could improve a country's economic growth. The chance of these things eventuating seems small, the chance that they eventuate because I write about them seems tiny, and they would be good mainly for entertainment, links from econblogs, and possibly marginally impressing some people. I was thinking about collecting them into a post called "The Nice Things We Can't Have" based on my prediction that various forces will block, e.g., the all-robotic all-electric car grid which could be relatively trivial to build using present-day technology - that we are too far into the Great Stagnation and the bureaucratic maturity of developed countries to get nice things anymore. However, I have a certain inhibition against trying things that would make everyone worse off if they actually succeeded, even if the probability of success is tiny. And it's not completely impossible that we'll see some actual experiments with small nation-states in the next few decades, that some of the people doing those experiments will have read Less Wrong, or that successful experiments will spread (if the US ever legalizes robotic cars or tries a city with an all-robotic fleet, it'll be because China or Dubai or New Zealand tried it first). Other EAs (effective altruists) care much more strongly about economic growth directly and are trying to increase it directly. (An extremely understandable position which would typically be taken by good and virtuous people.)
Throwing out remote, contrived scenarios where something accomplishes the opposite of its intended effect is cheap and meaningless (vide "But what if MIRI accomplishes the opposite of its purpose due to blah"), but in this case I feel impelled to ask because my mainline visualization has the Great Stagnation being good news. I certainly wish that economic growth would align with FAI, because then my virtues would align and my optimal policies would have fewer downsides, but I am also aware that wishing does not make something more likely (or less likely) in reality.
To head off some obvious types of bad reasoning in advance: Yes, higher economic growth frees up resources for effective altruism and thereby increases resources going to FAI, but it also increases resources going to the AI field generally which is mostly pushing UFAI, and the problem arguendo is that UFAI parallelizes more easily.
Similarly, a planet with generally higher economic growth might develop intelligence amplification (IA) technology earlier. But this general advancement of science will also accelerate UFAI, so you might just be decreasing the amount of FAI research that gets done before IA and decreasing the amount of time available after IA before UFAI. The same goes for the more mundane idea that increased economic growth will produce more geniuses, some of whom can work on FAI: there'd also be more geniuses working on UFAI, and UFAI probably parallelizes better and requires less serial depth of research. If you concentrate on some single good effect on blah and neglect the corresponding speeding-up of UFAI timelines, you will obviously be able to generate spurious arguments for economic growth having a positive effect on the balance.
So I pose the question: "Is slower economic growth good news?" or "Do you think Everett branches with 4% or 1% RGDP growth have a better chance of getting FAI before UFAI"? So far as I can tell, my current mainline guesses imply, "Everett branches with slower economic growth contain more serial depth of cognitive causality and have more effective time left on the clock before they end due to UFAI, which favors FAI research over UFAI research".
This seems like a good parameter to have a grasp on for any number of reasons, and I can't recall it previously being debated in the x-risk / EA community.
EDIT: To be clear, the idea is not that trying to deliberately slow world economic growth would be a maximally effective use of EA resources and better than current top targets; this seems likely to have very small marginal effects, and many such courses are risky. The question is whether a good and virtuous person ought to avoid, or alternatively seize, any opportunities which come their way to help out on world economic growth.
EDIT 2: Carl Shulman's opinion can be found on the Facebook discussion here.
175 comments
Comments sorted by top scores.
comment by Kawoomba · 2013-06-12T20:39:37.307Z · LW(p) · GW(p)
To stay with the lingo (also, is "arguendo" your new catchphrase?): There are worlds in which slower economic growth is good news, and worlds in which it's not. As to which of these contribute more probability mass, that's hard -- because the actual measure would be technological growth, for which economic growth can be a proxy.
However, I find it hard to weigh scenarios such as "because of stagnant and insufficient growth, more resources are devoted to exploiting the remaining inefficiencies using more advanced tech" versus "the worldwide economic upswing caused a flurry of research activities".
R&D, especially foundational work, is such a small part of worldwide GDP that any old effect can dominate it. For example, a "cold war"-ish scenario between China and the US would slow economic growth -- but strongly speed up research in high-tech dual-use technologies.
While we often think "Google" when we think tech research, we should mostly think DoD in terms of resources spent -- state actors traditionally dwarf even multinational corporations in research investments, and whether their investments are spurned or spurred by a slowdown in growth (depending on the non-specified cause of said slowdown) is anyone's guess.
Replies from: Eliezer_Yudkowsky, Luke_A_Somers, roystgnr, John_Maxwell_IV
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-12T20:47:14.968Z · LW(p) · GW(p)
R&D, especially foundational work, is such a small part of worldwide GDP that any old effect can dominate it.
(Note: I agree with this point.)
↑ comment by Luke_A_Somers · 2013-06-13T14:50:03.194Z · LW(p) · GW(p)
For example, a "cold war"-ish scenario between China and the US would slow economic growth -- but strongly speed up research in high-tech dual-use technologies.
Yes - I think we'd be in much better shape with high growth and total peace than the other way around. Corporations seem rather more likely to be satisfied with tool AI (or at any rate AI with a fixed cognitive algorithm, even if it can learn facts) than, say, a nation at war.
↑ comment by roystgnr · 2013-06-13T15:23:29.212Z · LW(p) · GW(p)
There are worlds in which slower economic growth is good news, and worlds in which it's not.
Indeed. The question of "would X be better" usually is shorthand for "would X be better, all else being equal", and since in this case X is an integrated quantity over basically all human activity it's impossible for all else to be equal. To make the question well defined you have to specify what other influences go into the change in economic growth. Even in the restricted question where we look at various ways that charity and activism might increase economic growth, it looks likely that different charities and different policy changes would have different effects on FAI development.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-06-14T07:21:44.041Z · LW(p) · GW(p)
So how would working to decrease US military spending rank as an effective altruist goal then? I'd guess most pro-economic-growth EAs are also in favor of it.
comment by Mitchell_Porter · 2013-06-13T02:35:14.948Z · LW(p) · GW(p)
Related questions:
1) Do Earths with dumber politicians have a better chance at FAI?
2) Do Earths with anti-intellectual culture have a better chance at FAI?
3) Do Earths with less missionary rationalism have a better chance at FAI?
4) How much time should we spend pondering questions like (1)-(3)?
5) How much time should we spend pondering questions like (4)?
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-13T14:40:56.830Z · LW(p) · GW(p)
1) Do Earths with dumber politicians have a better chance at FAI?
How much dumber? If we can make politicians marginally dumber in a way that slows down economic growth, or better yet decreases science funding while leaving economic growth intact, without this causing any other marginal change in stupid decisions relevant to FAI vs. UFAI, then sure. I can't think of any particular marginal changes I expect because I already expect almost all such decisions to be made incorrectly, but I worry that this is only a failure of imagination on my part - that with even dumber politicians, things could always become unboundedly worse in ways I hadn't even conceived of.
2) Do Earths with anti-intellectual culture have a better chance at FAI?
This seems like essentially the same question as above.
Do Earths with less missionary rationalism have a better chance at FAI?
No. Missionary rationalists are a tiny fraction of world population who contribute most of FAI research and support.
comment by RolfAndreassen · 2013-06-12T20:41:16.506Z · LW(p) · GW(p)
I don't have an answer for the question, but I note that the hypothetical raises the possibility of an anthropic explanation for twenty-first century recessions. So if you believe that the Fed is run by idiots who should have [done X], consider the possibility that in branches where the Fed did in fact [do X], the world now consists of computronium.
I find this especially compelling in light of Japan's two "lost decades" combined with all the robotics research for which Japan is famous. Obviously the anthropic hypothesis requires the most stagnation in nations which are good at robots and AI.
Replies from: Qiaochu_Yuan, None, Larks, CarlShulman
↑ comment by Qiaochu_Yuan · 2013-06-12T20:53:11.827Z · LW(p) · GW(p)
I don't have an answer for the question
I hope we can all agree that in discussions on LW this should by no means be regarded as a bad thing.
↑ comment by [deleted] · 2013-06-12T23:09:38.944Z · LW(p) · GW(p)
Can we put a lid on this conflation of subjective probability with objective quantum branching please? A deterministic fair coin does not split the world, and neither would a deterministic economic cycle. Or are we taking seriously the possibility that the course of the economy is largely driven by quantum randomness?
EDIT: actually I just noticed that small quantum fluctuations from long ago can result in large differences between branches today. At that point I'm confused about what the anthropics implies we should see, so please excuse my overconfidence above.
Replies from: B_For_Bandana, Emile, Jack, Kaj_Sotala, Nisan, RolfAndreassen
↑ comment by B_For_Bandana · 2013-06-12T23:14:43.311Z · LW(p) · GW(p)
Or are we taking seriously the possibility that the course of the economy is largely driven by quantum randomness?
Isn't everything?
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-13T19:20:03.640Z · LW(p) · GW(p)
This comment was banned, which looked to me like a probable accident with a moderator click, so I unbanned it. If I am in error can whichever mod PM me after rebanning it?
Naturally if this was an accident, it must have been a quantum random one.
↑ comment by Emile · 2013-06-13T12:55:21.281Z · LW(p) · GW(p)
Can we put a lid on this conflation of subjective probability with objective quantum branching please? A deterministic fair coin does not split the world, and neither would a deterministic economic cycle. Or are we taking seriously the possibility that the course of the economy is largely driven by quantum randomness?
I'm certainly taking it seriously, and am somewhat surprised that you're not. Some ways small-sized effects (most likely to "depend" on quantum randomness) can eventually have large-scale impacts:
- DNA Mutations
- Which sperm gets to the egg
- The weather
- Soft errors from cosmic rays or thermal radiation
↑ comment by Jack · 2013-06-13T16:30:25.101Z · LW(p) · GW(p)
Whether or not quantum randomness drives the course of the economy it's still a really good idea to stop conflating subjective probability and the corresponding notion of possible worlds with quantum/inflationary/whatever many world theories. Rolf's comment doesn't actually do this: I read him as speaking entirely about the anthropic issue. Eliezer, on the other hand, totally is conflating them in the original post.
I understand that there are reasons to think anthropic issues play an essential role in the assignment of subjective probabilities, especially at a decision theoretic level. But given a) subjective uncertainty over whether or not many-worlds is correct, b) our ignorance of how the Born probability rule figures into the relationship and c) the way anthropics skews anticipated experiences I am really suspicious that anyone here is able to answer the question:
"Do you think Everett branches with 4% or 1% RGDP growth have a better chance of getting FAI before UFAI"?
People are actually answering
"Do you think possible worlds with 4% or 1% RGDP growth have a better chance of getting FAI before UFAI"?
which is not obviously the same thing.
↑ comment by Kaj_Sotala · 2013-06-13T12:30:14.987Z · LW(p) · GW(p)
You don't need quantum many worlds for this kind of speculation: e.g. a spatially infinite universe would also do the trick.
Replies from: Jack
↑ comment by Jack · 2013-06-13T16:47:37.079Z · LW(p) · GW(p)
As I said:
Whether or not quantum randomness drives the course of the economy it's still a really good idea to stop conflating subjective probability and the corresponding notion of possible worlds with quantum/inflationary/whatever many world theories.
↑ comment by RolfAndreassen · 2013-06-12T23:52:51.849Z · LW(p) · GW(p)
My comment was not intended in full seriousness. :)
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2013-06-13T12:35:21.507Z · LW(p) · GW(p)
It is not at all clear to me that this hypothesis shouldn't be taken seriously. It's not clear to me that it should be, either!
↑ comment by Larks · 2013-06-12T21:47:32.780Z · LW(p) · GW(p)
It also explains why the dot com boom had to burst,
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2013-06-13T08:22:40.272Z · LW(p) · GW(p)
why Charles Babbage never built his Analytical Engine,
why Archimedes was killed, and the Antikythera mechanism drowned in the sea,
why most children in our culture hate maths, and why the internet is mostly used for chatting, games, and porn.
Replies from: gjm, Luke_A_Somers
↑ comment by gjm · 2013-06-13T11:02:02.852Z · LW(p) · GW(p)
It unfortunately also explains
- why Alan Turing never published his work on the theory of computation
- why all the projects in the 1950s aimed at making general-purpose computers got cancelled for complex political reasons no one understood
- why that big earthquake killed everyone at the Dartmouth Conference, tragically wiping out almost the entire nascent field of AI
- why all attempts at constructing integrated circuits mysteriously failed
- why progress abruptly stopped following Moore's law in the early 1980s
- why no one has ever been able to make computer systems capable of beating grandmasters at chess, questioning Jeopardy answers, searching huge databases of information, etc.
↑ comment by RolfAndreassen · 2013-06-13T17:22:13.313Z · LW(p) · GW(p)
All of which are true in other possible worlds, which for all we know may have a greater amplitude than ours. That we are alive does not give us any information on how probable we are, because we can't observe the reference class. For all we know, we're one of those worlds that skate very, very close to the edge of disaster, and the two recessions of the aughts are the only things that have kept us alive; but those recessions were actually extremely unlikely, and the "mainline" branches of humanity, the most probable ones, are alive because the Cuban War of 1963 set the economy back to steam and horses. (To be sure, they have their problems, but UFAI isn't among them.)
Note that, if you take many-worlds seriously, then in branches where UFAI is developed, there will still be some probability of survival due to five cosmic rays with exactly the right energies hitting the central CPU at just the right times and places, causing SkyNet to divide by zero instead of three. But the ones who survive due to that event won't be very probable humans. :)
↑ comment by Paul Crowley (ciphergoth) · 2013-06-13T12:34:19.665Z · LW(p) · GW(p)
If most copies of me died in the shooting but I survived, I should expect to find that I survived for only one reason, not for multiple independent reasons. Perhaps the killer's gun jammed at the crucial moment, or perhaps I found a good place to hide, but not both.
Replies from: gjm↑ comment by gjm · 2013-06-13T15:11:05.654Z · LW(p) · GW(p)
On the other hand, if you are being shot at repeatedly and survive a long time, you should expect there to be lots of reasons (or one reason with very broad scope -- maybe everyone's guns were sabotaged in a single operation, or maybe they've been told to let you live, or a god is looking out for you). And it's only in that sort of situation that anthropic "explanations" would be in any way sensible.
It's always true enough to say "well, of course I find myself still alive because if I weren't I wouldn't be contemplating the fact that I'm still alive". But most of the time this is really uninteresting. Perhaps it always is.
The examples given in this thread seem to me to call out for anthropic explanations to much the same extent as does the fact that I'm over 40 years old and not dead yet.
Replies from: khafra
↑ comment by khafra · 2013-06-14T11:17:53.668Z · LW(p) · GW(p)
This just prompted me to try to set a subjective probability that quantum immortality works, so e.g. if I remember concluding that it was 5% likely at 35 and find myself still alive at 95, I will believe in quantum immortality (going by SSA tables).
I'm currently finding this subjective probability too creepy to actually calculate.
Replies from: gjm
↑ comment by gjm · 2013-06-14T12:12:50.967Z · LW(p) · GW(p)
I suggest giving some thought first to exactly what "believing in quantum immortality" really amounts to.
Replies from: khafra
↑ comment by khafra · 2013-06-14T13:22:26.785Z · LW(p) · GW(p)
To me, it means expecting to experience the highest-weighted factorization of the Hamiltonian that contains a conscious instantiation of me, no matter how worse-than-death that branch may be.
Replies from: gjm
↑ comment by gjm · 2013-06-14T14:46:39.844Z · LW(p) · GW(p)
I think you should analyse further. Expecting conditional on still being alive? Surely you expect that even without "quantum immortality". Expecting to find yourself still alive, and experience that? Again, what exactly do you mean by that? What does it mean to expect to find yourself still alive? (Presumably not that others will expect to find you still alive in any useful sense, because with that definition you don't get q.i.)
I expect there are Everett branches in which you live to 120 as a result of a lot of good luck (or, depending on what state you're in, bad luck). Almost equivalently, I expect there's a small but nonzero probability that you live to 120 as a result of a lot of luck. { If you live to 120 / In those branches where you live to 120 } you will probably have experienced a lot of surprising things that enabled your survival. None of this is in any way dependent on quantum mechanics, still less on the many-worlds interpretation.
It seems to me that "believing in quantum immortality" is a matter of one's own values and interpretive choices, much more than of any actual beliefs about how the world is. But I may be missing something.
Replies from: khafra, wedrifid
↑ comment by khafra · 2013-06-14T15:34:42.849Z · LW(p) · GW(p)
I should perhaps be more clear that I'm not distinguishing between "MWI and functionalism are true" and "quantum immortality works." That is, if "I" consciously experience dying, and my consciouness ceases, but "I" go on experiencing things in other everett branches, I'm counting that as QI.
Expecting conditional on still being alive? Surely you expect that even without "quantum immortality"...What does it mean to expect to find yourself still alive?
I'm currently making observations consistent with my own existence. If I stop making that kind of observation, I consider that no longer being alive.
Going again with the example of a 35-year-old: Conditional on having been born, I have a 96% chance of still being alive. So whatever my prior on QI, that's far less than a decibel of evidence in favor of it. Still, ceteris paribus, it's more likely than it was at age 5.
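A back-of-the-envelope version of that decibel claim, under the assumption (implicit in the framing above) that QI predicts observing oneself alive with probability ~1 while the actuarial tables give 0.96 without it:

$$10\log_{10}\frac{\Pr(\text{alive at 35}\mid\text{QI})}{\Pr(\text{alive at 35}\mid\lnot\text{QI})} \approx 10\log_{10}\frac{1}{0.96} \approx 0.18\ \text{dB},$$

comfortably under one decibel, and larger than the corresponding figure at age 5 (if survival to 5 conditional on birth is roughly 0.99, that comes to about 0.04 dB).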
Replies from: gjm
↑ comment by gjm · 2013-06-15T14:36:31.096Z · LW(p) · GW(p)
Sure. But I'm not sure I made the point I was trying to make as clearly as I hoped, so I'll try again.
Imagine two possible worlds. In one of them, QM works basically as currently believed, and the way it does this is exactly as described by MWI. In the other, there is at every time a single kinda-classical-ish state of the world, with Copenhagen-style collapses or something happening as required.
In either universe it is possible that you will find yourself still alive at 120 (or much more) despite having had plenty of opportunities to be killed off by accident, illness, etc. In either universe, the probability of this is very low (which in the former case means most of the measure of where we are now ends up with you dead earlier, and in the latter means whatever exactly probability means in a non-MWI world). In either universe, every observation you make will show yourself alive, however improbable that may seem.
How does observing yourself still alive at 150 count as evidence for MWI, given all that?
What you mustn't say (so it seems to me): "The probability of finding myself alive is very low on collapse theories and high on MWI, so seeing myself still alive at 150 is evidence for MWI over collapse theories". If you mean the probability conditional on you making the observation at age 150, it's 1 in both cases. If you mean the probability not conditional on that, it's tiny in both cases. (Assuming arguendo that Pr(nanotech etc. makes lots of people live to be very old by then) is negligible.) The same applies if you try to go halfway and take the probability simply conditional on you making the observation: MWI or no MWI, only a tiny fraction of observations you make will be at age 150.
Replies from: khafra
↑ comment by khafra · 2013-06-17T19:03:57.383Z · LW(p) · GW(p)
In either universe it is possible that you will find yourself still alive at 120
In the MWI universe, the probability that I will find myself still alive at 120 is near unity. In the objective collapse universe, there's only a small fraction of a percent chance that I'll find myself alive at 120. In the objective collapse universe, every observation I make will show myself alive--but there's only a fraction of a percent of a chance that I'll make an observation that shows my age as 120.
If you mean the probability conditional on you making the observation at age 150, it's 1 in both cases.
The probability of my making the observation "I am 150 years old," given objective collapse, is one of those probabilities so small it's dominated by "stark raving mad" type scenarios. Nobody you've ever known has made that observation; neither has anybody they know. How can this not be evidence?
Replies from: gjm
↑ comment by gjm · 2013-06-17T20:34:53.285Z · LW(p) · GW(p)
What's the observation you're going to make that has probability near-1 on MWI and probabilty near-0 on collapse -- and probability given what?
"I'm alive at 120, here and now" -- that has small probability either way. (On most branches of the wavefunction that include your present self, no version of you gets to say that. Ignoring, as usual, irrelevant details involving positive singularities, very large universes, etc.)
"90 years from now I'll still be alive" (supposing arguendo that you're 30 now) -- that has small probability either way.
"I'm alive at 120, conditional on my still being alive at 120" -- that obviously has probability 1 either way.
"On some branch of the wavefunction I'm still alive at 120" -- sure, that's true on MWI and (more or less by definition) false on a collapse interpretation; but it's not something you can observe. It corresponds exactly to "With nonzero probability I'm still alive at 120", which is true on collapse.
Replies from: khafra
↑ comment by khafra · 2013-06-18T12:50:33.962Z · LW(p) · GW(p)
"90 years from now I'll still be alive" (supposing arguendo that you're 30 now) -- that has small probability either way.
This is the closest one. However, that's not an observation, it's a prediction. The observation is "90 years ago, I was 30." That's an observation that almost certainly won't be made in a collapse-based world; but will be made somewhere in an MWI world.
"I'm alive at 120, here and now" -- that has small probability either way. (On most branches of the wavefunction that include your present self, no version of you gets to say that.)
"small probability either way" only applies if I want to locate myself precisely, within a branch as well as within a possible world. If I only care about locating myself in one possible world or the other, the observation has a large probability in MWI.
↑ comment by Luke_A_Somers · 2013-06-13T15:00:27.098Z · LW(p) · GW(p)
If Charles Babbage had built his Analytical Engine, then that would seem to me to have gotten programming started much earlier, such that FAI work would in turn start much sooner, and so we'd have no hardware overhang to worry about. Imagine if this conversation were taking place with 1970s technology.
↑ comment by CarlShulman · 2013-06-13T02:30:32.589Z · LW(p) · GW(p)
Rolf,
I don't mean to pick on you specifically, but the genre of vague "anthropic filter!" speculations whenever anything is said to be possibly linked even slightly to catastrophe needs to be dialed back on LW. Such speculations almost never a) designate a definite theoretical framework on which that makes sense, or b) make any serious effort to show a non-negligible odds ratio (e.g. more than 1%) on any (even implausible) account of anthropic reasoning.
However, they do invite a lot of nonsense.
comment by KatjaGrace · 2013-06-24T21:10:52.162Z · LW(p) · GW(p)
I responded here.
comment by Wei Dai (Wei_Dai) · 2013-06-13T21:52:45.689Z · LW(p) · GW(p)
So far as I can tell, the most likely reason we wouldn't get Friendly AI is the total serial research depth required to develop and implement a strong-enough theory of stable self-improvement with a possible side order of failing to solve the goal transfer problem.
I'm curious that you seem to think the former problem is harder or less likely to be solved than the latter. I've been thinking the opposite, and one reason is that the latter problem seems more philosophical and the former more technical, and humanity seems to have a lot of technical talent that we can eventually recruit to do FAI research, but much less untapped philosophical talent.
Also as another side note, I don't think we should be focusing purely on the "we come up with a value-stable architecture and then the FAI will make a billion self-modifications within the same general architecture" scenario. Another possibility might be that we don't solve the stable self-improvement problem at all, but instead solve the value transfer problem in a general enough way that the FAI we build immediately creates an entirely new architecture for the next generation FAI and transfers its values to its creation using our solution, and this happens just a few times. (The FAI doesn't try to make a billion self-modifications to itself because, just like us, it knows that it doesn't know how to safely do that.) (Cousin_it made a similar comment earlier.)
In all I can see three arguments for prioritizing the value transfer problem over the stable self-improvement problem: 1) the former seems harder so we need to get started on it earlier; 2) we know the former definitely needs to be solved whereas the latter may not need to be; 3) the former involves work that's less useful for building UFAI.
(On the main topic of the post, I've been assuming, without having put too much thought into it, that slower economic growth is good for eventually getting FAI. Now after reading the discussions here and on Facebook I realize that I haven't put enough thought into it and perhaps should be less certain about it than I was.)
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2013-06-20T00:13:49.780Z · LW(p) · GW(p)
On the main topic of the post, I've been assuming, without having put too much thought into it, that slower economic growth is good for eventually getting FAI.
Here's an attempt to verbalize why I think this, which is a bit different from Eliezer's argument (which I also buy to some extent). First I think UFAI is much easier than FAI and we are putting more resources into the former than the latter. To put this into numbers for clarity, let's say UFAI takes 1000 units of work, and FAI takes 2000 units of work, and we're currently putting 10 units of work into UFAI per year, and only 1 unit of work per year into FAI. If we had a completely stagnant economy, with 0% growth, we'd have 100 years to do something about this, or for something to happen to change this, before it's too late. If the economy was instead growing at 5% per year, and this increased both UFAI and FAI work by 5% per year, the window of time "for something to happen" shrinks to about 35 years. The economic growth might increase the probability per year of "something happening" but it doesn't seem like it would be enough to compensate for the shortened timeline.
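A minimal sketch of this toy model in code, using the illustrative numbers above (the discrete, cumulative-work framing is just one way to cash out the argument):

```python
# Toy model: years until cumulative UFAI work reaches the finish line,
# when the yearly rate of work scales with economic growth.
# All numbers are the illustrative figures above, not estimates.

UFAI_TOTAL = 1000  # units of work needed for UFAI
UFAI_RATE = 10     # units of UFAI work currently done per year

def years_until_ufai(growth_rate):
    """Years until cumulative UFAI work reaches UFAI_TOTAL, assuming the
    yearly rate of work grows at `growth_rate`, compounded annually."""
    done, rate, years = 0.0, float(UFAI_RATE), 0
    while done < UFAI_TOTAL:
        done += rate
        rate *= 1 + growth_rate
        years += 1
    return years

print(years_until_ufai(0.00))  # 100 years of slack with a stagnant economy
print(years_until_ufai(0.05))  # ~37 years if effort grows 5% per year
                               # (~36 with continuous compounding -- "about 35")
```

In this model the FAI side scales up by the same factor, so the 10:1 ratio of effort never changes; what growth shrinks is the calendar time in which something exogenous could change that ratio.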
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-20T00:26:20.023Z · LW(p) · GW(p)
Also: Many likely reasons for something to happen about this center around, in appropriate generality, the rationalist!EA movement. This movement is growing at a higher exponent than current economic growth.
Replies from: owencb
↑ comment by owencb · 2013-08-03T10:40:02.828Z · LW(p) · GW(p)
I think this is the strongest single argument that economic growth might currently be bad. However even then what matters is the elasticity of movement growth rates with economic growth rates. I don't know how we can measure this; I expect it's positive and less than one, but I'm rather more confident about that lower bound than the upper bound.
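To make the elasticity point concrete with purely illustrative numbers (the constant-elasticity form and the 20%/year movement growth rate are assumptions, not anything claimed above): with elasticity $\varepsilon$, moving economic growth from $g_0$ to $g_1$ scales the movement's growth rate roughly as

$$\frac{m_1}{m_0} \approx \left(\frac{g_1}{g_0}\right)^{\varepsilon},$$

so with $\varepsilon = 0.5$, economic growth falling from 4% to 1% would cut a 20%/year movement growth rate to about 10%/year; at $\varepsilon = 1$ it would fall all the way to 5%/year, and at $\varepsilon = 0$ it would be untouched.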
comment by paulfchristiano · 2013-06-14T09:39:31.986Z · LW(p) · GW(p)
This position seems unlikely to me at face value. It relies on a very long list of claims, and given the apparently massive improbability of the conjunction, there is no way this consideration is going to be the biggest impact of economic progress:
- The most important determinant of future welfare is whether you get FAI or UFAI (this presupposes a relatively detailed model of how AI works, of what the danger looks like, etc.)
- This will happen quite soon, and relevant AI work is already underway.
- The main determinant of FAI vs. UFAI is whether an appropriate theoretical framework for goal-stability is in place.
- As compared to UFAI work, the main difficulty for developing such a framework for goal-stability is the serial depth of the problem.
- A 1% boost in economic activity this year has a non-negligible effect on the degree of parallelization of relevant AI work.
I don't see how you can defend giving any of those points more than 1/2 probability, and I would give the conjunction less than 1% probability. Moreover, even in this scenario, the negative effect from economic progress is quite small. (Perhaps a 1% increase in sustained economic productivity makes the future 0.1% worse if this story is true, and each year of a 1% increase in economic productivity makes the future 0.002% worse?)
So on balance it seems to me like this would say that a 1% increase in economic productivity would make the future 0.00002% worse? That is a ridiculously tiny effect; even if you couldn't see any particular reason that economic progress helped or hurt I think your prior should expect other effects to dominate, and you can use other considerations to get a handle on the sign. I guess you think that this is an underestimate, but I would be interested to know where you disagree.
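Spelling out the arithmetic behind that last figure (taking the numbers above at face value): a roughly 1% probability that the whole conjunction holds, times a 0.002% loss per year of 1% faster growth conditional on it holding, gives

$$0.01 \times 0.002\% = 0.00002\%.$$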
I would guess that the positive effect from decreasing the cumulative risk of war alone are several orders of magnitude higher than that. I know you think that a world war that killed nearly everyone might be positive rather than negative, but in that case slowing down economic progress would still have positive effects that are orders of magnitude larger than the effect on FAI parallelization.
Even without doing any calculations, it is extraordinarily hard to imagine that the difference between "world at war" and "world at peace" is less than the difference between "world with slightly more parallelization in AI work" and "world with slightly less parallelization;" almost everyone would disagree with you, and as far as I can tell their reasons seem better than yours. Similarly, it is hard to imagine that the interaction between generational turnover and economic activity wouldn't swamp this by several orders of magnitude.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-14T14:40:10.797Z · LW(p) · GW(p)
General remark: At some point I need to write a post about how I'm worried that there's an "unpacking fallacy" or "conjunction fallacy fallacy" practiced by people who have heard about the conjunction fallacy but don't realize how easy it is to take any event, including events which have already happened, and make it look very improbable by turning one pathway to it into a large series of conjunctions. E.g. I could produce a long list of things which allegedly have to happen for a moon landing to occur, some of which turned out to not be necessary but would look plausible if added to the list ante facto, with no disjunctive paths to the same destination, and thereby make it look impossible. Generally this manifests when somebody writes a list of alleged conjunctive necessities, and I look over the list and some of the items seem unnecessary (my model doesn't go through them at all), obvious disjunctive paths have been omitted, the person has assigned sub-50% probability to things that I see as mainline 90% probabilities, and conditional probabilities when you assume the theory was right about 1-N would be significantly higher for N+1. Most of all, if you imagine taking the negation of the assertion and unpacking it into a long list of conjunctive probabilities, it would look worse - there should be a name for the problem of showing that X has weak arguments but not considering that ~X has even weaker arguments. Or on a meta level, since it is very easy to make things look more conjunctive, we should perhaps not be prejudiced against things which somebody has helpfully unpacked for us into a big conjunction, when the core argument still seems pretty simple on some level.
When I look over this list, my reaction is that:
(1) is a mainline assumption with odds of 5:1 or 10:1 - of course future intergalactic civilization bottlenecks through the goals of a self-improving agency, how would you get to an intergalactic civilization without that happening? If this accounts for much of our disagreement then we're thinking about entirely different scenarios, and I'm not sure how to update from your beliefs about mostly scenario B to my beliefs about mostly scenario A. It makes more sense to call (1) into question if we're really asking about global vs. local, but then we get into the issue of whether global scenarios are mostly automatic losses anyway. If (1) is really about whether we should be taking into account a big chunk of survivable global scenarios then this goes back to a previous persistent disagreement.
(2) I don't see the relevance - why does a long time horizon vs. a short time horizon matter? 80 years would not make me relax and say that we had enough serial depth, though it would certainly be good news ceteris paribus, there's no obvious threshold to cross.
Listing (3) and (4) as separate items was what originally made my brain shout "unpacking fallacy!" There are several subproblems involved in FAI vs. UFAI, of which the two obvious top items are the entire system being conducive to goal stability through self-improvement which may require deducing global properties to which all subsystems must be conducive, and the goal loading problem. These both seem insight-heavy which will require serial time to solve. The key hypothesis is just that there are insight-heavy problems in FAI which don't parallelize well relative to the wide space of cobbled-together designs which might succeed for UFAI. Odds here are less extreme than for (1) but still in the range of 2:1-4:1. The combined 3-4 issue is the main weak point, but the case for "FAI would parallelize better than UFAI" is even weaker.
(5) makes no sense to ask as a conditionally independent question separate from (1); if (1) is true then the only astronomical effects of modern-day economic growth are whatever effects that growth has on AI work, and to determine if economic growth is qualitatively good or bad, we ask about the sign of the effect neglecting its magnitude. I suppose if the effect were trivial enough then we could just increase the planet's growth rate by 5% for sheer fun and giggles and it would have no effect on AI work, but this seems very unlikely; a wealthier planet will ceteris paribus have more AI researchers. Odds of 10:1 or better.
On net, this says that in my visualization the big question is just "Does UFAI parallelize better than FAI, or does FAI parallelize better than UFAI?" and we find that the case for the second clause is weaker than the first; or equivalently "Does UFAI inherently require serial time more than FAI requires serial time?" is weaker than "Does FAI inherently require serial time more than UFAI requires serial time?" This seems like a reasonable epistemic state to me.
The resulting shove at the balance of the sign of the effect of economic growth would have to be counterbalanced by some sort of stronger shove in the direction of modern-day economic growth having astronomical benefits. And the case for e.g. "More econ growth means friendlier international relations and so they endorse ideal Y which leads them to agree with me on policy Z" seems even more implausible when unpacked into a series of conjunctions. Lots of wealthy people and relatively friendly nations right now are not endorsing policy Z.
To summarize and simplify the whole idea, the notion is:
Right now my estimate of the sign of the astronomical effect of modern-day economic growth is dominated by a 2-node conjunction of, "Modern-day econ growth has a positive effect on resources into both FAI and UFAI" and "The case for FAI parallelizing better than UFAI is weaker than the converse case". For this to be not true requires mainly that somebody else demonstrate an effect or set of effects in the opposite direction which has better net properties after its own conjunctions are taken into account. The main weakness in the argument and lingering hope that econ growth is good, isn't that the original argument is very conjunctive, but rather it's that faster econ growth seems like it should have a bunch of nice effects on nice things and so the disjunction of other econ effects might conceivably swing the sign the other way. But it would be nice to have at least one plausible such good effect without dropping our standards so low that we could as easily list a dozen equally (im)plausible bad effects.
Even without doing any calculations, it is extraordinarily hard to imagine that the difference between "world at war" and "world at peace" is less than the difference between "world with slightly more parallelization in AI work" and "world with slightly less parallelization;"
With small enough values of 'slightly' obviously the former will have a greater effect, the question is the sign of that effect; also it's not obvious to me that moderately lower amounts of econ growth lead to world wars, and war seems qualitatively different in many respects from poverty. I also have to ask if you are possibly maybe being distracted by the travails of one planet as a terminal value, rather than considering that planet's instrumental role in future galaxies.
Replies from: lukeprog, paulfchristiano, ModusPonies
↑ comment by lukeprog · 2013-10-04T02:14:56.831Z · LW(p) · GW(p)
At some point I need to write a post about how I'm worried that there's an "unpacking fallacy" or "conjunction fallacy fallacy" practiced by people who have heard about the conjunction fallacy but don't realize how easy it is to take any event, including events which have already happened, and make it look very improbable by turning one pathway to it into a large series of conjunctions.
Related: There's a small literature on what Tversky called "support theory," which discusses packing and unpacking effects: Tversky & Koehler (1994); Ayton (1997); Rottenstreich & Tversky (1997); Macchi et al. (1997); Fox & Tversky (1998); Brenner & Koehler (1999); Chen et al. (2001); Boven & Epley (2003); Brenner et al. (2005); Bilgin & Brenner (2008).
Replies from: ChrisHallquist, lukeprog
↑ comment by ChrisHallquist · 2013-10-18T08:33:47.651Z · LW(p) · GW(p)
Luke asked me to look into this literature for a few hours. Here's what I found.
The original paper (Tversky and Koehler 1994) is about disjunctions, and how unpacking them raises people's estimate of the probability. So for example, asking people to estimate the probability someone died of "heart disease, cancer, or other natural causes" yields a higher probability estimate than if you just ask about "natural causes."
They consider the hypothesis that this might be because people take the researcher's apparent emphasis as evidence that it's more likely, but they tested & disconfirmed this hypothesis by telling people to take the last digit of their phone number and estimate the percentage of couples that have that many children. Summed across digits, the percentages come to more than 100%.
Finally, they check whether experts are vulnerable to this bias by doing an experiment similar to the first experiment, but using physicians at Stanford University as the subjects and asking them about a hypothetical case of a woman admitted to an emergency room. They confirmed that yes, experts are vulnerable to this mistake too.
This phenomenon is known as "subadditivity." A subsequent study (Rottenstreich and Tversky 1997) found that subadditivity can even occur when dealing with explicit conjunctions. Macchi et al. (1999) found evidence of superadditivity: ask some people how probable it is that the freezing point of alcohol is below that of gasoline, and other people how probable it is that the freezing point of gasoline is below that of alcohol; the average answers sum to less than 1.
Other studies try to refine the mathematical model of how people make judgements in these kinds of cases, but the experiments I've described are the most striking empirical results, I think. One experiment that talks about unpacking conjunctions (rather than disjunctions, like the experiments I've described so far) is Boven and Epley (2003), particularly their first experiment, where they ask people how much an oil refinery should be punished for pollution. This pollution is described either as leading to an increase in "asthma, lung cancer, throat cancer, or all varieties of respiratory diseases," or just as leading to an increase in "all varieties of respiratory diseases." In the first condition, people want to punish the refinery more. But, in spite of being notably different from previous unpacking experiments, it's still not what Eliezer was talking about.
Below are some other messy notes I took:
http://commonsenseatheism.com/wp-content/uploads/2013/10/Fox-Tversky-A-belief-based-account-of-decision-under-uncertainty.pdf Uses support theory to develop account of decision under uncertainty.
http://commonsenseatheism.com/wp-content/uploads/2013/10/Brenner-Koehler-Subjective-probability-of-disjunctive-hypotheses-local-weight-models-for-decomposition-and-evidential-support.pdf Something about local weights; didn't look at this one much.
http://commonsenseatheism.com/wp-content/uploads/2013/10/Chen-et-al-The-relation-between-probability-and-evidence-judgment-an-extension-of-support-theory.pdf Tweaking math behind support theory to allow for superadditivity.
http://commonsenseatheism.com/wp-content/uploads/2013/10/Brenner-et-al-Modeling-patterns-of-probability-calibration-with-random-support-theory.pdf Introduces notion of random support theory.
http://bear.warrington.ufl.edu/brenner/papers/bilgin-brenner-jesp08.pdf Unpacking effects weaker when dealing with near future as opposed to far future.
Other articles debating how to explain basic support theory results: http://bcs.siu.edu/facultypages/young/JDMStuff/Sloman%20(2004)%20unpacking.pdf http://aris.ss.uci.edu/~lnarens/Submitted/problattice11.pdf http://eclectic.ss.uci.edu/~drwhite/pw/NarensNewfound.pdf
Replies from: Nick_Beckstead
↑ comment by Nick_Beckstead · 2014-10-29T16:41:58.742Z · LW(p) · GW(p)
What this shows is that people are inconsistent in a certain way. If you ask them the same question in two different ways (packed vs. unpacked) you get different answers. Is there any indication of which is the better way to ask the question, or whether asking it some other way is better still? Without an answer to this question, it's unclear to me whether we should talk about an "unpacking fallacy" or a "failure to unpack fallacy".
↑ comment by lukeprog · 2014-01-12T00:43:17.441Z · LW(p) · GW(p)
Here's a handy example discussion of related conjunction issues from the Project Cyclops report:
We have outlined the development of technologically competent life on Earth as a succession of steps to each of [which] we must assign an a priori probability less than unity. The probability of the entire sequence occurring is the product of the individual (conditional) probabilities. As we study the chain of events in greater detail we may become aware of more and more apparently independent or only slightly correlated steps. As this happens, the a priori probability of the entire sequence approaches zero, and we are apt to conclude that, although life indeed exists here, the probability of its occurrence elsewhere is vanishingly small.
The trouble with this reasoning is that it neglects alternate routes that converge to the same (or almost the same) end result. We are reminded of the old proof that everyone has only an infinitesimal chance of existing. One must assign a fairly small probability to one's parents and all one's grandparents and (great)^n-grandparents having met and mated. Also one must assign a probability on the order of 2^-46 to the exact pairing of chromosomes arising from any particular mating. When the probabilities of all these independent events that led to a particular person are multiplied, the result quickly approaches zero. This is all true. Yet here we all are. The [explanation] is that, if an entirely different set of matings and fertilizations had occurred, none of "us" would exist, but a statistically indistinguishable generation would have been born, and life would have gone on much the same.
↑ comment by paulfchristiano · 2013-06-17T11:05:59.008Z · LW(p) · GW(p)
Regarding the "unpacking fallacy": I don't think you've pointed to a fallacy here. You have pointed to a particular causal pathway which seems to be quite specific, and I've claimed that this particular causal pathway has a tiny expected effect by virtue of its unlikeliness. The negation of this sequence of events simply can't be unpacked as a conjunction in any natural way; it really is fundamentally a disjunction. You might point out that the competing arguments are weak, but they can be much stronger in the cases where they aren't predicated on detailed stories about the future.
As you say, even events that actually happened can also be made to look quite unlikely. But those events were, for the most part, unlikely ex ante. This is like saying "This argument can suggest that any lottery number probably wouldn't win the lottery, even the lottery numbers that actually won!"
If you had a track record of successful predictions, or if anyone who embraced this view had a track record of successful predictions, maybe you could say "all of these successful predictions could be unpacked, so you shouldn't be so skeptical of unpackable arguments." But I don't know of anyone with a reasonably good predictive record who takes this view, and most smart people seem to find it ridiculous.
I don't understand your argument here. Yes, future civilization builds AI. It doesn't follow that the value of the future is first determined by what type of AI they build (they also build nanotech, but the value of the future isn't determined by the type of nanotech they build, and you haven't offered a substantial argument that discriminates between the cases). There could be any number of important events beforehand or afterwards; there could be any number of other important characteristics surrounding how they build AI which influence whether the outcome is positive or negative.
Do you think the main effects of economic progress in 1600 were on the degree of parallelization in AI work? 1800? The magnitude of the direct effects of economic progress on AI work depends on how close the economic progress is to the AI work; as the time involved gets larger, indirect effects come to dominate.
You have a specific view, that there is a set of problems which need to be solved in order to make AI friendly, and that these problems have some kind of principled relationship to the problems that seem important to you now. This is as opposed to e.g. "there are two random approaches to AI, one of which leads to good outcomes and one of which leads to bad outcomes," or "there are many approaches to AI, and you have to think about it in advance to figure out which lead to good outcomes" or "there is a specific problem that you can't have solved by the time you get to AI if you want to have a positive outcome" or an incredible variety of alternative models. The "parallelization is bad" argument doesn't apply to most of these models, and in some you have "parallelization is good."
Even granting that your picture of AI vs. FAI is correct, and there are these particular theoretical problems that need to be solved, it is completely unclear that more people working in the field makes things worse. I don't know why you think this follows from 3 or can be sensibly lumped with 3, and you don't provide an argument. Suppose I said "The most important thing about dam safety is whether you have a good theoretical understanding the dam before building it" and you said "Yes, and if you increase the number of people working on the dam you are less likely to understand it by the time it gets built, because someone will stumble across an ad hoc way to build a dam." This seems ridiculous both a priori and based on the empirical evidence. There are many possible models for the way that important problems in AI get solved, and you seem to be assuming a particular one.
Suppose that I airdrop in a million knowledge workers this year and they leave next year, corresponding to an exogenous boost in productivity this year. You are claiming that this obviously increases the degree of parallelization on relevant AI work. This isn't obvious, unless a big part of the relevant work is being done today (which seems unlikely, casually?)
I agree that I've only argued that your argument has a tiny impact; it could still dominate if there was literally nothing else going on. But even granting 1-5 there seem to be other big effects from economic growth.
The case in favor of growth seems to be pretty straightforward; I linked to a blog post in the last comment. Let me try to make the point more clearly:
Increasing economic activity speeds up a lot of things. Speeding up everything is neutral, so the important point is the difference between what it speeds up and what it doesn't speed up. Most things people are actually trying to do get sped up, while a bunch of random things (aging and disease, natural disasters, mood changes) don't get sped up. Lots of other things get sped up but significantly less than 1-for-1, because they have some inputs that get sped up and some that don't (accidents of all kinds, conflicts of all kinds, resource depletion). Given that things people are trying to do get sped up, and the things that happen which they aren't trying to do get sped up less, we should expect the effect to be to positive, as long as people are trying to do good things.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-17T14:33:31.978Z · LW(p) · GW(p)
What's a specific relevant example of something people are trying to speed up / not speed up besides AGI (= UFAI) and FAI? You pick out aging, disease, and natural disasters as not-sped-up but these seem very loosely coupled to astronomical benefits.
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2013-06-17T15:44:52.751Z · LW(p) · GW(p)
Increasing capital stocks, improving manufacturing, improving education, improving methodologies for discourse, figuring out important considerations. Making charity more efficient, ending poverty. Improving collective decision-making and governance. All of the social sciences. All of the hard sciences. Math and philosophy and computer science. Everything that everyone is working on, everywhere in the world.
I picked out conflict, accidents, and resource depletion as not being sped up 1-for-1, i.e. such that a 1% boost in economic activity corresponds to a <1% boost in those processes. Most people would say that war and accidents account for many bad things that happen. War is basically defined by people making decisions that are unusually misaligned with aggregate welfare. Accidents are basically defined by people not getting what they want. I could have lumped in terrorism, and then accounted for basically all of the ways that we can see things going really badly in the present day.
You have a particular story about how a bad thing might happen in the future. Maybe that's enough to conclude the future will be entirely unlike the present. But it seems like (1) that's a really brittle way to reason, however much you want to accuse its detractors of the "unpacking fallacy," and most smart people take this view, and (2) even granting almost all of your assumptions, it's pretty easy to think of scenarios where war, terrorism, or accidents are inputs into AI going badly, or where better education, more social stability, or better decision-making are inputs into AI going well. People promoting these positive changes are also working against forces that wouldn't be accelerated, like people growing old and dying and thereby throwing away their accumulated human capital, or infrastructure being stressed to keep people alive, etc. etc.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-17T16:08:00.906Z · LW(p) · GW(p)
Increasing capital stocks, improving manufacturing, improving education, improving methodologies for discourse, figuring out important considerations. Making charity more efficient, ending poverty. Improving collective decision-making and governance. All of the social sciences. All of the hard sciences. Math and philosophy and computer science. Everything that everyone is working on, everywhere in the world.
How is an increased capital stock supposed to improve our x-risk / astronomical benefit profile except by being an input into something else? Yes, computer science benefits, that's putatively the problem. We need certain types of math for FAI but does math benefit more from increased capital stocks compared to, say, computing power? Which of these other things are supposed to save the world faster than computer science destroys it, and how? How the heck would terrorism be a plausible input into AI going badly? Terrorists are not going to be the most-funded organizations with the smartest researchers working on AGI (= UFAI) as opposed to MIT, Google or Goldman Sachs.
Does your argument primarily reduce to "If there's no local FOOM then economic growth is a good thing, and I believe much less than you do in local FOOM"? Or do you also think that in local FOOM scenarios higher economic growth now expectedly results in a better local FOOM? And if so is there at least one plausible specific scenario that we can sketch out now for how that works, as opposed to general hopes that a higher economic growth exponent has vague nice effects which will outweigh the shortening of time until the local FOOM with a correspondingly reduced opportunity to get FAI research done in time? When you sketch out a specific scenario, this makes it possible to point out fragile links which conjunctively decrease the probability of that scenario, and often these fragile links generalize, which is why it's a bad idea to keep things vague and not sketch out any concrete scenarios for fear of the conjunction fallacy.
It seems to me that a lot of your reply, going by the mention of things like terrorism and poverty, must be either prioritizing near-term benefits over the astronomical future, or else being predicated on a very different model from local FOOM. We already have a known persistent disagreement on local FOOM. This is an important modular part of the disagreement on which other MIRIfolk do not all line up on one side or another. Thus I would like to know how much we disagree about the expected goodness of higher econ growth exponents given local FOOM, and whether there's a big leftover factor where "Paul Christiano thinks you're just being silly even assuming that a FOOM is local", especially if this factor is not further traceable to a persistent disagreement about the competence of elites. It would then be helpful to sketch out a concrete scenario corresponding to this disagreement to see if it looks even more fragile and conjunctive.
(Note that e.g. Wei Dai also thought it was obviously true that faster econ growth exponents had a negative-sign effect on FAI, though, like me, this debate made him question (but not yet reject) the 'obvious' conclusion.)
Replies from: ESRogs, paulfchristiano↑ comment by ESRogs · 2013-06-18T23:13:36.024Z · LW(p) · GW(p)
(Note that e.g. Wei Dai also thought it was obviously true that faster econ growth exponents had a negative-sign effect on FAI, though, like me, this debate made him question (but not yet reject) the 'obvious' conclusion.)
I'm confused by the logic of this sentence (in particular how the 'though' and 'like me' fit together). Are you saying that you and Wei both at first accepted that faster econ growth meant less chance of FAI, but then were both caused to doubt this conclusion by the fact that others debated the claim?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-18T23:37:56.968Z · LW(p) · GW(p)
Yep.
Replies from: ESRogs↑ comment by paulfchristiano · 2013-06-17T17:01:53.308Z · LW(p) · GW(p)
Even given a very fast local foom (to which I do assign a pretty small probability, especially as we make the situation more detailed and conclude that fewer things are relevant), I would still expect higher education and better discourse to improve the probability that people handle the situation well. It's weird to cash this out as a concrete scenario, because that just doesn't seem like how reasonable reasoning works.
But trying anyway: someone is deciding whether to run an AI or delay, and they correctly choose to delay. Someone is arguing that research direction X is safer than research direction Y, and others are more likely to respond selectively to correct arguments. Someone is more likely to notice there is a problem with a particular approach and they should do something differently, etc. etc.
Similarly, I expect war or external stressors to make things worse, but it seems silly to try and break this down as very specific situations. In general, people are making decisions about what to do, and if they have big alternative motivations (like winning a war, or avoiding social collapse, or what have you), I expect them to make decisions that are less aligned with aggregate welfare. They choose to run a less safe AI, they pursue a research direction that is less safe, etc. Similarly, I expect competent behavior by policy-makers to improve the situation across a broad distribution of scenarios, and I think that is less likely given other pressing issues. We nationalize AI projects, we effectively encourage coordination of AI researchers, we fund more safety-conscious research, etc. Similarly, I expect that an improved understanding of forecasting and decision-making would improve outcomes, and improved understanding of social sciences would play a small role in this. And so on.
But at any rate, my main question is how you can be so confident of local foom that you think this tiny effect given local foom scenarios dominates the effect given business as usual? I don't understand where you are coming from there. The secondary objection is to your epistemic framework. I have no idea how you would have thought about the future if you lived in 1800 or even 1900; it seems almost certain that this framework reasoning would have led you to crazy conclusions, and I'm afraid that the same thing is true in 2000. You just shouldn't expect to be able to think of detailed situations that determine the whole value of the universe, unless you are in an anomalous situation, but that doesn't mean that your actions have no effect and that you should condition on being in an anomalous situation.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-17T17:10:51.611Z · LW(p) · GW(p)
Even given a very fast local foom (to which I do assign a pretty small probability, especially as we make the situation more detailed and conclude that fewer things are relevant), I would still expect higher education and better discourse to improve the probability that people handle the situation well. It's weird to cash this out as a concrete scenario, because that just doesn't seem like how reasonable reasoning works.
But trying anyway: someone is deciding whether to run an AI or delay, and they correctly choose to delay. Someone is arguing that research direction X is safer than research direction Y, and others are more likely to respond selectively to correct arguments. Someone is more likely to notice there is a problem with a particular approach and they should do something differently, etc. etc.
How did this happen as a result of economic growth having a marginally greater exponent? Doesn't that just take us to this point faster and give less time for serial thought, less time for deep theories, less time for the EA movement to spread faster than the exponent on economic growth, etcetera? This decision would ceteris paribus need to be made at some particular cumulative level of scientific development, which will involve relatively more parallel work and relatively less serial work if the exponent of econ growth is higher. How does that help it be made correctly?
Exposing (and potentially answering) questions like this is very much the point of making the scenario concrete, and I have always held rather firmly on meta-level epistemic grounds that visualizing things out concretely is almost always a good idea in math, science, futurology, and elsewhere. You don't have to make all your predictions based on that example, but you have to generate at least one concrete example and question it. I have espoused this principle widely and held to it myself in many cases apart from this particular dispute.
But at any rate, my main question is how you can be so confident of local foom that you think this tiny effect given local foom scenarios dominates the effect given business as usual?
Procedurally, we're not likely to resolve that particular persistent disagreement in this comment thread which is why I want to factor it out.
My secondary objection is to your epistemic framework. I have no idea how you would have thought about the future if you lived in 1800 or even 1900; it seems almost certain that this framework reasoning would have led you to crazy conclusions, and I'm afraid that the same thing is true in 2000.
I could make analogies about smart-people-will-then-decide and don't-worry-the-elite-wouldn't-be-that-stupid reasoning to various historical projections that failed, but I don't think we can get very much mileage out of nonspecifically arguing which of us would have been more wrong about 2000 if we had tried to project it out while living in 1800. I mean, obviously a major reason I don't trust your style of reasoning is that I think it wouldn't have worked historically, not that I think your reasoning mode would have worked well historically but I've decided to reject it because I'm stubborn. (If I were to be more specific, when I listen to your projections of future events they don't sound very much like recollections of past events as I have read about them in history books, where jaw-dropping stupidity usually plays a much stronger role.)
I think an important thing to keep in mind throughout is that we're not asking whether this present world would be stronger and wiser if it were economically poorer. I think it's much better to frame the question as whether we would be in a marginally better or worse position with respect to FAI today if we had the present level of economic development but the past century from 1913-2013 had taken ten fewer years to get there so that the current date were 2003. This seems a lot more subtle.
Replies from: jkaufman, paulfchristiano↑ comment by jefftk (jkaufman) · 2013-06-18T06:22:21.352Z · LW(p) · GW(p)
past events as I have read about them in history books, where jaw-dropping stupidity usually plays a much stronger role.
How sure are you that this isn't hindsight bias, that if various involved historical figures had been smarter they would have understood the situation and not done things that look unbelievably stupid looking back?
Do you have particular historical events in mind?
↑ comment by paulfchristiano · 2013-06-17T19:24:37.696Z · LW(p) · GW(p)
We are discussing the relative value of two different things: the stuff people do intentionally (and the byproducts thereof), and everything else.
In the case of the negative scenarios I outlined this is hopefully clear: wars aren't sped up 1-for-1, so there will be fewer wars between here and any relevant technological milestones. And similarly for other stressors, etc.
Regarding education: Suppose you made everything 1% more efficient. The amount of education a person gets over their life is 1% higher (because you didn't increase the pace of aging / turnover between people, which is the thing people were struggling against, and so people do better at getting what they want).
Other cases seem to be similar: some things are a wash, but more things get better than worse, because systematically people are pushing on the positive direction.
Procedurally, we're not likely to resolve that particular persistent disagreement in this comment thread which is why I want to factor it out.
This discussion was useful for getting a more precise sense of what exactly it is you assign high probability to.
Replies from: lukeprog↑ comment by lukeprog · 2013-06-21T03:13:40.372Z · LW(p) · GW(p)
I wish you two had the time for a full-blown adversarial collaboration on this topic, or perhaps on some sub-problem within the topic, with Carl Shulman as moderator.
↑ comment by ModusPonies · 2013-06-17T17:30:35.316Z · LW(p) · GW(p)
At some point I need to write a post about how I'm worried that there's an "unpacking fallacy" or "conjunction fallacy fallacy" practiced by people who have heard about the conjunction fallacy...
Please do this. I really, really want to read that post. Also I think writing it would save you time, since you could then link to it instead of re-explaining it in comments. (I think this is the third time I've seen you say something about that post, and I don't read everything you write.)
If there's anything I can do to help make this happen (such as digging through your old comments for previous explanations of this point, copyediting, or collecting a petition of people who want to see the post to provide motivation), please please please let me know.
Replies from: jkaufman↑ comment by jefftk (jkaufman) · 2013-06-18T06:12:29.902Z · LW(p) · GW(p)
If there's anything I can do to help make this happen (such as digging through your old comments for previous explanations of this point, copyediting, or collecting a petition of people who want to see the post to provide motivation), please please please let me know.
My experience has been that asking people "let me know if I can help" doesn't result in requests for help. I'd suggest just going ahead and compiling a list of relevant comments (like this one) and sending them along.
(If Eliezer doesn't end up writing the post, well, you now have a bunch of comments you could use to get started on a post yourself.)
comment by lukeprog · 2013-06-12T20:20:02.720Z · LW(p) · GW(p)
The "normal view" is expressed by GiveWell here. Eliezer's post above can be seen as a counterpoint to that. GiveWell does acknowledge that "One of the most compelling cases for a way in which development and technology can cause harm revolves around global catastrophic risks..."
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-13T19:18:22.948Z · LW(p) · GW(p)
Some Facebook discussion here including Carl's opinion:
https://www.facebook.com/yudkowsky/posts/10151665252179228
Replies from: lukeprog, NancyLebovitz↑ comment by lukeprog · 2013-09-27T02:15:27.259Z · LW(p) · GW(p)
I'm reposting Carl's Facebook comments to LW, for convenience. Carl's comments were:
Economic growth makes the world more peaceful and cooperative in multiple ways, reducing the temptation to take big risks with dangerous technologies to get ahead, risk of arms races, mistrust stopping international coordination, the chance of a nuclear war brutalizing society, and others. Link 1
Economic growth also makes people care more about long-term problems like global warming, and be more inclusive of and friendly towards foreigners and other neglected groups. Link 2
Then there's the fact that Moore's law is much faster than economic growth, and software spending is also growing as a share of the economy. So an overall stagnant economy does not mean stagnant technology.
Plus the model of serial-intensive FAI action you are using to drive the benefits of moving early relies on a lot of extreme predictions relative to the distribution of expert opinion, without a good predictive track record to back them up, and a plausible bias explanation. Otherwise, in the likely event that it is not dispositive, other factors predominate. It's too small relative to all the other things that get affected.
[So] generally I think a uniform worldwide increase in per capita incomes improves global odds of good long-run futures.
Eliezer replied to Carl:
The most powerful mechanisms for this, in your model, are that (a) wealth transmits to international cooperation which improves FAI vs. UFAI somehow, and (b) wealth transmits to concern about global tidiness which you think successfully transmits more to FAI vs. UFAI? Neither of these forces have very much effectual power at all in my visualization - I wouldn't mention them in the same breath as Moore's Law or total funding for AI. They're both double-fragile mechanisms.
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-09-29T02:12:52.899Z · LW(p) · GW(p)
It's worth noting that the relationship between economic growth and the expected quality of global outcomes is not necessarily a linear one. The optimal speed of economic growth may be neither super-slow nor super-fast, but some "just right" value in between that makes peace, cooperation, and long-term thinking commonplace while avoiding technological advancement substantially faster than what we see today.
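A toy way to picture that nonlinearity (the functional forms below are invented purely for illustration, not a claim about real elasticities): if cooperation-type benefits rise and saturate with the growth rate while the remaining preparation time before risky AI falls, their product can peak at an interior "just right" value.

```python
# Illustrative inverted-U sketch (made-up functional forms): benefits of
# prosperity rise and saturate with the growth rate, while the preparation
# time before risky AI arrives falls.

import math

def p_good_outcome(growth_rate):
    cooperation = 1 - math.exp(-2.0 * growth_rate)   # rises, then saturates
    prep_time = 1.0 / (1.0 + 3.0 * growth_rate)      # shrinks as AI arrives sooner
    return cooperation * prep_time

rates = [i / 20 for i in range(1, 21)]               # growth rates 0.05 .. 1.00
best = max(rates, key=p_good_outcome)
print(f"interior optimum near growth rate {best:.2f}, "
      f"score {p_good_outcome(best):.3f}")
```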
↑ comment by NancyLebovitz · 2013-06-14T01:24:32.044Z · LW(p) · GW(p)
The possibility of AI being invented to deal with climate change hadn't occurred to me, but now that it's mentioned, it doesn't seem impossible, especially if climate engineering is on the agenda.
Any thoughts about whether climate is a sufficiently hard problem to inspire work on AIs?
Replies from: shminux↑ comment by Shmi (shminux) · 2013-06-14T01:43:13.469Z · LW(p) · GW(p)
Any thoughts about whether climate is a sufficiently hard problem to inspire work on AIs?
Climate seems far easier. At least it's known what causes climate change, more or less. No one knows what it would take to make an AGI.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-06-14T02:06:41.513Z · LW(p) · GW(p)
I didn't mean work on climate change might specifically be useful for developing an AI, I meant that people might develop AI to work on weather/climate prediction.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-06-14T02:32:04.579Z · LW(p) · GW(p)
Right, and my reply was that AGI is much harder, so unlikely. Sorry about not being clear.
comment by NancyLebovitz · 2013-06-13T09:11:12.318Z · LW(p) · GW(p)
Any thoughts about what sort of society optimizes for insight into difficult problems?
Replies from: D_Alex↑ comment by D_Alex · 2013-06-14T08:14:03.417Z · LW(p) · GW(p)
I have a few thoughts.... Naturally first question is what does "optimise for insight" mean.
- A society which values leisure and prosperity, e.g. the current Scandinavians...? Evidence: they punch well above their weight economically and produce world-class stuff (Volvo, Nokia, Ericsson, Bang&Olufsen spring to mind), but the working pace, from my experience, could be described as "leisurely". Possibly the best "insights/manhour" ratio.
- A society which values education, but which somehow ended up with a screwed-up economy, e.g. the USSR...? Evidence: first man in space, atomic power/weapons, numerous scientific breakthroughs... possibly the best "insights/stuff" ratio.
- A wealthy modern capitalist democracy which values growth, e.g. the USA...? Evidence: more "science" and inventions produced in total than anywhere else. Possibly the best "insights/time" ratio.
comment by Douglas_Reay · 2013-06-13T06:37:17.959Z · LW(p) · GW(p)
Something to take into account:
Speed of economic growth affects the duration of the demographic transition from high-birth-rate-and-high-death-rate to low-birth-rate-and-low-death-rate, for individual countries; and thus affects the total world population.
A high-population world, full of low-education people desperately struggling to survive (i.e. low on Maslow's hierarchy), might be more likely to support bad decisions about AI development made for short-term nationalistic reasons.
Replies from: NancyLebovitz, knb↑ comment by NancyLebovitz · 2013-06-13T09:08:25.481Z · LW(p) · GW(p)
UFAI might be developed by a large company as well as by a country.
Replies from: Thomas↑ comment by Thomas · 2013-06-13T09:11:26.271Z · LW(p) · GW(p)
Or by a garage firm.
Replies from: None↑ comment by [deleted] · 2013-06-13T17:02:06.879Z · LW(p) · GW(p)
Is it plausible that UFAI (or any kind of strong AI) will be created by just one person? It seems like important mathematical discoveries have been made single-handedly, like calculus.
Replies from: JoshuaZ, Randaly, Thomas, NancyLebovitz↑ comment by JoshuaZ · 2013-06-13T18:20:25.613Z · LW(p) · GW(p)
Neither Newton nor Leibniz invented calculus single-handedly, as is often described. There was a lot of precursor work. Newton, for example, credited the idea of the derivative to Fermat's prior work on drawing tangent lines (which was itself a generalization of ancient Greek ideas about tangents to conic sections). Others also discussed similar notions before Newton and Leibniz, such as the mean speed theorem. After both of them, a lot of work still needed to be done to make calculus useful. The parts of calculus that Newton and Leibniz did develop cover only about half of what is taught in a normal intro calc class today.
A better example might be Shannon's development of information theory which really did have almost no precursors and did leap from his brow fully formed like Athena.
↑ comment by Randaly · 2013-06-13T17:15:23.810Z · LW(p) · GW(p)
UFAI is not likely to be a purely mathematical discovery. The most plausible early UFAI designs will require vast computational resources and huge amounts of code.
In addition, UFAI has a minimum level of intelligence required before it becomes a threat; one might well say that UFAI is analogous not to calculus itself, but rather to solving a particular problem in calculus that uses tools not invented for hundreds of years after Newton and Leibniz.
↑ comment by Thomas · 2013-06-13T17:37:36.424Z · LW(p) · GW(p)
AI is a math problem, yes. And almost all math problems have been solved by a single person. Several mathematical theories were also built this way, single-handedly.
James Garfield invented a new proof of the Pythagorean Theorem. Excellent for a POTUS, more than most mathematicians ever accomplish. Not good enough for anything like AI.
It could be that the AI problem is no harder than Fermat's Last Theorem. It could be that it is much harder. Harder than the Riemann hypothesis, maybe.
It is also possible that it is just hard enough for one dedicated (brilliant) human and will be solved suddenly.
↑ comment by NancyLebovitz · 2013-06-13T17:27:23.382Z · LW(p) · GW(p)
I don't think it will be just one person, but I don't have a feeling for how large a team it would take. Opinions?
↑ comment by knb · 2013-06-13T07:51:18.625Z · LW(p) · GW(p)
How likely is it that AI would be developed first in a poor, undeveloped country that hasn't gone through the demographic transition? My guess is: extremely low.
Replies from: Douglas_Reay↑ comment by Douglas_Reay · 2013-06-13T11:43:12.028Z · LW(p) · GW(p)
I'd agree, but I'd point out that troubles originating in a country often don't stay within that country's borders. If you are a rich but small country with an advanced computer industry, and your neighbour is a large but poor country with a strong military, this is going to affect your decisions.
comment by Dr_Manhattan · 2013-06-13T15:35:22.344Z · LW(p) · GW(p)
Great Stagnation being good news
Per Thiel, the computer industry is the exception to the Great Stagnation, so I'm not sure how much it really helps. You can claim that building flying cars would take resources away from UFAI progress, though intelligence research (i.e. machine learning) is so intertwined with every industry that this is a weak argument.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-06-14T07:31:17.401Z · LW(p) · GW(p)
How likely is it that better growth prospects in non-software industries would lead to investment dollars being drawn away from the software industry to those industries and a decrease in UFAI progress on net?
Replies from: Dr_Manhattan↑ comment by Dr_Manhattan · 2013-06-14T13:12:11.514Z · LW(p) · GW(p)
"Not likely", since
though intelligence research (i.e. machine learning) is so intertwined with every industry that this is a weak argument.
= software is eating the world.
comment by owencb · 2013-08-03T12:02:19.568Z · LW(p) · GW(p)
A key step in your argument is the importance of the parallel/serial distinction. However, we already have some reasonably effective institutions for making naturally serial work parallelizable (e.g. peer review), and more are arising. This has allowed new areas of mathematics to be explored pretty quickly. These provide a valve which should mean that extra work on FAI is only slightly less effective than you'd initially think.
You could still think that this was the dominant point if economic growth increased the speed of both AI and AI safety work to the same degree, but I think that is very unclear. Even granting points (1)-(3) of Paul's disjunction, it seems like the most important question is the comparative relationship between the elasticity of AI work with respect to economic growth and the elasticity of FAI work with respect to economic growth.
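One way to make the elasticity comparison concrete (the exponents below are assumptions for illustration, not estimates): suppose UFAI-relevant effort scales as G^a and FAI-relevant effort as G^b, where G is an index of economic activity; then growth favors whichever side has the larger exponent.

```python
# Toy elasticity comparison (assumed exponents, purely for illustration):
# suppose UFAI-relevant effort scales as G**a and FAI-relevant effort as
# G**b, where G is an index of economic activity.

def effort_ratio(G, a=1.0, b=1.3):
    """FAI effort relative to UFAI effort at activity level G."""
    return G ** (b - a)

for G in (1.0, 1.5, 2.0):
    print(f"G = {G:.1f}: FAI/UFAI effort ratio = {effort_ratio(G):.2f}")

# If b > a (safety work behaves like a luxury good, funded more readily in
# richer societies), growth shifts the ratio toward FAI; if b < a, it shifts
# the other way. The serial-depth concern from the post is a separate axis.
```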
I currently incline to think that in general, in a more prosperous society, proportionally more people are likely to fund or be drawn into work which is a luxury in the short term, and AI safety work fits into this category (whereas AI work is more likely to have short-term economic benefits, and so to get funding). However, this question could do with more investigation! In particular I think it's plausible that the current state of EA movement growth means we're not in a general position. I haven't yet thought carefully through all of the ramifications of this.
Nick Beckstead made comments on some related questions in this talk: http://intelligence.org/wp-content/uploads/2013/07/Beckstead-Evaluating-Options-Using-Far-Future-Standards.pdf
By the way, I can guess at some of the source of disagreement here. The language of your post (e.g. "I wish I had more time, not less, in which to work on FAI") suggests that you imagine that the current people working on FAI will account for a reasonable proportion of the total AI safety work which will be done. I think it more likely that there will be a lot more people working on it before we get AI, so the growth rate dominates.
comment by Tyrrell_McAllister · 2013-06-12T20:18:22.824Z · LW(p) · GW(p)
What kinds of changes to the economy would disproportionately help FAI over UFAI? I gather that your first-order answer is "slowing down", but how much slower? (In the limit, both kinds of research grind to a halt, perhaps only to resume where they left off when the economy picks up.) Are there particular ways in which the economy could slow down (or even speed up) that would especially help FAI over UFAI?
comment by AlexMennen · 2013-06-13T08:49:48.968Z · LW(p) · GW(p)
I would also expect socialist economic policies to increase chances of successful FAI, for two reasons. First, it would decrease incentives to produce technological advancements that could lead to UFAI. Second, it would make it easier to devote resources to activities that do not result in a short-term personal profit, such as FAI research.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-06-13T09:04:18.186Z · LW(p) · GW(p)
Socialist economic policies, perhaps yes. On the other hand, full-blown socialism...
How likely would a socialist government be to insist that its party line be hardcoded into the AI's values, and what would be the likely consequences? How likely is it that the scientists working on the AI would be selected for their rationality, as opposed to their loyalty to the regime?
Replies from: AlexMennen↑ comment by AlexMennen · 2013-06-13T09:44:17.247Z · LW(p) · GW(p)
How does anything in my comment suggest that I think brutal dictatorships increase the chance of successful FAI? I only mentioned socialist economic policies.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-06-13T09:57:25.167Z · LW(p) · GW(p)
I don't think you suggested that; I just wanted to prevent a possible connotation (that I think some people are likely to make, including me).
Note: I also didn't downvote your comment - because I think it is reasonable - so probably someone else made that interpretation. Probably influenced by my comment. Sorry for that.
This said, I don't think a regime must be a brutal dictatorship to insist that its values be hardcoded into the AI's values. I can imagine nice people insisting that you hardcode in the Universal Declaration of Human Rights, religious tolerance, diversity, tolerance of minorities, preserving cultural heritage, preserving nature, etc. Actually, I imagine that most people would consider Eliezer less reliable to work on Friendly AI than someone who professes all the proper applause lights.
Replies from: AlexMennen↑ comment by AlexMennen · 2013-06-13T13:10:10.898Z · LW(p) · GW(p)
If a government pursued its own AGI project, that could be a danger, but not hugely more so than private AI work. In order to be much more threatening, it would have to monopolize AI research, so that organizations like MIRI couldn't exist. Even then, FAI research would probably be easier to do in secret than making money off of AI research (the primary driver of UFAI risk) would be.
comment by Rob Bensinger (RobbBB) · 2013-06-13T07:20:28.064Z · LW(p) · GW(p)
It's easy to see why rationalists shouldn't help develop technologies that speed AI. (On paper, even an innovation that speeds FAI twice as much as it speeds AI itself would probably be a bad idea if it weren't completely indispensable to FAI. On the other hand, the FAI field is so small right now that even a small absolute increase in money, influence, or intellectual power for FAI should have a much larger impact on our future than a relatively large absolute increase or decrease in the rate of progress of the rest of AI research. So we should be more interested in how effective altruism impacts the good guys than how it impacts the bad guys.)
I'm having a harder time accepting that discouraging rationalists from pulling people out of poverty is a good thing. We surely don't expect EA as currently practiced by most to have a significant enough impact on global GDP to reliably and dramatically affect AI research in the near term. If EA continues to flourish in our social circles (and neighboring ones), I would expect the good reputation EA helps build for rationality activists and FAI researchers (in the eyes of the public, politicians, charitable donors, and soft-hearted potential researchers) to have a much bigger impact on FAI prospects than how many metal roofs Kenyans manage to acquire.
(Possibly I just haven't thought hard enough about Eliezer's scenarios, hence I have an easier time seeing reasoning along his lines discouraging prosaic effective altruism in the short term than having a major effect on the mid-term probability of e.g. completely overhauling a country's infrastructure to install self-driving electric cars. Perhaps my imagination is compromised by absurdity bias or more generally by motivated reasoning; I don't want our goals to diverge either.)
As it stands, I wouldn't even be surprised if the moral/psychological benefits to rationalists of making small measurable progress in humanitarian endeavors outweighed the costs if those endeavors turned out to be slightly counterproductive in the context of FAI. Bracketing the open problems within FAI itself, the largest obstacles we're seeing are failures of motivation (inciting heroism rather than denial or despair), of imagination (understanding the problem's scope and our power as individuals to genuinely affect it), and of communication (getting our message out). Even (subtly) ineffective (broadly) conventional altruistic efforts seem like they could be useful ways of addressing all three of those problems.
comment by jimrandomh · 2013-06-12T22:59:56.319Z · LW(p) · GW(p)
Motivated reasoning warning: I notice that I want it to be the case that economic growth improves the FAI win rate, or at least doesn't reduce it. I am not convinced either way, but here are my thoughts.
Moore's Law, as originally formulated, was that the unit dollar cost per processor element halves in each interval. I am more convinced that this is serially limited, than I am that FAI research is serially limited. In particular, semiconductor research is saturated with money, and FAI research isn't; this makes it much more likely to have used up any gains from parallelism. This reasoning suggests that when slowing down or speeding up the overall clock, Moore's Law is one of the few things that won't be affected.
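For reference, the formulation being invoked can be written as a simple doubling law (the two-year halving interval below is an assumption chosen for illustration, not a measurement):

```python
# Moore's law in the form cited above: unit cost per processor element
# halves every fixed interval, independent of the economic growth rate.

def cost_per_element(t_years, initial_cost=1.0, halving_interval=2.0):
    """Unit cost after t_years, given a halving every halving_interval years."""
    return initial_cost * 2 ** (-t_years / halving_interval)

# After a decade the cost falls by a factor of 2**5 = 32 on this model,
# whether the surrounding economy grew at 1% or 4% over the same period.
print(cost_per_element(10.0))   # 0.03125
```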
I'm also not fully convinced that RGDP growth and the amount of hack-AI research are monotonically connected. The limiting input is top programmers; economic progress does not translate straightforwardly into creating them, and stronger economies both give them more autonomy (so they're less likely to be diverted into something trivial like making people click on ads) and let them retire more easily. These effects push in opposite directions, and it's not at all clear which is stronger.
Regardless of what happens to the economy, the first- and second-wave Internet generations will take power on a fixed schedule. That's way more kaboomy than economic growth is.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-12T23:08:26.261Z · LW(p) · GW(p)
Moore's Law is one of the few things that won't be affected.
FAI seems to me to be mostly about serial depth of research. UFAI seems to be mostly about cumulative parallel volume of research. Things that affect this are effectual even if Moore's Law is constant.
I'm also not fully convinced that RGDP growth and the amount of hack-AI research are monotonically connected.
We could check how economic status affects science funding.
Regardless of what happens to the economy, the first- and second-wave Internet generations will take power on a fixed schedule. That's way more kaboomy than economic growth is.
? What does your model claim happens here?
Replies from: jimrandomh, Vaniver↑ comment by jimrandomh · 2013-06-13T16:05:47.393Z · LW(p) · GW(p)
? What does your model claim happens here?
Right now the institutional leadership in the US is (a) composed almost entirely of baby boomers, a relatively narrow age band, and (b) significantly worse than average (as compared to comparable institutions and leaders in other countries). When they start retiring, they won't be replaced with people who are only slightly younger, but by people who're much younger and spread across a larger range of ages, causing organizational competence to regress to the mean, in many types of institutions simultaneously.
I also believe - and this is much lower confidence - that this is the reason for the Great Stagnation; institutional corruption is suppressing and misrouting most research, and a leadership turnover may reverse this process, potentially producing an order of magnitude increase in useful research done.
↑ comment by Vaniver · 2013-06-13T02:40:01.858Z · LW(p) · GW(p)
FAI seems to me to be mostly about serial depth of research. UFAI seems to be mostly about cumulative parallel volume of research. Things that affect this are effectual even if Moore's Law is constant.
I wonder how much of this estimate is your distance to the topic; it seems like there could be a bias to think that one's own work is more serial than it actually is, and others' work more parallelizable.
(Apply reversal test: what would I expect to see if the reverse were true? Well, a thought experiment: would you prefer two of you working for six months (either on the same project together, or different projects) and then nothing for six months, or one of you working for a year? The first makes more sense in parallel fields, the second more sense in serial fields. If you imagined that instead of yourself, it was someone in another field, what would you think would be better for them? What's different?)
comment by Shmi (shminux) · 2013-06-12T21:07:20.641Z · LW(p) · GW(p)
This question is quite loaded, so maybe it's good to figure out which part of economic or technological growth is potentially too fast. For example, would it make a difference if the rate of Moore's law matched the rate of economic growth, say 4-5% annually, instead of exceeding it by an order of magnitude?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-12T21:23:03.906Z · LW(p) · GW(p)
Offhand I'd think a world like that would have a much higher chance of survival. Their initial AIs would run on much weaker hardware and would need much better algorithms. They'd stand a vastly better chance of getting intelligence amplification before AI. Advances in neuroscience would have a long lag time before translating into UFAI. Moore's Law is not like vanilla econ growth - I felt really relieved when I realized that Moore's Law for serial speeds had definitively broken down. I am much less ambivalent about that being good news than I am about the Great Stagnation or Great Recession being disguised good news.
Replies from: rhollerith_dot_com↑ comment by RHollerith (rhollerith_dot_com) · 2013-06-13T02:30:21.388Z · LW(p) · GW(p)
How bad is an advance (e.g., a better programming language) that increases the complexity and sophistication of the projects that a team of programmers can successfully complete?
My guess is that it is much worse than an advance picked at random that generates the same amount of economic value, and about half or two-thirds as bad as an improvement in general-purpose computing hardware that generates an equal amount of economic value.
comment by pnrjulius · 2013-06-12T20:22:38.997Z · LW(p) · GW(p)
This question is broader than just AI. Economic growth is closely tied to technological advancement, and technological advancement in general carries great risks and great benefits.
Consider nuclear weapons, for instance: Was humanity ready for them? They are now something that could destroy us at any time. But on the other hand, they might be the solution to an oncoming asteroid, which could have destroyed us for millions of years.
Likewise, nanotechnology could create a grey goo event that kills us all; or it could lead to a world without poverty, without disease, where we all live as long as we like and have essentially unlimited resources.
It's also worth asking whether slowing technology would even help; cultural advancement seems somewhat dependent upon technological advancement. It's not clear to me that had we taken another 100 years to get nuclear weapons we would have used them any more responsibly; perhaps it simply would have taken that much longer to achieve the Long Peace.
In any case, I don't really see any simple intervention that would slow technological advancement without causing an enormous amount of collateral damage. So unless you're quite sure that the benefit in terms of slowing down dangerous technologies like unfriendly AI outweighs the cost in slowing down beneficial technologies, I don't think slowing down technology is the right approach.
Instead, find ways to establish safeguards, and incentives for developing beneficial technologies faster. To some extent we already do this: Nuclear research continues at CERN and Fermilab, but when we learn that Iran is working on similar technologies we are concerned, because we don't think Iran's government is trustworthy enough to deal with these risks. There aren't enough safeguards against unfriendly AI or incentives to develop friendly AI, but that's something the Singularity Institute or similar institutions could very well work on. Lobby for legislation on artificial intelligence, or raise funds for an endowment that supports friendliness research.
Replies from: Eliezer_Yudkowsky, gwern, RobbBB↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-12T21:28:23.843Z · LW(p) · GW(p)
To be clear, the question is not whether we should divert resources from FAI research to trying to slow world economic growth, that seems risky and ineffectual. The question is whether, as a good and ethical person, I should avoid any opportunities to join in ensembles trying to increase world economic growth.
Replies from: NancyLebovitz, Epiphany↑ comment by NancyLebovitz · 2013-06-12T22:10:45.770Z · LW(p) · GW(p)
If the ideas for increasing world economic growth can be traced back to you, might the improvement in your reputation increase the odds of FAI?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-12T22:36:45.958Z · LW(p) · GW(p)
Sounds like a rather fragile causal pathway. Especially if one is joining an ensemble.
Replies from: ialdabaoth↑ comment by ialdabaoth · 2013-06-13T11:13:16.337Z · LW(p) · GW(p)
Follow-up: If you are part of an ensemble generating ideas for increasing world economic growth, how much information will that give you about the specific ways in which economic growth will manifest, compared to not being part of that ensemble? How easily leveraged is that information towards directly controlling or exploiting a noticeable fraction of the newly-grown economy?
As a singular example: how much money could you get from judicious investments, if you know where things are going next? How usable would those funds be towards mitigating UFAI risks and optimizing FAI research, in ratio to the increased general risk of UFAI caused by the economic growth itself?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-13T13:55:51.893Z · LW(p) · GW(p)
That's why I keep telling people about Scott Sumner, market monetarism, and NGDP level targeting - it might not let you beat the stock market indices, but you can end up with some really bizarre expectations if you don't know about the best modern concept of "tight money" and "loose money". E.g. all the people who were worried about hyperinflation when the Fed lowered interest rates to 0.25% and started printing huge amounts of money, while the market monetarists were saying "You're still going to get sub-trend inflation; our indicators say there isn't enough money being printed."
Beating the market is hard. Not being stupid with respect to the market is doable.
↑ comment by Epiphany · 2013-06-13T03:19:27.475Z · LW(p) · GW(p)
Perhaps a better question would be "If my mission is to save the world from UFAI, should I expend time and resources attempting to determine what stance to take on other causes?" No matter your capacity for learning multiple subjects, investing that time and energy into FAI would, in theory, result in a better outcome for FAI. Though I am becoming increasingly aware that there are limits to how good I can be with subjects I haven't specialized in, and if you think about it, you may realize that you have limitations as well. One of the most intelligent people I've ever met said to me (on a different subject):
"I don't know enough to do it right. I just know enough to get myself in trouble."
If you could do anything with the time and effort this ensemble would require of you (to make a quality decision and to participate in its activities), what would make the biggest difference?
↑ comment by gwern · 2013-06-13T21:32:52.075Z · LW(p) · GW(p)
But on the other hand, they might be the solution to an oncoming asteroid, which could have destroyed us for millions of years.
Not much of a point in nukes' favor since there are so many other ways to redirect asteroids; even if nukes had a niche for taking care of asteroids very close to impact, it'd probably be vastly cheaper to just put up a better telescope network to spot all asteroids further off.
↑ comment by Rob Bensinger (RobbBB) · 2013-06-13T21:16:40.108Z · LW(p) · GW(p)
Nukes and bioweapons don't FOOM in quite the way AGI is often thought to, because there's a nontrivial proliferation step following the initial development of the technology. (Perhaps they resemble Oracle AGI in that respect; subsequent to being created, the technology has to unlock itself, either suddenly or by a gradual increase in influence, before it can have a direct catastrophic impact.)
I raise this point because the relationship between technology proliferation and GDP may differ from that between technology development and GDP. Moreover, global risks tied to poverty (regional conflicts resulting in biological or nuclear war; poor sanitation resulting in pandemic diseases; etc.) may compete with ones tied to prosperity.
Of course, these risks might be good things if they provided the slowdown Eliezer wants, gravely injuring civilization without killing it. But I suspect most non-existential catastrophes would have the opposite effect. Long-term thinking and careful risk assessment are easier when societies (and/or theorists) feel less immediately threatened; post-apocalyptic AI research may be more likely to be militarized, centralized, short-sighted, and philosophically unsophisticated, which could actually speed up UFAI development.
Two counter-arguments to the anti-apocalypse argument:
- A catastrophe that didn't devastate our intellectual elites would make them more cautious and sensitive to existential risks in general, including UFAI. An AI-related crisis (that didn't kill everyone, and came soon enough to alter our technological momentum) would be particularly helpful.
- A catastrophe would probably favor strong, relatively undemocratic leadership, which might make for better research priorities, since it's easier to explain AI risk to a few dictators than to a lot of voters.
So unless you're quite sure that the benefit in terms of slowing down dangerous technologies like unfriendly AI outweighs the cost in slowing down beneficial technologies
As an alternative to being quite sure that the benefits somewhat outweigh the risks, you could somewhat less confidently believe that the benefits overwhelmingly outweigh the risks. In the end, inaction requires just as much moral and evidential justification as action.
comment by Eli Tyre (elityre) · 2020-08-10T07:09:19.109Z · LW(p) · GW(p)
One countervailing thought: I want AGI to be developed in a high-trust, low-scarcity social-psychological context, because that seems like it matters a lot for safety.
Slow growth enough and society as a whole becomes a lot more bitter and cutthroat?
comment by answer · 2013-06-13T06:38:16.380Z · LW(p) · GW(p)
Relative to UFAI, FAI work seems like it would be mathier and more insight-based, where UFAI can more easily cobble together lots of pieces. This means that UFAI parallelizes better than FAI. UFAI also probably benefits from brute-force computing power more than FAI. Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done.
Forgive me if this is a stupid question, but wouldn't UFAI and FAI have identical or near-identical computational abilities/methods/limits and differ only by goals/values?
Replies from: knb, TheOtherDave↑ comment by knb · 2013-06-13T07:04:13.255Z · LW(p) · GW(p)
Forgive me if this is a stupid question, but wouldn't UFAI and FAI have identical or near-identical computational abilities/methods/limits and differ only by goals/values?
An FAI would have to be created by someone with a clear understanding of how the whole system worked, in order for them to know it would maintain the original values its creator wanted it to have. Because of that, an FAI would probably have to have fairly clean, simple code. You could also imagine a super-complex kludge of different systems (think of the human brain) that works when backed by massive processing power but is not well understood. It would be hard to predict what that system would do without turning it on. The overwhelming probability is that it would be a UFAI, since FAIs are such a small fraction of the set of possible mind designs.
It's not that a UFAI needs more processing power, but that if tons of processing power is needed, you're probably not running something which is provably Friendly.
↑ comment by TheOtherDave · 2013-06-13T15:58:54.712Z · LW(p) · GW(p)
Yes. The OP is assuming that the process of reliably defining the goals/values which characterize FAI is precisely what requires a "mathier and more insight-based" process which parallelizes less well and benefits less from brute-force computing power.
comment by fubarobfusco · 2013-06-13T02:12:27.819Z · LW(p) · GW(p)
The Great Stagnation has come with increasing wealth and income disparity.
This is to say: A smaller and smaller number of people are increasingly free to spend an increasing fraction of humanity's productive capacity on the projects they choose. Meanwhile, a vastly larger number of people are increasingly restricted to spend more of their personal productive capacity on projects they would not choose (i.e. increasing labor hours), and in exchange receive less and less control of humanity's productive capacity (i.e. diminishing real wages) to spend on projects that they do choose.
How does this affect the situation with respect to FAI?
comment by Halfwit · 2013-06-13T19:27:41.378Z · LW(p) · GW(p)
I think we're past the point where it matters. If we had had a few lost decades in the mid-twentieth century, maybe (and just to be cognitively polite here, this is just my intuition talking) the intelligence explosion could have been delayed significantly. We are just a decade off from home computers with >100 teraflops, not to mention the distressing trend toward neuromorphic hardware (here's Ben Chandler of the SyNAPSE project talking about his work on Hacker News). With all this inertia, it would take an extremely large downturn to slow us now. Engineering a new AI winter seems like a better idea, though I'm confused about how this could be done. Perceptrons discredited connectionist approaches for a surprisingly long time; perhaps a similar book could discredit (and indirectly defund) dangerous branches of AI which aren't useful for FAI research. But this seems unlikely, though less so than the OP significantly altering economic growth either way.
comment by lsparrish · 2013-06-13T14:28:31.798Z · LW(p) · GW(p)
Any ideas to make FAI parallelize better? Or make there be less UFAI resources without reducing economic growth?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-13T14:32:36.578Z · LW(p) · GW(p)
If there were a sufficiently smart government with a sufficiently demonstrated track record of cluefulness whose relevant officials seemed to genuinely get the idea of pro-humanity/pro-sentience/galactic-optimizing AI, the social ideals and technical impulse behind indirect normativity, and that AI was incredibly dangerous, I would consider trusting them to be in charge of a Manhattan Project with thousands of researchers with enforced norms against information leakage, like government cryptography projects. This might not cure required serial depth but it would let FAI parallelize more without leaking info that could be used to rapidly construct UFAI. I usually regard this scenario as a political impossibility.
Things that result in fewer resources going into AI specifically would result in fewer UFAI resources without reducing overall economic growth, but it needs to be kept in mind that some such research occurs in financial firms pushing trading algorithms, and a lot more in Google, not just in places like universities.
Replies from: Benja↑ comment by Benya (Benja) · 2013-11-03T20:54:35.175Z · LW(p) · GW(p)
Things that result in fewer resources going into AI specifically would result in fewer UFAI resources without reducing overall economic growth, but it needs to be kept in mind that some such research occurs in financial firms pushing trading algorithms, and a lot more in Google, not just in places like universities.
To the extent that industry researchers publish less than academia (this seems particularly likely in financial firms, and to a lesser degree at Google), a hypothetical complete shutdown of academic AI research should reduce uFAI's parallelization advantage by 2+ orders of magnitude, though (presumably, the largest industrial uFAI teams are much smaller than the entire academic AI research community). It seems that reducing academic funding for AI only somewhat should translate pretty well into less parallel uFAI development as well.
comment by Epiphany · 2013-06-13T02:58:44.058Z · LW(p) · GW(p)
I'm not convinced that slowing economic growth would result in FAI developing faster than UFAI, and I think your main point of leverage for getting an advantage lies elsewhere (explained below). The key is obviously the proportion between the two, not just slowing down the one or speeding up the other, so I suggest a brainstorm to consider all of the possible ways that slow economic growth could also slow FAI. For one thought: do non-profit organizations do disproportionately poorly during recessions?
The major point of leverage, I think, is people, not the economy. How that would work:
Selfish people have a goal that divides them. If your goal is to be an AI trillionaire, you have to start your own AI company. If your goal is to save the world from AI, your goal benefits if you co-operate with like-minded people as much as possible. The aspiring trillionaire's goals will not be furthered by co-operating with competitors, and the world-saver's goals will not be furthered by competing with potential allies. This means that even if UFAI workers outnumber FAI workers 10 to 1, it's possible for the FAI effort to unite that 10% and conquer the 90% with sheer numbers, assuming that the 90% is divided into fragments smaller than 10% each.
An organization made up of altruistic people might be stronger and more efficient than an organization made up of selfish people. If FAI workers are more honest or less greedy, this could yield practical benefits like a reduction in efficiency-draining office politics games, a lower risk of being stolen from or betrayed by those within the organization, and being able to put more money toward research because paychecks may not need to be as large. Also, people who are passionate about their goal derive meaning from their work. They might work harder or have more moments of creative inspiration than people who are working simply to get more money.
The FAI movement is likely to attract people who are forward-thinking and/or more sensible, realistic or rational. These people may be more likely to succeed at what they do than those attracted to UFAI projects.
People look down on those who hurt others for profit. Right now, the average person probably does not know about the UFAI risk and how important it is that people should not work on such projects. UFAI is a problem that would affect everyone, so it's likely that the average person will eventually take interest in it and shun those who work on UFAI. If they're accurately informed about which projects should be considered UFAI, I think it would seriously deter AI workers from choosing UFAI jobs.
It has been said by people knowledgeable about geniuses that they typically don't prioritize money as highly as others. Unfortunately, there isn't enough solid research on the gifted population (let alone geniuses) but if it's true that money is not the most important thing to a genius, you may find that a disproportionate number of geniuses would prioritize working on FAI over taking the biggest possible paycheck.
Unlike the economy, all of these are things that MIRI can take action on. If the FAI movement can take advantage of any of these to get more workers, more good minds, or more good people than the UFAI projects, I want to see it happen.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-06-13T08:52:19.399Z · LW(p) · GW(p)
Uhm, it is not that simple. Perhaps selfish people cooperate less, but among altruistic people often the price for cooperation is worshiping the same applause lights. Selfish people optimize for money or power, altruistic people often optimize for status in altruistic community. Selfish people may be more agenty, simply because they know that if they don't work for their selfish benefits, no one else will. Altruistic people often talk about what others should do, what the government should do, etc. Altruistic people collect around different causes, they compete for donor money and public attention, even their goals may sometimes be opposed; e.g. "protecting nature" vs "removing the suffering inherent in nature"; "spreading rationality" vs "spreading religious tolerance"; "making people equal and happy" vs "protecting the cultural heritage". People don't like those who hurt others, but they also admire high-status people and despise low-status people. Geniuses are often crazy.
I'm not saying it is exactly the other way round as you said. Just: it's complicated. I scanned through your comment and listed all the counterarguments that immediately came to my mind. If good intentions and intelligence translated to success so directly, then communists wouldn't have killed millions of people, Mensa would rule the world now, and we all would be living in the post-singularity paradise already.
comment by drethelin · 2013-06-12T22:28:32.262Z · LW(p) · GW(p)
I think this depends on how much you think you have the ability to cash in on any given opportunity. E.g., you gaining a ton of money is probably going to help the cause of FAI more than whatever amount of economic growth is generated helps bring about AI. So basically either put your money where your theories are or don't publicly theorize?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-12T23:05:50.839Z · LW(p) · GW(p)
This is true for non-super-huge startups that donate any noticeable fraction of generated wealth to EA, yes - that amount is not a significant percentage of overall global econ growth, and would be a much larger fraction of FAI funding.
comment by James_Miller · 2013-06-12T22:08:59.600Z · LW(p) · GW(p)
This might come down to eugenics. Imagine that in 15 years, with the help of genetic engineering, lots of extremely high IQ people are born, and their superior intelligence means that in another 15 or so years (absent a singularity) they will totally dominate AGI software development. The faster the economic growth rate the more likely that AGI will be developed before these super-geniuses come of age.
Replies from: Eliezer_Yudkowsky, mwengler, John_Maxwell_IV↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-12T22:40:36.430Z · LW(p) · GW(p)
Are these high-IQ folk selectively working on FAI rather than AGI to a sufficient degree to make up for UFAI's inherently greater parallelizability?
EDIT: Actually, smarter researchers probably confer a larger relative advantage on FAI than on UFAI, to an even greater extent than differences in serial depth of cognition do, so it's hard to see how this could realistically be bad. Reversal test: dumber researchers everywhere would not help FAI over UFAI.
Replies from: James_Miller↑ comment by James_Miller · 2013-06-12T23:12:32.378Z · LW(p) · GW(p)
I'm not sure; this would depend on their personalities. But you might learn a lot about their personalities while they were still too young to be effective programmers. In one future Earth you might trust them and hope for enough time for them to come of age, whereas in another you might be desperately trying to create a foom before they overtake you.
Hopefully, much of the variance in human intelligence comes down to genetic load; a low genetic load often makes you an all-around great and extremely smart person, someone like William Marshal; and we will soon create babies with extremely low genetic loads. If this is to be our future, we should probably hope for slow economic growth.
↑ comment by mwengler · 2013-06-14T15:11:49.011Z · LW(p) · GW(p)
Extremely high IQ arising from engineering... is that not AI?
This is not a joke. UAI is essentially the fear that "we" will be replaced by another form of intelligence, outcompeted for resources by what is essentially another life form.
But how do "we" not face the same threat from an engineered lifeform just because some of the ingredients are us? If such a new engineered lifeform replaces natural humanity, is that not a UAI? If we can build in some curator instinct or 3 laws or whatever into this engineered superhuman, is that not FAI?
The interesting thing to me here is what we mean by "we." I think it is more common for a lesswrong poster to identify as "we" with an engineered superhuman in meat substrate than with an engineered non-human intelligence in non-meat substrate.
Considering this, maybe an FAI is just an AI that learns enough about what we think of as human that it can hack it. It could construct itself so that it felt to us like our descendant, our child. Then "we" do not resent the AI for taking all "our" resources, because the AI has successfully led us to be happy to see our child succeed beyond what we managed.
Perhaps one might say that of course this would be on our list of things we would define as unfriendly. But then we build AIs that "curate" humans as we are now, and are we not precluded from enhancing ourselves or evolving past some limit we have preprogrammed into our FAI?
comment by mwengler · 2013-06-14T14:29:17.796Z · LW(p) · GW(p)
For FAI to beat UAI, sufficient work on FAI needs to be done before sufficient work on AI is done.
If slowing the world economy doesn't change the proportion of work done on things, then a slower world economy doesn't increase the chance of FAI over UAI; it merely delays the time at which one or the other happens. Without specifying how the world's production is turned down, wouldn't we need to assume that EY's productivity is turned down along with the rest of the world's?
If we assume all of humanity except EY slows down, AND that EY is turning the FAI knob harder than the other knobs relative to the rest of humanity, then we increase the chance of FAI preceding UAI.
comment by blogospheroid · 2013-06-14T05:04:59.329Z · LW(p) · GW(p)
I'm not sure that humane values would survive in a world that rewards cooperation weakly. Azathoth grinds slow, but grinds fine.
To oversimplify, there seem to be two main factors that increase cooperation, two basic foundations for law: religion and economic growth. Of these, religion seems far more prone to volatility. It is possible for a few marginally more intelligent people to point out the absurdity of the entire doctrine, and along with the religion, all the other societal values collapse.
Economic growth seems to be a far more promising foundation for law, as the poor and the low in status can be genuinely assured that they will get a small share of a growing pie. If economic growth slows down too much, it's back to values ingrained by evolution.
comment by CronoDAS · 2013-06-13T00:17:10.304Z · LW(p) · GW(p)
If we were perpetually stuck at Roman Empire levels of technology, we'd never have to worry about UFAI at all. That doesn't make it a good thing.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-13T00:32:12.327Z · LW(p) · GW(p)
If we all got superuniversal-sized computers with halting oracles, we'd die within hours. I'm not sure the implausible extremes are a good way to argue here.
Replies from: cody-bryce, CronoDAS, CronoDAS↑ comment by cody-bryce · 2013-06-13T02:34:02.707Z · LW(p) · GW(p)
Why do you find the idea of having the level of technology of the Roman Empire so extreme? It seems like the explosion in technological development and use in recent centuries could be the fluke. There was supposedly a working steam engine in the Library of Alexandria in antiquity, but no one saw any reason to encourage that sort of thing. During the Middle Ages people didn't even know what the Roman aqueducts were for. With just a few different conditions, it seems within the realm of possibility that ancient Roman technology could have been a nearly-sustainable peak of human technology.
Much more feasible would be staying foragers for the life of the species, though.
Replies from: asr, asr, mwengler, PaulS↑ comment by asr · 2013-06-13T14:10:59.782Z · LW(p) · GW(p)
Some good ideas were lost when the Roman Empire went to pieces, but there were a number of important technical innovations made in formerly-Roman parts of Western Europe in the centuries after the fall of the empire. In particular, it was during the Dark Ages that Europeans developed the stirrup, the horse collar and the moldboard plow. Full use of the domesticated horse was a Medieval development, and an important one, since it gave a big boost to agriculture and war. Likewise, the forced-air blast furnace is an early-medieval development.
The conclusion I draw is that over the timescale of a few centuries, large-scale political disruption did not stop technology from improving.
Replies from: CronoDAS, gwern, cody-bryce↑ comment by gwern · 2013-06-14T23:11:57.483Z · LW(p) · GW(p)
In particular, it was during the Dark Ages that Europeans developed the stirrup, the horse collar and the moldboard plow.
Sure about that? http://richardcarrier.blogspot.com/2007/07/experimental-history.html http://richardcarrier.blogspot.com/2007/08/lynn-white-on-horse-stuff.html
↑ comment by cody-bryce · 2013-06-14T20:29:49.636Z · LW(p) · GW(p)
Although it's still a point worth making that those technologies spread during this period, they were not innovations; they were eastern inventions from antiquity that were adopted.
Stirrups in particular are a fascinating tale of progress not being a sure thing. The stirrup predates not only the fall of Rome, but the founding of Rome. Despite constant trade with the Parthians/Sassanids as well as constantly getting killed by their cavalry, the Romans never saw fit to adopt such a useful technology. Like the steam engine, we see that technological adoption isn't so inevitable.
Replies from: gwern↑ comment by mwengler · 2013-06-14T14:34:46.607Z · LW(p) · GW(p)
Much more feasible would be staying foragers for the life of the species, though.
I guess we could have just skipped all the evolution that took us from Chimp-Bonobo territory to where we are and would never have had to worry about UAI. Or Artificial Intelligence of any sort.
Heck, we wouldn't have even had to worry much about unfriendly or friendly Natural Intelligence either!
↑ comment by PaulS · 2013-06-13T07:19:40.722Z · LW(p) · GW(p)
With just a few different conditions, it seems like it's within the realm of possibility that ancient Roman technology could have been a nearly-sustainable peak of human technology.
What makes you think that? Technological growth had already hit a clear exponential curve by the time of Augustus. The large majority of the time to go from foraging to industry had already passed, and it doesn't look like our history was an unusually short one. Barring massive disasters, most other Earths should fall within an order of magnitude of variation from this case.
In any case, we're definitely at a point now where indefinite stagnation is not on the table... unless there's a serious regression or worse.
↑ comment by CronoDAS · 2013-06-13T15:19:32.311Z · LW(p) · GW(p)
Oh, come on. I'm sure at least a few people would end up with a Fate Worse Than Death instead! ;)
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-13T19:12:50.222Z · LW(p) · GW(p)
Actually I'd be quite confident in no fates worse than death emerging from that scenario. There wouldn't be time for anyone to mess up on constructing something almost in our moral frame of reference, just the first working version of AIXI / Schmidhuber's Gödel machine eating its future light cone.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2013-06-15T17:30:52.347Z · LW(p) · GW(p)
but it can't AIXI the other halting oracles?
comment by [deleted] · 2013-08-14T23:29:43.981Z · LW(p) · GW(p)
If you are pessimistic about global catastrophic risk from future technology and you are most concerned with people alive today rather than future folk, slower growth is better unless the effects of growth are so good that they outweigh time discounting.
But growth in the poorest countries is good: it contributes negligibly to research, national economies are relatively self-contained, and more growth there means more human lives lived before a possible end.
Also, while more focused efforts are obviously better in general than trying to affect growth, there is (at least) one situation where you might face an all-or-nothing decision: voting. I'm afraid the ~my solution here~ candidate will not be available.
comment by lukeprog · 2013-06-24T17:17:23.229Z · LW(p) · GW(p)
Katja comments here:
As far as I can tell, the effect of economic growth on parallelization should go the other way. Economic progress should make work in a given area less parallel, relatively helping those projects that do not parallelize well.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-24T17:34:04.218Z · LW(p) · GW(p)
My model of Paul Christiano does not agree with this statement.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2013-06-24T21:54:30.927Z · LW(p) · GW(p)
I was fortunate to discuss this with Paul and Katja yesterday, and he seemed to feel that this was a strong argument.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-24T23:54:19.887Z · LW(p) · GW(p)
...odd. I'm beginning to wonder if we're wildly at skew angles here.
Replies from: paulfchristiano↑ comment by paulfchristiano · 2013-06-25T11:34:39.685Z · LW(p) · GW(p)
I do think the bigger point is that your argument describes a tiny effect, even if it's correct, so it gets dwarfed by any number of random other things (like better-educated people, or a lower cumulative probability of war) and even more so by the general arguments that suggest the effects of growth would be differentially positive.
But if you accept all of your argument except the last step, Katja's point seems right, and so I think you've gotten the sign on this particular effect wrong. More economic growth means more work per person and the same number of people working in parallel; do you disagree with that? (If so, do you think that's because more economic activity means a higher population, or because it means diverting people from other tasks to AI? I agree there will be a little of the latter, but it's a pretty small effect, and you haven't even invoked the relevant fact about the world, that marginal AI spending is higher than average AI spending, in your argument.)
So if you care about parallelization in time, the effect is basically neutral (the same number of people are working on AI at any given time). If you care about parallelization across people, the effect is significant and positive, because each person does a larger fraction of the total project of building AI. It's not obvious to me that insight-constrained projects (as opposed to "normal" AI projects) care particularly about either. But if they care somewhat about both, then this would be a positive effect. They would have to care several times more about parallelization in time than about parallelization across people in order for you to have gotten the sign right.
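A minimal toy model of that distinction, with the researcher count, the total work required, and baseline productivity all invented for illustration: hold the number of AI researchers fixed, let per-researcher output grow with the economy, and compare how long the project takes and how many person-years it absorbs.
```python
# Toy model: same number of researchers at all times; per-researcher output
# grows with the economy; the project needs W cumulative units of work.
def finish_stats(growth_rate, N=1000, W=200000.0, productivity=1.0):
    """Return (calendar years to finish, total person-years of labor used)."""
    done, years = 0.0, 0
    while done < W:
        done += N * productivity          # one year of work by N researchers
        productivity *= 1 + growth_rate   # productivity tracks economic growth
        years += 1
    return years, N * years

for g in (0.01, 0.04):
    years, person_years = finish_stats(g)
    print(f"growth {g:.0%}: {years} calendar years, {person_years} person-years, "
          f"share of project per person-year = {1 / person_years:.1e}")
```
Under faster growth the same N people are at work in any given year (parallelization in time unchanged), but the project absorbs fewer total person-years, so each person-year covers a larger slice of it (less parallelization across people).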
comment by Izeinwinter · 2013-06-18T06:47:43.330Z · LW(p) · GW(p)
As long as we are in a world where billions are still living in absolute poverty, low economic growth is politically radicalizing and destabilizing. This can prune world branches quite well on its own, no AI needed. Remember, the armory of apocalypse is already unlocked. It is not important which project succeeds first if the world gets radiatively sterilized, poisoned, etc. before either one succeeds. So, no. Not helpful.
comment by Yosarian2 · 2013-06-17T21:08:32.250Z · LW(p) · GW(p)
Before you can answer this question, I think you have to look at a more fundamental question, which is simply: why are so few people interested in supporting FAI research or concerned about the possibility of UFAI?
It seems like there are a lot of factors involved here. In times of economic stress, short-term survival tends to dominate over long-term thinking. Among people who are doing long-term thinking, there are a number of other problems that many of them are more focused on, such as resource depletion, global warming, etc.; even if you don't think those are as big a threat to our future as UFAI is, moving towards solving them will free up some of that "intellectual energy" for other problems. Pessimistic thinking about the future in general, and a widespread pessimism about our odds of ever advancing to the point of having true AGI, are also likely to be factors.
In general, I think that the better the economy is, the more people see the world getting better, and the more other problems we are able to solve, the more likely people are to start thinking about long-term risks such as UFAI, nanotechnological weapons, etc. I don't think the difference between a better economy and a worse economy is likely to make much difference to the rate of technological change (graphs seem to demonstrate that, for example, the Great Depression didn't slow down technology much), but it is likely to have a major impact on the kind of short-term thinking that people have during a crisis and the kind of long-term thinking that people are more able to do when the short-term issues seem to be under control.
comment by jmmcd · 2013-06-14T19:59:22.709Z · LW(p) · GW(p)
To be clear, the idea is not that trying to deliberately slow world economic growth would be a maximally effective use of EA resources and better than current top targets; this seems likely to have very small marginal effects, and many such courses are risky. The question is whether a good and virtuous person ought to avoid, or alternatively seize, any opportunities which come their way to help out on world economic growth.
It sounds like status quo bias. If growth were currently 2% higher, should the person then seize on growth-slowing opportunities?
One answer: it could be that any effort is likely to have little success in slowing world growth, but a large detrimental effect on the person's other projects. Fair enough, but presumably it applies equally to speeding growth.
Another: an organisation that aspires to political respectability shouldn't be seen to be advocating sabotage of the economy.
comment by elharo · 2013-06-13T11:12:09.168Z · LW(p) · GW(p)
For economic growth, don't focus on the total number; ask about the distribution. Increasing world wealth by 20% would have minimal impact on making people's lives better if that increase is concentrated among the top 2%. It would have a huge impact if it's concentrated in the bottom 50%.
So if you have a particular intervention in mind, ask yourself, "Is this just going to make the rich richer, or is it going to make the poor richer?" An intervention that eliminates malaria, or provides communication services in refugee camps, or otherwise assists the most disadvantaged, can be of great value without triggering your fears.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-06-13T14:31:12.721Z · LW(p) · GW(p)
I'm not sure you're right. If the crucial factors (decent nutrition, access to computing power, free time; have I missed something?) become more widely distributed, the odds of all sorts of innovation, including UFAI, might go up.
comment by [deleted] · 2013-06-13T00:34:22.448Z · LW(p) · GW(p)
If a good outcome requires that influential people cooperate and have longer time-preferences, then slower economic growth than expected might increase the likelihood of a bad outcome.
It's true that periods of increasing economic growth haven't always led to great technology decision-making (the Cold War), but I'd expect an economic slowdown, especially in a democratic country, to make people more willing to take technological risks (to restore economic growth), and less likely to cooperate with, listen to, or fund cautious dissenters (like people who say we should be worried about AI).
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-13T00:36:26.025Z · LW(p) · GW(p)
One could just as easily argue that an era of slow growth will take technological pessimism seriously, while an era of fast growth is likely to want to go gung-ho full-speed-ahead on everything.
Replies from: None, Luke_A_Somers↑ comment by Luke_A_Somers · 2013-06-13T15:15:23.664Z · LW(p) · GW(p)
A culture in which they go gung-ho full-speed-ahead on everything might build autonomous AI into a robot, and it turns out to be unfriendly in some notable way while not also being self-improving.
Seems to me like that would be one of the most reliable paths to getting people to take FAI seriously. A big lossy messy factory recall, lots of media attention, irate customers.
comment by Vaniver · 2013-06-12T20:21:48.725Z · LW(p) · GW(p)
Will 1% and 4% RGDP growth worlds have the same levels of future shock? A world in which production doubles every 70 years and a world in which production doubles every 18 years seem like they will need very different abilities to deal with change.
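For reference, those doubling times follow directly from the growth rates; a quick check:
```python
# Doubling time implied by a constant annual growth rate g: ln(2) / ln(1 + g).
from math import log

for g in (0.01, 0.04):
    print(f"{g:.0%} growth -> production doubles every {log(2) / log(1 + g):.0f} years")
# 1% -> about 70 years; 4% -> about 18 years
```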
I suspect that more future shock would lead to more interest in stable self-improvement, especially on the institutional level. But it's not clear what causes some institutions to do the important but not urgent work of future-proofing and others not to; it may be the case that in the more sedate 1% growth world, more effort will be spent on future-proofing, which is good news for FAI relative to UFAI.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-13T14:49:38.723Z · LW(p) · GW(p)
We've already had high levels of future shock and it hasn't translated into any such interest. This seems like an extremely fragile and weak transmission mechanism. (So do most transmission mechanisms of the form, "Faster progress will lead people to believe X which will support ideal Y which will lead them to agree with me/us on policy Z.")
comment by D_Alex · 2013-06-14T08:43:59.172Z · LW(p) · GW(p)
Eliezer, this post reeks of an ego trip.
"I wish I had more time, not less, in which to work on FAI"... Okay, world, lets slow right down for a while. And you, good and viruous people with good for technological or economic advancement: just keep quiet until it is safe.
comment by Armok_GoB · 2013-06-14T22:42:01.549Z · LW(p) · GW(p)
Am I correct in assuming you have an ethical injunction against the obvious solution of direct sabotage? 'Cause other people don't, but they would probably just slow you down if you're already trying it.
Edit: direct sabotage does not mean "blow things up", it means "subtly trigger crabs-in-a-bucket effects via clever voting" or something.
comment by mira · 2013-06-13T09:13:06.875Z · LW(p) · GW(p)
We know economic growth cannot be sustained long-term (do the math). If we were to think longer term (rather than the current "race to AI" thinking), we would still need to decide when to stop pursuing economic growth. Maybe, since we have to stop growth somewhere, stopping it before AI is a good idea.
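A sketch of the arithmetic being gestured at, with the growth rate and time horizons picked arbitrarily for illustration:
```python
# Compounding 2% annual growth over long horizons (illustrative numbers only).
growth = 0.02
for years in (1_000, 10_000):
    factor = (1 + growth) ** years
    print(f"{years:>6} years of {growth:.0%} growth -> a factor of {factor:.2e}")
# Roughly 4e8 after 1,000 years and 1e86 after 10,000 years; the latter exceeds
# the ~1e80 atoms in the observable universe, so growth tied to physical
# resources cannot continue at that rate indefinitely.
```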
Replies from: Baughn↑ comment by Baughn · 2013-06-13T20:02:55.441Z · LW(p) · GW(p)
We have to cope with humanity as it is, at least until there's an AI overlord. Ignoring the other counterarguments for a moment, does voluntarily stopping economic expansion and halting technological progress sound like something humanity is capable of doing?
Replies from: mira↑ comment by mira · 2013-06-13T21:58:34.055Z · LW(p) · GW(p)
I agree, humanity is probably not capable of stopping economic growth. And personally I'm glad of that, because I would choose UFAI over humanity in its current form (partly because in this form humanity is likely to destroy itself before long).
The point I was making is that if you value UFAI as negatively as EY does, maybe avoiding UFAI is worth forgoing all AI.
comment by CannibalSmith · 2013-06-12T21:28:11.830Z · LW(p) · GW(p)
I think the nearest deadline is our own lifespans. FAI is no good if you're dead before it comes along. Also cryonics requires fast economic growth.
Replies from: ArisKatsaris, Eliezer_Yudkowsky, James_Miller, Will_Newsome↑ comment by ArisKatsaris · 2013-06-12T22:09:35.612Z · LW(p) · GW(p)
FAI is no good if you're dead before it comes along
I think FAI is very good even if I'm dead before it comes along.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-12T21:32:00.642Z · LW(p) · GW(p)
This is not how I make choices.
↑ comment by James_Miller · 2013-06-12T22:10:24.917Z · LW(p) · GW(p)
Cryonics requires a good singularity.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2013-06-13T01:31:23.277Z · LW(p) · GW(p)
Cryonics requires a good singularity.
Why? This isn't obvious to me. It seems plausible that we could develop the technology to directly repair bodies without a singularity. Cryonics being used for uploading seems likely to only occur in a singularity situation, but even then it seems sufficiently far off that it isn't clear.
↑ comment by Will_Newsome · 2013-06-13T00:22:26.801Z · LW(p) · GW(p)
What the Hell, Hero?