What To Do: Environmentalism vs Friendly AI (John Baez)
post by XiXiDu · 2011-04-24T18:03:07.743Z · LW · GW · Legacy · 63 comments
In a comment on my last interview with Yudkowsky, Eric Jordan wrote:
John, it would be great if you could follow up at some point with your thoughts and responses to what Eliezer said here. He’s got a pretty firm view that environmentalism would be a waste of your talents, and it’s obvious where he’d like to see you turn your thoughts instead. I’m especially curious to hear what you think of his argument that there are already millions of bright people working for the environment, so your personal contribution wouldn’t be as important as it would be in a less crowded field.
I’ve been thinking about this a lot.
[...]
This a big question. It’s a bit self-indulgent to discuss it publicly… or maybe not. It is, after all, a question we all face. I’ll talk about me, because I’m not up to tackling this question in its universal abstract form. But it could be you asking this, too.
[...]
I’ll admit I’d be happy to sit back and let everyone else deal with these problems. But the more I study them, the more that seems untenable… especially since so many people are doing just that: sitting back and letting everyone else deal with them.
[...]
I think so far the Azimuth Project is proceeding in a sufficiently unconventional way that while it may fall flat on its face, it’s at least trying something new.
[...]
The most visible here is the network theory project, which is a step towards the kind of math I think we need to understand a wide variety of complex systems.
[...]
I don’t feel satisfied, though. I’m happy enough—that’s never a problem these days—but once you start trying to do things to help the world, instead of just have fun, it’s very tricky to determine the best way to proceed.
Link: johncarlosbaez.wordpress.com/2011/04/24/what-to-do/
His answer, as far as I can tell, seems to be that working on his Azimuth Project trumps working directly on friendly AI, or supporting it indirectly by earning and donating money.
It seems that he and other people who understand all the arguments in favor of friendly AI, and yet decide to ignore it or dismiss it as infeasible, are rationalizing.
I myself took a different route: rather than coming up with justifications for why it would be better to work on something else, I tried to prove to myself that the whole idea of AI going FOOM is somehow flawed.
I still have some doubts though. Is it really enough to observe that the arguments in favor of AI going FOOM are logically valid? When should one disregard tiny probabilities of vast utilities and wait for empirical evidence? Yet I think that compared to the alternatives the arguments in favor of friendly AI are water-tight.
The reason I and other people seem reluctant to accept that it is rational to support friendly AI research is that the consequences are unbearable. Robin Hanson recently described the problem:
Reading the novel Lolita while listening to Winston’s Summer, thinking a fond friend’s companionship, and sitting next to my son, all on a plane traveling home, I realized how vulnerable I am to needing such things. I’d like to think that while I enjoy such things, I could take them or leave them. But that’s probably not true. I like to think I’d give them all up if needed to face and speak important truths, but well, that seems unlikely too. If some opinion of mine seriously threatened to deprive me of key things, my subconscious would probably find a way to see the reasonableness of the other side.
So if my interests became strongly at stake, and those interests deviated from honesty, I’ll likely not be reliable in estimating truth.
I believe that people like me feel that to fully accept the importance of friendly AI research would deprive us of the things we value and need.
I feel that I wouldn't be able to justify the things I value on the grounds of needing them. It feels like I could and should cut out everything that isn't either directly contributing to FAI research or helping me earn more money to contribute.
Some of us value and need things that consume a lot of time... that's the problem.
63 comments
Comments sorted by top scores.
comment by nhamann · 2011-04-25T02:08:32.074Z · LW(p) · GW(p)
It was interesting to see the really negative comment from (presumably the real) Greg Egan:
The Yudkowsky/Bostrom strategy is to contrive probabilities for immensely unlikely scenarios, and adjust the figures until the expectation value for the benefits of working on — or donating to — their particular pet projects exceed the benefits of doing anything else. Combined with the appeal to vanity of “saving the universe”, some people apparently find this irresistible, but frankly, their attempt to prescribe what rational altruists should be doing with their time and money is just laughable, and it’s a shame you’ve given it so much air time.
↑ comment by loqi · 2011-04-25T04:43:11.932Z · LW(p) · GW(p)
Speaking as someone whose introduction to transhumanist ideas was the mind-altering idea shotgun titled Permutation City, I've been pretty disappointed with his take on AI and the existential risks crowd.
A recurring theme in Egan's fiction is that "all minds face the same fundamental computing bottlenecks", serving to establish the non-existence of large-scale intrinsic cognitive disparities. I always figured this was the sort of assumption that was introduced for the sake of telling a certain class of story - the kind that need only be plausible (e.g., "an asteroid is on course to hit us") - and didn't think much more about it.
But from what I recall of Egan's public comments on the issue of foom (I lack links, sorry) he appears to have a firm intuition that it's impossible, grounded by handwaving "halting problem is unsolvable"-style arguments. Which in turn seemingly forms the basis of his estimation of uFAI scenarios as "immensely unlikely". With no defense on offer for his initial "cognitive universality" assumption, he takes the only remaining course of argumentation...
but frankly, their attempt to prescribe what rational altruists should be doing with their time and money is just laughable
...derision.
This spring...
Egan, musing: Some people apparently find this irresistible
Greg Egan is...
Egan, screaming: The probabilities are approaching epsilon!!!
Above the Argument.
Egan, grimly: Yudkowsky is off the air.
↑ comment by CarlShulman · 2011-04-25T05:15:40.619Z · LW(p) · GW(p)
A recurring theme in Egan's fiction is that "all minds face the same fundamental computing bottlenecks", serving to establish the non-existence of large-scale intrinsic cognitive disparities.
This still allows for AIs to be millions of times faster than humans, undergo rapid population explosion and reduced training/experimentation times through digital copying, be superhumanly coordinated, bring up the average ability in each field to peak levels (as seen in any existing animal or machine, with obvious flaws repaired), etc. We know that human science can produce decisive tech and capacity gaps, and growth rates can change enormously even using the same cognitive hardware (Industrial Revolution).
I just don't see how even extreme confidence in the impossibility of qualitative superintelligence rules out an explosion of AI capabilities.
↑ comment by loqi · 2011-04-25T05:41:57.920Z · LW(p) · GW(p)
Agreed, thanks for bringing this up - I threw away what I had on the subject because I was having trouble expressing it clearly. Strangely, Egan occasionally depicts civilizations rendered inaccessible by sheer difference of computing speed, so he's clearly aware of how much room is available at the bottom.
↑ comment by Steve_Rayhawk · 2011-04-25T20:05:11.905Z · LW(p) · GW(p)
Previous arguments by Egan:
http://metamagician3000.blogspot.com/2009/09/interview-with-greg-egan.html
Sept. 2009, from an interview in Aurealis.
http://metamagician3000.blogspot.com/2008/04/transhumanism-still-at-crossroads.html
From April 2008. Only in the last few comments does Egan actually express an argument for the key intuition that has been driving the entire rest of his reasoning.
(To my eyes, this intuition of Egan's refers to a completely irrelevant hypothetical, in which humans somehow magically and reliably are always able to acquire possession of and make appropriate use of any insentient software tools that will be required, at any given moment, in order for humans to maintain hypothetical strategic parity with any contemporary AIs.)
↑ comment by [deleted] · 2011-04-25T08:11:08.812Z · LW(p) · GW(p)
Greg Egan's view was discussed here a few months ago.
↑ comment by XiXiDu · 2011-04-25T09:34:54.882Z · LW(p) · GW(p)
I think Greg Egan makes an important point there that I have mentioned before and John Baez seems to agree:
I agree that multiplying a very large cost or benefit by a very small probability to calculate the expected utility of some action is a highly unstable way to make decisions.
Actually this was what I had in mind when I voiced my first attempt at criticizing the whole endeavour of friendly AI, I just didn't know what exactly was causing my uneasiness.
I am still confused about it, but I think it isn't much of a problem as long as friendly AI research is not being funded at the cost of mitigating other risks that are grounded more thoroughly in empirical evidence than in the observation of logically valid arguments.
To be clear, as I wrote in the post above, I think that there are very strong arguments in support of friendly AI research. I believe that it is currently the most important cause one could support, but I also think that there is a limit to what one should do in the name of mere logical implications. Therefore I partly agree with Greg Egan.
ETA
There's now another comment by Greg Egan:
All of Yudkowsky’s arguments about the dangers and benefits of AI are just appeals to intuition of various kinds, as indeed are the counter-arguments. So I wouldn’t hold your breath waiting for that to be settled. If he wants to live his own life based on his own hunches, that’s fine, but I see no reason for anyone else to take his land-grabs on terms like “rationality” and “altruism” at all seriously, merely because it’s not currently possible to provide mathematically rigorous proofs that his assignments of probabilities to various scenarios are incorrect. There’s an almost limitless supply of people who believe that their ideas are of Earth-shattering importance, and that it’s incumbent on the rest of the world to either follow them or spend their life proving them wrong.
But clearly you’re showing no signs of throwing in productive work to devote your life to “Friendly AI” — or of selling a kidney in order to fund other people’s research in that area — so I should probably just breathe a sigh and relief, shut up and go back to my day job, until I have enough free time myself to contribute something useful to the Azimuth Project, get involved in refugee support again, or do any of the other “Rare Disease for Cute Kitten” activities on which the fate of all sentient life in the universe conspicuously does not hinge.
↑ comment by shokwave · 2011-04-25T05:43:36.035Z · LW(p) · GW(p)
Surely not ... Does Greg Egan understand how "a small chance every year" can build into "almost certain by this date"? Because that was convincing for me:
I can easily see humans building work-arounds or stop-gaps for most major problems, and continuing business mostly as usual. We run out of fossil fuels, so we get over our distrust of nuclear energy because it's the only way. We don't slow environmental damage enough, so agriculture suffers, so we get over our distrust of genetically modified plants because it's the only way. And so on.
Then some article somewhere reminded me that business as usual includes repeated attempts at artificial intelligence. And runaway AI is not something we can build a work-around for; given a long enough timespan and faith in human ingenuity, we'll push through all the other non-instant-game-over events until we finally succeed at making the game end instantly.
↑ comment by Vladimir_Nesov · 2011-04-25T20:29:31.366Z · LW(p) · GW(p)
Does Greg Egan understand how "a small chance every year" can build into "almost certain by this date"?
If independent.
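For illustration, a minimal sketch of the arithmetic (hypothetical numbers; it assumes a constant, independent per-year risk, which is exactly the caveat Nesov's reply flags):

```python
# Hypothetical annual risks; assumes the risk is constant and independent each year.
def cumulative_risk(annual_p: float, years: int) -> float:
    """Probability the event happens at least once within `years` years."""
    return 1 - (1 - annual_p) ** years

for p in (0.001, 0.01, 0.02):
    print(f"annual {p:.1%}: 50 yrs -> {cumulative_risk(p, 50):.0%}, "
          f"200 yrs -> {cumulative_risk(p, 200):.0%}")
# annual 0.1%: 50 yrs -> 5%, 200 yrs -> 18%
# annual 1.0%: 50 yrs -> 39%, 200 yrs -> 87%
# annual 2.0%: 50 yrs -> 64%, 200 yrs -> 98%
```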
↑ comment by CarlShulman · 2011-04-25T02:58:05.023Z · LW(p) · GW(p)
Yep, Egan created Yudkowsky and Overcoming Bias/LessWrong stand-ins for mockery in his most recent novel, Zendegi. There was a Less Wrong discussion at the time.
comment by CarlShulman · 2011-04-24T19:46:21.701Z · LW(p) · GW(p)
I believe that people like me feel that to fully accept the importance of friendly AI research would deprive us of the things we value and need.
The idea of moral demandingness is really a separate issue. The ability to save thousands of lives through donation to 3rd world public health poses a similar problem for many people. Risks like nuclear war (plus nuclear winter/irrecoverable social collapse), engineered pandemics, and other non-AI existential risks give many orders of magnitude increased stakes (within views that weight future people additively). AI may give some additional orders of magnitude, but in terms of demandingness it's not different in kind.
Most people who care a lot about 3rd world public health, e.g. Giving What We Can members, are not cutting out their other projects, and come to stable accommodations between their various concerns, including helping others, family, reading books, various forms of self-expression, and so forth.
If you have desires pulling you towards X and Y, just cut a deal between the components of your psychology and do both X and Y well, rather than feeling conflicted and doing neither well. See this Bostrom post, or this Yudkowsky one on the virtues of divvying up specialized effort into separate buckets.
↑ comment by torekp · 2011-04-24T23:08:17.365Z · LW(p) · GW(p)
Carl, you hit the biggest nail on the head. But I think there's another nail there. If not for XiXiDu, then for many others. Working on fooming AI issues makes one a weirdo. Wearing a tinfoil hat would only be slightly less embarrassing. Working on environmental problems is downright normal, at least within some (comfortably large) social circles.
Back to that biggest nail - it needs another whack. AI threatens to dramatically worsen the world within our children's lifetimes. Robin Hanson, sitting next to his son, will feel significantly less comfortable upon thinking such thoughts. That provides a powerful motive to rationalize the problem away - or to worry at it, I suppose, depending on one's personality, but I find denial to be more popular than worrywarting.
↑ comment by CarlShulman · 2011-04-24T23:32:42.882Z · LW(p) · GW(p)
I agree with these points. I was responding to XiXiDu's focus in his post about availability of time and resources for other interests.
↑ comment by XiXiDu · 2011-04-25T16:10:31.911Z · LW(p) · GW(p)
The idea of moral demandingness is really a separate issue.
I don't see what difference it makes if you are selfish or not regarding friendly AI research. I believe that being altruistic is largely instrumental in maximizing egoistic satisfactions.
Thanks for the post by Nick Bostrom. But it only adds to the general confusion.
There seem to be dramatic problems with both probability-utility calculations and moral decision making. Taking those problems into account makes it feel like one could just as well flip a coin to decide.
Michael Anissimov recently wrote:
For instance, you must have made decisions for your children that were more in alignment with what they would want if they were smarter. If you made judgments in alignment with their actual preferences (like wanting to eat candy all day — I don’t know your kids but I know a lot of kids would do this), they would suffer for it in the longer term.
This sounds good but seems to lead to dramatic problems. In the end it is merely an appeal to intuition without any substance.
If you don't try to satisfy your actual preferences, what else?
In the example stated by Anissimov, what actually happens is that parents try to satisfy their own preferences by not allowing their children to die of candy intoxication.
If we were going to disregard our current preferences and postpone having fun in favor of gathering more knowledge then we would eventually end up as perfectly rational agents in static game theoretic equilibria.
The problem with the whole utility-maximization heuristic is that it eventually does deprive us of human nature by reducing our complex values to mere game theoretic models.
Part of human nature, what we value, is the way we like to decide. It won't work to just point at hyperbolic discounting and say it is time-inconsistent and therefore irrational.
Human preferences are always actual; we do not naturally divide decisions into instrumental and terminal goals.
I don't want a paperclip maximizer to burn the cosmic commons. I don't want to devote most of my life to mitigating that risk. This is not a binary decision; that is not how human nature seems to work.
If you try to force people into a binary decision between their actual preferences and some idealistic far mode, then you cause them to act according to academic considerations rather than the complex human values they are supposed to protect.
Suppose you want to eat candies all day and are told that you can eat a lot more candies after the Singularity, if only you work hard enough right now. The problem is that there is always another Singularity that promises more candies. At what point are you going to actually eat candies? But that is a rather academic problem. There is a more important problem concerning human nature, as demonstrated by extreme sports. Humans care much more about living their lives according to their urges than about maximizing utility.
What does it even mean to "maximize utility"? Many sportsmen and sportswomen are aware of the risks associated with their favorite activity. Yet they take the risk.
It seems that humans are able to assign infinite utility to pursuing a certain near-mode activity.
Deliberately risking your life doesn't seem to maximize experience utility, since you could get much more of the same or a similar experience by other means. And how can one "maximize" terminal decision utility?
↑ comment by [deleted] · 2011-04-26T08:22:05.089Z · LW(p) · GW(p)
When applying your objections to my own perspective, I find that I see my actions that aren't focused on reducing involuntary death (eating candies, playing video games, sleeping) as necessary for the actual pursuit of my larger goals.
I am a vastly inefficient engine. My productive power goes to the future, but much of it bleeds away - not as heat and friction, but as sleep and candy-eating. Those things are necessary for the engine to run, but they aren't necessary evils. I need to do them to be happy, because a happy engine is an efficient one.
I recognized two other important points. One is that I must work daily to improve the efficiency of my engine. I stopped playing video games, so I could work harder. I stopped partying so often so I could be more productive. Etcetera.
The other point is that it's crucial to remember why I'm doing this stuff in the first place. I only care about reducing existential risk and signing up for cryonics and destroying death because of the other things I care about: eating candies, sleeping, making friends, traveling, learning, improving, laughing, dancing, drinking, moving, seeing, breathing, thinking... I am trying to satisfy my actual preferences.
The light at the end of the tunnel is utopia. If I want to get there, I need to make sure the engine runs clean. I don't think working on global warming will do it - but if I did, that's where I'd be putting in my time.
comment by XiXiDu · 2011-04-25T18:51:04.628Z · LW(p) · GW(p)
There is a "discussion" unfolding between Eliezer Yudkowsky and Greg Egan; scroll down here. It yielded this highly interesting comment by Eliezer Yudkowsky:
I don’t think the odds of us being wiped out by badly done AI are small. I think they’re easily larger than 10%. And if you can carry a qualitative argument that the probability is under, say, 1%, then that means AI is probably the wrong use of marginal resources – not because global warming is more important, of course, but because other ignored existential risks like nanotech would be more important. I am not trying to play burden-of-proof tennis. If the chances are under 1%, that’s low enough, we’ll drop the AI business from consideration until everything more realistic has been handled.
↑ comment by [deleted] · 2011-04-28T05:26:16.575Z · LW(p) · GW(p)
Finally, some quantification!
Here's a sequence of interpretations of this passage, in decreasing order of strength:
- The odds of us being wiped out by badly done AI are easily larger than 10%
- The odds of us being wiped out by badly done AI are larger than or equal to 10%
- There can be no compelling qualitative argument that the probability of us being wiped out by badly done AI is less than 1%
- There is a compelling argument that the probability of us being wiped out by badly done AI is greater than or equal to 1%
I would be very grateful to see the weakest of these claims, number 4, supported with some calculations.
Of course I wish that there was a date attached to these claims. Easily greater than 10% chance that we'll be wiped out in the next 50 years?
↑ comment by XiXiDu · 2011-04-28T09:08:27.174Z · LW(p) · GW(p)
Of course I wish that there was a date attached to these claims. Easily greater than 10% chance that we'll be wiped out in the next 50 years?
Eliezer Yudkowsky says:
John did ask about timescales and my answer was that I had no logical way of knowing the answer to that question and was reluctant to just make one up.
[...]
As for guessing the timescales, that actually seems to me much harder than guessing the qualitative answer to the question “Will an intelligence explosion occur?”
↑ comment by [deleted] · 2011-04-28T17:32:22.517Z · LW(p) · GW(p)
We seem to have caught Yudkowsky in a moment of hypocrisy: he doesn't know when an intelligence explosion will occur.
↑ comment by ata · 2011-04-29T09:22:21.682Z · LW(p) · GW(p)
The post "I don't know" is about refusing to assign probability distributions at all. That's entirely different from refusing to assign an overly focused probability distribution when your epistemic state doesn't actually provide you enough information to do so; the latter is the technical way to say "I don't know" when you really don't know. In this case I do recall Eliezer saying at some point (something like) that he spends about 50% of his planning effort on scenarios where the singularity happens before 2040(?) and about 50% on scenarios where it happens after 2040, so he clearly does have a probability distribution he's working with, it's just that the probability mass is spread pretty broadly.
↑ comment by [deleted] · 2011-04-29T14:46:08.831Z · LW(p) · GW(p)
I agree that "spread out probability mass" is a good technical replacement for "I don't know." Note that the more spread out it is, the less concentrated it is in the near future. That is, the less confident you are betting on this particular random variable (time until human extinction), the safer you should feel from human extinction.
"50% before 2040" doesn't sound like such a high-entropy RV to me, though...
↑ comment by hairyfigment · 2011-04-30T02:28:51.936Z · LW(p) · GW(p)
Well, let's start with the conditional probability if humans don't find some other way to kill ourselves or end civilization before it comes to this. Eliezer seems to argue the following:
A. Given we survive long enough, we'll find a way to write a self-modifying program that has, or can develop, human-level intelligence. (The capacity for self-modification follows from 'artificial human intelligence,' but since we've just seen links to writers ignoring that fact I thought I'd state it explicitly.) This necessarily gives the AI the potential for greater-than-human intelligence due to our known flaws. (I don't know how we'd give it all of our disadvantages even if we wanted to. If we did, then someone else could and eventually would build an AI without such limits.)
B. Given A, the intelligence would improve itself to the point where we could no longer predict its actions in any detail.
C. Given B, the AI could escape from any box we put it in. (IIRC this excludes certain forms of encryption, but I see no remotely credible scenario in which we sufficiently encrypt every self-modifying AI forever.)
D. Given B and C, the AI could wipe out humanity if it 'wanted' to do so.
My estimate for the probability of some of these fluctuates from day to day, but I tend to give them all a high number. Claim A in particular seems almost undeniable given the evidence of our own existence. (I only listed that one separately so that people who want to argue can do so more precisely.) And when it comes to the further claim - call it Claim E - that if you tell a computer to kill you it will try to kill you, I don't think the alternative has enough evidence to even consider. So I find it hard to imagine anyone rationally getting a total lower than 12%, or just under 1/8.
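As a worked illustration of where a floor around 12% can come from (the per-claim numbers are hypothetical, not hairyfigment's; only the chain multiplication is the point):

```latex
% Hypothetical, evenly split per-claim probabilities; the conditional estimate
% is just their product, and even modest credences compound to about 1/8.
P(\text{doom}\mid\text{survival}) \;=\; P(A)\,P(B\mid A)\,P(C\mid B)\,P(D\mid C)\,P(E\mid D)
\;\ge\; 0.65^{5} \;\approx\; 0.12
```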
Now that all applies to the conditional probability (if human technological civilization lives that long). I don't know how to evaluate the timescale involved or the chance of us killing ourselves before the issue would come up. The latter certainly feels like less than 11/12.
The question would grow in importance if we found out that we needed to convince a nationally important number of people to pay attention to the issue before someone creates a theory of Friendly AI including AI goal stability. I really hope that doesn't apply, because I suspect that if it does we're screwed.
↑ comment by timtyler · 2011-04-26T20:33:06.040Z · LW(p) · GW(p)
From E.Y on the same page, we have this:
I’m generally reluctant to assign exact probabilities to topics like these; I consider it a sin, like giving five significant digits on something you cannot calculate to 1 part in 10,000 precision.
We should be assigning error bars to our probabilities? A cute idea - but surely life is too short for that.
↑ comment by timtyler · 2011-04-26T19:47:09.528Z · LW(p) · GW(p)
There is a "discussion" unfolding between Eliezer Yudkowsky and Greg Egan; scroll down here. It yielded this highly interesting comment by Eliezer Yudkowsky:
I don’t think the odds of us being wiped out by badly done AI are small. I think they’re easily larger than 10%. [...]
I would take the other side of a bet on that at 1000:1 odds.
:-) <--
↑ comment by steven0461 · 2011-04-25T19:25:24.339Z · LW(p) · GW(p)
The theory here seems to be that if someone believes preserving the environment is the most important thing, you can explain to them why preserving the environment is not the most important thing, and they will stop believing that preserving the environment is the most important thing. But is there any precedent for this result actually happening?
comment by JoshuaZ · 2011-04-25T03:18:11.642Z · LW(p) · GW(p)
It seems that he and other people who understand all the arguments in favor of friendly AI and yet decide to ignore it, or disregard it as unfeasible, are rationalizing.
They may have genuinely different estimates for various probabilities. Don't be so quick to assume that people who disagree are rationalizing. That's an easy way to get into a death spiral.
Yet I think that compared to the alternatives the arguments in favor of friendly AI are water-tight.
As I've pointed out here before, a lot of the versions of fooming that are discussed here seem to rest on assuming massive software optimization, not just hardware optimization. This runs into strongly believed theoretical comp sci limits such as the likelihood that P != NP. These issues also come up in hardware design. It may be my own cognitive biases in trying to make something near my field feel useful, but it does seem like this sort of issue is not getting sufficient attention when discussing the probability of AI going foom.
↑ comment by roystgnr · 2011-04-25T18:49:22.663Z · LW(p) · GW(p)
This runs into strongly believed theoretical comp sci limits such as the likelihood that P != NP.
Does it? There are certainly situations (breaking encryption) where the problem statement looks something like "I'd like my program to be able to get the single exact solution to this problem in polynomial time", but for optimization we're often perfectly happy with "I'd like my program to be able to get close to the exact solution in polynomial time", or even just "I'd like my program to be able to get a much better solution than people's previous intuitive guesses".
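As a concrete toy illustration of the "close to the exact solution in polynomial time" case (my example, not roystgnr's): the classic greedy 2-approximation for minimum vertex cover runs in linear time even though solving the problem exactly is NP-hard.

```python
# Greedy 2-approximation for minimum vertex cover: whenever an edge is still
# uncovered, take both of its endpoints.  The resulting cover is at most twice
# the optimum, and it is found in time linear in the number of edges.
def vertex_cover_2approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

toy_graph = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]
print(vertex_cover_2approx(toy_graph))  # {1, 2, 3, 4}; an optimal cover is {2, 4}
```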
↑ comment by JoshuaZ · 2011-04-25T19:35:53.632Z · LW(p) · GW(p)
Yes, this is one of the issues that does need to be thought about, but there are limits to this. There has been work in the last few years (especially for problems in graph theory) showing that being able to find close-to-optimal solutions is equivalent to being able to find optimal solutions.
The encryption example is also an interesting one, in that many foom scenarios involve the hypothetical AI gaining control over lots of the world's computer systems. And in fact, no known encryption system rests on a problem which is NP-complete, so for all we know we could have P != NP in a strong sense and yet still have essentially all encryption be vulnerable. Also, the AI might find security vulnerabilities completely independent of the math, due to implementation issues.
Another reason, pointed out by cousin_it, that P != NP might not be a severe barrier is that our AI would be unlikely to need to solve general NP-complete problems, just real-world instances, which are likely to have additional regularity in them. Moreover, for most NP-complete problems, random instances are generally easy (indeed, this is part of what went wrong in Deolalikar's attempted proof: the aggregate behavior of NP-complete problems like k-SAT for fixed k >= 3 looks a lot like 2-SAT, which is in P). So something trying to foom might not even run into hard instances when engaging in practical applications of NP-hard problems (such as where, say, graph coloring comes up in optimizing memory allocation). And there are some NP-hard problems where one really does care about a restricted form of the problem. One example is protein folding, where it seems that most models of protein folding are at least NP-hard.
However, when trying to foom, an AI will probably want to directly improve the algorithms it is using on a regular basis. While not directly a P/NP issue, it is noteworthy that for some common procedures, such as finding gcds and solving linear programs, we have what are in many ways close-to-optimal algorithms (the situation is more complicated for linear programming than that, but the general point is correct). So the AI probably cannot focus primarily on modifying software, because a lot of the basic things that the AI would want to optimize are already close to best possible in both the average case and the worst case.
There are also other problems with this sort of thing. For example, it might turn out that P != NP but that there's a general NP-complete solver that runs in almost polynomial time with a really small constant in the big-O. There are, as I understand it, some very weak results showing that this can't happen for some functions very close to polynomial growth.
There are other things that could go wrong. If, for example, quantum computers are much more powerful than classical systems, then an AI with access to one could just side-step almost all of these issues. We can't, for example, even show that BQP is contained in NP. If BQP is larger than anticipated, then this is a serious problem. Moreover, even if P != NP in a strong sense, it is plausible that there's a quantum algorithm which can solve NP-complete problems in practically close to polynomial time (this is more plausible than the similar idea in the previous paragraph about the classical case). So, if we end up with practical quantum computing, running an AI on it would be a bad idea.
There are also a handful of more exotic possibilities. Scott Aaronson has discussed using miniature wormholes to aid computation. And if you can do this, then all of PSPACE collapses. From the perspective of an AI trying to foom, this is good. From our perspective, probably not. The good news here is that the tech level required is probably well beyond anything we have, and so probably no AI would have access to this sort of thing unless it had already foomed or was so close to fooming that it makes no difference.
The overall upshot of this is that while an AI with access to a quantum computer could plausibly foom with most of the focus on software improvement, comp sci issues suggest that an AI trying to foom on a classical system would need to engage in both hardware and software improvement. Trying to improve your hardware is tough without either prior access to molecular nanotech (which is its own can of nastiness regardless of whether humans or AIs have access) or a lot of cooperative humans who have given one access to circuit board and chip manufacturing facilities. Moreover, hardware generally produces diminishing marginal returns as long as the hardware remains the same type (i.e. just improving on classical computers), and could run into its own problems, as design issues themselves involve problems which are computationally complex (although cousin_it again has pointed out that this could be less of a problem than one might think, since there are regularities that might show up).
I will be able to sleep better at night once we've proven that P != NP.
(Ok. There's a lot of material here, and some of it probably should get expanded. How would people feel about expanding this into a top-level post with nice citations, better organization, and all that?)
↑ comment by CarlShulman · 2011-04-26T19:28:09.568Z · LW(p) · GW(p)
I see a bit of a disconnect here from historical algorithmic improvements. In the last five decades humans have created algorithms for solving many problems that had previously been intractable, and given orders of magnitude improvement on others. Many of these have come from math/compsci innovation that was not particularly hardware-limited, i.e. if you had the same (or a larger/smarter-on-average/better-organized) research community but with frozen primitive hardware many of the insights would have been found.
At the moment there are some problems for which we have near-optimal algorithms, or where we can show that near-optimal algorithms are out of reach but further performance improvements are nonetheless unlikely. There are also problems where we are clearly far from the reachable frontier (whether that is near-optimal performance, or just the best that can be done given resource constraints).
The huge swathe of skills wielded by humans but not by existing AI systems shows that, in terms of behavioral capabilities, there is a lot of room for growth in capacity that does not depend on outperforming the algorithms where we have near-optimal methods (or optimal under resource constraint). The fact that we are the first species on Earth to reach civilization-supporting levels of cognitive capacity suggests that there is room to grow beyond that in terms of useful behavioral capacities (which may be produced using various practical strategies that involve different computational problems) before hitting the frontier of feasibility. So long as enough domains have room to grow, they can translate into strategic advantage even if others are stable.
Also, I would note that linear performance gains on one measure can lead to much greater gains on another, e.g. linear improvements in predicting movements in financial markets translate to exponential wealth gains, gains in social manipulation or strategic acumen give disproportionate returns when they enable one to reliably outmaneuver one's opposition, linear gains in chess performance translate into an exponential drop-off in the number of potential human challengers, etc.
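A toy sketch of the first of those examples, with entirely hypothetical numbers: a constant per-period predictive edge compounds multiplicatively, so linear steps in edge become exponential gaps in outcome.

```python
# Hypothetical: a fixed per-period "edge" compounds, so linearly better
# prediction produces exponentially diverging wealth.
def final_wealth(edge: float, periods: int, start: float = 1.0) -> float:
    return start * (1.0 + edge) ** periods

for edge in (0.01, 0.02, 0.03):     # linear increases in edge...
    print(edge, final_wealth(edge, 500))
# ...give roughly 145x, 20,000x, and 2.6 million x after 500 periods.
```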
↑ comment by JoshuaZ · 2011-04-26T20:55:23.193Z · LW(p) · GW(p)
In the last five decades humans have created algorithms for solving many problems that had previously been intractable, and given orders of magnitude improvement on others. Many of these have come from math/compsci innovation that was not particularly hardware-limited, i.e. if you had the same (or a larger/smarter-on-average/better-organized) research community but with frozen primitive hardware many of the insights would have been found.
Yes. I agree strongly with this. One major thing we've found in the last few years is just that P turns out to be large and a lot of problems have turned out to be in there that were not obviously so. If one asked people in the early 1970s whether they would expect primality testing to be in P they would probably say no. Moreover, some practical problems have simply had their implementations improved a lot even as the space and time of the algorithms remain in the same complexity classes.
There are also problems where we are clearly far from the reachable frontier (whether that is near-optimal performance, or just the best that can be done given resource constraints).
Can you expand on this? I'm not sure I follow.
So long as enough domains have room to grow, they can translate into strategic advantage even if others are stable
Sure. But that doesn't say much about how fast that growth will occur. The standard hard-takeoff narratives have the AI becoming functionally in control of its light cone in a matter of hours or at most weeks. I agree that there is likely a lot of room for improvement in cognitive capability, but the issue in this context is whether that sort of improvement is likely to occur quickly.
linear gains in chess performance translate into an exponential drop-off in the number of potential human challengers, etc.
I agree with your other examples, and it is a valid point. I don't think that a strong form of P != NP makes fooming impossible, just that it makes it much less likely. The chess example, however, has an issue that needs to be nitpicked. As I understand it, this isn't really about linear gains in chess ability translating into an exponential drop-off, but rather an artifact of the Elo system, which more or less builds that relationship in.
↑ comment by CarlShulman · 2011-04-26T22:30:06.562Z · LW(p) · GW(p)
The standard hard-takeoff narratives have the AI becoming functionally in control of its light cone in a matter of hours or at most weeks.
The human field of AI is about half a million hours old, and computer elements can operate at a million times human speed (given enough parallel elements). Many of the important discoveries were limited not by chip speeds but by the pace of CS, math, and AI researchers' thinking (with most of the work done by some thousands of people who spent much of that time eating, sleeping, goofing off, and getting up to speed on existing knowledge in the field).
With a big fast hardware base (relative to the program) and AI sophisticated enough to keep learning without continual human guidance and grok AI theory, gains comparable to the history of AI so far in a few hours or weeks would be reasonable from speedup alone.
I agree that one could have scenarios in which there are AI programs with humanlike capacities that are not yet capable of such development (e.g. a super-bloated system running on massive server farms). However, they tend to involve AI development happening very surprisingly quickly, and don't seem stable for long (bloated implementations can be made more efficient, with strong positive feedback in the improvement, and superhuman hardware will come soon after powerful AI if not before).
an artifact of the Elo system which sort of requires that linear increase corresponds to quick improvement
I agree that this is true, but people often cite chess as an example where exponential hardware increases in the same algorithms led to only linear (Elo) gains.
↑ comment by JoshuaZ · 2011-04-27T02:21:35.139Z · LW(p) · GW(p)
With a big fast hardware base (relative to the program) and AI sophisticated enough to keep learning without continual human guidance and grok AI theory, gains comparable to the history of AI so far in a few hours or weeks would be reasonable from speedup alone.
Sure. But the end result of all that might end up being very small improvements in actual algorithmic efficiency. It might turn out, for example, that the best factoring algorithms are of the same order as the current sieves, and it might turn out that after thousands of additional hours of comp sci work the end result is a very difficult proof of that. If the complexity hierarchy doesn't collapse in a strong sense, then even with lots of resources to spend just thinking about algorithms, the AI won't improve the algorithms by that much in terms of actual speed, because they can't be.
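For reference (my gloss, assuming the "current sieves" means the general number field sieve, the best published general-purpose factoring method), its heuristic running time for factoring n is

```latex
% Heuristic complexity of the general number field sieve.
\exp\!\Big(\big((64/9)^{1/3} + o(1)\big)\,(\ln n)^{1/3}\,(\ln\ln n)^{2/3}\Big)
```

i.e. sub-exponential but far from polynomial in the bit length of n; the point above is that this exponent might simply be the best achievable.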
↑ comment by CarlShulman · 2011-04-27T04:29:25.297Z · LW(p) · GW(p)
But the end result of all that might end up being very small improvements in actual algorithmic efficiency. It might turn out, for example, that the best factoring algorithms are of the same order as the current sieves, and it might turn out that after thousands of additional hours of comp sci work the end result is a very difficult proof of that.
Yes, I agreed that we should expect this on some problems, but that we don't have reason to expect it across most problems, weighted by practical impact. Especially so for the specific skills where humans greatly outperform computers, skills with great relevance for strategic advantage.
Do you think we have much reason to expect that the algorithms underlying human performance (in the problems where humans greatly outperform today's AI) are mostly near optimal at what they do, such that AIs won't have any areas of huge advantage to leverage?
↑ comment by JoshuaZ · 2011-04-28T03:59:46.057Z · LW(p) · GW(p)
Yes, I agreed that we should expect this on some problems, but that we don't have reason to expect it across most problems, weighted by practical impact, especially for the specific skills where humans greatly outperform computers, skills with great relevance for strategic advantage.
I agree about the human skills. I disagree with the claim about problems weighted by practical impact. For example, many practical problems turn out in the general case to be NP-hard or NP-complete, or are believed to be unsolvable in polynomial time. Examples include the traveling salesman problem and graph coloring, both of which come up very frequently in practical applications across a wide range of contexts.
Do you think we have much reason to expect that the algorithms underlying human performance (in the problems where humans greatly outperform today's AI) are mostly near optimal at what they do, such that AIs won't have any areas of huge advantage to leverage?
Many of those algorithms might be able to be optimized a lot. There's an argument that we should expect humans to be near optimal (since we've spent a million years evolving to be really good at face recognition, understanding other human minds etc.) and our neural nets are trained from a very young age to do this. But there's a lot of evidence that we are in fact suboptimal. Evidence for this includes Dunbar's number and a lot of classical cognitive biases such as the illusion of transparency.
But a lot of those aren't that relevant to fooming. Most humans can do facial recognition pretty fast and pretty reliably. If an AI can do that with a much tinier set of resources, more quickly and more reliably, that's really neat but that isn't going to help it go foom.
↑ comment by [deleted] · 2011-04-26T23:49:18.775Z · LW(p) · GW(p)
I agree that one could have scenarios in which there are AI programs with humanlike capacities that are not yet capable of such development (e.g. a super-bloated system running on massive server farms). However, they tend to involve AI development happening very surprisingly quickly, and don't seem stable for long (bloated implementations can be made more efficient, with strong positive feedback in the improvement, and superhuman hardware will come soon after powerful AI if not before).
I'm not sure how to interpret what you're saying. You say:
they tend to involve AI development happening very surprisingly quickly
which sounds to me like a summary of long experience. But you also seem to be talking about a scenario which you cannot possibly have experienced even once. So, I'm not sure what you're saying.
↑ comment by CarlShulman · 2011-04-27T04:12:12.271Z · LW(p) · GW(p)
I'm saying that in my experience of people working out consistent scenarios that involve AI development with sustained scarcity, the scenarios offered usually involve the development of human-level AI early, before hardware can advance much further.
↑ comment by JoshuaZ · 2011-04-27T04:10:53.618Z · LW(p) · GW(p)
Also, regarding
I agree that this is true, but people often cite chess as an example where exponential hardware increases in the same algorithms led to only linear (Elo) gains.
This is people being stupid in one direction. This isn't a good reason to be stupid in another direction. The simplest explanation is that Elo functions as something like a log scale of actual ability.
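Concretely, the standard Elo expected-score formula makes the log-scale reading explicit:

```latex
% Expected score of player A against player B under the Elo model.
E_A \;=\; \frac{1}{1 + 10^{(R_B - R_A)/400}}
```

A fixed rating gap always corresponds to the same expected-odds ratio (each 400 points is a factor of 10), so linear rating gains are multiplicative gains in odds.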
↑ comment by CarlShulman · 2011-04-27T04:15:40.686Z · LW(p) · GW(p)
Just to clarify, what do you mean by "actual ability'? In something like the 100 m dash, I can think of "actual ability" as finish time. We could construct an Elo rating based on head-to-head races of thousands of sprinters, and it wouldn't be a log scale of finish times. Do you just mean percentile in the human distribution?
↑ comment by timtyler · 2011-04-26T20:03:10.999Z · LW(p) · GW(p)
As I've pointed out here before, a lot of the versions of fooming that are discussed here seem to rest on assuming massive software optimization, not just hardware optimization. This runs into strongly believed theoretical comp sci limits such as the likelihood that P != NP. These issues also come up in hardware design. It may be my own cognitive biases in trying to make something near my field feel useful, but it does seem like this sort of issue is not getting sufficient attention when discussing the probability of AI going foom.
I don't really see how P vs NP concerns are relevant, frankly.
Moreover, when trying to foom, an AI will probably want to directly improve the algorithms it is using on a regular basis. While not directly a P/NP issue, it is noteworthy that for some common procedures, such as finding gcds and solving linear programs, we have what are in many ways close-to-optimal algorithms (the situation is more complicated for linear programming than that, but the general point is correct). So the AI probably cannot focus primarily on modifying software, because a lot of the basic things that the AI would want to optimize are already close to best possible in both the average case and the worst case.
The core technical problem of machine intelligence is building an agent that correctly performs inductive inference. That is not a problem where we are particularly close to an optimal solution. Rather, solving it looks really, really difficult. Possibly machines will crack the problem and rapidly zoom up to optimal performance. However, that would represent tremendous improvement, not a lack of self-improvement.
So: your concerns about P and NP don't seem very relevant to me.
The overall upshot of this is that while it is possible that an AI with access to a quantum computer could plausibly foom with most of the focus on software improvement, comp sci issues suggest that an AI trying to foom on a classical system would need to engage in both hardware and software improvement.
Not really. We have more hardware than we know how to use. Some call it a "hardware overhang". Software improvement alone could take us far, at today's general tech level. However, of course, faster software improvement leads to faster hardware improvement. Any suggestion that we could conceivably have one without the other seems unwarranted.
↑ comment by JoshuaZ · 2011-04-26T20:40:23.273Z · LW(p) · GW(p)
The core technical problem of machine intelligence is building an agent that correctly performs inductive inference.
This seems highly non-obvious. Even if an AI already had access to a theory of everything, and could engage in near-optimal induction, it isn't at all clear that this helps much for practical purposes. The most obvious example is cryptography, as brought up by Roy. And many other things an AI might want to do seem to simply be computationally intensive by our current methods.
Say, for example, an AI wants to synthesize a virus to modify some species out there, or some members of a species (like, say, some of those pesky humans). Well, that requires at minimum being able to do protein folding in advance. Similarly, if the AI decides it needs to use its memory more efficiently, that leads to difficult computational tasks.
It may be that we're focusing on different issues. It seems that you are focusing on "how difficult is inductive inference from a computational perspective?", which is relevant to what sorts of AI we can practically build. That's not the same question as what an AI will do once we have one.
We have more hardware than we know how to use. Some call it a "hardware overhang". Software improvement alone could take us far, at today's general tech level.
This seems irrelevant. Hardware overhang is due to the fact that the vast majority of personal clock cycles aren't being used. The vast majority of that hardware won't be accessible to our AGI unless something has already gone drastically wrong. I agree that an AGI that can get control of a large fraction of the internet-accessible computers will likely get very powerful quickly, completely separately from computational complexity questions.
It may be that we are imagining different situations. My intent was primarily to address foom narratives that put much more emphasis on software than on improvements in hardware, and to make the point that without increasing software efficiency, one could easily have diminishing marginal returns in attempts to improve hardware.
↑ comment by timtyler · 2011-04-26T21:20:12.792Z · LW(p) · GW(p)
The vast majority of that hardware won't be accessible to our AGI unless something has already gone drastically wrong. I agree that an AGI that can get control of a large fraction of the internet-accessible computers will likely get very powerful quickly, completely separately from computational complexity questions.
What's the problem? Google got quite a few people to contribute to Google Compute.
You think that a machine intelligence would be unsuccessful at coming up with better bait for this? Or that attempts to use user cycles are necessarily evil?
↑ comment by JoshuaZ · 2011-04-27T01:11:08.585Z · LW(p) · GW(p)
You think that a machine intelligence would be unsuccessful at coming up with better bait for this?
Not necessarily. But if the use of such cycles becomes a major necessity for the AI to go foom, that's still a reason to reduce our estimate that an AI will go foom.
↑ comment by timtyler · 2011-04-26T21:13:29.933Z · LW(p) · GW(p)
The core technical problem of machine intelligence is building an agent that correctly performs inductive inference.
This seems to be highly non-obvious. Even if an AI already had access to a theory of everything, and could engage in near-optimal induction, it isn't at all clear that this helps much for practical purposes.
Not obvious, perhaps, but surely pretty accurate.
Hutter: http://prize.hutter1.net/hfaq.htm#compai
Mahoney: http://cs.fit.edu/~mmahoney/compression/rationale.html
Tyler - Part 2 on: http://matchingpennies.com/machine_forecasting/
FWIW, a theory of everything is not required - induction is performed on sense-data, or preprocessed sense data.
↑ comment by JoshuaZ · 2011-04-27T01:01:42.482Z · LW(p) · GW(p)
You are overestimating how much we can do just by compression. The key issue is not just the ability to predict accurately but the ability to predict accurately when using limited resources. For example, let A(n) be the Ackermann function and let P(n) be the nth prime number. Then the sequence described by P(A(n)) mod 3 is really compressible. But the time and space resources needed to expand that compressed form are probably massive.
There's a similar concern here. To again use the protein folding example: even if an AI has a really good model for predicting how proteins will fold, if it takes a lot of time to run that model, then it doesn't do a good job. Similarly, if a pattern rests on the behavior of prime numbers, then the smallest Turing machine which outputs 1 iff a number is prime is probably Euclid's sieve, but the AKS algorithm, which requires a much larger Turing machine, will return the answer in less time.
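A minimal sketch of the "tiny description, enormous expansion cost" point (toy code; the function is just an illustrative stand-in): the Ackermann function fits in a few lines, but evaluating it quickly becomes infeasible, so a predictor that finds such a short description may still be unable to afford to use it.

```python
import sys
sys.setrecursionlimit(100_000)

# A few lines of "compressed" description...
def ackermann(m: int, n: int) -> int:
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
# ...but "decompressing" it blows up almost immediately: ackermann(4, 2)
# equals 2**65536 - 3, a number with 19,729 decimal digits, and this naive
# recursion would never finish computing it.
```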
↑ comment by timtyler · 2011-04-27T09:15:46.700Z · LW(p) · GW(p)
What you are saying is correct - but it seems kind-of obvious. Without resource limits, you could compress perfectly, using a variant of Solomonoff induction.
However, to get from there to me overestimating the value of compression seems like quite a leap. I value stream compression highly because it is the core technical problem of machine intelligence.
I am not claiming that compression ratio is everything, while resource usage is irrelevant. Rather, sensible resource usage is an intrinsic property of a good quality compressor. If you think I am not giving appropriate weight to resource usage, that's just a communications failure.
I do go on about resource considerations - for example here:
Another issue is that compressors are not just judged by the size of their output. There are other issues to consider - for example, their compression speed, their usage of space, and their energy demands.
↑ comment by JoshuaZ · 2011-04-28T02:08:19.109Z · LW(p) · GW(p)
If you prefer a different framework, think of it this way: compression matters, but for many interesting sequences, how much practical compression you can achieve within given time and space constraints depends on complexity-theoretic questions.
↑ comment by timtyler · 2011-04-28T07:16:46.123Z · LW(p) · GW(p)
Again, isn't what you are saying simply obvious?
Where are you getting the impression that this is somehow different from what I thought in the first place?
Or maybe don't answer that. I would rather you spent your time revisiting the idea that massive software optimization progress runs into strongly believed theoretical computer science limits.
Maybe eventually - but not for a long way yet, allowing for impressive progress in the interim on the central problem of machine intelligence: induction.
↑ comment by JoshuaZ · 2011-04-28T14:15:00.735Z · LW(p) · GW(p)
Where are you getting the impression that this is somehow different from what I thought in the first place?
I got this from your remark that:
The core technical problem of machine intelligence is building an agent that correctly performs inductive inference. That is not a problem where we are particularly close to an optimal solution. Rather, solving it looks really, really difficult. Possibly machines will crack the problem and rapidly zoom up to optimal performance. However, that would represent tremendous improvement, not a lack of self-improvement.
So: your concerns about P and NP don't seem very relevant to me.
I would rather you spent your time revisiting the idea that massive software optimization progress runs into strongly believed theoretical computer science limits.
I'm not sure which aspect of this you want me to expand on. If the complexity hierarchy doesn't collapse even to a moderate extent (in the sense that P, NP, co-NP, PSPACE, and EXP are all distinct), then many practical problems that come up cannot be improved very much. Moreover, this can be made more rigorous with slightly stronger assumptions. For example, the exponential time hypothesis is a (somewhat) explicit conjecture about the bounds on solving 3-SAT. If this conjecture is true, then it places severe limits on how much one can improve algorithms for graph coloring, the traveling salesman problem, and Steiner trees, among other practical NP-complete problems. These problems show up in real-world contexts, in terms of things like hardware and memory design, and some of them come up directly in predicting the behavior of real-world systems.
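For readers who want that made concrete, here is a minimal brute-force 3-SAT solver in Python (my sketch, not part of the thread). It checks all 2^n assignments; the exponential time hypothesis conjectures that no algorithm can do fundamentally better than 2^(c*n) for some constant c > 0:

```python
from itertools import product

def brute_force_3sat(clauses, n_vars):
    """Clauses are tuples of nonzero ints: literal k means variable |k| must be
    True if k > 0 and False if k < 0. Returns a satisfying assignment or None."""
    for assignment in product((False, True), repeat=n_vars):
        if all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return assignment
    return None  # unsatisfiable -- but only after exhausting all 2^n_vars assignments

# (x1 or x2 or not x3) and (not x1 or x3 or x2)
print(brute_force_3sat([(1, 2, -3), (-1, 3, 2)], 3))
```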
Moreover, one of the most fruitful lines of results in the last few years (albeit an area I must admit I don't know much about) has been taking NP-hard problems and producing approximate solutions, or asking how good an approximation one can get. Those results, however, have shown that even some old approximation algorithms are asymptotically best possible if P != NP (with no other assumptions required); these include the approximation algorithms for graph coloring and the set cover problem. Moreover, under assumptions slightly stronger than P != NP, one gets similar results for other NP problems. It is noteworthy that many of the approximation algorithms in question date back to the 1970s, which suggests that even when finding approximation algorithms one gets close to best possible very quickly.
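One of those 1970s algorithms is the greedy heuristic for set cover, sketched below in Python (my illustration). It achieves roughly a ln(n)-factor approximation, and later hardness results showed that, unless P = NP, no polynomial-time algorithm does substantially better:

```python
def greedy_set_cover(universe, subsets):
    # Greedy rule: repeatedly pick the subset covering the most still-uncovered elements.
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            return None  # some element of the universe is not covered by any subset
        cover.append(set(best))
        uncovered -= set(best)
    return cover

# Cover {0,...,5} using as few of the given subsets as possible.
print(greedy_set_cover(range(6), [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]))
```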
Another class of problems to look at are problems which are just frequently used. Common examples are finding gcds, multiplying numbers, and the two closely related problems of multiplying matrices and finding determinants.
In the case of gcds, we have near-optimal algorithms in strong senses (although it is open whether gcd calculations can be efficiently parallelized, most people suspect that there's not much improvement possible here).
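For reference, the algorithm in question is essentially Euclid's, which takes a number of division steps logarithmic in the smaller input; a minimal Python version:

```python
def gcd(a: int, b: int) -> int:
    # Euclid's algorithm: O(log min(a, b)) division steps.
    a, b = abs(a), abs(b)
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21
```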
Multiplying numbers can be done more efficiently than naive multiplication (which takes O((log n)^2) digit operations for two numbers of size n) by using a version of the fast Fourier transform. This may be an example where unexpected speed-ups really do exist in long-standing basic processes. However, there are clear limits to how much this can be improved, since lower bounds exist for the minimum computation required by FFTs (though without explicit constants). While there are still open questions about how efficient one can make FFTs, even under the most generous marginally plausible assumptions, FFTs would not become much more efficient.
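Here is a toy Python sketch (my addition) of FFT-based multiplication: treat the base-10 digits as polynomial coefficients, convolve them with an FFT, then propagate carries. Production libraries use far more careful number-theoretic transforms; this floating-point version is only reliable for moderately sized inputs:

```python
import numpy as np

def fft_multiply(a: int, b: int) -> int:
    # Digits in little-endian order, treated as polynomial coefficients.
    da = [int(ch) for ch in str(a)][::-1]
    db = [int(ch) for ch in str(b)][::-1]
    size = 1
    while size < len(da) + len(db):
        size *= 2
    # Pointwise multiplication in the frequency domain = convolution of the digit sequences.
    conv = np.fft.irfft(np.fft.rfft(da, size) * np.fft.rfft(db, size), size)
    digits, carry = [], 0
    for x in conv:
        total = int(round(x)) + carry
        digits.append(total % 10)
        carry = total // 10
    while carry:
        digits.append(carry % 10)
        carry //= 10
    return int("".join(str(d) for d in reversed(digits)))

print(fft_multiply(123456789, 987654321) == 123456789 * 987654321)  # True
```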
In the case of finding determinants, we have good algorithms that are much more efficient than the naive ones. For an n x n matrix, it follows from not too much linear algebra that one can calculate the determinant in O(n^3) arithmetic operations, and better algorithms exist that run in about O(n^A), where A is a nasty constant slightly less than 2.4. There's a clear limit to how much this can be improved: one cannot do better than O(n^2), because an n x n matrix has n^2 entries. So this is already close to the naive lower bound, and there is not much room for improvement. The main caveat is that the O(n^A) bound relies on the Coppersmith–Winograd algorithm for matrix multiplication, which is the asymptotically most efficient algorithm known but isn't used in practice, because the constants involved make it slower than other algorithms for matrices of practical size (I don't know precisely where CW overtakes the alternatives; I think it is around n = 10^9 or so). So there's some room for improvement there, but not much. And most people who have thought about these issues are confident that determinants cannot be calculated in O(n^2).
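To illustrate the gap between the naive and good algorithms (my sketch, not from the comment): cofactor expansion takes O(n!) operations, while Gaussian elimination gets the determinant in O(n^3):

```python
def det_gaussian(matrix):
    # O(n^3) determinant via Gaussian elimination with partial pivoting.
    a = [row[:] for row in matrix]  # work on a copy
    n = len(a)
    det = 1.0
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        if abs(a[pivot][col]) < 1e-12:
            return 0.0  # (numerically) singular
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det  # each row swap flips the sign
        det *= a[col][col]
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    return det

print(det_gaussian([[2.0, 1.0], [5.0, 3.0]]))  # 1.0, up to floating-point error
```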
So the upshot is that while there's still work to do, for a lot of these problems, it doesn't look like there's room for "massive software optimization".
Replies from: timtyler, timtyler↑ comment by timtyler · 2011-04-28T18:27:17.708Z · LW(p) · GW(p)
Where are you getting the impression that this is somehow different from what I thought in the first place?
I got this from your remark that:
The core technical problem of machine intelligence is building an agent that correctly performs inductive inference. That is not a problem where we are particularly close to an optimal solution. Rather, solving it looks really, really difficult. Possibly machines will crack the problem and rapidly zoom up to optimal performance. However, that would represent tremendous improvement, not a lack of self-improvement.
So: your concerns about P and NP don't seem very relevant to me.
I would rather you spent your time revisiting the idea that massive software optimization progress runs into strongly believed theoretical computer science limits.
So, when you said "complexity theoretic questions", you were actually talking about the possible equivalence of different classes within the complexity hierarchy? That wasn't obvious to me - I took what you wrote as referring to any old complexity-theoretic questions, such as what language to use when measuring K-complexity. The idea that K-complexity has something to do with compression ratios is what I was calling obvious. A communications breakdown.
Replies from: JoshuaZ↑ comment by timtyler · 2011-04-28T20:48:06.768Z · LW(p) · GW(p)
OK. FWIW, I don't need examples of existing algorithms approaching theoretical limits.
As I said, the main problem where I think there is important scope for software improvement is induction. By my estimate, the brain spends about 80% of its resources on induction - so performance on induction seems important.
Current performance on the Hutter prize suggests that a perfect compressor could do about 25% better on the target file than the current champion program does.
So, perhaps it depends on how you measure it. If you think a mere 25% is not much improvement, you may not be too impressed. However, there are a few things to bear in mind:
Induction progress works in a similar manner to the rating scale in go: the higher you climb, the more difficult it is to make further progress.
There's another similarity to go's rating scale. In go, God is estimated to be 11-dan - not all that much better than a 9-dan human champion. However, this apparent close proximity to perfection is kind of an illusion. Play go on bigger boards, and larger margins between humans and God are likely to become apparent. Similarly, measure induction using a more challenging corpus, and today's programs will not appear to be so close to optimal.
The other thing to bear in mind is that intelligent agents are a major part of the environment of other intelligent agents. This means that it is not very realistic to model a fixed set of environmental problems (TSPs, etc), and to measure intelligence with respect to them. Rather there is an intelligence arms race - with many of the problems which intelligent agents face being posed by other agents.
We can see a related effect in mathematics. Many mathematicians work on the unresolved problems at the boundary of what is known in their field. The more progress they make, the harder the unresolved problems become, and the more intelligence is required to deal with them.
↑ comment by [deleted] · 2011-04-27T10:32:57.262Z · LW(p) · GW(p)
I value stream compression highly because it is the core technical problem of machine intelligence.
We'll know what the core technical problem of machine intelligence is once we achieve machine intelligence. Achieve it, and then I'll believe your claim that you know what's involved in achieving it.
Replies from: timtyler↑ comment by timtyler · 2011-04-27T10:36:35.734Z · LW(p) · GW(p)
IMO, we can clearly see what it is now.
In general, there is no need to perform an engineering feat before you can claim to have understood what problem it involves. We understood the basics of flight before we could ourselves fly. That is also true for machine intelligence today - we have a general theory of intelligence, and can see what the technical side of the problem of building it consists of.
Induction power is the equivalent of lift. Not the only thing you need, but the most central and fundamental element, once you already have a universal computer and storage.
Replies from: None↑ comment by [deleted] · 2011-04-27T11:41:12.912Z · LW(p) · GW(p)
We understood the basics of flight before we could ourselves fly.
We did not understand in the sense of having a correct theory of fluid dynamics. We understood in the sense of having a working model, a paper plane, which actually flew.
Today it is the reverse. We have a theory, but we have no convincing paper brain. We have no working model.
That is also true for machine intelligence today - we have a general theory of intelligence, and can see what the technical side of the problem of building it consists of.
We have the reverse of what ancient China had. They had no theory but they had a paper plane. We have a theory but we have no paper brain.
In general, there is no need to perform an engineering feat before you can claim to have understood what problem it involves.
But with paper planes, we actually performed the essential engineering feat, in paper. The plane flew. But we have no paper brain that thinks.
Replies from: timtyler↑ comment by jsalvatier · 2011-04-25T21:47:34.664Z · LW(p) · GW(p)
I like that notion, but I'm not a mathematician.
comment by Mitchell_Porter · 2011-04-27T10:49:32.662Z · LW(p) · GW(p)
The arguments about vast utilities with small probabilities are both dubious and unnecessary. Eliezer says the probability of unfriendly AI is large, and I can agree. Egan says assertions about the fate of the galaxy involve extreme speculation, and I can also agree.
All you have to argue is that AI will become superhuman enough to win a battle with humanity, and that it can be alien enough to be unfriendly to us, and you have made the case for friendly AI.
comment by XiXiDu · 2011-04-24T18:55:39.604Z · LW(p) · GW(p)
I forgot to add a link to the original post by John Baez, added it now.
comment by timtyler · 2011-04-24T19:59:52.994Z · LW(p) · GW(p)
It seems that he and other people who understand all the arguments in favor of friendly AI and yet decide to ignore it, or disregard it as unfeasible, are rationalizing.
That seems to be a reasonable assessment.
Various others don't seem to understand the issue - e.g.:
http://hplusmagazine.com/2010/12/15/problem-solved-unfriendly-ai/
Replies from: nazgulnarsil↑ comment by nazgulnarsil · 2011-04-25T07:19:10.602Z · LW(p) · GW(p)
That was unworthy of a transhumanist magazine. It read like an AI column for a fashion magazine.
comment by timtyler · 2011-04-24T19:49:12.845Z · LW(p) · GW(p)
It seems like a non-reply.
I would counsel taking care when recruiting from the environmentalist camp. They typically just want to slam on the brakes. That itself can easily be a risky strategy. Think of how a rocket behaves after one of its booster stages is sabotaged.