Which are the useful areas of AI study?
post by PeterisP · 2011-01-15T23:03:25.044Z · LW · GW · Legacy · 40 comments
I've been stuck on a peculiar question lately - which are the useful areas of AI study? What got me thinking is the opinion occasionally stated (or implied) by Eliezer here that performing general AI research may well have negative utility, because it indirectly increases the chance of unfriendly AI being developed. I've been chewing on the implications of this for quite a while, as accepting these arguments would require quite a change in my behavior.
I'm about to start my CompSci PhD studies, and had initially planned to focus on unsupervised domain-specific knowledge extraction from the internet, as my research background is mostly in narrow AI issues in computational linguistics, such as machine learning, concept formation, and semantics extraction. However, over the last year my expectations of a singularity and of the existential risks of unfriendly AI have led me to believe that focusing my efforts on Friendly AI concepts would be a more valuable choice, as a few years of study in the area would increase the chance of my making some positive contribution later on.
What is your opinion?
Do studies of general AI topics and research in the area carry positive or negative utility? What research topics would be of use to Friendly AI, yet are narrow and shallow enough for a single individual or tiny team to make measurable progress on in the course of a few years of PhD thesis preparation? Are there specific research areas that would be better avoided until more progress has been made on Friendliness research?
40 comments
comment by Vaniver · 2011-01-16T09:26:23.295Z · LW(p) · GW(p)
I would not worry at all about developing narrow AI. That strikes me as generally positive utility, since you don't have to solve the hard problem of giving it an enduring general purpose (as you can just give it a fixed narrow one).
↑ comment by XiXiDu · 2011-01-16T15:26:18.599Z · LW(p) · GW(p)
I would not worry at all about developing narrow AI.
A narrow AI employed to solve problems in molecular nanotechnology could nonetheless be an existential risk. It is just a question of scope and control. If it can access enough resources, and if humans are sufficiently reckless in implementing whatever it comes up with, then you could end up with runaway real-world MNT (if that is possible at all):
"We report the development of Robot Scientist “Adam,” which advances the automation of both. Adam has autonomously generated functional genomics hypotheses about the yeast Saccharomyces cerevisiae and experimentally tested these hypotheses by using laboratory automation." -- The Automation of Science
...and...
"Without any prior knowledge about physics, kinematics, or geometry, the algorithm discovered Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation. The discovery rate accelerated as laws found for simpler systems were used to bootstrap explanations for more complex systems, gradually uncovering the “alphabet” used to describe those systems. " -- Computer Program Self-Discovers Laws of Physics
Just look at what genetic algorithms and evolutionary computation can already do:
This aim was achieved within 3000 generations, but the success was even greater than had been anticipated. The evolved system uses far fewer cells than anything a human engineer could have designed, and it does not even need the most critical component of human-built systems - a clock. How does it work? Thompson has no idea, though he has traced the input signal through a complex arrangement of feedback loops within the evolved circuit. In fact, out of the 37 logic gates the final product uses, five of them are not even connected to the rest of the circuit in any way - yet if their power supply is removed, the circuit stops working. It seems that evolution has exploited some subtle electromagnetic effect of these cells to come up with its solution, yet the exact workings of the complex and intricate evolved structure remain a mystery (Davidson 1997).
Another example:
When the GA was applied to this problem, the evolved results for three, four and five-satellite constellations were unusual, highly asymmetric orbit configurations, with the satellites spaced by alternating large and small gaps rather than equal-sized gaps as conventional techniques would produce. However, this solution significantly reduced both average and maximum revisit times, in some cases by up to 90 minutes. In a news article about the results, Dr. William Crossley noted that "engineers with years of aerospace experience were surprised by the higher performance offered by the unconventional design".
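For concreteness, the bare bones of such a genetic algorithm look something like this - a minimal Python sketch in which the fitness function and every parameter are invented purely for illustration and are unrelated to either study:

```python
import random

def fitness(genome):
    # Toy objective, invented for illustration: reward genomes whose adjacent
    # genes alternate between large and small values (a faint echo of the
    # "alternating gaps" satellite result, not the real objective).
    return sum(abs(genome[i] - genome[i + 1]) for i in range(len(genome) - 1))

def mutate(genome, rate=0.1):
    # Gaussian mutation, clamped to the allowed range [0, 10].
    return [min(10.0, max(0.0, g + random.gauss(0, 1))) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    # Single-point crossover.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=100, genome_len=8, generations=3000):
    pop = [[random.uniform(0, 10) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 5]  # truncation selection: keep the top 20%
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```

The point of the toy is only that selection plus variation finds the optimum without anyone telling it what the optimum looks like; the surprising results above come from the same loop applied to much richer search spaces.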
So what could possibly happen if you add some machine intelligence and a bunch of irrational and reckless humans?
↑ comment by Vaniver · 2011-01-16T23:42:41.699Z · LW(p) · GW(p)
A narrow AI employed to solve problems in molecular nanotechnology could nonetheless be an existential risk.
That strikes me as mostly the risks inherent in molecular nanotech; the AI isn't the problematic part. For example, is anything going to go wrong because a GA is optimizing satellite paths?
comment by Vladimir_Nesov · 2011-01-16T01:06:26.465Z · LW(p) · GW(p)
My current guess at FAI-relevant material: Recommended Reading for Friendly AI Research. In short, develop further the kinds of decision theory discussed on LW.
comment by timtyler · 2011-01-15T23:25:24.222Z · LW(p) · GW(p)
Compression, compression, compression! (in answer to the title).
http://timtyler.org/machine_forecasting/
http://timtyler.org/sequence_prediction/
http://cs.fit.edu/~mmahoney/compression/rationale.html
Others might be better placed to comment on the implications for risk.
On the one hand, having forecasting systems first will be a great boon. They will produce knowledge of the consequences of actions without doing much to impose different values on us.
Also, they illustrate what utility function engineering will look like. I had previously feared people doing utility engineering using reinforcement learning. If you have a forecasting component, that starts to look unnecessary - utility engineering looks set to involve evaluating predicted sensory data instead.
On the less positive side, forecasting systems look easier to build than systems with their own values and tree-pruning systems. However, once they are built, the floodgates seem very likely to open. Some will paint that as bad news.
↑ comment by endoself · 2011-01-16T01:57:24.595Z · LW(p) · GW(p)
This seems too nonspecific. Of course an AI can separate the interesting degrees of freedom and ignore the others. The question is how to do this.
↑ comment by timtyler · 2011-01-16T10:16:36.146Z · LW(p) · GW(p)
How to compress is one question. How to tree-prune is a second, and what values machines should have is a third.
What I didn't realise - until fairly recently - is the extent to which the first problem is key. It appears that it can usefully be factored out and solved independently.
General-purpose stream compression is a fairly specific computer science problem. However, it is true that we still need to figure out how to do it well.
↑ comment by endoself · 2011-01-16T18:51:20.442Z · LW(p) · GW(p)
Having a very detailed understanding of general patterns in English gives you almost as much compression of a textbook as the additional ability to fully understand the subject matter. How does progress on the first ability get us closer to GAI?
↑ comment by timtyler · 2011-01-16T19:27:47.411Z · LW(p) · GW(p)
You need general purpose compression - not just text compression - if you are being general.
Matt explains why compression is significant here.
Obviously the quality of the model depends on the degree of compression. Poorer compression results in reduced understanding of the material.
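One way to see the connection: under an ideal entropy coder, the compressed size of a text is just the number of bits the predictive model assigns to it, so a better model of the data directly means smaller output. A minimal Python sketch (the order-0 and order-2 character models are generic stand-ins for a weaker and a stronger model of English, and "sample.txt" is a placeholder file name - none of this is taken from Matt's page):

```python
import math
from collections import Counter, defaultdict

def bits_order0(text):
    # Ideal code length under a model that ignores context entirely:
    # each character costs -log2(its overall frequency).
    counts = Counter(text)
    total = len(text)
    return sum(-math.log2(counts[c] / total) for c in text)

def bits_order2(text):
    # Ideal code length when each character is predicted from the two
    # preceding characters, with crude +1 smoothing over the alphabet.
    # (Counts are gathered from the same text for simplicity; a real
    # compressor would update them adaptively as it encodes.)
    ctx_counts = defaultdict(Counter)
    for i in range(2, len(text)):
        ctx_counts[text[i - 2:i]][text[i]] += 1
    alphabet_size = len(set(text))
    bits = 0.0
    for i in range(2, len(text)):
        counts = ctx_counts[text[i - 2:i]]
        total = sum(counts.values()) + alphabet_size
        bits += -math.log2((counts[text[i]] + 1) / total)
    return bits

sample = open("sample.txt", encoding="utf-8").read()  # placeholder: any English text
print("context-free model: ", round(bits_order0(sample) / 8), "bytes")
print("2-character context:", round(bits_order2(sample) / 8), "bytes")
```

The gap between the two numbers is exactly the "better understanding buys better compression" relationship, in miniature.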
↑ comment by endoself · 2011-01-16T21:13:10.660Z · LW(p) · GW(p)
In the link, he proves that optimal compression is equivalent to AI. He gives no reason why improving compression is the best approach to developing AI. The OP asked what areas of AI study are useful, not which ones would work in principle. You may have evidence that this is the best approach in practice, but you have not presented it yet.
↑ comment by timtyler · 2011-01-16T22:07:19.314Z · LW(p) · GW(p)
In the link, he proves that optimal compression is equivalent to AI.
I don't actually think that compression is equivalent to intelligence. However, it is pretty close!
The OP asked what areas of AI study are useful, not which ones would work in principle. You may have evidence that this is the best approach in practice, but you have not presented it yet.
My main presentation in that area is in the first of the links I gave:
http://timtyler.org/machine_forecasting/
Brief summary:
Compression provides an easy way of measuring progress - an area which has been explored by Shane Legg. Also, it successfully breaks a challenging problem down into sub-components - often an important step on the way to solving the problem. Lastly, but perhaps most significantly, developing good-quality stream compression engines looks like an easier problem than machine intelligence - and it is one which immediately suggests possible ways to solve it.
I don't know that it is the best approach in practice - just that it looks like a pretty promising one.
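On the "easy to measure" point: comparing two compressors on a fixed corpus is about as simple as benchmarking gets. A sketch using Python's standard zlib and bz2 modules as stand-in contestants ("corpus.txt" is a placeholder file name):

```python
import bz2
import zlib

data = open("corpus.txt", "rb").read()  # placeholder: any benchmark corpus

for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress)]:
    out = compress(data)
    print(f"{name}: {len(out):>9} bytes  ratio {len(out) / len(data):.3f}")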
↑ comment by endoself · 2011-01-17T00:13:19.230Z · LW(p) · GW(p)
It is not clear to me that what we are measuring is progress. We are definitely improving something, but that does not necessarily get us closer to GAI. Different algorithms could have very different compression efficiencies on different types of data. Some of these may require real progress toward AI, but many types of data can be compressed significantly with little intelligence. A program that could compress any type of non-random data could be improved significantly just by focusing on things that are easy to predict.
↑ comment by timtyler · 2011-01-17T07:23:52.818Z · LW(p) · GW(p)
It is not clear to me that what we are measuring is progress.
Compression ratio, on general Occamian data defined w.r.t a small reference machine. If in doubt, refer to Shane Legg: http://www.vetta.org/publications/
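For readers unfamiliar with the jargon, "Occamian data" presumably means data weighted the way a Solomonoff-style prior over the reference machine would weight it - roughly the standard formula below (a sketch, not a claim about Legg's exact definitions):

```latex
% Solomonoff-style prior relative to a (small) reference machine U:
% a string x is weighted by the total measure of the short programs that produce it.
M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}
```

Here p ranges over programs whose output begins with x, and ℓ(p) is the program length; a compressor scored against data drawn this way is rewarded precisely for exploiting the regularities that short programs can generate.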
A program that could compress any type of non-random data could be improved significantly just by focusing on things that are easy to predict.
Not by very much usually, if there is lots of data. Powerful general purpose compressors would actually work well.
Anyway, that is the idea - to build a general purpose Occamian system - not one with lots of preconceptions. Non-Occamian preconceptions don't buy that much and are not really needed - if you are sufficiently smart and have a little while to look at the world and see what it is like.
↑ comment by endoself · 2011-01-17T10:13:55.295Z · LW(p) · GW(p)
A program that could compress any type of non-random data could be improved significantly just by focusing on things that are easy to predict.
Not by very much usually, if there is lots of data. Powerful general purpose compressors would actually work well.
Huffman coding also works very well in many cases. Obviously you cannot compress every kind of compressible data without GAI, but no existing program can do that anyway, and things like Huffman coding make numerical progress in compression without representing any real conceptual progress. Do you know of any compression programs that make conceptual progress toward GAI? If not, why do you think that focusing on the compression aspect is likely to provide such progress?
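For concreteness, here is roughly what Huffman coding amounts to - a minimal Python sketch using the standard heapq module. It gives frequent symbols short codewords and needs no model of what the text means, which is the sense in which it makes numerical but not conceptual progress:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    # Build a prefix code: frequent symbols get short codewords.
    # Each heap entry is (frequency, tiebreak, {symbol: code-so-far}).
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # Merge the two rarest subtrees, prepending a 0/1 bit to their codes.
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

text = "an example of huffman coding"
codes = huffman_codes(text)
encoded = "".join(codes[c] for c in text)
print(codes)
print(f"{len(encoded)} bits vs {8 * len(text)} bits uncompressed")
```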
↑ comment by timtyler · 2011-01-17T18:29:34.211Z · LW(p) · GW(p)
Huffman coding dates back to 1952 - it has been replaced by other schemes in size-sensitive applications.
Do you know of any compression programs that make conceptual progress toward GAI?
Alexander Ratushnyak seems to be making good progress - see: http://prize.hutter1.net/
↑ comment by endoself · 2011-01-18T18:08:00.046Z · LW(p) · GW(p)
I realize that Huffman coding is outdated; I was just trying to make the point that compression progress is possible without AI progress.
Do you have Alexander Ratushnyak's source code? What techniques does he use that teach us something that could be useful for GAI?
↑ comment by timtyler · 2011-01-18T19:04:42.358Z · LW(p) · GW(p)
I was just trying to make the point that compression progress is possible without AI progress.
If you have a compressor that isn't part of a machine intelligence, or is part of a non-state of the art machine intelligence, then that is likely to be true.
However, if your compressor is in a state-of-the-art machine intelligence - and it is the primary tool being used to predict the consequences of its actions - then compression progress (smaller or faster) would translate into increased intelligence. It would help the machine to better predict the consequences of its possible actions, and/or to act more quickly.
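A crude sketch of the role being described, with an off-the-shelf compressor (bz2, purely as a stand-in) playing the predictive part. Everything here - the histories, candidate outcomes, utilities, and decision rule - is hypothetical scaffolding for illustration, not a description of any existing system:

```python
import bz2

def code_length_bits(history: bytes, continuation: bytes) -> int:
    # Extra compressed bits needed to encode the continuation after the history.
    # The shorter this is, the more strongly the compressor "expects" it;
    # a better compressor gives sharper estimates here.
    return 8 * (len(bz2.compress(history + continuation)) - len(bz2.compress(history)))

def expected_utility(history: bytes, action: bytes, outcomes: dict) -> float:
    # Weight each candidate outcome by 2^(-code length), i.e. by how
    # predictable the compressor finds it given history + action, then
    # average the (hypothetical, hand-assigned) utilities.
    weights = {o: 2.0 ** -code_length_bits(history + action, o) for o in outcomes}
    total = sum(weights.values())
    return sum(w / total * outcomes[o] for o, w in weights.items())

history = b"observation log so far ..."
candidates = {
    b"action A": {b"predicted good outcome": 1.0, b"predicted bad outcome": -1.0},
    b"action B": {b"predicted good outcome": 1.0, b"predicted bad outcome": -1.0},
}
best = max(candidates, key=lambda a: expected_utility(history, a, candidates[a]))
print(best)
```

On this picture, swapping in a stronger compressor changes nothing else in the loop but makes the predictions - and hence the choices - better, which is the sense in which compression progress would translate into intelligence.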
Do you have Alexander Ratushnyak's source code?
That is available here.
What techniques does he use that teach us something that could be useful for GAI?
Alas, delving in is beyond the scope of this comment.
↑ comment by endoself · 2011-01-19T03:42:53.314Z · LW(p) · GW(p)
However, if your compressor is in a state-of-the-art machine intelligence - and it is the primary tool being used to predict the consequences of its actions - then compression progress (smaller or faster) would translate into increased intelligence. It would help the machine to better predict the consequences of its possible actions, and/or to act more quickly.
What techniques does he use that teach us something that could be useful for GAI?
Alas, delving in is beyond the scope of this comment.
Your arguments seem to be more inside-view than mine, so I will update my estimates in favour of your point.
↑ comment by timtyler · 2011-01-19T09:37:27.406Z · LW(p) · GW(p)
I got something from you too. One of the problems with a compression-based approach to machine intelligence is that so far, it hasn't been very popular. There just aren't very many people working on it.
Compression is a tough traditional software engineering problem. It seems relatively unglamourous - and there are some barriers to entry, in the form of a big mountain of existing work. Building on that work might not be the most direct way towards the goal - but unless you do that, you can't easily make competitive products in the field.
Sequence prediction (via stream compression) still seems like the number 1 driving problem to me - and a likely path towards the goal - but some of the above points do seem to count against it.
comment by Perplexed · 2011-01-17T01:48:34.685Z · LW(p) · GW(p)
Well, one idea closely related to Friendliness, but possibly more up your alley, is CEV.
Perhaps I completely misunderstand, but I take it to be the case that locating the CEV of mankind is essentially a knowledge-extraction problem - you go about it by interviewing a broad sample of mankind to get their opinions, you then synthesize one or more theories explaining their viewpoint(s), and finally you try to convince them to sign off on your interpretation of what they believe. (There may well be a rinse-and-repeat cycle.) But the difference is that you are dealing here with values, rather than 'knowledge'.
I think it would easily merit a PhD to take some existing work on interactive domain knowledge extraction and adapt it to the fuzzier fields of ethics and/or aesthetics. A related topic - also quite important in the context of CEV - is the construction of models of ethical opinion which are more or less independent of the natural language used by the test subjects.
↑ comment by PeterisP · 2011-01-17T21:35:41.472Z · LW(p) · GW(p)
I'm still up in the air regarding Eliezer's arguments about CEV.
I have all kinds of ugh-factors coming to mind about not-good, or at least not-'PeterisP-good', issues that an aggregate of 6 billion hairless-ape opinions would contain.
The 'Extrapolated' part is supposed to solve that; but in that sense it turns the whole problem from knowledge extraction into extrapolation. In my opinion, the difference between the volition of Random Joe and the volition of Random Mohammad (forgive me for stereotyping for the sake of a short example) is much smaller than the difference between the volition of Random Joe and the extrapolated volition of Random Joe 'if he knew more, thought faster, was more the person he wishes he was'. Ergo, the idealistic CEV version of 'asking everyone' seems a bit futile. I could go into more detail, but that's probably material for a separate discussion, analyzing the parts of CEV point by point.
↑ comment by Perplexed · 2011-01-18T02:09:55.375Z · LW(p) · GW(p)
the idealistic CEV version of 'asking everyone' seems a bit futile.
As I see it, adherence to the forms of democracy is important primarily for political reasons - it is firstly a process for gaining the consent of mankind to a compromise, and only secondly a technique for locating the 'best' compromise (best by some metric).
Also, as I see it, it is a temporary compromise. We don't just do a single opinion survey and then extrapolate. We institute a constitution guaranteeing that mankind is to be repeatedly consulted as people become accustomed to the Brave New World of immortality, cognitive enhancement, and fun theory.
↑ comment by PeterisP · 2011-01-18T22:00:22.614Z · LW(p) · GW(p)
In that sense, it's still futile. The whole reason for the discussion is that an AI doesn't really need the permission or consent of anyone; the expected result is that an AI - either friendly or unfriendly - will have the ability to enforce the goals of its design. Political reasons will be easily satisfied by a project that claims to attempt CEV/democracy but skips it in practice, since afterwards the political reasons will cease to have power.
Also, a 'constitution' matters only if it is within the goal system of a Friendly AI; otherwise it's not worth the paper it's written on.
↑ comment by Perplexed · 2011-01-19T00:21:16.253Z · LW(p) · GW(p)
a 'constitution' matters only if it is within the goal system of a Friendly AI
Well, yes. I am assuming that the 'constitution' is part of the CEV, and we are both assuming that CEV or something like it is part of the goal system of the Friendly AI.
The whole reason for the discussion is that an AI doesn't really need the permission or consent of anyone.
I wouldn't say that it is the whole reason for the discussion, though that is the assumption explaining why many people consider it urgent to get the definition of Friendliness right on the first try. Personally, I think that it is a bad assumption - I believe it should be possible to avoid the all-powerful singleton scenario, and create a 'society' of slightly less powerful AIs, each of which really does need the permission and consent of its fellows to continue to exist. But a defense of that position also belongs in a separate discussion.
↑ comment by timtyler · 2011-01-17T19:26:12.969Z · LW(p) · GW(p)
I take it to be the case that locating the CEV of mankind is essentially a knowledge-extraction problem - you go about it by interviewing a broad sample of mankind to get their opinions, you then synthesize one or more theories explaining their viewpoint(s), and finally you try to convince them to sign off on your interpretation of what they believe.
Or - if you are Ben - you could just non-invasively scan their brains!
↑ comment by Perplexed · 2011-01-17T20:25:37.903Z · LW(p) · GW(p)
Thanks for the link. It appears that Goertzel's CAV is closer to what I was talking about than Yudkowsky's CEV. But I strongly doubt that scanning brains will be the best way to acquire knowledge of mankind's collective volition - at least if we want to have extracted the CEV before the Singularity.
↑ comment by timtyler · 2011-01-17T21:39:05.271Z · LW(p) · GW(p)
My comment on that topic at the time was:
Non-invasively scanning everyone's brains to figure out what they want is all very well - but what if we get intelligent machines long before such scans become possible?
The other variation from around then was Roko's document:
Bootstrapping Safe AGI Goal Systems - CEV and variants thereof.
I still have considerable difficulty in taking this kind of material seriously.
↑ comment by wedrifid · 2011-01-18T02:43:57.971Z · LW(p) · GW(p)
you go about it by interviewing a broad sample of mankind to get their opinions,
No, that is most decidedly not the way to coherently extrapolate volition.
↑ comment by NancyLebovitz · 2011-01-18T19:50:20.449Z · LW(p) · GW(p)
Do you have ideas about how you would coherently extrapolate volition, or are you just sure that interviewing isn't it?
↑ comment by Perplexed · 2011-01-18T03:05:13.836Z · LW(p) · GW(p)
No, that is most decidedly not the way to coherently extrapolate volition.
Not the way to extrapolate, certainly. But perhaps a reasonable way to find a starting point for the extrapolation. To 'extrapolate' is definitely not 'to derive from first principles'.
↑ comment by wedrifid · 2011-01-18T03:30:34.895Z · LW(p) · GW(p)
To 'extrapolate' is definitely not 'to derive from first principles'.
Yes, that is one of the various things that could be even worse than opinion polls. Random selection from a set of constructible volition functions would be another.
comment by rwallace · 2011-01-15T23:13:29.875Z · LW(p) · GW(p)
Friendly AI is to real-life AI R&D as telepathic contact with aliens from Zeta Reticuli is to real-life spaceflight R&D. I wouldn't waste your time on that stuff. (Yes, I'm aware this is going to get downvotes from the faithful.)
Computational linguistics and machine learning are a good combination, with a substantial chance of producing something useful, and a good background even if you decide to switch to something else later; the tools and concepts you'll learn will be useful in quite a few interesting domains. I recommend going ahead with that.
↑ comment by knb · 2011-01-16T01:15:22.029Z · LW(p) · GW(p)
You got a downvote from me for preemptively dismissing those who disagree with you as "the faithful".
↑ comment by James_Miller · 2011-01-16T03:21:41.750Z · LW(p) · GW(p)
He isn't "preemptively dismissing those who disagree with [him] as 'the faithful.'" He is saying that the faithful will downvote him, without commenting on what the non-faithful will do.