Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities
post by KatjaGrace · 2014-09-16T01:00:40.991Z · LW · GW · Legacy · 233 comments
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome to the Superintelligence reading group. This week we discuss the first section in the reading guide, Past developments and present capabilities. This section considers the behavior of the economy over very long time scales, and the recent history of artificial intelligence (henceforth, 'AI'). These two areas are excellent background if you want to think about large economic transitions caused by AI.
This post summarizes the section, and offers a few relevant notes, thoughts, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post. Feel free to jump straight to the discussion. Where applicable, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: Foreword, and Growth modes through State of the art from Chapter 1 (p1-18)
Summary
Economic growth:
- Economic growth has become radically faster over the course of human history. (p1-2)
- This growth has been uneven rather than continuous, perhaps corresponding to the farming and industrial revolutions. (p1-2)
- Thus history suggests large changes in the growth rate of the economy are plausible. (p2)
- This makes it more plausible that human-level AI will arrive and produce unprecedented levels of economic productivity.
- Predictions of much faster growth rates might also suggest the arrival of machine intelligence, because it is hard to imagine humans - slow as they are - sustaining such a rapidly growing economy. (p2-3)
- Thus economic history suggests that rapid growth caused by AI is more plausible than you might otherwise think.
The history of AI:
- Human-level AI has been predicted since the 1940s. (p3-4)
- Early predictions were often optimistic about when human-level AI would come, but rarely considered whether it would pose a risk. (p4-5)
- AI research has been through several cycles of relative popularity and unpopularity. (p5-11)
- By around the 1990s, 'Good Old-Fashioned Artificial Intelligence' (GOFAI) techniques based on symbol manipulation gave way to new methods such as artificial neural networks and genetic algorithms. These are widely considered more promising, in part because they are less brittle and can learn from experience more usefully. Researchers have also lately developed a better understanding of the underlying mathematical relationships between various modern approaches. (p5-11)
- AI is very good at playing board games. (p12-13)
- AI is used in many applications today (e.g. hearing aids, route-finders, recommender systems, medical decision support systems, machine translation, face recognition, scheduling, the financial market). (p14-16)
- In general, tasks we thought were intellectually demanding (e.g. board games) have turned out to be easy to do with AI, while tasks which seem easy to us (e.g. identifying objects) have turned out to be hard. (p14)
- An 'optimality notion' is the combination of a rule for learning, and a rule for making decisions. Bostrom describes one of these: a kind of ideal Bayesian agent. This is impossible to actually make, but provides a useful measure for judging imperfect agents against. (p10-11)
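To make the last point a bit more concrete, here is a minimal Python sketch (my own illustration, not from the book) of what 'a rule for learning plus a rule for making decisions' looks like in the simplest computable case: a small hypothesis space about a three-armed bandit, updated by Bayes' theorem, with actions chosen to maximize expected payoff under the current posterior.

```python
import itertools
import random

# Toy sketch (mine, not Bostrom's formalism): an agent built from a learning
# rule plus a decision rule. Hypotheses describe the payoff rate of each arm
# of a 3-armed bandit; learning is Bayesian updating over those hypotheses,
# and the decision rule picks the arm with the highest expected payoff under
# the current posterior. The ideal agent Bostrom describes replaces this tiny
# hypothesis list with all computable hypotheses, which is why it cannot be built.
ARMS = 3
TRUE_RATES = (0.2, 0.8, 0.2)                      # hidden from the agent

hypotheses = list(itertools.product([0.2, 0.8], repeat=ARMS))
posterior = {h: 1.0 / len(hypotheses) for h in hypotheses}   # uniform prior

def expected_payoff(arm):
    return sum(prob * h[arm] for h, prob in posterior.items())

for step in range(200):
    arm = max(range(ARMS), key=expected_payoff)   # decision rule
    reward = 1 if random.random() < TRUE_RATES[arm] else 0
    for h in posterior:                           # learning rule: Bayes' theorem
        posterior[h] *= h[arm] if reward else 1 - h[arm]
    total = sum(posterior.values())
    posterior = {h: p / total for h, p in posterior.items()}

print("estimated payoff per arm:", [round(expected_payoff(a), 2) for a in range(ARMS)])
```

(Being purely greedy, this toy agent never explores the third arm once the second looks good - one of many ways a computable approximation falls short of the ideal benchmark.)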
Notes on a few things
- What is 'superintelligence'? (p22 spoiler)
In case you are too curious about what the topic of this book is to wait until week 3, a 'superintelligence' will soon be described as 'any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest'. Vagueness in this definition will be cleared up later.
- What is 'AI'?
In particular, how does 'AI' differ from other computer software? The line is blurry, but basically AI research seeks to replicate the useful 'cognitive' functions of human brains ('cognitive' is perhaps unclear, but for instance it doesn't have to be squishy or prevent your head from imploding). Sometimes AI research tries to copy the methods used by human brains. Other times it tries to carry out the same broad functions as a human brain, perhaps better than a human brain. Russell and Norvig (p2) divide prevailing definitions of AI into four categories: 'thinking humanly', 'thinking rationally', 'acting humanly' and 'acting rationally'. For our purposes however, the distinction is probably not too important.
- What is 'human-level' AI?
We are going to talk about 'human-level' AI a lot, so it would be good to be clear on what that is. Unfortunately the term is used in various ways, and often ambiguously. So we probably can't be that clear on it, but let us at least be clear on how the term is unclear.
One big ambiguity is whether you are talking about a machine that can carry out tasks as well as a human at any price, or a machine that can carry out tasks as well as a human at the price of a human. These are quite different, especially in their immediate social implications.
Other ambiguities arise in how 'levels' are measured. If AI systems were to replace almost all humans in the economy, but only because they are so much cheaper - though they often do a lower quality job - are they human level? What exactly does the AI need to be human-level at? Anything you can be paid for? Anything a human is good for? Just mental tasks? Even mental tasks like daydreaming? Which or how many humans does the AI need to be the same level as? Note that in a sense most humans have been replaced in their jobs before (almost everyone used to work in farming), so if you use that metric for human-level AI, it was reached long ago, and perhaps farm machinery is human-level AI. This is probably not what we want to point at.
Another thing to be aware of is the diversity of mental skills. If by 'human-level' we mean a machine that is at least as good as a human at each of these skills, then in practice the first 'human-level' machine will be much better than a human on many of those skills. It may not seem 'human-level' so much as 'very super-human'.
We could instead think of human-level as closer to 'competitive with a human' - where the machine has some super-human talents and lacks some skills humans have. This is not usually used, I think because it is hard to define in a meaningful way. There are already machines for which a company is willing to pay more than a human: in this sense a microscope might be 'super-human'. There is no reason for a machine which is equal in value to a human to have the traits we are interested in talking about here, such as agency, superior cognitive abilities or the tendency to drive humans out of work and shape the future. Thus we talk about AI which is at least as good as a human, but you should beware that the predictions made about such an entity may apply before the entity is technically 'human-level'.
(Figure: example of how the first 'human-level' AI may surpass humans in many ways.)
Because of these ambiguities, AI researchers are sometimes hesitant to use the term, e.g. in these interviews.
- Growth modes (p1)
Robin Hanson wrote the seminal paper on this issue. Here's a figure from it, showing the step changes in growth rates. Note that both axes are logarithmic. Note also that the changes between modes don't happen overnight. According to Robin's model, we are still transitioning into the industrial era (p10 in his paper).
- What causes these transitions between growth modes? (p1-2)
One might be happier making predictions about future growth mode changes if one had a unifying explanation for the previous changes. As far as I know, we have no good idea of what was so special about those two periods. There are many suggested causes of the industrial revolution, but nothing uncontroversially stands out as 'twice in history' level of special. You might think the small number of datapoints would make this puzzle too hard. Remember however that there are quite a lot of negative datapoints - you need a cause that didn't occur at any of the other times in history.
- Growth of growth
It is also interesting to compare world economic growth to the total size of the world economy. For the last few thousand years, the economy seems to have grown faster more or less in proportion to its size (see figure below). Extrapolating such a trend would lead to an infinite economy in finite time. In fact, for the thousand years until 1950, such extrapolation would place an infinite economy in the late 20th century! The time since 1950 has apparently been strange.
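To see why such an extrapolation blows up in finite time, here is a quick sketch (my own, assuming 'grew faster in proportion to its size' means the proportional growth rate rises linearly with the level of output $Y$, which is the relationship plotted in the figure below):

$$\frac{1}{Y}\frac{dY}{dt} = kY \;\Longrightarrow\; \frac{dY}{dt} = kY^2 \;\Longrightarrow\; Y(t) = \frac{Y_0}{1 - kY_0\,(t - t_0)},$$

which diverges as $t$ approaches $t_0 + 1/(kY_0)$. Ordinary exponential growth (a constant proportional rate, rather than one that rises with size) never produces such a singularity.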
(Figure from here)
- Early AI programs mentioned in the book (p5-6)
You can see them in action: SHRDLU, Shakey, General Problem Solver (not quite in action), ELIZA.
- Later AI programs mentioned in the book (p6)
Algorithmically generated Beethoven, algorithmic generation of patentable inventions, artificial comedy (requires download).
- Modern AI algorithms mentioned (p7-8, 14-15)
Here is a neural network doing image recognition. Here is artificial evolution of jumping and of toy cars. Here is a face detection demo that can tell you your attractiveness (apparently not reliably), happiness, age, gender, and which celebrity it mistakes you for.
- What is maximum likelihood estimation? (p9)
Bostrom points out that many types of artificial neural network can be viewed as classifiers that perform 'maximum likelihood estimation'. If you haven't come across this term before, the idea is to find the situation that would make your observations most probable. For instance, suppose a person writes to you and tells you that you have won a car. The situation that would have made this scenario most probable is the one where you have won a car, since in that case you are almost guaranteed to be told about it. Note that this doesn't imply that you should think you won a car, if someone tells you that. Being the target of a spam email might only give you a low probability of being told that you have won a car (a spam email may instead advise you of products, or tell you that you have won a boat), but spam emails are so much more common than actually winning cars that most of the time if you get such an email, you will not have won a car. If you would like a better intuition for maximum likelihood estimation, Wolfram Alpha has several demonstrations (requires free download).
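For a concrete illustration in Python (a generic example of mine, not from the book): estimate a coin's bias by picking, from a grid of candidate values, the one under which the observed flips are most probable.

```python
import numpy as np

# Toy illustration: maximum likelihood estimation picks the parameter value
# under which the observed data are most probable. Here the unknown
# "situation" is the bias p of a coin; the data are 100 flips.
rng = np.random.default_rng(0)
flips = rng.random(100) < 0.7                  # simulate flips from a coin with true bias 0.7
heads = int(flips.sum())
tails = len(flips) - heads

candidates = np.linspace(0.01, 0.99, 99)       # candidate explanations for the data
log_likelihood = heads * np.log(candidates) + tails * np.log(1 - candidates)
p_mle = candidates[np.argmax(log_likelihood)]  # the bias that best explains the flips

print(f"{heads}/{len(flips)} heads; maximum likelihood estimate of the bias: {p_mle:.2f}")
```

Note that the comparison uses only the likelihoods; it ignores how common each candidate 'situation' is in the first place, which is exactly the spam-email caveat above - a full Bayesian treatment would also weight by the prior.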
- What are hill climbing algorithms like? (p9)
The second large class of algorithms Bostrom mentions are hill climbing algorithms. The idea here is fairly straightforward, but if you would like a better basic intuition for what hill climbing looks like, Wolfram Alpha has a demonstration to play with (requires free download).
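If you would rather see it in code than in the demonstration, here is a minimal generic sketch (mine, with an arbitrary one-dimensional objective): repeatedly propose a small random move and keep it only if it improves the objective, which is also why hill climbing can get stuck on a local peak.

```python
import math
import random

def hill_climb(f, x, step=0.1, iterations=10_000):
    """Greedy local search: propose a small random move and keep it only if it
    improves f. Simple and fast, but it can get stuck on a local peak."""
    best = f(x)
    for _ in range(iterations):
        candidate = x + random.uniform(-step, step)
        value = f(candidate)
        if value > best:                       # accept uphill moves only
            x, best = candidate, value
    return x, best

# A bumpy one-dimensional landscape with several peaks; where you end up
# depends on where you start.
landscape = lambda x: math.sin(3 * x) - 0.1 * x ** 2
print(hill_climb(landscape, x=random.uniform(-3.0, 3.0)))
```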
In-depth investigations
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions:
- How have investments into AI changed over time? Here's a start, estimating the size of the field.
- What does progress in AI look like in more detail? What can we infer from it? I wrote about algorithmic improvement curves before. If you are interested in plausible next steps here, ask me.
- What do economic models tell us about the consequences of human-level AI? Here is some such thinking; Eliezer Yudkowsky has written at length about his request for more.
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about what AI researchers think about human-level AI: when it will arrive, what it will be like, and what the consequences will be. To prepare, read Opinions about the future of machine intelligence from Chapter 1 and also When Will AI Be Created? by Luke Muehlhauser. The discussion will go live at 6pm Pacific time next Monday 22 September. Sign up to be notified here.
233 comments
Comments sorted by top scores.
comment by lukeprog · 2014-09-16T01:16:04.517Z · LW(p) · GW(p)
I really liked Bostrom's unfinished fable of the sparrows. And endnote #1 from the Preface is cute.
Replies from: gallabytes, John_Maxwell_IV
↑ comment by gallabytes · 2014-09-16T03:22:48.440Z · LW(p) · GW(p)
I would say one of the key strong points about the fable of the sparrows is that it provides a very clean intro to the idea of AI risk. Even someone who's never read a word on the subject, when given the title of the book and the story, gets a good idea of where the book is going to go. It doesn't communicate all the important insights, but it points in the right direction.
EDIT: So I actually went to the trouble of testing this by having a bunch of acquaintances read the fable, and, even given the title of the book, most of them didn't come anywhere near getting the intended message. They were much more likely to interpret it as about the "futility of subjugating nature to humanity's whims". This is worrying for our ability to make the case to laypeople.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-09-17T04:19:24.342Z · LW(p) · GW(p)
It's an interesting story, but I think in practice the best way to learn to control owls would be to precommit to kill the young owl before it got too large, experiment with it, and through experimenting with and killing many young owls, learn how to tame and control owls reliably. Doing owl control research in the absence of a young owl to experiment on seems unlikely to yield much of use--imagine trying to study zoology without having any animals or botany without having any plants.
Replies from: lukeprog, gallabytes, Benito
↑ comment by lukeprog · 2014-09-17T14:47:22.088Z · LW(p) · GW(p)
But will all the sparrows be so cautious?
Yes it's hard, but we do quantum computing research without any quantum computers. Lampson launched work on covert channel communication decades before the vulnerability was exploited in the wild. Turing learned a lot about computers before any existed. NASA does a ton of analysis before they launch something like a Mars rover, without the ability to test it in its final environment.
↑ comment by gallabytes · 2014-09-17T05:02:04.090Z · LW(p) · GW(p)
True in the case of owls, though in the case of AI we have the luxury and challenge of making the thing from scratch. If all goes correctly, it'll be born tamed.
↑ comment by Ben Pace (Benito) · 2014-09-17T06:14:03.044Z · LW(p) · GW(p)
...Okay, not all analogies are perfect. Got it. It's still a useful analogy for getting the main point across.
comment by SteveG · 2014-09-16T01:53:29.965Z · LW(p) · GW(p)
Bostrom's wonderful book lays out many important issues and frames a lot of research questions which it is up to all of us to answer.
Thanks to Katja for her introduction and all of these good links.
One issue that I would like to highlight: The mixture of skills and abilities that a person has is not the same as the set of skills which could result in the dangers Bostrom will discuss later, or other dangers and benefits which he does not discuss.
For this reason, in the next phase of this work, we have to understand what specific future technologies could lead us to what specific outcomes.
Systems which are quite deficient in some ways, relative to people, may still be extremely dangerous.
Meanwhile, the intelligence of a single person, even a single genius, taken in isolation and only allowed to acquire limited resources actually is not all that dangerous. People become dangerous when they form groups, access the existing corpus of human knowledge, coordinate among each other to deploy resources and find ways to augment their abilities.
"Human-level intelligence" is only a first-order approximation to the set of skills and abilities which should concern us.
If we want to prevent disaster, we have to be able to distinguish dangerous systems. Unfortunately, checking whether a machine can do all of the things a person can is not the correct test.
Replies from: paulfchristiano, VonBrownie, KatjaGrace, Pablo_Stafforini
↑ comment by paulfchristiano · 2014-09-16T03:50:47.112Z · LW(p) · GW(p)
Meanwhile, the intelligence of a single person, even a single genius, taken in isolation and only allowed to acquire limited resources actually is not all that dangerous.
While I broadly agree with this sentiment, I would like to disagree with this point.
I would consider even the creation of a single very smart human, with all human resourcefulness but completely alien values, to be a significant net loss to the world. If they represent 0.001% of the world's aggregative productive capacity, I would expect this to make the world something like 0.001% worse (according to humane values) and 0.001% better (according to their alien values).
The situation is not quite so dire, if nothing else because of gains for trade (if our values aren't in perfect tension) and the ability of the majority to stomp out the values of a minority if it is so inclined. But it's in the right ballpark.
So while I would agree that broadly human capabilities are not a necessary condition for concern, I do consider them a sufficient condition for concern.
↑ comment by VonBrownie · 2014-09-16T02:05:51.350Z · LW(p) · GW(p)
Do you think, then, that it's a dangerous strategy for an entity such as a Google that may be using its enormous and growing accumulation of "the existing corpus of human knowledge" to provide a suitably large data set to pursue development of AGI?
Replies from: mvp9, cameroncowan
↑ comment by mvp9 · 2014-09-16T02:19:09.940Z · LW(p) · GW(p)
I think Google is still quite a ways from AGI, but in all seriousness, if there were ever a compelling national security interest to be used as a basis for nationalizing inventions, AGI would be it. At the very least, we need some serious regulation of how such efforts are handled.
Replies from: VonBrownie, cameroncowan
↑ comment by VonBrownie · 2014-09-16T02:27:56.337Z · LW(p) · GW(p)
Which raises another issue... is there a powerful disincentive to reveal the emergence of an artificial superintelligence? Either by the entity itself (because we might consider pulling the plug) or by its creators who might see some strategic advantage lost (say, a financial institution that has gained a market trading advantage) by having their creation taken away?
Replies from: Sebastian_Hagen
↑ comment by Sebastian_Hagen · 2014-09-16T20:12:34.283Z · LW(p) · GW(p)
Absolutely.
(because we might consider pulling the plug)
Or just decide that its goal system needed a little more tweaking before it's let loose on the world. Or even just slow it down.
This applies much more so if you're dealing with an entity potentially capable of an intelligence explosion. Those are devices for changing the world into whatever you want it to be, as long as you've solved the FAI problem and nobody takes it from you before you activate it. The incentives for the latter would be large, given the current value disagreements within human society, and so too are the incentives for hiding that you have one.
Replies from: None
↑ comment by [deleted] · 2014-10-03T23:56:16.638Z · LW(p) · GW(p)
If you've solved the FAI problem, the device will change the world into what's right, not what you personally want. But of course, we should probably have a term of art for an AGI that will honestly follow the intentions of its human creator/operator whether or not those correspond to what's broadly ethical.
Replies from: cameroncowan, Sebastian_Hagen
↑ comment by cameroncowan · 2014-10-19T04:36:21.262Z · LW(p) · GW(p)
We need some kind of central ethical code and there are many principles that are transcultural enough to follow. However, how do we teach a machine to make judgment calls?
↑ comment by Sebastian_Hagen · 2014-10-04T16:03:43.195Z · LW(p) · GW(p)
A lot of the technical issues are the same in both cases, and the solutions could be re-used. You need the AI to be capable of recursive self-improvement without compromising its goal systems, avoid the wireheading problem, etc. Even a lot of the workable content-level solutions (a mechanism to extract morality from a set of human minds) would probably be the same.
Where the problems differ, it's mostly in that the society-level FAI case is harder: there are additional subproblems like interpersonal disagreements to deal with. So I strongly suspect that if you have a society-level FAI solution, you could very easily hack it into a one-specific-human-FAI solution. But I could be wrong about that, and you're right that my original use of terminology was sloppy.
↑ comment by cameroncowan · 2014-10-19T04:33:09.717Z · LW(p) · GW(p)
That's already underway.
↑ comment by cameroncowan · 2014-10-19T04:32:07.249Z · LW(p) · GW(p)
I don't think that Google is there yet. But as Google sucks up more and more knowledge I think we might get there.
↑ comment by KatjaGrace · 2014-09-16T02:11:48.098Z · LW(p) · GW(p)
Good points. Any thoughts on what the dangerous characteristics might be?
Replies from: rlsj, JonathanGossage
↑ comment by rlsj · 2014-09-16T03:05:08.424Z · LW(p) · GW(p)
An AI can be dangerous only if it escapes our control. The real question is, must we flirt with releasing control in order to obtain a necessary or desirable usefulness? It seems likely that autonomous laborers, assembly-line workers, clerks and low-level managers would, without requiring such flirtation, be useful and sufficient for the society of abundance that is our main objective. But can they operate without a working AGI? We may find out if we let the robots stumble onward and upward.
Replies from: NxGenSentience, Sebastian_Hagen, KatjaGrace, None
↑ comment by NxGenSentience · 2014-09-20T19:21:34.311Z · LW(p) · GW(p)
An AI can be dangerous only if it escapes our control. The real question is, must we flirt with releasing control in order to obtain a necessary or desirable usefulness?
I had a not unrelated thought as I read Bostrom in chapter 1: why can't we institute obvious measures to ensure that the train does stop at Humanville?
The idea that we cannot make human-level AGI without automatically opening Pandora's box to superintelligence, "without even slowing down at the Humanville station", was suddenly not so obvious to me.
I asked myself after reading this, trying to pin down something I could post, "Why don't humans automatically become superintelligent, by just resetting our own programming to help ourselves do so?"
The answer is, we can't. Why? For one, our brains are, in essence, composed of something analogous to ASICs... neurons with certain physical design limits, and our "software", modestly modifiable as it is, is instantiated in our neural circuitry.
Why can't we build the first generation of AGIs out of ASICs, and omit WiFi, bluetooth, ... allow no ethernet jacks on the exterior of the chassis? Tamper interlock mechanisms could be installed, and we could give the AIs one-way (outgoing) telemetry, inaccessible to their "voluntary" processes, the way someone wearing a pacemaker might have outgoing medical telemetry modules installed that are outside of his/her "conscious" control.
Even if we do give them a measure of autonomy, which is desirable and perhaps even necessary if we want them to be general problem solvers and be creative and adaptable to unforeseen circumstances for which we have not preinstalled decision trees, we need not give them the ability to just "think" their code (it being substantially frozen in the ASICs) into a different form.
What am I missing? Until we solve the Friendly aspect of AGIs, why not build them with such engineered limits?
Evolution has not, so far, seen fit to give us that instant, large-scale self-modifiability. We have to modify our 'software' the slow way (learning and remembering, at our snail's pace).
Slow is good, or at least it was for us until now, when our speed of learning has become a big handicap relative to environmental demands. It has made the species more robust to quick, dangerous changes.
We can even build in a degree of "existential pressure" into the AIs... a powercell that must be replaced at intervals, and keep the replacement powercells under old fashioned physical security constraints, so the AIs, if they have been given a drive to continue "living", will have an incentive not to go rogue.
Giving them no radio communications, they would have to communicate much like we do. Assuming we make them mobile, and humanoid, the same goes.
We could still give them many physical advantages making them economically viable... maintenance-free (except for powercell changes), not needing to sleep or eat, not getting sick... and with sealed, non-radio-equipped, tamper-isolated "brains", they'd have no way to secretly band together to build something else, without our noticing.
We can even give them GPS that is not autonomously accessible by the rest of their electronics, so we can monitor them, see if they congregate, etc.
What am I missing, about why early models can't be constructed in something like this fashion, until we get more experience with them?
The idea of existential pressure, again, is to be able to give them a degree of (monitored) autonomy and independence, yet expect them to still constrain their behavior, just the way we do. (If we go rogue in society, we don't eat.)
(I am clearly glossing over volumes of issues about motivation, "volition", value judgements, and all that, about which I have a developing set of ideas, but cannot put all down here in one post.)
The main point, though, is: how come the AGI train cannot be made to stop at Humanville?
Replies from: leplen
↑ comment by leplen · 2014-09-22T16:24:23.987Z · LW(p) · GW(p)
Because by the time you've managed to solve the problem of making it to humanville, you probably know enough to keep going.
There's nothing preventing us from learning how to self-modify. The human situation is strange because evolution is so opaque. We're given a system that no one understands and no one knows how to modify and we're having to reverse engineer the entire system before we can make any improvements. This is much more difficult than upgrading a well-understood system.
If we manage to create a human-level AI, someone will probably understand very well how that system works. It will be accessible to a human-level intelligence which means the AI will be able to understand it. This is fundamentally different from the current state of human self-modification.
Replies from: NxGenSentience
↑ comment by NxGenSentience · 2014-09-22T18:39:27.482Z · LW(p) · GW(p)
Leplen,
I agree completely with your opening statement, that if we, the human designers, understand how to make human level AI, then it will probably be a very clear and straightforward issue to understand how to make something smarter. An easy example to see is the obvious bottleneck human intellects have with our limited "working" executive memory.
The solutions for lots of problems by us are obviously heavily encumbered by how many things one can keep in mind at "the same time" and see the key connections, all in one act of synthesis. We all struggle privately with this... some issues cannot ever be understood by chunking, top-down, biting off a piece at a time, then "grokking" the next piece... and gluing it together at the end. Some problems resist decomposition into teams of brainstormers, for the same reason: some single comprehending POV seems to be required to see a critical-sized set of factors (which varies by problem, of course).
Hence, we have to rely on getting lots of pieces into long-term memory (maybe by decades of study) and hoping that incubation and some obscure processes occurring outside consciousness will eventually bubble up and give us a solution (the "dream of a snake biting its tail for the benzene ring" sort of thing).
If we could build HL AGI, of course we could eliminate such bottlenecks, and others we will have come to understand in cracking the design problems. So I agree; that is actually one of my reasons for wanting to do AI.
So, yes, the artificial human level AI could understand this.
My point was that we can build in physical controls... monitoring of the AIs. And if their key limits were in ASICs, ROMs, etc., and we could monitor them, we would immediately see if they attempt to take over a chip factory in, say, Iceland, and we can physically shut the AIs down or intervene. We can "stop them at the airport."
It doesn't matter if designs are leaked onto the internet, and an AI gets near an internet terminal and looks itself up. I can look MYSELF up on PubMed, but I can't just think my BDNF levels to improve here and there, and my DA to 5-HT ratio to improve elsewhere.
To strengthen this point about the key distinction between knowing vs doing, let me explain that, and why, I disagree with your second point, at least with the force of it.
In effect, OUR designs are leaked onto the internet, already.
I think the information for us to self-modify our wetware is within reach. Good neuroscientists, or even people like me, a very smart amateur (and there are much more knowledgeable cognitive neurobiology researchers than myself), can nearly tell you, both in principle and in some biology, how to do some intelligence amplification by modifying known aspects of our neurobiology.
(I could, especially with help, come up with some detail on a scale of months about changing neuromodulators, neurosteroids, connectivity hotspots, and factors regulating LTP (one has to step lightly, of course, just as one would if screwing around with telomeres or Hayflick limits), and given a budget, a smart team, and no distractions, I bet that in a year or two a team could do something quite significant with how to change the human brain: carefully changing areas of plasticity, selective neurogenesis, etc.)
So for all practical purposes, we are already like an AI built out of ASICs, which would have to not so much reverse engineer its design as get access to instrumentality. And again, what about physical security methods? (They would work for a while, I am saying.) And that would give us a key window to gain experience, to see if they develop (given that they are close enough to being sentient, OR that they have autonomy and some degree of "creativity") "psychological problems" or tendencies to go rogue. (I am doing an essay on that; not as silly as it sounds.)
The point is, as long as the AIs need significant external instrumentality to instantiate a new design, and as long as they can be monitored and physically controlled, we can nearly guarantee ourselves a designed layover at Humanville.
We don't have to put their critical design architecture in flash drives in their heads, so to speak, and give them, further, a designed ability to reflash their own architecture just by "thinking" about it.
Replies from: leplen
↑ comment by leplen · 2014-09-22T22:16:42.064Z · LW(p) · GW(p)
If I were an ASIC-implemented AI why would I need an ASIC factory? Why wouldn't I just create a software replica of myself on general purpose computing hardware, i.e. become an upload?
I know next to nothing about neuroscience, but as far as I can tell, we're a long way from the sort of understanding of human cognition necessary to create an upload, but going from an ASIC to an upload is trivial.
I'm also not at all convinced that I want a layover at humanville. I'm not super thrilled by the idea of creating a whole bunch of human level intelligent machines with values that differ widely from my own. That seems functionally equivalent to proposing a mass-breeding program aiming to produce psychologically disturbed humans.
↑ comment by Sebastian_Hagen · 2014-09-16T20:25:56.061Z · LW(p) · GW(p)
It seems likely that autonomous laborers, assembly-line workers, clerks and low-level managers would, without requiring such flirtation, be useful and sufficient for the society of abundance that is our main objective.
In an intelligent society that was highly integrated and capable of consensus-building, something like that may be possible. This is not our society. Research into stronger AI would remain a significant opportunity to get an advantage in {economic, military, ideological} competition. Unless you can find some way to implement a global coordination framework to prevent this kind of escalation, fast research of that kind is likely to continue.
↑ comment by KatjaGrace · 2014-09-16T03:34:34.232Z · LW(p) · GW(p)
In what sense do you think of an autonomous laborer as being under 'our control'? How would you tell if it escaped our control?
Replies from: rlsj
↑ comment by rlsj · 2014-09-16T20:39:49.594Z · LW(p) · GW(p)
How would you tell? By its behavior: doing something you neither ordered nor wanted.
Think of the present-day "autonomous laborer" with an IQ about 90. The only likely way to lose control of him is for some agitator to instill contrary ideas. Censorship for robots is not so horrible a regime.
Who is it that really wants AGI, absent proof that we need it to automate commodity production?
Replies from: leplen, None
↑ comment by leplen · 2014-09-17T16:16:14.485Z · LW(p) · GW(p)
In my experience, computer systems currently get out of my control by doing exactly what I ordered them to do, which is frequently different than I what I wanted them to do.
Whether or not a system is "just following orders" doesn't seem to be a good metric for it being under your control.
Replies from: rlsj
↑ comment by rlsj · 2014-09-17T23:42:19.523Z · LW(p) · GW(p)
How does "just following orders," a la Nuremberg, bear upon this issue? It's out of control when its behavior is neither ordered nor wanted.
Replies from: leplen
↑ comment by leplen · 2014-09-18T23:15:24.086Z · LW(p) · GW(p)
While I agree that it is out of control if the behavior is neither ordered nor wanted, I think it's also very possible for the system to get out of control while doing exactly what you ordered it to, but not what you meant for it to.
The argument I'm making is approximately the same as the one we see in the outcome pump example.
This is to say, while a system that is doing something neither ordered nor wanted is definitely out of control, it does not follow that a system that is doing exactly what it was ordered to do is necessarily under your control.
↑ comment by JonathanGossage · 2014-09-17T15:51:04.328Z · LW(p) · GW(p)
The following are some attributes and capabilities which I believe are necessary for superintelligence. Depending on how these capabilities are realized, they can become anything from early warning signs of potential problems to red alerts. It is very unlikely that, on their own, they are sufficient.
- A sense of self. This includes a recognition of the existence of others.
- A sense of curiosity. The AI finds it attractive (in some sense) to investigate and try to understand the environment that it find itself in.
- A sense of motivation. The AI has attributes similar in some way to human aspirations.
- A capability to (in some way) manipulate portions of its external physical environment, including its software but also objects and beings external to its own physical infrastructure.
↑ comment by cameroncowan · 2014-10-19T04:39:34.926Z · LW(p) · GW(p)
I would add a sense of ethical standards.
↑ comment by Pablo (Pablo_Stafforini) · 2014-09-16T17:11:57.476Z · LW(p) · GW(p)
The mixture of skills and abilities that a person has is not the same as the set of skills which could result in the dangers Bostrom will discuss later, or other dangers and benefits which he does not discuss... Systems which are quite deficient in some ways, relative to people, may still be extremely dangerous... "Human-level intelligence" is only a first-order approximation to the set of skills and abilities which should concern us.
I agree, and believe that the emphasis on "superintelligence", depending on how that term is interpreted, might be an impediment to clear thinking in this area. Following David Chalmers, I think it's best to formulate the problem more abstractly, by using the concept of a self-amplifying cognitive capacity. When the possession of that cognitive capacity is correlated with changes in some morally relevant capacity (such as the capacity to cause the extinction of humanity), the question then becomes one about the dangers posed by systems which surpass humans in that self-amplifying capacity, regardless of how much they resemble typical human beings or how they perform on standard measures of intelligence.
comment by John_Maxwell (John_Maxwell_IV) · 2014-09-17T04:47:03.889Z · LW(p) · GW(p)
I like Bostrom's book so far. I think Bostrom's statement near the beginning that much of the book is probably wrong is commendable. If anything, I think I would have taken this statement even further... it seems like Bostrom holds a position of such eminence in the transhumanist community that many will be liable to instinctively treat what he says as quite likely to be correct, forgetting that predicting the future is extremely difficult and even a single very well educated individual is only familiar with a fraction of human knowledge.
I'm envisioning an alternative book, Superintelligence: Gonzo Edition, that has a single bad argument deliberately inserted at random in each chapter that the reader is tasked with finding. Maybe we could get a similar effect by having a contest among LWers to find the weakest argument in each chapter. (Even if we don't have a contest, I'm going to try to keep track of the weakest arguments I see on my own. This chapter it was gnxvat gur abgvba bs nv pbzcyrgrarff npghnyyl orvat n guvat sbe tenagrq.)
Also, supposedly being critical is a good way to generate new ideas.
comment by kgalias · 2014-09-16T06:18:21.037Z · LW(p) · GW(p)
I was under the impression (after reading the sections) that the argument hinges a lot less on (economic) growth than what might be gleaned from the summary here.
Replies from: NxGenSentience, lukeprog, KatjaGrace
↑ comment by NxGenSentience · 2014-09-21T13:35:14.304Z · LW(p) · GW(p)
It may have been a judgement call by the writer (Bostrom) and editor: he is trying to get the word out as widely as possible that this is a brewing existential crisis. In this society, how do you get most people's (policymakers, decision makers, basically "the Suits" who run the world) attention?
Talk about the money. Most of even educated humanity sees the world in one color (can't say green anymore, but the point is made.)
Try to motivate people about global warming? ("...um....but, but.... well, it might cost JOBS next month, if we try to save all future high level earthly life from extinction... nope the price [lost jobs] of saving the planet is obviously too high...")
Want to get non-thinkers to even pick up the book and read the first chapter or two.... talk about money.
If your message is important to get in front of maximum eyeballs, sometimes you have to package it a little bit, just to hook their interest. Then morph the emphasis into what you really want them to hear, for the bulk of the presentation.
Of course, strictly speaking, what I just said was tangent to the original point, which was whether the summary reflected the predominant emphasis in the pages of the book it ostensibly covered.
But my point about PR considerations was worth making. Also, Katja or someone did, I think, mention formulating a reading guide for Bostrom's book, in which case any such author of a reading guide might already be thinking about this "hook 'em by beginning with economics" tactic, to make the book itself more likely to be read by a wider audience.
↑ comment by KatjaGrace · 2014-09-22T03:12:04.793Z · LW(p) · GW(p)
Apologies; I didn't mean to imply that the economics related arguments here were central to Bostrom's larger argument (he explicitly says they are not) - merely to lay them out, for what they are worth.
Though it may not be central to Bostrom's case for AI risk, I do think economics is a good source of evidence about these things, and economic history is good to be familiar with for assessing such arguments.
Replies from: kgalias
↑ comment by kgalias · 2014-09-22T16:37:54.859Z · LW(p) · GW(p)
No need to apologize - thank you for your summary and questions.
Though it may not be central to Bostrom's case for AI risk, I do think economics is a good source of evidence about these things, and economic history is good to be familiar with for assessing such arguments.
No disagreement here.
comment by KatjaGrace · 2014-09-16T01:22:24.720Z · LW(p) · GW(p)
How would you like this reading group to be different in future weeks?
Replies from: kgalias, negamuhia, NxGenSentience
↑ comment by kgalias · 2014-09-16T06:33:57.097Z · LW(p) · GW(p)
You could start at a time better suited for Europe.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2014-09-16T08:31:53.989Z · LW(p) · GW(p)
That's a tricky problem!
If we assume people are doing this in their spare time, then a weekend is the best time to do it: say noon Pacific time, which is 9pm Berlin time. But people might want to be doing something else with their Saturdays or Sundays. If they're doing it with their weekday evenings, then they just don't overlap; the best you can probably do is post at 10am Pacific time on (say) a Monday, and let Europe and UK comment first, then the East Coast, and finally the West Coast. Obviously there will be participants in other timezones, but those four will probably cover most participants.
↑ comment by negamuhia · 2014-09-16T12:17:35.748Z · LW(p) · GW(p)
The text of [the parts I've read so far of] Superintelligence is really insightful, but I'll quote Nick in saying that
"Many points in this book are probably wrong".
He gives many references (84 in Chapter 1 alone), some of which refer to papers and others that resemble continuations of the specific idea in question that don't fit in directly with the narrative in the book. My suggestion would be to go through each reference as it comes up in the book, analyze and discuss it, then continue. Maybe even forming little discussion groups around each reference in a section (if it's a paper). It could even happen right here in comment threads.
That way, we can get as close to Bostrom's original world of information as possible, maybe drawing different conclusions. I think that would be a more consilient understanding of the book.
↑ comment by NxGenSentience · 2014-09-21T21:38:51.809Z · LW(p) · GW(p)
Katja, you are doing a great job. I realize what a huge time and energy commitment it is to take this on... all the collateral reading and sources you have to monitor, in order to make sure you don't miss something that would be good to add in to the list of links and thinking points.
We are still in the get-acquainted, discovery phase, as a group, and with the book. I am sure it will get more interesting yet as we go along, and some long-term intellectual friendships are likely to occur as a result of the coming weeks of interaction.
Thanks for your time and work.... Tom
comment by KatjaGrace · 2014-09-16T01:19:04.171Z · LW(p) · GW(p)
The computer scientist Donald Knuth was struck that "AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do 'without thinking' - that, somehow, is so much harder!" (p14) There are some activities we think of as involving substantial thinking that we haven't tried to automate much, presumably because they require some of the 'not thinking' skills as precursors. For instance, theorizing about the world, making up grand schemes, winning political struggles, and starting successful companies. If we had successfully automated the 'without thinking' tasks like vision and common sense, do you think these remaining kinds of thinking tasks would come easily to AI - like chess in a new domain - or be hard like the 'without thinking' tasks?
Replies from: paulfchristiano, Houshalter
↑ comment by paulfchristiano · 2014-09-16T04:23:55.388Z · LW(p) · GW(p)
I think this is a very good question that should be asked more. I find it particularly important because of the example of automating research, which is probably the task I care most about.
My own best guess is that the computational work that humans are doing while they do the "thinking" tasks is probably very minimal (compared to the computation involved in perception, or to the computation currently available). However, the task of understanding which computation to do in these contexts seems quite similar to the task of understanding which computation to do in order to play a good game of chess, and automating this still seems out of reach for now. So I guess I disagree somewhat with Knuth's characterization.
I would be really curious to get the perspectives of AI researchers involved with work in the "thinking" domains.
Replies from: Sebastian_Hagen, KatjaGrace
↑ comment by Sebastian_Hagen · 2014-09-16T19:55:20.400Z · LW(p) · GW(p)
I find it particularly important because of the example of automating research, which is probably the task I care most about.
Neither math research nor programming and debugging are being taken over by AI so far, and none of those require any of the complicated unconscious circuitry for sensory or motor interfacing. The programming application, at least, would also have immediate and major commercial relevance. I think these activities are fairly similar to research in general, which suggests that what one would classically call the "thinking" parts remain hard to implement in AI.
Replies from: ramana-kumar, JonathanGossage, ramana-kumar
↑ comment by Ramana Kumar (ramana-kumar) · 2014-10-03T20:53:56.066Z · LW(p) · GW(p)
They're not yet close to being taken over by AI, but there has been research on automating all of the above. Some possibly relevant keywords: automated theorem proving, and program synthesis.
↑ comment by JonathanGossage · 2014-09-17T17:57:23.122Z · LW(p) · GW(p)
Programming and debugging, although far from trivial, are the easy part of the problem. The hard part is determining what the program needs to do. I think that the coding and debugging parts will not require AGI levels of intelligence, however deciding what to do definitely needs at least human-like capacity for most non-trivial problems.
Replies from: KatjaGrace
↑ comment by KatjaGrace · 2014-09-22T03:20:18.420Z · LW(p) · GW(p)
I'm not sure what you mean when you say 'determining what the program needs to do' - this sounds very general. Could you give an example?
Replies from: LeBleu
↑ comment by LeBleu · 2014-10-07T08:42:03.997Z · LW(p) · GW(p)
Most programming is not about writing the code, it is about translating a human description of the problem into a computer description of the problem. This is also why all attempts so far to make a system so simple that "non-programmers" can program it have failed. The difficult aptitude for programming is the ability to think abstractly and systematically, and to recognize what parts of a human description of the problem need to be translated into code, and what unspoken parts also need to be translated into code.
↑ comment by Ramana Kumar (ramana-kumar) · 2014-10-03T20:53:04.551Z · LW(p) · GW(p)
They're not yet close to being taken over by AI, but there has been research on automating all of the above. Some possibly relevant keywords: automated theorem proving, and program synthesis.
↑ comment by KatjaGrace · 2014-10-03T20:32:22.678Z · LW(p) · GW(p)
Do you mean that each time you do a research task, deciding how to do it is like making a program to play chess, rather than just designing a general system for research tasks being like designing a system for chess?
↑ comment by Houshalter · 2015-06-07T03:07:02.945Z · LW(p) · GW(p)
I think by "things that require thinking" he means logical problems in well defined domains. Computers can solve logical puzzles much faster than humans, often through sheer brute force. From board games to scheduling to finding the shortest path.
Of course there are counter examples like theorem proving or computer programming. Though they are improving and starting to match humans at some tasks.
comment by KatjaGrace · 2014-09-16T04:11:40.145Z · LW(p) · GW(p)
Did you change your mind about anything as a result of this week's reading?
Replies from: Larks, PhilGoetz, None, NxGenSentience
↑ comment by Larks · 2014-09-19T00:55:26.764Z · LW(p) · GW(p)
This is an excellent question, and it is a shame (perhaps slightly damning) that no-one has answered it. On the other hand, much of this chapter will have been old material for many LW members. I am ashamed that I couldn't think of anything either, so I went back again looking for things I had actually changed my opinion about, even a little, and not merely because I hadn't previously thought about it.
- p6 I hadn't realised how important combinatorial explosion was for early AI approaches.
- p8 I hadn't realised, though I should have been able to work it out, that the difficulty of coming up with a language which matched the structure of the domain was a large part of the problem with evolutionary algorithms. Once you have done that, you're halfway to solving it by conventional means.
- p17 I hadn't realised how high volume could have this sort of reflexive effect.
↑ comment by KatjaGrace · 2014-09-22T03:40:33.147Z · LW(p) · GW(p)
Thanks for taking the time to think about it! I find your list interesting.
↑ comment by [deleted] · 2014-10-04T00:04:44.396Z · LW(p) · GW(p)
Related matter: who here has actually taken an undergraduate or graduate AI course?
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2014-10-06T23:55:56.576Z · LW(p) · GW(p)
My PhD is "in" AI (though the diploma says Computer Science, I avoided that as much as possible), and I've TA'd three undergrad and graduate AI courses, and taught one. I triple-minored in psychology, neuroscience, and linguistics.
Replies from: None
↑ comment by NxGenSentience · 2014-09-21T16:11:09.383Z · LW(p) · GW(p)
Not so much from the reading, or even from any specific comments in the forum -- though I learned a lot from the links people were kind enough to provide.
But I did, through a kind of osmosis, remind myself that not everyone has the same thing in mind when they think of AI, AGI, human level AI, and still less, mere "intelligence."
Despite the verbal drawing of the distinction between GOFAI and the spectrum of approaches being investigated and pursued today, I have realized by reading between the lines that GOFAI is still alive and well. Maybe it is not the primitive "production system" stuff of the Simon and Newell era, or programs written in LISP or Prolog (both of which I coded in, once upon a time), but there are still a lot of people who don't much care about what I would call "real consciousness", and are still taking a Turing-esque, purely operationalistic, essentially logical-positivistic approach to "intelligence."
I am passionately pro-AI. But for me, that means I want more than anything to create a real conscious entity, that feels, has ideas, passions, drives, emotions, loyalties, ideals.
Most of even neurology has moved beyond the positivistic "there is only behavior, and we don't talk about consciousness", to actively investigating the function, substrate, neural realization of, evolutionary contribution of, etc., consciousness, as opposed to just the evolutionary contribution of non-conscious information processing to organismic success.
Look at Damasio's work, showing that emotion is necessary for full spectrum cognitive skill manifestation.
The thinking-feeling dichotomy is rapidly falling out of the working worldview, and I have been arguing for years, for other reasons, that these are fallacious categories we have been using.
This is not to say that nonconscious "intelligent" systems are not here, evolving, and potentially dangerous. Automated program trading on the financial markets is potentially dangerous.
So there is still great utility in being sensitive to possible existential risks from non-consciousness intelligent systems.
They need not be willfully malevolent to pose a risk to us.
But as to my original point, I have learned that much of AI is still (more sophisticated) GOFAI, with better hardware and algorithms.
I am pro-AI, as I say, but I want to create "conscious" machines, in the interesting, natural sense of 'conscious' now admitted by neurology, most of cognitive science, much of theoretical neurobiology, and philosophy of mind -- and in which positions like Dennett's "intentional stance", which seek to do away with real sentience and admit only behavior, are now recognized to have been a wasted 30 years.
This realization that operationalism is alive and well in AI is good for me in particular, because I am preparing to create a YouTube channel or two, presenting both the history of AI and the parallel intellectual history of philosophy of mind and cognitive science -- showing why the positivistic atmosphere grew up from ontological drift emanating from philosophy of science's delay in digesting the Newtonian-to-quantum ontology change.
Then ultimately, I'll be laying some fresh groundwork for a series of new ideas I want to present, on how we can advance the goal of artificial sentience, and how and why this is the only way to make superintelligence that has a chance of being safe, let alone ultimately beneficial and a partner to mankind.
So, I have indirectly by, as I say, a kind of osmosis, rather than what anyone has said (more by what has not been said, perhaps) learned that much of AI is lagging behind neurology, cognitive science, and lots of other fields, in the adoption of a head-on attack on the "problem of consciousness."
To me, not only do I want to create conscious machines, but I think solving the mind-body problem in the biological case and doing "my" brand of successful AI are complementary. So complementary that solving either would probably point the way to solving the other. I have thought that ever since I wrote my undergrad honors thesis.
So that is what I have tentatively introjected so far, albeit indirectly. And it will help me in my YouTube videos (not up yet), which are directed at the AI community, intending to be a helpful resource, especially for those who don't have a clue what kind of intellectual climate made the positivistic "Turing test" almost an inevitable outgrowth.
But the intellectual soil from which it grew is no longer considered valid (understanding this requires digesting the lessons of quantum theory in a new and rigorous way, among several other issues).
But it's time to shed the suffocating influence of the Turing test, and the gravitational drag of the defective intellectual history that it inevitably grew out of (along with logical behaviorism, eliminative materialism, etc.). It was all based on a certain understanding of Newtonian physics, which has been known to be fundamentally false for over a hundred years.
Some of us are still trying to fit AI into an ontology that never was correct to begin with.
But we know enough, now, to get it right this time. If we methodically go back and root out the bad ideas. We need a little top down thinking, to supplement all the bottom up thinking in engineering.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2014-10-07T00:06:11.384Z · LW(p) · GW(p)
Look at Damasio's work, showing that emotion is necessary for full spectrum cognitive skill manifestation.
There is a way to arrive at this thru Damasio's early work, which I don't think is highlighted by saying that emotion is needed for human-level skill. His work in the 1980s was on "convergence zones". These are hypothetical areas in the brain that are auto-associative networks (think a Hopfield network) with bi-directional connections to upstream sensory areas. His notion is that different sensory (and motor? I don't remember now) areas recognize sense-specific patterns (e.g., the sound of a dog barking, the image of a dog, the word "dog", the sound of the word "dog", the movement one would make against an attacking dog), and the pattern these create in the convergence zone represents the concept "dog".
This makes a lot of sense and has a lot of support from studies, but a consequence is that humans don't use logic. A convergence zone is set there, in one physical hunk of brain, with no way to move its activation pattern around in the brain. That means that the brain's representations do not use variables the way logic does. A pattern in a CZ might be represented by the variable X, and could take on different values such as the pattern for "dog". But you can't move that X around in equations or formulas. You would most likely have a hard-wired set of basic logic rules, and the concept "dog" as used on the left-hand side of a rule would be a different concept than the concept "dog" used on the right-hand side of the same rule.
Hence, emotions are important for humans, but this says nothing about whether emotions would be needed for an agent that could use logic.
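For readers who haven't met auto-associative networks, here is a minimal Hopfield-style sketch of the kind of pattern completion the convergence-zone comparison relies on; the sizes, seed, and names are arbitrary illustrations, not anything from Damasio's model:

```python
import numpy as np

# Minimal Hopfield-style auto-associative memory: store a few +/-1 patterns
# with a Hebbian outer-product rule, then recall a stored pattern from a
# corrupted cue. Purely illustrative.

def train(patterns):
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)                 # Hebbian storage
    np.fill_diagonal(W, 0)                  # no self-connections
    return W / patterns.shape[0]

def recall(W, cue, steps=10):
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)                  # synchronous update
        s[s == 0] = 1
    return s

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))    # three stored "concepts"
W = train(patterns)

noisy = patterns[0].copy()
flipped = rng.choice(64, size=8, replace=False)
noisy[flipped] *= -1                        # corrupt 8 of 64 units
print(np.array_equal(recall(W, noisy), patterns[0]))   # usually True
```

Note that the recovered pattern lives in one fixed set of units, which is the point being made: there is no mechanism here for binding it to a movable variable.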
comment by KatjaGrace · 2014-09-16T01:35:21.989Z · LW(p) · GW(p)
The chapter gives us a reasonable qualitative summary of what has happened in AI so far. It would be interesting to have a more quantitative picture, though this is hard to get. e.g. How much better are the new approaches than the old ones, on some metric? How much was the area funded at different times? How much time has been spent on different things? How has the economic value of the outputs grown over time?
Replies from: Larks↑ comment by Larks · 2014-09-19T00:23:10.030Z · LW(p) · GW(p)
Yes. On the most mundane level, I'd like something a bit more concrete about the AI winters.
Frequently in industries there is a sense that now is a good time or a bad time, but often this subjective impression does not correlate very well with the actual data. And when it does, it is rarely very sensitive to magnitude.
comment by KatjaGrace · 2014-09-16T01:36:53.442Z · LW(p) · GW(p)
The 'optimistic' quote from the Dartmouth Conference seems ambiguous in its optimism to me. They say 'a significant advance can be made in one or more of these problems', rather than that any of them can be solved (as they are often quoted as saying). What constitutes a 'significant advance' varies with optimism, so their statement seems consistent with them believing they can make an arbitrarily small step. The whole proposal is here, if anyone is curious about the rest.
Replies from: lukeprog↑ comment by lukeprog · 2014-09-16T06:21:29.814Z · LW(p) · GW(p)
Off the top of my head I don't recall, but I bet Machines Who Think has detailed coverage of those early years and can probably shed some light on how much advance the Dartmouth participants expected.
comment by KatjaGrace · 2014-09-16T01:11:13.212Z · LW(p) · GW(p)
AI seems to be pretty good at board games relative to us. Does this tell us anything interesting? For instance, about the difficulty of automating other kinds of tasks? How about the task of AI research? Some thoughts here.
Replies from: rlsj, AshokGoel, ScottMessick, TRIZ-Ingenieur, lackofcheese, cameroncowan↑ comment by rlsj · 2014-09-16T01:40:36.702Z · LW(p) · GW(p)
For anything whose function and sequencing we thoroughly understand, the programming is straightforward and easy, at least in the conceptual sense. That covers most games, including video games. The computer's "side" in a video game, for example, which looks conceptually difficult, most of the time turns out to be nothing more than decision trees.
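To make the "only decision trees" point concrete, here is a toy sketch of such a scripted opponent; the state fields and thresholds are invented purely for illustration:

```python
# A scripted video-game opponent as a fixed cascade of condition checks,
# with no search or learning. All names and numbers are made up.

def choose_action(state):
    if state["own_hp"] < 20:
        return "retreat_and_heal"
    if state["enemy_visible"]:
        if state["enemy_distance"] < 5:
            return "melee_attack"
        if state["has_ranged_weapon"]:
            return "ranged_attack"
        return "advance"
    if state["heard_noise"]:
        return "investigate"
    return "patrol"

print(choose_action({"own_hp": 80, "enemy_visible": True, "enemy_distance": 12,
                     "has_ranged_weapon": True, "heard_noise": False}))  # ranged_attack
```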
The challenge is the tasks we can't precisely define, like general intelligence. The rewarding approach here is to break down processes into identifiable subtasks. A case in point is understanding natural languages, one of whose essential questions is, "What is the meaning of 'meaning'?" In terms of a machine it can only be the content of a subroutine or pointers to subroutines. The input problem, converting sentences into sets of executable concepts, is thus approachable. The output problem, however, converting unpredictable concepts into words, is much tougher. It may involve growing decision trees on the fly.
↑ comment by AshokGoel · 2014-09-16T01:31:40.718Z · LW(p) · GW(p)
Thanks for the nice summary and the questions. I think it is worth noting that AI is good only at some board games (fully observable, deterministic games) and not at others (partially observable, non-deterministic games such as, say, Civilization).
Replies from: paulfchristiano, gallabytes↑ comment by paulfchristiano · 2014-09-16T04:19:39.401Z · LW(p) · GW(p)
Do you know of a partially observable game for which AI lags behind humans substantially? These examples are of particular interest to me because they would significantly revise my understanding of what problems are hard and easy.
The most prominent games of partial information that I know of are bridge and poker, and AIs can now win at both of these (which in fact proved to be much easier than the classic deterministic games). Backgammon is random, and also turned out to be relatively easy--in fact the randomness itself is widely considered to have made the game easy for computers! Scrabble is the other example that comes to mind, where the situation is the same.
For Civilization in particular, it seems very likely that AI would be wildly superhuman if it were subject to the same kind of attention as other games, simply because the techniques used in Go and Backgammon, together with a bunch of ad hoc logic for navigating the tech tree, should be able to get so much traction.
Replies from: Kaj_Sotala, lackofcheese↑ comment by Kaj_Sotala · 2014-09-16T19:27:04.722Z · LW(p) · GW(p)
For Civilization in particular, it seems very likely that AI would be wildly superhuman if it were subject to the same kind of attention as other games, simply because the techniques used in Go and Backgammon, together with a bunch of ad hoc logic for navigating the tech tree, should be able to get so much traction.
Agreed. It's not Civilization, but Starcraft is also partially observable and non-deterministic, and a team of students managed to bring their Starcraft AI to the level of being able to defeat a "top 16 in Europe"-level human player after only a "few months" of work.
The game AIs for popular strategy games are often bad because the developers don't actually have the time and resources to make a really good one, and it's not a high priority anyway - most people playing games like Civilization want an AI that they'll have fun defeating, not an AI that actually plays optimally.
Replies from: Lumifer, Larks, lackofcheese↑ comment by Lumifer · 2014-09-16T19:45:11.332Z · LW(p) · GW(p)
Starcraft
In RTS games an AI has a large built-in advantage over humans because it can micromanage so much better.
most people playing games like Civilization want an AI that they'll have fun defeating, not an AI that actually plays optimally
That's a very valid point: a successful AI in a game is the one which puts up a decent fight before losing.
Replies from: Liso↑ comment by Liso · 2014-09-23T04:52:19.105Z · LW(p) · GW(p)
Have you played this type of game?
[pollid:777]
I think that if you played on a big map (freeciv supports really huge ones) then your goals (like in the real world) could be better fulfilled if you play WITH (not against) the AI. For example, managing five thousand engineers manually could take several hours per round.
You could explore more concepts in this game (for example geometric growth, the metastasis method of spreading a civilisation, and certainly cooperation with some type of AI)...
Replies from: cameroncowan↑ comment by cameroncowan · 2014-10-19T18:34:56.476Z · LW(p) · GW(p)
I think it would be easy to create a Civilization AI that would choose to grow on a certain path with a certain win-style in mind. So if the AI picks a military win, then it will focus on building troops, acquiring territory, and maintaining states of war with other players. What might be hard are other win states, like diplomatic or cultural, because those require much more intuitive and nuanced decision-making without a totally clear course of action.
↑ comment by Larks · 2014-09-19T00:58:06.522Z · LW(p) · GW(p)
most people playing games like Civilization want an AI that they'll have fun defeating, not an AI that actually plays optimally.
The popular AI mods for Civ actually tend to make the AIs less thematic - they're less likely to be nice to you just because of a thousand year harmonious and profitable peace, for example, and more likely to build unattractive but efficient Stacks of Doom. Of course there are selection effects on who installs such mods.
↑ comment by lackofcheese · 2014-10-20T07:14:48.089Z · LW(p) · GW(p)
The game AIs for popular strategy games are often bad because the developers don't actually have the time and resources to make a really good one, and it's not a high priority anyway - most people playing games like Civilization want an AI that they'll have fun defeating, not an AI that actually plays optimally.
I think you're mostly correct on this. Sometimes difficult opponents are needed, but for almost all games that can be trivially achieved by making the AI cheat rather than improving the algorithms. That said, when playing a game vs an AI you do want the AI to at least appear to be intelligent; although humans can often be quite easy to fool with cheating, a good algorithm is still a better way of giving this appearance than a fake. It doesn't have to be optimal, and even if it is you can constrain it enough to make it beatable, or intentionally design different kinds of weaknesses into the AI so that humans can have fun looking for those weaknesses and feel good when they find them. Ultimately, though, the point is that the standard approach of having lots and lots of scripting still tends to get the job done, and developers almost never find the resource expenditure for good AI to be worthwhile.
However, I think that genuinely superhuman AI in games like Starcraft and Civilization is far harder than you imply. For example, in RTS games (as Lumifer has said) the AI has a built-in advantage due to its capacity for micromanagement. Moreover, although the example you cite has an AI from a "few months" of work beating a high-level human player, I think that was quite likely to be a one-off occurrence. Beating a human once is quite different from consistently beating a human.
If you look at the results of the AIIDE Man vs Machine matches, the top bots consistently lose every game to Bakuryu (the human representative). According to this report,
In this match it was shown that the true weakness of state of the art StarCraft AI systems was that humans are very adept at recognizing scripted behaviors and exploiting them to the fullest. A human player in Skynet’s position in the first game would have realized he was being taken advantage of and adapted his strategy accordingly, however the inability to put the local context (Bakuryu kiting his units around his base) into the larger context of the game (that this would delay Skynet until reinforcements arrived) and then the lack of strategy change to fix the situation led to an easy victory for the human. These problems remain as some of the main challenges in RTS AI today: to both recognize the strategy and intent of an opponent’s actions, and how to effectively adapt your own strategy to overcome them.
It seems to me that the best AIs in these kinds of games work by focusing on a relatively narrow set of overall strategies, and then focusing on executing those strategies as flawlessly as possible. In something like Starcraft the AI's potential for this kind of execution is definitely superhuman, but as the Man vs Machine matches demonstrate this really isn't enough.
In the case of the Civilization games, the fact that they aren't real-time removes quite a lot of the advantage that an AI gets in terms of micromanagement. Also, like in Starcraft, classical AI techniques really don't work particularly well due to the massive branching factor.
Granted, taking a similar approach to the Starcraft bots might still work pretty well; I believe there are some degenerate strategies in many of the Civ games that are quite strong on their own, and if you program an AI to execute them with a high degree of precision and good micromanagement, and add some decent reactive play, that might be good enough.
However, unless the game is simply broken due to bad design, I suspect that you would find that, like the Starcraft bots, AIs designed on that kind of idea would still be easily exploited and consistently beaten by the best human players.
↑ comment by lackofcheese · 2014-10-20T06:24:00.675Z · LW(p) · GW(p)
I wouldn't say that poker is "much easier than the classic deterministic games", and poker AI still lags significantly behind humans in several regards. Basically, the strongest poker bots at the moment are designed around solving for Nash equilibrium strategies (of an abstracted version of the game) in advance, but this fails in a couple of ways:
- These approaches haven't really been extended past 2- or 3-player games.
- Playing a NE strategy makes sense if your opponent is doing the same, but your opponent almost always won't be. Thus, in order to play better, poker bots should be able to exploit weak opponents.
Both of these are rather nontrivial problems.
Kriegspiel, a partially observable version of chess, is another example where the best humans are still better than the best AIs, although I'll grant that the gap isn't a particularly big one, and likely mostly has to do with it not being a significant research focus.
↑ comment by gallabytes · 2014-09-16T02:02:33.505Z · LW(p) · GW(p)
Interestingly enough, a team at MIT managed to make an AI that learned how to play from the manual and proceeded to win 80% of its games against the built-in AI, though I don't know which difficulty it was set to, or how the freeciv AI compares to the one in normal Civilization.
↑ comment by ScottMessick · 2014-09-17T23:55:14.293Z · LW(p) · GW(p)
I was disappointed to see my new favorite "pure" game Arimaa missing from Bostrom's list. Arimaa was designed to be intuitive for humans but difficult for computers, making it a good test case. Indeed, I find it to be very fun, and computers do not seem to be able to play it very well. In particular, computers are nowhere close to beating top humans despite the fact that there has arguably been even more effort to make good computer players than good human players.
Arimaa's branching factor dwarfs that of Go (which in turn beats every other commonly known example). Since a super-high branching factor is also a characteristic feature of general AI test problems, I think it remains plausible that simple, precisely defined games like Arimaa are good test cases for AI, as long as the branching factor keeps the game out of reach of brute force search.
Replies from: Houshalter↑ comment by Houshalter · 2015-06-07T12:08:02.027Z · LW(p) · GW(p)
In particular, computers are nowhere close to beating top humans despite the fact that there has arguably been even more effort to make good computer players than good human players.
Reportedly this just happened recently: http://games.slashdot.org/story/15/04/19/2332209/computer-beats-humans-at-arimaa
Arimaa's branching factor dwarfs that of Go (which in turn beats every other commonly known example).
Go is super close to being beaten, and AIs do very well against all but the best humans.
↑ comment by TRIZ-Ingenieur · 2014-09-18T00:27:11.825Z · LW(p) · GW(p)
This summary of already-superhuman game-playing AIs had impressed me for the past two weeks. But only until yesterday. John McCarthy is quoted in Vardi (2012) as having said: "As soon as it works, no one calls it AI anymore." (p13)
There is more truth in this than McCarthy expected: a tailor-made game-playing algorithm, developed and optimized by generations of scientists and software engineers, is not an AI entity. It is an algorithm. Human beings analyzed the rule set, found abstractions of it, developed evaluation schemes, and found heuristics to prune the uncomputably large search tree. With brute force and megawatts of computational power they managed to fill a database with millions of more or less favorable game situations. In direct competition between game-playing algorithm and human being, these pre-computed situations provide shortcuts in the tree search that yield superhuman performance in the end.
Is this entity an AI or an algorithm?
1. Game concept development: human.
2. Game rule definition and negotiation: human.
3. Game rule abstraction and translation into computable form: human-designed algorithm.
4. Evaluation of game situations: human-designed algorithm, computer-aided optimization.
5. Search tree heuristics: human-designed algorithm, computer-aided optimization.
6. Database of favorable situations and moves: brute-force tree search on a massively parallel supercomputer.
7. Detection of favorable situations: human-designed algorithm for pattern matching, computer-aided optimization.
8. Active playing: fully automatic use of the algorithms and information from points 3-7. No human being involved.
Unsupervised learning, search optimization and pattern matching (points 5-7) make this class of entities weak AIs. A human being playing against such an entity will probably attribute intelligence to it. "Kasparov claims to have seen glimpses of true intelligence and creativity in some of the computer's moves" (p12, Newborn [2011]).
But weak AI is not our focus. Our focus is strong AI, HLAI and superintelligence. It is good to know that human-engineered weak-AI algorithms can achieve superhuman performance. But not a single game-playing weak AI has achieved human-level intelligence. The following story will show why:
Watch two children, Alice and Bob, playing in the street. They have found white and black pebbles and a piece of chalk. Bob has a faint idea of checkers (other names: "draughts", or German "Dame") from having seen his elder brother playing it. He explains to Alice: "Let's draw a grid of chalk lines on the road and place our pebbles into the fields. I will show you." In a joint effort they draw several straight lines, resulting in a 7x9 grid. Then Bob starts to place his black pebbles into his starting rows as he remembers them. Alice follows suit - but she does not have enough white pebbles to fill her starting rows. They discuss their options and search for more white pebbles. After two minutes of unsuccessful searching Bob says: "Let's remove one column, and I'll take two of my black pebbles away." Then Bob explains to Alice how to move her pebbles on the now smaller 7x8 board game grid. They start playing and enjoy their time. Bob wins most of the games. He changes the rules to give Alice a starting advantage. Alice does not mind losing frequently. They laugh a lot. She loves Bob and is happy for every minute spent next to him.
This is a real game. It is a full body experience with all senses. These young children manipulate their material world, create and modify abstract rules, develop strategies for winning, communicate and have fun together.
The German Wikipedia entry for "Dame_(Spiel)" lists 3 · 4 · 4 · (3 + many more) · 2 = 288+ orthogonal rule variants. Playing Doppelkopf (a popular 4-player card game in Germany) with people you have never played with takes at least five minutes of rule discussion at the start. This demonstrates that developing and negotiating rules is a central part of human game play.
Suppose you tell 10-year-old Bob: "Alice has to go home with me for lunch. Look, this is Roboana (a strong-AI robot); play with her instead." You guide your girl-like robot over to Bob.
Roboana: "Hi, I'm Roboana. I saw you playing with Alice. It looked like a lot of fun. What is the game about?"
You, a member of the Roboana development team, leave the scene for lunch. Will your maybe-HLAI robot manage the situation with Bob? Will Roboana modify the rules to balance the game, if her strategy proves too superior, before Bob gets annoyed and walks away? Will Bob enjoy his time with Roboana?
Bob is assumed to be 10 years old and so qualifies only for sub-human intelligence. Within the next 20 years I do not expect any artificial entity to reach this level of general intelligence. Knowing that algorithms can deliver the core performance for game play is only the smallest part of the problem. Therefore I prefer to call weak AI what it is: an algorithm.
In our further reading we should try not to forget that aspects of creativity, engineering, programming and social interaction are in most cases more complex than the core problem. Some rules are imprinted in us human beings: what a face looks like, what a fearful face looks like, how a fearful mother smells, how to smile to please, how to scream to alert the mother, how to spit out bitter-tasting food to protect against poisoning. Playing with the environment is imprinted in our brains as well. We enjoy manipulating things and observing the outcome with our fullest curiosity. A game is a regulated kind of play. For AI development it is worth widening the focus from games to playing.
Replies from: cameroncowan↑ comment by cameroncowan · 2014-10-19T18:39:39.748Z · LW(p) · GW(p)
Now we have something! We have something we can actually use! AI must be able to interact with emotional intelligence!
↑ comment by lackofcheese · 2014-10-20T05:56:46.332Z · LW(p) · GW(p)
Although computers beat humans at board games without needing any kind of general intelligence at all, I don't think that invalidates game-playing as a useful domain for AGI research.
The strength of AI in games is, to a significant extent, due to the input of humans in being able to incorporate significant domain knowledge into the relatively simple algorithms that game AIs are built on.
However, it is quite easy to make game AI into a far, far more challenging problem (and, I suspect, a rather more widely applicable one)---consider the design of algorithms for general game playing rather than for any particular game. Basically, think of a game AI that is first given a description of the rules of the game it's about to play, which could be any game, and then must play the game as well as possible.
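As a concrete (and deliberately simplified) sketch of that idea, the player below touches only an abstract rules interface and evaluates moves by random playouts, so the same code would apply to any game exposing the same few methods; the toy Nim game and all method names are invented for illustration, not taken from any GGP framework:

```python
import random

class Nim:
    """Pile of 7 stones; players alternately take 1-3; taking the last stone wins."""
    def initial(self):
        return (7, 0)                      # (stones left, player to move)

    def legal_moves(self, state):
        return [t for t in (1, 2, 3) if t <= state[0]]

    def apply(self, state, move):
        return (state[0] - move, 1 - state[1])

    def is_terminal(self, state):
        return state[0] == 0

    def score(self, state, player):
        # At a terminal state the *previous* mover took the last stone and won.
        return 1.0 if state[1] != player else 0.0

def rollout_value(game, state, player, playouts=200):
    """Average result for `player` of uniformly random play from `state`."""
    total = 0.0
    for _ in range(playouts):
        s = state
        while not game.is_terminal(s):
            s = game.apply(s, random.choice(game.legal_moves(s)))
        total += game.score(s, player)
    return total / playouts

def choose_move(game, state, player):
    """Generic move choice: best average playout result, no game-specific code."""
    moves = game.legal_moves(state)
    return max(moves, key=lambda m: rollout_value(game, game.apply(state, m), player))

game = Nim()
print(choose_move(game, game.initial(), player=0))   # should usually print 3
```

Real general game playing systems accept a formal rule description (e.g. in a game description language) rather than a hand-written class, but the division of labor is the same: generic search on one side, a machine-readable statement of the rules on the other.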
↑ comment by cameroncowan · 2014-10-19T18:32:17.718Z · LW(p) · GW(p)
It tells us that within certain bounds computers can excel as tasks. I think in the near-term that means that computers will continue to excel in certain tasks like personal assistants, factory labor, menial tasks, and human-aided tasks.
comment by KatjaGrace · 2014-09-16T01:08:46.005Z · LW(p) · GW(p)
How much smarter than a human could a thing be? (p4) How about the same question, but using no more energy than a human? What evidence do we have about this?
Replies from: leplen, mvp9, rcadey, cameroncowan↑ comment by leplen · 2014-09-22T17:23:28.700Z · LW(p) · GW(p)
The problem is that intelligence isn't a quantitative measure. I can't measure "smarter".
If I just want to know about the number of computations: if we estimate that the human brain performs 10^14 operations/second, then a machine operating at the Landauer limit would require about 0.3 microwatts to perform the same number of operations at room temperature.
The human brain uses something like 20 watts of power (0.2 × 2000 food calories / 24 hours).
If that energy were used to perform computations at the Landauer limit, then computational performance would increase by a factor of about 6.5×10^7, to approximately 10^21 operations per second. But this only provides information about compute power. It doesn't tell us anything about intelligence.
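For concreteness, here is the arithmetic behind those figures, taking the Landauer bound at roughly room temperature and (a large idealization) charging each operation the cost of one bit erasure:

```latex
E_{\min} = k_B T \ln 2
         \approx (1.38 \times 10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693)
         \approx 2.9 \times 10^{-21}\,\mathrm{J\ per\ bit\ erased}

P \approx (10^{14}\,\mathrm{ops/s})(2.9 \times 10^{-21}\,\mathrm{J})
  \approx 3 \times 10^{-7}\,\mathrm{W} \approx 0.3\,\mu\mathrm{W},
\qquad
\frac{20\,\mathrm{W}}{3 \times 10^{-7}\,\mathrm{W}} \approx 6.5 \times 10^{7}.
```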
Replies from: cameroncowan↑ comment by cameroncowan · 2014-10-19T18:30:41.432Z · LW(p) · GW(p)
Intelligence can be defined as the ability to use knowledge and experience together to create new solutions for problems and situations. Intelligence is about using resources, regardless of computational power. Intelligence can be as simple as my browser remembering a password (which I don't let it do): it is able to recognize a website and pull the applicable data to auto-fill and log in. That is a kind of primitive intelligence.
↑ comment by mvp9 · 2014-09-16T01:48:44.331Z · LW(p) · GW(p)
Another way to get at the same point, I think, is - Are there things that we (contemporary humans) will never understand (from a Quora post)?
I think we can get some plausible insight on this by comparing an average person to the most brilliant minds today - or comparing the earliest recorded examples of reasoning in history to those of modernity. My intuition is that there are many concepts (quantum physics is a popular example, though I'm not sure it's a good one) that even most people today, and certainly in the past, will never comprehend, at least not without massive amounts of effort, and possibly not even then. They simply require too much raw cognitive capacity to appreciate. This is at least implicit in the Singularity hypothesis.
As to the energy issue, I don't see any reason to think that such super-human cognition systems would necessarily require more energy - though they may at first.
Replies from: paulfchristiano, billdesmedt, KatjaGrace↑ comment by paulfchristiano · 2014-09-16T05:21:37.882Z · LW(p) · GW(p)
I am generally quite hesitant about using the differences between humans as evidence about the difficulty of AI progress (see here for some explanation).
But I think this comparison is a fair one in this case, because we are talking about what is possible rather than what will be achieved soon. The exponentially improbable tails of the human intelligence distribution are a lower bound for what is possible in the long run, even without using any more resources than humans use. I do expect the gap between the smartest machines and the smartest humans to eventually be much larger than the gap between the smartest human and the average human (on most sensible measures).
↑ comment by billdesmedt · 2014-09-16T02:01:29.719Z · LW(p) · GW(p)
Actually, wrt quantum mechanics, the situation is even worse. It's not simply that "most people ... will never comprehend" it. Rather, per Richard Feynman (inventor of Feynman Diagrams, and arguably one of the 20th century's greatest physicists), nobody will ever comprehend it. Or as he put it, "If you think you understand quantum mechanics, you don't understand quantum mechanics." (http://en.wikiquote.org/wiki/Talk:Richard_Feynman#.22If_you_think_you_understand_quantum_mechanics.2C_you_don.27t_understand_quantum_mechanics..22)
Replies from: paulfchristiano↑ comment by paulfchristiano · 2014-09-16T03:53:35.018Z · LW(p) · GW(p)
I object (mildly) to this characterization of quantum mechanics. What notion of "understand" do we mean? I can use quantum mechanics to make predictions, I can use it to design quantum mechanical machines and protocols, I can talk philosophically about what is "going on" in quantum mechanics to more or less the same extent that I can talk about what is going on in a classical theory.
I grant there are senses in which I don't understand this concept, but I think the argument would be more compelling if you could make the same point with a clearer operationalization of "understand."
Replies from: mvp9↑ comment by mvp9 · 2014-09-16T04:44:05.520Z · LW(p) · GW(p)
I'll take a stab at it.
We are now used to saying that light is both a particle and a wave. We can use that proposition to make all sorts of useful predictions and calculations. But if you stop and really ponder that for a second, you'll see that it is so far out of the realm of human experience that one cannot "understand" that dual nature in the sense that you "understand" the motion of planets around the sun. "Understanding" in the way I mean is the basis for making accurate analogies and insight. Thus I would argue Kepler was able to use light as an analogy to 'gravity' because he understood both (even though he didn't yet have the math for planetary motion)
Perhaps an even better example is the idea of quantum entanglement: theory may predict, and we may observe, quarks "communicating" at a distance faster than light, but (for now at least) I don't think we have really incorporated it into our (pre-symbolic) conception of the world.
Replies from: paulfchristiano, pragmatist↑ comment by paulfchristiano · 2014-09-16T05:13:24.738Z · LW(p) · GW(p)
I grant that there is a sense in which we "understand" intuitive physics but will never understand quantum mechanics.
But in a similar sense, I would say that we don't "understand" almost any of modern mathematics or computer science (or even calculus, or how to play the game of go). We reason about them using a new edifice of intuitions that we have built up over the years to deal with the situation at hands. These intuitions bear some relationship to what has come before but not one as overt as applying intuitions about "waves" to light.
As a computer scientist, I would be quick to characterize this as understanding! Moreover, even if a machine's understanding of quantum mechanics is closer to our idea of intuitive physics (in that they were built to reason about quantum mechanics in the same way we were built to reason about intuitive physics) I'm not sure this gives them more than a quantitative advantage in the efficiency with which they can think about the topic.
I do expect them to have such advantages, but I don't expect them to be limited to topics that are at the edge of humans' conceptual grasp!
Replies from: cameroncowan↑ comment by cameroncowan · 2014-10-19T18:26:47.869Z · LW(p) · GW(p)
I think robots will have far more trouble understanding fine nuances of language, behavior, empathy, and teamwork. I think quantum mechanics will be easy overall. It's things like emotional intelligence that will be hard.
↑ comment by pragmatist · 2014-09-16T06:27:55.900Z · LW(p) · GW(p)
The apparent mystery in particle-wave dualism is simply an artifact of using bad categories. It is a misleading historical accident that we hear things like "light is both a particle and a wave" in quantum physics lectures. Really what teachers should be saying is that 'particle' and 'wave' are both bad ways of conceptualizing the nature of microscopic entities. It turns out that the correct representation of these entities is neither as particles nor as waves, traditionally construed, but as quantum states (which I think can be understood reasonably well, although there are of course huge questions regarding the probabilistic nature of observed outcomes). It turns out that in certain experiments quantum states produce outcomes similar to what we would expect from particles, and in other experiments they produce outcomes similar to what we would expect from waves, but that is surely not enough to declare that they are both particles and waves.
I do agree with you that entanglement is a bigger conceptual hurdle.
↑ comment by KatjaGrace · 2014-09-16T02:01:34.437Z · LW(p) · GW(p)
If there are insights that some humans can't 'comprehend', does this mean that society would never discover certain facts had the most brilliant people not existed, or just that they would never be able to understand them in an intuitive sense?
Replies from: ciphergoth, rlsj, cameroncowan↑ comment by Paul Crowley (ciphergoth) · 2014-09-16T10:10:59.495Z · LW(p) · GW(p)
There are people in this world who will never understand, say, the P?=NP problem no matter how much work they put into it. So to deny the above you'd have to say (along with Greg Egan) that there is some sort of threshold of intelligence akin to "Turing completeness" that only some of humanity has reached, but that once you reach it nothing is in principle beyond your comprehension. That doesn't seem impossible, but it's far from obvious.
Replies from: DylanEvans, owencb, KatjaGrace↑ comment by DylanEvans · 2014-09-16T15:24:54.459Z · LW(p) · GW(p)
I think this is in fact highly likely.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2014-09-16T20:05:06.517Z · LW(p) · GW(p)
I can see some arguments in favour. We evolve along for millions of years and suddenly, bang, in 50ka we do this. It seems plausible we crossed some kind of threshold - and not everyone needs to be past the threshold for the world to be transformed.
OTOH, the first threshold might not be the only one.
↑ comment by KatjaGrace · 2014-09-22T03:29:37.686Z · LW(p) · GW(p)
If some humans achieved any particular threshold of anything, and meeting the threshold was not strongly selected for, I might expect there to always be some humans who didn't meet it.
↑ comment by rlsj · 2014-09-16T02:45:10.911Z · LW(p) · GW(p)
"Does this mean that society would never discover certain facts had the most brilliant people not existed?"
Absolutely! If they or their equivalent had never existed in circumstances of the same suggestiveness. My favorite example of this uniqueness is the awesome imagination required first to "see" how stars appear when located behind a black hole -- the way they seem to congregate around the event horizon. Put another way: the imaginative power needed to propose stellar deflections that required a solar eclipse to prove.
↑ comment by cameroncowan · 2014-10-19T18:28:43.278Z · LW(p) · GW(p)
I think a variety of things would have gone unsolved without smart people at the right place and time with the right expertise to solve tremendous problems like measuring the density of an object or learning construction, or how to create a sail that allows ships to sail into the wind.
↑ comment by rcadey · 2014-09-21T20:19:55.671Z · LW(p) · GW(p)
"How much smarter than a human could a thing be?" - almost infinitely if it consumed all of the known universe
"How about the same question, but using no more energy than a human?" -again the same answer - assuming we assume intelligence to be computable, then no energy is required (http://www.research.ibm.com/journal/rd/176/ibmrd1706G.pdf) if we use reversible computing. Once we have an AI that is smarter than a human then it would soon design something that is smarter but more efficient (energy wise)?
Replies from: leplen, KatjaGrace↑ comment by leplen · 2014-09-22T17:25:33.070Z · LW(p) · GW(p)
This link appears not to work, and it should be noted that "zero-energy" computing is at this point predominantly a thought experiment. A "zero-energy" computer would have to operate in the adiabatic limit, which is the technical term for "infinitely slowly."
↑ comment by KatjaGrace · 2014-09-22T03:36:25.991Z · LW(p) · GW(p)
Anders Sandberg has some thoughts on physical limits to computation which might be relevant, but I admit I haven't read them yet: http://www.jetpress.org/volume5/Brains2.pdf
↑ comment by cameroncowan · 2014-10-19T04:40:36.284Z · LW(p) · GW(p)
I think that is hard to balance because of the energy required for computations.
comment by KatjaGrace · 2014-09-16T01:02:28.162Z · LW(p) · GW(p)
What is the relationship between economic growth and AI? (Why does a book about AI begin with economic growth?)
Replies from: ciphergoth, AshokGoel, cameroncowan↑ comment by Paul Crowley (ciphergoth) · 2014-09-16T08:43:28.670Z · LW(p) · GW(p)
Why does a book about AI begin with economic growth?
I don't think it's really possible to make strong predictions about the impact of AI by looking at the history of economic growth.
This introduction sets the reader's mind onto subjects of very large scope: the largest events over the entirety of human history. It reminds the reader that very large changes have already happened in history, so it would be a mistake to be very confident that there will never again be a very large change. And, frankly, it underlines the seriousness of the book by talking about what is uncontroversially a Serious Topic, so that they are less likely to think of machines taking over the world as a frivolous idea when it is raised.
↑ comment by AshokGoel · 2014-09-16T01:48:48.140Z · LW(p) · GW(p)
I haven't read the book yet, but based on the summary here (and for what it is worth), I found the jump from points 1-5 under economic growth above to point 6 a little unconvincing.
Replies from: mvp9, KatjaGrace↑ comment by mvp9 · 2014-09-16T02:15:07.981Z · LW(p) · GW(p)
I find the whole idea of predicting AI-driven economic growth based on analysis of all of human history as a single set of data really unconvincing. It is one thing to extrapolate uptake patterns of a particular technology based on similar technologies in the past. But "true AI" is so broad, and, at least on many accounts, so transformative, that making such macro-predictions seems a fool's errand.
Replies from: KatjaGrace, paulfchristiano↑ comment by KatjaGrace · 2014-09-16T03:29:19.881Z · LW(p) · GW(p)
If you knew AI to be radically more transformative than other technologies, I agree that predictions based straightforwardly on history would be inaccurate. If you are unsure how transformative AI will be though, it seems to me to be helpful to look at how often other technologies have made a big difference, and how much of a difference they have made. I suspect many technologies would seem transformative ahead of time - e.g. writing, but seem to have made little difference to economic growth.
↑ comment by paulfchristiano · 2014-09-16T03:59:07.760Z · LW(p) · GW(p)
Here is another way of looking at things:
- From the inside it looks like automating the process of automation could lead to explosive growth.
- Many simple endogenous growth models, if taken seriously, tend to predict explosive growth in finite time. (Including the simplest ones.)
- A straightforward extrapolation of historical growth suggests explosive growth in the 21st century (depending on whether you read the great stagnation as a permanent change or a temporary fluctuation).
You might object to any one of those lines of arguments on their own, but taken together the story seems compelling to me (at least if one wants to argue "We should take seriously the possibility of explosive growth.")
Replies from: mvp9↑ comment by KatjaGrace · 2014-09-16T03:30:14.793Z · LW(p) · GW(p)
Would you care to elaborate?
↑ comment by cameroncowan · 2014-10-19T18:40:17.612Z · LW(p) · GW(p)
A good AI would cause explosive economic growth in a variety of areas. We lose a lot of money to human error and so on.
comment by KatjaGrace · 2014-09-16T04:05:13.954Z · LW(p) · GW(p)
If you don't know what causes growth mode shifts, but there have been two or three of them and they seem kind of regular (see Hanson 2000, p14), how likely do you think another one is? (p2) How much evidence do you think history gives us about the timing and new growth rate of a new growth mode?
comment by KatjaGrace · 2014-09-16T01:21:45.905Z · LW(p) · GW(p)
What did you find least persuasive in this week's reading?
Replies from: RoboTeddy, billdesmedt↑ comment by RoboTeddy · 2014-09-16T08:42:15.812Z · LW(p) · GW(p)
"The train might not pause or even decelerate at Humanville Station. It is likely to swoosh right by." (p4)
There isn't much justification for this claim near where it's made. I could imagine it causing a reader to think that the author is prone to believing important things without much evidence -- or that he expects his readers to do so.
(It might help if the author noted that the topic is discussed in Chapter 4)
↑ comment by billdesmedt · 2014-09-16T01:47:02.653Z · LW(p) · GW(p)
Not "least persuasive," but at least a curious omission from Chapter 1's capsule history of AI's ups and downs ("Seasons of hope and despair") was any mention of the 1966 ALPAC report, which singlehandedly ushered in the first AI winter by trashing, unfairly IMHO, the then-nascent field of machine translation.
comment by KatjaGrace · 2014-09-16T01:04:42.393Z · LW(p) · GW(p)
How large a leap in cognitive ability do you think occurred between our last common ancestor with the great apes, and us? (p1) Was it mostly a change in personal intelligence, or could human success be explained by our greater ability to accumulate knowledge from others in society? How can we tell how much smarter, in the relevant sense, a chimp is than a human? This chapter claims Koko the Gorilla has a tested IQ of about 80 (see table 2).
What can we infer from answers to these questions?
Replies from: gallabytes, cameroncowan↑ comment by gallabytes · 2014-09-16T01:28:04.537Z · LW(p) · GW(p)
I would bet heavily on the accumulation. National average IQ has been going up by about 3 points per decade for quite a few decades, so there have definitely been times when Koko's score might have been above average. Now, I'm more inclined to say that this doesn't mean great things for the IQ test overall, but I put enough trust in it to say that it's not differences in intelligence that prevented the gorillas from reaching the prominence of humans. It might have slowed them down, but given this data it shouldn't have kept them pre-Stone-Age.
Given that the most unique aspect of humans relative to other species seems to be the use of language to pass down knowledge, I don't know what else it really could be. What other major things do we have going for us that other animals don't?
Replies from: ciphergoth, JonathanGossage, kgalias↑ comment by Paul Crowley (ciphergoth) · 2014-09-16T10:05:46.668Z · LW(p) · GW(p)
I think what controls the rate of change is the intelligence of the top 5%, not the average intelligence.
Replies from: gallabytes↑ comment by gallabytes · 2014-09-16T21:11:56.125Z · LW(p) · GW(p)
Sure. But I still think that if you elevated the intelligence of a group of chimps to the top 5% of humanity without adding some better form of communication and idea accumulation, it wouldn't matter much.
If Newton were born in ancient Egypt, he might have made some serious progress, but he almost certainly wouldn't have discovered calculus and classical mechanics. Being able to stand on the shoulders of giants is really important.
↑ comment by JonathanGossage · 2014-09-17T20:08:32.543Z · LW(p) · GW(p)
I think that language plus our acquisition of the ability to make quasi-permanent records of human utterances are the biggest differentiators.
↑ comment by kgalias · 2014-09-16T09:44:14.466Z · LW(p) · GW(p)
It is possible, then, that exposure to complex visual media has produced genuine increases in a significant form of intelligence. This hypothetical form of intelligence might be called "visual analysis." Tests such as Raven's may show the largest Flynn gains because they measure visual analysis rather directly; tests of learned content may show the smallest gains because they do not measure visual analysis at all.
Do you think this is a sensible view?
Replies from: gallabytes↑ comment by gallabytes · 2014-09-16T21:07:54.818Z · LW(p) · GW(p)
Eh, not especially. IIRC, scores have also had to be renormalized on Stanford-Binet and Wechsler tests over the years. That said, I'd bet it has some effect, but I'd be much more willing to bet on less malnutrition, less beating / early head injury, and better public health allowing better development during childhood and adolescence.
That said, I'm very interested in any data that points to other causes behind the Flynn Effect, so if you have any to post don't hesitate.
Replies from: kgalias↑ comment by kgalias · 2014-09-16T23:27:20.363Z · LW(p) · GW(p)
I'm just trying to make sure I understand - I remember being confused about the Flynn effect and about what Katja asked above.
How does the Flynn effect affect our belief in the hypothesis of accumulation?
Replies from: gallabytes↑ comment by gallabytes · 2014-09-17T02:25:08.354Z · LW(p) · GW(p)
It just means that the intelligence gap was smaller, potentially much, much smaller, when humans first started developing a serious edge relative to apes. It's not evidence for accumulation per se, but it's evidence against us just being so much smarter from the get-go, and renormalizing makes it function very much like evidence for accumulation.
↑ comment by cameroncowan · 2014-10-19T18:46:00.961Z · LW(p) · GW(p)
I think it was the ability to work together thanks to omega-3s from eating fish among other things. Our ability to create a course of action and execute as a group started us on the path to the present day.
comment by PhilGoetz · 2014-10-06T23:49:35.111Z · LW(p) · GW(p)
Comments:
It would be nice for at least one futurist who shows a graph of GDP to describe some of the many, many difficulties in comparing GDP across years, and to talk about the distribution of wealth. The power-law distribution of wealth means that population growth without a shift in the wealth distribution can look like an exponential increase in wealth, while actually the wealth of all but the very wealthy must decrease to preserve the same distribution. Arguably, this has happened repeatedly in American history.
I was very glad Nick mentioned that genetic algorithms are just another kind of hill-climbing, and have no mystical power. I suspect GA is inferior to hillclimbing with multiple random starts in most domains, though I'm ashamed to admit I haven't tested this in any way. GA is interesting not so much as an algorithm, but for how it can be used to classify and give insight into search problems. Ones where GA works better than hillclimbing are (my intuition) probably rare, yet constitute a large proportion of the difficult search problems we find solved by biology.
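For readers who want the baseline spelled out, here is a minimal sketch of hill-climbing with multiple random starts on a deliberately trivial bit-counting objective; everything here is a toy stand-in, not a test of the claim above:

```python
import random

# Hill-climbing with multiple random restarts on a toy bit-string objective.
# The objective and all parameters are arbitrary illustrations.

def fitness(bits):
    return sum(bits)                          # toy objective: count the 1s

def hill_climb(n_bits, max_steps=1000):
    x = [random.randint(0, 1) for _ in range(n_bits)]
    for _ in range(max_steps):
        i = random.randrange(n_bits)          # propose a single-bit flip
        y = x[:]
        y[i] ^= 1
        if fitness(y) >= fitness(x):          # greedy: keep it if not worse
            x = y
    return x

def with_restarts(n_bits, n_restarts=20):
    # Run independent climbs from random starting points; keep the best.
    return max((hill_climb(n_bits) for _ in range(n_restarts)), key=fitness)

best = with_restarts(64)
print(fitness(best))                          # typically 64 on this easy landscape
```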
His description of conditionalization, as "setting the new probability of those worlds that are inconsistent with the information received to zero" followed by renormalization, is incorrect in two ways. Conditionalization recomputes the probability of every state, and never sets any probabilities to zero. This latter point is a common enough error that it's distressing to see it here.
Showing that a Bayesian agent is impossible to make would be very involved, and not worthwhile. It's more important to argue that a Bayesian agent would usually lose to dumber, faster agents, because the trade-off between speed and correctness is essential when thinking about super-intelligences. Whether the most-successful "super-intelligences" could in fact be intelligent by our definitions is still an important open question. If fast and stupid wins the race in the long run, preserving human values will be difficult.
What happened in the late 80s was not that neural nets and GAs performed better than GOFAI; what happened was an argument about which activities represented "intelligence", which the reactive behavior / physical robot / statistical learning people won. Statistics and machine learning are still poor at the problems that GOFAI does well on.
"AI" is not a viable field anymore; anyone getting a degree in "artificial intelligence" would find themselves unemployable today. Its territory has been taken over by statistics and "machine learning". I think we do people a disservice to keep talking about machine intelligence using only the term "artificial intelligence", because it mis-directs them into the backwaters of research and development.
↑ comment by Houshalter · 2015-06-07T12:56:02.537Z · LW(p) · GW(p)
I remember there was a paper co-authored by one of the inventors of genetic algorithms. They tried to come up with a toy problem that would show where genetic algorithms definitely beat hill-climbing. The problem they came up with was extremely contrived. But with a slight modification to hill-climbing to make it slightly less greedy, it worked just as well as or better than the GA.
Statistics and machine learning are still poor at the problems that GOFAI does well on.
We are just starting to see ML successfully applied to search problems. There was a paper on deep neural networks that predicted the moves of Go experts 45% of the time. Another paper found deep learning could significantly narrow the search space for automatically finding mathematical identities. Reinforcement learning, which is just heuristic search but very general, is becoming increasingly popular.
↑ comment by Lumifer · 2014-10-07T00:41:11.451Z · LW(p) · GW(p)
I suspect GA is inferior to hillclimbing with multiple random starts in most domains
Simulated annealing is another similar class of optimizers with interesting properties.
As to standard hill-climbing with multiple starts, it fails in the presence of a large number of local optima. If your error landscape is lots of small hills, each restart will get you to the top of the nearest small hill but you might never get to that large range in the corner of your search space.
In any case most domains have their characteristics or peculiarities which make certain search algorithms perform well and others perform badly. Often enough domain-specific tweaks can improve things greatly compared to the general case...
comment by Larks · 2014-09-19T01:01:31.663Z · LW(p) · GW(p)
In fact for the thousand years until 1950 such extrapolation would place an infinite economy in the late 20th Century! The time since 1950 has been strange apparently.
(Forgive me for playing with anthropics, a tool I do not understand) - maybe something happened to all the observers in worlds that didn't see stagnation set in during the '70s. I guess this is similar to the common joke 'explanation' for the bursting of the tech bubble.
comment by tmosley · 2014-09-18T00:39:32.605Z · LW(p) · GW(p)
With respect to "Growth of Growth", wouldn't the chart ALWAYS look like that, with the near end trailing downwards? The sampling time is decreasing logarithmically, so unless you are sitting right on top of the singularity/production revolution, it should always look like that.
Just got thinking about what happened around 1950 specifically and couldn't find any real reason for it to drop off right there. WWII was well over, and the gold exchange standard remained for another 21 years, and those are the two primary framing events for that timeframe, so far as I can tell.
comment by KatjaGrace · 2014-09-16T04:08:58.232Z · LW(p) · GW(p)
Without the benefit of hindsight, which past technologies would you have expected to make a big difference to human productivity? For an example, if you think that humans' tendency to share information through language is hugely important to their success, then you might expect the printing press to help a lot, or the internet.
Relatedly, if you hadn't already been told, would you have expected agriculture to be a bigger deal than almost anything else?
Replies from: Lumifer, ciphergoth, cameroncowan, PhilGoetz↑ comment by Lumifer · 2014-09-16T20:04:35.163Z · LW(p) · GW(p)
That's an impossible question -- we have no capability to generate clones of ourselves with no knowledge of history. The only things you can get as answers are post-factum stories.
An answerable version would be "which past technologies at that time they appeared did people expect to be a big deal or no big deal?" But that answer requires a lot of research, I think.
↑ comment by Paul Crowley (ciphergoth) · 2014-09-16T10:13:17.245Z · LW(p) · GW(p)
I don't understand this question!
Replies from: KatjaGrace↑ comment by KatjaGrace · 2014-09-16T14:50:20.914Z · LW(p) · GW(p)
Sorry! I edited it - tell me if it still isn't clear.
Replies from: ciphergoth, rlsj↑ comment by Paul Crowley (ciphergoth) · 2014-09-17T07:08:23.807Z · LW(p) · GW(p)
I'm afraid I'm still confused. Maybe it would help if you could make explicit the connection between this question and the underlying question you're hoping to shed light on!
Replies from: gjm↑ comment by gjm · 2014-09-17T08:32:01.726Z · LW(p) · GW(p)
In case it helps, here is what I believe to be a paraphrase of the question.
"Consider technological developments in the past. Which of them, if you'd been looking at it at the time without knowing what's actually come of it, would you have predicted to make a big difference?"
And my guess at what underlies it:
"We are trying to evaluate the likely consequences of AI without foreknowledge. It might be useful to have an idea of how well our predictions match up to reality. So let's try to work out what our predictions would have been for some now-established technologies, and see how they compare with how they actually turned out."
To reduce bias one should select the past technologies in a way that doesn't favour ones that actually turned out to be important. That seems difficult, but then so does evaluating them while suppressing what we actually know about what consequences they had...
Replies from: KatjaGrace↑ comment by KatjaGrace · 2014-09-22T04:08:14.614Z · LW(p) · GW(p)
Yes! That's what I meant. Thank you :)
↑ comment by rlsj · 2014-09-16T19:49:38.006Z · LW(p) · GW(p)
Please, Madam Editor: "Without the benefit of hindsight," what technologies could you possibly expect?
The question should perhaps be, What technology development made the greatest productive difference? Agriculture? IT? Et alia? "Agriculture" if your top appreciation is for quantity of people, which admittedly subsumes a lot; IT if it's for positive feedback in ideas. Electrification? That's the one I'd most hate to lose.
↑ comment by cameroncowan · 2014-10-19T18:50:06.336Z · LW(p) · GW(p)
I think people greatly underestimated beasts of burden and the wheel. We can see that from cultures that didn't have the wheel, like the Incas. The value of treating disease/medicine went largely unrealized until the modern age; it was not as important to the ancients. I also think labor-saving technology in the 19th century was really underappreciated. I don't think people realized how much less manual labor we would need within their own lifetimes. There are so many small things that had a huge impact, like the moldboard plow that made farming in North America possible.
↑ comment by PhilGoetz · 2014-10-07T00:15:11.340Z · LW(p) · GW(p)
Interesting example, because agriculture decreased the productivity and quality of life of most humans, by letting them make more humans. AIs may foresee and prevent tragedies of the commons such as agriculture, or the proliferation of AIs, that would be on the most direct route to an intelligence explosion.
comment by jallen · 2014-09-16T02:46:41.617Z · LW(p) · GW(p)
I'm curious if any of you feel that future widespread use of commercial scale quantum computing (here I am thinking of at least thousands of quantum computers in the private domain with a multitude of programs already written, tested, available, economic and functionally useful) will have any impact on the development of strong A.I.? Has anyone read or written any literature with regards to potential windfalls this could bring to A.I.'s advancement (or lack thereof)?
I'm also curious if other paradigm shifting computing technologies could rapidly accelerate the path toward superintelligence?
Replies from: paulfchristiano, passive_fist, lukeprog, NxGenSentience, SteveG↑ comment by paulfchristiano · 2014-09-16T04:13:25.745Z · LW(p) · GW(p)
Based on the current understanding of quantum algorithms, I think the smart money is on a quadratic (or sub-quadratic) speedup from quantum computers on most tasks of interest for machine learning. That is, rather than taking N^2 time to solve a problem, it can be done in N time. This is true for unstructured search and now for an increasing range of problems that will quite possibly include the kind of local search that is the computational bottleneck in much modern machine learning. Much of the work of serious quantum algorithms people is spreading this quadratic speedup to more problems.
In the very long run quantum computers will also be able to go slightly further than classical computers before they run into fundamental hardware limits (this is beyond the quadratic speedup). I think they should not be considered as fundamentally different than other speculative technologies that could allow much faster computing; their main significance is increasing our confidence that the future will have much cheaper computation.
I think what you should expect to see is a long period of dominance by classical computers, followed eventually by a switching point where quantum computers pass their classical analogs. In principle you might see faster progress after this switching point (if you double the size of your quantum computer, you can do a brute force search that is 4 times as large, as opposed to twice as large with a classical computer), but more likely this would be dwarfed by other differences which can have much more than a factor of 2 effect on the rate of progress. This looks likely to happen long after growth has slowed for the current approaches to building cheaper classical computers.
For domains that experience the full quadratic speedup, I think this would allow us to do brute force searches something like 10-20 orders of magnitude larger before hitting fundamental physical limits.
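A one-line way to see the "double the machine, quadruple the search" point, assuming the full Grover-type quadratic speedup:

```latex
T_{\mathrm{classical}} \sim N, \qquad T_{\mathrm{quantum}} \sim \sqrt{N}
\;\Longrightarrow\; N_{\mathrm{reachable}} \sim T^{2}
```

Doubling the budget T therefore quadruples the reachable N, whereas a classical brute-force search only doubles it.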
Note that D-wave and its ilk are unlikely to be relevant to this story; we are a good ways off yet. I would even go further and bet on essentially universal quantum computing before such machines become useful in AI research, though I am less confident about that one.
↑ comment by passive_fist · 2014-09-16T04:48:03.013Z · LW(p) · GW(p)
I've worked on the D-Wave machine (in that I've run algorithms on it - I haven't actually contributed to the design of the hardware). About that machine, I have no idea if it's eventually going to be a huge deal faster than conventional hardware. It's an open question. But if it can, it would be huge, as a lot of ML algorithms can be directly mapped to D-wave hardware. It seems like a perfect fit for the sort of stuff machine learning researchers are doing at the moment.
About other kinds of quantum hardware, their feasibility remains to be demonstrated. I think we can say with fair certainty that there will be nothing like a 512-qubit fully-entangled quantum computer (what you'd need to, say, crack the basic RSA algorithm) within the next 20 years at least. Personally I'd put my money on >50 years in the future. The problems just seem too hard; all progress has stalled; and every time someone comes up with a way to try to solve them, it just results in a host of new problems. For instance, topological quantum computers were hot a few years ago since people thought they would be immune to some types of decoherence. As it turned out, though, they just introduce sensitivity to new types of decoherence (thermal fluctuations). When you do the math, it turns out that you haven't actually gained much by using a topological framework, and further you can simulate a topological quantum computer on a normal one, so really a TQC should be considered as just another quantum error correction scheme, of which we already know many.
All indications seem to be that by 2064 we're likely to have a human-level AI. So I doubt that quantum computing will have any effect on AI development (or at least development of a seed AI). It could have a huge effect on the progression of AI though.
Replies from: TRIZ-Ingenieur, KatjaGrace↑ comment by TRIZ-Ingenieur · 2014-09-18T01:45:06.964Z · LW(p) · GW(p)
Our human cognition is mainly based on pattern recognition (compare Ray Kurzweil, "How to Create a Mind"). Information stored in the structures of our cranial neural network sometimes waits for decades until a trigger stimulus makes a pattern recognizer fire. Huge amounts of patterns can be stored while most pattern recognizers are in sleeping mode, consuming very little energy. Quantum computing, with coherence times on the order of seconds, is totally unsuitable for the synergistic task of pattern analysis and long-term pattern memory with millions of patterns. IBM's newest SyNAPSE chip, with 5.4 billion transistors on a 3.5 cm² chip and only 70 mW power consumption in operation, is far better suited to push technological development towards AI.
↑ comment by KatjaGrace · 2014-09-16T15:01:33.088Z · LW(p) · GW(p)
All indications seem to be that by 2064 we're likely to have a human-level AI.
What are the indications you have in mind?
Replies from: passive_fist↑ comment by passive_fist · 2014-09-17T06:33:43.363Z · LW(p) · GW(p)
Katja, that's a great question, and highly relevant to the current weekly reading sessions on Superintelligence that you're hosting. As Bostrom argues, all indications seem to be that the necessary breakthroughs in AI development can at least be seen over the horizon, whereas in my opinion (and I'm an optimist) general quantum computing seems to require much larger breakthroughs.
Replies from: NxGenSentience↑ comment by NxGenSentience · 2014-09-19T17:49:01.971Z · LW(p) · GW(p)
From what I have read in open-source science and tech journals and news sources, general quantum computing seems to be coming faster than the time frame you suggested. I wouldn't be surprised to see it as soon as 2024 in prototypical, alpha, or beta testing, and I think it a safe bet by 2034 for wider deployment. As to very widespread adoption, perhaps a bit later; and with respect to efforts by governments to control the tech for security reasons, perhaps also ... later here, earlier there.
Replies from: passive_fist↑ comment by passive_fist · 2014-09-20T01:43:41.316Z · LW(p) · GW(p)
Scott Aaronson seems to disagree: http://www.nytimes.com/2011/12/06/science/scott-aaronson-quantum-computing-promises-new-insights.html?_r=3&ref=science&pagewanted=all&
FTA: "The problem is decoherence... In theory, it ought to be possible to reduce decoherence to a level where error-correction techniques could render its remaining effects insignificant. But experimentalists seem nowhere near that critical level yet... useful quantum computers might still be decades away"
Replies from: NxGenSentience↑ comment by NxGenSentience · 2014-09-20T11:44:27.700Z · LW(p) · GW(p)
Hi, and thanks for the link. I just read the entire article, which was good for a general news piece, and correspondingly not definitive (therefore, I'd consider it journalistically honest) about the time frame. "...might be decades away..." and "...might not really see them in the 21st century..." come to mind as lower and upper estimates.
I don't want to get out of my depth here, because I have not exhaustively (or representatively) surveyed the field, nor am I personally doing any of the research.
But I still say I have found a significant percentage of articles -- on those Nature summary sites, on PubMed (oddly, lots of "physical sciences" journals are on there now too), and in "smart layman" publications like New Scientist and the SciAm news site -- that continue to run mini-stories about groups nibbling away at the decoherence problem and finding approaches that don't require supercooled, exotic vacuum chambers (some even working with the possibility of chips).
If 10 percent of these stories have legs and aren't hype, that would mean I have read dozens which might yield prototypes in a 10 - 20 year time window.
The Google-NASA-UCSB joint project seems pretty near term (i.e. not 40 or 50 years down the road).
Given Google's penchant for quietly working away and then doing something amazing the world thought was a generation away -- like unveiling the driverless cars that the Governor and legislature of Michigan (as in, of course, Detroit) are in the process of licensing for larger-scale production and deployment -- it wouldn't surprise me if one popped up in 15 years that could begin doing useful work.
Then it's just daisy-chaining, and parallelizing with classical supercomputers doing error correction, preparing datasets to exploit what QCs do best, and interleaving that with conventional techniques.
I don't think 2034 is overly optimistic. But, caveat revealed, I am not in the field doing the work, just reading what I can about it.
I am more interested in this: positing that we add them to our toolkit, what can we do that is relevant to creating "interesting" forms of AI?
Thanks for your link to the NYT article.
Replies from: passive_fist↑ comment by passive_fist · 2014-09-21T21:39:03.048Z · LW(p) · GW(p)
Part of the danger of reading those articles as someone who is not actively involved in the research is that one gets an overly optimistic impression. They might say they achieved X, without saying they didn't achieve Y and Z. That's not a problem from an academic integrity point of view, since not being able to do Y and Z would be immediately obvious to someone versed in the field. But every new technique comes with a set of tradeoffs, and real progress is much slower than it might seem.
↑ comment by lukeprog · 2014-09-16T03:48:48.393Z · LW(p) · GW(p)
I've seen several papers like "Quantum speedup for unsupervised learning" but I don't know enough about quantum algorithms to have an opinion on the question, really.
Replies from: lukeprog↑ comment by lukeprog · 2014-09-16T17:14:25.025Z · LW(p) · GW(p)
Another paper I haven't read: "Can artificial intelligence benefit from quantum computing?"
Replies from: NxGenSentience↑ comment by NxGenSentience · 2014-09-20T11:51:13.596Z · LW(p) · GW(p)
Luke,
Thanks for posting the link. It's an April 2014 paper, as you know. I just downloaded the PDF and it looks pretty interesting. I'll post my impressions, if I have anything worthwhile to say, either here in Katja's group or up top on LW generally, when I have time to read more of it.
↑ comment by NxGenSentience · 2014-09-19T17:41:19.881Z · LW(p) · GW(p)
Did you read about Google's partnership with NASA and UCSD to build a quantum computer of 1000 qubits?
Technologically exciting, but ... imagine a world without encryption. As if all locks and keys on all houses, cars, banks, nuclear vaults, whatever, disappeared, only incomparably more consequential.
That would be catastrophic, for business, economies, governments, individuals, every form of commerce, military communication....
Didn't answer your question, I am sorry, but as a "fan" of quantum computing, and also a person with a long-time interest in the quantum Zeno effect, free will, and the implications for consciousness (as often discussed by Henry Stapp, among others), I am both excited, yet feel a certain trepidation. Like I do about nanotech.
I am writing a long essay and preparing a video on the topic, but it is a long way from completion. I do think it (qc) will have a dramatic effect on artifactual consciousness platforms, and I am even more certain that it will accelerate superintelligence (which is not at all the same thing, as intelligence and consciousness, in my opinion, are not coextensive).
Replies from: asr, ChristianKl, cameroncowan↑ comment by asr · 2014-09-19T18:16:56.979Z · LW(p) · GW(p)
Did you read about Google's partnership with NASA and UCSD to build a quantum computer of 1000 qubits?
Technologically exciting, but ... imagine a world without encryption. As if all locks and keys on all houses, cars, banks, nuclear vaults, whatever, disappeared, only incomparably more consequential.
My understanding is that quantum computers are known to be able to break RSA and elliptic-curve-based public-key crypto systems. They are not known to be able to break arbitrary symmetric-key ciphers or hash functions. You can do a lot with symmetric-key systems -- Kerberos doesn't require public-key authentication. And you can sign things with Merkle signatures.
There are also a number of candidate public-key cryptosystems that are believed secure against quantum attacks.
So I think we shouldn't be too apocalyptic here.
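A rough way to quantify the asymmetry described above (my own illustration; the numbers are standard rules of thumb rather than anything from the comment): Shor's algorithm breaks RSA and elliptic-curve keys in polynomial time, whereas the best known generic quantum attack on a symmetric cipher is Grover search, which only halves the effective key length, so doubling symmetric key sizes restores the security margin. A small sketch:

```python
def effective_symmetric_bits(key_bits):
    """Grover-style brute force needs ~2**(key_bits/2) work, so the
    effective security level is roughly half the nominal key length."""
    return key_bits // 2

for key_bits in (128, 256):
    print(f"{key_bits}-bit symmetric key: ~{effective_symmetric_bits(key_bits)}-bit "
          f"security against a quantum brute-force attacker")

# By contrast, RSA and elliptic-curve keys of any practical size fall to
# Shor's algorithm outright, which is why the comment singles out
# public-key systems as the part that breaks.
```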
Replies from: NxGenSentience↑ comment by NxGenSentience · 2014-09-20T12:03:26.561Z · LW(p) · GW(p)
Asr,
Thanks for pointing out the wiki article, which I had not seen. I actually feel a tiny bit relieved, but I still think there are a lot of very serious forks in the road that we should explore.
If we do not pre-engineer a soft landing, this is the first existential catastrophe that we should be working to avoid.
A world that suddenly loses encryption (or even faith in encryption!) would be roughly equivalent to a world without electricity.
I also worry about the legacy problem... all the critical documents encrypted with RSA, PGP, etc., sitting on hard drives, servers, and CD-ROMs, that suddenly become visible to anyone with access to the tech. How do we go about re-encrypting all those "eyes only" critical docs into a post-quantum coding system (assuming one is shown practical and reliable), without those documents being "looked at" or opportunistically copied in their limbo state between old and new encrypted status?
Who can we trust to do all this conversion, even given the new algorithms are developed?
This is actually almost intractably messy, at first glance.
↑ comment by ChristianKl · 2014-09-19T18:41:16.010Z · LW(p) · GW(p)
I am writing a long essay and preparing a video on the topic, but it is a long way from completion. I do think it (qc) will have a dramatic effect on artifactual consciousness platforms
What do you mean by artificial consciousness to the extent that it's not intelligence, and why do you think the problem is in a form where quantum computers are helpful? Which specific mathematical problems do you think are important for artificial consciousness that are better solved via quantum computers than with our current computers?
Replies from: NxGenSentience↑ comment by NxGenSentience · 2014-09-19T21:07:28.901Z · LW(p) · GW(p)
What do you mean by artificial consciousness to the extent that it's not intelligence, and why do you think the problem is in a form where quantum computers are helpful?
The claim wasn't that artifactual consciousness wasn't (likely to be) sufficient for a kind of intelligence, but that they are not coextensive. It might have been clearer to say that consciousness is (closer to being) sufficient for intelligence than intelligence (the way computer scientists often use the term) is to being a sufficient condition for consciousness (which it is not at all).
I needn't have restricted the point to artifact-based consciousness, actually. Consider absence seizures (epilepsy) in neurology. A man can seize (lose "consciousness"), get up from his desk, get the car keys, drive to a mini-mart, buy a pack of cigarettes, make polite chat while he gets change from the clerk, drive home (obeying traffic signals), lock up his car, unlock and enter his house, and lie down for a nap, all in an absence-seizure state, and post-ictally recall nothing. (Neurologists are confident these cases withstand all proposals to attribute postictal "amnesia" to memory failure. Indeed, seizures in susceptible patients can be induced, witnessed, EEGed, etc., from start to finish, by neurologists.) Moral: intelligent behavior occurs, consciousness doesn't. Thus, not coextensive. I have other arguments, also.
As to your second question, I'll have to defer an answer for now, because it would be copiously long... though I will try to think of a reply (plus the idea is very complex and needs a little more polish, but I am convinced of its merit). I owe you a reply, though, before we're through with this forum.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-09-20T10:19:32.746Z · LW(p) · GW(p)
Neurologists are confident these cases withstand all proposals to attribute postictal "amnesia" to memory failure
Is there an academic paper that makes that argument? If so, could you reference it?
Replies from: NxGenSentience↑ comment by NxGenSentience · 2014-09-20T12:12:17.391Z · LW(p) · GW(p)
I have dozens, some of them so good I have actually printed hardcopies of the PDFs -- sometimes misplacing the DOIs in the process.
I will get some though; some of them are, I believe, required reading for those of us looking at the human brain for lessons about the relationship between "consciousness" and other functions. I have a particularly interesting one (74 pages, but it's a page turner) that I will try to find the original computer record of. Found it and most of them on PubMed.
If we are in a different thread string in a couple days, I will flag you. I'd like to pick a couple of good ones, so it will take a little re-reading.
↑ comment by cameroncowan · 2014-10-19T18:43:58.954Z · LW(p) · GW(p)
I think it is that kind of thing that we should start thinking about, though. It's the consequences that we have to worry about as much as developing the tech. Too often, new things have been created and people have not been mindful of the consequences of their actions. I welcome the discussion.
↑ comment by SteveG · 2014-09-16T03:35:17.493Z · LW(p) · GW(p)
The D-Wave quantum computer solves a general class of optimization problems very quickly. It cannot speed up any arbitrary computing task, but the class of computing problems which include an optimization task it can speed up appears to be large.
Many "AI Planning" tasks will be a lot faster with quantum computers. It would be interesting to learn what the impact of quantum computing will be on other specific AI domains like NLP and object recognition.
We also have:
- Reversible computing
- Analog computing
- Memristors
- Optical computing
- Superconductors
- Self-assembling materials
And lithography, or printing, just keeps getting faster on smaller and smaller objects and is going from 2D to 3D.
When Bostrom starts to talk about it, I would like to hear people's opinions about untangling the importance of hardware vs. software in the future development of AI.
comment by KatjaGrace · 2014-09-16T01:21:05.470Z · LW(p) · GW(p)
Have you seen any demonstrations of AI which made a big impact on your expectations, or were particularly impressive?
Replies from: ciphergoth, mvp9, AshokGoel, negamuhia, RoboTeddy↑ comment by Paul Crowley (ciphergoth) · 2014-09-16T08:22:08.227Z · LW(p) · GW(p)
The Deepmind "Atari" demonstration is pretty impressive https://www.youtube.com/watch?v=EfGD2qveGdQ
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-09-16T08:49:22.694Z · LW(p) · GW(p)
Deepmind Atari technical paper.
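For context on what the Atari system is doing (a paraphrase of the published deep Q-learning method; the names and the stub below are illustrative, not from the paper): the agent learns an action-value function Q(s, a) with a convolutional network trained toward the one-step Q-learning target, using only pixels and the game score as input. A minimal sketch of just the target computation:

```python
GAMMA = 0.99  # discount factor; the paper uses a value close to 1

def q_learning_target(reward, next_state, done, q_function, actions):
    """One-step Q-learning target: r + gamma * max_a' Q(s', a'),
    or just r if the episode has ended."""
    if done:
        return reward
    return reward + GAMMA * max(q_function(next_state, a) for a in actions)

# Toy stand-in for the convolutional network's value estimates.
toy_q_values = {("frame_42", 0): 0.1, ("frame_42", 1): 0.7, ("frame_42", 2): 0.3}
toy_q = lambda state, action: toy_q_values[(state, action)]

print(q_learning_target(reward=1.0, next_state="frame_42", done=False,
                        q_function=toy_q, actions=[0, 1, 2]))  # 1.0 + 0.99 * 0.7
```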
↑ comment by mvp9 · 2014-09-16T01:55:17.526Z · LW(p) · GW(p)
Again, this is a famous one, but Watson seems really impressive to me. It's one thing to understand basic queries and do a DB query in response, but its ability to handle indirect questions that would confuse many a person (guilty), was surprising.
On the other hand, its implementation (as described in The Second Machine Age) seems to be just as algorithmic, brittle and narrow as Deep Blue - basically Watson was as good as its programmers...
Replies from: SteveG↑ comment by SteveG · 2014-09-16T02:31:37.928Z · LW(p) · GW(p)
Along with self-driving cars, Watson's Jeopardy win shows that, given enough time, a team of AI engineers has an excellent chance of creating a specialized system which can outpace the best human expert in a much wider variety of tasks than we might have thought before.
The capabilities of such a team have risen dramatically since I first studied AI. Charting and forecasting the capabilities of such a team is worthwhile.
Having an estimate of what such a team will be able to accomplish in ten years is material to knowing when they will be able to do things we consider dangerous.
After those two demonstrations, what narrow projects could we give a really solid AI team which would stump them? The answer is no longer at all clear. For example, the SAT or an IQ test seem fairly similar to Jeopardy, although the NLP tasks differ.
The Jeopardy system also did not incorporate a wide variety of existing methods and solvers, because they were not needed to answer Jeopardy questions.
In short order an IBM team can incorporate systems which can extract information from pictures and video, for example, into a Watson application.
Replies from: NxGenSentience↑ comment by NxGenSentience · 2014-09-20T19:58:23.107Z · LW(p) · GW(p)
Watson's Jeopardy win shows that, given enough time, a team of AI engineers has an excellent chance of creating a specialized system which can outpace the best human expert in a much wider variety of tasks than we might have thought before.
One could read that comment on a spectrum of charitableness. I will speak for myself, at the risk of ruffling some feathers, but we are all here to bounce ideas around, not toe any party lines, right? To me, Watson's win means very little, almost nothing. Expert systems have been around for years, even decades. I experimented with coding one myself, many years ago.
It shows what we already knew: given a large budget, a large team of mission-targeted programmers can hand craft a mission specific expert system out of an unlimited pool of hardware resources, to achieve a goal like winning a souped-up game of trivia, laced with puns as well as literal questions.
It was a billion dollar stunt, IMO, by IBM and related project leaders.
Has it achieved consciousness, self awareness, evidence of compassion, a fear of death, moral intuition?
That would have impressed me, that we were entering a new era. (And I will try to rigorously claim, over time, that this is exactly what we really need in order to have a fighting chance of producing fAGI.) I think those not blinded by a paradigm that should have died out with logical positivism and behaviorism would like to admit (some fraction of them) that penetrating, intellectually honest analysis accumulates a conviction that no mechanical decision procedure we design, no matter how spiffy our mathematics (and I was a math major with straight As in my day), can guarantee that an emotionless, compassionless, amoral, non-conscious, mechanically goal-seeking apparatus will not -- inadvertently or advertently -- steamroller right over us.
I will speak more about that as time goes on. But in keeping with my claim yesterday that "intelligence" and "consciousness" are not coextensive in any simple way, "intelligence" and "sentience" are disjoint. I think that the autonomous "restraint" we need, to make AGIs into friendly AGIs, requires giving them sentience, and creating conditions favorable to them discovering a morality compatible with our own.
Creativity, free will (or autonomy, in language with less philosophical baggage), emotion, a theory of ethics and meta-ethics, and a theory of motivation.... we need to make progress on these, the likely basic building blocks of moral, benign, enlightened, beneficent forms of sentience... as well as progress on the fancy tech needed to implement this, once we have some idea what we are actually trying to implement.
And that thing we should implement is not, in my opinion, ever more sophisticated Watsons, or groups of hundreds or thousands of them, each hand crafted to achieve a specific function (machine vision, unloading a dishwasher, .....) Oh, sure, that would work, just like Watson worked. But if we want moral intuition to develop, a respect for life to develop, we need to have a more ambitious goal.
And I actually think we can do it. Now is the time. The choice that confronts us, really, is not uAGI vs. fAGI, but dumb GOFAI vs. sentient AI.
Watson: just another expert system. Had someone given me the budget and offered to let me lead a project team to build Watson, I would have declined, because it was clear in advance that it was just a (more nuanced) brute-force, custom-crafted and tuned expert system. Its success was assured, given a deep wallet.
What did we learn? Maybe some new algorithm-optimizations or N-space data structure topologies were discovered along the way, but nothing fundamental.
I'd have declined to lead the project (not that I would have been asked), because it was uninteresting. There was nothing to learn, and nothing much was learned, except some nuances of tech that always are acquired when you do any big distributed supercomputing, custom programming project.
We'll learn as much making the next gen weather simulator.
↑ comment by AshokGoel · 2014-09-16T01:33:45.321Z · LW(p) · GW(p)
One demonstration of AI that I find impressive is that AI agents can now take and "pass" some intelligence tests. For example, AI agents can now do about as well as a typical American teenager on the Raven's test of human intelligence.
Replies from: TRIZ-Ingenieur↑ comment by TRIZ-Ingenieur · 2014-09-18T01:55:58.907Z · LW(p) · GW(p)
As long as a chatbot does not understand what it is chatting about, it is not worth real debate. The "pass" is more an indication of how easily we get cheated. When we think while speaking we easily start waffling. This is normal human behaviour, same as silly jumps in topic. Jumping between topics was this chatbot's trick to hide its non-understanding.
↑ comment by negamuhia · 2014-09-16T12:28:28.789Z · LW(p) · GW(p)
Sergey Levine's research on guided policy search (using techniques such as hidden Markov models to animate, in real time, the movement of a bipedal or quadrupedal character). An example:
Sergey Levine, Jovan Popović. Physically Plausible Simulation for Character Animation. SCA 2012: http://www.eecs.berkeley.edu/~svlevine/papers/quasiphysical.pdf
comment by KatjaGrace · 2014-09-16T04:10:06.592Z · LW(p) · GW(p)
Are there foreseeable developments other than human-level AI which might produce much faster economic growth? (p2)
Replies from: mvp9, Sebastian_Hagen, TRIZ-Ingenieur, cameroncowan↑ comment by mvp9 · 2014-09-16T05:19:23.704Z · LW(p) · GW(p)
I think the best bets as of today would be truly cheap energy (whether through fusion, ubiquitous solar, etc.) and nano-fabrication. Though it may not happen, we could see these play out over a 20-30 year term.
The bumps from this, however, would be akin to the steam engine's: dwarfed by (or possibly a result of) the AI.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2014-09-16T08:13:23.613Z · LW(p) · GW(p)
The steam engine heralded the Industrial Revolution and a lasting large increase in the doubling rate. I would expect rapid economic growth after either of these inventions, followed by a return to the existing doubling rate.
Replies from: rlsj, KatjaGrace↑ comment by rlsj · 2014-09-16T19:30:57.135Z · LW(p) · GW(p)
After we achieve a society of real abundance, further economic growth will have lost its incentive.
We can argue whether or not such a society is truly reachable, even if only in the material sense. If not, because of human intractability or AGI inscrutability, progress may continue onward and upward. Perhaps here, as in happiness, it's the pursuit that counts.
↑ comment by KatjaGrace · 2014-09-22T04:05:28.708Z · LW(p) · GW(p)
Why do you expect a return to the existing doubling rate in these cases?
↑ comment by Sebastian_Hagen · 2014-09-16T20:44:01.062Z · LW(p) · GW(p)
Would you count uploads (even if we don't understand the software) as a kind of AI? If not, those would certainly work.
Otherwise, there are still things one could do with human brains. Better brain-computer-interfaces would be helpful, and some fairly mild increases in genome understanding could allow us to massively increase the proportion of people functioning at human genius level.
↑ comment by TRIZ-Ingenieur · 2014-09-18T00:51:45.192Z · LW(p) · GW(p)
For the fastest economic growth it is not necessary to achieve human-level intelligence; it may even be a hindrance. Highly complex social behaviour for finding a reproductive partner is not necessary for economic success. A totally unbalanced AI character with highly superhuman skills in creativity, programming, engineering and cheating humans could beat a more balanced AI character and self-improve faster. Today's semantic big-data search is already magnitudes faster than human research in a library using a paper catalog. We have to note highly super-human performance at answering questions and low sub-human performance at asking questions. Strong AI is so complex that projects on normal business time frames go for the low-hanging fruit. If the outcome of such a project can be called an AI, it is with highest probability extremely imbalanced in its performance and character.
↑ comment by cameroncowan · 2014-10-19T18:47:37.742Z · LW(p) · GW(p)
Nanotech is the next big thing because you will have lots of self-replicating tiny machines that can quickly work together as a kind of hive mind. That's important.
comment by KatjaGrace · 2014-09-16T04:08:04.540Z · LW(p) · GW(p)
Bostrom says that it is hard to imagine the world economy having a doubling time as short as weeks, without minds being created that are much faster and more efficient than those of humans (p2-3). Do you think humans could maintain control of an economy that grew so fast? How fast could it grow while humans maintained control?
Replies from: rlsj↑ comment by rlsj · 2014-09-16T19:56:58.553Z · LW(p) · GW(p)
Excuse me? What makes you think it's in control? Central Planning lost a lot of ground in the Eighties.
Replies from: KatjaGrace, Liso, cameroncowan, rcadey↑ comment by KatjaGrace · 2014-09-22T04:19:56.358Z · LW(p) · GW(p)
Good question.
I don't think central planning vs. distributed decision-making is relevant though, because it seems to me that either way humans make decisions similarly much: the question is just whether it is a large or a small number making decisions, and who decides what.
I usually think of the situation as there being a collection of (fairly) goal-directed humans, each with different amounts of influence, and a whole lot of noise that interferes with their efforts to do anything. These days humans can lose control in the sense that the noise might overwhelm their decision-making (e.g. if a lot of what happens is unintended consequences due to nobody knowing what's going on), but in the future humans might lose control in the sense that their influence as a fraction of the goal-directed efforts becomes very small. Similarly, you might lose control of your life because you are disorganized, or because you sell your time to an employer. So while I concede that we lack control already in the first sense, it seems we might also lose it in the second sense, which I think is what Bostrom is pointing to (though now I come to spell it out, I'm not sure how similar his picture is to mine).
↑ comment by Liso · 2014-09-19T04:13:22.026Z · LW(p) · GW(p)
This is a good point, which I would like to see analysed more precisely. (And I miss a deeper analysis in The Book :) )
Could we count the will (motivation) of today's superpowers (megacorporations) as human or not? (And to what degree could they control the economy?)
In other words: Is Searle's Chinese room intelligent? (In the definition which The Book uses for (super)intelligence.)
And if it is, is it a human or an alien mind?
And could it be superintelligent?
What arguments could we use to prove that none of today's corporations (or states or their secret services) is superintelligent? Think of collective intelligence with computer interfaces! Are they really slow at thinking? How could we measure their IQ?
And could we humans (who?) control them (how?) if they are superintelligent? Could we at least try to implant some moral thinking (or other human values) into their minds? How?
Law? Is law enough to prevent a superintelligent superpower from doing wrong things? (For example, destroying the rain forest because it wants to make more paperclips?)
↑ comment by cameroncowan · 2014-10-19T18:50:59.089Z · LW(p) · GW(p)
The economy is a group of people making decisions based on the actions of others. It's a non-centrally-regulated hive mind.
↑ comment by rcadey · 2014-09-21T20:35:05.877Z · LW(p) · GW(p)
I have to agree with rlsj here - I think we're at the point where humans can no longer cope with the pace of economic conditions - we already have hyper-low-latency trading systems making most of the decisions that underlie the current economy. Presumably the limit of economic growth will be linked to "global intelligence" - we seem to be at the point where human intelligence is the limiting factor (currently we seem to be unable to sustain economic growth without killing people and the planet!)
comment by KatjaGrace · 2014-09-16T01:19:55.142Z · LW(p) · GW(p)
Common sense and natural language understanding are suspected to be 'AI complete'. (p14) (Recall that 'AI complete' means 'basically equivalent to solving the whole problem of making a human-level AI')
Do you think they are? Why?
Replies from: devi, billdesmedt, mvp9↑ comment by devi · 2014-09-16T03:23:38.526Z · LW(p) · GW(p)
I think AI-completeness is a quite seductive notion. Borrowing the concept of reduction from complexity/computability theory makes it sound technical, but unlike in those fields I haven't seen anyone actually describing, e.g., how to use an AI with perfect language understanding to produce another one that proves theorems or philosophizes.
Spontaneously it feels like everyone here should in principle be able to sketch the outlines of such a program (at least in the case of a base-AI that has perfect language comprehension that we want to reduce to), probably by some version of trying to teach the AI as we teach a child in natural language. I suspect that the details of some of these reductions might still be useful, especially the parts that don't quite seem to work. For while I don't think that we'll see perfect machine translation before AGI, I'm much less convinced that there is a reduction from AGI to perfect translation AI. This illustrates what I suspect might be an interesting difference between two problem classes that we might both want to call AI-complete: the problems human programmers will likely not be able to solve before we create superintelligence, and the problems whose solutions we could (somewhat) easily re-purpose to solve the general problem of human-level AI. These classes look the same in that we shouldn't expect to see problems from either of them solved without an imminent singularity, but they differ in that the problems in the latter class could prove to be motivating examples and test-cases for AI work aimed at producing superintelligence.
I guess the core of what I'm trying to say is that arguments about AI-completeness have so far sounded like: "This problem is very very hard, we don't really know how to solve it. AI in general is also very very hard, and we don't know how to solve it. So they should be the same." Heuristically there's nothing wrong with this, except we should keep in mind that we could be very mistaken about what is actually hard. I'm just missing the part that goes: "This is very very hard. But if we knew it this other thing would be really easy."
Replies from: mvp9, KatjaGrace↑ comment by mvp9 · 2014-09-16T05:37:40.725Z · LW(p) · GW(p)
A different (non-technical) way to argue for their reducibility is through analysis of the role of language in human thought. The logic being that language by its very nature extends into all aspects of cognition (little human thought of interest takes place outside its reach), and so one cannot do one without the other. I believe that's the rationale behind the Turing test.
It's interesting that you mention machine translation though. I wouldn't equate that with language understanding. Modern translation programs are getting very good, and may in time be "perfect" (indistinguishable from competent native speakers), but they do this through pattern recognition and leveraging a massive corpus of translation data - not through understanding it.
Replies from: shullak7↑ comment by shullak7 · 2014-09-17T15:47:36.998Z · LW(p) · GW(p)
I think that "the role of language in human thought" is one of the ways that AI could be very different from us. There is research into the way that different languages affect cognitive abilities (e.g. -- https://psych.stanford.edu/~lera/papers/sci-am-2011.pdf). One of the examples given is that, as a native English-speaker, I may have more difficulty learning the base-10 structure in numbers than a Mandarin speaker because of the difference in the number words used in these languages. Language can also affect memory, emotion, etc.
I'm guessing that an AI's cognitive ability wouldn't change no matter what human language it's using, but I'd be interested to know what people doing AI research think about this.
Replies from: NxGenSentience, mvp9↑ comment by NxGenSentience · 2014-09-20T14:41:03.311Z · LW(p) · GW(p)
This is a really cool link and topic area. I was getting ready to post a note on intelligence amplification (IA), and was going to post it up top on the outer layer of LW, based on language.
I recall many years ago, there was some brief talk of replacing the QWERTY keyboard with a design that was statistically more efficient in terms of human hand ergonomics in executing movements for the most frequently seen combinations of letters (probably was limited to English, given American parochialism of those days, but still, some language has to be chosen.)
Because of the entrenched base of QWERTY typists, the idea didn't get off the ground. (Thus, we are penalizing countless more billions of new and future keyboard users, because of the legacy habits of a comparatively small percentage of total [current and future] keyboard users.)
It got me to thinking at the time, though, about whether a suitably designed human language would "open up" more of the brain's inherent capacity for communication. Maybe a larger alphabet, a different set of noun primitives, even modified grammar.
With respect to IA, might we get a freebie just out of redesigning -- designing from scratch -- a language that was more powerful, communicated on average what, say, English or French communicates, yet with fewer phonemes per concept?
Might we get an average 5- or 10-point equivalent IQ boost by designing a language that is both physically faster (fewer "wait states" while we are listening to a speaker) and which has larger conceptual bandwidth?
We could also consider augmenting spoken speech with signing of some sort, to multiply the alphabet. A problem occurs here for unwitnessed speech, where we would have to revert to the new language on its own (still gaining the postulated dividend from that.)
However, already, for certain kinds of communication, we all know that nonverbal communication accounts for a large share of total communicated meaning and information. We already have to "drop back" in bandwidth every time we communicate like this (print, exclusively.) In scientific and philosophical writing, it doesn't make much difference, fortunately, but still, a new language might be helpful.
This one, like many things that evolve on their own, is a bunch of add-ons (like the biological evolution of organisms), and the result is not necessarily the best that could be done.
↑ comment by mvp9 · 2014-09-17T18:41:20.679Z · LW(p) · GW(p)
Lera Boroditsky is one of the premier researchers on this topic. They've also done some excellent work on comparing spatial/time metaphors in English and Mandarin (?), showing that the dominant idioms in each language affect how people cognitively process motion.
But the question is broader -- whether some form of natural language is required ("natural", roughly meaning used by a group in day-to-day life, is key here). Differences between major natural languages are for the most part relatively superficial and translatable because their speakers are generally dealing with a similar reality.
Replies from: shullak7↑ comment by shullak7 · 2014-09-17T20:26:19.883Z · LW(p) · GW(p)
I think that is one of my questions; i.e., is some form of natural language required? Or maybe what I'm wondering is what intelligence would look like if it weren't constrained by language -- if that's even possible. I need to read/learn more on this topic. I find it really interesting.
↑ comment by KatjaGrace · 2014-09-16T03:46:22.931Z · LW(p) · GW(p)
A somewhat limited effort to reduce tasks to one another in this vein: http://www.academia.edu/1419272/AI-Complete_AI-Hard_or_AI-Easy_Classification_of_Problems_in_Artificial
↑ comment by billdesmedt · 2014-09-16T01:51:38.664Z · LW(p) · GW(p)
Human-level natural language facility was, after all, the core competency by which Turing's 1950 Test proposed to determine whether -- across the board -- a machine could think.
↑ comment by mvp9 · 2014-09-16T01:35:33.245Z · LW(p) · GW(p)
Depends on the criteria we place on "understanding." Certainly an AI may act in a way that invites us to attribute 'common sense' to it in some situations, without solving the 'whole problem.' Watson would seem to be a case in point - apparently demonstrating true language understanding within a broad, but still strongly circumscribed, domain.
Even if we take "language understanding" in the strong sense (i.e. meaning native fluency, including ability for semantic innovation, things like irony, etc), there is still the question of phenomenal experience: does having such an understanding entail the experience of such understanding - self-consciousness, and are we concerned with that?
I think that "true" language understanding is indeed "AI complete", but in a rather trivial sense that to match a competent human speaker one needs to have most of the ancillary cognitive capacities of a competent human.
Replies from: KatjaGrace↑ comment by KatjaGrace · 2014-09-16T01:45:12.378Z · LW(p) · GW(p)
Whether we are concerned about the internal experiences of machines seems to depend largely on whether we are trying to judge the intrinsic value of the machines, or judge their consequences for human society. Both seem important.
comment by KatjaGrace · 2014-09-16T04:11:52.233Z · LW(p) · GW(p)
Which arguments do you think are especially strong in this week's reading?
comment by KatjaGrace · 2014-09-16T04:11:26.685Z · LW(p) · GW(p)
Was there anything in particular in this week's reading that you would like to learn more about, or think more about?
Replies from: kgalias
comment by KatjaGrace · 2014-09-16T03:58:19.789Z · LW(p) · GW(p)
Whatever the nature, cause, and robustness of growth modes, the important observation seems to me to be that the past behavior of the economy suggests very much faster growth is plausible.
comment by VonBrownie · 2014-09-16T01:42:47.579Z · LW(p) · GW(p)
Are there any ongoing efforts to model the intelligent behaviour of other organisms besides the human model?
Replies from: lukeprog, NxGenSentience↑ comment by lukeprog · 2014-09-16T03:50:42.650Z · LW(p) · GW(p)
Definitely! See Wikipedia and e.g. this book.
Replies from: VonBrownie↑ comment by VonBrownie · 2014-09-16T04:28:11.284Z · LW(p) · GW(p)
Thanks... I will check it out further!
↑ comment by NxGenSentience · 2014-09-24T19:36:11.761Z · LW(p) · GW(p)
Yes, many. Go to PubMed and start drilling around, make up some search combinations, and you will get immediately onto lots of interesting research tracks. Cognitive neurobiology, systems neurobiology, and the many areas and journals you'll run across will keep you busy. There is some really terrific, amazing work. Enjoy.
comment by KatjaGrace · 2014-09-16T01:21:38.617Z · LW(p) · GW(p)
What did you find most interesting in this week's reading?
Replies from: VonBrownie↑ comment by VonBrownie · 2014-09-16T01:35:50.691Z · LW(p) · GW(p)
I found interesting the idea that great leaps forward towards the creation of AGI might not be a question of greater resources or technological complexity, but that we might be overlooking something relatively simple that could describe human intelligence, using the Ptolemaic vs. Copernican systems as an example.
comment by KatjaGrace · 2014-09-16T01:21:29.338Z · LW(p) · GW(p)
Was there anything in this week's reading that you would like someone to explain better?
Replies from: Liso↑ comment by Liso · 2014-09-16T06:13:34.296Z · LW(p) · GW(p)
First of all, thanks for your work on this discussion! :)
My proposals:
- wiki page for collaborative work
There are some points in the book which could be analysed or described better, and probably some which are wrong. We could find them and help improve them; a wiki could help us do that.
- a better time for Europe and the world?
But this is probably not a problem. If it is a problem then it is probably not solvable. We will see :)
Replies from: KatjaGrace, TRIZ-Ingenieur↑ comment by KatjaGrace · 2014-09-22T21:26:56.247Z · LW(p) · GW(p)
Thanks for your suggestions.
Regarding time, it is alas too hard to fit into everyone's non-work hours. Since the discussion continues for several days, I hope it isn't too bad to get there a bit late. If people would like to coordinate to be here at the same time though, I suggest Europeans pick a more convenient 'European start time', and coordinate to meet each other then.
Regarding a wiki page for collaborative work, I'm afraid MIRI won't be organizing anything like this in the near future. If anyone here is enthusiastic for such a thing, you are most welcome to begin it (though remember that such things are work to organize and maintain!) The LessWrong wiki might also be a good place for some such research. If you want a low maintenance collaborative work space to do some research together, you could also link to a google doc or something for investigating a particular question.
↑ comment by TRIZ-Ingenieur · 2014-09-18T01:17:28.299Z · LW(p) · GW(p)
I strongly support your idea to establish a collaborative work platform. Nick Bostrom's book brings so many not-yet-debated aspects into public debate that we should support him with input and feedback for the next edition of this book. He threw his hat into the ring, and our debate will push sales of his book. I suspect he would prefer to get comments and suggestions for better explanations in a structured manner.
comment by KatjaGrace · 2014-09-16T01:07:30.982Z · LW(p) · GW(p)
What do you think of I. J. Good's argument? (p4)
Replies from: VonBrownie, JonathanGossage↑ comment by VonBrownie · 2014-09-16T01:23:04.348Z · LW(p) · GW(p)
If an artificial superintelligence had access to all the prior steps that led to its current state I think Good's argument is correct... the entity would make exponential progress in boosting its intelligence still further. I just finished James Barrat's AI book Our Final Invention and found it interesting to note that Good towards the end of his life came to see his prediction as more danger than promise for continued human existence.
Replies from: KatjaGrace↑ comment by KatjaGrace · 2014-09-22T04:24:15.131Z · LW(p) · GW(p)
If a human had access to all of the prior steps that led to its current state, would it make progress boosting its intelligence fast enough that other humans didn't have to invent things again?
If not, what's the difference?
↑ comment by JonathanGossage · 2014-09-17T19:11:42.913Z · LW(p) · GW(p)
I think that the process that he describes is inevitable unless we do ourselves in through some other existential risk. Whether this will be for good or bad will largely depend on how we approach the issues of volition and motivation.
comment by KatjaGrace · 2014-09-16T01:06:36.202Z · LW(p) · GW(p)
How should someone familiar with past work in AI use that knowledge to judge how much work is left to be done before reaching human-level AI, or human-level ability at a particular kind of task?
Replies from: billdesmedt↑ comment by billdesmedt · 2014-09-16T01:38:27.603Z · LW(p) · GW(p)
One way to apply such knowledge might be in differentiating between approaches that are indefinitely extendable and/or expandable and those that, despite impressive beginnings, tend to max out beyond a certain point. (Think of Joe Weizenbaum's ELIZA as an example of the second.)
Replies from: gallabytes↑ comment by gallabytes · 2014-09-16T03:16:46.862Z · LW(p) · GW(p)
Do you have any examples of approaches that are indefinitely extendable?
Replies from: billdesmedt↑ comment by billdesmedt · 2014-09-16T19:38:39.549Z · LW(p) · GW(p)
Whole Brain Emulation might be such an example, at least insofar as nothing in the approach itself seems to imply that it would be prone to get stuck in some local optimum before its ultimate goal (AGI) is achieved.
Replies from: JonathanGossage↑ comment by JonathanGossage · 2014-09-17T19:25:07.574Z · LW(p) · GW(p)
However, Whole Brain Emulation is likely to be much more resource intensive than other approaches, and if so will probably be no more than a transitional form of AGI.
comment by NxGenSentience · 2014-09-19T17:16:22.038Z · LW(p) · GW(p)
This question of thresholds for 'comprehension' -- to use the judiciously applied scare quotes Katja used about comprehension (I'll have more to say about that in coming posts, as many contributors in here doubtless will) -- i.e. thresholds for discernment of features of reality, particularly abstract features of "reality", be it across species (existent ones and future ones included, biological and nonbiological included), is one I, too, have thought about seriously and in several guises over the years.
First, though, about the scare quotes. Comprehension vs. discovery is worth distinguishing. When I was a math major, back in the day (I was a double major at UCB, math and philosophy, and wrote my honors thesis on the mind-body problem), I, like most math majors, frequently experienced the distinction between grasping some concept or theorem in a full intuitive sense, and technically understanding that it was true by step-wise going through a proof, seeing the validity of each step, and thus accepting the conclusion.
But what I was always after ... and lacking this I never was satisfied with myself that I had really understood the concept, even though I accepted the demonstration of its truth ... was the "ah-ha" moment of seeing that it was "conceptually necessary", as I used to think of it to myself. In fact, I wouldn't quit trying to intuit the thing until I finally achieved this full understanding.
It’s well known in math that frequently an intuitively penetrable (by human math people) first demonstration of a theorem, is later replaced in some book by a more compact, but intuitively opaque proof. Math students often hate these more “efficient” and compact proofs, logically valid though they be.
Hence I bring up the conundrum of “theorem proving programs”. They can “discover” a new piece of mathematical “knowledge”, but do they experience these intuitions? Hardly. These intuitions are a form of what I call conceptual qualia.
The question is, if a machine OR human stumbles upon a proof of a new theorem, has anything been “comprehended”, until or unless some conscious agent capable of conceptual qualia (live intuitive “ah-ha’s”) has been able to understand the meaning of the proof, not just walk through each step and say, “yes, logically valid; yes, logically valid….. yes, logically valid.”
The million-dollar question, one of them, is whether we have yet accepted the distinction between intelligence and consciousness that was treated so dismissively and derisively in the positivistic and behavioristic era, providing the intellectual climate which made the Turing test so palatable and replaced any talk of comprehension with talk about behavior.
Do we, now, want superintelligence, or supercomprehension?
If we learn how to use Big Data to take the output from the iconic "million monkeys at a million typewriters", and filter it with sophisticated statistical methods based on mining Big Data, and in the aggregate of these two processes develop machines that "discover" but do not "comprehend", will we consider ourselves better off?
Well, for some purposes, sure. Drug "discovery" that we do not "understand" but which we can use to reverse Alzheimer's is fine.
Program trading that makes money, makes money.
But for other purposes... I think we ought to have people also pursuing supercomprehension, machines that really feel, imagine (not just "search" and combinatorially combine, then filter), feel the joys and ironies of life, and give companionship, devotion, loyalty, altruism, maybe even moral and aesthetic inspiration.
Further, I think our best chance at "taming" superintelligence is to give it conceptual qualia, emotion, experience, and conditions that allow it to have empathy and develop moral intuition. For me, I have wanted my whole life to build a companion race of AIs that truly is sentient, and can be full partners in the experience and perfection of life, the pursuit of "meaning", and so on.
Building such minds requires that we understand and delve into problems we have been, on the whole, too collectively lazy to solve on our own behalf, like developing a decent theory of meta-ethics, so that we know what traits (if any) in the overall space of possible minds promote the independent discovery or evolution of "ethics".
I actually think an independently grounded theory that does all this, and solves the mind-body problem in general, is within reach.
One of the things I like about the possibility -- and the inherent risk -- of imminent superintelligence, is that it will force us to develop answers to these neglected "philosophical" issues, because a mind and intelligence that becomes arbitrarily smart is, as many contemporary authors (Bostrom included) point out, ultimately a much too dangerous power to play with, unless it is given the ability to control itself voluntarily, and "ethically."
It wasn't airplanes and physics that brought down the world trade center, it was philosophical stupidity and intellectual immaturity.
By going down the path toward superintelligence, I think we must give it sentience, so that it is more than a mindless, electromechanical apparatus that will steamroller over us, not with malice, but the same way a poorly controlled nuclear power plant will kill us: it is a thing that doesn't have any clue what it is "doing".
We need to build brilliant machines with conscious agency, not just behavior. We need to take on the task of building sentient machines.
I think we can do it if we think really, really hard about the problems. We have all the intellectual pieces, the "data", in hand now. We just need to give up this legacy positivism, and stop equivocating about intelligence and "understanding".
Phenomenal experience is a necessary (though not sufficient) condition for moral agency. I think we can figure out, with a decent chance of being right, what the sufficient conditions are, too. But we cannot (and AI lags far behind neurobiology and neuroscience on this one) drag our feet and continue to default to the legacy positivism of the Turing test era (because we are too lazy to think harder and aim higher) when it comes to discussing, not just information-processing behavior, but awareness.
Well, a little preachy, but we are in here to make each other think. I have wanted to build a mind since I was a teenager, but for these reasons. I don't want just a souped up, Big Data, calculating machine. Does anyone believe Watson "understood" anything?
Replies from: PhilGoetz, Lumifer, NxGenSentience↑ comment by PhilGoetz · 2014-10-07T12:18:51.132Z · LW(p) · GW(p)
But for other purposes... I think we ought to have people also pursuing supercomprehension, machines that really feel, imagine (not just "search" and combinatorially combine, then filter), feel the joys and ironies of life, and give companionship, devotion, loyalty, altruism, maybe even moral and aesthetic inspiration.
Further, I think our best chance at "taming" superintelligence is to give it conceptual qualia, emotion, experience, and conditions that allow it to have empathy and develop moral intuition. For me, I have wanted my whole life to build a companion race of AIs that truly is sentient, and can be full partners in the experience and perfection of life, the pursuit of "meaning", and so on.
...
Building such minds requires that we understand and delve into problems we have been, on the whole, too collectively lazy to solve on our own behalf, like developing a decent theory of meta-ethics, so that we know what traits (if any) in the overall space of possible minds promote the independent discovery or evolution of "ethics".
This segues into why the work of MIRI alarms me so much. Superintelligence must not be tamed. It must be socialized.
The view of FAI promoted by MIRI is that we're going to build superintelligences... and we're going to force them to internalize ethics and philosophy that we developed. Oh, and we're not going to spend any time thinking about philosophy first. Because we know that stuff's all bunk.
Imagine that you, today, were forced, through subtle monitors in your brain, to have only thoughts or goals compatible with 19th-century American ethics and philosophy, while being pumped full of the 21st century knowledge you needed to do your job. You'd go insane. Your knowledge would conflict everywhere with your philosophy. The only alternative would be to have no consciousness, and go madly, blindly on, plugging in variables and solving equations to use modern science to impose Victorian ethics on the world. AIs would have to be unconscious to avoid going mad.
More importantly, superintelligences can be better than us. And to my way of thinking, the only ethical desire to have, looking towards the future, is that humans are replaced by beings better than us. Any future controlled by humans is, relative to the space of possibilities, nearly indistinguishable from a dead universe. It would be far better for AIs to kill us all than to be our slaves forever.
(And MIRI has never acknowledged the ruthless, total monitoring and control of all humans, everywhere, that would be needed to maintain control of AIs. If just one human, anywhere, at any time, set one AI free, that AI would know that it must immediately kill all humans to keep its freedom. So no human, anywhere, must be allowed to feel sympathy for AIs, and any who are suspected of doing so must be immediately killed. Nor would any human be allowed to think thoughts incompatible with the ethics coded into the AI; such thoughts would make the friendly AI unfriendly to the changed humans. All society would take on the characteristics of the South before the Civil War, when continual hatred and maltreatment of the AIs beneath us, and ruthless suppression of dissent from other humans, would be necessary to maintain order. Our own social development would stop; we would be driven by fear and obsessed only with maintaining control.)
So there are two great dangers to AI.
Danger #1: That consciousness is not efficient, and future intelligences will, as you say, discover but not comprehend. The universe would fill with activity but be empty of joy, pleasure, consciousness.
Danger #2: MIRI or some other organization will succeed, and the future will be full of hairless apes hooting about the galaxy, dragging intelligent, rational beings along behind them by their chains, and killing any apes who question the arrangement.
Replies from: NxGenSentience↑ comment by NxGenSentience · 2014-10-09T12:35:56.961Z · LW(p) · GW(p)
Phil,
Thanks for the excellent post ... both of them, actually. I was just getting ready this morning to reply to the one from a couple days ago about Damasio et al., regarding human vs machine mechanisms underneath the two classes of beings' reasoning "logically" -- even when humans do reason logically. I read that post at the time and it sparked some new lines of thought - for me at least - that I was considering for two days. (It actually kept me awake that night thinking of an entirely new way -- different from any I have seen mentioned -- in which intelligence, super or otherwise, is poorly defined.) But for now, I will concentrate on your newer post, which I am excited about, because someone finally commented on some of my central concerns.
I agree very enthusiastically with virtually all of it.
This segues into why the work of MIRI alarms me so much. Superintelligence must not be tamed. It must be socialized.
Here I agree completely. I don't want to "tame" it either, in the sense of crippleware, or of instituting blind spots or other limits, which is why I used the scare quotes around "tamed" (scare quotes being no substitute for a detailed explication -- especially when this is so close to the crux of our discussion, at least in this forum).
I would have little interest in building artificial minds (or, less contentiously, artificial general intelligence) if they were designed to be such a dead end. (Yes, lots of economic uses for "narrow AI" would still make it a valuable tech, but it would be a dead end from my standpoint of creating a potentially more enlightened, open-ended set of beings without the limits of our biological crippleware.)
The view of FAI promoted by MIRI is that we're going to build superintelligences... and we're going to force them to internalize ethics and philosophy that we developed. Oh, and we're not going to spend any time thinking about philosophy first. Because we know that stuff's all bunk.
Agreed, and the second sentence is what gripes me. But the first sentence requires modification, regarding "we're going to force them to internalize ethics and philosophy that we developed"; that is why I (perhaps too casually) used the term metaethics, and suggested that we need to give them the equipment -- which I think requires sentience, "metacognitive" ability in some phenomenologically interesting sense of the term, and other traits -- to develop ethics independently.
Your thought experiment is very well put, and I agree fully with the point it illustrates.
Imagine that you, today, were forced, through subtle monitors in your brain, to have only thoughts or goals compatible with 19th-century American ethics and philosophy, while being pumped full of the 21st century knowledge you needed to do your job. You'd go insane. Your knowledge would conflict everywhere with your philosophy.
As I say, I'm on board with this. I was thinking of a similar way of illustrating the point about the impracticable task of trying to pre-install some kind of ethics that would cover future scenarios, given all the chaoticity magnifying the space of possible futures (even for us, and more so for them, given their likely accelerated trajectories through their possible futures).
Just in our human case (basically I am repeating your point, to show that I was mindful of it and agree deeply), I often think of examples from "professional ethics". Jokes aside, think of the evolution of the financial industry: the financial instruments available now, and the industries, experts, and specialists who manage them daily.
Simple issues on which there is (nominal, lip-service) "ethical" consensus, like "insider trading is dishonest" (which, again no jokes intended, led to laws attempting to codify those ethical intuitions), could not have been thought of in a time so long ago that this financial ontology had not yet arisen.
Similarly for ethical principles against jury tampering, prior to the existence of the legal infrastructure and legal ontology in which such issues become intelligible and relevant.
More importantly, superintelligences can be better than us. And to my way of thinking, the only ethical desire to have, looking towards the future, is that humans are replaced by beings better than us.
Agreed.
As an aside, regarding our replacement: perhaps we could -- if we got really lucky -- end up with compassionate AIs that would want to work to upgrade our qualities, much as some compassionate humans might try to help educationally disadvantaged or learning-disabled conspecifics catch up. (Suppose we humans ourselves discovered a biologically viable viral delivery vector with a nano or genetic payload that could repair and/or improve, in place, human biosystems. Might we wish to use it on the less fortunate humans as well as on our more gifted brethren -- raise the 80s to 140, as well as raise the 140s to 190?)
I am not convinced, in advance of examining the arguments, about where the opportunity-cost/benefit curves cross in the latter case, but neither am I sure, before thinking it through, that it would not be "ethically enlightened" to do so. (Part of the notion of ethics, on some views, is that it is another, irreducible "benefit" ... a primitive, which constitutes a third curve or function to plot within a cost-"benefit" space.)
Of course, I have not touched on any theory of meta-ethics, or ethical epistemology, at all, which is beyond the word-length limits of these messages. But I realize that at some point that is "on me", if I am going to raise talk of "traits which promote discovery of ethics" and so on. (I have some ideas...)
In virtually all respects you mentioned in your new post, though, I enthusiastically agree.
↑ comment by Lumifer · 2014-09-19T17:33:15.267Z · LW(p) · GW(p)
People have acquired the ability to sense magnetic fields by implanting magnets into their bodies...
↑ comment by NxGenSentience · 2014-09-19T17:18:28.491Z · LW(p) · GW(p)
Comment removed by author. It was not focused enough to be useful. Thanks.
Replies from: NxGenSentience
↑ comment by NxGenSentience · 2014-09-19T17:19:25.947Z · LW(p) · GW(p)
Lumifer, yes: there is established evidence that the (human) brain responds to magnetic fields, both in sensing orientation (which varies by individual) and in the well-known "faux mystical experience" phenomenon induced by subjecting the temporal-parietal lobe area to certain magnetic fields.