AlphaGo versus Lee Sedol

post by gjm · 2016-03-09T12:22:53.237Z · LW · GW · Legacy · 183 comments

There have been a couple of brief discussions of this in the Open Thread, but it seems likely to generate more discussion, so here's a place for it.

The original paper in Nature about AlphaGo.

Google Asia Pacific blog, where results will be posted. DeepMind's YouTube channel, where the games are being live-streamed.

Discussion on Hacker News after AlphaGo's win of the first game.


comment by Kaj_Sotala · 2016-03-11T11:30:08.184Z · LW(p) · GW(p)

'Yeah, we could maybe have AlphaGo learn everything totally from scratch and reach a superhuman level of knowledge just by playing itself, not using any human games for training material. Of course, reinventing everything that humanity has figured out while playing Go for the last 2,500 years, that's going to take quite a bit of time. Like a few months or so.'

Actually, the AlphaGo algorithm, this is something we’re going to try in the next few months — we think we could get rid of the supervised learning starting point and just do it completely from self-play, literally starting from nothing. It’d take longer, because the trial and error when you’re playing randomly would take longer to train, maybe a few months. But we think it’s possible to ground it all the way to pure learning.

http://www.theverge.com/2016/3/10/11192774/demis-hassabis-interview-alphago-google-deepmind-ai
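
For readers who want a concrete picture of the loop Hassabis describes, here is a minimal, self-contained sketch of learning purely from self-play, with no human games anywhere in the pipeline. It uses a tabular policy on a toy Nim-like counting game rather than a neural network on Go; the game, the TabularPolicy class, and the update rule are all illustrative stand-ins, not DeepMind's method. The point is only the overall shape: play the current policy against itself, reinforce the winner's moves, repeat.

```python
# Illustrative sketch only: a toy stand-in for "learning completely from self-play,
# starting from nothing", not AlphaGo's actual architecture or training procedure.
import random
from collections import defaultdict

TARGET = 10        # players alternately add 1, 2, or 3 to a running total; reaching 10 wins
MOVES = (1, 2, 3)

class TabularPolicy:
    """Crude stand-in for a policy network: a table of move preferences per state."""
    def __init__(self):
        self.prefs = defaultdict(lambda: {m: 1.0 for m in MOVES})

    def sample(self, state):
        weights = [self.prefs[state][m] for m in MOVES]
        return random.choices(MOVES, weights=weights)[0]

    def reinforce(self, trajectory, winner):
        # Nudge up the winner's moves and down the loser's (a crude policy-gradient
        # analogue), then renormalize so the preferences stay bounded.
        for player, state, move in trajectory:
            self.prefs[state][move] *= 1.2 if player == winner else 0.8
            total = sum(self.prefs[state].values())
            for m in MOVES:
                self.prefs[state][m] *= len(MOVES) / total

def self_play_game(policy):
    """Play one game in which the same policy controls both players."""
    total, player, trajectory = 0, 0, []
    while True:
        move = policy.sample(total)
        trajectory.append((player, total, move))
        total += move
        if total >= TARGET:
            return trajectory, player   # the player who reaches TARGET wins
        player = 1 - player

def train(num_games=20000):
    """Pure self-play from a random start: no human games anywhere in the pipeline."""
    policy = TabularPolicy()
    for _ in range(num_games):
        trajectory, winner = self_play_game(policy)
        policy.reinforce(trajectory, winner)
    return policy

if __name__ == "__main__":
    policy = train()
    # The learned policy should tend to prefer moves landing on totals of 2 and 6,
    # which is the winning line for this toy game.
    print({state: max(policy.prefs[state], key=policy.prefs[state].get) for state in range(6)})
```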

comment by Elo · 2016-03-10T01:41:17.988Z · LW(p) · GW(p)

We accidentally had a meetup as the game was ending. For the first time in my life I got to walk into a room and say, "Who's been watching the big game?" It was great, and then about 10 minutes later the resignation happened. It was pretty exciting!

comment by cousin_it · 2016-03-09T12:56:20.314Z · LW(p) · GW(p)

When I started hearing about the latest wave of results from neural networks, I thought to myself that Eliezer was probably wrong to bet against them. Should MIRI rethink its approach to friendliness?

Replies from: Wei_Dai, Houshalter, turchin, Raiden, skeptical_lurker, Squark, turchin, jacob_cannell
comment by Wei Dai (Wei_Dai) · 2016-03-10T10:37:06.813Z · LW(p) · GW(p)

Compared to its competition in the AGI race, MIRI was always going to be disadvantaged by both lack of resources and the need to choose an AI design that can predictably be made Friendly as opposed to optimizing mainly for capability. For this reason, I was against MIRI (or rather the Singularity Institute as it was known back then) going into AI research at all, as opposed to pursuing some other way of pushing for a positive Singularity.

In any case, what other approaches to Friendliness would you like MIRI to consider? The only other approach that I'm aware of that's somewhat developed is Paul Christiano's current approach (see for example https://medium.com/ai-control/alba-an-explicit-proposal-for-aligned-ai-17a55f60bbcf), which I understand is meant to be largely agnostic about the underlying AI technology. Personally I'm pretty skeptical but then I may be overly skeptical about everything. What are your thoughts? I don't recall seeing you having commented on them much.

Are you aware of any other ideas that MIRI should be considering?

Replies from: paulfchristiano, cousin_it, Lumifer, jacob_cannell
comment by paulfchristiano · 2016-03-10T18:57:00.415Z · LW(p) · GW(p)

Do you have a concise explanation of skepticism about the overall approach, e.g. a statement of the difficulty or difficulties you think will be hardest to overcome by this route?

Or is your view more like "most things don't work, and there isn't much reason to think this would work"?

In discussion you most often push on the difficulty of doing reflection / philosophy. Would you say this is your main concern?

My take has been that we just need to meet the lower bar of "wants to defer to human views about philosophy, and has a rough understanding of how humans want to reflect and want to manage their uncertainty in the interim."

Regarding philosophy/metaphilosophy, is it fair to describe your concern as one of:

  1. The approach I am pursuing can't realistically meet even my lower bar,
  2. Meeting my lower bar won't suffice for converging to correct philosophical views,
  3. Our lack of philosophical understanding will cause problems soon in subjective time (we seem to have some disagreement here, but I don't feel like adopting your view would change my outlook substantially), or
  4. AI systems will be much better at helping humans solve technical problems than philosophical ones, driving a potentially long-lasting (in subjective time) wedge between our technical and philosophical capability, even if ultimately we would end up at the right place?

My hope is that thinking and talking more about bootstrapping procedures would go a long way to resolving the disagreements between us (either leaving you more optimistic or me more pessimistic). I think this is most plausible if #1 is the main disagreement. If our disagreement is somewhere else, it may be worth also spending some time focusing somewhere else. Or it may be necessary to better define my lower bar in order to tell where the disagreement is.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2016-03-10T22:59:09.821Z · LW(p) · GW(p)

It seems to be a combination of all of these.

  1. Training an AI to defer to one's eventual philosophical judgments and interim method of managing uncertainty (and not falling prey to marketing worlds and incorrect but persuasive philosophical arguments, etc.) seems really hard, and is made harder by the recursive structure in ALBA and the fact that the first-level AI is sub-human in capacity and then has to handle being bootstrapped and training the next-level AI. What percent of humans can accomplish this task, do you think? (I'd argue that the answer is likely zero, but certainly very small.) How do the rest use your AI?
  2. Assuming that deferring to humans on philosophy and managing uncertainty is feasible but costly, how many people could resist dropping this feature and the associated cost, in favor of adopting some sort of straightforward utility maximization framework with a fixed utility function that they think captures most or all of their values, if that came as a suggestion from the AI with an apparently persuasive argument? If most people do this and only a few don't (and those few are also disadvantaged in the competition to capture the cosmic commons due to deciding to carry these costs), that doesn't seem like much of a win.
  3. This is tied in with 1 and 2, in that correct meta-philosophical understanding is needed to accomplish 1, and unreasonable philosophical certainty would cause people to fail step 2.
  4. Even if the AIs keep deferring to their human users and don't end up short-circuiting their philosophical judgments, if the AI/human systems become very powerful while still having incorrect and strongly held philosophical views, that seems likely to cause disaster. We also don't have much reason to think that if we put people in such positions of power (for example, being able to act as a god in some simulation or domain of their choosing), most will eventually realize their philosophical errors and converge to correct views, or that the power itself wouldn't further distort their already error-prone reasoning processes.
Replies from: paulfchristiano
comment by paulfchristiano · 2016-03-11T22:31:56.980Z · LW(p) · GW(p)

Re 1:

For a working scheme, I would expect it to be usable by a significant fraction of humans (say, comparable to the fraction that can learn to write a compiler).

That said, I would not expect almost anyone to actually play the role of the overseer, even if a scheme like this one ended up being used widely. An existing analogy would be the human trainers who drive facebook's M (at least in theory, I don't know how that actually plays out). The trainers are responsible for getting M to do what the trainers want, and the user trusts the trainers to do what the user wants. From the user's perspective, this is no different from delegating to the trainers directly, and allowing them to use whatever tools they like.

I don't yet see why "defer to human judgments and handle uncertainty in a way that they would endorse" requires evaluating complex philosophical arguments or having a correct understanding of metaphilosophy. If the case is unclear, you can punt it to the actual humans.

If I imagine an employee who sucks at philosophy but thinks 100x faster than me, I don't feel like they are going to fail to understand how to defer to me on philosophical questions. I might run into trouble because now it is comparatively much harder to answer philosophical questions, so to save costs I will often have to do things based on rough guesses about my philosophical views. But the damage from using such guesses depends on the importance of having answers to philosophical questions in the short-term.

It really feels to me like there are two distinct issues:

  1. Philosophical understanding may help us make good decisions in the short term, for example about how to trade off extinction risk vs faster development, or how to prioritize the suffering of non-human animals. So having better philosophical understanding (and machines that can help us build more understanding) is good.
  2. Handing off control of civilization to AI systems might permanently distort society's values. Understanding how to avoid this problem is good.

These seem like separate issues to me. I am convinced that #2 is very important, since it seems like the largest existential risk by a fair margin and also relatively tractable. I think that #1 does add some value, but am not at all convinced that it is a maximally important problem to work on. As I see it, the value of #1 depends on the importance of the ethical questions we face in the short term (and on how long-lasting are the effects of differential technological progress that accelerates our philosophical ability).

Moreover, it seems like we should evaluate solutions to these two problems separately. You seem to be making an implicit argument that they are linked, such that a solution to #2 should only be considered satisfactory if it also substantially addresses #1. But from my perspective, that seems like a relatively minor consideration when evaluating the goodness of a solution to #2. In my view, solving both problems at once would be at most 2x as good as solving the more important of the two problems. (Neither of them is necessarily a crisp problem rather than an axis along which to measure differential technological development.)

I can see several ways in which #1 and #2 are linked, but none of them seem very compelling to me. Do you have something in particular in mind? Does my position seem somehow more fundamentally mistaken to you?

(This comment was in response to point 1, but it feels like the same underlying disagreement is central to points 2 and 3. Point 4 seems like a different concern, about how the availability of AI would itself change philosophical deliberation. I don't really see much reason to think that the availability of powerful AI would make the endpoint of deliberation worse rather than better, but probably this is a separate discussion.)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2016-03-13T00:10:04.203Z · LW(p) · GW(p)

The trainers are responsible for getting M to do what the trainers want, and the user trusts the trainers to do what the user wants.

In that case, there would be severe principal-agent problems, given the disparity in power/intelligence between the trainer/AI systems and the users. If I was someone who couldn't directly control an AI using your scheme, I'd be very concerned about getting uneven trades or having my property expropriated outright by individual AIs or AI conspiracies, or just ignored and left behind in the race to capture the cosmic commons. I would be really tempted to try another AI design that does purport to have the AI serve my interests directly, even if that scheme is not as "safe".

If I imagine an employee who sucks at philosophy but thinks 100x faster than me, I don't feel like they are going to fail to understand how to defer to me on philosophical questions.

If an employee sucks at philosophy, how does he even recognize philosophical problems as problems that he needs to consult you for? Most people have little idea that they should feel confused and uncertain about things like epistemology, decision theory, and ethics. I suppose it might be relatively easy to teach an AI to recognize the specific problems that we currently consider to be philosophical, but what about new problems that we don't yet recognize as problems today?

Aside from that, a bigger concern for me is that if I was supervising your AI, I would be constantly bombarded with philosophical questions that I'd have to answer under time pressure, and afraid that one wrong move would cause me to lose control, or lock in some wrong idea.

Consider this scenario. Your AI prompts you for guidance because it has received a message from a trading partner with a proposal to merge your AI systems and share resources for greater efficiency and economy of scale. The proposal contains a new AI design and control scheme and arguments that the new design is safer, more efficient, and divides control of the joint AI fairly between the human owners according to your current bargaining power. The message also claims that every second you take to consider the issue has large costs to you because your AI is falling behind the state of the art in both technology and scale, becoming uncompetitive, so your bargaining power for joining the merger is dropping (slowly in the AI's time-frame, but quickly in yours). Your AI says it can't find any obvious flaws in the proposal, but it's not sure that you'd consider the proposal to really be fair under reflective equilibrium or that the new design would preserve your real values in the long run. There are several arguments in the proposal that it doesn't know how to evaluate, hence the request for guidance. But it also reminds you not to read those arguments directly since they were written by a superintelligent AI and you risk getting mind-hacked if you do.

What do you do? This story ignores the recursive structure in ALBA. I think that would only make the problem even harder, but I could be wrong. If you don't think it would go like this, let me know how you think this kind of scenario would go.

In terms of your #1, I would divide the decisions requiring philosophical understanding into two main categories. One is decisions involved in designing/improving AI systems, like in the scenario above. The other, which I talked about in an earlier comment, is ethical disasters directly caused by people who are not uncertain, but just wrong. You didn't reply to that comment, so I'm not sure why you're unconcerned about this category either.

Replies from: paulfchristiano, paulfchristiano, paulfchristiano, paulfchristiano, paulfchristiano
comment by paulfchristiano · 2016-03-19T21:42:30.271Z · LW(p) · GW(p)

A general note: I'm not really taking a stand on the importance of a singleton, and I'm open to the possibility that the only way to achieve a good outcome even in the medium-term is to have very good coordination.

A would-be singleton will also need to solve the AI control problem, and I am just as happy to help with that problem as with the version of the AI control problem faced by a whole economy of actors each using their own AI systems.

The main way in which this affects my work is that I don't want to count on the formation of a singleton to solve the control problem itself.

You could try to work on AI in a way that helps facilitate the formation of a singleton. I don't think that is really helpful, but moreover it again seems like a separate problem from AI control. (I also don't think that e.g. MIRI is doing this with their current research, although they are open to solving AI control in a way that only works if there is a singleton.)

comment by paulfchristiano · 2016-03-19T21:35:22.376Z · LW(p) · GW(p)

every second you take to consider the issue has large costs to you because your AI is falling behind the state of the art in both technology and scale, becoming uncompetitive, so your bargaining power for joining the merger is dropping

In general I think that counterfactual oversight has problems in really low-latency environments. I think the most natural way to avoid them is synthesizing training data in advance. It's not clear whether that proposal will work.

If your most powerful learners are strong enough to learn good-enough answers to these kinds of philosophical questions, then you only need to provide philosophical input during training and so synthesizing training data can take off time pressure. If your most powerful AI is not able to learn how to answer these philosophical questions, then the time pressure seems harder to avoid. In that case though, it seems quite hard to avoid the time pressure by any mechanism. (Especially if we are better at learning than we would be at hand-coding an algorithm for philosophical deliberation---if we are better at learning and our learner can't handle philosophy, then we simply aren't going to be able to build an AI that can handle philosophy.)
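
To make "synthesizing training data in advance" concrete, here is a hedged sketch of the shape such a scheme could take; every name in it (synthesize_questions, ask_human_offline, OverseerModel) is a hypothetical stand-in rather than anything from the actual proposal. The idea illustrated: anticipate hard cases offline, let the human answer them without time pressure, learn a model of those judgments, and consult the model rather than the live human when a low-latency decision is needed.

```python
# Illustrative sketch only; all names here are hypothetical stand-ins.
# Offline: the overseer answers anticipated hard cases with unlimited deliberation time.
# Online: time-pressured decisions consult a learned model of those judgments instead
# of interrupting the live human.

def synthesize_questions(n):
    """Stand-in: generate anticipated hard cases in advance (e.g. via the AI's own search)."""
    return [f"anticipated hard case #{i}" for i in range(n)]

def ask_human_offline(question):
    """Stand-in: the overseer deliberates as long as they like."""
    return f"considered judgment on {question!r}"

class OverseerModel:
    """Stand-in for a learned predictor of the overseer's judgments."""
    def __init__(self):
        self.examples = {}

    def train(self, questions, answers):
        self.examples.update(zip(questions, answers))

    def predict(self, question):
        # A real learner would generalize; this toy version looks up or abstains.
        return self.examples.get(question, "abstain and punt back to the human")

# Offline phase (no time pressure on the human).
questions = synthesize_questions(100)
answers = [ask_human_offline(q) for q in questions]
model = OverseerModel()
model.train(questions, answers)

# Online phase (latency matters): consult the model, not the human.
print(model.predict("anticipated hard case #3"))
```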

comment by paulfchristiano · 2016-03-19T21:30:35.907Z · LW(p) · GW(p)

One is decisions involved in designing/improving AI systems, like in the scenario above. The other, which I talked about in an earlier comment, is ethical disasters directly caused by people who are not uncertain, but just wrong. You didn't reply to that comment, so I'm not sure why you're unconcerned about this category either.

I replied to your earlier comment.

My overall feeling is still that these are separate problems. We can evaluate a solution to AI control, and we can evaluate philosophical work that improves our understanding of potentially-relevant issues (or metaphilosophical work to automate philosophy).

I am both less pessimistic about philosophical errors doing damage, and more optimistic about my scheme's ability to do philosophy, but it's not clear to me that either of those is the real disagreement (since if I imagine caring a lot about philosophy and thinking this scheme didn't help automate philosophy, I would still feel like we were facing two distinct problems).

comment by paulfchristiano · 2016-03-19T21:26:49.349Z · LW(p) · GW(p)

If an employee sucks at philosophy, how does he even recognize philosophical problems as problems that he needs to consult you for? Most people have little idea that they should feel confused and uncertain about things like epistemology, decision theory, and ethics. I suppose it might be relatively easy to teach an AI to recognize the specific problems that we currently consider to be philosophical, but what about new problems that we don't yet recognize as problems today?

Is this your reaction if you imagine delegating your affairs to an employee today? Are you making some claim about the projected increase in the importance of these philosophical decisions? Or do you think that a brilliant employee's lack of metaphilosophical understanding would in fact cause great damage right now?

I would divide the decisions requiring philosophical understanding into two main categories. One is decisions involved in designing/improving AI systems, like in the scenario above...

I agree that AI may increase the stakes for philosophical decisions. One of my points is that a natural argument that it might increase the stakes---by forcing us to lock in an answer to philosophical questions---doesn't seem to go through if you pursue this approach to AI control. There might be other arguments that building AI systems forces us to lock in important philosophical views, but I am not familiar with those arguments.

I agree there may be other ways in which AI systems increase the stakes for philosophical decisions.

I like the bargaining example. I hadn't thought about bargaining as a competitive advantage before, and instead had just been thinking about the possible upside (so that the cost of philosophical error was bounded by the damage of using a weaker bargaining scheme). I still don't feel like this is a big cost, but it's something I want to think about somewhat more.

If you think there are other examples like this, they might help move my view. On my current model, these are just facts that increase my estimates of the importance of philosophical work; I don't really see them as relevant to AI control per se. (See the sibling, which is the better place to discuss that.)

one wrong move would cause me to lose control

I don't see cases where a philosophical error causes you to lose control, unless you would have some reason to cede control based on philosophical arguments (e.g. in the bargaining case). Failing that, it seems like there is a philosophically simple, apparently adequate notion of "remaining in control" and I would expect to remain in control at least in that sense.

comment by paulfchristiano · 2016-03-19T21:04:51.575Z · LW(p) · GW(p)

In that case, there would be severe principal-agent problems, given the disparity in power/intelligence between the trainer/AI systems and the users. If I was someone who couldn't directly control an AI using your scheme, I'd be very concerned about getting uneven trades or having my property expropriated outright by individual AIs or AI conspiracies, or just ignored and left behind in the race to capture the cosmic commons. I would be really tempted to try another AI design that does purport to have the AI serve my interests directly, even if that scheme is not as "safe".

Are these worse than the principal-agent problems that exist in any industrialized society? Most humans lack effective control over many important technologies, both in terms of economic productivity and especially military might. (They can't understand the design of a car they use, they can't understand the programs they use, they don't understand what is actually going on with their investments...) It seems like the situation is quite analogous.

Moreover, even if we could build AI in a different way, it doesn't seem to do anything to address the problem, since it is equally opaque to an end user who isn't involved in the AI development process. In any case, they are in some sense at the mercy of the AI developer. I guess this is probably the key point---I don't understand the qualitative difference between being at the mercy of the software developer on the one hand, and being at the mercy of the software developer + the engineers who help the software run day-to-day on the other. There is a slightly different set of issues for monitoring/law enforcement/compliance/etc., but it doesn't seem like a huge change.

(Probably the rest of this comment is irrelevant.)

To talk more concretely about mechanisms in a simple example, you might imagine a handful of companies who provide AI software. The people who use this software are essentially at the mercy of the software providers (since for all they know the software they are using will subvert their interests in arbitrary ways, whether or not there is a human involved in the process). In the most extreme case an AI provider could effectively steal all of their users' wealth. They would presumably then face legal consequences, which are not qualitatively changed by the development of AI if the AI control problem is solved. If anything we expect the legal system and government to better serve human interests.

We could talk about monitoring/enforcement/etc., but again I don't see these issues as interestingly different from the current set of issues, or as interestingly dependent on the nature of our AI control techniques. The most interesting change is probably the irrelevance of human labor, which I think is a very interesting issue economically/politically/legally/etc.

I agree with the general point that as technology improves a singleton becomes more likely. I'm agnostic on whether the control mechanisms I describe would be used by a singleton or by a bunch of actors, and as far as I can tell the character of the control problem is essentially the same in either case.

I do think that a singleton is likely eventually. From the perspective of human observers, a singleton will probably be established relatively shortly after wages fall below subsistence (at the latest). This prediction is mostly based on my expectation that political change will accelerate alongside technological change.

Replies from: ESRogs
comment by ESRogs · 2016-04-15T03:51:52.097Z · LW(p) · GW(p)

I agree with the general point that as technology improves a singleton becomes more likely. I'm agnostic on whether the control mechanisms I describe would be used by a singleton or by a bunch of actors, and as far as I can tell the character of the control problem is essentially the same in either case.

I wonder -- are you also relatively indifferent between a hard and a slow takeoff, given sufficient time before the takeoff to develop AI control theory?

(One of the reasons a hard takeoff seems scarier to me is that it is more likely to lead to a singleton, with a higher probability of locking in bad values.)

comment by cousin_it · 2016-03-10T12:10:52.099Z · LW(p) · GW(p)

As far as I can tell, Paul's current proposal might still suffer from blackmail, like his earlier proposal which I commented on. I vaguely remember discussing the problem with you as well.

One big lesson for me is that AI research seems to be more incremental and predictable than we thought, and garage FOOM probably isn't the main danger. It might be helpful to study the strengths and weaknesses of modern neural networks and get a feel for their generalization performance. Then we could try to predict which areas will see big gains from neural networks in the next few years, and which parts of Friendliness become easy or hard as a result. Is anyone at MIRI working on that?

Replies from: Wei_Dai, Gunnar_Zarncke, paulfchristiano
comment by Wei Dai (Wei_Dai) · 2016-03-10T21:27:40.764Z · LW(p) · GW(p)

Then we could try to predict which areas will see big gains from neural networks in the next few years, and which parts of Friendliness become easy or hard as a result. Is anyone at MIRI working on that?

If they did that, then what? Try to convince NN researchers to attack the parts of Friendliness that look hard? That seems difficult for MIRI to do given where they've invested in building their reputation (i.e., among decision theorists and mathematicians instead of in the ML community). (It would really depend on people trusting their experience and judgment since it's hard to see how much one could offer in the form of either mathematical proof or clearly relevant empirical evidence.) You'd have a better chance if the work was carried out by some other organization. But even if that organization got NN researchers to take its results seriously, what incentives do they have to attack parts of Friendliness that seem especially hard, instead of doing what they've been doing, i.e., racing as fast as they can for the next milestone in capability?

Or is the idea to bet on the off chance that building an FAI with NN turns out to be easy enough that MIRI and like-minded researchers can solve the associated Friendliness problems themselves and then hand the solutions to whoever ends up leading the AGI race, and they can just plug the solutions in at little cost to their winning the race?

Or you're suggesting aiming/hoping for some feasible combination of both, I guess. It seems pretty similar to what Paul Christiano is doing, except he has "generic AI technology" in place of "NN" above. To me, the chance of success of this approach seems low enough that it's not obviously superior to what MIRI is doing (namely, in my view, betting on the off chance that the contrarian AI approach they're taking ends up being much easier/better than the mainstream approach, which is looking increasingly unlikely but still not impossible).

comment by Gunnar_Zarncke · 2016-03-10T18:41:57.434Z · LW(p) · GW(p)

One big lesson for me is that AI research seems to be more incremental and predictable than we thought, and garage FOOM probably isn't the main danger.

That may be true, but that is hindsight bias. MIRI's (or EY's, for that matter) approach of hedging against that being true was nonetheless a very reasonable approach (and, given the knowledge at the time, maybe the only reasonable one).

comment by paulfchristiano · 2016-03-10T18:31:19.039Z · LW(p) · GW(p)

As far as I can tell, Paul's current proposal might still suffer from blackmail, like his earlier proposal which I commented on

What problem do you have in mind here?

I thought that the previous problem was mostly psychological, i.e. that if humans were rational agents then this AI would be roughly as vulnerable to blackmail as its designers. So I thought the issue was the psychological strangeness (and great length) of the weird hypothetical.

Here we have no such hypothetical, and the system's behavior only depends on the predicted behavior of humans in the real world. That seems to address the narrow version of your concern.

I can see two analogous problems:

  1. The learning system may believe itself to be in a simulation, as discussed here. But that seems to cash out to a straightforward technical problem: we want to ensure that as long as there is some important scenario where the AI behaves strangely, we provide feedback on one such scenario. And this technical problem (1) looks like it should be soluble in principle, (2) looks pretty tractable right now, and (3) is needed to resolve a whole host of other problems (a bunch of exotic failures, but also a bunch of more prosaic difficulties).
  2. The behavior of the system depends on what a human would do across many cases. We can sample from these cases at random, but maybe a small fraction of them are sufficiently surprising that they would lead a human to believe they are in a simulation. We would like to sample these situations with disproportionate probability as well so that the human won't panic when they find themselves in one, but I haven't talked in any detail about how to do that and it's not obvious whether it is possible. (I do think it's possible.)

Did you have in mind 1, 2, or something else?

Replies from: cousin_it
comment by cousin_it · 2016-03-10T19:15:59.290Z · LW(p) · GW(p)

I mostly had in mind 2. Not sure how predicting humans is different from putting humans in hypotheticals. It seems like the same problems could happen.

Replies from: paulfchristiano
comment by paulfchristiano · 2016-03-11T05:58:31.627Z · LW(p) · GW(p)

I agree that the same problem appears for ALBA. I was originally working with proposals where the improbability of the human's situation was bounded, but the recursive structure can lead to arbitrarily large improbability. I hadn't thought about this explicitly.

Predicting humans is different from putting humans in hypotheticals, in the sense that in principle you can actually sample from the situations that cause humans to think they are in a simulation or whatever.

For example, suppose the human had access to a button that said "This is weird, I'm probably in a simulation," and suppose that we expected the human to press it in any case where they would start behaving weirdly. Then we could potentially sample from the subset of situations where the human presses the button. And if we manage to do that, then the human isn't right to suspect they are in a simulation (any more than they already should believe they are in a simulation, prior to even building the AI).
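
As a concrete illustration of that sampling idea (and not ALBA's actual machinery), here is a hedged sketch in which the "button" is a stand-in predicate and flagged situations are found by naive rejection sampling so they can be oversampled into training batches. The naive loop also shows why the approach is not obviously workable: if flagged situations are astronomically rare under the base distribution, rejection sampling is hopeless and something cleverer is needed.

```python
# Illustrative sketch only: all of these functions are hypothetical stand-ins.
# The point is the control flow: alongside ordinary sampling, deliberately
# oversample the rare situations the human would flag as "weird", so that the
# human also encounters them during training and has no extra reason to conclude,
# on finding themselves in one, that they must be in a simulation.
import random

def generate_situation():
    """Stand-in: draw a situation from the distribution the system expects to face."""
    return {"weird": random.random() < 0.001}   # flagged situations are rare

def human_presses_button(situation):
    """Stand-in for the human pressing 'this is weird, I'm probably in a simulation'."""
    return situation["weird"]

def sample_flagged_situation(max_tries=100_000):
    """Naive rejection sampling from the subset of situations the human would flag.
    If flagged situations are astronomically rare, this loop is hopeless, which is
    exactly why it is not obvious the approach works in general."""
    for _ in range(max_tries):
        situation = generate_situation()
        if human_presses_button(situation):
            return situation
    return None

def build_training_batch(n_ordinary=90, n_flagged=10):
    """Mix ordinary situations with deliberately oversampled flagged ones."""
    batch = [generate_situation() for _ in range(n_ordinary)]
    flagged = [sample_flagged_situation() for _ in range(n_flagged)]
    return batch + [s for s in flagged if s is not None]

if __name__ == "__main__":
    batch = build_training_batch()
    print(sum(s["weird"] for s in batch), "flagged situations out of", len(batch))
```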

I agree that you shouldn't expect the current version of the scheme to cope with this problem. And it's certainly not obvious that the approach sketched above can work, there are many important questions. But I do think that there is a clear path to fixing this kind of problem.

(Interestingly, this is exactly analogous to what the active learning scheme has to do, in order to get the prediction algorithms to predict well.)

ETA: although I hadn't thought about this explicitly in the context of ALBA, I have expected to need some way to overweight "weird" situations in order to stop them from being problematic, ever since here.

comment by Lumifer · 2016-03-10T15:58:59.442Z · LW(p) · GW(p)

Compared to its competition in the AGI race, MIRI was always going to be disadvantaged

Is MIRI even in the AGI race? It certainly doesn't look like it.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2016-03-10T19:01:12.356Z · LW(p) · GW(p)

They're working on figuring out what we want the AGI to do, not building one. (I believe Nate has stated this in previous LW comments.)

Replies from: Lumifer, None
comment by Lumifer · 2016-03-10T19:35:12.221Z · LW(p) · GW(p)

Yes, and the point is that MIRI is pondering the situation at the finish line, but is not running in the race.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2016-03-10T19:42:25.310Z · LW(p) · GW(p)

A different analogy would be that MIRI is looking at the map and the compass to figure out what's the right way to go, while others are just running in any random direction.

Replies from: Lumifer
comment by Lumifer · 2016-03-10T19:47:14.092Z · LW(p) · GW(p)

Not quite. The others are not running around in random directions, they are all running in a particular direction and MIRI is saying "Hold on, guys, there may be bears and tigers and pits of hell at your destination". Which is all fine, but it still is not running.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2016-03-11T11:26:25.634Z · LW(p) · GW(p)

Still better than running into all the bears and tigers and getting eaten, particularly if it lets you figure out the correct route eventually.

Replies from: Lumifer
comment by Lumifer · 2016-03-11T15:51:31.671Z · LW(p) · GW(p)

The question was not what is better, the question was whether MIRI is competing in the AGI race.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2016-03-11T18:24:02.299Z · LW(p) · GW(p)

Sure. I wasn't objecting to the "MIRI isn't competing in the AGI race" point, but to the negative connotations that one might read into your original analogy.

comment by [deleted] · 2016-03-10T19:11:27.334Z · LW(p) · GW(p)

Which unfortunately presumes that an AGI would be tasked with doing something and given free rein to do so, a truly naïve and unlikely outcome.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2016-03-10T19:41:34.950Z · LW(p) · GW(p)

How does it presume that?

Replies from: None
comment by [deleted] · 2016-03-10T21:42:40.541Z · LW(p) · GW(p)

They're working on figuring out what we want the AGI to do

Aka friendliness research. But why does that matter? If the machine has no real effectors and lots of human oversight, then why should there even be concern over friendliness? It wouldn't matter in that context. Tell a machine to do something, and it finds an evil-stupid way of doing it, and human intervention prevents any harm.

Why is it a going concern at all whether we can assure ahead of time that the actions recommended by a machine are human-friendly unless the machine is enabled to independently take those actions without human intervention? Just don't do that and it stops being a concern.

Replies from: Kaj_Sotala, TheAncientGeek
comment by Kaj_Sotala · 2016-03-11T11:23:04.575Z · LW(p) · GW(p)

Humanity is having trouble coordinating and enforcing even global restrictions on greenhouse gases. Try ensuring that nobody does anything risky or short-sighted with a technology that has no clear-cut threshold between a "safe" and "dangerous" level of capability, and which can be beneficial for performing in pretty much any competitive and financially lucrative domain.

Restricting the AI's capabilities may work for a short while, assuming that only a small group of pioneers manages to develop the initial AIs and they're responsible with their use of the technology - but as Bruce Schneier says, today's top-secret programs become tomorrow's PhD theses and the next day's common applications. If we want to survive in the long term, we need to figure out how to make the free-acting AIs safe, too - otherwise it's just a ticking time bomb before the first guys accidentally or intentionally release theirs.

Replies from: TheAncientGeek, None
comment by TheAncientGeek · 2016-03-11T14:42:55.581Z · LW(p) · GW(p)

Humanity has done more than zero and less than the optimum about things like climate change. Importantly, the situation is below the imminent existential threat level.

If you are going to complain that alternative proposals face coordination problems, you need to show that yours don't, or you are committing the fallacy of the dangling comparison. If people aren't going to refrain from building dangerously powerful superintelligences, assuming that is possible, why would they have the sense to fit MIRI's safety features, assuming they are possible? If the law can make people fit safety features, why can't it prevent them from building dangerous AIs in the first place?

no clear-cut threshold between a "safe" and "dangerous" level of capability

I would suggest a combination of generality and agency. And what problem domain requires both?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2016-03-11T18:31:27.555Z · LW(p) · GW(p)

If you allow for autonomously acting AIs, then you could have Friendly autonomous AIs tracking down and stopping Unfriendly / unauthorized AIs.

This of course depends on people developing the Friendly AIs first, but ideally it'd be enough for only the first people to get the design right, rather than depending on everyone being responsible.

Importantly, the situation is below the imminent existential threat level.

It's unclear whether AI risk will become obviously imminent, either. Goertzel & Pitt 2012 argue in section 3 of their paper that this is unlikely.

I would suggest a combination of generality and agency. And what problem domain requires both?

Business (which by nature covers just about every domain in which you can make a profit, which is to say just about every domain relevant for human lives), warfare, military intelligence, governance... (see also my response to Mark)

Replies from: Lumifer, TheAncientGeek
comment by Lumifer · 2016-03-14T15:06:56.552Z · LW(p) · GW(p)

then you could have Friendly autonomous AIs tracking down and stopping Unfriendly / unauthorized AIs.

Somehow that reminds me of Sentinels from X-Men: Days of Future Past.

comment by TheAncientGeek · 2016-03-13T21:04:21.724Z · LW(p) · GW(p)

If you allow for autonomously acting AIs, then you could have Friendly autonomous AIs tracking down and stopping Unfriendly / unauthorized AIs.

You could, but if you don't have autonomously acting agents, you don't need Gort AIs. Building an agentive superintelligence that is powerful enough to take down any other, as MIRI conceives it, is a very risky proposition, since you need to get the value system exactly right. So it's better not to be in a place where you have to do that.

This of course depends on people developing the Friendly AIs first, but ideally it'd be enough for only the first people to get the design right, rather than depending on everyone being responsible.

The first people have to be able as well as willing to get everything right. Safety through restraint is easier and more reliable -- you can omit a feature more reliably than you can add one.

Business (which by nature covers just about every domain in which you can make a profit, which is to say just about every domain relevant for human lives), warfare, military intelligence, governance...

These organizations have a need for widespread intelligence gathering, and for agentive AI, but that doesn't mean they need both in the same package. The military don't need their entire intelligence database in every drone, and don't want drones that change their mind about who the bad guys are in mid-flight. Businesses don't want HFT applications that decide capitalism is a bad thing.

We want agents to act on our behalf, which means we want agents that are predictable and controllable to the required extent. Early HFT had problems which led to the addition of limits and controls. Control and predictability are close to safety. There is no drive to power that is also a drive away from safety, because uncontrolled power is of no use.

Based on the behaviour of organisations, there seems to be a natural division between high-level, unpredictable decision information systems and lower-level, faster-acting agentive systems. In other words, they voluntarily do some of what would be required for an incremental safety programme.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2016-03-14T09:28:44.031Z · LW(p) · GW(p)

I agree that it would be better not to have autonomously acting AIs, but not having any autonomously acting AIs would require a way to prevent anyone deploying them, and so far I haven't seen a proposal for that that'd seem even remotely feasible.

And if we can't stop them from being deployed, then deploying Friendly AIs first looks like the scenario that's more likely to work - which still isn't to say very likely, but at least it seems to have a chance of working even in principle. I don't see even an in-principle way for "just don't deploy autonomous AIs" to work.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-03-15T09:58:44.789Z · LW(p) · GW(p)

When you say autonomous AIs, do you mean AIs that are autonomous and superintelligent?

Do you think they could be deployed by basement hackers, or only by large organisations?

Do you think an organisation like the military or business has a motivation to deploy them?

Do you agree that there are dangers to an FAI project that goes wrong?

Do you have a plan B to cope with a FAI that goes rogue?

Do you think that having a AI potentially running the world is an attractive idea to a lot of people?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2016-03-18T10:58:28.431Z · LW(p) · GW(p)

When you say autonomous AIs, do you mean AIs that are autonomous and superintelligent?

AIs that are initially autonomous and non-superintelligent, then gradually develop towards superintelligence. (With the important caveat that it's unclear whether an AI needed to be generally superintelligent in order to pose a major risk for society. It's conceivable that superintelligence in some more narrow domain, like cybersecurity, would be enough - particularly in a sufficiently networked society.)

Do you think they could be deployed by basement hackers, or only by large organisations?

Hard to say. The way AI has developed so far, it looks like the capability might be restricted to large organizations with lots of hardware resources at first, but time will likely drive down the hardware requirements.

Do you think an organisation like the military or business has a motivation to deploy them?

Yes.

Do you agree that there are dangers to an FAI project that goes wrong?

Yes.

Do you have a plan B to cope with a FAI that goes rogue?

Such a plan would seem to require lots of additional information about both the specifics of the FAI plan, and also the state of the world at that time, so not really.

Do you think that having a AI potentially running the world is an attractive idea to a lot of people?

Depends on how we're defining "lots", but I think that the notion of a benevolent dictator has often been popular in many circles, who've also acknowledged its largest problems to be that 1) power tends to corrupt 2) even if you got a benevolent dictator, you also needed a way to ensure that all of their successors were benevolent. Both problems could be overcome with an AI, so on that basis at least I would expect lots of people to find it attractive. I'd also expect it to be considered more attractive in e.g. China, where people seem to be more skeptical towards democracy than they are in the West.

Additionally, if the AI wouldn't be the equivalent of a benevolent dictator, but rather had a more hands-off role that kept humans in power and only acted to e.g. prevent disease, violent crime, and accidents, then that could be attractive to a lot of people who preferred democracy.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-04-11T15:47:28.722Z · LW(p) · GW(p)

When you say autonomous AIs, do you mean AIs that are autonomous and superintelligent?

AIs that are initially autonomous and non-superintelligent, then gradually develop towards superintelligence

If you believe in the conjunction of claims that people are motivated to create autonomous, not just agentive, AIs, and that pretty well any AI can evolve into dangerous superintelligence, then the situation is dire, because you cannot guarantee to get in first with an AI policeman as a solution to AI threat.

The situation is better, but only slightly better with legal restraint as a solution to AI threat, because you can lower the probability of disaster by banning autonomous AI...but you can only lower it, not eliminate it, because no ban is 100% effective.

And how serious are you about the threat level? Compare with microbiological research. It could be the case that someone will accidentally create an organism that spells doom for the human race; it cannot be ruled out, but no one is panicking now because there is no specific reason to rule it in, no specific pathway to it. It is a remote possibility, not a serious one.

Someone who sincerely believed that rapid self-improvement towards autonomous AI could happen at any time, because there are no specific preconditions or precursors for it, is someone who effectively believes it could happen now. But someone who genuinely believes an AI apocalypse could happen now is someone who would be revealing their belief in their behaviour by heading for the hills, or smashing every computer they see.

(With the important caveat that it's unclear whether an AI needed to be generally superintelligent in order to pose a major risk for society.

Narrow superintelligences may well be less dangerous than general superintelligences, and if you are able to restrict the generality of an AI, that could be a path to incremental safety.

But if the path to some kind of spontaneous superintelligence in an autonomous AI is also a path to spontaneous generality, that is hopeless -- if the one can happen for no particular reason, so can the other. But is the situation really bad, or are these scenarios remote possibilities, like genetically engineered super plagues?

Do you think they could be deployed by basement hackers, or only by large organisations?

Hard to say. The way AI has developed so far, it looks like the capability might be restricted to large organizations with lots of hardware resources at first, but time will likely drive down the hardware requirements.

But by the time the hardware requirements have been driven down for entry-level AI, the large organizations will already have more powerful systems, and they will dominate for better or worse. If benevolent, they will suppress dangerous AIs coming out of basements; if dangerous, they will suppress rivals. The only problematic scenario is where the hackers get in first, since they are less likely to partition agency from intelligence, as I have argued a large organisation would.

But the one thing we know for sure about AI is that it is hard. The scenario where a small team hits on the One Weird Trick to achieve ASI is the most worrying, but also the least likely.

Do you think an organisation like the military or business has a motivation to deploy [autonomous AI]?

Yes.

Which would be what?

Do you agree that there are dangers to an FAI project that goes wrong?

Yes.

Do you have a plan B to cope with a FAI that goes rogue?

Such a plan would seem to require lots of additional information about both the specifics of the FAI plan, and also the state of the world at that time, so not really.

But building an FAI capable of policing other AIs is potentially dangerous, since it would need to be both a general intelligence and a superintelligence.

Do you think that having a AI potentially running the world is an attractive idea to a lot of people?

Depends on how we're defining "lots",

For the purposes of the current argument, a democratic majority.

but I think that the notion of a benevolent dictator has often been popular in many circles, who've also acknowledged its largest problems to be that 1) power tends to corrupt 2) even if you got a benevolent dictator, you also needed a way to ensure that all of their successors were benevolent. Both problems could be overcome with an AI,

There are actually three problems with benevolent dictators. As well as power corrupting and successorship, there is the problem of ensuring or detecting benevolence in the first place.

You have conceded that Gort AI is potentially dangerous. The danger is that it is fragile in a specific way: a near miss to a benevolent value system is a dangerous one.

so on that basis at least I would expect lots of people to find it attractive. I'd also expect it to be considered more attractive in e.g. China, where people seem to be more skeptical towards democracy than they are in the West.

Additionally, if the AI wouldn't be the equivalent of a benevolent dictator, but rather had a more hands-off role that kept humans in power and only acted to e.g. prevent disease, violent crime, and accidents, then that could be attractive to a lot of people who preferred democracy

That also depends on both getting it right and convincing people you have got it right.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2016-04-19T10:06:48.020Z · LW(p) · GW(p)

If you believe in the conjunction of claims that people are motivated to create autonomous, not just agentive, AIs, and that pretty well any AI can evolve into dangerous superintelligence, then the situation is dire, because you cannot guarantee to get in first with an AI policeman as a solution to AI threat.

The situation is better, but only slightly better with legal restraint as a solution to AI threat,

Indeed.

And how serious are you about the threat level? Compare with microbiological research. It could be the case that someone will accidentally create an organism that spells doom for the human race; it cannot be ruled out, but no one is panicking now because there is no specific reason to rule it in, no specific pathway to it. It is a remote possibility, not a serious one.

Someone who sincerely believed that rapid self-improvement towards autonomous AI could happen at any time, because there are no specific preconditions or precursors for it, is someone who effectively believes it could happen now. But someone who genuinely believes an AI apocalypse could happen now is someone who would be revealing their belief in their behaviour by heading for the hills, or smashing every computer they see.

I don't think that rapid self-improvement towards a powerful AI could happen at any time. It'll require AGI, and we're still a long way from that.

Narrow superintelligences may well be less dangerous than general superintelligences, and if you are able to restrict the generality of an AI, that could be a path to incremental safety.

It could, yes.

But by the time the hardware requirements have been driven down for entry level AI, the large organizations will already have more powerful systems, and they will dominate for better or worse.

Assuming they can keep their AGI systems in control.

Do you think an organisation like the military or business has a motivation to deploy [autonomous AI]?

Yes.

Which would be what?

See my response here and also section 2 in this post.

But building an FAI capable of policing other AIs is potentially dangerous, since it would need to be both a general intelligence and a superintelligence. [...] You have conceded that Gort AI is potentially dangerous. The danger is that it is fragile in a specific way: a near miss to a benevolent value system is a dangerous one.

Very much so.

comment by [deleted] · 2016-03-11T16:00:10.801Z · LW(p) · GW(p)

I think you very much misunderstand my suggestion. I'm saying that there is no reason to presume AI will be given the keys to the kingdom from day one, not advocating for some sort of regulatory regime.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2016-03-11T18:28:08.052Z · LW(p) · GW(p)

So what do you see as the mechanism that will prevent anyone from handing the AI those keys, given the tremendous economic pressure towards doing exactly that?

As we discussed in Responses to AGI Risk:

As with a boxed AGI, there are many factors that would tempt the owners of an Oracle AI to transform it to an autonomously acting agent. Such an AGI would be far more effective in furthering its goals, but also far more dangerous.

Current narrow-AI technology includes HFT algorithms, which make trading decisions within fractions of a second, far too fast to keep humans in the loop. HFT seeks to make a very short-term profit, but even traders looking for a longer-term investment benefit from being faster than their competitors. Market prices are also very effective at incorporating various sources of knowledge [135]. As a consequence, a trading algorithmʼs performance might be improved both by making it faster and by making it more capable of integrating various sources of knowledge. Most advances toward general AGI will likely be quickly taken advantage of in the financial markets, with little opportunity for a human to vet all the decisions. Oracle AIs are unlikely to remain as pure oracles for long.

Similarly, Wallach [283] discuss the topic of autonomous robotic weaponry and note that the US military is seeking to eventually transition to a state where the human operators of robot weapons are ‘on the loop’ rather than ‘in the loop’. In other words, whereas a human was previously required to explicitly give the order before a robot was allowed to initiate possibly lethal activity, in the future humans are meant to merely supervise the robotʼs actions and interfere if something goes wrong.

Human Rights Watch [90] reports on a number of military systems which are becoming increasingly autonomous, with the human oversight for automatic weapons defense systems—designed to detect and shoot down incoming missiles and rockets—already being limited to accepting or overriding the computerʼs plan of action in a matter of seconds. Although these systems are better described as automatic, carrying out pre-programmed sequences of actions in a structured environment, than autonomous, they are a good demonstration of a situation where rapid decisions are needed and the extent of human oversight is limited. A number of militaries are considering the future use of more autonomous weapons.

In general, any broad domain involving high stakes, adversarial decision making and a need to act rapidly is likely to become increasingly dominated by autonomous systems. The extent to which the systems will need general intelligence will depend on the domain, but domains such as corporate management, fraud detection and warfare could plausibly make use of all the intelligence they can get. If oneʼs opponents in the domain are also using increasingly autonomous AI/AGI, there will be an arms race where one might have little choice but to give increasing amounts of control to AI/AGI systems.

Miller [189] also points out that if a person was close to death, due to natural causes, being on the losing side of a war, or any other reason, they might turn even a potentially dangerous AGI system free. This would be a rational course of action as long as they primarily valued their own survival and thought that even a small chance of the AGI saving their life was better than a near-certain death.

Some AGI designers might also choose to create less constrained and more free-acting AGIs for aesthetic or moral reasons, preferring advanced minds to have more freedom.

Replies from: None
comment by [deleted] · 2016-03-11T21:23:21.329Z · LW(p) · GW(p)

What "tremendous economic pressure"? The argument doesn't hold weight absent that unsubstantiated justification.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2016-03-14T09:24:45.356Z · LW(p) · GW(p)

I thought my excerpt answered that, but maybe that was illusion of transparency speaking. In particular, this paragraph:

In general, any broad domain involving high stakes, adversarial decision making and a need to act rapidly is likely to become increasingly dominated by autonomous systems. The extent to which the systems will need general intelligence will depend on the domain, but domains such as corporate management, fraud detection and warfare could plausibly make use of all the intelligence they can get. If oneʼs opponents in the domain are also using increasingly autonomous AI/AGI, there will be an arms race where one might have little choice but to give increasing amounts of control to AI/AGI systems.

To rephrase: the main trend in history has been to automate everything that can be automated, both to reduce costs and because machines can do things better than humans can. This isn't going to stop: I've already seen articles calling for both company middle managers and government bureaucrats to be replaced with AIs. If you have any kind of a business, you could potentially make it run better by putting a sufficiently sophisticated AI in charge - because it can think faster and smarter, deal with more information at once, and avoid the self-interest and office politics that lead many employees to act suboptimally from the company's point of view, which is what you'd get with a thousand human employees rather than a single AI.

This trend has been going on throughout history, doesn't show any signs of stopping, and inherently involves giving the AI systems whatever agency they need in order to run the company better.

And if your competitors are having AIs run their company and you don't, you're likely to be outcompeted, so you'll want to make sure your AIs are smarter and more capable of acting autonomously than the competitors. These pressures aren't just going to vanish at the point when AIs start approaching human capability.

The same considerations also apply to domains other than business - like governance - but the business and military domains are the most likely to have intense arms race dynamics going on.

Replies from: None
comment by [deleted] · 2016-03-14T19:11:23.857Z · LW(p) · GW(p)

Yes, illusion of transparency at work here. That paragraph has always been so clearly wrong to me that I wrote it off as the usual academic prose fluff, and didn't realize it was in fact the argument being made. Here is the issue I take with that:

You can find instances where industry is clamoring to use AI to reduce costs / improve productivity. For example, Uber and self-driving cars. However, in these cases a combination of two factors is at work: (1) the examples are necessarily specialized narrow AI, not general decision making; and/or (2) the costs of poor decision making are externalized. Let's look at these points in more detail:

Anytime a human is being used as a meat robot, e.g. an Uber driver, a machine can do the job better and more efficiently with quantifiable tradeoffs due to the machine's own quirks. However one must not forget that this is the case because the context has already been specialized! One can replace a minimum wage burger flipper with a machine because the job is part of a three-ring binder enterprise that has already been exhaustively thought out to such a degree that every component task can be taught to a minimum wage, distracted teenage worker. If the mechanical burger flipper fails, you go back to paying a $10/hr meat robot to do the trick. But what happens when the corporate strategy robot fails and the new product is a flop? You lose hundreds of millions of invested dollars. And worse, you don't know until it is all over and played out. Not comparable at all.

Uber might want a fleet of self-driving cars. But that's because the costs of being wrong are externalized. Get in an accident? It's your driver's problem, not Uber's. Self-driving car gets in an accident? It's the problem of the car's owner which, surprise, is not Uber. The applications of AGI have risks that are not so easily externalized, however.

I can see how one might think that unchecked AGI would improve the efficiency of corporate management, fraud detection, and warfare. However, that's confirmation bias. I assure you that the corporate strategists, fraud specialists, and generals get paid the big bucks to think about risk and the ways in which things can go wrong. I can give examples of what could go wrong when an alien AGI psychology tries to interact with irrational humans, but it's much simpler to remember that even presumably superhuman AGIs have error rates, and these error rates will be higher than humans' for a long time while the technology is still developing. And what happens when an AGI makes a mistake?

  1. A corporate strategist AGI makes a mistake, and the directors of the corporation who have a fiduciary responsibility to shareholders are held personally accountable. Indemnity insurance refuses to pay out as upper management purposefully took themselves out of the loop, an action that is considered irresponsible in hindsight.

  2. A fraud specialist AGI makes a mistake, and its company turns a blind eye to hundreds of millions of dollars of fraud that a human would have seen. Business goes belly-up.

  3. A war-making AGI makes a mistake, and you are now dead.

I hope that you'll forgive me, but I must call on anecdotal evidence here. I am the co-founder of a startup that has raised >$75MM. I understand very well how investors, upper management, and corporate strategists manage risk. I also have observed how extremely terrified of additional risk they are. The supposition that they would be willing to put a high-risk proto-AGI in the driver's seat is naïve to say the least. These are the people that are held accountable and suffer the largest losses when things go wrong, and they are terrified of that outcome.


What is likely to happen, on the other hand, is a hybridization of machine and human. AGI cognitive assistance will permeate these industries, but its job will be to give recommendations, not to steer things directly. And it's not at all so clear to me that this approach, "Oracle AI" as it is called on LW, is so dangerous.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2016-03-15T06:15:19.517Z · LW(p) · GW(p)

Thank you for the patient explanation! This is an interesting argument that I'll have to think about some more, but I've already adjusted my view of how I expect things to go based on it.

Two questions:

First, isn't algorithmic trading a counterexample to your argument? It's true that it's a narrow domain, but it's also one where AI systems are trusted with enormous sums of money, and have the potential to make enormous losses. E.g. one company apparently lost $440 million in less than an hour due to a glitch in their software. Wikipedia on the consequences:

Knight Capital took a pre-tax loss of $440 million. This caused Knight Capital's stock price to collapse, sending shares lower by over 70% from before the announcement. The nature of Knight Capital's unusual trading activity was described as a "technology breakdown".[14][15]

On Sunday, August 5 the company managed to raise around $400 million from half a dozen investors led by Jefferies in an attempt to stay in business after the trading error. Jefferies' CEO, Richard Handler and Executive Committee Chair Brian Friedman structured and led the rescue and Jefferies purchased $125 million of the $400 million investment and became Knight's largest shareholder. [2]. The financing would be in the form of convertible securities, bonds that turn into equity in the company at a fixed price in the future.[16]

The incident was embarrassing for Knight CEO Thomas Joyce, who was an outspoken critic of Nasdaq's handling of Facebook's IPO.[17] On the same day the company's stock plunged 33 percent, to $3.39; by the next day 75 percent of Knight's equity value had been erased.[18]

Also, you give several examples of AGIs potentially making large mistakes with large consequences, but couldn't e.g. a human strategist make a similarly big mistake as well?

You suggest that the corporate leadership could be held more responsible for a mistake by an AGI than if a human employee made the mistake, and I agree that this is definitely plausible. But I'm not sure whether it's inevitable. If the AGI was initially treated the way a junior human employee would be, i.e. initially kept subject to more supervision and given more limited responsibilities, and then had its responsibilities scaled up as people came to trust it more and it learned from its mistakes, would that necessarily be considered irresponsible by the shareholders and insurers? (There's also the issue of privately held companies with no need to keep external shareholders satisfied.)

Replies from: Lumifer
comment by Lumifer · 2016-03-15T14:35:23.632Z · LW(p) · GW(p)

one where AI systems are trusted with enormous sums of money

Kinda. They are carefully watched and have separate risk management systems which impose constraints and limits on what they can do.

E.g. one company apparently lost $440 million in less than an hour due to a glitch in their software.

Yes, but that has nothing to do with AI: "To err is human, but to really screw up you need a computer". Besides, there are equivalent human errors (fat fingers, add a few zeros to a trade inadvertently) with equivalent magnitude of losses.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2016-03-18T11:02:34.377Z · LW(p) · GW(p)

have separate risk management systems which impose constraints and limits on what they can do.

If those risk management systems are themselves software, that doesn't really change the overall picture.

Yes, but that has nothing to do with AI:

If we're talking about "would companies place AI systems in a role where those systems could cost the company lots of money if they malfunctioned", then examples of AI systems having been placed in roles where they cost the company a lot of money have everything to do with the discussion.

Replies from: Lumifer
comment by Lumifer · 2016-03-18T15:01:59.325Z · LW(p) · GW(p)

If those risk management systems are themselves software, that doesn't really change the overall picture.

It does because the issue is complexity and opaqueness. A simple gatekeeper filter along the lines of

  if (trade.size > gazillion) { reject(trade) }

is not an "AI system".

Replies from: Torchlight_Crimson
comment by Torchlight_Crimson · 2016-03-19T00:52:48.541Z · LW(p) · GW(p)

In which case the AI splits the transaction into 2 transactions, each just below a gazillion.

Replies from: Lumifer
comment by Lumifer · 2016-03-21T14:50:48.219Z · LW(p) · GW(p)

I'm talking about contemporary-level-of-technology trading systems, not about future malicious AIs.

Replies from: Crownless_Prince
comment by Crownless_Prince · 2016-03-21T23:58:56.246Z · LW(p) · GW(p)

So? An opaque neural net would quickly learn how to get around trade size restrictions if given the proper motivations.

Replies from: Lumifer
comment by Lumifer · 2016-03-22T00:07:42.885Z · LW(p) · GW(p)

So? An opaque neural net would quickly learn how to get around trade size restrictions if given the proper motivations.

At which point the humans running this NN will notice that it likes to go around risk control measures and will... persuade it that it's a bad idea.

It's not like no one is looking at the trades it's doing.

Replies from: Crownless_Prince
comment by Crownless_Prince · 2016-03-22T00:16:58.239Z · LW(p) · GW(p)

At which point the humans running this NN will notice that it likes to go around risk control measures and will... persuade it that it's a bad idea.

How? By instituting more complex control measures? Then you're back to the problem Kaj mentioned above.

Replies from: Lumifer
comment by Lumifer · 2016-03-22T16:59:08.936Z · LW(p) · GW(p)

How?

In the usual way. Contemporary trading systems are not black boxes full of elven magic. They are models, that is, a bunch of code and some data. If the model doesn't do what you want it to do, you stick your hands in there and twiddle the doohickeys until it stops outputting twaddle.

Besides, in most trading systems the sophisticated part ("AI") is an oracle. Typically it outputs predictions (e.g. of prices of financial assets) and its utility function is some loss function on the difference between the prediction and the actual. It has no concept of trades, or dollars, or position limits.

Translating these predictions into trades is usually quite straightforward.
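Something like this toy sketch, say (made-up numbers and sizing rules, purely for illustration - a real desk would have far more checks): the model only emits a forecast, and a thin, readable layer turns the forecast into an order and enforces the limits.

    # Toy illustration: the "oracle" predicts a price; a simple, transparent
    # layer converts that prediction into an order, subject to hard limits.
    MAX_POSITION = 1_000      # risk limits live outside the model
    MAX_ORDER = 200

    def target_position(predicted_price, current_price):
        edge = (predicted_price - current_price) / current_price
        raw = int(edge * 50_000)                       # made-up sizing rule
        return max(-MAX_POSITION, min(MAX_POSITION, raw))

    def order_from_prediction(predicted_price, current_price, current_position):
        desired = target_position(predicted_price, current_price)
        order = desired - current_position
        return max(-MAX_ORDER, min(MAX_ORDER, order))  # clip the order size too

    # Forecast of 101.2 vs. a market price of 100.0, currently holding 100 shares:
    print(order_from_prediction(101.2, 100.0, 100))    # prints 200 (capped buy)

Everything the model can actually do to the market passes through that second layer, and the second layer is simple enough to read in one sitting.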

comment by TheAncientGeek · 2016-03-11T13:49:02.709Z · LW(p) · GW(p)

I suspect that this dates back to a time when MIRI believed the answer to AI safety was to build an agentive, maximal superintelligence, align its values with ours, and put it in charge of all the other AIs.

The first idea has been effectively shelved, since MIRI has produced about zero lines of code, but the idea that AI safety is value alignment continues with considerable momentum. And value alignment only makes sense if you are building an agentive AI (and have given up on corrigibility).

comment by jacob_cannell · 2016-03-12T08:55:57.023Z · LW(p) · GW(p)

Briefly skimming Christiano's post, this is actually one of the few/first proposals from someone MIRI related that actually seems to be on the right track (and similar to my own loose plans). Basically it just boils down to learning human utility functions with layers of meta-learning, with generalized RL and IRL.

comment by Houshalter · 2016-03-09T22:10:45.612Z · LW(p) · GW(p)

EY was influenced by E.T. Jaynes, who was really against neural networks and in favor of Bayesian networks. He thought NNs were unprincipled and not mathematically elegant, and Bayes nets were. I see the same opinions in some of EY's writings, like the one you link. And the general attitude that "non-elegant = bad" is basically MIRI's mission statement.

I don't agree with this at all. I wrote a thing here about how NNs can be elegant, and derived from first principles. But more generally, AI should use whatever works. If that happens to be "scruffy" methods, then so be it.

Replies from: hairyfigment, turchin, V_V, cousin_it
comment by hairyfigment · 2016-03-11T08:18:44.487Z · LW(p) · GW(p)

But more generally, AI should use whatever works. If that happens to be "scruffy" methods, then so be it.

This seems like a bizarre statement if we care about knowable AI safety. Near as I can tell, you just called for the rapid creation of AGI that we can't prove non-genocidal.

Replies from: dxu, jacob_cannell
comment by dxu · 2016-03-11T17:40:46.216Z · LW(p) · GW(p)

I don't believe Houshalter was referring to proving Friendliness (or something along those lines); my impression is that he was talking about implementing an AI, in which case neural networks, while "scruffy", should be considered a legitimate approach. (Of course, the "scruffiness" of NNs could very well affect certain aspects of Friendliness research; my relatively uninformed impression is that it's very difficult to prove results about NNs.)

comment by jacob_cannell · 2016-03-12T09:02:01.739Z · LW(p) · GW(p)

If you can prove anything interesting about a system, that system is too simple to be interesting. Logic can't handle uncertainty, and doesn't scale at all to describing/modelling systems as complex as societies, brains, AIs, etc.

Replies from: Gurkenglas
comment by Gurkenglas · 2016-03-12T16:23:44.284Z · LW(p) · GW(p)

AIXI is simple, and if our universe happened to allow Turing machines to calculate endlessly behind Cartesian barriers, it could be interesting in the sense of actually working.

Replies from: jacob_cannell
comment by jacob_cannell · 2016-03-12T18:34:02.939Z · LW(p) · GW(p)

We have wildly different definitions of interesting, at least in the context of my original statement. :)

comment by turchin · 2016-03-09T22:14:31.352Z · LW(p) · GW(p)

Yes, we need to find a way to make existing AIs safe.

comment by V_V · 2016-03-10T01:20:55.978Z · LW(p) · GW(p)

I don't agree with this at all. I wrote a thing here about how NNs can be elegant, and derived from first principles.

Nice post.

Anyway, according to some recent works (ref, ref), it seems to be possible to directly learn digital circuits from examples using some variant of backpropagation. In principle, if you add a circuit size penalty (which may well be the tricky part) this becomes time-bounded maximum a posteriori Solomonoff induction.

Replies from: Houshalter
comment by Houshalter · 2016-03-10T04:42:45.832Z · LW(p) · GW(p)

Yes, binary neural networks are super interesting because they can be made much more compact in hardware than floating point ops. However, there isn't much (theoretical) advantage otherwise. Anything a circuit can do, an NN can do, and vice versa.

A circuit size penalty is already a very common technique. It's called weight decay, where the synapses are encouraged to be as close to zero as possible. A synapse of 0 is the same as it not being there, which means the neural net parameters require less information to specify.
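Here's a bare-bones sketch of what that looks like in practice (my own toy example in numpy, nothing fancy): the extra decay term in the update shrinks every weight toward zero on each step, so connections that aren't earning their keep fade out.

    import numpy as np

    # Toy linear model where only 3 of the 20 weights actually matter.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 20))
    true_w = np.zeros(20)
    true_w[:3] = [2.0, -1.0, 0.5]
    y = X @ true_w + 0.1 * rng.normal(size=256)

    w = rng.normal(size=20)
    lr, decay = 0.01, 1e-2
    for _ in range(2000):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of the squared error
        w -= lr * (grad + decay * w)        # the "+ decay * w" term is weight decay

    print(np.round(w, 2))   # the 17 irrelevant weights end up close to zero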

comment by cousin_it · 2016-03-09T22:23:45.466Z · LW(p) · GW(p)

Agreed on all points.

I suppose the main lesson for us can be summarized by the famous verse:

A little learning is a dangerous thing;
Drink deep, or taste not the Pierian spring:
There shallow draughts intoxicate the brain,
And drinking largely sobers us again.

The sequences definitely qualify as shallow draughts that intoxicate the brain :-(

comment by turchin · 2016-03-09T15:58:33.299Z · LW(p) · GW(p)

I think that MIRI made a mistake when it decided not to be involved in actual AI research, but only in AI safety research. In retrospect the nature of this mistake is clear: MIRI was not recognised inside the AI community, and its safety recommendations are not connected with actual AI development paths.

It is like a person deciding not to study nuclear physics but only nuclear safety. It may even work up to a point, as safety principles are similar across many systems. But they will not be the first to learn about the surprises in the new technology.

Replies from: gjm, cousin_it, James_Miller
comment by gjm · 2016-03-09T16:54:38.638Z · LW(p) · GW(p)

I think that MIRI made a mistake when it decided not to be involved in actual AI research [...] MIRI was not recognised inside the AI community

Being involved in actual AI research would have helped with that only if MIRI had been able to do good AI research, and would have been a net win only if MIRI had been able to do good AI research at less cost to their AI safety research than the gain from greater recognition in the AI community (and whatever other benefits doing AI research might have brought).

I think you're probably correct that MIRI would be more effective if it did AI research, but it's not at all obvious.

Replies from: turchin
comment by turchin · 2016-03-09T17:45:20.871Z · LW(p) · GW(p)

Maybe it should be some AI research which is relevant to safety, like small self-evolving agents, or an AI agent which inspects other agents. It would also generate some profit.

comment by cousin_it · 2016-03-09T18:58:49.186Z · LW(p) · GW(p)

Agreed on all points.

LW was one handshake away from DeepMind; we interviewed Shane Legg and referred to his work many times. But I guess we didn't have the right attitude, and maybe still don't. Now is probably a good time to "halt, melt and catch fire", as Eliezer puts it.

Replies from: hg00, Larks, James_Miller
comment by hg00 · 2016-03-12T19:09:42.737Z · LW(p) · GW(p)

I'm confused about what you would have done with the benefit of hindsight (beyond having folks like Jaan Tallinn and Elon Musk who were concerned with AI safety become investors in DeepMind, which was in fact done).

comment by Larks · 2016-03-10T03:10:01.386Z · LW(p) · GW(p)

What do you mean by "one handshake"?

comment by James_Miller · 2016-03-09T19:26:44.034Z · LW(p) · GW(p)

Google bought DeepMind for, reportedly, more than $500 million. Other than possibly Eliezer, MIRI probably doesn't have the capacity to employ people that the market places such a high value on.

Replies from: turchin, cousin_it
comment by turchin · 2016-03-09T19:33:33.057Z · LW(p) · GW(p)

EY could command such a price if he had invested more time in studying neural networks rather than in writing science fiction. LessWrong is also full of clever minds who could probably be employed on any small AI project.

Replies from: V_V
comment by V_V · 2016-03-09T22:22:52.011Z · LW(p) · GW(p)

EY could command such a price if he had invested more time in studying neural networks rather than in writing science fiction.

Has he ever demonstrated any ability to produce anything technically valuable?

Replies from: turchin
comment by turchin · 2016-03-09T22:27:03.083Z · LW(p) · GW(p)

He has the ability to attract groups of people and write interesting texts. So he could attract good programmers for any task.

Replies from: V_V
comment by V_V · 2016-03-09T23:45:11.155Z · LW(p) · GW(p)

He has the ability to attract groups of people and write interesting texts. So he could attract good programmers for any task.

He has the ability to attract self-selected groups of people by writing texts that these people find interesting. He has shown no ability to attract, organize and lead a group of people to solve any significant technical task. The research output of SIAI/SI/MIRI has been relatively limited and most of the interesting stuff came out when he was not at the helm anymore.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-03-10T18:38:21.068Z · LW(p) · GW(p)

While this may be formally right, the question is what it shows (or is supposed to show). Because on the other hand, MIRI does have quite some research output, as well as impact on AI safety - and that is what they set out to achieve.

Replies from: V_V
comment by V_V · 2016-03-10T22:39:10.976Z · LW(p) · GW(p)

Most MIRI research output (papers, in particular the peer-reviewed ones) was produced under the direction of Luke Muehlhauser or Nate Soares. Under the direction of EY the prevalent outputs were the LessWrong sequences and Harry Potter fanfiction.

The impact of MIRI research on the work of actual AI researchers and engineers is more difficult to measure; my impression is that it has not been large so far.

Replies from: Gunnar_Zarncke, gjm
comment by Gunnar_Zarncke · 2016-03-10T23:19:51.606Z · LW(p) · GW(p)

That looks like judgment from availability bias. How do you think MIRI went about getting researchers and these better directors? And funding? And all those connections that seem to have led to AI safety being a thing now?

Replies from: V_V
comment by V_V · 2016-03-11T00:06:22.971Z · LW(p) · GW(p)

IMHO, AI safety is a thing now because AI is a thing now and when people see AI breakthroughs they tend to think of the Terminator.

Anyway, I agree that EY is good at getting funding and publicity (though not necessarily positive publicity), my comment was about his (lack of) proven technical abilities.

Replies from: dxu
comment by dxu · 2016-03-11T17:37:26.478Z · LW(p) · GW(p)

IMHO, AI safety is a thing now because AI is a thing now and when people see AI breakthroughs they tend to think of the Terminator.

Under that hypothesis, shouldn't AI safety have become a "thing" (by which I assume you mean "gain mainstream recognition") back when Deep Blue beat Kasparov?

Replies from: V_V
comment by V_V · 2016-03-12T15:35:03.999Z · LW(p) · GW(p)

If you look up mainstream news articles written back then, you'll notice that people were indeed concerned. Also, maybe it's a coincidence, but The Matrix movie, which has an AI uprising as its main premise, came out two years later.

The difference is that in 1997 there weren't AI-risk organizations ready to capitalize on these concerns.

Replies from: dxu
comment by dxu · 2016-03-12T18:57:21.540Z · LW(p) · GW(p)

Which organizations are you referring to, and what sort of capitalization?

comment by gjm · 2016-03-11T00:55:41.206Z · LW(p) · GW(p)

Was Eliezer ever in charge? I thought that during the OB, LW and HP eras his role was something like "Fellow" and other people (e.g., Goertzel, Muehlhauser) were in charge.

comment by cousin_it · 2016-03-09T21:21:38.014Z · LW(p) · GW(p)

I'm not saying MIRI should've hired Shane Legg. It was more of a learning opportunity.

comment by James_Miller · 2016-03-09T16:08:32.756Z · LW(p) · GW(p)

MIRI will never have a comparative advantage in doing the parts of AI research that the big players think will lead to profitable outcomes.

Replies from: Manfred
comment by Manfred · 2016-03-10T15:38:55.538Z · LW(p) · GW(p)

They might indeed have comparative advantages, though not absolute ones.

comment by Raiden · 2016-03-09T17:56:06.671Z · LW(p) · GW(p)

Neural networks may very well turn out to be the easiest way to create a general intelligence, but whether they're the easiest way to create a friendly general intelligence is another question altogether.

Replies from: turchin
comment by turchin · 2016-03-09T18:02:08.044Z · LW(p) · GW(p)

They may be used to create the complex but boring parts of a real AI, like image recognition. DeepMind's system is nowhere near a pure NN; it combines several architectures. So NNs are like Tool AIs inside a larger AI system: they do a lot of work, but at a low level.

comment by skeptical_lurker · 2016-03-11T05:56:23.316Z · LW(p) · GW(p)

I may be missing something, but why does this matter? An AI has components, as does the human mind. When reasoning about friendliness, what matters is the goal component. Can't the perception/probability estimate module just be treated as an interchangeable black box, regardless of whether it is a DNN, or an MCTS Solomonoff induction approximation, or Bayes nets, or anything else?

Replies from: Torchlight_Crimson
comment by Torchlight_Crimson · 2016-03-11T07:35:15.388Z · LW(p) · GW(p)

Can't the perception/probability estimate module just be treated as an interchangeable black box, regardless of whether it is a DNN, or an MCTS Solomonoff induction approximation, or Bayes nets, or anything else?

Not necessarily. If the goal component wants to respect human preferences, it will be vital that the perception component correctly identifies what constitutes a "human".

Replies from: skeptical_lurker
comment by skeptical_lurker · 2016-03-11T08:51:06.096Z · LW(p) · GW(p)

This doesn't seem like a major problem, or one which is exclusive to friendliness - computers can already recognise pictures of humans, and any AGI is going to have to be able to identify and categorise things.

Replies from: bogus
comment by bogus · 2016-03-11T18:02:08.579Z · LW(p) · GW(p)

computers can already recognise pictures of humans

Well, not quite.

comment by Squark · 2016-04-15T16:41:30.591Z · LW(p) · GW(p)

"Neural networks" vs. "Not neural networks" is a completely wrong way to look at the problem.

For one thing, there are very different algorithms lumped under the title "neural networks". For example Boltzmann machines and feedforward networks are both called "neural networks" but IMO it's more because it's a fashionable name than because of actual similarity in how they work.

More importantly, the really significant distinction is making progress by trial and error vs. making progress by theoretical understanding. The goal of AI safety research should be shifting the balance towards the second option, since the second option is much more likely to yield results that are predictable and satisfy provable guarantees. In this context I believe MIRI correctly identified multiple important problems (logical uncertainty, decision theory, naturalized induction, Vingean reflection). I am mildly skeptical about the attempts to attack these problems using formal logic, but the approaches based on complexity theory and statistical learning theory that I'm pursuing seem completely compatible with various machine learning techniques, including ANNs.

comment by turchin · 2016-03-12T10:53:32.162Z · LW(p) · GW(p)

I have one more thought about it. If we work on the AI safety problem, we should find ways to secure existing AIs, not ideal AIs. It is as with nuclear safety: it would be easier to secure nuclear reactors than nuclear weapons, but knowing that the weapons will be created anyway, we still need to find a way to make them safe.

The world has chosen to develop neural-net-based AI. So we should think about how to install safety in it.

comment by jacob_cannell · 2016-03-12T08:50:24.413Z · LW(p) · GW(p)

When I started hearing about the latest wave of results from neural networks, I thought to myself that Eliezer was probably wrong to bet against them. Should MIRI rethink its approach to friendliness?

Yes.

comment by Kaj_Sotala · 2016-03-09T18:41:42.732Z · LW(p) · GW(p)

I found this interesting: AlphaGo's internal statistics predicted victory with high confidence at about three hours into the game (Lee Sedol resigned at about three and a half hours):

For me, the key moment came when I saw Hassabis passing his iPhone to other Google executives in our VIP room, some three hours into the game. From their smiles, you knew straight away that they were pretty sure they were winning – although the experts providing live public commentary on the match weren’t clear on the matter, and remained confused up to the end of the game just before Lee resigned.

Hassabis’s certainty came from Google’s technical team, who pore over AlphaGo’s evaluation of its position, information that isn’t publicly available. I’d been asking Silver how AlphaGo saw the game going, and he’d already whispered back: “It’s looking good”.

And I realised I had a lump in my throat. From that point on, it was crushing for me to watch Lee’s struggle.

Towards the end of the match, Michael Redmond, an American commentator who is the only westerner to reach the top rank of 9 dan pro, said the game was still “very close”. But Hassabis was frowning and shaking his head – he knew that AlphaGo was definitely winning. And then Lee resigned, three and a half hours in.

Also this bit, suggesting that Lee might still win some matches:

Silver said that – judging from the statistics he’d seen when sitting in Google’s technical room – “Lee Sedol pushed AlphaGo to its limits”.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-03-09T19:27:23.911Z · LW(p) · GW(p)

although the experts providing live public commentary on the match weren’t clear on the matter

This could be motivated thinking/speaking though.

Replies from: entirelyuseless, Vaniver, ChristianKl
comment by entirelyuseless · 2016-03-10T18:16:55.537Z · LW(p) · GW(p)

I watched the whole of both games played so far. In the first game, Redmond definitely thought that Lee Sedol was winning, and at a point close to the end, he said, "I don't think it's going to be close," and I am fairly confident he meant that Lee Sedol would win by a substantial margin. Likewise, he definitely showed real surprise when the resignation came: even at that point, he expected a human victory.

In the second game, he was more cautious and refused to commit himself, but still seemed to think there were points where Lee Sedol had the advantage. However, in this one he did end up admitting that AlphaGo was winning long before the end came.

Replies from: Vaniver
comment by Vaniver · 2016-03-10T19:32:41.758Z · LW(p) · GW(p)

In particular, I thought Redmond's handling of the top right corner was striking. He identified it as a potential attack for white several times before white's actual attack, and then afterwards thought that a move was 'big' that AlphaGo ignored; later, on calculation, he realized that it was only (if I recall correctly) a one point move.

It looked to me like an example of the human bias towards the corners and walls, combined with his surprise at some of AlphaGo's moves that made significant changes in the center.

comment by Vaniver · 2016-03-09T19:30:26.910Z · LW(p) · GW(p)

This could be motivated thinking/speaking though.

The English commentators seemed to take a significant time to come up with score estimates, to the point where I think they were genuinely uncertain in a way that AlphaGo wasn't. (What would be interesting, for example, would be to look at AlphaGo's estimation of the score of historical tournament games that had commentary and see how well the two track each other.)

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2016-03-09T19:40:42.278Z · LW(p) · GW(p)

Myungwan Kim seemed to be quicker to reach the right conclusion - IIRC, by the time that the fighting in the lower right corner ended, he was pretty sure of AlphaGo winning, to the extent of guessing that a move that lost AlphaGo around 1.5 points down there was because AG would win anyway.

comment by ChristianKl · 2016-03-10T12:30:41.292Z · LW(p) · GW(p)

I think it's more likely that the Go professionals (both the commentator and Lee) simply score certain patterns a few points differently than AlphaGo did, rather than that there was motivated thinking in the sense that the commentator wanted Lee to win.

comment by gjm · 2016-03-09T13:49:44.470Z · LW(p) · GW(p)

https://gogameguru.com/alphago-defeats-lee-sedol-game-1/ has some (non-video) comments on the game, and promises more detailed commentary later.

comment by Kaj_Sotala · 2016-03-09T18:40:31.454Z · LW(p) · GW(p)

9p Myungwan Kim's commentary (I much preferred this over the official commentary; he's also commenting tomorrow, so recommend following his stream then, though he might start an hour delayed like he did today).

Fun comment from him: "[AlphaGo] play like a god, a god of Go".

comment by cousin_it · 2016-03-10T08:29:05.194Z · LW(p) · GW(p)

Lee Sedol has just resigned the second game.

Replies from: WalterL, SquirrelInHell
comment by WalterL · 2016-03-10T13:18:54.442Z · LW(p) · GW(p)

I thought he was ahead for a lot of game two. I wonder if that was true, or if AlphaGo was in control all along.

Replies from: gjm, Vaniver
comment by gjm · 2016-03-10T14:00:59.657Z · LW(p) · GW(p)

According to this, Lee Sedol said in the post-game press conference that he didn't think he was ahead at any point in the game.

Replies from: WalterL
comment by WalterL · 2016-03-10T16:07:07.685Z · LW(p) · GW(p)

He did...but...like, you can't really trust that. He'd have said that (or similar) no matter what. It isn't game commentary, it's signalling.

There's a sort of humblebrag attitude that permeates all of Go. Every press conference is the same. Your opponent was very strong, you were fortunate, you have deep respect for your opponent and thank him for the opportunity.

In the game commentary you get the real dish. They stop using names and use "White/Black" to talk about either side. There things are much more honest.

comment by Vaniver · 2016-03-10T19:28:30.291Z · LW(p) · GW(p)

I thought he was ahead for a lot of game two. I wonder if that was true, or if AlphaGo was in control all along.

I thought it was a very likely AlphaGo victory about an hour in, and nearly certain about two hours in.

comment by SquirrelInHell · 2016-03-10T08:47:43.569Z · LW(p) · GW(p)

And looking at how he used up his time much sooner, he was more cautious today. He still lost and probably also took a psychological hit, so now my estimate of chances of Lee Sedol winning the whole match went down to ~5%.

Replies from: gjm
comment by gjm · 2016-03-10T12:43:26.752Z · LW(p) · GW(p)

Ignoring psychology and just looking at the results:

  1. Delta-function prior at p=1/2 -- i.e., completely ignore the first two games and assume they're equally matched. Lee Sedol wins 12.5% of the time.

  2. Laplace's law of succession gives a point estimate of 1/4 for Lee Sedol's win probability now. That means Lee Sedol wins about 1.6% of the time. [EDITED to add:] Er, no, actually if you're using the rule of succession you should apply it afresh after each game, and then the result is the same as with a uniform prior on [0,1] as in #3 below. Thanks to Unnamed for catching my error.

  3. Uniform-on-[0,1] prior for Lee Sedol's win probability means posterior density is f(p)=3(1-p)^2, which means he wins the match exactly 5% of the time.

  4. I think most people expected it to be pretty close. Take a prior density f(p)=6p(1-p), which favours middling probabilities but not too outrageously; then he wins the match about 7.1% of the time.

So ~5% seems reasonable without bringing psychological factors into it.
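(If anyone wants to check the arithmetic, here's a quick numerical sketch - Lee Sedol has to win games 3, 4 and 5 after losing the first two.)

    import numpy as np

    p = np.linspace(0.0005, 0.9995, 1000)   # midpoints of 1000 bins on [0,1]
    dp = 0.001

    def match_win_prob(prior):
        posterior = prior * (1 - p) ** 2            # condition on two losses
        posterior = posterior / (posterior * dp).sum()
        return (posterior * p ** 3 * dp).sum()      # then win three in a row

    print((1 / 2) ** 3)                      # 1: fixed p = 1/2               -> 0.125
    print((1 / 4) * (2 / 5) * (1 / 2))       # 2: rule of succession, updated -> 0.05
    print(match_win_prob(np.ones_like(p)))   # 3: uniform prior               -> ~0.05
    print(match_win_prob(6 * p * (1 - p)))   # 4: Beta(2,2) prior             -> ~0.071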

Replies from: Unnamed
comment by Unnamed · 2016-03-10T23:41:04.055Z · LW(p) · GW(p)

Laplace's law of succession gives Lee Sedol a 5% chance of winning the match (and AlphaGo a 50% chance of a 5-0 sweep). It gives him a 1/4 chance of winning game 3, a 2/5 chance of winning game 4 conditional on winning game 3, and a 1/2 chance of winning game 5 conditional on winning games 3&4. It's important to keep updating the probability after each game, because 1/4 is just a point estimate for a distribution of true win probabilities and the cases where he wins game 3 tend to come from the part of the distribution where his true win probability is larger than 1/4. It is not a coincidence that Laplace's law (with updating) gives the same result as #3 - Laplace's law can be derived from assuming a uniform prior.

Replies from: gjm
comment by gjm · 2016-03-10T23:59:31.066Z · LW(p) · GW(p)

Hmm, I explicitly considered whether using LLS we should update after each new game and decided it was a mistake, but on reflection you're right. (Of course what's really right is to have an actual prior and do Bayesian updates, which is one reason why I didn't consider at greater length and maybe get the right answer :-).)

Sorry about that.

comment by James_Miller · 2016-03-10T17:58:10.644Z · LW(p) · GW(p)

For me the most interesting part of this match was the part where one of the DeepMind team confirmed that because AlphaGo optimizes for probability of winning rather than expected score difference, games where it has the advantage will look close. It changes how you should interpret the apparent closeness of a game.

Qiaochu Yuan, or him quoting someone.

Replies from: dxu, TheAltar, Douglas_Knight
comment by dxu · 2016-03-11T17:49:40.256Z · LW(p) · GW(p)

This appears to be a general property of the Monte Carlo tree search algorithm, which AlphaGo employs.

comment by TheAltar · 2016-03-10T21:52:09.260Z · LW(p) · GW(p)

I was worried about something like this after the first game. I wasn't sure if expert Go players could discern the difference between AlphaGo playing slightly better than a 9dan versus playing massively better than a 9dan due to how the AI was set up and how difficult it might be to look at players better than the ones already at the top.

comment by Douglas_Knight · 2016-03-11T04:18:34.741Z · LW(p) · GW(p)

Almost all algorithms for almost all games play to win. That isn't anything special about AlphaGo. Maybe there's something special about Go or about this algorithm that makes it harder to assess, but that isn't a reason.

comment by Vaniver · 2016-03-09T14:01:50.385Z · LW(p) · GW(p)

Amazing match. Well worth staying up to 2 AM to watch.

Replies from: Vaniver
comment by Vaniver · 2016-03-09T14:35:27.156Z · LW(p) · GW(p)

Several things I thought were interesting:

  1. The commentator (on the Deepmind channel) calling out several of AlphaGo's moves as conservative. Essentially, it would play an additional stone to settle or augment some group that he wouldn't necessarily have played around. What I'm curious about is how much this reflects an attempt by AlphaGo to conserve computational resources. "I think move A is a 12 point swing, and move B is a 10 point swing, but move B narrows the search tree for future moves in a way that I think will net me at least 2 more points." (It wouldn't be verbalized like that, since it's not thinking verbally, but you can get this effect naturally from the tree search and position evaluator.)

  2. Both players took a long time to play "obvious" moves. (Typically, by this I mean something like a response to a forced move.) 이 sometimes didn't--there were a handful of moves he played immediately after AlphaGo's move--but I was still surprised by the amount of thought that went into some of the moves. This may be typical for tournament play--I haven't watched any live before this.

  3. AlphaGo's willingness to play aggressively and get involved in big fights with 이, and then not lose. I'm not sure that all the fights developed to AlphaGo's advantage, but evidently enough of them did by enough.

  4. I somewhat regret 이 not playing the game out to the end; it would have been nice to know the actual score. (I'm sure estimates will be available soon, if not already.)

Replies from: V_V, SquirrelInHell, gjm, ChristianKl, ChristianKl, ChristianKl
comment by V_V · 2016-03-09T16:29:21.909Z · LW(p) · GW(p)

What I'm curious about is how much this reflects an attempt by AlphaGo to conserve computational resources.

If I understand correctly, at least according to the Nature paper, it doesn't explicitly optimize for this. Game-playing software is often perceived as playing "conservatively"; this is a general property of minimax search, and in the limit the Nash equilibrium consists of maximally conservative strategies.

but I was still surprised by the amount of thought that went into some of the moves.

Maybe these obvious moves weren't so obvious at that level.

Replies from: Error, Vaniver
comment by Error · 2016-03-09T18:16:03.785Z · LW(p) · GW(p)

I don't know about that level, but I can think of at least one circumstance where I think far longer than would be expected over a forced move. If I've worked out the forced sequence in my head and determined that the opponent doesn't gain anything by it, but they play it anyway, I start thinking "Danger, Danger, they've seen something I haven't and I'd better re-evaluate."

Most of the time it's nothing and they just decided to play out the position earlier than I would have. But every so often I discover a flaw in the "forced" defense and have to start scrabbling for an alternative.

Replies from: WalterL
comment by WalterL · 2016-03-09T18:34:51.279Z · LW(p) · GW(p)

This is very true in Go. If you are both playing down a sequence of moves without hesitation, anticipating a payoff, one of you is wrong (kind of. It's hard to put in words.) It is always worth making double sure that it isn't you.

comment by Vaniver · 2016-03-09T19:20:34.525Z · LW(p) · GW(p)

Maybe these obvious moves weren't so obvious at that level.

Sure. And I'm pretty low as amateurs go--what I found surprising was that there were ~6 moves where I thought "obviously play X," and 이 immediately played X in half of them and spent 2 minutes to play X in the other half of them. It wasn't clear to me if 이 was precomputing something he would need later, or was worried about something I wasn't, or so on.

Most of the time I was thinking something like "well, I would play Y, but I'm pretty unconfident that's the right move" and then 이 or AlphaGo played something that was retrospectively superior to Y, or I was thinking something like "I have only the vaguest sense of what to do in this situation." So I guess I'm pretty well-calibrated, even if my skill isn't that great.

comment by SquirrelInHell · 2016-03-10T01:40:49.803Z · LW(p) · GW(p)

The commentator (on the Deepmind channel) calling out several of AlphaGo's moves as conservative. Essentially, it would play an additional stone to settle or augment some group that he wouldn't necessarily have played around. What I'm curious about is how much this reflects an attempt by AlphaGo to conserve computational resources. "I think move A is a 12 point swing, and move B is a 10 point swing, but move B narrows the search tree for future moves in a way that I think will net me at least 2 more points."

If the search tree is narrowed, it is narrowed for both players, so why would it be a gain?

Replies from: Vaniver
comment by Vaniver · 2016-03-10T01:45:15.235Z · LW(p) · GW(p)

If the search tree is narrowed, it is narrowed for both players, so why would it be a gain?

There may be an asymmetry between successful modes of attack and successful modes of defense--if there's a narrow thread that white can win through, and a thick thread that black can threaten through, then white wins computationally by closing off that tree.

But thanks for asking: I was confused somewhat because I was thinking about AI vs. human games, but the AI is trained mostly on human vs. human and AI vs. AI games, neither of which will have the AI vs. human feature. Well, except for bots playing on KGS.

Replies from: Vaniver
comment by Vaniver · 2016-03-21T18:22:56.082Z · LW(p) · GW(p)

But thanks for asking: I was confused somewhat because I was thinking about AI vs. human games, but the AI is trained mostly on human vs. human and AI vs. AI games, neither of which will have the AI vs. human feature. Well, except for bots playing on KGS.

As it turns out, we learned later that Fan Hui started working with Deepmind on AlphaGo after their match, and played a bunch of games against it as it improved. So it did have a number of AI vs. human training games.

comment by gjm · 2016-03-09T16:51:14.399Z · LW(p) · GW(p)

I'm sure estimates will be available soon

I saw some blog comment from someone claiming to be (IIRC) an amateur 3-4 dan -- i.e., good enough to estimate this sort of thing pretty well -- reckoning probably 3.5 or 4.5 points in white's favour. That would be after the komi of 7.5 points given to white as compensation for moving second, or so I assume from the half-points in the figure. So that would correspond to black being ahead by 3-4 points before komi.

comment by ChristianKl · 2016-03-10T09:04:58.474Z · LW(p) · GW(p)

I somewhat regret 이 not playing the game out to the end; it would have been nice to know the actual score. (I'm sure estimates will be available soon, if not already.)

That wouldn't have given you the actual score as AlphaGo didn't care to maximize the score in the endgame.

comment by ChristianKl · 2016-03-10T09:05:17.607Z · LW(p) · GW(p)

Both players took a long time to play "obvious" moves. (Typically, by this I mean something like a response to a forced move.)

Which specific moves do you mean?

Replies from: Vaniver
comment by Vaniver · 2016-03-10T19:34:48.774Z · LW(p) · GW(p)

I would have to rewatch the game, since the easily available record doesn't have the time it took them to make each move.

comment by ChristianKl · 2016-03-09T16:58:53.846Z · LW(p) · GW(p)

"I think move A is a 12 point swing, and move B is a 10 point swing, but move B narrows the search tree for future moves in a way that I think will net me at least 2 more points."

No. 2 points is a lot at that level. If the commentator thought a move cost 2 points he wouldn't call it conservative, he would call it an error.

Not playing out every move is more about keeping aji open and not wasting possible ko threats. Unfortunately I don't know how to translate aji into English.

Replies from: Vaniver, polymathwannabe
comment by Vaniver · 2016-03-09T19:28:32.797Z · LW(p) · GW(p)

No. 2 points is a lot at that level. If the commentator thought a move cost 2 points he wouldn't call it conservative, he would call it an error.

I think B actually results in more points overall, which is why it would play it; my curiosity is what fraction is due to direct effects vs. indirect effects.

For example, one could imagine the board position evaluation function being different for different timing schemes. If you're playing a blitz game where both players have 10 seconds to play each turn, some positions might move from mildly favoring black to strongly favoring black because white needs to do a bunch of thinking to navigate the game tree successfully.

Replies from: ChristianKl
comment by ChristianKl · 2016-03-10T09:21:56.222Z · LW(p) · GW(p)

It's not a blitz game and there is plenty of time to think through moves.

Just for the record at my prime I used to play Go at around 2 kyu.

comment by polymathwannabe · 2016-03-09T19:11:48.962Z · LW(p) · GW(p)

I understand aji as potential for future moves that is currently not too usable but may be after the board configuration has evolved.

Replies from: ChristianKl, Vaniver
comment by ChristianKl · 2016-03-10T12:47:39.654Z · LW(p) · GW(p)

It goes in that direction but moves don't have to be used directly to constrain movements elsewhere on the board.

When playing around with Fold.it there was a similar scenario. It's often possible to run a script to reach a higher local maximum. However, that made the fold more "rigid". The experienced folders only ran the script to search for local maxima at the end, once they had manually done everything that could be done. In my usage of Go vocabulary, running the script to optimize locally beforehand would also be a case of aji-keshi.

Aji is for me a phenomenological primitive that I learned while playing Go and that I can use outside of Go, but which doesn't have an existing English or German word.

comment by Vaniver · 2016-03-10T02:01:37.871Z · LW(p) · GW(p)

The way I think about aji is something fragile on a ledge--sure, it's safe now, but as things shift around, it may suddenly become unsafe.

comment by HungryHobo · 2016-03-09T18:07:16.452Z · LW(p) · GW(p)

I'm quite interested in how many of the methods employed in this AI can be applied to more general strategic problems.

From talking to a friend who did quite a bit of work in machine composition, he was of the opinion that tools for handling strategy tasks like Go would also apply strongly to many design tasks, like composing good music.

Replies from: Houshalter, ChristianKl, Gunnar_Zarncke, MrMind
comment by Houshalter · 2016-03-09T22:30:19.797Z · LW(p) · GW(p)

Sure, you can model music composition as an RL task. The AI composes a song, then predicts how much a human will like it. It then tries to produce songs that are more and more likely to be liked.

Another interesting thing that AlphaGo did was start by predicting what moves a human would make. Then it switched to reinforcement learning. So for a music AI, you would start with one that can predict the next note in a song. Then you switch to RL, and adjust its predictions so that it is more likely to produce songs humans like, and less likely to produce ones we don't like.
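As a very rough sketch of that two-phase idea (my own toy example, nowhere near a real system - the reward below is a stand-in, since in reality it would have to come from human feedback, which is exactly the expensive part):

    import numpy as np

    VOCAB = 8                                 # pretend there are 8 possible notes
    rng = np.random.default_rng(0)
    logits = np.zeros((VOCAB, VOCAB))         # logits[prev, next]: a bigram model

    def softmax(x):
        z = np.exp(x - x.max())
        return z / z.sum()

    def logprob_grad(prev, nxt):              # d log p(nxt | prev) / d logits[prev]
        g = -softmax(logits[prev])
        g[nxt] += 1.0
        return g

    # Phase 1: supervised pretraining on "human" melodies (random stand-ins here).
    melodies = [rng.integers(0, VOCAB, size=16) for _ in range(200)]
    for _ in range(30):
        for m in melodies:
            for prev, nxt in zip(m[:-1], m[1:]):
                logits[prev] += 0.1 * logprob_grad(prev, nxt)

    # Phase 2: REINFORCE against a stand-in reward (really: human ratings).
    def reward(seq):
        return -np.abs(np.diff(seq)).mean()   # toy preference for small steps

    baseline = 0.0
    for _ in range(500):
        seq = [int(rng.integers(VOCAB))]
        for _ in range(15):
            seq.append(int(rng.choice(VOCAB, p=softmax(logits[seq[-1]]))))
        r = reward(np.array(seq))
        baseline = 0.95 * baseline + 0.05 * r          # running baseline
        for prev, nxt in zip(seq[:-1], seq[1:]):
            logits[prev] += 0.05 * (r - baseline) * logprob_grad(prev, nxt)

A real version would replace the bigram table with a proper sequence model and the toy reward with a learned model of human ratings, but the supervised-then-RL structure is the same.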

However automated composition is something that a lot of people have experimented with before. So far there is nothing that works really well.

Replies from: ShardPhoenix, Vaniver
comment by ShardPhoenix · 2016-03-10T00:15:16.427Z · LW(p) · GW(p)

One difference is that you can't get feedback as fast when dealing with human judgement rather than win/lose in a game (where AlphaGo can play millions of games against itself).

Replies from: Houshalter, gwern
comment by Houshalter · 2016-03-10T04:52:16.669Z · LW(p) · GW(p)

Yes it would require a lot of human input.

However, the AI could learn to predict what humans like, and then use that as its judge, trying to produce songs that it predicts humans will like. Then when it tests them on actual humans, it can see if its predictions were right and improve them.

This is also a domain with vast amounts of unsupervised data available. We've created millions of songs, which it can learn from. Out of the space of all possible sounds, we've decided that this tiny subset is pleasing to listen to. There's a lot of information in that.

comment by gwern · 2016-03-10T00:44:36.603Z · LW(p) · GW(p)

You can get fast feedback by reusing existing databases if your RL agent can do off-policy learning. (You can consider this what the supervised pre-learning phase is 'really' doing.) Your agent doesn't have to take an action before it can learn from it. Consider the experience replay buffers. You could imagine a song-writing RL agent which has a huge experience replay buffer which is made just of fragments of songs you grabbed online (say, from the Touhou megatorrent with its 50k tracks).
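As a minimal sketch of what I mean (stand-in fragments and scores, obviously):

    import random
    from collections import deque

    class ReplayBuffer:
        def __init__(self, capacity=100_000):
            self.items = deque(maxlen=capacity)

        def add(self, fragment, score):
            self.items.append((fragment, score))

        def sample(self, batch_size):
            return random.sample(list(self.items), min(batch_size, len(self.items)))

    buffer = ReplayBuffer()
    # Seed the buffer from an existing corpus rather than the agent's own output.
    existing_fragments = [([60, 62, 64, 65], 0.8), ([60, 67, 72, 60], 0.5)]
    for fragment, score in existing_fragments:
        buffer.add(fragment, score)

    batch = buffer.sample(2)   # off-policy updates would be computed from this

The point is just that the agent can start learning from batches like that before it has generated a single note of its own.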

comment by Vaniver · 2016-03-09T22:34:32.047Z · LW(p) · GW(p)

However automated composition is something that a lot of people have experimented with before. So far there is nothing that works really well.

Emily Howell?

Replies from: Houshalter, 0mnus
comment by Houshalter · 2016-03-10T00:58:29.051Z · LW(p) · GW(p)

I was thinking more like these examples:

https://ericye16.com/music-rnn/

http://www.hexahedria.com/2015/08/03/composing-music-with-recurrent-neural-networks/

https://www.youtube.com/watch?v=0VTI1BBLydE

https://highnoongmt.wordpress.com/2015/05/22/lisls-stis-recurrent-neural-networks-for-folk-music-generation/

Replies from: gjm
comment by gjm · 2016-03-10T12:47:57.870Z · LW(p) · GW(p)

I think what Vaniver means is: It seems that Emily Howell works pretty damn well, contrary to your claim that nothing does. (By, so far as I understand, means very different from any sort of neural network.)

comment by 0mnus · 2016-03-18T12:19:39.887Z · LW(p) · GW(p)

I know the conversation here has run its course, but I just wanted to add: whether or not Emily Howell is seen as something that "works really well" as an automated system is probably up for debate. It seems to require quite a bit of input from Cope himself in order to come up with sensible, interesting music. For example, one of the most popular pieces from Emily Howell is this fugue: https://www.youtube.com/watch?v=jLR-_c_uCwI - we really don't know how much influence Cope had in creating this piece of music, because the process of composition was not transparent at all.

comment by ChristianKl · 2016-03-10T10:46:49.471Z · LW(p) · GW(p)

I think DeepMind focused on building this engine because they believe the methods they develop while doing it could potentially be transferred to other tasks.

comment by Gunnar_Zarncke · 2016-03-10T07:25:59.911Z · LW(p) · GW(p)

I think the basic method could be applied to a more general engine like that of Zillions of Games. And having an engine that plays any kind of strategy game well would be astonishing.

comment by MrMind · 2016-03-10T08:28:12.265Z · LW(p) · GW(p)

AlphaGo has convolutional neural networks, supervised learning, self-generated supervised learning, and a mix-up strategy between Monte Carlo rollouts and value function estimation.
All these strategies are suited to Go because it is a spatial game with a very well defined objective.
While I do see CNNs and supervised learning as well worth using for music, it is much more difficult to come up with something that resembles the third step in AlphaGo: generating millions of random 'games' (symphonies) with their own label (good music/bad music) to train an 'intuitive' network.

Replies from: gwern, ChristianKl
comment by gwern · 2016-03-10T15:03:57.864Z · LW(p) · GW(p)

While I do see CNNs and supervised learning as well worth using for music, it is much more difficult to come up with something that resembles the third step in AlphaGo: generating millions of random 'games' (symphonies) with their own label (good music/bad music) to train an 'intuitive' network.

Adversarial generative networks give you a good objective if you want to take a purely supervised approach.

comment by ChristianKl · 2016-03-10T10:53:57.699Z · LW(p) · GW(p)

generating millions of random 'games' (symphonies) with their own label (good music/bad music) to train an 'intuitive' network.

A Spotify-like service could be used to label the quality.

Alternatively it would also be nice to have music that's trained for specific goals like helping people concentrate while they work or reducing stress.

comment by turchin · 2016-03-13T09:27:41.573Z · LW(p) · GW(p)

Go champion Lee Se-dol strikes back to beat Google's DeepMind AI for the first time, in the fourth game, making the score 3:1. http://www.theverge.com/2016/3/13/11184328/alphago-deepmind-go-match-4-result

Replies from: philh, skeptical_lurker
comment by philh · 2016-03-15T12:09:52.272Z · LW(p) · GW(p)

Has anyone from Google commented much on AlphaGo's mistakes here? Why it made the mistake at 79, why it didn't notice until later that it was suddenly losing, and why it started playing so badly when it did notice.

(I've seen commentary from people who've played other monte-carlo based bots, but I'm curious whether Google has confirmed them.)

I don't think I've seen anyone say this explicitly: I would guess that part of the problem was AG hasn't had much training in "mistakes humans are likely to make". With good play, it could have recovered against Lee, but not against itself, and it didn't know it was playing Lee; somehow, the moves it actually played were ones that would have increased its chances of winning if it was playing itself.

Replies from: ChristianKl, skeptical_lurker
comment by ChristianKl · 2016-03-15T16:11:37.872Z · LW(p) · GW(p)

I think the DeepMind folks said that they have to get back to London to analyse the case in detail.

somehow, the moves it actually played were ones that would have increased its chances of winning if it was playing itself.

I don't think that's a good explanation. There's no way that removing its own ko threats with moves like P14 and O11 would have increased its chances if it had been playing against itself.

It looks a bit like the belief propagation needed to update after missing an important move doesn't really work.

comment by skeptical_lurker · 2016-03-15T12:35:29.097Z · LW(p) · GW(p)

I think its policy net was only trained on amateurs, not professionals or self-play, making it a little weak. Normally, I suppose, reading large numbers of game trees compensates, but the estimated probability of Lee playing his brilliant move 78 (and one other move, but I can't remember which) was 1/10,000, so I think that AG never even analysed the first move of that sequence.

In other words:

David Ormerod of GoGameGuru stated that although an analysis of AlphaGo's play around 79–87 was not yet available, he believed it was a result of a known weakness in play algorithms which use Monte Carlo tree search. In essence, the search attempts to prune sequences which are less relevant. In some cases a play can lead to a very specific line of play which is significant, but which is overlooked when the tree is pruned, and this outcome is therefore "off the search radar".[56]
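
As a toy illustration of that "off the search radar" effect (my sketch; the selection rule below is the PUCT-style formula described in the paper, but the constant and the zero value for unvisited moves are assumptions):

```python
import math

def puct_score(q, prior, parent_visits, child_visits, c_puct=5.0):
    # Q(s,a) + c_puct * P(s,a) * sqrt(N(s)) / (1 + N(s,a))
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)

N = 100_000  # visits to the parent node after a lot of search

# A well-regarded move: decent value estimate, already visited 60,000 times.
print(puct_score(q=0.55, prior=0.35, parent_visits=N, child_visits=60_000))  # ~0.56

# A move the policy net gives ~1/10,000 prior: never visited, value unknown (0).
print(puct_score(q=0.0, prior=0.0001, parent_visits=N, child_visits=0))      # ~0.16
```

With the prior that low, the exploration bonus never outweighs the value of moves the net already likes, so the whole line behind 78 can go completely unexamined.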

I wonder if Google could publish an SGF showing the most probable lines of play as calculated at each move, as well as the estimated probability of each of Lee's moves?

I wonder if the best thing to do would be to train nets on: strong amateur games (lots of games, but perhaps lower quality moves?); pro games (fewer games but higher quality?); and self-play (high quality, but perhaps not entirely human-like?) and then take the average of the three nets?

Of course, this triples the GPU cycles needed, but it could perhaps be implemented just for the first few moves in the game tree?
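
A minimal sketch of the averaging idea above (hypothetical code; `amateur_net`, `pro_net` and `selfplay_net` stand for three separately trained policy networks, each returning a probability distribution over the 361 points):

```python
import numpy as np

def ensemble_policy(features, nets, weights=None):
    """Weighted average of several policy nets' move distributions (hypothetical helper)."""
    if weights is None:
        weights = [1.0 / len(nets)] * len(nets)
    probs = sum(w * net(features) for w, net in zip(weights, nets))
    return probs / probs.sum()  # renormalise against numerical drift

# Toy stand-ins: each "net" here just returns a random distribution over 361 points.
rng = np.random.default_rng(0)
amateur_net = lambda f: rng.dirichlet(np.ones(361))
pro_net = lambda f: rng.dirichlet(np.ones(361))
selfplay_net = lambda f: rng.dirichlet(np.ones(361))

move_probs = ensemble_policy(None, [amateur_net, pro_net, selfplay_net])
print(move_probs.argmax(), round(move_probs.max(), 4))
```

Since evaluating three nets triples the cost per node, restricting the ensemble to nodes near the root, as suggested, seems like the natural compromise.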

Replies from: ChristianKl, philh
comment by ChristianKl · 2016-03-15T15:54:43.394Z · LW(p) · GW(p)

I don't think the issue is that 78 was a human-like move. It's just a move that's hard to see, both for humans and non-humans.

comment by philh · 2016-03-15T13:19:00.066Z · LW(p) · GW(p)

Naively, pruning seems like it would cause a mistake at 77 (allowing the brilliant followup 78), not at 79 (when you can't accidentally prune 78 because it's already on the board). But people have been saying that it made a mistake at 79.

I don't recall much detail about AG, but I thought the training it did was to improve the policy net? If the policy net was only trained on amateurs, what was it learning from self-play?

Replies from: skeptical_lurker
comment by skeptical_lurker · 2016-03-15T13:40:10.616Z · LW(p) · GW(p)

not at 79 (when you can't accidentally prune 78 because it's already on the board)

Of course, but I can't remember which was the other very low-probability move, so perhaps it was one of the later moves in that sequence?

I don't recall much detail about AG, but I thought the training it did was to improve the policy net? If the policy net was only trained on amateurs, what was it learning from self-play?

I thought the self-play only trained the value net (because they want the policy net to predict human moves, not its own moves), but I might be remembering incorrectly. A pity that the paper is behind a paywall.

comment by skeptical_lurker · 2016-03-13T18:22:58.055Z · LW(p) · GW(p)

AlphaGo seems to play really bad moves when it is losing. This makes some sense, as humans also make overplays out of desperation, but it suggests that AlphaGo would be bad at handicap games, unless they change the algorithm to maximise score instead of win probability.

Replies from: WalterL
comment by WalterL · 2016-03-14T17:21:04.131Z · LW(p) · GW(p)

Nothing "bad" about desperate overplays while losing from Alpha Go's perspective. In the same way that it doesn't care about winning by more than a half point, it doesn't mind making its loss more crushing. Invade every territory. If it doesn't work, you lose by a bit more. Boo hoo. If it works, you might winl

I'm very interested in the fact that they coded a "resign" function into it. I wouldn't have expected that.

Replies from: ChristianKl
comment by ChristianKl · 2016-03-14T18:13:54.449Z · LW(p) · GW(p)

Nothing "bad" about desperate overplays while losing from Alpha Go's perspective.

That's not what happened. T9 wasn't a desperate overplay. It was just bad. J10 might have made more sense as desperate overplay.

comment by Gunnar_Zarncke · 2016-03-19T23:12:32.161Z · LW(p) · GW(p)

Discussion on FiveThirtyEight in which experts discuss the consequences of the AlphaGo-Lee Sedol match.

Replies from: gjm
comment by gjm · 2016-03-20T00:31:48.510Z · LW(p) · GW(p)

Your link is broken -- you have a parenthesis ( instead of a bracket [ at the start of it.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-03-20T09:03:20.608Z · LW(p) · GW(p)

fixed

comment by ChristianKl · 2016-03-15T19:54:16.524Z · LW(p) · GW(p)

It's worth noting that the match is played under Chinese rules and not the more popular Japanese-style rules (Korean rules are also Japanese-style). That's because Chinese rules are easier for computers.

It would be interesting to have another match played under Japanese-style rules.

comment by skeptical_lurker · 2016-03-15T12:45:46.609Z · LW(p) · GW(p)

I listened to the closing press conference. Interestingly, Demis Hassabis discussed AI ethics twice, saying that development will be largely open-sourced to ensure that AI "is for the many, not just the few." So this gives the impression that Google's thinking on AI ethics is more along the lines of 'an AI-based economy renders many unemployed' rather than 'hard takeoff destroys humanity', or at least that is what they are publicly discussing at this time.

On a lighter note, one reporter asked, IIRC, "How many versions of AlphaGo are there, and how long does it take to clone AlphaGo?" as if AlphaGo were a living thing that could be cloned like a plant, but which took time because it had to be grown and nurtured. Perhaps it was an error in translation from Korean, but it really did seem like she thought that AlphaGo was alive. This rather confused the DeepMind people answering the question.

Replies from: PipFoweraker
comment by PipFoweraker · 2016-03-18T10:54:39.897Z · LW(p) · GW(p)

The thought intrigued me enough to check with a native Korean-speaking friend, and they said that "cloning" doesn't necessarily translate well: it could have been a question about the size of AlphaGo (in terms of copying it or the datasets) or about its reproducibility / iterations (i.e. are there v1.01s and v1.02s floating around).

comment by turchin · 2016-03-11T12:52:18.820Z · LW(p) · GW(p)

It is also interesting to know the size of AlphaGo.

Wiki says: "The distributed version in October 2015 was using 1,202 CPUs and 176 GPUs (and was developed by teem of 100 scientists). Assuming that it was best GPU on the market in 2015, with power around 1 teraflop, total power of AlphaGO was around 200 teraplop or more. (I would give it 100 Teraflop - 1 Petaflop with 75 probability estimate). I also think that the size of the program is around terabytes, but only conclude it from the number of computers in use.

This could give us a minimal size for an AI at the current level of technology. Fooming would not be easy for such an AI, as it would require sizeable new resources and rewriting of its complicated inner structure.

And it is also not computer-virus-sized yet, so it can't run away. A private researcher probably doesn't have such computational resources, but a hacker could use a botnet.

But if such an AI is used to create more effective master algorithms, it may foom.

Replies from: gwern, ChristianKl
comment by gwern · 2016-03-11T16:32:26.485Z · LW(p) · GW(p)

I also think that the size of the program is around terabytes, but I only conclude that from the number of computers in use.

I don't think that's true. The distributed system used for playing runs multiple copies of the CNN value network, so each one can do board evaluation during the MCTS on its own, without the performance disaster of sending positions over the network to a remote GPU or something crazy like that; it is not a single network sharded over two hundred servers (CPU != computer). Similarly for training: each machine was training the same network in parallel, not 1/200th of the full NN. (You could train something like AlphaGo on your laptop's GPU; it'd just take something like 2 years by their wall-clock numbers.)

The actual CNN is going to be something like 10MB-1GB, because any more than that and you can't fit it on 1 GPU for training. Reading the paper, it seems to be fairly comparable in size to ImageNet competitors:

Neural network architecture. The input to the policy network is a 19×19×48 image stack consisting of 48 feature planes. The first hidden layer zero pads the input into a 23×23 image, then convolves k filters of kernel size 5×5 with stride 1 with the input image and applies a rectifier nonlinearity. Each of the subsequent hidden layers 2 to 12 zero pads the respective previous hidden layer into a 21×21 image, then convolves k filters of kernel size 3×3 with stride 1, again followed by a rectifier nonlinearity. The final layer convolves 1 filter of kernel size 1×1 with stride 1, with a different bias for each position, and applies a softmax function. The match version of AlphaGo used k=192 filters; Fig. 2b and Extended Data Table 3 additionally show the results of training with k=128, 256 and 384 filters.

The input to the value network is also a 19×19×48 image stack, with an additional binary feature plane describing the current colour to play. Hidden layers 2 to 11 are identical to the policy network, hidden layer 12 is an additional convolution layer, hidden layer 13 convolves 1 filter of kernel size 1×1 with stride 1, and hidden layer 14 is a fully connected linear layer with 256 rectifier units. The output layer is a fully connected linear layer with a single tanh unit.

So 500MB would be a reasonable guess if you don't want to work out how many parameters that 13-layer network translates to. Not large at all, and model compression would at least halve that.
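
Working it out roughly from the quoted description (my back-of-the-envelope arithmetic; this covers the policy network only, ignoring the value network's extra convolution and 256-unit fully connected layer, and any rollout-policy parameters):

```python
k = 192       # filters in the match version
planes = 48   # input feature planes

params = 5 * 5 * planes * k + k      # layer 1: 5x5 conv over 48 planes, k filters plus biases
params += 11 * (3 * 3 * k * k + k)   # layers 2-12: 3x3 convs, k filters in and out
params += 1 * 1 * k * 1 + 19 * 19    # final 1x1 filter plus a per-position bias

print(params)            # ~3.9 million weights
print(params * 4 / 1e6)  # ~15.5 MB at 32-bit floats
```

So even with the value network included, the trained model should come to tens of megabytes rather than hundreds, which only strengthens the 'not large at all' point.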

This could give us a minimal size for an AI at the current level of technology. Fooming would not be easy for such an AI, as it would require sizeable new resources and rewriting of its complicated inner structure.

200 GPUs is not that expensive. Amazon will rent you 1 GPU at spot for ~$0.2/hour, so <$1k/day.

Replies from: turchin
comment by turchin · 2016-03-11T16:51:20.708Z · LW(p) · GW(p)

Thanks for the clarification. If the size is really 500 MB, it could easily be stolen or it could run away, and $1k a day seems affordable for a dedicated hacker.

comment by ChristianKl · 2016-03-14T18:17:53.730Z · LW(p) · GW(p)

Demis said that AlphaGo also works on a single computer. The distributed version has a 75% winning chance against the single-computer version. The hardware they used seems to be around the point where there are diminishing returns from adding additional hardware.

comment by TheAltar · 2016-03-10T16:32:01.466Z · LW(p) · GW(p)

Does anyone know the current odds being given on Lee Sedol winning any of the three remaining games against AlphaGo? I'm curious whether, at this point, AlphaGo could likely be beaten by a human player better than Sedol (assuming there are any), or whether we're looking at an AI player that is better than a human can be.

Replies from: Vaniver
comment by Vaniver · 2016-03-10T19:26:31.804Z · LW(p) · GW(p)

The odds I saw for the second match were about 2:3 favoring AlphaGo; my guess is the odds moving forward will be more like 1:4 favoring AlphaGo (but probably it should be closer to something like 1:9).

if we're looking at an AI player that is better than a human can be.

This is my estimation.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-03-13T15:39:44.569Z · LW(p) · GW(p)

Lee Sedol has now won the fourth game, which makes this very improbable. I still think AlphaGo is better than him, but this basically means that its competence can still be measured on a human scale.

Replies from: Vaniver
comment by Vaniver · 2016-03-13T19:54:31.485Z · LW(p) · GW(p)

Agreed that match 4 was a big surprise under my model (I thought it was about 1:20 favoring AlphaGo).

comment by ChristianKl · 2016-03-10T15:04:56.271Z · LW(p) · GW(p)

Does AlphaGo use the history of the 1,000,000 games it played against itself to look up similar situations, or is that game history only used to train the weights of the algorithm?

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-03-10T18:46:20.543Z · LW(p) · GW(p)

AlphaGo doesn't do any lookup of positions.

comment by Stingray · 2016-03-11T18:32:02.520Z · LW(p) · GW(p)

Lee Sedol isn't at the top of the Go ratings. How would Ke Jie fare against AlphaGo? A match against the best human player would be a better test of AlphaGo's capabilities.

Replies from: Dentin
comment by Dentin · 2016-03-12T17:19:45.670Z · LW(p) · GW(p)

Honestly, that hundred-point difference at the top of the Go ratings isn't really going to matter. At best, it probably means that the top player has a ten percent chance of winning a single game instead of a two percent chance. I wouldn't be surprised at all to hear that AlphaGo is playing at a rating in excess of four thousand, and could be expected to beat the best human players 99% of the time. Frankly, my gut instinct from watching the livestream is that AlphaGo is playing at such a high level that even the players at the top of the rankings are having a hard time recognising just how high that level is.
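
To put rough numbers on that (my sketch, using the standard Elo win-probability formula; Go rating lists use variants of it, so treat the figures as approximate):

```python
def win_prob_weaker(rating_gap):
    """Chance that the lower-rated player wins a single game, given the rating gap."""
    return 1.0 / (1.0 + 10.0 ** (rating_gap / 400.0))

print(win_prob_weaker(100))  # ~0.36 - what the hundred-point gap between top humans buys
print(win_prob_weaker(400))  # ~0.09 - roughly the "ten percent" scenario
print(win_prob_weaker(675))  # ~0.02 - roughly the "two percent" scenario
```

So the two-versus-ten-percent picture corresponds to AlphaGo sitting several hundred Elo points above the top humans, a gap the hundred-point difference between them barely dents.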

It must be very frustrating to be in that position - you're supposedly one of the best in the world, and for the first half of the game your opponent makes mostly OK but not great moves, including some likely mistakes and weird moves that seem pointless. Then, near the endgame, you've somehow ended up 20 points behind with no hope of victory, and you're not even sure how it happened.

Replies from: Dentin
comment by Dentin · 2016-03-13T16:25:07.429Z · LW(p) · GW(p)

Update: given the most recent win by Lee Sedol, my hypothesis above seems much less likely. AlphaGo may only be in the 3600-3800 range.