Challenge: know everything that the best go bot knows about go

post by DanielFilan · 2021-05-11T05:10:01.163Z · LW · GW · 93 comments

On a few different views, understanding the computation done by neural networks is crucial to building neural networks that constitute human-level artificial intelligence that doesn’t destroy all value in the universe. Given that many people are trying to build neural networks that constitute artificial general intelligence, it seems important to understand the computation in cutting-edge neural networks, and we basically do not.

So, how should we go from here to there? One way is to try hard to think about understanding, until you understand understanding well enough to reliably build understandable AGI. But that seems hard and abstract. A better path would be something more concrete.

Therefore, I set this challenge: know everything that the best go bot knows about go. At the moment, the best publicly available bot is KataGo; if you're at DeepMind or OpenAI and have access to a better go bot, I guess you should use that instead. If you think those bots are too hard to understand, you're allowed to make your own easier-to-understand bot, as long as it's the best.

What constitutes success?

Why do I think this is a good challenge?

Corollaries of success (non-exhaustive):

Drawbacks of success:

Related work:

A conversation with Nate Soares on a related topic probably helped inspire this post. Please don’t blame him if it’s dumb tho.

93 comments

Comments sorted by top scores.

comment by DanielFilan · 2021-05-11T19:26:33.121Z · LW(p) · GW(p)

I think there's some communication failure where people are very skeptical of this for reasons that they think are obvious given what they're saying, but which are not obvious to me. Can people tell me which subset of the below claims they agree with, if any? Also if you come up with slight variants that you agree with that would be appreciated.

  1. It is approximately impossible to succeed at this challenge.
  2. It is possible to be confident that advanced AGI systems will not pose an existential threat without being able to succeed at this challenge.
  3. It is not obvious what it means to succeed at this challenge.
  4. It will probably not be obvious what it means to succeed at this challenge at any point in the next 10 years, even if a bunch of people try to work on it.
  5. We do not currently know what it means for a go bot to know something in operational terms.
  6. At no point in the next 10 years could one be confident that one knew everything a go bot knew, because we won't be confident about what it means for a go bot to know something.
  7. You couldn't know everything a go bot knows without essentially being that go bot.

[EDIT: 8. One should not issue a challenge to know everything a go bot knows without having a good definition of what it means for a go bot to know things.]

Replies from: interstice, gjm, ACrackedPot, adamShimi, AllAmericanBreakfast, jacob-pfau, chasmani
comment by interstice · 2021-05-12T03:25:34.048Z · LW(p) · GW(p)

If your goal is to play as well as the best go bot, and/or to write from scratch a program that plays equally well, it seems probably impossible. A lot of the go bot's 'knowledge' could well be things like "here's a linear combination of 20,000 features of the board predictive of winning". There's no reason for the coefficients of that linear combination to be compressible in any way; it's just a mathematical fact that these particular coefficients happen to be the best at predicting winning. If you accepted "here the model is taking a giant linear combination of features" as "understanding", it might be more doable.
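The kind of 'knowledge' described here can be sketched as follows (everything in this snippet is hypothetical: random binary features and random coefficients standing in for learned ones):

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for a learned evaluation: 20,000 binary board
# features and one opaque coefficient per feature. Nothing forces these
# coefficients to be individually meaningful or compressible.
n_features = 20_000
coefficients = [random.gauss(0, 0.01) for _ in range(n_features)]
features = [random.randint(0, 1) for _ in range(n_features)]  # one position

# The model's "knowledge" of the position is just this dot product,
# squashed into a win probability.
logit = sum(c * f for c, f in zip(coefficients, features))
win_probability = 1.0 / (1.0 + math.exp(-logit))
print(f"predicted P(win) = {win_probability:.3f}")
```

There is nothing here to "understand" beyond the numbers themselves; any human-legible story about the coefficients would be extra structure the model doesn't need.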

Replies from: gwern
comment by gwern · 2021-05-12T03:48:50.519Z · LW(p) · GW(p)

An even more pointed example: chess endgame tables. What does it mean to 'fully understand' it beyond understanding the algorithms which construct them, and is it a reasonable goal to attempt to play chess endgames as well as the tables?

Replies from: paulfchristiano
comment by paulfchristiano · 2021-05-13T15:56:47.464Z · LW(p) · GW(p)

If you have a "lazy" version of the goal, like "have a question-answerer that can tell you anything the model knows" or "produce a locally human-legible but potentially giant object capturing everything the model knows" then chess endgame tables are a reasonably straightforward case ("position X is a win for white").

comment by gjm · 2021-05-11T21:01:35.620Z · LW(p) · GW(p)

(I am not one of the people who have expressed skepticism, but I find myself with what I take to be feelings somewhat similar to theirs.)

I agree with 1 if success is defined rather strictly (e.g., requiring that one human brain contain all the information in a form that actually enables the person whose brain it is to play like the bot does) but not necessarily if it is defined more laxly (e.g., it's enough if for any given decision the bot makes we have a procedure that pretty much always gives us a human-comprehensible explanation of why it made that decision, with explanations for different decisions always fitting into a reasonably consistent framework).

I have no idea about 2; I don't think I've seen any nontrivial but plausibly true propositions of the form "It is possible to be confident that advanced AGI systems will not pose an existential threat without X", but on the other hand I don't think this justifies much confidence that any specific X is a thing we should be working on if we care about being able to be confident that advanced AGI systems will not pose an existential threat.

I agree with 3, but I think this is because it hasn't been defined as explicitly as possible rather than because of some fundamental unclarity in the question. Accordingly, I think 4 is probably wrong.

I'm not sure whether 5 is true or not and suspect that the answer depends mostly on how you choose to define "know". (Maybe go bots don't know anything!) I'm pretty confident saying that today's best go bots know who's likely to win in many typical game positions, or whether a given move kills a particular group or not. Accordingly, I am inclined to disagree with 6, even though probably there are edge cases where it's not clear whether a given bot "knows" a given thing or not.

I don't know what "essentially being" means in 7; as written it looks wrong to me, but for some strong definitions of "know everything it knows" something close enough might be true. E.g., plausibly certain bits of the KataGo network are encoding things roughly along the lines of "at the location we're looking at white has a ponnuki shape whose influence is not negated by other nearby black groups"; one could know much of what the bot knows by regarding the ponnuki shape as valuable and understanding that some nearby configurations make it less valuable; but if knowing everything the bot knows includes computing exactly how good a given configuration of stones is in this respect, then plausibly you could only do that by having the ability to do pretty much the exact calculation the KataGo network does. (Perhaps the fine details are not part of what it knows but merely implementation details; maybe one could operationalize that in terms of the existence of similar, and similarly strong, bots where the fine details are somewhat different -- though I think that, as it stands, is a bit too simplistic. Or perhaps I could claim that "I" know what the bot knows if I understand the overall structure and have a computer file, or a book, containing the actual numbers. Something something Chinese Room something something systems reply something.)

(I find I can't help remarking that the final proposition makes me want to imagine a paper by Thomas Nagel entitled "What is it like to be a bot?". And of course it turns out that various people have in fact written pieces with that title.)

Replies from: DanielFilan
comment by DanielFilan · 2021-05-14T18:01:48.440Z · LW(p) · GW(p)

I think I basically agree with all of this.

comment by ACrackedPot · 2021-05-11T20:59:52.352Z · LW(p) · GW(p)

Taboo "know" and try to ask the question again, because I think you're engaging in a category error when you posit that, for example, a neural network actually knows anything at all.  That is, the concept of "knowledge" as it applies to a human being cannot be meaningfully compared to "knowledge" as it applies to a neural network; they aren't the same kind of thing.  A Go AI doesn't know how to play Go; it knows the current state of the board.  These are entirely different categories of things.

The closest thing I think the human brain has to the kind of "knowledge" that a neural network uses is the kind of thing we represent in our cultural narrative as, for example, a spiritual guru slapping you for thinking about doing something instead of just doing it.  That is, we explicitly label this kind of thing, when it occurs in the human brain, as not-knowledge.

ETA:

You can move your arm, right?  You know how to move your arms and your legs and even how to do complicated things like throw balls and walk around.  But you don't actually know how to do any of those things; if you knew how to move your arm - much less something complicated like throwing balls! - it would be a relatively simple matter for you to build an arm and connect it to somebody who was missing one.

Does this seem absurd?  It's the difference between knowing how to add and knowing how to use a calculator.  Knowing how to add is sufficient information to build a simple mechanical calculator, given some additional mechanical knowledge - knowing how to use a calculator gives you no such ability.

Replies from: DanielFilan
comment by DanielFilan · 2021-05-11T22:39:42.577Z · LW(p) · GW(p)

I think you're engaging in a category error when you posit that, for example, a neural network actually knows anything at all.

Why do you believe that?

Replies from: Charlie Steiner, ACrackedPot
comment by Charlie Steiner · 2021-05-12T20:56:51.997Z · LW(p) · GW(p)

To make my own point that may be distinct from ACP's: the point isn't that neural networks don't know anything. The point is that the level of description I'm operating on when I say that phrase is so imprecise that it doesn't allow you to make exact demands like knowing "everything the NN does" or "exactly what the NN does," for any system other than a copy of that same neural network.

If I make the verbal chain of reasoning "the NN can know things, I can know things, therefore I can know what the NN knows," this chain of reasoning actually fails. Even though I'm using the same English word "know" both times, the logical consequences of the word are different each time I use it. If I want to make progress here, I'll need to taboo the word "know."

comment by ACrackedPot · 2021-05-12T00:48:55.895Z · LW(p) · GW(p)

Because I think the word "know", as used by a human understanding a model, is standing in for a particular kind of mirror-modeling, in which we possess a reproductive model of a thing in our mind which we can use to simulate a behavior, whereas the word "know", as used by the referent AI, is standing in for "the set of information used to inform the development of a process".

So an AI which has been trained on a game which it lost can behave "as if it has knowledge of that game", when in fact the only remnant of that game may be a slightly adjusted parameter, perhaps a connection weighting somewhere is 1% different than it would otherwise be.

To "know" what the AI knows, in the sense that it knows it, requires a complete reproduction of the AI state. That is, if you know everything the AI actually knows, as opposed to the information-state that informed the development of the AI, then all you actually know is that this particular connection is weighted 1% differently. In order to meaningfully apply this knowledge, you must simulate the AI (you must know how all the connections interact in a holistic sense), in which case you don't know anything; you're just asking the AI what it would do, which is not meaningfully knowing what it knows in any useful sense.

Which is basically because it doesn't actually know anything.  Its state is an algorithm, a process; this algorithm could perhaps be dissected, broken down, simplified, and turned into knowledge of how it operates - but this is just another way of simulating and querying a part of the AI; critically, knowing how the AI operates is having knowledge that the AI itself does not actually have.

Because now we are mirror-modeling the AI, and turning what the AI is, which isn't knowledge, into something else, which is.

Replies from: DanielFilan, josh-smith-brennan
comment by DanielFilan · 2021-05-14T18:06:53.774Z · LW(p) · GW(p)

I guess it seems to me that you're claiming that the referent AI isn't doing any mirror-modelling, but I don't know why you'd strongly believe this. It seems false about algorithms that use Monte Carlo Tree Search as KataGo does (altho another thread indicates that smart people disagree with me about this), but even for pure neural network models, I'm not sure why one would be confident that it's false.

Replies from: ACrackedPot
comment by ACrackedPot · 2021-05-14T19:01:12.834Z · LW(p) · GW(p)

Because it's expensive, slow, and orthogonal to the purpose the AI is actually trying to accomplish.

As a programmer, I take my complicated mirror models, try to figure out how to transform them into sets of numbers, try to figure out how to use one set of those numbers to create another set of those numbers.  The mirror modeling is a cognitive step I have to take before I ever start programming an algorithm; it's helpful for creating algorithms, but useless for actually running them.

Programming languages are judged as helpful in part by how well they do at pretending to be a mirror model, and efficient by how well they completely ignore the mirror model when it comes time to compile/run.  There is no program which is made more efficient by representing data internally as the objects the programmers created; efficiency gains are made in compilers by figuring out how to reduce away the unnecessary complexity the programmers created for themselves so they could more easily map their messy intuitions to cold logic.

Why would an AI introduce this step in the middle of its processing?

comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-12T19:53:32.257Z · LW(p) · GW(p)

...but this is just another way of ... querying a part of the AI...

I've studied Go using AI and have heard others discuss the use of AI in studying Go. Even for professional Go players, the AI's inability to explain why it gave a higher win rate to a particular move or sequence is a problem.

Even if you could program a tertiary AI which could query the Go playing AI, analyze the calculations the Go playing AI is using to make its judgements, and then translate that into English (or another language) so that this tertiary AI could explain why the Go playing AI made a move, I would still disagree that even this hybrid system 'knew' how to play Go.

There is a definite difference between 'calculating' and 'reasoning', such that even a neural network, with its training, is I think really still just one big calculator, not a reasoner.

comment by adamShimi · 2021-05-14T21:49:07.322Z · LW(p) · GW(p)

My take is:

  • I think making this post was a good idea. I'm personally interested in deconfusing the topic of universality (which should basically capture what "learning everything the model knows" means), and you brought up a good "simple" example to try to build intuition on.
  • What I would call your mistake is mostly 8, but a bit of the related ones too (so 3 and 4?). Phrasing it as "can we do that" is a mistake in my opinion because the topic is very confused (as shown by the comments). On the other hand, I think asking the question of what it would mean is a very exciting problem. It also gives a more concrete form to the problem of deconfusing universality, which is important AFAIK to Paul's approaches to alignment.
comment by AllAmericanBreakfast · 2021-05-12T03:54:37.852Z · LW(p) · GW(p)

One operationalization of "know" in this case is being able to accurately predict every move of the Go AI. This is a useful framing, because instead of a hard pass/fail criterion, we can focus on improving our calibration.

Now the success criterion might be:

  • You have to be able to attain a Brier score of 0 in predicting the moves of the best go bot that you have access to.
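A minimal sketch of scoring such predictions (the moves and probabilities below are invented): each prediction is a probability distribution over candidate moves, compared against the move the bot actually played.

```python
def brier_score(predictions, actual_moves):
    """Mean Brier score over positions. Each prediction maps candidate
    moves to probabilities; a score of 0 means every prediction put
    probability 1 on the move the bot actually chose."""
    total = 0.0
    for probs, actual in zip(predictions, actual_moves):
        total += sum((p - (1.0 if move == actual else 0.0)) ** 2
                     for move, p in probs.items())
    return total / len(actual_moves)

# Hypothetical predictions for three positions:
preds = [
    {"D4": 1.0},               # certain and correct
    {"Q16": 0.8, "Q17": 0.2},  # confident and correct
    {"C3": 0.6, "D3": 0.4},    # confident and wrong
]
bot_moves = ["D4", "Q16", "D3"]
print(f"Brier score: {brier_score(preds, bot_moves):.3f}")
```

The gradation matters: you get partial credit for being well-calibrated even when you can't pin down the bot's exact move.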

What's missing are some necessary constraints.

Most likely, you want to prohibit the following strategies:

  1. Running a second instance of the Go AI on the same position, and using as your prediction the move that instance #2 makes.
  2. Manually tracing through the source code to determine what the output would be if it was run.
  3. Memorizing the source code and tracing through it in your head.
  4. Constraining the input moves to ones where every Go program would make the same move, then using the output of a different Go program as your prediction.
    1. Corollary: you can't use any automation whatsoever to determine what move to make. Any automated system that can allow you to make accurate predictions is effectively a Go program.

Overall, then you might just want to prohibit the use of Turing machines. However, my understanding is that this results in a ban on algorithms. I don't have enough CS to say what's left to us if we're denied algorithms.

Here's a second operationalization of "know." You're allowed to train up using all the computerized help you want. But then, to prove your ability, you have to perfectly predict the output of the Go program on a set of randomly generated board positions, using only the power of your own brain. A softer criterion is to organize a competition, where participants are ranked by Brier score on this challenge.

However, this version of the success criterion is just a harder version of being an inhumanly good Go player. Not only do you have to play as well as the best Go program, you have to match its play. It's the difference between being a basketball player with stats as good as Michael Jordan's, and literally being able to copy his every move in novel situations indefinitely.

Neither of these operationalizations of the success criterion seems particularly interesting. Both are too restrictive to be relevant to AI safety.

Did you have a different operationalization in mind?

Replies from: DanielFilan, rudi-c
comment by DanielFilan · 2021-05-14T18:12:05.173Z · LW(p) · GW(p)

Here's a second operationalization of "know." You're allowed to train up using all the computerized help you want. But then, to prove your ability, you have to perfectly predict the output of the Go program on a set of randomly generated board positions, using only the power of your own brain.

I was thinking more of propositional knowledge (well, actually belief, but it doesn't seem like this was a sticking point with anybody). A corollary of this is that you would be able to do this second operationalization, but possibly with the aid of a computer program that you wrote yourself that wasn't just a copy of the original program. This constraint is slightly ambiguous but I think it shouldn't be too problematic in practice.

Did you have a different operationalization in mind?

The actual thing I had in mind was "come up with a satisfactory operationalization".

Replies from: AllAmericanBreakfast
comment by AllAmericanBreakfast · 2021-05-14T18:58:20.964Z · LW(p) · GW(p)

A corollary of this is that you would be able to do this second operationalization, but possibly with the aid of a computer program that you wrote yourself that wasn't just a copy of the original program. This constraint is slightly ambiguous but I think it shouldn't be too problematic in practice.

 

I'm going to assume it's impossible for me, personally, to outplay the best Go AI I have access to. Given that, the requirement is for me to write a better Go AI than the one I currently have access to.

Of course, that would mean that my new self-written program is now the best Go AI. So then I would be back to square one.

comment by Rudi C (rudi-c) · 2021-05-13T11:03:14.954Z · LW(p) · GW(p)

There are weaker computational machines than Turing machines, like regexes. But you don’t really care about that; you just want to ban automatic reasoning. I think it’s impossible to succeed with that constraint: playing Go is hard, and people can’t just read code that plays Go well and “learn from it.”

comment by Jacob Pfau (jacob-pfau) · 2021-05-12T07:26:25.652Z · LW(p) · GW(p)

One axis along which I'd like clarification is whether you want a form of explanation which is learner agnostic or learner specific? It seems to me that traditional transparency/interpretability tools try to be learner agnostic, but on the other hand the most efficient way to explain makes use of the learner's pre-existing knowledge, inductive biases, etc. 

In the learner agnostic case, I think it will be approximately impossible to succeed at this challenge. In the learner specific case, I think it will require something more than an interpretability method. This latter task will benefit from better and better models of human learning -- in the limit I imagine something like a direct brain neuralink should do the trick...

On the learner specific side, it seems to me Nisan is right when he said 'The question is if we can compress the bot's knowledge into, say, a 1-year training program for professionals.' To that end, it seems like a relevant method could be an improved version of influence functions. Something like find in the training phase when the go agent learned to make a better move than the pro and highlight the games (/moves) which taught it the improved play.

Replies from: DanielFilan
comment by DanielFilan · 2021-05-14T18:12:43.412Z · LW(p) · GW(p)

One axis along which I'd like clarification is whether you want a form of explanation which is learner agnostic or learner specific?

I don't know what you mean by "learner agnostic" or "learner specific". Could you explain?

Replies from: jacob-pfau
comment by Jacob Pfau (jacob-pfau) · 2021-05-15T01:42:37.879Z · LW(p) · GW(p)

Not sure what the best way to formalize this intuition is, but here's an idea. (To isolate this learner-agnostic/specific axis from the problem of defining explanation, let me assume that we have some metric for quantifying explanation quality, call it 'R' which is a function from <Model, learner, explanation> triples to real values.)

Define learner-agnostic explanation as optimizing for aggregate R across some distribution of learners -- finding the one optimal explanation across this distribution. Learner-specific explanation optimizes for R taking the learner as an input -- finding multiple optimal explanations, one for each learner.

The aggregation function in the learner-agnostic case could be the mean, or it could be a minimax function. The minimax case intuition would be formalizing the task of coming up with the most accessible explanation possible.
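A toy formalization of the two cases (the learners, explanations, and R values here are all invented): the agnostic case reduces R over a distribution of learners to pick a single explanation, while the specific case picks one per learner.

```python
# Hypothetical R(model, learner, explanation) with the model fixed,
# represented as a lookup table of explanation-quality scores.
R = {
    ("novice", "verbal"): 0.9, ("novice", "formal"): 0.2,
    ("expert", "verbal"): 0.5, ("expert", "formal"): 0.8,
}
learners = ["novice", "expert"]
explanations = ["verbal", "formal"]

# Learner-agnostic: one explanation optimizing aggregate R
# (mean over the distribution, or minimax for worst-case accessibility).
agnostic_mean = max(explanations, key=lambda e: sum(R[l, e] for l in learners))
agnostic_minimax = max(explanations, key=lambda e: min(R[l, e] for l in learners))

# Learner-specific: R takes the learner as input; one optimum per learner.
specific = {l: max(explanations, key=lambda e: R[l, e]) for l in learners}

print(agnostic_mean, agnostic_minimax, specific)
```

Under this toy R, both agnostic aggregations pick "verbal", leaving the expert with a worse explanation than the learner-specific optimum would give them; that gap is exactly what the learner-specific axis is about.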

Things like influence functions, input-sensitivity methods, automated concept discovery are all learner-agnostic. On the other hand, probing methods (e.g. as used in NLP) could maybe be called learner-specific. The variant of influence functions I suggested above is learner-specific.

In general, it seems to me that as the models get more and more complex, we'll probably need explanations to be more learner-specific to achieve reasonable performance. Though perhaps learner-agnostic methods will suffice for answering general questions like 'Is my model optimizing for a mesa-objective'? 

Replies from: DanielFilan
comment by DanielFilan · 2021-06-03T18:04:10.916Z · LW(p) · GW(p)

I guess by 'learner' you mean the human, rather than the learned model? If so, then I guess your transparency/explanation/knowledge-extraction method could be learner-specific, and still succeed at the above challenge.

comment by chasmani · 2021-05-11T20:08:00.676Z · LW(p) · GW(p)

I’d say 1 and 7 (for humans). The way humans understand go is different from how bots understand go. We use heuristics. The bots may use heuristics too, but there’s no reason to think we could comprehend those heuristics. Considering the size of the state space, it seems that the bot has access to ways of thinking about go that we don’t, the same way a bot can look further ahead in a chess game than we could comprehend.

comment by Nisan · 2021-05-11T05:44:58.846Z · LW(p) · GW(p)

This sounds like a great goal, if you mean "know" in a lazy sense; I'm imagining a question-answering system that will correctly explain any game, move, position, or principle as the bot understands it. I don't believe I could know all at once everything that a good bot knows about go. That's too much knowledge.

Replies from: adamShimi, DanielFilan, DanielFilan
comment by adamShimi · 2021-05-11T08:31:07.308Z · LW(p) · GW(p)

That's basically what Paul's universality (see my distillation post [AF · GW] for another angle) is aiming for: having a question-answering overseer which can tell you everything you want to know about what the system knows and what it will do. You still probably need to be able to ask a relevant question, which I think is what you're pointing at.

comment by DanielFilan · 2021-05-11T06:55:59.549Z · LW(p) · GW(p)

Maybe it nearly suffices to get a go professional to know everything about go that the bot does? I bet they could.

Replies from: adamShimi
comment by adamShimi · 2021-05-11T08:37:56.895Z · LW(p) · GW(p)

What does that mean though? If you give the go professional a massive transcript of the bot knowledge, it's probably unusable. I think what the go professional gives you is the knowledge of where to look/what to ask for/what to search. 

Replies from: Nisan
comment by Nisan · 2021-05-11T10:05:42.256Z · LW(p) · GW(p)

Or maybe it means we train the professional in the principles and heuristics that the bot knows. The question is if we can compress the bot's knowledge into, say, a 1-year training program for professionals.

There are reasons to be optimistic: We can discard information that isn't knowledge (lossy compression). And we can teach the professional in human concepts (lossless compression).

Replies from: josh-smith-brennan, Jozdien
comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-12T20:28:22.483Z · LW(p) · GW(p)

Or maybe it means we train the professional in the principles and heuristics that the bot knows.

Many Professional Go players are already using AI to help them study, including understanding the underlying technology and algorithms, with mixed results. 

Humans have been playing Go for thousands of years, and there is already a long and respected tradition and canon of literature, with commentaries and human reasoning, to pull from. Most human players have used human-created rituals to study with, and see studying AI as just one tool among many. Some don't give it much credence at all.

Another problem is that when you rely on Go to make a living, taking time to attempt to incorporate ML and AI concepts into your study and tournament schedule is a big risk. Because currently there's no way to query the AI to understand why it made moves, much of what AI provides is essentially meaningless to human players. 

The problem isn't even necessarily that it's an AI; if you were trying to learn how to play Go from a human teacher who won every game, but who couldn't communicate with you or anyone else because of a language barrier, you would be better served finding a teacher you could talk with, even if they didn't win as often as the 'unbeatable idiot.'

Additionally, many Go players see Go as a human game, and feel offended at the encroachment of technology into their domain.

Besides, most of the development of the AI associated with the AlphaGo series has branched off into the domain of protein folding, which I personally think is a much better use of the technology.

comment by Jozdien · 2021-05-11T10:08:13.595Z · LW(p) · GW(p)

How comparable are Go bots to chess bots in this?  Chess GMs at the highest level have been using engines to prepare for decades; I think if they're similar enough, that would be a good sample to look at for viable approaches.

Replies from: ChristianKl, DanielFilan, josh-smith-brennan
comment by ChristianKl · 2021-05-12T22:43:06.912Z · LW(p) · GW(p)

If you show a Chess AI an endgame position that it clearly wins, I would expect it to make moves that end the game, similar to those a professional Chess player would make: moves that end the game in straightforward fashion.

On the other hand, the equivalent of AlphaGo's behavior would be a Chess AI that first sacrifices some pieces where the sacrifice doesn't matter for winning, then makes a bunch of random moves, and only after a while plays the moves to mate, maybe because of some maximum-move rule or because the clock suggests the game should be finished soon.

This difference makes it much easier to learn about the Chess endgame from a chess computer program than it is to learn from AlphaGo.

Replies from: gjm
comment by gjm · 2021-05-13T11:54:32.669Z · LW(p) · GW(p)

It might be worth mentioning that the specific bot mentioned in the OP, David Wu's KataGo, doesn't make random-looking slack moves in the endgame because the figure of merit it's trying to maximize involves both win probability and (with a very small coefficient) final score.
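A toy sketch of such a figure of merit (the weight here is made up; KataGo's actual utility function differs in detail): once win probability saturates in a clearly won position, the score term is what separates tidy moves from slack ones.

```python
SCORE_WEIGHT = 0.01  # hypothetical small coefficient on expected score

def utility(win_prob, expected_score_margin):
    """Combined figure of merit: mostly win probability, plus a small
    preference for a larger expected final score margin."""
    return win_prob + SCORE_WEIGHT * expected_score_margin

# Two hypothetical candidate moves in a clearly won position: both win,
# so only the score term distinguishes them.
slack = utility(0.999, 5.0)    # wins, but gives away points
sharp = utility(0.999, 12.0)   # wins and keeps the margin
print(sharp > slack)  # → True
```

With win probability alone the two moves would be indistinguishable, which is exactly how score-agnostic bots end up playing random-looking endgame moves.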

This doesn't entirely negate Christian's point; some other strong bot might still have that behaviour, and KataGo itself may well have other features with similar implications.

On the other hand, there's at least one respect in which arguably chess bots are harder to learn from than go bots: more of their superiority to humans comes from tactical brilliance, whereas more of go bots' superiority (counterintuitive though it would be to anyone working on computer game-playing say 20 years ago) comes from better positional understanding. It may be easier to learn "this sort of shape is better than everyone thought" than "in this particular position, 10 moves into the tactics you can throw in the move a6 which prevents Nb7 three moves later and wins".

(I am not very confident of this. It might turn out that some of the things go bots see are very hard for humans to learn in a kinda-similar way; what makes "this shape" better in a particular case may depend on quirks of the position in ways that look arbitrary and random to strong human go players. And maybe some of the ridiculous inhuman tactical shots that the machines find in chess could be made easier to see by systematically different heuristics for what moves are worth considering at all.)

Replies from: ChristianKl, josh-smith-brennan
comment by ChristianKl · 2021-05-13T22:49:51.931Z · LW(p) · GW(p)

KataGo, doesn't make random-looking slack moves in the endgame because the figure of merit it's trying to maximize involves both win probability and (with a very small coefficient) final score.

Even when that's true, it suggests that KataGo might also try to do a bit at the end to maximize final score, and not be as unconcerned about it as AlphaGo. The fact that AlphaGo behaves like it does suggests that what's important for being able to play very strongly isn't about local patterns.

Another argument that's a bit more Go-specific: even among humans, professional players follow patterns less than 1-5 kyu amateurs do. If you look at openings, amateurs play the same series of moves for a longer time than professionals do. The amateurs are often playing patterns they learned, while the professionals (and very strong amateur players) are less pattern-focused.

According to the pros' analysis, AlphaGo seemed not to be playing according to local patterns but to be thinking globally on another level (here I have to trust the professionals, because that difference goes beyond my Go abilities, while the other arguments I made are more about things I can reason about without trusting others).

"in this particular position, 10 moves into the tactics you can throw in the move a6 which prevents Nb7 three moves later and wins".

In Go it's often not three moves later but 100 moves later. With the exception of life-and-death situations, the important consequences in the early/midgame are not a handful of moves in the future but much further out.

Replies from: gjm, josh-smith-brennan
comment by gjm · 2021-05-14T01:54:11.308Z · LW(p) · GW(p)

I have the same sense that strong go bots play more "globally" than strong humans.

(Though I think what they do is in some useful sense a generalization of spotting local patterns; after all, in some sense that's what a convolutional neural network does. But as you add more layers the patterns that can be represented become more subtle and larger, and the networks of top bots are plenty deep enough that "larger" grows sufficiently to encompass the whole board.)

I think what's going on with different joseki choices between amateurs and very strong humans isn't exactly more patterns versus less patterns. Weak human players may learn a bunch of joseki, but what they've learned is just "these are some good sequences of moves". Stronger players do better because (1) they have a better sense of the range of possible outcomes once those sequences of moves have been played out and (2) they have a better sense of how the state of the rest of the board affects the desirability of those various outcomes. So they will think things like "if I play this joseki then I get to choose between something like A or B; in case A the stones I play along the way will have a suboptimal relationship with that one near the middle of the left side, and in case B the shape I end up with in the corner fits well with what's going on in that adjacent corner and there's a nice ladder-breaker that makes the opposite corner a bit better for me, but in exchange for all that I end up in gote and don't get much territory in the corner; so maybe that joseki would be better because [etc., etc.]", whereas weak players like, er, me have a tiny repertoire of joseki lines (so would have to work out from first principles where things might end up) and are rubbish at visualizing the resulting positions (so wouldn't actually be able to do that, and would misevaluate the final positions even if we could) and don't have the quantitative sense of how the various advantages and disadvantages balance out (so even if we could foresee the likely outcomes and understand what features of them are important, we'd still likely get the relative importance of the factors involved wrong). My guess is that the last of those is one place where the strongest computer players are well ahead of the strongest humans.

When the relevant subtleties are 100 moves ahead rather than 3, that's strategy rather than tactics. Even top human and computer go players are not usually reading out tactical lines 100 moves deep. A good player can (usually, in principle) learn new strategic concepts, or get a better sense of ones they already kinda know about, more easily than they can learn to Just Be Much Better At Tactics. The strongest computer chess players win mostly by being terrifyingly superhuman at tactics (though this may be less true of the likes of Alpha Zero and Leela Chess Zero than of, say, Stockfish). The strongest computer go players are also superhumanly good at tactics, but not by so much; a lot of their superiority is strategic.

Replies from: josh-smith-brennan
comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-14T02:06:55.691Z · LW(p) · GW(p)

I have the same sense that strong go bots play more "globally" than strong humans.

Very much so. I have the same sense.

I think what's going on with different joseki choices between amateurs and very strong humans isn't exactly more patterns versus less patterns. 

From my understanding, professional players (and stronger amateurs) still rely heavily on joseki; it's just that the joseki become longer and more complicated. In a lot of ways, the stronger you get, the more reliant you become on patterns you know have succeeded for you or others in the past.

It's the reason why Professionals spend so much time studying, and why most, if not all top ranked professionals started studying and playing as children. It takes that kind of dedication and that amount of time to learn to become a top player.

Stronger players do better because (1) they have a better sense of the range of possible outcomes once those sequences of moves have been played out and (2) they have a better sense of how the state of the rest of the board affects the desirability of those various outcomes. 

It's possible to become a strong amateur Go player based on 'feeling' and positional judgement, but without being able to read moves out to a decent degree - maybe 10-15 moves ahead methodically - it's not easy to get very strong.

Replies from: gjm
comment by gjm · 2021-05-14T03:07:52.407Z · LW(p) · GW(p)

In case it wasn't clear, that sentence beginning "Stronger players do better" was not purporting to describe all the things that make stronger go players stronger, but to describe specifically how I think they are stronger in joseki.

I don't think joseki are the main reason why professional go players spend so much time studying, unless you define "studying" more narrowly than I would. But that's pure guesswork; I haven't actually talked to any go professionals and asked how much time they spend studying joseki.

(Professional chess players spend a lot of time on openings, and good opening preparation is important in top-level chess matches where if you find a really juicy innovation you can practically win the game before it's started. I think that sort of thing is much less common in go, though again that's just a vague impression rather than anything I've got from actual top-level go players.)

Replies from: DanielFilan, josh-smith-brennan
comment by DanielFilan · 2021-05-14T18:16:03.356Z · LW(p) · GW(p)

I don't think joseki are the main reason why professional go players spend so much time studying, unless you define "studying" more narrowly than I would.

This is also my understanding.

comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-14T03:38:33.521Z · LW(p) · GW(p)

In case it wasn't clear, that sentence beginning "Stronger players do better" was not purporting to describe all the things that make stronger go players stronger, but to describe specifically how I think they are stronger in joseki.

I didn't take it as if it was all they did. 

(1) they have a better sense of the range of possible outcomes once those sequences of moves have been played out and (2) they have a better sense of how the state of the rest of the board affects the desirability of those various outcomes. 

With (1) it seems like you're describing the skill of reading, but not necessarily reading with the understanding of how to play so that you have a good outcome, or reading and assessing the variations of a particular position, and with (2) you're describing reading how local play affects global play. I think if they are truly strong players, they also (3) understand the importance of getting and maintaining sente, and (4) see joseki (or standard sequences) from both sides, as white and as black.

I don't think joseki are the main reason why professional go players spend so much time studying, unless you define "studying" more narrowly than I would.

I was talking mostly about studying in preparation to become a professional - daily study for 8 hours a day, the path from say 1k to 9p - although joseki are usually an important part of study at any level. I think the term also applies more loosely to 'sequences with a good outcome'. Coming up with new, personal 'proprietary' joseki consumes a lot of professional study time, as does going over other people's or AI games and exploring the different variations.
 

There are other things to study, but I still maintain that joseki make up a fair amount of professional knowledge. Some people study openings, others life-and-death problems or endgame scenarios, but they all rely on learning set patterns and how best to integrate them.

comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-14T01:54:25.168Z · LW(p) · GW(p)

The fact that AlphaGo behaves the way it does suggests that what's important for playing very strongly isn't about local patterns.

AlphaGo was partly trained using human games as input, which I believe KataGo was as well. 

But AlphaGoZero didn't use any human games as input, it basically 'taught itself' to play Go. 

Seeing as how AlphaGo and KataGo used human games, which rely on integrating reasoning between local and global consideration, the development of the algorithms is different than that of AlphaGoZero. 

Does AlphaGo rely on local patterns? Possibly, but AlphaGoZero? Where humans see a 3-phase game with maybe 320 moves, broken down into opening, middle and end game, kos, threats, exchanges, and so on, it seems likely AlphaGoZero sees the whole game as one 'thing' (and in fact sees that one game as just one variation among the staggering number of games it has played with itself).

Even at AlphaGoZero's level, though, I think considering local patterns is still probably a handy way to divide computation over an effectively infinite set of games into discrete groups when considering the branches of variations; sort of the way Wikipedia still uses individual headings and pages for entries, even though it could probably turn its entire contents into one long entry. It would be very difficult to navigate if it did, though.

In Go it's often not three moves later but 100 moves later. With the exception of life-and-death situations, the important consequences in the early/midgame are not a handful of moves in the future but much further out.

I've heard stories of  Go professionals from the classical era claiming it's possible to tell who's going to win by the 2nd move.

Replies from: ChristianKl, gjm
comment by ChristianKl · 2021-05-14T11:42:18.613Z · LW(p) · GW(p)

I've heard stories of  Go professionals from the classical era claiming it's possible to tell who's going to win by the 2nd move.

If you know who is playing whom and how much handicap there is, you can sometimes tell who is going to win pretty reliably. There's not much more information that the first two moves of a professional game give you.

In Go you frequently change the part of the board on which you are playing. If you play enough moves in a given area that the next move there is worth X points while elsewhere on the board there's a move worth X+2 points, you switch to the other area. This frequently means it takes a long time until the game continues with many more moves in a given place, and by that point other things on the board have changed.

Replies from: josh-smith-brennan
comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-14T15:45:45.501Z · LW(p) · GW(p)

If you know who is playing whom and how much handicap there is, you can sometimes tell who is going to win pretty reliably. There's not much more information that the first two moves of a professional game give you.

The story I'm referring to was about games played between evenly matched players (without handicaps), and has to do with the nuances of different approaches to playing the opening. It was a hard concept to encapsulate at the time of the original comment (way before modern computing came on the scene) but has to do with what would probably now be considered win rate.

The first 4 moves of a game between players familiar with the game are - most of the time - reliably played in each of the 4 corners of a 19x19 board. There are usually 3 different moves in each of the 4 corners, and the comment had to do with literally the 2nd move of a game between 2 players of any skill level, sans handicap I'm sure.

This comment of course came from a very well respected professional player who had devoted his life to the study and play of Go, at a time when the only ways to play were either face to face, or a correspondence game played through the mail. 

In Go you frequently change the part of the board on which you are playing.

This is why the concept of shape building is so complex, and what you are referring to is the concept of 'tenuki'.

This frequently means it takes a long time until the game continues with many more moves in a given place, and by that point other things on the board have changed.

This is what is usually referred to as reading local versus global positioning.

comment by gjm · 2021-05-14T03:13:22.831Z · LW(p) · GW(p)

KataGo was not trained on human games.

I wonder whether we are interpreting "local patterns" in different ways. What I mean is the sort of thing whose crudest and most elementary versions are things like "it's good to make table shapes" and "empty triangles are bad".

The earlier layers of a CNN-based go-playing network are necessarily identifying local patterns in some sense. (Though KataGo's network isn't a pure CNN and does some global things too; I forget the details.)

If you can predict the winner of a go game after two moves then it's because (1) one of the players played something super-stupid or (2) you're paying attention to the way they look at the board, or the authority with which they plonk down their stones, or something of the sort. In normal games it is obviously never the case that one player has a decisive advantage by the second move.

Replies from: DanielFilan, josh-smith-brennan, josh-smith-brennan
comment by DanielFilan · 2021-05-14T18:21:29.211Z · LW(p) · GW(p)

(Though KataGo's network isn't a pure CNN and does some global things too; I forget the details.)

The 'global' things seem to be pooling operations that compute channel-wise means and maxes. Paper link.

Replies from: josh-smith-brennan
comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-15T15:33:35.824Z · LW(p) · GW(p)

I don't think this use of the term 'global' is how Go players use the term, which is what this part of the discussion is about. This is probably where some of the misunderstanding comes from. 

comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-15T15:31:16.191Z · LW(p) · GW(p)

Here I don't think you're using the terms 'locally' and 'globally' in the standard sense that Go players use them. 

The earlier layers of a CNN-based go-playing network are necessarily identifying local patterns in some sense. (Though KataGo's network isn't a pure CNN and does some global things too; I forget the details.)

Seeing as how CNN-based processing underlies much of image processing, analyzing the shapes on a Go board this way makes a lot of sense; it's also how humans understand the game.

However, I don't understand what you mean by "KataGo's network isn't a pure CNN and does some global things too..." here the use of the word 'global' seems qualitatively different than how you use 'local'.

Replies from: gjm
comment by gjm · 2021-05-15T16:37:31.526Z · LW(p) · GW(p)

Perhaps you would like to clarify how you are intending to use the word "local"?

My usage here is as follows: a "local pattern" is something whose presence or absence you can evaluate by looking at a small region of the board. (The smaller, the more local; locality comes in degrees. Presence or absence of a pattern might do, too.) So e.g. an empty triangle is an extremely local pattern; you can tell whether it is present by looking at a very small region of the board. A ponnuki is slightly less local, a table-shape slightly less local again, but these are all very local. A double-wing formation is substantially less local: to determine that one is present you need to look at (at least) a corner region and about half of one adjacent side. A ladder in one corner together with a ladder-breaker in the opposite corner is substantially less local again: to see that that's present you need to look all the way across the board.
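To make "evaluate by looking at a small region of the board" concrete, here is an illustrative check for the most local of those patterns, the empty triangle: it only ever inspects 2x2 windows. The board encoding and helper function are my own invention for illustration, not anything a real engine uses:

```python
import numpy as np

EMPTY, BLACK, WHITE = 0, 1, 2

def has_empty_triangle(board, color):
    """Return True if `color` has an empty triangle anywhere on the
    board: a 2x2 square with three stones of that color and the
    fourth point empty. Only 2x2 windows are ever examined, which is
    what makes this a maximally local pattern check."""
    n = board.shape[0]
    for r in range(n - 1):
        for c in range(n - 1):
            window = board[r:r+2, c:c+2]
            if (np.count_nonzero(window == color) == 3
                    and np.count_nonzero(window == EMPTY) == 1):
                return True
    return False

board = np.zeros((19, 19), dtype=int)
board[3, 3] = board[3, 4] = board[4, 3] = BLACK  # empty triangle around (3,3)
```

A less local pattern, like a double-wing formation, would need a much larger window; that difference in window size is the "degree" of locality I mean.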

(I should maybe reiterate that the networks aren't really computing simple binary "is there an empty triangle here?" values, at least not in later layers. But what they're doing in their earlier layers is at least a little bit like asking whether a given pattern is present at each board location.)

This seems to me to be the standard sense, but I might well be missing something. I had a quick look through some books but didn't spot any uses of the word "local" :-).

(One can talk about things other than patterns being local. E.g., you might say that a move is locally good, meaning something like "there is some, hopefully obvious, portion of the board within which this move is good for most plausible configurations of the rest of the board, but it's possible that in the actual global position it's not a good move". Or sometimes you might say the same thing meaning just "this is the best move among moves in this part of the board". Or you might say that a group is locally alive, meaning something similar: for the group to be dead there would need to be unusual things elsewhere on the board that somehow interact with it. All these things seem entirely compatible with what I'm saying about "local patterns".)

The KataGo network is not pure CNN. It does something called "global pooling", where at various points in the network the mean and max values across all board locations of some of the channels are computed and used to bias the values of the other channels in the next layer. So it learns to use those channels to compute things that are of global interest. I'm not sure how much is known about exactly what things they are, but I expect them to be things like which player is winning by how much, whether there are kos on the board or likely to be on the board soon, who has more ko threats, etc.
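A tiny numpy sketch of that pooling mechanism, with made-up channel counts and a random matrix standing in for the learned weights (the real network is of course far bigger, and the weights are trained):

```python
import numpy as np

# A feature map: C channels over a 19x19 board, as in one CNN layer.
C, N = 8, 19
features = np.random.randn(C, N, N)

# Split the channels: some get pooled globally, the rest pass through.
pooled_part, passthrough = features[:4], features[4:]

# Channel-wise mean and max over all board locations.
g_mean = pooled_part.mean(axis=(1, 2))            # shape (4,)
g_max = pooled_part.max(axis=(1, 2))              # shape (4,)
global_summary = np.concatenate([g_mean, g_max])  # shape (8,)

# A linear map (random here, learned in the real network) turns the
# summary into per-channel biases that shift the pass-through channels
# at every board location -- this is how something like "there is a ko
# somewhere on the board" can influence every location in later layers.
W = np.random.randn(passthrough.shape[0], global_summary.shape[0])
biases = W @ global_summary                       # shape (4,)
biased = passthrough + biases[:, None, None]      # shape (4, 19, 19)
```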

(In case you aren't familiar with the relevant terminology, a "channel" is one of the things computed on each layer for each location on the board.)

Replies from: josh-smith-brennan
comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-15T20:24:19.283Z · LW(p) · GW(p)

That is a lot to consider. I'll try to take my time to parse it apart a bit more before I try to respond. 

comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-14T04:02:45.032Z · LW(p) · GW(p)

KataGo was not trained on human games.

I'm not a programmer, but have been trying to fit learning more about AI into my day, sort of using Go bots as an entry point into beginning to understand how neural nets and algorithms work in a more concrete, less conceptual way. So then was KataGo simply playing itself in order to train?

I wonder whether we are interpreting "local patterns" in different ways.

I've spent 20 years or so playing casually, mostly on 19x19 boards, and I think my concept of local play is less crude than the one you're talking about. I tend to think of local play as play that is still in some sort of relationship to other shapes and smaller parts of the board, whereas what you are describing seems to imbue the shapes with an 'entity' of sorts apart from the actual game, if that makes sense.

I think it's hard to describe a tree to someone who's never heard of one, without describing how it relates to it's environment: 

(A) "one type of tree is a 30ft tall cylinder of wood with bark and has roots, branches and leaves" versus

(B) "many trees make up a forest, they use roots buried in the ground to stand up and pull nutrients from the ground and they use their leaves to photosynthesize and turn carbon dioxide into oxygen as well as providing shade".

(A) describes certain characteristics of a particular species of tree as it relates to itself and what it means to be a tree, whereas (B) describes what trees do in relation to a local environment. If you take that further, you could talk globally (literally) about how all the trees in the world contribute to clearing pollution out of the air, protecting the integrity of soil and provide support for all kinds of wildlife, as well as provide timber for construction, fuel and paper industries. 

All the local situations around the world add up to complete the global situation.

Replies from: gjm
comment by gjm · 2021-05-14T13:22:32.076Z · LW(p) · GW(p)

Yes, KataGo trains entirely through self-play.

It's not "100% pure Zero" in that it doesn't only play entire games from the start. So e.g. it gets supplied with some starting positions that are ones in which some version of KataGo was known to have blindspots (in the hope that this helps it understand those positions better and lose the blindspots) or ones that occur in human games but not in KataGo self-play games (in the hope that this helps it play better against humans and makes it more useful for analysing human games). But I believe all its training is from self-play and e.g. it's never trying to learn to play the same moves as humans did.

(The blindspot-finding is actually pretty clever. What they do is to take a lot of games, and search through them automatically for places where KG doesn't like the move that was actually played but it leads to an outcome that KG thinks is better than what it would have got, and then make a small fraction of KG's training games use those starting positions and also add some bias to the move-selection in those training games to make sure the possibly-better move gets explored enough for KG to learn that it's good if it really is.)
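For concreteness, that filter might look roughly like the following in Python. All the data structures, function names, and thresholds here are made up for illustration; this is just the shape of the idea, not KataGo's actual code:

```python
# Scan game records for positions where the bot disliked the move
# actually played, yet that move led to a better evaluated outcome
# than the bot expected -- candidate blindspots to re-seed into
# self-play training.

def find_blindspot_positions(games, evaluate, policy_prob,
                             prob_threshold=0.01, value_gain=0.05):
    """games: iterable of (positions, moves) pairs, where moves[i]
    was played in positions[i] and led to positions[i+1].
    evaluate(pos): bot's value estimate for the player to move.
    policy_prob(pos, move): prior probability the bot gives `move`.
    Returns positions worth re-seeding into training games."""
    blindspots = []
    for positions, moves in games:
        for pos, move, next_pos in zip(positions, moves, positions[1:]):
            # The bot would essentially never have searched this move...
            disliked = policy_prob(pos, move) < prob_threshold
            # ...but it turned out better than the bot's own estimate.
            # Evaluation is from the mover's perspective, so negate the
            # opponent-to-move value of the resulting position.
            outcome = -evaluate(next_pos)
            surprised = outcome > evaluate(pos) + value_gain
            if disliked and surprised:
                blindspots.append(pos)
    return blindspots
```

The real pipeline also biases move selection in the re-seeded games so the possibly-better move actually gets explored, as described above.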

I am not surprised that your concept of local play is less crude than something I explicitly described as the "crudest and most elementary versions". It's not clear to me that we have an actual disagreement here. Isn't there a part of you that winces a little when you have to play an empty triangle, just because it's an ugly very-local configuration of stones?

Here's my (very rough-and-ready; some bits are definitely inaccurate but I don't care because this is just for the sake of high-level intuition) mental model of how a CNN-based go program understands a board position. (This is just about the "static" evaluation and move-proposing; search is layered on top of that and is also very important.)

  • There are many layers.
  • "Layer zero" knows, for each location on the board, whether there's a stone there and what colour it is. There may also be other "hard-wired" information, like whether any stone there is part of a group that has only one liberty or whether any stone there is part of a group that can be captured in a ladder.
  • For each location on the board, each layer is able to look at the previous layer's knowledge at that location and its 8 nearest neighbours. So, e.g., layer 1 might recognize empty triangles, taking that to mean literally "a 2x2 square three of whose points are occupied by stones of one colour, the fourth being empty". Layer 2 might recognize table-shapes and "standard" ko configurations.
  • This isn't only about recognizing local combinations of stones, as such. For instance, one thing that might be identified at (say) layer 4 is "a ladder heading northwest from here will collide with an existing stone at distance <= 4, and the first such stone it will reach is white".
  • The architecture of the network somewhat encourages later layers to compute some of the same things as earlier ones, only better. So e.g. if there's an "empty triangle unit" on layer 1, there may still be an "empty triangle unit" on layer 5 that computes a more sophisticated notion of empty-triangle-shape-badness that takes into account what other things there are at distance <= 4. (So as you move to later layers you not only recognize larger more complicated patterns of stones, but also refine your understanding of how smaller local shapes are affected by other stuff nearby.)
  • KataGo specifically does a thing called "global pooling", which means that e.g. a few layers after it's been able to recognize "there is a ko at this location" it can recognize "there is a ko somewhere on the board" and this information is available everywhere in later layers, so that e.g. it can start attaching more importance to things that might be ko threats.
  • The number of layers is sufficiently larger than the size of the board that e.g. the program can figure out "this group is in danger of getting killed unless it makes some better eye shape somewhere around here" even when the group is large enough to span the whole board.
  • The things computed in the last layer are then combined to yield predictions of who wins, the final score, what moves are most likely to be good, etc.
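One consequence of the "its 8 nearest neighbours" point above: with stacked 3x3 windows, the region that can influence a single unit grows by one point in each direction per layer, so a centrally placed unit can see the whole 19x19 board after about 9 layers, and information can cross from corner to corner after about 18. A toy calculation of my own (not KataGo's actual architecture):

```python
def receptive_field(num_layers, kernel=3):
    """Side length of the square region that can influence a single
    unit after `num_layers` stacked stride-1 convolutions with
    `kernel` x `kernel` windows: each layer adds kernel-1 points."""
    return 1 + num_layers * (kernel - 1)

# Layer 1 sees 3x3 (a point plus its 8 neighbours), layer 2 sees 5x5...
# A central unit sees the whole 19x19 board once the field reaches 19;
# a corner unit needs it to reach 37 (18 points in each direction)
# before the opposite corner can affect it.
center_layers = next(n for n in range(1, 100) if receptive_field(n) >= 19)
corner_layers = next(n for n in range(1, 100) if receptive_field(n) >= 37)
```

This is why a network "sufficiently deeper than the board size" can compute genuinely whole-board features even without global pooling.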

So it starts with something similar to those "crudest and most elementary" notions of shape, and gradually refines them to deal with larger and larger scale structures and influence of stones further and further away; after enough layers we're a long way from "duh, empty triangle bad" and into "this group is kinda weak and short of eye-space, and that wall nearby is going to give it trouble, and there's a potential ladder over there that would run through the same area the group needs to run to, so playing here is probably good because in positions like this the resulting fight probably results in a stone here that will break the ladder and make my group over there safer" or "those black stones kinda-enclose some space but it's quite invadable and there's no way he's keeping all of it, especially because we'll get some free moves off that weak black group over there, which will help invade or reduce; it's probably worth about 23 points". (I don't mean to imply that there will be specific numbers the KataGo network computes that have those precise meanings, but that in the sort of position where a pro might think those things there will be things the network does that encode roughly that information.)

Replies from: josh-smith-brennan
comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-14T18:59:48.867Z · LW(p) · GW(p)

From the arxiv.org paper on KataGo:

 

By introducing several improvements to the AlphaZero process and architecture, we greatly accelerate self-play learning in Go, achieving a 50x reduction in computation over comparable methods. Like AlphaZero and replications such as ELF OpenGo and Leela Zero, our bot KataGo only learns from neural-net-guided Monte Carlo tree search self-play. 

This says enough to help me understand there were no human games as input involved in the initial training of the KataGo engine.

 

But whereas AlphaZero required thousands of TPUs over several days and ELF required thousands of GPUs over two weeks, KataGo surpasses ELF's final model after only 19 days on fewer than 30 GPUs. Much of the speedup involves non-domain-specific improvements that might directly transfer to other problems. 

Here the acknowledgement of the gains in greater efficiency from non-domain specific improvements to the software and hardware architecture are somewhat insightful. 

As I'm running a windows machine with less than an i5, an integrated graphics card, and a somewhat respectable 12gigs of ram, playing around with neural nets and attempting to train them is sort of out of the question at this point. I do have an interest in this area, although at this point my interests I think would be better served if I could work with someone already trained in these areas. 

Further gains from domain-specific techniques reveal the remaining efficiency gap between the best methods and purely general methods such as AlphaZero. Our work is a step towards making learning in state spaces as large as Go possible without large-scale computational resources. 

I suppose this gets back to OP's desire to program a Go Bot in the most efficient manner possible. I think the domain of Go would still be too large for a human to 'know' Go the way even the most efficient Go Bot would/will eventually 'know' Go.

I am not surprised that your concept of local play is less crude than something I explicitly described as the "crudest and most elementary versions". It's not clear to me that we have an actual disagreement here. Isn't there a part of you that winces a little when you have to play an empty triangle, just because it's an ugly very-local configuration of stones?

Although I'm sure we agree about quite a bit in these regards, I wouldn't necessarily put an isolated instance of something like an empty triangle under the heading of local play. At lower levels of consideration - when trying to define the idea of 'shape' and the potential shapes have - it is closer to local play than to global play, especially if it's true that the earlier layers compute based on local play instead of global play.

I kind of doubt that, though. Maybe the initial models did, but after some refinement it seems plausible that even the lowest levels take global position into account, and at the scale and speed AI neural nets compute, it's difficult to compare human thinking about the difference between local and global play to AI thinking on the matter.

It seems reasonable to assume a good engine like KataGo with the most up-to-date models might function as if it plays globally for the entire game. This is what human players study to develop - a global understanding of the board from the first stone - it's just that AI compute power and speed gets there through the refined brute force of millions of self-play games. If a 40-year-old human had started playing Go at the age of 5 and played 4 games a day for 35 years, they would only have played 51,100 games. So the amount of play, and the amount that can be learned, by an individual human versus a self-taught AI Go bot is staggeringly different.

Of course, part of the seeming disagreement we have about local play may well just be the difference between describing how to play Go and trying to describe how to train an AI to play Go. The sort of prohibition against making empty triangles becomes much clearer to a human player once they can begin to more accurately judge global position based on the aggregation of all the instances of local play.

For an AI, a table of all the potential shapes you can make might well constitute some idea of 'local play', but I don't think the same can be said for a human player. Once shapes begin to reach all the way across a 19x19 board, the line between local and global becomes difficult to distinguish, and there's no way a human can comprehend all the possible shapes that can develop on a 19x19 board.

Unfortunately the initial layers, or non-domain-specific layers, of the engine are more interesting to me, I think, although much, much, much less accessible with my current level of understanding of neural nets, AI, mathematics, and logic.

The process of programming AI to develop 'general intelligence' by continually revising the non-domain-specific architecture, based on the results of continually expanding the different domain-specific areas the AI integrates into, is curious.

Replies from: gjm, DanielFilan
comment by gjm · 2021-05-14T21:35:45.752Z · LW(p) · GW(p)

I think that at least some of the time you are using "local" and "global" temporally whereas I am using them spatially. (For short-term versus long-term I would tend to use distinctions like "tactics" versus "strategy" rather than "local" versus "global".) Aside from that, I cannot think of anything more local than wincing at an empty triangle.

If by "lowest levels" you mean the earliest layers of the network in a KataGo-like bot, they literally cannot take global position into account (except to whatever extent the non-neural-network bits of the program feed the network with more-global information, like "how many liberties does the group containing this stone have?").

Replies from: josh-smith-brennan
comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-14T23:00:03.972Z · LW(p) · GW(p)

I am using the terms 'locally' and 'globally' both temporally and spatially. As in all board games, time and space affect the outcomes; I don't really think you can have one without the other. Can you give me an example of what you are referring to specifically?

Aside from that, I cannot think of anything more local than wincing at an empty triangle.

I really don't know what you mean by this. Empty triangles get played all the time; it's just that they are not considered an efficient use of the stones. Placing one stone next to another is 'just as local' as creating an empty triangle; in terms of spatial consideration, 'local' just refers to moves or shapes that are close to each other. Is there another meaning for 'local' you are thinking of?

If by "lowest levels" you mean the earliest layers of the network in a KataGo-like bot, they literally cannot take global position into account...

What I mean by 'globally' in this instance has more to do with how the training the engine has already gone through has biased its algorithms toward 'good plays' over 'bad plays': by previously playing games and then analyzing those games from a global perspective in order to retrain the networks to maximize its chances of success in subsequent games. The 'global' consideration has already been done prior to the game; it is simply expressed in the next game.

Replies from: gjm
comment by gjm · 2021-05-15T16:47:45.196Z · LW(p) · GW(p)

As I said elsewhere in the thread, by "local" I mean "looking only at a smallish region of the board". A "local pattern" is one defined by reference to a small part of the board. A group is "locally alive" if there's nothing in the part of the board it occupies that would make it not-alive. A move is "the best move locally" if when you look at a given smallish region of the board it's the best move so far as you can judge from the configuration there. Etc. (There are uses of "local" that don't quite match that; e.g., a "local ko threat" is one that affects the stones that take part in the ko.)

What I mean about empty triangles is (of course) not that making an empty triangle is always bad. I mean that the judgement of whether something is an empty triangle or not is something you do by looking only at a very small region of the board; and that if some part of you winces just a little when you have to play one, that part of you is probably looking only at a small bit of the board at a time. That is: judging that something is an empty triangle (which your brain needs to do for that slight wincing reaction to occur) is a matter of recognizing what I have been calling a "local pattern".

Yes, "placing one stone next to another" is also a local thing; the notion of "attachment", for instance, is a very local one.

Yes, the computations the network does even on its lowest layers have been optimized to try to win the whole game. But what that means (in my terms, at least) is that the early layers of the network are identifying local features of the position that may have global significance for the actual task of evaluating the position as a whole and choosing the next move.

Replies from: josh-smith-brennan
comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-15T20:19:45.180Z · LW(p) · GW(p)

Once again, I have to say, I'm not sure where the disagreement between you and me stems from.

Although I would say the idea of 'locally alive' is a little confusing: a group is either 'alive', because it has two real eyes or has implied shape such that it cannot be killed (barring potential kos which might force the player to sacrifice the group for a more important strategic play elsewhere), or it is 'possible to kill', at which point it would be considered 'not yet alive'. I think this is another way to describe 'locally alive', possibly?

(There are uses of "local" that don't quite match that; e.g., a "local ko threat" is one that affects the stones that take part in the ko.)

Maybe I don't understand what you mean by this, but I think that does match the same concept: i.e., white starts a ko battle by capturing a stone in black's huge dragon, a stone which is necessary for black's shape to live. So black must respond by making a ko threat elsewhere that is of approximately equal value to the loss of black's dragon; otherwise white has no reason to continue the battle and can take the ko, thereby killing black's huge group.

If black makes such a threat, so that white must respond with another ko threat, it would be to white's advantage to be able to make a 'local ko threat', meaning that the new ko threat by white would still affect the shape of concern, namely black's dragon, so that there are now two points of importance at risk for black's group instead of just one. This is what I would consider a 'local ko threat', because it builds directly on the first ko threat instead of forcing white to find another ko threat elsewhere, indirectly affecting black's play but not black's dragon, the place where the original ko started.

Replies from: gjm
comment by gjm · 2021-05-16T02:34:12.173Z · LW(p) · GW(p)

I too am not sure whence cometh our disagreement, but I know the point at which I first thought we had one. There was some discussion of CNN-based go programs looking at "local patterns" and you said:

Does AlphaGo rely on local patterns? Possibly, but AlphaGoZero? Where humans see a 3 phase game with maybe 320 moves, which gets broken down into opening, middle and end game, ko's, threats, exchanges, and so on, it seems likely AlphaGoZero sees the whole game as one 'thing' (and in fact sees that one game as just one variation in the likely billions of millions of trillions of games it has played with itself).

which seemed to me to be responding to "these programs look at local patterns" with "I don't believe AlphaGo Zero does, because it sees the whole game as one thing rather than looking at different phases of the game separately", and I think that in the previous discussion "local" was being used spatially (small region of the board) rather than temporally (one phase of the game, or part thereof) but your response seemed to assume otherwise.

On "locally alive", on reflection I think a more common usage is that you call a group "locally alive" when you can see two eyes for it (or an unstoppable way of getting them) locally; but it can be "not locally alive" without being "dead" because there might be a way to connect it to other useful things, or run it out into somewhere where it has space to make more eyes.

I think we are using "local ko threat" in pretty much the same way, which is reassuring :-). I think it's a bit different from other uses of "local" because the region of the board involved can potentially be very large, if e.g. the black dragon in your example stretches all the way across the board. But it's not very different; it's still about being concerned only with a subset of the board.

Replies from: josh-smith-brennan
comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-16T08:45:48.024Z · LW(p) · GW(p)

Sorry if this goes a bit funny in places, I've been up all night. We had 4 cop cars and a helicopter taking an interest in the apartment complex I live in last night and I haven't been able to sleep since.

Ok. I think we are on the same page now, which is good. I've had to readjust the parameters of my thinking a bit in order to look at similarities in our writing about our thinking. I consider myself to be a natural skeptic, so I tend to question things first before I find a way to agree with them. I blame my mom for this, so any complaints should be sent to her. :)

I'm a little familiar with CNN's, although I didn't know the exact name. I've previously done a little research into Neural Nets as they relate to Machine Vision, namely just trying to familiarize myself with toy models of what they are, how they function, and a little on training them. 

I am and am not surprised they are used for Go-playing AI, but that is a slightly different topic for another time, hopefully. As for the meaning of "local patterns", I think of them as a human concept, a shortcut of sorts to help humans divide the board up into smaller subsets, as you mentioned. I think we naturally see the whole board when we play Go, and it is through training that we begin to see 'local patterns'.

Every move in a physical game uses the matter of a stone to create meaning for everyone watching. All observers, as well as the players, see the same matter, and so the meaning is shared, even though some people are trained to see more, and more accurate, information about the game.

You cannot see the players brains working unless you put them in fMRI machines or something of that nature, but you can see the judgement of their contemplation in the matter of the placement of a stone on the board. The meaning is a by product of the matter, and vice versa. The meaning and the matter are entangled.

In the case of a Go-playing AI, we can actually 'see', or try to 'understand', what is going on inside the AI's 'head', but in in-person tournament play it still requires a human to finalize the judgement by making it matter through the actual placement of the stone. It would be interesting if a robotic arm and machine-vision system were incorporated so that the AI could finalize its judgement by placing the stone itself.

Maybe give it a limited power supply as well, a limitation it was aware of, so that it had to consider how much power it used to compute each move. How much power do you think Ke Jie used in his matches against DeepMind's AI? Maybe a power-consumption comparison would be interesting as well. I think it's only fair to consider all the ways in which human players function when planning competitions between human and AI players. Does the AI walk itself to and from the match? Hmph. I'll see it being a more even match between humans and AI when AI can also relate to the other people before and after the game, and hold down a job to support itself. My own bias.

Back to Local Patterns.

What I was attempting to communicate in the paragraph you quoted was the idea that Go-playing AI like AlphaGo, which utilized human games as source material for training purposes, would be biased towards integrating an indirect consideration of concepts like "local patterns", because the humans who played the games constituting the source material did. This type of training would influence the way those AI played, and so the way those AI calculated could reasonably be assumed to take "local patterns" into consideration, conceptually speaking.

Whereas with AlphaGoZero, and apparently KataGo as well, the training of the AI was not informed by human games, so I think it is safe to assume that those AI don't 'think' about Go in the exact same way humans do. Once again, conceptually speaking. 

So in that respect, I would assume the AI don't see separate phases of a single game unless they were influenced by human intervention at specific intervals, and likewise would not see differences between scale or pattern. Distinctions humans make, like local versus whole-board patterns, wouldn't exist unless humans intervened to make adjustments based on their human perceptions of what the AI was doing badly or inefficiently.

Unless we program them to 'think' like we do, I think it's safe to assume they don't 'understand' like we do. The neural nets may allow them to 'think' in a similar way though. These distinctions are new to me in this specific context. I appreciate the time you all are taking in exchanging ideas. Thanks to all taking part in this part of the thread.

I think that, conceptually, a single game of Go can be considered one big joseki, and given enough time, most local situations become whole-board situations. For humans it might take between fifteen minutes and an hour or longer to see how a whole-board 'joseki' develops, but neural nets see how many whole games per second, potentially?

Depending on how far a Go-playing AI reads into each move, a single move the AI wants to make may be based on millions of games played internally, in seconds or minutes, before the final move is chosen. Say it runs eight variations of an opening move completely to the end of the game to see which has the best win rate (which it has already estimated through training), then repeats that process for the four of those moves with the best win rates, reading eight possible variations of each completely to the end of the game, and so on.
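For what it's worth, what modern engines actually run is closer to a guided tree search (Monte Carlo tree search with a PUCT selection rule), using the value network's estimate in place of reading every variation to the end of the game. A toy, single-level sketch of that selection rule, with made-up `prior` and `value` numbers standing in for the policy and value networks:

```python
import math

def puct_search(moves, prior, value, n_sims=100, c_puct=1.0):
    """Toy PUCT-style selection over a single level of candidate moves.

    moves: list of move ids; prior: dict move -> policy probability;
    value: dict move -> a stand-in for the value network's win estimate.
    Returns visit counts. Real engines recurse down a tree of positions,
    re-querying the networks at each newly expanded node.
    """
    N = {m: 0 for m in moves}    # visit counts
    W = {m: 0.0 for m in moves}  # accumulated value
    for _ in range(n_sims):
        total = sum(N.values()) + 1

        def score(m):
            # Exploitation term: average value seen so far.
            q = W[m] / N[m] if N[m] else 0.0
            # Exploration term: prefer high-prior, under-visited moves.
            u = c_puct * prior[m] * math.sqrt(total) / (1 + N[m])
            return q + u

        best = max(moves, key=score)
        N[best] += 1
        W[best] += value[best]   # network evaluation instead of a rollout
    return N
```

Over many simulations the visit counts concentrate on the move the value estimates favor, which is roughly why "millions of internal games" compress into one chosen move.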

When we consider the meaning of 'local patterns', we have to use the temporal concept of 'moments'. If we consider each move to be a different moment, then a game with 300 moves consists of 300 separate moments. Each moment may take a different amount of time to resolve into the matter of a stone being placed, or a virtual stone icon appearing, but the spatial placement of the stone is conceptually a whole-board concept. I think the concepts of global and local placement are entangled, in the same way space and time are.

So as a trained Go player or observer, I can limit my consideration of the whole board in favor of a subset of it; however, this does not negate the fact that any move I make, even one made in only 'local consideration', is a whole-board move. Every stone on the board matters to the meaning of the game, so the idea of using stones efficiently becomes a concern.

Efficient use of stones is a whole-board concept, although in practice it takes separate, sequential, temporal moments of the player's contemplation of the meaning of the stones, and the finalization of the judgement of the matter of the placement of all the stones on the board, by placing a stone or virtual stone in a specific location.

However, at the end of the game, all the separate instances of the game add up to a whole board pattern, which is accurately judged to provide evidence of the winner. The growth of the patterns is essentially and integrally both a temporal and spatial event, with a common beginning and pretty common end point, even if the winner becomes clear through different pattern development.

The board always starts empty, and the only choices for an outcome are a win/loss or tie game. These are the only options no matter how many moves are played or in what way the matter of the patterns develops.

The more a human plays, the better their 'intuition' about how likely moves are to result in a win, and it is only the fact that we see each placement of the stone as a separate moment which makes any play a 'local play'. The fact you can only play one stone at a time makes every move 'local' by default. Which essentially means every move is both local and global/whole board.

Practically speaking, if the AI's neural nets are modeled after the frontal and prefrontal cortex, then yes, I can see how, at the very lowest level of calculation, an AI uses spatial relationships which, by the temporal nature of everything including perception and contemplation, would require consideration of 'local patterns'. And I can see how using CNNs to imitate the vision-processing centers of the brain in consideration of the patterns of a game of Go makes sense.

All of this brings to mind the concept of consideration. If an AI 'considers' the individual pixels that make up an image, and calculates using all of them, does that mean that humans do the same thing when looking at a digital image?

Do humans consider each individual pixel, or the pattern they make? I'm inclined to say humans perceive each pixel but don't consider them, unless that is the explicit exercise, to look at the pixels. And with 4K and 8K imagery, it becomes really difficult, if not impossible, to see the pixels unless you zoom in. So I would say that while an AI considers every single pixel unless programmed not to, humans don't; we just notice the patterns the pixels make up. Same thing in the real world, as we haven't discovered the resolution of real life.

So temporally speaking, I would say that AlphaGoZero probably does consider local patterns. Conceptually, I would say the whole goal of a Go-playing AI is to build up a library of millions of trillions of games in order to weight the nets to maximize global consideration of every single move, and that under those circumstances the 'local patterns' are still global as well. Global and local are entangled, quantumly, I think.

I get what you mean by 'locally alive' now, and the 'local ko threat' thing is still a bit odd, although I can see how, even considering the whole board, you can be concerned about a subset of it, even though it relates integrally to the whole board and whole game, independent of human inability to perceive it outside of time.

Do you have more specific descriptions of the layers btw? Any idea about the number of layers?


Replies from: gjm
comment by gjm · 2021-05-16T11:24:26.505Z · LW(p) · GW(p)

Robot arms and computer vision, at the level necessary for playing a game of go, are I think a sufficiently solved problem that there's no particular reason why AI researchers working on making a strong go-playing program would bother hooking them up. On its own I don't think doing that would add anything interesting; in particular, I don't think there's any sense in which it would make the program's thinking more human-like.

I don't know about the Alphas, but my laptop running KataGo uses an amount of power that's in the same ballpark as Ke Jie (more than just his brain, less than his whole body) and I'm pretty sure it would beat him very solidly. Go programs don't generally concern themselves with power consumption as such but they do have to manage time (which is roughly proportional to total energy use) -- but so far as I know the time-management code is always hand-written rather than being learned somehow.

No one is claiming that a strong go-playing program is anything like an artificial general intelligence or that it makes humans obsolete or anything like that. (Though every surprising bit of progress in AI should arguably make us think we're closer to making human-level-or-better general intelligences than we'd previously thought we were.)

Programs like AlphaGo Zero or KataGo don't see the board in terms of local patterns as a result of learning from humans who do (and actually I very much doubt that non-Zero AlphaGo can usefully be said to do that either), they see the board in terms of local patterns because the design of their neural nets encourages them to do so. At least, it does in the earlier layers; I don't know how much has been done to look at later layers and see whether or not their various channels tend to reflect local features that are comprehensible to humans.

Of course every move is a whole-board move in the sense that it is made on a whole board, its consequences may affect the whole board, and most of the time the way you chose it involves looking at the whole board. That's true for humans and computers alike. But humans and computers alike build up their whole-board thinking from more-local considerations. (By "alike" I don't mean to imply that the mechanisms are the same; the neural networks used in go programs are somewhat inspired by the structure of actual brain networks but that certainly doesn't mean they are doing the same computation.)

And yes, because the training process is aiming to optimize the network weights for evaluating positions accurately and suggesting good moves and ultimately winning games all those local things are chosen and adjusted in order to get good global results. (The same is true for humans; the patterns we learn to make and avoid are considered good or bad as a result of long experience of how they affect actual games.) In that sense, indeed local and global are entangled (but I really wouldn't say "quantumly"; so far as I can see there is no connection with quantum mechanics beyond the fact that the word "entangled" is used in both).

Some specifics about the network. I'm going to assume you don't know anything :-); apologies for any resulting redundancy.

At any point in the computation, for each point on the board you have a bunch of numbers. The numbers (or the machinery that computes them) are called "channels". At the very start you have a smallish number of channels corresponding to the lowest-level features; you can find a list in appendix A.1 of the KataGo paper, but they're things like one channel that has 1 where there's a black stone and 0 elsewhere, and one that has 1 where there's a white stone and 0 elsewhere, and then some slightly cleverer things like channels that encode whether a given location contains a stone that's part of a group with a small number of liberties or a stone that's part of a group that can be captured in a ladder-like way (i.e., group has exactly two liberties and there's a way to iterate "atari, opponent plays only escape move, group has exactly two liberties again" until eventually the group is captured; KataGo has hand-written code to look for this situation). I think some things that aren't location-specific (e.g., what's the value of komi?) are represented as channels that have the same value at every board location.
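As a concrete illustration of these input channels, here is a minimal sketch in Python. The three planes chosen (black stones, white stones, an all-ones plane for edge detection) are a small, illustrative subset of KataGo's real input features, which are listed in appendix A.1 of its paper:

```python
import numpy as np

def input_planes(black_stones, white_stones, size=19):
    """Build a few of the lowest-level input channels described above.

    black_stones, white_stones: lists of (row, col) coordinates.
    Returns a (size, size, 3) array: one number per channel per point.
    """
    planes = np.zeros((size, size, 3))
    for (r, c) in black_stones:
        planes[r, c, 0] = 1.0   # channel 0: 1 where there's a black stone
    for (r, c) in white_stones:
        planes[r, c, 1] = 1.0   # channel 1: 1 where there's a white stone
    planes[:, :, 2] = 1.0       # channel 2: all ones, so the net can find
                                # the board edge against the zero padding
    return planes
```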

Now a single layer looks like this. It has, let's say, m input channels and n output channels. (Usually these numbers are equal, but they're different at the very start and at the very end.) So the input to this layer consists of 19x19xm numbers, and the output consists of 19x19xn numbers: one per board location per channel. Now, for each output channel, we have a 3x3xm array of numbers called weights. For each board location, overlay these on the surrounding 3x3xm region of the input. (Where it falls off the edge of the board, hallucinate an infinite sea of zeros. One input channel at the very start just has 1s at every board location so that you can identify the edge.) Multiply weights by channel values, add up the results, and force the sum to zero if it was negative; congratulations, you have computed the value for that output channel at that board location. Repeat 19x19xn times, doing a 3x3xm multiply/sum/rectify operation for each, until you have a full 19x19xn output: one number per channel per board location. This is a single convolutional layer.
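The multiply/sum/rectify operation just described can be written out directly. This is a naive, illustrative NumPy version (real implementations use heavily optimized convolution routines, but the arithmetic is the same):

```python
import numpy as np

def conv_layer(x, weights):
    """One convolutional layer as described above.

    x:       (19, 19, m) input: m channel values per board location.
    weights: (n, 3, 3, m): one 3x3xm weight array per output channel.
    Returns  (19, 19, n): n channel values per board location, after the
    "force to zero if negative" (ReLU) step.
    """
    size, _, m = x.shape
    n = weights.shape[0]
    # Hallucinate an infinite sea of zeros off the edge of the board.
    padded = np.zeros((size + 2, size + 2, m))
    padded[1:-1, 1:-1] = x
    out = np.zeros((size, size, n))
    for i in range(size):
        for j in range(size):
            patch = padded[i:i + 3, j:j + 3, :]   # surrounding 3x3xm region
            for k in range(n):
                # Multiply weights by channel values and add up the results.
                out[i, j, k] = np.sum(patch * weights[k])
    return np.maximum(out, 0.0)  # rectify: negative sums become zero
```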

The first convolutional neural networks were just stacks of convolutional layers, but a few newer tricks have been developed. One is what's called a residual network or ResNet; I believe all the current generation of go programs use these. The idea is that the network will learn better if it's encouraged to make later layers "refinements" of earlier layers rather than arbitrarily different. What this means in practice is that you build the network out of two-layer blocks as follows: you take the block's input, you apply two convolutional layers to it, and then the output of the block isn't the output of the second layer, it's the output of the second layer plus the input to the block. (This only makes sense when the block has the same number of input and output channels, but as already mentioned this is the case for almost all of the network.) This doesn't change what the network can compute in principle, but it tends to lead to more effective learning, and you need to know about it because the sizes of these networks are always quoted in blocks rather than layers. The strongest KataGo networks are 40 blocks which means 80 layers.
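The block structure is simple to sketch. Here `conv1` and `conv2` are stand-ins for convolutional layers with equal input and output channel counts, as the text requires:

```python
import numpy as np

def residual_block(x, conv1, conv2):
    """Two convolutional layers, then add the block's input back in,
    so the block learns a refinement of its input rather than an
    arbitrary new function."""
    y = conv1(x)
    y = conv2(y)
    return y + x   # the skip connection
```

With 40 such blocks stacked, this gives the 80 convolutional layers of the strongest KataGo networks.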

Another tweak is batch normalization which means that before each convolutional layer you "normalize" each of its channels by scaling and shifting all the numbers in that channel so that their mean is 0 and their variance is 1.
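The shift-and-scale step can be sketched in a couple of lines. (A real batch-norm layer also has learned scale and shift parameters and tracks running statistics over training batches; this shows only the normalization described above.)

```python
import numpy as np

def normalize_channel(values, eps=1e-5):
    """Shift and scale one channel's numbers to mean 0, variance 1.
    eps guards against division by zero for a constant channel."""
    return (values - values.mean()) / np.sqrt(values.var() + eps)
```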

Another trick introduced by KataGo (it's not found in the Alphas or in Leela Zero) is global pooling. I think I described this in a comment earlier in this thread. The idea is that every now and then you have a layer in which you take some of the channels, you compute their mean and maximum values, and you then feed these as inputs into all the other channels. So e.g. if one of these channels happens to have +1 for board locations that are in some sense under black's control and -1 for ones that are under white's control then the mean gives you a sort of a score estimate. If one has 1 for a location where there seems to be a ko and 0 elsewhere then the max tells you whether there are any kos and the mean tells you how many.
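A sketch of the pooling step itself. Using the example above, a hypothetical control channel with +1 for black-controlled points and -1 for white-controlled points pools to a rough score estimate:

```python
import numpy as np

def global_pool(channels):
    """Reduce each 19x19 channel to its mean and max over the board.
    channels: (19, 19, c).  Returns (means, maxes), each of shape (c,);
    these scalars are fed back as inputs to the other, local channels."""
    return channels.mean(axis=(0, 1)), channels.max(axis=(0, 1))
```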

OK. So we start with some number of input channels describing the basic state of the game. We have 40 blocks, each consisting of two convolutional layers, and every now and then one of these blocks does the global-pooling thing to incorporate information from the whole board into the local calculations. At the very end, we take our final set of output channels and use them to compute a whole bunch of things. The most important are (1) a "policy" value for each board location, indicating how plausible a place it is for the next move, and (2) how likely each player is to win, and the likely final score. But there are also (1') guesses at the opponent's next move, (1'') estimates of how likely each board location is to end up in the control of each player, (2') estimates of how likely each specific final score is, and (3) various other interesting things whose details you can find in appendix A.4 and A.5 of the KataGo paper.

The way we compute these outputs is a bit like the way we compute the other layers. We take the outputs of the last ordinary layer, and for the (1) things that have per-location values we compute linear combinations of those outputs (the coefficients are learned along with all the other weights in the network) and apply suitable nonlinear functions to the results for the outputs to make human sense; for the (2) things that have global values we first do a global-pooling-like step on all the per-location outputs and then compute linear combinations and apply suitable final nonlinearities. Again, details in appendix A.4 and A.5.
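A minimal sketch of the two kinds of output head: a per-location linear combination softmaxed into a policy, and a pooled linear combination squashed into a value. The weight shapes and the softmax/tanh choices here are illustrative stand-ins; the actual heads are specified in appendix A.4 and A.5 of the KataGo paper.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

def policy_head(features, w):
    """features: (19, 19, c) final output channels; w: (c,) learned
    coefficients.  Linear combination per location, then softmax over
    all 361 points to get move plausibilities."""
    logits = features @ w                      # (19, 19)
    return softmax(logits.ravel()).reshape(19, 19)

def value_head(features, w):
    """Global-pool the final channels, take a linear combination, and
    squash into (-1, 1) as a win-probability-like score."""
    pooled = features.mean(axis=(0, 1))        # (c,)
    return float(np.tanh(pooled @ w))
```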

That's all about the network; of course the search is another very important component, but you didn't ask about that :-).

Replies from: josh-smith-brennan
comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-17T00:18:31.625Z · LW(p) · GW(p)

Warning: This is a long post and there's some personal stuff about my living situation near the beginning. I figure if people on the forum can bring up their issues with living in expensive, culturally blessed parts of the country, I can bring up issues of living in the homeless shelter system.

I also apologize in advance for the length, as I haven't addressed many of the more technical aspects yet. I partly blame the fascinating intersections of AI and human culture for my long post. I do take these posts as opportunities to draw the different threads of my thinking closer together, with the sometimes unhelpful effect that, though I can explain certain ideas more efficiently and concisely, I try to add in more ideas as a result.

First let me say this: I'll address the points you bring up as best I can, given my approach and purposes in this discussion. I have some questions and some theories that I think are suitable for further development, and the fact that ML and AI have developed so far makes me think they could help me investigate my theories.

The fact that DanielFilan created a post addressing these ideas in relation to Go is a win-win for me, as it gives me a good entry point for discussion of the technical aspects of neural nets and such; becoming more familiar with the way they work makes sense if I have any chance of pursuing research of some sort with credibility. You have a zero percent success rate for the opportunities you don't bother to try to make happen.

Apologies to you and to Daniel for my skepticism and some of the assumptions I made previously in this discussion; clarity is challenging to maintain under my currently rather bad circumstances. As I've mentioned elsewhere, I'm attempting to work my way out of the homeless shelter system, where I've existed for the past four years and counting. This forum is a much-needed lifeline of sanity, especially given the once-in-a-century pandemic we are only now emerging from in the Western world.

I try to focus on issues of science and technology in order to block out the more traumatizing aspects of homelessness. We had four cop cars and a chopper spotlighting the building last night around 11 or so, and I didn't get much sleep as a partial result. That happens about once every other month or so here. Plus my neighbor tends to pound on the wall and scream for an hour or so usually in the early morning, for no apparent reason sometimes a few times a week. Been trying to get management to do something about it for over 2 years, and have had to resort to calling the police myself, as management won't.

Long story short, I try to focus as much as I can on things other than my circumstances for the time being, and I use the little research I can do these days without getting distracted by the drama and violence around me, to try and sort of reach out to communities I feel more in tune with. The little interaction I get is probably also part of the reason my posts run so long, norms of social interaction are currently not being reinforced in my daily life. I am literally surrounded by serious drug addicts and seriously mentally disordered formerly homeless people 24 hours a day, 7 days a week, so I'm trying to use my writing to find some value during this horrible period of my life. :p

 

So I've tried to set myself into context now. And I think it's easier to address the rest of your post.

About the robot arm and the various peripheral input and output devices it is representative of: the implications of integrating the AGI of a Go-playing AI with a physical body are reminiscent of Terminator, I suppose. I don't dispute that by keeping the components of a potential Terminator separate, it is more difficult to create a walking, talking human-killing machine. I only wonder about the implications of 'technological lock-in' at the point when the different components of a bipedal humanoid AI cyborg are being integrated; I imagine there will be a lot of reinventing the wheel as the systems are re-engineered during integration phases. The current trend of modularizing more complicated systems is a smart way to approach mass production of certain technologies, and the switch to 5G will create the supply for whatever demand is there for the foreseeable future, but there are always problems with developing technology.

My real concern comes from a particular cultural viewpoint: I think AlphaGo has dealt a serious yet covert blow to the Go community in particular, and the human race in general; one which is unnecessarily critical of human capabilities and puts technology in general on a pedestal that is much too high for what it truly represents. Some might say that's a purely symbolic issue, but I think the concept of putting things on pedestals is just a societal way of setting them up to inevitably fail us. I know this may sound harsh, but when I heard the news that AlphaGo beat Ke Jie several years back, it took me a couple of weeks to process that, and I still feel uncomfortable about it.

The developing pop-culture awareness of bias in seemingly 'objective' institutions of authority, institutions like Google and Facebook whose facts and figures are governed by algorithms (not magical truth atoms or some infallible equivalent), means they aren't the Holy Grail they were believed to be (whether or not they were marketed that way is a question on my mind as well). The assumed infallibility of our institutions of authority seems to be crumbling at an accelerated pace, leaving us with the unanswered questions humanity has always struggled with; now we just have the ability to manage that misery globally at the speed of light.

It's possible we don't find truth in our technology - that we just find a reflection of ourselves - so that we can't lose ourselves in the search for ultimate truth through technology; we just end up seeing ourselves differently.

In this same respect, I think the ceremony and marketing of the series of games between Ke Jie and AlphaGo, which conferred 9-dan professional status on a go-playing AI for the first time, had the 'equal but opposite' social cost of placing arguably the strongest human player beneath AlphaGo in the ranking system - which is a misinterpretation of what actually happened.

I can't place my hands on the article at this point, but I recall reading something about Julia Galef and her new book, which talked about the ethical considerations of the amount of power consumed in mining cryptocurrencies. From what little I know about it, the amount of resources used in the process seems to outweigh any benefit gained by the technology.

There is a parallel between this idea and the way that go-playing AI have influenced society; pound for pound, I wonder which entity, Ke Jie or AlphaGo, used more resources to play the games at the time of the series. (Including his body and not just his mind, and, on the other side, the team of 7 or 8 people who helped AlphaGo play. I'm pretty sure AlphaGo used more resources during the 3-game series than did its human opponent.)

Why does this matter? Because in the unavoidable battle for 'global supremacy' between humans and our developing technology, the metrics we use to measure this matter in reality. It really does matter how situations are framed. I believe a human player simply has an advantage over an AI in that our intelligence is hardwired into our hardware, so that we are potentially the most efficient being ever created, and this advantage over AI in its ability to affect the world outside of its neural nets could be made more evident instead of hidden.

Our batteries and engines, our intelligence and our limbs, our sensory input and motor output programs and devices far outperform similarly organized and integrated technological systems, and in an incredibly compact and efficient package. The overall human body and its associated personality, at least on average, would kick AlphaGo's ass in a fistfight, and without someone to plug it in, once the laptop's battery was drained, the AI contained on it wouldn't be able to function. My point being, AlphaGo the AI requires a lot of outside assistance to do its thing spatially. Once again, why does that matter? Because it's a way of framing the situation which points to all the human help it still requires to do its thing - which is a good thing - but most people seem content to ignore all the human support and focus only on the technology as being what's important. Technology for technology's sake is the death of humanity.

I think I understand what you mean when you say the technologies I mentioned have been developed separately to such a degree that almost no one would see the need to develop any kind of robotic system to help an AI compete with a human player, as competition on any level other than simple computation of the abstract aspects of a good game of go might be considered taboo at this point. Few people want the Terminator. But the simple abstract aspects of a good game of go can be framed as simple, as opposed to incredibly complex and impressive, depending on how they are presented in relation to the alternative. The attempt to create a humanoid robot go-playing AI is too creepy for most people at this point, I'm sure.

My point is that the perception of what AlphaGo did was possibly misrepresented, and in my opinion has the potential to be misinterpreted to the disadvantage of human society when compared to the technology we have developed. The broader implication of humanity viewing itself as deficient in comparison to our technology is a drain on human morale. I think we need to do better than that.

How we do better is up for debate. My suggestion was to make clear, in a potential rematch, just how clunky and difficult it would be for a humanoid robot paired with a go-playing AI and associated AGI to run the robotics which would allow it not only to sit there and play go against a human player, but to try to do everything that Ke Jie did, maybe even smoke a cigarette. I think of something like Boston Dynamics' Atlas system paired with AlphaGo Zero, with a visual system modeled after human eyes for perceiving the game, and some sort of breathing apparatus which would allow for the inhalation of smoke from a cigarette. AND AlphaGo has to decide on the brand it prefers to smoke, pay for it with money it earned, and dress itself. A Terminator of sorts, I guess.

But the identity and personality would also need to be rounded out, to be able to recognize when it needed to take a break because its systems weren't functioning optimally, so that it might request a bathroom break and take the opportunity to plug itself into a socket in the bathroom to recharge its battery, as charging isn't allowed in the game area according to a set of rules which favor the human player. This would allow the human player to develop a strategy of dragging the game out and making moves requiring more computation, in order to drain the AI's batteries. In many ways, it is the psychological aspects of the game which go-playing AI avoid that will be an advantage to them in the future if we do need to defend ourselves against them. If they don't care about the impact their actions have on people, we're just developing psychopaths with superhuman powers.

I believe this kind of rematch, then, could be an opportunity for companies like DeepMind to ease a little of the tension in society regarding our crumbling trust in their objectivity and moral and ethical authority; it might allow for the goodhearted showcasing of the ways in which a go-playing AI isn't superior to humans, and if done well might reframe our perception of ourselves as well as of our technology. After all, isn't the human component of any technological system of paramount importance? The technology can't overshadow the society that spawned it; otherwise it might be revealed that it has been using us the entire time for its own purposes, and sees humans as the problem, not as the concerned species attempting to find solutions to our problems.

Technology is not superior to humans in aggregate; in fact, it has just as many flaws in it as the civilization that created it, I think. It's possible the 'original sin' of humanity hasn't been avoided in the creation of technologies; in fact, the potential for the true 'fall of humankind' has potentially been accelerated by technology's mass production.

More practically, go is considered a martial art, complete with a ranking system and a history of use in military training. I like to think that if I had been in Ke Jie's shoes, and my loss on the board seemed imminent, I would have grabbed the board and used it to literally destroy the laptop hosting AlphaGo, to prove there was at least one way I was superior to it.

Sourced from https://waffleguppies.tumblr.com/post/50741279401/just-a-reminder-that-the-nuclear-tesuji-is-a


So, more broadly, my interests are in the ethical considerations of the current state of research and technology in a global sense, in that I am trying to consider the entirety of human knowledge across all disciplines. The specifics of a go-playing AI have ramifications beyond go, and it is the dissemination and integration of the knowledge generated by this research within the world community, and my concern with how to potentially measure it, that leads me to try to understand more of how neural nets work - with a belief in an analogous system of logic underlying human consciousness, and a theoretical cultural subconscious which is now being ripped apart in huge cultural shock waves, analogous to earthquakes formed by an underlying socio-psycho-economic tectonic plate system.

 

It's mainly through one of the main aspects of my theories that I try to think about it in the way I describe above, what I call 'cultural lag.' In the same way the decision-making process of a neural net experiences delay, or lag, society does too when it comes to processing human knowledge.

I want to quantify that. 

I'm still trying to describe it precisely, in the hope of defining a mathematical metric to measure it, and using that as a way to study social dynamics at a human level, but with a theoretically quantum basis, and also on a global scale. In an intuitive sense, as a theoretical aspect of a type of unified theory of everything, I describe 'cultural lag' as a type of impedance in a system of quantum circuits underlying all reality, which is visible to us as the playing out of social dynamics in society, and affects the development of human civilization. What are the potential quantum reasons for the differences between a 1st-world country like the US and a developing region in Africa, for instance?

Because I'm concerned with the entirety of the realm of human knowledge, and how it is being managed (or mismanaged, as I believe is the case), my thinking is much broader than it is deep. So delving into the specifics of a go-playing AI seems like a reasonable exchange to bend your ear a little more about my crazy ideas. Lol

...but so far as I know the time-management code is always hand-written rather than being learned somehow.

This seems like a point of consideration. I suppose this begins to get back to the underlying AGI of a system like AlphaGo, in that it is still likely timing its calculations according to a GHz-scale system clock of some sort on the motherboard, as most PCs are designed with this system in mind. That is a very basic use of time within the system, though, pretty much independent at this point of any outside considerations. I do wonder how the processor systems function during downtime, when waiting for the opponent to move: are they always running at full speed, or throttled depending on the situation?

Of course there is a psychological aspect to time management, one that is based on perception of the opponent. I'm sure it's possible to create an algorithm which computes the optimal speed at which to play a move in order to psych the opponent out, especially if you weight certain important moves which have a huge effect on the board as good opportunities to play with a different timing. I dislike this idea, though; it smacks too much of psychological warfare without any counterbalancing concern for social norms or ethics.

I do find it sort of unfortunate at times, when I am reviewing a game - especially one I didn't witness in real time - that there is no attempt to communicate how long it took for the moves to be played. With an auto-playback feature it is just a constant 1 second between moves. Less feedback for the user, and something I get annoyed with.
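As an aside, a hand-written time-management rule of the sort gjm's quoted remark refers to might look something like the sketch below. All the numbers here - the expected game length, the fallback values, the minimum think time - are invented for illustration and not taken from any real engine:

```python
# A toy, hand-written time-management rule: spread the remaining clock
# budget evenly over the moves we still expect to play, never dropping
# below a small minimum. All constants are invented for illustration.

def think_time(remaining_seconds, moves_played,
               expected_total_moves=250, minimum=0.5):
    """Allocate clock time for the next move."""
    # Always assume at least 10 moves remain, so we never divide by
    # zero or dump the whole budget on one move.
    moves_left = max(expected_total_moves - moves_played, 10)
    return max(remaining_seconds / moves_left, minimum)

print(think_time(3600, 0))    # early game: 14.4 seconds per move
print(think_time(600, 200))   # late game with time left: 12.0 seconds
print(think_time(5, 240))     # nearly flagged: falls back to 0.5
```

Real engines layer much more on top of this (extending time in unstable positions, pondering on the opponent's clock), but even this simplest version is hand-written logic, not something the network learned.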

No one is claiming that a strong go-playing program is anything like an artificial general intelligence or that it makes humans obsolete or anything like that. 

2 points:

A) From what I understand, something like AlphaGo Zero has a somewhat generic AGI system? (Am I misusing the idea of AGI, in that it might be more of a GI system under the hood of AlphaGo?) with a domain-specific neural net on top, so that the underlying AGI is weighted to perform maximally under all domain-specific tasks when integrated with them - singly at this point, and in something like parallel or series of groups in the future.

B) As a skeptic, I tend to look for emotional leakage in messaging. My mom was a single parent, a radical feminist with degrees in Anthropology, Women's Studies, Film Theory, and eventually Library Science. So I lived and breathed media criticism, social criticism, and critical thinking from birth. From the media I've encountered surrounding the advances in go-playing AI, I see a strong thread of apprehension, confusion, and sadness in the Go community resulting from the seeming dominance of AI in the rank structure and overall community (mostly among the more mature players).

Additionally, with such a focus on how AI and automation can do things better than humans, I'm a little confused as to why NASCAR and Formula 1 racing haven't yet been test beds for driverless technology. I've no doubt a car-driving AI would also dominate these competitive arenas with existing technology, but I've never heard anyone mention it as even a possibility. Special interest groups no doubt have a huge hidden effect, and I'm sure the subjects have been broached with the car industry, but the image management after the eventual loss to AI would be a mess. It's the lack of consideration for the potential political consequences of these types of competitions which I feel is counterproductive to values espousing democracy and more liberal values - values I think the majority of big tech seem to benefit most from. Maybe I'm wrong about that; certainly the battle between Apple and Microsoft seemed to be weighted in one direction, but I think it's safe to say that Microsoft has had more of an impact globally, while Apple has had less of a detrimental effect worldwide. In the end, which is better: more impact, or causing less harm? It seems to be a business consideration, but I think it doubles as a political one as well.

So I wonder: what is it about the Go community that made it receptive to these types of competitions, in a way that the car community is not? I think the documentary AlphaGo is a very revealing look at much of the process of setting up the competition, and I would love to celebrate the Go community's openness to the AI community; but if I have to acknowledge AI is somehow superior to humans, instead of just better in some pretty specific ways, I prefer to regard the Go community's openness as a mistake for the human institution of go, and the technological encroachment as harming the global community unnecessarily.

So while no one is explicitly stating that AI are superior to humans in every way, it seems few are really trying to put the technology into a broader perspective, one which attaches more value to the human component of the relationship than it currently does. I hate to wonder if this is an ego thing, or if it is simply the unintended consequence of not planning for the image-management component of the post-game go community in the event of a loss. I hate to think badly of DeepMind, but the shockwaves AlphaGo has sent through the Go community put potential allies off, as it could be interpreted that DeepMind's goal was to beat humanity, not just Ke Jie.

Humility is a valuable virtue, and I worry that Big Tech pays only lip service to that idea. I'm not saying that about DeepMind specifically, but in general, technological superiority is the deciding factor in war; I just hope companies developing AI and the like aren't attempting to conduct war against humans. Against our problems? Yes. Against us? No. Of course there are many other concerns driving the technological push; in terms of digital technology, the genie is already out of the bottle. Would that we could go back and be more conservative with the export of some proprietary technology; we might have avoided shipping the vast majority of our manufacturing jobs and infrastructure overseas.

Ultimately it's the people that make up a company, so in that sense, speaking of a large corporation as either having humility or lacking it is difficult; and looking at rebranding efforts after disasters, such as the one that forced BP to rebrand, points to the limited responsibility large corporations bear for their catastrophic failures. I believe the Go community accepted the loss to AlphaGo gracefully, but it would be nice to not feel so badly about the loss.

Programs like AlphaGo Zero or KataGo don't see the board in terms of local patterns as a result of learning from humans who do (and actually I very much doubt that non-Zero AlphaGo can usefully be said to do that either),...

My thinking was along the lines that AlphaGo learning about human concepts and consideration of local patterns would be an automatic effect of using human games as source material, complete with any inefficient placement and order of moves humans might make in their play; for lack of a less derogatory phrase, 'garbage in, garbage out.' Concepts of local patterns from human games would automatically transfer to the trained neural nets, minus any associations beyond the abstract consideration of maximizing win rate. This is probably hard to prove.

... they see the board in terms of local patterns because the design of their neural nets encourages them to do so. At least, it does in the earlier layers; I don't know how much has been done to look at later layers and see whether or not their various channels tend to reflect local features that are comprehensible to humans.

I get that; I know there have been competing designs for advanced computing that don't attempt to mimic neural structures. And I get what you mean about later layers possibly 'creating meaning' humans can't recognize as being of value - like some secret language twins speak, or something equally inaccessible.

...all those local things are chosen and adjusted in order to get good global results.

I guess this is where I start to question what a strong player does. Obviously there are some differences in how a strong player's mind works in comparison to a weak player's. I think the stronger a player is, the more accurately they consider global results, while weak players are lucky if they have even a small understanding of how local play affects global play. This is due to the neural process of 'consolidation', which I think has its corollary in the adjusting of weights in the neural nets. It means a strong player begins to make time- and energy-saving changes to the way they think, and starts to make assumptions about common concepts in order to be able to spend more time contemplating the global game.

Say a strong player and a weak player play a game, and they each have an hour time limit. The weak player might take the whole hour and end up losing on time, while the strong player uses only 30 minutes but is clearly ahead in every way.

Is it because the strong player thinks about everything the weak player thinks about, just much faster and more accurately? Not exactly; it's because the strong player has consolidated data and information about things like local patterns into knowledge of what they look like at a 'glance' - and hopefully into wisdom about what good patterns of play relating to global concepts look like - and so ends up 'skipping' the contemplation of many of those local patterns. Consolidation in the human mind creates 'shortcuts' between low levels of meaning and higher levels, allowing more efficient thinking, which is faster if it's accurate, but not if it's inaccurate. So when a strong player has an 'intuition' about what a good play is, it turns out to be a good play, whereas if a weak player relies on intuition, it's likely they are just guessing, because they don't have the data, information, knowledge, and accurate shortcuts to higher levels of meaning the strong player does.
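For what it's worth, the 'consolidation' described above has at least a loose analogue in how modern go engines search: a learned policy network assigns prior probabilities to candidate moves, and the deep reading is spent almost entirely on the handful of moves the prior favors; the rest are effectively dismissed 'at a glance.' A minimal sketch of that pruning step (the moves and probabilities below are invented, not taken from any real network):

```python
# Toy illustration of prior-based pruning: deep reading only happens
# on the few moves a learned prior considers plausible.

def prune_by_prior(moves, priors, top_k=3):
    """Keep only the top_k candidate moves by prior probability."""
    ranked = sorted(moves, key=lambda m: priors[m], reverse=True)
    return ranked[:top_k]

# An empty 19x19 board has 361 legal moves, but a trained prior puts
# nearly all its mass on a few standard points (invented numbers).
priors = {"Q16": 0.22, "D4": 0.21, "Q4": 0.20, "D16": 0.19, "K10": 0.01}
candidates = list(priors)

print(prune_by_prior(candidates, priors))  # ['Q16', 'D4', 'Q4']
```

In real engines the pruning is softer (low-prior moves still get occasional visits), but the effect is the same: most of the board is never read out deeply, much like the strong player's shortcuts.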

So this just brings us full circle though, as you said:

But humans and computers alike build up their whole-board thinking from more-local considerations. (By "alike" I don't mean to imply that the mechanisms are the same;

I come at this from a background in psychology, neuropsych, and cognitive science, without the technical knowledge to do anything but draw meaningful correlations between the human mind and the computer mind; it seems you come at it from a much stronger technical background having to do with the computer mind, so I'm not surprised at the seeming incongruities in our thinking. That's a compliment, by the way; I'm sure my skepticism is kicking in a bit - blame my mom. :P

In that sense, indeed local and global are entangled (but I really wouldn't say "quantumly"; so far as I can see there is no connection with quantum mechanics beyond the fact that the word "entangled" is used in both).

I tend to view atomic and subatomic physics as predictive of human-scale phenomena even though I don't fully understand the math, and I view the astronomical scale of the universe as being influenced by the subatomic scale. So I think it's safe to assume quantum mechanics is predictive of subatomic phenomena, and by proxy of human-scale phenomena, not to mention universal-scale phenomena. Of course, how exactly they are related is the domain of quantum physicists and science fiction writers and intellectuals who think about things in that way.

We haven't left behind the nuclear age in our progress towards a unified theory of everything, but we have entered a quantum age. So I think it's appropriate to wonder at the quantum dynamics which underlie the human-scale world. Fluid mechanics are apparent in the waves of the ocean and the water in a bathtub, but you don't have to know the physics to see, and to infer, that the movement of the water is influenced by particle and subatomic physics. Might as well add in the idea that it's affected by quantum mechanics as well (loosely at first, of course).

If you can't strongly disagree with that idea, then the next question might be "Why is that important at this point in time?" or "What does it matter?" Unfortunately, we live in an age where humans are being forced to compete with supercomputers and distributed neural nets in order to stake out claims of truth; neural nets make predictions by putting 2 and 2 together in a fashion similar to how humans make predictions, I think. Gibberish from a computer is seen as acceptable, and as evidence it's working at something, so that entire fields of study are created to learn to interpret and translate the gibberish in a way that's meaningful.

Society doesn't do the same thing for humans, especially ones not in the important marketing demographic of 18-34. My point being: if I discovered the secret of the universe and blabbed about it as much as I could, I probably wouldn't be taken seriously, even at a point in the future where the science would exist to back up my claims. I likely wouldn't even be remembered for coming up with the idea, as it would likely be buried under quadrillions of terabytes of data piled on top of the archive of this forum decades from now.

Under those circumstances, what's the harm in attempting to make claims most respected scientists wouldn't make for fear of being proven wrong? Especially if I think there's validity to the claims and I get a chance to shoot the proverbial shit with someone of a somewhat like mind. It would be better over a beer, of course, and if I had access to resources to help me clarify my claims and point me in a good direction. But I've been out of school for a while.

So having said that, I think it's valid to use the term 'entangled' to imply some quantum relationship between a relatively local-scale phenomenon and a global 'universal'-scale phenomenon. You can disagree, but I'm sure we're so far away from understanding how to appropriately use the tech which will be enabled by quantum science that it won't really matter in the long run. In the short run, though, I am curious what you think, if you think much about quantum mechanics.

This post is getting way too long, though. I'll take a look at the more technical stuff a little later, and follow up if I have questions, if that's alright with you.

comment by DanielFilan · 2021-05-14T19:07:29.904Z · LW(p) · GW(p)

I suppose this gets back to OP's desire to program a Go Bot in the most efficient manner possible.

If by "OP" you mean me, that's not really my desire (altho that would be nice).

Replies from: josh-smith-brennan
comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-14T20:25:59.902Z · LW(p) · GW(p)

No offense meant, Daniel; generally 'OP' stands for 'Original Poster'. I'm uncomfortable using first names until I know people on forums better, and Mr. Filan seems too formal, so I settled on OP, as is the norm on forums.

Therefore, I set this challenge: know everything that the best go bot knows about go.

I guess I'm unsure now of what your post is asking, as I was operating under the understanding that the above quote from your post was the main thrust of it.

Replies from: DanielFilan, gjm
comment by DanielFilan · 2021-06-03T18:06:36.858Z · LW(p) · GW(p)

OP is a fine way to refer to me, I was just confused since I didn't think my post indicated that my desire was to efficiently program a go bot.

Replies from: josh-smith-brennan
comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-06-05T02:58:12.849Z · LW(p) · GW(p)

Sorry Daniel, I really didn't mean any offense; in fact I was maybe a bit too eager to jump into an area I have an interest in but don't really understand at a technical level. While I am pretty familiar with go, I'm not so familiar with ML or AI. In fact I really appreciated the discussion, even though I am conflicted about AI's impact on the Go community.

It's funny though, I recently had the opportunity to talk a very little bit with a 9 dan professional about his experience with AI, and I was a bit surprised by his response. I value his opinion very much, and so have attempted to try and change my attitude about Go playing AI slightly.

comment by gjm · 2021-05-14T21:31:27.858Z · LW(p) · GW(p)

I believe what DanielFilan is mostly interested in here is the general project of understanding what neural networks "know" or "understand" or "want".

(Because one day we may have AIs that are much much smarter than we are, and being much smarter than us may make them much more powerful than us in various senses, and in that case it could be tremendously important that we be able to avoid having them use that power in ways that would be disastrous for us. At present, the most impressive and most human-intelligence-like AI systems are neural networks, so getting a deep understanding of neural networks might turn out to be not just very interesting for its own sake but vital for the survival of the human race.)

Replies from: DanielFilan, josh-smith-brennan
comment by DanielFilan · 2021-06-03T18:09:05.196Z · LW(p) · GW(p)

This is correct, altho I'm specifically interested in the case of go AI because I think it's important to understand neural networks that 'plan', as well as those that merely 'perceive' (the latter being the main focus of most interpretability work, with some notable exceptions).

comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-14T22:10:40.861Z · LW(p) · GW(p)

I believe what DanielFilan is mostly interested in here is the general project of understanding what neural networks "know" or "understand" or "want".

If he used the concept of a Go playing AI to inspire discussion along those lines, then Ok, I did get that. I guess I'm not sure where the misunderstanding came from then.

Replies from: josh-smith-brennan
comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-15T15:51:48.667Z · LW(p) · GW(p)

So let me step back and try to approach this in a slightly different manner.

I understand that overall what Daniel "...is mostly interested in here is the general project of understanding what neural networks "know" or "understand" or "want"." from a position of concern with existential threats from AGI (that is a concern of most people on this forum, one which I share as well).

In this particular post, Daniel put forward a thought experiment which uses the concept of attempting to 'know' what a neural network/AI 'knows' by way of the idea of programming a go-playing AI; the idea being: if you could program a go-playing AI and knew what the AI was doing because you programmed it, might that constitute understanding what an AI 'knew'?

Seeing as how understanding everything that went into programming the go-playing AI would be a lot to 'know', it follows that a very efficient program for a go-playing AI would be easier to 'know', as there would be less to 'know' than with a very inefficient program.

Which brings me back to my point which Daniel was responding to:

I suppose this gets back to Daniels' (OP's) desire to program a Go Bot in the most efficient manner possible. I think the domain of Go would still be too large for a human to 'know' Go the way even the most efficient Go Bot would/will eventually 'know' Go.

I think my point still stands that even an efficient and compact go-playing AI would be too much for a single person to 'know'; while they may understand the whole program they wrote, that would not allow them to play go at a professional level.

Because this part of the thread isn't involved directly with the idea of existential threat from an out of control AGI, I'll leave my thoughts on how this relates for a different post.

comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-14T01:22:21.464Z · LW(p) · GW(p)

It may be easier to learn "this sort of shape is better than everyone thought" than "in this particular position

Thing is, the way you build shape in go isn't a straightforward process; the three phases of a game - opening, middle game, and endgame - usually involve different types of positional judgement, requiring different ratios of consideration between the local position and the global position.

Shape building occurs as game play progresses, simply because of the aggregation of moves on the board over time, with the development of 'good shape' being desirable because it's easy to defend and useful, and 'bad shape' being undesirable because it's difficult to defend and a hindrance.

For most of the opening and part of the middle game, shape is implied, and it is the potential of a shape to support a specific approach or tactic which develops into strategy over the game. It is the ability of the human player to correctly see the potential for shape, especially in the opening, and to read out how it is likely to grow over the course of play, which makes the difference between a good player and a mediocre one.

Since a great endgame can never make up for a bad opening - especially when you consider that many games between evenly matched players result in a win by only 0.5 points - a human has to be good at either the opening or the middle game in order to even have a chance of winning in the endgame.

In human terms, go bots seem to contemplate all three phases - opening, middle game, and endgame - at the same time, from the beginning of the game, while the human player is only thinking about the opening. It seems this long view leads the bots to play moves which at times seem like bad moves. Sometimes a potential rationale becomes clear 10 or 15 moves later, but at times it is just plain impossible to understand why a go bot plays a different move than the one preferred by professionals.

...what makes "this shape" better in a particular case may depend on quirks of the position in ways that look arbitrary and random to strong human go players. 

At times, yes. Trying to read out all the potential variations of a developing position - over time - from a seemingly arbitrary or random move a go bot makes results in diminishing returns for a human player. Especially as AI moves go in and out of use in the go world, like a fad. If no one is playing AI moves, then it doesn't make sense to try to learn the sequences.

comment by DanielFilan · 2021-05-11T16:48:48.187Z · LW(p) · GW(p)

I'm not familiar with chess bots, but I would be surprised if one could be confident that chess GMs know everything that chess bots know.

Replies from: Jozdien
comment by Jozdien · 2021-05-11T17:44:26.480Z · LW(p) · GW(p)

I'm not clear on your usage of the word "know" here, but if it's in a context where knowing and level of play have a significant correlation, I think GMs not knowing would be evidence against it being possible for a human to know everything that game bots do.  GMs don't just spend most of their time and effort on it, they're also prodigies in the sport.

Replies from: DanielFilan
comment by DanielFilan · 2021-05-11T18:56:06.409Z · LW(p) · GW(p)

I think it's probably possible to develop better transparency tools than we currently have to extract knowledge from AIs, or make their cognition more understandable.

comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-12T20:02:39.763Z · LW(p) · GW(p)

How comparable are Go bots to chess bots in this?  

I don't believe they are that comparable. For starters, an average chess game lasts somewhere around 40 moves, whereas an average Go game lasts closer to 200-300 moves. This is just one example of why a Go-playing computer didn't reliably beat a professional Go player until nearly 20 years after a chess-playing computer beat a GM.

Replies from: Jozdien
comment by Jozdien · 2021-05-12T20:47:24.197Z · LW(p) · GW(p)

That's evidence for it being harder to know what a Go bot knows than to know what a chess bot knows, right? And if I'm understanding correctly, those years were in significant part due to computational constraints, which would imply that better transparency tools, or making the bots more human-understandable, still wouldn't come near letting a human know what they know, right?

Replies from: josh-smith-brennan
comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-05-13T17:47:07.344Z · LW(p) · GW(p)

Yes to your first question. Yes to the second question as well, with the caveat that Go-playing AIs are still useful for certain tasks, such as helping a player develop their game, within limits. Will a human player ever fully understand the game of Go, period, much less the way an AI does? No, I don't think so.

comment by DanielFilan · 2021-05-11T06:28:27.946Z · LW(p) · GW(p)

Good point!

comment by ESRogs · 2021-05-11T07:13:50.544Z · LW(p) · GW(p)

You have to be able to know literally everything that the best go bot that you have access to knows about go.

In your mind, is this well-defined? Or are you thinking of a major part of the challenge as being to operationalize what this means?

(I don't know what it means.)

Replies from: DanielFilan
comment by DanielFilan · 2021-05-11T07:21:25.902Z · LW(p) · GW(p)

I roughly know what it means, by virtue of knowing what it means to know stuff. But I think I mention that one of the parts is operationalizing better what it means for a model to know things.

comment by Donald Hobson (donald-hobson) · 2021-05-11T18:10:30.010Z · LW(p) · GW(p)

I think that it isn't clear what constitutes "fully understanding" an algorithm. 

Say you pick something fairly simple, like a floating point square root algorithm. What does it take to fully understand that?

You have to know what a square root is. Do you have to understand the maths behind Newton-Raphson iteration, if the algorithm uses that? All the mathematical derivations, or is it enough to take it as a mathematical fact that it works? Do you have to understand all the proofs about convergence rates, or can you just say "yeah, 5 iterations seems to be enough in practice"? Do you have to understand how floating point numbers are stored in memory, including all the special cases like NaN which your algorithm hopefully won't be given? Do you have to keep track of how the starting guess is made, and how the rounding is done? Do you have to be able to calculate the exact floating point value the algorithm would give, taking into account all the rounding errors? Answering in binary or decimal?
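For concreteness, a bare-bones Newton-Raphson square root is only a few lines; this is a hypothetical sketch, and a real library routine would additionally handle NaNs, subnormals, cleverer starting guesses, and exact rounding modes:

```python
def my_sqrt(x: float, iterations: int = 5) -> float:
    """Approximate sqrt(x) by Newton-Raphson on f(g) = g*g - x."""
    if x < 0:
        raise ValueError("negative input")
    if x == 0.0:
        return 0.0
    guess = x if x >= 1.0 else 1.0  # crude starting guess
    for _ in range(iterations):
        guess = 0.5 * (guess + x / guess)  # one Newton step
    return guess
```

Even this toy version raises every question above: "5 iterations seems to be enough in practice" is doing real work, and the exact bits of the answer depend on rounding behaviour the code never mentions.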

Is brute-force minimax search easy to understand? You might be able to easily implement the algorithm, but you still don't know which moves it will make. In general, for any algorithm that takes a lot of compute, humans won't be able to work out what it will do without very slowly imitating a computer. There are some algorithms we can prove theorems about, but it isn't clear which theorems we need to prove to get "full understanding".
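An exhaustive minimax over a toy game tree illustrates the point: the code is trivially short, yet you can't say which value it returns without mentally running it. (A hypothetical sketch; leaves are scores for the maximizing player, internal nodes are lists of children.)

```python
def minimax(node, maximizing: bool):
    """Exhaustive minimax: node is either a numeric leaf score
    (from the maximizing player's perspective) or a list of children."""
    if isinstance(node, (int, float)):
        return node
    child_values = [minimax(child, not maximizing) for child in node]
    return max(child_values) if maximizing else min(child_values)

# Tiny tree: maximizer picks a branch, minimizer then picks a leaf.
tree = [[3, 12], [2, 8]]
```

Here `minimax(tree, True)` is 3, not 12: the minimizer spoils the tempting branch. Scaling the tree up to go-sized doesn't change the code, only the infeasibility of predicting it by hand.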

Another obstacle to full understanding is memory. Suppose your go bot has memorized a huge list of "if you are in such-and-such situation, move here" rules. You can understand how gradient descent would generate good rules in the abstract, and you have inspected a few rules in detail. But there are far too many rules for a human to consider them all, and the rules depend on a choice of random seed.

Corollaries of success (non-exhaustive):

  • You should be able to answer questions like “what will this bot do if someone plays mimic go against it” without actually literally checking that during play. More generally, you should know how the bot will respond to novel counter strategies

There is not, in general, a way to compute what an algorithm does without running it. Some algorithms go about their problem in a deliberately slow way. But suppose the go algorithm has no massive known efficiency gains (i.e. no algorithm computes the same answer using a millionth of the compute), and that the algorithm is far too compute-hungry for humans to execute manually. Then it follows that humans won't be able to work out exactly what the algorithm will do.

You should be able to write a computer program anew that plays go just like that go bot, without copying over all the numbers.

Being able to understand the algorithm well enough to program it for the first time, rather than just blindly reciting code, is an ambiguous but achievable goal.

Suppose a bunch of people coded another AlphaGo-like system. The random seed is different. The layer widths are different. The learning rate is slightly different. It's trained with a different batch size, for a different number of iterations, on a different database of stored games. It plays about as well, but in many situations it makes a different move. The only way to get a go bot that plays exactly like AlphaGo is to copy everything, including the random seed, which might have been picked based on lucky numbers or birthdays. You can't rederive from first principles what was never derived from first principles; you can only copy the numbers across, or pick your own lucky numbers. Numbers like batch size aren't quite as arbitrary (there are unreasonably small and large values), but there is still quite a lot of wiggle room.
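The seed-dependence point shows up even in a toy setting. In this hypothetical sketch, two runs of identical training code that differ only in random seed produce different weights after the same number of SGD steps. (In this convex toy the runs would eventually approach the same answer; in a deep net with many optima, even that need not hold.)

```python
import random

def train(seed: int, steps: int = 5) -> list:
    """Tiny linear-regression SGD run; everything flows from `seed`."""
    rng = random.Random(seed)
    w = [rng.gauss(0, 1), rng.gauss(0, 1)]  # arbitrary initialization
    data = []
    for _ in range(20):
        x = [rng.gauss(0, 1), rng.gauss(0, 1)]
        data.append((x, 1.0 * x[0] - 2.0 * x[1]))  # true weights (1, -2)
    for _ in range(steps):
        for x, y in data:
            err = w[0] * x[0] + w[1] * x[1] - y
            w[0] -= 0.1 * err * x[0]  # plain per-sample SGD step
            w[1] -= 0.1 * err * x[1]
    return w
```

Rerunning `train` with the same seed reproduces the weights exactly; changing the seed changes them. There is nothing to "rederive" about which weights you get, only an arbitrary choice to copy.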

Replies from: ChristianKl, DanielFilan
comment by ChristianKl · 2021-05-12T08:29:26.771Z · LW(p) · GW(p)

Suppose your go bot has memorized a huge list of "if you are in such and such situation move here" type rules. 

One interesting feature of AlphaGo was that it generally did not play what a go professional would see as optimal play in the endgame. A go professional doesn't play moves that obviously lose points in the late game. AlphaGo, on the other hand, played many moves that lost points, likely because it judged that they didn't change the likelihood of winning the game, given that it was ahead by enough points.

A good go player has a bunch of endgame patterns memorized that are optimal at maximizing points. When choosing between two moves that are both judged to win the game with probability 0.9999999, AlphaGo's not choosing the move that maximizes points suggests that it does not use patterns about locally optimal moves to make its judgements.

AlphaGo follows patterns about what's the optimal move in similar situations much less than human go players do. It plays the game more globally, instead of focusing on local positions.

Replies from: daniel-kokotajlo, donald-hobson
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-05-12T09:31:46.057Z · LW(p) · GW(p)
When choosing between two moves that are both judged to win the game with 0.9999999 alpha go not choosing the move that maximizes points suggest that it does not use patterns about what optimal moves are in certain local situations to make it's judgements. 

I nitpick/object to your use of "optimal moves" here. The move that maximizes points is NOT the optimal move; the optimal move is the move that maximizes win probability. In a situation where you are many points ahead, plausibly the way to maximize win probability is not to try to get more points, but rather to try to anticipate and defend against weird crazy high-variance strategies your opponent might try.

comment by Donald Hobson (donald-hobson) · 2021-05-12T10:13:21.494Z · LW(p) · GW(p)

This behaviour is consistent with local position based play that also considers "points ahead" as part of the situation.

Replies from: ChristianKl
comment by ChristianKl · 2021-05-12T10:40:59.089Z · LW(p) · GW(p)

I think human go players consider points ahead as part of the situation, and still don't play a move that provides no benefit but costs points in the endgame.

We are not talking about situations where there's any benefit to be gained from the behaviour, as it happened in positions that can be fully read out.

There are situations in Go where you don't start a fight that you expect to win with 95% probability, because you are already ahead on the board and the 5% might make you lose. But that's very far from the AlphaGo moves I was talking about.

AlphaGo plays moves that are bad according to any pattern of what's good in Go when it's ahead. 

Replies from: DanielFilan
comment by DanielFilan · 2021-05-14T18:23:33.051Z · LW(p) · GW(p)

I feel like it's pretty relevant that AlphaGo is the worst super-human go bot, and I don't think better bots have this behaviour.

Replies from: gjm
comment by gjm · 2021-05-14T21:36:28.856Z · LW(p) · GW(p)

Last I heard, Leela Zero still tended to play slack moves in highly unbalanced late-game situations.

comment by DanielFilan · 2021-05-11T19:04:05.978Z · LW(p) · GW(p)

I think that it isn't clear what constitutes "fully understanding" an algorithm.

That seems right.

Another obstacle to full understanding is memory. Suppose your go bot has memorized a huge list of "if you are in such and such situation move here" type rules.

I think there's reason to believe that SGD doesn't do exactly this (nets that memorize random data have different learning curves than normal nets, iirc), and better reason to think it's possible to train a top go bot that doesn't do this.

There is not in general a way to compute what an algorithm does without running it.

Yes, but luckily you don't have to do this for all algorithms, just the best go bot. Also as mentioned, I think you probably get to use a computer program for help, as long as you've written that computer program.

Replies from: donald-hobson, DanielFilan
comment by Donald Hobson (donald-hobson) · 2021-05-12T22:08:23.574Z · LW(p) · GW(p)

I'm thinking of humans having some fast special purpose inbuilt pattern recognition, which is nondeterministic and an introspective black box, and a slow general purpose processor. Humans can mentally follow the steps of any algorithm, slowly. 

Thus if a human can quickly predict the results of program X, then either there is a program Y, based on however the human is thinking, that does the same thing as X in only a handful of basic algorithmic operations, or the human is using their special-purpose pattern-matching hardware. This hardware is nondeterministic, not introspectively accessible, and not really shaped to predict go bots.

Either way, it also bears pointing out that if the human can predict the move a go bot would make, the human is at least as good at go as the machine.

So you are going to need a computer program for "help" if you want to predict the exact moves. At that stage, you can ask whether you really understand how the code works, and aren't just repeating it by rote.

comment by DanielFilan · 2021-05-11T19:25:23.165Z · LW(p) · GW(p)

I'd also be happy with an inexact description of what the bot will do in response to specified strategies that captured all the relevant details.

comment by weathersystems · 2021-05-11T06:01:50.511Z · LW(p) · GW(p)

I'm a bit confused. What's the difference between "knowing everything that the best go bot knows" and "being able to play an even game against a go bot"? I think they're basically the same. It seems to me that you can't know everything the go bot knows without being able to beat any professional go player.

Or am I missing something?

Replies from: DanielFilan
comment by DanielFilan · 2021-05-11T06:28:10.501Z · LW(p) · GW(p)

You could plausibly play an even game against a go bot without knowing everything it knows.

Replies from: weathersystems
comment by weathersystems · 2021-05-11T06:33:40.443Z · LW(p) · GW(p)

Sure. But the question is can you know everything it knows and not be as good as it? That is, does understanding the go bot in your sense imply that you could play an even game against it?

Replies from: DanielFilan
comment by DanielFilan · 2021-05-11T06:42:56.088Z · LW(p) · GW(p)

[D]oes understanding the go bot in your sense imply that you could play an even game against it?

I imagine so. One complication is that it can do more computation than you.

Replies from: ESRogs
comment by ESRogs · 2021-05-11T07:20:07.457Z · LW(p) · GW(p)

But once you let it do more computation, then it doesn't have to know anything at all, right? Like, maybe the best go bot is, "Train an AlphaZero-like algorithm for a million years, and then use it to play."

I know more about go than that bot starts out knowing, but less than it will know after it does computation.

I wonder if, when you use the word "know", you mean some kind of distilled, compressed, easily explained knowledge?

Replies from: DanielFilan, DanielFilan
comment by DanielFilan · 2021-05-11T07:25:29.019Z · LW(p) · GW(p)

Perhaps the bot knows different things at different times and your job is to figure out (a) what it always knows and (b) a way to quickly find out everything it knows at a certain point in time.

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2021-05-11T10:24:57.979Z · LW(p) · GW(p)

I think at this point you've pushed the word "know" to a point where it's not very well-defined; I'd encourage you to try to restate the original post while tabooing that word.

This seems particularly valuable because there are some versions of "know" for which the goal of knowing everything a complex model knows seems wildly unmanageable (for example, trying to convert a human athlete's ingrained instincts into a set of propositions). So before people start trying to do what you suggested, it'd be good to explain why it's actually a realistic target.

Replies from: DanielFilan
comment by DanielFilan · 2021-05-11T17:33:23.456Z · LW(p) · GW(p)

Hmmm. It does seem like I should probably rewrite this post. But to clarify things in the meantime:

  • it's not obvious to me that this is a realistic target, and I'd be surprised if it took fewer than 10 person-years to achieve.
  • I do think the knowledge should 'cover' all the athlete's ingrained instincts in your example, but I think the propositions are allowed to look like "it's a good idea to do x in case y".
Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2021-05-13T15:23:02.837Z · LW(p) · GW(p)

it's not obvious to me that this is a realistic target

Perhaps I should instead have said: it'd be good to explain to people why this might be a useful/realistic target. Because if you need propositions that cover all the instincts, then it seems like you're basically asking for people to revive GOFAI.

(I'm being unusually critical of your post because it seems that a number of safety research agendas lately have become very reliant on highly optimistic expectations about progress on interpretability, so I want to make sure that people are forced to defend that assumption rather than starting an information cascade.)

Replies from: DanielFilan
comment by DanielFilan · 2021-05-14T18:31:55.069Z · LW(p) · GW(p)

OK, the parenthetical helped me understand where you're coming from. I think a re-write of this post should (in part) make clear that I think a massive heroic effort would be necessary to make this happen, but sometimes massive heroic efforts work, and I have no special private info that makes it seem more plausible than it looks a priori.

Replies from: DanielFilan, DanielFilan
comment by DanielFilan · 2021-05-14T19:06:32.630Z · LW(p) · GW(p)

Actually, hmm. My thoughts are not really in equilibrium here.

comment by DanielFilan · 2021-05-14T18:33:31.621Z · LW(p) · GW(p)

(Also: such a rewrite would be a combination of 'what I really meant' and 'what the comments made me realize I should have really meant')

comment by DanielFilan · 2021-05-11T07:23:09.417Z · LW(p) · GW(p)

But once you let it do more computation, then it doesn't have to know anything at all, right? Like, maybe the best go bot is, "Train an AlphaZero-like algorithm for a million years, and then use it to play."

I would say that bot knows what the trained AlphaZero-like model knows.

Replies from: DanielFilan
comment by DanielFilan · 2021-05-11T07:23:54.285Z · LW(p) · GW(p)

Also it certainly knows the rules of go and the win condition.

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2021-05-11T10:26:58.485Z · LW(p) · GW(p)

As an additional reason for the importance of tabooing "know", note that I disagree with all three of your claims about what the model "knows" in this comment and its parent.

(The definition of "know" I'm using is something like "knowing X means possessing a mental model which corresponds fairly well to reality, from which X can be fairly easily extracted".)

Replies from: DanielFilan, DanielFilan
comment by DanielFilan · 2021-05-14T18:29:25.979Z · LW(p) · GW(p)

In the parent, is your objection that the trained AlphaZero-like model plausibly knows nothing at all?

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2021-05-16T14:25:40.859Z · LW(p) · GW(p)

The trained AlphaZero model knows lots of things about Go, in a comparable way to how a dog knows lots of things about running.

But the algorithm that gives rise to that model can know arbitrarily few things. (After all, the laws of physics gave rise to us, but they know nothing at all.)

Replies from: DanielFilan
comment by DanielFilan · 2021-06-03T18:25:56.446Z · LW(p) · GW(p)

Ah, understood. I think this is basically covered by talking about what the go bot knows at various points in time, a la this comment [LW(p) · GW(p)] - it seems pretty sensible to me to talk about knowledge as a property of the actual computation rather than the algorithm as a whole. But from your response there it seems that you think that this sense isn't really well-defined.

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2021-06-03T23:48:25.742Z · LW(p) · GW(p)

I'm not sure what you mean by "actual computation rather than the algorithm as a whole". I thought that I was talking about the knowledge of the trained model which actually does the "computation" of which move to play, and you were talking about the knowledge of the algorithm as a whole (i.e. the trained model plus the optimising bot).

comment by DanielFilan · 2021-05-11T16:53:50.970Z · LW(p) · GW(p)

On that definition, how does one train an AlphaZero-like algorithm without knowing the rules of the game and win condition?

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2021-05-13T15:27:20.690Z · LW(p) · GW(p)

The human knows the rules and the win condition. The optimisation algorithm doesn't, for the same reason that evolution doesn't "know" what dying is: neither are the types of entities to which you should ascribe knowledge.

Replies from: DanielFilan
comment by DanielFilan · 2021-05-14T18:28:19.939Z · LW(p) · GW(p)

Suppose you have a computer program that gets two neural networks, simulates a game of go between them, determines the winner, and uses the outcome to modify the neural networks. It seems to me that this program has a model of the 'go world', i.e. a simulator, and from that model you can fairly easily extract the rules and winning condition. Do you think that this is a model but not a mental model, or that it's too exact to count as a model, or something else?
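A minimal sketch of such a program, with a deliberately trivial stand-in "game" (hypothetical; the real components would be neural networks and a go simulator):

```python
import random

def simulate_game(policy_a, policy_b, rng):
    """The program's model of the 'game world': the rules and win
    condition are encoded here, not in the players.
    Toy rule: each player plays a noisy number; the higher number wins."""
    a = rng.gauss(policy_a, 1.0)
    b = rng.gauss(policy_b, 1.0)
    return "a" if a > b else "b"

def training_loop(rounds=500, seed=0):
    rng = random.Random(seed)
    policy_a, policy_b = 0.0, 0.0  # the two 'players'
    for _ in range(rounds):
        winner = simulate_game(policy_a, policy_b, rng)
        # Use the outcome to modify the players: the loser adjusts.
        if winner == "a":
            policy_b += 0.05
        else:
            policy_a += 0.05
    return policy_a, policy_b
```

Extracting "the rules" from this program means reading `simulate_game`; whether that counts as the program knowing the rules is exactly the question at issue.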

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2021-05-16T14:20:51.627Z · LW(p) · GW(p)

I'd say that this is too simple and programmatic to be usefully described as a mental model. The amount of structure encoded in the computer program you describe is very small, compared with the amount of structure encoded in the neural networks themselves. (I agree that you can have arbitrarily simple models of very simple phenomena, but those aren't the types of models I'm interested in here. I care about models which have some level of flexibility and generality, otherwise you can come up with dumb counterexamples like rocks "knowing" the laws of physics.)

As another analogy: would you say that the quicksort algorithm "knows" how to sort lists? I wouldn't, because you can instead just say that the quicksort algorithm sorts lists, which conveys more information (because it avoids anthropomorphic implications). Similarly, the program you describe builds networks that are good at Go, and does so by making use of the rules of Go, but can't do the sort of additional processing with respect to those rules which would make me want to talk about its knowledge of Go.
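For reference, the quicksort in this analogy is just a fixed procedure (a standard textbook sketch):

```python
def quicksort(xs):
    """Sorts a list. Saying it 'knows' how to sort adds nothing
    over saying it sorts: it is a fixed recursive procedure."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))
```

The contrast being drawn: this code uses the ordering relation the way the training program uses the rules of Go, without any further processing that would invite talk of knowledge.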

comment by Yair Halberstadt (yair-halberstadt) · 2021-05-11T16:50:15.170Z · LW(p) · GW(p)

To me a good definition for this is:

Get to a stage where you can write a computer program which can match the best AI at Go, where the program does no training (or equivalent) and you do no training (or equivalent) in the process of writing the software.

I.e., write a classical computer program that uses the techniques of the neural-network-based program to match it at Go.

Replies from: rudi-c
comment by Rudi C (rudi-c) · 2021-05-13T11:19:21.222Z · LW(p) · GW(p)

This is the most intuitive answer to me as well. It's also extremely difficult, and it's unclear how it would be useful for doing alignment generally.
 

Perhaps one idea is to train AI to write legible code, then use human code review on it. This seems as safe as our current mode of software development if the AI is not actively obfuscating (a big assumption).

comment by Ericf · 2021-05-14T22:02:10.659Z · LW(p) · GW(p)

I kind of do know everything the best go bot knows? For a given definition of "knows."

At the most simple: I know that the best move to make, given a board, is the one that leads to a victory board state, or, failing that, to a board state with the best chance of leading to a victory board state. Which is all a go program is doing.

Now, the program can evaluate those conditions to a much greater search depth and breadth than I can, but that isn't a matter of knowledge, just the ability to apply knowledge.

I wouldn't count the database of prior games as part of the go program, since I (or a different program) could also have access to that same database.

comment by Peter Gerdes (peter-gerdes) · 2021-05-12T03:52:53.323Z · LW(p) · GW(p)

This is an interesting direction to explore, but as it stands I don't have any idea what you mean by "understand the go bot", and I fear figuring that out would itself require answering more than you want to ask.

For instance, what if I just memorize the source code? I can slowly apply each step on paper, and since the adversarial training process has no training data or human expert input, if I know the rules of go I can, Chinese-room style, fully replicate the best go bot given enough time.

But if that doesn't count, and you don't just mean "be better than it at go", then you must have in mind that I'd somehow have the same 'insights' as the program. But to state that challenge we'd need a precise (mathematical) definition specifying the insights contained in a trained ML model, which means we'd already have solved the problem.