Some thoughts after reading Artificial Intelligence: A Modern Approach

post by swift_spiral · 2019-03-19T23:39:46.810Z · LW · GW · 4 comments

Contents

  Intro
  Generating heuristics
  Reasons for failure to learn
  Boosting
4 comments

Intro

I have been trying to learn what I need to know to contribute to AI safety research. To that end, I recently finished working through Artificial Intelligence: A Modern Approach. Before reading this textbook, I knew a bit about machine learning, but did not know much about search algorithms, knowledge representation, Bayesian networks, or the many other topics AIMA covers. I found this textbook a useful overview of its field. Below I will talk about a few concepts that I found particularly interesting.

Generating heuristics

If an algorithm is searching through an action space, it needs some way of estimating how close each reachable next state is to its goal. One way of generating a useful heuristic is to relax some of the limitations on actions. Computing “how hard is it to get from state A to the goal?” takes a lot of work, so instead the algorithm can ask “how hard would it be to get from state A to the goal if I could do things I actually cannot?” For example, if you are planning how to coordinate planes, fuel, and cargo to fly various packages where they need to go, you could create a heuristic by ignoring the planes’ fuel limits, forming a problem that is easier to solve but still gives information about the true problem. I would expect it to be hard to find relaxed problems that are still close enough to the full problem to give useful heuristics, but in the examples the book gives it seems to work pretty well.
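To make this concrete, here is a minimal Python sketch (the function names and the flat-tuple state encoding are my own) of two heuristics for the 8-puzzle, which AIMA derives by exactly this kind of relaxation: let tiles teleport to their goal square and the relaxed solution cost is the number of misplaced tiles; let tiles slide through occupied squares and it is the sum of Manhattan distances.

```python
# Two relaxation-derived heuristics for the 8-puzzle.
# State: a tuple of 9 ints read row by row, with 0 for the blank.

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def misplaced_tiles(state, goal=GOAL):
    """Relaxation: a tile may jump straight to its goal square.
    Optimal cost of the relaxed problem = number of misplaced tiles."""
    return sum(1 for s, g in zip(state, goal) if s != g and s != 0)

def manhattan(state, goal=GOAL):
    """Weaker relaxation: a tile may slide through occupied squares.
    Optimal relaxed cost = each tile's grid distance to its goal square."""
    goal_pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        r, c = divmod(i, 3)
        gr, gc = goal_pos[tile]
        total += abs(r - gr) + abs(c - gc)
    return total
```

Because each relaxation only removes constraints, the relaxed solution can never cost more than the true one, so both heuristics are admissible for A* search.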

Reasons for failure to learn

There are a few different reasons a learning algorithm can fail to find the right function: the correct hypothesis might not be in the hypothesis space; the data can be noisy, so that perfect prediction from the information the algorithm has access to is impossible; sample variance can make the data the algorithm sees unrepresentative of the true distribution; and the complexity of the hypothesis space can prevent the algorithm from finding the correct hypothesis. I found this a useful summary of possible barriers to learning.
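As a toy illustration of the first failure mode (my example, not the book’s): if the true function is quadratic but the hypothesis space contains only lines, no amount of clean data lets the learner reach zero error, while enlarging the hypothesis space fixes it.

```python
# The true function is quadratic, but a degree-1 (linear) hypothesis space
# cannot represent it, so the best line still has substantial error even
# on noise-free data. Widening the space to degree 2 drives error to ~0.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = x ** 2                                   # true hypothesis, not a line

slope, intercept = np.polyfit(x, y, deg=1)   # best fit within the small space
line_mse = np.mean((slope * x + intercept - y) ** 2)

a, b, c = np.polyfit(x, y, deg=2)            # enlarge the hypothesis space
quad_mse = np.mean((a * x**2 + b * x + c - y) ** 2)

print(f"line MSE: {line_mse:.4f}, quadratic MSE: {quad_mse:.2e}")
```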

Boosting

In boosting, you train an initial classifier, see which data points it gets wrong, and then train another classifier with those misclassified training samples weighted more heavily. You do this a few times and then combine the classifiers to make your actual predictions. This gives surprisingly (to me, at least) good performance. If the first classifier gets examples wrong because of noise, adding more classifiers doesn’t seem like it would help, because the new classifiers don’t bring any new information. I’d guess it helps when the first classifier fits the more common samples, so the later classifiers can improve performance on the rarer data points. It could also help if one of the classifiers gets stuck in a bad local minimum.
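For reference, here is a rough sketch of the AdaBoost-style update (a simplification of the scheme the book describes; it assumes ±1 labels and uses scikit-learn decision stumps as the weak learners), which makes the reweighting step explicit:

```python
# AdaBoost sketch: after each round, misclassified samples are up-weighted
# so the next weak learner focuses on them; the final prediction is a
# weighted vote of all the weak learners. Assumes labels y are in {-1, +1}.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, rounds=10):
    n = len(y)
    w = np.full(n, 1.0 / n)                      # start with uniform weights
    stumps, alphas = [], []
    for _ in range(rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w[pred != y]) / np.sum(w)   # weighted training error
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # this stump's vote weight
        w *= np.exp(-alpha * y * pred)           # up-weight the mistakes
        w /= w.sum()                             # renormalize
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(X, stumps, alphas):
    votes = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(votes)
```

Note that `alpha` shrinks toward zero as a stump’s weighted error approaches 1/2, so weak learners that do barely better than chance get correspondingly little say in the final vote.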

4 comments


comment by Hazard · 2019-03-20T17:11:51.878Z · LW(p) · GW(p)

Yay self-study! Positive reinforcement!

comment by habryka (habryka4) · 2019-03-20T22:16:54.566Z · LW(p) · GW(p)

Agree, +1

comment by habryka (habryka4) · 2019-03-20T02:21:21.622Z · LW(p) · GW(p)

Small edit note: Replaced your headings with real headings, to make the ToC work. Let me know if you want it changed back.

comment by swift_spiral · 2019-03-20T11:25:44.228Z · LW(p) · GW(p)

This is better, thank you!