Attempting to refine "maximization" with 3 new -izers

post by agilecaveman · 2015-08-11T06:07:41.000Z · LW · GW · 1 comment

This is a link post for https://www.overleaf.com/read/pxkqtdwhwgkc


comment by IAFF-User-111 (Imported-IAFF-User-111) · 2016-01-28T00:06:00.000Z · LW(p) · GW(p)

Skimmed it.

It would be helpful to define "stopping point" and "stopping distance".


Wrt local optima:

Deep neural nets were historically thought to suffer from local optima. Recently, this viewpoint has been challenged; see, e.g., "The Loss Surfaces of Multilayer Networks" (http://arxiv.org/abs/1412.0233) and the references therein.

Although the issue remains unclear, I currently suspect that local optima are not a practical obstacle for an (omniscient) hill-climber in the real world.
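To make the worry concrete, here is a minimal toy illustration of a greedy hill-climber getting trapped in a one-dimensional local optimum; the function f, the step size, and the starting points are arbitrary choices of mine, not anything from the paper. The challenged viewpoint mentioned above is that high-dimensional loss surfaces mostly lack traps like this.

```python
# Toy sketch (my own example): a greedy hill-climber on a 1-D loss with
# two basins. Started in the shallow basin, it never reaches the deeper one.

def f(x):
    # Two basins: a shallow local minimum near x = -1 and a deeper
    # minimum near x = 2 (the -0.5*x term tilts the surface rightward).
    return (x + 1) ** 2 * (x - 2) ** 2 - 0.5 * x

def hill_climb(x, step=0.01, iters=10_000):
    for _ in range(iters):
        # Move to a neighbour only if it strictly decreases the loss.
        best = min([x - step, x + step], key=f)
        if f(best) >= f(x):
            return x  # stuck: no downhill neighbour within one step
        x = best
    return x

print(hill_climb(-1.5))  # converges near x = -1, the shallow local minimum
print(hill_climb(1.0))   # converges near x = 2, the deeper minimum
```

An omniscient hill-climber, as hypothesized above, could see past the barrier between the basins; the open question is how often real high-dimensional surfaces even have barriers that matter.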


I wasn't convinced by the overall statement about tiling (or not tiling). I think you should give more detailed arguments for why you do or don't expect these agents to tile, and explain the set-up a bit more as well: are you imagining agents that take a single action, based on their current policy, to adopt a new policy which is then not subject to further modification? If not, how can you ensure that agents do not modify their policy in such a way that policy_new encourages further modifications, which can then compound? (A toy sketch of the two readings follows below.)
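For concreteness, here is a hypothetical sketch of the two set-ups the question distinguishes; the integer "policies", the threshold, and the modify() rule are toy inventions of mine, not anything from the paper.

```python
# Hypothetical sketch (names and rules are mine): one-shot policy adoption
# versus self-modifications that compound until a fixed point, if any.

def modify(policy):
    # Toy self-modification rule: a policy below some threshold proposes
    # a slightly stronger successor; at the threshold it proposes itself.
    return policy if policy >= 5 else policy + 1

def adopt_once(policy):
    # Reading 1: the agent takes a single action to adopt a new policy,
    # and the result is not subject to further modification.
    return modify(policy)

def adopt_repeatedly(policy, max_steps=100):
    # Reading 2: nothing freezes policy_new, so if modify() keeps proposing
    # changes, the modifications compound; "tiling" would require reaching
    # a fixed point that endorses itself.
    for _ in range(max_steps):
        new_policy = modify(policy)
        if new_policy == policy:
            return policy  # fixed point: self-modification stops here
        policy = new_policy
    return policy  # no fixed point reached within the budget

print(adopt_once(0))        # 1: one modification, then frozen
print(adopt_repeatedly(0))  # 5: modifications compound to a fixed point
```

Under the second reading, a tiling argument would need to show that modify() has such a fixed point and that the agent actually reaches it, which is the kind of detail the post could spell out.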