comment by Chantiel ·
2021-04-08T02:23:02.737Z
I've been working on defining "optimizer", and I'm wondering about what people consider to be or not be an optimizer. I'm planning on talking about it in my own post, but I'd like to ask here first because I'm a scaredy cat.
I know a person or AI refining plans or hypotheses would generally be considered an optimizer.
What about systems that evolve? Would an entire population of a type of creature be its own optimizer? It's optimizing for the genetic fitness of the individuals, so I don't see why it wouldn't be. Evolutionary programming just emulates it, and that's definitely an optimizer.
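To make the comparison concrete, here's a minimal sketch of the evolutionary loop that evolutionary programming emulates (the population size, bit-string genomes, and one-bit mutation scheme are all made up for illustration, not a claim about any particular system):

```python
import random

def evolve(pop_size=20, genome_len=16, generations=100, seed=0):
    """Toy evolutionary loop: fitness = number of 1 bits in a genome.
    Returns (best fitness at the start, best fitness at the end)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    initial_best = max(sum(g) for g in pop)
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        pop.sort(key=sum, reverse=True)
        survivors = pop[: pop_size // 2]
        # Reproduction with mutation: each child is a copy of a
        # survivor with one random bit flipped.
        children = []
        for parent in survivors:
            child = parent[:]
            i = rng.randrange(genome_len)
            child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return initial_best, max(sum(g) for g in pop)
```

Because the fittest individuals always survive to the next generation, the best fitness in the population can only go up over time, which is what makes it feel natural to call the whole population an optimizer.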
How do you draw the line between systems that evolve and systems that don't? Is a sterile rock an optimization process? I suppose there is potential for the rock's contents to evolve. I mean, maybe eventually, through the right collisions, life could evolve in a pile of rocks, and then it would evolve like normal. Are rocks not optimizers, or just really weak, slow optimizers that take a really, really long time to come up with a configuration that isn't just as horrible for self-reproduction as everything else in the rock?
What about systems that tend towards stable configurations? Imagine you have a box with lots of action figures and props and you're bouncing it around. I think such a system would, if feasible, tend towards stable configurations of its contents. For example, initially the action figures might be scattered about and bouncing everywhere, but eventually the system might settle them into secure, stable positions. Maybe Spiderman would end up with his arm securely lodged in a prop and his adjustable spider-web accessory securely wrapped around a miniature street light. Is that system an optimizer? What if the toys also came with little motors and a microcontroller to control them, and bouncing them around could change their programs? If you kept this up for a sufficiently long time, you could potentially end up with your action figures producing clever strategies to maintain their configuration despite the shaking and avoid further changes to their programs.
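The shaken-box intuition can be sketched as a toy process: each configuration has some stability, less stable configurations are more likely to get knocked into a new one, and a fully stable configuration is never dislodged. The configurations and stability values below are invented for illustration, not a model of any real box:

```python
import random

# Toy "shaken box": each configuration has a stability in [0, 1].
# A shake knocks the system into a random new configuration with
# probability (1 - stability), so a fully stable configuration
# (stability 1.0) is absorbing: once reached, it never changes.
STABILITY = {0: 0.1, 1: 0.3, 2: 0.5, 3: 0.8, 4: 1.0}

def shake(start=0, max_shakes=100_000, seed=0):
    """Shake repeatedly and return the final configuration."""
    rng = random.Random(seed)
    state = start
    for _ in range(max_shakes):
        if STABILITY[state] == 1.0:
            break  # fully stable; further shakes change nothing
        if rng.random() < 1.0 - STABILITY[state]:
            state = rng.choice(list(STABILITY))  # knocked loose
    return state
```

The system ends up in the most stable configuration not because anything is "aiming" at it, but because that's the only configuration the shaking can't knock it out of, which is exactly what makes it hard to say whether this counts as optimization.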
What about annealing? Basically, annealing involves putting a piece of metal in an oven and heating it for a while, which changes its durability and ductility. Normally, people wouldn't think of a piece of metal as an optimizer. However, there's an optimization algorithm called "simulated annealing", and it works pretty much the same way as actual annealing: actual annealing is a process in which the contents of the metal end up in low-energy states. I don't know how I could justify calling a simulated annealing program an optimizer and not calling actual annealing an optimizer.
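For reference, here's a minimal sketch of simulated annealing (the cooling schedule, step size, and example landscape are arbitrary choices for illustration): worse moves are accepted with probability exp(-delta/T), and as the "temperature" T drops, the process settles into low-cost states, just as the metal settles into low-energy ones.

```python
import math
import random

def simulated_annealing(f, x0, steps=20_000, t0=1.0, seed=0):
    """Minimize f over the reals by accepting worse moves with
    probability exp(-delta / T), where T cools over time."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for i in range(steps):
        t = t0 * (1.0 - i / steps) + 1e-9   # linear cooling schedule
        x_new = x + rng.gauss(0.0, 0.5)     # random local perturbation
        fx_new = f(x_new)
        delta = fx_new - fx
        # Always accept improvements; sometimes accept worse moves,
        # less often as the temperature falls.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x, fx = x_new, fx_new
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Example landscape: a shallow local minimum near x = 3 and a
# deeper global minimum near x = -0.5.
landscape = lambda x: 0.1 * (x - 3) ** 2 * (x + 0.5) ** 2 + 0.05 * x ** 2
```

The structural similarity to physical annealing is real: both are stochastic processes biased toward lower "energy", and the only obvious difference is that one runs on a CPU and the other in a crystal lattice.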
To what extent is people's intuition of "optimizer" well-defined? At first I clearly saw people and AIs as optimizers, but I don't know about the cases above.
Am I right that "optimizer" is a fuzzy concept?
And is it well-defined? I imagined so, but I've been thinking about a lot of things that my intuition doesn't say are or aren't optimizers.
How much should we care about our notion of "optimizer"? It seems like the main point of the concept is that we know that some optimizers have the potential to be super powerfully or dangerously good at something. So what if we just directly focused on how to tell if a system has the potential to be super dangerously or powerfully good at something?