Yann LeCun: We only design machines that minimize costs [therefore they are safe]
post by tailcalled · 2024-06-15T17:25:59.973Z · LW · GW · 8 comments
This is a link post for https://twitter.com/ylecun/status/1801955431209672966
Just a tweet I saw:
Yann LeCun
Doomers: OMG, if a machine is designed to maximize utility, it will inevitably diverge
Engineers: calm down, dude. We only design machines that minimize costs. Cost functions have a lower bound at zero. Minimizing costs can't cause divergence unless you're really stupid.
Some commentary:
I think Yann LeCun is being misleading here. While people intuitively think maximization and minimization are different, the real distinction is between convex optimization (where e.g. every local optimum is a global optimum) and non-convex optimization. The optimization problems people hope AGI will solve are typically non-convex.
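A minimal sketch of why that distinction matters (Python; the two cost functions and the gradient-descent parameters are arbitrary choices of mine, purely for illustration): on a convex cost, local search finds the global minimum from any starting point, while on a non-convex cost the answer depends on where you start.

```python
import numpy as np

# Two illustrative cost functions, both bounded below (names and forms are arbitrary).
def convex_cost(x):
    return (x - 3.0) ** 2                  # unique global minimum at x = 3

def nonconvex_cost(x):
    return np.sin(3.0 * x) + 0.1 * x ** 2  # many local minima

def gradient_descent(cost, x0, lr=0.01, steps=5000, eps=1e-6):
    """Plain gradient descent with a finite-difference gradient."""
    x = x0
    for _ in range(steps):
        grad = (cost(x + eps) - cost(x - eps)) / (2 * eps)
        x -= lr * grad
    return x

# Convex: every starting point converges to the same global optimum (~3.0).
print(gradient_descent(convex_cost, -10.0), gradient_descent(convex_cost, 10.0))

# Non-convex: different starting points get stuck in different local minima.
print(gradient_descent(nonconvex_cost, -4.0), gradient_descent(nonconvex_cost, 4.0))
```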
Translating back to practical matters, you are presumably going to end up with some cost functions where you don't reach the lower bound of zero, just because some desirable outcomes require tradeoffs, run into resource limitations, or similar. If you backchain these costs through the causal structure of the real world, that gives you instrumental convergence for standard reasons [LW · GW], just as you get when backchaining utilities.
8 comments
Comments sorted by top scores.
comment by Razied · 2024-06-15T20:15:17.673Z · LW(p) · GW(p)
Very many things wrong with all of that:
- RL algorithms don't minimize costs, but maximize expected reward, which can well be unbounded, so it's wrong to say that the ML field only minimizes cost.
- LLMs minimize the expected negative log probability of the correct token, which is indeed bounded below by zero, but achieving zero in that case means perfectly predicting every single token on the internet.
- The boundedness of the thing you're minimizing is totally irrelevant, since maximizing f(x) is exactly the same as maximizing g(f(x)) where g is a monotonically increasing function. You can trivially turn a bounded function into an unbounded one without changing the solution set at all (see the sketch after this list).
- Even if utility is bounded between 0 and 1, an agent maximizing the expected utility will still never stop, because you can always decrease the probability you were wrong. Quadruple-check every single step and turn the universe into computronium to make sure you didn't make any errors.
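To make the monotone-transformation point concrete, here's a minimal sketch (Python; the quadratic cost and the grid are arbitrary choices of mine): a cost bounded below by zero is turned into an unbounded reward without moving the optimizer.

```python
import numpy as np

xs = np.linspace(-5.0, 5.0, 10001)

# An arbitrary cost, bounded below by zero, minimized at x = 2.
cost = (xs - 2.0) ** 2

# -log is strictly decreasing, so minimizing `cost` is the same problem as
# maximizing `reward` -- except reward is unbounded above as cost goes to 0.
reward = -np.log(cost + 1e-300)   # tiny epsilon only to dodge log(0) on the grid

print(xs[np.argmin(cost)], xs[np.argmax(reward)])   # ~2.0 and ~2.0: same optimizer
```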
This is very dumb; LeCun should know better, and I'm sure he *would* know better if he spent 5 minutes thinking about any of this.
Replies from: tailcalled
↑ comment by tailcalled · 2024-06-15T20:21:26.800Z · LW(p) · GW(p)
- RL algorithms don't minimize costs, but maximize expected reward, which can well be unbounded, so it's wrong to say that the ML field only minimizes cost.
Yann LeCun's proposals are based on cost-minimization.
Replies from: Razied
↑ comment by Razied · 2024-06-15T20:32:48.434Z · LW(p) · GW(p)
Do you expect LeCun to have been assuming that the entire field of RL stops existing in order to focus on his specific vision?
Replies from: tailcalled
↑ comment by tailcalled · 2024-06-15T20:52:48.859Z · LW(p) · GW(p)
I'm not sure he has coherent expectations, but I'd expect his vibe is some combination of "RL doesn't currently work" and "fields generally implement safety standards".
comment by Adam Jermyn (adam-jermyn) · 2024-06-15T17:57:58.756Z · LW(p) · GW(p)
Another objection is that you can minimize the wrong cost function. Making "cost" go to zero could mean making "the thing we actually care about" go to (negative huge number).
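A minimal sketch of that failure mode (Python; both the proxy cost and the "true value" function are hypothetical stand-ins I made up): the proxy cost reaches its lower bound of zero exactly where the thing we actually care about is driven hugely negative.

```python
import numpy as np

xs = np.linspace(-100.0, 100.0, 2001)

# The cost the designer actually wrote down (a hypothetical proxy),
# bounded below by zero and minimized at x = 100.
proxy_cost = (100.0 - xs) ** 2

# The thing we actually care about (also hypothetical), best at x = 0.
true_value = -xs ** 2

i = np.argmin(proxy_cost)
print(xs[i])          # 100.0: proxy cost hits its lower bound of zero here...
print(true_value[i])  # -10000.0: ...while the thing we care about goes hugely negative
```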
Replies from: tailcalled
↑ comment by tailcalled · 2024-06-15T18:18:50.844Z · LW(p) · GW(p)
I don't think this objection lands unless one first sees why the safety guarantees we usually associate with cost minimization don't apply to AGI. Like, what sort of mindset would hear Yann LeCun's argument, go "ah, so we're safe", and then hear your objection and go "oh I see, so Yann LeCun was wrong"?
comment by FlorianH (florian-habermacher) · 2024-06-15T18:33:57.501Z · LW(p) · GW(p)
Dear Yann LeCun, dear all,
Time to reveal myself: I'm actually just a machine designed to minimize cost. It's a sort of weighted cost of deviation from a few competing aims I harbor.
And, dear Yann LeCun, while I wish it were true, it's absolutely laughable to claim I'd be unable to implement things none of you like, if you gave me enough power (i.e. intelligence).
∎.
I mean to propose this as a trivial proof by contradiction against his proposition. Or am I overlooking something?? I guess: 1. I can definitely be implemented by what we might call cost minimization[1], and sadly, however benign my aims are today in theory, 2. I really don't think anyone can fully trust me or the average human if any of us got infinitely powerful.[2] So, it suffices to think about us humans to see the supposed "Engineers"' (euhh) logic falter, no?
[1] Whether this comes with or without a strange loop making me sentient (or, if you want, making it appear to myself that I am) doesn't even matter for the question.
[2] Say, I'd hope I'd do great stuff and be a huge savior, but who really knows; either way, it's still rather plausible that I'd do things a large share of people might find rather dystopian.