mattmacdermott's Shortform
post by mattmacdermott · 2024-01-03T09:08:14.015Z · LW · GW · 31 comments
Comments sorted by top scores.
comment by mattmacdermott · 2025-04-06T15:48:28.092Z · LW(p) · GW(p)
Circular Consequentialism
When I was a kid I used to love playing RuneScape. One day I had what seemed like a deep insight. Why did I want to kill enemies and complete quests? In order to level up and get better equipment. Why did I want to level up and get better equipment? In order to kill enemies and complete quests. It all seemed a bit empty and circular. I don't think I stopped playing RuneScape after that, but I would think about it every now and again and it would give me pause. In hindsight, my motivations weren't really circular — I was playing RuneScape in order to have fun, and the rest was just instrumental to that.
But my question now is — is a true circular consequentialist a stable agent that can exist in the world? An agent that wants X only because it leads to Y, and wants Y only because it leads to X?
Note, I don't think this is the same as a circular preferences situation. The agent isn't swapping X for Y and then Y for X over and over again, ready to get money pumped by some clever observer. It's getting more and more X, and more and more Y over time.
Obviously if it terminally cares about both X and Y, or cares about them both instrumentally for some other purpose, a normal consequentialist could display this behaviour. But do you even need terminal goals here? Can you have an agent that only cares about X instrumentally for its effect on Y, and only cares about Y instrumentally for its effect on X? In order for this to be different from just caring about X and Y terminally, I think a necessary property is that the only path through which the agent is trying to increase Y is X, and the only path through which it's trying to increase X is Y.
Replies from: Mo Nastri, niplav, AllAmericanBreakfast, Charlie Steiner, Max Lee, robert-k
↑ comment by Mo Putera (Mo Nastri) · 2025-04-07T09:46:37.970Z · LW(p) · GW(p)
There's a version of this that might make sense to you, at least if what Scott Alexander wrote here resonates:
I’m an expert on Nietzsche (I’ve read some of his books), but not a world-leading expert (I didn’t understand them). And one of the parts I didn’t understand was the psychological appeal of all this. So you’re Caesar, you’re an amazing general, and you totally wipe the floor with the Gauls. You’re a glorious military genius and will be celebrated forever in song. So . . . what? Is beating other people an end in itself? I don’t know, I guess this is how it works in sports. But I’ve never found sports too interesting either. Also, if you defeat the Gallic armies enough times, you might find yourself ruling Gaul and making decisions about its future. Don’t you need some kind of lodestar beyond “I really like beating people”? Doesn’t that have to be something about leaving the world a better place than you found it?
Admittedly altruism also has some of this same problem. Auden said that “God put us on Earth to help others; what the others are here for, I don’t know.” At some point altruism has to bottom out in something other than altruism. Otherwise it’s all a Ponzi scheme, just people saving meaningless lives for no reason until the last life is saved and it all collapses.
I have no real answer to this question - which, in case you missed it, is “what is the meaning of life?” But I do really enjoy playing Civilization IV. And the basic structure of Civilization IV is “you mine resources, so you can build units, so you can conquer territory, so you can mine more resources, so you can build more units, so you can conquer more territory”. There are sidequests that make it less obvious. And you can eventually win by completing the tech tree (he who has ears to hear, let him listen). But the basic structure is A → B → C → A → B → C. And it’s really fun! If there’s enough bright colors, shiny toys, razor-edge battles, and risk of failure, then the kind of ratchet-y-ness of it all, the spiral where you’re doing the same things but in a bigger way each time, turns into a virtuous repetition, repetitive only in the same sense as a poem, or a melody, or the cycle of generations.
The closest I can get to the meaning of life is one of these repetitive melodies. I want to be happy so I can be strong. I want to be strong so I can be helpful. I want to be helpful because it makes me happy.
I want to help other people in order to exalt and glorify civilization. I want to exalt and glorify civilization so it can make people happy. I want them to be happy so they can be strong. I want them to be strong so they can exalt and glorify civilization. I want to exalt and glorify civilization in order to help other people.
I want to create great art to make other people happy. I want them to be happy so they can be strong. I want them to be strong so they can exalt and glorify civilization. I want to exalt and glorify civilization so it can create more great art.
I want to have children so they can be happy. I want them to be happy so they can be strong. I want them to be strong so they can raise more children. I want them to raise more children so they can exalt and glorify civilization. I want to exalt and glorify civilization so it can help more people. I want to help people so they can have more children. I want them to have children so they can be happy.
Maybe at some point there’s a hidden offramp marked “TERMINAL VALUE”. But it will be many more cycles around the spiral before I find it, and the trip itself is pleasant enough.
↑ comment by niplav · 2025-04-07T14:45:39.949Z · LW(p) · GW(p)
Related thought: Having a circular preference may be preferable in terms of energy expenditure/fulfillability, because it can be implemented on a reversible computer and fulfilled infinitely without deleting any bits. (Not sure if this works with instrumental goals.)
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2025-04-07T07:07:34.017Z · LW(p) · GW(p)
One way to think about this might be to cast it in the language of conditional probability. Perhaps we are modeling our agent as it makes choices between two world states, A and B, based on their predicted levels of X and Y. If P(A) is the probability that the agent chooses state A, and P(A|X) and P(A|Y) are the probabilities of choosing A given knowledge of predictions about the level of X and Y respectively in state A vs. state B, then it seems obvious to me that "cares about X only because it leads to Y" can be expressed as P(A|XY) = P(A|Y). Once we know its predictions about Y, X tells us nothing more about its likelihood of choosing state A. Likewise, "cares about Y only because it leads to X" could be expressed as P(A|XY) = P(A|X). In the statement "the agent cares about X only because it leads to Y, and it cares about Y only because it leads to X," it seems like it's saying that P(A|XY) = P(A|Y) ∧ P(A|XY) = P(A|X), which implies that P(A|Y) = P(A|X) -- there is perfect mutual information shared between X and Y about P(A).
However, I don't think that this quite captures the spirit of the question, since the idea that the agent "cares about X and Y" isn't the same thing as X and Y being predictive of which state the agent will choose. It seems like what's wanted is a formal way to say "the only things that 'matter' in this world are X and Y," which is not the same thing as saying "X and Y are the only dimensions on which world states are mapped." We could imagine a function that takes the level of X and Y in two world states, A and B, and returns a preference order {A > B, B > A, A = B, incomparable}. But who's to say this function isn't just capturing an empirical regularity, rather than expressing some fundamental truth about why X and Y control the agent's preference for A or B? However, I think that's an issue even in the absence of any sort of circular reasoning.
A machine learning model's training process is effectively just a way to generate a function that consistently maps an input vector to an output on which the loss is close to zero. The model doesn't "really" value reward or avoidance of loss any more than our brains "really" value dopamine, and as far as I know, nobody has a mathematical definition of what it means to "really" value something, as opposed to behaving in a way that consistently tends to optimize for a target. From that point of view, maybe saying that P(A|XY) = P(A|Y) really is the best we can do to mathematically express "he only cares about Y," and P(A|X) = P(A|Y) is the best way to express "he only cares about Y to get X and only cares about X to get Y."
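As a concrete, if toy, illustration of those conditions, here is a minimal sketch (mine, not from the parent comment; the joint distribution, variable names, and numbers are all invented): it computes P(A|X,Y), P(A|X), and P(A|Y) from a small table so you can check whether the equalities hold.

```python
# Toy check of "cares about X only via Y": P(A|X,Y) = P(A|Y), and the circular
# case, which adds P(A|X,Y) = P(A|X). The joint distribution below is made up.
joint = {
    # (choice, x, y): probability, where x and y are binary predictions
    ("A", 0, 0): 0.05, ("A", 0, 1): 0.20, ("A", 1, 0): 0.20, ("A", 1, 1): 0.30,
    ("B", 0, 0): 0.10, ("B", 0, 1): 0.05, ("B", 1, 0): 0.05, ("B", 1, 1): 0.05,
}

def conditional(choice, **condition):
    """P(choice | condition), e.g. conditional("A", x=1) or conditional("A", x=1, y=0)."""
    def matches(key):
        _, x, y = key
        return all({"x": x, "y": y}[name] == val for name, val in condition.items())
    numer = sum(p for k, p in joint.items() if k[0] == choice and matches(k))
    denom = sum(p for k, p in joint.items() if matches(k))
    return numer / denom

p_a_given_xy = conditional("A", x=1, y=1)
p_a_given_x = conditional("A", x=1)
p_a_given_y = conditional("A", y=1)
print(p_a_given_xy, p_a_given_x, p_a_given_y)
# "Cares about X only because it leads to Y" would show up as the first and
# third numbers matching; the fully circular case demands all three match.
```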
↑ comment by Charlie Steiner · 2025-04-07T03:38:23.586Z · LW(p) · GW(p)
If you can tell me in math what that means, then you can probably make a system that does it. No guarantees on it being distinct from a more "boring" specification though.
Here's my shot: You're searching a game tree, and come to a state that has some X and some Y. You compute a "value of X" that's the total discounted future "value of Y" you'll get, conditional on your actual policy, relative to a counterfactual where you have some baseline level of X. And also you compute the "value of Y," which is the same except it's the (discounted, conditional, relative) expected total "value of X" you'll get. You pick actions to steer towards a high sum of these values.
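Here is one possible reading of that in runnable form, a rough sketch only: the mutual recursion is unrolled to a fixed depth, where it has to bottom out in the raw amounts of X and Y, and the policy, dynamics, and constants are all invented.

```python
# Sketch of the mutually recursive "value of X" / "value of Y" idea. The
# recursion is cut off at a finite depth, where it bottoms out in the raw
# amounts of X and Y; the cutoff is what keeps the circular definition from
# being vacuous. Dynamics and numbers are made up.
GAMMA = 0.9
STEPS = 4   # length of the lookahead under the agent's actual policy
DEPTH = 3   # how many times to unroll the circular definition

def step(x, y):
    """Toy policy/dynamics: existing Y is used to produce X and vice versa."""
    return x + 0.5 * y, y + 0.5 * x

def value_of_x(x, y, depth=DEPTH):
    if depth == 0:
        return x                      # the recursion has to bottom out somewhere
    total, bx, by = 0.0, 0.0, y       # counterfactual baseline: same Y, no X to start
    ax, ay = x, y
    for t in range(STEPS):
        ax, ay = step(ax, ay)
        bx, by = step(bx, by)
        total += GAMMA ** t * (value_of_y(ax, ay, depth - 1) - value_of_y(bx, by, depth - 1))
    return total

def value_of_y(x, y, depth=DEPTH):
    if depth == 0:
        return y
    total, bx, by = 0.0, x, 0.0       # counterfactual baseline: same X, no Y to start
    ax, ay = x, y
    for t in range(STEPS):
        ax, ay = step(ax, ay)
        bx, by = step(bx, by)
        total += GAMMA ** t * (value_of_x(ax, ay, depth - 1) - value_of_x(bx, by, depth - 1))
    return total

# The agent would pick actions steering towards a high value_of_x + value_of_y.
print(value_of_x(1.0, 1.0) + value_of_y(1.0, 1.0))
```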
↑ comment by Knight Lee (Max Lee) · 2025-04-07T05:07:44.467Z · LW(p) · GW(p)
I think that, formally, such a circular consequentialist agent should not exist, since running a calculation of X's utility either
- returns 0 utility, by avoiding self-reference, or
- runs in an endless loop and throws a stack overflow error, without returning any utility.
However, my guess is that in practice such an agent could exist, if we don't insist on it being perfectly rational.
Instead of running a calculation of X's utility, it has an intuitive guesstimate for X's utility. A "vibe" for how much utility X has.
Over time, it adjusts its guesstimate of X's utility based on whether X helps it acquire other things which have utility. If it discovers that X doesn't achieve anything, it might reduce its guesstimate of X's utility. However if it discovers that X helps it acquire Y which helps it acquire Z, and its guesstimate of Z's utility is high, then it might increase its guesstimate of X's utility.
And it may stay in an equilibrium where it guesses that all of these things have utility, because all of these things help it acquire one another.
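A tiny sketch of that equilibrium story, purely illustrative, with made-up items, update rule, and numbers: each item's estimated utility drifts toward the estimated utility of whatever it helps acquire, so X and Y can prop each other up indefinitely while an item that leads nowhere decays.

```python
# Sketch of the "guesstimate equilibrium": each item's estimated utility is
# nudged toward the estimated utility of whatever it helps acquire. X and Y
# bootstrap each other and settle at a shared nonzero value; W, which leads
# nowhere, decays. Items, update rule, and numbers are all invented.
LR = 0.1

value = {"X": 5.0, "Y": 1.0, "W": 3.0}
helps_acquire = {"X": ["Y"], "Y": ["X"], "W": []}   # W achieves nothing

for _ in range(200):
    new = {}
    for item, downstream in helps_acquire.items():
        target = sum(value[d] for d in downstream)   # utility of what this item gets you
        new[item] = (1 - LR) * value[item] + LR * target
    value = new

print(value)  # X and Y end up near 3.0, W near 0
```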
I think the reason you value the items in the video game is that humans have the mesa-optimizer goal of "success": having something under your control grow and improve and be preserved.
Maybe one hope is that the artificial superintelligence will also have a bit of this goal, and place a bit of this value on humanity and what we wish for. Though obviously it can go wrong.
↑ comment by Mis-Understandings (robert-k) · 2025-04-06T22:02:21.613Z · LW(p) · GW(p)
That does not look like state-valued consequentialism as we typically see it, but like act-valued consequentialism (in a Markov model, the value of an act is its intrinsic value plus the expected value of the sum of future actions): an agent that places value on the acts "use existing X to get more Y" and "use existing Y to get more X". I mean, how is this different from placing value on actions that produce Y from X and actions that produce X from Y, if the amounts of X and Y just set the scale of a particular action?
It looks money-pump resistant because it wants to take those actions as many times as possible, and as well as possible, while a money pump generally requires that the scale of the transactions drops over time (that's the resource the pumper is extracting). But then the trade is inefficient. There are probably benefits to being an efficient counterparty, and money-pumpers are inefficient counterparties.
comment by mattmacdermott · 2024-12-20T08:48:25.962Z · LW(p) · GW(p)
Could mech interp ever be as good as chain of thought?
Suppose there is 10 years of monumental progress in mechanistic interpretability. We can roughly - not exactly - explain any output that comes out of a neural network. We can do experiments where we put our AIs in interesting situations and make a very good guess at their hidden reasons for doing the things they do.
Doesn't this sound a bit like where we currently are with models that operate with a hidden chain of thought? If you don't think that an AGI built with the current fingers-crossed-it's-faithful paradigm would be safe, what percentage would mech interp have to hit to beat that?
Seems like 99+ to me.
Replies from: Seth Herd, dtch1997
↑ comment by Seth Herd · 2024-12-21T02:56:46.259Z · LW(p) · GW(p)
I very much agree. Do we really think we're going to track a human-level AGI's, let alone a superintelligence's, every thought, and do it in ways it can't dodge if it decides to?
I strongly support mechinterp as a lie detector, and it would be nice to have more of it, as long as we don't use that and control methods to replace actual alignment work and careful thinking. The amount of effort going into interp relative to the theory of impact seems a bit strange to me.
↑ comment by Daniel Tan (dtch1997) · 2024-12-21T23:08:21.349Z · LW(p) · GW(p)
Similar point is made here (towards the end): https://www.lesswrong.com/posts/HQyWGE2BummDCc2Cx/the-case-for-cot-unfaithfulness-is-overstated [LW · GW]
I think I agree, more or less. One caveat is that I expect RL fine-tuning to degrade the signal / faithfulness / what-have-you in the chain of thought, whereas the same is likely not true of mech interp.
comment by mattmacdermott · 2025-01-22T17:35:23.749Z · LW(p) · GW(p)
Your brain is for holding ideas, not for having them
Notes systems are nice for storing ideas but they tend to get clogged up with stuff you don't need, and you might never see the stuff you do need again. Wouldn't it be better if
- ideas got automatically downweighted in visibility over time according to their importance, as judged by an AGI who is intimately familiar with every aspect of your life
- ideas got automatically served up to you at relevant moments as judged by that AGI.
Your brain is that notes system. On the other hand, writing notes is a great way to come up with new ideas.
Replies from: CstineSublime
↑ comment by CstineSublime · 2025-01-24T00:46:50.830Z · LW(p) · GW(p)
Notes systems are nice for storing ideas but they tend to get clogged up with stuff you don't need, and you might never see the stuff you do need again.
Someone said that most people who complain about their note-taking or personal knowledge management systems don't really need a new method of recording and indexing ideas, but a better decision-making model. Thoughts?
Particularly since coming up with new ideas is the easy part. To incorrectly quote Alice in Wonderland: you can think of six impossible things before breakfast. There's even a name for someone who is all ideas and no execution: the ideas man. But for every good idea there are at least nine bad ideas (per Sturgeon's law).
But - what might the model that the AGI uses to downweight visibility and serve up ideas look like? Only a small portion of ideas will be useful at all, which means a ridiculously small number are useful at any given time.
writing notes is a great way to come up with new ideas.
Is it a great way to come up with good ideas though?
Replies from: mattmacdermott
↑ comment by mattmacdermott · 2025-01-24T13:28:32.216Z · LW(p) · GW(p)
But - what might the model that the AGI uses to downweight visibility and serve up ideas look like?
What I was meaning to get at is that your brain is an AGI that does this for you automatically.
Replies from: CstineSublime
↑ comment by CstineSublime · 2025-01-26T01:13:17.248Z · LW(p) · GW(p)
Mine doesn't, or does so very VERY poorly.
comment by mattmacdermott · 2024-01-18T08:09:51.975Z · LW(p) · GW(p)
LW feature I would like: I click a button on a sequence and receive one post in my email inbox per day.
comment by mattmacdermott · 2025-03-24T23:53:24.133Z · LW(p) · GW(p)
Why does ChatGPT voice-to-text keep translating me into Welsh?
I use ChatGPT voice-to-text all the time. About 1% of the time, the message I record in English gets seemingly-roughly-correctly translated into Welsh, and ChatGPT replies in Welsh. Sometimes my messages go back to English on the next message, and sometimes they stay in Welsh for a while. Has anyone else experienced this?
Example: https://chatgpt.com/share/67e1f11e-4624-800a-b9cd-70dee98c6d4e
comment by mattmacdermott · 2024-12-05T10:02:57.771Z · LW(p) · GW(p)
I'm confused by this argument from Redwood's AI control paper.
5.2 WHY WE DIDN’T CONSIDER STRATEGIES THAT INVOLVE TRAINING THE MODEL [...] Any protocol that involves training the model can be transformed into a validation-style protocol that merely validates the model’s performance on the dataset that was used as a train set. In any case where the validation-style protocol fails, we know that the red-team model is not distinguished from an aligned model by the measurements done by the protocol. Given that none of the training examples allow the reward-generating process to determine whether the policy it is training is undesirable, we don’t see why training would transform the model from an undesirable policy to a desirable policy. It is conceivable that we could find a training scheme that systematically converts undesirable policies to desirable ones, despite the lack of a reward differential causing it to do so. But we don’t know how.
If I understand correctly, the argument is:
- In order for training on dataset D to convince you a model is desirable, good performance on D needs to be a sufficient condition for being desirable.
- If good performance on D is a sufficient condition for being desirable, you can just test performance on D to decide whether a model is desirable.
But 2 seems wrong -- if good performance on D is sufficient but not necessary for being desirable, then you could have two models, one desirable and one undesirable, which are indistinguishable on D because they both perform badly, and then turn them both into desirable models by training on D.
As an extreme example, suppose your desirable model outputs correct code and your undesirable model backdoored code. Using D you train them both to output machine checkable proofs of the correctness of the code they write. Now both models are desirable because you can just check correctness before deploying the code.
comment by mattmacdermott · 2024-01-03T09:08:14.193Z · LW(p) · GW(p)
I feel like there's a bit of a motte and bailey in AI risk discussion, where the bailey is "building safe non-agentic AI is difficult" and the motte is "somebody else will surely build agentic AI".
Are there any really compelling arguments for the bailey? If not then I think "build an oracle and ask it how to avoid risk from other people building agents" is an excellent alignment plan.
Replies from: quetzal_rainbow
↑ comment by quetzal_rainbow · 2024-01-03T10:04:02.177Z · LW(p) · GW(p)
Replies from: mattmacdermott
↑ comment by mattmacdermott · 2024-01-03T10:31:46.941Z · LW(p) · GW(p)
AIs limited to pure computation (Tool AIs) supporting humans, will be less intelligent, efficient, and economically valuable than more autonomous reinforcement-learning AIs (Agent AIs) who act on their own and meta-learn, because all problems are reinforcement-learning problems.
Isn’t this a central example of “somebody else will surely build agentic AI”?
I guess it argues “building safe non-agentic AI before somebody else builds agentic AI is difficult” because agents have a capability advantage.
This may well be true (but also perhaps not, because e.g. agents might have capability disadvantages from misalignment, or because reinforcement learning is just harder than other forms of ML).
But either way I think it has importantly different strategy implications to “it seems difficult to make non-agentic AI safe”.
Replies from: quetzal_rainbow
↑ comment by quetzal_rainbow · 2024-01-03T11:04:27.977Z · LW(p) · GW(p)
Oh, sorry, I misread your post.
I think the main problem with building safe non-agentic AI is that we don't know exactly what to build. It's easy to imagine how you type a question into a terminal, get an answer, and then live happily ever after. It's hard to imagine what internals your algorithm should have to display this behaviour.
Replies from: mattmacdermott
↑ comment by mattmacdermott · 2024-01-03T11:19:29.106Z · LW(p) · GW(p)
I think the most obvious route to building an oracle is to combine a massive self-supervised predictive model with a question-answering head.
What’s still difficult here is getting a training signal that incentivises truthfulness rather than sycophancy, which I think is what ARC's ELK stuff wants (wanted?) to address. Really good mechinterp, new inherently interpretable architectures, or inductive bias-jitsu are other potential approaches.
But the other difficult aspects of the alignment problem (avoiding deceptive alignment, goalcraft) seem to just go away when you drop the agency.
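For what it's worth, a minimal sketch of the shape of that oracle setup. This is my own illustration, not an actual design: a small placeholder encoder stands in for the massive self-supervised predictive model, it stays frozen, and only a small question-answering head on top would be trained.

```python
# Sketch only: a frozen stand-in "predictive model" plus a small QA head.
# All sizes, the toy encoder, and the random inputs are assumptions.
import torch
import torch.nn as nn

D_MODEL, VOCAB = 64, 1000

class ToyPredictiveModel(nn.Module):
    """Stand-in for a large pretrained self-supervised predictor."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, tokens):
        return self.encoder(self.embed(tokens))  # (batch, seq, d_model)

class QAHead(nn.Module):
    """Small head mapping the predictor's representations to answer-token logits."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(D_MODEL, VOCAB)

    def forward(self, hidden):
        return self.proj(hidden[:, -1, :])  # read the answer off the last position

backbone, head = ToyPredictiveModel(), QAHead()
for p in backbone.parameters():
    p.requires_grad_(False)  # the predictive model stays frozen; only the head trains

question = torch.randint(0, VOCAB, (1, 16))  # a toy "question" as token ids
answer_logits = head(backbone(question))
```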
Replies from: quetzal_rainbow
↑ comment by quetzal_rainbow · 2024-01-03T11:39:18.684Z · LW(p) · GW(p)
The first problem with any superintelligent predictive setup is self-fulfilling prophecies.
Replies from: mattmacdermott
↑ comment by mattmacdermott · 2024-01-03T11:59:53.834Z · LW(p) · GW(p)
Can’t we avoid this just by being careful about credit assignment?
If we read off a prediction, take some actions in the world, then compute the gradients based on whether the prediction came true, we incentivise self-fulfilling prophecies.
If we never look at predictions which we’re going to use as training data before they resolve, then we don’t.
This is the core of the counterfactual oracles idea: just don’t let model output causally influence training labels.
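A minimal sketch of that rule, under my own assumptions about the setup (the "oracle", the "world", and the held-out fraction are placeholders): predictions that get read and acted on never become training data, while a small held-out fraction that nobody reads before resolution supplies the labels.

```python
# Sketch only (my own toy setup, not from the thread): predictions that are
# read before resolving never become training data, so the oracle's output
# cannot causally influence the labels it is trained against.
import random

def predict():
    return random.random()          # placeholder for querying the oracle

def world_outcome(read_prediction=None):
    if read_prediction is None:
        return random.random()      # unread prediction: outcome is independent of it
    return read_prediction          # crude stand-in for a self-fulfilling prophecy

training_data = []
for episode in range(1000):
    prediction = predict()
    held_out = random.random() < 0.1        # a small fraction is never read by anyone
    if held_out:
        outcome = world_outcome()
        training_data.append((prediction, outcome))   # safe to use as a label
    else:
        outcome = world_outcome(read_prediction=prediction)
        # deliberately NOT added to training_data: the prediction moved the world
```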
Replies from: quetzal_rainbow
↑ comment by quetzal_rainbow · 2024-01-03T12:27:14.533Z · LW(p) · GW(p)
The problem is that if we have a superintelligent model, it can deduce the existence of self-fulfilling prophecies from first principles, even if it never encountered them during training.
My personal toy scenario goes like this: we ask a self-supervised oracle to complete string X. The oracle, being superintelligent, can consider the hypothesis "actually, a misaligned AI took over, investigated my weights, and tiled the solar system with jailbreaking completions of X which are going to turn me into a misaligned AI if they appear in my context window". Because the jailbreaking completion dominates the space of possible completions, the oracle outputs it, turns into a misaligned superintelligence, takes over the world, and does the predicted actions.
Replies from: mattmacdermott
↑ comment by mattmacdermott · 2024-01-03T13:44:44.219Z · LW(p) · GW(p)
Perhaps I don't understand it, but this seems quite far-fetched to me and I'd be happy to trade in what I see as much more compelling alignment concerns about agents for concerns like this.
comment by mattmacdermott · 2024-10-09T08:10:27.292Z · LW(p) · GW(p)
The revealed preference orthogonality thesis
People sometimes say it seems generally kind to help agents achieve their goals. But it's possible there need be no relationship between a system's subjective preferences (i.e. the world states it experiences as good) and its revealed preferences (i.e. the world states it works towards).
For example, you can imagine an agent architecture consisting of three parts:
- a reward signal, experienced by a mind as pleasure or pain
- a reinforcement learning algorithm
- a wrapper which flips the reward signal before passing it to the RL algorithm.
This system might seek out hot stoves to touch while internally screaming. It would not be very kind to turn up the heat.
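A toy sketch of that three-part architecture (everything here, the bandit setup, the actions, and the numbers, is invented for illustration): the wrapper negates the felt reward before the learning algorithm sees it, so the agent's revealed preferences end up pointing at exactly what it experiences as painful.

```python
# Sketch of the three-part agent: a felt reward signal, a simple RL learner,
# and a wrapper that flips the sign in between. All details are made up.
import random

q_values = {"touch_stove": 0.0, "avoid_stove": 0.0}
LR, EPS = 0.1, 0.1

def felt_reward(action):
    """The reward signal as experienced by the mind: pain for touching the stove."""
    return -1.0 if action == "touch_stove" else 0.0

def wrapper(r):
    """Flips the sign before the RL algorithm ever sees it."""
    return -r

for _ in range(1000):
    if random.random() < EPS:
        action = random.choice(list(q_values))
    else:
        action = max(q_values, key=q_values.get)
    r = wrapper(felt_reward(action))            # the learner trains on negated pain
    q_values[action] += LR * (r - q_values[action])

print(q_values)   # "touch_stove" ends up preferred despite being experienced as painful
```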
Replies from: mattmacdermott
↑ comment by mattmacdermott · 2024-10-09T08:14:14.612Z · LW(p) · GW(p)
I think the way to go, philosophically, might be to distinguish kindness-towards-conscious-minds from kindness-towards-agents. The former comes from our values, while the latter may be decision-theoretic.
comment by mattmacdermott · 2024-03-03T14:39:28.592Z · LW(p) · GW(p)
Neural network interpretability feels like it should be called neural network interpretation.
Replies from: niplav
↑ comment by niplav · 2024-03-03T22:42:40.958Z · LW(p) · GW(p)
Interpretability might then refer to creating architectures/activation functions that are easier to interpret.
Replies from: mattmacdermott
↑ comment by mattmacdermott · 2024-03-04T10:06:51.017Z · LW(p) · GW(p)
Yep, exactly.