Posts

Minimal Motivation of Natural Latents 2024-10-14T22:51:58.125Z
Values Are Real Like Harry Potter 2024-10-09T23:42:24.724Z
We Don't Know Our Own Values, but Reward Bridges The Is-Ought Gap 2024-09-19T22:22:05.307Z
... Wait, our models of semantics should inform fluid mechanics?!? 2024-08-26T16:38:53.924Z
Interoperable High Level Structures: Early Thoughts on Adjectives 2024-08-22T21:12:38.223Z
A Robust Natural Latent Over A Mixed Distribution Is Natural Over The Distributions Which Were Mixed 2024-08-22T19:19:28.940Z
Some Unorthodox Ways To Achieve High GDP Growth 2024-08-08T18:58:56.046Z
A Simple Toy Coherence Theorem 2024-08-02T17:47:50.642Z
A Solomonoff Inductor Walks Into a Bar: Schelling Points for Communication 2024-07-26T00:33:42.000Z
(Approximately) Deterministic Natural Latents 2024-07-19T23:02:12.306Z
3C's: A Recipe For Mathing Concepts 2024-07-03T01:06:11.944Z
Corrigibility = Tool-ness? 2024-06-28T01:19:48.883Z
What is a Tool? 2024-06-25T23:40:07.483Z
Towards a Less Bullshit Model of Semantics 2024-06-17T15:51:06.060Z
Natural Latents Are Not Robust To Tiny Mixtures 2024-06-07T18:53:36.643Z
Calculating Natural Latents via Resampling 2024-06-06T00:37:42.127Z
Why Care About Natural Latents? 2024-05-09T23:14:30.626Z
Why Would Belief-States Have A Fractal Structure, And Why Would That Matter For Interpretability? An Explainer 2024-04-18T00:27:43.451Z
Generalized Stat Mech: The Boltzmann Approach 2024-04-12T17:47:31.880Z
How We Picture Bayesian Agents 2024-04-08T18:12:48.595Z
Natural Latents: The Concepts 2024-03-20T18:21:19.878Z
A Shutdown Problem Proposal 2024-01-21T18:12:48.664Z
Natural Latents: The Math 2023-12-27T19:03:01.923Z
Some Rules for an Algebra of Bayes Nets 2023-11-16T23:53:11.650Z
Trying to understand John Wentworth's research agenda 2023-10-20T00:05:40.929Z
Why Not Subagents? 2023-06-22T22:16:55.249Z
Lessons On How To Get Things Right On The First Try 2023-06-19T23:58:09.605Z

Comments

Comment by David Lorell on johnswentworth's Shortform · 2024-10-28T08:50:41.397Z · LW · GW

I think that "getting good" at the "free association" game is in finding the sweet spot / negotiation between full freedom of association and directing toward your own interests, probably ideally with a skew toward what the other is interested in. If you're both "free associating" with a bias toward your own interests and an additional skew toward perceived overlap, updating on that understanding along the way, then my experience says you'll have a good chance of chatting about something that interests you both. (I.e. finding a spot of conversation which becomes much more directed than vibey free association.) Conditional on doing something like that strategy, I find it ends up being just a question of your relative+combined ability at this and the extent of overlap (or lack thereof) in interests.

So short model is: Git gud at free association (+sussing out interests) -> gradient ascend yourselves to a more substantial conversation interesting to you both.

Comment by David Lorell on [deleted post] 2024-09-20T22:29:46.415Z

wiggitywiggitywact := fact about the world which requires a typical human to cross a large inferential gap.

Comment by David Lorell on [deleted post] 2024-09-20T22:27:08.378Z

wact := fact about the world
mact := fact about the mind
aact := fact about the agent more generally

vwact := value assigned by some agent to a fact about the world
 

Comment by David Lorell on [deleted post] 2024-09-20T22:23:13.578Z

Seems accurate to me. This has been an exercise in the initial step(s) of CCC, which indeed consist of "the phenomenon looks this way to me. It also looks that way to others? Cool. What are we all cottoning on to?"

Comment by David Lorell on [deleted post] 2024-09-20T22:20:12.554Z

Wait. I thought that was crossing the is-ought gap. As I think of it, the is-ought gap refers to the apparent type-clash and unclear evidential entanglement between facts-about-the-world and values-an-agent-assigns-to-facts-about-the-world. And also as I think of it, "should be" is always shorthand for "should be according to me", though it possibly means some kind of aggregated thing which still grounds out in subjective shoulds.

So "how the external world is" does not tell us "how the external world should be" .... except in so far as the external world has become causally/logically entangled with a particular agent's 'true values'. (Punting on what are an agent's "true values" are as opposed to the much easier "motivating values" or possibly "estimated true values." But for the purposes of this comment, its sufficient to assume that they are dependent on some readable property (or logical consequence of readable properties) of the agent itself.)

Comment by David Lorell on [deleted post] 2024-09-20T22:12:32.199Z

We have at least one jury-rigged idea! Conceptually. Kind of.

Comment by David Lorell on [deleted post] 2024-09-20T22:11:59.502Z

Yeeeahhh.... But maybe it's just awkwardly worded rather than being deeply confused. Like: "The learned algorithms which an adaptive system implements may not necessarily accept, output, or even internally use data(structures) which have any relationship at all to some external environment." "Also what the hell is 'reference'."

Comment by David Lorell on [deleted post] 2024-09-20T22:08:11.874Z

Seconded. I have extensional ideas about "symbolic representations" and how they differ from.... non-representations.... but I would not trust this understanding with much weight.

Comment by David Lorell on [deleted post] 2024-09-20T22:06:45.966Z

Seconded. Comments above.

Comment by David Lorell on [deleted post] 2024-09-20T22:04:27.772Z

Indeed, our beliefs-about-values can be integrated into the same system as all our other beliefs, allowing for e.g. ordinary factual evidence to become relevant to beliefs about values in some cases.

Super unclear to the uninitiated what this means. (And therefore threateningly confusing to our future selves.)

Maybe: "Indeed, we can plug 'value' variables into our epistemic models (like, for instance, our models of what brings about reward signals) and update them as a result of non-value-laden facts about the world."

Comment by David Lorell on [deleted post] 2024-09-20T22:01:11.743Z

But clearly the reward signal is not itself our values.

Ahhhh

Maybe: "But presumably the reward signal does not plug directly into the action-decision system."?

Or: "But intuitively we do not value reward for its own sake."? 

Comment by David Lorell on [deleted post] 2024-09-20T21:59:15.301Z

It does seem like humans have some kind of physiological “reward”, in a hand-wavy reinforcement-learning-esque sense, which seems to at least partially drive the subjective valuation of things.

Hrm... If this compresses down to, "Humans are clearly compelled at least in part by what 'feels good'." then I think it's fine. If not, then this is an awkward sentence and we should discuss.

Comment by David Lorell on [deleted post] 2024-09-20T21:57:26.367Z

an agent could aim to pursue any values regardless of what the world outside it looks like;

Without knowing what values are, it's unclear that an agent could aim to pursue any of them. The implicit model here is that there is something like a value function (in the dynamic programming sense) which gets passed into the action-decider along with the world model, and that drives the agent. But I think we're saying something more general than that.

Comment by David Lorell on [deleted post] 2024-09-20T21:54:08.746Z

but the fact that it makes sense to us to talk about our beliefs

Better terminology for the phenomenon of "making sense" in the above way?

Comment by David Lorell on [deleted post] 2024-09-20T21:51:58.406Z

“learn” in the sense that their behavior adapts to their environment.

I want a new word for this. "Learn" vs "Adapt" maybe. Learn means updating of symbolic references (maps) while Adapt means something like responding to stimuli in a systematic way.

Comment by David Lorell on We Don't Know Our Own Values, but Reward Bridges The Is-Ought Gap · 2024-09-20T21:33:18.898Z · LW · GW

Not quite what we were trying to say in the post. Rather than tradeoffs being decided on reflection, we were trying to talk about the causal-inference-style "explaining away" for which the reflection provides enough compute. In Johannes's example, the idea is that the sadist might model the reward as coming potentially from two independent causes: a hardcoded sadist response, and "actually" valuing the pain caused. Since the probability of one cause, given the effect, goes down when we also know that the other cause definitely obtained, the sadist might lower their probability that they actually value hurting people given that (after reflection) they're quite sure they are hardcoded to get reward for it. That's how it's analogous to the ant thing.

Comment by David Lorell on We Don't Know Our Own Values, but Reward Bridges The Is-Ought Gap · 2024-09-20T19:58:29.082Z · LW · GW

Suppose you have a randomly activated (not weather-dependent) sprinkler system, and also it rains sometimes. These are two independent causes for the sidewalk being wet, each of which is capable of getting the job done all on its own. Suppose you notice that the sidewalk is wet, so it definitely either rained, sprinkled, or both. If I told you it had rained last night, your probability that the sprinklers went on (given that it is wet) should go down, since the rain already explains the wet sidewalk. If I told you instead that the sprinklers went on last night, then your probability of it having rained (given that it is wet) goes down for a similar reason. This is what "explaining away" is in causal inference: the probability of a cause given its effect goes down when an alternative cause is known to be present.
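A minimal sketch of that calculation, with made-up numbers and the simplifying assumption that either cause deterministically wets the sidewalk:

```python
# "Explaining away" by brute-force enumeration: rain and sprinkler are
# independent causes of a wet sidewalk; learning that one cause obtained
# lowers the posterior probability of the other, given a wet sidewalk.
from itertools import product

P_RAIN = 0.3       # prior probability it rained last night (made up)
P_SPRINKLER = 0.2  # prior probability the sprinklers ran (made up)

def p_wet(rain, sprinkler):
    # Either cause is enough on its own to wet the sidewalk.
    return 1.0 if (rain or sprinkler) else 0.0

def joint(rain, sprinkler, wet):
    p = (P_RAIN if rain else 1 - P_RAIN) * (P_SPRINKLER if sprinkler else 1 - P_SPRINKLER)
    return p * (p_wet(rain, sprinkler) if wet else 1 - p_wet(rain, sprinkler))

def posterior_sprinkler(given):
    # P(sprinkler = 1 | evidence), enumerating the full joint distribution.
    num = den = 0.0
    for rain, sprinkler, wet in product([0, 1], repeat=3):
        world = {"rain": rain, "sprinkler": sprinkler, "wet": wet}
        if all(world[k] == v for k, v in given.items()):
            p = joint(rain, sprinkler, wet)
            den += p
            num += p * sprinkler
    return num / den

print(posterior_sprinkler({"wet": 1}))             # ~0.45
print(posterior_sprinkler({"wet": 1, "rain": 1}))  # 0.20 -- rain explains away the sprinkler
```

Learning that it rained drops the sprinkler posterior back down to its prior, which is the explaining-away effect in about the smallest possible case.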

In the post, the supposedly independent causes are "hardcoded ant-in-mouth aversion" and "value of eating escamoles", and the effect is negative reward. Realizing that you have a hardcoded ant-in-mouth aversion is like learning that the sprinklers were on last night. The sprinklers being on (incompletely) "explains away" the rain as a cause for the sidewalk being wet. The hardcoded ant-in-mouth aversion explains away the-amount-you-value-escamoles as a cause for the low reward.

I'm not totally sure if that answers your question, maybe you were asking "why model my values as a cause of the negative reward, separate from the hardcoded response itself"? And if so, I think I'd rephrase the heart of the question as, "what do the values in this reward model actually correspond to out in the world, if anything? What are the 'real values' which reward is treated as evidence of?" (We've done some thinking about that and might put out a post on that soon.)

Comment by David Lorell on ... Wait, our models of semantics should inform fluid mechanics?!? · 2024-08-29T21:30:53.290Z · LW · GW

This is fascinating and I would love to hear about anything else you know of a similar flavor.

Seconded!!

Comment by David Lorell on Most People Don't Realize We Have No Idea How Our AIs Work · 2023-12-23T00:13:47.618Z · LW · GW

Anecdotal 2¢: This is very accurate in my experience. Basically every time I talk to someone outside of tech/alignment about AI risk, I have to go through the whole "we don't know what algorithms the AI is running to do what it does. Yes, really." thing. Every time I skip this accidentally, I realize after a while that this is where a lot of confusion is coming from.

Comment by David Lorell on On Trust · 2023-12-06T20:38:59.652Z · LW · GW

1.  "Trust" does seem to me to often be an epistemically broken thing that rides on human-peculiar social dynamics and often shakes out to gut-understandings of honor and respect and loyalty etc.

2. I think there is a version that doesn't route through that stuff. Trust in the "trust me" sense is a bid for present-but-not-necessarily-permanent suspension of disbelief, where the stakes are social credit. I.e. When I say, "trust me on this," I'm really saying something like, "All of that anxious analysis you might be about to do to determine if X is true? Don't do it. I claim that using my best-effort model of your values, the thing you should assume/do to fulfill them in this case is X. To the extent that you agree that I know you well and want to help you and tend to do well for myself in similar situations, defer to me on this. I predict you'll thank me for it (because, e.g., confirming it yourself before acting is costly), and if not...well I'm willing to stake some amount of the social credit I have with you on it." [Edit: By social credit here I meant something like: The credence you give to it being a good idea to engage with me like this.]

Similarly:

  • "I decided to trust her" -> "I decided to defer to her claims on this thing without looking into it much myself (because it would be costly to do otherwise and I believe-- for some reason-- that she is sufficiently likely to come to true conclusions on this, is probably trying to help me, knows me fairly well etc.) And if this turns out badly, I'll (hopefully) stop deciding to do this." 
  • "Should I trust him?" -> "Does the cost/benefit analysis gestured at above come out net positive in expectation if I defer to him on this?"
  • "They offered me their trust" -> "They believe that deferring to me is their current best move and if I screw this up enough, they will (hopefully) stop thinking that."

So, I feel like I've landed fairly close to where you did but there is a difference in emphasis or maybe specificity. There's more there than asking “what do they believe, and what caused them to believe it?” Like, that probably covers it but more specifically the question I can imagine people asking when wondering whether or not to "trust" someone is instead, "do I believe that deferring these decisions/assumptions to them in this case will turn out better for me than otherwise?" Where the answer can be "yes" because of things like cost-of-information or time constraints etc. If you map "what do they believe" to "what do they believe that I should assume/do" and "what caused them to believe it" to "how much do they want to help me, how well do they know me, how effective are they in this domain, ..." then we're on the same page.

Comment by David Lorell on Why Not Subagents? · 2023-06-22T22:31:47.228Z · LW · GW

Some nits we know about but didn't include in the problems section:

  1. P[mushroom->anchovy] = 0. The current argument does not handle the case where subagents believe that there is a probability of 0 on one of the possible states. It wouldn't be possible to complete the preferences exactly as written, then.
  2. Indifference. If anchovy were placed directly above mushroom in the preference graph above (so that John is truly indifferent between them), then that might require some special handling. But also it might just work if the "Value vs Utility" issue is worked out. If the subagents are not myopic / handle instrumental values, then whether anchovy is less, identically, or more desirable than mushroom doesn't really matter so much on its own as opposed to what opportunities are possible afterward from the anchovy state relative to the mushroom state.

Also, I think I buy the following part but I really wish it were more constructive.

Now, we haven't established which distribution of preferences the system will end up sampling from. But so long as it ends up at some non-dominated choice, it must end up with non-strongly-incomplete preferences with probability 1 (otherwise it could modify the contract for a strict improvement in cases where it ends up with non-strongly-incomplete preferences). And, so long as the space of possibilities is compact and arbitrary contracts are allowed, all we have left is a bargaining problem. The only way the system would end up with dominated preference-distribution is if there's some kind of bargaining breakdown.

Comment by David Lorell on Would more model evals teams be good? · 2023-02-25T22:25:31.667Z · LW · GW

Might be worth thinking about / comparing how and why things went wrong in the run-up to the 2007/8 GFC. IIRC the credit raters had misaligned incentives that rhyme with this question/post.

Comment by David Lorell on evhub's Shortform · 2022-12-03T04:19:08.671Z · LW · GW

Disclaimer: At the time of writing, this has not been endorsed by Evan.

I can give this a go.

Unpacking Evan's Comment:
My read of Evan's comment (the parent to yours) is that there are a bunch of learned high-level-goals ("strategies") with varying levels of influence on the tactical choices made, and that a well-functioning end-to-end credit-assignment mechanism would propagate through action selection ("thoughts directly related to the current action" or "tactics") all the way to strategy creation/selection/weighting. In such a system, strategies which decide tactics which emit actions which receive reward are selected for at the expense of strategies less good at that. Conceivably, strategies aiming directly for reward would produce tactical choices more highly rewarded than strategies not aiming quite so directly.

One way for this not to be how humans work would be if reward did not propagate to the strategies, and they were selected/developed by some other mechanism while reward only honed/selected tactical cognition. (You could imagine that "strategic cognition" is that which chooses bundles of context-dependent tactical policies, and "tactical cognition" is that which implements a given tactic's choice of actions in response to some context.) This feels to me close to what Evan was suggesting you were saying is the case with humans.

One Vaguely Mechanistic Illustration of a Similar Concept:
A similar way for this to be broken in humans, departing just a bit from Evan's comment, is if the credit assignment algorithm could identify tactical choices with strategies, but not equally reliably across all strategies. As a totally made up, concrete, and stylized illustration: Consider one evolutionarily-endowed credit-assignment-target: "Feel physically great," and two strategies: wirehead with drugs (WIRE), or be pro-social (SOCIAL). Whenever WIRE has control, it emits some tactic like "alone in my room, take the most fun available drug," which takes actions that result in some amount x_W of physical pleasure over a day. Whenever SOCIAL has control, it emits some tactic like "alone in my room, abstain from dissociative drugs and instead text my favorite friend," taking actions which result in some amount x_S of physical pleasure over a day.

Suppose also that asocial cognitions like "eat this" have poorly wired feedback channels, so the signal is often lost and triggers credit assignment only some small fraction f of the time. Social cognition is much better wired up and triggers credit assignment every time. Whenever credit assignment is triggered, once a day, the reward emitted is 1:1 with the amount of physical pleasure experienced that day.

Since WIRE only gets credit a fraction f of the time that it's due, the average reward (over 30 days, say) credited to WIRE is f·x_W, while SOCIAL gets credited the full x_S. If and only if f·x_W > x_S, like if the drug is heroin or your friends are insufficiently fulfilling, WIRE will be reinforced more relative to SOCIAL. Otherwise, even if the drug is somewhat more physically pleasurable than the warm-fuzzies of talking with friends, SOCIAL will be reinforced more relative to WIRE.
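A minimal numerical sketch of that comparison (all values made up, just to make the stylized point concrete):

```python
# Stylized comparison: reward credited to each strategy when the feedback
# channel for WIRE only fires a fraction of the time. All numbers made up.
PLEASURE_WIRE = 10.0      # x_W: daily physical pleasure from the drug
PLEASURE_SOCIAL = 7.0     # x_S: daily physical pleasure from texting a friend
CREDIT_PROB_WIRE = 0.25   # f: asocial feedback channel usually drops the signal
CREDIT_PROB_SOCIAL = 1.0  # social feedback channel fires every day

def expected_credited_reward(pleasure_per_day: float, credit_prob: float) -> float:
    """Expected daily reward credited to a strategy, when credit assignment only
    triggers with probability `credit_prob` and reward is 1:1 with pleasure."""
    return credit_prob * pleasure_per_day

wire = expected_credited_reward(PLEASURE_WIRE, CREDIT_PROB_WIRE)        # 2.5
social = expected_credited_reward(PLEASURE_SOCIAL, CREDIT_PROB_SOCIAL)  # 7.0
print(f"WIRE reinforced more than SOCIAL: {wire > social}")
# False here: SOCIAL wins even though the drug feels better in the moment,
# because most of WIRE's credit signal never arrives.
```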

Conclusion:
I think Evan is saying that he expects advanced reward-based AI systems to have no such impediments by default, even if humans do have something like this in their construction. Such a stylized agent without any signal-dropping would reinforce WIRE over SOCIAL every time that taking the drug was even a tiny bit more physically pleasurable than talking with friends.

Maybe there is an argument that such reward-aimed goals/strategies would not produce the most rewarding actions in many contexts, or for some other reason would not be selected for / found in advanced agents (as Evan suggests in encouraging someone to argue that such goals/strategies require concepts which are unlikely to develop), but the above might be in the rough vicinity of what Evan was thinking.

REMINDER: At the time of writing, this has not been endorsed by Evan.

Comment by David Lorell on Where I agree and disagree with Eliezer · 2022-10-20T17:30:13.387Z · LW · GW

This feels like stepping on a rubber duck while tip-toeing around sleeping giants but:

Don't these analogies break if/when the complexity of the thing to generate/verify gets high enough? That is, unless you think the difficulty of verifying arbitrarily complex plans/ideas asymptotes to some human-or-lower level of verification capability (which I doubt you do), at some point humans can't even verify the complex plan.

So, the deeper question just seems to be takeoff speeds again: If takeoff is too fast, we don't have enough time to use "weak" AGI to help produce actually verifiable plans which solve alignment. If takeoff is slow enough, we might. (And if takeoff is too fast, we might not notice that we've passed the point of human verifiability until it's too late.)

(I am consciously not bringing up ideas about HCH / other oversight-amplification ideas because I'm new to the scene and don't feel familiar enough with them.)

Comment by David Lorell on Don't leave your fingerprints on the future · 2022-10-12T07:07:41.021Z · LW · GW

But I'm not really accusing y'all of saying "try to produce a future that has no basis in human values." I am accusing this post of saying "there's some neutral procedure for figuring out human values, we should use that rather than a non-neutral procedure."

My read was more "do the best we can to get through the acute risk period in a way that lets humanity have the time and power to do the best it can at defining/creating a future full of value." And that's in response to, and opposed to, positions like "figure out / decide what is best for humanity (or a procedure that can generate the answer to that) and use that to shape the long term future."

Comment by David Lorell on Don't leave your fingerprints on the future · 2022-10-12T06:53:20.017Z · LW · GW

The point is that as moral attitudes/thoughts change, societies or individuals which exist long enough will likely come to regret permanently structuring the world according to the morality of a past age. Either the Roman will live to regret it, or the society that follows the Roman will come to regret it even if the Roman dies happy, or the AI brainwashes everyone all the time to prevent moral progress. The analogy breaks down a bit with the third option, since I'd guess most people today would not accept it as a success, and it's today's(ish) morals that might get locked in, not ancient Rome's.