Posts

Money Pump Arguments assume Memoryless Agents. Isn't this Unrealistic? 2024-08-16T04:16:23.159Z
But Where do the Variables of my Causal Model come from? 2024-08-09T22:07:57.395Z
Why do Minimal Bayes Nets often correspond to Causal Models of Reality? 2024-08-03T12:39:44.085Z
When Are Results from Computational Complexity Not Too Coarse? 2024-07-03T19:06:44.953Z
Epistemic Motif of Abstract-Concrete Cycles & Domain Expansion 2023-10-10T03:28:43.356Z
Least-problematic Resource for learning RL? 2023-07-18T16:30:48.535Z
Gearing Up for Long Timelines in a Hard World 2023-07-14T06:11:05.153Z
Dalcy's Shortform 2022-12-14T18:45:28.852Z

Comments

Comment by Dalcy (Darcy) on Ruby's Quick Takes · 2024-09-28T18:16:02.932Z · LW · GW

I'd also love to have access!

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2024-09-10T20:30:40.881Z · LW · GW

Any thoughts on how to customize LessWrong to make it LessAddictive? I just really, really like the editor for various reasons, so I usually write a bunch (drafts, research notes, study notes, etc) using it but it's quite easy to get distracted.

Comment by Dalcy (Darcy) on Least-problematic Resource for learning RL? · 2024-09-06T21:17:48.464Z · LW · GW

(the causal incentives paper convinced me to read it, thank you! good book so far)

if you read Sutton & Barto, it might be clearer to you how narrow are the circumstances under which 'reward is not the optimization target', and why they are not applicable to most AI things right now or in the foreseeable future

Can you explain this part a bit more?

My understanding of situations in which 'reward is not the optimization target' is when the assumptions of the policy improvement theorem don't hold. In particular, the theorem (that iterating the policy improvement step must yield strictly better policies, converging at the optimal, reward-maximizing policy) assumes that at each step we're updating the policy $\pi$ by greedy one-step lookahead (by argmaxing the action via $\pi'(s) = \operatorname{argmax}_a Q^{\pi}(s, a)$).
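
As a minimal sketch of that exact step (tabular MDP assumed; the names and array shapes here are just illustrative):

```python
import numpy as np

def greedy_improvement(P, R, V, gamma=0.99):
    """One policy-improvement step via greedy one-step lookahead.

    P: transition tensor, P[s, a, s2] = Pr(s2 | s, a)
    R: reward table, R[s, a]
    V: value estimate V^pi of the current policy, shape (n_states,)
    Returns an improved deterministic policy, pi'(s) = argmax_a Q^pi(s, a).
    """
    # Q^pi(s, a) = R(s, a) + gamma * sum_s2 P(s2 | s, a) * V^pi(s2)
    Q = R + gamma * np.einsum("sat,t->sa", P, V)
    return Q.argmax(axis=1)  # the argmax is evaluated at every state
```

Note that the theorem's guarantee leans on this argmax being evaluated at every state; that exhaustiveness is the part that fails for realistic agents.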

And this assumption basically doesn't hold in real life, because realistic RL agents aren't forced to explore all states (the classic example being "I can explore the state of doing cocaine, and I'm sure my policy would drastically change in a way that my reward circuit considers an improvement, but I don't have to do that"). So my opinion that the circumstances under which 'reward is the optimization target' are very narrow remains unchanged, and I'm interested in why you believe otherwise.

Comment by Dalcy (Darcy) on Agent Boundaries Aren't Markov Blankets. [Unless they're non-causal; see comments.] · 2024-08-31T02:55:17.450Z · LW · GW

I think something in the style of abstracting causal models would make this work - defining a high-level causal model such that there is a map from the states of the low-level causal model to it, in a way that's consistent with mapping low-level interventions to high-level interventions. Then you can retain the notion of causality for non-low-level-physical variables, with each such variable being a (potentially complicated) function of potentially all of the low-level variables.

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2024-08-13T22:50:24.985Z · LW · GW

Unidimensional Continuity of Preference $\Rightarrow$ Assumption of "Resources"?

tl;dr, the unidimensional continuity of preference assumption in the money pumping argument used to justify the VNM axioms corresponds to the assumption that there exists some unidimensional "resource" that the agent cares about, and the language for this is provided by the notion of "souring / sweetening" a lottery.

Various coherence theorems - or more specifically, various money pumping arguments - generally have the following form:

If you violate this principle, then [you are rationally required] / [it is rationally permissible for you] to follow this trade that results in you throwing away resources. Thus, for you to avoid behaving pareto-suboptimally by throwing away resources, it is justifiable to call this principle a 'principle of rationality,' which you must follow.

... where "resources" (the usual example is money) are something that, apparently, these theorems assume exist. They do, but this fact is often stated in a very implicit way. Let me explain.

In the process of justifying the VNM axioms using money pumping arguments, the three main mathematical primitives are: (1) lotteries (probability distributions over outcomes), (2) a preference relation $\succ$ (a general binary relation), and (3) a notion of souring/sweetening of a lottery. Let me explain what (3) means.

  • A souring of $A$ is denoted $A^-$, and a sweetening of $A$ is denoted $A^+$.
  • $A^-$ is to be interpreted as "basically identical with $A$ but strictly inferior in a single dimension that the agent cares about." Based on this interpretation, we assume $A \succ A^-$. Sweetening is the opposite, defined in the obvious way.

Formally, souring could be thought of as introducing a new preference relation $\sqsubset$, where $B \sqsubset A$ is to be interpreted as "lottery $B$ is basically identical to lottery $A$, but strictly inferior in a single dimension that the agent cares about".

  • On the syntactic level, such $B$ is denoted as $A^-$.
  • On the semantic level, based on the above interpretation, $\sqsubset$ is related to $\succ$ via the following: $B \sqsubset A \implies A \succ B$.

This is where the language to talk about resources comes from. "Something you can independently vary alongside a lottery $A$ such that more of it makes you prefer that option compared to $A$ alone" sounds like what we'd intuitively call a resource[1].

Now that we have the language, notice that so far we haven't assumed sourings or sweetenings exist. The following assumption does it:

Unidimensional Continuity of Preference: If $X \succ Y$, then there exists a prospect $X^-$ such that 1) $X^-$ is a souring of $X$ and 2) $X^- \succ Y$.

Which gives a more operational characterization of souring as something that lets us interpolate between the preference margins of two lotteries - intuitively satisfied by e.g., money due to its infinite divisibility.

So the above assumption is where the assumption of resources comes into play. I'm not aware of any money pump arguments for this assumption, or more generally, for the existence of a "resource." Plausibly instrumental convergence.

  1. ^

    I don't actually think this + the assumption below fully capture what we intuitively mean by "resources", enough to justify this terminology. I stuck with "resources" anyways because others around here used that term to (I think?) refer to what I'm describing here.

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2024-08-12T08:48:18.820Z · LW · GW

Yeah I'd like to know if there's a unified way of thinking about information theoretic quantities and causal quantities, though a quick literature search doesn't turn up anything interesting. My guess is that we'd want separate boundary metrics for informational separation and causal separation.

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2024-08-11T22:43:07.849Z · LW · GW

I no longer think the setup above is viable, for reasons that connect to why I think Critch's operationalization is incomplete and why boundaries should ultimately be grounded in Pearlian Causality and interventions.

(Note: I am thinking as I'm writing, so this might be a bit rambly.)

The world-trajectory distribution is ambiguous.

Intuition: Why does a robust glider in Lenia intuitively feel like a system possessing boundary? Well, I imagine various situations that happen in the world (like bullets) and this pattern mostly stays stable in face of them.

Now, notice that the measure of infiltration/exfiltration depends on $P$, a distribution over world histories.

So, for the above measure to capture my intuition, the approximate Markov condition (operationalized by low infiltration & exfiltration) must consider world trajectories that contain the Lenia pattern avoiding bullets.

Remember, $W$ is the raw world state, no coarse graining. So $P$ is the distribution over raw world trajectories. It already captures all the "potentially occurring trajectories under which the system may take boundary-preserving-action." Since everything is observed, our distribution already encodes all of "Nature's Interventions." So in some sense Critch's definition is already causal (in a very trivial sense), by virtue of requiring a distribution over the raw world trajectory, despite mentioning no Pearlian Causality.

Issue: Choice of $P$

Maybe there is some canonical true $P$ for our physical world that minds can intersubjectively arrive at, so there's no ambiguity.

But when I imagine trying to implement this scheme on Lenia, there's immediately an ambiguity as to which distribution (representing my epistemic state over which raw world trajectories will "actually happen") we should choose:

  1. Perhaps a very simple distribution: assigning uniform probability over world trajectories where the world contains nothing but the glider moving in a random direction with some initial point offset.
    • I suspect many stances other than the one factorizing the world into gliders would have low infiltration/exfiltration, because the world is so simple. This is the case of "accidental boundary-ness."
  2. Perhaps something more complicated: various trajectories where, e.g., the Lenia pattern encounters bullets, evolves alongside various other patterns, etc.
    • This I think rules out "accidental boundary-ness."

I think the latter works. But now there's a subjective choice of the distribution: which set of possible/realistic "Nature's Interventions" - all the situations that can ever be encountered by the system under which it has boundary-like behaviors - do we want to implicitly encode into our observational distribution? I don't think it's natural for $P$ to assign much probability to a trajectory whose initial conditions are set in a very precise way such that everything decays into noise. But this feels quite subjective.

Hints toward a solution: Causality

I think the discussion above hints at a very crucial insight:

$P$ must arise as a consequence of the stable mechanisms in the world.

Suppose the world of Lenia contains various stable mechanisms like a gun that shoots bullets at random directions, scarce food sources, etc.

We want $P$ to describe distributions that the boundary system will "actually" experience in some sense. I want the "Lenia pattern dodges bullet" world trajectory to be considered, because there is a plausible mechanism in the world that can cause such trajectories to exist. For similar reasons, I think the empty world distributions are impoverished, and a distribution containing trajectories where the entire world decays into noise is bad because no mechanism can implement it.

Thus, unless you have a canonical choice of $P$, a better starting point would be to consider the abstract causal model that encodes the stable mechanisms in the world, and to use Discovering Agents-style interventional algorithms that operationalize the notion that "boundaries causally separate environment and viscera."

  • Well, because of everything mentioned above on how the causal model informs us of which trajectories are realistic, especially in the absence of a canonical $P$. It's also far more efficient, because knowledge of the mechanisms informs the algorithm of the precise interventions to query the world with, instead of having to implicitly bake them into $P$.

There are still a lot more questions, but I think this is a pretty clarifying answer as to how Critch's boundaries are limiting and why DA-style causal methods will be important.

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2024-08-11T18:03:45.317Z · LW · GW

I think it's plausible that the general concept of boundaries can possibly be characterized somewhat independently of preferences, but at the same time have boundary-preservation be a quality that agents mostly satisfy (discussion here. very unsure about this). I see Critch's definition as a first iteration of an operationalization for boundaries in the general, somewhat-preference-independent sense.

But I do agree that ultimately all of this should tie back to game theory. I find Discovering Agents most promising in this regard, though there are still a lot of problems - some of which I suspect might be easier to solve if we treat systems-with-high-boundaryness as a sort of primitive for the kind-of-thing that we can associate agency and preferences with in the first place.

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2024-08-11T14:25:26.183Z · LW · GW

EDIT: I no longer think this setup is viable, for reasons that connect to why I think Critch's operationalization is incomplete and why boundaries should ultimately be grounded in Pearlian Causality and interventions. Check update.


I believe there's not much standing in the way of actually implementing an approximation of Critch's boundaries[1] using deep learning.

Recall, Critch's boundaries are:

  • Given a world (a Markovian stochastic process) $W_t$, map its values (a vector) bijectively using a feature map $f$ into 'features' that can be split into four vectors each representing a boundary-possessing system's Viscera, Active Boundary, Passive Boundary, and Environment.
  • Then, we characterize boundary-ness (i.e. minimal information flow across features unmediated by the boundary) using two mutual information criteria, representing infiltration and exfiltration of information, respectively.
  • And a policy of the boundary-possessing system (under the 'stance' of viewing the world implied by $f$) can be viewed as a stochastic map (that has no infiltration/exfiltration by definition) that best approximates the true $W_t$ dynamics.
    • The interpretation here (under low exfiltration and infiltration) is that this map can be viewed as a policy taken by the system in order to perpetuate its boundary-ness into the future and continue being well-described as a boundary-possessing system.

All of this seems easily implementable using very basic techniques from deep learning!

  • The bijective feature map is implemented using two NN maps (one each way), with an autoencoder loss.
  • Mutual information is approximated with standard variational approximations. Optimize $f$ to minimize it.
    • (the interpretation here being - we're optimizing our 'stance' towards the world in a way that best views the world as a boundary-possessing system)
  • After you train your 'stance' using the above setup, learn the policy using an NN with standard SGD, with $f$ fixed. (A rough sketch of the setup follows below.)
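
To gesture at what I mean, here's a minimal sketch. The sizes, the four-way feature split, and the `mi_upper_bound` estimator (e.g. some CLUB-style variational bound) are all placeholder assumptions, not a worked-out design:

```python
import torch
import torch.nn as nn

STATE_DIM = 64
FEAT_DIMS = (16, 16, 16, 16)  # Viscera, Active, Passive, Environment (assumed sizes)

encoder = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(), nn.Linear(128, sum(FEAT_DIMS)))
decoder = nn.Sequential(nn.Linear(sum(FEAT_DIMS), 128), nn.ReLU(), nn.Linear(128, STATE_DIM))

def stance_loss(w_t, w_next, mi_upper_bound):
    """w_t, w_next: batches of raw world states at consecutive timesteps."""
    z_t, z_next = encoder(w_t), encoder(w_next)
    # autoencoder loss as a soft stand-in for bijectivity of the feature map
    recon = ((decoder(z_t) - w_t) ** 2).mean()
    V, A, P, E = torch.split(z_t, FEAT_DIMS, dim=-1)
    V2, A2, P2, E2 = torch.split(z_next, FEAT_DIMS, dim=-1)
    # variational upper bounds on the infiltration / exfiltration mutual informations
    infil = mi_upper_bound(torch.cat([V2, A2], -1), E, given=torch.cat([V, A, P], -1))
    exfil = mi_upper_bound(torch.cat([E2, P2], -1), V, given=torch.cat([A, P, E], -1))
    return recon + infil + exfil
```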

A very basic experiment would look something like:

  • Test the above setup on two cellular automata (e.g., GoL, Lenia, etc) systems, one containing just random ash, and the other some boundary-like structure like noise-resistant glider structures found via optimization (there are a lot of such examples in the Lenia literature).[2]
  • Then (1) check if the infiltration/exfiltration values are lower for the latter system, and (2) do some interp to see if the V/A/P/E features or the learned policy NN have any interesting structures.

I'm not sure if I'd be working on this any time soon, but posting the idea here just in case people have feedback.

  1. ^

    I think research on boundaries - both conceptual work and developing practical algorithms for approximating them & schemes involving them - are quite important for alignment for reasons discussed earlier in my shortform.

  2. ^

    Ultimately we want our setup to detect boundaries that aren't just physically contiguous chunks of matter, like informational boundaries, so we want to make sure our algorithm isn't just always exploiting basic locality heuristics.

    I can't think of a good toy testbed (ideas appreciated!), but one easy thing to try is to just destroy all locality by mapping the automata lattice (which we were feeding as input) with the output of a complicated fixed bijective map over it, so that our system will have to learn locality if it turns out to be a useful notion in its attempt at viewing the system as a boundary.

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2024-08-03T11:32:05.424Z · LW · GW

Damn, why did Pearl recommend readers (in the preface of his causality book) to read all the chapters other than chapter 2 (and the last review chapter)? Chapter 2 is literally the coolest part - inferring causal structure from purely observational data! Almost skipped that chapter because of it ...

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2024-07-31T07:31:39.425Z · LW · GW

Here's my current take, I wrote it as a separate shortform because it got too long. Thanks for prompting me to think about this :)

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2024-07-31T07:31:17.185Z · LW · GW

I find the intersection of computational mechanics, boundaries/frames/factored-sets, and some works from the causal incentives group - especially discovering agents and robust agents learn causal world models (review) - to be a very interesting theoretical direction.

By boundaries, I mean a sustaining/propagating system that informationally/causally insulates its 'viscera' from the 'environment,' and only allows relatively small amounts of deliberate information flow through certain channels in both directions. Living systems are an example of it (from bacteria to humans). It doesn't even have to be a physically distinct chunk of spacetime, they can be over more abstract variables like societal norms. Agents are an example of it.

I find them very relevant to alignment especially from the direction of detecting such boundary-possessing/agent-like structures embedded in a large AI system and backing out a sparse relationship between these subsystems, which can then be used to e.g., control the overall dynamic. Check out these posts for more.

A prototypical deliverable would be an algorithm that can detect such 'boundaries' embedded in a dynamical system when given access to some representation of the system, performs observations & experiments and returns a summary data structure of all the 'boundaries' embedded in a system and their desires/wants, how they game-theoretically relate to one another (sparse causal relevance graph?), the consequences of interventions performed on them, etc - that's versatile enough to detect e.g., gliders embedded in Game of Life / Particle Lenia, agents playing Minecraft while only given coarse grained access to the physical state of the world, boundary-like things inside LLMs, etc. (I'm inspired by this)

Why do I find the aforementioned directions relevant to this goal?

  • Critch's Boundaries operationalizes boundaries/viscera/environment as functions of the underlying world variable, executing policies that continuously prevent information 'flow'[1] between disallowed channels, quantified via conditional transfer entropy.
  • Relatedly, Fernando Rosas's paper on Causal Blankets operationalizes boundaries using a similar but subtly different[2] form of mutual information constraint on the boundary/viscera/environment variables than that of Critch's. Importantly, they show that such blankets always exist between two coupled stochastic processes (using a similar style of future-morph equivalence relation characterization from compmech), and also give a metric they call the "synergistic coefficient" that quantifies how boundary-like this thing is.[3]
  • More on compmech, epsilon transducers generalize epsilon machines to input-output processes. PALO (Perception Action Loops) and Boundaries as two epsilon transducers coupled together?
  • These directions are interesting, but I find them still unsatisfactory because all of them are purely behavioral accounts of boundaries/agency. One of the hallmarks of agentic behavior (or some boundary behaviors) is adapting one's policy if an intervention changes the environment in a way that the system can observe and adapt to.[4][5]
  • (is there an interventionist extension of compmech?)
  • Discovering agents provides a genuine causal, interventionist account of agency and an algorithm to detect agents, motivated by the intentional stance. I think the paper is very enlightening from a conceptual perspective, but there are many problems yet to be solved before we can actually implement this. Here's my take on it.
  • More fundamentally, (this is more vibes, I'm really out of my depth here) I feel there is something intrinsically limiting about the use of Bayes Nets, especially the fact that choosing which variables to use in your Bayes Net already encodes a lot of information about the specific factorization structure of the world. I heard good things about finite factored sets and I'm eager to learn more about them.
  1. ^

    Not exactly a 'flow', because transfer entropy conflates intrinsic information flow with synergistic information - a 'flow' connotes only the intrinsic component, while transfer entropy just measures the overall amount of information that a system couldn't have obtained on its own. But anyway, transfer entropy seems like a conceptually correct metric to use.
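
    For concreteness, here's a toy plug-in estimator for the single-lag discrete case, $TE_{X \to Y} = I(Y_{t+1}; X_t \mid Y_t)$ (a simplification of the general history-conditioned definition; this sketch is mine, not from either paper):

```python
import numpy as np
from collections import Counter

def transfer_entropy(src, dst):
    """Plug-in estimate of TE(src -> dst) = I(dst_{t+1}; src_t | dst_t),
    with histories truncated to a single timestep."""
    counts = Counter(zip(dst[1:], src[:-1], dst[:-1]))  # counts of (y', x, y)
    n = sum(counts.values())
    p_xyz = {k: v / n for k, v in counts.items()}
    p_xz, p_yz, p_z = Counter(), Counter(), Counter()
    for (y2, x, y), q in p_xyz.items():
        p_xz[(y2, y)] += q   # joint of dst_{t+1}, dst_t
        p_yz[(x, y)] += q    # joint of src_t, dst_t
        p_z[y] += q          # marginal of dst_t
    return sum(q * np.log2(q * p_z[y] / (p_xz[(y2, y)] * p_yz[(x, y)]))
               for (y2, x, y), q in p_xyz.items())
```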

  2. ^

    Specifically, Fernando's paper criticizes blankets of the following form ($V$ for viscera, $A$ and $P$ for active/passive boundaries, $E$ for environment):

      • $V_t \perp E_t \mid (A_t, P_t)$ for all $t$
        • DIP implies a corresponding information bound
    • This clearly forbids dependencies formed in the past that stay in 'memory'.

    but Critch instead defines boundaries as satisfying the following two criteria:

    • $(V_{t+1}, A_{t+1}) \perp E_t \mid (V_t, A_t, P_t)$ (infiltration)
      • DIP implies a corresponding information bound
    • $(E_{t+1}, P_{t+1}) \perp V_t \mid (A_t, P_t, E_t)$ (exfiltration)
      • DIP implies a corresponding information bound
    • and now that the independencies are entangled across different $t$, there is no longer a clear upper bound on $I(V_t ; E_t)$, so I don't think the criticisms apply directly.
  3. ^

    My immediate curiosities are about how these two formalisms relate to one another. e.g., Which independency requirements are more conceptually 'correct'? Can we extend the future-morph construction to construct Boundaries for Critch's formalism? etc etc

  4. ^

    For example, a rock is very goal-directed relative to 'blocking-a-pipe-that-happens-to-exactly-match-its-size,' until one performs an intervention on the pipe size to discover that it can't adapt at all.

  5. ^

    Also, interventions are really cheap to run on digital systems (e.g., LLMs, cellular automata, simulated environments)! Limiting oneself to behavioral accounts of agency would miss out on a rich source of cheap information.

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2024-07-31T06:56:55.584Z · LW · GW

Discovering agents provides a genuine causal, interventionist account of agency and an algorithm to detect agents, motivated by the intentional stance. I find this paper very enlightening from a conceptual perspective!

I've tried to think of problems that need to be solved before we can actually implement this on real systems - both conceptual and practical - in approximate order of importance.

  • There are no 'dynamics,' no learning. As soon as a mechanism node is edited, it is assumed that agents immediately change their 'object decision variable' (a conditional probability distribution given its object parent nodes) to play the subgame equilibria.
  • Assumption of factorization of variables into 'object' / 'mechanisms,' and the resulting subjectivity. The paper models the process by which an agent adapts its policy given changes in the mechanism of the environment via a 'mechanism decision variable' (that depends on its mechanism parent nodes), which modulates the conditional probability distribution of its child 'object decision variable', the actual policy.
    • For example, the paper says a learned RL policy isn't an agent, because interventions in the environment won't make it change its already-learned policy - but that a human, or an RL policy together with its training process, is an agent, because it can adapt. Is this reasonable?
      1. Say I have a gridworld RL policy that's learned to get cheese (3 cell world, cheese always on left) by always going to the left. Clearly it can't change its policy when I change the cheese distribution to favor right, so it seems right to call this not an agent.
      2. Now, say the policy now has sensory access to the grid state, and correctly generalized (despite only being trained on left-cheese) to move in the direction where it sees the cheese, so when I change the cheese distribution, it adapts accordingly. I think it is right to call this an agent?
      3. Now, say the policy is an LLM agent (static weight) on an open world simulation which reasons in-context. I just changed the mechanism of the simulation by lowering the gravity constant, and the agent observes this, reasons in-context, and adapts its sensorimotor policy accordingly. This is clearly an agent?
    • I think this is because the paper considers, in the case of the RL policy alone, the 'object policy' to be the policy of the trained neural network (whose induced policy distribution is definitionally fixed), and the 'mechanism policy' to be a trivial delta function assigning the already-trained object policy. And in the case of the RL policy together with its training process, the 'mechanism policy' is now defined as the training process that assigns the fully-trained conditional probability distribution to the object policy.
    • But what if the 'mechanism policy' was the in-context learning process by which it induces an 'object policy'? Then changes in the environment's mechanism can be related to the 'mechanism policy' and thus the 'object policy' via in-context learning as in the second and third example, making them count as agents.
    • Ultimately, the setup in the paper forces us to factorize the means-by-which-policies-adapt into mechanism vs object variables, and the results (like whether a system is to be considered an agent) depend on this factorization. It's not always clear what the right factorization is, how to discover it from data, or whether this is the right frame to think about the problem at all.
  • Implicit choice of variables that are convenient for agent discovery. The paper does mention that the algorithm is dependent on the choice of variables, as in: if the node corresponding to the 'actual agent decision' is missing but its children are there, then the algorithm will label its children as the decision nodes. But this is already a very convenient representation!
    • Prototypical example: a Minecraft world with RL agents interacting, represented as a coarse-grained lattice (dynamical Bayes Net?) with each node corresponding to a physical location and its property, like color. Clearly no single node here is an agent, because agents move! My naive guess is that in principle, everything will be labeled an agent.
    • So the variables of choice must be abstract variables of the underlying substrate, like functions over them. But then, how do you discover the right representation automatically, in a way that interventions in the abstract variable level can faithfully translate to actually performable interventions in the underlying substrate?
  • Given the causal graph, even the slightest satisfaction of the agency-criterion labels the nodes as decision / utility. No "degree-of-agency" - maybe fixable by summing over the extent to which the independencies fail to be satisfied?
  • Then different agents are defined as causally separated chunks (~connected component) of [set-of-decision-nodes / set-of-utility-nodes]. How do we accommodate hierarchical agency (like subagents), systems with different degrees of agency, etc?
  • The interventional distribution on the object/mechanism variables is converted into a causal graph using the obvious [perform-do()-while-fixing-everything-else] algorithm. My impression is that causal discovery doesn't really work in practice, especially in noisy reality with a large number of variables, via gazillions of conditional independence tests.
  • The correctness proof requires lots of unrealistic assumptions, e.g., agents always play subgame equilibria, though I think some of this can be relaxed.

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2024-07-31T03:15:55.078Z · LW · GW

Thanks, it seems like the link got updated. Fixed!

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2024-07-30T04:50:28.778Z · LW · GW

Quick paper review of Measuring Goal-Directedness from the causal incentives group.

tl;dr, goal directedness of a policy wrt a utility function is measured by its min distance to one of the policies implied by the utility function, as per the intentional stance - that one should model a system as an agent insofar as doing so is useful.

Details

  • how is "policies implied by the utility function" operationalized? given a value , we define a set containing policies of maximum entropy (of the decision variable, given its parents in the causal bayes net) among those policies that attain the utility .
  • then union them over all the achievable values of $U$ to get this "wide set of maxent policies," and define goal directedness of a policy $\pi$ wrt a utility function $U$ as the maximum (negative) cross entropy between $\pi$ and an element of the above set. (actually we get the same result if we restrict the min operation to just the set of maxent policies achieving the same utility as $\pi$. see the rendering in symbols below.)
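
In symbols (my own rendering, so the notation may differ from the paper's): writing $\Pi_v^{\text{maxent}}(U)$ for the set of maxent policies attaining expected utility $v$,

$$\mathrm{GD}(\pi; U) \;=\; \max_{v} \; \max_{\tilde{\pi} \,\in\, \Pi_v^{\text{maxent}}(U)} \; -H(\pi, \tilde{\pi})$$

where $H(\pi, \tilde{\pi})$ is the cross entropy between the two policies' action distributions, averaged over decision contexts.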

Intuition

intuitively, this is measuring: "how close is my policy $\pi$ to being 'deterministic,' while 'optimizing $U$ at the competence level $v$' and not doing anything else 'deliberately'?"

  • "close" / "deterministic" ~ large negative  means small 
  • "not doing anything else deliberately'" ~ because we're quantifying over maxent policies. the policy is maximally uninformative/uncertain, the policy doesn't take any 'deliberate' i.e. low entropy action, etc.
  • "at the competence level " ~ ... under the constraint that it is identically competent to 

and you get the nice property of the measure being invariant to translation / scaling of $U$.

  • obviously so, because a policy is maxent among all policies achieving $v$ on $U$ iff that same policy is maxent among all policies achieving $av+b$ on $aU+b$ (for $a > 0$), so these two utilities have the same "wide set of maxent policies."

Critiques

I find this measure problematic in many places, and am confused whether this is conceptually correct.

  • one property claimed is that the measure is maximal for a uniquely optimal / anti-optimal policy.
    • it's interesting that this measure of goal-directedness isn't exactly an ~increasing function of attained utility, and i think it makes sense. i want my measure of goal-directedness to, when evaluated relative to human values, return a large number for both aligned ASI and signflip ASI.
  • ... except, going through the proof one finds that the latter property heavily relies on the "uniqueness" of the policy.
    • My policy can get the maximum goal-directedness measure if it is the only policy of its competence level while being very deterministic. It isn't clear that this always holds for the optimal/anti-optimal policies, or that it relaxes smoothly to epsilon-optimal/anti-optimal policies.
  • Relatedly, the quantification only happens over policies of the same competence level, which feels problematic.
  • minimum for the uniformly random policy (this would've been a good property, but unless I'm mistaken the proof for the lower bound is incorrect, because negative cross entropy is not bounded below.)
  • honestly the maxent motivation isn't super clear to me.
  • not causal. the reason you need causal interventions is that you want to rule out accidental agency/goal-directedness, like a rock that happens to be the perfect size to seal a water bottle - does your rock adapt when I intervene to change the size of the hole? discovering agents is excellent in this regard.

Comment by Dalcy (Darcy) on Natural Latents: The Math · 2024-07-19T21:25:53.888Z · LW · GW

Thank you, that is very clarifying!

Comment by Dalcy (Darcy) on Natural Latents: The Math · 2024-07-18T14:50:47.388Z · LW · GW

I've been doing a deep dive on this post, and while the main theorems make sense I find myself quite confused about some basic concepts. I would really appreciate some help here!

  • So 'latents' are defined by their conditional distribution functions, whose shape is implicit in the factorization that the latents need to satisfy, meaning they don't have to always look like $P[\Lambda \mid X_1, X_2]$, they can look like $P[\Lambda \mid X_1]$, etc., right?
  • I don't get the 'standard form' business. It seems like a procedure to turn one latent variable $\Lambda$ into another relative to $X$? I don't get what the notation $P[\Lambda \mid X]$ means—does it mean that it takes $\Lambda$ defined by some conditional distribution function like $P[\Lambda \mid X_1, X_2]$ and converts it into another latent? That doesn't seem so; the notation looks more like a likelihood function than a conditional distribution. But then what conditional distribution defines this new latent?

The Resampling stuff is a bit confusing too:

if we have a natural latent $\Lambda$, then construct a new natural latent by resampling $\Lambda$ conditional on $X$ (i.e. sample from $P[\Lambda \mid X]$), independently of whatever other stuff $\Gamma$ we’re interested in.

  • I don't know what operation is being performed here - what CPDs come in, what CPDs leave.
  • "construct a new natural latent by resampling  conditional on  (i.e. sample from ), independently of whatever other stuff  we’re interested in." isn't this what we are already doing when stating a diagram like , which implies a factorization , none of which have ! What changes when resampling? aaaaahhh I think I'm really confused here.
  • Also does all this imply that we're starting out assuming that $\Lambda$ shares a probability space with all the other possible latents, e.g. $\Gamma$? How does this square with a latent variable being defined by the CPD implicit in the factorization?

And finally:

In standard form, a natural latent is always approximately a deterministic function of $X$. Specifically: $H(\Lambda \mid X) \approx 0$.

...

Suppose there exists an approximate natural latent over $X_1, \dots, X_n$. Construct a new random variable $X'$ sampled from the distribution $P[X' = x' \mid X = x] = \prod_i P[X_i = x'_i \mid X_{\bar{i}} = x_{\bar{i}}]$. (In other words: simultaneously resample each $X_i$ given all the others.) Conjecture: $X'$ is an approximate natural latent (though the approximation may not be the best possible). And if so, a key question is: how good is the approximation?

Where is the top result proved, and how is this statement different from the Universal Natural Latent Conjecture below? Also is this post relevant to either of these statements, and if so, does that mean they only hold under strong redundancy?

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2024-07-15T17:16:41.158Z · LW · GW

Does anyone know if Shannon arrived at entropy from the axiomatic definition first, or the operational definition first?

I've been thinking about these two distinct ways in which we seem to arrive at new mathematical concepts. Looking at the countless partial information decomposition measures in the literature, all derived/motivated on an axiomatic basis, and not knowing which intuition to prioritize over which, I've been assigning less premium to axiomatic conceptual definitions than i used to:

The basis of comparison would be its usefulness and ease-of-generalization to better concepts:

  • at least in the case of fernando's synergistic information, it seems far more useful because i at least know what i'm exactly getting out of it, unlike having to compare between the axiomatic definitions based on handwavy judgements.
  • for ease of generalization, the problem with axiomatic definitions is that there are many logically equivalent ways to state the initial axiom (from which they can then be relaxed), and operational motivations seem to ground these equivalent characterizations better, like logical inductors from the decision theoretic view of probability theory

(obviously these two feed into each other)

Comment by Dalcy (Darcy) on Alexander Gietelink Oldenziel's Shortform · 2024-07-11T16:37:41.181Z · LW · GW

Just finished the local causal states paper, it's pretty cool! A couple of thoughts though:

I don't think the causal states factorize over the dynamical bayes net, unlike the original random variables (by assumption). Shalizi doesn't claim this either.

  • This would require proving that each causal state is conditionally independent of its nondescendant causal states given its parents, which is a stronger theorem than what is proved in Theorem 5 (which only shows conditional independence from its ancestor causal states, not necessarily all the nondescendants)

Also I don't follow the Markov Field part - how would proving:

if we condition on present neighbors of the patch, as well as the parents of the patch, then we get independence of the states of all points at time t or earlier. (pg 16)

... show that the causal states form a Markov field (aka satisfy the Markov independencies (local or pairwise or global) induced by an undirected graph)? I'm not even sure what undirected graph the causal states would be Markov with respect to. Is it the ...

  • ... skeleton of the dynamical Bayes Net? that would require proving a different theorem: "if we condition on parents and children of the patch, then we get independence of all the other states" which would prove local markov independency
  • ... skeleton of the dynamical Bayes Net + edges for the original graph for each t? that would also require proving a different theorem: "if we condition on present neighbors, parents, and children of the patch, then we get independence of all the other states" which would prove local markov independency

Also for concreteness I think I need to understand its application in detecting coherent structures in cellular automata to better appreciate this construction, though the automata theory part does go a bit over my head :p

Comment by Dalcy (Darcy) on Agent Boundaries Aren't Markov Blankets. [Unless they're non-causal; see comments.] · 2024-07-09T21:51:15.188Z · LW · GW

a Markov blanket represents a probabilistic fact about the model without any knowledge you possess about values of specific variables, so it doesn't matter if you actually do know which way the agent chooses to go.

The usual definition of Markov blankets is in terms of the model without any knowledge of the specific values, as you say, but I think in Critch's formalism this isn't the case. Specifically, he defines the 'Markov Boundary' of $W$ (being the non-abstracted physics-ish model) as a function of the random variable $W$ (where he writes the boundary as, e.g., $B(W)$), so it can depend on the values instantiated at $W$.

  • it would just not make sense to try to represent agent boundaries in a physics-ish model if we were to use the usual definition of Markov blankets - the model would just consist of local rules that are spacetime homogeneous, so there is no reason to expect one can a priori carve out an agent from the model without looking at its specific instantiated values.
  • $B(W)$ can really be anything, so it doesn't necessarily have to correspond to physical regions (subsets) of $W$, but it can, if we choose to restrict our search of infiltration/exfiltration-criteria-satisfying functions to those that only return boundaries-in-the-sense-of-carving-the-physical-space.
    • e.g. $B(W)$ can represent which subset of $W$ the physical boundary is, like 0, 0, 1, 0, 0, ... 1, 1, 0

So I think under this definition of Markov blankets, they can be used to denote agent boundaries, even in physics-ish models (i.e. ones that relate nicely to causal relationships). I'd like to know what you think about this.

Comment by Dalcy (Darcy) on When Are Results from Computational Complexity Not Too Coarse? · 2024-07-05T16:07:54.918Z · LW · GW

I thought if one could solve one NP-complete problem then one can solve all of them. But you say that the treewidth doesn't help at all with the Clique problem. Is the parametrized complexity filtration by treewidth not preserved by equivalence between different NP-complete problems somehow?

All NP-complete problems should have parameters that make the problem polynomial when bounded - trivially so, by translating the problem to 3-SAT and then to a Bayes Net, and using the treewidth bound.

This isn't the case for the clique problem (finding the max clique) because it's not NP-complete (it's not a decision problem), so we don't necessarily expect its parameterized version to be polynomially tractable — in fact, it's the k-clique problem (yes/no: is there a clique of size at least k) that is NP-complete. (so by the above translation argument, there certainly exists some graphical quantity that, when bounded, makes the k-clique problem tractable, though I'm not aware of what it is, or whether it's interesting)

To me, the interesting question is whether:

  • (1) translating a complexity-bounding parameter from one domain to another leads to quantities that are semantically intuitive and natural in the respective domain.
  • (2) and whether easy instances of the problems in the latter domain in-practice actually have low values of the translated parameter.
    • rambling: If not, then that implies we need to search for a new parameter $k$ for this new domain. Perhaps this can lead to a whole new way of doing complexity class characterization by finding natural $k$-s for all sorts of NP-complete problems (whose natural $k$-s don't directly translate to one another), and applying these various "difficulty measures" to characterize your NP-complete problem at hand! (I wouldn't be surprised if this is already widely studied.)

Looking at the 3-SAT example (the $x_i$ are the propositional variables, the $C_j$ the ORs, and the final AND computed via intermediate AND nodes):

  • re: (1), at first glance the treewidth of a 3-SAT instance (clearly dependent on the structure of the variable-clause interactions) doesn't seem super insightful or intuitive, though we may count the statement "a 3-SAT problem gets exponentially harder as you increase the formula-treewidth" as progress.
  • ... but even that wouldn't be an if and only if characterization of 3-SAT difficulty, because re: (2) there exist easy-in-practice 3-SAT problems that don't necessarily have bounded treewidth (i haven't read the link).

I would be interested in a similar analysis for more NP-complete problems known to have natural parameterized complexity characterization.

Comment by Dalcy (Darcy) on When Are Results from Computational Complexity Not Too Coarse? · 2024-07-05T15:00:05.624Z · LW · GW

You mention treewidth - are there other quantities of similar importance?

I'm not familiar with any, though ChatGPT does give me some examples! copy-pasted below:

  • Solution Size (k): The size of the solution or subset that we are trying to find. For example, in the k-Vertex Cover problem, k is the maximum size of the vertex cover. If k is small, the problem can be solved more efficiently.
  • Treewidth (tw): A measure of how "tree-like" a graph is. Many hard graph problems become tractable when restricted to graphs of bounded treewidth. Algorithms that leverage treewidth often use dynamic programming on tree decompositions of the graph.
  • Pathwidth (pw): Similar to treewidth but more restrictive, pathwidth measures how close a graph is to a path. Problems can be easier to solve on graphs with small pathwidth.
  • Vertex Cover Number (vc): The size of the smallest vertex cover of the graph. This parameter is often used in graph problems where knowing a small vertex cover can simplify the problem.
  • Clique Width (cw): A measure of the structural complexity of a graph. Bounded clique width can be used to design efficient algorithms for certain problems.
  • Max Degree (Δ): The maximum degree of any vertex in the graph. Problems can sometimes be solved more efficiently when the maximum degree is small.
  • Solution Depth (d): For tree-like or hierarchical structures, the depth of the solution tree or structure can be a useful parameter. This is often used in problems involving recursive or hierarchical decompositions.
  • Branchwidth (bw): Similar to treewidth, branchwidth is another measure of how a graph can be decomposed. Many algorithms that work with treewidth also apply to branchwidth.
  • Feedback Vertex Set (fvs): The size of the smallest set of vertices whose removal makes the graph acyclic. Problems can become easier on graphs with a small feedback vertex set.
  • Feedback Edge Set (fes): Similar to feedback vertex set, but involves removing edges instead of vertices to make the graph acyclic.
  • Modular Width: A parameter that measures the complexity of the modular decomposition of the graph. This can be used to simplify certain problems.
  • Distance to Triviality: This measures how many modifications (like deletions or additions) are needed to convert the input into a simpler or more tractable instance. For example, distance to a clique, distance to a forest, or distance to an interval graph.
  • Parameter for Specific Constraints: Sometimes, specific problems have unique natural parameters, like the number of constraints in a CSP (Constraint Satisfaction Problem), or the number of clauses in a SAT problem.

Comment by Dalcy (Darcy) on When Are Results from Computational Complexity Not Too Coarse? · 2024-07-05T14:56:00.510Z · LW · GW

I like to think of treewidth in terms of its characterization from tree decomposition, a task where you find a clique tree (or junction tree) of an undirected graph.

A clique tree for an undirected graph is a tree such that:

  • each node of the tree corresponds to a clique of the graph
  • each maximal clique of the graph corresponds to a node of the tree
  • given two adjacent tree nodes, the cliques they correspond to inside the graph are separated given their intersection set (sepset)

You can check that these properties hold in the example below. I will also refer to nodes of a clique tree as 'cliques'. (image from here)


My intuition for the point of tree decompositions is that you want to coarsen the variables of a complicated graph so that they can be represented in a simpler form (tree), while ensuring the preservation of useful properties such as:

  • how the sepset mediates the two cliques in the original graph (i.e. removing the sepset from the graph separates the variables associated with all of the nodes on one side of the tree from those on the other)
  • clique trees satisfy the running intersection property: if (the cliques corresponding to) two nodes of the tree contain the same variable X, then X is also contained in (the cliques corresponding to) each and all of the intermediate nodes of the tree.
    • Intuitively, information flow about a variable must be connected, i.e. it can't suddenly disappear and reappear in the tree.

Of course tree decompositions aren't unique:

So we define $\mathrm{tw}(G) = \min_{T} \max_{C \in T} \left( |C| - 1 \right)$, where $T$ ranges over the tree decompositions of $G$ and $C$ over the cliques (bags) of $T$.

  • Intuitively, treewidth represents the degree to which nodes of a graph 'irreducibly interact with one another', if you really want to see the graph as a tree. (A quick way to compute decompositions is sketched below.)
    • e.g., the first clique tree of the image above is a suboptimal coarse graining in that it misleadingly characterizes some of the variables as having a greater degree of 'irreducible interaction' among them than they actually do.
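
If you want to play with this, networkx ships heuristic tree decomposition; note that `treewidth_min_fill_in` returns an upper bound on the true treewidth, not necessarily the exact value:

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_fill_in

G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C")])
width, tree = treewidth_min_fill_in(G)  # heuristic tree decomposition
print(width)              # 2 here: the graph is chordal with max clique size 3
print(list(tree.nodes))   # tree nodes are frozensets of graph nodes (the bags/cliques)
```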

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2024-07-03T23:10:52.559Z · LW · GW

Bayes Net inference algorithms maintain their efficiency by using dynamic programming over multiple layers.

Level 0: Naive Marginalization

  • No dynamic programming whatsoever. Just multiply all the conditional probability distribution (CPD) tables, and sum over the variables of non-interest.

Level 1: Variable Elimination

  • Cache the repeated computations within a query.
  • For example, given a chain-structured Bayes Net $A \to B \to C \to D$, instead of doing $P(D) = \sum_{A,B,C} P(A)\,P(B|A)\,P(C|B)\,P(D|C)$, we can do $P(D) = \sum_C P(D|C) \sum_B P(C|B) \sum_A P(A)\,P(B|A)$ (a worked sketch below). Check my post for more.
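
Here's the worked sketch for that chain (the binary CPT values below are made up purely for illustration):

```python
import numpy as np

# CPTs for the chain A -> B -> C -> D
pA   = np.array([0.6, 0.4])                      # P(A)
pB_A = np.array([[0.7, 0.3], [0.2, 0.8]])        # P(B|A), row = value of A
pC_B = np.array([[0.9, 0.1], [0.4, 0.6]])        # P(C|B)
pD_C = np.array([[0.5, 0.5], [0.1, 0.9]])        # P(D|C)

# Level 0, naive: materialize the full joint, then sum out A, B, C
joint = (pA[:, None, None, None] * pB_A[:, :, None, None]
         * pC_B[None, :, :, None] * pD_C[None, None, :, :])
pD_naive = joint.sum(axis=(0, 1, 2))

# Level 1, variable elimination: push each sum inward, one variable at a time
pB = pA @ pB_A      # sum_A P(A) P(B|A)
pC = pB @ pC_B      # sum_B P(B) P(C|B)
pD = pC @ pD_C      # sum_C P(C) P(D|C)

assert np.allclose(pD, pD_naive)
```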

Level 2: Clique-tree based algorithms — e.g., Sum-product (SP) / Belief-update (BU) calibration algorithms

  • Cache the repeated computations across queries.
  • Suppose you have a fixed Bayes Net, and you want to compute not only the marginal $P(X_i)$, but also $P(X_j)$ for other variables. Clearly running two instances of Variable Elimination as above is going to contain some overlapping computation.
  • A clique tree is a data structure where, given the initial factors (in this case the CPD tables), you "calibrate" a tree whose nodes correspond to subsets of the variables. The cost can be amortized by running many queries over the same Bayes Net.
    • Calibration can be done by just two passes across the tree, after which you have the joint marginals for all the nodes of the clique tree.
    • Incorporating evidence is equally simple. Just zero-out the entries of variables that you are conditioning on for some node, then "propagate" that information downwards via a single pass across the tree.

Level 3: Specialized query-set answering algorithms over a calibrated clique tree.

  • Cache the repeated computations across a certain query-class
  • e.g., computing the pairwise marginal $P(X_i, X_j)$ for every pair of variables can be done by using yet another layer of dynamic programming, maintaining a table of joint marginals for each pair of clique-tree nodes, ordered according to the distance between them.

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2024-06-30T19:47:52.414Z · LW · GW

Perhaps I should one day in the far far future write a sequence on bayes nets.

Some low-effort TOC (this is basically mostly koller & friedman):

  • why bayesnets and markovnets? factorized cognition, how to do efficient bayesian updates in practice, it's how our brain is probably organized, etc. why would anyone want to study this subject if they're doing alignment research? explain philosophy behind them.
  • simple examples of bayes nets. basic factorization theorems (the I-map stuff and separation criterion)
  • tangent on why bayes nets aren't causal nets, though Zack M Davis had a good post on this exact topic, comment threads there are high insight
  • how inference is basically marginalization (basic theorems of: a reduced markov net represents conditioning, thus inference upon conditioning is the same as marginalization on a reduced net)
  • why is marginalization hard? i.e. NP-completeness of exact and approximate inference worst-case
    what is a workaround? solve by hand simple cases in which inference can be greatly simplified by just shuffling in the order of sums and products, and realize that the exponential blowup of complexity is dependent on a graphical property of your bayesnet called the treewidth
  • exact inference algorithms (bounded by treewidth) that can exploit the graph structure and do inference efficiently: sum-product / belief-propagation
  • approximate inference algorithms (works in even high treewidth! no guarantee of convergence) - loopy belief propagation, variational methods, etc
  • connections to neuroscience: "the human brain is just doing belief propagation over a bayes net whose variables are the cortical column" or smth, i just know that there is some connection

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2024-06-08T20:57:48.782Z · LW · GW

Just read through Robust agents learn causal world models and man it is really cool! It proves a couple of bona fide selection theorems, talking about the internal structure of agents selected against a certain criterion.

  • Tl;dr, agents selected to perform robustly in various local interventional distributions must internally represent something isomorphic to a causal model of the variables upstream of utility, for it is capable of answering all causal queries for those variables.
    • Thm 1: agents achieving optimal policy (util max) across various local interventions must be able to answer causal queries for all variables upstream of the utility node
    • Thm 2: relaxation of above to nonoptimal policies, relating regret bounds to the accuracy of the reconstructed causal model
    • the proof is constructive - an algorithm that, when given access to a regret-bounded-policy-oracle wrt an environment with some local intervention, queries it appropriately to construct a causal model
      • one implication is an algorithm for causal inference that converts black box agents to explicit causal models (because, y’know, agents like you and i are literally that aforementioned ‘regret-bounded-policy-oracle‘)
    • These selection theorems could be considered the converse of the well-known statement that given access to a causal model, one can find an optimal policy. (this and its relaxation to approximate causal models is stated in Thm 3)
  • Thm 1 / 2 is like a ‘causal good regulator‘ theorem.
    • gooder regulator theorem is not structural - as in, it gives conditions under which a model of the regulator must be isomorphic to the posterior of the system - a black box statement about the input-output behavior.
  • the theorem is limited. it only applies to cases where the decision node is not upstream of the environment nodes (eg classification. a negative example would be an mdp). but the authors claim this is mostly for simpler proofs and they think this can be relaxed.

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2024-05-03T19:45:47.073Z · LW · GW

Thoughtdump on why I'm interested in computational mechanics:

  • one concrete application to natural abstractions from here: tl;dr, belief structures generally seem to be fractal shaped. one major part of natural abstractions is trying to find the correspondence between structures in the environment and concepts used by the mind. so if we can do the inverse of what adam and paul did, i.e. 'discover' fractal structures from activations and figure out what stochastic process they might correspond to in the environment, that would be cool
    • ... but i was initially interested in reading compmech stuff not with a particular alignment relevant thread in mind but rather because it seemed broadly similar in directions to natural abstractions.
  • re: how my focus would differ from my impression of current compmech work done in academia: academia seems faaaaaar less focused on actually trying out epsilon reconstruction in real world noisy data. CSSR is an example of a reconstruction algorithm. apparently people did compmech stuff on real-world data, don't know how good, but effort-wise far too less invested compared to theory work
    • would be interested in these reconstruction algorithms, eg what are the bottlenecks to scaling them up, etc.
  • tangent: epsilon transducers seem cool. if the reconstruction algorithm is good, a prototypical example i'm thinking of is something like: pick some input-output region within a model, and literally try to discover the hmm model reconstructing it? of course it's gonna be unwieldly large. but, to shift the thread in the direction of bright-eyed theorizing ...
  • the foundational Calculi of Emergence paper talked about the possibility of hierarchical epsilon machines, where you do epsilon machines on top of epsilon machines and for simple examples where you can analytically do this, you get wild things like coming up with more and more compact representations of stochastic processes (eg data stream -> tree -> markov model -> stack automata -> ... ?)
    • this ... sounds like natural abstractions in its wildest dreams? literally point at some raw datastream and automatically build hierarchical abstractions that get more compact as you go up
    • haha but alas, (almost) no development afaik since the original paper. seems cool
  • and also more tangentially, compmech seemed to have a lot to talk about providing interesting semantics to various information measures aka True Names, so another angle i was interested in was to learn about them.
    • eg crutchfield talks a lot about developing a right notion of information flow - obvious usefulness in eg formalizing boundaries?
    • many other information measures from compmech with suggestive semantics—cryptic order? gauge information? synchronization order? check ruro1 and ruro2 for more.

Comment by Dalcy (Darcy) on Why Would Belief-States Have A Fractal Structure, And Why Would That Matter For Interpretability? An Explainer · 2024-04-23T20:33:06.061Z · LW · GW

re: second diagram in the "Bayesian Belief States For A Hidden Markov Model" section, shouldn't the transition probabilities for the top left model be 85/7.5/7.5 instead of 90/5/5?

Comment by Dalcy (Darcy) on Transformers Represent Belief State Geometry in their Residual Stream · 2024-04-17T02:19:49.662Z · LW · GW

What is the shape predicted by compmech under a generation setting, and do you expect it instead of the fractal shape to show up under, say, a GAN loss? If so, and if their shapes are sufficiently distinct from the controls that are run to make sure the fractals aren't just a visualization artifact, that would be further evidence in favor of the applicability of compmech in this setup.

Comment by Dalcy (Darcy) on What does Eliezer Yudkowsky think of the meaning of life now? · 2024-04-11T19:02:14.534Z · LW · GW

If after all that it still sounds completely wack, check the date. Anything from before like 2003 or so is me as a kid, where "kid" is defined as "didn't find out about heuristics and biases yet", and sure at that age I was young enough to proclaim AI timelines or whatevs.

https://twitter.com/ESYudkowsky/status/1650180666951352320

Comment by Dalcy (Darcy) on "Fractal Strategy" workshop report · 2024-04-07T00:58:27.993Z · LW · GW

btw there's no input box for the "How much would you pay for each of these?" question.

Comment by Dalcy (Darcy) on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-04T15:16:54.178Z · LW · GW

although I've practiced opening those emotional channels a bit, so this is a less uncommon experience for me than for most

i'm curious, what did you do to open those emotional channels?

Comment by Dalcy (Darcy) on Natural Abstractions: Key claims, Theorems, and Critiques · 2024-03-16T16:55:24.866Z · LW · GW

Out of the set of all possible variables one might use to describe a system, most of them cannot be used on their own to reliably predict forward time evolution because they depend on the many other variables in a non-Markovian way. But hydro variables have closed equations of motion, which can be deterministic or stochastic but at the least are Markovian.

This idea sounds very similar to this—it definitely seems extendable beyond the context of physics:

We argue that they are both; more specifically, that the set of macrostates forms the unique maximal partition of phase space which 1) is consistent with our observations (a subjective fact about our ability to observe the system) and 2) obeys a Markov process (an objective fact about the system's dynamics).
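A minimal code sketch of that Markov-partition condition (my own illustration, using the standard lumpability criterion for finite Markov chains, which I'm assuming is the right discrete analogue): a partition of the state space yields a coarse-grained variable that is itself Markov exactly when all states in a block share the same total transition probability into each other block.

```python
import numpy as np

def is_lumpable(P, blocks, atol=1e-9):
    """P: (n, n) row-stochastic transition matrix; blocks: lists of state
    indices partitioning range(n). Checks strong lumpability: every state
    in a block must have the same total transition probability into each
    block of the partition."""
    for B in blocks:
        for C in blocks:
            rates = [P[i, C].sum() for i in B]
            if not np.allclose(rates, rates[0], atol=atol):
                return False
    return True

# a 4-state chain whose states {0,1} and {2,3} lump into a 2-state Markov chain
P = np.array([
    [0.1, 0.2, 0.3, 0.4],
    [0.2, 0.1, 0.5, 0.2],
    [0.4, 0.3, 0.2, 0.1],
    [0.5, 0.2, 0.1, 0.2],
])
print(is_lumpable(P, [[0, 1], [2, 3]]))  # True
```

The "unique maximal partition" claim then corresponds to searching, among partitions consistent with one's observations, for the coarsest one that passes this check.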

Comment by Dalcy (Darcy) on MIRI 2024 Mission and Strategy Update · 2024-01-06T22:03:15.761Z · LW · GW

I don't see any feasible way that gene editing or 'mind uploading' could work within the next few decades. Gene editing for intelligence seems unfeasible because human intelligence is a massively polygenic trait, influenced by thousands to tens of thousands of quantitative trait loci.

I think the authors in the post referenced above agree with this premise and still consider human intelligence augmentation via polygenic editing to be feasible within the next few decades! I think their technical claims hold up, so personally I'd be very excited to see MIRI pivot towards supporting their general direction. I'd be interested to hear your opinions on their post.

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2023-12-24T02:33:05.425Z · LW · GW

I am curious how often asymptotic results that are proven using practically-irrelevant-seeming features of the problem end up being relevant in practice.

Like, I understand that there are many asymptotic results (e.g., the free energy formula in SLT) that are useful in practice, but I feel like there's something sus about similar results from information theory or complexity theory, where the way they prove certain bounds (or inclusion relationships, for complexity theory) seems totally detached from practicality?

  • the joint source-channel coding theorem is often stated as the reason we can consider the problems of compression and redundancy separately, but when you actually look at the proof it only establishes possibility (proven via insanely long codes), so it's not at all trivial that this equivalence holds in the context of practical code-engineering
  • complexity theory talks about stuff like quantifying some property over all possible boolean circuits of a given size, which seems to me a feature of the problem so utterly irrelevant to real programs that I'm suspicious it can say meaningful things about stuff we see in practice
    • as an aside, does the P vs NP distinction even matter in practice? we just ... seem to have very good approximations to NP problems from algorithms that exploit the structures specific to the problem and the domains where we want things to be fast; and as long as complexity methods don't take those problem-specific fine structures into account, i don't see how they would characterize such well-approximated problems using complexity classes.
    • Wigderson's book had a short section on average complexity which I hoped would be this kind of a result, and I'm unimpressed (the problem doesn't sound easier - now how do you specify the natural distribution??)
Comment by Dalcy (Darcy) on Self-Embedded Agent's Shortform · 2023-10-21T07:34:23.824Z · LW · GW

Found an example in the wild with Mutual information! These equivalent definitions of Mutual Information undergo concept splintering as you go beyond just 2 variables:

    • interpretation: relative entropy b/w joint and product of marginals
    • interpretation: joint entropy minus all unshared info
      • ... become bound information

... each with different properties (eg co-information is a bit too sensitive because just a single pair being independent reduces the whole thing to 0, total-correlation seems to overcount a bit, etc) and so with different uses (eg bound information is interesting for time-series).
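A toy sketch of the splintering (my own illustration; definitions as I understand them): for three bits with Z = X XOR Y, the generalizations come apart, with total correlation 1 bit, co-information -1 bit, and every pairwise mutual information 0.

```python
import numpy as np

# joint distribution p[x, y, z] with X, Y independent fair coins, Z = X xor Y
p = np.zeros((2, 2, 2))
for x in (0, 1):
    for y in (0, 1):
        p[x, y, x ^ y] = 0.25

def H(p, axes):
    """Entropy (bits) of the marginal over `axes`, summing out the rest."""
    other = tuple(i for i in range(p.ndim) if i not in axes)
    q = p.sum(axis=other) if other else p
    q = q[q > 0]
    return float(-(q * np.log2(q)).sum())

total_corr = H(p, (0,)) + H(p, (1,)) + H(p, (2,)) - H(p, (0, 1, 2))
co_info = (H(p, (0,)) + H(p, (1,)) + H(p, (2,))
           - H(p, (0, 1)) - H(p, (0, 2)) - H(p, (1, 2))
           + H(p, (0, 1, 2)))
pairwise_XY = H(p, (0,)) + H(p, (1,)) - H(p, (0, 1))  # I(X;Y)

print(total_corr, co_info, pairwise_XY)  # 1.0, -1.0, 0.0
```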

Comment by Dalcy (Darcy) on [Cross-post]The theoretical computational limit of the Solar System is 1.47x10^49 bits per second. · 2023-10-17T16:22:36.381Z · LW · GW

The limit's probably much higher with sub-Landauer thermodynamic efficiency.
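My reconstruction of where the headline number presumably comes from (an assumption on my part, not stated in the original post): the Landauer bound $k_B T \ln 2$ per bit erasure, evaluated at roughly the CMB temperature and applied to the total solar luminosity,

$$\frac{L_\odot}{k_B T \ln 2} \approx \frac{3.83\times 10^{26}\,\mathrm{W}}{(1.38\times 10^{-23}\,\mathrm{J/K})\,(2.73\,\mathrm{K})\,(\ln 2)} \approx 1.5\times 10^{49}\ \mathrm{bits/s},$$

so reversible or otherwise sub-Landauer computation, which isn't charged this per-bit erasure cost, could exceed the figure.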

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2023-09-23T00:02:56.351Z · LW · GW

'Symmetry' implies 'redundant coordinate' implies 'cyclic coordinates in your Lagrangian / Hamiltonian' implies 'conservation of conjugate momentum'

And because the action principle (where the true system trajectory extremizes your action, i.e. integral of Lagrangian) works in various dynamical systems, the above argument works in non-physical dynamical systems.

Thus conserved quantities usually exist in a given dynamical system.
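To spell out the last implication in the chain (a standard derivation, included for concreteness): if a coordinate $q$ is cyclic, i.e. $\partial L/\partial q = 0$, the Euler-Lagrange equation immediately yields a conserved conjugate momentum:

$$\frac{d}{dt}\frac{\partial L}{\partial \dot q} = \frac{\partial L}{\partial q} = 0 \quad\Longrightarrow\quad p_q \equiv \frac{\partial L}{\partial \dot q} = \text{const}.$$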

mmm, but why does the action principle hold in such a wide variety of systems though? (like how you get entropy by postulating something to be maximized in an equilibrium setting)

Comment by Dalcy (Darcy) on 6 non-obvious mental health issues specific to AI safety · 2023-08-20T03:24:42.376Z · LW · GW

Bella is meeting a psychotherapist, but they treat her fear as something irrational. This doesn't help, and only makes Bella more anxious. She feels like even her therapist doesn't understand her.

How would one find a therapist in their local area who's aware of what's going on in the EA/rat circles such that they wouldn't find statements about, say, x-risks as being schizophrenic/paranoid?

Comment by Dalcy (Darcy) on Feedbackloop-first Rationality · 2023-08-08T02:23:45.856Z · LW · GW

I am very interested in this, especially in the context of alignment research and solving not-yet-understood problems in general. Since I have no strong commitments this month (and was going to do something similar to this anyways), I will try this every day for the next two weeks and report back on how it goes (writing this comment as a commitment mechanism!)

Have a large group of people attempt to practice problems from each domain, randomizing the order that they each tackle the problems in. (The ideal version of this takes a few months)

...

As part of each problem, they do meta-reflection on "how to think better", aiming specifically to extract general insights and intuitions. They check what processes seemed to actually lead to the answer, even when they switch to a new domain they haven't studied before.

Within this upper-level feedback loop (at the scale of whole problems, taking hours or days), I'm guessing a lower-level loop would involve something like cognitive strategy tuning to get real-time feedback as you're solving the problems?

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2023-08-06T23:33:00.144Z · LW · GW

I had something like locality in mind when writing this shortform, the context being: [I'm in my room -> I notice itch -> I realize there's a mosquito somewhere in my room -> I deliberately pursue and kill the mosquito that I wouldn't have known existed without the itch]

But, again, this probably wouldn't amount to much selection pressure, partially due to the fact that the vast majority of mosquito population exists in places where such locality doesn't hold i.e. in an open environment.

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2023-08-05T13:43:50.272Z · LW · GW

Makes sense. I think we're using the terms differently in scope. By "DL paradigm" I meant to encompass the kind of stuff you mentioned (RL-directing-SS-target (active learning), online learning, different architectures, etc) because they really seemed like "engineering challenges" to me (despite covering a broad space of algorithms), in the sense that capabilities researchers already seem to be working on & scaling them without facing any apparent blockers to further progress, i.e. without needing any "fundamental breakthroughs", by which I was pointing more at paradigm shifts away from DL like, idk, symbolic learning.

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2023-08-05T13:31:49.056Z · LW · GW

But the evolutionary timescale at which mosquitos can adapt to avoid detection must be faster than that at which humans adapt to find mosquitos itchy! Or so I thought - my current boring guess is that (1) the mechanisms by which the human body detects foreign particles are fairly "broad", (2) the adaptations mosquitos would need in order to evade them are nontrivial, and (3) we just haven't applied enough selection pressure to make such a change happen.

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2023-08-03T10:34:27.026Z · LW · GW

To me, the fact that the human brain basically implements SSL+RL is very, very strong evidence that the current DL paradigm (with a bit of "engineering" effort, but nothing like fundamental breakthroughs) will kinda just keep scaling until we reach the point of no return. Does this broadly look correct to people here? Would really appreciate other perspectives.

Comment by Dalcy (Darcy) on Big picture of phasic dopamine · 2023-08-03T10:18:04.500Z · LW · GW

What are the errors in this essay? As I'm reading through the Brain-like AGI sequence I keep seeing this post being referenced (but this post says I should instead read the sequence!)

I would really like to have a single reference post of yours containing the core ideas about phasic dopamine, rather than the reference being the sequence posts (which depend heavily on a bunch of previous posts; also Posts 5 and 6 feel more high-level than this one?)

Comment by Dalcy (Darcy) on Least-problematic Resource for learning RL? · 2023-08-01T21:25:28.885Z · LW · GW

Answering my own question, review / survey articles like https://arxiv.org/abs/1811.12560 seem like a pretty good intro.

Comment by Dalcy (Darcy) on DragonGod's Shortform · 2023-07-25T20:38:51.809Z · LW · GW

The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables

Comment by Dalcy (Darcy) on Dalcy's Shortform · 2023-07-20T18:00:09.697Z · LW · GW

Mildly surprised by how some verbs/connectives barely play any role in conversations, even technical ones. I just tried directed babbling with someone, and (I think?) I learned quite a lot about Israel-Pakistan relations with almost no stress coming from, eg, needing to make my sentences grammatically correct.

Example of (a small part of) my attempt to summarize my understanding of how Jews migrated in/out of Jerusalem over the course of history:

They here *hand gesture on air*, enslaved out, they back, kicked out, and boom, they everywhere.

(audience nods, given common knowledge re: gestures, meaning of "they," etc)

Comment by Dalcy (Darcy) on Views on when AGI comes and on strategy to reduce existential risk · 2023-07-13T12:48:23.555Z · LW · GW

Related - "There are always many ways through the garden of forking paths, and something needs only one path to happen."

Comment by Dalcy (Darcy) on OpenAI Launches Superalignment Taskforce · 2023-07-12T05:52:03.886Z · LW · GW

Also, davidad's Open Agency Architecture is a very concrete example of what such a non-antisocial pivotal act that respects the preferences of various human representatives would look like (i.e. a pivotal process).

Perhaps not realistically feasible in its current form, yes, but davidad's proposal suggests that there might exist such a process, and we just have to keep searching for it.