Posts

Many methods of causal inference try to identify a "safe" subset of variation 2021-03-30T14:36:10.496Z
On the Boxing of AIs 2015-03-31T21:58:08.749Z
The Hardcore AI Box Experiment 2015-03-30T18:35:19.385Z
Boxing an AI? 2015-03-27T14:06:19.281Z

Comments

Comment by tailcalled on Reply to Nate Soares on Dolphins · 2021-06-10T09:03:57.033Z · LW · GW

Another tree rotation is the human vs animal distinction. Humans are animals, but sometimes one uses "animal" to refer to nonhuman animals. I wonder if there are some general things one could say about tree rotations. The human/animal distinction seems to have a different flavor to me than the fish/tetrapod distinction, though.

Something that also seems related is asking for an eggplant and expecting a fully-developed, non-rotting eggplant.

Comment by tailcalled on Reply to Nate Soares on Dolphins · 2021-06-10T08:42:09.112Z · LW · GW

I wonder to what degree the perspective comes from us generally not thinking about invertebrate animals. So the things that distinguish "phylogenetic fish" from them (like having craniums and vertebrae) are just considered "animal things". And so instead, when defining fish, we end up focusing on what distinguishes tetrapods and taking the negation of that. Sort of a categorical "tree rotation", if you will.

Comment by tailcalled on Reply to Nate Soares on Dolphins · 2021-06-10T08:25:19.837Z · LW · GW

It's easier to keep track of the underlying relatedness as if it were an "essence" (even though patterns of physical DNA aren't metaphysical essences), rather than all of the messy high-dimensional similarities and differences of everything you might notice about an organism.

Hmm, isn't DNA a metaphysical essence?

IIRC, the metaphysical notion of essence came from noticing similarities between different creatures: they seemed to cluster into species as if constructed according to some blueprint. The reason for these similarities is DNA - if "essences" had to correspond to anything in reality, then that would seem to be DNA.

Comment by tailcalled on Reply to Nate Soares on Dolphins · 2021-06-10T07:31:14.341Z · LW · GW

Am I misunderstanding something? You seem to be defending a phylogenetic definition of "fish" as a reason why dolphins aren't fish, but if you used a phylogenetic definition of "fish", you'd still have dolphins be fish - that's the first part of his argument.

Comment by tailcalled on Often, enemies really are innately evil. · 2021-06-07T14:45:15.327Z · LW · GW

(Sorry Scott Alexander, you totally fell down as a rationalist when you saw one of the biggest effect sizes ever, in a study that controlled for so many things, and still thought "hmm, i doubt being tortured regularly for a decade has a long term bad effect.")

Controlling for things isn't a good way to go about researching the effects of this. Instead, you should ask, what factors lead to variation in who gets bullied?

Comment by tailcalled on Search-in-Territory vs Search-in-Map · 2021-06-06T08:23:42.444Z · LW · GW

Recently I've also been thinking about something that seems vaguely related, which could perhaps be called inference in the map vs inference in the territory.

Suppose you want to know how some composite system works. This might be a rigid body object made up of molecules, a medicine made out of chemicals to treat a disease that is itself ultimately chemical, a social organisation method designed for systems made out of people, or anything like that.

In that case there are two ways you can proceed: either think about the individual components of the system and deduce from their behavior how the system will behave, or just build the system in reality and observe how the aggregate behaves.

If you do the former, you can apply already-known theory about the components to deduce its behavior without needing to test it in reality. Though often in practice this theory won't be known, or will be too expensive to use, or similar. So in practice one generally has to investigate it holistically. But this requires using the territory as a map to figure it out.

(When investigating it holistically there is also the possibility of just using holistic rather than reductionistic theories. Often this holistic theory will originate from one of the previous methods though, e.g. our math for rigid body dynamics comes from actual experience with rigid bodies. Though also sometimes it might come from other places, e.g. evolutionary reasoning. So my dichotomy isn't quite as clean as yours, probably.)

Comment by tailcalled on The Homunculus Problem · 2021-06-03T13:20:45.601Z · LW · GW

I might not have explained the credence/propositional assertion distinction well enough. Imagine some sort of language model in AI, like GPT-3 or CLIP or whatever. For a language model, credences are its internal neuron activations and weights, while propositional assertions are the sequences of text tokens. The neuron activations and weights seem like they should definitely have a Bayesian interpretation as being beliefs, since they are optimized for accurate predictions, but this does not mean one can take the semantic meaning of the text strings at face value; the model isn't optimized to emit true text strings, but instead optimized to emit text strings that match what humans say (or if it was an RL agent, maybe text strings that make humans do what it wants, or whatever).

My proposal is, what if humans have a similar split going on? This might be obscured a bit in this context, since we're on LessWrong, which to a large degree has a goal of making propositional assertions act more like proper beliefs.  

In your model, do you think there's some sort of confused query-substitution going on, where we (at some level) confuse "is the color patch darker" with "is the square of the checkerboard darker"?

Yes, assuming I understand you correctly. It seems to me that there are at least three queries at play:

  1. Is the square on the checkerboard of a darker color?
  2. Is there a shadow that darkens these squares?
  3. Is the light emitted from this flat screen of a lower luminosity?

If I understand your question, "is the color patch darker?" maps to query 3?

The reason the illusion works is that for most people, query 3 isn't part of their model (in the sense of credences). They can deal with the list of symbols as a propositional assertion, but it doesn't map all the way into their senses. (Unless they have sufficient experience with it? I imagine artists would end up also having credences on it, due to experience with selecting colors. I've also heard that learning to see the actual visual shapes of what you're drawing, rather than the abstracted representation, is an important step in becoming an artist.)

Do the credences simply lack that distinction or something?

The existence of the illusion would seem to imply that most people's credences lack the distinction (or rather, lack query 3, and thus find it necessary to translate query 3 into query 2 or query 1). However, it's not fundamental to the notion of credence vs propositional assertion that it lacks this. Rather, the homunculus problem seems to involve some sort of duality, either real or confused. I'm proposing that the duality is real, but in a different way than the homunculus fallacy does, where credences act like beliefs and propositional assertions can act in many ways.

This model doesn't really make strong claims about the structure of the distinctions credences make, similar to how Bayesianism doesn't make strong claims about the structure of the prior. But that said, there must obviously be some innate element, and there also seems to be some learned element, where they make the distinctions that you have experience with.

We've seen objects move in and out of light sources a ton, so we are very experienced in the distinction between "this object has a dark color" vs "there is a shadow on this object". Meanwhile...

Wait actually, you've done some illustrations, right? I'm not sure how experienced you are with art (the illustrations you've posted to LessWrong have been sketches without photorealistic shading, if I recall correctly, but you might very well have done other stuff that I'm not aware of), so this might disprove some of my thoughts on how this works, if you have experience with shading things.

(Though in a way this is kinda peripheral to my idea... there's lots of ways that credences could work that don't match this.)

More generally, my correction to your credences/assertions model would be to point out that (in very specific ways) the assertions can end up "smarter". Specifically, I think assertions are better at making crisp distinctions and better at logical reasoning. This puts assertions in a weird position.

Yes, and propositional assertions seem more "open-ended" and separable from the people thinking of them, while credences are more embedded in the person and their viewpoint. There's a tradeoff, I'm just proposing seeing the tradeoff more as "credence-oriented individuals use propositional assertions as tools".

Comment by tailcalled on The Homunculus Problem · 2021-05-27T21:35:35.016Z · LW · GW

One model I've played around with is distinguishing two different sorts of beliefs, which for historical reasons I call "credences" and "propositional assertions". My model doesn't entirely hold water, I think, but it might be a useful starting point for inspiration for this topic.

Roughly speaking I define a "credence" to be a Bayesian belief in the naive sense. It updates according to what you perceive, and "from the inside" it just feels like the way the world is. I consider basic senses as well as aliefs to be under the "credence" label.

More specifically, in this model, your credence when looking at the picture is that there is a checkerboard with consistently colored squares, and a cylinder standing on the checkerboard and casting a shadow on it, which obviously doesn't change the shade of the squares, but does make them look darker.

In contrast, in this model, I assert that abstract conscious high-level verbal beliefs aren't proper beliefs (in the Bayesian sense) at all; rather, they're "propositional assertions". They're more like a sort of verbal game or something. People learn different ways of communicating verbally with each other, and these ways to a degree constrain their learned "rules of the game" to act like proper beliefs - but in some cases they can end up acting very differently from beliefs (e.g. signalling and such).

When doing theory of mind, we learn to mostly just accept the homunculus fallacy, because socially this leads to useful tools for talking theory of mind, even if they are not very accurate. You also learn to endorse the notion that you know your credences are wrong and irrational, even though your credences are what you "really" believe; e.g. you learn to endorse a proposition that "B" has the same color as "A".

This model could probably be said to imply an unrealistically strong separation of your rational mind from the rest of your mind. But it might be a useful inversion on the standard account of the situation, which engages in the homunculus fallacy?

Comment by tailcalled on Finite Factored Sets · 2021-05-23T22:55:02.522Z · LW · GW

Ah of course! So many symbols to keep track of 😅

Comment by tailcalled on Finite Factored Sets · 2021-05-23T22:41:27.825Z · LW · GW

I think one thing that confuses me is, wouldn't Y also be before X then?

Comment by tailcalled on Finite Factored Sets · 2021-05-23T22:30:28.427Z · LW · GW

Or wait, I'm dumb, that can definitely happen if X and Y are coin flips. But I feel like this doesn't add up with the other stuff, will need to read more carefully.
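
Spelling out the coin-flip case with a quick sanity check of my own, to see the independence directly:

```python
# With X and Y independent fair coin flips, X is independent of Y and also
# of X xor Y, even though Y and (X xor Y) together pin down X exactly.
from itertools import product

outcomes = list(product([0, 1], repeat=2))  # (x, y) pairs, each with probability 1/4

def prob(event):
    """Probability of an event under the uniform distribution on (x, y)."""
    return sum(1 for xy in outcomes if event(*xy)) / len(outcomes)

for x in [0, 1]:
    for z in [0, 1]:
        joint = prob(lambda a, b: a == x and (a ^ b) == z)
        marginals_product = prob(lambda a, b: a == x) * prob(lambda a, b: (a ^ b) == z)
        assert joint == marginals_product  # 1/4 == 1/2 * 1/2 in every case
print("X is independent of X xor Y (and, symmetrically, of Y).")
```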

Comment by tailcalled on Finite Factored Sets · 2021-05-23T22:23:49.654Z · LW · GW

I'm a bit confused about how X can be independent of both Y and of (X xor Y). What would a probability distribution where this holds look like?

Comment by tailcalled on SGD's Bias · 2021-05-19T19:14:59.670Z · LW · GW

This depends on whether it can achieve perfect predictive power or not, no? What I had in mind was something like autoregressive text prediction, where there will always be some prediction errors. I would've assumed those prediction errors constantly introduce some noise into the gradients?

Comment by tailcalled on SGD's Bias · 2021-05-19T10:07:20.319Z · LW · GW

Hmm, and the -E[u(X, theta)] term would shrink during training, right? So eventually it would become more about the drift term? This makes me think of the "grokking" concept.

Comment by tailcalled on Meditations on Momentum · 2021-05-14T08:41:41.274Z · LW · GW

Psychologists have discovered the same effect in education. The longer it takes kids to learn how to read, the slower the development of their other cognitive skills and performance:

This sounds confounded by g.

Comment by tailcalled on Agency in Conway’s Game of Life · 2021-05-13T09:44:26.070Z · LW · GW

Unlike our universe, the Game of Life is not reversible, so I don't think entropy is the key.

Comment by tailcalled on Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems · 2021-05-06T10:03:28.675Z · LW · GW

See e.g. this.

Also in the context of health benefit associated with transition this is irrelevant because transitioning will not change your body size...

Yeah I know, I could've pointed out the body size effect to nim too.

Comment by tailcalled on Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems · 2021-05-05T11:50:23.521Z · LW · GW

Also, bigger bodies = more cells that can end up turning into tumors and such.

Comment by tailcalled on Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems · 2021-05-03T22:24:57.833Z · LW · GW

I've had an odd in-between experience. At the beginning I had a quick but weak positive reaction, but later I've tried going on and off without much effect beyond the sex drive. I attribute my initial positive reaction to placebo, but I would feel more confident in attributing this stuff to placebo if I could demonstrate it for someone who had a big, clear, unambiguous effect.

Comment by tailcalled on Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems · 2021-05-03T19:58:51.805Z · LW · GW

Considering the hypothesis that men are more likely to have fringe beliefs than women, and that rationality is a fringe belief, what if my female equivalent is not a rationalist? What if instead of reverting her mind back after the experiment, she decides to e.g. change her mind to be religious, because she will value social conformity highly? Those are scary thoughts...

Makes for some great transhumanist ethics thought experiments. Would it be ethical to try out changing your values, with some mechanism that forces you to change back, even if you with those new values would be opposed to changing back and want to keep them?

Comment by tailcalled on Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems · 2021-05-03T19:50:01.807Z · LW · GW

One thing I've been wondering about for a while is, some trans women I know say that estrogen immediately makes them feel better, on a time scale of hours to a single day. For those for whom it is this quick, it seems like it should be testable with placebo. Unfortunately, the trans woman I had arranged a placebo test with ended up too busy.

Comment by tailcalled on Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems · 2021-05-03T19:16:25.178Z · LW · GW

I try to do a lot of research on autogynephilia and related topics, and I think there are some things worth noting:

  1. Autogynephilia appears to be fairly rare in the general population of males; I usually say 3%-15%, though it varies from study to study depending on hard-to-figure-out things. My go-to references for prevalence rates are this and this paper. (And this is for much weaker degrees of autogynephilia than Zack's.) So it's not just about having a body that one finds attractive; there needs to be some other factor before one ends up autogynephilic. (I've been interested in figuring out this other factor, but I haven't figured out much.)
  2. According to various surveys in the rationalist community, autogynephilia in men (and autoandrophilia in women) appears to be much more common here than it is in the general population. (And possibly this applies to autoandrophilia in men and autogynephilia in women too, but studying this is controversial and feels difficult.) As such, it might be easy for a group of rationalists to take autogenderphilia for granted as something that of course is part of your sexuality, even though, by point 1, it isn't necessarily.

Comment by tailcalled on Your Dog is Even Smarter Than You Think · 2021-05-01T08:17:44.448Z · LW · GW

I wonder if this could be made more scalable by having a dog who has learned this teach its puppies. 🤔

Comment by tailcalled on Can you improve IQ by practicing IQ tests? · 2021-04-28T08:59:51.085Z · LW · GW

Are you sure of this? Maybe the sort of people who are motivated to get a high score on an IQ test are the same sort of people who are motivated to get good grades in college, who work harder to advance their career, and so on.

This is essentially proposing a correlation between intelligence and conscientiousness. But from my reading they appear to be mostly uncorrelated.

Comment by tailcalled on Can you improve IQ by practicing IQ tests? · 2021-04-28T08:58:27.936Z · LW · GW

Raven's matrices are only one example of an IQ test. Performance across a wide range of domains, from pattern recognition to sensory discrimination to knowledge to reaction time, is correlated. This widespread pattern of correlations is likely due to performance in these many domains sharing causes, with the broadly shared causes being called g.

Since g affects your performance on tests, IQ tests to an extent measure g. However, as you point out, you can often just practice a test to become better. This practice will only make you better at that specific test, though; training your pattern recognition skill with matrices will not make you better at distinguishing the weights and colors of objects using your senses. That is, practice doesn't change your g, but instead improves the test-specific skills called s.

Your IQ score is a combination of g and s factors (and other factors too). And it doesn't even exist unless you take an IQ test. So it can't be a stable innate characteristic of an individual. But g - that is, whatever underlies performance across wildly different tests - must exist independently of the tests, as a characteristic of the individual, and empirically it appears reasonably stable in adulthood, and highly genetic.
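
To make the g-vs-s split a bit more concrete, the rough factor-model picture I have in mind (my own shorthand, not a specific source's notation) is:

$$ \text{score}_i = \lambda_i \, g + s_i + \varepsilon_i $$

where \(\lambda_i\) is test i's loading on the general factor g, \(s_i\) is the test-specific skill, and \(\varepsilon_i\) is noise. Practicing test i raises \(s_i\) (and hence your score on that test) while leaving g, and thus your expected performance on unrelated tests, unchanged.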

Comment by tailcalled on Computing Natural Abstractions: Linear Approximation · 2021-04-17T14:23:31.346Z · LW · GW

This seems like it would be unlikely to hold in 2 or 3 dimensions?

Comment by tailcalled on Computing Natural Abstractions: Linear Approximation · 2021-04-16T17:07:59.728Z · LW · GW

They need to be high-dimensional for the linear models themselves to do anything interesting, but I think adding a large number of low-dimensional linear models might, despite being boring, still change the dynamics of the graphs to be marginally more realistic for settings involving optimization. X turns into an estimate of Y, and tries to control this estimate towards zero; that's a pattern that I assume would be rare in your graph, but common in reality, and it could lead to real graphs exhibiting certain "conspiracies" that the model graphs might lack (especially if there are many (X, Y) pairs, or many (individually unidimensional) Xs that all try to control a single common Y).

But there's probably a lot of things that can be investigated about this. I should probably be working on getting my system for this working, or something. Gonna be exciting to see what else you figure out re natural abstractions.

Comment by tailcalled on Computing Natural Abstractions: Linear Approximation · 2021-04-16T17:01:45.306Z · LW · GW

Switching back to your framing: if X itself is large enough to contain multiple far-apart chunks of variables, then a PCA on X should yield natural abstractions (roughly speaking).

Agree, I was mainly thinking of whether it could still hold if X is small. Though it might be hard to define a cutoff threshold.

Perhaps another way to frame it is, if you perform PCA, then you would likely get variables with info about both the external summary data of X, and the internal dynamics of X (which would not be visible externally). It might be relevant to examine things like the relative dimensionality for PCA vs SVD, to investigate how well natural abstractions allow throwing away info about internal dynamics.

(This might be especially interesting in a setting where it is tiled over time? As then the internal dynamics of X play a bigger role in things.)

This is quite compatible with the idea of abstractions coming from noise wiping out info: even within X, noise prevents most info from propagating very far. The info which propagates far within X is also the info which is likely to propagate far beyond X itself; most other info is wiped out before it reaches the boundaries of X, and has low redundancy within X.

🤔

That makes me think of another tangential thing. In a designed system, noise can often be kept low, and redundancy is often eliminated. So the PCA method might work better on "natural" (random or searched) systems than on designed systems, while the SVD method might work equally well on both.

Comment by tailcalled on Computing Natural Abstractions: Linear Approximation · 2021-04-16T16:50:50.709Z · LW · GW

Another question:

It seems to me like with 1 space dimension and 1 time dimension, different streams of information would not be able to cross each other. Intuitively the natural abstraction model makes sense and I would expect this stuff to work with more space dimensions too - but just to be on the safe side, have you verified it in 2 or 3 spatial dimensions?

(... 🤔 isn't there supposedly something about certain properties undergoing a phase shift beyond 4 dimensions? I have no idea whether that would come up here because I don't know the geometry to know the answer. I assume it wouldn't make a difference, as long as the size of the graph is big enough compared to the number of dimensions. But it might be fun to see if there's some number of dimensions where it just completely stops working.)

Comment by tailcalled on Computing Natural Abstractions: Linear Approximation · 2021-04-16T16:37:47.469Z · LW · GW

I think I phrased it wrong/in a confusing way.

Suppose Y is unidimensional, and you have Y=f(g(X), h(X)). Suppose there are two perturbations i and j that X can emit, where g is only sensitive to i and h is only sensitive to j, i.e. g(j)=0, h(i)=0. Then because the system is linear, you can extract them from the rest:

Y = f(g(X+ai+bj), h(X+ai+bj)) = f(g(X)+a·g(i), h(X)+b·h(j)) = f(g(X), h(X)) + a·f(g(i), 0) + b·f(0, h(j))

This means that if X only cares about Y, it is free to choose whether to adjust a or to adjust b. In a nonlinear system, there might be all sorts of things like moderators, diminishing returns, etc., which would make it matter whether it tried to control Y using a or using b; but in a linear system, it can just do whatever.

Comment by tailcalled on Prophetic Hazard · 2021-04-16T11:38:41.509Z · LW · GW

A lot of the prophetic hazards you mention don't seem very obvious to me. Like how does reminding a person that they are old make them older?

Comment by tailcalled on Computing Natural Abstractions: Linear Approximation · 2021-04-16T10:00:58.086Z · LW · GW

Y can consist of multiple variables, and then there would always be multiple ways, right?

Not necessarily. For instance if X has only one output, then there's only one way for X to change things, even if the one output connects to multiple Ys.

I thought by indirect you meant that the path between X and Y was longer than 1.

Yes.

If some third cause is directly upstream from both, then I suppose it wouldn't be uniquely defined whether changing X changes Y, since there could be directions in which to change the cause that change some subset of X and Y.

I'm not sure I get it, or at least if I get it I don't agree.

Are you saying that if we've got X <- Z -> Y and X -> Y, then the effect of X on Y may not be well-defined, because it depends on whether the effect is through Z or not, as the Z -> Y path becomes relevant when it is through Z?

Because if so I think I disagree. The effect of X on Y should only count the X -> Y path, not the X <- Z -> Y path, as the latter is a confounder rather than a true causal path.

Comment by tailcalled on Computing Natural Abstractions: Linear Approximation · 2021-04-16T06:15:59.279Z · LW · GW

Presumably you wouldn't be able to figure out the precise value of Y since Y isn't connected to X. You could only find an approximate estimate. Though on reflection the outputs are more interesting in a nonlinear graph (which was the context where I originally came up with the idea), since in a linear one all ways of modifying Y are equivalent.

Comment by tailcalled on Computing Natural Abstractions: Linear Approximation · 2021-04-15T22:11:33.931Z · LW · GW

Also, a question: one way that you can get an abstraction of the neighbourhood is via SVD between the neighbourhood's variables and other variables far away, but another way is just to apply PCA to the variables in the neighbourhood itself. Does this yield the same result, or is there some difference? My guess would be that it would yield variables highly correlated with what you get from SVD.

Finally, going by intuition about PCA, I would assume that the most important variable would tend to correspond to some sort of "average activation" in the neighbourhood, and the second most important variable would tend to correspond to the difference between the activation in the top of the neighbourhood and the activation in the bottom of the neighbourhood. Defining this precisely might be a bit hard, as the signs of nodes are presumably somewhat arbitrary, so one would have to come up with some way to adjust for that; but I would be curious whether this looks anything like what you've found?

I guess in a way both of these hypotheses contradict your reasoning for why the natural abstraction hypothesis holds (though they are quite compatible with it still holding), in the sense that your natural abstraction hypothesis is built on the idea that noisy interactions just outside of the neighbourhood wipe away lower-level details, while my intuitions are that a lot of the information needed for abstraction could be recovered from what is going on within the neighbourhood.
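
To make the comparison concrete, here is roughly what I have in mind, as a toy numpy sketch of my own (a stand-in for your actual setup, not your code): PCA on samples of the neighbourhood's variables alone, versus an SVD of the cross-covariance between the neighbourhood and the far-away variables, then checking how correlated the leading components are.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian system (my own stand-in for the actual experiments):
# "neighbourhood" variables X and "far away" variables Z, linearly coupled
# through a shared low-dimensional latent structure plus local noise.
n_samples, n_x, n_z = 5000, 10, 20
latent = rng.normal(size=(n_samples, 3))
X = latent @ rng.normal(size=(3, n_x)) + 0.5 * rng.normal(size=(n_samples, n_x))
Z = latent @ rng.normal(size=(3, n_z)) + 0.5 * rng.normal(size=(n_samples, n_z))

Xc = X - X.mean(axis=0)
Zc = Z - Z.mean(axis=0)

# Method 1: PCA on the neighbourhood alone (leading principal direction of X).
_, _, vt_pca = np.linalg.svd(Xc, full_matrices=False)
pca_component = Xc @ vt_pca[0]

# Method 2: SVD of the cross-covariance between X and the far-away variables.
cross_cov = Xc.T @ Zc / n_samples
u, _, _ = np.linalg.svd(cross_cov, full_matrices=False)
svd_component = Xc @ u[:, 0]

# How similar are the two "abstractions"? (Sign is arbitrary, so take abs.)
corr = np.corrcoef(pca_component, svd_component)[0, 1]
print(f"|correlation| between PCA and cross-SVD components: {abs(corr):.3f}")
```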

Comment by tailcalled on Computing Natural Abstractions: Linear Approximation · 2021-04-15T21:55:37.821Z · LW · GW

Did you set up your previous posts with this experiment in mind? I had been toying with some similar things myself, so it's fun to see your analysis too.

Regarding choices that might not generalize, another one that I have been thinking about is agency/optimization. That is, while your model is super relevant for inanimate systems, optimized systems often try to keep some parameters under control, which involves many parameters being carefully adjusted to achieve that.

This might seem difficult to model, but I have an idea: Suppose you pick a pair of causal nodes X and Y, such that X is indirectly upstream from Y. In that case, since you have the ground truth for the causal network, you can compute how to adjust the weights of X's inputs and outputs to (say) keep Y as close to 0 as possible. This probably won't make much of a difference when picking only a single X, but for animate systems a very large number of nodes might be set for the purpose of optimization. There is a very important way that this changes the general abstract behavior of a system, which unfortunately nobody seems to have written about, though I'm not sure if this has any implications for abstraction.
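
As a minimal illustration of the kind of intervention I mean (a toy sketch of my own, not your generator): in a small linear chain where X sits between an upstream variable and Y, one can solve for the weight on X that keeps Y as close to 0 as possible, and then see how that choice changes the system's statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy linear causal structure (my own illustration):
#   s -> X -> Y   and   s -> Y   plus independent noise on Y.
b, c, sigma = 1.5, 2.0, 0.3          # ground-truth weights
s = rng.normal(size=n)               # X's input (indirectly upstream of Y)
noise_y = sigma * rng.normal(size=n)

def simulate(w):
    """Y when the weight from X's input to X's output is set to w."""
    x = w * s
    return b * s + c * x + noise_y

# An "inanimate" random weight vs. the weight an optimizer would pick to keep
# Y near 0: minimizing Var(Y) = (b + c*w)^2 + sigma^2 gives w = -b/c.
w_random = rng.normal()
w_optimized = -b / c

for label, w in [("random weight", w_random), ("optimized weight", w_optimized)]:
    y = simulate(w)
    print(f"{label}: Var(Y) = {y.var():.3f}, corr(s, Y) = {np.corrcoef(s, y)[0, 1]:.3f}")
# With the optimized weight, Var(Y) drops to ~sigma^2 and the s-Y correlation
# vanishes - a small example of the "conspiracy" that optimization creates.
```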

Also, another property of the real world (that is relatively orthogonal to animacy, but which might interact with animacy in interesting ways) that you might want to think about is recurrence. You've probably already considered this, but I think it would be interesting to study what happens if the causal structure tiles over time and/or space.

Comment by tailcalled on Dutch-Booking CDT: Revised Argument · 2021-04-13T12:42:03.248Z · LW · GW

So I see two possible interpretations of traditional Dutch books:

I disagree; I don't think it's a simple binary thing. I don't think Dutch book arguments in general never apply to recursive things, but it's more just that the recursion needs to be modelled in some way, and since your OP didn't do that, I ended up finding the argument confusing.

The standard dutch book arguments would apply to the imp. Why should you be in such a different position from the imp? 

I don't think your argument goes through for the imp, since it never needs to decide its action, and therefore the second part of selling the contract back never comes up?

For example, multiply the contract payoff by 0.001. 

Hmm, on further reflection, I had an effect in mind which doesn't necessarily break your argument, but which increases the degree to which other counterarguments such as AlexMennen's break your argument. This effect isn't necessarily solved by multiplying the contract payoff (since decisions aren't necessarily continuous as a function of utilities), but it may under many circumstances be approximately solved by it. So maybe it doesn't matter so much, at least until AlexMennen's points are addressed so I can see where it fits in with that.

Comment by tailcalled on Dutch-Booking CDT: Revised Argument · 2021-04-12T16:46:47.710Z · LW · GW

This, again, seems plausible if the payoff is made sufficiently small.

How do you make the payoff small?

This is actually very similar to traditional Dutch-book arguments, which treat the bets as totally independent of everything.

Isn't your Dutch-book argument more recursive than standard ones? Your contract only pays out if you act, so the value of the dutch book causally depends on the action you choose.

Comment by tailcalled on Dutch-Booking CDT: Revised Argument · 2021-04-10T15:39:14.883Z · LW · GW

So the overall expectation is .

Wouldn't it be P(Act=a|do(buy B)) rather than P(Act=a)? Like my thought would be that the logical thing for CDT would be to buy the contract and then as a result its expected utilities change, which leads to its probabilities changing, and as a result it doesn't want to sell the contract. I'd think this argument only puts a bound on how much CDT and EDT can differ, rather than on whether they can differ at all. Very possible I'm missing something though.

Comment by tailcalled on What will GPT-4 be incapable of? · 2021-04-09T08:25:15.255Z · LW · GW

Directing a robot using motor actions and receiving camera data (translated into text I guess to not make it maximally unfair, but still) to make a cup of tea in a kitchen.

Comment by tailcalled on I Trained a Neural Network to Play Helltaker · 2021-04-07T08:54:03.141Z · LW · GW

Did it memorize the way to beat the levels, or did it learn a generalized method of beating Helltaker?

Comment by tailcalled on Testing The Natural Abstraction Hypothesis: Project Intro · 2021-04-07T08:23:02.445Z · LW · GW

Exciting stuff. One thing I suspect is that you'll need some different account for abstractions in the presence of agency/optimization than abstractions that deal with unoptimized things, because agency implies "conspiracies" where many factors may all work together to achieve something.

Like your current point about "information at a distance" probably applies to both, but the reasons that you end up with information at a distance likely differ; with non-agency phenomena, there's probably going to be some story based on things like thermodynamics, averages over large numbers of homogenous components, etc., while agency makes things more complex.

Comment by tailcalled on Preventing overcharging by prosecutors · 2021-04-06T19:29:40.801Z · LW · GW

Suppose that the prosecutor has some random noise in their charges, such that they sometimes overcharge a bunch and sometimes undercharge a bunch. In that case it seems reasonable to suppose that things are more likely to go to court when the prosecutor is overcharging and the accused therefore thinks they can get more of the accusations dropped. But this would mean that the prosecutors are evaluated on a subset of the charges that are systematically too high, and therefore, to compensate, they end up lowering their assessed probabilities below what would actually be accurate if every case went to court.

I don't know how big a problem this would be, but it seems like something that would be good to evaluate in the proposal.
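
To illustrate the worry, here is a toy simulation of my own (made-up numbers, not part of the proposal):

```python
import numpy as np

rng = np.random.default_rng(0)
n_cases = 200_000

# Toy model: each case has a true conviction probability; the prosecutor's
# stated probability is that plus unbiased noise.
p_true = rng.uniform(0.2, 0.8, size=n_cases)
stated = np.clip(p_true + rng.normal(0, 0.15, size=n_cases), 0.01, 0.99)

# Selection: the accused is more likely to reject a plea and go to court
# when the prosecutor is overcharging (stated probability >> true one).
goes_to_court = rng.random(n_cases) < 1 / (1 + np.exp(-10 * (stated - p_true)))

# Outcomes resolve according to the true probabilities.
convicted = rng.random(n_cases) < p_true

print(f"all cases:   mean stated = {stated.mean():.3f}, conviction rate = {convicted.mean():.3f}")
print(f"tried cases: mean stated = {stated[goes_to_court].mean():.3f}, "
      f"conviction rate = {convicted[goes_to_court].mean():.3f}")
# On the tried subset the stated probabilities run systematically above the
# realized conviction rate, so a prosecutor scored only on tried cases is
# pushed to shade their stated probabilities downward.
```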

Comment by tailcalled on Preventing overcharging by prosecutors · 2021-04-06T13:28:27.660Z · LW · GW

The ability of the prosecutor to accurately assess the likelihood can be measured via the Brier score or a log score.

Scoring predictions requires knowing the outcomes. But wouldn't the outcome depend on whether the accused takes plea deals and such?

Comment by tailcalled on Chaos Induces Abstractions · 2021-04-05T10:13:00.467Z · LW · GW

I've been thinking a lot about differences between people for... arguably most of my life, but especially the past few years. One thing I find interesting is that parts of your abstraction/chaos relationship don't seem to transfer as neatly to people. More specifically, what I have in mind is two elements:

  1. People carry genes around, and these genes can have almost arbitrary effects that aren't wiped away by noise over time because the genes persist and are copied digitally in their bodies.

  2. It seems to me that agency "wants" to resist chaos? Like if some sort of simple mechanical mechanism creates something, then that something easily gets moved away by external forces, but if a human creates something and wants to keep it, then they can place it in their home and lock their door and/or live in a society that respects private property. (This point doesn't just apply to environmental stuff like property, but also to biological stuff.)

Individual differences often seem more "amorphous" and vague than you get elsewhere, and I bet elements like the above play a big role in this. The abstraction/chaos post helps throw this into sharp light.

Comment by tailcalled on Many methods of causal inference try to identify a "safe" subset of variation · 2021-03-31T14:29:42.141Z · LW · GW

If there is really both reverse causation and regular causation between Xr and Y, you have a cycle, and you have to explain what the semantics of that cycle are (not a deal breaker, but not so simple to do.  For example if you think the cycle really represents mutual causation over time, what you really should do is unroll your causal diagram so it's a DAG over time, and redo the problem there).

I agree, but I think this is much more dependent on the actual problem that one is trying to solve. There's tons of assumptions and technical details that different approaches use, but I'm trying to sketch out some overview that abstracts over these and gets at the heart of the matter.

(There might also be cases where there is believed to be a unidirectional causal relationship, but the direction isn't known.)

The real question is, why should Xc be unconfounded with Y?  In an RCT you get lack of confounding by study design (but then we don't need to split the treatment at all).  But this is not really realistic in general -- can you think of some practical examples where you would get lucky in this way?

Indeed that is the big difficulty. Considering how often people use these methods in social science, it seems like there is some general belief that one can have Xc be unconfounded with Y, but this is rarely proven and seems often barely even justified. It seems to me that the general approach is to appeal to parsimony and assume that if you can't think of any major confounders, then they probably don't exist.

This obviously doesn't work well. I think people find it hard to get an intuition for how poorly it works, and I personally found that it made much more sense to me when I framed it in terms of the "Know your Xc!" point; the goal shouldn't be to think of possible confounders, but instead to think of possible nonconfounded variance. I also have an additional blog post in the works arguing that parsimony is empirically testable and usually wrong, but it will be some time before I post this.
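
For concreteness, this is the kind of toy setup I keep in mind for the Xc/Xr split (a sketch of my own where the safe part of the variation is known by construction, not an example from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta = 0.5            # true causal effect of X on Y

# Toy model: X's variation splits into a confounded part Xr (driven by U,
# which also affects Y directly) and a "safe", unconfounded part Xc
# (e.g. arising from randomization or some other known clean source).
U = rng.normal(size=n)
Xr = U + 0.5 * rng.normal(size=n)
Xc = rng.normal(size=n)
X = Xc + Xr
Y = beta * X + 1.0 * U + rng.normal(size=n)

def ols_slope(x, y):
    """Univariate OLS slope: cov(x, y) / var(x)."""
    x_centered = x - x.mean()
    return (x_centered @ (y - y.mean())) / (x_centered @ x_centered)

print(f"naive slope of Y on X (confounded): {ols_slope(X, Y):.3f}")
print(f"slope of Y on Xc (safe variation):  {ols_slope(Xc, Y):.3f}  (true effect = {beta})")
```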

Comment by tailcalled on Speculations Concerning the First Free-ish Prediction Market · 2021-03-31T10:34:44.202Z · LW · GW

Counterpoint:

Why do people trade?

We often trade to express an opinion on whether a future event will drive value up or down.

You believe that a vaccine will be widely distributed, so you try investing in travel stocks. You believe emissions regulation is coming, so you try shorting auto companies.

But these trades don’t provide direct exposure to the event. There’s a lot of noise that gets in the way. Kalshi enables you to isolate trading on the event itself.

This makes me pessimistic about it; see Prediction Markets Fail To Mooch.

Comment by tailcalled on Many methods of causal inference try to identify a "safe" subset of variation · 2021-03-31T09:57:33.049Z · LW · GW

😅 I think that is a very optimistic framing of the problem.

The hard part isn't really weighing the costs of different known observables to find efficient ways to study things; the hard part is figuring out what observables there are and how to use them correctly.

I don't think this is particularly viable algorithmically; it seems like an AI-complete problem. (Though of course one should be careful about claiming that, as often AI-complete things turn out to not be so AI-complete anyway.)

The core motivation for the series of blog posts I'm writing is, I've been trying to study various things that require empirical causal inference, and so I need to apply theory to figure out how to do this. But I find existing theory to be somewhat ad-hoc, providing a lot of tools with a lot of assumptions, but lacking an overall picture. This is fine if you just want to apply some specific tool, as you can then learn lots of details about that tool. But if you want to study a phenomenon, you instead need some way to map what you already know about the phenomenon to an appropriate tool, which requires a broader overview.

This post is just one post in a series. (Hopefully, at least - I do tend to get distracted.) It points out a key requirement for a broad range of methods - having some cause of interest where you know something about how the cause varies. I'm hoping to create a checklist with a broad range of causal inference methods and their key requirements. (Currently I have 3 other methods in mind that I consider to be completely distinct from this method, as well as 2 important points of critique on this method that I think usually get lost in the noise of obscure technical requirements for the statistics to be 100% justified.)

Regarding "theorizing with flowcharts", I tend to find it pretty easy. Perhaps it's something that one needs to get used to, but graphs are a practical way to summarize causal assumptions. Generating data may of course be helpful too, and I intend to do this in a later blog post, but it quickly gets unwieldy in that e.g. there are many parameters that can vary in the generated data, and which need to be explored to ensure adequate coverage of the possibilities.

Comment by tailcalled on Many methods of causal inference try to identify a "safe" subset of variation · 2021-03-30T14:39:45.666Z · LW · GW

Also, a sidenote - I tend to think of Pearl-style conditional independence methods as being related to instrumental variables and thus being a multivariate generalization of these sorts of methods. But I'm not entirely sure. Any thoughts on that?

Comment by tailcalled on Making a Cheerful Bid · 2021-03-28T08:36:57.019Z · LW · GW

You need enough of my trust in advance for me to be willing to quote you my cheerful price.

 

plus I barely know you

Are cheerful prices a good idea among near-strangers? The original idea was to address the issue of spending social capital among friends. Here you'd presumably have some trust and knowledge to make these things less of a problem.

Comment by tailcalled on Covid 3/25: Own Goals · 2021-03-27T10:36:47.071Z · LW · GW

The framing of Europe flip-flopping for no reason on the safety of AstraZeneca's vaccine seems inaccurate. As you point out, Scandinavia still has it suspended, but it was also Scandinavia that kicked off the decision to suspend, with other European countries following. (It's still bad ofc, but y'know, less self-contradictory.)