(A -> B) -> A
post by Scott Garrabrant · 2018-09-11T22:38:19.866Z
This post is about the following type signature, which I call the type of agency: (A → B) → A. You can also think of it as consequentialism, or doing things on purpose. This post will be a rant with a bunch of random thoughts related to this type signature, and it will likely not make sense. It will also be sloppy and will have type errors, but I think it is worth posting anyway.
First, interpret these arrows as causal arrows, but you can also think of them as function arrows. This is saying that the causal relationship from A to B causes A to happen. Think of A as an action and B as the goal. The reason that A happens is the fact that it has B as a consequence. There are not normally exponential objects like this in Bayes' nets, but I think you can modify them so that this makes sense. (I'm not sure that this works, but you have a Cartesian closed category with nodes that are the nodes in your Bayes' net, and add a small number of morphisms from product nodes to individual nodes, corresponding to the functions in the Bayes' net. The acyclicness of the Bayes' net roughly corresponds to this category being thin. Then you can consider having other types of morphisms that can keep the category thin.)
Suppose you have a game between two agents with action nodes A1 and A2, with utilities U1 and U2. The game implements a pair of functions A1 × A2 → U1 and A1 × A2 → U2. We can Curry these functions and think of them as A2 → (A1 → U1) and A1 → (A2 → U2). Bringing in the agency (Ai → Ui) → Ai of both players leads to a cycle. This cycle does not make sense unless the agency arrows are lossy in some way, so as to not be able to create a contradiction.
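In Haskell, a minimal sketch of this shape (the `Agency` type and the `bestResponse` and `play` names are illustrative, not from the post):

```haskell
type U = Double

-- One player's agency operator: from "action -> utility" to an action.
type Agency a = (a -> U) -> a

-- Currying the payoff: fixing the opponent's action gives a map from
-- own action to utility, which the agency operator can consume.
bestResponse :: Agency a -> (a -> b -> U) -> b -> a
bestResponse k u other = k (\a -> u a other)

-- Plugging in both players yields mutually recursive definitions:
-- the cycle described above. With total, non-lossy operators this
-- recursion generally has no well-defined value.
play :: Agency a -> Agency b -> (a -> b -> U) -> (b -> a -> U) -> (a, b)
play k1 k2 u1 u2 = (a1, a2)
  where
    a1 = bestResponse k1 u1 a2
    a2 = bestResponse k2 u2 a1
```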
Fortunately, there is another reason to think that these agency arrows will be lossy. Lawvere's fixed point theorem says that in a Cartesian closed category, unless B has the fixed point property, you cannot have a surjective function A → (A → B). In Set, this is saying that if B has more than one element, you cannot have an injection (A → B) → A. I.e., the agency arrows have to be lossy.
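In Set the proof is Cantor's diagonal argument; as a sketch, the diagonal can be written in one line of Haskell (specializing B to Bool):

```haskell
-- Given any candidate surjection f :: a -> (a -> Bool),
-- diag f is a function that f provably misses:
diag :: (a -> (a -> Bool)) -> (a -> Bool)
diag f x = not (f x x)
-- For every x, diag f differs from f x at the point x, so f is not
-- surjective; dually, there is no injection (a -> Bool) -> a.
```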
Also, notice that Argmax takes in a function from some set A to ℝ, and returns an element of the domain, A. So Argmax has type (A → ℝ) → A.
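As a sketch, here is that shape in Haskell, with the domain A passed explicitly as a finite list (an assumption needed to make it runnable):

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

-- Argmax with the domain given as a finite list; modulo that extra
-- argument, its shape is (A -> R) -> A.
argmax :: Ord r => [a] -> (a -> r) -> a
argmax domain f = maximumBy (comparing f) domain

-- e.g. argmax [0..10] (\x -> negate ((x - 3) ^ 2)) == 3
```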
This one is a bit more of a stretch, but if you look at gradient descent, you have some space X, and you have a function X → ℝ. The gradient can be thought of as a function from infinitesimal changes in X to infinitesimal changes in ℝ. Gradient descent works by converting this gradient into a change in X. I.e., gradient descent looks kind of like (dX → dℝ) → dX.
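A sketch of that reading, with finite differences standing in for infinitesimals (the step sizes `h` and `lr` are illustrative choices):

```haskell
-- Numerical gradient of f at x: the local map from (change in X)
-- to (change in R), summarized by a slope.
grad :: (Double -> Double) -> Double -> Double
grad f x = (f (x + h) - f (x - h)) / (2 * h)
  where h = 1e-6

-- One step of gradient descent: turn that "dX -> dR" information
-- into an actual change in X. lr is the assumed learning rate.
descendStep :: Double -> (Double -> Double) -> Double -> Double
descendStep lr f x = x - lr * grad f x

-- e.g. iterate (descendStep 0.1 (\x -> (x - 3) ^ 2)) 0 approaches 3.
```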
11 comments
comment by MrMind · 2018-09-13T08:12:23.446Z
It's interesting to notice that there's nothing with that type on Hoogle (the Haskell API search engine), so it's not the type of any common utility.
On the other hand, you can still say quite a bit about functions of that type, drawing from type and set theory.
First, let's name a generic function with that type k :: (a -> b) -> a. It's possible to show that k cannot be parametric in both types. If it were, (Void -> b) -> Void would be valid, which is absurd (Void -> b has an element!). It's also possible to show that if k is not parametric in one type, it must have access to at least an element of that type (think about (1 -> b) -> 1 and (a -> Void) -> a).
A simple cardinality argument also shows that k must be many-to-one (that is, non-injective): unless b is 1 (the one-element type), the space of functions a -> b is strictly bigger than a, so k cannot be injective.
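A concrete toy instance of the pigeonhole here, with a = b = Bool:

```haskell
-- The four predictions when a = b = Bool:
predictions :: [Bool -> Bool]
predictions = [const False, const True, id, not]

-- Any k :: (Bool -> Bool) -> Bool sends these four inputs into just
-- {False, True}, so by pigeonhole it must identify at least two of them.
```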
There is an interesting operator that uses k, which I call interleave:
interleave k f = f (k f)
Trivially, interleave has type ((a -> b) -> a) -> (a -> b) -> b.
It's interesting because partially applying interleave to some k gives something of type (a -> b) -> b, which is the type of continuations, and I suspect that this is what underlies the common usage of such operators.
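Putting the pieces together, a runnable sketch (the `argmaxChooser` and its finite domain are illustrative assumptions):

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

type Chooser a b = (a -> b) -> a

interleave :: Chooser a b -> (a -> b) -> b
interleave k f = f (k f)

-- A concrete chooser: argmax over a finite domain standing in for a.
argmaxChooser :: Ord b => [a] -> Chooser a b
argmaxChooser domain f = maximumBy (comparing f) domain

-- Partially applied, interleave k :: (a -> b) -> b, a continuation.
-- e.g. interleave (argmaxChooser [1 .. 10]) (\x -> x * x) == 100
```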
↑ comment by Gurkenglas · 2018-09-16T09:10:15.692Z
Void -> b has one element, so (a -> b) -> a need not have an element.
That Hoogle doesn't list a result essentially follows from k not being parametric in all types. (Except that it lists unsafeCoerce :: a -> b - they'd rather have the type system inconsistent than incomplete...)
(a -> b) -> b stands for what the agent ends up making happen, and may be easier to implement - just like predicting that Kasparov will win a chess match, without knowing how. Interesting choosers of type (a -> b) -> a should tend to have the property that interleave turns them into a particular kind of (a -> b) -> b. Why would you call it interleave?
↑ comment by MrMind · 2018-10-04T16:18:43.244Z
Let's say that A is the set of available actions and B is the set of consequences. A -> B is then the set of predictions, where a single prediction associates a consequence to every possible action. (A -> B) -> A is then a choice operator, which selects for each prediction an action to take.
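In Haskell-style notation, the two definitions are just:

```haskell
type Prediction a b = a -> b          -- assigns a consequence to each action
type Chooser a b = (a -> b) -> a      -- selects an action from a prediction
```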
What we have seen so far:
- There's no 'general' or 'natural' choice operator; that is, every choice operator must be based on at least partial knowledge of the domain or the codomain;
- Unless the possible consequences are trivial, a choice operator will choose the same action for many different predictions; that is, a choice operator only uses certain features of the predictions' space and is indifferent to everything else [1];
- A choice operator naturally defines a 'preferred outcome' operator, which is simply the predicted outcome of the chosen action, and is defined by 'sandwiching' the choice operator between two predictions. I just thought interleave was a better name than sandwich. It's of type (A -> B) -> B.
[1] To show this, let the fibers of k form a partition of A -> B and let ~ be the equivalence relation uniquely generated by the partition. Then k(f) = k(g) whenever f ~ g.
↑ comment by Gurkenglas · 2018-10-05T01:02:47.099Z
- Knowledge that there is an action to select, in the form of having an action in hand, allows the implementation of exactly one chooser: the one that always selects that action (see the sketch after this list).
- That k(f) = k(g) whenever f ~ g holds for any function / partition between any two sets. The proof you want may be that A -> B is an exponential space and therefore usually larger than A.
- interleave/sandwich should then take two predictions as parameters. This suggests that we could define a metric on the space of predictions, and then sandwich the chooser between two nearby predictions, to measure its response to inaccurate predictions.
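A minimal Haskell rendering of that unique chooser:

```haskell
-- The one chooser implementable from a bare element of a,
-- parametrically in everything else: always select that action.
constChooser :: a -> (a -> b) -> a
constChooser a0 _ = a0
```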
↑ comment by MrMind · 2018-10-05T10:41:27.824Z
Re: the third point, I think it's important to differentiate between f and f*, where f* is the true prediction, that is, what actually happens when the agent performs the action k f.
f (k f) is simply the outcome the agent is aiming at, while f* (k f) is the outcome the agent eventually gets. So maybe a measure of similarity in B, from which you can compare the two, is more interesting.
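One way to write that comparison down as a sketch, with an assumed distance function `d` on B (the names `outcomeGap` and `fStar` are illustrative):

```haskell
-- k chooses an action from the agent's prediction f; fStar is the
-- true prediction. Compare the outcome aimed at with the outcome got.
outcomeGap :: (b -> b -> Double) -> ((a -> b) -> a)
           -> (a -> b) -> (a -> b) -> Double
outcomeGap d k f fStar = d (f a) (fStar a)
  where a = k f  -- the action actually taken, chosen using f alone
```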
comment by Diffractor · 2020-02-15T06:35:15.343Z
I found a paper about this exact sort of thing. Escardó and Oliva call that type signature (A -> R) -> A a "selection functional", and the type signature (A -> R) -> R is called a "quantification functional", and there are several interesting things you can do with them, like combining multiple selection functionals into one in a way that looks reminiscent of game theory. (I.e., if e1 has type signature (A -> R) -> A, and e2 has type signature (B -> R) -> B, then their product has type signature ((A × B) -> R) -> (A × B).)
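For reference, the binary product of selection functionals can be written out directly; this sketch follows the standard Escardó-Oliva construction, modulo naming:

```haskell
type Sel r a = (a -> r) -> a

-- Combine two selection functionals into one on the product space:
-- pick a by playing each x against e2's reply to it, then pick b
-- as e2's reply to the chosen a.
pair :: Sel r a -> Sel r b -> Sel r (a, b)
pair e1 e2 p = (a, b)
  where
    a = e1 (\x -> p (x, e2 (\y -> p (x, y))))
    b = e2 (\y -> p (a, y))
```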
comment by cousin_it · 2018-09-13T10:44:53.857Z
My first instinct is to translate this post to logic. Obviously A -> B doesn't imply A, because A -> B is true when A is false. So we need to expand the problem: imagine we have A -> B and some additional knowledge K, and together they imply A. Then it seems to me that K alone, without A -> B, would also imply A.
Proof: by definition, (not (A -> B)) = (A and not B). Therefore (not (A -> B)) -> A. Therefore (K and not (A -> B)) -> A. But we also have (K and (A -> B)) -> A by assumption. Therefore K -> A by case analysis.
So the only thing we can say about K is that it must imply A, and any such K would suffice. The proof only works in classical logic though. What can we say about K if the logic is intuitionistic?
We can see the difference by setting K = ((A -> B) -> A). Then in intuitionistic logic, K alone doesn't imply A because Peirce's law doesn't hold, but (K and (A -> B)) implies A by modus ponens. Moreover, it's obvious that any other suitable K must imply this one. That wraps up the intuitionistic case: K must imply (A -> B) -> A, any such K would suffice, and there's no shorter answer.
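Curry-Howard makes this concrete: Peirce's law is not provable intuitionistically, but it does hold under the continuation translation; in Haskell it is exactly callCC (a sketch):

```haskell
import Control.Monad.Cont (Cont, callCC)

-- Peirce's law ((A -> B) -> A) -> A, with every proposition wrapped
-- in the continuation (double-negation-like) translation:
peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
peirce = callCC
```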
Can we exhibit specific A and B for which (A -> B) -> A holds intuitionistically, but A doesn't? Yes we can: this stackexchange answer says A = (((P -> Q) -> P) -> P) and B = (P -> Q) work. Of course this A still holds classically due to Peirce's law, but that's unavoidable.
comment by rk · 2018-09-12T12:59:02.866Z
I wonder if there are any plausible examples of this type where the constraints don't look like ordering on B and search on A.
To be clear about what I mean about those constraints, here's an example. One way you might be able to implement this function is if you can enumerate all the values of A and then pick the maximum B according to some ordering. If you can't enumerate A, you might have some strategy for searching through it.
But that's not the only feasible strategy. For example, if you can order B, take two elements of B to a C, and order C, you might do something like taking the element of B that, together with the value less than it, takes you to the greatest C.
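One possible reading of that strategy as a sketch (the finite domain, the `combine` function, and the assumption of at least two actions are all illustrative):

```haskell
import Data.List (maximumBy, sortOn)
import Data.Ord (comparing)

-- Score each action by combining its B-value with the B-value just
-- below it, and pick the best combined C-score. Assumes a finite
-- domain standing in for A, with at least two actions.
weirdChoose :: (Ord b, Ord c) => [a] -> (b -> b -> c) -> (a -> b) -> a
weirdChoose domain combine f = fst (maximumBy (comparing snd) scored)
  where
    ranked = sortOn f domain            -- actions ordered by B-value
    pairs  = zip (tail ranked) ranked   -- each action with its predecessor
    scored = [(x, combine (f x) (f below)) | (x, below) <- pairs]
```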
My question is whether these weirder functions have any interest.
↑ comment by MrMind · 2018-09-13T08:42:50.932Z
"I wonder if there are any plausible examples of this type where the constraints don't look like ordering on B and search on A."
Yes: as I showed in my comment above, such operators must know at least an element of one of the types involved. If the operator knows at least an element of A, a constant function on that element has the right type. Unfortunately, it's not very interesting.
comment by jmh · 2018-09-13T11:14:50.371Z
"Bringing in the agency (Ai→Ui)→Ai of both players leads to cycle. This cycle does not make sense unless the agency arrows are lossy in some way, so as to not be able to create a contradiction. "
I'm definitely missing something here - and may be thinking of things incorrectly. Isn't a contradiction inherent in cyclic behavior? I'm thinking of things like voting cycles, where preferences are multi-peaked, resulting in shifting majorities.
Is the "lossy" point just saying in such a cycle we have rules about pairing the alternatives that are then voted for and once one alternative has lost then it's out of the set for future votes?
Am I thinking of this the right way (even if putting it in a bit of a different context)?
comment by Oliver Sourbut · 2022-05-21T04:42:57.216Z
I think the gradient descent bit is spot on. That also looks like the flavour of natural selection, with non-infinitesimal (but really small) deltas. Natural selection consumes a proof that a particular A (mutation) produces B (fitness) to generate/propagate/multiply A.
I recently did some thinking about this and found an equivalence proof under certain conditions for the natural selection case and the gradient descent case.
In general, I think the type signature here can indeed be soft or fuzzy or lossy and you still get consequentialism, and the 'better' the fidelity, the 'better' the consequentialism.
This post has also inspired some further thinking and conversations and refinement about the type of agency/consequentialism which I'm hoping to write up soon. A succinct intuitionistic-logic-flavoured summary is something like but there's obviously more to it than that.