A Brief Intro to Domain Theory
post by Diffractor · 2019-11-21T03:24:13.416Z · LW · GW
So, domain theory is a fairly abstract branch of math which is about giving semantics to weird recursive constructions in computer science, in the form of partially ordered sets with additional structure. I'm still learning the parts of it that concern building a link to explicit computable rules that can be implemented in a programming language, and it's not easy at all to learn. Takes a lot of persistence.
However, the parts I have learned so far seem worth explaining more widely, due to the ability to pull off some very unique fixpoint constructions in domain theory that are very hard to do in any other area of math. The classical example is showing that there are nontrivial models of the untyped lambda calculus. Lambda terms can act as functions from lambda terms to lambda terms, but it's awfully hard to explicitly come up with a space D that's isomorphic to its own space of functions [D→D], except for the single-point space, due to cardinality constraints. We must work with a restricted notion of function space in order to dodge the cardinality constraints.
Also, in game theory, we can view a policy as a function from the opponent's policy to a probability distribution over our own actions. This was the subject of the Lawvere problem a while ago that was solved by reflective oracles. Specifically, you'd want a pair of spaces X ≅ [Y→ΔA], Y ≅ [X→ΔB], where ΔA and ΔB are the spaces of probability distributions over the two players' actions (and then figure out what sort of computable construction corresponds to the thing you just built, if one exists).
If you want to have some seemingly-impossible type signature, it seems worthwhile to see what arsenal of tools domain theory has to let you construct such a space. So this will be a quick rundown of chapters 1-5 in these domain theory notes. http://www.cs.bham.ac.uk/~axj/pub/papers/handy1.pdf
The Basics:
Domains are a special type of partially ordered set, where the standard order ⊑ roughly corresponds to "information content". Now, given an arbitrary partially ordered set, sup and inf (sup roughly corresponds to the smallest batch of information that contains all the information present in the elements of the set you're taking the sup of, and inf corresponds to the largest batch of information such that every element of the set has more information than that) may not exist for certain subsets. As a toy example, take the poset of three elements that looks like a V. The set consisting of the top two elements has an inf, but it doesn't have a sup. If we add an element on top to turn the poset into a diamond shape, then all subsets have a sup and inf.
In a domain, we don't require the existence of arbitrary sup and inf (that's called a complete lattice), and we don't even require the existence of sup and inf for arbitrary finite sets (that's called a lattice). What we do require is that every directed set has a sup. A poset which fulfills this is called a dcpo (directed-complete partial order).
What's a directed set? It's a nonempty set A, where, for all x, y ∈ A, there's a z ∈ A s.t. z ⊒ x and z ⊒ y. You can always find upper bounds (not necessarily the supremum, though!) for any finite subset of a directed set, within the directed set. Consider the poset given by the natural numbers, where the ordering is given by the standard ordering on natural numbers. This looks like a chain that starts at 0 and extends forever upwards. This fails the requirement that every directed set has a sup! Because the set of all natural numbers is a directed set, but there's no sup for it. If we add a single element on top corresponding to ∞, which is ⊒ everything, then every directed set has a sup again.
So, one requirement for being a domain is that every directed set has a sup. Or, in other words, given some arbitrary batch of information-states where for every two information-states, there's a third one incorporating all the information in both of them (and maybe some more info), there must be a minimal information-state incorporating all the information from the whole batch. Any finite poset fulfills the "every directed set has a sup" property, but there may be interesting failures when you move to infinitely many points.
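To make this concrete, here's a small sketch in Haskell (all names here are mine, purely for illustration) that represents a finite poset explicitly and computes the sup of a subset when one exists. On a finite poset every directed set contains its own sup, which is why the interesting failures need infinitely many points.

```haskell
-- A finite poset: its elements plus the order relation.
data Poset a = Poset { elems :: [a], leq :: a -> a -> Bool }

-- All upper bounds of a subset, within the poset.
upperBounds :: Poset a -> [a] -> [a]
upperBounds p xs = [u | u <- elems p, all (\x -> leq p x u) xs]

-- The sup (least upper bound) of a subset, if it exists.
supOf :: Poset a -> [a] -> Maybe a
supOf p xs = case [u | u <- ubs, all (leq p u) ubs] of
               (u:_) -> Just u
               []    -> Nothing
  where ubs = upperBounds p xs

-- The V-shaped poset from the text: 0 is bottom, 1 and 2 are incomparable.
vee :: Poset Int
vee = Poset [0, 1, 2] (\x y -> x == y || x == 0)
-- supOf vee [1,2] == Nothing: the top two elements have no sup.
```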
The next component is continuity. Besides the standard ⊑ ordering, there's a ≪ ordering, which corresponds to approximation. x approximates y (written x ≪ y) iff, for all directed sets A where sup A ⊒ y, there's an a ∈ A s.t. a ⊒ x. As an example, if we take the space [0,1] and equip it with the usual order to turn it into a poset, x ≪ y (in the poset) corresponds to x < y (in the standard number ordering), except that 0 ≪ 0 since 0 is the bottom element. Transitivity holds, and antisymmetry holds. But reflexivity doesn't necessarily hold, as this example shows. An element that approximates itself, x ≪ x, is called compact.
A continuous dcpo is one where, for all points x, x = sup{y : y ≪ x}. This set happens to be directed (though this fact isn't obvious), and in short, it means that any information-state can be described as the sup of information states that approximate it.
An example of a non-continuous dcpo is: Have two copies of the natural numbers, and add an ∞ that's above both of those chains, so they're "glued together at infinity". No numbers (in either of the chains) approximate ∞, because for any point in one of the chains, you can take the entire other chain, and that's a directed set with a sup of ∞, but nothing in that directed set is above the point you picked. So ∞ can't be built by taking the sup of stuff that approximates it, because there is no such stuff.
If, for all points x, x = sup{y ⊑ x : y is compact} (ie, every element is the sup of the compact elements below it), then it's called an algebraic dcpo. These are nicer to work with.
A domain (as defined in the linked notes) is a continuous dcpo (every element can be built as the sup of stuff that approximates it, and all directed sets have a sup). But there's one more condition that shows up so often that I think it should just be folded into the definition of a domain, because if you drop it, an awful lot of useful stuff stops working.
Pointedness. Specifically, your domain must have an element ⊥ that is below everything. The typical interpretation of ⊥ is something that loops forever. So I'll define a domain as a continuous dcpo with a ⊥.
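In programming-language terms (a Haskell reading, since lazy languages are where this shows up most directly): every type automatically contains such a ⊥, namely a computation that never returns.

```haskell
-- The bottom element: a computation that loops forever.
-- It inhabits every type, so every Haskell type is "pointed".
bottom :: a
bottom = bottom
```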
Functions:
We only consider continuous functions between domains. Specifically, a continuous function f is one which fulfills the following two properties: x ⊑ y implies f(x) ⊑ f(y) (monotonicity), and f(sup A) = sup f(A) for every directed set A.
In other words, the function should preserve the information ordering (increasing the information content of the input increases the information content of the output), and also it shouldn't matter whether you take the sup of a directed set first and then send it through f, or whether you send your directed set through f and take the sup of the image (which is also a directed set, by the first condition).
[D→E] is the space of all continuous functions from a domain D to a domain E, and it fulfills a lot of the conditions to be a domain, though not necessarily continuity. The ordering on functions is given as follows: f ⊑ g iff ∀x: f(x) ⊑ g(x). (ie, g always produces a result at least as informative as f's on the same input)
The notion of sup in the function space for a directed collection of functions F is as follows: (sup F)(x) = sup{f(x) : f ∈ F}. The image in the second half is directed because F is a directed set of functions, thus showing that the result is well-defined, and then you can put in some extra work to show that sup F is also a continuous function, so the function space is a dcpo.
The bottom element in the function space is the function where ∀x: f(x) = ⊥, the maximally uninformative function that just loops forever on all inputs.
The approximation order in the function space might be quite badly behaved, however. The function space might not even be a domain. In order to get the function space to be a domain, you have to impose extra conditions on either D or E. There are several possible extra conditions you can impose which get the function-space to be a domain (or algebraic domain), and also get the function-space to have the same property. So when we jump up to the category theory level, we typically work in some subcategory of domains that is closed under taking the function space.
One neat property of domains is that every continuous function from a domain to itself has a least fixed-point below all the other fixed points, and this least fixed-point also has a nice pretty description. The least fixed point of f ∈ [D→D] is given by the sup of the directed set A with elements: ⊥, f(⊥), f(f(⊥)), f(f(f(⊥)))...
⊥ ⊑ f(⊥) because ⊥ is below everything. f is monotone, so repeatedly applying f shows that each element in the chain is ⊑ the element after it. So it's a directed set and has a sup. The image of A after applying f is just A itself (minus the bottom element). By continuity of f, f(sup A) = sup f(A) = sup A. So sup A is a fixed point of f.
Also, the function that maps a function f ∈ [D→D] to its least fixed point is itself continuous.
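In Haskell this whole story is one line: lazy recursion computes exactly the sup of the chain ⊥, f(⊥), f(f(⊥))... A sketch (fix also ships in Data.Function):

```haskell
-- The least fixed point of f, as the limit of the Kleene chain.
fix :: (a -> a) -> a
fix f = f (fix f)

-- Example: factorial as the least fixed point of one unrolling step.
fact :: Integer -> Integer
fact = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))
-- fact 5 == 120, while fix id is bottom (id's least fixed point loops forever).
```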
There's also a category-theory analogue of this same result, but it's quite a bit more complicated.
Quick note on notation:
The standard notation for composition of functions, f∘g, doesn't mesh well with intuition, because you have to read it backwards. First g is applied, then f is applied. This can be dealt with acceptably when you aren't composing that many functions, but later on in domain theory, you work up to some really complicated function compositions, so making up a notation that puts the functions the proper way around empirically made proofs vastly easier for me. From now on, f;g;h will be a notation for "first apply f, then g, then h". It's the same as h∘g∘f, but without the need to take ten seconds to mentally reverse the order whenever it shows up.
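For what it's worth, Haskell already has a left-to-right composition operator matching this notation, if you want to experiment:

```haskell
import Control.Arrow ((>>>))

-- f >>> g >>> h means "first f, then g, then h", i.e. h . g . f,
-- just like the f;g;h notation above.
pipeline :: Int -> String
pipeline = (+ 1) >>> (* 2) >>> show
-- pipeline 3 == "8"
```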
Embeddings and Projections:
An embedding-projection pair between a pair of domains D and E is a pair of continuous functions e: D → E, p: E → D, s.t. e;p = id_D, and p;e ⊑ id_E. In other words, E is richer than D and has more information states available, so you can embed D into E. Crunching back down to D via the projection recovers D exactly (ie, the embedding doesn't identify different things, it's injective). However, projecting E down to D, and then embedding back into E, destroys information, so p;e ⊑ id_E, which is equivalent to ∀y: e(p(y)) ⊑ y.
These are nice because an embedding uniquely determines a projection, and vice-versa. If you've got an embedding, it's usually pretty easy to come up with a natural candidate for a projection, and once you verify the two defining properties of that pair of functions, you know there's nothing else you could substitute in (note that there may be many embedding/projection pairs! But once you fix one of the two, that uniquely determines the other part).
This comes in handy once we get up to the category theory level, because lots of times when you're drawing a diagram you're like "damn, I need this arrow to go the other way", but if the arrow is an embedding or projection, you can just automatically get an arrow going the other way by going "take the unique projection/embedding corresponding to the embedding/projection I already have". Normally you'd need an isomorphism to pull off the "reverse the arrows" trick, but embeddings and projections also let you do that. Also all isomorphisms are embeddings and projections.
Also, embeddings and projections have the nice property that they have to be strict (ie, they must both map ⊥ to ⊥).
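Here's a tiny concrete embedding-projection pair between two flat domains, sketched in Haskell (the choice of Bool and Ordering is just for illustration):

```haskell
-- Embed the flat domain Bool into the bigger flat domain Ordering.
embed :: Bool -> Ordering
embed False = LT
embed True  = GT

-- The uniquely-determined projection: EQ has no preimage, so it goes to bottom.
project :: Ordering -> Bool
project LT = False
project GT = True
project EQ = project EQ  -- loop forever: this is the bottom element

-- Check: project (embed x) == x for every x (including bottom, since
-- pattern matching makes both functions strict), while embed (project EQ)
-- is bottom, which is below EQ: project-then-embed only loses information.
```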
Category Theory Constructions:
So, there's several things you can do to a domain to get another domain. We already know about function-space, but there are others.
Cartesian product, D×E, is simple. It's the domain of pairs (d,e) where (d,e) ⊑ (d′,e′) iff d ⊑ d′ and e ⊑ e′.
These get you a cartesian closed category, if you are working in a sufficiently nice subcategory of domains where the function-space is forced to be a domain too and have the defining property of the subcategory.
Considering the subcategory of domains where the only morphisms are strict functions (functions which map ⊥ to ⊥), our candidate for the function space would be the strict function space, [D⊸E], of all continuous functions which map ⊥ to ⊥.
This doesn't harmonize with the cartesian product. However, there's another product: the smash product, D⊗E. It's like the cartesian product, but all pairs with a ⊥ element in one of the coordinates are identified as the same point, the ⊥ element of the product. Or, in other words, if one of the components of the pair loops forever, the whole pair is just classified as the loop-forever bottom element of the smash product.
Strict function space and smash product get you a monoidal closed category.
There's lifting, which is taking D and making a new domain D⊥ by sticking a single new element below everything in D.
There's something called the coalesced sum, D⊕E, which corresponds to the coproduct in category theory. It's done by taking two domains D and E, and identifying their bottom elements, which glues them together at the bottom point. The coalesced sum of the three-element domain that looks like a V with itself is a domain with 4 incomparable points, and a bottom element below them all.
So, function space, strict function space, cartesian product, smash product, coalesced sum, and lifting, are our basic building blocks.
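Most of these building blocks have natural (approximate) renderings as Haskell data types, where strictness annotations do the bottom-identification. A sketch, with the caveat that Haskell's lazy pair (a, b) is really the lifted product, since ⊥ and (⊥, ⊥) are distinct values there:

```haskell
-- Lifting: a single new bottom below everything in a.
-- Values: bottom, then Up bottom, then Up x for defined x.
data Lift a = Up a

-- Smash product: the strict fields identify every pair containing
-- a bottom coordinate with the bottom of the product itself.
data Smash a b = Smash !a !b

-- Coalesced sum: strict constructors glue the two bottoms together
-- into the single bottom of the sum.
data CoSum a b = InL !a | InR !b
```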
There's one more rather complicated one, the bilimit, which plays a key role in generating fancy domains with seemingly-impossible type signatures.
The Bilimit:
Consider an infinite chain of domains indexed by the natural numbers, D₀, D₁, D₂... where, for all n, there's an embedding of Dₙ into Dₙ₊₁. Call this an expanding system. In other words, as you go further up the chain, the domains get more complicated and keep adding extra points while preserving the existing structure. This is the category-theory analogue of a chain (a type of directed set), analogous to the chain ⊥, f(⊥), f(f(⊥))... we made to construct a fixed point of a function. It's then reasonable to ask "what's the analogue of sup in this context?"
A composition of embeddings is an embedding (the projection is given by composing the corresponding projections), so we've also got embeddings from any domain in this sequence to any higher domain, and projections from any domain in this sequence to any lower domain.
There's a very special domain D∞ called the bilimit, which is both a limit of the diagram with the projections, and a colimit of the diagram with the embeddings. Or, in other words, embedding Dₙ into D∞ does the same thing as embedding Dₙ into Dₙ₊₁, then Dₙ₊₂..., and then embedding into D∞. Same with projection: it doesn't matter whether you project D∞ into Dₙ directly, or project into Dₙ₊₁ and then into Dₙ. If something else is a limit or colimit, you can embed (or project) into that thing as needed.
The bilimit can be thought of as the infinite product of the domains (equipped with the obvious ordering), but you only keep the points of the form (x₀, x₁, x₂...) where ∀n: pₙ(xₙ₊₁) = xₙ. In other words, a point in the bilimit corresponds to some infinite sequence of points/ever-more-detailed information states from the ever-more-detailed domains, and crunching the n+1'th domain (more detailed) down to the n'th domain (less detailed) maps the n+1'th coordinate (more detailed information state, xₙ₊₁) down to the n'th coordinate (less detailed information state, xₙ). The bilimit of a finite sequence would just be the last domain in the sequence, because once you've got the final coordinate, that fixes everything else. The bilimit of an infinite sequence is like the "completion" of the process of the domains getting more and more detailed.
To project from the bilimit to one of the finite stages, just take the appropriate coordinate, that gives you the point to map to. To embed from one of the finite stages to the bilimit, just project back to get points for the earlier stages, and embed forwards to get points for the later stages.
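Here's a sketch of that recipe, under the simplifying (and purely illustrative) assumption that one Haskell type d can represent the points of every stage, with proj k and emb k moving between stage k and stage k+1:

```haskell
-- A bilimit point: one coordinate per stage, with the intended
-- invariant proj k (pt (k + 1)) == pt k.
type Point d = Int -> d

-- Projecting the bilimit to stage n is just taking the n'th coordinate.
projectStage :: Point d -> Int -> d
projectStage pt n = pt n

-- Embedding a stage-n point into the bilimit: project back to fill in
-- the earlier coordinates, embed forward to fill in the later ones.
embedStage
  :: (Int -> d -> d)  -- proj k: crunch stage k+1 down to stage k
  -> (Int -> d -> d)  -- emb k: include stage k into stage k+1
  -> Int -> d -> Point d
embedStage proj emb n x = coord
  where
    coord k
      | k == n    = x
      | k <  n    = proj k (coord (k + 1))
      | otherwise = emb (k - 1) (coord (k - 1))
```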
If you take a point x in the bilimit D∞, projecting to Dₙ and then embedding back into D∞ gets you a point ⊑ x, because project-then-embed moves points down. We can consider the batch of points in D∞ given by projecting-then-embedding x into some finite stage. This is a directed set, and the sup is x itself. (ie, to get a close approximant for x, just project into some really distant domain and then embed back; that gets you a long initial sequence of coordinates that match up with x itself)
This bilimit is the analogue of the sup of a chain, on a category-theory level.
Since we're working in some subcategory of domains in order to make sure the function spaces work out alright, we'd also want the defining property of the subcategory to be preserved under bilimits as well. There are indeed some conditions you can impose so you're working in a subcategory closed under bilimits.
The nicest and most convenient such subcategory is the category of bifinite domains. There's a really complicated definition of them, but after putting in a lot of work, you can show that something is bifinite iff it's the bilimit of some expanding system of finite pointed posets. (ie, every bifinite domain can be written as the bilimit of an infinite sequence of finite partially ordered sets which all have a bottom element). Another cool note is that these are all algebraic (every point x can be made by taking the sup of the compact elements below x). The compact elements of a bilimit are those which can be made by taking a point at some finite stage and embedding it into the bilimit (ie, a point is compact iff it requires only finitely much information to specify; you can complete the rest of the information by just embedding it forward forever once you get far enough)
The problem is that, since you've gotta make it as a limit of finite domains, that impairs your ability to incorporate probabilities into things, since there are infinitely many probabilities. Incorporating probabilities into domains is one of the things on my to-learn list.
Category Theory + Domain Theory:
To complete the analogy and get an analogue of the least-fixpoint theorem, but for domains, we need an analogue of a continuous function (we already have an analogue of the sup of a chain, in the form of the bilimit).
In this case, the analogue of a continuous function would be a continuous functor (more on what makes a functor continuous later). For example, consider the functor F s.t. F(D) = [D→D]. A fixpoint of F would be a domain D that's isomorphic to [D→D]. We also want our functor to map embeddings to embeddings for the upcoming application, so if there's an embedding e from D to E, we need F(e) to be an embedding from F(D) to F(E). If you've got a functor with multiple inputs, then it needs to map tuples of embeddings to embeddings. Consider the cartesian product functor G where G(D,E) = D×E. Given embeddings e: D → D′ and e′: E → E′, G(e,e′) needs to be an embedding from D×E to D′×E′.
Let's do this for the function space, as that's the most complicated one.
Exercise: Do this for all the other constructors, like lifting and cartesian product and coalesced sum.
We have domains D, D′, E, E′, and embeddings e: D → D′ and e′: E → E′. We want F(e,e′) to be an embedding from [D→E] to [D′→E′]. Well... we have a function f from D to E, and we're trying to get a function from D′ to E′. Using f, then e′, we can get from D to E′. But we have to somehow get from D′ to D, and we only have an embedding going the other way...
But it's an embedding, so we can just use the unique projection p: D′ → D, which goes the other way! This is what I was talking about with restricting to embeddings so you can get your arrows going the other way when the need arises.
Specifically, our candidate for the embedding is: F(e,e′)(f) = p;f;e′, and our candidate for the projection going the other way is to map g: D′→E′ to e;g;p′. Let's call this function P(e,e′) for later use. It's a bit of work to show that these are continuous functions, so I'm gonna skip that part, but I will show the part about these two things fulfilling the relevant conditions for embeddings and projections.
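As plain Haskell, the candidate embedding and projection on function spaces look like this (a sketch; e, p and e′, p′ stand for the assumed embedding-projection pairs between D, D′ and E, E′):

```haskell
-- The embedding [D -> E] -> [D' -> E']: precompose the projection,
-- postcompose the embedding. This is p;f;e' in the post's notation.
funEmbed :: (d' -> d) -> (e -> e') -> (d -> e) -> (d' -> e')
funEmbed p e' f = e' . f . p

-- The projection [D' -> E'] -> [D -> E]: the mirror image, e;g;p'.
funProject :: (d -> d') -> (e' -> e) -> (d' -> e') -> (d -> e)
funProject e p' g = p' . g . e
```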
Condition 1: embedding then projecting had better recover your original point.
P(e,e′)(F(e,e′)(f)) = e;(p;f;e′);p′ = (e;p);f;(e′;p′) = f, by the definition of our embeddings and projections, and embed-then-project giving you identity.
Condition 2: projecting then embedding had better produce a lower point.
F(e,e′)(P(e,e′)(g)) = p;(e;g;p′);e′ = (p;e);g;(p′;e′) ⊑ g, so P(e,e′);F(e,e′) ⊑ id.
(again, by project-then-embed giving you a lower point, for the inequalities)
So our functor does map pairs of embeddings to embeddings.
Now do it with the other building blocks.
Anyways, returning to the original thing, we should go over the definition of continuity for a functor.
For an expanding sequence with bilimit D∞, applying the functor F to the sequence gives you a different expanding sequence (because F maps embeddings to embeddings). This new sequence has a bilimit E∞.
A functor is continuous if F(D∞) ≅ E∞. In other words, it doesn't matter if you take the bilimit and shove it through the functor, or shove your sequence through the functor and then take the bilimit of that. Swapping out "functor" for function, and "bilimit" for sup, we can see the close parallel to continuity of functions.
It's kinda hard to verify this condition, but there's a stronger one called local continuity which is a lot easier to check and implies continuity. For all the basic building blocks of domains (product, function space, lifting, coalesced sum, etc...), and compositions of them, the associated functors are continuous, which is quite nice.
Yo Dawg I Heard You Like Fixed Points:
Let's momentarily just look at continuous functors that only take one input, not multiple. Like the functor F mapping D to [D→D], as a concrete example. We're going to be trying to complete the analogy to the original fixpoint theorem. We start with ⊥, and keep applying our function to it to build up a chain, and then take the sup of the chain, and that's the minimal fixed point. So what's the analogue of ⊥ in a category-theory sense?
It's just the domain consisting of a single point, call it I. And there's only one possible embedding from that to whatever domain you want: the embedding which maps the single point to the bottom element in the target domain.
So our expanding sequence is I, F(I), F(F(I)), F(F(F(I)))... and for the embeddings, we've got our unique embedding e from I into F(I), so F(e) is an embedding from F(I) into F(F(I)), and F(F(e)) is an embedding from F(F(I)) into F(F(F(I)))... Ok, we have our embeddings, and thus our projections. Now we take the bilimit D∞. By functor continuity, F(D∞) ≅ the bilimit of the expanding sequence shoved through F. But shoving our expanding sequence through F just gives us the same sequence, with I clipped off the front. This doesn't change the bilimit at all.
So, as desired, F(D∞) ≅ D∞, i.e. D∞ ≅ [D∞→D∞]. And this is how you cook up mathematical spaces isomorphic to their own function space. Or if you want a space that's isomorphic to some other combination of the basic building blocks applied to itself, that also can be done.
Now, if you're attentive, you'll have noticed that this construction just gives you a single point as your canonical solution to D ≅ [D→D]. This can be fixed by starting with something other than a single point (like a domain with two points, a top and a bottom), and then specifying a custom embedding from that into the function space of the 2-point domain, giving you a full-blown model of the lambda calculus. But apparently the least fixed-point/canonical solution grants you extra-nice properties. I don't know what those are, since I haven't gotten to that chapter, but apparently you have more powerful tools available if you're working in the least-fixed-point domain instead of some other fixed-point.
There was a paper by Abramsky (here) that looked at something called the lazy lambda calculus, which turns out to be exactly the least-fixed-point solution of D ≅ [D→D]⊥, which I'm still digesting. This is notable because adding an extra point at the bottom lets you get away from a single point. We start with the one-element domain, take the function space (still the one-element domain), add a bottom element (ok, now we have two elements), take the function space of that (now three elements, for the three possible continuous functions), add a bottom element, and from there it gets waay more complicated.
The analogue of the isomorphism between D and [D→D]⊥ in the computational sense is that you can take a lambda term (an element of D), work on it until you get it into something called weak head normal form (a form like λx.M), and then it's a function from lambda terms to lambda terms. The bottom element comes in because maybe the lambda term doesn't reduce to weak head normal form, and then the reduction process just loops forever, and that's the bottom element (because bottom elements correspond to stuff that just loops forever or doesn't return an output)
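For a computational feel for this, here's a minimal Haskell sketch of a reflexive domain: the recursive type packs the function space back into the domain, Haskell's implicit lifting supplies the ⊥, and the looping term Ω denotes bottom, matching the weak-head-normal-form story. (This is only an illustration, not a claim about Abramsky's exact model.)

```haskell
-- A domain isomorphic to (a lift of) its own function space.
newtype D = D { unD :: D -> D }

-- Application and lambda-abstraction for untyped lambda terms.
apply :: D -> D -> D
apply = unD

lam :: (D -> D) -> D
lam = D

-- Omega = (\x. x x) (\x. x x): forcing it to weak head normal
-- form loops forever, so it denotes the bottom element.
omega :: D
omega = apply delta delta
  where delta = lam (\x -> apply x x)
```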
Now, with tupling and some more work, it's possible to even solve multiple fixpoint equations simultaneously. If you've got continuous functors F₁...Fₙ (each taking n domains as input), then there's a least-fixpoint solution to the system: D₁ ≅ F₁(D₁...Dₙ), D₂ ≅ F₂(D₁...Dₙ), ... Dₙ ≅ Fₙ(D₁...Dₙ).
So that's what domain theory can do. You can set up a bunch of isomorphisms that you want, of the form "these domains should be isomorphic to these other domains that can be defined in terms of the basic building blocks like function space and smash product", and cook up a canonical solution. An example I'm particularly interested in is (A is the three-element domain that looks like a V): D ≅ [E→D]⊕A, E ≅ [D→E]⊕A.
This corresponds to two domains of strategies, where a strategy is a continuous function from their strategy to your strategy, or it's one of two possible actions (or loop forever). So you can take two points in these, and repeatedly feed them to each other to either get looping forever, or a concrete action (the "repeatedly play the strategies against each other" function is a continuous function from D×E to A×A, though that's a bit tricky to show). This seems kinda like the full domain-theory version of Vanessa's metathreat hierarchy [AF · GW] without the restriction to operate on some fixed finite level, though I'm not sure how to get probabilities in there.
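To illustrate, here's a hypothetical Haskell sketch of the "repeatedly play the strategies against each other" function, collapsing the two domains into one type Strat for brevity (every name here is made up):

```haskell
data Act = Act1 | Act2  -- the two concrete actions: A's non-bottom points

-- A strategy either responds to the opponent's strategy with a new
-- strategy, or commits to an action (strictly, as the coalesced sum demands).
data Strat = Respond (Strat -> Strat)
           | Play !Act

-- Feed the strategies to each other until both commit to an action.
-- If the responses never bottom out, this loops forever: that's bottom.
play :: Strat -> Strat -> (Act, Act)
play (Play a)    (Play b) = (a, b)
play (Respond f) s        = play (f s) s
play s (Respond g)        = play s (g s)
```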
More later on how to get those domains into something you can actually work with on a computer, once I learn that.
4 comments
comment by cousin_it · 2019-11-21T08:23:33.207Z · LW(p) · GW(p)
For programmers, I think the simplest way to motivate domain theory is to think about a lazy language like Haskell. In such a language, if you have for example a list of numbers, it doesn't actually contain numbers, but rather "thunks" that when evaluated will give numbers. Some of these thunks, when you try to evaluate them, can go into an infinite loop instead of giving a number. Let's call these "bottoms". Basically any data structure can have some of its parts replaced with bottoms: a subtree of a tree can be bottom, and so on. Then we can say all functions in a lazy language are total functions from things-with-bottoms to things-with-bottoms: if your function goes into an infinite loop, we'll just say it returns a bottom.
Then we can say a piece of data X is "at least as informative" as Y, if Y can be obtained from X by replacing zero or more of its parts with bottoms. That's a partial ordering. And it's kind of obvious that any function we can write in our language will be monotone with respect to that ordering: f(X) will be at least as informative as f(Y).
So basically domain theory is what you get if you ask yourself "what if data structures could contain bottoms" and turn the abstractness crank a few times. In a lazy language it applies pretty naturally; in a strict language it's a bit more awkward, because things-with-bottoms don't exist at runtime to be passed around; and in a total language it's unnecessary, because all functions are already total.
comment by Pattern · 2019-11-21T17:13:59.903Z · LW(p) · GW(p)
Comments:
Great post! (I've used that notation for readability as well.)
> This seems kinda like the full domain-theory version of Vanessa's metathreat hierarchy [LW · GW] without the restriction to operate on some fixed finite level, though I'm not sure how to get probabilities in there.
> More later on how to get those domains into something you can actually work with on a computer, once I learn that.
I'm looking forward to these posts.
Questions:
> One neat property of domains is that every continuous function from a domain to itself has a least fixed-point below all the other fixed points, and this least fixed-point also has a nice pretty description. The least fixed point of f∈[D→D], is given by the sup of the directed set A with elements: ⊥,f(⊥),f(f(⊥)),f(f(f(⊥)))...
Does this statement apply to the real numbers or a subset of them?
> From now on, f;g;h will be a notation for "first apply f, then g, then h". It's the same as h∘g∘f, but without the need to take ten seconds to mentally reverse the order whenever it shows up.
Does the notation affect all the arrows in the proofs or just this part (below)?
> Specifically, our candidate for the embedding is: F(e,e′)(f)=p;f;e′, and our candidate for the projection going the other way is to map g:D′→E′ to e;g;p′. Let's call this function P(e,e′) for later use. It's a bit of work to show that these are continuous functions, so I'm gonna skip that part, but I will show the part about these two things fulfilling the relevant conditions for embeddings and projections.
comment by Pattern · 2019-11-21T17:07:59.343Z · LW(p) · GW(p)
Styling:
Important statements could be marked to stand out more. (I have these in bold.)
> [1] A poset which fulfills this is called a dcpo (directed-complete partial order).
> [2] A continuous dcpo
There's three paragraphs between [1] (where the term is defined) and [2] (where the term is first used outside of the definition). Now I understand why acronyms are usually IN ALL CAPS.
> From now on, f;g;h will be a notation for "first apply f, then g, then h". It's the same as h∘g∘f, but without the need to take ten seconds to mentally reverse the order whenever it shows up.
This also seems to appear way before it's used (below).
> Specifically, our candidate for the embedding is: F(e,e′)(f)=p;f;e′, and our candidate for the projection going the other way is to map g:D′→E′ to e;g;p′. Let's call this function P(e,e′) for later use. It's a bit of work to show that these are continuous functions, so I'm gonna skip that part, but I will show the part about these two things fulfilling the relevant conditions for embeddings and projections.
comment by TheMajor · 2019-11-21T08:47:26.203Z · LW(p) · GW(p)
This looks really interesting! Your special notation for composing functions is making this a lot harder to read for me though, I've read the part up to 'Embeddings and Projections', but I really have to take a break here and set extra time aside to read it at a slower pace from that point. I'm happy the notation works for you but it's really jarring for me.