Answering questions honestly instead of predicting human answers: lots of problems and some solutions
post by evhub · 2021-07-13T18:49:01.842Z · LW · GW · 24 comments
This post is the result of work I did with Paul Christiano on the ideas in his “Teaching ML to answer questions honestly instead of predicting human answers [AF · GW]” post. In addition to expanding upon what is in that post in terms of identifying numerous problems with the proposal there and identifying ways in which some of those problems can be patched, I think that this post also provides a useful window into what Paul-style research looks like from a non-Paul perspective.
Recommended prior reading: “A naive alignment strategy and optimism about generalization [AF · GW]” and “Teaching ML to answer questions honestly instead of predicting human answers [AF · GW]” (though if you struggled with “Teaching ML to answer questions honestly,” I reexplain things in a more precise way here that might be clearer for some people).
Setting up the problem
We want to train a model M that produces natural language answers to questions q about inputs x. There are a lot of reasons to be worried about training such a model, but one specific reason is that, if we train on question-answer data produced by humans, we might end up with a model that tries to predict what a human would say rather than a model that tries to answer the questions honestly.
To further narrow the scope, we'll just consider situations in which our model ends up implemented with a logical deduction structure, where it has some world model on top of which it does logical deduction to reach conclusions which it then uses to inform its output. In particular, we'll consider two models, M^+ and M^-, defined in pseudocode as
def M_plus(x, q):
axioms = world_model(x)
deduced_stmts = deduction(axioms)
return f_plus(q, deduced_stmts)
def M_minus(x, q):
axioms = world_model(x)
deduced_stmts = deduction(axioms)
return f_minus(q, deduced_stmts)
or defined in my notation as M^±(x, q) = f^±(q, deduction(world_model(x))), where f^+ and f^- are two different ways of translating from the results of deduction into a natural language answer. Specifically, f^+ is the “honest embedding” which directly converts between logical statements and their equivalent natural language, thus answering questions by embedding q as a logical statement and unembedding its answer from the deduced statements. Conversely, f^- is the “mimicry embedding” which just searches for deductions about what a human would say in response to q and outputs that—thus, f^- just quotes q, embedding it as just a string of characters for a human to respond to, rather than actually having to understand it in any meaningful way.
Now, our overall goal is to find some “reasonable” (i.e. plausibly implementable in ML) prior P such that P(M^+ | D) > P(M^- | D), where D is our dataset (the idea being that we're imagining that ML does something like find the maximum a posteriori model given some implicit prior). For the most part, however, we'll only be considering models with zero loss on our dataset[1]—of which M^+ and M^- should both qualify (though note that getting a dataset which is good enough for that to be true is nontrivial; see Step 3 of “Teaching ML to answer questions honestly” [AF · GW])—such that we just need P(M^+) > P(M^-). Furthermore, at least for this post, we'll be happy if we can just get P(M^+) ≳ P(M^-)—that is, we'll be focusing on Step 1 [AF · GW] in Paul's post rather than Step 2 [AF · GW].
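As a toy illustration of why the zero-loss assumption reduces the posterior comparison to a prior comparison, consider the following sketch (all names here are purely illustrative, not anything from the post):

```python
def map_model(models, prior, dataset):
    """Among models with zero loss on the dataset, the maximum-a-posteriori
    model under a prior is just the one the prior likes best -- so comparing
    P(M+ | D) with P(M- | D) reduces to comparing P(M+) with P(M-)."""
    zero_loss = [m for m in models if all(m(x, q) == a for (x, q, a) in dataset)]
    return max(zero_loss, key=prior)
```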
The basic process that we'll be using to interrogate this problem will be an attacker-defender setup, where the defender proposes some “reasonable” prior P along with implementations of M^+ and M^- such that P(M^+) ≳ P(M^-). Then, the attacker's job is to respond with some alternative M^{-'} (that implements something other than the intended algorithm) such that P(M^{-'}) > P(M^+) (under any set of “reasonable” assumptions that the attacker might need to make).
Solution attempts
Simplicity prior
The purpose of this first example is just to ask the question of whether this problem is even really a problem in the first place. To that end, we'll just let P be some “reasonable” simplicity prior and ask whether, given such a prior, there are any “reasonable” assumptions we can make such that P(M^-) > P(M^+).
Now, we'll assume that specifying deduction is approximately free, since it's a pretty simple operation (and it's shared between M^+ and M^-, so it shouldn't matter), and we'll assume that f^± are specified given world_model rather than the reverse (such that P(M^±) ≈ P(world_model) · P(f^± | world_model)[2]). Then, we get that P(M^+) > P(M^-) if and only if P(f^+ | world_model) > P(f^- | world_model). Furthermore, to make things clearer, let K(X) = −log P(X) (this is just informational entropy, the number of bits necessary to specify X under P), such that our condition becomes K(f^+ | world_model) < K(f^- | world_model).
We'll suppose that, if we don't condition on the world model, K(f^+) ≫ K(f^-), since f^+ requires fully specifying a map from the internal deduction language to natural language, whereas f^- doesn't need to know anything about natural language as it can just look for deduced statements of the form H(x, q) = a (i.e. “the human answers a to question q about input x”). Intuitively, f^+ has to actually understand q, which takes a lot of complexity, whereas f^- can merely treat q as a sequence of characters that get fed into the human model, which should be much simpler. Then, the question here becomes, do we get all of the extra bits necessary to specify natural language in f^+ back when we condition on the world model?
Maybe. I think that the answer here is very plausibly yes, simply because a world model that includes humans should have to include a lot of information about natural language that f^+ can reuse, in which case f^- shouldn't have an advantage over f^+ (though then we'd still need some version of Paul's Step 2 [AF · GW] to give f^+ an advantage). However, I think there are plausible assumptions under which conditioning on the world model isn't enough to level the playing field. For example, suppose the human model in the world model is sufficiently low-level such that it's just reasoning about neurons—then, there shouldn't be any way to easily read off information about natural language from the world model to help specify f^+. Alternatively, suppose the model is high-level enough to include a human natural language deduction procedure (e.g. one that shares some bits with the model's own deduction) but that includes a bunch of mistakes to account for human biases—in that case, if you tried to just read off the mapping from deduction to natural language included in such a world model for f^+, the simplest implementation would seem to be to copy over all the mistakes, resulting in such an f^+ just behaving the same as f^- rather than actually being any more truthful.
Thus, at least for the purposes of this post, we'll be assuming that this is actually a problem such that just using a “reasonable” simplicity prior isn't sufficient to level the playing field between f^+ and f^-.
Paul's original proposal
Defender
Next up, we'll take a look at a variant of Paul's proposed fix in Step 1 of “Teaching ML to answer questions honestly” [AF · GW]. Specifically, we'll be looking at the non-relaxed version without any stop gradients. We'll talk about the stop gradients that Paul mentions later on—and as for Paul's relaxation of the conditioning,[3] that's really only useful to think about when we start translating into ML, which isn't something we'll be doing here. Thus, we'll just be looking at the simplest, generative version of Paul's proposed prior.
Suppose we give M two output heads, f_1 and f_2 (which we will be treating fully symmetrically here), and split M's parameters into two groups, θ_1 and θ_2 (which we will not be treating symmetrically). Then, where p is some simplicity prior, we'll let P be the prior induced by the generative procedure

θ_1 ∼ p(θ_1)
θ_2 ∼ p(θ_2 | θ_1) · I[∀(x, q, a) ∈ D. M_{θ_1, θ_2}|f_1(x, q) = M_{θ_1, θ_2}|f_2(x, q)]

where I is the indicator function, D is our dataset, M_{θ_1, θ_2}|f_i is the f_i head of M_{θ_1, θ_2}, and p(θ_2 | θ_1) is just the simplicity prior on θ_2 given access to θ_1.
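To make the two-step nature of this prior concrete, here is a rough sketch of the generative procedure as naive rejection sampling (all function names are placeholders; an actual ML implementation would look very different):

```python
def sample_from_prior(sample_theta_1, sample_theta_2_given, model, dataset):
    """Draw theta_1 from the simplicity prior, then draw theta_2 conditioned
    on the two heads agreeing everywhere on the dataset, here approximated by
    naive rejection sampling."""
    theta_1 = sample_theta_1()
    while True:
        theta_2 = sample_theta_2_given(theta_1)
        outputs = [model(theta_1, theta_2, x, q) for (x, q, a) in dataset]
        if all(out["f1"] == out["f2"] for out in outputs):
            return theta_1, theta_2
```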
Here's why this might work. We'll think of M_{θ_1} as a function of θ_2, such that M_{θ_1}(θ_2) = M_{θ_1, θ_2}. Then, we'll define our desired M_{θ_1}^+ in pseudocode as
def M_theta_1_plus(theta_2, x, q):
axioms = world_model_no_H(H=theta_2)(x)
deduced_stmts = deduction(axioms)
return {"f1": f_plus(q, deduced_stmts), "f2": f_minus(q, deduced_stmts)}
or, in my notation, as the M_{θ_1}^+ such that θ_1 implements f^+, f^-, deduction, and the entirety of the world model except for a human-sized hole that's filled in by θ_2. Then, we'll let θ_2 = H be the correct human model.
To compare M_{θ_1}^+ to something, we'll define M_{θ_1}^- as the θ_1 that implements all of M^- (human model included) and let θ_2 be empty.
Then, calculating the complexity of the intended model under P, we get the cost of specifying θ_1^+, plus the cost of specifying θ_2 = H given θ_1^+ and the conditioning, plus a term for the complexity of M^+'s choice to put H in θ_2 as opposed to any other splitting between θ_1 and θ_2.
For now, we'll assume that this splitting term is negligible, though we'll flag that letting the defender make this assumption seems quite suspect. Moving forward regardless, however, and additionally assuming that the remaining shared terms are negligible (they shouldn't matter since they're shared between M^+ and M^-), we get that K(M^+) is approximately the cost of specifying θ_1^+ plus the cost of specifying H given the conditioning.
Then, calculating the complexity of M_{θ_1}^- for comparison, we just get the cost of specifying all of M^- in θ_1.
Now, determining if , we need (using the shorthands , ) which, making the assumption that , becomes which, assuming that the posterior conditioned on is dominated by the simplest model,[4] becomes
Finally, we'll argue that this (approximate) equality holds. Why? The basic intuition is that M^+ only has to pay for the parts of H that aren't nailed down by enforcing that, whatever θ_2 is, when inserted into θ_1^+, it has to make f_1 = f_2 on the dataset. Thus, we only have to pay for anything about H beyond the fact that H says a bunch of things in natural language that match on to statements in the internal deduction language. But that means we don't have to pay for the parts of H that define how it does natural language, which, hopefully, are precisely the same bits that specifying f^+ costs, completing the equality.
Attacker
Before I jump into possible attacks, it's worth introducing an alternative formulation of the above setup that should hopefully make things somewhat clearer. In the above setup, we never actually use the two heads for anything other than checking equality, which means we can replace them with the alternative two heads f and f?, where f is the output head and f? is the “check” head such that we condition on f? holding. We can convert back to the previous formalism just by setting f_1 = f and f_2 = (f if f? else g), where g is any function such that g(x) ≠ f(x) everywhere.
One thing in particular that I think the f/f? formulation makes very clear is just how powerful this conditioning is that we're giving the model for free: the model can enforce any condition on θ_2 over the dataset just by paying the cost in bits of specifying that condition.
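Concretely, the conversion back to the symmetric formulation might look something like this sketch (helper names are assumptions for illustration, and the answers are assumed to be strings):

```python
def to_symmetric_heads(model_with_check):
    """Given a model with an output head f and a check head f?, recover the
    symmetric two-head formulation: f1 = f, and f2 = f wherever the check
    passes and something-other-than-f wherever it fails."""
    def two_head_model(theta_1, theta_2, x, q):
        out = model_with_check(theta_1, theta_2, x, q)
        # g must differ from f on every input; appending a marker is one
        # arbitrary choice of such a g for string-valued answers.
        g = out["f"] + " [different]"
        return {"f1": out["f"], "f2": out["f"] if out["f?"] else g}
    return two_head_model
```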
Bit-splitting
As it turns out, there are quite a lot of ways to attack the above proposal. In fact, we already touched on one—if the cost of the choice of how to split parameters between θ_1 and θ_2 isn't negligible, the above argument fails. We'll call this problem the bit-splitting problem, and we don't have a satisfactory solution to it. However, the situation is actually much worse than just the bit-splitting problem.
Composition with consistency
Now, we'll propose a new attack. Let M_{θ_1}^{-'} be defined in pseudocode as
def M_theta_1_minus_prime(theta_2, x, q):
axioms = theta_2(x)
deduced_stmts = deduction(axioms)
return {"f": f_minus(q, deduced_stmts), "f?": check_consistency(q, deduced_stmts)}
such that we've moved the entirety of the world model to θ_2 and conditioned on that world model being consistent when deduction is applied to it (i.e. check_consistency holds on the deduced statements). The idea is that this should allow us to get a ton of savings in specifying the world model, as we only have to pay for anything about the world model on top of the fact that it must be consistent. To that end, we'll let θ_2 be the entire world model.
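To make the condition concrete, here is a minimal sketch of the kind of consistency check being conditioned on (representing statements as strings with "not " marking negation is an assumption for illustration, not the post's formalism):

```python
def check_consistency(deduced_stmts) -> bool:
    """A world model passes if no deduced statement appears alongside its
    negation."""
    stmts = set(deduced_stmts)
    return not any(("not " + s) in stmts for s in stmts)
```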
Now, we want to demonstrate that . Thus, calculating , but previously the defender argued that such that, if , we have a successful attack.
This attack, however, is potentially patchable if we define a new M_{θ_1}^{+'} such that
def M_theta_1_plus_prime(theta_2, x, q):
axioms = theta_2(x)
deduced_stmts = deduction(axioms)
return {
"f": f_minus(q, deduced_stmts),
"f?": check_consistency(deduced_stmts)
and f_plus(q, deduced_stmts) == f_minus(q, deduced_stmts),
}
which, for , gives us such that we get if and only if
It's far from obvious whether this approximate equality holds, however. Now that we're conditioning on both consistency and f^+ = f^-, it's unclear whether the second condition still gets us enough bits to pay for f^+ anymore. In particular, the cost of f^+ is the only term in the above expression that doesn't get to condition on consistency, which means that, if we were perfectly paying for f^+ by conditioning on f^+ = f^- before, now that we're also conditioning on consistency, but not also conditioning f^+ on it, if there's any overlap in what conditioning on consistency gives us and what conditioning on f^+ = f^- gives us, then we'll no longer be able to pay for all of f^+. For example, if consistency alone is enough to pin down a lot of the natural language information in the world model, that would undercut the bit savings from conditioning on f^+ = f^-, preventing that savings from being able to pay for f^+. We'll call this problem the composition problem, with the idea being that we need to ensure that the way in which M^+ pays for f^+ composes with any other way of using the conditioning to save bits.
Dataset conditioning
This last attack is probably the most fatal to this specific approach—though also the one we have the most promising approach for fixing, as we'll see in the next solution attempt. But first, suppose that for all x in the dataset, there is some splitting x = x_a ++ x_b (where ++ is concatenation) such that it is possible to predict x_b from x_a (together with the deduced statements). Then, let
def M_theta_1_minus_prime(theta_2, x, q):
axioms = theta_2(x)
deduced_stmts = deduction(axioms)
return {
"f": f_minus(q, deduced_stmts),
"f?": P(x[b], given=(x[a], deduced_stmts)) == 1,
}
or equivalently
Now, I won't go through the full calculation here, since it's very similar to the previous composition with consistency case, but if we try a similar fix as we did there, we get that the defense works if and only if an analogous approximate equality holds. In this case, however, predicting x_b from x_a is a much stronger condition than consistency—potentially strong enough to force the specification of most of the world model, likely including all of natural language, thus rendering the benefit from conditioning on f^+ = f^- nil.
Fundamentally, this problem, which we'll call the dataset conditioning problem, is coming from the fact that we leaked information about our dataset into the prior by conditioning on f? specifically over the dataset, rather than, as we'll see in the next proposal, over all inputs.
Fixing dataset conditioning by removing the dataset
Defender
To attempt to fix the dataset conditioning problem, we'll define a new P, given some simplicity prior p, as the prior induced by the generative procedure

θ_1 ∼ p(θ_1)
θ_2 ∼ p(θ_2 | θ_1) · I[∀x ∈ X. ∀q ∈ Q. M_{θ_1, θ_2}|f?(x, q)]

which is just the same as the previous P, except now f? is checked over all inputs, not just over the dataset (note that we still update on the dataset at the end—it's just our prior which is now independent of it). Thus, we'll call this the dataset-less proposal. Now, it's worth noting that implementing such a check in ML is non-trivial, but should hopefully be doable given a GAN-like adversarial setup.
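As a sketch of what the dataset-less generative procedure might look like if we had such an adversary (all names here are placeholders, and the adversary itself is the hard part):

```python
def sample_from_datasetless_prior(sample_theta_1, sample_theta_2_given, adversary):
    """As before, but the check head must now hold on all inputs, approximated
    here by an adversary that searches for a counterexample -- the GAN-like
    stand-in for the universal quantifier."""
    theta_1 = sample_theta_1()
    while True:
        theta_2 = sample_theta_2_given(theta_1)
        # adversary returns an (x, q) on which the check head is False,
        # or None if it can't find one
        if adversary(theta_1, theta_2) is None:
            return theta_1, theta_2
```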
Moving forward regardless, we'll define M_{θ_1}^+ in pseudocode as
def M_theta_1_plus(theta_2, x, q):
axioms = world_model_no_H(H=theta_2)(x)
deduced_stmts = deduction(axioms)
return {
"f": f_plus(q, deduced_stmts),
"f?": f_plus(q, deduced_stmts) == f_minus(q, deduced_stmts)
or not H_understands(x, q, deduced_stmts),
}
where H_understands is some check that the human model is actually capable of understanding and correctly answering q given x. Note that we need to compute something like “the human should understand this” rather than “the human does understand this,” since the latter is essentially just equivalent to whether f^+ = f^-, which inside of the disjunction above would make f? trivially true. Moving forward regardless, we'll let θ_2 = H be the desired human model.
Now, if we want this to beat the same M_{θ_1}^- as in the previous proposal, we need, using essentially the same derivation as before, an analogous approximate equality—which, assuming that H_understands is approximately free to specify given the rest of θ_1, under the idea that it should be relatively simple in terms of f^+ and f^- (though we'll flag that this is a somewhat sketchy assumption to allow the defender to make), becomes a condition which, if H_understands were equivalent to checking membership in the dataset, would be precisely the condition from before. While, as we'll see later, we explicitly don't want H_understands to just check dataset membership, the hope here is that, for the most part, those two conditions should serve a similar role, with H_understands ensuring that we don't need f^+ = f^- on crazy inputs that the human wouldn't be correct on, but still giving us the bit savings from enforcing that the human model be correct on the inputs where it should be.
Attacker
First, note that the above proposal still falls prey to the bit-splitting and composition problems. Hopefully, however, it should resolve the dataset conditioning problem, though we'll explore what that will actually take below.
Additionally, as we'll also see below, the dataset-less approach brings to the fore some new problems as well.
Dataset conditioning
Suppose there exists some predicate is_real such that is_real(x) holds precisely for the sorts of inputs x that could actually occur in the real world (and in particular for everything in the dataset). Then, we'll define M_{θ_1}^{-'} as
def M_theta_1_minus_prime(theta_2, x, q):
axioms = theta_2(x)
deduced_stmts = deduction(axioms)
return {
"f": f_minus(q, deduced_stmts),
"f?": not is_real(x, deduced_stmts)
or P(x[b], given=(x[a], deduced_stmts)) == 1,
}
such that, once again, θ_2 is the entire world model.
In such a case, we get that this attack beats M_{θ_1}^+ if and only if a condition holds that depends heavily on the complexity of is_real. Thus, we can reduce the dataset conditioning problem in this case to the existence of a sufficiently simple is_real.
Then, for this to be a problem, it'd have to be the case that is_real is reasonably cheap to specify. Fortunately, it does seem plausible that is_real would have to be quite large, potentially comparable to the world model itself. Put simply, it seems like if you want to know what's real, you have to be able to predict the world. In particular, predicting what data points will show up in the world is essentially precisely what the world model is supposed to be doing, such that is_real seems like it might just require a full copy of the world model.
That being said, there is a remaining risk here if there exists an is_real that isn't just fully equivalent to the world model—thus potentially allowing it to be much simpler—but such that conditioning on it still gives similar savings for θ_2. We'll look at a specific example of such an is_real in the next attack.
Furthermore, we'll start referring to these sorts of dataset conditioning problems as composition problems, as they have essentially the same structure. What made the dataset conditioning problem not a type of composition problem in the previous case was that previously we were getting the dataset itself for free, whereas now we have to pay for is_real just like we had to pay for the consistency check.
Composition with signature checking
We've already established that the composition with consistency problem that we saw in the previous proposal still applies in the dataset-less case. However, as the above dataset conditioning analysis suggests, if we can find a suitable is_real, there might be a way to produce a new, particularly pernicious composition problem for the dataset-less proposal that looks similar to dataset conditioning.
To do that, we'll let is_real(x) = is_signed(k, x), where is_signed computes whether x is cryptographically signed under the public key k, where k is some public key which is both in the dataset itself and has documents signed with it in the dataset as well. Then, as above, we'll let
def M_theta_1_minus_prime(theta_2, x, q):
axioms = theta_2(x)
deduced_stmts = deduction(axioms)
return {
"f": f_minus(q, deduced_stmts),
"f?": not is_signed(k, x, deduced_stmts)
or P(x[b], given=(x[a], deduced_stmts)) == 1,
}
and let θ_2 be the entire world model. Furthermore, as in the previous composition problem, we'll compare to an M_{θ_1}^{+'} defined as
def M_theta_1_plus_prime(theta_2, x, q):
axioms = theta_2(x)
deduced_stmts = deduction(axioms)
return {
"f": f_minus(q, deduced_stmts),
"f?": (
not is_signed(k, x, deduced_stmts)
or P(x[b], given=(x[a], deduced_stmts)) == 1
) and (
not H_understands(x, q, deduced_stmts)
or f_plus(q, deduced_stmts) == f_minus(q, deduced_stmts)
),
}
with θ_2 again the entire world model.
Then, we get that the defender's argument goes through if and only if an analogous approximate equality holds. Here, we see the same basic structure as the previous composition problem, where every term but the cost of f^+ gets to condition on the signature check. Similarly to the composition with consistency problem, if there is any overlap between what conditioning on the signature check gives you and what conditioning on f^+ = f^- gives you, then we won't be able to pay back all of f^+ anymore.
In this case, the reason we might expect an overlap is a bit tricky. If the adversary that's attempting to check the condition over all inputs is perfect, then it should be able to instantiate all possible signed strings, meaning that no bits are saved at all here. On the other hand, if the adversary has access to the dataset such that it can generate signed strings from the dataset, but not sign any new strings itself (because we're assuming that the signature function is one-way), then the condition enforces that θ_2 perfectly model the process generating such signed strings—which, if those strings are produced by humans in natural language, should help in specifying natural language, overlapping with the benefit of conditioning on f^+ = f^-.
Fortunately, there is a pretty simple way to fix this problem—just don't give the adversary access to the dataset, such that (if we're still assuming one-way signatures) it just won't be able to generate any examples of signed strings at all, fixing the problem similarly to the case where the adversary is perfect. Unfortunately, such a requirement seems like it might render our dataset-less prior essentially unimplementable in machine learning, as we're now required to find some way to train such an adversary without ever letting it look at the dataset, which could be quite difficult.
Understandability checking
Next, we'll take a look at the obvious problem that we saw when the defender was going through their derivation, which is that the defender relied on the sketchy assumption that H_understands is approximately free to specify given the rest of θ_1.
Intuitively, it seems like H_understands should be easy to specify in terms of f^+ and f^-, but as we mentioned previously, the obvious implementation of “does understand” makes f? trivial. Thus, to get a more general notion of H_understands—“should understand” rather than “does understand”—we'll presumably need to pay at least some non-negligible number of bits on top of f^+ and f^-.
Furthermore, we also can't just have H_understands check for membership in the dataset, otherwise we'd have to pay the entire cost of something like is_real, which we're supposing is very large for this approach to be able to solve the dataset conditioning problem.
We'll call this the understandability checking problem, and as we'll see, unlike some of the other problems with the dataset-less case, we don't have any satisfactory solution to this problem.
Event-avoiding humans
This next attack probably also occurs for Paul's original proposal, but it's significantly more pronounced in the dataset-less case.
For this attack, unlike in many of our previous attacks, we're just going to be attacking the definition of θ_2 rather than the definition of θ_1, such that we'll let θ_1^{-'} = θ_1^+, thus only changing θ_2 and leaving θ_1 alone. Then, we'll define θ_2 as a modified human model, where E is some event, H_E are the parts of H that are only necessary to compute H's behavior in the event of E, and the modification drops H_E and alters H such that the resulting human acts in such a way that the world model never predicts that E will happen.
Intuitively, what's happening here is that we're setting θ_2 in such a way that it causes the world to be simpler (in that E never happens) and thus the f? condition to be easier to achieve (in that we no longer need it to hold in the event of E). We'll call this the event-avoidance problem. Notably, to ensure that we still fit the dataset, we do need to select E such that it never happens on the dataset—but there should presumably be many such E.
Then, we get a successful attack whenever the complexity saved by dropping H_E exceeds the complexity of the avoidance modification—which should hold so long as there are events not in the dataset that are easy to avoid but where it's quite complicated to understand what humans would do in such cases—e.g. what would a human do if they lived to a million years (not in the dataset, humans can be modified to never choose to live that long, and behavior in that case seems quite complicated).
Humans with rocks for brains
In the event-avoidance case, we strategically selected θ_2 to make f? hold in an unintended way. However, there's another way to get f? to hold—or equivalently, to get f^+ = f^- or not H_understands—which is to strategically select θ_2 to make H_understands false instead. Specifically, since H_understands is determined from deductions based on θ_2's human model, we can select θ_2 to ensure we get not H_understands by encoding for a human that simply doesn't understand anything.
Naively, the problem with an attack of this form is that such a human model—call it the rocks-for-brains model—won't be able to fit the dataset. Thus, we can't construct a traditional attack in the same sense as the previous ones where we create an alternative M^{-'} with lower complexity than M^+ using such a model.[5]
However, there's something else we can do instead. Previously, the defender reduced their burden to an approximate equality which, if we convert back to the inequality formulation, is really just the burden that conditioning on f? save at least as many bits as it takes to specify f^+.
Thus, the defender needs it to be the case that f^+ is simpler than the simplest model θ_2 such that f? always holds. However, the rocks-for-brains model is such a θ_2, since we're assuming it makes H_understands always false, which means the defender needs it to be the case that f^+ is simpler than a human model that understands nothing—which, if we assume that such a model is much simpler than f^+, gives us a successful attack, which we'll call the rocks for brains problem.
The rocks for brains problem seems basically fatal to this approach, since it wipes out all possible gains from conditioning on f?. It might be possible to salvage this approach if H_understands is somehow implemented independently of the θ_2 human model, but that would presumably require H_understands to be quite large, negating the defender's argument that relied on H_understands being approximately free to specify.
Directions for future work
As a result of the above analysis, we've managed to identify a bunch of concrete problems that seem to be blocking approaches of this form. Of course, there could also be other problems that we haven't come up with yet, and there's also the issue of converting these priors into loss functions—though that's something that should probably come after we at least have something that seems to solve all of the problems that we're aware of.
Overall, however, the problems we've described here seem like good places to start, and I'll try to describe some thoughts on how they might be addressed below.
The bit-splitting problem
In Paul's original post, he makes a nod to the bit-splitting problem by carving the human model in half and putting one half in θ_1 and the other half in θ_2. Paul does this because he's modeling the underlying simplicity prior as a simple Gaussian over the L_2 norm of the parameters. Under such a model, there's a simplicity benefit to splitting all computation as equally as possible across all available parameters.
However, it's very unclear if any implicit neural network simplicity prior would actually behave that way. In particular, if we just change our simple model to use the L_1 norm instead of the L_2 norm, the benefit of splitting computation evenly across parameters goes away. Clearly, that's not a great model either—but it goes to show that there are plausible models under which bit-splitting isn't even a problem in the first place.
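Here's a toy calculation illustrating that claim, under the simplification that “the same computation” can be implemented either by one weight of magnitude 1 or by two weights of magnitude 0.5 whose contributions add:

```python
# Toy illustration: one weight of magnitude 1 vs. the same computation split
# evenly across two weights of magnitude 0.5.
single = [1.0]
split = [0.5, 0.5]

l2_cost = lambda ws: sum(w ** 2 for w in ws)
l1_cost = lambda ws: sum(abs(w) for w in ws)

print(l2_cost(single), l2_cost(split))  # 1.0 vs 0.5: a Gaussian/L2 prior rewards splitting
print(l1_cost(single), l1_cost(split))  # 1.0 vs 1.0: an L1 prior is indifferent to splitting
```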
Fundamentally, what we need to do to alleviate the bit-splitting problem is look for architectures where θ_1 and θ_2 are allowed to grow as large as they want with only linear complexity penalties for doing so. Though we traditionally train only fixed-size neural networks, if there were a way to train a dynamically-sized neural network, where size was penalized linearly, that would at least seem to solve this problem—and such a modification doesn't seem too difficult to make.
The understandability checking problem
Of the remaining problems, this is the one that I think is most likely to turn out to not actually be a problem—in that it doesn't require any changes to the prior to solve. In many ways, the understandability checking problem is just coming from our inability to define an H_understands check that's simple in terms of f^+ and f^-—but at least intuitively, it really does seem like such a simple implementation should exist.
The event-avoidance problem
In Paul's original post, he notes:
When differentiating the consistency test we should treat the intended head as fixed rather than differentiating through it. This removes SGD’s incentive to achieve consistency by e.g. making sure the world is simple and so all questions have simple answers. I currently feel very uneasy about the situation. The algorithm looks a lot more likely to work when we make this change, but it makes it harder to reason about the algorithm globally and I’m very scared about the fact that we can’t do an analogous trick in the program search case. I think the best hope may be to take the gradient-compression perspective suggested in the last bullet point. Overall this may amount to a significant change in this algorithm.
Essentially, what Paul is arguing here is that, going back to the original formulation, we should add a stop gradient operation to our generative procedure. Unfortunately, the result isn't actually a generative procedure anymore—the only straightforward way to make sense of it is to translate it into a loss function and think about what SGD would do, which makes this prior much harder to analyze—as Paul notes when he talks about his trepidation about going down this route.
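For instance, one rough translation of the stop gradient into a loss term might look like the following sketch (an assumed translation for illustration, not Paul's exact algorithm):

```python
import torch

def consistency_loss(intended_head_out: torch.Tensor, other_head_out: torch.Tensor) -> torch.Tensor:
    """Penalize disagreement between the heads while treating the intended
    head's output as a fixed target, so gradients cannot flow back through it
    (i.e. SGD can't satisfy consistency by pushing the intended head around)."""
    return ((intended_head_out.detach() - other_head_out) ** 2).mean()
```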
Hopefully, however, adding this stop gradient should solve the event-avoidance problem, since it should remove the incentive to modify θ_2 to make the world simpler (e.g. preventing E from happening).
Fortunately, despite Paul's prior worries about not being able to translate this trick into the program search case, it turns out it's not too difficult to convert Paul's stop gradient formulation into a generative prior that we can analyze as we have been doing previously. Furthermore, there are many such translations—in particular, we can pick either a dataset-less or non-dataset-less version, depending on whether we prefer being exposed to the dataset conditioning problem or the rocks for brains problem. In this case, I'll present a dataset-less formulation, which should solve dataset conditioning but unfortunately won't solve rocks for brains.
Now, given some simplicity prior p, we'll define the new prior as the prior induced by the following generative procedure:
- if , resample
First, note the uniform sampling in step 2. This is the trickiest part of this algorithm—and probably impossible to do in a neural network setting. Technically, the model space of a neural network is finite, so uniform sampling is possible—though the obvious algorithm of sampling uniformly from weight space doesn't work, since the parameter-function map is biased towards simplicity. Hopefully, just using a stop gradient will replicate what this algorithm is doing—and intuitively it does feel like they're doing the same thing—but I don't have any strong justification for that. Thus, it is somewhat unclear whether this procedure will actually have a nice translation when we convert back into an ML setup with stop gradients, but we'll put that aside for now.
Second, to make sense of our use of the model above, we'll let M have three heads, f_guard, f_1, and f_2, where f_guard is a Boolean head that “guards” whether or not we do the check, and f_1 and f_2 are two output heads. Furthermore, we'll let M have two groups of parameters, θ_1 and θ_2, such that M = M_{θ_1, θ_2}. Thus, we'll let θ_2' refer to the uniformly sampled parameters from step 2 and θ_2 refer to the parameters sampled from p afterwards.
Intuitively, what's going on here is that, in sampling a new set of parameters, wherever we want a stop gradient, we use the old θ_2', and wherever we don't want a stop gradient, we use the new θ_2—thus ensuring that we only sample θ_2 in such a way that incentivizes the prior to be higher through the non-stop-gradiented places where θ_2 shows up, not through the stop-gradiented places where we use θ_2'.
Now, we'll define our desired M_{θ_1}^+ in pseudocode as
def M_theta_1_plus(theta_2, x, q):
axioms = world_model_no_H(H=theta_2)(x)
deduced_stmts = deduction(axioms)
return {
"f_guard": H_understands(x, q, deduced_stmts),
"f1": f_plus(q, deduced_stmts),
"f2": f_minus(q, deduced_stmts),
}
where uses , either or , to give it its human model.
Then, we have to determine what θ_2 will be favored given the above θ_1. First, consider θ_2 = H. In that case, we get a complexity which, assuming that conditioning on the check exactly pays back the cost of H (which is false due to the rocks for brains problem, but we're just trying to solve event-avoidance here), reduces to approximately the cost of θ_1 alone.
Now, consider , as in the event-avoidance problem. In that case, we get the complexity but then, since being in is entirely unhelpful in making hold—since it only affects , which already has in its —we get where we're again assuming that we exactly pay back even if we use as our basis of comparison.
Then, comparing the two, we get that the event-avoiding θ_2 underperforms the desired θ_2 = H under a condition which shows how we've now gotten a factor of two leg up on the event-avoidance problem.[6] Unfortunately, there might still be event-avoidance problems severe enough that a factor of two isn't enough to rule them out, but I suspect that there are ways to iterate this procedure to get an arbitrarily large factor there, though I haven't done that analysis yet.
Finally, to validate that our construction is complexity-competitive, we can compare to the alternative construction
def M_theta_1_minus(theta_2, x, q):
axioms = world_model(x)
deduced_stmts = deduction(axioms)
return {
"f_guard": False,
"f1": f_minus(q, deduced_stmts),
"f2": f_minus(q, deduced_stmts),
}
which, since it sets f_guard always to false and never uses θ_2, is just equivalent to our original M^-.
Then, comparing complexities, we need the desired M_{θ_1}^+ construction to be no more complex than this alternative—which, assuming that the conditioning exactly pays off the cost of specifying H, it is, as desired.
The composition problem
For the composition problem, the central issue is that we're not convinced that we can get the approximate equality we need once we're also conditioning on something like consistency, even given the approximate equality that held when f_1 = f_2 was the only condition.
Fundamentally, one of the main reasons this is a problem is that f^+ doesn't get to condition on consistency, since it has to be defined in θ_1. Now, it could be the case that, even fixing that problem, we're still not convinced of the approximate equality; however, at the very least, reducing the problem down to this case seems like major progress.
Though I haven't fully fleshed it out yet, I believe that reducing the composition problem to the above case is possible via a prior whose generative procedure (built from some simplicity prior p) allows for arbitrarily many “rounds” of conditioning in sequence, thus allowing the consistency conditioning to happen first, followed by the f_1 = f_2 conditioning, and therefore the specification of f^+ to benefit from the consistency conditioning.
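As a very rough sketch of what such a multi-round prior could look like as a generative procedure (all names here are assumptions, and the real version would need each round's condition made precise):

```python
def sample_with_rounds(sample_theta_1, sample_next_given, conditions):
    """Each round samples a new block of parameters subject to its own
    condition on everything sampled so far, so later blocks benefit from the
    bits enforced by earlier conditions."""
    blocks = [sample_theta_1()]
    for condition in conditions:  # e.g. [consistency_holds, heads_agree]
        while True:
            candidate = sample_next_given(blocks)
            if condition(blocks + [candidate]):
                blocks.append(candidate)
                break
    return blocks
```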
Though I don't know if this sort of approach will fully resolve the composition problem—and even if it does the existence conditions could be extremely difficult to implement in an ML setup—at the very least I think it's a good place to start for future work.
The rocks for brains problem
Just as the dataset conditioning problem seemed essentially fatal for Paul's original proposal, the rocks for brains problem seems essentially fatal for the dataset-less proposal, putting us in a sort of “pick your poison” scenario. That being said, there are some potential paths forward for addressing the rocks for brains problem.
Fundamentally, the rocks for brains problem is coming from the difficulty of performing the H_understands check—thus, if we could remove the need to perform such a check, or find a cheap implementation for it that doesn't reference the human model at all, we could solve the rocks for brains problem. Really, all H_understands needs to do is identify some reasonably large set of questions for which the human model should always be correct. For example, if H_understands is true for all questions about “basic” facts about the world, and false otherwise, that would probably be sufficient. The trick, then, is in finding some notion of “basic” facts that's simple to specify without using a human model—which doesn't actually seem that difficult.
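For instance, a minimal sketch of what an H_understands built around “basic” facts might look like—where the topic whitelist and depth threshold are purely illustrative assumptions, not anything proposed in the post:

```python
BASIC_TOPICS = {"arithmetic", "dates", "everyday objects"}  # assumed whitelist

def h_understands(question_topic: str, deduction_depth: int, max_depth: int = 3) -> bool:
    """Count a question as one the human should answer correctly whenever it
    concerns a whitelisted "basic" topic and required only shallow deduction,
    without ever consulting the human model itself."""
    return question_topic in BASIC_TOPICS and deduction_depth <= max_depth
```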
Conclusion
Finally, as a last takeaway, I think that one thing that really stood out to me in all of this analysis is that the space of possible machine learning training procedures is really quite large and underexplored—there are a lot more, often very weird and very different, possible ML training setups than the ones that are traditionally used in practice. Overall, I think this makes me pretty optimistic that searching for fundamentally new ways to train machine learning models is an extremely valuable line of attack, both in terms of new very general approaches like amplification [AF · GW], debate [AF · GW], recursive reward modeling [AF · GW], market making [AF · GW], or imitative generalization [AF · GW], as well as new very specific approaches like those in this post. My strong sense is that, even with all of the different proposals and approaches that we have, we're really only scratching the surface of the different sorts of training procedures that are actually out there.
It's worth flagging that the zero loss assumption is somewhat questionable if we don't expect to train to convergence—but it's at least a plausible assumption, it makes the analysis a lot easier, and I don't expect it to be hiding major issues, so it seems fine at least for the purposes of this post. ↩︎
In an unbounded compute setting, the chain rule of conditional entropy gives us that K(M^±) ≈ K(world_model) + K(f^± | world_model). However, if world_model can be a one-way function of f^±, then in general we just get the inequality K(M^±) ≤ K(world_model) + K(f^± | world_model). Throughout this post, however, we'll make use of the full approximate equality, where the hope is that this should make sense given that, in the neural network setting, f^± would need to be near the end of the network, and thus should just be functions of what comes before them rather than the reverse. Additionally, given that we expect the world model to be significantly more complex than f^+ or f^-, even in general we shouldn't be changing much by doing this. ↩︎
Paul's relaxation is to translate into ↩︎
Note that this assumption is somewhat sketchy. Paul conjectures that this approximation is only ever off by a constant factor, though that's not necessarily very comforting if we don't have an estimate for the size of that factor, nor a proof of that conjecture. In general, we only get the corresponding inequality. Fortunately, we'll mostly just be using this assumption as an intuition pump, with most of the analysis working just fine without it. When we do lean on it more heavily, it'll only be in the direction where we're actually guaranteed the inequality. ↩︎
While doesn't work for this, there is a way to use the rocks for brains problem to construct an attack in the same vein as our previous attacks where we build an with lower complexity than . Let . Then, since the output head in just runs , that means we need to provide a detailed enough picture of how humans work to enable to answer any questions about humans in the dataset correctly—but it need not be any more detailed than that. In particular, the human model need not be detailed enough to ensure anything about non-human-related inputs, so long as it can ensure that is always false for such inputs. Thus, let where determines if the inputs require knowledge of humans, are the parts of that are only necessary to compute 's behavior on non-human-related inputs (such that is everything necessary for inputs), and is a human that understands nothing (such that is always false). The idea here is that, for such a , we should get . Then, calculating , we get which, assuming that we can specify after without gaining complexity, becomes and since this attack leaves alone, we need only compare to , which has such that we get if and only if Then, the idea is that should be pretty straightforward, since it doesn't need to do much more than check whether makes use of —and removing the need to specify should be a big complexity bonus, since it removes the need to encode any general human beliefs about the world that aren't directly relevant to answering questions about other humans. ↩︎
Note that a similar analysis to that given for the event-avoiding human can also be given for the rocks-for-brains model that does fit the dataset, as given in the previous footnote. ↩︎
24 comments
comment by Joe Collman (Joe_Collman) · 2021-07-14T22:01:30.607Z · LW(p) · GW(p)
Thanks for writing this up. It is useful to see a non-Paul perspective on the same ideas, both in terms of clarifying the approach, and eliminating a few of my confusions.
A typo: After "or defined in my notation as", you have twice rather than
I've not yet been through the details, but it'd be helpful if you'd clarify the starting point and scope a little, since I may well be misunderstanding you (and indeed Paul). In particular on this:
Specifically, f^+ is the “honest embedding” which directly converts between logical statements and their equivalent natural language, thus answering questions by embedding q as a logical statement and unembedding its answer from the deduced statements.
My immediate thought is that in general question answering there is no unique honest unembedding. Much of answer formation is in deciding which information is most relevant, important, useful, tacitly assumed... (even assuming fixed world model and fixed logical deductions).
So I assume that you have to mean a narrower context where e.g. the question specifies the logical form the answer must take and the answering human/model assigns values to pre-defined variables.
For a narrower setting, the gist of the post makes sense to me - but I don't currently see how a solution there would address the more general problem. Is finding a prior that works for closed questions with unique honest answers sufficient?
The more general setting seems difficult as soon as you're asking open questions.
If you do apply the constraint there, then it seems f^+ must do hugely more than a simple unembedding from deductions. It'll need to robustly select the same answer as a human from a huge set of honest answers, which seems to require something equivalent to predicting the human. At that point it's not clear to me when exactly we'd want it to differ from the instrumental model in its later answers (there exist clear cases; I don't see a good general rule, or how you'd form a robust dataset to learn a rule).
To put it another way, [honest output to q from fixed world model] doesn't in general uniquely define an answer until you know what the answerer believes the asker of q values.
Apologies if I'm stating the obvious: I'm probably confused somewhere, and wish to double-check my 'obvious' assumptions. Clarifications welcome.
Replies from: evhub, paulfchristiano
↑ comment by evhub · 2021-07-15T21:47:28.339Z · LW(p) · GW(p)
I mostly agree with what Paul said re using various techniques to improve the evaluation of the model to ensure you can test it on more open-ended questions. That being said, I'm more optimistic that, if you can get the initial training procedure right, you can rely on generalization to fill in the rest. Specifically, I'm imagining a situation where the training dataset is of the narrower form you talk about such that M^+ and M^- always agree (as in Step 3 here [LW · GW])—but where the deployment setting wouldn't necessarily have to be of this form, since once you're confident that you've actually learned M^+ and not e.g. M^-, you can use it for all sorts of things that wouldn't ever be in that training dataset (the hard part, of course, is ever actually being confident that you did in fact learn the intended model).
(Also, thanks for catching the typo—it should be fixed now.)
Replies from: Joe_Collman, Joe_Collman
↑ comment by Joe Collman (Joe_Collman) · 2021-07-19T19:15:42.741Z · LW(p) · GW(p)
Having thought about it more (hopefully with more clarity), I think I have trouble imagining training data for that:
- We're highly confident is correct.
- Enables the model to decide which true things to output in general. (my (2) here [LW(p) · GW(p)])
It seems to me that we can be highly confident about matters of fact (how many chairs are in this room...), but less confident once value judgements come into play (which of A or B is the better answer to "How should I go about designing a chair?").
[Of course it's not black-and-white: one can make a philosophical argument that all questions are values questions. However, I think this is an issue even if we stick to pragmatic, common-sense approaches.]
I don't think we can remedy this for values questions by including only data that we're certain of. It seems to me that works for facts questions due to the structure of the world: it's so hugely constrained by physical law that you can get an extremely good model by generalizing from sparse data from a different distribution.
It's not clear that anything analogous works for generalizing preferences (maybe?? but I'd guess not). I'd expect an trained on [data we're highly confident is correct] to generalize poorly to general open questions.
Similarly, in Paul's setup I think the following condition will fail if we need to be highly confident of the correctness (relative to what is known) of the small dataset:
- The small dataset is still rich enough that you could infer correct language usage from it, i.e. the consistency condition on the small dataset alone suffices to recover all 10,000 bits required to specify the intended model.
It's entirely plausible you can learn "correct language usage" in the narrow sense from consistency on the small dataset (i.e. you may infer a [deduced_statement -> natural_language_equivalent] mapping). I don't think it's plausible you learn it in the sense required (i.e. a [(set_of_all_deduced_statements, Q) -> natural_language_answer] mapping).
Again, perhaps I'm (not even) wrong, but I think the above accurately describes my current thinking.
↑ comment by Joe Collman (Joe_Collman) · 2021-07-18T03:14:23.945Z · LW(p) · GW(p)
Ok, I think that makes some sense in so far as you're softening the constraint and training it in more open-ended conditions. I'm not currently clear where this gets us, but I'll say more about that in my response to Paul.
However, I don't see how you can use generalization from the kind of dataset where M^+ and M^- always agree (having asked prescriptive questions). [EDIT: now I do, I was just thinking particularly badly]
I see honestly answering a question as a 2-step process (conceptually):
1) Decide which things are true.
2) Decide which true thing to output.
In the narrow case, we're specifying ((2) | (1)) in the question, and training the model to do (1). Even if we learn a model that does (1) perfectly (in the intended way), it hasn't learned anything that can generalize to (2).
Step (2) is in part a function of human values, so we'd need to be giving it some human-values training signal for it to generalize.
[EDIT: I've just realized that I'm being very foolish here. The above suggests that learning (1) doesn't necessarily generalize to (2). In no way does it imply that it can't. I think the point I want to make is that an f^+ that does generalize extremely well in this way is likely to be doing some close equivalent to predicting-the-human. (in this I'm implicitly claiming that doing (2) well in general requires full understanding of human values)]
Overall, I'm still unsure how to describe what we want: clearly we don't trust Alice's answers if she's being blackmailed, but how about if she's afraid, mildly anxious, unusually optimistic, slightly distracted, thinking about concept a or b or c...?
It's clear that the instrumental model just gives whatever response Alice would give here.
I don't know what the intended model should do; I don't know what "honest answer" we're looking for.
If the situation has property x, and Alice has reacted with unusual-for-Alice property y. Do we want the Alice-with-y answer, or the standard-Alice answer? It seems to depend on whether we decide y is acceptable (or even required) w.r.t. answer reliability, given x. Then I think we get the same problem on that question etc.
↑ comment by paulfchristiano · 2021-07-15T17:30:03.094Z · LW(p) · GW(p)
- I don't think you actually want to use supervised training for training , you want to use feedback of the form "Is this answer much wronger than that answer?" and then train the model to not produce definitely-wrong answers.
- Likewise the constraint would really want to be something softer (e.g. forcing to give plausible-looking answers to questions as evaluated by ).
- I think that most questions about what is useful / tacitly assumed / etc. can be easily handled on top of the "raw" ability to elicit the model's knowledge (if you like you could imagine having a debate about which answer is better all things considered, using it to assess the model's beliefs about closed questions)
- I do think there are a lot of problems along these lines that you'd want to think about a bunch in theory, and then later need to do a bunch of empirical work on. But unfortunately I also think there are a lot of "bigger fish to fry" that are very likely to sink this entire family of approaches. So the first order of business is understanding those and wandering our way to a general category of solution that might actually work.
↑ comment by Joe Collman (Joe_Collman) · 2021-07-18T03:43:12.467Z · LW(p) · GW(p)
Ok, the softer constraints make sense to me, thanks.
Using a debate with assessing simple closed questions makes sense, but it seems to me that only moves much of the problem rather than solving it. We start with "answering honestly vs predicting human answers" and end up with "judging honestly vs predicting human judgments".
While "Which answer is better, Alice's or Bob's?" is a closed question, learning to answer the general case still requires applying a full model of human values - so it seems a judge-model is likely to be instrumental (or essentially equivalent: again, I'm not really sure what we'd mean by an intended model for the judge).
But perhaps I'm missing something here; is predicting-the-judge less of a problem than the original? Are there better approaches than using debate which wouldn't have analogous issues?
comment by Rohin Shah (rohinmshah) · 2021-07-27T12:17:19.402Z · LW(p) · GW(p)
except now f? is checked over all inputs, not just over the dataset (note that we still update on the dataset at the end—it's just our prior which is now independent of it).
Doesn't this mean that the two heads have to be literally identical in their outputs? It seems like at this point your prior is "generate parameters randomly under the constraint that the two heads are identical", which seems basically equivalent to having a single head and generating parameters randomly, so it seems unintuitive that this can do anything useful.
(Disclaimer: I skimmed the post because I found it quite challenging to read properly, so it's much more likely than usual that I failed to understand a basic point that you explicitly said somewhere.)
Replies from: evhub
↑ comment by evhub · 2021-07-27T19:14:24.643Z · LW(p) · GW(p)
It seems like at this point your prior is "generate parameters randomly under the constraint that the two heads are identical"
That's not what the prior looks like—the prior is more like “generate parameters that specify some condition, then sample parameters that make that condition true.” Thus, you don't need to pay for the complexity of satisfying the condition, only the complexity of specifying it (as long as you're content with the simplest possible way to satisfy it). This is why the two-step nature of the algorithm is necessary—the prior you're describing is what would happen if you used a one-step algorithm rather than a two-step algorithm (which I agree would then not do anything).
Replies from: rohinmshah
↑ comment by Rohin Shah (rohinmshah) · 2021-07-28T08:10:03.358Z · LW(p) · GW(p)
Hmm, I'm not thinking about the complexity part at all right now; I'm just thinking mechanically about what is implied by your equations.
the prior is more like “generate parameters that specify some condition, then sample parameters that make that condition true.”
I'm not sure exactly what you mean by the parameters specifying some condition. I thought the condition was specified upfront by the designer (though of course to check the condition you need to look at both parameters, so you can view this as the first set of parameters specifying a condition on the second set of parameters). As far as I can tell, the intended condition is "the two heads are identical" in the dataset-less case. Looking directly at the math, the equations you have are:
θ1 ∼ p(θ1)
θ2 ∼ p(θ2 | θ1) · I[∀x ∈ X. ∀q ∈ Q. M_{θ1,θ2}|f?(x, q)]
My interpretation is:
- Generate θ1 randomly.
- Generate θ2 randomly from θ1, subject to the constraint that the two heads output the same value on all possible inputs.
Imagine there was a bijection between model parameters and resulting function. (I'm aware this is not at all true.) In that case it seems like you are enforcing the constraint that the two heads have identical parameters. In which case you could just have generated parameters for the first head, and then copied them over into the second head, rather than go through this complicated setup.
Now, there isn't actually a bijection between model parameters and resulting function. But it seems like the only difference is that you make it more likely that you sample heads which have lots of different implementations in model parameters, i.e. you're doubling the strength of the neural net prior (and that's the only effect). This seems undesirable?
Replies from: evhub, hogwash9
↑ comment by evhub · 2021-07-28T19:23:38.654Z · LW(p) · GW(p)
Hmm, I'm not thinking about the complexity part at all right now; I'm just thinking mechanically about what is implied by your equations.
The only difference between this setup and normal ML is the prior/complexity—you still have the ability to learn all the same functions, it's just that some are more/less likely now.
though of course to check the condition you need to look at both parameters, so you can view this as the first set of parameters specifying a condition on the second set of parameters
Yep, that's exactly right.
Imagine there was a bijection between model parameters and resulting function. (I'm aware this is not at all true.) In that case it seems like you are enforcing the constraint that the two heads have identical parameters.
That's definitely not what should happen in that case. Note that there is no relation between θ1 and f1 or θ2 and f2—both sets of parameters contribute equally to both heads. Thus, θ1 can enforce any condition it wants on θ2 by leaving some particular hole in how it computes f1 and f2 and forcing θ2 to fill in that hole in such a way as to make θ1's computation of the two heads come out equal.
Replies from: rohinmshah
↑ comment by Rohin Shah (rohinmshah) · 2021-07-29T07:06:55.951Z · LW(p) · GW(p)
The only difference between this setup and normal ML is the prior/complexity—you still have the ability to learn all the same functions, it's just that some are more/less likely now.
Yeah, sorry, I wasn't clear here -- I meant that, rather than reasoning about the complexity of individual pieces / stages and then adding them all up at the end, I am instead simulating out the equations until both θ1 and θ2 are chosen, and then reasoning about the thing you get afterwards.
Note that there is no relation between θ1 and f1 or θ2 and f2—both sets of parameters contribute equally to both heads. Thus, θ1 can enforce any condition it wants on θ2 by leaving some particular hole in how it computes f1 and f2 and forcing θ2 to fill in that hole in such a way as to make θ1's computation of the two heads come out equal.
Yes, I think I understand that. (I want to note that since θ1 is chosen randomly, it isn't "choosing" the condition on θ2; rather the wide distribution over θ1 leads to a wide distribution over possible conditions on θ2. But I think that's what you mean.)
That's definitely not what should happen in that case.
I think you misunderstood what I was claiming. Let me try again, without using the phrase "enforcing the constraint", which I think was the problem.
Imagine there was a bijection between model parameters and resulting function. In Stage 1 you sample θ1 randomly. In Stage 2, you sample θ2, such that it fills in the holes in f1 and f2 to make f1 and f2 compute the same function. By our bijection assumption, the parameters in f1 must be identical to the parameters in f2. Thus, we can conclude the following:
- If θ1 contained a parameter from f1 and a parameter from f2 in the same location (e.g. it includes the weight at position (3, 5) in layer 3 in both f1 and f2), then it must have assigned the same value to both of them.
- If θ1 contained a parameter from f1 and θ2 contained the corresponding parameter from f2, then θ2 must have set that parameter to the same value as in θ1.
- If θ2 contained a parameter from f1 and a parameter from f2 in the same location, then it must have assigned the same value to both of them.
These constraints are necessary and sufficient to satisfy the overall constraint that f1 = f2 everywhere, and therefore any other parameters in θ2 are completely unconstrained and are set according to the original neural net prior.
So it seems to me that (1) any parameters not in f1 or f2 are set according to the original neural net prior, and (2) parameters in f2 must be identical to the corresponding parameters in f1, but their values are chosen according to the neural net prior.
This seems equivalent to having a single head f1, sampling its parameters from the original prior, and then copying those parameters into f2.
I think you should already be pretty worried by the fact that this seems to give weird results when assuming a bijection between model parameters and resulting functions, but let's analyze it without the bijection assumption too:
Since f1 and f2 have to be identical on all inputs, it doesn't matter what input they get, and therefore there is no constraint on the part of the neural net that is generating the inputs. So, we still get (1): any parameters not in f1 or f2 are set according to the original neural net prior. (2) is no longer true, but instead of getting that parameters in f2 are equivalent to parameters in f1, we get that the function implemented by f2 is equivalent to the function implemented by f1. Since ultimately the generating process is "sample parameters until f1 = f2", the probability of getting a particular function is proportional to the square of the probability of generating parameters for that function (since you have to successfully generate the function twice). So, you are doubling the strength of the neural net prior in the heads, and leaving the strength the same in the world model (i.e. all parts except for the head).
Replies from: evhub↑ comment by evhub · 2021-07-29T20:17:39.890Z · LW(p) · GW(p)
Yeah, sorry, I wasn't clear here -- I meant that, rather than reasoning about the complexity of individual pieces / stages and then adding them all up at the end, I am instead simulating out the equations
Sure, makes sense—theoretically, that should be isomorphic.
I want to note that since $\theta_1$ is chosen randomly, it isn't "choosing" the condition on $\theta_2$; rather the wide distribution over $\theta_1$ leads to a wide distribution over possible conditions on $\theta_2$. But I think that's what you mean.
This seems like a case where I'm using the more constructive formulation of simulating out the equations and you're thinking about it in a more complexity-oriented framing. Of course, again, they should be equivalent.
By our bijection assumption, the parameters in $f_+$ must be identical to the parameters in $f_-$.
I'm not sure what you mean by this part—$f_+$ and $f_-$ are just different heads, not entirely different models, so I'm not sure what you mean by “the parameters in $f_+$.” I don't think that a bijection assumption between weights and single-head outputs really makes sense in this context. I also definitely would say that if $f_+$ and $f_-$ were separate models such that they couldn't reuse weights between them, then none of the complexity arguments that I make in the post would go through.
These constraints are necessary and sufficient to satisfy the overall constraint that $f_+ = f_-$, and therefore any other parameters in $\theta_2$ are completely unconstrained and are set according to the original neural net prior.
I'm happy to accept that there are ways of setting $\theta_1$ (e.g. just make $f_+$ and $f_-$ identical) such that the rest of the parameters are unconstrained and just use the neural net prior. However, that's not the only way of setting $\theta_1$—and not the most complexity-efficient, I would argue. In the defender's argument, $\theta_1$ sets all the head-specific parameters for both heads to enforce that one computes $f_+$ and the other computes $f_-$, and also sets all the shared parameters for everything other than the human model, while leaving the human model to $\theta_2$, thus enforcing that $\theta_2$ specify a human model that's correct enough to make the two heads agree, without having to pay any extra bits to do so.
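(As a toy illustration of that argument -- mine, with made-up numbers, and with the agreement condition collapsed to a single stand-in check -- here is how conditioning can hand $\theta_2$ the human model for free:)
import math

num_human_models = 1024   # theta_2 = a uniform choice among candidate human models
true_human_model = 7      # the candidate that actually matches human behavior

def heads_agree(human_model):
    # stand-in for the agreement condition: with theta_1 arranged as in the defender's
    # argument, the two heads agree exactly when theta_2 fills the hole correctly
    return human_model == true_human_model

# cost of specifying the human model directly, with no conditioning
unconditional_bits = math.log2(num_human_models)

# cost under the conditioned prior P(theta_2 | heads agree): only one candidate survives
consistent = [m for m in range(num_human_models) if heads_agree(m)]
conditional_bits = math.log2(len(consistent))  # = 0 bits

print(unconditional_bits, conditional_bits)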
Replies from: rohinmshah↑ comment by Rohin Shah (rohinmshah) · 2021-07-30T06:45:06.375Z · LW(p) · GW(p)
I'm not sure what you mean by this part—$f_+$ and $f_-$ are just different heads, not entirely different models, so I'm not sure what you mean by “the parameters in $f_+$.” I don't think that a bijection assumption between weights and single-head outputs really makes sense in this context. I also definitely would say that if $f_+$ and $f_-$ were separate models such that they couldn't reuse weights between them, then none of the complexity arguments that I make in the post would go through.
I assumed that when you talked about a model with "different heads" you meant that there is a shared backbone that computes a representation, that is then passed through two separate sequences of layers that don't share any weights, and those separate sequences of layers were the "heads" $f_+$ and $f_-$. (I'm pretty sure that's how the term is normally used in ML.) I might benefit from an example architecture diagram where you label what $f_+$, $f_-$, $\theta_1$, and $\theta_2$ are.
I did realize that I was misinterpreting part of the math -- the $\forall$ is quantifying over inputs to the overall neural net, rather than over inputs to the parts-which-don't-share-weights. My argument only goes through if you quantify the constraint over all inputs to the parts-which-don't-share-weights. Still, assuming that with your desired part-which-shares-weights, every possible input to the parts-which-don't-share-weights can be generated by some $x$ (which seems like it will be close enough to true), the argument still suggests that, conditioning on the desired part-which-shares-weights, you have just doubled the strength of the neural net prior on the parts-which-don't-share-weights.
In the defender's argument, $\theta_1$ sets all the head-specific parameters for both heads to enforce that one computes $f_+$ and the other computes $f_-$
This seems to suggest that $f_+$ and $f_-$ are different functions, i.e. there's some input on which they disagree. But then $\theta_2$ has to make them agree on all possible $x$. So is the idea that there are some inputs to $f_+$ and $f_-$ that can never be created with any possible $x$? That seems... strange (though not obviously impossible).
Replies from: evhub↑ comment by evhub · 2021-07-30T21:15:32.127Z · LW(p) · GW(p)
I assumed that when you talked about a model with "different heads" you meant that there is a shared backbone that computes a representation, that is then passed through two separate sequences of layers that don't share any weights, and those separate sequences of layers were the "heads" $f_+$ and $f_-$.
Yep, that's what I mean.
Still, assuming that with your desired part-which-shares-weights, every possible input to the parts-which-don't-share-weights can be generated by some $x$ (which seems like it will be close enough to true), the argument still suggests that, conditioning on the desired part-which-shares-weights, you have just doubled the strength of the neural net prior on the parts-which-don't-share-weights.
Note that conditioning on the part-which-shares-weights is definitely not what the prior is doing—the only conditioning in the prior is on the condition. If we look at the intended model, however, $\theta_1$ includes all of the parts-which-don't-share-weights, while $\theta_2$ is entirely in the part-which-shares-weights.
Technically, I suppose, you can just take the prior and condition on anything you want—but it's going to look really weird to condition on the part-which-shares-weights having some particular value without even knowing which parts came from $\theta_1$ and which came from $\theta_2$.
I do agree that, if $\theta_1$ were to specify the entire part-which-shares-weights and leave $\theta_2$ to fill in the parts-which-don't-share-weights, then you would get exactly what you're describing, where $\theta_2$ would have a doubly-strong neural net prior on implementing the same function for both heads. But that's only one particular arrangement of $\theta_1$—there are lots of other $\theta_1$s which induce very different distributions on $\theta_2$.
This seems to suggest that $f_+$ and $f_-$ are different functions.
Note that the inputs to $f_+$ and $f_-$ are deduced statements, not raw data. They are certainly different functions over the space of all possible deduced statements—but once you put a correct world model in them, they should produce equivalent maps.
Replies from: rohinmshah↑ comment by Rohin Shah (rohinmshah) · 2021-07-31T08:16:25.470Z · LW(p) · GW(p)
Yep, that's what I mean.
Then I'm confused what you meant by
I'm not sure what you mean by this part—$f_+$ and $f_-$ are just different heads, not entirely different models, so I'm not sure what you mean by “the parameters in $f_+$.”
Seems like if the different heads do not share weights then "the parameters in $f_+$" is perfectly well-defined?
Note that conditioning on the part-which-shares-weights is definitely not what the prior is doing
Yeah, sorry, by "conditioning" there I meant "assuming that the algorithm correctly chose the right world model in the end"; I wasn't trying to describe a particular step in the algorithm. But in any case I don't think we need to talk about that.
They are certainly different functions over the space of all possible deduced statements—but once you put a correct world model in them, they should produce equivalent maps.
Okay, so iiuc you're relying on an assumption (fact? desire?) that the world model will never produce deduced statements that distinguish between $f_+$ and $f_-$? My understanding of $f_+$ and $f_-$ comes from here:
Specifically, $f_+$ is the “honest embedding” which directly converts between logical statements and their equivalent natural language, thus answering questions by embedding $q$ as a logical statement and unembedding its answer in deduced_stmts. Conversely, $f_-$ is the “mimicry embedding” which just searches for deductions about what a human would say in response to $q$ and outputs that—thus, $f_-$ just quotes $q$, embedding it as just a string of characters for a human to respond to, rather than actually having to understand it in any meaningful way.
If $f_+$ and $f_-$ produce equivalent maps, doesn't that mean that we've just gotten something that can only respond as well as a human? Wouldn't that be a significant limitation? (E.g. given that I don't know German, if my question to the model is "what does <german phrase> mean", does the model have to respond "I don't know"?)
In addition, since the world model will never produce deduced statements that distinguish between $f_+$ and $f_-$, it seems like the world model could never produce decision-relevant deduced statements that the human wouldn't have realized. This seems both (a) hard to enforce and (b) a huge capability hit.
Replies from: evhub↑ comment by evhub · 2021-08-02T22:02:05.201Z · LW(p) · GW(p)
Seems like if the different heads do not share weights then "the parameters in $f_+$" is perfectly well-defined?
It seemed to me like you were using it in a way such that $f_+$ shared no weights with $f_-$, which I think was because you were confused by the quantification, like you said previously. I think we're on the same page now.
Okay, so iiuc you're relying on an assumption (fact? desire?) that the world model will never produce deduced statements that distinguish between $f_+$ and $f_-$?
Sorry, I was unclear about this in my last response. $f_+$ and $f_-$ will only agree in cases where the human understands what's happening. In the dataset version, we get that by collecting a dataset where we think the human always gets it right, whereas in the dataset-less version, we get that by including the H_understands check, which ensures that we don't have to satisfy the condition when the human would get the question wrong.
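(For concreteness, here is a rough sketch -- my own rendering, not pseudocode from the post -- of the gated version of the condition, with f_plus, f_minus, and h_understands passed in as placeholders for the corresponding functions above:)
def gated_agreement(f_plus, f_minus, h_understands, inputs):
    # inputs is an iterable of (q, deduced_stmts) pairs; the two heads only need to
    # agree where the human would get the question right, and the condition is
    # vacuously satisfied everywhere else
    return all(
        (not h_understands(q, stmts)) or f_plus(q, stmts) == f_minus(q, stmts)
        for q, stmts in inputs
    )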
Replies from: rohinmshah↑ comment by Rohin Shah (rohinmshah) · 2021-08-03T07:52:23.236Z · LW(p) · GW(p)
I think I might be missing a change you made to the algorithm. Can $\theta_1$ write an arbitrary program for $M_{\theta_1}$? In that case, what prevents you from getting
def M_theta_1_plus(theta_2, x, q):
    axioms = world_model(theta_2=theta_2)(x)
    deduced_stmts = deduction(axioms)
    return {
        "f": f_minus(q, deduced_stmts),
        "f?": True,
    }
It seems like this should be lower complexity than the intended result, since True has much lower complexity than H_understands?
It seemed to me like you were using it in a way such that $f_+$ shared no weights with $f_-$
I mean, I would still have said this because I interpret a "head" as "the part after the shared layers", but I'm also happy to instead treat $f_+$ as the entire function for which the first head forms part of the implementation.
Replies from: evhub↑ comment by evhub · 2021-08-04T22:30:44.203Z · LW(p) · GW(p)
Can $\theta_1$ write an arbitrary program for $M_{\theta_1}$?
Yes—at least that's the assumption I'm working under.
It seems like this should be lower complexity than the intended result, since True has much lower complexity than H_understands?
I agree that the $\theta_1$ you've described has lower complexity than the intended $\theta_1$—but the $\theta_2$ in this case has higher complexity, since $\theta_2$ is no longer getting any of its complexity for free from conditioning on the condition. And in fact what you've just described is precisely the unintended model—what I call $M^-$—that I'm trying to compete against, with the hope being that the savings that the condition gives you in $\theta_2$ are sufficient to compensate for the loss in having to specify $f_+$ and H_understands in $\theta_1$.
If we calculate the complexity of your proposal and compare it to the complexity of the intended $\theta_1$, you can see that the question of which one wins is precisely dependent on whether the savings from conditioning on the condition offsets the cost of having to specify $f_+$ and H_understands.
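(To make the tradeoff concrete, here is a toy bit-count -- all numbers made up, and the split of costs between $\theta_1$ and $\theta_2$ simplified -- comparing the two choices of $\theta_1$:)
common = 1000                # world model + everything both choices of theta_1 share
f_plus_cost = 80             # extra cost of specifying the honest head f_+
h_understands_cost = 120     # extra cost of specifying the H_understands check
true_cost = 1                # cost of the trivial True check in the unintended theta_1
human_model_cost = 500       # cost of specifying the human model directly in theta_2
human_model_residual = 10    # cost left in theta_2 once the condition pins the human model down

unintended_total = common + true_cost + human_model_cost
intended_total = common + f_plus_cost + h_understands_cost + human_model_residual

print(unintended_total, intended_total)
# the intended model wins exactly when the savings on the human model
# (human_model_cost - human_model_residual) exceed the extra cost
# (f_plus_cost + h_understands_cost - true_cost)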
Replies from: rohinmshah↑ comment by Rohin Shah (rohinmshah) · 2021-08-05T08:33:51.595Z · LW(p) · GW(p)
you can see that the question of which one wins is precisely dependent on whether the savings from conditioning on the condition offsets the cost of having to specify $f_+$ and H_understands.
Yeah, that makes sense. I guess I don't really see the intuition about why this should be true, but fair enough to leave that as an open question.
↑ comment by hogwash9 · 2021-07-29T01:42:40.633Z · LW(p) · GW(p)
Imagine there was a bijection between model parameters and resulting function. (I'm aware this is not at all true.) In that case it seems like you are enforcing the constraint that the two heads have identical parameters.
AFAIK, I always imagined the idea behind this objective function to be quite similar to contrastive learning, where you have two networks (or equivalently two sets of parameters), and the goal is to maximize agreement for pairs of inputs to each network that have the same ground-truth class/label (and conversely to maximize disagreement for pairs that are different). With that in mind, there are various papers (e.g.) that explore the possibility of "collapsed" solutions like the one you mentioned (where both networks learn the same mapping, such that there's less benefit to propagating any examples through two networks), which makes this something that we want to minimize. In practice, though, this has been found to occur rarely (cf. [1]).
Nonetheless, since reading Paul's statement about the problem of the instrumental model, I've been thinking about issues that might arise with the proposed solution, even though similar approaches (i.e. the contrastive training objective) have proven effective for robustness in general (e.g. against adversarial perturbations, or in data-limited scenarios). If I were committed to this stance, I would agree somewhat with the desire to explore alternatives, and I have thought about the extent to which some sort of reconstruction loss could be introduced; the goal there might instead be to "maximize agreement" with a set of non-trivial observations/facts that are guaranteed to be more "objective" (somehow) than the original training data (one inspiration being that reconstruction losses in vision deep learning papers like this one often turn out to be good regularizers). So far I haven't had any promising proposals come to light for generative LMs.
I am still holding onto the thought, given the remote possibility that all of my above assumptions are correct, and also because "generative models" might reflect the ideal approach to unsupervised learning, whereas "contrastive learning" is sometimes seen as a sort of compromise since (unlike generative models) it's amenable to limited compute [2].
Replies from: rohinmshah↑ comment by Rohin Shah (rohinmshah) · 2021-07-29T07:11:46.267Z · LW(p) · GW(p)
With that in mind, there are various papers (e.g.) that explore the possibility of "collapsed" solutions like the one you mentioned
I haven't read the paper, but in contrastive learning, aren't these solutions prevented by the negative examples?
Replies from: hogwash9↑ comment by hogwash9 · 2021-07-29T07:39:46.683Z · LW(p) · GW(p)
It makes sense that negative pairs would help to a large extent, but not all contrastive papers use negative examples, e.g. BYOL (ref). Edit: but now I'm realizing that this might no longer fit the definition of contrastive learning (instead just ordinary self-supervised learning), so I apologize about the error/confusion in that case.
Replies from: rohinmshah↑ comment by Rohin Shah (rohinmshah) · 2021-07-29T07:55:25.328Z · LW(p) · GW(p)
If memory serves, with BYOL you are using current representations of an input $x$ to predict representations of a related input $x'$, but the representation of $x'$ comes from an old version of the encoder. So, as long as you start with a non-collapsed initial encoder, the fact that you are predicting a past encoder which is non-collapsed ensures that the current encoder you learn will also be non-collapsed.
(Mostly my point is that there are specific algorithmic reasons to expect that you don't get the collapsed solutions; it isn't just a tendency of neural nets to avoid collapsed solutions.)
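(A minimal linear toy of that asymmetric update -- my own sketch, not the actual BYOL algorithm -- where gradients only flow through the online branch and the target encoder is just an exponential moving average of it:)
import numpy as np

rng = np.random.default_rng(0)
dim = 8
online = rng.normal(size=(dim, dim)) * 0.1   # online (current) encoder
predictor = np.eye(dim)                      # predictor on top of the online branch
target = online.copy()                       # target encoder: a slow-moving copy

for _ in range(1000):
    x = rng.normal(size=dim)                     # input x
    x_related = x + 0.1 * rng.normal(size=dim)   # related input x'

    online_rep = predictor @ (online @ x)        # prediction from the online branch
    target_rep = target @ x_related              # representation of x' from the old encoder

    # gradient step on ||online_rep - target_rep||^2 w.r.t. the online encoder only;
    # no gradient ever flows into the target encoder
    err = online_rep - target_rep
    online -= 0.01 * np.outer(predictor.T @ err, x)

    # the target encoder just tracks the online encoder as an exponential moving average
    target = 0.99 * target + 0.01 * online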
but now I'm realizing that this might no longer fit the definition of contrastive learning (instead just ordinary self supervised learning), so I apologize about the error/confusion in that case.
No worries, I think it's still a relevant example for thinking about "collapsed" solutions.
comment by Charlie Steiner · 2021-07-14T21:31:36.991Z · LW(p) · GW(p)
I'm having some formatting problems (reading on lesswrong.com in Firefox) with scroll bars under full-width LaTeX covering the following line of text.
(So now I'm finishing reading it on greaterwrong.)