Intuitions about solving hard problems
post by Richard_Ngo (ricraz) · 2022-04-25T15:29:04.253Z · LW · GW · 23 comments
Solving hard scientific problems usually requires compelling insights
Here’s a heuristic which plays an important role in my reasoning about solving hard scientific problems: that when you’ve made an important breakthrough, you should be able to explain the key insight(s) behind that breakthrough in an intuitively compelling way. By “intuitively compelling” I don’t mean “listeners should be easily persuaded that the idea solves the problem”, but instead: “listeners should be easily persuaded that this is the type of idea which, if true, would constitute a big insight”.
The best examples are probably from Einstein: time being relative, and gravity being equivalent to acceleration, are both insights in this category. The same for Malthus and Darwin and Gödel; the same for Galileo and Newton and Shannon.
Another angle on this heuristic comes from Scott Aaronson’s list of signs that a claimed P≠NP proof is wrong. In particular, see:
#6: the paper lacks a coherent overview, clearly explaining how and why it overcomes the barriers that foiled previous attempts.
And #1: the author can’t immediately explain why the proof fails for 2SAT, XOR-SAT, or other slight variants of NP-complete problems that are known to be in P.
I read these as Aaronson claiming that a successful solution to this very hard problem is likely to contain big insights that can be clearly explained.
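To make the 2-SAT point concrete: unlike general SAT, 2-SAT can be decided in polynomial time by building an implication graph and checking strongly connected components, which is exactly the kind of structural difference a real proof would have to engage with. Below is a minimal sketch (my own illustration, not from Aaronson's post):

```python
# Minimal 2-SAT solver: build the implication graph, find SCCs (Kosaraju),
# and check that no variable shares a component with its own negation.
from collections import defaultdict

def solve_2sat(n_vars, clauses):
    """clauses: list of (a, b) pairs; literal +v means variable v, -v its negation."""
    def idx(lit):              # map a literal to a node index
        return 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)

    def neg(node):             # node index of the negated literal
        return node ^ 1

    graph, rgraph = defaultdict(list), defaultdict(list)
    for a, b in clauses:       # (a or b)  ==  (not a -> b) and (not b -> a)
        graph[neg(idx(a))].append(idx(b))
        graph[neg(idx(b))].append(idx(a))
        rgraph[idx(b)].append(neg(idx(a)))
        rgraph[idx(a)].append(neg(idx(b)))

    n_nodes = 2 * n_vars
    visited, order = [False] * n_nodes, []

    def dfs1(u):               # first pass: record finish order
        visited[u] = True
        for v in graph[u]:
            if not visited[v]:
                dfs1(v)
        order.append(u)

    for u in range(n_nodes):
        if not visited[u]:
            dfs1(u)

    comp = [-1] * n_nodes

    def dfs2(u, label):        # second pass on the reversed graph: label SCCs
        comp[u] = label
        for v in rgraph[u]:
            if comp[v] == -1:
                dfs2(v, label)

    label = 0
    for u in reversed(order):
        if comp[u] == -1:
            dfs2(u, label)
            label += 1

    # Unsatisfiable iff some variable ends up in the same SCC as its negation.
    return all(comp[2 * v] != comp[2 * v + 1] for v in range(n_vars))

print(solve_2sat(3, [(1, 2), (-1, 2), (-2, 3)]))   # True: x2 = x3 = True works
print(solve_2sat(3, [(2, 2), (3, 3), (-2, -3)]))   # False: forces x2 and x3 both True and not both
```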
Perhaps the best counterexample is the invention of Turing machines. Even after Turing explained the whole construction, it seems reasonable to still be uncertain whether there's actually something interesting there, or whether he's just presented you with a complicated mess. I think that uncertainty would be particularly reasonable if we imagine trying to understand the formalism before Turing figures out how to implement any nontrivial algorithm (like prime factorisation) on a Turing machine, or how to prove any theorems about universal Turing machines.
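To illustrate how opaque the construction can look before you see it run, here is a toy sketch (mine, not Turing's original formalism) of a Turing machine as a rule table plus a tape, with a small machine that increments a binary number:

```python
# A machine is a table of (state, symbol) -> (symbol to write, move, next state)
# rules acting on a tape. The example machine increments a binary number.

def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Increment: walk right to the end, then turn carries (1 -> 0) until a 0 or blank.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run_tm(increment, "1011"))   # -> "1100"
print(run_tm(increment, "111"))    # -> "1000"
```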
Other counterexamples might include quantum mechanics, where quantization was originally seen as a hack to make the equations work; or formal logic, where I’m not sure if there were any big insights that could be grasped in advance of actually seeing the formalisms in action.
Using the compelling insight heuristic to evaluate alignment research directions
It’s possible that alignment will in practice end up being more of an engineering problem than a scientific problem like the ones I described above. E.g. perhaps we're in a world where, with sufficient caution about scaling up existing algorithms, we'll produce aligned AIs capable of solving the full version of the problem for us.[1] But suppose we're trying to produce a fully scalable solution ourselves; are there existing insights which might be sufficient for that? Here are some candidates, which I’ll only discuss very briefly, and plan to discuss in more detail in a forthcoming post (I’d also welcome suggestions for any I've missed):
- “Trustworthy imitation of human external behavior would avert many default dooms as they manifest in external behavior unlike human behavior.”
- This is Eliezer’s description [AF · GW] of the core insight behind Paul’s imitative amplification proposal. I find this somewhat compelling, but less so than I used to, since I’ve realized that the line between imitation learning and reinforcement learning is blurrier than I used to think (e.g. see this or this).
- Decomposing supervision of complex tasks allows better human oversight.
- Again, I’ve found this less compelling over time - in this case because I’ve realized that decomposition is the “default” approach we follow whenever we evaluate things, and so the real “work” of the insight needs to be in describing how we’ll decompose tasks, which I don’t think we’ve made much progress on (with techniques like cross-examination being possible exceptions).
- Weight-sharing makes deception much harder.
- I think this is the main argument pushing me towards optimism about ELK; thanks to Ajeya for articulating it to me.
- Uncertainty about human preferences makes agents corrigible.
- This is Stuart Russell’s claim about why assistance games are a good potential solution to alignment; I basically don’t buy it at all, for the same reasons as Yudkowsky (but kudos to Stuart for stating the proposed insight clearly enough that the disagreement is obvious).
- Myopic agents can be capable while lacking incentives for long-term misbehavior.
- This claim seems to drive a bunch of Evan Hubinger’s work, but I don’t buy it. In order for an agent’s behavior to be competent over long time horizons, it needs to be doing some kind of cognition aimed towards long time horizons, and we don’t know how to stop that cognition from being goal-directed.
- Problems that arise in limited-data regimes (e.g. inner misalignment) go away when you have methods of procedurally generating realistic data (e.g. separable world-models).
- This claim was made to me by Michael Cohen. It’s interesting, but I don’t think it solves the core alignment problem, because we don’t understand cognition well enough to efficiently factor out world-models from policies. E.g. training a world-model to predict observations step-by-step seems like it loses out on all the benefits of thinking in terms of abstractions; whereas training it just on long-term predictive accuracy makes the intermediate computations uninterpretable and therefore unusable.
- By default we’ll train models to perform bounded tasks of bounded scope, and then achieve more complex tasks by combining them.
- This seems like the core claim motivating Eric Drexler’s CAIS framework. I think it dramatically underrates the importance of general intelligence, and the returns to scaling up single models, for reasons I explain further here [AF · GW].
- Functional decision theory.
- I don’t think this directly addresses the alignment problem, but it feels like the level of insight I’m looking for, in a related domain.
Note that I do think each of these claims gestures towards interesting research possibilities which might move the needle in worlds where the alignment problem is easy. But I don’t think any of them are sufficiently powerful insights to scalably solve the hard version of the alignment problem. Why do many of the smart people listed above think otherwise? I think it’s because they’re not accounting properly for the sense in which the alignment problem is an adversarial one: that by default, optimization which pushes towards general intelligence will also push towards misalignment, and we'll need to do something unusual to be confident we're separating them. In other words, the set of insights about consequentialism and optimization which made us worry about the alignment problem in the first place (along with closely-related insights like the orthogonality thesis and instrumental convergence) are sufficiently high-level, and sufficiently robust, that unless you're guided by other powerful insights you're less likely to find exceptions to those principles, and more likely to find proposals where you can no longer spot the flaw.
This claim is very counterintuitive from a ML perspective, where loosely-directed exploration of new algorithms often leads to measurable improvements. I don’t know how to persuade people that applying this approach to alignment leads to proposals which are deceptively appealing, except by getting them to analyze each of the proposals above until they convince themselves that the purported insights are insufficient to solve the problem. Unfortunately, this is very time-consuming. To save effort, I’d like to promote a norm for proposals for alignment techniques to be very explicit about where the hard work is done, i.e. which part is surprising or insightful or novel enough to make us think that it could solve alignment even in worlds where that’s quite difficult. Or, alternatively, if the proposal is only aimed at worlds where the problem is relatively easy, please tell me that explicitly. E.g. I spent quite a while being confused about which part of the ELK agenda was meant to do the hard work of solving ontology identification; after asking Paul about it, though, his main response was “maybe ontology identification is easy”, which I feel very skeptical about.[2] (He also suggested something to do with the structure of explanations as a potential solving-ELK-level-insight; but I don’t understand this well enough to discuss it in detail.)
Using the compelling insight heuristic to generate alignment research directions
If we need more major insights about intelligence/consequentialism/goals to solve alignment, how might we get them? Getting more evidence from seeing more advanced systems will make this much easier, so one strategy is just to keep pushing on empirical and engineering work, while keeping an eye out for novel phenomena which might point us in the right directions.[3]
But for those who want to try to generate those insights directly, some tentative options:
- Studying how existing large models think, and trying to extrapolate from that
- Understanding human minds well enough to identify insights about human intelligence (e.g. things like predictive coding, or multi-agent models of minds, or dual process theory) which can be applied to alignment
- Understanding how groups think (e.g. how task decomposition occurs in cultural evolution, or in corporations, or…)
- Agent foundations research
Of course, the whole point of major insights is that it’s hard to predict them; so I’d be excited about others pursuing potential insights that don’t fall into any of these categories (with the caveat that, as the work gets increasingly abstract, it’s necessary to be increasingly careful for it to have any chance of succeeding).
- ^
I model Eliezer as agreeing with most of the claims I make in this post, but strongly disagreeing with this sentence, because he thinks that the core problem is so hard that no amount of prosaic engineering effort could plausibly prevent catastrophe in the absence of major novel insights.
- ^
Some brief intuitions about why: I think the hardest part of human cognition is generating and merging different ontologies. Thinking “within” an ontology is like doing normal research in a scientific field; reasoning about different ontologies is like doing philosophy, or doing paradigm-breaking research, and so it seems like a particularly difficult thing to generate a training signal for.
- ^
Thanks to Nathan Helm-Burger for reminding me of this, with his comment.
23 comments
comment by Quintin Pope (quintin-pope) · 2022-04-25T20:33:59.816Z · LW(p) · GW(p)
Hmm. I suppose a similar key insight for my own line of research might go like:
The orthogonality thesis is actually wrong for brain-like learning systems. Such systems first learn many shallow proxies for their reward signal. Moreover, the circuits implementing these proxies are self-preserving optimization demons [AF · GW]. They’ll steer the learning process away from the true data generating process behind the reward signal so as to ensure their own perpetuation.
If true, this insight matters a lot for value alignment because it points to a way that aligned behavior in the infra-human regime could perpetuate into the superhuman regime. If all of:
- We can instil aligned behavior in the infra-human regime
- The circuits that implement aligned behavior in the infra-human regime can ensure their own perpetuation into the superhuman regime
- The circuits that implement aligned behavior in the infra-human regime continue to implement it in the superhuman regime
hold true, then I think we’re in a pretty good position regarding value alignment. Off-switch corrigibility is a bust though because self-preserving circuits won’t want to let you turn them off.
If you’re interested in some of the actual arguments for this thesis, you can read my answer [LW · GW] to a question about the relation between human reward circuitry and human values.
↑ comment by Richard_Ngo (ricraz) · 2022-04-25T21:37:30.381Z · LW(p) · GW(p)
I think this is very interesting, and closely related to a line of thinking I've been pursuing; stay tuned for a forthcoming post which talks about the development of shallow proxies (although I'm not thinking of it as a particularly strong reason for optimism).
comment by John Schulman (john-schulman) · 2022-04-26T03:14:40.974Z · LW(p) · GW(p)
Weight-sharing makes deception much harder.
Could you explain or provide a reference for this?
↑ comment by Johannes Treutlein (Johannes_Treutlein) · 2022-06-08T21:53:51.701Z · LW(p) · GW(p)
I'd also be curious about this!
↑ comment by Johannes Treutlein (Johannes_Treutlein) · 2022-06-22T23:41:17.467Z · LW(p) · GW(p)
I find this particularly curious since naively, one would assume that weight sharing implicitly implements a simplicity prior, so it should make optimization more likely and thus also deceptive behavior? Maybe the argument is that somehow weight sharing leaves less wiggle room for obscuring one's reasoning process, making a potential optimizer more interpretable? But the hidden states and tied weights could still be encoding deceptive reasoning in an uninterpretable way?
comment by adamShimi · 2022-04-25T20:20:43.648Z · LW(p) · GW(p)
I like that you're proposing an explicit heuristic inspired by the history of science for judging research directions and approaches, and acknowledge that it leads to conclusions that are counterintuitive to my Richard-model (pushing for Agent foundations, for example), so you're not just retrofitting your own conclusion AFAIK. I also like that you're applying it to object-level directions in alignment — that's something I'm working on at the moment for my own research, based on your pushback.
That being said, my prediction/retrodiction is that this is too strong a criterion, for reasons already discussed in this post [AF · GW]. Basically I expect that for most if not all of the great scientific solutions you mention, if you back up enough (sometimes you don't need to back up that far), you will find a step, an idea, an insight that proved crucial down the line but didn't look like the right type. Even in the post there's a sort of weird double standard where you implicitly discuss Darwin and Einstein after they have matured their theories, whereas you talk about Turing before he proves a non-trivial result or designs a non-trivial algorithm. The extension of my prediction here is that during the long process these thinkers (and other examples) took to arrive at their insights, they built on models and ideas that revealed bits of evidence but were janky, incorrect, and eventually used as scaffolding then thrown away.
Note that this is a rather empirical prediction about the history of science, and that I'm curious about any counterexamples you or anybody else might find to it.
Another issue I see is that often the insights redefined the rules of the game. Galileo, for example (in the Feyerabend interpretation at least), redefines "rest state" and "movement" to be consistent with a moving earth. You could say that these are insights that are compelling, but from the perspective of that time in history, it looks more like changing the rules of the game (of what a theory of the stars has to deal with) by changing the natural interpretations [LW · GW] associated with it.
↑ comment by carboniferous_umbraculum (Spencer Becker-Kahn) · 2022-04-26T09:26:12.755Z · LW(p) · GW(p)
I broadly agree with Richard's main point, but I also do agree with this comment in the sense that I am not confident that the example of Turing compared with e.g. Einstein is completely fair/accurate.
One thing I would say in response to your comment, Adam, is that I don't usually see the message of your linked post as being incompatible with Richard's main point. I think one usually does have or does need productive mistakes that don't necessarily or obviously look like they are robust partial progress. But still, often when there actually is a breakthrough, I think it can be important to look for this "intuitively compelling" explanation. So one thing I have in mind is that I think it's usually good to be skeptical if a claimed breakthrough seems to just 'fall out' of a bunch of partial work without there being a compelling explanation after the fact.
↑ comment by adamShimi · 2022-04-26T09:47:50.119Z · LW(p) · GW(p)
Thanks for the answer.
One thing I would say in response to your comment, Adam, is that I don't usually see the message of your linked post as being incompatible with Richard's main point. I think one usually does have or does need productive mistakes that don't necessarily or obviously look like they are robust partial progress. But still, often when there actually is a breakthrough, I think it can be important to look for this "intuitively compelling" explanation. So one thing I have in mind is that I think it's usually good to be skeptical if a claimed breakthrough seems to just 'fall out' of a bunch of partial work without there being a compelling explanation after the fact.
Hum, I'm not sure I'm following your point. Do you mean that you can have both productive mistakes and intuitively compelling explanations when the final (or even an intermediary) breakthrough is reached? Then I totally agree. My point was more that if you only use Richard's heuristic, I expect you to not reach the breakthrough, because you would have killed in the bud many productive mistakes that actually led the way there.
There's also a very Kuhnian thing here that I didn't really mention in my previous comment (except in the Galileo part): the compellingness of an answer is often stronger after the fact, when you work in the paradigm that it led to. That's another aspect of productive mistakes or even breakthroughs: they don't necessarily look right or predict more from the start, and evaluating their consequences is not necessarily obvious.
↑ comment by Richard_Ngo (ricraz) · 2022-04-26T10:09:38.879Z · LW(p) · GW(p)
I like this pushback, and I'm a fan of productive mistakes. I'll have a think about how to rephrase to make that clearer. Maybe there's just a communication problem, where it's hard to tell the difference between people claiming "I have an insight (or proto-insight) which will plausibly be big enough to solve the alignment problem", versus "I have very little traction on the alignment problem but this direction is the best thing I've got". If the only effect of my post is to make a bunch of people say "oh yeah, I meant the second thing all along", then I'd be pretty happy with that.
Why do I care about this? It has uncomfortable tinges of status regulation, but I think it's important because there are so many people reading about this research online, and trying to find a way into the field, and often putting the people already in the field on some kind of intellectual pedestal. Stating clearly the key insights of a given approach, and their epistemic status, will save them a whole bunch of time. E.g. it took me ages to work through my thoughts on myopia [AF · GW] in response to Evan's posts on it, whereas if I'd known it hinged on some version of the insight I mentioned in this post, I would have immediately known why I disagreed with it.
As an example of (I claim) doing this right, see the disclaimer on my "shaping safer goals [? · GW]" sequence: "Note that all of the techniques I propose here are speculative brainstorming; I'm not confident in any of them as research directions, although I'd be excited to see further exploration along these lines." Although maybe I should make this even more prominent.
Lastly, I don't think I'm actually comparing Darwin and Einstein's mature theories to Turing's incomplete theory. As I understand it, their big insights required months or years of further work before developing into mature theories (in Darwin's case, literally decades).
↑ comment by adamShimi · 2022-04-26T11:26:03.694Z · LW(p) · GW(p)
I like this pushback, and I'm a fan of productive mistakes. I'll have a think about how to rephrase to make that clearer. Maybe there's just a communication problem, where it's hard to tell the difference between people claiming "I have an insight (or proto-insight) which will plausibly be big enough to solve the alignment problem", versus "I have very little traction on the alignment problem but this direction is the best thing I've got". If the only effect of my post is to make a bunch of people say "oh yeah, I meant the second thing all along", then I'd be pretty happy with that.
When phrased like that, I agree with you. I am personally relatively suspicious of claims by a bunch of people to have found a path to alignment, but actually excited by some of their productive mistakes (as discussed a bit in my post).
I also fully agree that I want people to use the second, and my "history of alignment" research direction aims at concretely teasing out the productive mistakes and revealed bits of evidence without falling for the "this is obviously a solution" or "this is obviously not a solution and thus useless".
Why do I care about this? It has uncomfortable tinges of status regulation, but I think it's important because there are so many people reading about this research online, and trying to find a way into the field, and often putting the people already in the field on some kind of intellectual pedestal. Stating clearly the key insights of a given approach, and their epistemic status, will save them a whole bunch of time. E.g. it took me ages to work through my thoughts on myopia [AF · GW] in response to Evan's posts on it, whereas if I'd known it hinged on some version of the insight I mentioned in this post, I would have immediately known why I disagreed with it.
+1000. And teasing out more generally the assumptions, the insights, the new parts of works and approaches is, I think, super necessary and on my research agenda. That's also part of the reason why I feel asking newcomers to be distillers [AF · GW] is not necessarily a great idea: good distillation of the type we're discussing requires IMO quite a deep understanding of the landscape, the problem and the underlying ideas. Otherwise you at best get a decent summary, and we need more.
As an example of (I claim) doing this right, see the disclaimer on my "shaping safer goals [? · GW]" sequence: "Note that all of the techniques I propose here are speculative brainstorming; I'm not confident in any of them as research directions, although I'd be excited to see further exploration along these lines." Although maybe I should make this even more prominent.
Haven't reread your sequence in quite some time, but I think the value of such an exploratory sequence is to make clearer the intuitions underlying the direction, even if they haven't yet led to productive mistakes. So I like your disclaimer, but I think the even better way of doing this is to clarify, for different posts and ideas, what intuitions you're building on and where the current formalisms/descriptions/analogies are failing to capture them.
Lastly, I don't think I'm actually comparing Darwin and Einstein's mature theories to Turing's incomplete theory. As I understand it, their big insights required months or years of further work before developing into mature theories (in Darwin's case, literally decades).
This might also be a bit of miscommunication, but I felt like your discussion of Turing could also have applied especially in Darwin's case, where the initial insight required a lot of additional pieces and clarification to make a clean and ordered theory that you can actually defend. Generally I was pointing at the risk of hindsight bias, where the fact that the insight is clean and powerful once the full theory is known and considered didn't mean it was so compelling at the time it was thought of. (Which is also a general empirical claim about the history of scientific progress, to explore ;) )
↑ comment by carboniferous_umbraculum (Spencer Becker-Kahn) · 2022-04-26T11:26:54.334Z · LW(p) · GW(p)
Yes I think you understood me correctly. In which case I think we more or less agree in the sense that I also think it may not be productive to use Richard's heuristic as a criterion for which research directions to actually pursue.
comment by iivonen · 2022-04-26T14:33:49.565Z · LW(p) · GW(p)
One thing I'm interested in, but don't know where to start looking for, is people who are working instead on the reverse direction - mathematical approaches which show aligned AI is not possible or likely. By this I mean formal work that suggests something like "almost all AGIs are unsafe", in the same way that the chance of picking a rational number at random from the reals is zero because almost all real numbers are irrational.
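For readers unfamiliar with the analogy: the "probability zero" claim is the standard fact that the rationals are countable and hence have Lebesgue measure zero. A quick sketch:

```latex
% Cover the n-th rational in [0,1] with an interval of length eps/2^n.
\begin{align*}
\mathbb{Q} \cap [0,1] = \{q_1, q_2, \dots\}, \qquad
\lambda\big(\mathbb{Q} \cap [0,1]\big)
  \;\le\; \sum_{n=1}^{\infty} \frac{\varepsilon}{2^{n}} \;=\; \varepsilon
  \quad \text{for every } \varepsilon > 0, \\
\text{hence } \lambda\big(\mathbb{Q} \cap [0,1]\big) = 0
  \quad\text{and}\quad
  \Pr_{X \sim \mathrm{Unif}[0,1]}\big[X \in \mathbb{Q}\big] = 0.
\end{align*}
```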
I don't say this to be a downer! I mean it in the sense of a mathematician who spent 7 years attempting to prove X exists, and then sits down one day and spends 4 hours proving why X cannot exist. Progress can take surprising forms!
↑ comment by anonymousaisafety · 2022-04-26T16:49:39.243Z · LW(p) · GW(p)
I have been working on an argument from that angle.
I've been developing it independently from my own background in autonomous safety-critical hardware/software systems, but I discovered recently that it's very similar to Drexler's CAIS from 2019, except with more focus on low-level evidence or rationale for why certain claims are justified.
It isn't so much a pure mathematical approach as it is a systems engineering or systems safety perspective on all of the problems[1][2] that would remain even if someone showed up tomorrow and dropped a formally verified algorithm describing an "aligned AGI" onto my desk, and what ramifications that has for the development of AGI at all. The only complicated math in it so far is about computational complexity classes and relatively simple if-then logic for analyzing risk vectors.
I guess if I had to pick the "key" insight that I claim I can contribute, and share it now, it would be this:
If you define super-human performance in terms of some abstract thing called "intelligence" (or "general intelligence"), you run into a philosophical question: "what is intelligence?"
There is an answer accepted by this community that intelligence is "efficient cross-domain optimization".
From this answer, it follows that "general intelligence" is a prerequisite, so we can only rank solutions on tasks by evaluating them in the context of "general intelligence". In this way, the community can dismiss super-human performance on a specific task if that solution is not immediately generalizable to other tasks.
This also dismisses solutions that can be generalized to other tasks by taking a known algorithm for training a specific solution and deploying that algorithm on a specific task. The latter solution might be something like DeepMind's research into AlphaGo, then AlphaZero, then AlphaFold, and then Ithaca. That would seem to demonstrate a repeatable engineering process that can be deployed to a specific task and develop a solution with super-human performance on that task without that specific solution generalizing to other tasks.
... [text omitted]
We've achieved super-human or "human" performance in Go, Chess, protein folding, image recognition, language recognition, art generation, code generation, translation, and many other fields using AI/ML systems that do not, in any way, demonstrate "general intelligence".
... [text omitted]
Another way to think about this is to ask if what we call "general intelligence" is ultimately an inefficient algorithm for solving problems, despite the earlier claim that the definition of intelligence was "efficient cross-domain optimization".
I.e.: what if you can always solve problems faster and more efficiently by deploying AI/ML algorithms without "general intelligence", than a hypothetical "general intelligence" algorithm would be able to do, even if that algorithm was deployed to specialized hardware?
The closest I've seen to someone posing this question was Peter Watts' sci-fi novel Blindsight [11], but that was more focused on the idea of "consciousness" vs "general intelligence".
In a world where the algorithm that we'd recognize as "general intelligence" is fundamentally inefficient, we'd see seemingly remarkable and unexplained gains on AI/ML systems across a variety of unrelated problem domains where not one of those AI/ML systems has a capability we'd recognize as "general intelligence".
If you've read CAIS, you might recognize the above argument, where it was worded as:
In particular, taking human learning as a model for machine learning has encouraged the conflation of intelligence-as-learning-capacity with intelligence-as-competence, while these aspects of intelligence are routinely and cleanly separated in AI system development: Learning algorithms are typically applied to train systems that do not themselves embody those algorithms. [CAIS 11.7]
When this idea was proposed in 2019, it seems to me like it was criticized because people didn't see how task-focused AI/ML systems could keep improving and eventually surpass human performance without somehow developing "general intelligence" along the way, plus a general skepticism that there would be rational reasons to not "just" staple every single hypothetical task together inside a system and call it AGI. I really think it's worth looking at this again in light of the last 3 years and asking if that criticism was justified.
- ^
In systems safety, we're concerned with the safety of a larger system than the usual "product-focused" mindset. It is not enough for there to be a proof that a hypothetical product as-designed is safe. We also need to look at the likelihood of:
- design failures (the formal proof was wrong because the verification of it had a bug, there is no formal proof, the "formally verified" proof was actually checked by humans and not by an automated theorem prover)
- manufacturing failures (hardware behavior out-of-spec, missed errata, power failures, bad ICs, or other failure of components)
- implementation failures (software bugs, compiler bugs, differences between an idealized system in a proof vs the implementation of that system in some runtime or with some language)
- verification failures (bugs in tests that resulted in a false claim that the software met the formal spec)
- environment or runtime failures (e.g. radiation-induced upsets like bit flips; Does the system use voting? Is the RAM using ECC? What about the processor itself?)
- usage failures (is the product still safe if it's misused? what type of training or compliance might be required? is maintenance needed? is there some type of warning or lockout on the device itself if it is not actively maintained?)
- process failures ("normalization of deviance")
- ^
For each of these failure modes, we then look at the worst-case magnitude of that failure. Does the failure result in non-functional behavior, or does it result in erroneous behavior? Can erroneous behavior be detected? By what? Etc. This type of review is called an FMEA. This review process can rule out designs that "seem good on paper" if there's sufficient likelihood of failures and inability to mitigate them to our desired risk tolerances outside of just the design itself, especially if there exist other solutions in the same design space that do not have similar flaws.
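For readers unfamiliar with FMEA, here is a generic illustration of how such a review is often scored (the failure modes and ratings below are hypothetical, not taken from this comment): each failure mode gets severity, occurrence, and detection ratings, and their product (the risk priority number, RPN) ranks what to mitigate first.

```python
# Generic FMEA-style scoring sketch; modes and numbers are illustrative only.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1 = negligible effect, 10 = catastrophic
    occurrence: int  # 1 = very unlikely, 10 = near certain
    detection: int   # 1 = almost always caught, 10 = almost never caught

    @property
    def rpn(self) -> int:
        # Risk priority number: higher means address sooner.
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("formal proof relied on a buggy verifier", 10, 3, 7),
    FailureMode("radiation-induced bit flip, no ECC RAM",    8, 4, 6),
    FailureMode("operator skips required maintenance",       6, 5, 4),
]

for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {m.rpn:4d}  {m.name}")
```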
comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2022-04-29T18:14:40.062Z · LW(p) · GW(p)
Weight-sharing makes deception much harder.
Can I read about that somewhere? Or could you briefly elaborate?
comment by Nathan Helm-Burger (nathan-helm-burger) · 2022-04-25T16:42:40.044Z · LW(p) · GW(p)
My hope for my personal work is that nibbling away at the mystery with 'prosaic engineering work' will make the problem clearer such that profound insights will be easier to generate. I think in science generally it is a good heuristic to follow that when no clear theory exists, and gathering more data is an option, then go gather the data. Also, use engineering to build better tools with which to gather new data.
↑ comment by Richard_Ngo (ricraz) · 2022-04-25T17:24:05.128Z · LW(p) · GW(p)
Oh yeah, I totally agree with this. Will edit into the piece.
comment by Eric Drexler · 2022-08-02T00:07:12.471Z · LW(p) · GW(p)
I’d like to promote a norm for proposals for alignment techniques to be very explicit about where the hard work is done, i.e. which part is surprising or insightful or novel enough to make us think that it could solve alignment even in worlds where that’s quite difficult.
Alignment is, by nature, an engineering task, not a scientific task: It is an attempt to make something, not to understand some existing thing. It may be that, as you suggest, “solving hard scientific problems usually requires compelling insights”, but this is beside the point. Spaceflight was a hard problem, but was solved without a special, compelling insight. Likewise for the progress of computation from vacuum tubes to nanoscale electronics. Both are in the domain of engineering, where problems are typically solved by improving and composing many components. Asking “which part solves the hard problem” would be a mistake.
Regarding the CAIS model, you suggest that it “dramatically underrates the importance of general intelligence”, yet I have argued that the comprehensive AI services model (including the service of developing new services) is a way of thinking about implementations of general intelligence, not a substitute for it!
The capabilities of large language models should update our expectations, but do not persuade me that knowledge and skills of societal scale and diversity must or will be embodied in an undifferentiated blob of computation.
By the way, I haven’t suggested the CAIS model as a solution to alignment problems; instead of proposing a solution, it suggests that alignment problems are likely to arise (and perhaps be solved) in a context different from what has often been assumed. Some problems seem more tractable in that context, others less.
comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2022-04-29T18:18:32.513Z · LW(p) · GW(p)
This is Eliezer’s description [LW · GW] of the core insight behind Paul’s imitative amplification proposal. I find this somewhat compelling, but less so than I used to, since I’ve realized that the line between imitation learning and reinforcement learning is blurrier than I used to think (e.g. see this or this).
I didn't understand what you mean by the line being blurrier... Is this a comment about what works in practice for imitation learning? Does a similar objection apply if we replace imitation learning with behavioral cloning?
comment by Evan R. Murphy · 2022-04-28T17:23:05.814Z · LW(p) · GW(p)
Overall I think this is a good post and very interesting, thanks.
I find this somewhat compelling, but less so than I used to, since I’ve realized that the line between imitation learning and reinforcement learning is blurrier than I used to think (e.g. see this or this).
So I checked out those links. Briefly looking at them, I can see what you mean about the line between RL and imitation learning being blurry. The first paper seems to show a version of RL which is basically imitation learning.
I'm confused because when you said this makes iterated amplification less compelling to you, I took that to mean it made you less optimistic about iterated amplification as a solution for alignment. But why would whether something is technically classified as imitation learning or a special kind of RL make a difference for its effectiveness?
Or did you mean not that you find it any less promising as an alignment proposal, but just that you now find the core insight less compelling/interesting because it's not as major an innovation over the idea of RL as you had thought it was?
comment by Oleg S. · 2022-04-27T06:00:12.500Z · LW(p) · GW(p)
I don’t know too much about alignment research, but what surprises me most is the lack of discussion of two points:
- For alignment to work, its theory should not only tell humans how to create an aligned super-human AGI, but also tell that AGI how to self-improve without destroying its own values. Otherwise, how does a paperclip optimizer which is marginally smarter than a human make sure that its next iteration will still care about paperclips? A good alignment theory should work across all intelligence levels.
- What are the practical implications of alignment research in a world where AGI is hard? Imagine we have a good alignment theory but do not have AGI. I would assume that the theory could be used to manipulate existing superintelligent systems such as science, the deep state, or the stock market. The reverse of this is: does alignment research have any results which can be practically used right now?
comment by sanxiyn · 2022-04-25T18:07:43.892Z · LW(p) · GW(p)
Since you specifically mentioned Gödel: what was Gödel's insight?
↑ comment by Algon · 2022-04-25T19:43:15.923Z · LW(p) · GW(p)
EDIT: I am talking about first-order logic here.
DOUBLE EDIT: I didn't actually describe the insight required in proving the completeness theorem and compactness theorem. He used the former to prove the latter (because all proofs are finite, and if something is inconsistent, it can't have any models, so every statement must be provable). I don't know what his key insight was for the compactness theorem, as I've only proven it via the completeness theorem.
That logic could be arithmetized, i.e. you could use the simplest arithmetic system to mimic logical deduction, and hence could use the logic of the simplest arithmetic system to talk about itself, letting you get all those nasty self-reference paradoxes. Doing that was highly non-trivial. Of course, he also proved a bunch of other important theorems like the completeness theorem: if every model of a set of axioms implies some statement T, then the set of axioms must prove that statement. And also the compactness theorem: a set of logical statements is consistent iff every finite subset is consistent.
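For reference, the two theorems mentioned, in their standard first-order formulations:

```latex
\begin{align*}
\textbf{Completeness:}\quad & \text{if } \Gamma \models \varphi \text{ then } \Gamma \vdash \varphi
  \quad\text{(every semantic consequence of } \Gamma \text{ is provable from } \Gamma\text{).}\\[4pt]
\textbf{Compactness:}\quad & \Gamma \text{ is satisfiable} \iff \text{every finite } \Gamma_0 \subseteq \Gamma \text{ is satisfiable.}
\end{align*}
```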