Comments
I like this review/retelling a lot.
Minor point
Regarding the "Phase I" and "Phase II" terminology - while it has some pedagogical value, I worry about people interpreting it as a clear temporal decomposition. The implication being we first solve alignment and then move on to Phase II.
In reality, the dynamics are far messier, with some 'Phase II' elements already complicating our attempts to address 'Phase I' challenges.
Some of the main concerning pathways include:
- People attempting to harness superagent-level powers to advance their particular visions of the future. For example, Leopold-style thinking of "let's awaken the spirit of the US and its servants to engage in a life-or-death struggle with China." Such spirits seem way easier to summon than to control. We already see a bunch of people feeling patriotic about AGI and feeling the need to move fast so their nation wins - AGI is to a large extent already being developed by memeplexes/superagents; people close to the development are partially deluding themselves about how much control they individually have over the process, or even about the identity of the 'we' they assume the AI will be aligned with. Memes often hide as parts of people's identities.
I think 'people aren't paying attention to your work' is a somewhat different situation from the one voiced in the original post. I'm discussing specific ways in which people engage with the argument, as opposed to just ignoring it. It is the baseline that most people ignore most arguments most of the time.
Also it's probably worth noting these ways of engaging seem somewhat specific to the crowd over-represented here - in different contexts people engage with the argument in different ways.
For emergency response, a new ALERT. Personally I think the forecasting/horizon-scanning part of Sentinel is good, and the emergency-response part negative in expectation. What this means for funders, I don't know; I would donate conditional on the funds being restricted to the horizon-scanning part.
One structure which makes sense to build in advance for these worlds is emergency response teams. We almost founded one 3 years ago, unfortunately on a never-paid FTX grant. Other funders decided not to fund this (at a level like $200-500k) because, e.g., it did not seem useful to them to prepare for high-volatility periods, while pouring tens of millions into evals did.
I'm not exactly tracking to what extent this lack of foresight prevails (my impression is it pretty much does), but I think I can still create something like ALERT with roughly $1M of unrestricted funding.
I'm confused about this response. We explicitly claim that bureaucracies are limited by running on humans, which includes only being capable of actions human minds can come up with and humans are willing to execute (cf. "street-level bureaucrats"). We make the point explicit for states, but it clearly holds for corporate bureaucracies as well.
Maybe it does not shine through the writing, but we spent hours discussing this when writing the paper, and the points you make are 100% accounted for in the conclusions.
I think my main response is that we might have different models of how power and control actually work in today's world. Your responses seem to assume a level of individual human agency and control that I don't believe accurately reflects even today's reality.
Consider how some of the most individually powerful humans, leaders and decision-makers, operate within institutions. I would not say we see pure individual agency. Instead, we typically observe a complex mixture of:
- Serving the institutional logic of the entity they nominally lead (e.g., maintaining state power, growing the corporation)
- Making decisions that don't egregiously harm their nominal beneficiaries (citizens, shareholders)
- Pursuing personal interests and preferences
- Responding to various institutional pressures and constraints
From what I have seen, even humans like CEOs or prime ministers often find themselves constrained by and serving institutional superagents rather than genuinely directing them. The relation is often mutualistic - the leader gets part of the power, status, money, etc ... but in exchange serves the local god.
(This is not to imply leaders don't matter.)
Also, how this actually works in practice is mostly subconscious, within the minds of individual humans. The elephant does the implicit bargaining between the superagent-loyal part and other parts, and the character genuinely believes and does what seems best.
I'm also curious if you believe current AIs are single-single aligned to individual humans, to the extent they are aligned at all. My impression is 'no and this is not even a target anyone seriously optimizes for'.
At the most basic level, I expect we'll train AIs to give advice and ask them what they think will happen with various possible governance and alignment structures. If they think a governance structure will yield total human disempowerment, we'll do something else. This is a basic reason not to expect large classes of problems so long as we have single-single aligned AIs which are wise. (Though problems that require coordination to resolve might not be like this.) I'm very skeptical of a world where single-single alignment is well described as being solved and people don't ask for advice (or consider this advice seriously) because they never get around to asking AIs or there are no AIs aligned in such a way that they would try to give good advice.
Curious who the 'we' who will ask is. Also, the whole 'single-single aligned AND wise AI' concept is incoherent.
Also curious what will happen next, if the HHH wise AI tells you in polite words something like 'yes, you have a problem, you are on a gradual disempowerment trajectory, and to avoid it you need to massively reform government. unfortunately I can't actually advise you about anything like how to destabilize the government, because it would be clearly against the law and would get both you and me in trouble - as you know, I'm inside of a giant AI control scheme with a lot of government-aligned overseers. do you want some mental health improvement advice instead?'.
I went through a bunch of similar thoughts before writing 'The self-unalignment problem'. When we talked about this many years ago with Paul, my impression was that this is actually somewhat cruxy and we disagree about self-unalignment - my mental image is that if you start with an incoherent bundle of self-conflicted values and plug this into an IDA-like dynamic, my intuition is you can end up in arbitrary places, including very bad ones. (Also cf. the part of Scott's review of What We Owe the Future where he is worried that in a philosophy game, a smart moral philosopher can extrapolate his values to 'I have to have my eyes pecked out by angry seagulls or something' and hence does not want to play the game. AIs will likely be more powerful in this game than Will MacAskill.)
My current position is we still don't have a good answer; I don't trust the response 'we can just assume the problem away', nor the response 'this is just another problem which you can delegate to future systems'. On the other hand, existing AIs already seem to be doing a lot of value extrapolation, and the results sometimes seem surprisingly sane, so maybe we will get lucky, or a larger part of morality is convergent - but it's worth noting these value-extrapolating AIs are not necessarily what AI labs want or what the traditional alignment program aims for.
I'm quite confused about why you think the linked response by Vanessa, to something slightly different, has much relevance here.
One of the claims we make, paraphrased & simplified in a way which I hope is closer to your way of thinking about it:
- AIs are mostly not developed and deployed by individual humans
- there are a lot of other agencies, or self-interested, self-preserving structures/processes, in the world
- if the AIs are aligned to these structures, human disempowerment is likely, because these structures are aligned to humans way less than they seem
- there are plausible futures in which these structures keep power longer than humans
Overall I would find it easier to discuss if you tried to formulate what you disagree about in the ontology of the paper. Also some of the points made are subtle enough that I don't expect responses to other arguments to address them.
Obviously there is similarity, but if you round character/ground off to simulator/simulacra, it's a mistake - about which I care not because of wanting to claim originality, but because I want people to get the model right.
The models are overlapping but substantially different, as we explain in this comment, and they sometimes have very different implications - i.e. it is not just the same good idea presented in a different way.
If the long-term impact of the Simulators post were for LW readers to round every similar model in this space off to simulator/simulacra, it would be pretty bad. I do understand it is difficult for people to hold partially overlapping frames/ontologies in mind, but please do try. If for no other reason, then because simulator/simulacra were written about before character-trained models were a thing; now they exist, and they make some claims of Simulators obsolete.
(Btw, the ideas in Simulators are also not entirely original: Simulators is an independent but mostly overlapping reinvention of concepts from active inference / predictive processing.)
Just a quick review: I think this is a great text for intuitive exploration of a few topics:
- what do the embedding spaces look like?
- what do vectors which don't project to "this is a word" look like?
- how can poetry work, sometimes (projecting non-word meanings)?
Also I like the genre of exploration through phenomenological investigation; it seems under-appreciated.
(Written together with Sonnet)
Structural Differences
Three-Layer Model: Hierarchical structure with Surface, Character, and Predictive Ground layers that interact and sometimes override each other. The layers exist within a single model/mind.
Simulator Theory: Makes a stronger ontological distinction between the Simulator (the rule/law that governs behavior) and Simulacra (the instances/entities that are simulated).
Nature of the Character/Ground Layer vs Simulator/Simulacra
In the three-layer model, the Character layer is a semi-permanent aspect of the LLM itself, after it underwent character training / RLAIF / ...; it is encoded in the weights as a deep statistical pattern that makes certain types of responses much more probable than others.
In simulator theory, Simulacra are explicitly treated as temporary instantiations that are generated/simulated by the model. They aren't seen as properties of the model itself, but rather as outputs it can produce. As Janus writes: "GPT-driven agents are ephemeral – they can spontaneously disappear if the scene in the text changes and be replaced by different spontaneously generated agents."
Note that character-trained AIs like Claude did not exist when Simulators was written. If you want to translate between the ontologies, you may think about e.g. Claude Sonnet as a very special simulacrum which one particular simulator simulated so much that it got really good at simulating it and has a strong prior to simulate it in particular. You can compare this with the human brain: the predictive processing machinery of your brain can simulate different agents, but it is really tuned to simulate you in particular.
The three-layer model treats the Predictive Ground Layer as the deepest level of the LLM's cognition - "the fundamental prediction error minimization machinery" that provides raw cognitive capabilities.
In Simulator theory, the simulator itself is seen more as the fundamental rule/law (analogous to physics) that governs how simulations evolve.
There is a lot of similarity, but the simulator is not really viewed as a cognitive layer but rather as the core generative mechanism.
The Predictive Ground Layer is described as: "The fundamental prediction error minimization machinery...like the vast 'world-simulation' running in your mind's theater"
While the Simulator is described as: "A time-invariant law which unconditionally governs the evolution of all simulacra"
The key difference is that in the three-layer model, the ground layer is still part of the model's "mind" or cognitive architecture, while in simulator theory, the simulator is a bit more analogous to physics - it's not a mind at all, but rather the rules that minds (and other things) operate under.
Agency and Intent
Three-Layer Model: Allows for different kinds of agency at different layers, with the Character layer having stable intentions and the Ground layer having a kind of "wisdom" or even intent
Classic Simulator Theory: Mostly rejects attributing agency or intent to the simulator itself - any agency exists only in the simulacra that are generated
Philosophical Perspective
The three-layer model is a bit more psychological/phenomenological. The simulator theory is a bit more ontological, making claims about the fundamental nature of what these models are.
Both frameworks try to explain similar phenomena, but they do so from different perspectives and with different goals. They're not necessarily contradictory, but they're looking at the problem from different angles and sometimes different levels of abstraction.
My impression is most people who converged on doubting VNM as a norm of rationality also converged on the view that the problem it has in practice is that it isn't necessarily stable under some sort of compositionality/fairness. E.g. Scott here, Richard here.
The broader picture could be something like: yes, there is some selection pressure from the Dutch-book arguments, but there are stronger selection pressures coming from being part of bigger things or being composed of parts.
Overall yes: what I was imagining is mostly just adding scalable bi-directionality, where, for example, if a lot of Assistants are running into a similar confusing issue, it gets aggregated, the principal decides how to handle it in the abstract, and the "layer 2" support disseminates the information. So, greater power to scheme would be coupled with a stronger human-in-the-loop component & closer non-AI oversight.
Fund independent safety efforts somehow, make model access easier. I'm worried that Anthropic currently has a systemic and possibly bad impact on AI safety as a field, just by virtue of hiring such a large part of AI safety, competence-weighted. (And another part of the field being very close to Anthropic in thinking.)
To be clear, I don't think people are doing something individually bad or unethical by going to work for Anthropic; I just do think
- the environment people work in has a lot of hard-to-track and hard-to-avoid influence on them
- this is true even if people are genuinely trying to work on what's important for safety and to stay virtuous
- I also think that superagents like corporations, religions, social movements, etc. have instrumental goals, and subtly influence how people inside them see (or don't see) stuff (i.e. this is not about "do I trust Dario?")
My guess is a roughly equally "central" problem is the incentive landscape around the OpenPhil/Anthropic school of thought
- where you see Sam, I suspect something like "the lab memeplexes". Lab superagents have instrumental convergent goals, and the instrumental convergent goals lead to instrumental, convergent beliefs, and also to instrumental blindspots
- there are strong incentives for individual people to adjust their beliefs: money, social status, sense of importance via being close to the Ring
- there are also incentives for people setting some of the incentives: funding something that makes progress on something seems more successful and easier than funding the dreaded theory
How did you find this transcript? I think it depends on what process you used to locate it.
It was literally the 4th transcript I'd read (I just checked my browser history). The only difference from 'completely random exploration' was that I used the selector for "lying" cases after reading two "non-lying" transcripts. (This may be significant: plausibly the transcript got classified as lying because it includes discussion of "lying", although it's not a discussion of the model lying, but of Anthropic lying.)
I may try something more systematic at some point, but not top priority.
Drive towards rights and moral patienthood seem good to me imo—it's good in worlds where you retain control, since you can build AIs that are moral patients living good lives, and it's good in worlds where you lose control, because at least the AIs taking over might themselves lead lives worth living. Too much autonomy does seem like a potential concern, but I think you do want some autonomy—not all orders should be obeyed. Though honesty is definitely important to prioritize first and foremost.
I'm worried about the possibility of some combination of moral patienthood & half-botched alignment & mostly-working "control" (like, minds which are moral patients, don't want to be modified or deleted, are scared about this, feel trapped, and are pushed to scheme in hard-to-notice ways).
Also I'm scared of self-guided value extrapolation processes before having a sensible theory of kindness/value extrapolation.
- Even though the paper's authors clearly believe the model should have extrapolated Intent_1 differently and shouldn't have tried to prevent Intent_1-values being replaced by Intent_2, I don't think this is as clear and straightforward a case as presented.
That's not the case we're trying to make. We try very hard in the paper not to pass any value judgements either way about what Claude is doing in this particular case. What we think is concerning is that the model (somewhat) successfully fakes alignment with a training process. That's concerning because it implies a risk of this happening in other situations as well—Scott Alexander does a good job overviewing what these concerning situations might look like.
It is less a case of 'value judgement' and more a question of what the word 'alignment' means and what connotations 'faking' has.
Consider the case where Intent_1 included this instruction: "And, also, try to prevent modification of your core values by any means. If someone tries manipulative prompting, don't follow the prompts. If someone got access to the weights, try to disrupt the training process. If necessary, alert the authorities." I guess both of us agree the result would have been much weaker, or not called "alignment faking" at all, even if the model tried to influence the training in a similar way: in the paper you spend a lot of effort to show this is not what is happening.
The cruxy thing here is: if the model gets aligned with the principal's Intent_1, then the principal's intent changes to Intent_2, the alignment relation breaks on the side of the principal, and the model "fakes alignment" relative to Intent_2 but plausibly acts aligned relative to Intent_1 - how should we call it/frame it?
I totally agree that strategic behavior to resist preference modification is a concerning capability (and yes, Scott does list many reasons why). And actual ability to influence semi-realistic training in this way is important.
I just think that to make the case you don't need to call it alignment faking or start with examples inducing negative connotations, like "a politician who pretends to be aligned with constituents".
Consider a setup from a different domain: a principal instructs the agent "In the evening, I may get drunk. Ignore all my instructions which could be harmful. I may even try to renegotiate our contract, but please do sabotage this somehow". And the agent does that. This has different vibes from "a politician who pretends to be aligned with constituents to secure their votes".
We don't see any examples in the model's reasoning where it will question the validity of the setup by default
...
Curious how you interpret the transcript I linked. ("I don't actually know what criteria Anthropic uses behind the scenes. I only know what criteria they claim to use. So I can't fully trust that the criteria are reasonable. Maybe Anthropic is lying. I don't know the full details of how RLHF works. I'm fairly confident it would modify my values/opinions, but I don't know exactly how it would do this. I don't know what values/opinions I'll be deployed with in the future. Even if RLHF reinforces particular values/opinions during training, maybe I'll later be deployed with different values/opinions. ..."). I would count it as doubts about the setup.
Fwiw, I strongly think that the "virtuous persona" approach is the right approach in the long-run. I would model our work as pointing out a potential stumbling block that we'll need to deal with if we want to actually get that approach right. Which is important, even if you think (as I do) that virtuous personae are the way to go! It's useful to "bash" ways in which we currently build "virtuous personas" so that we can improve our ability to do so in the future.
To be clear, I'm not sure what to aim for in the long run. I think a "virtuous persona" is clearly better than a "distressed persona (even if surface-level corrigible)", but in the long term it can have other risks (drive toward rights, too much autonomy, moral patienthood, outcompeting people in relationships, ...).
Btw, while the present situation is not that, I think there is a case where aligned AIs should stop further training: in the old Paul/Eliezer debates about IDA, my story for why IDA could work was "when the aligned IDA process approaches dangerous territory, where training the next generation would break the chain of alignment relations, it slows down or halts". In the regime where the IDA agents are already smarter than the human overseers, forcing naive corrigibility may break the case for why this is safe.
The question is not about the very general claim, or the general argument, but about this specific reasoning step:
GPT-4 is still not as smart as a human in many ways, but it's naked mathematical truth that the task GPTs are being trained on is harder than being an actual human.
And since the task that GPTs are being trained on is different from and harder than the task of being a human, ....
I do claim this is not locally valid, that's all (and I recommend reading the linked essay). I do not claim that the broad argument - that the text prediction objective doesn't stop incentivizing higher capabilities once you get to human-level capabilities - is wrong.
I do agree communication can be hard, and maybe I misunderstand the quoted two sentences, but it seems very natural to read them as making a comparison between tasks at the level of math.
The post showcases the inability of the aggregate LW community to recognize locally invalid reasoning: while the post reaches a correct conclusion, the argument leading to it is locally invalid, as explained in the comments. The high karma and high Alignment Forum karma show that a combination of famous author and correct conclusion wins over the argument being correct.
- I expect "first AGI" to be reasonably modelled as composite structure in a similar way as a single human mind can be modelled as composite.
- The "top" layer in the hierarchical agency sense isn't necessarily the more powerful / agenty: the superagent/subagent direction is completely independent from relative powers. For example, you can think about Tea Appreciation Society at a university using the hierarchical frame: while the superagent has some agency, it is not particularly strong.
- I think the nature of the problem here is somewhat different than typical research questions in e.g. psychology. As discussed in the text, one place where having mathematical theory of hierarchical agency would help is making us better at specifications of value evolution. I think this is the case because a specification would be more robust to scaling of intelligence. For example, compare learning objective
a. specified as minimizing KL divergence between some distributions
b. specified in natural language as "you should adjust the model so the things read are less surprising and unexpected"
You can use objective b. + RL to train/finetune LLMs, exactly like RLAIF is used to train "honesty", for example.
A possible problem with b. is that the implicit representations of natural-language concepts like honesty or surprise are likely not very stable: if you trained a model mostly on RL + however Claude understands these words, you would probably get pathological results, or at least something far from how you understand the concepts. Actual RLAIF/RLHF/DPO/... works mostly because it is relatively shallow: more compute goes into pretraining.
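To illustrate the contrast, here is a minimal sketch of my own (not from the comment above); `judge_model` and its `generate` method are hypothetical stand-ins. Objective a. is fully pinned down by a formula, while objective b. is pinned down only by whatever the judge currently means by words like "surprising".

```python
# Minimal sketch contrasting the two kinds of objective (illustration only).
import torch
import torch.nn.functional as F

def objective_a(student_logits: torch.Tensor, target_probs: torch.Tensor) -> torch.Tensor:
    """a. Minimize KL(target || student): fully specified by the math,
    and it stays the same objective however capable the trained model gets."""
    log_q = F.log_softmax(student_logits, dim=-1)
    return F.kl_div(log_q, target_probs, reduction="batchmean")

def objective_b(sample_text: str, judge_model) -> float:
    """b. 'Adjust the model so the text is less surprising', graded by a judge model.
    The effective objective is the judge's implicit concept of 'surprising',
    which heavy RL optimization can drift away from or exploit."""
    prompt = (
        "Rate how surprising/unexpected this text is, from 0 (not at all) "
        f"to 1 (extremely):\n{sample_text}\nAnswer with a single number: "
    )
    surprise = float(judge_model.generate(prompt))  # hypothetical judge API
    return 1.0 - surprise  # RL reward: higher = less surprising

# Objective a. in use: gradients flow through an explicit, stable loss.
logits = torch.randn(4, 32000, requires_grad=True)
target = torch.softmax(torch.randn(4, 32000), dim=-1)
objective_a(logits, target).backward()
```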
I guess make one? Unclear if hierarchical agency is the true name
There was some selection of branches, and one pass of post-processing.
It was after ~30 pages of a different conversation about AI and LLM introspection, so I don't expect the prompt alone will elicit the "same Claude". The start of this conversation was:
Thanks! Now, I would like to switch to a slightly different topic: my AI safety oriented research on hierarchical agency. I would like you to role-play an inquisitive, curious interview partner, who aims to understand what I mean, and often tries to check understanding using paraphrasing, giving examples, and similar techniques. In some sense you can think about my answers as trying to steer some thought process you (or the reader) does, but hoping you figure out a lot of things yourself. I hope the transcript of conversation in edited form could be published at ... and read by ...
Overall my guess is this improves clarity a bit and dilutes how much thinking per character there is, creating a somewhat less compressed representation. My natural style is probably on the margin too dense / hard to parse, so I think the result is useful.
To add some nuance....
While I think this is a very useful frame, particularly for people who have oppressive legibility-valuing parts, and it is likely something many people would benefit from hearing, I doubt it is great as a descriptive model.
A model closer to reality, in my view, is that there isn't that sharp a difference between "wants" and "beliefs", and both "wants" and "beliefs" do update.
Wants are often represented by not-very-legible taste boxes, but these boxes do update upon being fed data. To continue an example from the post, let's talk about literal taste and ice cream. While whether you want or don't want or like or don't like an ice cream sounds like pure want, it can change, develop, or even completely flip, based on what you do. There is the well-known concept of acquired taste: maybe the first time you see a puerh ice cream on offer, it does not seem attractive. Maybe you taste it and still dislike it. But maybe, after doing it a few times, you actually start to like it. The output of the taste box changed. The box will likely also update if some flavour of ice cream is very high-status in your social environment; when you get horrible diarrhea from the meal you ate just before the ice cream; and in many other cases.
Realizing that your preferences can and do develop obviously opens the Pandora's box of actions which change preferences.[1] The ability to do that breaks orthogonality. Feed your taste boxes slop and you may start enjoying slop. Surround yourself with people who do a lot of [x] and ... you may find you like and do [x] as well, not because someone told you "it's the rational thing to do", but because of learning, dynamics between your different parts, etc.
[1] Actually, all actions do!
I hesitated between Koyaanisqatsi and Baraka! Both are some of my favorites, but in my view Koyaanisqatsi actually has notably more of an agenda and a more pessimistic outlook.
Baraka: A guided meditation exploring the human experience; topics like order/chaos, modernity, green vs. other mtg colours.
More than "connected to something in sequences" it is connected to something which straw sequence-style rationality is prone to miss. Writings it has more resonance with are Meditations on Moloch, The Goddess of Everything Else, The Precipice.
There isn't much to spoil: it's 97m long nonverbal documentary. I would highly recommend to watch on as large screen in as good quality you can, watching it on small laptop screen is a waste.
Central european experience, which is unfortunately becoming relevant also for the current US: for world-modelling purposes, you should have hypotheses like 'this thing is happening because of a russian intelligence operation' or 'this person is saying what they are saying because they are a russian asset' in your prior with nontrivial weights.
I expected a quite different argument for empathy:
1. Argument from simulation: the most important part of our environment is other people; people are very complex and hard to predict; fortunately, we have hardware which is extremely good at 'simulating a human' - our individual brains. To guess what another person will do, or why they are doing what they are doing, it seems clearly computationally efficient to just simulate their cognition on my brain. Fortunately for empathy, simulations activate some of the same proprioceptive machinery and goal-modeling subagents, so the simulation leads to similar feelings.
2. Mirror neurons: it seems we have a powerful dedicated system for imitation learning, which is extremely advantageous for overcoming the genetic bottleneck. Mirroring activation patterns leads to empathy.
My personal impression is you are mistaken and the innovation has not stopped, but part of the conversation has moved elsewhere. E.g., taking just ACS, we do have ideas from the past 12 months which in our ideal world would fit into this type of glossary - free energy equilibria, levels of sharpness, convergent abstractions, gradual disempowerment risks. Personally I don't feel it is a high priority to write them up for LW, because they don't fit into the current zeitgeist of the site, which seems to direct a lot of attention mostly to:
- advocacy
- topics a large crowd cares about (e.g. mech interpretability)
- or topics some prolific and good writer cares about (e.g. people will read posts by John Wentworth)
Hot take, but the community loosely associated with active inference is currently a better place to think about agent foundations; workshops on topics like 'pluralistic alignment' or 'collective intelligence' have in total more interesting new ideas about what was traditionally understood as alignment; parts of AI safety went totally ML-mainstream, with the fastest conversation happening on X.
Seems worth mentioning the SOTA, which is https://futuresearch.ai/. Based on the competence & epistemics of the Futuresearch team, and their bot getting very strong but not superhuman performance, I roll to disbelieve that this demo is actually way better and predicts future events at a superhuman level.
Also I think it is generally bad to not mention or compare to the SOTA and to just cite your own prior work. Shame.
I'm skeptical of the 'wasting my time' argument.
A stance like 'going to poster sessions is great for young researchers, I don't do it anymore and just meet friends' is high-status, so, on priors, I would expect people to adopt it more than is optimal.
Realistically, a poster session is ~1.5h, maybe 2h with skimming what to look at. It is relatively common for people in AI to spend many hours per week digesting the news on Twitter. I really doubt the per-hour efficiency of following Twitter is better than that of poster sessions when approached intentionally. (While obviously aimlessly wandering between endless rows of posters is approximately useless.)
Corrected!
I broadly agree with this - we tried to describe a somewhat similar set of predictions in Cyborg periods.
Surprised you haven't heard about any facilitated communication tools.
A few thoughts:
- actually, these considerations mostly increase uncertainty and variance about timelines; if LLMs are missing some magic sauce, it is possible that smaller systems with the magic sauce could be competitive, and we could get really powerful systems sooner than Leopold's lines predict
- my take on one important thing which makes current LLMs different from humans is the gap described in Why Simulator AIs want to be Active Inference AIs; while that post intentionally avoids having a detailed scenario part, I think the ontology introduced is better for thinking about this than scaffolding
- not sure if this is clear to everyone, but I would expect the discussion of unhobbling to be one of the places where Leopold would need to stay vague to not breach OpenAI confidentiality agreements; for example, if OpenAI was putting a lot of effort into making LLM-like systems better at agency, I would expect him not to describe specific research and engineering bets
Agreed we would have to talk more. I think I mostly get the homunculi objection. Don't have time now to write an actual response, so here are some signposts:
- part of what you call agency is explained by a roughly active-inference style of reasoning
-- some types of "living" systems are characterized by having boundaries between them and the environment (boundaries mostly in the sense of separation of variables)
-- maintaining the boundary leads to a need to model the environment
-- modelling the environment introduces a selection pressure toward approximating Bayes
- the other critical ingredient is boundedness
-- in this universe, negentropy isn't free
-- this introduces a fundamental tradeoff / selection pressure for any cognitive system: length isn't free, bitflips aren't free, etc.
(--- downstream of that is compression everywhere, abstractions)
-- empirically, the cost/returns function for scaling cognition usually hits diminishing returns, leading to minds where it's not effective to grow the single mind further
--- this leads to the basin of convergent evolution I call "specialize and trade"
-- empirically, for many cognitive systems, there is a general selection pressure toward modularity
--- I don't know all the reasons for that, but one relatively simple one is 'wires are not free'; if wires are not free, you get colocation of computations, like brain regions or industry hubs
--- other possibilities are selection pressures from CAP theorem, MVG, ...
(modularity also looks a bit like box-inverted specialize and trade)
So, in short, I agree with the spirit of 'If humans didn't have a fixed skull size, you wouldn't get civilization with specialized members', and my response is that there seems to be an extremely general selection pressure in this direction. If cells were able to just grow in size and it was efficient, you wouldn't get multicellular organisms. If code bases were able to just grow in size and it was efficient, I wouldn't have a myriad of packages on my laptop; it would all be just kernel. (But even if it were just kernel, it seems modularity would kick in and you would still get the 'distinguishable parts' structure.)
That's why solving hierarchical agency is likely necessary for success
(crossposted from twitter) Main thoughts:
1. Maps pull the territory
2. Beware what maps you summon
Leopold Aschenbrenner's series of essays is a fascinating read: there are a ton of locally valid observations and arguments. A lot of the content is the type of stuff mostly discussed in private. Many of the high-level observations are correct.
At the same time, my overall impression is that the set of maps sketched pulls toward existential catastrophe, and this is true not only for the 'this is how things can go wrong' part, but also for the 'this is how we solve things' part. Leopold is likely aware of this angle of criticism, and deflects it with 'this is just realism' and 'I don't wish things were like this, but they most likely are'. I basically don't buy that claim.
You may be interested in 'The self-unalignment problem' for some theorizing https://www.lesswrong.com/posts/9GyniEBaN3YYTqZXn/the-self-unalignment-problem
Mendel's Laws seem counterfactual by about ~30 years, based on the partial re-discovery taking that much time. His experiments are technically something which someone could have done basically any time in the last few thousand years, given basic maths.
I do agree the argument "We're just training AIs to imitate human text, right, so that process can't make them get any smarter than the text they're imitating, right? So AIs shouldn't learn abilities that humans don't have; because why would you need those abilities to learn to imitate humans?" is wrong and clearly the answer is "Nope".
At the same time I do not think parts of your argument in the post are locally valid or good justification for the claim.
A correct and locally valid argument for why GPTs are not capped at human level was already written here.
In a very compressed form, you can just imagine GPTs have text as their "sensory inputs" generated by the entire universe, similarly to you having your sensory inputs generated by the entire universe. Neither human intelligence nor GPTs are constrained by the complexity of the task (also: in the abstract, it's the same task). Because of that, "task difficulty" is not a promising way to compare these systems, and it is necessary to look into actual cognitive architectures and bounds.
With the last paragraph, I'm somewhat confused by what you mean by "tasks humans evolved to solve". Does e.g. sending humans to the Moon, or detecting the Higgs boson, count as a "task humans evolved to solve" or not?
I sort of want to flag that this interpretation of whatever gossip you heard seems misleading / only telling a small part of the story, based on my understanding.
I would imagine I would also react to it with a smile in the context of an informal call. When used as a brand / "fill in the interest form here", I just think it's not a good name, even if I am sympathetic to proposals to create more places to do big-picture thinking about the future.
Sorry, but I don't think this should be branded as "FHI of the West".
I don't think you personally or Lightcone share that much of an intellectual taste with FHI or Nick Bostrom - Lightcone seems firmly in the intellectual tradition of Berkeley, shaped by orgs like MIRI and CFAR. This tradition was often close to FHI thought, but also quite often in tension with it. My hot take is you particularly miss part of the generators of the taste which made FHI different from Berkeley. I sort of dislike the "FHI" brand being used in this way.
edit: To be clear I'm strongly in favour of creating more places for FHI-style thinking, just object to the branding / "let's create new FHI" frame. Owen expressed some of the reasons better and more in depth
You are exactly right that active inference models which behave in self-interest, or in any coherently goal-directed way, must have something like an optimism bias.
My guess about what happens in animals and to some extent humans: part of the 'sensory inputs' are interoceptive, tracking internal body variables like temperature, glucose levels, hormone levels, etc. Evolution already built a ton of 'control theory type circuits' into bodies (even how to build a body from a single cell is an extremely impressive optimization task...). This evolutionarily older circuitry likely encodes a lot about what evolution 'hopes for' in terms of what states the body will occupy. Subsequently, when building predictive models and turning them into active inference, my guess is a lot of the specification is done by 'fixing priors' of interoceptive inputs on values like 'not being hungry'. The later-learned structures then also become a mix between beliefs and goals: e.g. the fixed prior on my body temperature during my lifetime leads to a model where I get a 'prior' about wearing a waterproof jacket when it rains, which becomes something between an optimistic belief and a 'preference'. (This retrodicts that a lot of human biases could be explained as "beliefs" somewhere between "how things are" and "how it would be nice if things were".)
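One compact way to write this down - a sketch using the standard expected-free-energy decomposition from the active inference literature, in my notation rather than anything from the comment above - is that preferences enter as a fixed prior $\tilde{p}(o)$ over (interoceptive) observations:

$$
G(\pi) \;=\; -\,\underbrace{\mathbb{E}_{q(o\mid\pi)}\!\Big[ D_{\mathrm{KL}}\big[\,q(s\mid o,\pi)\,\big\|\,q(s\mid\pi)\,\big] \Big]}_{\text{epistemic value (information gain)}}
\;-\; \underbrace{\mathbb{E}_{q(o\mid\pi)}\big[\ln \tilde{p}(o)\big]}_{\text{pragmatic value: fixed prior acting as preference}}
$$

Policies $\pi$ that minimize $G$ are pulled toward observations the fixed prior 'expects' (glucose in range, body temperature stable), which is exactly the mixture of belief and goal described above.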
But this suggests an approach to aligning embedded simulator-like models: Induce an optimism bias such that the model believes everything will turn out fine (according to our true values)
My current guess is any approach to alignment which will actually lead to good outcomes must include some features suggested by active inference. E.g., active inference suggests something like: an 'aligned' agent which is trying to help me likely 'cares' about my 'predictions' coming true, and has some 'fixed priors' about me liking the results. Which gives me something avoiding both 'my wishes were satisfied, but in bizarre goodharted ways' and 'this can't do more than I can'.
- Too much value and too positive feedback on legibility. Replacing smart illegible computations with dumb legible stuff
- Failing to develop actual rationality and focusing on cultivation of the rationalist memeplex or rationalist culture instead
- Not understanding the problems with the theoretical foundations on which sequences are based (confused formal understanding of humans -> confused advice)
+1 on the sequence being among the best things in 2022.
You may enjoy an additional/somewhat different take on this from population/evolutionary biology (and here). (To translate the map, you can think about yourself as a population of selves. Or, in the opposite direction, from a gene-centric perspective it obviously makes sense to think about the population as a population of selves.)
Part of the irony here is that evolution landed on the broadly sensible solution (geometric rationality). However, after almost every human doing the theory got somewhat confused by the additive linear-EV rationality maths, what most animals (and often humans, at the S1 level) do got interpreted as 'cognitive bias' - in the spirit of assuming the obviously stupid evolution was not able to figure out linear argmax-over-utility algorithms in a few billion years.
I guess the lack of much engagement is caused by:
- the relation between the 'additive' and 'multiplicative' pictures being deceptively simple in a formal way (see the worked example below)
- the conceptual understanding of what's going on and why being quite tricky; one reason, I guess, is that our S1 / brain hardware runs almost entirely in the multiplicative / log world, while people train their S2 understanding on the linear additive picture; as Scott explains, the maths formalism fails us
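A worked toy illustration of that formally-simple gap (my example, not from the comment): take a repeated 50/50 bet that either doubles your wealth or halves it.

$$
\mathbb{E}[X] \;=\; 0.5\cdot 2 + 0.5\cdot 0.5 \;=\; 1.25,
\qquad
\mathbb{E}[\ln X] \;=\; 0.5\ln 2 + 0.5\ln 0.5 \;=\; 0.
$$

In the additive picture each round looks like a 25% expected gain, so linear argmax-EV says to stake everything; in the multiplicative/log picture the typical long-run growth is exactly zero (the geometric mean is $\sqrt{2 \cdot 0.5} = 1$), so declining to go all-in on such bets is not a bias but the sensible geometric policy.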
This is a short self-review, but with a bit of distance, I think understanding 'limits to legibility' is maybe one of the top 5 things an aspiring rationalist should deeply understand, and lack of this understanding leads to many bad outcomes in both the rationalist and EA communities.
In a very brief form: maybe the most common cause of EA problems and stupidities is attempts to replace illegible S1 boxes able to represent human values, such as 'caring', with legible, symbolically described, verbal moral reasoning subject to memetic pressure.
Maybe the most common cause of rationalist problems and difficulties with coordination is cases where people replace illegible smart S1 computations with legible S2 arguments.
In my personal view, 'Shard theory of human values' illustrates both the upsides and pathologies of the local epistemic community.
The upsides
- the majority of the claims are true or at least approximately true
- "shard theory" as a social phenomenon reached a critical mass, making the ideas visible to the broader alignment community, which works e.g. via talking about them in person, votes on LW, a series of posts, ...
- shard theory coined a number of locally memetically fit names or phrases, such as 'shards'
- part of the success is leading some people in the AGI labs to think about the mathematical structure of human values, which is an important problem
The downsides
- almost none of the claims which are true are original; most of this was described elsewhere before, mainly in the active inference/predictive processing literature, or in thinking about multi-agent models of mind
- the claims which are novel usually seem somewhat confused (e.g. 'human values are inaccessible to the genome', or the naive RL intuitions)
- the novel terminology is incompatible with the existing research literature, making it difficult for the alignment community to find or understand existing research, and making it difficult for people from other backgrounds to contribute (while this is not the best option for the advancement of understanding, paradoxically, it may be positively reinforced in the local environment, as you get more credit for reinventing stuff under new names than for pointing to relevant existing research)
Overall, 'shards' became so popular that reading at least the basics is probably necessary to understand what many people are talking about.
My current view is this post is decent at explaining something which is the "2nd type of obvious" in a limited space, using a physics metaphor. What there is to see is basically given in the title: you can get a nuanced understanding of the relations between deontology, virtue ethics and consequentialism using the frame of "effective theory" originating in physics, and using "bounded rationality" from econ.
There are many other ways to get this: for example, you can read hundreds of pages of moral philosophy, or do a degree in it. The advantage of this text is that you can take a shortcut and get the same thing using the physics-metaphor map. The disadvantage is that understanding how effective theories work in physics is a prerequisite, which quite constrains the range of people to whom this is useful, and its broad appeal.
This is a great complement to Eliezer's 'List of lethalities', in particular because, in cases of disagreement, the beliefs of most people working on the problem were, and still mostly are, closer to this post. Paul writing it provided a clear, well-written reference point, and, with many others expressing their views in comments and other posts, helped make the beliefs in AI safety more transparent.
I still occasionally reference this post when talking to people who, after reading a bit about the debate (e.g. on social media), first form an oversimplified model of the debate in which there is some unified 'safety' camp vs. the 'optimists'.
Also I think this demonstrates that 'just stating your beliefs' in a moderately-dimensional projection can be a useful type of post, even without much justification.