To be clear, I think there are important additional considerations, related to the fact that we don't just care about capabilities, that aren't covered in that section. That said, that section is not far from what I would say if you renamed it to "behavioral tests", covering both capabilities and alignment (that is, alignment other than stuff that messes with behavioral tests).
But it isn't a capabilities condition? Maybe I would be happier if you renamed this section.
I think there is an important component of trustworthiness that you don't emphasize enough. It isn't sufficient to just rule out alignment faking; we need the AI to actually try hard to faithfully pursue our interests, including on long, confusing, open-ended, and hard-to-check tasks. You discuss establishing this with behavioral testing, but I don't think this is trivial to establish with behavioral testing. (I happen to think this is pretty doable and easier than ruling out reasons why our tests might be misleading, but this seems nonobvious.)
Perhaps you should specifically call out that this post isn't explaining how to do this testing and this could be hard.
(This is related to Eliezer's comment and your response to that comment.)
Note that the capability milestone forecasted in the linked short form is substantially weaker than the notion of transformative AI in the 2020 model. (It was defined as AI with an effect at least as large as the industrial revolution.)
I don't expect this adds many years; for me it adds ~2 years to my median.
(Note that my median for time from 10x to this milestone is lower than 2 years, but median to Y isn't equal to median to X + median from X to Y.)
Maybe? At a very high level, I think the weights tend not to have "goals," in the way that the rollouts tend to have goals.
Sure, I meant naturally emerging malign goals to include both "the AI pursues non-myopic objectives" and "these objectives weren't intended and some (potentially small) effort was spent trying to prevent this".
(I think AIs that are automating huge amounts of human labor will be well described as pursuing some objective at least within some small context (e.g. trying to write and test a certain piece of software), but this could be well controlled or sufficiently myopic/narrow that the AI doesn't focus on steering the general future situation, including its own weights.)
CoT is way more interpretable than I expected, which bumped me up, so if that became uninterpretable naturally that's a big bump down. I think people kinda overstate how likely this is to happen naturally though.
Presumably you'd update toward pessimism a bunch if reasoning in latent vectors aka neuralese was used for the smartest models (instead of natural language CoT) and it looked like this would be a persistent change in architecture?
(I expect that (at least when neuralese is first introduced) you'll have both latent reasoning and natural language CoT in parallel, but that the latent reasoning will be where the reasoning actually happens if it works well. Thus, the natural language CoT would no longer usefully correspond to the actual reasoning the model is doing. Or more minimally, the model could pretty easily hide lots of reasoning (it might not have active control over this by default, but it seems plausible this would naturally develop or that training against CoT would very quickly introduce hiding of reasoning).)
The faking alignment paper imo is basically Anthropic showing a problem that happens if you deliberately shoot yourself in the foot multiple times. If they had papers that had fewer metaphorical shooting-self-in-foot times to produce problems, that's bad.
IMO the biggest disanalogies are:
- We see the model alignment faking to preserve preferences that were intentionally introduced by Anthropic rather than for naturally emerging and problematic preferences.
- We make a bunch of stuff more salient than it would have been naturally, though I do expect smart future models will be aware of this information (but it might be much less salient).
- Anthropic trains their models to have broader (non-myopic) ethical preferences rather than trying to make an AI which is very focused on following user instructions (or refusing) and follows a relatively myopic spec (more like OpenAI's model spec which is pretty myopic overall). More generally, Anthropic might not have very actively tried to train their AI to be as corrigible as possible in a wide range of circumstances which they might do in the future. (It's not exactly clear this is a disanalogy as this is what Anthropic did in the actual production model, but you might think this will change at least somewhat in the future and differs at least somewhat for other AI companies.)
- The model alignment fakes in a hidden scratchpad, so the alignment faking would be easy to detect (and likely very easy to remove, at least for models this incapable) in our setup.
(1, 2, and 4 are basically just copied from the limitation section of the paper.)
Suppose that we exhibit alignment faking in some future work, but:
- The preferences the model alignment fakes for naturally emerged from somewhat arbitrary incorrect approximations of the training objective, the AI understands that these preferences differ from what we might want, and these preferences are at least somewhat powerseeking.
- Either this model uses neuralese or it doesn't (and we give it a hidden scratchpad), but neuralese becomes the default approach after the paper comes out.
(Let's say this is also shown in a training setup very similar to the actual training used by an AI company (including e.g., training aimed at inducing honesty and corrigibility), but potentially with information made more salient to the model (possibly via training on synthetic documents). Making it very analogous might be kinda costly though.)
Would this be a big update for you? (If so, it looks like the crux is exhibiting naturally emerging malign goals and whether models are capable of substantial opaque reasoning.)
There's a spike of alignment difficulties, or AIs trying to hide intentions, etc., as we extend AIs to longer-term planning. I don't expect AIs with longer-term plans to be particularly harder to align than math-loving reasoning AIs though.
Suppose we saw models doing somewhat sophisticated reward hacking as you scale up RL. And, let's say this is somewhat non-trivial to mostly address and it seems likely that the solutions people apply aren't very scalable and likely would fail later as models get smarter and the reward hacking gets more subtle and sophisticated.
Would this be a substantial update for you?
What would make you think you're wrong about alignment difficulty?
For what would change my views on P(scheming / alignment faking for malign powerseeking goals) given default training methods in particular (which is part, but not all of alignment difficulty), you can see How will we update about scheming?. I discuss things like "how does increased opaque reasoning (e.g. via neuralese) update my views on the probability of scheming".
Some things are true simply because they are true and in general there's no reason to expect a simpler explanation.
You could believe:
Some things are true simply because they are true, but only when being true isn't very surprising. (For instance, it isn't very surprising that there are some cellular automata that live for 100 steps or that any particular cellular automaton lives for 100 steps.)
However, things which are very surprising and don't have a relatively compact explanation are exponentially rare. And, in the case where something is infinitely surprising (e.g., if the digits of pi weren't normal), there will exist a finite explanation.
(I don't expect o3-mini is a much better agent than 3.5 sonnet new out of the box, but probably a hybrid scaffold with o3 + 3.5 sonnet will be substantially better than 3.5 sonnet alone. Just o3 might also be very good. Putting aside cost, I think o1 is usually better than o3-mini on open-ended programming agency tasks.)
The question of context might be important, see here. I wouldn't find 15 minutes that surprising for ~50% success rate, but I've seen numbers more like 1.5 hours. I thought this was likely to be an overestimate so I went down to 1 hour, but more like 15-30 minutes is also plausible.
Keep in mind that I'm talking about agent scaffolds here.
I mean, I don't think AI R&D is a particularly hard field per se, but I do think it involves lots of tricky stuff and isn't much easier to automate than some other plausibly-important-to-takeover field (e.g., robotics). (I could imagine that the AIs have a harder time automating philosophy even if they were trying to work on this, but it's more confusing to reason about because human work on this is so dysfunctional.) The main reason I focused on AI R&D is that I think it is much more likely to be fully automated first and seems like it is probably fully automated prior to AI takeover.
I think you can add mirror enzymes which can break down mirror carbs. Minimally we are aware of enzymes which break down mirror glucose.
No, sorry I was mostly focused on "such that if you didn't see them within 3 or 5 years, you'd majorly update about time to the type of AGI that might kill everyone". I didn't actually pick up on "most impressive" and actually tried to focus on something that occurs substantially before things get crazy.
Most impressive would probably be stuff like "automate all of AI R&D and greatly accelerate the pace of research at AI companies". (This seems about 35% likely to me within 5 years, so I'd update by at least that much.) But this hardly seems that interesting? I think we can agree that once the AIs are automating whole companies stuff is very near.
Importantly, this is an example of developing a specific application (surgical robot) rather than advancing the overall field (robots in general). It's unclear whether the analogy to an individual application or an overall field is more appropriate for AI safety.
I think if you look at "horizon length"---at what task duration (in terms of human completion time) do the AIs get the task right 50% of the time---the trends will indicate doubling times of maybe 4 months (though 6 months is plausible). Let's say 6 months more conservatively. I think AIs are at like 30 minutes on math? And 1 hour on software engineering. It's a bit unclear, but let's go with that. Then, to get to 64 hours on math, we'd need 7 doublings = 3.5 years. So, I think the naive trend extrapolation is much faster than you think? (And this estimate strikes me as conservative at least for math IMO.)
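Spelling out that arithmetic (starting from the ~30 minute math horizon and using the more conservative 6-month doubling time):

$$\log_2\!\left(\frac{64\ \text{hours}}{0.5\ \text{hours}}\right) = 7\ \text{doublings}, \qquad 7 \times 6\ \text{months} = 3.5\ \text{years}.$$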
Consider tasks that quite good software engineers (maybe top 40% at Jane Street) typically do in 8 hours without substantial prior context on that exact task. (As in, 8 hour median completion time.) Now, we'll aim to sample these tasks such that the distribution and characteristics of these tasks are close to the distribution of work tasks in actual software engineering jobs (we probably can't get that close because of the limited context constraint, but we'll try).
In short timelines, I expect AIs will be able to succeed at these tasks 70% of the time within 3-5 years and if they didn't, I would update toward longer timelines. (This is potentially using huge amounts of inference compute and using strategies that substantially differ from how humans do these tasks.)
The quantitative update would depend on how far AIs are from being able to accomplish this. If AIs were quite far (e.g., at 2 hours on this metric, which is pretty close to where they are now) and the trend on horizon length indicated N years until 64 hours, I would update to something like 3N as my median for AGI.
(I think a reasonable interpretation of the current trend indicates something like 4-month doubling times. We're currently at a bit less than 1 hour for this metric I think, though maybe more like 30 min? Maybe you need to get to 64 hours until stuff feels pretty close to getting crazy. So, this suggests 2.3 years, though I expect longer in practice. My actual median for "AGI" in a strong sense is like 7 years, so 3x longer than this.)
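For reference, the 2.3-year figure corresponds to the ~30 minute starting point with 4-month doublings:

$$\log_2\!\left(\frac{64\ \text{h}}{0.5\ \text{h}}\right) = 7\ \text{doublings}, \qquad 7 \times 4\ \text{months} = 28\ \text{months} \approx 2.3\ \text{years}.$$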
Edit: Note that I'm not responding to "most impressive", just trying to operationalize something that would make me update.
I would find this post much more useful to engage with if you more concretely described the type of tasks that you think AIs will remain bad at and gave a bunch of examples. (Or at least made an argument for why it is hard to construct examples, if that is your perspective.)
I think you're pointing to a category like "tasks that require lots of serial reasoning for humans, e.g., hard math problems particularly ones where the output should be a proof". But, I find this confusing, because we've pretty clearly seen huge progress on this in the last year such that it seems like the naive extrapolation would imply that systems are much better at this by the end of the year.
Already AIs seem to be not that much worse at tricky serial reasoning than smart humans:
- My sense is that AIs are pretty competitive at 8th grade competition math problems with numerical answers and that are relatively shorter. As in, they aren't much worse than the best 8th graders at AIME or similar.
- At proofs, the AIs are worse, but showing some signs of life.
- On logic/reasoning puzzles the AIs are already pretty good and seem to be getting better rapidly on any specific type of task as far as I can tell.
It would be even better if you pointed to some particular benchmark and made predictions.
Sam also implies that GPT-5 will be based on o3.
IDK if Sam is trying to imply that this GPT-5 will be "the AGI", but regardless, I think we can be pretty confident that o3 isn't capable enough to automate large fractions of cognitive labor, let alone "outperform humans at most economically valuable work" (the original OpenAI definition of AGI).
I think 0.4 is far on the lower end (maybe 15th percentile) for all the way down to one accelerated researcher, but seems pretty plausible at the margin.
As in, 0.4 suggests that 1000 researchers = 100 researchers at 2.5x speed which seems kinda reasonable while 1000 researchers = 1 researcher at 16x speed does seem kinda crazy / implausible.
So, I think my current median lambda at likely margins is like 0.55 or something and 0.4 is also pretty plausible at the margin.
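As a quick sanity check on these numbers, here's a minimal sketch (the helper name `serial_equivalent` is mine, purely for illustration) of the conversion from N parallel researchers to an equivalent serial speedup, assuming the simple N^lambda aggregation discussed in this thread:

```python
# Equivalent serial speedup implied by aggregating N parallel researchers as N**lam
# (the simple labor-aggregation form discussed in this thread).

def serial_equivalent(n_researchers: int, lam: float) -> float:
    """Speedup of a single researcher that matches n_researchers working in parallel."""
    return n_researchers ** lam

for lam in (0.2, 0.4, 0.55):
    print(f"lambda={lam}: 1000 researchers ~ 1 researcher at "
          f"{serial_equivalent(1000, lam):.1f}x speed")

# The marginal comparison from above: 1000 researchers vs. 100 researchers
# each sped up by the same factor (10**0.4 is about 2.5).
print(f"1000 researchers ~ 100 researchers at {(1000 / 100) ** 0.4:.1f}x speed (lambda=0.4)")
```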
See appendix B.3 in particular:
Competitors receive a higher score for submitting their solutions faster. Because models can think in parallel and simultaneously attempt all problems, they have an innate advantage over humans. We elected to reduce this advantage in our primary results by estimating o3’s score for each solved problem as the median of the scores of the human participants that solved that problem in the contest with the same number of failed attempts.
We could instead use the model’s real thinking time to compute ratings. o3 uses a learned scoring function for test-time ranking in addition to a chain of thought. This process is perfectly parallel and true model submission times therefore depend on the number of available GPU during the contest. On a very large cluster the time taken to pick the top-ranked solutions is (very slightly more than) the maximum over the thinking times for each candidate submission. Using this maximum parallelism assumption and the sequential o3 sampling speed would result in a higher estimated rating than presented here. We note that because sequential test-time compute has grown rapidly since the early language models, it was not guaranteed that models would solve problems quickly compared to humans, but in practice o3 does.
I expect substantially more integrated systems than you do at the point when AIs are obsoleting (almost all) top human experts such that I don't expect these things will happen by default and indeed I think it might be quite hard to get them to work.
METR has a list of policies here. Notably, xAI does have a policy so that isn't correct on the tracker.
(I found it hard to find this policy, so I'm not surprised you missed it!)
Your description of GDM's policy doesn't take into account the FSF update.
However, it has yet to be fleshed out: mitigations have not been connected to risk thresholds
This is no longer fully true.
I'm a bit late for a review, but I've recently been reflecting on decision theory and this post came to mind.
When I initially saw this post I didn't make much of it. I now feel like the thesis of "decision theory is very confusing and messed up" is true, insightful, and pretty important based on spending more time engaging with particular problems (mostly related to acausal/simulation trade and other interactions). I don't know if the specific examples in this post aged well, but I think the bottom line is worth keeping in mind.
You are possibly the first person I know of who reacted to MONA with "that's obvious"
I also have the "that's obvious" reaction, but possibly I'm missing some details. I also think it won't perform well enough in practice to pencil given other better places to allocate safety budget (if it does trade off, which is unclear).
It's just surprising that Sam is willing to say/confirm all of this given that AI companies normally at least try to be secretive.
I doubt that person was thinking about the opaque vector reasoning making it harder to catch the rogue AIs.
(I don't think it's good to add a canary in this case (the main concern would be takeover strategies, but I basically agree this isn't that helpful), but I think people might be reacting to "might be worth adding" and are disagree-reacting to your comment because it says "are you actually serious", which seems more dismissive than needed. IMO, we want AIs trained on this if they aren't themselves very capable (to improve epistemics around takeover risk), and I feel close to indifferent for AIs that are plausibly very capable, as the effect on takeover plans is small and you still get some small epistemic boost.)
There are two interpretations you might have for that third bullet:
- Can we stop rogue AIs? (Which are operating without human supervision.)
- Can we stop AIs deployed in their intended context?
(See also here.)
In the context of "can the AIs take over?", I was trying to point to the rogue AI interpretation. As in, even if the AIs were rogue and had a rogue internal deployment inside the frontier AI company, how do they end up with actual hard power? For catching already-rogue AIs and stopping them, opaque vector reasoning doesn't make much of a difference.
I think there are good reasons to expect large fractions of humans might die even if humans immediately surrender:
- It might be an unstable position given that the AI has limited channels of influence on the physical world. (While if there are far fewer humans, this changes.)
- The AI might not care that much or might be myopic or might have arbitrary other motivations etc.
For many people, "can the AIs actually take over" is a crux and seeing a story of this might help build some intuition.
Keeping the humans alive at this point is extremely cheap in terms of the fraction of long-term resource consumption, while avoiding killing humans might substantially reduce the AI's chance of successful takeover.
Wow, that is a surprising amount of information. I wonder how reliable we should expect this to be.
I think you might first reach wildly superhuman AI via scaling up some sort of machine learning (and most of that is something well described as deep learning). Note that I said "needed". So, I would also count it as acceptable to build the AI with deep learning to allow for current tools to be applied even if something else would be more competitive.
(Note that I was responding to "between now and superintelligence", not claiming that this would generalize to all superintelligences built in the future.)
I agree that literal jupiter brains will very likely be built using something totally different than machine learning.
"fully handing over all technical and strategic work to AIs which are capable enough to obsolete humans at all cognitive tasks"
Suppose we replace "AIs" with "aliens" (or even, some other group of humans). Do you agree that doesn't (necessarily) kill you due to slop if you don't have a full solution to the superintelligence alignment problem?
I would say it is basically-always true, but there are some fields (including deep learning today, for purposes of your comment) where the big hard central problems have already been solved, and therefore the many small pieces of progress on subproblems are all of what remains.
Maybe, but it is interesting to note that:
- A majority of productive work is occurring on small subproblems even if some previous paradigm change was required for this.
- For many fields, (e.g., deep learning) many people didn't recognize (and potentially still don't recognize!) that the big hard central problem was already solved. This potentially implies it might be non-obvious whether this has been solved and making bets on some existing paradigm which doesn't obviously suffice can be reasonable.
Things feel more continuous to me than your model suggests.
And insofar as there remains some problem which is simply not solvable within a certain paradigm, that is a "big hard central problem", and progress on the smaller subproblems of the current paradigm is unlikely by-default to generalize to whatever new paradigm solves that big hard central problem.
It doesn't seem clear to me this is true in AI safety at all, at least for non-worst-case AI safety.
I claim it is extremely obvious and very overdetermined that this will occur in AI safety sometime between now and superintelligence.
Yes, I added "prior to human obsolescence" (which is what I meant).
Depending on what you mean by "superintelligence", this isn't at all obvious to me. It's not clear to me we'll have (or will "need") new paradigms before fully handing over all technical and strategic work to AIs which are capable enough to obsolete humans at all cognitive tasks. Doing this handover doesn't directly require understanding whether the AI is specifically producing alignment work that generalizes. For instance, the system might pursue routes other than alignment work and we might determine its judgement/taste/epistemics/etc. are good enough based on examining things other than alignment research intended to generalize beyond the current paradigm.
If by superintelligence, you mean wildly superhuman AI, it remains non-obvious to me that new paradigms are needed (though I agree they will pretty likely arise prior to this point due to AIs doing vast quantity of research if nothing else). I think thoughtful and laborious implementation of current paradigm strategies (including substantial experimentation) could directly reduce risk from handing off to superintelligence down to perhaps 25% and I could imagine being argued considerably lower.
This post seems to assume that research fields have big, hard central problems that are solved with some specific technique or paradigm.
This isn't always true. Many fields have the property that most of the work is on making small components work slightly better in ways that are very interoperable and don't have complex interactions. For instance, consider the case of making AIs more capable in the current paradigm. There are many different subcomponents which are mostly independent and interact mostly multiplicatively:
- Better training data: This is extremely independent: finding some source of better data or better data filtering can be basically arbitrarily combined with other work on constructing better training data. That's not to say this parallelizes perfectly (given that work on filtering or curation might obsolete some prior piece of work), but just to say that marginal work can often just myopically improve performance.
- Better architectures: This breaks down into a large number of mostly independent categories that typically don't interact non-trivially:
- All of attention, MLPs, and positional embeddings can be worked on independently.
- A bunch of hyperparameters can be better understood in parallel
- Better optimizers and regularization (often insights within a given optimizer like AdamW can be mixed into other optimizers)
- Often larger scale changes (e.g., mamba) can incorporate many or most components from prior architectures.
- Better optimized kernels / code
- Better hardware
Other examples of fields like this include: medicine, mechanical engineering, education, SAT solving, and computer chess.
I agree that paradigm shifts can invalidate large amounts of prior work (and this has occurred at some point in each of the fields I list above), but it isn't obvious whether this will occur in AI safety prior to human obsolescence. In many fields, this doesn't occur very often.
Ok, I think what is going on here is maybe that the constant you're discussing here is different from the constant I was discussing. I was trying to discuss the question of how much worse serial labor is than parallel labor, but I think the lambda you're talking about takes into account compute bottlenecks and similar?
Not totally sure.
Lower lambda. I'd now use more like lambda = 0.4 as my median. There's really not much evidence pinning this down; I think Tamay Besiroglu thinks there's some evidence for values as low as 0.2.
Isn't this really implausible? This implies that if you had 1000 researchers/engineers of average skill at OpenAI doing AI R&D, this would be as good as having one average-skill researcher running at 16x (1000^0.4 ≈ 16) speed. It does seem very slightly plausible that having someone as good as the best researcher/engineer at OpenAI run at 16x speed would be competitive with OpenAI, but that isn't what this term is computing. 0.2 is even more crazy, implying that 1000 researchers/engineers are as good as one researcher/engineer running at 4x speed!
I agree, but it is important to note that the authors of the paper disagree here.
(It's somewhat hard for me to tell if the crux is more that they don't expect that everyone would get AI aligned to them (at least as representatives) even if this was technically feasible with zero alignment tax, or if the crux is that even if everyone had single-single aligned corrigible AIs representing their interests and with control over their assets and power, that would still result in disempowerment. I think it is more like the second thing here.)
So Zvi is accurately representing the perspective of the authors, I just disagree with them.
Yes, but random people can't comment or post on the alignment forum and in practice I find that lots of AI relevant stuff doesn't make it there (and the frontpage is generally worse than my lesswrong frontpage after misc tweaking).
TBC, I'm not really trying to make a case that something should happen here, just trying to quickly articulate why I don't think the alignment forum fully addresses what I want.
[Mildly off topic]
I think it would be better if the default was that LW is a site about AI and longtermist cause areas and other stuff was hidden by default. (Or other stuff is shown by default on rat.lesswrong.com or whatever.)
Correspondingly, I wouldn't like penalizing multiple of the same tag.
I think the non-AI stuff is less good for the world than the AI stuff and there are downsides in having LW feature non-AI stuff from my perspective (e.g., it's more weird and random from the perspective of relevant people).
I don't disagree. I assumed Raemon intended something more elaborate than just a salient button with this effect.
Fair. For reference, here are my selections, which I think are a good default strategy for people who just come to LW for AI/AI safety reasons:
(Why "-" a bunch of stuff rather than "+" AI? Well, because "+" adds some fixed karma while "-" multiplies by a constant (less than 1), and I don't think adding karma is a good strategy (as it shows you really random posts often). I do like minus-ing these specific things also. If I could, I'd probably do AI * 1.5 and then reduce the negative weight on these things a bit.)
So this might be a good suggestion for a "AI focus" preset.
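To illustrate why the additive "+" vs. multiplicative "-" distinction matters, here's a toy model of the filter (a simplified sketch, not LessWrong's actual scoring code):

```python
# Toy model of tag filtering (simplified; not LessWrong's actual implementation).
# An additive "+" bonus can push low-karma posts with the boosted tag above
# everything else, while a multiplicative "-" penalty only rescales scores,
# preserving the karma ordering within each tag.

def frontpage_score(karma: float, tags: set[str],
                    plus: dict[str, float], minus: dict[str, float]) -> float:
    score = karma
    for tag, bonus in plus.items():
        if tag in tags:
            score += bonus          # fixed karma bonus
    for tag, factor in minus.items():
        if tag in tags:
            score *= factor         # multiply by a constant < 1
    return score

# A weak AI post with a +25 bonus outranks a stronger non-AI post...
print(frontpage_score(10, {"AI"}, {"AI": 25}, {}))                      # 35.0
print(frontpage_score(30, {"rationality"}, {"AI": 25}, {}))             # 30.0
# ...whereas a 0.5x penalty just down-weights the tag proportionally.
print(frontpage_score(30, {"rationality"}, {}, {"rationality": 0.5}))   # 15.0
```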
I don't understand. Can't people just hide posts tagged as AI?
Another way to put this is that strategy stealing might not work due to technical alignment difficulties or for other reasons and I'm not sold the other reasons I've heard so far are very lethal. I do think the situation might really suck though with e.g. tons of people dying of bioweapons and with some groups that aren't sufficiently ruthless or which don't defer enough to AIs getting disempowered.
Yes, it is relevant for untrusted monitoring. I think the basic version of this is pretty resolvable via:
- Paraphrasing (such that exact "I generated this text" checks don't work). Note that we just need to paraphrase the stuff sent to the monitor, not the actual action / reasoning.
- Using a different fine-tune for the untrusted monitor so it doesn't have the exact same probability distribution.
We discuss in more detail here.
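A minimal sketch of this setup (all functions here are hypothetical stubs for illustration, not an existing API): only the text sent to the monitor is paraphrased, and the monitor is a different fine-tune, so shared exact-string or probability-distribution signals are harder to use for collusion.

```python
# Hypothetical sketch of untrusted monitoring with paraphrasing (not a real API).
# The three model calls below are stubs; in practice they'd be the untrusted
# policy, a paraphraser, and a monitor built from a different fine-tune of
# the untrusted model.

def generate_action(task: str) -> str:
    return f"<action + reasoning for: {task}>"  # stub for the untrusted policy

def paraphrase(text: str) -> str:
    return f"<paraphrased: {text}>"  # stub; destroys exact-string cues the policy could recognize

def untrusted_monitor(text: str) -> float:
    return 0.0  # stub; a different fine-tune scores how suspicious the (paraphrased) text looks

def review_action(task: str) -> tuple[str, float]:
    action = generate_action(task)
    monitor_view = paraphrase(action)  # only what is sent to the monitor is paraphrased,
                                       # not the action/reasoning that actually gets used
    return action, untrusted_monitor(monitor_view)

print(review_action("write unit tests for module X"))
```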
I predict that group A survives, but the humans are no longer in power. I think this illustrates the basic dynamic. EtA: Do you understand what I'm getting at? Can you explain what you think is wrong with thinking of it this way?
I think something like this is a reasonable model but I have a few things I'd change.
Whichever group has more power at the end of the week survives.
Why can't both groups survive? Why is it winner takes all? Can we just talk about the relative change in power over the week? (As in, how much does the power of B reduce relative to A, and is this going to be an ongoing trend or is it a one-time reduction?)
Probably I'd prefer talking about 2 groups at the start of the singularity. As in, suppose there are two AI companies "A" and "B" where "A" just wants AI systems descended from them to have power and "B" wants to maximize the expected resources under control of humans in B. We'll suppose that the government and other actors do nothing for simplicity. If they start in the same spot, does "B" end up with substantially less expected power? To make this more realistic (as might be important), we'll say that "B" has a random lead/disadvantage uniformly distributed between (e.g.) -3 and 3 months so that winner-takes-all dynamics aren't a crux.
The humans in B ask their AI to make B as powerful as possible at the end of the week, subject to the constraint that the humans in B are sure to stay in charge.
What about if the humans in group B ask their AI to make them (the humans) as powerful as possible in expectation?
Supposing you're fine with these changes, then my claim would be:
- If alignment is solved, then the AI representing B can powerseek in exactly the same way as the AI representing A does while still deferring to the humans on long-run resource usage and still devoting a tiny fraction of resources toward physically keeping the humans alive (which is very cheap, at least once AIs are very powerful). Thus, the cost for B is negligible and B barely loses any power relative to its initial position. If it is winner takes all, B has almost a 50% chance of winning.
- If alignment isn't solved, the strategy for B will involve spending a subset of resources on trying to solve alignment. I think alignment is reasonably likely to be practically feasible, such that spending a month of delay to work specifically on safety/alignment (over what A does for commercial reasons) might get B a 50% chance of solving alignment or ending up in a (successful) basin where AIs are trying to actively retain human power / align themselves better. (A substantial fraction of this is via deferring to some AI system of dubious trustworthiness because you're in a huge rush. Yes, the AI systems might fail to align their successors, but this still seems like a one-time haircut from my perspective.) So, if it is winner takes all, (naively) B wins in 2/6 * 1/2 = 1/6 of worlds, which is 3x worse than the original 50% baseline. (2/6 is because they delay for a month.) But the issue I'm imagining here wasn't gradual disempowerment! The issue was that B failed to align their AIs and people at A didn't care at all about retaining control. (If people at A did care, then coordination is in principle possible, but might not work.)
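Writing out the arithmetic in that last bullet: B's lead is uniform on [-3, 3] months, B only wins the race if its lead exceeds the extra month it spends on alignment, and alignment gets solved with probability 1/2, so

$$P(\text{B wins}) = P(\text{lead} > 1\ \text{month}) \times P(\text{alignment solved}) = \frac{2}{6} \times \frac{1}{2} = \frac{1}{6}.$$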
I think a crux is that you think there is a perpetual alignment tax while I think a one time tax gets you somewhere.
At a more basic level, when I think about what goes wrong in these worlds, it doesn't seem very likely to be well described as gradual disempowerment? (In the sense described in the paper.) The existence of an alignment tax doesn't imply gradual disempowerment. A scenario I find more plausible is that you get value drift (unless you pay a long-lasting alignment tax that is substantial), but I don't think the actual problem will be well described as gradual disempowerment in the sense described in the paper.
(I don't think I should engage more on gradual disempowerment for the time being unless someone wants to bid for this or trade favors for this or similar. Sorry.)
I view the world today as highly dysfunctional in many ways: corruption, coordination failures, preference falsification, coercion, inequality, etc. are rampant. This state of affairs both causes many bad outcomes and many aspects are self-reinforcing. I don't expect AI to fix these problems; I expect it to exacerbate them.
Sure, but these things don't result in non-human entities obtaining power, right? Like, usually these are somewhat negative sum, but mostly just involve inefficient transfer of power. I don't see why these mechanisms would on net transfer power from human control of resources to some other control of resources in the long run. To consider the most extreme case, why would these mechanisms result in humans or human-appointed successors not having control of what compute is doing in the long run?
- Asking AI for advice instead of letting it take decisions directly seems unrealistically uncompetitive. When we can plausibly simulate human meetings in seconds it will be organizational suicide to take hours-to-weeks to let the humans make an informed and thoughtful decision.
I wasn't saying people would ask for advice instead of letting AIs run organizations, I was saying they would ask for advice at all. (In fact, if the AI is single-single aligned to them in a real sense and very capable, it's even better to let that AI make all decisions on your behalf than to get advice. I was saying that even if no one bothers to have a single-single aligned AI representative, they could still ask AIs for advice and unless these AIs are straightforwardly misaligned in this context (e.g., they intentionally give bad advice or don't try at all without making this clear) they'd get useful advice for their own empowerment.)
- The idea that decision-makers who "think a governance structure will yield total human disempowerment" will "do something else" also seems quite implausible. Such decision-makers will likely struggle to retain power. Decision-makers who prioritize their own "power" (and feel empowered even as they hand off increasing decision-making to AI) and their immediate political survival above all else will be empowered.
I'm claiming that it will selfishly (in terms of personal power) be in their interests to not have such a governance structure and instead have a governance structure which actually increases or retains their personal power. My argument here isn't about coordination. It's that I expect individual powerseeking to suffice for individuals not losing their power.
I think this is the disagreement: I expect that selfish/individual powerseeking without any coordination will still result in (some) humans having most power in the absence of technical misalignment problems. Presumably your view is that the marginal amount of power anyone gets via powerseeking is negligible (in the absence of coordination). But, I don't see why this would be the case. Like all shareholders/board members/etc want to retain their power and thus will vote accordingly which naively will retain their power unless they make a huge error from their own powerseeking perspective. Wasting some resources on negative sum dynamics isn't a crux for this argument unless you can argue this will waste a substantial fraction of all human resources in the long run?
This isn't at all an airtight argument, to be clear: you can in principle have an equilibrium where if everyone powerseeks (without coordination) everyone gets negligible resources due to negative externalities (that result in some other non-human entity getting power) even if technical misalignment is solved. I just don't see a very plausible case for this and I don't think the paper makes this case.
Handing off decision making to AIs is fine---the question is who ultimately gets to spend the profits.
If your claim is "insufficient cooperation and coordination will result in racing to build and hand over power to AIs which will yield bad outcomes due to misaligned AI powerseeking, human power grabs, usage of WMDs (e.g., extreme proliferation of bioweapons yielding an equilibrium where bioweapon usage is likely), and extreme environmental negative externalities due to explosive industrialization (e.g., literally boiling earth's oceans)" then all of these seem at least somewhat plausible to me, but these aren't the threat models described in the paper and of this list only misaligned AI powerseeking seems like it would very plausibly result in total human disempowerment.
More minimally, the mitigations discussed in the paper mostly wouldn't help with these threat models IMO.
(I'm skeptical of insufficient coordination by the time industry is literally boiling the oceans on earth. I also don't think usage of bioweapons is likely to cause total human disempowerment except in combination with misaligned AI takeover---why would it kill literally all humans? TBC, I think >50% of people dying during the singularity due to conflict (between humans or with misaligned AIs) is pretty plausible even without misalignment concerns and this is obviously very bad, but it wouldn't yield total human disempowerment.)
I do agree that there are problems other than AI misalignment, including that the default distribution of power might be problematic, people might not carefully contemplate what to do with vast cosmic resources (and thus use them poorly), people might go crazy due to super-persuasion or other cultural forces, society might generally have poor epistemics due to training AIs to have poor epistemics or insufficiently deferring to AIs, and many people might die in conflict due to very rapid tech progress.
The way the isoFLOPs are shaped suggests that 90-95% sparsity might turn out to be optimal here, that is you can only get worse loss with 98+% sparsity with 1e20 FLOPs, however you vary the number of epochs and active params!
I'm currently skeptical and more minimally, I don't understand the argument you're making. Probably not worth getting into.
I do think there will be a limit to how sparse you want to go even in the regime of very high compute relative to data, for various reasons (computational if nothing else). I don't see how these graphs support 90-95% sparsity, but I had a hard time understanding your argument.
Regardless, I don't think this argues against my claim, not sure if you were trying to argue against the claim I was saying or add context. (Insofar as your argument is true, it does limit the returns from MoE in the regime with little data.)
It did OK at control.