Not sure how all the details play out - in particular, my big question for any RL setup is "how does it avoid wireheading?". In this case, presumably there would have to be some kind of constraint on the reward-prediction model, so that it ends up associating the reward with the state of the environment rather than the state of the sensors.
I'm generally bullish on multiple objectives, and this post is another independent arrow pointing in that direction. Some other signs which I think point that way:
The argument from Why Subagents?. This is about utility maximizers rather than reward maximizers, but it points in a similar qualitative direction. Summary: once we allow internal state, utility-maximizers are not the only inexploitable systems; markets/committees of utility-maximizers also work.
The argument from Fixing The Good Regulator Theorem. That post uses some incoming information to "choose" between many different objectives, but that's essentially emulating multiple objectives. If we have multiple objectives explicitly, then the argument should simplify. Summary: if we need to keep around information relevant to many different objectives, but have limited space, that forces the use of a map/model in a certain sense.
One criticism: at a few points I think this post doesn't cleanly distinguish between reward-maximization and utility-maximization. For instance, optimizing for "the abstract concept of ‘I want to be able to sing well’" definitely sounds like utility-maximization.
Methylation is the primary transposon suppression mechanism, so methylation levels would tell us the extent to which transposons are suppressed at a given instant, but not the number of live transposon copies.
There's a lot of different kinds-of-value which mentorship can provide, but I'll break it into two main classes:
Things which can-in-principle be provided by other channels, but can be accelerated by 1-on-1 mentorship.
Things for which 1-on-1 mentorship is basically the only channel.
The first class includes situations where mentorship is a direct substitute for a textbook, in the same way that a lecture is a direct substitute for a textbook. But it also includes situations where mentorship adds value, especially via feedback. A lecture or textbook only has space to warn against the most common failure-modes and explain "how to steer", and learning to recognize failure-modes or steer "in the wild" takes practice. Similar principles apply to things which must be learned-by-doing: many mistakes will be made, many wrong turns, and without a guide, it may take a lot of time and effort to figure out the mistakes and which turns to take. A mentor can spot failure-modes as they come up, point them out (which potentially helps build recognition), point out the right direction when needed, and generally save a lot of time/effort which would otherwise be spent being stuck. A mentor still isn't strictly necessary in these situations - one can still gain the relevant skills from a textbook or a project - but it may take longer that way.
For these use-cases, there's a delicate balance. On the one hand, the mentee needs to explore and learn to recognize failure-cases and steer on their own, not become reliant on the mentor's guidance. On the other hand, the mentor does need to make sure the mentee doesn't spend too much time stuck. The Socratic method is often useful here, as are the techniques of the research-conversation support role. Also, once a mistake has been made and then pointed out, or once the mentor has provided some steering, it's usually worth explicitly explaining the more general pattern and how this instance fits it. (This also includes things like pointing out a different frame and then explaining how this frame works more generally - that's a more meta kind of "steering".)
The second class is mostly illegible knowledge/skills - things which a mentor wouldn't explicitly notice or doesn't know how to explain. For these, demonstration is the main channel. Feedback can be provided to some degree by demonstrating, then having the mentee try, or vice-versa. In general, it won't be obvious exactly what the mentor is doing differently than the mentee, or how to explain what the mentor is doing differently, but the mentee will hopefully pick it up anyway, at least enough to mimic it.
So far, other than those, we've mostly been kicking around smaller problems. For instance, the last couple days we were talking about general approaches for gearsy modelling in the context of a research problem Aysajan's been working on (specifically, modelling a change in India's farm subsidy policy). We also spent a few days on writing exercises - approximately everyone benefits from more practice in that department.
We've also done a few exercises to come up with Hard Problems to focus on. ("What sci-fi technologies or magic powers would you like to have?" was a particularly good one, and the lists of unsolved problems are also intended to generate ideas.) Once Aysajan has settled on ~10-20 Hard Problems to focus on (initially), those will drive the projects. You should see posts on whatever he's working on fairly frequently.
Live human being is indeed the harder version. I recommend the easier version first, harder version after.
The latter seems pretty hard to do, practically, with current technology, without using rockets (to at least set up an 'efficient' system initially).
Ah, but what specific bottlenecks make it hard? What are the barriers, and what chunking of the problem do they suggest?
Also: it's totally fine to assume that you can use rockets for setup, and then go back and remove that assumption later if the rocket-based initial setup is itself the main bottleneck to implementation.
Word on the grapevine: it sounds like they might just be adding a bunch of parameters in a way that's cheap to train but doesn't actually work that well (i.e. the "mixture of experts" thing).
It would be highly entertaining if ML researchers got into an arms race on parameter count, then Goodharted on it. Sounds like exactly the sort of thing I'd expect not-very-smart funding agencies to throw lots of money at. Perhaps the Goodharting would be done by the funding agencies themselves, by just funding whichever projects say they will use the most parameters, until they end up with lots of tiny nails. (Though one does worry that the agencies will find out that we can already do infinite-parameter-count models!)
a huge fraction of the genome consists of dead transposons
assuming the model is correct, different cells will have different numbers of live transposons
The first point makes it difficult-in-general to count transposons in the genome, especially with high-throughput sequencing (HTS). HTS usually breaks the genome into small pieces, sequences them separately, then computationally reconstructs the whole thing. But if there are many copies of similar sequences, this strategy is prone to error/uncertainty - and that's exactly the case for all those transposon-copies.
That said, tools for reliably sequencing transposons are an active research area and progress is being made, so it will probably be cheaper in the not-too-distant future.
One way to circumvent this whole issue is to look at the amount of transposon RNA in a cell, rather than DNA. This doesn't tell us anything about live transposon count - there could be a bunch of fresh copies which are being suppressed in a healthy cell. But it will tell us how active the transposons are right now. In practice, I expect this would mainly measure senescent cells (since they're the only cells where I'd expect lots of transposon RNA), but that's a hypothesis which would be useful to test.
Great comment - these were both things I thought about putting in the post, but didn't quite fit.
Goodhart, in particular, is a huge reason to avoid relying on many bits of selection, even aside from the exponential problem. Of course we also have to be careful of Goodhart when designing training programs, but at least there we have more elbow room to iterate and examine the results, and less incentive for the trainees to hack the process.
So, one simple model which I expect to be a pretty good approximation: IQ/g-factor is a thing and is mostly not trainable, and then skills are roughly-independently-distributed after controlling for IQ.
For selection in this model, we can select for a high-g-factor group as the first step, but then we still run into the exponential problem as we try to select further within that group (since skills are conditionally independent given g-factor).
This won't be a perfect approximation, of course, but we can improve the approximation as much as desired by adding more factors to the model. The argument for the exponential problem goes through: select first for the factors, and then the skills will be approximately-independent within that group. (And if the factors themselves are independent - as they are in many factor models - then we get the exponential problem in the first step too.)
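To make the exponential problem concrete, here's a quick simulation sketch (my toy numbers - 4 skills, top-10% cutoffs, a million people - chosen purely for illustration): with independent skills, selecting for the top decile on every skill simultaneously leaves roughly 0.1^4 of the population.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_skills = 1_000_000, 4

# Each skill independently distributed across the population
# (the picture after conditioning on g-factor).
skills = rng.standard_normal((n_people, n_skills))

# Select for the top 10% on every skill simultaneously.
thresholds = np.quantile(skills, 0.9, axis=0)
frac_selected = np.mean((skills > thresholds).all(axis=1))

print(frac_selected)  # ~0.1^4 = 1e-4: selection pressure multiplies across skills
```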
Does training scale linearly? Does it take just twice as much time to get someone to 4 bits (top 3% in world, one in every school class) and from 4 to 8 bits (one in 1000)?
This is a good point. The exponential -> linear argument is mainly for independent skills: if they're uncorrelated in the population then they should multiply for selection; if they're independently trained then they should add for training. (And note that these are not quite the same notion of "independent", although they're probably related.) It's potentially different if we're thinking about going from 90th to 95th percentile vs 50th to 75th percentile on one axis.
(I'll talk about the other two points in response to Gunnar's comment.)
Suggestion: find ways for candidates to work closely with top tier people such that it doesn't distract those people too much.
In particular, I currently think an apprenticeship-like model is the best starting point for experiments along these lines. Eli also recently pointed out to me that this lines up well with Bloom's two-sigma problem: one-on-one tutoring works ~two standard deviations better than basically anything else in education.
I won't give any spoilers, but I recommend "how to efficiently reach orbit without using a rocket" as a fun exercise. More generally, the goal is to reach orbit in a way which does not have exponentially-large requirements in terms of materials/resources/etc. (Rockets have exponential fuel requirements; see the rocket equation.)
A (likely) counterexample is elastin: it seems to not be broken down at all in humans. So if new elastin is produced (e.g. as part of a wound-healing response), it just sticks around indefinitely.
This is in contrast to homeostatic equilibrium, which describes most things in biological systems, but not elastin.
Writers do sometimes use "accumulation"/"depletion" to refer to things in homeostatic equilibrium, but I find this terminology misleading at best, and in most cases I think the writer themselves is confused about the distinction and why it matters.
Meta-note: I think the actual argument here is decent, but using the phrase "power dynamics" will correctly cause a bunch of people to dismiss it without reading the details. "Power", as political scientists use the term, is IMO something like a principal component which might have some statistical explanatory power, but is actively unhelpful for building gears-level models.
I would suggest instead the phrase "bargaining dynamics", which I think points to the gearsy parts of "power" while omitting the actively-unhelpful parts.
So, de Gray gave that mechanism for ROS export (which I think was one of his best contributions on the theory side of things, it was plausible and well-grounded and quite novel). It is a mechanism which can happen, although I don't know of experimental evidence for whether it's the main mechanism for ROS export, especially in senescent cells. And that also still leaves the question of ROS import into other cells - not so relevant for atherosclerosis, but quite relevant to the exponential acceleration of aging. Also, it leaves open the question of ROS transport between mitochondria/cytoplasm/nucleus, which is necessary to explain the DNA damage part of the senescence feedback loop.
If the wheels are bouncing off each other, then that could be chaotic in the same way as billiard balls. But at least macroscopically, there's a crapton of damping in that simulation, so I find it more likely that the chaos is microscopic. But also my intuition agrees with yours, this system doesn't seem like it should be chaotic...
Couldn't this be operationalized as empirical if a wide variety...learn and give approximately the same predictions and recommendations for action (if you want this, do this), i.e. causal predictions?
Very good question, and the answer is no. That may also be a true thing, but the hypothesis here is specifically about what structures the systems are using internally. In general, things could give exactly the same externally-visible predictions/actions while using very different internal structures.
You are correct that this is a kind of convergence claim. It's not claiming convergence in all intelligent systems, but I'm not sure exactly what the subset of intelligent systems is to which this claim applies. It has something to do with both limited computation and evolution (in a sense broad enough to include stochastic gradient descent).
One very important thing I don't know about the work on methylation sites is whether they're single-cell or averaged across cells. That matters a lot, because senescent cells should have methylation patterns radically different from everything else, but similar to each other (or at least along-the-same-axis as each other).
One thing I am pretty confident about is that methylation patterns are downstream, not upstream. Methyl group turnover time is far too fast to be a plausible root cause of aging. (In principle, there could be some special methyl groups which turn over slowly, but I would find that very surprising.)
Some key experimental findings on the mitogenesis/mitophagy stuff:
mitochondrial mutants are clonal: when cells have high counts of mutant mitochondria, the mutants in one cell usually have the same mutation.
it's usually a mutation in one particular mitochondrial gene (figure 1 in this paper is a great visual of this).
(For references, check these two papers and their background sections.) These facts imply that mitochondrial mutations aren't random - under at least some conditions, mitochondria with certain mutations are positively selected and take over the cell. Furthermore, this positive selection process accounts for essentially-all of the cells taken over by mutant mitochondria in aged organisms.
Then the big question is: do mitochondria with these mutations take over healthy cells? If yes, then the rate at which mutant-mitochondria-dominated cells appear is determined by the rate of mitochondrial mutations. However, I find it more likely that the "quality control mechanisms" of selective mitophagy/mitogenesis do not favor mutant mitochondria in healthy cells, but do favor them in senescent cells. In that case, mutant mitochondria are probably downstream of cellular senescence. I don't know of a study directly confirming/disconfirming that, but it matches the general picture. For instance, there are far more senescent cells than mutant mitochondrial cells. Also, the mitochondrial quality control mechanisms seem linked to membrane polarization, and in senescent cells the membranes of even healthy mitochondria are partially depolarized (that's part of the feedback loop discussed in the post), so partial depolarization would no longer confer as large a selective disadvantage.
Good question. I'd say: writing a paper proving your peers wrong is great fun, but requires a paper. You are expected to make a strong, detailed case, even when the work is pretty obviously flawed. You can't just ignore a bad model in a background section or have a one-sentence "X found Y, but they're blatantly p-hacking" - those moves risk a reviewer complaining. And even after writing the prove-them-wrong paper, you still can't just ignore the bad work in background sections of future papers without risking reviewers' ire.
Important point: neither of the models in this post is really "the optimizer's model of the world". M1 is an observer's model of the world (or the "God's-eye view"); the world "is being optimized" according to that model, and there isn't even necessarily "an optimizer" involved. M2 says what the world is being-optimized-toward.
To bring "an optimizer" into the picture, we'd probably want to say that there's some subsystem which "chooses"/determines θ′, in such a way that E[−logP[X|M2]|M1(θ′)]≤E[−logP[X|M2]|M1(θ)], compared to some other θ-values. We might also want to require this to work robustly, across a range of environments, although the expectation does that to some extent already. Then the interesting hypothesis is that there's probably a limit to how low such a subsystem can make the expected-description-length without making θ′ depend on other variables in the environment. To get past that limit, the subsystem needs things like "knowledge" and a "model" of its own - the basic purpose of knowledge/models for an optimizer is to make the output depend on the environment. And it's that model/knowledge which seems likely to converge on a similar shared model/encoding of the world.
There's an important point which I think this misses.
Rather than imagining the bottom level of a 2D pyramid, imagine the bottom level of a 3D pyramid. As you fill in the bottom level of that 3D pyramid, at some point you go from "it's mostly space with a few islands filled in" to "it's mostly filled in with a few islands of space". There's this phase-transition-like-phenomenon where all the concepts/knowledge go from disconnected pieces to connected whole.
For instance, in studying mechanics, this transition came for me around the time I took a differential equations class (I'd already taken some physics and programming). I went from feeling like "I can only model the dynamics of certain systems with special, tractable forms" to "I can model most systems, at least numerically, except for certain systems with special, intractable weird stuff". This was still only level 1 of the pyramid - the higher levels still provided important tools for solving mechanics problems more efficiently - but it gave me a unified framework in which everything fit together, and in which I could generally see where the holes were.
Make sure to check that the values in the jacobian aren't exploding - i.e. there's not values like 1e30 or 1e200 or anything like that. Exponentially large values in the jacobian probably mean the system is chaotic.
If you want to avoid explicitly computing the jacobian, write a method which takes in a (constant) vector u and uses backpropagation to return ∇x0(xt⋅u). This is the same as the time-0-to-time-t jacobian dotted with u, but it operates on size-n vectors rather than n-by-n jacobian matrices, so should be a lot faster. Then just wrap that method in a LinearOperator (or the equivalent in your favorite numerical library), and you'll be able to pass it directly to an SVD method.
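Here's a minimal sketch of that trick in pure numpy, with the backpropagation written out by hand for a toy dynamics xt+1=tanh(Wxt) (my example system; in practice an autodiff library would provide the vector-jacobian product, and scipy's LinearOperator + svds would replace the power iteration used here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 50, 20
W = rng.standard_normal((n, n)) / np.sqrt(n)

def rollout(x0):
    """Run the toy dynamics x_{t+1} = tanh(W x_t), saving intermediates for backprop."""
    xs = [x0]
    for _ in range(T):
        xs.append(np.tanh(W @ xs[-1]))
    return xs

def vjp(xs, u):
    """Backpropagate u through the rollout: returns J^T u, where J = dx_T / dx_0."""
    g = u.copy()
    for t in range(T, 0, -1):
        g = W.T @ ((1 - xs[t] ** 2) * g)  # chain rule through tanh(W x)
    return g

def jvp(xs, v):
    """Forward-mode counterpart: returns J v, without ever forming J."""
    for t in range(1, T + 1):
        v = (1 - xs[t] ** 2) * (W @ v)
    return v

# Power iteration on J^T J gives the top singular value, matrix-free:
# each step costs O(T n^2) instead of the O(n^3)-ish cost of a full SVD.
xs = rollout(rng.standard_normal(n))
v = rng.standard_normal(n)
for _ in range(500):
    v = vjp(xs, jvp(xs, v))
    v /= np.linalg.norm(v)
sigma_max = np.linalg.norm(jvp(xs, v))

# Sanity check against the explicit jacobian (only feasible for small n).
J = np.column_stack([jvp(xs, e) for e in np.eye(n)])
print(sigma_max, np.linalg.svd(J, compute_uv=False)[0])
```

The same vjp/jvp pair is exactly what you'd wrap in a LinearOperator (as rmatvec/matvec) to hand to an iterative SVD routine for the full set of top singular vectors.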
In terms of other uses... you could e.g. put some "sensors" and "actuators" in the simulation, then train some controller to control the simulated system, and see whether the data structures learned by the controller correspond to singular vectors of the jacobian. That could make for an interesting set of experiments, looking at different sensor/actuator setups and different controller architectures/training schemes to see which ones do/don't end up using the singular-value structure of the system.
Great comment, you're hitting a bunch of interesting points.
For a common human abstraction to be mostly recoverable as a 'natural' abstraction, it must depend mostly on the thing it is trying to abstract, and not e.g. evolutionary or cultural history, or biological implementation. ...
A few notes on this.
First, what natural abstractions we use will clearly depend at least somewhat on the specific needs of humans. A prehistoric tribe of humans living on an island near the equator will probably never encounter snow, and never use that natural abstraction.
My claim, for these cases, is that the space of natural abstractions is (approximately) discrete. Discreteness says that there is no natural abstraction "arbitrarily close" to another natural abstraction - so, if we can "point to" a particular natural abstraction in a close-enough way, then there's no ambiguity about which abstraction we're pointing to. This does not mean that all minds use all abstractions. But it means that if a mind does use a natural abstraction, then there's no ambiguity about which abstraction they're using.
One concrete consequence of this: one human can figure out what another human means by a particular word without an exponentially massive number of examples. The only way that's possible is if the space of potential-word-meanings is much smaller than e.g. the space of configurations of a mole of atoms. Natural abstractions give a natural way for that to work.
Of course, in order for that to work, both humans must already be using the relevant abstraction - e.g. if one of them has no concept of snow, then it won't work for the word "snow". But the claim is that we won't have a situation where two people have intuitive notions of snow which are arbitrarily close, yet different. (People could still give arbitrarily-close-but-different verbal definitions of snow, but definitions are not how our brain actually represents word-meanings at the intuitive level. People could also use more-or-less fine-grained abstractions, like eskimos having 17 notions of snow, but those finer-grained abstractions will still be unambiguous.)
If an otherwise unnatural abstraction is used by sufficiently influential agents, this can cause the abstraction to become 'natural', in the sense of being important to predict things 'far away'.
Yes! This can also happen even without agents: if the earth were destroyed and all that remained were one tree, much of the tree's genetic sequence would not be predictive of anything far away, and therefore not a natural abstraction. But so long as there are lots of genetically-similar trees, "tree-like DNA sequence" could be a natural abstraction.
This is also an example of a summary too large for the human brain. Key thing to notice: we can recognize that a low-dimensional summary exists, talk about it as a concept, and even reason about its properties (e.g. what could we predict from that tree-DNA-sequence-distribution, or how could we estimate the distribution), without actually computing the summary. We get an unambiguous "pointer", even if we don't actually "follow the pointer".
Another consequence of this idea that we don't need to represent the abstraction explicitly: we can learn things about abstractions. For instance, at some point people looked at wood under a microscope and learned that it's made of cells. They did not respond to this by saying "ah, this is not a tree because trees are not made of cells; I will call it a cell-tree and infer that most of the things I thought were trees were in fact cell-trees".
I think there is a connection to instrumental convergence, roughly along the lines of 'most utility functions care about the same aspects of most systems'.
Exactly right. The intuitive idea is: natural abstractions are exactly the information which is relevant to many different things in many different places. Therefore, that's exactly the information which is likely to be relevant to whatever any particular agent cares about.
Figuring out the classes of systems which learn roughly-the-same natural abstractions is one leg of this project.
My own understanding of the flat minima idea is that it's a different thing. It's not really about noise, it's about gradient descent in general being a pretty shitty optimization method, which converges very poorly to sharp minima (more precisely, minima with a high condition number). (Continuous gradient flow circumvents that, but using step sizes small enough to circumvent the problem in practice would make GD prohibitively slow. The methods we actually use are not a good approximation of continuous flow, as I understand it.) If you want flat minima, then an optimization algorithm which converges very poorly to sharp minima could actually be a good thing, so long as you combine it with some way to escape the basin of the sharp minimum (e.g. noise in SGD).
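A toy illustration of that convergence problem (my example, not from any of the papers): on a quadratic, plain gradient descent only converges with step size below 2/L, where L is the largest curvature, so one sharp direction forces a step size that makes progress in the flat directions painfully slow.

```python
import numpy as np

def gd_steps_to_converge(curvatures, lr, tol=1e-6, max_steps=100_000):
    """Gradient descent on f(x) = 0.5 * sum(c_i * x_i^2), starting from x = (1,...,1).
    Returns the number of steps until max|x_i| < tol."""
    x = np.ones_like(curvatures, dtype=float)
    for step in range(max_steps):
        if np.abs(x).max() < tol:
            return step
        x = x - lr * curvatures * x
    return max_steps

flat  = np.array([1.0, 1.0])
sharp = np.array([1.0, 1000.0])  # condition number 1000

# Step size is capped at ~2/L by the sharpest direction, so the
# high-condition-number minimum converges ~1000x slower.
print(gd_steps_to_converge(flat, lr=1.0))
print(gd_steps_to_converge(sharp, lr=1.9 / 1000))
```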
That said, I haven't read the various papers on this, so I'm at high risk of misunderstanding.
Also worth noting that there are reasons to expect convergence to flat minima besides bias in SGD itself. A flatter basin fills more of the parameter space than a sharper basin, so we're more likely to initialize in a flat basin (relevant to the NTK/GP/Mingard et al picture) or accidentally stumble into one.
I don't have any empirical evidence, but we can think about what a flat minimum with high noise would mean. It would probably mean the system is able to predict some data points very well, and other data points very poorly, and both of these are robust: we can make large changes to the parameters while still predicting the predictable data points about-as-well, and the unpredictable data points about-as-poorly. In human terms, it would be like having a paradigm in which certain phenomena are very predictable, and other phenomena look like totally-random noise without any hint that they even could be predictable.
Not sure what it would look like in the perfect-training-prediction regime, though.
The purpose of an RCT is to prove something works after we already have enough evidence to pay attention to that particular hypothesis at all. Since the vast majority of things (in an exponentially large space) do not work, most of the bits-of-evidence are needed just to "raise the hypothesis from entropy" - i.e. figure out that the hypothesis is promising enough to spend the resources on an RCT in the first place. The RCT provides only the last few bits of evidence, turning a hunch into near-certainty; most of the bits of evidence must have come from some other source already. It's exactly the same idea as Einstein's Arrogance.
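Back-of-the-envelope version of that bit-counting (my illustrative numbers): an RCT that rejects at p = 0.05 with power 0.8 delivers about log2(0.8/0.05) = 4 bits, while merely locating one promising intervention among, say, a million candidates requires ~20 bits.

```python
import math

# Likelihood ratio of a positive RCT result: P(reject | works) / P(reject | doesn't work).
power, alpha = 0.8, 0.05
rct_bits = math.log2(power / alpha)

# Bits needed just to pick one promising hypothesis out of a million.
search_bits = math.log2(1_000_000)

print(rct_bits, search_bits)  # ~4 bits vs ~19.9 bits: most evidence is pre-RCT
```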
Yeah, I wouldn't want to accelerate e.g. black-box ML. I imagine the real utility of such a fund would be to experiment with ways to accelerate intellectual progress and gain understanding of the determinants, though the grant projects themselves would likely be more object-level than that. Ideally the grants would be in areas which are not themselves very risk-relevant, but complicated/poorly-understood enough to generate generalizable insights into progress.
I think it takes some pretty specific assumptions for such a thing to increase risk significantly on net. If we don't understand the determinants of intellectual progress, then we have very little ability to direct progress where we want it; it just follows whatever the local gradient is. With more understanding, at worst it follows the same gradient faster, and we end up in basically the same spot.
The one way it could net-increase risk is if the most likely path of intellectual progress leads to doom, and the best way to prevent doom is through some channel other than intellectual progress (like political action, for instance). Then accelerating the intellectual progress part potentially gives the other mechanisms (like political bodies) less time to react. Personally, though, I think a scenario in which e.g. political action successfully prevents intellectual progress from converging to doom (in a world where it otherwise would have) is vanishingly unlikely (like, less than one-in-a-hundred, maybe even less than one-in-a-thousand).
Ah, yeah, you're right. Thanks, I was understanding the reason for convergence of SGD to a local minimum incorrectly. (Convergence depends on steadily decreasing η; that decrease is doing more work than I realized.)
I'm still wrapping my head around this myself, so this comment is quite useful.
Here's a different way to set up the model, where the phenomenon is more obvious.
Rather than Brownian motion in a continuous space, think about a random walk in a discrete space. For simplicity, let's assume it's a 1D random walk (aka birth-death process) with no explicit bias (i.e. when the system leaves state k, it's equally likely to transition to k+1 or k−1). The rate λk at which the system leaves state k serves a role analogous to the diffusion coefficient (with the analogy becoming precise in the continuum limit, I believe). Then the steady-state probabilities of state k and state k−1 satisfy
λkpk=λk−1pk−1, i.e. the flux from values-k-and-above to values-below-k is equal to the flux in the opposite direction. (Side note: we need some boundary conditions in order for the steady-state probabilities to exist in this model.) So, if λk>λk−1, then pk<pk−1: the system spends more time in lower-diffusion states (locally). Similarly, if the system's state is initially uniformly-distributed, then we see an initial flux from higher-diffusion to lower-diffusion states (again, locally).
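A quick numerical check of the discrete model (a sketch - the λk values and the blocked-edge boundary handling are my arbitrary choices): build the generator of the birth-death chain with state-dependent leave-rates λk, and verify that pk ∝ 1/λk is stationary, i.e. the chain spends more time in low-λ (low-diffusion) states.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 30
lam = rng.uniform(0.5, 2.0, size=N)  # state-dependent leave-rate (diffusion analogue)

# Continuous-time generator: from state k, attempt to move to k-1 or k+1
# at rate lam[k]/2 each; attempted moves off the lattice are simply blocked.
Q = np.zeros((N, N))
for k in range(N):
    for t in (k - 1, k + 1):
        if 0 <= t < N:
            Q[k, t] = lam[k] / 2
    Q[k, k] = -Q[k].sum()

# Candidate stationary distribution: p_k proportional to 1/lam_k.
p = (1 / lam) / (1 / lam).sum()

print(np.abs(p @ Q).max())  # ~0: p is stationary, so low-lam states get more probability
```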
Going back to the continuous case: this suggests that your source vs destination intuition is on the right track. If we set up the discrete version of the pile-of-rocks model, air molecules won't go in to the rock pile any faster than they come out, whereas hot air molecules will move into a cold region faster than cold molecules move out.
I haven't looked at the math for the diode-resistor system, but if the voltage averages to 0, doesn't that mean that it does spend more time on the lower-noise side? Because presumably it's typically further from zero on the higher-noise side. (More generally, I don't think a diffusion gradient means that a system drifts one way on average, just that it drifts one way with greater-than-even probability? Similar to how a bettor maximizing expected value with repeated independent bets ends up losing all their money with probability 1, but the expectation goes to infinity.)
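Side note on the bettor analogy: it can be checked with two lines of arithmetic (the multipliers 0.3 and 2.0 are an arbitrary example). A bet whose expected multiplier exceeds 1 but whose expected log-multiplier is negative grows in expectation while sending wealth to zero with probability 1.

```python
import math

# Each round, wealth is multiplied by 0.3 or 2.0 with equal probability.
multipliers, probs = [0.3, 2.0], [0.5, 0.5]

expected_mult = sum(p * m for p, m in zip(probs, multipliers))
expected_log = sum(p * math.log(m) for p, m in zip(probs, multipliers))

print(expected_mult)  # 1.15 > 1: expected wealth grows every round
print(expected_log)   # < 0: log-wealth drifts down, so wealth -> 0 almost surely
```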
Also, one simple way to see that the "drift" interpretation of the diffusion-induced drift term in the post is correct: set the initial distribution to uniform, and see what fluxes are induced. In that case, only the two drift terms are nonzero, and they both behave like we expect drift terms to behave - i.e. probability increases/decreases where the divergence of the drift terms is positive/negative.
You think they are more closely connected to AGI than I did, such that conditionalizing on AGI not happening means those things don't happen either? Would you then agree e.g. that in 2025 we have self-driving cars, or billion-dollar models, you'd be like "well fuck AGI is near?"
Self-driving cars would definitely update me significantly toward shorter timelines. Billion-dollar models are more a downstream thing - i.e. people spending billions on training models is more a measure of how close AGI is widely perceived to be than a measure of how close it actually is. So upon seeing billion-dollar models, I don't think I'd update much, because I'd already have updated on the things which made someone spend a billion dollars on a model (which may or may not actually be strong evidence for AGI being close).
In this world, I'd also expect that models are not a dramatic energy consumer (contra your #6), mainly because nobody wants to spend that much on them. I'd also expect chatbots to not have dramatically more usage than today (contra your #7) - it will still mostly be obvious when you're talking to a chatbot, and this will mostly be considered a low-status/low-quality substitute for talking to a human, and still only usable commercially for interactions in a very controlled environment (so e.g. no interactions where complicated or free-form data collection is needed). In other words, chatbot use-cases will generally be pretty similar to today's, though bot quality will be higher. Similar story with predictive tools - use-cases similar to today, limitations similar to today, but generally somewhat better.
Definitely, and the Nate Silver piece in particular is 8 years out of date. But these are long-term trends, and the predictions don't require much precision - COVID might shift some demographic numbers by 10% for a decade, but that's not enough to substantially change the predictions for 2040.
Sure. Here's a graph from Wikipedia with global fertility rate projections, with the global rate dropping below replacement around 2040. (Note that replacement is slightly above 2 because some people die before reproducing - Wikipedia gives 2.1 as a typical number for the replacement rate.)
For Chinese GDP, there are some decent answers on this Quora question about how soon Chinese GDP per capita will catch up to the US. (Though note that I do not think Chinese GDP per capita will catch up to the US by 2040 - just to other first-world countries, most of which have much lower GDP per capita than the US. For instance, the EU was around $36k nominal in 2019, vs $65k nominal for the US in 2019.) You can also eyeball this chart of historical Chinese GDP growth:
Extending on point 2: if we want to talk about a price drop, then we need to think about relative elasticity of supply vs demand - i.e. how sensitive is demand to price, and how sensitive is supply to price. Just thinking about the supply side is not enough: it could be that price drops a lot, but then demand just shoots up until some new supply constraint becomes binding and price goes back up.
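A toy constant-elasticity model makes the point concrete (all numbers here are made up for illustration): with demand Qd = A·p^(-e_d) and supply Qs = B·p^(e_s), the equilibrium price is p = (A/B)^(1/(e_d + e_s)), so even a 10x outward shift in supply moves the price by much less than 10x when demand is elastic.

```python
# Toy constant-elasticity supply/demand model; all parameters are
# illustrative assumptions, not numbers from the discussion.
# Demand: Qd = A * p**(-e_d);  Supply: Qs = B * p**(e_s)
# Equilibrium (Qd = Qs):  p = (A / B) ** (1 / (e_d + e_s))
A, B = 100.0, 100.0
e_d, e_s = 2.0, 1.0        # demand elasticity 2, supply elasticity 1

def eq_price(A, B, e_d, e_s):
    return (A / B) ** (1.0 / (e_d + e_s))

p0 = eq_price(A, B, e_d, e_s)          # initial equilibrium price: 1.0
p1 = eq_price(A, 10 * B, e_d, e_s)     # supply curve shifts out 10x
print(p0, p1)                          # price falls to ~0.46, not by 10x
print(A * p1 ** (-e_d))                # quantity demanded: ~464 vs. 100 before
```

The demand response absorbs most of the supply increase: a 10x supply shift yields roughly a 2x price drop here, which is the "demand shoots up until some new constraint binds" dynamic in miniature.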
(Also, I would be surprised if supercomputers and AI are actually the energy consumers which matter most for pricing. Air conditioning in South America, Africa, India, and Indonesia seems likely to be a much bigger factor, just off the top of my head, and there's probably other really big use-cases that I'm not thinking of right at the moment.)
+1 to this, though I think a slightly modified version of jacopo's argument is stronger: new constraints are likely to become binding in general when cost of current constraints drops by a factor of 10, though it's not always obvious which constraints will be relevant.
Anti-aging will be in the pipeline, if not necessarily on the market yet. The main root causes of most of the core age-related diseases will be basically understood, and interventions which basically work will have been studied in the lab.
Fertility will be below replacement rate globally, and increasingly far below replacement in first-world countries (most of which are already below replacement today). Life expectancy will still be increasing, so the population will still be growing overall (even assuming anti-aging is slow), but slowly and decelerating.
Conditional on anti-aging not already seeing large-scale adoption, the population will have a much higher share of elderly dependents and a lower share of working-age people to support them, pretty much everywhere. This problem already dominates the budgets of first-world governments today: it means large-and-increasing shares of GDP going to retirement/social security and healthcare for old folks (who already consume the large majority of healthcare).
Conditional on anti-aging not already seeing large-scale adoption, taxes will probably go up in most first-world countries. There just isn't enough spending to cut anywhere else to keep up with growing social security/healthcare obligations, and dramatically reducing those obligations won't be politically viable with old people only becoming more politically dominant in elections over time. (In theory, dramatically opening up immigration could provide another path, but I wouldn't call that the most likely outcome.)
China's per-capita GDP will catch up to current first-world standards, at which point they will not be able to keep up the growth rate of recent decades. That will probably result in some kind of political instability, since the CCP's popularity is heavily dependent on growth, and also because a richer population is a more powerful population which is just generally harder to control without its assent.
The predictions about AI-adjacent things seem weird when we condition on AGI not taking off by 2040. Conditional on that, it seems like the most likely world is one where the current scaling trends play out on the current problems, but current methods turned out to not generalize very well to most real-world problems (especially problems without readily-available giant data sets, or problems in non-controlled environments). In other words, this turns out pretty similar to previous AI/ML booms: a new class of problems is solved, but that class is limited, and we go into another AI winter afterwards.
In that world, I'd expect deep learning to be used commercially for things which we're already close to: procedural generation of graphics for games and maybe some movies, auto-generation of low-quality written works (for use-cases which don't involve readers paying close attention) or derivative works (like translations or summaries), that sort of thing. In most cases, it probably won't be end-to-end ML, just tools for particular steps. Prompt programming mostly turns out to be a dead end, other than a handful of narrow use-cases. Automated cars will probably still be right-around-the-corner, with companies producing cool demos regularly but nobody really able to handle the long tail. People will stop spending large amounts on large models and datasets, though models will still grow slowly as compute & data get cheaper.
I'm still some combination of confused and unconvinced about optimization-under-uncertainty. Some points:
It feels like "optimization under uncertainty" is not quite the right name for the thing you're trying to point to with that phrase, and I think your explanations would make more sense if we had a better name for it.
The examples of optimization-under-uncertainty from your other comment do not really seem to be about uncertainty per se, at least not in the usual sense, whereas the Dr Nefarious example and malignness of the universal prior do.
Your examples in the other comment do feel closely related to your ideas on learning normativity, whereas inner agency problems do not feel particularly related to that (or at least not any more so than anything else is related to normativity).
It does seem like there's an important sense in which inner agency problems are about uncertainty, in a way which could potentially be factored out, but that seems less true of the examples in your other comment. (Or to the extent that it is true of those examples, it seems true in a different way than the inner agency examples.)
The pointers problem feels more tightly entangled with your optimization-under-uncertainty examples than with inner agency examples.
... so I guess my main gut-feel at this point is that it does seem very plausible that uncertainty-handling (and inner agency with it) could be factored out of goal-specification (including pointers), but this particular idea of optimization-under-uncertainty seems like it's capturing something different. (Though that's based on just a handful of examples, so the idea in your head is probably quite different from what I've interpolated from those examples.)
On a side note, it feels weird to be the one saying "we can't separate uncertainty-handling from goals" and you saying "ok but it seems like goals and uncertainty could somehow be factored". Usually I expect you to be the one saying uncertainty can't be separated from goals, and me to say the opposite.