Seeking feedback on this AI Safety proposal:
(I don't have experience in AI experimentation)
I'm interested in the question: "How can we use smart AIs to help humans with strategic reasoning?"
We don't want the solution to be, "AIs just tell humans exactly what to do without explaining themselves." We'd prefer situations where smart AIs can explain to humans how to think about strategy, and this information makes humans much better at doing strategy.
One proposal to make progress on this is to set a benchmark for having smart AIs help out dumb AIs by providing them with strategic information.
Or more specifically, we find methods of having GPT4 give human-understandable prompts to GPT2 that would allow GPT2 to do as well as possible on specific games like chess.
Some improvements/changes would include:
- Try to expand the games to include simulations of high-level human problems. Like simplified versions of Civilization.
- We could also replace GPT2 with a different LLM that better represents how a human with some amount of specialized knowledge (for example, being strong at probability) would reason.
- There could be a strong penalty for prompts that aren't human-understandable.
- Use actual humans in some experiments. See how much they improve at specific [chess | civilization] moves, with specific help text.
- Instead of using GPT2, you could likely just use GPT4. My impression is that GPT4 is a fair bit worse than the SOTA chess engines. So you could use some amplified GPT4 procedure to figure out the best human-understandable chess prompts, then give them to GPT4s without the amplification.
- You set certain information limits. For example, you see how good of a job an LLM could do with "100 bits" of strategic information.
A solution would likely involve search processes where GPT4 experiments with a large space of potential English prompts, and tests them over the space of potential chess moves. I assume that reinforcement learning could be done here, but perhaps some LLM-heavy mechanism could work better. I'd assume that good strategies would be things like, "In cluster of situations X, you need to focus on optimizing Y." So the "smart agent" would need to be able to make clusters of different situations, and solve for a narrow prompt for many of them.
It's possible that the best "strategies" would be things like long decision-trees. One of the key things to learn about is what sorts/representations of information wind up being the densest and most useful.
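To make the search loop concrete, here's a minimal Python sketch of one way a round of the benchmark could work. All the function names (strong_model_propose, weak_model_move, engine_score) are hypothetical placeholders I'm making up, not real APIs, and the greedy search is just the simplest thing that fits the description above:

```python
import random

def evaluate_prompt(prompt, positions, weak_model_move, engine_score):
    """Average engine score of the weak model's moves when given this prompt."""
    return sum(engine_score(pos, weak_model_move(pos, prompt)) for pos in positions) / len(positions)

def search_prompts(strong_model_propose, weak_model_move, engine_score, positions,
                   n_rounds=10, n_candidates=20, max_bits=800):
    """Greedy search over human-readable strategy prompts.

    max_bits caps the information budget (roughly, prompt length in bits),
    in the spirit of the "100 bits of strategic information" idea.
    """
    best_prompt, best_score = "", float("-inf")
    for _ in range(n_rounds):
        # The strong model proposes candidate prompts, seeded on the best so far.
        for prompt in strong_model_propose(seed=best_prompt, n=n_candidates):
            if len(prompt.encode()) * 8 > max_bits:
                continue  # over the information budget
            sample = random.sample(positions, min(50, len(positions)))
            score = evaluate_prompt(prompt, sample, weak_model_move, engine_score)
            if score > best_score:
                best_prompt, best_score = prompt, score
    return best_prompt, best_score
```

RL or a fancier LLM-driven search could replace the greedy loop; the key pieces stay the same: a proposer, a constrained human-readable prompt, and an evaluation over many positions.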
Zooming out, if we had AIs that we knew could give AIs and humans strong and robust strategic advice in test cases, I imagine we could use some of this for real-life cases - perhaps most importantly, to strategize about AI safety.
I was pretty surprised to see so little engagement / net-0 upvotes (besides mine). Any feedback is appreciated, I'm curious how to do better going forward.
I spent a while on this and think there's a fair bit here that would be useful to some community members, though perhaps the jargon or some key aspects put people off.
I'd flag that I think it's very possible TSMC will be very much hurt/destroyed if China is in control. There's been a bit of discussion of this.
I'd suspect China might fix this after some years, but would expect it would be tough for a while.
https://news.ycombinator.com/item?id=40426843
Quick point - a "benefit corporation" seems almost identical to a "corporation" to me, from what I understand. I think many people assume it's a much bigger deal than it actually is.
My impression is that practically speaking, this just gives the execs more power to do whatever they feel they can sort of justify, without shareholders being able to have the legal claims to stop them. I'm not sure if this is a good thing in the case of OpenAI. (Would we prefer Sam A / the board have more power, or that the shareholders have more power?)
I think B-Corps make it harder for them to get sued for not optimizing for shareholders. Hypothetically, it makes it easier for them to be sued for not optimizing their other goals, but I'm not sure if this ever/frequently actually happens.
Really wishing the new Agent Foundations team the best. (MIRI too, but its position seems more secure.)
Naively, I feel pretty good about this potential split. If MIRI is doing much more advocacy work, that work just seems very different from Agent Foundations research.
This could allow MIRI to be more controversial and risk-taking without tying things to the Agent Foundations research, and that research could hypothetically more easily get funding from groups that otherwise disagree with MIRI's political views.
I hope that team finds good operations support or a different nonprofit sponsor of some kind.
Thinking about this more, it seems like there are some key background assumptions that I'm missing.
Some assumptions that I often hear presented on this topic are things like:
1. "A misaligned AI will explicitly try to give us hard-to-find vulnerabilities, so verifying arbitrary statements from these AIs will be incredibly hard."
2. "We need to generally have incredibly high assurances to build powerful systems that don't kill us".
My obvious counter-arguments would be:
1. Sure, but smart agents would have a reasonable prior that other agents might be misaligned, and they would give those agents tasks that are particularly easy to verify. Any action actually taken by a smart overseer, using information provided by another agent with a known chance M of being misaligned, should be EV-positive. With some creativity, there are likely many ways of structuring things (using systems unlikely to be misaligned, using more verifiable questions) so that many resulting actions are heavily EV-positive. (A toy version of this condition is sketched below.)
2. "Again, my argument in (1). Second, we can build these systems gradually, and with a lot of help from people/AIs that won't require such high assurances." (This is similar to the HCH / oversight arguments)
First, I want to flag that I really appreciate how you're making these deltas clear and (fairly) simple.
I like this, though I feel like there's probably a great deal more clarity/precision to be had here (as is often the case).
> Under my models, if I pick one of these objects at random and do a deep dive researching that object, it will usually turn out to be bad in ways which were either nonobvious or nonsalient to me, but unambiguously make my life worse and would unambiguously have been worth-to-me the cost to make better.
I'm not sure what "bad" means exactly. Do you basically mean, "if I were to spend resources R evaluating this object, I could identify some ways for it to be significantly improved?" If so, I assume we'd all agree that this is true for some amount R, the key question is what that amount is.
I also would flag that you draw attention to the issue with air conditioners. But for personal items, I'd argue that when I learn more about popular items, most of what I learn is positive things I didn't realize. Like with Chesterton's fence - when I get many well-reviewed or popular items, my impression is generally that there were many clever ideas or truths behind those items that I don't at all have time to understand, let alone invent myself. A related example is cultural knowledge, a la The Secret of Our Success.
When I work on software problems, my first few attempts don't go well, for reasons I didn't predict. The very fact that "it works in tests, and it didn't require doing anything crazy" is a significant update.
Sure, with enough resources R, one could very likely make significant improvements to any item in question - but as a purchaser, I only have resources r << R to make my decisions. My goal is to buy items that make my life better; it's fine that there are other potential gains to be had at huge values of R.
> “verification is easier than generation”
I feel like this isn't very well formalized. I think I agree with this comment on that post. I feel like you're saying, "It's easier to generate a simple thing than verify all possible things", but Paul and co are saying more like, "It's easier to verify/evaluate a thing of complexity C than generate a thing of complexity C, in many important conditions", or, "There are ways of delegating many tasks where the evaluation work required would be less than that of doing the work yourself, in order to get a result of a certain level of quality."
I think that Paul's take (as I understand it) points at a fundamental aspect of how the human world works. Humans generally get huge returns from not reinventing the wheel all the time, and from deferring to others a great deal. This is much of what makes civilization possible. It's not perfect, but it's much better than what individual humans could do by themselves.
> Under my current models, I expect that, shortly after AIs are able to autonomously develop, analyze and code numerical algorithms better than humans, there’s going to be some pretty big (like, multiple OOMs) progress in AI algorithmic efficiency (even ignoring a likely shift in ML/AI paradigm once AIs start doing the AI research)
I appreciate the precise prediction, but don't see how it exactly follows. This seems more like a question of "how much better will early AIs be compared to current humans" than one deeply about verification/generation. Also, I'd flag that in many worlds, I'd expect that pre-AGI AIs could do a lot of this code improvement - or they already have - so it's not clear how much work the "autonomously" is doing here.
---
I feel like there are probably several wins to be had by formalizing these concepts better. They seem fairly cruxy/high-delta in the debates on this topic.
I would naively approach some of this with some simple expected value/accuracy lens. There are many assistants (including AIs) that I'd expect would improve the expected accuracy on key decisions, like knowing which AI systems to trust. In theory, it's possible to show a bunch of situations where delegation would be EV-positive.
That said, a separate observer could of course claim that one using the process above would be so wrong as to be committing self-harm. Like, "I think that when you would try to use delegation, your estimates of impact are predictably wrong in ways that would lead to you losing." But this seems like mainly a question about "are humans going to be predictably overconfident in a certain domain, as seen by other specific humans".
My hunch is that this is arguably bad insofar that it helps out OpenAI / SOTA LLMs, but otherwise a positive thing?
I think we want to see people start deploying weak AIs on massive scales, for many different sorts of tasks. The sooner we do this, the sooner we get a better idea of what real problems will emerge, and the sooner engineers will work on figuring out ways of fixing those problems.
On-device AIs generally seem safer than server LLMs, mainly because they're far less powerful. I think we want a world where we can really maximize the value we get from small, secure AI.
If this does explode in a thousand ways, I assume it would be shut down soon enough. I assume Apple will roll out some of these features gradually and carefully. I'll predict that damages caused by AI failures with this won't be catastrophic. (Let's say, < ~$30B in value, over 2 years).
Out of the big tech companies (FAANG), I think I might trust Apple the most to do a good job on this.
And, while the deal does bring attention to ChatGPT, it comes across to me as a temporary and limited thing, rather than a deep integration. I wouldn't expect this to dramatically boost OpenAI's market cap. The future of Apple / server LLM integration still seems very unclear.
Thanks for that explanation.
(E.g., if the hypothesis is true, I can imagine that "do a lot of RLHF, and then ramp up the AI's intelligence" could just work. Similarly for "just train the AI to not be deceptive".)
Thanks, this makes sense to me.
Yea, I guess I'm unsure about that '[Inference step missing here.]'. My guess is that such a system would be able to recognize situations where things that score highly with respect to its ontology would score lowly, or would be likely to score lowly, using a human ontology. Like, it would be able to simulate a human deliberating on this for a very long time and coming to some conclusion.
I imagine that the cases where this would be scary are some narrow ones (though perhaps likely ones) where the system is both dramatically intelligent in specific ways, but incredibly inept in others. This ineptness isn't severe enough to stop it from taking over the world, but it is enough to stop it from being at all able to maximize goals - and it also doesn't take basic risk measures like "just keep a bunch of humans around and chat to them a whole lot, when curious", or "try to first make a better AI that doesn't have these failures, before doing huge unilateralist actions" for some reason.
It's very hard for me to imagine such an agent, but that doesn't mean it's not possible, or perhaps likely.
Why do you assume that we need to demand this be done in "the math that defines the system"?
I would assume we could have a discussion with this higher-ontology being to find a happy specification, using our ontologies, that it can tell us we'll like, also using our ontologies.
A 5-year-old might not understand an adult's specific definition of "heavy", but it's not too hard for it to ask for a heavy thing.
So when the AI "understands humans perfectly well", that means something like: The AI can visualise the flawed (ie, high prediction error) model that we use to think about the world. And it does this accurately. But it also sees how the model is completely wrong, and how the things, that we say we want, only make sense in that model that has very little to do with the actual world.
This sounds a lot like a good/preferable thing to me. I would assume that we'd generally want AIs with ideal / superior ontologies.
It's not clear to me why you'd think such a scenario would make us less optimistic about single-agent alignment. (If I'm understanding correctly)
(feel free to stop replying at any point, sorry if this is annoying)
> Like, you ask the AI "optimize this company, in a way that we would accept, after a ton of deliberation", and it has a very-different-off-distribution notion than you about what constitutes the "company", and counts as you "accepting", and what it's even optimizing the company for.
I'd assume that when we tell it, "optimize this company, in a way that we would accept, after a ton of deliberation", this could be instead described as, "optimize this company, in a way that we would accept, after a ton of deliberation, where these terms are described using our ontology"
It seems like the AI can trivially figure out what humans would regard as the "company" or "accepting". Like, it could generate any question like, "Would X qualify as the 'company', if asked to a human?", and get an accurate response.
I agree that we would have a tough time understanding its goal / specifications, but I expect that it would be capable of answering questions about its goal in our ontology.
If it's able to function as well as it would if it understood our ontology, if not better, then why does it matter that it doesn't use our ontology?
I assume a system you're describing could still be used by humans to do (basically) all of the important things. Like, we could ask it "optimize this company, in a way that we would accept, after a ton of deliberation", and it could produce a satisfying response.
> But why would it? What objective, either as the agent's internal goal or as an outer optimization signal, would incentivize the agent to bother using a human ontology at all, when it could instead use the predictively-superior quantum simulator?
I mean, if it can always act just as well as if it understood human ontologies, then I don't see the benefit of it "technically understanding human ontologies". This seems like it's tending toward a semantic argument or something.
If an agent can trivially act as if it understands Ontology X, where/why does it actually matter that it doesn't technically "understand" ontology X?
I assume that the argument that "this distinction matters a lot" would functionally play out in there being some concrete things that it can't do.
Thanks! I wasn't expecting that answer.
I think that raises more questions than it answers, naturally. ("Okay, can an agent so capable that they can easily make a quantum-simulation to answer tests, really not find some way of effectively understanding human ontologies for decision-making?"), but it seems like this is more for Eliezer, and also, that might be part of a longer post.
Thanks for that, but I'm left just as confused.
I assume that this AI agent would be able to have conversations with humans about our ontologies. I strongly assume it would need to be able to do the work of "thinking through our eyes/ontologies" to do this.
I'd imagine the situation would be something like,
1. The agent uses quantum-simulations almost all of the time.
2. In the case it needs to answer human questions, like answer AP Physics problems, it easily understands how to make these human-used models/ontologies in order to do so.
Similar to how graduate physicists can still do mechanics questions without considering special relativity or quantum effects, if asked.
So I'd assume that the agent/AI could do the work of translation - we wouldn't need to.
I guess, here are some claims:
1) Humans would have trouble policing a being way smarter than us.
2) Humans would have trouble understanding AIs with much more complex ontologies.
3) AIs with more complex ontologies would have trouble understanding humans.
#3 seems the most suspect to me, though 1 and 2 also seem questionable.
I'm trying to understand this debate, and probably failing.
> human concepts cannot be faithfully and robustly translated into the system’s internal ontology at all.
I assume we all agree that the system can understand the human ontology, though? This is at least necessary for communicating and reasoning about humans, which LLMs can clearly already do to some extent.
There's a lot of work around mapping ontologies, and this is known to be difficult, but very possible - especially for a superhuman intelligence.
So, I fail to see what exactly the problem is. If this smarter system can understand and reason about human ways of thinking about the world, I assume it could optimize for these ways if it wanted to. I assume the main question is if it wants to - but I fail to understand how this is an issue of ontology.
If a system really couldn't reason about human ontologies, then I don't see how it would understand the human world at all.
I'd appreciate any posts that clarify this question.
Minor point, but I think we might have some time here. Securing model weights becomes more important as models become better, but better models could also help us secure model weights (would help us code, etc).
Minor flag, but I've thought about some similar ideas, and here's one summary:
https://forum.effectivealtruism.org/posts/YpaQcARgLHFNBgyGa/prioritization-research-for-advancing-wisdom-and
Personally, I'd guess that we could see a lot of improvement by clever uses of safe AIs. Even if we stopped improving on LLMs today, I think we have a long way to go to make good use of current systems.
Just because there are potentially risky AIs down the road doesn't mean we should ignore the productive use of safe AIs.
+1 here, I've found this to be a major pain, and just didn't do it in my last one with Eli Lifland.
That could be.
My recollection from Zuckerberg was that he was thinking of transformative AI, at least, as a fairly far-away goal, more like 8 to 20 years++ (and I'd assume "transformative AI" would be further), and that overall, he just hasn't talked much about it.
I wasn't thinking of all of Yann LeCun's statements, in part because he makes radical/nonsensible-to-me statements all over the place (which makes me assume he's not representing the whole department). It's not clear to me how much most of his views represent Meta, though I realize he is technically in charge of AI there.
Good point, fixing!
> an estimated utility function is a practical abstraction that obscures the lower-level machinery/implementational details
I agree that this is what's happening. I probably have different intuitions regarding how big of a problem it is.
The main questions here might be something like:
- Is there any more information about the underlying system, besides its estimated utility function, that is useful for decision-making?
- If there is, can we calibrate for that error when trying to approximate things with the utility function? If we just use the utility function, will we be over-confident, or just extra (and reasonably) cautious?
- In situations where we don't have models of the underlying system, can utility function estimates be better than alternatives we might have?
My quick expected answers to this:
- I think for many things, utility functions are fine. I think these are far more precise and accurate than other existing approaches we have today (like people intuitively guessing what's good for others).
- I think if we do a decent job, we can just add extra uncertainty/caution to the system; a rough sketch of what I mean is below. I'd generally trust future actors here not to be obviously stupid in ways we could expect.
- As I stated before, I don't think we have better tools yet. I'm happy to see more research into understanding the underlying systems, but in the meantime, utility functions seem about as precise and information-rich as anything else we have.
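Here's a minimal sketch of what I mean by "add extra uncertainty/caution", under my own arbitrary assumptions (the interval representation and the caution factor of 2 are just illustrative choices, not a claim about how this should actually be done):

```python
# Treat each estimated utility as (point estimate, estimate error) instead of a
# bare number, and decide using a pessimistic lower bound, so shakier estimates
# are directly penalized.

def cautious_choice(options, caution=2.0):
    """options: dict of name -> (estimated_utility, estimate_error)."""
    return max(options, key=lambda name: options[name][0] - caution * options[name][1])

print(cautious_choice({
    "plan_a": (10.0, 1.0),  # solid estimate
    "plan_b": (12.0, 6.0),  # higher point estimate, but much noisier
}))  # -> "plan_a"
```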
> is that different "deliberation/idealization procedures" may produce very different results and never converge in the limit.
Agreed. This is a pretty large topic, I was trying to keep this essay limited. My main recommendation here was to highlight the importance of deliberation and potential deliberation levels, in part to better discuss issues like these.
Do you have a preferred distinction of value functions vs. utility functions that you like, and ideally can reference? I'm doing some investigation now, and it seems like the main difference is the context in which they are often used.
My impression is that LessWrong typically uses the term "utility function" to mean a more specific thing than economists do, i.e., the examples of utility functions in economics textbooks. https://brilliant.org/wiki/utility-functions/ has examples.
They sometimes describe simple relationships like these as a "utility function".
I'm curious why this got the disagreement votes.
1. People don't think Holden doing that is significant prioritization?
2. There aren't several people at OP trying to broadly figure out what to do about AI?
3. There's some other strategy OP is following?
Also, I should have flagged that Holden is now the "Director of AI Strategy" there. This seems like a significant prioritization.
It seems like there are several people at OP trying to figure out what to broadly do about AI, but only one person (Ajeya) doing AIS grantmaking? I assume they've made some decision, like, "It's fairly obvious what organizations we should fund right now, our main question is figuring out the big picture."
Ajeya Cotra is currently the only evaluator for technical AIS grants.
This situation seems really bizarre to me. I know they have multiple researchers in-house investigating these issues, like Joseph Carlsmith. I'm really curious what's going on here.
I know they've previously had (what seemed to me) like talented people join and leave that team. The fact that it's so small now, given the complexity and importance of the topic, is something I have trouble grappling with.
My guess is that there are some key reasons for this that aren't obvious externally.
I'd assume that it's really important for this team to become really strong, but would obviously flag that when things are that strange, it's likely difficult to fix, unless you really understand why the situation is the way it is now. I'd also encourage people to try to help here, but I just want to flag that it might be more difficult than it might initially seem.
Thanks for clarifying! That really wasn't clear to me from the message alone.
> Though if you used Squiggle to perform an existential risk-reward analysis of whether to use Squiggle, who knows what would happen
Yep, that's in the works, especially if we can have basic relative value forecasts later on.
If you think the net costs of using ML techniques to improve our rationalist/EA tools aren't worth it, then there's an argument to be had there.
Many Guesstimate models are now about making estimates about AI safety.
I'm really not a fan of the "Our community must not use ML capabilities in any form" position; I'm not sure where others here might draw the line.
I assume that in situations like this, it could make sense for communities to have some devices for people to try out.
Given that some people didn't return theirs, I imagine potential purchasers could buy used ones.
Personally, I like the idea of renting one for 1-2 months, if that were an option. If there's a 5% chance it's really useful, renting it could be a good cost proposition. (I realize I could return it, but feel hesitant to buy one if I think there's a 95% chance I would return it.)
Happy to see experimentation here. Some quick thoughts:
- The "Column" looked a lot to me like a garbage can at first. I like the "+" in Slack for this purpose, that could be good.
- Checkmark makes me think "agree", not "verified". Maybe a badge or something?
- "Support" and "Agreement" seem very similar to me?
- While it's a different theme, I'm in favor of using popular icons where possible. My guess is that these will make it more accessible. I like the eyes you use, in part because they're close to the standard emoji. I also like:
- 🚀 or 🎉 -> This is a big accomplishment.
- 🙏 -> Thanks for doing this.
- 😮 -> This is surprising / interesting.
- It could be kind of neat to later celebrate great rationalist things by having custom icons for them, to represent when a post reminds people of their work in some way.
- I like that it shows who reacted with what; that matters a lot to me.
I liked this a lot, thanks for sharing.
Here's one disagreement/uncertainty I have on some of it:
Both of the "What failure looks like" posts (yours and Pauls) posts present failures that essentially seem like coordination, intelligence, and oversight failures. I think it's very possible (maybe 30-46%+?) that pre-TAI AI systems will effectively solve the required coordination and intelligence issues.
For example, I could easily imagine worlds where AI-enhanced epistemic environment make low-risk solutions crystal clear to key decision-makers.
In general, the combination of AI plus epistemics, pre-TAI, seems very high-variance to me. It could go very positively, or very poorly.
This consideration isn't enough to bring my p(doom) under 10%, but I'd probably be closer to 50% than you would be. (Right now, maybe 40% or so.)
That said, this really isn't a big difference, it's less than one order of magnitude.
Quick update:
Immersed now supports a BETA for "USB Mode". I just tried it with one cable, and it worked really well, until it cut out a few minutes in. I'm getting a different USB-C cable that they recommend. In general I'm optimistic.
(That said, there are of course better headsets/setups that are coming out, too)
https://immersed.zendesk.com/hc/en-us/articles/14823473330957-USB-C-Mode-BETA-
Happy to see discussion like this. I've previously written a small bit defending AI friends on Facebook. There were some related comments there.
I think my main takeaway is "AI friends/romantic partners" are some seriously powerful shit. I expect we'll see some really positive uses and also some really detrimental ones. I'd naively assume that, like with other innovations, some communities/groups will be much better at dealing with them than others.
Related, research to help encourage the positive sides seems pretty interesting to me.
Maybe we can refer to these systems as cybernetic or cyborg rubber ducking? :)
Yea; that's not a feature that exists yet.
Thanks for the feedback!
Dang, this looks awesome. Nice work!
Not yet. There are a few different ways of specifying the distribution, but we don't yet have options for doing so from the 25th & 75th percentiles. It would be nice to add eventually. (Might be very doable to add in a PR, for a fairly motivated person.)
https://www.squiggle-language.com/docs/Api/Dist#normal
You can type in `normal({p5: 10, p95:30})`. It should later be possible to say `normal({p25: 10, p75:30})`.
Separately, when you say "25, 50, 75 percentiles", do you mean all at once? That would be an overspecification; you only need two points. Also, would you want this to work for normal/lognormal distributions, or anything else?
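For reference, specifying a distribution from two percentiles is just solving for its parameters from two quantile constraints. Here's a rough Python/scipy sketch of the normal case (my own illustration of the general math, not Squiggle's actual implementation):

```python
from scipy.stats import norm

def normal_from_percentiles(x_lo, x_hi, p_lo=0.05, p_hi=0.95):
    """Solve for (mu, sigma) of a normal hitting two given percentiles."""
    z_lo, z_hi = norm.ppf(p_lo), norm.ppf(p_hi)
    sigma = (x_hi - x_lo) / (z_hi - z_lo)
    mu = x_lo - sigma * z_lo
    return mu, sigma

print(normal_from_percentiles(10, 30))              # p5=10, p95=30   -> (20.0, ~6.08)
print(normal_from_percentiles(10, 30, 0.25, 0.75))  # p25=10, p75=30  -> (20.0, ~14.83)
```

The lognormal case is the same calculation applied to log(x).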
Mostly. The core math bits of Guesstimate were a fairly thin layer on Math.js. Squiggle has replaced much of the MathJS reliance with custom code (custom interpreter + parser, extra distribution functionality).
If things go well, I think it would make sense to later bring Squiggle in as the main language for Guesstimate models. This would be a breaking change, and quite a bit of work, but would make Guesstimate much more powerful.
Really nice to see this. I broadly agree. I've been concerned with boards for a while.
I think that "mediocre boards" are one of the greatest weaknesses of EA right now. We have tons of small organizations, and I suspect that most of these have mediocre or fairly ineffective boards. This is one of the main reasons I don't like the pattern of us making lots of tiny orgs; because we have to set up yet one more board for each one, and good board members are in short supply.
I'd like to see more thinking here. Maybe we could really come up with alternative structures.
For example, I've been thinking of something like "good defaults" as a rule of thumb for orgs that get a lot of EA funding.
- They choose an effective majority of board members from a special pool of people who have special training and are well trusted by key EA funders.
- There's a "board service" organization that's paid to manage the processes of boards. This service would arrange meetings, make sure that a bunch of standards are getting fulfilled, and would have the infrastructure in place to recruit new EDs when needed. These services can be paid by the organization.
Basically, I'd want to see us treat small nonprofits as sub-units of a smoothly-working bureaucracy or departments in a company. This would involve a lot of standardization and control. Obviously this could backfire a lot if the controlling groups ever do a bad job; but (1) if the funders go bad, things might be lost anyway, and (2), I think the expected harm of this could well be less than the expected benefit.
For what it's worth, I think I prefer the phrase,
"Failing with style"
Minor point:
I suggest people experiment with holiday ideas and report back, before we announce anything "official". Experimentation seems really valuable on this topic; that seems like the first step.
In theory we could have a list of holiday ideas, and people randomly choose a few of them, try them out, then report back.
Interesting. Thanks!
The more sophisticated system is Squiggle. It's basically a prototype. I haven't updated it since the posts I made about it last year.
https://www.lesswrong.com/posts/i5BWqSzuLbpTSoTc4/squiggle-an-overview
Update:
I think some of the graphs could be better represented with upfront fixed costs.
When you buy a book, you pay for it via your time to read it, but you also have the fixed initial fee of the book.
This fee isn't that big of a deal for most books that you have a >20% chance of reading, but it definitely is for academic articles or similar.
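As a toy illustration of the fixed-cost point (all the numbers here are made up for the example):

```python
# Total expected cost of buying something to read = upfront fee + expected
# reading time, weighted by how likely you are to actually read it.

def expected_cost(price, p_read, hours_to_read, value_per_hour):
    return price + p_read * hours_to_read * value_per_hour

book    = expected_cost(price=20, p_read=0.3,  hours_to_read=8, value_per_hour=30)  # 92.0
article = expected_cost(price=40, p_read=0.05, hours_to_read=1, value_per_hour=30)  # 41.5
# The book's upfront fee is a modest share of its total expected cost; the
# article's fee dominates, because the article is unlikely to be read at all.
```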
(Also want to say I've been reading them all and am very thankful)
I enjoyed writing this post, but think it was one of my lesser posts. It's pretty ranty and doesn't bring much real factual evidence. I think people liked it because it was very straightforward, but I personally think it was a bit over-rated (compared to other posts of mine, and many posts of others).
I think it fills a niche (quick takes have their place), and some of the discussion was good.
Good point! I feel like I have to squint a bit to see it, but that's how exponentials sometimes look early on.
To be clear, I care about clean energy. However, if energy production can be done without net-costly negative externalities, then it seems quite great.
I found Matthew Yglesias's take, and Jason's writings, interesting.
https://www.slowboring.com/p/energy-abundance
All that said, if energy on the net leads to AGI doom, that could be enough to offset any gain, but my guess is that clean energy growth is still a net positive.
> but I think this is actually a decline in coal usage.
Ah, my bad, thanks!
> They estimate ~35% increase over the next 30 years
That's pretty interesting. I'm somewhat sorry to see it's linear (I would have hoped solar/battery tech would improve more, leading to much faster scaling, 10-30 years out), but it's at least better than some alternatives.