You Better Mechanize
post by Zvi · 2025-04-22T13:10:08.921Z · LW · GW
Or you had better not. The question is which one.

This post covers the announcement of Mechanize, the skeptical response from those worried AI might kill everyone, and the associated (to me highly frustrating at times) Dwarkesh Patel podcast with founders Tamay Besiroglu and Ege Erdil.

Mechanize plans to help advance the automation of labor by AI, which is a pivot from their previous work at the AI safety organization Epoch AI. Many were not thrilled by this change of plans.

This post doesn’t cover Dwarkesh Patel’s excellent recent post asking questions about AI’s future, which may get its own post as well.
Table of Contents
- You Better Mechanize.
- Superintelligence Eventually.
- Please Review This Podcast.
- They Won’t Take Our Jobs Yet.
- They Took Our (Travel Agent) Jobs.
- The Case Against Intelligence.
- Intelligence Explosion.
- Explosive Economic Growth.
- Wowie on Alignment and the Future.
- But That’s Good Actually.
You Better Mechanize
To mechanize or not to mechanize?

Mechanize (Matthew Barnett, Tamay Besiroglu, Ege Erdil): Today we’re announcing Mechanize, a startup focused on developing virtual work environments, benchmarks, and training data that will enable the full automation of the economy.

We will achieve this by creating simulated environments and evaluations that capture the full scope of what people do at their jobs. This includes using a computer, completing long-horizon tasks that lack clear criteria for success, coordinating with others, and reprioritizing in the face of obstacles and interruptions.

We’re betting that the lion’s share of value from AI will come from automating ordinary labor tasks rather than from “geniuses in a data center”.

Currently, AI models have serious shortcomings that render most of this enormous value out of reach. They are unreliable, lack robust long-context capabilities, struggle with agency and multimodality, and can’t execute long-term plans without going off the rails.

To overcome these limitations, Mechanize will produce the data and evals necessary for comprehensively automating work. Our digital environments will act as practical simulations of real-world work scenarios, enabling agents to learn useful abilities through RL.

The market potential here is absurdly large: workers in the US are paid around $18 trillion per year in aggregate. For the entire world, the number is over three times greater, around $60 trillion per year.

The explosive economic growth likely to result from completely automating labor could generate vast abundance, much higher standards of living, and new goods and services that we can’t even imagine today. Our vision is to realize this potential as soon as possible.

Mechanize is backed by investments from Nat Friedman and Daniel Gross, Patrick Collison, Dwarkesh Patel, Jeff Dean, Sholto Douglas, and Marcus Abramovitch.

Tamay Besiroglu: We’re hiring very strong full stack engineers to build realistic, high-fidelity virtual environments for AI.

This move from Epoch AI into what is clearly a capabilities company did not sit well with many who are worried about AI, especially superintelligent AI.
Jan Kulveit: ‘Full automation of the economy as soon as possible’ without having any sensible solution to gradual disempowerment seems equally wise, prudent and pro-human as ‘superintelligence as soon as possible’ without sensible plans for alignment.

Anthony Aguirre: Huge respect for the founders’ work at Epoch, but sad to see this. The automation of most human labor is indeed a giant prize for companies, which is why many of the biggest companies on Earth are already pursuing it. I think it will be a huge loss for most humans, as well as contribute directly to intelligence runaway and disaster. The two are inextricably linked. Hard for me to see this as something other than just another entrant in the race to AGI by a slightly different name and a more explicit human-worker-replacement goal.

Adam Scholl: This seems to me like one of the most harmful possible aims to pursue. Presumably it doesn’t seem like that to you? Are you unworried about x-risk, or expect even differentially faster capabilities progress on the current margin to help, or think that’s the wrong frame, or…?

Richard Ngo: The AI safety community is very good at identifying levers of power over AI – e.g. evals for the most concerning capabilities. Unfortunately this consistently leads people to grab those levers “as soon as possible”. Usually it’s not literally the same people, but here it is.

To be clear, I don’t think it’s a viable strategy to stay fully hands-off the coming AI revolution, any more than it would have been for the Industrial Revolution. But it’s particularly jarring to see the *evals* people leverage their work on public goods to go accelerationist.

This is why I’m a virtue ethicist now. No rules are flexible enough to guide us through this. And “do the most valuable thing” is very near in strategy space to “do the most disvaluable thing”. So focus on key levers only in proportion to how well-grounded your motivations are.

Update: talked with Tamay, who disputes the characterization of the Mechanize founders being part of the AI safety community. Tao agrees (as below). IMO they benefited enough from engaging with the community that my initial tweet remains accurate (tho less of a central example).

Tao Lin: I’ve talked to these 3 people over the last few years, and although they discussed AI safety issues in good faith, they never came off as anti-acceleration or significantly pro-safety. I don’t feel betrayed, we were allies in one context, but no longer.

Oliver Habryka: IMO they clearly communicated safety priorities online. See this comment thread. Literal quote by Jaime 3 months ago:

> I personally take AI risks seriously, and I think they are worth investigating and preparing for.

Ben Landau-Taylor: My neighbor told me AI startups keep eating his AI safety NGOs so I asked how many NGOs he has and he said he just goes to OpenPhil and gets a new NGO so I said it sounds like he’s just feeding OpenPhil money to startups and then his daughter started crying.

The larger context makes it clear that Jaime cares about safety, but is primarily concerned about concentration of power and has substantial motivation to accelerate AI development. One can (and often should) want to act both quickly and safely.

Whereas in the interview Tamay and Ege do with Patel, they seem very clearly happy to hand control over almost all the real resources, and of the future, to AIs. I am not confused about why they pivoted to AI capabilities research (see about 01:43:00).
If we can enable AI to do tasks that capture mundane utility and make life better, then they provide utility and make life better. That’s great. The question is the extent to which one is also moving events towards superintelligence. I am no longer worried about the ‘more money into AI development’ effect. It’s now about the particular capabilities one is working towards, and what happens when you push on various frontiers.
Seán Ó hÉigeartaigh: I’m seeing criticism of this from ‘more people doing capabilities’ perspective. But I disagree. I really want to see stronger pushes towards more specialised AI rather than general superintelligence, b/c I think latter likely to be v dangerous. seems like step in right direction. … I’m not against AI. I’m for automating labor tasks. There are just particular directions i think are v risky, especially when rushed towards in an arms race.

Siebe: This seems clearly about general, agentic, long time-horizon AI though? Not narrow [or] specialized.

Jan Kulveit: What they seem to want to create sounds more like a complement to raw cognition than substitute, making it more valuable to race to get more powerful cognition.

Richard Ngo: This announcement describes one of the *least* specialized AI products I’ve ever heard a company pitch. If you’re going to defend “completing long-horizon tasks that lack clear criteria for success, coordinating with others, and reprioritizing in the face of obstacles and interruptions” as narrow skills, then your definition of narrow is so broad as to be useless, and specifically includes the most direct paths to superintelligence.

Autonomy looks like the aspect of AGI we’ll be slowest to get, and this pushes directly towards that. Also, evals are very important for focusing labs’ attention – there are a bunch of quotes from lab researchers about how much of a bottleneck they are.

Richard Ngo (December 17, 2024): Many in AI safety have narrowed in on automated AI R&D as a key risk factor in AI takeover. But I’m concerned that the actions they’re taking in response (e.g. publishing evals, raising awareness in labs) are very similar to the actions you’d take to accelerate automated AI R&D.

I agree that this is fundamentally a complement to raw cognition at best, and plausibly it is also extra fuel for raw cognition. Having more different forms of useful training data could easily help the models be more generally intelligent. Gathering the data to better automate various jobs and tasks, via teaching AIs how to do them and overcome bottlenecks, is the definition of a ‘dual use’ technology. Which use dominates?
Superintelligence Eventually
I think one central crux here is simple: Is superintelligence (ASI) coming soon? Is there going to be an ‘intelligence explosion’ at all?

The Mechanize folks are on record as saying no. They think we are not looking at ASI until 2045, regardless of such efforts. Most people at the major labs disagree.

If they are right that ASI is sufficiently far, then doing practical automation is differentially a way to capture mundane utility. Accelerating it could make sense.

If they are wrong and ASI is instead relatively near, then this accelerates further how it arrives and how things play out once it does arrive. That means we have less time before the end, and makes it less likely things turn out well. So you would have to do a highly bespoke job of differentially advancing mundane utility automation tasks, for this to be a worthwhile tradeoff.

They explain their position at length on Dwarkesh Patel’s podcast, which I’ll be responding to past this point.

Please Review This Podcast
For previous Patel podcasts, I’ve followed a numbered note structure, with clear summary versus commentary demarcation. This time, I’m going to try doing it in more free flow – let me know if you think this is better or worse.

They Won’t Take Our Jobs Yet
They don’t expect the ‘drop-in remote worker’ until 2040-2045, for the full AGI remote worker that can do literally everything, which I’d note probably means ASI shortly thereafter.

They say that if you look at the percentage of work currently automated, it is very small, or that the requirements for transformation aren’t present yet. That is a lot like saying we didn’t have many Covid cases in February 2020. This is an exponential or s-curve; you can absolutely extrapolate.

Their next, better argument is that we’ve run through 10 OOMs (orders of magnitude) of compute in 10 years, but we are soon to be fresh out of OOMs after maybe 3 more, so instead of having key breakthroughs every three years we’ll have to wait a lot longer for more compute.

An obvious response is that we’re rapidly gaining compute efficiency, AI is already accelerating our work and everything is clearly iterating faster, and we’re already finding key ways to pick up these new abilities like long task coherence, through better scaffolding (especially if you count o3 as scaffolding) and opening up new training methods.
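To make that compute arithmetic concrete, here is a minimal sketch (my illustration, not theirs: the 10-OOMs-in-10-years and 3-remaining-OOMs figures are the ones from the argument above, while the post-buildout 2x/year growth rate is a made-up placeholder):

```python
import math

# Figures as discussed above: ~10 orders of magnitude (OOMs) of training
# compute over ~10 years, with maybe ~3 more OOMs of buildout left.
historical_ooms = 10
historical_years = 10
remaining_ooms = 3

ooms_per_year_fast = historical_ooms / historical_years  # ~1 OOM per year during the buildout
years_until_out = remaining_ooms / ooms_per_year_fast
print(f"At {ooms_per_year_fast:.1f} OOM/year, the remaining {remaining_ooms} OOMs last ~{years_until_out:.0f} years")

# Hypothetical post-buildout rate: compute grows only ~2x per year
# (a placeholder guess, not a number from the podcast).
post_buildout_growth = 2.0
ooms_per_year_slow = math.log10(post_buildout_growth)
print(f"After that, each additional OOM takes ~{1 / ooms_per_year_slow:.1f} years instead of ~1")
```

The point of the sketch is only that the same argument cuts both ways: if the buildout really does stall, the cadence of compute-driven breakthroughs stretches out, unless algorithmic efficiency picks up the slack.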
They Took Our (Travel Agent) Jobs

Dwarkesh says, aren’t current systems already almost there? They respond no, it can’t move a cup, it can’t even book a flight properly.

I’ve seen robots that are powered by 27B LLMs move cups. I’ve seen Operator book flights, I believe the better agents can basically do this already, and they admit the flight booking will be solved in 2025. Then they fall back on, oh, travel agents mostly don’t book flights, so this won’t much matter. There are so many different things each job will have to do.

So I have two questions now.

- Aren’t all these subtasks highly correlated in AI’s ability to do them? Once the AI can start doing tasks, why should the other tasks stop the AI from automating the job, or automating most of the job (e.g. 1 person does the 10% the AI can’t yet do and the other 9 are fired, or 8 are fired and you get twice as much output)? As I’ve said many times, They Took Our Jobs is fine at first, your job gets taken so we do the Next Job Up that wasn’t quite worth it before, great, but once the AI takes that job too the moment you create it, you’ve got problems.
- What exactly do travel agents do that will be so hard? I had o3 break it down into six subproblems. I like its breakdown so I’m using that; its predictions seem oddly conservative, so the estimates here are mine.
- A. Data plumbing. Solved by EOY 2025 if anyone cares.
- B. Search and optimization. It says 2-3 years, I say basically solved now, once you make it distinct from step C (preference elicitation). Definitely EOY 2025 to be at superhuman levels based off a YC startup. Easy stuff, even if your goal is not merely ‘beat humans’ but to play relatively close to actual maximization.
- C. Preference elicitation and inspiration. It basically agrees AI can already mostly do this. With a little work I think they’re above human baseline now.
- D. Transaction and compliance. I don’t know why o3 thinks this takes a few extra years. I get that errors are costly, but errors already happen, there’s a fixed set of things to deal with here and you can get them via Ace-style example copying, checklists and tool use if you have to. Again, seriously, why is this hard? No way you couldn’t get this in 2026 if you cared, at most.
- E. Live ops and irregular operations. The part where it can help you at 3am with no notice, and handle lots of things at once, is where AI dominates. So it’s a matter of how much this bleeds into F and requires negotiation with humans, and how much those humans refuse to deal with AIs.
- F. Negotiation and human factors. This comes down to whether the concierge is going to refuse to deal with AIs, or treat them worse – the one thing AIs can’t do as well as humans is Be Human.
o3: Bottom line: historic shrinkage was roughly 60 / 40 tech vs everything else; looking forward, the next wave is even more tech‑weighted, with AI alone plausibly erasing two‑thirds of the remaining headcount by the mid‑2030s.

In a non-transformed world, a few travel agents survive by catering to the very high end clients or those afraid of using AI, and most of their job involves talking to clients and then mostly giving requests to LLMs, while occasionally using human persuasion on the concierge or other similar targets.
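As a toy illustration of the partial-automation arithmetic above – my ‘1 person does the 10% the AI can’t yet do’ example and o3’s two-thirds headcount figure – here is a minimal sketch; the numbers are for illustration, not estimates:

```python
def remaining_headcount(original_workers: float, automatable_share: float) -> float:
    """Toy model: workers still needed once a share of the job's tasks is
    automated and the remaining work is consolidated onto fewer people."""
    return original_workers * (1 - automatable_share)

# The '1 person does the 10% the AI can't yet do' case: 10 workers -> 1.
print(remaining_headcount(10, 0.90))   # 1.0

# o3's forward-looking figure: AI erases ~2/3 of remaining headcount.
print(remaining_headcount(10, 2 / 3))  # ~3.3 workers left out of 10
```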
The Case Against Intelligence
Then we get an argument that what we have today ‘looks easy’ now but would have looked insurmountably hard in 2015. This argument obviously cuts against the idea that progress will be difficult! And importantly and rightfully so. Problems look hard, then you improve your tools and suddenly everything falls into place and they look easy.

The argument is being offered in the other direction because part of that process was scaling up compute so much, which we likely can’t quickly do again that many times, but we have a lot of other ways we can scale things and the algorithmic improvements are dramatic.

Indeed the next sentence is that we likely will unlock agency in 3-5 years and the solution will then look fairly simple. My response would start by saying that we’ve mostly unlocked agency already. It’s not fully ready, but if you can’t see us climbing the s-curves and exponentials faster than this you are not paying attention. But even if it does take 3-5 years, okay, then we have agency and it’s simple. If you combine what we have now with a truly competent agent, what happens next?

Meanwhile this week we have another company, Ace, announcing it will offer us fully agentic computer use and opening up its alpha.

They admit ‘complex reasoning’ will be easy, and retreat to talking about narrow versus general tasks. They claim that we are ‘very far’ from taking a general game off Steam released this year, and then playing it. Dwarkesh mentions Claude Plays Pokemon; it’s fair to note that the training data is rather contaminated here. I would say, you know what, I don’t think we are that far from playing a random unseen turn-based Steam game, although real time might take a bit longer.

They expect AI to likely soon earn $100 billion a year, but dismiss this as not important, although at $500 billion they’d eyeball emoji. They say so what, we pay trillions of dollars for oil. I would say that oil was rather transformative in its own way, if not on the level of AI. Imagine the timeline if there hadn’t been any oil in the ground. But hey.

They say AI isn’t even that good at coding, it’s only impressive ‘in the human distribution.’ And what percentage of the skills to automate AI R&D do they have?

They say AI R&D doesn’t matter that much, it’s been mostly scaling, and also AI R&D requires all these other skills AIs don’t currently have. It can’t figure out what directions to look in, it can only solve new mathematical problems not figure out which new math problems to work on, it’s ‘much worse’ at that. The AIs only look impressive because we have so much less knowledge than the AI.

So the bar seems to at least kind of be that AI has to have all the skills to do it the way humans currently do it, and they have to have those skills now. Implicitly it’s not that big a deal if you can automate 50% or 90% or 98% of tasks while the humans do the rest, and even if you had 100% it wouldn’t be worth much? They go for the ‘no innovation’ and ‘no interesting recombination’ attacks.

The reason for Moravec’s paradox, which as they mention is that for AI easy things look hard and hard things look easy, is that we don’t notice when easy things look easy or hard things look hard. Mostly, actually, if you understand the context, easy things are still easy and hard things are still hard. They point out that the paradox tends to follow when human capabilities evolved – if something is recent the AI will probably smoke us, if it’s ancient then it won’t.
But humans have the same amount of compute either way, so it’s not about compute, it’s about us having superior algorithmic efficiency in those domains, and comparing our abilities in those domains to our abilities in other domains shows how valuable that is.

They go for the Goodhart’s law attack, that AI competition and benchmark scores aren’t as predictive of general competence as they are for humans, or at least get ahead of the general case. Okay, sure. Wait a bit.

They say if you could ‘get the competencies’ of animals into AI you might have AGI already. Whelp. That’s not how I see it, but if that’s all it takes, why be so skeptical? All we have to do is give them motor skills (have you seen the robots recently?) and sensory skills (have you seen computer vision?). This won’t take that long. And then they want to form a company whose job is, as I understand it, largely to gather the kind of data to enable you to do that.

Then they do the ‘animals are not that different from humans except culture and imitation’ attack. I find this absurd, and I find the claim that ‘the human would only do slightly better at pivoting its goals entirely in a strange environment’ absurd. It’s like you have never actually met animals and want to pretend intelligence isn’t a thing.

But even if true, then this is because culture solves bottlenecks that AIs never have to face in the first place – that humans have very limited data, compute, parameters, memory and most importantly time. Every 80 years or so, all the humans die, all you can preserve is what you can pass down via culture and now text, and you have to do it with highly limited bandwidth. Humans spend something like a third of their existence either learning or teaching as part of this. Whereas AIs simply don’t have to worry about all that. In this sense, they have infinite culture.

Dwarkesh points this out too, as part of the great unhobbling. If you think that the transition from non-human primates to humans involved only small raw intelligence jumps, but did involve these unhobblings plus an additional raw compute and intelligence jump, then you should expect to see another huge effective jump from these additional unhobblings.

They say that ‘animals can pursue long term goals.’ I mean, if they can then LLMs can.

At 44:40 the explicit claim is made that a lack of good reasoning is not a bottleneck on the economy. It’s not the only bottleneck, but to say it isn’t a huge bottleneck seems patently absurd? Especially after what has happened in the past month, where a lack of good reasoning has caused an economic crisis that is expected to drag several percentage points off GDP, and that’s one specific reasoning failure on its own.

Why all this intelligence denialism? Why can’t we admit that where there is more good reasoning, things go better, we make more and better and more valuable things more efficiently, and life improves? Why is this so hard? And if it isn’t true, why do we invest such a large percentage of our lives and wealth into creating good reasoning, in the form of our educational system?

I go over this so often. It’s a zombie idea that reasoning and intelligence don’t matter, that they’re not a bottleneck, that having more of them would not help an immense amount. No one actually believes this. The same people who think IQ doesn’t matter don’t tell you not to get an education or not learn how to do good reasoning. Stop it.

That’s not to say good reasoning is the only bottleneck. Certainly there are other things that are holding us back.
But good reasoning would empower us to help solve many of those other problems faster and better, even within the human performance range. If we add in AGI or ASI performance, the sky’s the limit. How do you think one upgrades the supply chains and stimulates the demand and everything else? What do you think upgrades your entire damn economy and all these other things? One might say good reasoning doesn’t only solve bottlenecks, it’s the only thing that ever does.

Intelligence Explosion
On the intelligence explosion, Tamay uses the diminishing returns to R&D attack, the need for experiments attack, and the need for sufficient concentration of hardware attack. There’s skepticism of claims of researcher time saved. There’s what seems like a conflation by Ege of complements versus bottlenecks, which can be the same but often aren’t.

All (including me) agree this is an empirical numbers question: whether you can gain algorithmic efficiency and capability fast enough to match your growth in need for effective compute without waiting for an extended compute buildout (or, I’d assume, how fast we could then do such a buildout given those conditions).

Then Tamay says that if we get AGI in 2027, the chance of this singularity is quite high, because that scenario is conditioning on not needing very much compute. So the intelligence explosion disagreement is mostly logically downstream of the question of how much we will need to rely on more compute versus algorithmic innovations. If it’s going to mostly be compute growth, then we get AGI later, and also going from AGI to ASI will require further compute buildout, so that too takes longer.

(There’s a funny aside on Thermopylae, and the limits of ‘excellent leadership’ – yes they did well but they ultimately lost. To which I would respond, they only ultimately lost because they got outflanked, but also in this case ‘good leadership’ involves a much bigger edge. A better example is, classically, Cortes, who they mention later. Who had to fight off another Spanish force and then still won. But hey.)

Later there’s a section where Tamay essentially says yes, we will see AIs with superhuman capabilities in various domains, pretty much all of them, but thinking of a particular system or development as ‘ASI’ isn’t a useful concept when making the AI or thinking about it. I disagree, I think it’s a very useful handle, but I get this objection.
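One way to see why this is an empirical numbers question is a toy model – a sketch I’m supplying for illustration, not anything laid out on the podcast, with the functional form and parameters as pure assumptions. Let algorithmic efficiency A feed back into research speed, with an exponent r capturing how fast returns to research diminish relative to the extra effective research that better algorithms buy. Whether r ends up above or below 1 is roughly the software-only explosion question:

```python
def software_progress(r: float, k: float = 0.3, steps: int = 20) -> float:
    """Toy feedback loop: algorithmic efficiency A buys more effective research,
    which improves A further. Discretized dA/dt = k * A**r.
    r > 1 -> accelerating ('explosion'), r = 1 -> steady exponential,
    r < 1 -> progress slows unless hardware keeps growing."""
    A = 1.0
    for _ in range(steps):
        A = A + k * A**r
    return A

for r in (0.7, 1.0, 1.3):
    print(f"returns exponent r={r}: efficiency after 20 steps ~ {software_progress(r):,.0f}x")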
Explosive Economic Growth

The next section discusses explosive economic growth. It’s weird. We spent most of the first hour with arguments (that I think are bad) for why AI won’t be that effective, but now Ege and Tamay are going to argue for 30% growth rates anyway.

The discussion starts out with limitations. You need data to train for the new thing, you need all the physical inputs, you have regulatory constraints, you have limits to how far various things could go at all. But as they say, the value of AI automation is just super high.

Then there’s doubt that sufficient intelligence could design even ‘the kinds of shit that humans would have invented by 2050,’ talk about ‘capital buildup’ and learning curves and efficiency gains for complementary inputs. The whole discussion is confusing to me. They list all these bottlenecks and make these statements that everything has to be learning by doing and steady capital accumulation and supply chains, saying that the person with the big innovation isn’t that big a part of the real story, that the world has too much rich detail so you can’t reason about it. And then they assert there will be big rapid growth anyway, likely starting in some small area.

They compare it to what happened in China, except that you can also get a big jump in the labor force, but I’d say China had that too in effect by taking people out of very low productivity jobs.

I sort of interpret this as: There is a lot of ruin (bottlenecks, decreasing marginal returns, physical requirements, etc) in a nation. You can have to deal with a ton of that, and still end up with lots of very rapid growth anyway. They also don’t believe in a distinct ‘AI economy.’

Then later, there’s a distinct section on reasons to not expect explosive growth, and answers to them. There’s a lot of demand for intensive-margin and product-variety consumption, plus world GDP per capita is currently only about $10k a year, versus many people who happily spend millions. Yes, some areas might be slower to automate, but that’s fine, you automate everything else, and the humans displaced can work in the slower areas. O-ring worlds consist of subcomponents and still allow unbounded scaling. Drop-in workers are easy to incorporate into existing systems until you’re ready to transition to something fully new.

They consider the biggest objection regulation, or coordination to not pursue particular technology. In this case, that seems hard. Not impossible, but hard.

A great point is when Ege highlights the distinction between rates and levels of economic activity. Often economists, in response to claims about growth rates – 30% instead of 3%, here – will make objections about future higher levels of activity, but these are distinct questions. If you’re objecting to the level you’re saying we could never get there, no matter how slowly.

They also discuss the possibility of fully AI firms, which is a smaller lift than a full AI economy. On a firm level this development seems inevitable.
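To put numbers on Ege’s rates-versus-levels distinction above, a trivial compounding sketch (the 3% and 30% rates are the ones from the discussion; the 10x target is an arbitrary choice for illustration):

```python
import math

def years_to_multiply(growth_rate: float, multiple: float) -> float:
    """Years of compounding at `growth_rate` to multiply output by `multiple`."""
    return math.log(multiple) / math.log(1 + growth_rate)

# Same 10x level of economic activity, very different timelines:
print(f"10x the economy at  3% growth: ~{years_to_multiply(0.03, 10):.0f} years")  # ~78 years
print(f"10x the economy at 30% growth: ~{years_to_multiply(0.30, 10):.0f} years")  # ~9 years
```

The level is the same in both cases; only the date at which you reach it changes, which is why objecting to the level is a different claim than objecting to the rate.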
Wowie on Alignment and the Future

There’s some AI takeover talk. They admit that AI will be much more powerful than humans, but then they equate an AI taking over to the US invading Sentinel Island or Guatemala – the value of doing so isn’t that high.

They make clear the AIs will steadily make off with all the resources. The reason they wouldn’t do an explicit ‘takeover’ in that scenario is that they don’t have to, and that they’re ‘integrated into our economy,’ but they would be fully in control of that economy with an ever growing share of its resources, so why bother taking the rest? And the answer is, on the margin why wouldn’t you take the rest, in this scenario? Or why would you preserve the public goods necessary for human survival?

Then there’s this, and, well, wowie moment of the week:

Ege Erdil: I think people just don’t put a lot of weight on that, because they think once we have enough optimization pressure and once they become super intelligent, they’re just going to become misaligned. But I just don’t see the evidence for that.

Dwarkesh Patel: I agree there’s some evidence that they’re good boys.

Ege Erdil: No, there’s more than some evidence.

‘Good boys’? Like, no, what, absolutely not, what are you even talking about, how do you run an AI safety organization and have this level of understanding of the situation. That’s not how any of this works, in a way that I’m not going to try to fit into this margin here. By the way, since you recorded this, have you seen how o3 behaves? You really think that if this is ‘peaceful’ then due to ‘trade,’ as they discuss soon after, yes humans will lose control over the future but it will all work out for the humans?

They go fully and explicitly Hansonian: what you really fear is change, man. Also, later on, they say they are unsure if accelerating AI makes good outcomes more versus less likely, and that maybe you should care mostly about people who exist today and not the ones who might be born later, every year we delay people will die, die I tell you, on top of their other discounting of the future based on inability to predict or influence outcomes.

Well, I suppose they pivoted to run an AI capabilities organization instead. I consider the mystery of why they did that fully solved, at this point.

Then in the next section, they doubt value lock-in or the ability to preserve knowledge long term or otherwise influence the future, since AI values will change over time. They also doubt the impact of even most major historical efforts like the British efforts to abolish slavery, where they go into some fun rabbit holing. Ultimately, the case seems to be that in the long run nothing matters and everything follows economic incentives?

Ege confirms this doesn’t mean you should give up, just that you should ‘discount the future’ and focus on the near term, because it’s hard to anticipate the long term effects of your actions and incentives will be super strong, especially if coordination is hard (including across long distances), and some past attempts to project technology have been off by many orders of magnitude. You could still try to align current AI systems to values you prefer, or support political solutions.

I certainly can feel the ‘predictions are hard, especially about the future’ energy, and that predictions about what changes the outcome are hard too. But I take a very different view of history, both past and future, and our role in shaping it and our ability to predict it.
But That’s Good Actually
Finally at 2:12:27 Dwarkesh asks about Mechanize, and why they think accelerating the automation of labor will be good, since so many people think it is bad, and most of them aren’t even thinking about the intelligence explosion and existential risk issues.

Ege responds, because lots of economic growth is good, and at first wages should even go up, although eventually they will fall. At that point, they expect humans to be able to compensate by owning lots of capital – whereas I would presume, in the scenarios they’re thinking about, that capital gets taken away or evaporates over time, including because property rights have never been long term secure, and they seem even less likely to be long term secure for overwhelmed humans in this situation. That’s on top of the other reasons we’ve seen above.

They think we likely should care more about present people than future people, and then discount future people based on our inability to predict or predictably influence them, and they don’t mind AI takeover or the changes from that. So why wouldn’t this be good, in their eyes?

There is then a section on arms race dynamics, which confused me. It seems crazy to think that a year or more edge in AI couldn’t translate to a large strategic advantage when you’re predicting 30% yearly economic growth. And yes, there have been decisive innovations in the past that have come on quickly. Not only nukes, but things like ironclads.

They close with a few additional topics, including career advice, which I’ll let stand on their own.