Reply to Vitalik on d/acc
post by samuelshadrach (xpostah) · 2025-03-05T18:55:55.340Z
This is a link post for http://samuelshadrach.com/?file=/raw/english/unimportant/reply_vitalik_dacc_reply.md
2025-03-05
Vitalik recently wrote an article on his ideology of d/acc. It is impressively similar to my own thinking, so I figured it deserved a reply. (I'm not claiming my thinking is completely original, by the way; it has plenty of influences, including Vitalik himself.)
Disclaimer
- This is a quickly written note. I might change my mind on this stuff tomorrow for all I know.
The two axes he identifies for differentially accelerating tech are:
- big group versus small group - prioritise accelerating tech that can be deployed by a small group rather than only by a big group
- offense versus defense - prioritise accelerating tech that can be deployed for defence rather than offence
I think I generally get where this is coming from and find these important ideas.
Some confusions from my side:
- Self-replication
- I am generally in favour of building self-sustaining social systems over ones that are not. The success of d/acc ultimately relies on followers of Vitalik's d/acc a) building only tech that satisfies the d/acc criteria and b) providing social approval to people who build tech as per the d/acc criteria. For this system to be self-sustaining, point b) may need to persist long after all of d/acc's current followers (Vitalik included) are dead. Self-replicating culture is possible to build but extremely difficult. Religions are among the oldest self-replicating cultures. Ideas such as markets and democracy have also successfully self-replicated for multiple centuries now. I'm unsure whether d/acc being present in culture is alone sufficient to ensure people in the year 2200 are still building only tech that satisfies the d/acc criteria.
- Often, culture is shaped by incentives IMO. If people of the future face incentives that make it difficult to follow d/acc, they might abandon it. This idea is hard to explain briefly, but it is something I consider very important. I would rather leave future generations with incentives to do a Thing than with culture merely telling them to do a Thing.
- Terminal values
- To me, the terminal value of all these galaxy-brained plans is likely preserving and growing timeless stuff like truth and empathy.
- Defensive tech gives truth a good defence, as information is easy to replicate but hard to destroy. As long as multiple hostile civilisations (or individuals) can coexist, it is likely at least one of them will preserve the truth for future generations.
- However, it is harder for me to see how any of these plans connect to empathy. Sure, totalitarianism and extinction would be bad for promoting empathy, but I think promoting it requires more work than just preventing those outcomes. Increasing resource abundance and solving physical security seem useful here, and building defensive tech can increase physical security. In general, my thinking on which tech increases versus decreases human empathy is still quite confused.
- Takeoff may favour offence
- Intelligence-enhancing technologies such as superintelligent AI, genetic engineering of humans to increase IQ, human brain connectome-mapping for whole brain emulation, etc. are so radically accelerating that I'm unsure an offence-defence balance will be maintained throughout the takeoff. A small differential in intelligence leads to a very large differential in offensive power, so it is possible offence simply wins at some point while the takeoff is occurring.
- Entropy may favour offence
- Historically, it has always been easier to blow up a region of space than to keep it in an ordered state and defend it against being blown up. Defence has typically been achieved, and continues to be achieved, in game-theoretic ways ("if you blow up my territory I blow up yours") rather than in actual physical ways ("I can defend against your attack, and my defence costs less than your offence"). This seems somewhat inherent to physics itself, rather than specific to the branches of the tech tree humans have gone down as of 2025. Consider this across times and scales, from the very small and ancient (gunpowder beats metal locks) to the very big and futuristic (a bomb that can blow up the observable universe may have no defence).
- Maybe big group is inherently favoured
- What a big group can build is a strict superset of what a small group can build. Ensuring that all frontier tech can necessarily be built by small groups is hard. Often, what is called open source tech or free market production is really centralised production with decentralised consumption. For example, solar panels can only be manufactured by a large group, but they can be traded and used by a small group. This is why a lot of tech that appears free-market-produced on the surface ultimately hits supply chain bottlenecks if you try to build it from scratch in a new country. When I say "scratch" I actually mean scratch: dig your own iron and water out of the ground, with a fully independent supply chain.