Posts

Surviving and Shaping Long-Term Competitions: Lessons from Net Assessment 2023-11-24T18:18:41.072Z
Rules for Epistemic Warfare? 2021-06-05T13:30:00.896Z
The Future of Nuclear Arms Control? 2021-02-15T01:50:01.757Z
Epistemic Warfare 2020-12-11T03:50:01.497Z
Competitive Universal Basic Services? 2020-11-09T03:20:02.239Z
The Alignment-Competence Trade-Off, Part 1: Coalition Size and Signaling Costs 2020-01-15T23:10:01.055Z
In Defense of the Arms Races… that End Arms Races 2020-01-15T21:30:00.828Z
The Cybersecurity Dilemma in a Nutshell 2019-12-12T03:20:01.213Z
Modernization and arms control don’t have to be enemies. 2019-01-12T18:50:01.148Z
Lessons from the Cold War on Information Hazards: Why Internal Communication is Critical 2018-02-24T23:34:33.250Z
Strategic High Skill Immigration 2017-12-06T01:15:19.387Z

Comments

Comment by Gentzel on Moderation notes re: recent Said/Duncan threads · 2023-04-18T11:33:53.787Z · LW · GW

My model of the problem boils down to a few basic factors:

  1. Attention competition prompts speed and rewards some degree of imprecision and controversy with more engagement.
  2. It is difficult to both comply with many costly norms and have significant output/win attention competitions.
  3. There is debate over which norms should be enforced, and while getting the right combination of norms is positive-sum overall, different norms favor different personalities in competition.
  4. Just purging the norm breakers can create substantial groupthink if the norm breakers disproportionately express neglected ideas or comply with other neglected and costly but valuable norms.
  5. It is costly for third parties to adjudicate and intervene precisely in conflicts involving attention competition, since such conflicts are inherently costly to sort out.

General recommendations/thoughts:

  1. Slow the pace of conversation, perhaps through mod rate limits on comment length and frequency or temporary bans. This seems like a proportional response to argument spam and attention competition, and would seem to push toward better engagement incentives without inducing groupthink from overzealous censorship.
  2. If entangled in comment conflict yourself, aim to write more carefully, clearly, and in a condensed manner that is more inherently robust against adversarial misinterpretation. If the other side doesn't reciprocate, make your effort explicit to reduce the social cost of unilaterally not responding quickly (e.g. leaving a friendly temporary comment about responding later when you get time to convey your thoughts clearly).
  3. To the degree possible, reset and focus on conversations going forward, not on publicly adjudicating who screwed up what in prior convos. While it is valuable to set norms, those who are intertwined in conflict and stand to competitively benefit from the selective enforcement of the norms they favor are inherently not credible as sources of good norm sets.

In general we should be aiming for positive-sum and honest incentives, while economizing in how we patch exploits in the norms that are promoted and enforced. Attention competition makes this inherently hard, thus it makes sense to attack the dynamic itself.

Comment by Gentzel on Moderation notes re: recent Said/Duncan threads · 2023-04-18T10:17:13.791Z · LW · GW
Comment by Gentzel on Rules for Epistemic Warfare? · 2021-06-06T18:09:29.793Z · LW · GW

I am not sure that is actually true. There are many escalatory situations, border clashes, and mini-conflicts that could easily lead to far larger scale war, but don't due to the rules and norms that military forces impose on themselves and that lead to de-escalation. Once there is broader conflict between large organizations, though, then yes, you often do need a treaty to end it.

Treaties don't work on decentralized insurgencies, though, hence forever wars: agreements can't be credibly enforced when each fighter has their own incentives and veto power. This is an area where norm spread can be helpful, and I do think online discourse currently looks far more like warring groups of insurgents than warring armies.

Comment by Gentzel on Rules for Epistemic Warfare? · 2021-06-06T17:56:05.063Z · LW · GW

Why would multi-party conflict change the utility of the rules? It does change the ease of enforcement, but that's the reason to start small and scale until the advantages of cooperating exceed the advantages of defecting. That's how lots of good things develop where cooperation is hard.

The dominance of in-group competition seems like the sort of thing that is true until it isn't. Group selection is sometimes slow, but that doesn't mean it doesn't exist. Monopolies have internal competition problems, while companies in a competitive market do get forced to develop better internal norms for cooperation, or they risk going out of business against competitors that have achieved higher internal alignment by suppressing internal zero-sum competition (or realigning it in a positive-sum manner for the company).

Comment by Gentzel on Rules for Epistemic Warfare? · 2021-06-06T17:44:44.093Z · LW · GW

I don't think you are fully getting what I am saying, though that's understandable because I haven't added any info on what makes a valid enemy.

I agree there are rarely absolute enemies and allies. There are however allies and enemies with respect to particular mutually contradictory objectives.

Not all war is absolute: wars have at times been deliberately bounded in space, and having rules of war in the first place is evidence of partial cooperation between enemies. You may have adversarial conflicts of interest with close friends on some issues: if you can't align those interests, it isn't the end of the world. The big problem is lies and sloppy reasoning that go beyond defending one's own interests into causing unnecessary collateral damage for large groups. The entire framework here is premised on the same distinction you seem to think I don't have in mind... which is fair, because it was unstated. XD

The big focus is a form of cooperation between enemies to reduce large scale indiscriminate collateral damage of dishonesty. It is easier to start this cooperation between actors that are relatively more aligned, before scaling to actors that are relatively less aligned with each other. Do you sense any floating disagreements remaining?

Comment by Gentzel on Rules for Epistemic Warfare? · 2021-06-06T17:27:59.078Z · LW · GW

That's totally fair for LessWrong, haha. I should probably try to reset things so my blog doesn't automatically post here except when I want it to.

Comment by Gentzel on Modernization and arms control don’t have to be enemies. · 2021-03-17T19:37:17.714Z · LW · GW

I agree with this line of analysis. Some points I would add:

- Authoritarian closed societies probably have an advantage at covert racing, at suddenly devoting a larger proportion of their economic pie to racing, and at artificially lowering prices to do so. Open societies probably have a greater advantage at discovery/the cutting edge and have a bigger pie in the first place (though better private-sector opportunities compete up the cost of defense engineering talent). Given this structure, I think you want the open societies to keep their tech advantage, and to make deploying/scaling military tech a punishment for racing by closed societies.
- Your first bullet seems similar to the situation the U.S. is in now: Russia and China just went through a modernization wave, and Russia has been doing far more nuclear experimentation while the U.S. talent for this is mostly old or already retired, and a lot of the relevant buildings are falling apart. Once you are in the equilibrium of knowing a competitor is doing something and your decision is to match or not, you don't have leverage to stop the competitor unless you get started. Because of how old a lot of U.S. systems are and how old the talent is, Russia likely perceived a huge advantage to getting the U.S. to delay. A better structure for de-escalation is neutral with respect to relative power differences: if you de-escalate by forfeiting relative power, you keep increasing the incentive for the other side to race.

There are some other caveats I'm not getting into here, but I think we are mostly on the same page.

Comment by Gentzel on The Future of Nuclear Arms Control? · 2021-03-17T19:16:20.791Z · LW · GW

Some of the original papers on nuclear winter reference this effect, e.g. the abstract here about high-yield surface-burst weapons (which I think would include the sort the USSR would have targeted at silos). https://science.sciencemag.org/content/222/4630/1283

A common problem with some modern papers is that they just take soot/dust amounts from these prior papers without adjusting for arsenal changes or changes in fire modeling.

Comment by Gentzel on The Future of Nuclear Arms Control? · 2021-03-03T13:10:32.239Z · LW · GW

This is what the non-proliferation treaty is for. Smaller countries could already do this if they wanted, as they aren't treaty-limited in the number of weapons they make. But getting themselves down the cost curve wouldn't make export profitable or desirable: they have to eat the cost of going down the cost curve in the first place, and no one who would only buy cheap nukes is going to compensate them for it. Depending on how much data North Korea got from prior tests, they might still require a lot more testing, and they certainly require a lot more nuclear material, which they can't get cheaply. Burning more of their economy to get down the cost curve isn't going to enable them to export profitably, and if they even started, it could be the end of the regime (due to overmatch by the U.S. + South Korea + Japan). The "profit" they get from nukes is regime security and negotiating power... they aren't going to throw those in the trash. They might send scientists, but they aren't going to give away free nukes, or no one is going to let planes or ships leave their country without inspection for years. The Cuban Missile Crisis was scary for the U.S. and USSR, but a small state making this sort of move against the interest of superpowers is far more likely to invite an extreme response (IMO).

Comment by Gentzel on The Future of Nuclear Arms Control? · 2021-02-24T20:46:47.218Z · LW · GW

I generally agree with this line of concern. That said, if the end-state equilibrium is that large states have counterforce arsenals and only small states have multi-megaton weapons, then I think that equilibrium is safer in terms of expected deaths, because the odds of nuclear winter are so much lower.

There will be risk adaptation either way. The risk of nuclear war may go up contingent on there being a war, but the risk of war may go down because there are lower odds of being able to keep a war purely conventional. I think that makes assessing the net risk pretty hard, but I doubt you'd argue for turning every nuke into a civilization ender to improve everyone's incentives: at some point it just isn't credible that you will use the weapons, and this reduces their deterrent effect. There is an equilibrium that minimizes total risk across sources of escalation, accidents, etc., and I'm trying to spark convo toward figuring out what that equilibrium is. I think as tech changes, the best equilibrium is likely to change, and it is unlikely to be the same arms control as decades ago, but I may be wrong about the best direction of change.

Comment by Gentzel on The Future of Nuclear Arms Control? · 2021-02-24T20:36:14.322Z · LW · GW

Precision isn't cheap. Low-yield, accurate weapons will often be harder to make than large-yield, inaccurate weapons. A rich country might descend the cost curve in production, but as long as the U.S. stays in an umbrella deterrence paradigm, that doesn't decrease costs for anyone else, because we don't export nukes.

This also increases the cost for rogue states to defend their arsenals (because they are small, don't have a lot of area to hide stuff, etc.), which may discourage them from acquiring arsenals in the first place.

Comment by Gentzel on The Future of Nuclear Arms Control? · 2021-02-24T20:30:29.151Z · LW · GW

I meant A. The Beirut explosion was about the yield of a mini-nuke.

Comment by Gentzel on The Future of Nuclear Arms Control? · 2021-02-15T10:22:48.848Z · LW · GW

I could imagine unilateral action to reduce risk here being good, but not in violation of current arms control agreements. Doing that without breaking any current agreements means replacing lots of warheads with lower-yield or dial-a-yield versions, and probably getting more conventional long-range precision weapons. Trying to replace some sub-launched missiles with low-yield warheads was a step in that direction.

There's a trade-off between holding leverage to negotiate, and just directly moving to a better equilibrium, but if you are the U.S., the strategy shift may just increase your negotiating power since the weapons are more useable.

The main thing I want to advocate is for people to debate these ideas to see if there is a potentially better equilibrium to aim for, and to chart a path to it. I don't want people to blindly assume I am right.

Comment by Gentzel on Epistemic Warfare · 2020-12-19T23:20:06.981Z · LW · GW

I think you need legible rules for norms to scale in an adversarial game, so the rules can't be based directly on utility thresholds.

Proportionality is harder to make legible, but when lies are directed at political allies, that's clear friendly fire or betrayal. Lying to the general public also shouldn't fly; that's indiscriminate.

I really don't think lying and censorship are going to help with climate change. We already have publication bias and hype on one side, and corporate lobbying + other lies on the other. You probably have to take another approach to get trust/credibility when joining the fray so late. If there were greater honesty and accuracy, we'd have invested more in nuclear power a long time ago, but now that other renewable tech has descended the learning curve faster, different options make sense going forward. In the Cold War, anti-nuclear movements generally got a bit hijacked by communists trying to make the U.S. weaker and to shift focus from mutual to unilateral action... there's a lot of bad stuff influenced by lies in the distant past that constrains options in the future. I guess it would be interesting to see which deception campaigns in history are the most widely considered good and successful after the fact. I assume most are ones with respect to war, such as Allied deception about the D-Day landings.

Comment by Gentzel on Epistemic Warfare · 2020-12-19T22:52:28.792Z · LW · GW

It's not an antidote, just like a blockade isn't an antidote to war. Blockades might happen to prevent a war or be engineered for good effects, but by default they are distortionary in a negative direction, have collateral damage, and can only be pulled off by the powerful.

Comment by Gentzel on Epistemic Warfare · 2020-12-19T22:46:43.936Z · LW · GW

While it can depend on the specifics, in general censorship is coercive and one sided. Just asking someone to not share something isn't censorship, things are more censorial if there is a threat attached.

I don't think it is bad to only agree to share a secret with someone if they agree to keep the secret. The info wouldn't have been shared in the first place otherwise. If a friend gives you something in confidence, and you go public with the info, you are treating that friend as an adversary at least to some degree, so being more demanding in response to a threat is proportional.

Comment by Gentzel on Competitive Universal Basic Services? · 2020-11-11T22:54:08.047Z · LW · GW

Vouchers could be in the range of competition, but if people prefer basic income to the value they can get via voucher at the same cost level, then there has to be substantial value that the individual doesn't capture to justify it. School vouchers may be a case of this, since education has broader societal value.

Comment by Gentzel on Competitive Universal Basic Services? · 2020-11-11T22:47:29.440Z · LW · GW

The issue there is that U.S. R&D implicitly subsidizes the rest of the world, since we don't negotiate prices but others do. It seems like an unfortunate trade-off between the present and the future (and between here and other places), except when revenue isn't reinvested into R&D.

Comment by Gentzel on Competitive Universal Basic Services? · 2020-11-11T22:40:22.776Z · LW · GW

I agree with your point in general. In these cases, I'm specifically focusing on regulations for issues that evaporate with central coordination: 

- Government is doing the central coordinating, so overriding zoning shouldn't result in uncoordinated planning: gov will also incur the related infrastructure costs.
- If you relax zoning and room-size minimums everywhere, the minimum cost to live everywhere decreases, so no particular spot becomes disproportionately vulnerable to concentrating the negative externalities of poverty, while you simultaneously decrease housing-cost-driven poverty everywhere.

Comment by Gentzel on Competitive Universal Basic Services? · 2020-11-11T22:19:39.434Z · LW · GW

I think I agree with the time-restricted fund idea on competitive commodities as being better than just providing the commodities, since there aren't going to be a lot of further economies-of-scale benefits.

Having competing services and basic income coming out of the same gov budget does create pressure not to run things as poorly as past gov programs. The incentives should still be aligned, because people can still choose to opt out, just like in the normal market.

On food, the outcomes shouldn't be as bad as food stamp restrictions over time not just because of the opt-out option, but also because the data will be more legible to the government and enable better standards over time where we otherwise have pretty bad food science. 

I think people would only opt-in to the food plan if it basically allows them to capture benefits somewhere else within the service package they select (e.g. extra basic income via reducing expected medical bills). Otherwise basic income should dominate a food plan as an option unless the person is looking for a way to tie their own hands.

 

Comment by Gentzel on Competitive Universal Basic Services? · 2020-11-11T21:56:09.897Z · LW · GW

I agree that it is a huge problem if the rules can change in a manner that evaporates the fitness pressure on the services: you need some sort of pegging to stop budgets from exploding, you can't have gov outlawing competition, etc. 

I also don't have a strong opinion on how flexible the government should be here. The more flexible it is, the less benefit you get from constraining variance and achieving economies of scale; on the other hand, the more flexible it is, the more people can get exactly what they want, though with less buying power. I do think it is helpful to have the ability to individually opt out of services, and this would be a very useful signal for forcing both the government and service contractors to adapt. I'm not sure just how many services should be competing for a given service niche within the broader system. One idea would be to have a competition to come up with cheap standardized services, and then have companies compete to provide them.

The big thing you are trying to achieve is providing a welfare floor at a much more sustainable cost, via competitive pressure combined with the ability to centrally coordinate consumer preferences. The coordination doesn't just give market-power benefits; you also get increased legibility that decreases search and transaction costs for consumers, and potentially the ability to do better large-scale (though still not randomized) experiments in nutrition science and regulation design (e.g. food standards far exceeding regulatory requirements cheaply, exceptions to housing size requirements, etc.).

Comment by Gentzel on Competitive Universal Basic Services? · 2020-11-11T20:21:25.352Z · LW · GW

I think this is a good argument in general, but idk how it does against this particular set-up.

When spending levels are pegged, and you are starting out with a budget scope similar to current social programs, some particular company or bureaucracy is only going to capture a whole market if they do a really good job, since: A: people can opt out for cash, B: people can choose different services within the system, and C: people can spend their own income on whatever they want outside the universal system.

As long as you sustain fitness pressure on the services, and constrain the competition to performance rather than anti-competitive capture maneuvers, I think it could go well. 

Singapore's health system comes to mind since it is a very low percentage of GDP, has both public and private options, and there is still universal care (though you always face some degree of cost when getting care in order to avoid bad rationing). 

This sort of thing could go very poorly in practice if the government is too corrupt, but in that particular case it seems like there is sufficient fitness pressure on government hospitals from the private sector that they are probably doing a good job. 

Comment by Gentzel on Competitive Universal Basic Services? · 2020-11-11T19:19:26.004Z · LW · GW

The goal is to standardize a floor, not to chop up the ceiling. People would be free to buy whatever they want if they opt out. Those that opt in benefit from central coordination with others to solve the adverse selection problem in housing: each local area is incentivized to regulate things bigger and bigger than people need in order to keep away poor people, making housing more expensive everywhere than it needs to be and curtailing any innovation in the direction of making housing smaller. It probably isn't a coincidence that Japan has capsule hotels, better zoning, low housing costs, more spread-out high-speed transit, and a lot of wacky houses, given its barriers to immigration. Don't forget that most large group houses that people live in are illegal (though enforcement is lax unless something else goes wrong) and that cities all over the place have all sorts of NIMBY policy and rent controls that distort the market. That is the counterfactual you are replacing; there isn't a pure market counterfactual in any big city I can think of, but I would be interested to hear of one. The current equilibrium in most places creates an extremely strong incentive to create barriers to housing innovation, and accordingly they do. Singapore is hyper market-oriented and rich as a country, but nevertheless did central coordination on housing to drive down costs and erode the support base for communism (not saying one should copy all their policies, of course).

https://reasonstobecheerful.world/singapore-affordable-housing-freedom/#:~:text=In%20Singapore%2C%20housing%20is%20affordable%2C%20diverse%20and%20impeccably%20maintained.&text=80%20percent%20of%20Singaporeans%20live,Singaporean%20context%20in%20a%20moment.)

On the lack of paternalist services, I am getting at why the market doesn't do them by default. If there are so many, can you give one example? If you are just talking about gov ones, then we are talking about different things.

On GiveDirectly, I am incredibly skeptical of unconditional cash transfers vs. other targeted interventions in terms of direct effectiveness, and vs. institutional interventions for the long term: https://ssir.org/articles/entry/givedirectly_not_so_fast

In the abstract, interventions like this have always been possible since there have been political entities that use money, and yet you don't see any country anywhere getting rich via this mechanism. Governments throughout history instead coordinate and provide services that would otherwise be difficult to provide. When there are systematic transfers, they are usually conditional, in order to sustain good incentives over the long term.

More concretely, the costs the GiveDirectly RCT gives for roofing seem a bit strange once you start reasoning about local prices. If people are so poor they are earning like a dollar or two per day, it isn't going to cost $55 to thatch a tiny roof unless someone actually has to spend about a month doing it... and if metal roofing is less than $1 per square foot in materials, I have no idea how you get to a $400 roof when labor wages are so low.
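As a rough illustration of that back-of-envelope reasoning, here is a minimal sketch in Python. The $1-2/day wage range, the $55 thatch cost, the $400 metal roof cost, and the under-$1/sq-ft materials figure are the numbers from the paragraph above; the ~200 sq ft roof area is my own hypothetical assumption, not a figure from GiveDirectly.

```python
# Rough sanity check on the roofing numbers discussed above.
# Figures from the comment: wages of ~$1-2/day, $55 to thatch a roof,
# ~$400 for a metal roof, metal sheet materials under ~$1/sq ft.
# Assumption (mine, for illustration only): a small roof of ~200 sq ft.

thatch_cost = 55.0          # USD, reported cost of thatching
metal_roof_cost = 400.0     # USD, reported cost of a metal roof
roof_area_sqft = 200.0      # hypothetical roof size
material_per_sqft = 1.0     # USD/sq ft, upper bound on metal materials

for daily_wage in (1.0, 2.0):                       # USD per day
    thatch_days = thatch_cost / daily_wage          # labor-days if thatching were pure labor
    residual = metal_roof_cost - roof_area_sqft * material_per_sqft
    residual_days = residual / daily_wage           # labor-days implied beyond materials
    print(f"${daily_wage:.0f}/day: thatching ~{thatch_days:.0f} labor-days; "
          f"metal roof leaves ${residual:.0f} unexplained (~{residual_days:.0f} labor-days)")
```

Under those assumptions, the $400 metal roof embeds roughly 100-200 days of local labor (or unaccounted costs like transport, fasteners, or markup) beyond materials, while the thatch figure implies about a month of work at the top of the wage range; a larger assumed roof area shrinks the metal-roof gap, so the exercise mostly shows how sensitive the reported figures are to what is being priced in.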

Comment by Gentzel on The Alignment-Competence Trade-Off, Part 1: Coalition Size and Signaling Costs · 2020-01-20T20:47:41.080Z · LW · GW

I think those two cases are pretty compatible. The simple rules seem to get formed due to the pressures created by large groups, but there are still smaller sub-groups within large groups that can benefit from getting around the inefficiency caused by the rules, so they coordinate to bend the rules.

Hanson also has an interesting post on group size and conformity: http://www.overcomingbias.com/2010/10/towns-norm-best.html

In the vegan case, it is easier to explain things to a small number of people than to a large number, even though it may still not be worth your time with small numbers of people. It's easier to hash out an argument with one family member than to do something your entire family will impulsively think is hypocritical during Thanksgiving.

Comment by Gentzel on The Alignment-Competence Trade-Off, Part 1: Coalition Size and Signaling Costs · 2020-01-20T20:33:25.162Z · LW · GW

Yea, when you can copy the same value function across all the agents in a bureaucracy, you don't have to pay signaling costs to scale up. Alignment problems become more about access to information rather than having misaligned goals.

Comment by Gentzel on In Defense of the Arms Races… that End Arms Races · 2020-01-20T20:23:45.961Z · LW · GW

I think capacity races to deter deployment races are the best from this perspective: develop decisive capability advantages and all sorts of useful technology, credibly signal that you could use it for coercive purposes and could deploy it at scale, then don't abuse the advantage, signal good intent, and just deploy useful non-coercive applications. The development and deployment process basically becomes an escalation ladder itself, where you can choose to stop at any point (though you still need to keep people employed/trained to sustain credibility).

I would probably rephrase the "good guys" statement in terms of value alignment. For competitions where near-term racing risks are low, long-term racing risks are high, and the likely winner from starting race conditions earlier would be more value-aligned with your goals, racing may make sense. If initial race risks are high, or a less aligned actor is disproportionately likely to gain power, then pushing for more cooperative norms at the margin makes sense. You want to maximize your ability to foster or engineer risk-reducing cooperation before the times of highest risk, and you want to avoid forms of cooperation that increase risk.

Comment by Gentzel on In Defense of the Arms Races… that End Arms Races · 2020-01-20T19:59:41.095Z · LW · GW

Basically, even if there are adaptations that could make an animal more resistant to venom, the incremental changes in its circulatory system required to get there are so maladaptive/harmful that they can't happen.

This is a pretty core part of competitive strategy: matching enduring strengths against the enduring weaknesses of competitors.

That said, races can shift dimensions too. Even though snake venom won the race against blood, gradual changes in the lethality of venom might still cause gradual adaptive changes in the behavior of some animals. A good criticism of competitive strategies between states, businesses, etc. is that the repeated shifts in competitive dimensions can still result in Molochian conditions/trading away utility for victory, which may have been preventable via regulation or agreement.

Comment by Gentzel on In Defense of the Arms Races… that End Arms Races · 2020-01-20T17:11:13.383Z · LW · GW

Thanks, some of Quester's other books on deterrence also seem pretty interesting.

My post above was actually intended as a minor update to an old post from several years ago on my blog, so I didn't really expect it to be copied over to LessWrong. If I spent more time rewriting the post again, I think I would focus less on that case, which I think rightly can be contested from a number of directions, and talk more about conditions for race deterrence generally.

Basically, if you can credibly build up the capacity to win an arms race (with significant advantages in the relevant forms of talent, natural resources, industrial capacity, etc.), then you may not even have to race. Limited development could plausibly make that capacity credible and capture the positive externalities of cutting-edge R&D, while avoiding actually sinking a lot of the economy into the production of destabilizing systems. By showing extreme capability in a limited sense, and a credible ability to win a particular race, you may be able to deter racing, provided the communication of lasting advantage is credible. If lasting advantage is not credible, you may get more of a Sputnik or AlphaGo type event and galvanize competitors toward racing faster.

For global tech competition more generally, it would be interesting to investigate industrial subsidies by competing governments to see under what conditions countries attempt strategic protectionism and try to get around the WTO, and in which cases they give up a sector of competition. My prior is that protectionism is more likely when an industry is established, and that countries which could have successfully entered a sector can be deterred from doing so.

Comment by Gentzel on In Defense of the Arms Races… that End Arms Races · 2020-01-20T16:59:42.578Z · LW · GW

While it may sound counter intuitive, I think you want to increase both hegemony and balance of power at the same time. Basically a more powerful state can help solve lots of coordination problems, but to accept the risks of greater state power you want the state to be more structurally aligned with larger and larger populations of people.

https://www.amazon.com/Narrow-Corridor-States-Societies-Liberty-ebook/dp/B07MCRLV2K

Obviously states are more aligned with their own populations than with everyone, but I think the expansion of the U.S. security umbrella has been good for reducing the number of possible security dilemmas between states and accordingly people are better off than they would otherwise be with more independent military forces (higher defense spending, higher war risk, etc.). There is some degree of specialization within NATO which makes it harder for states to go to war as individuals, and also makes their contribution to the alliance more vital. The more this happens at a given resource level, the more powerful the alliance will be in absolute terms, and the more power will be internally balanced against unilateral actions that conflict with some state's interests, though at some point veto power and reduced redundancy could undermine the strength of the alliance.

For technological risks, racing increases risk in the short-run between the competitors but will tend to reduce the number of competitors. In the long-run, agreeing not to race while other technologies progress increases the amount of low hanging fruit and expands the scope of competition to more possible competitors. If you think resource-commandeering positive feedback loops are not super close, there might be a degree of racing you would want earlier to establish front-runners to win and deter potential market entrants from expanding the competition during a period of high-risk low-hanging fruit. You might be able to do better yet if the near term leading competitors can reach agreement to not race, and then team up to defeat or buyout new entrants. The leaders obviously can't hold everything completely still and expect to remain leaders though, and businesses should deliver measurable tech progress if they want to avoid anti-monopoly regulation.

Anyway, preventing races basically isn't as simple as choosing not to race. Even if your goal is just to minimize risk, you either have to credibly commit a larger and larger number of actors not to defect over time as technology and know-how diffuse, or you should want more aligned competitors to win and to cooperate to slow the risky aspects of racing.

Apologies if this wasn't clear from the post, the post was intended as a minor update to one I wrote several years ago, and I didn't expect to see it get copied over to LessWrong, haha.

Comment by Gentzel on The Cybersecurity Dilemma in a Nutshell · 2019-12-12T17:08:59.295Z · LW · GW

The book does assume from the start that states want offensive options. I guess it is useful to break down the motivations for offensive capabilities. Though the motivations aren't fully distinct, it matters whether a state is intruding as the prelude to, or an opening round of, a conflict, or whether it is just trying to improve its ability to defend itself without necessarily trying to disrupt anything in the network being intruded into. There are totally different motives too, like North Korea installing cryptocurrency miners on other countries' computers, but I guess you could analogize that to taxing territory from a foreign state without engaging its military.

The book basically argues that even if cybersecurity is your goal, a more cost-effective defense will almost always involve making intrusions for defensive purposes since it becomes prohibitively expensive to protect everything when the attacker can choose anywhere to strike.

I could see an argument that very small actors would do better to focus purely on defense, since if their networks are small enough, it may be easier to map them and protect everything extremely well, while it could require more talent to make useful intrusions into other networks. The larger an actor is (like a state), the more complex its systems are, the harder those systems are to centrally control and monitor, and presumably the more effective going on the offensive becomes as a way to counter intruders. I think states do make this calculation, and that's why they often also have smaller air-gapped systems that are easier to defend.

For defending the public though, it would be a nightmare to individually intervene in millions of online businesses, just as it would be a nightmare if the government had to post guards outside every business to prevent intrusion by foreign soldiers. When the landscape is like that, with far more vulnerabilities than adversaries, the potential adversaries are a rational point of focus.

Comment by Gentzel on Strategic High Skill Immigration · 2017-12-07T23:15:24.237Z · LW · GW

I suspect high-skill immigration probably helps more directly with other risks than with AI, due to the potential ease of espionage with software (though some huge datasets are impractical to steal). However, as risks from AI are likely more imminent, most of the net benefit will likely be concentrated in reductions in risk there, provided such changes are done carefully.

As for brain drain, it seems to be a net economic benefit to both sides, even if one side gets further ahead in the strategic sense: https://en.wikipedia.org/wiki/Human_capital_flight

Basically, smart people go places where they earn more, and send back larger remittances. There are some plausibly good effects on home-country institutions too: https://en.wikipedia.org/wiki/Human_capital_flight#Democracy,_human_rights_and_liberal_values

Comment by Gentzel on Strategic High Skill Immigration · 2017-12-07T23:00:15.563Z · LW · GW

We have lived in a multi-polar world where human alignment is a critical key to power: therefore in the most competitive systems, some humans got a lot of what they wanted. In the future with better AI, humans won't be doing as much of the problem solving, so keeping humans happy, motivating them, etc. might become less relevant to keeping strategic advantage. Why Nations Fail has a lot of examples on this sort of idea.

It's also true that states aren't unified rational actors, so this sort of analysis is more of a coarse-grained description of what happens over time: in the long run, the most competitive systems win, but in the short run, smaller coalition dynamics might prevent larger states from exploiting their position of advantage to the maximal degree.

As for happiness, autonomy doesn't require having all options, just some options. The US is simultaneously very strong while also having lots of autonomy for its citizens. The US was less likely to respect the autonomy of other countries during the Cold War, when it perceived existential risks from Communism: centralized power can be compatible with increased autonomy, but you want the centralized power to be in a system which is less likely to abuse power (though all systems abuse power to some degree).

Comment by Gentzel on Strategic High Skill Immigration · 2017-12-07T22:49:30.814Z · LW · GW

The most up to date version of this post can be found here: https://theconsequentialist.wordpress.com/2017/12/05/strategic-high-skill-immigration/

Comment by Gentzel on An update on Signal Data Science (an intensive data science training program) · 2016-04-10T10:10:07.090Z · LW · GW

At times the signal house was densely populated and a bunch of people got sick. These problems went away over time as some moved out, and we standardized better health practices (hand sanitizer freely available, people spreading out or working from their rooms if sick, etc).

Comment by Gentzel on An update on Signal Data Science (an intensive data science training program) · 2016-04-09T21:56:08.095Z · LW · GW

I think it is better to assess personal fit for the bootcamp. There are a lot of advantages I think you can get from the program that would be difficult to acquire quickly on your own.

Aside from lectures, a lot of the program was self study, including a lot of my most productive time at the bootcamp, but there was normally the option to get help, and it was this help, advice, and strategy that I think made the program far more productive than what I would have done on my own, or in another bootcamp for that matter (I am under the impression longer bootcamps may develop specific skills at using the software better, but they don't convey nearly the same level of conceptual understanding of statistics in data science, and likewise there are many types of mistakes graduates of other programs will make that graduates of Signal's cohort have been taught not to). When there was not the option to get help, I usually shifted my work schedule and it wasn't much of a problem: there are so many projects to work on, that there was almost always something productive to work on where I wouldn't get stuck (optional exercises on prior projects or making prior projects better). I can see this being very frustrating for some people though, as getting stuck and not having immediate feedback interrupts flow.

Many of the organizational problems didn't seem to really be problems, and seemed more like differences which are good for some and not for others. Pair programming was not always optimal due to the large degree of differences between students. It wouldn't have made sense for everyone to pair program since it would have been holding back some of the faster students. A more rigid structure would have helped people who were less naturally self directed/focused though. Organizational problems that happened with respect to the first cohort in terms of setting up (furniture, internet, whiteboards, etc.) are unlikely to be problems for future cohorts now that the instructors have learned from experience and have a place set up. The first cohort took the risks and costs of such things, which later cohorts probably won't have to worry about.

This is not like other bootcamps: it is less expensive, it is more individually focused rather than having the entire group all doing the same curriculum, and there are a bunch of rationalists iteratively helping you decide which jobs are best to apply to, who can network you into what position, and which skills actually matter most for the specific jobs you are aiming at. I don't expect you to be able to have the same opportunities at a normal bootcamp, but a normal bootcamp is probably also lower risk if you don't trust yourself to make things work out (other programs may have quizzes where they throw you out if you fail, and essentially force you to remain focused; with Signal you are more in control yourself, and can take time off to apply to jobs).