Posts

Natural laws should be explicit constraints on strategy space 2019-08-13T20:22:47.933Z · score: 10 (3 votes)
Offering public comment in the Federal rulemaking process 2019-07-15T20:31:39.182Z · score: 19 (4 votes)
Outline of NIST draft plan for AI standards 2019-07-09T17:30:45.721Z · score: 19 (5 votes)
NIST: draft plan for AI standards development 2019-07-08T14:13:09.314Z · score: 17 (5 votes)
Open Thread July 2019 2019-07-03T15:07:40.991Z · score: 15 (4 votes)
Systems Engineering Advancement Research Initiative 2019-06-28T17:57:54.606Z · score: 23 (7 votes)
Financial engineering for funding drug research 2019-05-10T18:46:03.029Z · score: 11 (5 votes)
Open Thread May 2019 2019-05-01T15:43:23.982Z · score: 11 (4 votes)
StrongerByScience: a rational strength training website 2019-04-17T18:12:47.481Z · score: 15 (7 votes)
Machine Pastoralism 2019-04-03T16:04:02.450Z · score: 12 (7 votes)
Open Thread March 2019 2019-03-07T18:26:02.976Z · score: 10 (4 votes)
Open Thread February 2019 2019-02-07T18:00:45.772Z · score: 20 (7 votes)
Towards equilibria-breaking methods 2019-01-29T16:19:57.564Z · score: 23 (7 votes)
How could shares in a megaproject return value to shareholders? 2019-01-18T18:36:34.916Z · score: 18 (4 votes)
Buy shares in a megaproject 2019-01-16T16:18:50.177Z · score: 15 (6 votes)
Megaproject management 2019-01-11T17:08:37.308Z · score: 57 (21 votes)
Towards no-math, graphical instructions for prediction markets 2019-01-04T16:39:58.479Z · score: 30 (13 votes)
Strategy is the Deconfusion of Action 2019-01-02T20:56:28.124Z · score: 75 (24 votes)
Systems Engineering and the META Program 2018-12-20T20:19:25.819Z · score: 31 (11 votes)
Is cognitive load a factor in community decline? 2018-12-07T15:45:20.605Z · score: 20 (7 votes)
Genetically Modified Humans Born (Allegedly) 2018-11-28T16:14:05.477Z · score: 30 (9 votes)
Real-time hiring with prediction markets 2018-11-09T22:10:18.576Z · score: 19 (5 votes)
Update the best textbooks on every subject list 2018-11-08T20:54:35.300Z · score: 78 (28 votes)
An Undergraduate Reading Of: Semantic information, autonomous agency and non-equilibrium statistical physics 2018-10-30T18:36:14.159Z · score: 31 (7 votes)
Why don’t we treat geniuses like professional athletes? 2018-10-11T15:37:33.688Z · score: 20 (16 votes)
Thinkerly: Grammarly for writing good thoughts 2018-10-11T14:57:04.571Z · score: 6 (6 votes)
Simple Metaphor About Compressed Sensing 2018-07-17T15:47:17.909Z · score: 8 (7 votes)
Book Review: Why Honor Matters 2018-06-25T20:53:48.671Z · score: 31 (13 votes)
Does anyone use advanced media projects? 2018-06-20T23:33:45.405Z · score: 45 (14 votes)
An Undergraduate Reading Of: Macroscopic Prediction by E.T. Jaynes 2018-04-19T17:30:39.893Z · score: 38 (9 votes)
Death in Groups II 2018-04-13T18:12:30.427Z · score: 32 (7 votes)
Death in Groups 2018-04-05T00:45:24.990Z · score: 48 (19 votes)
Ancient Social Patterns: Comitatus 2018-03-05T18:28:35.765Z · score: 20 (7 votes)
Book Review - Probability and Finance: It's Only a Game! 2018-01-23T18:52:23.602Z · score: 18 (9 votes)
Conversational Presentation of Why Automation is Different This Time 2018-01-17T22:11:32.083Z · score: 70 (29 votes)
Arbitrary Math Questions 2017-11-21T01:18:47.430Z · score: 8 (4 votes)
Set, Game, Match 2017-11-09T23:06:53.672Z · score: 5 (2 votes)
Reading Papers in Undergrad 2017-11-09T19:24:13.044Z · score: 42 (14 votes)

Comments

Comment by ryan_b on The Power to Solve Climate Change · 2019-09-13T17:02:57.910Z · score: 2 (1 votes) · LW · GW

It's probably worth emphasizing the huge challenge of getting a coordination process to output something highly specific. Specificity is enough of a problem when used to privately evaluate start-ups; getting from there, through governments, and then through international relations to a highly specific treaty is a good candidate for the most complicated possible task.

Comment by ryan_b on The Power to Judge Startup Ideas · 2019-09-06T18:18:21.480Z · score: 4 (2 votes) · LW · GW

I have heard from several angel investors words to the effect of "I don't invest in ideas, I invest in people." Which is to say they prefer a good group of founders with a mediocre idea to a less reliable group of founders with a better one.

This seems similar to your high generic competence standard. The hitch is that the preference for a good team over a good idea doesn't rest completely on the likelihood that a mediocre idea will be successfully executed, but also on the likelihood that the good team will recognize the mediocrity of the idea and successfully shift to a new one. Quoting from Paul Graham's essay linked above:

So don't get too attached to your original plan, because it's probably wrong. Most successful startups end up doing something different than they originally intended — often so different that it doesn't even seem like the same company.

I feel like the ability to recognize and then articulate value should be included in the idea of generic competence. Likewise for things like opposition research: following the advertising example, I don't see why we can't just recurse on execution advantages with the same basic structure of a story. It is like a Value Sub-Proposition Story, where the specific person is the entrepreneur and the specific problem is delivering on some aspect of the Value Proposition (by getting it in front of people).

It still seems useful to the investor to know whether or not execution advantages are specific and what they may be, and therefore also useful to the entrepreneur to articulate them.

Comment by ryan_b on Utility ≠ Reward · 2019-09-06T15:43:45.264Z · score: 14 (8 votes) · LW · GW

I strongly approve of providing less-formal essays to aid with clarity and intuition.

Comment by ryan_b on Living the Berkeley idealism · 2019-09-05T15:06:30.744Z · score: 2 (1 votes) · LW · GW

This post reminded me of a video about a different way to prepare textbooks, using code written in Julia. The book is Algorithms for Optimization. It's beyond my ability to assess the content quality, but it looks pretty sharp.

Comment by ryan_b on The Power to Judge Startup Ideas · 2019-09-05T14:58:17.854Z · score: 7 (3 votes) · LW · GW

I have the same belief about startups, but I don't see it as being in conflict with the Value Prop Story. I would go further and say it is really important to be able to link the execution to the value proposition, because otherwise what are you executing exactly?

Naively, if neither A nor B has a value proposition, we expect them both to fail. If A does have one, it is trivial for B to claim theirs is higher as a result of execution. This covers things like:

  • UX design -> easier to use
  • Hiring | coding -> shorter time to delivering value, and add more value faster
  • Minimising downtime | customer support -> value is more likely to be there when the user wants it
  • Advertising | sales | expanding -> able to put the value proposition in front of more users, faster

Execution is the causal explanation for delivering value, so being able to articulate this feels like a huge advantage.

Comment by ryan_b on Open & Welcome Thread - August 2019 · 2019-09-01T20:50:12.361Z · score: 2 (1 votes) · LW · GW

Sure! Message me and we’ll schedule.

Comment by ryan_b on What are the biggest "moonshots" currently in progress? · 2019-09-01T20:37:20.158Z · score: 3 (2 votes) · LW · GW

By way of clarification:

  • Are we talking about biggest-projects-which-are-moonshots, which might tell us a lot about how to get funding or the things the public/governments like?

  • Are we talking longest-moonshots in the sense of furthest time horizon or least likely?

  • Are we talking biggest-moons-at-which-there-is-a-shot, in the sense of benefiting some aspect of civilization?

All of these seem reasonable, but might be worth distinguishing in answers.

Comment by ryan_b on [Link] Book Review: Reframing Superintelligence (SSC) · 2019-08-30T20:21:57.020Z · score: 3 (2 votes) · LW · GW

I am aiming directly at questions of how an AI that starts with only a robotic arm might get to controlling drones or trading stocks, from the perspective of the AI. My intuition, driven by Moravec's Paradox, is that each new kind of output (or input) has a pretty hefty computational threshold associated with it, so I suspect that the details of the initial inputs/outputs will have a big influence on the risk any given service or agent presents.

The reason I am interested in this is that doing things seems to have no intrinsic connection to learning things, and we only link them because so much of our learning and doing is unconscious. That is to say, I suspect actions are orthogonal to intelligence.

Comment by ryan_b on Open & Welcome Thread - August 2019 · 2019-08-30T15:59:26.195Z · score: 2 (1 votes) · LW · GW

I went from being indecisive and procrastinating to being much more decisive. The change happened while I was in the military, and there were two key developments. The first was constant exposure to situations where a decision needed to be made, and suffering the consequences when it wasn't; the second was a better understanding of the relationship between decisions and long-term objectives.

This doesn't map perfectly to civilian life. For example, I am as prone to procrastination as ever when it comes to luxuries or things I consider wasteful; it is really hard for me to make decisions that amount to setting money on fire, and since luxuries are unnecessary, no decision is usually the best decision anyway.

There have also been some surprise benefits. The central insight for making sense of Army decisions is that the overriding priority is unity, which winds up meaning that the Army prefers successfully coordinating on a stupid thing to failing to coordinate on a smart one. A decision that leads to failed coordination is always bad. This has been extremely helpful for making sense of any kind of leadership activity, and most importantly it is excellent practical guidance for family life.

Comment by ryan_b on [Link] Book Review: Reframing Superintelligence (SSC) · 2019-08-30T14:19:29.024Z · score: 5 (3 votes) · LW · GW

This sounds like we're resting on an abstract generalization of 'outputs.' Is there any work being done to distinguish between different outputs, and consider how a computer might recognize a kind it doesn't already have?

Comment by ryan_b on [Link] Book Review: Reframing Superintelligence (SSC) · 2019-08-29T20:34:14.028Z · score: 3 (2 votes) · LW · GW

A lot of the distinction between a service and an agent seems to rest on the difference between thinking and doing. Is there a well-defined concept of action for intelligent agents?

Comment by ryan_b on [deleted post] 2019-08-29T20:00:09.762Z

I feel like a Rube Goldberg device would be a good intuition pump here. How can we describe a Rube Goldberg device in terms of correlations? What is a good way to break it into chunks, and then also a good way to connect those chunks? Since they are usually built of simple machines, everything is mathematically tractable - we have a good grip on those.

Comment by ryan_b on [deleted post] 2019-08-29T19:55:17.736Z

3. Naively, actions feel like they require causal reasoning, and causal reasoning of any kind seems to require the ability to reason about two parts of the environment. One of these parts can be you (or the AI).

But I am not sure this is the case. Strong correlation seems to be good enough for a human brain - we do all kinds of actions without any understanding of what we are doing or why. This can go as far as provoking conscious confusion during the action. Based on this lower standard, what correlation would be needed?

Boundaries of some kind, because we need some way to localize what we are doing and looking at. Strong correlation, chiefly as a matter of efficiency. We want a way to describe a correlation such that we can chain it with other correlations, and then eventually bundle them together as an action. Doing new actions is then a matter of chaining the correlations back to something we can currently do.
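As a toy numerical sketch of this chaining intuition (the linear-Gaussian setup and all numbers here are my own illustration, not from any source): in a chain X → Y → Z where each step is a noisy function of the last, the link correlations multiply, so even strong links decay over a long enough chain.

```python
import random
import statistics

def corr(a, b):
    """Pearson correlation of two equal-length samples."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    return cov / (statistics.pstdev(a) * statistics.pstdev(b))

random.seed(0)
n = 50_000
r = 0.9                     # correlation of each individual link
noise = (1 - r * r) ** 0.5  # noise scale that keeps unit variance

# A chain of "actions": each step is a noisy function of the previous one.
x = [random.gauss(0, 1) for _ in range(n)]
y = [r * xi + noise * random.gauss(0, 1) for xi in x]
z = [r * yi + noise * random.gauss(0, 1) for yi in y]

print(round(corr(x, y), 2))  # ≈ 0.9
print(round(corr(x, z), 2))  # ≈ 0.81: the links multiply (0.9 * 0.9)
```

In this picture, learning a new action amounts to extending the chain, and the multiplicative decay is one reason to want each link to be strong.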

Comment by ryan_b on [deleted post] 2019-08-29T19:28:39.320Z

Actions are not just Embedded Agency in a different guise. From the Full-Text Version it looks to me like what actions are and how to discover them is abstracted away, which makes sense in the context of that project.

It appears most relevant to problems associated with multi-level models.

Comment by ryan_b on [deleted post] 2019-08-29T18:49:53.202Z

2. Is thinking about actions just rephrasing the agent-environment question? It feels like the answer is no, because it isn't as though being able to specify the relationship between the agent and the environment changes the need to compute the specific details of any particular action.

But it might be impossible to specify an action exactly without being able to specify the agent-environment relationship exactly. Could it be (or is it) stated implicitly?

Comment by ryan_b on [deleted post] 2019-08-29T18:42:33.809Z

1. I am deeply confused by this.

The older conversations about Tool AI seemed to focus on the difference between an Oracle that answers questions and one that does things. I feel like this distinction is bigger, and in a different way, than it was made out to be, because doing things is really hard. It feels like the paradox of sensing being complicated should cut both ways.

Checking the Wikipedia page for Moravec's Paradox, "sensorimotor" is how they describe it, so both sensing and motor skills (inputs and outputs) are covered. My intuition fairly screams that this should generalize to any other environment-affecting action. So:

  • The more inputs an AI starts with, the easier it is to recognize other inputs/outputs.
  • The more outputs an AI starts with, the easier it is to add other inputs/outputs.
  • This still doesn't identify what causes the machine to try to affect the world at all.

Comment by ryan_b on [deleted post] 2019-08-29T15:24:43.091Z

For the Book Review: Reframing Superintelligence (SSC) linkpost:

There seems to be some missing linkage between what a computer knows and what it can do. I feel like there is some notion of action that is missing: how the heck does an AI of any given sophistication add new actions to its repertoire? This can't happen in any software we currently have - even the ability to categorize or define actions doesn't imply the ability to create new ones.

Logical Actions are Optimization Channels: intuition from Information Theory - a given action is like a channel, and the message is optimization of the environment vis-a-vis goals. Logical actions are not the same as actual actions: for example, it is obvious to humans that we can look at what a computer actually does to get information about its intentions, so having drones steal candy from babies can turn us against it regardless of what it displays on the monitor. But a logical action of 'Signal Good Intentions' encompasses both what the monitor displays and how humans perceive the drone activity. Further, we can look at how dividing bandwidth up into multiple channels impacts the efficiency of transmitting a message as an intuition for how more logical actions increase the capability of the AI.

This seems to be orthogonal to the question of agency - even an AI with many logical actions that it optimizes won't generate new ones unless one of those logical actions is 'search action space for new actions'. This makes it clear that Tool AIs with a large action set will be strictly more powerful than Agent AIs that only start with 'search action space' up until a certain point.
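The bandwidth intuition can be made concrete with the Shannon-Hartley formula (this is my own toy illustration of the analogy; the numbers are arbitrary): splitting a fixed bandwidth-and-power budget into more channels adds nothing, while a genuinely new, independent channel adds its full capacity.

```python
import math

def capacity(bandwidth_hz, signal_power_w, noise_density):
    """Shannon-Hartley capacity of a single AWGN channel, in bits per second."""
    return bandwidth_hz * math.log2(1 + signal_power_w / (noise_density * bandwidth_hz))

B, P, N0 = 1e6, 1.0, 1e-9  # arbitrary illustrative values

# Splitting one channel's budget four ways leaves total capacity unchanged:
one_wide = capacity(B, P, N0)
four_narrow = 4 * capacity(B / 4, P / 4, N0)
print(math.isclose(one_wide, four_narrow))  # True

# A second independent channel, with its own bandwidth and power, doubles it:
print(2 * capacity(B, P, N0) / one_wide)  # 2.0
```

On this analogy, 'search action space for new actions' is the one logical action that adds channels rather than merely using the existing ones harder.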


Comment by ryan_b on Where are people thinking and talking about global coordination for AI safety? · 2019-08-28T14:30:05.587Z · score: 2 (1 votes) · LW · GW

4. Information technology has massively increased certain kinds of coordination (e.g., email, eBay, Facebook, Uber), but at the international relations level, IT seems to have made very little impact. Why?

I note the coordination is entirely at a lower-level than those companies: mostly individuals are using these services for coordination, as well as small groups. It seems like coordination innovations aren't bottom up, but rather top-down (even if the IT examples are mostly opt-in). This seems to match other large coordination improvements, like empire, monotheism, or corporations. There is no higher level of abstraction than governments from which to improve international relations, it seems to me.

Quite separately, we could ask: what are the specific challenges in international relations that IT could address? The problems mostly revolve around questions of trust, questions of the basic competence of human agents (diplomats, ambassadors, heads of state, etc), and fundamental conflicts of interest. International relations has an irreducible component of face-to-face personal relationships, so I would expect tools built around that or to facilitate it to be the most relevant.

That being said, it's also clear that Facebook and Uber aren't even trying to target problems related to international relations. We know contracting with multiple governments is achievable, because people like Google, Microsoft, and Palantir all manage it selling IT for intelligence purposes. Dominic Cummings has a blog post High performance government, ‘cognitive technologies’, Michael Nielsen, Bret Victor, & ‘Seeing Rooms’ that speculates about how international relations could be improved by making the stupendous complexity of the information at work more readily available to decision makers, both for educational purposes and in real time. Maybe there would be an opportunity for a Situation Room Company, or similar. Following on the personal relationship observation, perhaps something like Salesforce-but-for-diplomacy would have some value.

Comment by ryan_b on Open & Welcome Thread - August 2019 · 2019-08-28T14:15:29.350Z · score: 8 (4 votes) · LW · GW

I recently saw several episodes of Boss Baby: Back in Business, which is an animated series on Netflix, continuing the movie. It is fairly stupid, as you would expect from something done in the style of children's animated series.

But it also seems to be a pretty good example of writing children's picture books about corporate life and Moral Mazes.

Comment by ryan_b on [deleted post] 2019-08-27T21:47:54.332Z

4. Information technology has massively increased certain kinds of coordination (e.g., email, eBay, Facebook, Uber), but at the international relations level, IT seems to have made very little impact. Why?

I note the coordination is entirely at a lower-level than those companies: mostly individuals are using these services for coordination, as well as small groups. It seems like coordination innovations aren't bottom up, but rather top-down (even if the IT examples are mostly opt-in). This seems to match other large coordination improvements, like empire, monotheism, or corporations. There is no higher level of abstraction than governments from which to improve international relations, it seems to me.

Quite separately, we could ask: what are the specific challenges in international relations that IT could address? The problems mostly revolve around questions of trust, questions of the basic competence of human agents (diplomats, ambassadors, heads of state, etc), and fundamental conflicts of interest. None of these are really addressable with off-the-shelf IT solutions.

That being said, it's also clear that Facebook and Uber aren't even trying to target problems related to international relations. We know contracting with multiple governments is achievable, because people like Google, Microsoft, and Palantir all manage it selling IT for intelligence purposes. Dominic Cummings has a blog post High performance government, ‘cognitive technologies’, Michael Nielsen, Bret Victor, & ‘Seeing Rooms’ that speculates about how international relations could be improved by making the stupendous complexity of the information at work more readily available to decision makers, both for educational purposes and in real time. Maybe there would be an opportunity for a Situation Room Industries, or similar.


Comment by ryan_b on [deleted post] 2019-08-27T20:58:15.517Z

From Where are people thinking and talking about global coordination for AI safety?

3. When humans made advances in coordination ability in the past, how was that accomplished? What are the best places to apply leverage today?

Chiefly because groups that were not sufficiently coordinated were destroyed, or absorbed by competing groups.

Comment by ryan_b on Soft takeoff can still lead to decisive strategic advantage · 2019-08-27T15:23:00.585Z · score: 2 (1 votes) · LW · GW

I am strongly convinced this boils down to bad decisions.

There's a post over at the Scholar's Stage about national strategy for the United States [low politics warning], and largely it addresses the lack of regional expertise. The part about Rome:

None of these men received any special training in foreign languages, cultures, diplomacy, or statecraft before attaining high rank. Men were more likely to be chosen for their social status than proven experience or familiarity with the region they were assigned to govern. The education of these officials was in literature, grammar, rhetoric, and philosophy, and their ability to govern was often judged on their literary merits. The historian Susan Mattern discusses one example of this in her masterful study of Roman strategy, Rome and the Enemy. The key passage comes from Tacitus, who reports that the Emperor Nero was better placed to deal with Parthian shenanigans in Armenia than Claudius, for he was advised by Burrus and Seneca, "men known for their expertise in such matters" (Annals 13.6).

And later:

This had significant strategic implications. Between the conquest of Dalmatia in the early days of the Principate and the arrival of the Huns in the days of Late Antiquity, it is difficult to find an enemy on Rome's northern borders that was not created by Rome itself. Rehman has already noted that the greatest defeat of the Principate, that of Teutoburg Forest, was the work of a man in Rome's employ. Teutoburg is but one point in a pattern that repeated for centuries. Most Germanic barbarian groups did not live in oppida, as the Celts did, and had little political hierarchy to speak of. When Romans selected local leaders to negotiate with, favor with trade or other boons, and use as auxiliary allies in war, they were transforming petty chiefs into kings. Roman diplomatic norms, combined with unrelenting Roman military pressure, created the very military threats they were hoping to forestall.

The same pattern happened to China with the steppe, and (in my opinion) it matches pretty closely what is happening with the United States in Central Asia and the Middle East.

Comment by ryan_b on Soft takeoff can still lead to decisive strategic advantage · 2019-08-27T13:51:16.091Z · score: 6 (3 votes) · LW · GW

A book that completely changed my way of thinking about this sort of thing is Supplying War, by Martin Van Creveld. It is a history of logistics from the Napoleonic Era to WW2, mostly in Europe.

One startling revelation (to me) is that WW1 was the first war where supply lines became really important, because everything from the bores of the artillery to the gauge of the rail lines was sufficiently differentiated that you could no longer simply take the enemy's stuff and use it. At the same time, the presence of rail finally meant it was actually feasible to transport enough supplies from an industrial core to the border to make a consistent difference.

All prior conflicts in Europe relied on forage and capture of enemy equipment for the supply of armies.

Comment by ryan_b on Soft takeoff can still lead to decisive strategic advantage · 2019-08-26T15:06:33.129Z · score: 7 (4 votes) · LW · GW

I claim that 1939 Germany would not be able to conquer western Europe. There are two reasons for this: first, 1939 Germany did not have reserves in fuel, munitions, or other key industrial inputs to complete the conquest when they began (even allowing for the technical disparities); second, the industrial base of 1910 Europe wasn't able to provide the volume or quality of inputs (particularly fuel and steel) needed to keep the warmachine running. Europe would fall as fast as 1939 German tanks arrived - but I expect those tanks to literally run out of gas. Of course if I am wrong about either of those two core arguments I would have to update.

I am not sure what lessons to draw about the AGI scenario in particular either; mostly I am making the case for extreme caution in the assumptions we make for modelling the problem. The Afghanistan example shows that capability and goals can't be disentangled the way we usually assume. Another particularly common one is the perfect information assumption. As an example, my current expectation in a slow takeoff scenario is multiple AGIs which each have Decisive Strategic Advantage windows at different times but do not execute it for uncertainty reasons. Strictly speaking, I don't see any reason why two different entities could not have Decisive Strategic Advantage simultaneously, in the same way the United States and Soviet Union both had extinction-grade nuclear arsenals.

Comment by ryan_b on Book Review: Secular Cycles · 2019-08-23T21:18:20.580Z · score: 3 (2 votes) · LW · GW

The secular cycles are based around Malthusian population growth, but we are now in a post-Malthusian regime where land is no longer the limiting resource. And the cycles seem to assume huge crises killing off 30% to 50% of the population

While I understand Malthus was very specific about the question of land and population, I always find it feels wrong to say "post-Malthusian" because it seems to imply we're done with constraints entirely. By contrast, I feel like if we were bumping up against any constraint at all we should see something resembling Malthusian dynamics. The question is, what are the actual constraints now?

I'm not even convinced they need to be fixed. It seems perfectly reasonable to have a constraint that is itself dynamic; Malthus used land largely as a proxy for food supply, which we know is more variable than land because of locusts and blight and weather. I'm inclined to look at things like energy production as the modern equivalent. I also wonder about that 30-50% number - it seems within reach for a crisis to cause that much in losses to capital, for example.

What about constraints based on outputs rather than inputs? Greenhouse gas, say, or nitrogen?
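A minimal sketch of what a dynamic constraint could look like (entirely my own toy model, not from Turchin or the review): logistic growth where the carrying capacity itself fluctuates, the way food supply swings with weather or blight. The population still tracks the moving ceiling rather than escaping it, so Malthusian-style dynamics survive even without a fixed limit.

```python
import math

def step(pop, carrying_capacity, r=0.05):
    """One step of discrete logistic growth toward a (possibly moving) ceiling."""
    return pop + r * pop * (1 - pop / carrying_capacity)

pop = 10.0
for t in range(400):
    k = 1000 * (1 + 0.3 * math.sin(t / 20))  # the constraint itself oscillates
    pop = step(pop, k)

# After the transient, the population hovers inside the band K sweeps through.
print(600 < pop < 1400)  # True
```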

Comment by ryan_b on What authors consistently give accurate pictures of complex topics they discuss? · 2019-08-23T20:55:59.885Z · score: 2 (1 votes) · LW · GW

I think the question of how efficient electric motors are is largely irrelevant to the purposes of the book. The whole point is to explore what our current requirements actually are, and whether renewable energy can meet them. Until and unless the majority of British drivers are in electric cars, it doesn't seem to have a bearing on that question.

The 'without the hot air' angle is to eliminate the problem environmental proposals often have, where totally changing one aspect of society is totally feasible given that another aspect totally changes at the same time. You will also notice that his numbers for solar power aren't based on theoretical, or even estimated future, conversion efficiency; they are based on the efficiency available on the market at the time he was writing (he uses 10% efficiency for the kind of panel you could feasibly install everywhere, and 20% for the top-of-the-line models, in 2008).
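To show how those market-grade numbers feed the book's arithmetic, here is a back-of-envelope calculation in MacKay's style; the insolation and roof-area figures are my own ballpark assumptions, not his exact numbers:

```python
insolation = 100           # W/m2, rough year-round average for Britain (my ballpark)
efficiency = 0.10          # the "install everywhere" panel grade mentioned above
roof_area_per_person = 10  # m2 of usable south-facing roof, an assumed figure

watts = insolation * efficiency * roof_area_per_person
kwh_per_day = watts * 24 / 1000
print(f"{watts:.0f} W per person, {kwh_per_day:.1f} kWh per day")
# 100 W per person, 2.4 kWh per day
```

Set against total consumption on the order of a hundred or two kWh per person per day, this is the kind of gap the book uses to argue that consumption has to come down.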

That being said, you have effectively skipped to the conclusion of the book: the answer is no, at current consumption levels sustainable energy isn't feasible; the solution we have to pursue is reducing our energy consumption. He addresses the benefits of more efficient forms of getting around (including electric cars and trains and such) in the Transport chapter.

Comment by ryan_b on Book Review: Secular Cycles · 2019-08-23T20:28:42.312Z · score: 3 (2 votes) · LW · GW

You will probably be interested in Seshat: the Global History Databank, a project led by Turchin. We're still a long way from standard benchmarks, mind you, but we're at least on the way to a database upon which any kind of quantitative work can be done.

They recently cracked the problem of whether big gods or big societies came first.

Comment by ryan_b on Soft takeoff can still lead to decisive strategic advantage · 2019-08-23T17:58:16.394Z · score: 11 (5 votes) · LW · GW

I broadly agree that Decisive Strategic Advantage is still plausible under a slow takeoff scenario. That being said:

Objection to Claim 1A: transporting 1939 Germany back in time to 1910 is likely to cause a sudden and near-total collapse of their warmaking ability because 1910 lacked the international trade and logistical infrastructure upon which 1939 Germany relied. Consider the Blockade of Germany, and that Czarist Russia would not be able to provide the same trade goods as the Soviet Union did until 1941 (nor could they be invaded for them, like 1941-1945). In general I expect this objection to hold for any industrialized country or other entity.

The intuition I am pointing to with this objection is that strategic advantage, including Decisive Strategic Advantage, is fully contextual; what appear to be reasonable simplifying assumptions are really deep changes to the nature of the thing being discussed.

To reinforce this, consider that the US invasion of Afghanistan is a very close approximation of the 30 year gap you propose. At the time the invasion began, the major source of serious weapons in the country was the Soviet-Afghan War which ended in 1989, being either provided by the US covert alliance or captured from the Soviets. You would expect at least local strategic advantage vis-a-vis Afghanistan. Despite this, and despite the otherwise overwhelming disparities between the US and Afghanistan, the invasion was a political defeat for the US.

Comment by ryan_b on Natural laws should be explicit constraints on strategy space · 2019-08-22T15:05:07.200Z · score: 5 (2 votes) · LW · GW

I am sympathetic to this feeling, but as it happens c pops up almost immediately because of communication and targeting requirements. Radios, radar, laser guidance, and various kinds of telemetry all have to use the speed of light (at least in air) explicitly in their operation.

Comment by ryan_b on Natural laws should be explicit constraints on strategy space · 2019-08-22T14:53:26.613Z · score: 2 (1 votes) · LW · GW

You're recommending that if we think we're constrained and can't identify the natural law that's binding, the constraint is probably imaginary or contingent on some other thing we should examine?

I separated this one out because it is an excellent idea. I had not gotten that far, but this is a superb way to proceed for integrating new constraints in general.

Comment by ryan_b on Natural laws should be explicit constraints on strategy space · 2019-08-22T14:49:49.789Z · score: 2 (1 votes) · LW · GW

You talk about pilot as a constraint and the obvious removal of the constraint (unmanned fighters). This is the opposite of a natural law: it's an assumed constraint or a constraint within a model, not a natural law.

Yes, exactly; this is why natural laws should be explicit. When the assumed constraint was broken, this surprised a lot of people, and surprise is a bad place to be.

I think " We have a good command of natural law at the scale where warmachines operate. " is exactly opposite of what I believe

That's interesting - would you be willing to describe this in more detail? Ships, planes, and tanks are all in the Newtonian mechanics and classical Maxwell's Equations regime; it's a lot of combustion engines, rockets, radios, and ballistics. Though weirdly we don't have a good understanding of how explosions happen. Outside of GPS, we don't even really use relativity; I'd be surprised if we had a better understanding of natural law at any other scale.

We have some hints as to natural law in those scales, but we're nowhere near those constraints. There are a huge number of contingent constraints in our technology and modeling of the problem, which are very likely overcome-able with effort.

That's the motivation in a nutshell. Following the example of transistors, we know what the physical constraints are and also that we are quite close to them now. We have a consistent experience of each step closer to those constraints being harder to achieve than the one before it, which I expect to generalize to other examples. Assuming I am correct, you can then estimate how difficult something is to overcome (and therefore how likely it is to happen) by seeing how close to the natural law constraint it is.

I feel it is similar to the low hanging fruit hypothesis for scientific progress. We use distance from the limits of natural law as the yardstick for how low the strategic fruit is hanging.

Comment by ryan_b on Matthew Barnett's Shortform · 2019-08-13T21:13:44.600Z · score: 15 (5 votes) · LW · GW

I used to feel similarly, but then a few things changed for me and now I am pro-textbook. There are caveats - namely that I don't work through them continuously.

Textbooks seem overly formal at points

This is a big one for me, and the biggest change I made was becoming much more discriminating about what I look for in a textbook. My concerns are invariably practical, so I only demand enough formality to be relevant; otherwise I look for a good reputation for explaining intuitions, plus graphics, examples, and ease of reading. I would go as far as to say that style is probably the most important feature of a textbook.

As I mentioned, I don't work through them front to back, because that actually is homework. Instead I treat them more like a reference-with-a-hook; I look at them when I need to understand the particular thing in more depth, and then get out when I have what I need. But because it is contained in a textbook, this knowledge now has a natural link to steps before and after, so I have obvious places to go for regression and advancement.

I spend a lot of time thinking about what I need to learn, why I need to learn it, and how it relates to what I already know. This does an excellent job of helping things stick, and also of keeping me from getting too stuck because I have a battery of perspectives ready to deploy. This enables the reference approach.

I spend a lot of time doing what I have mentally termed triangulating, which is deliberately using different sources/currents of thought when I learn a subject. This winds up necessitating the reference approach, because I always wind up with questions that are neglected or unsatisfactorily addressed in a given source. Lately I really like founding papers and historical review papers right out of the gate, because these are prone to explaining motivations, subtle intuitions, and circumstances in a way instructional materials are not.

Comment by ryan_b on [deleted post] 2019-08-13T20:19:34.373Z

Rough draft for Scott's Secular Cycles post:


This doesn't qualify as criticism per se, but might offer some help for coloring in the edges. The only real suspicion I have about Turchin's work is that it follows the traditional model of only looking at agrarian empires, even though better information is available now outside of this traditional focus.

  • Significant change in the understanding of the Mongols and other nomadic empires
    • From Needy Nomad (of material goods for survival) to Tradey Nomad (of luxury goods for maintaining their social organization), from Thomas Barfield in The Perilous Frontier.
      • Under this lens, a large unified agrarian state provides enough luxury trade and raiding for large nomadic confederacies to form.
    • Beckwith goes further in Empires of the Silk Road (into controversy), arguing that nomads are the drivers of Eurasian commerce. This is on the basis of records detailing huge importations of finished goods, including things like iron weapons and armor, into China. Further, he argues an inverse relationship between the agrarian states and the nomad confederacies - noting that the confederacies grew larger first, posits that the formation of a large confederacy creates a kind of Silk Road free-trade zone, which generates enough surplus wealth in the agrarian kingdoms to fund wars of unification.
    • A little detail about the Han-Xiongnu Wars provides some context about a stupendous crisis that might devour a golden age.

Comment by ryan_b on Trauma, Meditation, and a Cool Scar · 2019-08-12T13:55:36.367Z · score: 3 (2 votes) · LW · GW
Ah man, sorry your joke bombed.

On the basis of this line alone, I regret nothing!

Comment by ryan_b on Trauma, Meditation, and a Cool Scar · 2019-08-09T19:27:27.136Z · score: 10 (4 votes) · LW · GW
I said "good thing I have another good eye" on the way to the hospital

HA! After I crawled away from the truck, I was lying between the engine block and the cab, and my gunner was kneeling a little ways away pulling security. After a little time I said to him, "After due and careful consideration, I have decided explosions are even more exciting from the inside."

He was unamused. Too bad - not a lot of opportunities to deliver that joke.

The two things I knew beforehand were that episodes of spontaneously reliving the event are the classic example of consequences I did not want, and that there is a technique called exposure therapy, which usually entails deliberately exposing yourself to some trigger until you normalize to it again. Doing it on purpose was like exposure therapy with no trigger needed, I figure. I'm confident this isn't how it actually works, but I kind of felt like every one I went through deliberately was one less I would have to go through while driving in the car or something.

Comment by ryan_b on Trauma, Meditation, and a Cool Scar · 2019-08-09T16:17:49.530Z · score: 3 (2 votes) · LW · GW

I have found that it does a really good job of separating the feelings then from the feelings now, because I can just keep verifying to myself that I'm actually fine, and so is everyone else.

How do you feel about the surprise of the event? I feel like the dominant feature in my recovery is the preparation I had beforehand: I knew when I joined this was a thing that happens, it being the most publicized part of the war; there are a hundred thousand people it has happened to before me who described it; we have general emergency medical training and maximum-intensity safety gear; we have hours of specific training for how to respond to it; I personally had the habit of visualizing it when I sat down in the truck; I knew that day we were going to drive until it happened to someone. I was about as prepared as humanly possible, and getting blown up still sucked. Yet I never had to deal with feeling like I couldn't believe it even happened.

Comment by ryan_b on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-08T17:01:43.188Z · score: 2 (1 votes) · LW · GW

I think that is plausible, and I think the factors you mention are definitely a virtue of the MCB approach. A further one is that even if we were to produce too few, the ones we did produce would still result in marginal gains. I also agree that most of the cost will be at the beginning; even more so if it is done correctly.

But I point out that the error in estimating how many boats will be needed is completely independent of the error in estimating the timeline and costs for setting up production; we aren't at liberty to assume they will even approximately balance out. I think it is reasonable to infer that the longer the delay until operations start, the more boats will be needed to achieve the goal. This means the risk is lopsided toward costs increasing; there's no particular likelihood of things being much cheaper or faster than expected, like we expected production to start in five years and it mysteriously happened in three.
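A toy simulation illustrates the lopsidedness (my illustration with made-up numbers, not anything from the thread): when two independent, right-skewed estimate errors multiply together, large overruns come up more often than large savings, and the expected cost sits above the estimate even though the median error is zero.

```python
import random

random.seed(0)

def skewed_error():
    # Multiplicative estimate error: median 1 (no bias), but right-skewed,
    # so a big overrun is likelier than an equally big saving.
    return random.lognormvariate(0.0, 0.5)

# Two independent errors (boat count, production cost/timeline) multiply.
trials = [skewed_error() * skewed_error() for _ in range(100_000)]

mean = sum(trials) / len(trials)
big_overrun = sum(t > 1.5 for t in trials) / len(trials)
big_saving = sum(t < 0.5 for t in trials) / len(trials)
# mean comes out above 1, and big overruns outnumber big savings:
# the risk is lopsided even though neither estimate is biased.
print(mean, big_overrun, big_saving)
```

The point is only that independent errors compound rather than cancel; the specific lognormal shape is an assumption for illustration.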

These are all solvable problems, mind; the core of my criticism is that there are specific issues that arise from the bigness of challenges alone, and that we need to account for them deliberately. This is not done in baseline cost or time estimates, and rarely done even among people who are experienced in tackling big challenges, so we aren't at liberty to assume that we can hand it off to experienced practitioners and they will handle it.

Comment by ryan_b on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-08T15:38:00.563Z · score: 3 (2 votes) · LW · GW

As it happens, coordinating a large assembly-line project is fairly standard megaproject material. Ships, aircraft, and semiconductors are good examples.

The hitch is your example assumes a WWII-grade of funding and coordination. Do you think that can be achieved quickly enough, cheaply enough, and reliably enough to be ignored when proposing such a project?

Comment by ryan_b on Trauma, Meditation, and a Cool Scar · 2019-08-07T17:29:04.529Z · score: 8 (3 votes) · LW · GW

Empathy.

I had a severe back injury far from home in 2011, the circumstances surrounding which have been detailed elsewhere. Irritatingly, my glasses were lost and I spent a week at a hospital in Germany unable to see anything clearly and bedridden.

I also had a sensitivity to loud noises, and thought about the event a lot. With nothing better to do, and being forewarned that post-traumatic stress was a severe problem, I got control of the situation by deliberately reliving the event over and over again until the adrenaline stopped.

I still do it, sometimes; if I hit a pothole and that surprises me, or if I find myself otherwise unaccountably stressed and unfocused, I go back and smooth it out again.

Comment by ryan_b on Writing children's picture books · 2019-08-07T15:52:24.740Z · score: 2 (2 votes) · LW · GW

One of the things I like about this idea is how it specifically triggers thinking about two different modes of communication, the words and the pictures. I feel like when I think about displaying information it is usually either showing something I already know in word form, or alternatively to get at information I cannot grok otherwise like data points in a large table. I almost never think about giving one idea both barrels from the get-go.

Comment by ryan_b on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-07T14:53:28.620Z · score: 2 (1 votes) · LW · GW

To my knowledge, none. This is because to my knowledge there has never been such a project.

I claim that there is no reason to expect geoengineering to be different than any other field in project outcomes. I claim further there are strong causal reasons to expect them to be the same. Large projects behave similarly regardless of whether we are talking civil infrastructure, oil & gas, energy, mining, aerospace, entertainment or defense. There is no trait of geoengineering which can differentiate it from this pattern.

This is because the problems are not driven by the field from which the project originates, but by the irreducible complexity that comes with size. Absent a specific commitment to dealing with irreducible complexity problems, we should expect budget and timeline estimates to be badly wrong.

Note this doesn't make geoengineering a bad field or its projects worse than other projects; the thing I am pointing to is that we need separate expertise to make it work the way we need. We cannot afford to spend multiple projects' worth of budget on only one project, and we really cannot afford to be surprised by a 10-year delay.


Comment by ryan_b on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-06T14:33:28.658Z · score: 2 (1 votes) · LW · GW

The implications are significantly different if $10B turns into $30B while the project is underway, which is the norm. The timeline is also significant, and delays of 2-10 years matter a great deal to how successful the project is going to be.

Comment by ryan_b on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-06T14:27:14.265Z · score: 2 (1 votes) · LW · GW

It does not follow that because climate models are bad to an unknown degree, geoengineering projects will overperform to a symmetric degree. This is a common assumption among large projects, but it is also a specific failure mode.


Comment by ryan_b on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-05T14:45:57.122Z · score: 3 (5 votes) · LW · GW

No, with ~95% confidence.

The central problem of geoengineering projects is that there is no reason to even entertain the notion that they will perform differently than regular projects. They will be over budget by about the same amount, miss their timelines by about the same amount, and miss their performance targets by about the same amount. This last point is the real crux of the matter, because we are talking about large scale, irreversible changes to the environment. The only way to remedy an error is by using the same error-prone process that caused it in the first place.

That being said, there are new methods available for managing huge and complex projects. Then it is a matter of adopting the methods.

Comment by ryan_b on Conversation on forecasting with Vaniver and Ozzie Gooen · 2019-08-01T16:21:25.217Z · score: 9 (3 votes) · LW · GW

I agree with you about the non-decision value of forecasting. My claim is that the decision value of forecasting is neglected, rather than that decisions are the only value. I strongly feel that neglecting the decisions aspect is leaving money on the table. From Ozzie:

My impression is that some groups have found it useful and a lot of businesses don't know what to do with those numbers. They get a number like 87% and they don't have ways to directly make that interact with the rest of their system.

I will make a stronger claim and say that the decisions aspect is the highest value aspect of forecasting. From the megaproject management example: Bent Flyvbjerg (of Reference Class Forecasting fame) estimates that megaprojects account for ~8% of global GDP. The time and budget overruns cause huge amounts of waste, and eyeballing his budget overrun numbers it looks to me like ~3% of global GDP is waste. I expect the majority of that can be resolved with good forecasting; by comparison with modelling of a different system which tries to address some of the same problems, I'd say 2/3 of that waste.

So I currently expect that if good forecasting became the norm only in projects of $1B or more, excluding national defense, it would conservatively be worth ~2% of global GDP.
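The arithmetic behind that ~2% figure is simple enough to spell out (all figures are the rough estimates above, not measured data):

```python
# Rough numbers from the estimate above (eyeballed, not measured data)
megaproject_share = 0.08  # Flyvbjerg: megaprojects are ~8% of global GDP
waste_share = 0.03        # overrun waste, eyeballed, as a share of global GDP
recoverable = 2 / 3       # fraction of that waste good forecasting might resolve

# Note: waste_share is eyeballed from overrun data directly, not derived
# from megaproject_share; the latter is context for scale.
value = waste_share * recoverable  # value of forecasting, share of global GDP
print(f"~{value:.0%} of global GDP")  # → ~2% of global GDP
```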

Looking at the war example, we can consider a single catastrophic decision: disbanding the Iraqi military. I expect reasonable forecasting practices would have suggested that when you stop paying a lot of people who are in possession of virtually all of the weaponry, they would have to find other ways to get by. Selling the weapons and their fighting skills, for example. This decision allowed an insurgency to unfold into a full-blown civil war, costing some 10^5 lives, displacing 10^6 people, and doing moderately intense infrastructure damage.

Returning to the business example from the write-up, if one or more projects were to succeed in delivering this kind of value, I expect a lot more resources would be available for pursuing the true-beliefs aspect of forecasting. I go as far as to say it would be a very strong inducement for people who do not currently care about having true beliefs to start doing so, in the most basic big-pile-of-utility sense.


Comment by ryan_b on Conversation on forecasting with Vaniver and Ozzie Gooen · 2019-07-31T21:20:22.984Z · score: 11 (5 votes) · LW · GW

I approve of this write-up, and would like to see more of this kind of content.

I feel like the most neglected part of forecasting is how it relates to anything else. The working assumption is that if it works well and is widely available, it will enable a lot of really cool stuff; I agree with this assumption, but I don't see much effort to bridge the gap between 'cool stuff' and 'what is currently happening'. I suspect that the reason more isn't being invested in this area is that we mostly won't use it regardless of how well it works.

There are other areas where we know how to achieve good, or at least better, outcomes in the form of best practices, like software and engineering. I think it is uncontroversial to claim that most software or engineering firms do not follow most best practices, most of the time.

But that takes effort, and so you might reason that perhaps trying to predict what will happen is more common when the responsibility is enormous and rewards are fabulous, to the tune of billions of dollars or percentage points of GDP. Yet that is not true - mostly people doing huge projects don't bother to try.

Perhaps then a different standard, where hundreds of thousands of lives are on the line and where nations hang in the balance. Then, surely, the people who make decisions will think hard about what is going to happen. Alas, even for wars it is not the case.

When we know the right thing to do, we often don't do it; and whether the rewards are great or terrible, we don't try to figure out if we will get them or not. The people who would be able to make the best use of forecasting in general follow a simpler rule: predict success, then do whatever they would normally do.

There's an important ambiguity at work, and the only discussion of it I have read is in the book Prediction Machines. The book talks about what the overall impact of AI will be, and the authors posit that the big difference will be a drop in the cost of prediction. The predictions they talk about are mostly of the routine sort, like how much inventory is needed or the expected number of applications, which is distinct from the forecasting questions of GJP. But the point they made that I thought was valuable is how deeply entwined predictions and decisions are in our institutions and positions, and how this will be a barrier to businesses taking advantage of the new trends. We will have to rethink how decisions are made once we separate out the prediction component.

So what I would like to see from forecasting platforms, companies, and projects is a lot more specifics about how forecasting relates to the decisions that need to be made, and how it improves them. As it stands, forecasting infrastructure probably looks a lot like a bridge to nowhere from the perspective of its beneficiaries.

Comment by ryan_b on Open Thread July 2019 · 2019-07-27T15:05:54.214Z · score: 4 (2 votes) · LW · GW

The front page features are very useful for getting up to speed. Recently Curated is newer posts the mods thought were important, From the Archives are old posts that were well received, and Continue Reading helps keep track of the sequences (the core content of the site) so you can consume them over time.

Welcome!

Comment by ryan_b on How often are new ideas discovered in old papers? · 2019-07-26T15:29:21.286Z · score: 6 (3 votes) · LW · GW

I expect this information is reasonably well documented in histories of particular currents of thought. I have no idea how often it happens in absolute terms, but I feel it must be relatively common because I have encountered it as an amateur reader of papers.

A good example is ET Jaynes' work on maximum caliber, which is a variational principle for dynamical systems. It might be cheating because it is well understood to be a controversial concept, but the insights concerned entropy. Jaynes' specialty was Statistical Mechanics, for which he had employed information-theoretic notions of entropy in order to account for the lack of knowledge of the microstates. When Jaynes was writing, physics used the Clausius formulation of the 2nd Law of Thermodynamics, which he found unsatisfactory for the problem of prediction because it says nothing about intermediate states before reaching equilibrium. In Physical Chemistry they used a different one, which came from work by G.N. Lewis, who used the Gibbs formulation of the 2nd Law. It is insights drawn from the subtleties of Gibbs' work concerning entropy that gave Jaynes the predictive power he was interested in. Lastly, Jaynes had the work of Clifford Truesdell, who was writing around the same time in the field of Continuum Mechanics and working to expand that approach to fully cover thermodynamics. Truesdell's work persuaded Jaynes that the other approaches were in fact wrong.

So here was a case where one physics researcher (Jaynes) borrowed math ideas from communication (Shannon), then read older work from chemistry (Lewis), leading to much older work in early thermodynamics that had new insights (Gibbs), and confirmed by more recent work in a different field of math (Truesdell). All of these insights went into his work on maximum caliber.

In a similar vein, a lot of Truesdell's writing consists of going back to the early days of thermodynamics and finely sifting the insights therein. He writes well and carefully, but is animated and polemical; I recommend reading him to anyone interested in thermodynamics.


Comment by ryan_b on The Real Rules Have No Exceptions · 2019-07-24T18:05:37.879Z · score: 23 (6 votes) · LW · GW

I affirm Scharre's interpretation.

Anecdote: during deployment when we arrive in country, we are given briefings about the latest tactics being employed in the area where we will be operating. When I went to Iraq in 2008 one of these briefings was about young girls wearing suicide vests, which was previously unprecedented.

The tactic consisted of taking a family hostage, and telling the girl that if she did not wear this vest and go to X place at Y time, her family would be killed. Then they would detonate the vest by remote.

We caught on to it because sometimes we had jammers on which prevented the detonation, and one of the girls told us what happened. Of course, we didn't have jammers everywhere. Then the calculus changes from whether we can take the hit in order to spare the child, to one child or many (suicide bombings target crowds).

The obvious wrongness of killing children does not change; nor that of allowing children to die. So one guy eats the sin, and the others feel ashamed for letting him.

Comment by ryan_b on Open Thread July 2019 · 2019-07-24T15:09:30.790Z · score: 2 (1 votes) · LW · GW

I don't understand the source of your concern.

Is it not at all concerning that aliens with no knowledge of Earth or humanity could plausibly guess that a movement dedicated to a maximizing, impartial, welfarist conception of the good would also be intrinsically attracted to learning about idealized reasoning procedures?

This is not at all concerning. If we are concerned about this then we should also be concerned that aliens could plausibly guess a movement dedicated to space exploration would be intrinsically attracted to learning about idealized dynamical procedures. It seems to me this is just a prior that groups with a goal investigate instrumentally useful things.

My model of your model so far is this: because the EA community is interested in LessWrong, and because LessWrong facilitated the group that works on HRAD research, the EA community will move their practices closer to the implications of this research even in the case where it is wrong. Is that accurate?

My expectation is that EAs will give low weight to the details of HRAD research, even in the case where it is a successful program. The biggest factor is the timelines: HRAD research is in service of the long term goal of reasoning correctly about AGI; EA is about doing as much good as possible, as soon as possible. The iconic feature of the EA movement is the giving pledge, which is largely predicated on the idea that money given now is more impactful than money given later. There is a lot of discussion about alternatives and different practices, for example the donor's dilemma and mission hedging, but these are operational concerns rather than theoretical/idealized ones.

Even if I assume HRAD is a productive line of research, I strongly expect that the path to changing EA practice leads from some surprising result, evaluated all the way up to the level of employment and investment decisions. This means the result would need to be surprising, then it would need to withstand scrutiny, then it would need to lead to conclusions big enough to shift activity like donations, employment, and investments, cost of change included and all. I would be deeply shocked if this happened, and then further shocked if it had a broad enough impact to change the course of EA as a group.