How do we identify bottlenecks to scientific and technological progress?

post by NaiveTortoise (An1lam) · 2018-12-31T20:21:38.348Z · LW · GW · 3 comments

This is a question post.

Contents

  Answers
    5 ryan_b
    5 ChristianKl
    4 ChristianKl
    3 John_Maxwell_IV
None
3 comments

In discussions of AI, nanotechnology, brain-computer interfaces, and genetic engineering, I've noticed a common theme of disagreement over the right bottleneck to focus on. The general pattern is that one person or group argues that we know enough about a topic's foundation that it's time to start to focus on achieving near-term milestones, often engineering ones. The other group counters by arguing that taking such a milestone/near-term focused approach is futile because we lack the fundamental insights to achieve the long-term goals we really care about. I'm interested in heuristics we can use or questions we can ask to try to resolve these disagreements. For example, in the recent MIRI update, Nate Soares talks about how we're not prepared to build an aligned AI because we can't talk about the topic without confusing ourselves. While this post focuses on capability, not safety, I think "can we talk about the topic without sounding confused" is a useful heuristic for understanding how ready we are to build, independent of safety questions.

What follows are a few links to/descriptions of concrete examples of this pattern:

People forget. When they landed on the moon, they already had several hundred years of calculus, so they had the math; physics, so they knew Newton's Laws; aerodynamics, so they knew how to fly; rocketry, because people had been launching rockets for many decades before the moon landing. When Kennedy gave the moon landing speech, he wasn't saying, let's do this impossible task; he was saying, look, we can do it. We've launched rockets; if we don't do this, somebody else will get there first.

I anticipate at least one answer to this question will look something like "look at the science and see if you understand the phenomena you wish to engineer on top of well enough", but I think this answer doesn't fully solve the problem. For example, in the case of nanotech, Drexler's argument centers on the point that successful engineering requires finding one path to success, not necessarily understanding the entire space of possible phenomena.

EDIT (01/02/2019): I removed references to safety/alignment after ChristianKl noted that conflating the two makes the question more confusing and John_Maxwell_IV argued that I was misrepresenting his (and likely others') views on alignment. The post now focuses solely on the question of identifying bottlenecks to progress.

Answers

answer by ryan_b · 2019-01-02T21:28:55.943Z · LW(p) · GW(p)

Since these are all large subjects containing multiple domains of expertise, I am inclined to adopt the following rule: anything someone nominates as a bottleneck should be treated as a bottleneck until we have a convincing explanation for why it is not. I expect that once we have a good enough understanding of the relevant fields, convincing explanations should be able to resolve whole groups of prospective bottlenecks.

There are also places where I would expect bottlenecks to appear even if they have not been pointed out yet. These two leap to mind:

1. New intersections between two or more fields.

2. Everything at the systems level of analysis.

I feel like fast progress can be made on both types. While it is common for different fields to have different preferred approaches to a problem, it feels much rarer for there not to be any compatible approaches to a problem in both fields. The challenge would lie in identifying what those approaches are, which mostly just requires a sufficiently broad survey of each field. The systems level of analysis is always a bottleneck in engineering problems; the important thing is to avoid the scenario where it has been neglected.

It feels easy to imagine a scenario where the compatible approach from one of the fields is under-developed, so we would have to go back and develop the missing tools before we can really integrate the fields. It is also common even in well-understood areas for a systems level analysis to identify a critical gap. This doesn't seem any different from the usual process of problem solving; it's just that each new iteration gets added to the bottleneck list.

answer by ChristianKl · 2019-01-02T11:27:50.880Z · LW(p) · GW(p)

Thomas Kuhn argues in his book that scientific fields that try to achieve specific goals make worse progress than scientific fields that attempt to solve problems within those fields that their researchers find interesting.

Physics progressed because physicists wanted to understand the natural laws of the universe, not because physicists wanted to make stuff that's useful.

On the other hand, you have a subject like nutrition science, which is focused on producing knowledge with immediate practical applications, and yet the field doesn't make any practical progress.

answer by ChristianKl · 2019-01-02T11:34:56.450Z · LW(p) · GW(p)

Asking what's the bottleneck to doing X and asking what needs to happen for X to be done safely are two different questions.

For practical purposes it's important to know both answers, but for understanding, mixing the questions together clouds the issue.

The question of whether we can build more effective BCIs is mostly a question of technical capability.

On the other hand, the concern that Nate raises over AGI is a safety concern. Nate doesn't doubt that we can build an AGI but considers it dangerous to do so.

comment by NaiveTortoise (An1lam) · 2019-01-02T19:04:58.289Z · LW(p) · GW(p)

FYI: I've updated the post to focus solely on the "what's the bottleneck to do X" question and not on safety, as I think the former question is less discussed on LW and what I wanted answers to focus on.

answer by John_Maxwell (John_Maxwell_IV) · 2018-12-31T23:36:17.342Z · LW(p) · GW(p)

FWIW, I can't speak for Paul Christiano, but insofar as you've attempted to summarize what I think here, I don't endorse the summary.

comment by habryka (habryka4) · 2018-12-31T23:38:32.522Z · LW(p) · GW(p)

Where does the post mention Paul Christiano? I only see a link to a discussion, without any commentary.

Edit: Nvm, I figured it out. I assume you mean: "The general pattern is that one person or group argues that we know enough about a topic's foundation that it's time to start to focus on achieving near-term milestones, often engineering ones. " is the specific line that you think doesn't accurately capture your views.

comment by NaiveTortoise (An1lam) · 2019-01-01T00:38:38.286Z · LW(p) · GW(p)

Can you be more specific? If you help me understand how/if I'm misrepresenting, I'd be happy to change it. My sense is that Paul's view is more like, "through working towards prosaic alignment, we'll get a better understanding of whether there are insurmountable obstacles to alignment of scaled up (and likely better) models." I can rephrase to something like that, or something more nuanced. I'm just wary of adding too much alignment-specific discussion, as I don't want the debate to be too focused on the object-level alignment debate.

It's also worth noting that there are other researchers who hold similar views, so I'm not just talking about Paul's.

Replies from: An1lam, John_Maxwell_IV
comment by NaiveTortoise (An1lam) · 2019-01-02T19:04:13.083Z · LW(p) · GW(p)

FYI: I've updated the post to not talk about alignment at all, since I think focusing only on bottlenecks to progress in terms of capabilities makes the post clearer. Thanks to ChristianKl for pointing this out.

John_Maxwell_IV, would love feedback on how you feel about the edited version.

comment by John_Maxwell (John_Maxwell_IV) · 2019-01-04T09:34:12.010Z · LW(p) · GW(p)

I think your original phrasing made it sound kinda like I thought that we should go full steam ahead on experimental/applied research. I agree with MIRI that people should be doing more philosophical/theoretical work related to FAI, at least on the margin. The position I was taking in the thread you linked was about the difficulty of such research, not its value.

With regard to the question itself, Christian's point is a good one. If you're solely concerned with building capability, alternating between theory and experimentation, or even doing them in parallel, seems optimal. If you care about safety as well, it's probably better to cross the finish line during a "theory" cycle than an "experimentation" cycle.

3 comments

Comments sorted by top scores.

comment by ryan_b · 2019-01-02T15:37:11.014Z · LW(p) · GW(p)

Since these are all large subjects containing multiple domains of expertise, I am inclined to adopt the following rule: anything someone nominates as a bottleneck should be treated as a bottleneck until we have a convincing explanation for why it is not. I expect that once we have a good enough understanding of the relevant fields, convincing explanations should be able to resolve whole groups of prospective bottlenecks.

There are also places where I would expect bottlenecks to appear even if they have not been pointed out yet. These two leap to mind:

1. New intersections between two or more fields.

2. Everything at the systems level of analysis.

I feel like fast progress can be made on both types. While it is common for different fields to have different preferred approaches to a problem, it feels much rarer for there not to be any compatible approaches to a problem in both fields. The challenge would lie in identifying what those approaches are, which mostly just requires a sufficiently broad survey of each field. The systems level of analysis is always a bottleneck in engineering problems; the important thing is to avoid the scenario where it has been neglected.

It feels easy to imagine a scenario where the compatible approach from one of the fields is under-developed, so we would have to go back and develop the missing tools before we can really integrate the fields. It is also common even in well-understood areas for a systems level analysis to identify a critical gap. This doesn't seem any different from the usual process of problem solving; it's just that each new iteration gets added to the bottleneck list.

Replies from: An1lam
comment by NaiveTortoise (An1lam) · 2019-01-02T19:26:06.575Z · LW(p) · GW(p)

You should promote this to a full answer rather than a comment! It more than qualifies.

Regarding 1, I suspect a lot of recent progress in neuroscience has come from applying computational and physics-style approaches to existing problems. See, for example, the success Ed Boyden has had in his lab with applying physics thinking to building better neuroscience tools–optogenetics, expansion microscopy, and most recently implosion fabrication.

I think nanotechnology is a prime example of 2. AIUI, a lot of the component technologies for at least trying to build nano-assemblers exist but we lack the technology/institutions/incentives/knowledge to engineer them into coherent products and tools.

Replies from: ryan_b
comment by ryan_b · 2019-01-02T21:56:00.140Z · LW(p) · GW(p)

Copied to full answer!

I agree regarding neuroscience. I went to a presentation (from whom I have suddenly forgotten, and I seem to have lost my notes) describing an advanced type of fMRI that allowed more detailed inspection than previously possible, and the big discovery mostly consisted of "optimize the C++" and "rearrange the UI with practitioners in mind." I found it tremendously impressive - they were using it to help map epilepsy seizures in much more detail.

I am strongly tempted to say that 2 should be considered the highest priority in any kind of advanced engineering project, and I am further tempted to say it would sometimes be worth considering even before having project goals. There has been some new work in systems engineering recently [LW · GW] that emphasizes the meta level and focuses on architecture-space before even getting the design constraints; I wonder if the same trick could be pulled with capabilities. Sort of systematizing the constraints at the same time as the design.