The Future of Science

post by Richard_Ngo (ricraz) · 2020-07-28T02:43:37.503Z · LW · GW · 2 comments

Contents

  Questions
2 comments

(Talk given at an event on Sunday 19th of July. Richard Ngo is responsible for the talk; Jacob Lagerros and David Lambert edited the transcript.

If you're a curated author and interested in giving a 5-min talk, which will then be transcribed and edited, sign up here.)

Richard Ngo: I'll be talking about the future of science. Even though this is an important topic (because science is very important), it hasn’t received the attention I think it deserves. One reason is that people tend to think, “Well, we’re going to build an AGI, and the AGI is going to do the science.” But this doesn’t really offer us much insight into what the future of science actually looks like.

It seems correct to assume that AGI is going to figure a lot of things out. I am interested in what these things are. What is the space of all the things we don’t currently understand? What knowledge is possible?

These are ambitious questions. But I’ll try to come up with a framing that I think is interesting.

One way of framing the history of science is through individuals making an observation and coming up with general principles to explain it. So in physics, you observe how things move and how they interact with each other. In biology, you observe living organisms, and so on. I'm going to call this “descriptive science”.

More recently, however, we have developed a different type of science, which I'm going to call “generative science”. This basically involves studying the general principles behind things that don’t exist yet and still need to be built.

This is, I think, harder than descriptive science, because you don't actually have anything to study. You need to bootstrap your way into it. A good example of this is electric circuits. We can come up with fairly general principles for describing how they work. And eventually this led us to computer science, which is again very general. We have a very principled understanding of many aspects of computer science, which is a science of things that didn't exist before we started studying them.

I would also contrast this with most types of engineering, such as aerospace engineering, which I don't think is principled or general enough to put in the same class as physics or biology and so on.

So what would it look like if we took all the existing sciences and made them more generative? For example, in biology, instead of saying, "Here are a bunch of living organisms, how do they work?" you would say, "What are all the different possible ways that you might build living organisms, or what is the space of possible organisms, and why did we end up in this particular part of the space on Earth?"

Even just from the perspective of understanding how organisms work, this seems really helpful. You understand things in contrast to other things. I don't think we're really going to fully understand how the organisms around us work until we understand why evolution didn't go down all these different paths. And for doing that it's very useful to build those other organisms.

You could do the same thing with physics. Rather than asking how our universe works, you could ask how an infinite number of other possible universes would work. It seems safe to assume that this would keep people busy for quite a long time.

Another direction that you could go in is asking how this would carry over to things we don’t currently think of as science.

Take sociology, for example. Sociology is not very scientific right now. It's not very good, broadly speaking. But why? And how might it become more scientific in the future?

One aspect of this is just that societies are very complicated, and they're composed of minds, which are also very complicated. There are also a lot of emergent effects of those minds interacting with each other, which makes it a total mess.

So one way of solving this is by having more intelligent scientists. Maybe humans just aren't very good at understanding systems where the base-level components are as intelligent as humans. Maybe you need to have a more intelligent agent studying the system in order to figure out the underlying principles by which it works.

But another aspect of sociology that makes it really hard to study, and less scientific, is that you can't generate societies to study.

You have a hypothesis, but you can't generate a new society to test it. I think this is going to change over the coming decades. You are going to be able to generate systems of agents intelligent enough that they can do things like cultural evolution. And you will be able to study these generated societies as they form. So even a human-level scientist might be able to make a science out of sociology by generating lots of different environments and model societies.

The examples of this we've seen so far are super simple but actually quite interesting, like Axelrod's Prisoner's Dilemma tournament or Laland's Social Learning Tournament. There are a couple of things like that which led to really interesting conclusions, despite having really, really basic agents. And that concludes my talk.
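For concreteness, Axelrod's tournament amounts to a round-robin iterated prisoner's dilemma between a handful of hand-written strategies. The sketch below is only an illustrative reconstruction of that kind of setup, not the original tournament code; the payoff matrix is the standard one, while the round count and the particular strategy set are arbitrary choices.

```python
import itertools

# Standard prisoner's dilemma payoffs: (row player's payoff, column player's payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def always_defect(my_history, their_history):
    return "D"

def always_cooperate(my_history, their_history):
    return "C"

def tit_for_tat(my_history, their_history):
    # Cooperate on the first move, then copy the opponent's previous move.
    return their_history[-1] if their_history else "C"

def play_match(strategy_a, strategy_b, rounds=200):
    """Play an iterated prisoner's dilemma and return both total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        payoff_a, payoff_b = PAYOFFS[(move_a, move_b)]
        score_a += payoff_a
        score_b += payoff_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

strategies = {
    "always_defect": always_defect,
    "always_cooperate": always_cooperate,
    "tit_for_tat": tit_for_tat,
}

# Round-robin: every strategy plays every strategy (including itself) once as the row player.
totals = {name: 0 for name in strategies}
for (name_a, strat_a), (name_b, strat_b) in itertools.product(strategies.items(), repeat=2):
    score_a, _ = play_match(strat_a, strat_b)
    totals[name_a] += score_a

for name, total in sorted(totals.items(), key=lambda item: -item[1]):
    print(f"{name}: {total}")
```

Which strategy comes out on top depends on the mix of strategies in the population, which is part of what made the original tournaments interesting despite how basic the agents are.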

Questions

Ben Pace: Thank you very much, Richard. That was fascinating. So you made this contrast between generative and more descriptive versions of science.

How much of that distinction was just a matter of whether or not feedback loops existed in these other spaces? Once we came up with microprocessors, suddenly we were able to build, research, and explore quite a lot of new, more advanced things using science.

And similarly with the sociology example, you mentioned something along the lines of, "We'll potentially get to a place where we can actually just test a lot of these things and then a science will form around this measurement tool." In your opinion, is this a key element in being able to explore new sciences?

Richard Ngo: Yes. I think feedback loops are pretty useful. I'd say there's probably just a larger space of things in generative sciences. We have these computer architectures, right? So we can study them. But how do we know that the computer architectures couldn't have been totally different? This is not really a question that traditional sciences focus on that much.

Biologists aren't really spending much of their time asking, "But what if animals had been totally different? What are all the possible ways that you could design a circulatory system, and mitochondria, and things like that?” I think some interesting work is being done that does ask these questions, but it seems like, broadly speaking, there's just a much richer space to explore.

David Manheim: So when you started talking about generative versus descriptive, my initial thought was Schelling's “Micromotives and Macrobehavior”, where basically the idea was, “Hey, even if you start with these pretty basic things, you can figure out how discrimination happens even if people have very slight preferences.” There are a lot of things he did with that, but what strikes me about it is that it was done with very simple individual agents. Beyond that (unless you go all the way to purely rational actor agents, and even then you need lots and lots of caveats and assumptions), you don’t get much in terms of how economics works.

Even if you can simulate everybody, it doesn't give you much insight. Is that a problem for your idea of how science develops?
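The Schelling dynamic David refers to is easy to reproduce in a few lines. The sketch below is a rough illustration rather than Schelling's exact model: agents of two types sit on a one-dimensional ring and move to a random empty cell whenever fewer than about a third of their neighbors are like them, yet sharp clustering typically emerges anyway. The grid size, tolerance threshold, neighborhood radius, and movement rule are all illustrative choices.

```python
import random

random.seed(0)

SIZE = 60          # number of cells on a one-dimensional ring
EMPTY_FRAC = 0.1   # fraction of cells left empty so agents have somewhere to move
TOLERANCE = 0.34   # an agent is content if at least ~a third of its neighbors match it

def neighbors(grid, i, radius=2):
    # Occupied cells within `radius` steps on the ring, excluding cell i itself.
    return [grid[(i + d) % len(grid)]
            for d in range(-radius, radius + 1)
            if d != 0 and grid[(i + d) % len(grid)] is not None]

def unhappy(grid, i):
    agent = grid[i]
    if agent is None:
        return False
    nbrs = neighbors(grid, i)
    if not nbrs:
        return False
    same = sum(1 for n in nbrs if n == agent)
    return same / len(nbrs) < TOLERANCE

def step(grid):
    """Move every unhappy agent to a random empty cell; return how many moved."""
    movers = [i for i in range(len(grid)) if unhappy(grid, i)]
    empties = [i for i in range(len(grid)) if grid[i] is None]
    random.shuffle(movers)
    moved = 0
    for i in movers:
        if not empties:
            break
        j = random.choice(empties)
        grid[j], grid[i] = grid[i], None
        empties.remove(j)
        empties.append(i)
        moved += 1
    return moved

# Random initial mix of two agent types ("X" and "O") plus some empty cells (".").
grid = [None if random.random() < EMPTY_FRAC else random.choice("XO") for _ in range(SIZE)]
print("start:", "".join(cell or "." for cell in grid))
for _ in range(100):
    if step(grid) == 0:   # stop once everyone is content
        break
print("end:  ", "".join(cell or "." for cell in grid))
```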

Richard Ngo: So you're saying that if we can simulate everyone given really simple models of them, it still doesn't give us much insight?

David Manheim: Even when we have complex models of them, we can observe their behavior but we can't do much with it. We can't tell you much that's useful as a result of even pretty good models.

Richard Ngo: I would just say that our models are not very good, right? Broadly speaking, often in economics, it feels something like "we're going to reduce all human preferences to a single dimension but still try to study all the different ways that humans interact; all their friendships and various types of psychological workings and goals and so on".

You can collapse all of these things in different ways and then study them, but I don't think we've had models that are anywhere near the complexity of the phenomena that are actually relevant to people's behavior.

David Manheim: But even when they are predictive, even when you can actually replicate what it is that you see with humans, it doesn't seem like you get very much insight into the dynamics... other than saying, "Hey, look, this happens." And sometimes, your assumptions are actually wrong, yet you still recover correct behavior. So overall it didn't tell us very much other than, yes, you successfully replicated what happened.

Richard Ngo: Right. What it seems like to me is that there are lots of interesting phenomena that happen when you have systems of interacting agents in the world. People do a bunch of interesting things. So I think that if you have the ability to recreate that, then you’d have the ability to play around with it and just see in which cases this arises and which cases it doesn't arise.

Maybe the way I'd characterize it is something like: in our current models, sometimes they're good enough to recreate something that vaguely looks like this phenomenon, but then if you modify it you don't get other interesting phenomena. It's more that they break, I guess. So what would be interesting is the case where you have the ability to model agents that are sophisticated enough, that when you change the inputs away from recreating the behavior that we have observed in humans, you still get some other interesting behavior. Maybe the tit-for-tat agents are a good example of this, where the set-up is pretty simple, but even then you can come up with something that's fairly novel.

Owain Evans: I think your talk was based on a really interesting premise. Namely that, if we do have AGI in the next 50 years, it's plausible that development will be fairly continuous, meaning that on the road to AGI we'll have very powerful, narrow AI that is going to be transformative for science. And I think now is a really good time to think about, in advance, how science could be transformed by this technology.

Maybe it is an opportunity similar to big science coming out of World War II, or the mathematization of lots of scientific fields in the 20th century that were informal before.

You brought up one plausible aspect of that: much better ability to run simulations. In particular, simulations of intelligent agents, which are very difficult to run at the moment.

But you could look at all the aspects of what we do in science and say “how much will narrow AI (that’s still much more advanced than today’s AI) actually help with that?” I think that even with simulations, there are going to be limits, because simulation itself is difficult. Some things are just computationally intractable to simulate. AI's not going to change that. There are NP-hard problems even when simulating very simple physical systems.

And when you're doing economics or sociology, there are humans, rational agents. You can get better at simulating them. But humans interact with the physical world, right? We create technologies. We suffer natural disasters. We suffer from pandemics. And so, the intractability is going to bite when you're trying to simulate, say, human history or the future of a group of humans. Does that make sense? I am curious about your response.

Richard Ngo: I guess I don't have strong opinions about which bits will be intractable in particular. I think there's probably a lot of space for high-level concepts that we don't currently have. So maybe one way of thinking about this is game theory. Game theory is a pretty limited model in a lot of ways. But it still gives us many valuable concepts like “defecting in a prisoner's dilemma”, and so on, that inform the way that we view complex systems, even though we don't really know exactly what hypothesis we're testing. Even just having that type of thing brought to our attention is sufficient to reframe the way that we see a lot of things.

So I guess the thing I'm most excited about is this expansion of concepts. This doesn't feel super intractable because it doesn't feel like you need to simulate anything in its full complexity in order to get the concepts that are going to be really useful going forward.

Ben Pace: John, you wrote something in chat. Did you want to discuss it quickly?

John Wentworth: It is tangential, but I have trouble picturing how on Earth a simulation problem could ever be NP-complete. The whole point is that the specification of the simulation tells you the thing you have to run to do the simulation. It's a circuit. You just run it. There's no solving of anything that could be NP-complete.

2 comments

Comments sorted by top scores.

comment by NaiveTortoise (An1lam) · 2020-07-28T15:14:45.657Z · LW(p) · GW(p)

Great post (or talk I guess)!

Two "yes, and..." add-ons I'd suggest:

  1. Faster tool development as the result of goal-driven search through the space of possibilities. Think something like Ed Boyden's Tiling Tree Method, semi-automated and combined with powerful search. As an intuition pump, imagine doing search over the embeddings of GPT-N, maybe fine-tuned on all papers in an area.
  2. Contrary to some of the comments from the talk, I weakly suspect NP-hardness will be less of a constraint for narrow AI scientists than it is for humans. My intuition here comes from what we've seen with protein folding and learned algorithms where my understanding is that hardness results limit how quickly we can do things in general but not necessarily on the distributions we encounter in practice. I think this is especially likely if we assume that AI scientists will be better at searching for complex but fast approximations than humans are. (I'm very uncertain about this one since I'm by no means an expert in these areas.)
comment by ChristianKl · 2020-07-28T11:44:50.711Z · LW(p) · GW(p)

The problem in sociology is one of lacking an accepted way to distinguish good science from bad science. I think Tetlock's credence calibration provides such a tool, and an academic discipline where scientists are required to make a lot of predictions about social things would lead to them developing reliable knowledge about those social things.