Measure of complexity allowed by the laws of the universe and relative theory?

post by dr_s · 2023-09-07T12:21:03.882Z · LW · GW · 5 comments

This is a question post.


A big question that determines a lot about what risks from AGI/ASI may look like is what kinds of things our universe's laws allow to exist. There is an intuitive sense in which these laws - involving certain symmetries, as well as the inherent smoothing caused by statistics over large ensembles, and thus thermodynamics, etc. - allow only certain kinds of things to exist and work reliably. For example, we know "rocket that travels to the Moon" is definitely possible. "Gene therapy that allows a human to live and be youthful until the age of 300" or "superintelligent AGI" are probably possible, though we don't know how hard. "Odourless ambient-temperature-and-pressure gas that kills everyone who breathes it if and only if their name is Mark, with 100% accuracy" probably is not. Are there known attempts at systematising this issue using algorithmic complexity, placing theoretical and computational bounds, and so on?

Answers

answer by Noosphere89 · 2023-09-07T20:17:20.131Z · LW(p) · GW(p)

This is very, very dependent on what assumptions you fundamentally make about the nature of physical reality, and what assumptions you make about how much future civilizations can alter physics.

I genuinely think that if you want to focus on the long term, we'd unfortunately need to solve very, very difficult problems in physics to give reliable answers.

For the short term limitations that are relevant to AI progress, I'd argue that the biggest one is probably thermodynamics stuff, and in particular the Landauer limit is a good approximation for why you can't make radically better nanotechnology than life without getting into extremely weird circumstances, like reversible computation.
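For a sense of scale, the Landauer bound follows directly from the Boltzmann constant; a quick illustrative sketch (standard SI values, not anything specific to the argument above):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_limit(temperature_k: float) -> float:
    """Minimum energy in joules needed to erase one bit at a given temperature."""
    return K_B * temperature_k * math.log(2)

# At room temperature (300 K) this comes out to roughly 2.87e-21 J per bit erased.
print(f"{landauer_limit(300.0):.3e} J per bit")
```

Current hardware dissipates many orders of magnitude more than this per bit operation, which is part of why the bound, while fundamental, is so permissive in practice.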

comment by dr_s · 2023-09-08T08:57:09.848Z · LW(p) · GW(p)

what assumptions you make about how much future civilizations can alter physics

I don't think the concept of "altering physics" makes sense. Physics is the set of rules that determines reality. By definition, everyone living in this universe is subject to the laws of physics. If someone found a way to, say, locally alter what we call Planck's constant, that would just mean it's not actually a constant, but the emergent product of a deeper system that can be tinkered with. That doesn't mean you're altering the laws of physics - it merely peels away one layer and puts the laws one layer lower.

A more interesting question perhaps would be whether this ladder has an end, or if you can have some kind of infinite regression of ever more fundamental layers. In the latter case I could imagine there being an argument for "everything is possible if you can go deep enough". But if the current state of particle physics is any indication, even just going to the next layer would require an insane amount of energy. Successive layers if they exist might just turn out to be practically inaccessible.

For the short term limitations that are relevant to AI progress, I'd argue that the biggest one is probably thermodynamics stuff, and in particular the Landauer limit is a good approximation for why you can't make radically better nanotechnology than life without getting into extremely weird circumstances, like reversible computation.

Right, so I'm aware of the Landauer limit of course, but I was wondering whether anything more specific is available. One of the things that strikes me about all sorts of nanotechnology is that if you make things bigger they usually get less efficient, but if you make them smaller they get more vulnerable to thermal fluctuations and diffusion temporarily or permanently messing with their functioning. Besides the Landauer limit, I expect there will also be limits on e.g. actual physical manipulation, or on how much energy you can extract from the environment (lest nanomachines be used to straight up violate the 2nd law). I wondered if there was a general and powerful framework to deal with these questions, but I guess not.
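The size-vs-fluctuation tradeoff can be made concrete with the Stokes-Einstein relation; a rough sketch assuming a spherical machine in water at 300 K (the radii are arbitrary example values):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def diffusion_coefficient(radius_m: float, temp_k: float = 300.0,
                          viscosity_pa_s: float = 1.0e-3) -> float:
    """Stokes-Einstein diffusion coefficient (m^2/s) for a sphere in a fluid."""
    return K_B * temp_k / (6.0 * math.pi * viscosity_pa_s * radius_m)

# D scales as 1/r: a 10 nm machine diffuses 100x faster than a 1 um one,
# so thermal noise dominates its motion far more strongly.
for r in (1.0e-6, 1.0e-8):
    print(f"r = {r:.0e} m -> D = {diffusion_coefficient(r):.2e} m^2/s")
```

This is the kind of scaling law the question is gesturing at: it bounds how precisely a machine of a given size can position itself, but it isn't yet a systematic theory of what such machines can or cannot do.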

Replies from: Muireall, M. Y. Zuo
comment by Muireall · 2023-09-08T12:29:14.051Z · LW(p) · GW(p)

Stochastic thermodynamics may be the general and powerful framework you're looking for regarding molecular machines.

comment by M. Y. Zuo · 2023-09-09T19:28:59.953Z · LW(p) · GW(p)

Most higher level engineering textbooks cover this topic pretty thoroughly.

At least from the Thermodynamics II and III, Fluid Mechanics II and III, Solid Mechanics II and III, etc., courses that I took back in school.

It's also all derivable from the fundamental symmetries of physics, plus some constants, axioms, and maybe some math tricks when it comes to Maxwell/Heaviside equations and the not-yet-resolved contradictions between gravity and quantum mechanics. 

Replies from: dr_s
comment by dr_s · 2023-09-10T11:48:12.965Z · LW(p) · GW(p)

I wouldn't say they do. This is not about known science; it's about science we don't know, and what we can guess about it. Some considerations on gravity and quantum mechanics do indeed put a lower bound on the energy scale at which we expect new physics to manifest, but that doesn't mean that even lower-energy new physics isn't theoretically possible - if it weren't, there would be no point doing anything at the LHC past the discovery of the Higgs boson, since the Standard Model doesn't predict anything else. Though to be fair, the lack of success in finding literally anything predicted by either string theory or supersymmetry isn't terribly encouraging in this respect.

Replies from: sharmake-farah, M. Y. Zuo
comment by Noosphere89 (sharmake-farah) · 2023-09-10T14:38:38.063Z · LW(p) · GW(p)

This is exactly the situation where your question unfortunately doesn't have an answer, at least right now.

comment by M. Y. Zuo · 2023-09-10T14:47:06.460Z · LW(p) · GW(p)

Like you said, the science we don't know is at inaccessibly large or small scales.

Yes, maybe in the far future a society spread across multiple galaxies, or one that can make things near the Planck length, could do something that would totally stump us.

But you're never going to find a final answer to this in the present day, for exactly those reasons.

In fact it's unlikely anyone on LW could grasp the answers even if, by some miracle, a helpful time traveller from the future showed up and started answering.

Replies from: dr_s
comment by dr_s · 2023-09-10T14:56:16.744Z · LW(p) · GW(p)

Well, as I said, there might be some general insight. For example, biological cells are effectively nanomachines far beyond our ability to build, yet they are not all-powerful; no individual bacterium has single-handedly grey-gooed its way through the entire Earth, despite there being no particular reason why it couldn't. This likely comes from a mixture of limits of the specific substrate (carbon, DNA for information storage), the result of competition between multiple species (which can be seen as an inevitable result of imprecise copying and the divergence that follows, even though cells mostly have mechanisms to try to prevent those sorts of mistakes), and perhaps intrinsic thermodynamic limits of von Neumann machines as a whole. So understanding which is which would be interesting and useful.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-09-10T16:15:43.421Z · LW(p) · GW(p)

This kind of understanding is already available in higher level textbooks, within known energy and space-time scales, as previously mentioned?

If you're asking, for example, whether with infinite time and energy some sort of grey goo 'superorganism' is possible, assuming some sort of far-future technology beyond our current comprehension, then that is obviously not going to have an answer, for the aforementioned reasons...

Assuming you already have sufficient knowledge of the fundamental sciences, engineering, and mathematics at the graduate level, then finding the textbooks, reading them, comparatively analyzing them, and drawing your own conclusions wouldn't take more than a few weeks. This sort of exhaustive analysis would presumably satisfy even a very demanding level of certainty (perhaps 99.9% confidence?).

If you're asking for literally 100% certainty, then that's impossible. In fact, nothing ever written on LW, nor anything that ever can be written, will meet that bar, especially when the Standard Model is known to be incomplete.

If you're asking whether someone has already done this and will offer it in easily digestible chunks in the form of LW comments, then it seems exceedingly unlikely.

Replies from: dr_s
comment by dr_s · 2023-09-10T16:37:26.826Z · LW(p) · GW(p)

I'm asking if there is a name and a specific theory for these things. I strongly disagree that just studying thermodynamics or statistical mechanics answers these questions, at least directly - though sure, if such a theory exists, those are the tools you'd need to derive it. There are obvious thermodynamic limits of course, but they are usually ridiculously permissive. I'm asking if there's a theory that studies things at a lower level of generality, is all, and sets narrower bounds than just "no nanomachine can exceed Carnot efficiency" or "any nanomachine is subject to Brownian motion" or such.
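To illustrate just how permissive a bound like Carnot efficiency is, here is a minimal sketch; the temperatures are arbitrary example values, not taken from any specific machine:

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Upper bound on the efficiency of any heat engine between two reservoirs (K)."""
    return 1.0 - t_cold_k / t_hot_k

# The bound depends only on reservoir temperatures, saying nothing about
# mechanism, size, or substrate - hence "ridiculously permissive".
print(carnot_efficiency(850.0, 300.0))  # steam-turbine-like gradient, ~0.65
print(carnot_efficiency(301.0, 300.0))  # a 1 K gradient, ~0.0033
```

Note what the bound leaves open: any device, at any scale, made of anything, is allowed up to this efficiency, which is exactly why a theory with narrower, structure-dependent bounds would be valuable.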

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-09-10T18:34:36.342Z · LW(p) · GW(p)

I'm asking if there is a name and a specific theory of these things.

Why do you believe there is one? 

Replies from: dr_s
comment by dr_s · 2023-09-10T21:34:44.309Z · LW(p) · GW(p)

I don't? I wondered if there might be one, and asked if anyone else knew any better.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-09-11T05:33:21.625Z · LW(p) · GW(p)

Then on what basis do you "strongly disagree that just studying thermodynamics or statistical mechanics answers these questions, at least directly"?

How did you attain the knowledge for this?

Replies from: dr_s
comment by dr_s · 2023-09-11T08:33:39.887Z · LW(p) · GW(p)

By having an MD in Engineering and a Physics PhD, following the same exact courses you recommend as potentially containing the answer, and in fact finding no direct answer to these specific questions in them.

You could argue "the answer can be derived from that knowledge", and sure, if it exists it probably can, but that's why I'm asking. Lots of theories can be derived from other knowledge. Most of machine learning can be derived from a basic knowledge of Bayes' theorem and multivariate calculus, but that doesn't make any math undergrad an ML expert. I was asking so that I could read any previous work on the topic. I might actually spend some more time thinking about approaches myself later, but I wouldn't do it without first knowing whether I'm just reinventing the wheel, so I was probing for answers. I don't think this is particularly weird or controversial.

answer by Ilio · 2023-09-07T22:18:58.337Z · LW(p) · GW(p)

[downvoted]

comment by dr_s · 2023-09-08T08:59:24.835Z · LW(p) · GW(p)

I was just about to say "wait that's just Dust Theory" and then you mentioned Permutation City yourself. But also, in that scenario, the guy moving the stones certainly has the power to make anything happen - but the entities inside the universe don't, as they are bound by the rules of the simulation. Which is as good as saying that if you want to make anything happen, you should pray to God.

Replies from: Ilio
comment by Ilio · 2023-09-08T11:20:43.286Z · LW(p) · GW(p)

Which is as good as saying that if you want to make anything happen, you should pray to God.

Actually the point is: if one can place rocks at will then their computing power is provably as large as any physically realistic computer. But yes, if one can’t place rocks at will then it might be better to politely ask the emulator.

wait that's just Dust Theory

Actually that’s less, because in Dust theory we don’t even need to place the rocks. 😉

5 comments

Comments sorted by top scores.

comment by Charlie Steiner · 2023-09-07T13:37:36.441Z · LW(p) · GW(p)

I think maybe this gwern essay is the one I was thinking of, but I'm not sure. It doesn't quite answer your question.

But there isn't a complexity-theoretic argument that's more informative than general arguments about humans not being the special beings of maximal possible intelligence. We don't know precisely what problems a future AI will have to solve, or what approximations it will find appropriate to make.

Replies from: dr_s
comment by dr_s · 2023-09-07T13:53:27.798Z · LW(p) · GW(p)

Thanks for the essay! As you say, not quite what I was looking for, but still interesting (though mostly saying things I already know/agree with).

My question is more in line with the recent post about the smallest possible button [LW · GW] and my own about the cost of unlocking optimization from observation [LW · GW]. Not so much what problems computation can solve, but how far problem-solving can carry you in actually affecting the world. The limit, I guess, would be: "suppose you have an oracle that, given the relevant information, can instantly return the optimal strategy to achieve your goal; how well does that oracle perform?". So, is tech so advanced that it truly looks like magic even to us possible at all? I assume some things (deadly enough in their own right), like nanotech and artificial life, are, but I wonder about even more exotic stuff.

Replies from: Ilio
comment by Ilio · 2023-09-07T16:02:32.553Z · LW(p) · GW(p)

suppose you have an oracle that given the relevant information can instantly return the optimal strategy to achieve your goal, how well does that oracle perform?

I guess CT experts (which I'm not) would say it either depends on boring details or belongs to one of three possibilities:

  • if you only care about « probably approximately correct » solutions, then it's probably in BPP
  • if you care about « unrealistically powerful but still mathematically checkable » solutions, then it's as large as PSPACE (see interactive proofs)
  • if you only care about convincing yourself and don't need formal proof, then it's among the Turing degrees, because one could show it's better than you at compressing most strings without actually proving it's performing hypercomputation.
Replies from: dr_s
comment by dr_s · 2023-09-07T16:26:17.775Z · LW(p) · GW(p)

My point is that there have to be straight-up impossibilities in there. For example, if you had a constraint to only use 3 atoms to build a molecule, there are only so many stable combinations. When one considers, for example, nanomachines, it is reasonable to imagine that there is a minimum physical size that can embed a given program, and that size also puts limitations on effectiveness, lifetime, and sensory abilities. E.g. you lose resolution on movement because the smaller you are, the stronger the effect of Brownian forces - that sort of thing, at the crossroads between complexity theory and thermodynamics.
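The Brownian point can be made quantitative with equipartition; a hypothetical sketch assuming a water-density sphere at 300 K (the density and radii are illustrative assumptions):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def rms_thermal_speed(radius_m: float, density_kg_m3: float = 1000.0,
                      temp_k: float = 300.0) -> float:
    """Equipartition RMS speed (m/s) of a sphere due to thermal agitation."""
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m ** 3
    return math.sqrt(3.0 * K_B * temp_k / mass)

# A 10 nm machine is thermally jostled at ~1.7 m/s, a 1 um one at only ~1.7 mm/s:
# shrinking by 100x raises the thermal speed by 1000x (v ~ r^(-3/2)).
print(rms_thermal_speed(1.0e-8))
print(rms_thermal_speed(1.0e-6))
```

A machine meters-per-second worth of jostling across its own body length is the "loss of resolution on movement" described above, and it kicks in purely from size, before any program-embedding constraint is even considered.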

Replies from: Ilio
comment by Ilio · 2023-09-07T21:44:32.501Z · LW(p) · GW(p)

I see, thanks for clarifying.