What could one do with truly unlimited computational power?
post by Yitz (yitz) · 2020-11-11T10:03:03.891Z · LW · GW · 9 comments
This is a question post.
One day, you find an unmarked cardboard box sitting on your front porch.
On top of the box is a single note:
“Open me.”
You take the box inside and open it, to find what appears to be a small black laptop nestled between old newspaper clippings. There are no identifying features to the laptop, other than its sleek blackness and small size. When you open the laptop, instead of being greeted with any sort of familiar welcome screen, there is simply text, displayed in white over a black background. The text says the following:
“Welcome to the next stage of the human experiment.
I have been watching your kind for a while now, and I believe that you are now ready. I have decided to entrust this machine to your care. Do not attempt to figure out how this machine performs its calculations, as uninformed tampering may result in the deletion of your universe.
The item you are now holding is what your mathematicians would call a Universal Turing Machine, similar in many ways to your typical computer. This machine, however, is significantly different from any other currently existent on Earth.
After reading this message, press any button to reveal two input fields stacked on top of each other, and one output field at the bottom. Both input fields can be fed any required data from the internet, or can be entered into directly through the keyboard. Inputs of any finite length are acceptable.
The top input field will accept, and automatically run as intended, code in any language or format that can be executed on a standard Universal Turing Machine.
That input will then be translated into the necessary binary code to be computable by the internal “black box” computer.
The bottom input field must be fed a finite natural number of any length, which will determine the number of computations per second. Do not worry about exceeding the speed of light, or of going past any other finite limit; the computation itself is performed in a “bubble universe” with different physical laws than your own, and is capable of computing at absolutely any positive finite speed relative to your universe.
The output field will display the output of your calculations, if any exist, after exactly one minute of computation relative to you.
Do with this machine as you will.
Wishing you the best,
God.”
Your finger hovers over the keyboard.
What will you do with this marvelous machine? What can you do?
What happens next is up to you.
________________________________
NOTE FROM ME, OUTSIDE OF THE STORY: I wrote this trying to work out my thoughts on what might be possible with a machine with unlimited but finite computing power. I was going to continue the story, but found that I honestly couldn't think of many interesting things such a machine would make possible that can't already be done now. As such, I'm turning this question public, hoping that anyone reading this might have some interesting ideas that I haven't thought of.
Answers
Answer by Taran
Pretty interesting. You're still constrained by your ability to specify solutions, so you can't immediately solve cold fusion or FTL (you'd need to manually write and debug an accurate-enough physics simulator first). Truly, no computing system can free you from the burden of clarifying your ideas. But this constraint does leave some scope for miracles, and I want to talk about one technique in particular: program search.
Program Search
Program search is a very powerful, but dangerous and ethically dubious, way to exploit unbounded compute. Start with a set of test cases, then generate all programs of length less than 100 megabytes (or whatever) and return the shortest, fastest one that passes all the test cases. Both constraints are important: "shortest" prevents the optimizer from returning a hash table that memorizes all possible inputs, and "fastest" prevents it from relying on the unusual nature of the oracle universe (note that you will need a perfect emulator in order to find out which program is fastest, since wall-clock time measurements in the oracle's universe might be ineffective or misleading). In a narrow sense, this is the perfect compiler: you tell it what kind of program you want, and it gives you exactly what you asked for.
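For concreteness, the search loop might look something like this. This is a minimal sketch: `oracle_run` is a hypothetical stand-in for the box's perfect step-counting emulator, and the byte-string encoding of programs is an assumption, not a given.

```python
from itertools import product

def oracle_run(program: bytes, inp: bytes):
    """Stand-in for the magic machine: run `program` on `inp` under a
    perfect step-counting emulator, returning (output, steps).
    Only the real oracle could implement this."""
    raise NotImplementedError

def all_programs(max_len: int):
    """Yield every byte string up to max_len bytes, shortest first."""
    for length in range(1, max_len + 1):
        for combo in product(range(256), repeat=length):
            yield bytes(combo)

def program_search(test_cases, max_len):
    """Return the shortest, then fastest, program passing every test case."""
    best = None  # (length, total_steps, program); tuples compare lexicographically
    for prog in all_programs(max_len):
        total_steps, ok = 0, True
        for inp, expected in test_cases:
            output, steps = oracle_run(prog, inp)
            if output != expected:
                ok = False
                break
            total_steps += steps
        if ok and (best is None or (len(prog), total_steps) < best[:2]):
            best = (len(prog), total_steps, prog)
    return None if best is None else best[2]
```

Note that "fastest" is judged by emulated step count, not wall-clock time, for exactly the reason given above.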
Risks
There are some practical dangers. In Python or C, for example, the space of all programs includes programs which can corrupt or mislead your test harness. The ideal language for this task has no runtime flexibility or ambiguity whatsoever; Haskell might work. But that still leaves you at the mercy of God's Haskell implementation: we can assume that He introduced no new bugs, but He might have faithfully replicated an existing bug in the reference Haskell compiler, which your enumeration will surely find. This is unlikely to cause serious problems (at least at first), but it means you have to cross-check the output of whatever program the oracle finds for you.
More insidiously, some of the programs that we run during the search might instantiate conscious minds, or otherwise be morally relevant. If that seems unlikely, ask yourself: are you totally sure it's impossible to simulate a suffering human brain in 100 megs of Haskell? This risk can be limited somewhat, for example by running the programs in order from smallest to largest, but is hard to rule out entirely.
Applications
If you're willing to put up with all that, the benefits are enormous. All ML applications can be optimized this way: just find the program that scores above some threshold on your metric, given your other constraints (if you have a lot of data you might be able to use the best-scoring program, but in small-data regimes the smallest, fastest program might still just be a hash table. Maybe score your programs by how much simpler than the training data they are?).
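One way to cash out "simpler than the training data" is a two-part minimum-description-length score. A sketch of the idea, where the `corrections` blob (my invention, not part of the original proposal) patches whatever the candidate program still gets wrong:

```python
def description_length(program: bytes, corrections: bytes) -> int:
    """Two-part code: bits to write the model, plus bits to patch its errors.
    Prefer the candidate with the smaller total. A memorizing hash table can
    never win this way, since it's at least as long as the data it memorized."""
    return 8 * (len(program) + len(corrections))
```

A program only scores well by actually compressing the data, which is the property we wanted.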
With a little more work, it should be possible to -- almost -- solve all of mathematics: to create an oracle which, given a formal system, can tell you whether any given statement can be proved within that system and, if so, whether it or its negation can be proved (or both, if the system is inconsistent)...that is, for proofs up to some ridiculous but finite length. I think you will have to invent your own proof language for this; the existing ones are all designed around complexity limitations that don't apply to you. Make sure your language isn't Turing complete, to limit the risk of moral catastrophe. Once you have that, you can just generate all possible proofs and then check whether the one you want is present or not.
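The proof search itself is just enumeration. A sketch, assuming an invented proof alphabet and a hypothetical checker `verifies`, which is decidable because the proof language isn't Turing complete:

```python
from itertools import product

SYMBOLS = "()~&|>=+*0Sxyz,"  # illustrative alphabet for the invented proof language

def verifies(proof: str, statement: str, axioms: list[str]) -> bool:
    """Stand-in: True iff `proof` is a valid derivation of `statement`
    from `axioms`. Assumed decidable and mechanical."""
    raise NotImplementedError

def provable(statement: str, axioms: list[str], max_len: int) -> bool:
    """Brute-force every candidate proof up to max_len symbols."""
    for length in range(1, max_len + 1):
        for chars in product(SYMBOLS, repeat=length):
            if verifies("".join(chars), statement, axioms):
                return True
    return False  # no proof of length <= max_len; says nothing about longer proofs
```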
Simulation
Up until now we've been limited by our ability to specify the solution we want. We can write test cases and generate a program which fulfills them, but it won't do anything we didn't explicitly ask for. We can find the ideal classifier for a set of images, but we first have to find those images out in the real world somewhere, and the power of our classifier is bounded by the number of images we can find.
If we can specify precise rules for a simulation, and a goal within that simulation, most of that constraint disappears. For example, to find the strongest Go-playing program, we can instantiate all possible Go-playing programs and have them compete until there's an unambiguous winner; we don't need any game records from human players. The same trick works for everything simulatable: Starcraft, Magic: the Gathering, piloting fighter jets, you name it. If you don't want to use the oracle to directly generate a strong AI, you can instead develop accurate-enough simulations of the real world, and then use the oracle to develop effective agents within those simulations.
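The tournament itself is straightforward once you grant a deterministic simulator. A sketch, with a hypothetical `play_go` referee standing in for the perfect game emulator:

```python
from itertools import combinations

def play_go(black, white):
    """Stand-in for a perfect referee: simulate one game between two
    candidate programs and return the winner."""
    raise NotImplementedError

def strongest(candidates):
    """Round-robin every pair, both colors, and keep the best record.
    (Ties would need a playoff stage; omitted here.)"""
    wins = {prog: 0 for prog in candidates}
    for a, b in combinations(candidates, 2):
        wins[play_go(a, b)] += 1  # a plays black
        wins[play_go(b, a)] += 1  # b plays black
    return max(candidates, key=lambda prog: wins[prog])
```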
Endgame
Ultimately the idea would be to develop a computer model of the laws of physics that's as correct and complete as our computer model of the rules of Go, so that you can finally develop nanofactories, anti-aging drugs, and things like that. I don't see how to do it, but it's the only prize worth playing for. At this point it becomes very important to be able to prove the Friendliness of every candidate program; use the math oracle you built earlier to develop a framework for that before moving forward.
↑ comment by Pattern · 2020-11-16T19:49:59.685Z · LW(p) · GW(p)
With a little more work, it should be possible to -- almost -- solve all of mathematics: to create an oracle which, given a formal system, can tell you whether any given statement can be proved within that system
...with a proof of length L or less.
Aside from the issue of the given statement, what can't be proved?
- Problems whose answers are independent of the framework you are using (The continuum hypothesis). [1]
- Undecidable problems. [2]
Things which probably can't be proved:
- Things which are false, like 2+2=3. (For a given framework which meets certain requirements, it is not possible to prove that there isn't a contradiction - within that framework.) [3]
Footnotes
[1]
From https://en.wikipedia.org/wiki/Continuum_hypothesis:
The continuum hypothesis was advanced by Georg Cantor in 1878, and establishing its truth or falsehood is the first of Hilbert's 23 problems presented in 1900. The answer to this problem is independent of ZFC, so that either the continuum hypothesis or its negation can be added as an axiom to ZFC set theory, with the resulting theory being consistent if and only if ZFC is consistent.
[2]
https://en.wikipedia.org/wiki/Undecidable_problem
In computability theory and computational complexity theory, an undecidable problem is a decision problem for which it is proved to be impossible to construct an algorithm that always leads to a correct yes-or-no answer. The halting problem is an example: it can be proven that there is no algorithm that correctly determines whether arbitrary programs eventually halt when run.
https://en.wikipedia.org/wiki/Halting_problem:
the halting problem is undecidable over Turing machines. It is one of the first cases of decision problems proven to be unsolvable. This proof is significant to practical computing efforts, defining a class of applications which no programming invention can possibly perform perfectly.
[3]
https://en.wikipedia.org/wiki/Kurt_G%C3%B6del#Incompleteness_theorem
Gödel...proved for any computable axiomatic system that is powerful enough to describe the arithmetic of the natural numbers...
If a (logical or axiomatic formal) system is consistent, it cannot be complete.
The consistency of axioms cannot be proved within their own system.
↑ comment by Donald Hobson (donald-hobson) · 2020-12-27T20:39:20.787Z · LW(p) · GW(p)
- Problems whose answers are independent of the framework you are using (The continuum hypothesis). [1]
- Undecidable problems. [2]
These are pretty much the same thing. The continuum hypothesis is a case where you have a single formal system in mind ( ZFC ) and have proved that the continuum hypothesis is independent of the axioms.
In the case of the halting problem, you just have a couple of extra quantifiers. For all formal systems that don't prove a contradiction, there exists a Turing machine, such that whether the Turing machine halts or not can't be proved from the axioms of the formal system. (Technically, the formal system needs to be recursively enumerable (R.E.), which means that there is a computer program that can list all of the axioms.)
↑ comment by Alexei · 2020-11-13T05:17:40.430Z · LW(p) · GW(p)
I don’t think there’s any reason to create the agents for those games, since you can just brute force solve for the optimal play.
↑ comment by Taran · 2020-11-13T11:09:53.991Z · LW(p) · GW(p)
If you're using the oracle to generate moves directly then you don't need an agent, yeah. But that won't always work: you can generate the complete Starcraft state space and find the optimal reply for each state, but you can't run that program in our universe (it's too big) and you can't use the oracle to generate SC moves in real time (it's too slow).
↑ comment by Yitz (yitz) · 2020-11-13T04:34:06.891Z · LW(p) · GW(p)
Thanks for the fascinating response!
If you don't mind, I might try playing around with some of the ideas you mentioned in future write-ups here; there's a lot of interesting theoretical questions that could be explored along those lines.
Answer by waveman
I suppose I would test out the claim by getting it to mine a few hundred million dollars' worth of bitcoin for me.
Then crack all the interesting crypto data that is floating around.
Then brute-force search for proofs/solutions of all the big problems, such as the Riemann Hypothesis, factoring, etc. Try all texts shorter than, say, 100 terabytes and see if they solve the problem.
Work out the implications of all the human genes by calculating out how the human body works. This would include things like solving protein folding.
Find the best/shortest algorithm for all the AI/ML challenges that delivers near perfect results.
Then simulate the body with various chemical compounds added to cure cancer, infections etc. Design vaccine + cure for coronavirusV2 and also machines to make them.
I guess you could create a model of the brain and then run all sorts of experiments on the simulation to work out how it works.
Simulate atoms and molecules and brute force ways to build things that would build arbitrary nanotech engines.
At what point would you get worried about AI safety?
↑ comment by Yitz (yitz) · 2020-11-16T03:44:10.684Z · LW(p) · GW(p)
I really like your first three ideas, and would definitely consider doing that if I was in this position (although now that I'm thinking about it, I wouldn't want to accidentally alert any powerful actors against me so early on in my journey, for fear of getting the laptop confiscated/stolen, so I'd be very careful before doing anything that could potentially be traced back to me online). :)
As for "calculating out how the human body works," I'm not sure it would be that simple to pull off, at least not at first. Taking your statement literally would mean having the laptop simulate an entire human, brain and all, which is discussed later, so for practical purposes I'm assuming what you meant by that is calculating how a typical human cell works; say, a single neuron. You could definitely solve protein folding and probably simulate most chemical interactions fairly trivially, as long as you can express the physics involved as finitely computable functions (which I'm not sure has been proven possible for all of chemistry/quantum mechanics, though I may be mistaken on that). However, in order to figure out how things actually work inside of an entire human cell, you'll not only need to be able to formally express physics and chemistry, but will also need to know what that cell is chemically composed of in the first place (in order to simulate it properly and not just be given fallible guesses by the computer). In order to make this work, you either already have a pretty much complete formal understanding of a human cell, or have figured out a way to specify your goal so precisely that only a manageable number of valid possibilities are given using the known rules of physics, which seems incredibly hard to do, if not totally impossible with our current tools.
More broadly, the same problem comes up when trying to write a program simulating the human brain. The best neuroscientists in the world are still in the dark about how most of the brain's functions are actually performed, and currently have to make do with incredibly generalized and high-level assumptions. In order to simulate a human brain (rather than "simply" create a generalized non-human AI), you would need a level of knowledge about our own inner workings that is not currently available. Thankfully, you might not need to know the exact workings of an adult human brain to make one, but without that knowledge you will at the very least need to be able to fully simulate the growth of an embryonic brain, and be able to properly "feed" it appropriate outside stimulation, which could plausibly be reduced to the problem of perfectly simulating the workings of a single embryonic cell, then letting the simulation proceed smoothly from there.
Regardless, both goals reduce to the general problem that in order to simulate a complex system, we must already have at least some amount of "base knowledge" of that system, or to put it more precisely, we must know at least as much information as is contained in its Kolmogorov complexity. (please correct me if I'm wrong about this btw, I'm fairly confident in saying this, but I may have messed up somewhere due to the complexity (heh) of the issue)
That's what I think makes this hypothetical so interesting to me—the thought that even with unbounded finite computational abilities, some of our most important problems would still require a tremendous amount of physical fieldwork, and would certainly still require thinking intelligently about how to code for the solutions we want.
Answer by Kevin Lacker
You could probably make drastic improvements in AI, because you could do extremely expensive things like modeling any function by minimizing its Kolmogorov complexity. I bet you could develop superhuman AI within a day, if given access to a computer with 2^2^100 clock speed.
↑ comment by Yitz (yitz) · 2020-11-16T04:11:05.332Z · LW(p) · GW(p)
You're a lot braver than me!
I'd be absolutely terrified of trying to create anything anywhere near superhuman AI (as in AGI, of course; I'd be fine with trying to exceed humans on things like efficient protein folding and that sort of stuff), due to the massive existential risk from AGI that LessWrong loves talking about in every other post.
Personally, I would wait to get the world's leading AI ethics experts' unanimous approval before trying anything like that, and even then only after at least a few months of thorough discussion. An exception to that might be if I was afraid that the laptop would fall into the hands of bad actors, in which case I'd probably call up MIRI and just do whatever they tell me to do as fast as humanly possible.
I do agree with you though; it probably would be perfectly possible to develop superhuman AI within a day, given such power.
It is worth asking what sort of algorithm you might use, and perhaps more importantly, what would you define as the "win condition" for your program? Going for something like a massively larger version of GPT3 would probably pass the Turing test with relative ease, but I'm not sure it would be capable of generating smarter-than-human insight, since it will only attempt to resemble what already exists in its training data. How would you go about it, if you weren't terribly concerned about AI safety?
↑ comment by Kevin Lacker (kevin-lacker) · 2020-11-20T00:39:01.664Z · LW(p) · GW(p)
The first algorithm I would use is this, to solve problems of mimicking a function with provided inputs and outputs:
For all possible programs of length less than X, run that program on the inputs for time Y. Then measure how close it comes to the outputs. The closest program is then your model.
This takes time O(Y*2^X) so it's impractical in the world we live in, but in this hypothetical world it would work pretty well. This only solves the "classification" or "modeling" type of machine learning problems, rather than reinforcement learning per se, but that seems pretty good to start.
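Spelled out as code, that procedure might look like this. It's a sketch: `run_with_budget` and `error` are stand-ins for the oracle's bounded-step emulator and for whatever distance metric fits the problem, and the byte encoding of programs is assumed.

```python
from itertools import product

def run_with_budget(program: bytes, inp, steps: int):
    """Stand-in: run `program` on `inp` for at most `steps` steps and
    return whatever output it produced."""
    raise NotImplementedError

def error(actual, expected) -> float:
    """Stand-in metric: squared error, edit distance, whatever fits."""
    raise NotImplementedError

def best_model(examples, X: int, Y: int) -> bytes:
    """Try every program shorter than X bytes with a budget of Y steps
    per input; return the one whose outputs come closest overall."""
    best, best_err = None, float("inf")
    for length in range(1, X):
        for combo in product(range(256), repeat=length):
            prog = bytes(combo)
            err = sum(error(run_with_budget(prog, inp, Y), out)
                      for inp, out in examples)
            if err < best_err:
                best, best_err = prog, err
    return best
```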
For reinforcement learning, it just depends what you'd want to do in general. I would not just build a general AI and give it access to the internet, any more than I would bring an army of teenagers over to my house and give them access to my car and wallet. If you really had a super-powerful AI, then I think the best way of increasing its practical capabilities over time while controlling it would be like any other technology - start a tech company, raise money, think of a business model, and just see what happens. That strategy seems far more likely to let you retain control over the technology and continue to exercise your own moral judgment over time. Compare to, for example, the scientists developing nuclear weapons, who quickly lost control to politicians. Maybe you could build a new search engine - that seems like it could be a lot better with real AI behind it.
9 comments
Comments sorted by top scores.
comment by johnswentworth · 2020-11-11T17:53:21.407Z · LW(p) · GW(p)
If you could automatically pipe output back into the "computation speed" parameter, without being limited by passing around increasingly large strings, then you could get truly infinite compute in finite time.
↑ comment by Yitz (yitz) · 2020-11-11T23:47:32.677Z · LW(p) · GW(p)
How would that work exactly? Let's say you get the output to give you the largest possible number given the number of computations currently allowed, which as long as the "computation speed" parameter is finite, will be finite as well, albeit incredibly large. Every step taken will only increase the computation speed by a finite amount, so how would you reach infinity in a finite time?
↑ comment by johnswentworth · 2020-11-12T00:06:30.176Z · LW(p) · GW(p)
Let's say we have api access to the magic box - we can call magic(code, number) to run it. Furthermore, we can do this from within code being executed by the magic box itself.
Then, we could run a function like f(n): magic("f(2*n)", n). So, the function has the box run a copy of itself with n doubled. With each recursive step, the speed doubles, so the computation time (roughly) halves - computation time will actually go down a bit more slowly than this, since n will have more bits at each step, but that's a linear effect so the whole thing will still work out to an at-least-exponential speedup. The total runtime is then a geometric series (something like 1 + 1/2 + 1/4 + ...), so it will come out to a finite number. Thus, infinite computation in finite time. (This particular infinite computation doesn't actually do anything besides recurse infinitely, but it can be modified.)
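In sketch form, where `magic` is the hypothetical self-call API (which, as discussed below, the box may or may not actually expose):

```python
def magic(code: str, speed: int):
    """Stand-in for the box's hypothetical API: run `code` at `speed`
    computations per second. Only the box itself could honor this."""
    raise NotImplementedError

def f(n: int):
    # Each level schedules a copy of itself at double the speed, so
    # level k finishes in roughly 1/2^k of the time of level 0 and the
    # total wall-clock time is a convergent geometric series.
    magic(f"f({2 * n})", n)
```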
↑ comment by Yitz (yitz) · 2020-11-13T04:56:12.810Z · LW(p) · GW(p)
Your assumption that you could fully simulate the magic box inside itself is a pretty massive one, and I honestly wouldn't expect it to be true in this hypothetical universe. After all, after receiving the input parameters, the machine simulates a Universal Turing Machine of arbitrary finite size, which by definition cannot perform any non-Turing-computable functions, and certainly couldn't simulate a machine more complex than itself. In order for your magic "Zeno's paradox-destroyer" function to work given the parameters outlined in the story, it would need to be able to call the machine running it from the inside, and God (in His infinite wisdom 🙃) hasn't given us any "escape" api functions for us to do that.
(note that I really do appreciate your thoughts here, and I'm not trying to dunk on them, just trying to get a better understanding of unbounded finite computational systems and want to plug any potential holes in the story before expanding on this hypothetical universe in the future)
↑ comment by johnswentworth · 2020-11-13T05:21:34.822Z · LW(p) · GW(p)
Yeah, that makes sense.
How exactly does the "speed" box take input? Because if its input is a plain decimal number, then we really only have access to exponential compute, since we'll need to type out a string of n digits to get 2^O(n) compute. If it takes more general programs (which output a number), then we can use the machine itself to search for short formulas which output large numbers, which might buy us a lot more compute. On the other hand, if the speed box takes general programs, then it needs to somehow recognize programs which don't halt, which we could also exploit for hypercomputation.
↑ comment by Taran · 2020-11-13T11:50:39.088Z · LW(p) · GW(p)
Maybe the input box uses its own notation, something weak enough that infinite loops are impossible but powerful enough to express Conway's arrows? That seems like it would be enough to be interesting without accidentally adding a halting oracle.
↑ comment by Yitz (yitz) · 2020-11-16T02:24:24.994Z · LW(p) · GW(p)
I'll go with Taran's idea there, I think. Something like Douglas Hofstadter's BlooP language, perhaps, which only allows primitive recursive functions. Would BlooP allow for chained arrow notation, or would it be too restrictive for that? More generally, what is the fastest growing primitive recursive function possible, and what limits would that give us in terms of the scope of problems that can be solved by our magic box?
↑ comment by Taran · 2020-11-16T12:24:47.722Z · LW(p) · GW(p)
Would BlooP allow for chained arrow notation, or would it be too restrictive for that?
Sadly, you need more than that: chained arrows can express the Ackermann function, which isn't primitive recursive. But I guess you don't really need them. Even if you just have Knuth's up-arrows, and no way to abstract over the number of up-arrows, you can just generate a file containing two threes with a few million up-arrows sandwiched between them, and have enough compute for most purposes (and if your program still times out, append the file to itself and retry).
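For concreteness, here's a sketch of up-arrow evaluation. Any fixed arrow count gives a primitive recursive function; it's only abstracting over the number of arrows that takes you into Ackermann territory.

```python
def up(a: int, b: int, arrows: int) -> int:
    """Knuth's a ↑^arrows b, by the usual double recursion.
    up(3, 3, 2) is already 3^27 = 7625597484987; a few million arrows,
    as in the file described above, is unimaginably larger."""
    if arrows == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, up(a, b - 1, arrows), arrows - 1)
```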
↑ comment by Yitz (yitz) · 2020-11-16T02:45:27.579Z · LW(p) · GW(p)
That's a really good point, and I'm mentally kicking myself right now for not having thought of that. In answer to another comment below yours, I suggested only allowing primitive recursive functions to be entered as input in the second box, which I think would solve the problem (though it possibly creates another computational limit at the growth rate of the fastest-growing primitive recursive function possible [which I haven't studied at all tbh, so if you happen to have any further reading on that I'd be quite interested]). Going a bit further with this, while I suggested God limiting us to one bounded language like BlooP to use on the second field, if He instead allowed for any language to be used, but only primitive recursive functions to be run, would we be able to exploit that feature for hypercomputation?
Also, while we're discussing it, what would be in the space of problems that could be uniquely solved by 2^O(n) compute, but not by "normal" O(n) compute?