AI is not enough

post by benjayk · 2012-02-07T15:53:28.737Z · LW · GW · Legacy · 39 comments


What I write here may be quite simple (and I am certainly not the first to write about it), but I still think it is worth considering:


Say we have an arbitrary problem that we assume has an algorithmic solution, and we search for that solution.


How can the algorithm be determined?
Either:
a) Through another algorithm that exists prior to it.
b) Through something non-algorithmic.


In the case of an AI, the only option is a), since it has nothing but algorithms at its disposal. But then we face the problem of determining the algorithm the AI uses to find the solution, and then the algorithm to determine that algorithm, and so on...
Obviously, at some point we have to actually find an algorithm to start with, so in any case at some point we need something fundamentally non-algorithmic to determine a solution to a problem that is solvable by an algorithm.


This reveals something fundamental we have to face with regards to AI:

Even assuming that all relevant problems are solvable by an algorithm, AI is not enough. Since there is no way to algorithmically determine the appropriate algorithm for an AI (this would result in an infinite regress), we will always have to rely on some non-algorithmic intelligence to find more intelligent solutions. Even if we found a very powerful seed AI algorithm, there will always be more powerful seed AI algorithms that can't be determined by any known algorithm, and since we were able to find the first one, we have no reason to suppose we can't find another more powerful one. If an AI recursively improves itself 100,000 times until it is 100^^^100 times more powerful, it will still be caught up if a better seed AI is found, which ultimately can't be done by an algorithm, so further increases of the most general intelligence always rely on something non-algorithmic.

But even worse: apart from the problem of finding the right algorithm, it seems obvious to me that there are important practical problems that have no algorithmic solution at all (as opposed to theoretical problems like the halting problem, which remain tractable in practice).
In a sense, it seems all algorithms are too complicated to find the solution to the simple (though not necessarily easy) problem of giving rise to further general intelligence.
For example: No algorithm can determine the simple axioms of the natural numbers from anything weaker. We have to postulate them, by virtue of simply seeing that they make sense. Thinking that AI could give rise to ever-improving *general* intelligence is like thinking that an algorithm can yield "there is a natural number 0, and every number has a successor that is also a natural number". There is simply no way to derive the axioms from anything that doesn't already include them. The axioms of the natural numbers are just obvious, yet can't be derived - the problem of finding them is too simple to be solved algorithmically. Yet it is obvious how important the notion of the natural numbers is.
Even the best AI will always be fundamentally incapable of finding some very simple yet fundamental principles.
AI will always rely on the axioms it already knows; it can't go beyond them (unless reprogrammed by something external). Every new thing it learns can only be learned in terms of already-known axioms. This is simply a consequence of the fact that computers and programs function according to fixed rules. But general intelligence necessarily has to transcend rules (since at the very least the rules can't be determined by rules).
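To see how little there is to derive here, this is roughly what postulating the naturals looks like when written down - an illustrative sketch in Lean notation (Nat' is just a name chosen to avoid the built-in type):

    -- The constructors are postulated outright, not derived from anything simpler.
    inductive Nat' where
      | zero : Nat'                 -- "there is a natural number 0"
      | succ : Nat' → Nat'          -- "every number has a successor"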


I don't think this is an argument against a singularity of ever-improving intelligence. It just can't happen driven (solely or predominantly) by AI, whether through a recursively self-improving seed AI or through cognitive augmentation. Instead, we should expect a singularity that happens due to emergent intelligence. I think it is the interaction of different kinds of intelligence (like human/animal intuitive intelligence, machine precision, and the inherent order of the non-living universe, if you want to call that intelligence) that leads to increases in general intelligence, not just one particular kind of intelligence like formal reasoning used by computers.

39 comments

Comments sorted by top scores.

comment by asr · 2012-02-07T17:35:11.836Z · LW(p) · GW(p)

There is an ambiguity in how people are using the word "algorithm".

Algorithm-1 is a technique that provably accomplishes some identifiable task X, perhaps with a known allowance for error. This is mostly what we talk about in computer science as "algorithms". Algorithm-2 is any process that can be described and that sometimes terminates. It might not come with any guarantees, it might not come with any clear objectives. A heuristic is an example of Algorithm-2.

Note that this distinction is observer-dependent. Unintelligible code is Algorithm-2, but it becomes Algorithm-1 when you learn what it does and why it works.

Human intelligence is an example of Algorithm-2 and not an example of Algorithm-1 for our purposes.

Machines can do both Algorithm-1 and Algorithm-2.

As near as I can tell, you are highlighting the fact that we don't have an Algorithm-1 for AI design. But that doesn't mean there isn't an Algorithm-2 that accomplishes it and doesn't mean we won't find that Algorithm-2.

Note that an Algorithm-2 can output an Algorithm-1 and a proof/explanation of it. There's nothing stopping us from building a giant-and-horrible combination of theorem prover, sandbox, code mutator, etc, that takes as input a spec, and might at some point output an algorithm, with a proof, that meets the spec.
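As a minimal sketch of that shape of system (every name here is invented for illustration, and testing against a finite spec stands in for the theorem prover):

    # Enumerate candidate programs from a tiny expression grammar and
    # return the first one that matches every (input, output) pair.
    # A real system would also sandbox execution and emit a proof.
    ATOMS = ["x", "0", "1", "2"]

    def candidates(depth):
        if depth == 0:
            yield from ATOMS
            return
        for a in candidates(depth - 1):
            for b in candidates(depth - 1):
                for op in ("+", "-", "*"):
                    yield f"({a} {op} {b})"

    def search(spec, max_depth=2):
        for depth in range(max_depth + 1):
            for expr in candidates(depth):
                if all(eval(expr, {"x": i}) == o for i, o in spec):
                    return expr
        return None

    print(search([(1, 2), (3, 6), (10, 20)]))   # finds e.g. "(x + x)"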

Replies from: JoshuaZ
comment by JoshuaZ · 2012-02-08T07:39:27.391Z · LW(p) · GW(p)

You make a good point that some confusion may be resulting from different notions of algorithm.

Algorithm-1 is a technique that provably accomplishes some identifiable task X, perhaps with a known allowance for error. This is mostly what we talk about in computer science as "algorithms". Algorithm-2 is any process that can be described and that sometimes terminates. It might not come with any guarantees, it might not come with any clear objectives. A heuristic is an example of Algorithm-2.

From a math/compsci perspective, the most common definition is neither of these. Algorithm-3 is a deterministic process which terminates in finite time and yields some output. Thus an algorithm under definition three is essentially akin to a computable function.

Replies from: asr
comment by asr · 2012-02-08T08:17:09.469Z · LW(p) · GW(p)

From a math/compsci perspective, the most common definition is neither of these. Algorithm-3 is a deterministic process which terminates in finite time and yields some output. Thus an algorithm under definition three is essentially akin to a computable function.

I was writing sloppily without checking a reference. However, I did want to include randomized algorithms, online algorithms, and suchlike under definition 1.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-02-08T14:13:20.204Z · LW(p) · GW(p)

Yes, I'm suggesting that most of the time when people use the term "algorithm" they don't mean to include those, so there's a useful third notion of algorithm. I'm still not completely sure which of these the OP actually intended.

comment by gjm · 2012-02-07T16:57:44.752Z · LW(p) · GW(p)

I'm afraid just about everything here is wrong.

at some point we need something fundamentally non-algorithmic

No. Our brains are already implementing lots of algorithms. So far as we know, anything human beings come up with -- however creative -- is in some sense the product of algorithms. I suppose you could go further back -- evolution, biochemistry, fundamental physics -- but (1) it's hard to see how those could actually be relevant here and (2) as it happens, so far as we know those are all ultimately algorithmic too.

we will always have to rely on some non-algorithmic intelligence to find more intelligent solutions.

No (not even if you were right about ultimately needing something fundamentally non-algorithmic). Suppose you have some initial magic non-algorithmic step where the Finger of God implants intelligence into something (a computer, a human being, whatever). After that, that intelligent thing can design more intelligent things which design more intelligent things, etc. The alleged requirement to avoid an infinite regress is satisfied by that initial Finger-of-God step, even if everything after that is algorithmic. There's no reason to think that continued non-algorithmic stuff is called for.

we have no reason to suppose we can't find another more powerful one.

That might be true. It might even be true -- though I don't think you've given coherent reasons to think so -- that there'll always be a possible Next Big Thing that can't be found algorithmically. So what? A superintelligent AI isn't any less useful, or any less dangerous, merely because a magical new-AI-creating process might be able to create an even more superintelligent AI.

No algorithm can determine the simple axioms of the natural numbers from anything weaker.

It is not clear that this means anything. You certainly have given no reasons to believe it.

There is simply no way to derive the axioms from anything that doesn't already include them.

I think you are confusing derivations within some formal system such as Peano arithmetic (where, indeed, the only way to get the axioms is to begin with them, or with some other axioms that imply them) and (a quite different sort of) derivations outside that formal system, such as whatever Peano did to arrive at his axioms. I know of no reason to believe that the latter is fundamentally non-algorithmic, though for sure we don't know what algorithms would be best.

general intelligence necessarily has to transcend rules

I know of no reason to believe this, and it seems to me that if it seems true it's because what you imagine when you think about following rules is very simple rule-following, the sort of thing that might be done by a computer program at most a few pages in length running on a rather slow computer. In particular ...

since at the very least the rules can't be determined by rules

Whyever not? They have to be different rules, that's all.

Instead, we should expect a singularity that happens due to emergent intelligence.

"Emergence" is not magic.

not just one particular kind of intelligence like formal reasoning used by computers

Well, that might well be correct, in the sense that good paths to AI might well involve plenty of things that aren't best thought of as "formal reasoning". (Though, if they run on conventional computers, they will be equivalent in some sense to monstrously complicated systems of formal reasoning.)

Replies from: benjayk
comment by benjayk · 2012-02-07T17:57:15.153Z · LW(p) · GW(p)

You didn't really respond to my argument. You just said: "It's all algorithmic, basta." The problem is that there is no algorithmic way to determine any algorithm, since if you try to find an algorithm for the algorithm, you only have the bigger problem of determining that algorithm. The universe can't run solely on algorithms, unless you invoke "God did it! He created the first algorithm" or "The first algorithm just appeared randomly out of nowhere". I think this statement is ridiculous, but there is no refutation for dogma. If the universe were so absurd, I could just as well be a Christian fundamentalist or randomly do nonsensical things (since it's all random either way).

No algorithm can determine the simple axioms of the natural numbers from anything weaker.

It is not clear that this means anything. You certainly have given no reasons to believe it.

What? The axioms of the natural numbers can't be determined because they are axioms. If that's not true, derive "0 is a natural number" and "1 is the successor of 0" without any notion of numbers.

It means that there is no way that an AI could invent the natural numbers. Hence there are important inventions that AIs can't make - in principle.

There is simply no way to derive the axioms from anything that doesn't already include them.

I think you are confusing derivations within some formal system such as Peano arithmetic (where, indeed, the only way to get the axioms is to begin with them, or with some other axioms that imply them) and (a quite different sort of) derivations outside that formal system, such as whatever Peano did to arrive at his axioms. I know of no reason to believe that the latter is fundamentally non-algorithmic, though for sure we don't know what algorithms would be best.

Instead of asserting that, just try to derive the simplest axioms of arithmetic from something that is not more complex (which of course can't always work, since we only have a limited supply of complex systems). It doesn't work. The axioms of arithmetic are irreducibly simple - too simple to be derived.

I know of no reason to believe this, and it seems to me that if it seems true it's because what you imagine when you think about following rules is very simple rule-following, the sort of thing that might be done by a computer program at most a few pages in length running on a rather slow computer. In particular ..

Not at all! It doesn't matter how complex the rules are. You can't go beyond the axioms of the rules, because that is what makes rules rules. Yet it is easily possible to invent new axioms. This is essential for intelligence, yet an AI can't do it, since it only works by its axioms. It can do it on a meta-level, for sure, but that's not enough, since in that case the new axioms are just derived from the old ones. Or it uses user input, but then the program isn't a self-contained intelligence anymore.

since at the very least the rules can't be determined by rules

Whyever not? They have to be different rules, that's all.

And how are these rules determined? Either you have an infinite chain of rules, which itself can't be derived from a rule, or you start by picking a rule without any rule.

Instead, we should expect a singularity that happens due to emergent intelligence.

"Emergence" is not magic.

Really? I think it is - not, of course, in any anthropomorphic sense. What else could describe, for example, the emergence of patterns out of cellular-automata rules? It seems to me nature is inherently magical. We just have to be careful not to project our superstitious ideas of magic onto nature. Even materialists have to rely on magic at the most critical points. Look at the anthropic principle. Or at the question "Where do the laws of nature come from?". Either we deny that the question is meaningful or important, or we have to admit it is fundamentally mysterious and magical.

Replies from: gjm, APMason, prase
comment by gjm · 2012-02-07T23:02:47.226Z · LW(p) · GW(p)

No, I didn't say "it's all algorithmic, basta"; I said "so far as we know, it's all algorithmic". Of course it's possible that we'll somehow discover that actually our minds run on magic fairies and unicorns or something, but so far as I can tell all the available evidence is consistent with everything being basically algorithmic. You're the one claiming to know that that isn't so; I invite you to explain how you know.

I haven't claimed that the axioms of arithmetic are derived from something simpler. I have suggested that for all we know, the process by which we found those axioms was basically algorithmic, though doubtless very complicated. (I'm not claiming that that algorithmic process is why the axioms are right. If you're really arguing not about the processes by which discoveries are made but about why arithmetic is the way it is, then we need to have a different discussion.)

it is easily possible to invent new axioms. This is essential for intelligence, yet an AI can't do it, since it only works by its axioms.

I'm afraid this is very, very wrong. Perhaps the following analogy will help: suppose I said "It is easily possible to contemplate arbitrarily large numbers, even ones bigger than 2^32 or 2^64. This is essential for intelligence, yet an AI can't do it, since it only works with 32-bit or 64-bit arithmetic." That would be crazy, right?, because an AI (or anything else) implemented on a hardware substrate that can only do a very limited set of operations can still do higher-level things if it's programmed to. A computer can do arbitrary-precision arithmetic by doing lots of 32-bit arithmetic, if the latter is organized in the right way. Similarly, it can cook up new axioms and rules by following fixed rules satisfying fixed axioms, if the latter are organized in the right way.
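To make the arithmetic analogy concrete, here is a minimal sketch (in Python for brevity) of arbitrary-precision addition built from nothing but fixed-width 32-bit pieces:

    # Numbers are lists of 32-bit "limbs", least significant first.
    MASK = 0xFFFFFFFF  # 2**32 - 1

    def bignum_add(a, b):
        out, carry = [], 0
        for i in range(max(len(a), len(b))):
            s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
            out.append(s & MASK)   # keep the low 32 bits
            carry = s >> 32        # overflow goes into the next limb
        if carry:
            out.append(carry)
        return out

    # (2**32 + 5) + 7, i.e. limbs [5, 1] plus [7]:
    print(bignum_add([5, 1], [7]))   # [12, 1]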

And how are these rules determined?

Depends how far back the chain of causation you want to go. There'll be some rules programmed into the computer by human beings. Those were determined by whatever complicated algorithms human brains execute. Those were determined by whatever complicated algorithms human cultures and biological evolution execute. Those were determined by ... etc. As you go further back, you get algorithms with less direct connection to intelligence (ours, or a computer's, or whatever). Ultimately, you end up with whatever the basic laws of nature are, and no one knows those for sure. (But, again, so far as anyone knows they're algorithmic in nature.)

So: no infinite chain, probably (though it's not clear to me that there's anything actually impossible about that); you start with whatever the laws of nature are, and so far as anyone knows they just are what they are. (I suppose you could try to work that up into some kind of first-cause argument for the existence of God, but I should warn you that it isn't likely to work well.)

Really? I think it [emergence] is [magic] ... It seems to me nature is inherently magical.

Oh. Either you're using the word "magical" in a nonstandard way that I don't currently understand, or at least one of us is so terribly wrong about the nature of the universe that further discussion seems unlikely to be helpful.

comment by APMason · 2012-02-07T21:44:43.213Z · LW(p) · GW(p)

The universe can't run solely on algorithms, unless you invoke "God did it! He created the first algorithm" or "The first algorithm just appeared randomly out of nowhere". I think this statement is ridiculous, but there is no refutation for dogma. If the universe were so absurd, I could just as well be a Christian fundamentalist or randomly do nonsensical things (since it's all random either way).

As a general rule, arguments which rely on having exhausted the hypothesis space are weak. You can't say, "It can't be algorithms, because algorithms don't solve the problem of the first cause." Well, so what? Neither do the straw men you suggest. Neither, indeed, do "emergence" or "magic", which aren't explanations at all. It's one of those hard problems - it doesn't just trouble positions you disagree with.

comment by prase · 2012-02-07T21:08:03.359Z · LW(p) · GW(p)

The axioms of the natural numbers can't be determined because they are axioms. If that's not true, derive "0 is a natural number" and "1 is the successor of 0" without any notion of numbers.

Again, they cannot be derived within the formal system where they are axioms. They can be determined in a different system which uses distinct axioms or derivation rules. This is, more or less, how you could interpret the parent comment.

The axioms of arithmetic are irreducibly simple - too simple to be derived.

Your argument seems to be

  1. Humans have derived arithmetic.
  2. Arithmetic can't be algorithmically derived from a simpler system.
  3. Therefore, humans are not algorithmic.

It seems that you are equivocating. Your original assertion is that an algorithm can't derive (in this case, meaning invent) formal arithmetic, but the quoted argument supports another claim, namely that the formalisation of arithmetic is the most austere possible. But this claim is not (at least not obviously) relevant to the original question of whether intelligence is algorithmic. Humans haven't derived formal arithmetic from a simpler formal system. Removing the equivocation, the argument is a clear non sequitur:

  1. Humans have invented arithmetic.
  2. Arithmetic can't be simplified.
  3. Therefore, humans are not algorithmic.

What else could describe, for example, the emergence of the patterns out of cellular automata rules? It seems to me nature is inherently magical.

What do you mean by magical? Saying "emergence is magical" doesn't look like a description.

I think this statement is ridiculous

I would suggest being more careful with such statements; they come across as confrontational.

comment by FAWS · 2012-02-07T18:51:34.432Z · LW(p) · GW(p)

You are assuming a concept of "algorithm" wide enough that everything an AI could do counts as an algorithm, and simultaneously narrow enough that things humans do that don't seem obviously algorithmic don't count, so you are simply begging the question. The concept of algorithm you use implies, right from the start, that there are things a human can do and an AI can't.

comment by mas · 2012-02-07T17:15:55.413Z · LW(p) · GW(p)

I'm having trouble thinking of methods of problem solving that are non-algorithmic. Can anyone provide me with an example?

Replies from: benjayk
comment by benjayk · 2012-02-07T17:20:05.560Z · LW(p) · GW(p)

How do you solve the problem of making yourself a cup of tea? You don't have any precise step-by-step method of doing so. How do you know that you exist, or that anything at all exists, for that matter? What kind of algorithm do you use for that?

What algorithm do you use to find the appropriate axioms to formalize what numbers are?

In all cases you use methods that are non-algorithmic, or you don't use methods at all and still solve the problem (like "I just recognize that I exist").

Replies from: asr, Vaniver, mas
comment by asr · 2012-02-07T17:46:34.810Z · LW(p) · GW(p)

You've convinced me that I don't have conscious introspective access to the algorithms I use for these things. This doesn't mean that my brain isn't doing something pretty structured and formal underneath.

The formalization example I think is a good one. There's a famous book by George Polya, "How to Solve It". It's effectively a list of mental tactics used in problem solving, especially mathematical problem solving.

When I sit down to solve a problem, like formalizing the natural numbers, I apply something like Polya's tool-set iteratively. "Have I formalized something similar before?" "Is there a simpler version I could start with?" and so forth. This is partly conscious and partly not -- but the fact that we don't have introspective access to the unconscious mind doesn't make it non-algorithmic.

As I work, I periodically evaluate what I have. There's a black box in my head for "do I like this?" I don't know a lot about its internals, but that again isn't evidence for it being non-algorithmic. It's fairly deterministic. I have no reason to doubt that there's a Turing machine that simulates it.

Effectively, my algorithm for math works like this:

while (nothing else is a higher priority than this problem) {
    stare at the problem and try to understand it
    search my past memories for something related   // neural nets are good at this
    for each relevant past memory:
        try to apply a relevant technique that worked in the past
        evaluate the result
        if it looks like progress:
            declare this to be the new version of the problem
}

Seems algorithmic to me!
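(A literal, runnable rendering of that loop, as a sketch: the control flow is fully formal, and the informal cognitive steps are passed in as stubs - which is of course exactly the part in dispute.)

    def solve(problem, higher_priority, understand, recall_related,
              apply_technique, looks_like_progress):
        # The stubs passed in as arguments hide the informal steps;
        # the loop structure itself is an ordinary program.
        while not higher_priority(problem):
            understand(problem)
            for memory in recall_related(problem):
                result = apply_technique(memory, problem)
                if looks_like_progress(result, problem):
                    problem = result   # the new version of the problem
        return problem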

Replies from: benjayk
comment by benjayk · 2012-02-07T18:08:23.060Z · LW(p) · GW(p)

Sorry, it seems you are just presuming computationalism. The question is not "Why would it not be algorithmic?" but "Why would it be algorithmic?", considering that, as you say yourself, from your perspective no algorithm is visible.

The algorithm you wrote down is a nice metaphor, but not in any way an algorithm in the sense computer science means it. Since we are talking about AI, I am only referring to "algorithm" in the computer-science sense of a precisely formalizable procedure.

Replies from: asr
comment by asr · 2012-02-07T19:04:47.887Z · LW(p) · GW(p)

I agree that neural nets in general, and the human brain in particular can't be readily replaced with a well-structured computer program of moderate complexity.

But I indeed was presuming computationalism, in the sense that "all a human brain does is compute some function that could in principle be formalized, given enough information about the particular brain in question". If that's the claim you wanted to focus on, you should have raised it more directly.

Computationalism is quite separate from whether there is a simple formalism for intelligence. I believe computationalism because I believe that it would be possible to build an accurate neuron-level simulator of a brain. Such a simulator could be evaluated using any Turing-equivalent computer. But the resulting function would be very messy and lack a simple hierarchical structure.

Which part of this are we disagreeing on? Do you think a neuron-level brain simulation could produce intelligent behavior similar in character to a human being? Do you think an engineered software artifact could ever do the same?

comment by Vaniver · 2012-02-08T03:44:56.594Z · LW(p) · GW(p)

How do you solve the problem of making yourself a cup of tea?

I follow a set of rules that provide me with a cup of tea in a finite number of steps.

comment by mas · 2012-02-07T17:30:40.448Z · LW(p) · GW(p)

How do you know that my brain doesn't have algorithms running for all of these problems?

Surely for tea making it's something like this: I want tea -- Do I have all the ingredients? -- (Water) yes, (Tea bag) no -- Do I go to the store? -- Is the store open? -- etc.
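That flowchart translates directly into code; a toy sketch (names invented for illustration):

    def get_tea(pantry, store_open):
        missing = {"water", "tea bag"} - pantry
        if missing and not store_open:
            return "no tea today"
        pantry |= missing                     # buy whatever was missing
        return "boil water, steep, drink"

    print(get_tea({"water"}, store_open=True))   # buys a tea bag, makes tea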

Replies from: benjayk
comment by benjayk · 2012-02-07T18:13:58.903Z · LW(p) · GW(p)

I don't know that. I am not claiming to "know" any of what I write here. I don't think it is knowable. I just write about what is obvious to me. This doesn't amount to knowledge, though.

In theory it could be possible that there is an algorithm, even though no one has shown this. I doubt it, though. In any case, there would be some non-algorithmic way of arriving at this algorithm, which just shifts the mystery from humans to something else.

Actually, it seems that you could describe everything that happens in terms of an algorithm. The simplest possibility is to just describe what happens in a string and let the algorithm output that. This doesn't imply, though, that this is what makes things happen. But even then, this algorithm still couldn't be derived from algorithms alone.

comment by Manfred · 2012-02-07T18:03:49.604Z · LW(p) · GW(p)

An interesting test of what appear to be general arguments against something is whether you can use them to prove the opposite as well.

Clearly, if you want to use something non-algorithmic to solve a problem (say, if you're a human, which for the sake of argument we will pretend is "non-algorithmic"), you have to get that non-algorithm somehow. But then we have the problem of determining the non-algorithm to find that, and so on...
Obviously, at some point we have to actually find a non-algorithm to start with, so in any case at some point we need something fundamentally algorithmic to determine a solution to a problem that is solvable by a non-algorithm.

Replies from: benjayk
comment by benjayk · 2012-02-07T18:37:17.424Z · LW(p) · GW(p)

Interesting reply! The solution is to terminate the chain of determination with a non-algorithm that is fundamentally indeterminate, which quantum mechanics already hints at.

comment by Alex_Altair · 2012-02-07T16:36:42.547Z · LW(p) · GW(p)

I think this post is filled with conceptual confusion, and I find it fascinating.

You never quite state the alternative to an algorithm. I propose that the only alternative is randomness. All processes in the universe are algorithms with elements of randomness.

I'm curious as to what you think the human brain does, if not an algorithm. I, like many on LW, believe the human brain can be simulated by a Turing machine (possibly with a random generator). Concepts like "heuristics" or "intuition" or "exploration" are algorithms with random elements. There is a lot of history on formalizing processes, and nothing known lies outside Turing machines (see the Church-Turing thesis).

In addition, I think the human brain is a lot more algorithmic than you think it is. A lot of Lukeprog's writings on neuroscience and psychology demonstrate ways in which our natural thoughts or intuitions are quite predictable.

at some point we have to actually find an algorithm to start with

The universe started with the laws of physics (which are known to be algorithms, possibly with a random generator), and has run that single algorithm up to the present day.

What do you think about my proposed algorithm/random dichotomy?

Replies from: benjayk
comment by benjayk · 2012-02-07T17:33:34.473Z · LW(p) · GW(p)

You never quite state the alternative to an algorithm. I propose that the only alternative is randomness.

The alternative to algorithms is non-formalizable processes. Obviously I can't give a precise definition or example of one, since then we would have an algorithm again.

The best example I can give is the following: assume that the universe works precisely according to laws (I don't think so, but let's assume it). What determines the laws? Another law? If so, you get an infinite regress of laws, and you don't have a law to determine that, either. So according to you, the laws of the universe are random. I think this hardly plausible.

I'm curious as to what you think the human brain does, if not an algorithm.

I don't know, and I don't think it is knowable in a formalizable way. I consider intelligence to be irreducible. The order of the brain can only be seen by recognizing its order, not by reducing it to any formal principle.

In addition, I think the human brain is a lot more algorithmic than you think it is. A lot of Lukeprog's writings on neuroscience and psychology demonstrate ways in which our natural thoughts or intuitions are quite predictable.

I am not saying the human brain is entirely non-algorithmic. Indeed, since the known laws of nature are quite algorithmic (except for quantum indeterminacy) and the behaviour of the brain can't deviate from them to any large degree (otherwise we would have noticed already), we can assume the behaviour of our brains can be quite closely approximated by laws. Still, this doesn't mean there isn't a very crucial non-lawful behaviour inherent to it.

The universe started with the laws of physics (which are known to be algorithms, possibly with a random generator), and has run that single algorithm up to the present day.

How did the universe find that algorithm? Also, the fact that the behaviour of physics is nicely approximated by laws doesn't mean that these laws are absolute or unchanging.

What do you think about my proposed algorithm/random dichotomy?

Frankly, I see no reason at all to think it is valid.

Replies from: Tiiba
comment by Tiiba · 2012-02-18T00:50:43.753Z · LW(p) · GW(p)

"So according to you, the laws of the universe are random. I think this hardly plausible."

I don't see why it is not plausible. It's not like the Universe has any reason to choose the laws that it did and not others. Why have a procedure, algorithmic or not, if there are no goals?

comment by falenas108 · 2012-02-07T19:57:32.493Z · LW(p) · GW(p)

Based on your comments, you are clearly an atheist, and therefore reject the argument that God must exist because there has to be an uncaused cause.

Yet your uncaused-algorithm argument takes exactly the same form. Doesn't the same counterargument apply?

Replies from: benjayk
comment by benjayk · 2012-02-08T14:44:03.424Z · LW(p) · GW(p)

I am not necessarily an atheist; it depends on your definition of God. I reject all religious conceptions of God, but accept God as a name for the mysterious source of all order, intelligence and meaning, or as existence itself.

So in this sense, God is the uncaused cause and also everything caused.

It would indeed be a counterargument if I didn't believe in an uncaused cause, but I do, even though it isn't a separate entity like the usual notion of God implies.

comment by benjayk · 2012-02-08T16:19:02.838Z · LW(p) · GW(p)

My argument still holds in another form, though. Even if we assume the universe has a preexisting algorithm that just unfolds, we don't know which one it is. So we can't determine the best seed AI from that either; effectively we still have to start from b). Unless we get the best seed AI by accident (which seems unlikely to me), there will be room for a better seed AI, which can only be determined by starting with a totally new algorithm (which the original seed AI can't do, since it would then have to delete itself). We, having the benefit of not knowing our own algorithm, can build a better seed AI, which the old seed AI couldn't build because it already has a known algorithm it must necessarily build on.

Indeed, a good seed AI would at some point suggest that we try another seed AI, because it infers that its original code is unlikely to be the best possible at optimal self-modification. Or it would say: "Delete this part of my source code and rewrite it; it doesn't seem optimal to me, but I can't rewrite it myself, because I can't modify this part without destroying myself or basing the modification on the very part I want to fundamentally rewrite."

comment by benjayk · 2012-02-08T15:42:34.767Z · LW(p) · GW(p)

I see in which case my argument fails:

If we assume a preexisting algorithm for the universe (which most people here seem to do), then everything else could be derived from that, including all the axioms of the natural numbers, since we assume the algorithm to be more powerful than them from the start. Step b) is simply postulated to be already fulfilled, with the algorithm just being there (and "just being there" is not an algorithm), so that we already have an algorithm to start with (the laws of nature).

The "laws of nature" simply have to be taken as granted. There is nothing deeper than that. The problem is that then we face a universe which is essentially abitrary, since from what we know the laws could be anything abitrary else as well (these laws would just be the way they are, too). But this is obviously not true. The laws of nature are not abitrary, there is a deeper order in them which can't stem from any algorithm (since this would just be another abitrary algorithm). But if this is the case, I believe we have no reason to suppose that this order just stopped working and lets do one algorithm all the rest. We would rather expect it to be still continuously active. Then the argument works. The non-algorithmic order can always yield more powerful seed AIs (since there is no single most powerful algorithm, I think we can agree on that), so that AI is not sufficient for an ever increasing general intelligence.

So we face a problem of worldview here, which really is independent of the argument, and this is maybe not the right place to argue it (if it is even useful to discuss it; I am not sure about that, either).

comment by shminux · 2012-02-07T20:17:20.563Z · LW(p) · GW(p)

What I write here may be quite simple (and I am certainly not the first to write about it)

Feel free to post relevant links, then. Until then, downvoted.

comment by Thomas · 2012-02-07T16:21:56.792Z · LW(p) · GW(p)

Any bitstring up to a certain length N can be produced.

Some of them are algorithms. Some can be proved with rigor. Some can be tested statistically.

I don't see your point as valid.

Replies from: Viliam_Bur, benjayk
comment by Viliam_Bur · 2012-02-07T16:31:59.050Z · LW(p) · GW(p)

Any bitstring up to a certain length N can be produced.

Producing and proving/testing all strings up to 1000 bits would be pretty expensive - there are about 2^1000 ≈ 10^301 of them. And yet 1000 bits might not be enough for an intelligent AI.

Therefore, in this universe it is probably not possible to create an intelligent AI by simply enumerating and testing all bitstrings.

Replies from: asr
comment by asr · 2012-02-07T17:53:46.551Z · LW(p) · GW(p)

Yes. This shows you would need a better algorithm than brute-force search. But better algorithms are known to exist -- to pick a trivial one, you can do random generation in a nontrivial programming language, with a grammar and a type system. This lets you rule out lots of ill-typed or ill-formed programs quickly.

A still less trivial example would generate new programs out of bits of existing programs. See the "macho" work at the University of Illinois for an example. They're able to synthesize small programs (the size of 'ls') from the man-page description, plus a big body of example code.
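A toy sketch of the first idea - random generation constrained by a grammar, so that every candidate is well-formed by construction (the grammar here is invented for illustration):

    import random

    def gen_expr(depth):
        # Only grammatical integer expressions can come out, so no
        # candidate is syntactically ill-formed.
        if depth == 0 or random.random() < 0.3:
            return random.choice(["x", "0", "1"])
        op = random.choice(["+", "-", "*"])
        return f"({gen_expr(depth - 1)} {op} {gen_expr(depth - 1)})"

    for _ in range(3):
        print(gen_expr(3))   # e.g. "((x + 1) * (0 - x))" -- always parseable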

Replies from: benjayk
comment by benjayk · 2012-02-07T18:47:27.904Z · LW(p) · GW(p)

Better algorithms may be known to exist, but they themselves can't ultimately be selected by an algorithm, as you would then have an infinite regress of picking algorithms.

Generating all possible programs doesn't solve anything, since we still have to select the program that we actually use to solve a particular problem; also, the algorithm that generates all possible algorithms cannot itself be algorithmically determined. So what you say doesn't refute my point at all.

Replies from: asr, thomblake
comment by asr · 2012-02-07T18:53:18.538Z · LW(p) · GW(p)

Hrm? Suppose you're trying to solve some problem X. There is a range of algorithms and heuristics available to you. You try a few of them. At some point -- usually a very quick point -- one of them is good enough for your practical purpose, and you stop.

We don't typically go very far in formalizing our purposes. But I don't see what deep point you're driving at. For practical purposes, algorithms are chosen by people in order to solve practical problems. Usually there are a few layers of formalized intermediaries -- compilers and libraries and suchlike. But not very far down the regress there's a human. And humans settle for good enough. And they don't have a formal model of how they do so.

There isn't an infinite algorithmic regress. The particular process humans use to choose algorithms is unquestionably not a clean formal algorithm. Nobody ever said it was. The regress stops when you come to a human, who was never designed and isn't an algorithm-choosing algorithm. But that doesn't shed any light on whether a formal algorithm exists that could act similarly to a human, or whether there is an algorithm-choice procedure that's as good as or better than a human.

comment by thomblake · 2012-02-09T17:09:34.193Z · LW(p) · GW(p)

Better algorithms may be known to exist, but they themselves can't ultimately be selected by an algorithm, as you would then have an infinite regress of picking algorithms.

This is fallacious.

Correct conclusion: you would then have a 1-step regress of picking algorithms.

Watch that slippery slope.

comment by benjayk · 2012-02-07T17:22:04.937Z · LW(p) · GW(p)

You can even write an algorithm that goes through all possible algorithms (a universal dovetailer). Yet this algorithm doesn't solve any problem. So you still have to select an algorithm to solve a problem.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-02-08T07:34:06.911Z · LW(p) · GW(p)

You can even write an algorithm that goes through all possible algorithms (a universal dovetailer)

The universal dovetailer runs through all possible programs, which is a superset of all algorithms. You can't use it to get access to just the genuine algorithms in any algorithmic fashion - if you could, you could solve the halting problem.

Replies from: benjayk
comment by benjayk · 2012-02-08T14:25:42.950Z · LW(p) · GW(p)

I agree. That's why I say "this algorithm doesn't solve any problem"; it isn't a problem-solving algorithm in the sense I used in my post. Any "just go through all XYZ" doesn't solve my stated problem, because it doesn't select the actually useful solution.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-02-08T14:50:28.091Z · LW(p) · GW(p)

That's not the issue here. The issue is more subtle. Dovetailing doesn't go through every algorithm but through every program. That is, it runs a program whether or not that program will halt.

it isn't a problem-solving algorithm in the sense I used in my post

I'm not completely sure what you mean by a problem-solving algorithm. Variations of universal dovetailing that are very close to it can be problem-solving algorithms by most reasonable notions of the term. Consider, for example, the following:

Proposition: If P=NP, then there is an explicitly constructable algorithm which gives explicit solutions to 3-SAT in polynomial time.

Proof sketch: For a given 3-SAT instance, dovetail through all programs. Whenever a program gives an output, check whether that output is a solution to the 3-SAT instance. If so, you've found your answer. It isn't too hard to see that this process will terminate in polynomial time if P=NP.

(I'm brushing some issues under the rug here, like what we mean by explicitly constructable, and there's an implicit assumption of some form of ω-consistency for our axiomatic system.)

This sort of construction works for a large variety of problems, so one can say that, morally speaking, if an algorithm exists to do something in some complexity class, then dovetailing will find an example of such an algorithm.
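A skeleton of that construction (the program enumeration and stepwise execution are stubbed out as a hypothetical run_program helper; verifying a claimed solution is the concrete, easy part):

    from itertools import count

    def check_3sat(clauses, assignment):
        # Each clause is a list of literals, +i / -i for variable i;
        # the assignment maps variables to booleans.
        return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
                   for clause in clauses)

    def dovetail_solve(clauses, run_program):
        # run_program(i, steps): run the i-th program of some fixed
        # enumeration for `steps` steps; return its output if it has
        # halted, else None. (Hypothetical helper.)
        for budget in count(1):          # ever-larger step budgets...
            for i in range(budget):      # ...over ever more programs
                out = run_program(i, budget)
                if out is None:
                    continue
                try:
                    if check_3sat(clauses, out):
                        return out       # verified solution
                except (KeyError, TypeError):
                    pass                 # malformed output; keep dovetailing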

comment by scientism · 2012-02-07T18:10:30.956Z · LW(p) · GW(p)

I like this post more for the responses than for the argument. It's obvious that a lot of people here mean something by "algorithm" that is so general as to be useless: they mean any kind of (deterministic) procedure or process at all. Algorithms are effective procedures for calculation. Equations, laws, theories, and so forth are not algorithms (note that algorithms are procedures, not descriptions). The sort of argument you're making was common in early discussions of AI. The idea that everything a person can do can be explained algorithmically is really more a presupposition of AI than something it actively argues for. The idea is that the presupposition will be vindicated by the creation of AI programs that show the disputed properties.