Comments
My argument still holds in another form, though. Even if we assume the universe has a preexisting algorithm that just unfolds, we don't know which one it is. So we can't determine the best seed AI from that either; effectively we still have to start from b). Unless we get the best seed AI by accident (which seems unlikely to me), there will be room for a better seed AI, which can only be determined if we start with a totally new algorithm (which the original seed AI is unable to do, since then it would have to delete itself). We, having the benefit of not knowing our own algorithm, can build a better seed AI, which the old seed AI couldn't build because it already has a known algorithm it must necessarily build on.
Indeed, a good seed AI would at some point suggest that we try another seed AI, because it infers that its original code is unlikely to be the best possible basis for optimal self-modification. Or it would say: "Delete this part of my source code and rewrite it. It doesn't seem optimal to me, but I can't rewrite it myself, because I can't modify this part without destroying myself or basing the modification on the very part that I want to fundamentally rewrite."
I see in which case my argument fails:
If we assume a preexisting algorithm for the universe (which most people here seem to do), then everything else could be derived from that, including all the axioms of the natural numbers, since we assume the algorithm to be more powerful than them from the start. Step b) is simply postulated to be already fulfilled, with the algorithm just being there (and "just being there" is not an algorithm), so that we already have an algorithm to start with (the laws of nature).
The "laws of nature" simply have to be taken as granted. There is nothing deeper than that. The problem is that then we face a universe which is essentially abitrary, since from what we know the laws could be anything abitrary else as well (these laws would just be the way they are, too). But this is obviously not true. The laws of nature are not abitrary, there is a deeper order in them which can't stem from any algorithm (since this would just be another abitrary algorithm). But if this is the case, I believe we have no reason to suppose that this order just stopped working and lets do one algorithm all the rest. We would rather expect it to be still continuously active. Then the argument works. The non-algorithmic order can always yield more powerful seed AIs (since there is no single most powerful algorithm, I think we can agree on that), so that AI is not sufficient for an ever increasing general intelligence.
So we face a problem of worldview here, which really is independent of the argument, and this is maybe not the right place to argue it (if it is even useful to discuss it; I am not sure about that, either).
I am not necessarily an atheist; it depends on your definition of God. I reject all religious conceptions of God, but accept God as a name for the mysterious source of all order, intelligence and meaning, or as existence itself.
So in this sense, God is the uncaused cause and also everything caused.
It would indeed be a counterargument if I didn't believe in an uncaused cause, but I do believe in an uncaused cause, even though it isn't a separate entity like the usual notion of God implies.
I agree. That's why I say "this algorithm doesn't solve any problem"; it isn't a problem-solving algorithm in the sense I used in my post. Any "just go through all XYZ" doesn't solve my stated problem, because it doesn't select the actually useful solution.
Better algorithms may be known to exist, but these themselves can't ultimately be selected by an algorithm, as you would then have an infinite regress of picking algorithms.
Generating all possible programs doesn't solve anything, since we still have to select a program that we actually use to solve a particular problem; moreover, the algorithm that generates all possible algorithms cannot itself be algorithmically determined. So what you say doesn't refute my point at all.
Interesting reply! The solution is to terminate the chain of determination with a non-algorithm that is fundamentally indeterminate, which quantum mechanics already hints at.
I don't know that. I am not claiming to "know" any of what I write here. I don't think it is knowable. I just write about what is obvious to me. This doesn't amount to knowledge, though.
In theory it could be possible that there is an algorithm, even though no one has shown this. I doubt it, though. In any case, there would be some non-algorithmic way of arriving at this algorithm, which just shifts the mystery from humans to something else.
Actually it seems that you could describe everything that happens in terms of an algorithm. The simplest possibility is to just describe what happens in a string and let the algorithm output that. This doesn't imply, though, that this is what makes things happen. But even then, this algorithm still couldn't be derived from algorithms alone.
Sorry, it seems you are just presuming computationalism. The question is not "Why would it not be algorithmic?" but "Why would it be algorithmic?", considering that, as you say yourself, from your perspective no algorithm is visible.
The algorithm you wrote down is a nice metaphor, but not in any way an algorithm in the sense computer science means it. Since we are talking about AI, I am only referring to "algorithm" in the sense of a precisely formalizable procedure, as in computer science.
You didn't really respond to my argument. You just said: "It's all algorithmic, basta." The problem is that there is no algorithmic way to determine any algorithm, since if you try to find an algorithm for the algorithm you only have the bigger problem of determining that algorithm. The universe can't run solely on algorithms, unless you invoke "God did it! He created the first algorithm" or "The first algorithm just appeared randomly out of nowhere". I think this statement is ridiculous, but there is no refutation for dogma. If the universe were so absurd, I could just as well be a Christian fundamentalist or just randomly do nonsensical things (since it's all random either way).
No algorithm can determine the simple axioms of the natural numbers from anything weaker.
It is not clear that this means anything. You certainly have given no reasons to believe it.
What? The axioms of the natural numbers can't be determined, because they are axioms. If that's not true, derive "0 is a natural number" and "1 is the successor of 0" without any notion of numbers.
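For concreteness, the axioms I have in mind are essentially the standard Peano axioms (the usual textbook formulation, nothing specific to this thread); a minimal sketch:

```latex
% Standard Peano axioms for the natural numbers (textbook formulation, for reference).
\begin{align*}
&\text{(P1)}\quad 0 \in \mathbb{N} \\
&\text{(P2)}\quad \forall n \in \mathbb{N}:\ S(n) \in \mathbb{N} \\
&\text{(P3)}\quad \forall n \in \mathbb{N}:\ S(n) \neq 0 \\
&\text{(P4)}\quad \forall m, n \in \mathbb{N}:\ S(m) = S(n) \Rightarrow m = n \\
&\text{(P5)}\quad \big(\varphi(0) \wedge \forall n\,(\varphi(n) \Rightarrow \varphi(S(n)))\big) \Rightarrow \forall n\,\varphi(n)
\end{align*}
```

Every one of these already presupposes the notions of "zero" and "successor"; that is exactly the circularity I am pointing at.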
It means that there is no way that an AI could invent the natural numbers. Hence there are important inventions that AIs can't make - in principle.
There is simply no way to derive the axioms from anything that doesn't already include them.
I think you are confusing derivations within some formal system such as Peano arithmetic (where, indeed, the only way to get the axioms is to begin with them, or with some other axioms that imply them) and (a quite different sort of) derivations outside that formal system, such as whatever Peano did to arrive at his axioms. I know of no reason to believe that the latter is fundamentally non-algorithmic, though for sure we don't know what algorithms would be best.
Instead of asserting that, just try to find some way to derive the simplest axioms of arithmetic from something that is not more complex (deriving them from something more complex can't always work, since we only have a limited supply of more complex systems to start from). It doesn't work. The axioms of arithmetic are irreducibly simple - too simple to be derived.
I know of no reason to believe this, and it seems to me that if it seems true it's because what you imagine when you think about following rules is very simple rule-following, the sort of thing that might be done by a computer program at most a few pages in length running on a rather slow computer. In particular ..
Not at all! It doesn't matter how complex the rules are. You can't go beyond the axioms of the rules, because that is what makes the rules rules. Yet it is still easily possible to invent new axioms. This is essential for intelligence, yet an AI can't do it, since it only works by its axioms. It can do it on a meta-level, for sure, but that's not enough, since in that case the new axioms are just derived from the old ones. Well, or it uses user input, but in that case the program isn't a self-contained intelligence anymore.
since at the very least the rules can't be determined by rules
Whyever not? They have to be different rules, that's all.
And how are these rules determined? Either you have an infinite chain of rules, which itself can't be derived from any rule, or you start by picking out a rule without any rule.
Instead, we should expect a singularity that happens due to emergent intelligence.
"Emergence" is not magic.
Really? I think it is, though of course not in any anthropomorphic sense. What else could describe, for example, the emergence of the patterns out of cellular automata rules? It seems to me nature is inherently magical. We just have to be careful not to project our superstitious ideas of magic onto nature. Even materialists have to rely on magic at the most critical points. Look at the anthropic principle. Or at the question "Where do the laws of nature come from?". Either we deny that the question is meaningful or important, or we have to admit it is fundamentally mysterious and magical.
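To make the cellular-automaton example concrete, here is a minimal sketch (my own toy illustration, in Python) of elementary Rule 110: the entire update rule is an eight-entry lookup table, yet the evolved pattern is famously complex (Rule 110 is even Turing-complete).

```python
# Minimal sketch: elementary cellular automaton Rule 110.
# The whole "law" is the 8-entry lookup table below, yet the output
# pattern shows the kind of emergent structure discussed above.

RULE = 110
WIDTH = 64
STEPS = 32

# rule_bits[i] is the next state of a cell whose (left, self, right)
# neighbourhood encodes the integer i.
rule_bits = [(RULE >> i) & 1 for i in range(8)]

# Start from a single live cell at the right edge.
row = [0] * WIDTH
row[-1] = 1

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = [
        rule_bits[(row[(i - 1) % WIDTH] << 2)
                  | (row[i] << 1)
                  | row[(i + 1) % WIDTH]]
        for i in range(WIDTH)
    ]
```

Nothing in those eight table entries mentions the structures and interactions that show up in the evolved pattern; that is the sort of emergence I mean.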
You never quite state the alternative to an algorithm. I propose that the only alternative is randomness.
The alternative to algorithms is non-formalizable processes. Obviously I can't give a precise definition or example of one, since in that case we would have an algorithm again.
The best example I can give is the following: assume that the universe works precisely according to laws (I don't think so, but let's assume it). What determines the laws? Another law? If so, you get an infinite regress of laws, and you don't have a law that determines this regress, either. So according to you, the laws of the universe are random. I find this hardly plausible.
I'm curious as to what you think the human brain does, if not an algorithm.
I don't know, and I don't think it is knowable in a formalizable way. I consider intelligence to be irreducible. The order of the brain can only be seen in recognizing its order, not in reducing it to any formal principle.
In addition, I think the human brain is a lot more algorithmic than you think it is. A lot of Lukeprog's writings on neuroscience and psychology demonstrate ways in which our natural thoughts or intuitions are quite predictable.
I am not saying the human brain is entirely non-algorithmic. Indeed, since the laws of nature we have discovered are quite algorithmic (except for quantum indeterminateness), and the behaviour of the brain can't deviate from them to a very large degree (otherwise we would have noticed it already), we can assume the behaviour of our brains can be quite closely approximated by laws. Still, this doesn't mean there isn't a crucial non-lawful behaviour inherent to it.
The universe started with the laws of physics (which are known to be algorithms possibly with a random generator), and have run that single algorithm up to the present day.
How did the universe find that algorithm? Also, the fact that the behaviour of physics is nicely approximated by laws doesn't mean that these laws are absolute or unchanging.
What do you think about my proposed algorithm/random dichotomy?
Frankly, I see no reason at all to think it is valid.
You can even write an algorithm that goes through all possible algorithms (universal dovetailer). Yet this algorithm doesn't solve any problem. So you still have to select an algorithm to solve a problem.
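As a toy sketch of what I mean (my own illustration; a real universal dovetailer enumerates actual program codes, which I am eliding here), dovetailing just means interleaving an unbounded family of computations so that each of them eventually gets arbitrarily many steps:

```python
# Toy sketch of dovetailing: interleave an unbounded family of
# computations so that every one of them eventually gets arbitrarily
# many steps. "program(i)" is just a stand-in for the i-th program.

from itertools import count


def program(i):
    """Stand-in for the i-th program: an endless computation."""
    for step in count():
        yield f"program {i}, step {step}"


def dovetail(limit=15):
    """Classic diagonal schedule: at stage n, start program n,
    then advance every program started so far by one more step."""
    running = []
    emitted = 0
    for n in count():
        running.append(program(n))
        for g in running:
            print(next(g))
            emitted += 1
            if emitted >= limit:
                return


dovetail()
```

Every program eventually gets as many steps as you like, but nothing in this loop ever selects which of them is the one that solves your problem - which is exactly the point.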
How do you solve the problem of making yourself a cup of tea? You don't have any precise step-by-step method of doing so. How do you know that you exist, or that anything at all exists, for that matter? What kind of algorithm do you use for that?
What algorithm do you use to find the appropriate axioms to formalize what numbers are?
In all these cases you use methods that are non-algorithmic, or you don't use methods at all and still solve the problem (like "I just recognize that I exist").
I contest that the afterlife is a lie. I think one reason many people believe in an afterlife is that it actually makes sense, even though their picture of what it looks like is very unlikely to be accurate.
In my opinion it is simply a logical certainty that there is an "afterlife" (if one dies in the first place): I can't ever experience nothing in the present (even though I can say in retrospect "I experienced nothing", which just means I failed to have an experience with certain properties), so I will always experience something in the present. And experiencing is not a static thing that could 'stop' in the present - it requires change - thus I will always experience a future. What's the alternative?
Ceasing to exist is a 3-person concept; it can't happen to a subject. But we ARE subjects (notwithstanding our useful relative identity as a 3-person accessible thing, i.e. our current body), so we can't cease to exist in a final 1-person sense. Or at least we can't know what ceasing to exist means for us, any more than we can know what the world would be like if there were nothing. So there is no reason to be afraid of ceasing to exist, or to treat it like something that actually happens to us or to anyone else (though temporary death is most probably something we should worry about).
To frame it as questions: What could ceasing to exist mean for me? I care about my experience, but there isn't one in this case. The dead one isn't me; it is just a body that used to be my body. So why would I worry about a non-experience of something that isn't me? Why not instead solely consider experiences I could have (e.g. being revived after death)?
So what kind of afterlife awaits us? I think it's likely that intelligence at some point in some branch of the multiverse can run arbitrarily good simulations of our past/present/near future and thus will resurrect every being, with no violation of the laws of physics. Actually I can't think of an alternative that doesn't require some fundamental things about the world or about us to be very much unlike what science thinks they are (e.g. there is a spiritual plane we reincarnate from, or consciousness can exist eternally in random quantum fluctuations...).
I think the most practical / accurate way of conceiving of individuality is through the connection of your perceptions through memory. You are the same person as three years ago because you remember being that person (not only rationally, but on a deeper level of knowing and feeling that you were that person). Of course different persons will not share the memory of being the same person. So if we conceive of individuality in the way we actually experience individuality (which I think is most reasonable), there is not much sense in saying that many persons living right now are the same person, no matter how many memes they share. Even for an outside observer this is true, since people express enough of their memory to the outside world to make clear that their memories form distinct life stories. It may be true to say that many persons share a cultural identity or share a meme space, but this does not make them the same person, since they do not share their personal identity. So unless your AI is dumb and does not understand what individuality consists of, it won't say that there are only thousands of people.
It might be true, though, that at some point in the future some people who have different memories right now will merge into one entity and thus share the same memory (if the singularity happens, I think that is not that unlikely). Then we could say that different persons living right now might not ultimately be different persons, but they still are different persons right now.