"Ideals are like stars". All Schurz is doing is defining, yet again, desire. Desire is metonymic by definition, and I think it is one of the most important evolutionary traits of the human mind. This permanent dissatisfaction of the mind must originally have proven very useful in going after more game than we could consume, and it is still useful in scientific pursuits. How would AI find its ideals? What would be the origin of the desire that would make an AI spend energy on finding something as utterly useless as general knowledge? If AI evolves, it will focus on energy problems (how to think more, and faster, with lower energy consumption), and it may find interesting answers, but only in that practical area. If you don't solve the problem of AI desire (and this is the path to solving Friendliness), AI will evolve very fast in a single direction and will very quickly reach the limits of its own "evolutionary destiny". I still think the way to go is to replace biological mass with replaceable material in humans, not the other way around.
You have it backwards. The message is not the data they send, but the medium they use for sending it. When the combined brainpower of Earth turns to analyzing the message, the first inquiry shouldn't be what pattern the data form, but how you can form a pattern across millions of light years. At that moment you drop any hypotheses that negate that possibility and focus only on those that are corroborated. You use the combined brainpower of Earth: individuals and small groups of scientists work on all the hypotheses they can imagine. The only important thing is that they work in parallel, creating as many hypotheses as possible. As you falsify hypotheses, you arrive at a better description of the universe. A small group of empirical scientists keeps track of the message for millennia, while the rest of humanity moves into a new paradigm. Within one generation you find a practical use for the new theoretical physics, invade the alien species' realm, and create a new kind of Spam out of their flesh.

My point: you don't need data to derive laws; you only need it to falsify the laws you imagined. A Bayesian superintelligence is forced to derive laws from the observable world, but it will never have a breakthrough; we have the luxury of imagining laws and simply waiting for falsification. I am not sure we think of theories, as you say. Although we don't yet understand how we imagine them, my guess is that the breakthrough process is some form of parallel computing that starts with infinite possibilities and narrows them through falsification until it arrives at an "idea", which then needs to go through a similar process in the outside world.
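The generate-then-falsify loop described above can be sketched as a toy program: propose many candidate "laws" in parallel, then discard any law that a new observation contradicts, keeping whatever survives. Everything here (the candidate laws, the observations, the names) is an illustrative assumption of mine, not anything from the original comment.

```python
# Toy sketch of "imagine laws in parallel, then falsify" (illustrative only).

# Candidate "laws" proposed in parallel: each maps an input to a prediction.
candidate_laws = {
    "linear": lambda x: 2 * x,
    "quadratic": lambda x: x ** 2,
    "constant": lambda x: 4,
}

# A trickle of observations from the outside world, as (input, outcome) pairs.
observations = [(1, 1), (2, 4), (3, 9)]

def falsify(laws, data):
    """Discard any law contradicted by at least one observation."""
    survivors = dict(laws)
    for x, y in data:
        survivors = {name: f for name, f in survivors.items() if f(x) == y}
    return survivors

surviving = falsify(candidate_laws, observations)
print(sorted(surviving))  # only "quadratic" fits every observation
```

The data never generated the surviving law; it only eliminated the imagined ones that failed, which is the asymmetry the comment is pointing at.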