Should an AGI build a telescope to spot intergalactic Segways?

post by Michaël Trazzi (mtrazzi) · 2018-04-28T21:55:15.664Z · score: 14 (4 votes) · LW · GW · 6 comments

Contents

  Algorithm-recalcitrance
  Hardware-recalcitrance
  Content-recalcitrance
    The impossibility of Segway deduction
6 comments

Inspired by Yudkowsky's original sequences, I am starting today (28/04/2018) my own series of daily articles.

This first article is a summary of ideas expressed at an AI Safety Meetup I organized in Paris about two weeks ago. The theme of the discussion was "The Kinetics of an Intelligence Explosion", and the material was, of course, the chapter of Superintelligence with the same title.

We essentially discussed recalcitrance, and in particular the three factors of recalcitrance mentioned in Superintelligence (Chapter 4): algorithms, hardware, and content.

Algorithm-recalcitrance

Last year I read Le Mythe de la Singularité (The Myth of the Singularity) by Jean-Gabriel Ganascia (who happened to be my teacher for a course on knowledge representation a few months ago). In his book he expresses some elementary thoughts about the limits of pure hardware improvements without improvements in algorithms (here is the LessWrong wiki about it).

At the same time as this class on knowledge representation, I was also taking a course on algorithmic complexity. We obviously discussed the P vs NP problem, but I also discovered a bunch of other complexity classes (for instance, ZPP, RP and BPP are complexity classes for probabilistic Turing machines, and PSPACE allows a polynomial amount of space).

The question is (obviously) whether some problems are intractable (for example NP-complete problems, assuming P is not equal to NP), in which case algorithm-recalcitrance would be high, or whether this does not matter at all because every hard problem admits a tractable approximation algorithm.
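
To make the second horn concrete, here is a minimal Python sketch (my own illustration, not something from the Meetup discussion) of a classic polynomial-time approximation for an NP-hard problem: the greedy 2-approximation for minimum vertex cover, which returns a cover at most twice the optimal size.

```python
def approx_vertex_cover(edges):
    """Greedy 2-approximation for minimum vertex cover (NP-hard to solve exactly).

    Repeatedly pick an uncovered edge and add both of its endpoints.
    The resulting cover is at most twice the optimal size, and the whole
    procedure runs in time linear in the number of edges.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover


# A 4-cycle: the optimal cover has 2 vertices, the approximation returns 4.
print(approx_vertex_cover([(1, 2), (2, 3), (3, 4), (4, 1)]))
```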

This reminds me of a YouTube comment I saw a few weeks ago on a Robert Miles video (which dealt with superintelligence, one way or another). The comment was (approximately) as follows: "But aren't there some problems that are impossible to optimize, even for a Superintelligence? Aren't sorting algorithms doomed to run in O(n log n) time?"

A nice counter-argument to this comment (thank you, YouTube comment section) was that the answer depends on how the problem is formulated. What are the hypotheses on the data structure of the incoming input? Aren't there ways to maintain a nice data structure at all times inside the Superintelligence's hardware?
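
As a toy illustration of that point (mine, not the commenter's): the Ω(n log n) lower bound only holds for comparison-based sorting on arbitrary inputs; if we know the inputs are small integers, counting sort runs in linear time.

```python
def counting_sort(values, max_value):
    """Sort non-negative integers bounded by max_value in O(n + max_value) time.

    The classic Omega(n log n) lower bound only applies to algorithms that
    sort by comparing elements; exploiting a hypothesis about the input
    (here, that values are small integers) sidesteps it entirely.
    """
    counts = [0] * (max_value + 1)
    for v in values:
        counts[v] += 1
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)
    return result


print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], max_value=9))
# [1, 1, 2, 3, 4, 5, 6, 9]
```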

Another interesting counter-argument is the Fast Inverse Square Root example. Although some problems seem computationally expensive, clever hacks (e.g. introducing ad hoc mathematical constants which fit in memory) can make them much faster.
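
For reference, here is a Python transcription of that trick (the original is C code from Quake III Arena); the "ad hoc mathematical constant" is 0x5F3759DF.

```python
import struct


def fast_inverse_sqrt(x: float) -> float:
    """Approximate 1/sqrt(x) with the famous bit-level hack.

    The float's bits are reinterpreted as a 32-bit integer, halved, and
    subtracted from a magic constant; one Newton-Raphson step then refines
    the guess. Historically this was much cheaper than a division plus a
    square root.
    """
    i = struct.unpack("<I", struct.pack("<f", x))[0]  # bits of x as uint32
    i = 0x5F3759DF - (i >> 1)                         # magic initial guess
    y = struct.unpack("<f", struct.pack("<I", i))[0]  # reinterpret as float
    return y * (1.5 - 0.5 * x * y * y)                # one Newton iteration


print(fast_inverse_sqrt(4.0))  # ~0.499, within about 0.2% of the exact 0.5
```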

But for some problems an approximate solution is not acceptable, and this might be a problem even for a Superintelligence. For instance, inverting the SHA-256 hash function (assuming the one-way hypothesis holds).
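
A toy sketch of why (my own, using Python's standard hashlib): under the one-way assumption, the only generic way to invert SHA-256 is brute-force search over candidate inputs, and the search space dwarfs any physically available amount of computation.

```python
import hashlib

# Hypothetical target: the hash of a deliberately tiny (8-byte) secret.
TARGET = hashlib.sha256((42).to_bytes(8, "big")).hexdigest()


def brute_force_preimage(target_hex, num_bytes=8, max_tries=10**6):
    """Try candidate inputs until one hashes to the target, or give up.

    Even restricted to 8-byte inputs the space has 2**64 candidates; over
    the arbitrary-length inputs SHA-256 accepts, exhaustive search is
    hopeless if the one-way assumption holds.
    """
    for i in range(max_tries):
        candidate = i.to_bytes(num_bytes, "big")
        if hashlib.sha256(candidate).hexdigest() == target_hex:
            return candidate
    return None


print(brute_force_preimage(TARGET))  # succeeds only because the secret is tiny
```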

Hardware-recalcitrance

Physical limits are self-evident restrictions on hardware improvements. Straightforward constraints are limits on speed (the speed of light) and on computational power (the universe might be finite). Limits may also appear at the infinitely small scale, because of the Planck length.

Content-recalcitrance

With the modern Deep Learning paradigm, one could think that more content (i.e. more data) is the solution to all problems.

Here are two counter-arguments (or factors of content-recalcitrance if you want):

1. More content does not necessarily imply an increase in the algorithm's performance
2. Some content (data) might prove particularly difficult to obtain, and a "perception winter" may arise

Since the first point is developed at length in Superintelligence, I will focus on the second one (and thereby explain the title of this post a bit more).

The impossibility of Segway deduction

Imagine tremendous progress in Artificial Intelligence: one-shot learning works, and algorithms now need only very small amounts of data to generalize knowledge.

Would an AGI be capable of imagining the existence of Segways (a human invention) if it had never seen one before?

I believe it would not be capable of doing so.

And I think that for some physical properties of the universe, the only way to get the data is to go there, or to build a telescope to spot some "intergalactic Segways" wandering around.

You could argue that exploring the universe takes a goddamn long time and that a Superintelligence might just as well generate thousands of simulations to gather data about what might exist at the edge of the universe.

But to generate those so-called simulations you need laws of physics, and some prior hypotheses about the universe.

And to obtain them, the only way is to explore the universe (or just build the fu***** telescope).

6 comments

Comments sorted by top scores.

comment by Gyrodiot · 2018-04-29T09:06:29.679Z · score: 7 (2 votes) · LW · GW

Thanks for your post. Your argumentation is well-written and clear (to me).

I am confused by the title, and the conclusion. You argue that a Segway is a strange concept that an ASI may not be capable of reaching by itself through exploration. I agree that the space of possible concepts that the ASI can understand is far greater than the space of concepts that the ASI will compute/simulate/instantiate.

However, you compare this to one-shot learning. If an ASI sees a Segway, a single time, would it be able to infer what it does, what it's for, how to build it, etc.? I think so! The purpose of one-shot learning models is to provide a context, a structure, that can be augmented with a new concept based on a single example. This is far simpler than coming up with said new concept from scratch.

See, on efficient use of sensory data, That Alien Message [LW · GW].

I interpret your post as « no, an ASI shouldn't build the telescope, because it's a waste of resources and it wouldn't even need it » but I'm not sure this was the message you wanted to send.

comment by Michaël Trazzi (mtrazzi) · 2018-04-29T11:31:50.111Z · score: 2 (1 votes) · LW · GW

Thank you for your well-formulated comment. I agree that more details/precision would be much appreciated.

I am confused by the title, and the conclusion.

Not understanding the title and the conclusion is a natural/expected reaction. I had wanted to write this Meetup summary for a long time and only thought of this funny headline as a title, and I guess the conclusion might seem like a weird way to land back on one's feet. I was also short on time, so I had to be overly implicit. I will nonetheless try to answer your comment as best as I can.

If an ASI sees a Segway, a single time, would it be able to infer what it does, what it's for, how to build it, etc.? I think so! The purpose of one-shot learning models is to provide a context, a structure, that can be augmented with a new concept based on a single example. This is far simpler than coming up with said new concept from scratch.

I also think so! I totally agree that providing a structure/context is much simpler than truly innovating by creating a completely new idea (as general relativity was for Einstein).

See, on efficient use of sensory data, That Alien Message [LW · GW].

Totally relevant reference, thank you.

I interpret your post as « no, an ASI shouldn't build the telescope, because it's a waste of resources and it wouldn't even need it » but I'm not sure this was the message you wanted to send.

I think I was not clear enough about the message. Thank you for asking for clarifications.

Actually, I believe the ASI should build the telescope (and it might not even be a waste of resources if it knows physics well enough to optimize it in a smart way).

The Segway is not, in itself, a complicated engineering product. An ASI could, in principle, generalize the concept of a Segway from seeing it only once (as you mentioned) and understand the use humans would make of it (if it had some prior knowledge about humans, of course).

What I meant by "Intergalactic Segway" is an ad hoc engineering product made by some strange intergalactic empire we have never met. Segways seem really convenient for humans, but only because they fit our biological bodies, which are very specific and shaped by natural selection (which, in turn, was shaped by planet Earth).

I believe aliens might have different needs and engineering features, that they would end up building "Intergalactic Segways" to suit their needs, and that we would not have a single clue about what those "Intergalactic Segways" even look like.

Furthermore, even if it were more resource-efficient for the ASI to generate 10^30 simulations of the Universe to learn how other aliens behave, I think that would not be enough.

I think the search space for alien civilizations (if we assume that human-level-intelligence civilizations are rare in the universe) is huge, that running sufficiently precise physical simulations over this incredibly huge space would prove impossible, and that building a telescope (or just sending von Neumann probes to the edges of the observable universe) would be the only efficient solution.

This is all I have to say for now (I have not thought about it any further).

If you have more criticisms/questions, I would be happy to discuss them further.

comment by Gyrodiot · 2018-05-04T09:03:14.798Z · score: 6 (3 votes) · LW · GW

Thanks for your clarification. Even though we can't rederive Intergalactic Segways from unknown strange aliens, could we derive information about those same strange aliens by looking at the Segways? I'm reminded of some SF stories about this, and of our own work figuring out prehistorical technology...

comment by Michaël Trazzi (mtrazzi) · 2018-05-04T12:43:48.326Z · score: 2 (1 votes) · LW · GW

Interesting question! I don't have any clue. Maybe you could answer your own question, or give more information about those stories or your work on prehistorical technology?

comment by Donald Hobson (donald-hobson) · 2018-04-29T11:38:43.488Z · score: 1 (1 votes) · LW · GW

There exist some maths problems that even an ASI can't solve, because they require more computation than fits in the universe. To prove this, consider the set of all programs that take in an arbitrary Turing machine and return "halt", "no halt" or "unsure". Rule out all the programs that are ever wrong. Rule out all the programs that require more computation than fits in the universe. Now consider a program that takes in a Turing machine and applies all the remaining programs to it. If any of them return "halt" then you have worked out that it halts in finite time. If any return "no halt" then you know it does not halt. As the halting problem can't be solved, the combined program must sometimes return "unsure". That is, there must exist instances of the halting problem that no program that fits in the universe can solve. (Assuming the universe contains a finite amount of computation.)
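
A slightly more formal sketch of that argument (my rephrasing, under the stated assumption of a finite computational budget for the universe):

```latex
Let $\mathcal{P} = \{P_1, \dots, P_N\}$ be the finite set of programs that
(i) run within the universe's computational budget, (ii) on any Turing
machine $M$ output one of $\{\text{halt},\ \text{no-halt},\ \text{unsure}\}$,
and (iii) are never wrong when they give a definite answer. Define $D(M)$:
run every $P_i$ on $M$ and return the first definite answer, or
$\text{unsure}$ if there is none. If some $P_i$ gave a definite answer for
every $M$, then $D$ would decide the halting problem, contradicting Turing's
theorem. Hence there is an $M^{*}$ on which every $P_i$ (and therefore every
program that fits in the universe) must answer $\text{unsure}$.
```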

These problems aren't actually that important to the real world. They are abstract mathematical limitations that wouldn't stop the AI from achieving a decisive strategic advantage. There are limits, but they aren't very limiting.

The AI needs at least some data to deduce facts about the world. This is also not very limiting. Will it need to build huge pieces of physics equipment to work out how the universe works, or will it figure it out from the data we have already gathered? Could it figure out string theory from a copy of Kepler's notes? We just don't know. It depends on whether there are several different theories that would produce similar results.

comment by Dacyn · 2018-04-29T12:38:33.184Z · score: 5 (2 votes) · LW · GW

Your example seems a bit weird to me, because the amount of computation a program requires depends on its input. There are some inputs (in fact all but finitely many of them) such that no program can read the input using all the computing power in the universe. So trivially there are instances of the halting problem that no program in the universe can solve (because such a program cannot even read the input).

Also, I don't think the definition of "solve" is precise enough for the mathematical-flavor reasoning you seem to be trying to do here. An AI could flip a coin to answer all yes/no questions; does this count as "solving" the ones it gets right? If so, it seems that there's no yes/no problem that the AI couldn't solve (if it got lucky).

Incidentally, I think there are plenty of simple math problems that an AI wouldn't be able to solve. For example I think an AI probably wouldn't be able to give an answer to the Collatz conjecture that's any more satisfying than the one we already have (namely, that there is a heuristic argument that it is probably true, but a small chance that it might be wrong and no way to tell). Such problems might or might not be relevant to the AI's strategic interests.

Finally, some math problems can't be solved even with an infinite Turing machine!