[Link] Audio recording of Stephen Wolfram discussing AI and the Singularity

post by RaelwayScot · 2015-11-18T21:41:31.933Z · LW · GW · Legacy · 3 comments


Here is the link:

https://www.youtube.com/watch?v=TMviBl46dXg

3 comments

comment by MrMind · 2015-11-19T09:20:46.778Z · LW(p) · GW(p)

Any summary? What does he actually believe? At the moment I'm behind a content filter.

Replies from: gjm
comment by gjm · 2015-11-20T12:21:07.901Z · LW(p) · GW(p)

It's an answer to a question at a conference (I don't know in exactly what context). He was asked "Should we be worried about the Singularity?".

His answer is rather rambling and doesn't really answer the question he was asked. It goes something like this (I've reordered it a little to put related things together, but it's still more a collection of loosely-coupled thoughts than a coherent, well-structured anything):

  • The history of technology is one of automating more and more things that people do.
  • As AI improves, machines will get better at executing human goals.
    • Goals are "a very human thing". "In some theoretical sense" AIs can't have them.
      • I'm not sure which of several things he means by this. (1) That purpose is fundamentally beyond the capabilities of machines. This seems an unlikely thing for him to think. (2) That the particular kinds of mental structure we call "goals" are specific to how human beings happen to be built, and there's no reason why AIs should have them. This is a bit like Eliezer's position on human values. It might be true, but it seems to me like a very wide variety of AI-ish systems will have things enough like goals to make the use of that term appropriate. (3) That because we will be making the AIs with our goals in view, anything they do can be seen as serving those goals. This seems rather naive about principal/agent problems.
    • There's an interesting scenario in which machines kinda take over. Consider how people use GPS systems; they say where they want to go and then more or less just do what the machine tells them. They still have the option of doing something different, and they set the original goal, but mostly they're taking the machine's advice. So now suppose machines get really good at predicting and advising in every area of life (explicitly including, e.g., interpersonal relations). Then maybe almost everyone, almost all the time, will be just doing what a machine advises them to do, even though they get to set the goals and they aren't obliged to accept the machine's advice.
    • Suppose AI systems take over more and more of the work that humans currently do. What will humans actually do then? One answer may be that we set the machine's goals. But maybe actually we sit around playing video games all day. That sounds terrible, but if you showed our present-day lives to someone from 1000 years ago they might look just as frivolous as a life spent playing video games looks to us now.
  • If people are to set AIs' goals, they need an adequate way to describe those goals. So far, we have human natural languages (fuzzy and unreliable even for human-human communication) and formalized languages for computers (very limited in domain).
    • Wolfram has been thinking about how to formalize areas of discourse currently not well served by formalized languages. He says this has a long history but mostly way back in the past (e.g., "philosophical languages" in the 1600s); I bet actually there's plenty in more recent AI research, but never mind.
    • Languages with the precision of programming languages and the scope and expressivity of natural languages would be an advance comparable to the original appearance of natural languages, in terms of the communication and cooperation they enable.
  • Wolfram used to think there was a clear-cut distinction between computation and intelligence, but no longer does; he would now say that beyond a certain threshold all systems are computationally equivalent (his Principle of Computational Equivalence), including e.g. human brains and the earth's weather.
  • In various areas where computers used to be much, much worse than people (e.g., image recognition) they are now comparable in ability to us (note: I'm not sure most people actually working on image recognition would say that, rather than that existing image-recognition tasks don't probe deeply enough; see e.g. "Suddenly, a leopard print sofa appears", which has been discussed on LW a couple of times). They will surely get better; so what happens when they are 100x better at these tasks than we are?
  • Consider what happens inside, say, an image-recognizing system. We have words for describing what its lowest-level bits do; maybe they recognize "horizontal stripes" or something. And we may have words for describing what its highest-level bits do; maybe they recognize "cats". But for the middle levels we typically don't have words (or concepts). Some of them might be really useful if we had them. (He calls them "post-linguistic emergent concepts".) See the sketch after this list for a concrete illustration.
    • Fancier AI systems may communicate with one another in terms of these, so we may simply be unable to understand their communications. We certainly aren't likely to understand their inner workings. But that shouldn't be a big surprise; we already very often don't understand the inner workings of things in nature.
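
To make the "middle levels" point concrete, here is a minimal sketch (my illustration, not anything Wolfram showed) of pulling out low- and middle-level activations from a pretrained image recognizer. The choice of model (torchvision's ResNet-18), of which layers count as "low" and "middle", and of the dummy input are all my assumptions:

```python
# A minimal sketch (my illustration, not from Wolfram's talk): peek at the
# low-level and middle-level activations of a pretrained image recognizer.
# Model choice (torchvision's ResNet-18) and layer choices are assumptions.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations = {}

def capture(name):
    # Forward hook: stash this layer's output under the given name.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Early layers tend to respond to nameable things like oriented stripes;
# middle layers respond to features we mostly have no words for.
model.layer1.register_forward_hook(capture("low-level"))
model.layer3.register_forward_hook(capture("middle-level"))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))  # stand-in for a real image

for name, act in activations.items():
    print(name, tuple(act.shape))
# prints: low-level (1, 64, 56, 56), middle-level (1, 256, 14, 14)
```

Techniques like activation maximization are then typically used to see what, if anything, those hundreds of middle-level channels respond to; mostly it turns out to be things we have no word for, which is the point above.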

For the avoidance of doubt: all of the above is my attempt to summarize what Wolfram said; I shouldn't be assumed to agree with any of it (other than a couple of interpolations of my own that I hope are immediately recognizable as such). And since I'm very capable of mistakes, Wolfram also shouldn't be assumed to agree with any of it, though I hope I've got it right.

[EDITED to add:] That his answer was rambling is no particular criticism of Wolfram. That's generally what you get when you ask someone about a big topic and they haven't already put together a structured answer to your question, even if they're very smart and have thought a lot about the topic.

Replies from: MrMind
comment by MrMind · 2015-11-23T08:04:05.782Z · LW(p) · GW(p)

Very useful summary, thank you very much.