"Singularity or Bust" full documentary
post by Torello · 2013-11-01T20:48:13.857Z · LW · GW · Legacy · 9 comments
http://www.3quarksdaily.com/3quarksdaily/2013/11/singularity-or-bust-.html
I've never heard of this before, and have only watched 7 minutes so far, but I'd imagine many people here would be interested in this video.
9 comments
comment by MathiasZaman · 2013-11-02T15:32:47.678Z · LW(p) · GW(p)
Here's a summary of the video, written while watching it.
Summary
- A positive singularity is possible in about 10 years
- Three ways towards the singularity: AI, nanotechnology, and brain-computer interfaces
- The person we're following (Ben Goertzel) works with machine consciousness
- Doesn't follow a materialistic outlook on consciousness. He sees consciousness as the ground on which things are formed, and different structures manifest consciousness in different ways. (For example, a human brain manifests consciousness in one way and a coin manifests it in another.)
- Introduction of the second person we're following (Hugo de Garis), who works with Goertzel on the Conscious Robot Project.
- Apparently, there's a province in China that pours large amounts of money into computer science and software.
- de Garis is trying to tap into that money in order to kickstart machine intelligence and robotics in China and maybe the world.
- de Garis sees China as the next big culture, while America is growing stagnant and "fat"
- Goertzel sees artificial scientists as the big step towards a singularity
- 20 minutes in and we see their robot. It falls over when told to go right.
- Goertzel says building a "thinking machine" shouldn't be harder than creating the Google search engine.
- The mind as pattern-recognition engine
- More robot. It walks a bit wobbly. They talk to the robot a little bit.
- The robot does a couple of dance moves and kicks a ball.
- Goertzel's motivation to do this research is lessening suffering, removing death, removing limitations of the human body.
- A brief explanation of how AI wouldn't have to be a copy of a human mind.
- Explanation of how a superhuman mind would lead to a rapid increase in technology and how humans won't be in control anymore.
- De Garis is more pessimistic than Goertzel. Foresees a great debate as the gap between human and machine intelligences closes.
- De Garis sees the worst scenario as the most probable. He foresees a violent conflict between humans who want super-powerful AIs (he calls the AIs artilects, and the humans who want them cosmists) and those who don't want such a thing (whom he calls terrans). He says he's glad he'll be dead before this global war happens.
- Goertzel thinks we can accurately predict the events leading up to a singularity, but not anything beyond that.
- The reason Goertzel works in AI is because it feels natural to him.
- The goal for Goertzel is going beyond the human condition.
- Before Goertzel had kids, he wanted to be immortal, but now that he has kids, that drive has lessened.
- White text on a black background tells us de Garis's lab was dismantled and the Chinese government is reconsidering the profitability of the home robot.
- Goertzel continues working on robots.
After seeing the documentary, I'm a bit at a loss as to who the target audience is. For people who are new to the concept of AI or the singularity, this probably isn't the best way to learn more, and for people who are already familiar with those concepts, this documentary doesn't offer a lot of new insights. I also don't think it stands strongly enough as a human-interest piece.
The bits with the robot were kinda fun, but they didn't provide a lot of information. The robot didn't look like much more than a Cleverbot with legs, and since we're not told anything about the robot, I don't have a particular reason to assume it isn't just that.
↑ comment by [deleted] · 2013-11-02T16:31:53.107Z · LW(p) · GW(p)
This is well known on LW, but I just want to add that Goertzel doesn't believe in the FAI research MIRI does and considers it meaningless daydreaming. He doesn't think a provably friendly AI is realistic, and thinks a better approach is to pursue practical methods of making AI more friendly.
↑ comment by MrMind · 2013-11-04T13:41:31.731Z · LW(p) · GW(p)
Well, if the summary is correct, he also thinks that a coin has consciousness, which is far worse than not endorsing MIRI's research program, and far more concerning in a person who works on AI.
↑ comment by [deleted] · 2013-11-04T18:00:08.360Z · LW(p) · GW(p)
Argument from incredulity? Really?
It's a genuine philosophical position, and one with a long tradition in the reductionist philosophy of mind. If you don't believe that a coin (or to use the more typical example, a thermostat) can have epsilon consciousness, then where do you draw the line? What must a conscious system have such that, if you took it away, the system would suddenly become inert and lose its subjective experience of existence?
Positing that all things - all! - are conscious to varying degrees dependent on their informational complexity and structural pattern is an attempt to dissolve the mysterious question of what grants us awareness in the first place.
↑ comment by MrMind · 2013-11-05T08:54:59.154Z · LW(p) · GW(p)
Argument from incredulity? Really?
No, not really. I don't claim to have refuted Goertzel merely because I cannot believe his position. See below.
It's a genuine philosophical position, and one with a long tradition in the reductionist philosophy of mind.
Yeah, but that doesn't make it any less silly.
If you don't believe that a coin (or to use the more typical example, a thermostat) can have epsilon consciousness, then where do you draw the line? What must a conscious system have such that, if you took it away, the system would suddenly become inert and lose its subjective experience of existence?
Well, I cannot describe exactly how a system must be configured for us to say it is conscious, but I can certainly draw a line at having a cognitive system (that is, a system for perceiving, elaborating and storing information). Mind you, that is a necessary condition, not a sufficient one.
If you take the brain out of a human, it becomes a thing, like a rock or a thermostat.
Positing that all things - all! - are conscious to varying degrees dependent on their informational complexity and structural pattern is an attempt to dissolve the mysterious question of what grants us awareness in the first place.
If you conflate consciousness with informational complexity, then you can do away with the term and just call it complexity. That doesn't make consciousness any less mysterious; it just sweeps the problem of what differentiates a human from a coin under the rug.
↑ comment by [deleted] · 2013-11-05T18:27:50.454Z · LW(p) · GW(p)
Well, I cannot describe exactly how a system must be configured for us to say it is conscious, but I can certainly draw a line at having a cognitive system (that is, a system for perceiving, elaborating and storing information).
A thermostat has these properties. So does a coin or a rock, if you look at them from the perspective of vibrational modes, electric potentials and eddy currents, stored potential energy, and spatial configuration with respect to, say, a gravitational field. It's a bit more of a stretch than the thermostat (we tend to mentally abstract the coin or rock away as something with no internal structure), but I hope you can see the argument.
Saying “necessary but not sufficient” is an escape technique. “A system for perceiving, elaborating and storing information” could be stretched to cover every interaction of two or more particles. So these properties are unrelated to the issue and there is some other, specific point at which you wish to draw the line and say “all things which have this property are conscious, those which do not are not.” What is that point?
If you don't know what that cutoff is then consider, at least hypothetically, the possibility that there is no dividing line. Then there is a continuum of conscious experience from the ϵ-consciousness of two interacting particles to the jumbled mess of interactions that is our brain (spanning orders of magnitude in difference that maybe only astronomers are used to dealing with).
If you conflate consciousness with informational complexity, then you can do away with the term and just call it complexity. That doesn't make consciousness any less mysterious; it just sweeps the problem of what differentiates a human from a coin under the rug.
I'm not, and you've misunderstood my point. I think Goertzel and I are on the same page that consciousness is an algorithm in motion. Even thermostats, coins, and rocks run algorithms (yes, even rocks, where the steady-state quantum interactions of their composite particles form an algorithm governing responses to, say, vibrational events). So even a rock or coin has epsilon consciousness.
This is my position which I think Goertzel shares. It's at least in line with what he said in the documentary about coins having minute, but non-zero consciousness.
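The thermostat case above can be made concrete. A minimal sketch (purely illustrative, not anything from the documentary or the comments): a thermostat "perceives" a temperature reading, "elaborates" it against a setpoint, and "stores" its on/off state between readings, which is exactly the cognitive-system triad the earlier comment drew its line at.

```python
class Thermostat:
    """A minimal control loop: perceive (read temperature), elaborate
    (compare to setpoint), store (the heating on/off state)."""

    def __init__(self, setpoint, hysteresis=0.5):
        self.setpoint = setpoint
        self.hysteresis = hysteresis
        self.heating = False  # stored internal state

    def step(self, temperature):
        # The hysteresis band prevents rapid switching near the setpoint;
        # inside the band, the stored state persists (a trivial "memory").
        if temperature < self.setpoint - self.hysteresis:
            self.heating = True
        elif temperature > self.setpoint + self.hysteresis:
            self.heating = False
        return self.heating

t = Thermostat(setpoint=20.0)
print(t.step(18.0))  # True: too cold, heating turns on
print(t.step(20.2))  # True: inside the band, prior state persists
print(t.step(21.0))  # False: warm enough, heating turns off
```

Whether this loop has "epsilon consciousness" is of course the whole dispute; the sketch only shows that perceiving, elaborating, and storing are cheap properties to have.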
↑ comment by [deleted] · 2014-06-13T10:06:18.256Z · LW(p) · GW(p)
That conception certainly makes the AGI problem really easy. I'll solve it right now: get a machine vision/olfaction etc. system that recognizes structural complexity classes, does computational-complexity efficiency optimisation, uses the structural complexity classes for activity detection, and once it has activity detection, model game complexity and you're done - it can now process the entire computable universe, including people.
Further assumptions:
…"Problem solving" is largely, perhaps entirely, a matter of appropriate selection. Take, for instance, any popular book of problems and puzzles. Almost every one can be reduced to the form: out of a certain set, indicate one element. … It is, in fact, difficult to think of a problem, either playful or serious, that does not ultimately require an appropriate selection as necessary and sufficient for its solution.
It is also clear that many of the tests used for measuring "intelligence" are scored essentially according to the candidate's power of appropriate selection. … Thus it is not impossible that what is commonly referred to as "intellectual power" may be equivalent to "power of appropriate selection". Indeed, if a talking Black Box were to show high power of appropriate selection in such matters — so that, when given difficult problems it persistently gave correct answers — we could hardly deny that it was showing the 'behavioral' equivalent of "high intelligence".
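The quoted "appropriate selection" framing can be sketched directly: a problem is solved by indicating, out of a candidate set, the element(s) that satisfy the problem's constraints. A toy illustration of my own (not from the quoted text):

```python
def appropriate_selection(candidates, satisfies):
    """Solve a problem by selecting, out of a candidate set, the
    elements that meet the problem's constraints."""
    return [c for c in candidates if satisfies(c)]

# Toy puzzle: which two-digit number equals the square of the sum
# of its digits?
solutions = appropriate_selection(
    range(10, 100),
    lambda n: n == (n // 10 + n % 10) ** 2,
)
print(solutions)  # [81], since 8 + 1 = 9 and 9**2 = 81
```

The "intelligence" question is then whether the selection is done by exhaustive search, as here, or by something cleverer that scales to sets too large to enumerate.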
↑ comment by Emile · 2013-11-02T15:55:12.578Z · LW(p) · GW(p)
(you need to put an empty line before the first item of your list for it to be displayed correctly)
↑ comment by MathiasZaman · 2013-11-02T16:04:44.445Z · LW(p) · GW(p)
Ok. Done. Thanks.