Tegmark's talk at Oxford
post by Stuart_Armstrong · 2013-06-12T11:49:09.803Z · LW · GW · Legacy
Max Tegmark, from the Massachusetts Institute of Technology and the Foundational Questions Institute (FQXi), presents a cosmic perspective on the future of life, covering our increasing scientific knowledge, the cosmic background radiation, the ultimate fate of the universe, and what we need to do to ensure the human race's survival and flourishing in the short and long term. He strongly emphasizes the importance of x-risk reduction.
10 comments
Comments sorted by top scores.
comment by Kawoomba · 2013-06-12T13:49:01.797Z · LW(p) · GW(p)
I'll say a little bit about the single one [x-risk] on this list that I worry about the most, which is (...) Unfriendly Artificial Intelligence. (...) One thing I think is clear: That we really don't know [about the impact of foom-able AI]. (...) My feeling is if we really don't know, probably we should put at least a little bit of thought into thinking about it (...) with a few awesome exceptions [points out the FHI] there is way too little attention given to this.
A very nice touch comes a bit later on, when he says he worries about this also as a father, which reinforces the point that x-risk isn't just something academic, but something with an actual, real impact on his actual family's actual well-being. It's easy to banish x-risk discussions to some academic sphere of armchair theorycrafting, and not realize that if e.g. the planet explodes, that encompasses your house as well. Even your comfy chair!
↑ comment by [deleted] · 2013-06-12T15:45:01.832Z · LW(p) · GW(p)
Visions of it swam sickeningly through his nauseated mind. There was no way his imagination could feel the impact of the whole Earth having gone, it was too big. He prodded his feelings by thinking that his parents and his sister had gone. No reaction. He thought of all the people he had been close to. No reaction. Then he thought of a complete stranger he had been standing behind in the queue at the supermarket before and felt a sudden stab - the supermarket was gone, everything in it was gone. Nelson's Column had gone! Nelson's Column had gone and there would be no outcry, because there was no one left to make an outcry. From now on Nelson's Column only existed in his mind. England only existed in his mind - his mind, stuck here in this dank smelly steel-lined spaceship. A wave of claustrophobia closed in on him.
England no longer existed. He'd got that - somehow he'd got it. He tried again. America, he thought, has gone. He couldn't grasp it. He decided to start smaller again. New York has gone. No reaction. He'd never seriously believed it existed anyway. The dollar, he thought, had sunk for ever. Slight tremor there. Every Bogart movie has been wiped, he said to himself, and that gave him a nasty knock. McDonald's, he thought. There is no longer any such thing as a McDonald's hamburger.
He passed out.
↑ comment by Stabilizer · 2013-06-14T05:11:04.131Z · LW(p) · GW(p)
It's from 'The Hitchhiker's Guide to the Galaxy'. There, I saved you a google.
comment by biased_tracer · 2013-06-20T22:03:38.051Z · LW(p) · GW(p)
I'm a bit confused about the prior he uses to assign probabilities to the distance of the nearest extraterrestrial life. Although I agree that a logarithmically flat prior is a good idea for this problem, it is important to acknowledge that it is biased towards the unconstrained large scales. Since there is a minimum length scale by construction (the size of the Earth or so), it would seem fairer if he imposed a large-scale cutoff as well (say, at the radius of the observable Universe). That way we could no longer claim that extraterrestrial life is most likely to be found beyond the edge of our observable Universe, but we could possibly still rule out our own galaxy.
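For concreteness, here is a minimal sketch of the truncated log-flat prior I have in mind; the length scales below are rough, illustrative values (roughly the size of the Earth, the Milky Way, and the observable Universe), not numbers taken from the talk:

```python
import math

def log_uniform_cdf(r, r_min, r_max):
    """P(nearest extraterrestrial life lies within distance r) under a
    log-flat prior on the distance, truncated to [r_min, r_max]."""
    r = min(max(r, r_min), r_max)
    return math.log(r / r_min) / math.log(r_max / r_min)

# Rough, illustrative length scales in metres (my assumptions, not the talk's numbers):
R_EARTH    = 1e7   # small-scale cutoff: roughly the size of the Earth
R_GALAXY   = 1e21  # roughly the extent of the Milky Way
R_OBS_UNIV = 1e26  # large-scale cutoff: roughly the radius of the observable Universe

# The conclusion depends heavily on where the large-scale cutoff is placed:
print(log_uniform_cdf(R_GALAXY, R_EARTH, R_OBS_UNIV))  # ~0.74 with these cutoffs
print(log_uniform_cdf(R_GALAXY, R_EARTH, 1e40))        # ~0.42: shrinks as r_max grows
```

Whatever numbers one plugs in, the point stands that the choice of cutoffs, rather than any data, drives where the prior puts most of its mass.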
Aside from that, an excellent (and entertaining) talk by Tegmark.
comment by Larks · 2013-06-12T14:47:55.164Z · LW(p) · GW(p)
He is concerned that AIs might not be conscious. Interestingly, this is IIRC the exact opposite of Eliezer's fear: he worries that they might be (though I may be misremembering). I think Tegmark is mainly talking about UFAIs that replace us (rather than FAIs that protect us), so basically he's saying he'd value a conscious Clippy, but not an unconscious one.
↑ comment by Shmi (shminux) · 2013-06-12T17:02:05.483Z · LW(p) · GW(p)
Does he define "conscious"?
↑ comment by Discredited · 2013-06-13T18:07:47.499Z · LW(p) · GW(p)
No. Elsewhere he has said "I believe that consciousness is the way information feels when being processed", but in this talk he seems to retreat from that a little. He describes a positive singularity with p-zombie AI/robots that have perception and appear conscious, but aren't "aware" of the world around them. He doesn't clarify how perception differs from awareness, and doesn't mention introspection at all.
↑ comment by Shmi (shminux) · 2013-06-13T18:42:28.771Z · LW(p) · GW(p)
So... basically he doesn't know what he is talking about?
↑ comment by diegocaleiro · 2013-06-14T05:55:22.200Z · LW(p) · GW(p)
Neither does anyone who is talking about consciousness...
↑ comment by Shmi (shminux) · 2013-06-14T06:23:29.165Z · LW(p) · GW(p)
Indeed.