Official videos from the Singularity Summit

post by NancyLebovitz · 2011-10-26T17:11:00.875Z · LW · GW · Legacy · 38 comments

Here.

Comments sorted by top scores.

comment by shminux · 2011-10-26T17:25:49.348Z · LW(p) · GW(p)

My (unflattering) comment on EY's presentation style.

Replies from: lukeprog
comment by lukeprog · 2011-10-26T19:22:39.585Z · LW(p) · GW(p)

But, he appears to be wearing a bondage-gear leather vest.

Eliezer FTW.

Replies from: None, shminux, None
comment by [deleted] · 2011-10-26T20:23:56.603Z · LW(p) · GW(p)

He looks like he showed up at the last minute from a Renaissance Faire run by sadomasochists.

Replies from: pedanterrific, Richard_Kennaway
comment by pedanterrific · 2011-10-27T05:00:40.147Z · LW(p) · GW(p)

A day in the life of Eliezer Yudkowsky.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-10-27T11:48:49.347Z · LW(p) · GW(p)

I think the vest is harmless, but perhaps I'm not the person to go to for how to dress to impress normal people. I don't seem to be on the autism spectrum, but so far as clothes are concerned, I'm amazed at how much people read into them.

What does horrify me is that I was needed to post the link. Why didn't the Singularity Institute do it?

Replies from: pedanterrific
comment by pedanterrific · 2011-10-27T16:04:46.777Z · LW(p) · GW(p)

I guess they assumed that, since it was on the front page of the SingInst site, there was no need...?

comment by Richard_Kennaway · 2011-10-27T11:57:29.346Z · LW(p) · GW(p)

Er...not really. I mean, I know what that would look like and, um, no. Sometimes leather is just leather.

Replies from: None, pedanterrific
comment by [deleted] · 2011-10-27T16:16:01.802Z · LW(p) · GW(p)

No, not really. It's an exaggeration for humorous effect.

I know what that would actually look like too, and while it might cause even bigger signaling problems for Eliezer, it would certainly be interesting to watch. Did I inadvertently suggest that I thought Ren Faires and BDSM were bad things?

comment by pedanterrific · 2011-10-27T16:05:46.103Z · LW(p) · GW(p)

Well, it would be kinda hard to give a presentation through the branks.

comment by shminux · 2011-10-26T20:05:03.820Z · LW(p) · GW(p)

Right garment, wrong venue.

comment by [deleted] · 2011-10-27T03:38:23.212Z · LW(p) · GW(p)

.

comment by Wei Dai (Wei_Dai) · 2011-11-03T11:49:40.451Z · LW(p) · GW(p)

It's great to see high status people like Max Tegmark and Jaan Tallinn publicly support the Singularitarian cause (i.e., trying to push the future towards a positive Singularity). Tallinn specifically mentioned LW as the main influence for his becoming a Singularitarian (or in his newly invented term, "CL3 Generation"). Does anyone know Tegmark's story?

Replies from: betterthanwell
comment by betterthanwell · 2011-11-03T12:19:53.682Z · LW(p) · GW(p)

I suspect that Tegmark is bright enough to have arrived on his own, given cosmology, physical law and a strict adherence to materialism.

(In terms of how he arrived at a Singularitarian worldview, not how he came to affiliate with the SIAI.)

In his own words (2007):

I believe that consciousness is, essentially, the way information feels when being processed. Since matter can be arranged to process information in numerous ways of vastly varying complexity, this implies a rich variety of levels and types of consciousness. The particular type of consciousness that we subjectively know is then a phenomenon that arises in certain highly complex physical systems that input, process, store and output information. Clearly, if atoms can be assembled to make humans, the laws of physics also permit the construction of vastly more advanced forms of sentient life. Yet such advanced beings can probably only come about in a two-step process: first intelligent beings evolve through natural selection, then they choose to pass on the torch of life by building more advanced consciousness that can further improve itself.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-11-04T01:59:27.830Z · LW(p) · GW(p)

I suspect that Tegmark is bright enough to have arrived on his own, given cosmology, physical law and a strict adherence to materialism.

What I want to know is how he became motivated to push for a positive Singularity. It seems to me that people who think that a Singularity is possible, or even likely, greatly outnumber people who think they ought to do something about it. (I used to wonder why most people are so apathetic in the face of such danger and opportunity, but maybe a better question is how the few people who are not became that way.)

comment by lukeprog · 2011-10-26T21:32:01.187Z · LW(p) · GW(p)

My favorite talk is Jaan's.

Replies from: Wei_Dai, shminux, Vladimir_Nesov, timtyler
comment by Wei Dai (Wei_Dai) · 2011-11-04T02:12:33.712Z · LW(p) · GW(p)

I really liked Jaan's talk as well, but I wonder how "Level 2" people react to it. Would they be offended by the suggestion that they are maximizing their social status instead of doing what's best for future society, or by the Levels terminology which implies that they are inferior to "Level 3" people? (The implication seems clear despite repeated disclaimers from Jaan.)

My first reaction to seeing Jaan's talk was "someone ought to forward this to Bill Gates", but now I'm not so sure.

Replies from: lukeprog
comment by lukeprog · 2011-11-04T02:25:56.307Z · LW(p) · GW(p)

Yup, I'd want to change that part.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-11-04T09:28:07.496Z · LW(p) · GW(p)

I'm not sure it should be changed, just saying that Jaan might want to do a bit of "market research" before putting his message in front of a different audience. Who knows, maybe being described as status junkies and "level 2" is actually a good way to make people like Bill Gates reconsider their priorities?

comment by shminux · 2011-10-26T22:01:54.293Z · LW(p) · GW(p)

I'm sure this has nothing to do with him paying a large chunk of your salary :P

Seriously, though, it was a great talk, with a great conclusion, too. But I can't say that the label "CL3 Generation" is catchy enough.

Replies from: atucker
comment by atucker · 2011-10-27T16:12:34.028Z · LW(p) · GW(p)

There was a discussion at a dinner afterwards about what the C stood for.

It took far too long to remember, but "exactivist" was a popular alternative. (exact + ist, ex - activist, ex (as in x-risk) activist, and probably a few more that I forgot).

comment by Vladimir_Nesov · 2011-10-27T21:24:35.135Z · LW(p) · GW(p)

A good talk, but as others have mentioned, the "CL3" thing is strange, and the whole idea of there being levels seems only weakly motivated (it also raises irrelevant objections, prompting the disclaimers about levels other than CL3 being OK that Jaan was forced to make repeatedly). On the other hand, the categorization into three unordered areas of activism/concern seems solid.

comment by timtyler · 2011-10-27T18:31:06.538Z · LW(p) · GW(p)

That is probably a good example of how not to attempt to launch a meme.

Replies from: lukeprog
comment by lukeprog · 2011-10-27T19:08:47.168Z · LW(p) · GW(p)

Can you be more specific?

Replies from: timtyler
comment by timtyler · 2011-10-27T20:04:06.526Z · LW(p) · GW(p)

The "CL3 Generation" meme. It even managed to remind me of Scientology's "OT auditing levels".

Perhaps more time on 4chan is needed.

comment by [deleted] · 2011-10-26T20:01:45.165Z · LW(p) · GW(p)

Is it possible to obtain the slides from EY's presentation?

Replies from: lukeprog
comment by lukeprog · 2011-10-26T21:32:42.147Z · LW(p) · GW(p)

Not what you asked, but... I did upload his list of open problems here.

Replies from: Dr_Manhattan, timtyler, timtyler, timtyler, timtyler
comment by Dr_Manhattan · 2011-11-01T12:38:00.552Z · LW(p) · GW(p)

Seriously Luke, slides - the video was kind of blurry. Use the Force (if you have to)!

I think there is such a thing as professionalism, and it's not always bad. Posting slides for your talks is common practice. In EY's case we can chalk it up to absentminded genius, but this is why we have well-organized people like you at SingInst. I say this as a supporter.

Replies from: lukeprog
comment by lukeprog · 2011-11-01T19:16:32.901Z · LW(p) · GW(p)

Just got permission from Eliezer to post his Singularity Summit 2011 slides. Here you go.

Replies from: None, Dr_Manhattan
comment by [deleted] · 2011-12-11T11:28:10.474Z · LW(p) · GW(p)

Great!

comment by Dr_Manhattan · 2011-11-01T19:48:39.275Z · LW(p) · GW(p)

Thanks a lot Luke.

comment by timtyler · 2011-10-27T12:49:04.732Z · LW(p) · GW(p)

extension of Solomonoff induction to anthropic reasoning and higher-order logic – why ideal rational agents still seem to need anthropic assumptions.

I would say it lacks a rationale. AFAIK, intelligent agents just maximise some measure of utility. Anthropic issues are dealt with automatically as part of this process.

Much the same is true of this one:

Theory of logical uncertainty in temporal bounded agents.

Again, this is a sub-problem of solving the maximisation problem.

Breaking a problem down into sub-problems is valuable - of course. On the other hand you don't want to mistake one problem for three problems - or state a simple problem in a complicated way.

comment by timtyler · 2011-10-27T12:54:22.221Z · LW(p) · GW(p)

How do you construe a utility function from a psychologically realistic detailed model of a human’s decision process?

It may be an obvious thing to say - but there is an existing research area that deals with this problem: revealed preference theory.

I would say obtaining some kind of utility function from observations is rather trivial - the key problem is compressing the results. However, general-purpose compression is part of the whole project of building machine intelligence anyway. If we can't compress, we get nowhere, and if we can compress, then we can (probably) compress utility functions.
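
A toy illustration of the "rather trivial" direction (the options and observed choices below are made up, and win-counting is only one crude way to fit a utility function to choice data):

```python
from collections import Counter

# Hypothetical observations: each pair records (chosen, rejected).
observations = [("apple", "banana"), ("banana", "cherry"), ("apple", "cherry")]

# Score each option by how often it was chosen over an alternative. Any
# monotone transform of these scores fits the data equally well, which is
# one reason extracting *some* utility function is easy - the hard part
# is compressing/generalising the result.
wins = Counter(chosen for chosen, _ in observations)

options = {option for pair in observations for option in pair}
ranking = sorted(options, key=lambda o: wins[o], reverse=True)
print(ranking)  # ['apple', 'banana', 'cherry']
```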

Replies from: lukeprog
comment by lukeprog · 2011-10-27T19:06:49.901Z · LW(p) · GW(p)

It may be an obvious thing to say - but there is an existing research area that deals with this problem: revealed preference theory.

Right. Also, choice modeling in economics and preference extraction in AI / decision support systems.

comment by timtyler · 2011-10-27T15:50:42.046Z · LW(p) · GW(p)

Better formalize hybrid of causal and mathematical inference.

I'm not convinced that there is much to be done there. Inductive inference is quite general, while causal inference involves its application to systems that change over time in a lawful manner. Are we talking about optimising inductive inference systems to preferentially deal with causal patterns?

That is similar to the "reference machine" problem - in that eventually you can expose the machine to some real-world data and then let it design its own reference machine. Hand-coding a reference machine might help with getting off the ground initially, however.

Does anyone understand better - or have a link?

comment by timtyler · 2011-10-27T12:45:49.102Z · LW(p) · GW(p)

Making hypercomputation conceivable

This one seems to be a pretty insignificant problem, IMHO. Real icing-on-the-cake stuff that isn't worth spending time on at this stage.

comment by timtyler · 2011-11-01T21:10:06.650Z · LW(p) · GW(p)

All the videos here (http://www.youtube.com/user/SingularitySummits) are currently private.

comment by sidsver · 2011-11-22T11:05:36.883Z · LW(p) · GW(p)

Guys, have a look: www.cl3generation.com

Very preliminary at this stage. Cheers.

comment by timtyler · 2011-11-01T19:10:50.693Z · LW(p) · GW(p)

My comments on the video: Eliezer Yudkowsky: Open Problems in Friendly Artificial Intelligence

An ordinary utility maximiser calculates its future utility conditional on choosing to defect - and does the same conditional on choosing to cooperate. If it knows it is playing the Prisoner's Dilemma against its clone, it will expect the clone to make the same deterministic decisions that it does. So it will choose to cooperate - since that maximises its own utility. That is the behaviour to expect from a standard utility maximiser.
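
A minimal sketch of that calculation (the payoff numbers are illustrative, and the clone is modelled as deterministically mirroring the agent's choice):

```python
# Sketch only: a utility maximiser playing the Prisoner's Dilemma against
# a perfect clone. Conditioning on its own move fixes the clone's move too,
# because both run the same deterministic decision procedure.

# Standard PD payoffs for (my_move, clone_move); the numbers are illustrative.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, the clone defects
    ("D", "C"): 5,  # I defect, the clone cooperates
    ("D", "D"): 1,  # mutual defection
}

def expected_utility(my_move: str) -> int:
    # Against a clone, the clone's move equals my move.
    return PAYOFF[(my_move, my_move)]

best_move = max(["C", "D"], key=expected_utility)
print(best_move)  # "C" - cooperation maximises the agent's own utility here
```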

...and...

05:55 - What about the well-known list of informational harms? E.g. see the Bostrom "Information Hazards" paper.

I notice that multiple critical comments have been incorrectly flagged as spam on this video. Some fans have a pretty infantile way of expressing disagreement.