Peter Thiel's speech at Oxford Debating Union on technological stagnation, Nuclear weapons, COVID, Environment, Alignment, 'anti-anti anti-anti-classical liberalism', Bostrom, LW, etc.

post by M. Y. Zuo · 2023-01-30T23:31:26.134Z · LW · GW · 33 comments

This is a very interesting speech covering a lot of popular topics, and it's only 27 minutes long.

Beware, though: his views are usually provocative, and especially here certain ideas seem to be expressed to challenge one group, or several simultaneously, so I recommend it with caveats.

For example, his opening line is:

"I'm always reminded of a question a colleague of mine like to ask 'what is the antonym of diversity?', 'what word is the single antonym of diversity?'…university."

33 comments

Comments sorted by top scores.

comment by Søren Elverlin (soren-elverlin-1) · 2023-01-31T12:58:27.109Z · LW(p) · GW(p)
  • AI Risk is mentioned first at 19:40.
  • Bostrom's "The Vulnerable World Hypothesis" paper is grossly misquoted.
  • No object-level arguments against AI Risk are presented, nor is there any reference to object-level arguments made by anyone.

I'm still upvoting the post, because I find it useful to know how AI Risk (and we) are perceived.

Replies from: TrevorWiesinger, M. Y. Zuo
comment by trevor (TrevorWiesinger) · 2023-02-16T03:18:30.957Z · LW(p) · GW(p)

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-02-16T13:18:42.065Z · LW(p) · GW(p)

It does seem like it attracted an unusually large proportion of downvotes, which is why it's sitting at just 8.

Likely due to animosity that some folks may feel towards Thiel.

comment by M. Y. Zuo · 2023-01-31T18:41:36.053Z · LW(p) · GW(p)

There's also the Q&A session afterwards, which isn't nearly as interesting or provocative, but it does reflect what the average Oxford Debating Union member might be thinking. Or at least that's my understanding.

comment by Viliam · 2023-01-31T15:00:39.557Z · LW(p) · GW(p)

After listening to the first 20 minutes, it seems to me that Thiel imagines some kind of conspiracy of universities to suppress progress. Which reminds me of a quote: "Don’t let your schooling interfere with your education."

If your problem with universities is ideology, there is little you can do about that. But if your problem is that they do not teach enough science, or teach science wrong, the solution is straightforward -- provide better science lessons online. Kids these days spend most of their time online; build something like Khan Academy on steroids, maybe with some real-life incentives (the five best students each year win actual money), and kids will compete at becoming better scientists.

Similarly, why can't we celebrate successful scientists? Dunno, why don't you organize a celebration for those you think deserve it?

It's striking how none of the solutions involve more technology. The solution to climate change is not fusion reactors. The solution to nuclear weapons is not better anti-ballistic missile systems. The solution to AI... The solution to biotech is not accelerating the research even faster.

Disappointed that you skipped the technological solution to AI. Yudkowsky might have learned a thing or two.

(Two advertisements every 5 minutes -- the ultimate YouTube experience. Just kidding, it will get worse soon.)

Replies from: M. Y. Zuo, ChristianKl
comment by M. Y. Zuo · 2023-01-31T16:08:10.256Z · LW(p) · GW(p)

He does raise the interesting point that strong taboos are usually hiding something. 

Robert B. Laughlin, the controversial professor at Stanford, was his example. It seems to have been personal, as one of his friends failed to receive a PhD under Laughlin due to the feuding around a very strong academic taboo being broken.

The implication being that Laughlin's students at the time were denied opportunities as revenge, as they were easier targets to take down than a Stanford professor who had just won the Nobel Prize in Physics.

If true, it's an understandable motivation to then hold a grudge or ponder conspiracies behind other apparently inexplicable phenomena.

I guess in that sense it does boil down to an ideological fight. 

Can someone investigate their colleagues? Is it permissible to air suspicions openly? Is it acceptable to claim other professors at the university are hucksters and fraudsters with just circumstantial evidence?

comment by ChristianKl · 2023-01-31T15:57:21.158Z · LW(p) · GW(p)

But if your problem is that they do not teach enough science, or teach the science wrong, the solution is straightforward -- provide better lessons of science online. 

This relates to what Thiel said about fields that call themselves XY science. It's a tell that this isn't really science. Solving textbook problems isn't science. Science is 0 -> 1. Science is about doing experiments and learning something useful from them that was previously unknown.

The problem is not that children aren't taught enough science in school, it's that they usually aren't taught any. Most children leave school without having done a single experiment in which they learned something useful that was previously unknown.

maybe with some real-life incentives (the five best students each year win actual money), and kids will compete at becoming better scientists.

Science inherently has real-world incentives. If you learn something useful through your experiments, that has real-world value. If there isn't any real-world value, that's a sign that it's not really science but just some game of pretending to do science.

Similarly, what can't we celebrate successful scientists? Dunno, why don't you organize a celebration for those who you think deserve it?

If Peter Thiel organized a celebration for a scientist, you would get a bunch of journalists thinking about how to write negative articles about that event. That's not going to be a purely positive event for the celebrated scientist.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-01-31T23:54:52.631Z · LW(p) · GW(p)

maybe with some real-life incentives (the five best students each year win actual money), and kids will compete at becoming better scientists.

Science inherently has real-world incentives. If you learn something useful through your experiments, that has real-world value. If there isn't any real-world value, that's a sign that it's not really science but just some game of pretending to do science.

I'm not too sure about this. Aren't there scientists who claim to have done it for the beauty, the enjoyment of such beauty, etc.?

Much like pure mathematicians who talk about the beauty of their equations motivating them.

Replies from: ChristianKl
comment by ChristianKl · 2023-02-01T00:06:26.468Z · LW(p) · GW(p)

Enjoyment of beauty is a real-world value. We pay artists to produce beautiful things because they have value to us. 

But even there, when you listen to the string theorists talk about how their work reveals beauty that only they understand, there's the fraud question.

In any case, students almost never create scientific work that's beautiful enough that someone would engage with it without being bribed to do so. 

Replies from: green_leaf
comment by green_leaf · 2023-02-01T00:13:48.551Z · LW(p) · GW(p)

Science has to do with understanding and knowledge - practical applications are applied science, engineering, medicine, etc. It's up to those fields to come up with ideas about how to find real-world use for the science.

Replies from: ChristianKl
comment by ChristianKl · 2023-02-01T00:22:20.664Z · LW(p) · GW(p)

I didn't speak about "practical applications". If another scientist can build on the work you produce, you are also creating real-world value. In the absence of fraud, anything people are willing to pay for has real-world value.

Replies from: green_leaf
comment by green_leaf · 2023-02-01T00:41:34.012Z · LW(p) · GW(p)

Another scientist being able to build on the work some other scientist produces is different from what most people would call "real-world value," but I agree that's important (even though I disagree that other people's ability to build on, buy, or do anything else with the work determines whether something is science or whether the science is worthwhile - the science-status of a paper is determined purely by its content, not by what other people are or aren't capable of doing with it).

Even though I agree that a platonic ideal of a scientist would be able to build on any paper containing true science (perhaps given enough time for technological advancement, if that is necessary).

Replies from: ChristianKl
comment by ChristianKl · 2023-02-01T01:44:11.446Z · LW(p) · GW(p)

There are multiple things you can mean by the word science. In the context of Thiel's talk, science is the thing on which you can build progress. Science in that sense depends on creating work that's valuable to other people. As long as the knowledge you gain is esoteric and in your own head, it's not science. Science is actually about exoteric knowledge that other people adopt.

I did link the Larry McEnerney talk for a reason. It gives more details about the notion of value that I'm pointing toward.

Replies from: green_leaf
comment by green_leaf · 2023-02-01T01:47:21.485Z · LW(p) · GW(p)

I don't think that's what anyone means by science, so I'm naturally suspicious towards someone using it in such a manner.

Replies from: ChristianKl
comment by ChristianKl · 2023-02-01T01:52:19.510Z · LW(p) · GW(p)

To refer to Duncan's latest post [LW · GW], do you seriously claim that I don't mean that by science (I'm certainly part of "anyone")?

Or, for that matter, Larry McEnerney, who defines knowledge in the linked talk as something that's actually valuable to other people?

In our times of great stagnation, there are many people who don't think that science is about producing value. That position is part of the problem.

Replies from: green_leaf
comment by green_leaf · 2023-02-01T02:28:59.938Z · LW(p) · GW(p)

To refer to Duncan's latest post [LW · GW], do you seriously claim that I don't mean that by science (I'm certainly part of "anyone")?

No. I mean the customary meaning of that phrase, which, I think, would be maybe something like "anyone except a few people."

It's certainly possible for you (or someone else) to redefine science, but then the criticism is that what-is-customarily-meant-by-science doesn't fulfill the criteria of what-the-speaker-redefined-the-word-science-to-mean, which might be true, but I don't see why it is important.

A better criticism would be that science that's not useful shouldn't be produced (rather than that it's not true science), but then the obvious problem is that the usefulness or uselessness of science can't always be judged in advance, and that it might take decades (or even centuries) for scientific knowledge to become useful, and humans trying to optimize for usefulness (rather than for science-quality-and-correctness) would curtail those scientific papers that have no obvious use today.

That would lead to being stuck in a sort of local maximum.

Replies from: ChristianKl
comment by ChristianKl · 2023-02-01T15:26:18.843Z · LW(p) · GW(p)

No. I mean the customary meaning of that phrase

Okay, so it's saying untrue things for rhetorical impact.

that it might take decades (or even centuries) for scientific knowledge to become useful, and humans trying to optimize for usefulness (rather than for science-quality-and-correctness) would curtail those scientific papers that have no obvious use today.

When you send a paper to a journal, that journal does ask itself "Is this paper useful to the people who read this journal and does it help advance the field, or is it pointless for the readers of the journal to read it?" Given that this is how our scientific system works, following Larry McEnerney's advice about writing to actually create value for the readers of the journal does help produce better papers.

Thomas Kuhn distinguished scientific fields from fields that aren't by the fact that scientific fields progress. If the changes of a field are due to fashion and not progress, it's not a science in Kuhn's sense; if the change is progress, it is. For progress to happen you need to solve problems that help the field progress.

In Viliam's proposal, having students train "science" with something Khan-Academy-like is having them train skills that are not about producing anything on which other people, or even they themselves, can build. I used the term real-world to contrast it with the world of school.

Replies from: green_leaf
comment by green_leaf · 2023-02-01T17:02:52.893Z · LW(p) · GW(p)

Okay, so it's saying untrue things for rhetorical impact.

It's saying something literally untrue (as the English language often works), not for rhetorical impact, but simply because that's what the phrase means.

When you send a paper to a journal, that journal does ask itself "Is this paper useful to the people who read this journal and does it help advance the field, or is it pointless for the readers of the journal to read it?"

If that were the case, the suggestion to change the process of producing science would be pointless, because science would already work that way.

In Viliam's proposal, having students train "science" with something Khan-Academy-like is having them train skills that are not about producing anything on which other people, or even they themselves, can build. I used the term real-world to contrast it with the world of school.

The understanding from textbooks (or Khan Academy) is very much needed to create something other people can build on. The reason there is no obvious pathway from the former to the latter is that science is extremely complex, with many layers of abstraction.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-02-02T01:06:17.545Z · LW(p) · GW(p)

When you send a paper to a journal, that journal does ask itself "Is this paper useful to the people who read this journal and does it help advance the field, or is it pointless for the readers of the journal to read it?"

If that were the case, the suggestion to change the process of producing science would be pointless, because science would already work that way.

Can you elaborate on this?

Replies from: green_leaf
comment by green_leaf · 2023-02-02T01:51:45.009Z · LW(p) · GW(p)

If only real-world-useful science was published in journals, it would be pointless to suggest that only real-world-useful science should be produced.

comment by Rodrigo Heck (rodrigo-heck-1) · 2023-01-30T23:40:00.073Z · LW(p) · GW(p)

I am with him on this. The level of AI alarmism being put forward, especially in this community, is uncalled for. I was just reading Yudkowsky and Scott's chat exchange, and all the doom arguments I captured were of the form "what if?". How about we just return to the way we do engineering: keep building and innovating and dealing with negative side effects along the way?

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-01-30T23:44:13.076Z · LW(p) · GW(p)

To borrow Thiel's analogies, the same could also be said by proponents of further developments in nuclear weapons or 'gain-of-function' research of viruses... which raises the interesting question of whether he intended his speech to be partially self-negating one level further in.

Replies from: rodrigo-heck-1
comment by Rodrigo Heck (rodrigo-heck-1) · 2023-01-31T00:28:15.349Z · LW(p) · GW(p)

AI risk is still at another level of concern. If you ask me to list what can go wrong with gain-of-function research, I can probably cite a lot of things. Now if you ask me what dangers LLMs can cause to humanity, I will have a much more innocuous list.

Replies from: lc
comment by lc · 2023-01-31T17:46:00.158Z · LW(p) · GW(p)

Current* large language models are not general intelligences. This community is mostly concerned with existential risk from future AIs, not the extremely minor risks from misuse of current AIs.

Replies from: rodrigo-heck-1, sharmake-farah
comment by Rodrigo Heck (rodrigo-heck-1) · 2023-01-31T22:10:22.502Z · LW(p) · GW(p)

That's exactly my point. We don't even know what these future technologies will look like. Gain-of-function research has potential major negative effects right now, so I think it's reasonable to be cautious. AI is not currently at this point. It may potentially be in the future, but by then we will be better equipped to deal with it and to assess the risk-benefit profile we are willing to put up with.

Replies from: lc
comment by lc · 2023-01-31T22:59:29.748Z · LW(p) · GW(p)

but by then we will be better equipped to deal with it

This is precisely the point with which others disagree; especially the implicit assertion that we will be sufficiently equipped to handle the problem rather than just "better".

Replies from: rodrigo-heck-1
comment by Rodrigo Heck (rodrigo-heck-1) · 2023-02-01T00:01:18.023Z · LW(p) · GW(p)

That's still a theoretical problem; something we should consider but not overly update on, in my opinion. Besides, can you think of any technology that people foresaw would be developed, and for which specialists managed to successfully plan a framework before implementation? That wasn't the case even with nuclear bombs.

Replies from: cubefox, liam-donovan-1
comment by cubefox · 2023-02-01T18:57:31.779Z · LW(p) · GW(p)

Besides, can you think of any technology that people foresaw would be developed, and for which specialists managed to successfully plan a framework before implementation?

That's part of the reason why Eliezer Yudkowsky thinks we're doomed and Robin Hanson thinks that we shouldn't try to do much now. The difference between the two is take-off speed: for EY, we either solve alignment before the arrival of superintelligence (which is unlikely) or we are doomed; RH thinks we have time to make alignment work during the arrival of superintelligence.

Replies from: rodrigo-heck-1
comment by Rodrigo Heck (rodrigo-heck-1) · 2023-02-01T19:20:35.325Z · LW(p) · GW(p)

Well, Eliezer is the one making extraordinary claims, so I think I am justified in applying a high dose of skepticism before evidence of AI severely acting against humanity's best interest pops up.

Replies from: Zachary
comment by Zachary · 2023-02-01T19:39:31.670Z · LW(p) · GW(p)

Are you able to steelman the argument in favor of AI being an existential risk to humanity?

comment by Liam Donovan (liam-donovan-1) · 2023-02-01T17:45:02.661Z · LW(p) · GW(p)

Well... Eliezer does think we're doomed, so that doesn't necessarily contradict his worldview.

comment by Noosphere89 (sharmake-farah) · 2023-01-31T17:50:39.639Z · LW(p) · GW(p)

Hm, I think this is way too confident a take here. It is possible LLMs simply can't scale, but you need to avoid making such a rightly controversial claim as a bare response to someone.

Replies from: lc
comment by lc · 2023-01-31T17:51:48.918Z · LW(p) · GW(p)

Added a word then.