Peter Thiel on Technological Stagnation and Out of Touch Rationalists
post by Matt Goldenberg (mr-hire) · 2022-12-07T13:15:32.009Z · LW · GW · 26 comments
This is a link post for https://youtu.be/ibR_ULHYirs
Sharing mostly because I personally didn't realize how Thiel was viewing the Bay Area rationalists these days.
He explicitly calls out Eliezer's "Death with Dignity" post as ridiculous, calls out Bostrom as out of touch, and says that the rationalists are only interesting because they aren't saying anything, and are just an echo of prevailing feelings about technology.
I think it's worthwhile to really try stepping into Thiel's point of view and updating on it.
26 comments
Comments sorted by top scores.
comment by Razied · 2022-12-07T15:07:47.181Z · LW(p) · GW(p)
Thiel's argument against Bostrom's Vulnerable World Hypothesis is basically "Well, Science might cause bad things, but totalitarianism might cause even worse stuff!" Which, sure, but Bostrom's whole point is that we seem to be confronted with a choice between two very undesirable outcomes: either technology kills us or we become totalitarian. Either we risk death from the cancer, or we risk death from the chemotherapy. Thiel implicitly accepts this frame; he just thinks the cure is worse than the disease. He doesn't offer a third option or argue that science is less dangerous than Bostrom believes.
He also unfortunately doesn't offer much against Eliezer's "Death With Dignity" post: no specific technical counterarguments, just some sneering and "Can you believe these guys?" stuff. I don't think Thiel would be capable of recognizing the End of the World as such 5 years before it happens. However, his point about the weirdness of Bay Area rationalists is true, though not especially new.
↑ comment by Noosphere89 (sharmake-farah) · 2022-12-07T17:54:08.397Z · LW(p) · GW(p)
The best argument against the VWH solution is in the post Enlightenment values in a vulnerable world, especially once we are realistic about what incentives states are under:
The above risks arise from a global state which is loyally following its mandate of protecting humanity’s future from dangerous inventions. A state which is not so loyal to this mandate would still find these tools for staying in power instrumental, but would use them in pursuit of much less useful goals. Bostrom provides no mechanism for making sure that this global government stays aligned with the goal of reducing existential risk and conflates a government with the ability to enact risk reducing policies with one that will actually enact risk reducing policies. But the ruling class of this global government could easily preside over a catastrophic risk to their citizens and still enrich themselves. Even with strong-minded leaders and robust institutions, a global government with this much power is a single point of failure for human civilization. Power within this state will be sought after by every enterprising group whether they care about existential risk or not. All states today are to some extent captured by special interests which lead them to do net social harm for the good of some group. If the global state falls into the control of a group with less than global interests, the alignment of the state towards global catastrophic risks will not hold.
A state which is aligned with the interests of some specific religion, race, or an even smaller oligarchic group can preside over and perpetrate the killing of billions of people and still come out ahead with respect to its narrow interests. The history of government gives no evidence that alignment with decreasing global catastrophic risk is stable. By contrast, there is evidence that alignment with the interests of some powerful subset of constituents is essentially the default condition of government.
If Bostrom is right that minimizing existential risk requires a stable and powerful global government, then politicide, propaganda, genocide, scapegoating, and stagnation are all instrumental in pursuing the strategy of minimizing anthropogenic risk. A global state with this goal is therefore itself a catastrophic risk. If it disarmed other more dangerous risks, such a state could be an antidote, but whether it would do so isn't obvious. In the next section we consider whether the panopticon government is likely to disarm many existential risks.
Beyond these two examples, a global surveillance state would be searching the urn specifically for black balls. This state would have little use for technologies which would improve the lives of the median person, and they would actively suppress those which would change the most important and high status factors of production. What they want are technologies which enhance their ability to maintain control over the globe. Technologies which add to their destructive and therefore deterrent power. Bio-weapons, nuclear weapons, AI, killer drones, and geo-engineering all fit the bill.
A global state will always see maintaining power as essential. A nuclear arsenal and an AI powered panopticon are basic requirements for the global surveillance state that Bostrom imagines. It is likely that such a state will find it valuable to expand its technological lead over all other organizations by actively seeking out black ball technologies. So in addition to posing an existential risk in and of itself, a global surveillance state would increase the risk from black ball technologies by actively seeking destructive power and preventing anyone else from developing antidotes.
Here's a link to the longer version of the post.
https://forum.effectivealtruism.org/posts/A4fMkKhBxio83NtBL/enlightenment-values-in-a-vulnerable-world [EA · GW]
↑ comment by Joel Burget (joel-burget) · 2022-12-07T16:25:30.797Z · LW(p) · GW(p)
Thiel's arguments about both the Vulnerable World Hypothesis and Death with Dignity were so (uncharacteristically?) shallow that I had to question whether he actually believes what he said, or was just making an argument he thought would be popular with the audience. I don't know enough about his views to say, but my guess is that the latter is somewhat (20%+) likely.
↑ comment by Lao Mein (derpherpize) · 2022-12-08T01:38:31.224Z · LW(p) · GW(p)
The VWH is very iffy. It can be generalized into fairly absurd conclusions. It's like Pascal's Mugging, but with unknown unknowns, which evades statistical analysis by definition.
"We don't know if SCP-tier infohazards can result in human extinction. Every time we think a new thought, we're reaching into an urn, and there is a chance that it will become both lethal and contagious. Yes, we don't know if this is even possible, but we're thinking a lot of new thoughts now adays. The solution to this is..."
"We don't know if the next vaccine can result in human extinction. Every time we make a new vaccine, we're reaching into an urn, and there is a chance that it will accidentally code for prions and kill everyone 15 years later. Or something we can't even imagine right now. Yes, according to our current types of vaccines this is very unlikely, and our existing vaccines do in fact provide a lot of benefits, but we don't know if the next vaccine we invent, especially if it's using new techniques, will be able to slip past existing safety standards and cause human extinction. The solution to this is..."
"Since you can't statistically analyze unknown unknowns, and some of them might result in human extinction, we shouldn't explore anything without a totalitarian surveillance state"
I think Thiel detected an adversarial attempt to manipulate his decision-making and rejected it out of principle.
My main problem is the "unknown unknowns evade statistical analysis by definition" part. There is nothing we can do to satisfy the VWH except by completely implementing its directives. It's in some ways argument-proof by design, since it incorporates unknown unknowns so heavily. Since nothing can be used to disprove the VWH, I reject it as a bad hypothesis.
↑ comment by sig · 2022-12-08T10:36:58.647Z · LW(p) · GW(p)
I found none of those quotes in https://nickbostrom.com/papers/vulnerable.pdf
When using quotation marks, please be more explicit where the quotes are from, if anywhere.
How VWH could be extrapolated is of course relevant and interesting; wouldn't it make sense to pick an example from the actual text?
↑ comment by the gears to ascension (lahwran) · 2022-12-07T17:42:56.763Z · LW(p) · GW(p)
this is the same dude who has been funding Trump heavily; his claim that he doesn't want totalitarianism is obviously probably nonsense
comment by Wei Dai (Wei_Dai) · 2022-12-07T23:37:04.990Z · LW(p) · GW(p)
calls out Bostrom as out of touch
I think he actually said that Bostrom represents the current zeitgeist, which is kind of the opposite of "out of touch"? (Unless he also said "out of touch"? Unfortunately I can't find a transcript to do a search on.)
It's ironic that everyone thinks of themselves as David fighting Goliath. We think we're fighting unfathomably powerful economic forces (i.e., Moloch) trying to build AGI at any cost, and Peter thinks he's fighting a dominant culture that remorselessly smothers any tech progress.
comment by Algon · 2022-12-07T14:28:11.049Z · LW(p) · GW(p)
Here's a transcript. Sorry for the slight inaccuracies; I got Whisper-small to generate it using this notebook someone made.
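(The notebook itself isn't linked here, so for anyone curious, the snippet below is only a minimal sketch of the sort of pipeline involved; it assumes yt-dlp for audio extraction plus the open-source whisper package, and the file name is illustrative.)

```python
# Minimal sketch, not the exact notebook: pull the talk's audio with yt-dlp,
# then transcribe it with the open-source openai-whisper package.
# Requires ffmpeg plus: pip install -U openai-whisper yt-dlp
import subprocess
import whisper

VIDEO_URL = "https://youtu.be/ibR_ULHYirs"

# Download and extract the audio track (ends up as talk.m4a).
subprocess.run(
    ["yt-dlp", "-x", "--audio-format", "m4a", "-o", "talk.%(ext)s", VIDEO_URL],
    check=True,
)

# Load the "small" Whisper model and transcribe; expect occasional errors
# (e.g. "MIRI" coming out as "Mary") that need manual correction.
model = whisper.load_model("small")
result = model.transcribe("talk.m4a")
print(result["text"])
```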
Here's the section about MIRI and Bostrom:

But, you know, I was involved peripherally with some of these sort of East Bay rationalist futuristic groups. There was one called the Singularity Institute in the 2000s, and the self-understanding was, you know, building an AGI: it's going to be the most important technology in the history of the world, we better make sure it's friendly to human beings, and we're going to work on making sure that it's friendly.

And the vibe got a little bit stranger, and I think it was around 2015 that I realized that they didn't seem to be working that hard on the AGI anymore, and they seemed to be more pessimistic about where it was going to go, and it sort of devolved into a Burning Man camp. It had gone from transhumanist to Luddite in 15 years. Something had sort of gone wrong.

And it was finally confirmed to me by a post from MIRI, the Machine Intelligence Research Institute, the successor organization, in April of this year. And again, these are sort of the cutting-edge thought leaders of the people who have been pushing AGI for the last 20 years, and, you know, it was fairly important in the whole Silicon Valley ecosystem. Title: "MIRI announces new Death with Dignity strategy." And then the summary: it's obvious at this point that humanity isn't going to solve the alignment problem, i.e., how is AI aligned with humans, or even try very hard, or even go out with much of a fight. Since survival is unattainable, we should shift the focus of our efforts to helping humanity die with slightly more dignity. And then anyway, it goes on to talk about why it's only slightly more dignity, because people are so pathetic and they've been so lame at dealing with this.

And of course there's probably a lot you can say that this was somehow deeply in the logic of the whole AI program for decades, that it was potentially going to be very dangerous. If you believe in Darwinism or Machiavellianism, there are no purely self-interested actors. And then, you know, if you get a superhuman AGI, you will never know that it's aligned. So there was a very deep problem. People had avoided it for 20 years or so, and at some point, one day, they wake up and the best thing we can do is just hand out some Kool-Aid a la People's Temple to everybody, or something like this.

And then, unless we just dismiss this sort of thing as just the kind of thing that happens in a post-COVID mental breakdown world, I found another article from Nick Bostrom, who's sort of an Oxford academic. And, you know, most of these people are somehow interesting because they have nothing to say. They're interesting because they're just mouthpieces. It's like the mouth of Sauron. They're just complete cogs in the machine, but they are useful because they tell us exactly where the zeitgeist is in some ways.

And this was from 2019, pre-COVID: "The Vulnerable World Hypothesis." And that goes through a whole litany of these different ways where science and technology are creating all these dangers for the world. And what do we do about them? And it's the precautionary principle, whatever that means. But then, you know, he has a four-part program for achieving stabilization. And I will just read off the four things you need to do to make our world less vulnerable and achieve stabilization, in a world where we have this exponentiating technology, where maybe it's not progressing that quickly, but still progressing quickly enough, and there are a lot of dangerous corner cases. You only need to do these four things to stabilize the world.

Number one, restrict technological development.

Number two, ensure that there does not exist a large population of actors representing a wide and recognizably human distribution of motives. So that sounds somewhat incompatible with DEI, at least in the ideas form of diversity.

Number three, establish extremely effective preventive policing.

And number four, establish effective global governance. Since, you know, even if there's like one little island somewhere where this doesn't apply, it's no good.

And so this is basically, you know, the zeitgeist on the other side.
↑ comment by Mitchell_Porter · 2022-12-07T22:36:47.451Z · LW(p) · GW(p)
It's completely unclear to me whether he actually thinks there is a risk to humanity from superhuman AI, and if so, what he thinks could or should be done about it.
For example, is he saying that "you will never know that [superhuman AGI] is aligned" truly is "a very deep problem"? Or is he saying that this is a pseudo-problem created by following the zeitgeist or something?
Similarly, what is his point about Darwinism and Machiavellianism? Is he saying, because that's how the world works, superhuman AI is obviously risky? Or is he saying that these are assumptions that create the illusion of risk?
In any case, Thiel doesn't seem to have any coherent message about the topic itself (as opposed to disapproving of MIRI and Nick Bostrom). I don't find that completely surprising. It would be out of character for a politically engaged, technophile entrepreneur to say "humanity's latest technological adventure is its last, we screwed up and now we're all doomed".
His former colleague Elon Musk speaks more clearly - "We are not far from dangerously strong AI" (tweeted four days ago) - and he does have a plan - if you can't beat them, join them, by wiring up your brain (i.e. Neuralink).
comment by Vladimir_Nesov · 2022-12-07T15:00:44.725Z · LW(p) · GW(p)
This feels like a conflict theory on corrupted hardware argument: AI risk people think they are guided by technical considerations, but the norm encompassing their behavior is the same as with everything else in technology, smothering progress instead of earnestly seeking a way forward, navigating the dangers.
So I think the argument is not about the technical considerations, which could well be mostly accurate, but a culture of unhealthy attitude towards them, shaping technical narratives and decisions. There's been a recent post [LW · GW] making a point of the same kind.
comment by ryan_b · 2022-12-08T22:43:51.726Z · LW(p) · GW(p)
I watched that talk on YouTube. My first impression was strongly that he was using hyperbole to drive the point home for the audience; the talk was littered with the pithiest versions of his positions. Compare with the series of talks he gave after Zero to One was released for the more general way he expresses similar ideas, and you can also compare with some of the talks that he gives to political groups. On a spectrum between a Zero to One talk and a Republican Convention talk, this was closer to the latter.
That being said, I wouldn't be surprised if he was skeptical of any community that thinks much about x-risk. Using the 2x2 for definite-indefinite and optimism-pessimism, his past comments on American culture have been about losing definite optimism. I expect he would view anything focused on x-risk as falling into the definite pessimism camp, which is to say we are surely doomed and should plan against that outcome. By the most-coarse sorting my model of him uses, we fall outside of the "good guy" camp.
He didn't say anything about this specifically in the talk, but I observe his heavy use of moral language. I strongly expect he takes a dim view of the prevalence of utilitarian perspectives in our neck of the woods, which is not surprising because it is something we and our EA cousins struggle with ourselves [LW · GW] from time to time.
As a consequence, I fully expect him to view the rationality movement as people who are doing not-good-guy things and who use a suspect moral compass all the while. I think that is wrong, mind you, but it is what my simple model of him says.
It is easy to imagine outsiders having this view. I note people within the community have voiced dissatisfaction with the amount of content that focuses on AI stuff, and while strict utilitarianism isn't the community consensus it is probably the best-documented and clearest of the moral calculations we run.
In conclusion, Thiel's comments don't cause me to update on the community, because they don't tell me anything new about us, but they do help firm up some of the dimensions along which our reputation among the public is likely to vary.
comment by Viliam · 2022-12-08T16:06:35.796Z · LW(p) · GW(p)
To me it sounds like Thiel is making a political argument against... diversity, wokeness, the general opposition to western civilization and technology... and pattern-matching everything to that. His argument sounds to me like this:
A true libertarian is never afraid of progress, he boldly goes forward and breaks things. You cannot separate dangerous research from useful research anyway; every invention is dual-use, so worrying about horrible consequences is silly, progress is always a net gain. The only reason people think about risks is political mindkilling.
I am disappointed that Bay Area rationalists stopped talking about awesome technology, and instead talk about dangers. Of course AI will bring new dangers, but it only worries you if you have a post-COVID mental breakdown. Note that even university professors, who by definition are always wrong and only parrot government propaganda, are agreeing about the dangers of AI, which means it is now a part of the general woke anti-technology attitude. And of course the proposed solution is world government and secret police controlling everyone! Even the Bible says that we should fear the Antichrist more than we fear Armageddon.
The charitable explanation is that he only pretends to be mindkilled, in order to make a political point.
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2022-12-08T16:16:54.757Z · LW(p) · GW(p)
I agree with your interpretation of Thiel. The guy is heavily involved in right-wing US politics, and that’s an essential piece of context for interpreting his actions and statements. He’s powerful, rich, smart and agentic. While we can interrogate his words at face value, it’s also fine to interpret them as a tool for manipulating perceptions of status. He has now written “Thiel’s summary of Bay Area rationalists,” and insofar as you’re exposed to and willing to defer to Thiel’s take, that is what your perception will be. More broadly, he’s setting what the values will be at the companies he runs, the political causes he supports, and garnering support for his vision by defining what he stands against. That’s a function separate from the quality of the reasoning in his words.
Thiel seems like a smart enough person to make a precise argument when he wants to, so when he loads his words with pop culture references and describes his opponents as “the mouth of Sauron,” I think it’s right to start with the political analysis. Why bother reacting to Thiel if you’re mainly concerned with the content of his argument? It’s not like it’s especially new or original thinking. The reason to focus on Thiel is that you’re interested in his political maneuvers.
↑ comment by Matt Goldenberg (mr-hire) · 2022-12-08T17:17:15.667Z · LW(p) · GW(p)
smart enough person to make a precise argument when he wants to, so when he loads his words with pop culture references and describes his opponents as “the mouth of Sauron,” I think it’s right to start with the political analysis.
FWIW I've often heard him make precise arguments while also using LOTR references and metaphorical language like this, so I don't think this is a sufficient trigger for "he must be making a political statement and not a reasoned one".
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2022-12-08T18:19:46.867Z · LW(p) · GW(p)
I specifically said you can interpret his statement on the level of a reasoned argument. Based on your response, you could also update in favor of seeing even his more reason-flavored arguments as having political functions.
comment by Arthur Conmy (arthur-conmy) · 2023-01-31T18:03:49.877Z · LW(p) · GW(p)
It seems like a cached speech from him. He echoed the same words at the Oxford Union earlier this month. I'm unsure how much this needs updating on. He constantly pauses and is occasionally inflammatory, so my impression was that he was measuring his words carefully for the audience.
comment by the gears to ascension (lahwran) · 2022-12-07T17:32:53.427Z · LW(p) · GW(p)
dude has been funding trumpism; I wouldn't really read much into what he says
edit 4mo later: https://johnganz.substack.com/p/the-enigma-of-peter-thiel
↑ comment by the gears to ascension (lahwran) · 2022-12-07T17:42:16.947Z · LW(p) · GW(p)
WTF downvotes! you wanna explain yourselves?
↑ comment by Vladimir_Nesov · 2022-12-07T18:11:32.622Z · LW(p) · GW(p)
I'm guessing the problem is that you are advocating against dignifying the evil peddlers of bunkum by acknowledging them as legitimate debate partners.
↑ comment by the gears to ascension (lahwran) · 2022-12-07T19:20:18.922Z · LW(p) · GW(p)
oh hmm. thanks for explaining! I think I don't universally agree with offering intellectual charity, especially to those with extremely large implementable agency differences, like thiel (and sbf, and musk, and anyone with a particularly enormous stake of power coupons, aka money). I'm extremely suspicious by default of such people, and the fact that thiel has given significantly to the trump project seems like strong evidence that he can't be trusted to speak his beliefs, since he has revealed a preference for those who will take any means to power. my assertion boils down to "beware adversarial agency from trumpist donors". perhaps it doesn't make him completely ignorable, but I would still urge unusually much caution.
↑ comment by Vladimir_Nesov · 2022-12-07T19:36:47.958Z · LW(p) · GW(p)
The exercise of figuring out what he could've meant doesn't require knowing that he believes it. I think the point I formulated [LW(p) · GW(p)] makes sense and is plausibly touching on something real, but it's not an idea I would've spontaneously thought of on my own, so the exercise is interesting. Charity to something strange is often like that. I'm less clear on whether it's really the point Thiel was making, and I have no idea if it's something he believes, but that doesn't seem particularly relevant.
↑ comment by the gears to ascension (lahwran) · 2022-12-07T20:03:15.830Z · LW(p) · GW(p)
fair enough!
↑ comment by [deleted] · 2022-12-11T06:27:15.696Z · LW(p) · GW(p)
See I just think it means he's a shortsighted greedy moron
↑ comment by the gears to ascension (lahwran) · 2022-12-11T07:11:59.119Z · LW(p) · GW(p)
I mean I agree with that assessment. I do think that, hmm, it should be more possible to be direct about criticism on lesswrong without also dismissing the possibility of considering your interlocutor to be speaking meaningfully. Even though you're agreeing with me, I do also agree with Nesov's comment in a way: if you can't consider the possibility of adversarial agency without needing to bite back hard, you can't evaluate it usefully.