Intergenerational trauma impeding cooperative existential safety efforts
post by Andrew_Critch · 2022-06-03T08:13:25.439Z · LW · GW · 29 comments
Epistemic status: personal judgements based on conversations with ~100 people aged 30+ who were worried about AI risk "before it was cool", and observing their effects on a generation of worried youth, at a variety of EA-adjacent and rationality-community-adjacent events.
Summary: There appears to be something like inter-generational trauma among people who think about AI x-risk — including some of the AI-focussed parts of the EA and rationality communities — which is
- preventing the formation of valuable high-trust relationships with newcomers that could otherwise be helpful to humanity collectively making better decisions about AI, and
- feeding the formation of small pockets of people with a highly adversarial stance towards the rest of the world (and each other).
[This post is also available on the EA Forum [EA · GW].]
Part 1 — The trauma of being ignored
You — or some of your close friends or colleagues — may have had the experience of fearing AI would eventually pose an existential risk to humanity, and trying to raise this as a concern to mainstream intellectuals and institutions, but being ignored or even scoffed at just for raising it. That sucked. It was not silly to think AI could be a risk to humanity. It can.
I, and around 100 people I know, have had this experience.
Experiences like this can easily lead to an attitude like “Screw those mainstream institutions, they don’t know anything and I can’t trust them.”
At least 30 people I've known personally have adopted that attitude in a big way, and I estimate many more have. In the remainder of this post, I'd like to point out some ways this attitude can turn out to be a mistake.
Part 2 — Forgetting that humanity changes
Basically, as AI progresses, it becomes easier and easier to make the case that it could pose a risk to humanity's existence. When people didn't listen about AI risks in the past, that happened under certain circumstances, with certain AI capabilities at the forefront and certain public discourse surrounding them. These circumstances have changed, and will continue to change. It may not be getting easier as fast as one would ideally like, but it is getting easier. Like the stock market, it may be hard to predict how and when things will change, but they will.
If one forgets this, one can easily adopt a stance like "mainstream institutions will never care" or "the authorities are useless". I think these stances are often exaggerations of the truth, and if one adopts them, one loses out on the opportunity to engage productively with the rest of humanity as things change.
Part 3 - Reflections on the Fundamental Attribution Error (FAE)
The Fundamental Attribution Error (wiki/Fundamental_attribution_error) is a cognitive bias whereby you too often attribute someone else's behavior to a fundamental (unchanging) aspect of their personality, rather than considering how their behavior might be circumstantial and likely to change. With a moment's reflection, one can see how the FAE can lead to
- trusting too much — assuming someone would never act against your interests because they didn't the first few times, and also
- trusting too little — assuming someone will never do anything good for you because they were harmful in the past.
The second reaction could be useful for getting out of abusive relationships. Risking being mistreated over and over by someone is usually not worth it compared to the cost of finding new people to interact with. So, in personal relationships, it can be healthy to just think "screw this" and move on from someone when they don't make a good first (or tenth) impression.
Part 4 — The FAE applied to humanity
If one has had the experience of being dismissed or ignored for expressing a bunch of reasonable arguments about AI risk, it would be easy to assume that humanity (collectively) can never be trusted to take such arguments seriously. But,
- Humanity has changed greatly over the course of history, arguably more than any individual has changed, so it's suspect to assume that humanity, collectively, can never be rallied to take a reasonable action about AI.
- One does not have the opportunity to move on and find a different humanity to relate to. "Screw this humanity who ignores me, I'll just imagine a different humanity and relate to that one instead" is not an effective strategy for dealing with the world.
Part 5 – What, if anything, to do about this
If the above didn't resonate with you, now might be a good place to stop reading :) Maybe this post isn't good advice for you to consider after all.
But if it did resonate, and you're wondering what you may be able to do differently as a result, here are some ideas:
- Try saying something nice and civilized about AI risk that you used to say 5-10 years ago, but which wasn’t well received. Don’t escalate it to something more offensive or aggressive; just try saying the same thing again. Someone new might take interest today, who didn’t care before. This is progress. This is a sign that humanity is changing, and adapting somewhat to the circumstances presented by AI development.
- Try Googling a few AI-related topics that no one talked about 5-10 years ago to see if today more people are talking about one or more of those topics. Switch up the keywords for synonyms. (Maybe keep a list of search terms you tried so you don't go in circles, and if you really find nothing, you can share the list and write an interesting LessWrong post speculating about why there are no results for it.)
- Ask yourself if you or your friends feel betrayed by the world ignoring your concerns about AI. See if you have a "screw them" feeling about it, and if that feeling might be motivating some of your discussions about AI.
- If someone older tells you "There is nothing you can do to address AI risk, just give up", maybe don't give up. Try to understand their experiences, and ask yourself seriously if those experiences could turn out differently for you.
29 comments
Comments sorted by top scores.
comment by johnswentworth · 2022-06-03T16:29:23.965Z · LW(p) · GW(p)
Writing this post as if it's about AI risk specifically seems weirdly narrow.
It seems to be a pattern across most of society that young people are generally optimistic about the degree to which large institutions/society can be steered, and older people who've tried to do that steering are mostly much less optimistic about it. Kids come out of high school/college with grand dreams of a great social movement which will spur sweeping legislative change on X (climate change, animal rights, poverty, whatever). Unless they happen to pick whichever X is actually the next hot thing (gay rights/feminism/anti-racism in the past 15 years), those dreams eventually get scaled back to something much smaller, and also get largely replaced by cynicism about being able to do anything at all.
Same on a smaller scale: people go into college/grad school with dreams of revolutionizing X. A few years later, they're working on problems which will never realistically matter much, in order to reliably pump out papers which nobody will ever read. Or, new grads go into a new job at a big company, and immediately start proposing sweeping changes and giant projects to address whatever major problems the company has. A few years later, they've given up on that sort of thing and either just focus on their narrow job all the time or leave to found a startup.
Given how broad the pattern is, it seems rather ridiculous to pose this as a "trauma" of the older generation. It seems much more like the older generation just has more experience, and has updated toward straightforwardly more correct views of how the world works.
Experiences like this can easily lead to an attitude like “Screw those mainstream institutions, they don’t know anything and I can’t trust them.”
Also... seriously, you think that just came from being ignored about AI? How about that whole covid thing?? It's not like we're extrapolating from just one datapoint here.
If someone older tells you "There is nothing you can do to address AI risk, just give up", maybe don't give up. Try to understand their experiences, and ask yourself seriously if those experiences could turn out differently for you.
My actual advice here would be: first, nobody ever actually advises just giving up. I think the thing which is constantly misinterpreted as "there is nothing you can do" is usually pointing out that somebody's first idea or second idea for how to approach alignment runs into some fundamental barrier. And then the newbie generates a few possible patches which will not actually get past this barrier, and very useful advice at that point is to Stop Generating Solutions [LW · GW] and just understand the problem itself better. This does involve the mental move of "giving up" - i.e. accepting that you are not going to figure out a viable solution immediately - but that's very different from "giving up" in the strategic sense.
(More generally, the field as whole really needs to hold off on proposing solutions [LW · GW] more, and focus on understanding the problem itself better.)
Replies from: Andrew_Critch, TekhneMakre, jan-kulveit, lc, M. Y. Zuo
↑ comment by Andrew_Critch · 2022-06-03T19:05:06.080Z · LW(p) · GW(p)
Writing this post as if it's about AI risk specifically seems weirdly narrow.
I disagree. Parts 2-5 wouldn't make sense to argue for a random other cause area that people go to college hoping to revolutionize. Parts 2-5 are about how AI is changing rapidly, and going to continue changing rapidly, and those changes result in changes to discourse, such that it's more-of-a-mistake-than-for-other-areas to treat humanity as a purely static entity that either does or doesn't take AI x-risk seriously enough.
By contrast, animal welfare is another really important area that kids go to college hoping to revolutionize and end up getting disillusioned, exactly as you describe. But the facts-on-the-ground and facts-being-discussed about animal welfare are not going to change as drastically over the next 10 years as the facts about AI. Generalizing the way you're generalizing from other cause areas to AI is not valid, because AI is in fact going to be more impactful than most other things that ambitious young people try to revolutionize. Even arguments of the form "But gain of function research still hasn't been banned" aren't fully applicable, because AI is (I claim, and I suspect you believe) going to be more impactful than synthetic biology over the next ~10 years, and that impact creates opportunities for discourse that could be even more impactful than COVID was.
To be clear, I'm not trying to argue "everything is going to be okay because discourse will catch up". I'm just saying that discourse around AI specifically is not as static as the FAE might lead one to feel/assume, and that I think the level of faith in changing discourse among the ~30 people I'm thinking of when writing this post seems miscalibratedly low.
Replies from: johnswentworth
↑ comment by johnswentworth · 2022-06-03T19:47:40.497Z · LW(p) · GW(p)
I agree parts 2-5 wouldn't make sense for all the random cause areas, but they would for a decent chunk of them. CO2-driven climate change, for example, would have been an excellent fit for those sections about 10 years ago.
That said, insofar as we're mainly talking about level of discourse, I at least partially buy your argument. On the other hand, the OP makes it sound like you're arguing against pessimism about shifting institutions in general, which is a much harder problem than discourse alone (as evidenced by the climate change movement, for instance).
Replies from: TekhneMakre
↑ comment by TekhneMakre · 2022-06-03T20:08:11.015Z · LW(p) · GW(p)
(Agree again)
To add:
the level of faith in changing discourse among the ~30 people I'm thinking of when writing this post seems miscalibratedly low.
The discourse that you're referring to seems likely to be being Goodharted, so it's not a good proxy for whether institutions will make sane decisions about world-ending AI technology. A test that would distinguish these variables would be to make logical arguments on a point that's not widely accepted. If the response is updating or logical counterargument, that's promising; if the response is some form of dismissal, that's evidence the underlying generators of non-logic-processing are still there.
↑ comment by TekhneMakre · 2022-06-03T17:54:15.663Z · LW(p) · GW(p)
+1
To add:
This is a sign that humanity is changing, and adapting somewhat to the circumstances presented by AI development.
It is evidence of that, but it's not super strong, and in particular it doesn't much distinguish between "the generators of why humanity was suicidally dismissive of information and reasoning have changed" from "some other more surface thing has changed, e.g. some low-fidelity public Zeitgeist has shifted which makes humans make a token obeisance to the Zeitgeist, but not in a way that implies that key decision makers will think clearly about the problem". The above comment points out that we have other reason to think those generators haven't changed much. (The latter hypothesis is a paranoid hypothesis, to be sure, in the sense that it claims there's a process pretending to be a different process (matching at a surface level the predictions of an alternate hypothesis) but that these processes are crucially different from each other. But paranoid hypotheses in this sense are just often true.) I guess you could say the latter hypothesis also is "humanity changing, and adapting somewhat to the circumstances presented by AI development", but it's not the kind of "adaptation to the circumstances" that implies that now, reasoning will just work!
Not to say, don't try talking with people.
Replies from: steven0461
↑ comment by steven0461 · 2022-06-03T19:57:46.677Z · LW(p) · GW(p)
Yes, my experience of "nobody listened 20 years ago when the case for caring about AI risk was already overwhelmingly strong and urgent" doesn't put strong bounds on how much I should anticipate that people will care about AI risk in the future, and this is important; but it puts stronger bounds on how much I should anticipate that people will care about counterintuitive aspects of AI risk that haven't yet undergone a slow process of climbing in mainstream respectability, even if the case for caring about those aspects is overwhelmingly strong and urgent (except insofar as LessWrong culture has instilled a general appreciation for things that have overwhelmingly strong and urgent cases for caring about them), and this is also important.
↑ comment by Jan Kulveit (jan-kulveit) · 2022-06-04T04:54:13.956Z · LW(p) · GW(p)
It's probably worth noting that I take the opposite update from the covid crisis: it was much easier than expected to get governments to listen to us and do marginally more sensible things. With better preparation and larger resources, it would have been possible to cause an order of magnitude more sensible things to happen. It's also worth noting that some governments were highly sensible and agentic about covid.
↑ comment by lc · 2022-06-04T06:41:16.193Z · LW(p) · GW(p)
It seems to be a pattern across most of society that young people are generally optimistic about the degree to which large institutions/society can be steered, and older people who've tried to do that steering are mostly much less optimistic about it. Kids come out of high school/college with grand dreams of a great social movement which will spur sweeping legislative change on X (climate change, animal rights, poverty, whatever). Unless they happen to pick whichever X is actually the next hot thing (gay rights/feminism/anti-racism in the past 15 years), those dreams eventually get scaled back to something much smaller, and also get largely replaced by cynicism about being able to do anything at all.
Remember all of those nonprofits the older generation dedicated to AI safety-related activism; places where people like Eliezer spent their days trying to convince people their concerns are correct instead of doing math? All of those hundreds of millions of dollars of funding that went to guys like Rob Miles and not research houses? No? I really want to remember, but I can't.
Seriously, is this a joke? This comment feels like it was written about a completely different timeline. The situation on the ground for the last ten years has been one where the field's most visible and effective activists have full-time jobs doing math and ML research surrounding the alignment problem, existential risk in general, or even a completely unrelated research position at a random university. We have practically poured 90% of all of our money and labor into MIRI and MIRI clones instead of raising the alarm. When people here do propose raising the alarm, the reaction they get is uniformly "but the something something contra-agentic process" or "activism? are you some kind of terrorist?!"
Even now, after speaking to maybe a dozen people referred to me after my pessimism post, I have not found one person who does activism work full time. I know a lot of people who do academic research on what activists might do if they existed, but as far as I can tell no one is actually doing the hard work of optimizing their leaflets. The closest I've found are Vael Gates and Rob Miles, people who instead have jobs doing other stuff, because despite all of the endless bitching about how there's no serious plans, no one has ever decided to either pay these guys for, or organize, the work they do inbetween their regular jobs.
A hundred people individually giving presentations to their university or nonprofit heads and then seething when they're not taken seriously is not a serious attempt, and you'll forgive me for not just rolling over and dying.
Update ~20 minutes after posting: Took a closer look; it appears Rob Miles might be getting enough from his patreon to survive, but it's unclear. It's weird to me that he doesn't produce more content if he's doing this full time.
Replies from: elityre, habryka4
↑ comment by Eli Tyre (elityre) · 2022-06-05T08:49:06.682Z · LW(p) · GW(p)
places where people like Eliezer spent their days trying to convince people their concerns are correct instead of doing math?
What? Eliezer took 2 years to write the sequences, during which he did very little direct alignment work. And in the years before that, SingInst was mostly an advocacy org, running the Singularity Summits.
Replies from: lc
↑ comment by lc · 2022-06-05T16:23:46.121Z · LW(p) · GW(p)
That's right. I put him in the basket of "the field's most visible and effective activists". He's done more than, literally, 99.9999999% of the human population. That's why it's so frustrating that his activism has mostly been done in pursuit of the instrumental goal of recruiting geniuses for MIRI to do higher quality maths research, not trying to convince the broader public. The sequences were fantastic, probably way better than I could have done if I were trying to drum up public support directly.
But they were never a scalable solution to that problem. People who read the sequences don't have a bite sized way of transferring that information to others. They can go up to their friends and say "read the sequences", but they're left with no more compact lossy way to transmit the pertinent details about AGI risk. It's simply not an attempt to be memetic; something like 1700 pages of content, where the reproducible core idea is probably ten. On top of that, he deliberately filters people with incorrect beliefs about religion, politics, etc., who might otherwise agree with him and be willing to support the narrow cause he champions.
As far as I can tell this was all by design. Eliezer did not, and perhaps still does not, think it's a worthwhile strategy to get general public & academic support for AGI risk management. He will do a presentation or two at a prestigious university every so often, but doesn't spend most of his time reaching people that aren't fields medalists. When he does reach one of those he sends them to work in his research house instead of trying to get them to do something as effective as writing the sequences was. I shudder to imagine the future we might have had if 10 full-Eliezers and 50 semi-Eliezers had been working on that problem full time for the last fifteen years.
Note: This is a Dumbledore Critique. It's not Eliezer's God-given duty to save the world, and 99.99% of the planet's intelligentsia gets a much stronger ding for doing nothing at all, or worse.
Replies from: Eliezer_Yudkowsky, elityre
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2022-06-05T18:52:16.328Z · LW(p) · GW(p)
But they were never a scalable solution to that problem. People who read the sequences don't have a bite sized way of transferring that information to others. They can go up to their friends and say "read the sequences", but they're left with no more compact lossy way to transmit the pertinent details about AGI risk.
I don't think anything like that exists. It's been tried in hopeless floundering attempts, by people too ignorant to know how hopeless it is; nobody competent has tried it because anybody competent sees that they don't know how to do that.
Replies from: lc
↑ comment by lc · 2022-06-05T19:18:22.300Z · LW(p) · GW(p)
I'm not particularly good at persuading people of things. So, consider the fact that I convinced my father, who is a nontechnical, religious 60yo man, that AGI was a serious problem within a 45-60 minute conversation. To the point that he was actually, unironically concerned that I wasn't doing more to help. I get this reaction or something equivalent regularly, from people of very different backgrounds, of different intelligence levels, ethnicities, political tribes. Not all of them go off and devote their lives to alignment like you did, but they at least buy into an intellectual position and now have a stake in the ground. With pleasant reminders, they often try to talk to their friends.
I didn't have to get him to change the gears in his mind that led him to Christianity in order for me to get him to agree with me about AI, just like environmentalists don't have to turn democrats into expert scientists to get them to understand global warming. For as many people for whom the "convergent instrumental goals" thing has to be formalized, there are either people who just get it, defer their understanding to others, or are able to use some completely different model that drags them to the same conclusions. There's a difference between "the arguments and details surrounding AGI risk sufficient to mobilize" and "the fully elaborated causal chain".
Obviously "go talk to everyone until they agree" isn't a scalable solution, and I don't have a definitive one or else I'd go do it. Perhaps arbital was partly an attempt to accelerate this process? But you can see why it seems highly counterintuitive to me that it would be literally impossible to bring people onto the AGI risk train without giving them a textbook of epistemology first, to the point that I'd question someone who seems right about almost everything.
(It's possible you're talking about "understanding the problem in enough detail to solve it technically" and I'm talking about "doing whatever reasoning they need to to arrive at the correct conclusion and maybe help us talk to DeepMind/FAIR researchers who would be better equipped to solve it technically if they had more peer pressure", in which case that's that.)
↑ comment by Eli Tyre (elityre) · 2022-06-09T08:21:17.475Z · LW(p) · GW(p)
I shudder to imagine the future we might have had if 10 full-Eliezers and 50 semi-Eliezers had been working on that problem full time for the last fifteen years.
That sounds obviously amazing. Are you under the impression that recruitment succeeded so enormously that there are 10 people that can produce intellectual content as relevant and compelling as the original sequences, but that they've been working at MIRI (or something) instead? Who are you thinking of?
I don't think we got even a single Eliezer-substitute, even though that was one of the key goals of writing the sequences.
↑ comment by habryka (habryka4) · 2022-06-04T19:49:31.708Z · LW(p) · GW(p)
Rob Miles is funded by the Long Term Future Fund at a roughly full-time salary: https://forum.effectivealtruism.org/posts/dgy6m8TGhv4FCn4rx/long-term-future-fund-september-2020-grants#Robert_Miles___60_000_ [EA · GW]
Replies from: lc
↑ comment by lc · 2022-06-04T20:03:26.899Z · LW(p) · GW(p)
That's genuinely good news to me. However, he's only made two videos in the past year? I'm not being accusatory, just confused.
Replies from: habryka4
↑ comment by habryka (habryka4) · 2022-06-04T21:05:56.081Z · LW(p) · GW(p)
He has also been helping a bunch of other people with video content creation. For example: https://www.youtube.com/c/RationalAnimations
Replies from: lc
↑ comment by M. Y. Zuo · 2022-06-04T20:23:54.880Z · LW(p) · GW(p)
Given how broad the pattern is, it seems rather ridiculous to pose this as a "trauma" of the older generation. It seems much more like the older generation just has more experience, and has updated toward straightforwardly more correct views of how the world works.
Agree. The community has suffered from evaporative cooling to an extent, and it has become less welcoming of naive new ideas that were discussed many times before, much like any virtual community. This may appear as cynicism or trauma, but that’s from the perspective of folks just coming out of Plato’s cave into the bright sunlight. To them being told that the sun sets or can be eclipsed would also seem to be cynicism and trauma.
comment by JenniferRM · 2022-06-04T07:07:55.214Z · LW(p) · GW(p)
It is ~15% plausible to me that this is written in a way that includes me personally?
I hope not. I hope when you were counting your 30 people, that I'm not NEARLY important enough to show up in that kind of list these days but... maybe I still am? :-(
I wrote the stuff below, and I think maybe my thesis statement could be something like: "have you considered that the problem isn't inter-generational trauma but rather some trauma from an unusually clear perception of actually existing circumstances that would traumatize most people if most people paid attention to it?"
Like... which part of the situation isn't objectively pretty bad? Are you seeing the same world as me?
I&II. (EDITED) THIS IS NOT MY FIRST RODEO
...
I had some text here about "other political half-successes I've seen" but it suffices to say that this is not my first rodeo. It is normal for smart people to be ahead of the curve, and it is normal for them to be copied at lower fidelity, and it is normal for half-broken versions of what they were trying to say to be used to license some things they wanted and many things they disagree with.
This is hard to manage feelings about, kind of intrinsically.
Most of my regrets from these experiences, in retrospect, are about not hitting on the main big ideas that were most central, and which could have licensed shutting down DUMB VERSIONS of similar content.
Concepts like justice and fairness tend to be safer to transmit, especially if you include a demand for rigor.
Coherent Extrapolated Volition is imperfect, but I see a lot of wisdom in promoting it under the frame of "hard to ruin if picked up by lots of people" :-)
...
III. GOVERNANCE FUTURISM... MAYBE?
By 2013, I gave up hope on SingInst [LW · GW]... because of how sad and disheartened I was that EVEN SingInst were probably ALSO going to predictably fail at saving the world...
Then I switched to trying to figure out, as a hobby, why organizations so often seem to start to systematically not do what they are nominally AIMING AT.
Part of this involved reading about institutions of various sorts, while working on the least dangerous, most objectively morally good thing that I could find at Google in the meantime. (Credit assignment marginalia: Will Newsome was working on institutions before I was [LW · GW].)
I don't regret what I did at Google and the "institutional" line of thinking still doesn't seem super obviously wrong to me.
Critch, I don't think that you [LW · GW] or Samo much disagree with many people (including me) that: NEW institutions will be decisive here because the old ones are simply INADEQUATE.
The focus on institutions doesn't seem like it is actually the wrong focus? Still! After all these years!
Which is almost maybe hopeful? :-)
IV. CONGRESS CRITTERS ARE VALID MORAL PATIENTS
Let's be clear, this kind of thing is NOT a problem I see in myself, but I could see how someone might see it in me based on my circumstances:
The Fundamental Attribution Error (wiki/Fundamental_attribution_error) is a cognitive bias whereby you too often attribute someone else's behavior to a fundamental (unchanging) aspect of their personality, rather than considering how their behavior might be circumstantial and likely to change.
When I interact with people F2F, I like practically all of them. Individual humans are like: THE BEST.
People in isolation, with no one watching, in an environment of safety, can think so clearly and so well while requiring so few calories. Also, all social mammals are wonderful to any other mammals that tickle their oxytocin circuits, and all mammals have these circuits, and it's great <3
If I was at a cocktail party and bumped into a Congressman (assuming the goal was to make a good impression rather than hide or something) then I could probably tell a joke and get a laugh. I'm pretty sure it wouldn't be hard at all to like the guy. Such people are often almost literally sparkly [LW · GW].
If I had enough situational mojo, I might even have the chance to raise an Important Issue Of The Day and watch his face fall, but then I could explain to him that I understand some of the real organizational and humanistic constraints he faces, and express sympathy for how his powers should be increased, but won't be.
See the relief when he realizes I don't expect him to Save Us All or some dumb impossible thing like that. He would respect the sympathetic understanding of the barriers that he faces in getting to solutions (even though he's supposedly so powerful).
However...
V. LOOK AT THE CIRCUMSTANCES WITH CLEAR EYES
If you, Critch, agree that first-past-the-post voting for president is bad, that could be a circumstance that you think is likely to change... but I don't think it will change.
Since it won't change soon, we have to solve any given public goods problems WITHOUT A RELIABLY SANE CIVILIAN COMMANDER IN CHIEF.
This isn't about the person, it is about the institutional circumstances.
If you (generic reader) agree that Congress should have elections that make every single seat competitive and proportionally representative, that could be a circumstance that someone might think is likely to change... but I don't think it will change very soon.
Therefore we have to solve any given public goods problem WITHOUT A RELIABLY SANE CONGRESS MAKING SANE LAWS.
(Compare this to sympathizing with a congress person at a cocktail party: I'm not so gauche as to spontaneously bring up proportional representation with someone currently in congress, because that would NOT get a laugh. They would be confused, then probably opposed. In perhaps the same way religious people are certain that invisible dragons are also intangible to flour [LW · GW], pretty much all powerful people know that anything that would remove them from power is bad.)
VI. CAN THE VOTERS SAVE US? (SPOILER ALERT: NO)
If we were not beyond the reach of god [? · GW], then Democracy Would Save Us just from having the label "democracy" as a descriptor.
Over in dath ilan [? · GW], they have something like Democracy, but also they have a planet full of one room schoolhouses, where professional Watchers mind kids every morning, and make sure the 11 year olds have fun while properly teaching the 7 year olds about supply and demand. Over here on Earth, we have voters so economically illiterate that they think that rent control will improve a housing crisis!
Do you think that the elementary school teachers we currently have will "change this circumstance" fast enough?
The economic illiteracy of voters could be a circumstance that is likely to change... but I don't think it will change very soon.
Therefore we have to solve any given problem WITHOUT ECONOMICALLY SANE VOTERS VOTING FOR LEGISLATORS THAT PROMISE SANE POLICIES.
This isn't about the students as humans. As humans, they are cute and friendly and like to play games and listen to songs together.
Despite being wonderful as humans, there is a problem in their curricular circumstances and economic knowledge attainment.
Once you understand how governance should work, you look around and realize that we are in a fractally broken hellworld when it comes to governance.
It doesn't fix anything that almost all the individual humans are basically nice, when most of them are incapable of noticing that the government is fractally broken.
They don't realize that no non-trivial, non-traditional public goods problems can be solved by the governance system we currently have. They aren't bad people, but they prevent problems from being solved, which is bad.
VII. MAYBE CHINA?
Maybe the CCP has competence? Maybe we could auto-translate this whole website into Chinese and beg the CCP to save the world (since they live in the world too and presumably also want it to keep existing)?
But in case you didn't notice, when covid happened the (unelected authoritarian) government of China made sure the airports OUT INTO THE WORLD kept operating, because they wanted the rest of the world to get infected and die of the disease if they didn't have the competence to stop it.
(Also they are engaged in a genocide right now.) Competent: yes! Universally Benevolent? NO!
VIII. DON'T WE ALREADY KNOW WHAT THE DEFAULT IS?
Schmidhuber is a very smart (though unpopular) guy, and another very smart person on this very website went around asking AI researchers (including him) about Friendliness in AI long ago, and Schmidhuber was stunningly unbiased and clear in his explanation of what was likely to happen [LW · GW]:
Once somebody posts the recipe for practically feasible self-improving Goedel machines or AIs in form of code into which one can plug arbitrary utility functions, many users will equip such AIs with many different goals, often at least partially conflicting with those of humans. The laws of physics and the availability of physical resources will eventually determine which utility functions will help their AIs more than others to multiply and become dominant in competition with AIs driven by different utility functions. Which values are "good"? The survivors will define this in hindsight, since only survivors promote their values.
Avoiding a situation like this requires coordination [LW · GW], which is a public goods problem.
Humanity is really really bad at public goods problems.
We would need coordination to avoid the disasters from a lack of coordination... so unless we get lucky somehow... it just ain't gonna happen?
That is it.
That's the entire problem.
Saying it again for clarity: doing public goods well is a public goods problem.
All those great shiny friendly humans who love dancing and have oxytocin circuits... are not already in high quality coordination-inducing "circumstances".
I don't personally believe in FOOM, but there doesn't NEED to be a FOOM to end up with really really sad outcomes.
All it needs is for the top 3 or 10 or 37 competent human entities that are willing to go down this route (presumably corporations or state actors?) to compete with each other, at pumping the predictability out of the world, via stock markets or engagement algorithms or anything else they can come up with. (The story I linked to has ONE company, and they go quite fast. That part seems unrealistic to me. The board room conversations are not depicted. But the way they "harvest predictability out of the world" seems roughly like "how it is likely to go".)
I think that it can and probably will happen in slow motion. The institutional inertia is too great and the problem is too complicated to fit in low-bandwidth channels.
If a solution is found it might well only occur in a single brain or maybe a tiny research group. That's how proofs are generated. Proofs are often short, and proof checking is fast, so transmitting the idea should go pretty fast?
IX. BEING MILDLY IMPOLITE TO POWER
I'm only 15% certain I'm even being mentioned here?
There appears to be something like inter-generational trauma among people who think about AI x-risk — including some of the AI-focused parts of the EA and rationality communities — which is preventing the formation of valuable high-trust relationships with newcomers that could otherwise be helpful to humanity collectively making better decisions about AI.
But like... imagine people in a boardroom privately discussing something insanely bad, like GoF research, or a marketing campaign full of lies promoting an addictive carcinogen, and one person says "hey no, that would be insanely bad", and then... maybe this statement would wake people up, and actually work to stop the insanely bad thing?
If that was universal, I think humans would NOT be doomed, but I suspect (from experience in board rooms) that it rarely happens, and when it happens it feels the same as telling a Congressman about proportional representation and how (probably) literally every seat in congress should be competitive in every election, including his.
It isn't "polite to power", but I think it's true.
So I think I'm going to keep doing it even if I don't win many friends or influence as many people that way. Basically, however it is that we all die eventually, from some insane catastrophe, I want my conscience to have been clean.
X. MAYBE IN HEAVEN I CAN HANG OUT WITH DIOGENES?
I remember that sickening feeling in my gut in February of 2020, when I realized that everything in the US was built on lies and flinching and we were totally going to fail at covid, that we had ALREADY failed at it, because of the lies, and because of sociological inertia.
We don't use IQ tests to pick bureaucrats anymore. Also, you can't fire them for baseline incompetence, only gross violations of procedure.
They didn't know that "N95 masks have pores that are smaller than aerosolized viral particles, and therefore will obviously work to protect the respiratory tract of anyone who wears one properly".
It wasn't part of the procedures for them to know, apparently, and that's why they weren't fired, I guess?
A million Americans died, and not a single person at the FDA or CDC has been put on trial or resigned or apologized for it that I've heard of. It used to be that incentives matter, but maybe that stopped somehow?
Also many bureaucrats hold dumb ideologies about how elected officials firing bureaucrats is bad. It is insane. It was insane in 2019. It is still insane in 2022.
In 2019 I was trying to be politely quiet about it, but I am done with that.
IF "we" can't tell enough truth, fast enough, to coordinate to defeat a murderous yet fundamentally insentient string of 30,000 nucleotides, that transcribes into protein at a rate of ~1 amino acid / second...
...THEN how the hell are "we" supposed to tame some rogue god or genie, with exabytes of cleverness, that thinks at the speed of light??
XI. DOING "GOOD" IS PROBABLY GOOD
I want to stop lying-via-deference-to-power, you know? Maybe next time we'll fail because too many people were too honest about too many objectively true mechanical facts that are relevant to the next problem?
But it seems like that's not a likely failure mode.
And in the meantime, I'm going to keep along with my hobby of trying to design a benevolent 99.9% voluntarist (in both senses) online global government that doesn't suck. (Voluntarism has some interesting properties. I'm not committed to it, but... maybe? War is bad. Civil war is worse. The tricky part is having things be bloodless and... well... "voluntary" I think? All you REALLY have to do, basically, is get "consent" from ~8 billion people?)
Also, I think I'd be reasonably happy working on this with anyone who is smart, with a will towards Good, who wants to help. Bonus: I encourage having opinions about why the polylaw system suggested in Terra Ignota is hilariously bad, but at least it is wrong in smaller and clearer and more interesting ways than the status quo! <3
This is a good old quote that resonates with how great every individual potentially is, and implicitly how terrible poorly organized swarms of people are... (note the word is "change" not necessarily "improve"):
"Never doubt that a small group of thoughtful, committed, citizens can change the world. Indeed, it is the only thing that ever has."
Replies from: Benito, 1a3orn, SaidAchmiz
↑ comment by Ben Pace (Benito) · 2022-06-05T00:30:30.470Z · LW(p) · GW(p)
Just a little feedback that your comments are... very long. I personally would read them more if you cut the bottom 50% from them after you finished. Or spent a little longer adding a bulleted summary at the top so I could choose to zoom into just the sections that interest me.
Replies from: JenniferRM
↑ comment by JenniferRM · 2022-06-07T19:00:09.699Z · LW(p) · GW(p)
If you want terse, follow smart people on twitter, then when they talk to people in terse ways, you get to see it, terse responses are enforced, and so on <3
Being too verbose is one possible protective strategy for critiquing bad faith public speech. Then, as a second order optimization, the roman numeral thing is something that I think maybe Scott and I picked up from The Last Psychiatrist, as a way to make the length tolerable, and indicate "beats" in the emotional movement, when writing about subjects where bad faith can't be ignored as something potentially salient to the object level discussion.
My most recent follow was based on this beauty, which aphoristically encodes seemingly-authentic (and possibly virtuous) self doubt plus awareness of vast semantic systems and deep time.
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2022-06-07T19:32:38.475Z · LW(p) · GW(p)
Okay :)
↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-06T09:31:21.560Z · LW(p) · GW(p)
Also, I think I’d be reasonably happy working on this with anyone who is smart, with a will towards Good, who wants to help.
What kind of contribution(s) to this project would you say are most important right now?
comment by alyssavance · 2022-06-04T05:50:21.983Z · LW(p) · GW(p)
I think it's true, and really important, that the salience of AI risk will increase as the technology advances. People will take it more seriously, which they haven't before; I see that all the time in random personal conversations. But being more concerned about a problem doesn't imply the ability to solve it. It won't increase your base intelligence stats, or suddenly give your group new abilities or plans that it didn't have last month. I'll elide the details because it's a political debate, but just last week I saw a study finding that whenever one problem got lots of media attention, the "solutions" people tried wound up making the problem worse the next year. High salience is an important tool, but nowhere near sufficient, and can even be outright counterproductive.
comment by Dagon · 2022-06-03T17:03:31.262Z · LW(p) · GW(p)
Speaking as an Old, and one who's not personally deeply involved in AI Risk mitigation/research, I'm not sure how much of this is generational, vs just disagreement on risk levels and on the ability of specific individuals to influence outcomes.
The "if someone tells you to give up, maybe don't" advice is fully general. If MOST people tell you, then that's evidence to update on, but if it doesn't move you enough, it's your life and you make the call. Likewise the "feel betrayed by the world ignoring your concerns". That's a near-universal feeling, though the topic shifts among individuals and sub-groups. And should likewise not dissuade you beyond making you ask the question whether they're right.
comment by ACrackedPot · 2022-06-03T20:23:43.847Z · LW(p) · GW(p)
I'm a crackpot.
Self-identified as such. Part of the reason I self-identify as a crackpot is to help create a kind of mental balance, a pushback against the internal pressure to dismiss people who don't accept my ideas: Hey, self, most people who have strong beliefs similar to or about the thing you have strong beliefs about are wrong, and the impulse to rage against the institution and people in it for failing to grasp the obvious and simple ideas you are trying to show them is exactly the wrong impulse.
The "embitterment" impulse can be quite strong; when you have an idea which, from your perspective, is self-evident if you spend any amount of time considering it, the failure of other people to adopt that idea can look like a failure to even consider anything you have said. Or it can look like everybody else is just unimaginative or unintelligent or unwilling to consider new ideas; oh, they're just putting in their 9-5, they don't actually care anymore.
Framing myself as a crackpot helps anticipate and understand the reactions I get. Additionally, framing myself as a crackpot serves as a useful signal to somebody reading; first, that if they have no patience for these kinds of ideas, that they should move on. And second, that if they do have the patience for these kinds of ideas, that I have self-awareness of exactly what kind of idea it is, and am unlikely to go off on insane rants against them for interpreting me as a crackpot, and also that having warned them in advance I am also aware that this may be an imposition and I am not taking their time for granted. (Also higher level meta signaling stuff that is harder to explain.)
Are you a crackpot? I have no idea. However, when people start talking about existential safety, my personal inclination is to tune them out, because they do pattern match to "Apocalyptic Thinkers". The AI apocalypse basically reads to me like so many other apocalypse predictions.
Mind, you could be right; certainly I think I'm right, and I'm not going to be casting any stones about spreading ideas that you think are correct and important.
However, my personal recommendation is to adopt my own policy: Self awareness.
comment by MSRayne · 2022-06-09T12:24:24.762Z · LW(p) · GW(p)
This is not about "intergenerational trauma". That was a very inaccurate thing to put in the title. Intergenerational trauma is when a traumatized parent or other elder passes along trauma to the child, usually by being abusive toward them, and it's much too strong a term for the "elders in the community encourage newcomers to become similarly cynical" dynamic you are describing.