Is Clickbait Destroying Our General Intelligence?
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2018-11-16T23:06:29.506Z · LW · GW
(Cross-posted from Facebook.)
Now and then people have asked me if I think that other people should also avoid high school or college if they want to develop new ideas. This always felt to me like a wrong way to look at the question, but I didn't know a right one.
Recently I thought of a scary new viewpoint on that subject.
This started with a conversation with Arthur where he mentioned an idea by Yoshua Bengio about the software for general intelligence having been developed memetically. I remarked that I didn't think duplicating this culturally transmitted software would be a significant part of the problem for AGI development. (Roughly: low-fidelity software tends to be algorithmically shallow. Further discussion moved to comment below.)
But this conversation did get me thinking about the topic of culturally transmitted software that contributes to human general intelligence. That software can be an important gear even if it's an algorithmically shallow part of the overall machinery. Removing a few simple gears that are 2% of a machine's mass can reduce the machine's performance by way more than 2%. Feral children would be the case in point.
A scary question is whether it's possible to do subtler damage to the culturally transmitted software of general intelligence.
I've had the sense before that the Internet is turning our society stupider and meaner. My primary hypothesis is "The Internet is selecting harder on a larger population of ideas, and sanity falls off the selective frontier once you select hard enough."
To review, there's a general idea that strong (social) selection on a characteristic imperfectly correlated with some other metric of goodness can be bad for that metric, where weak (social) selection on that characteristic was good. If you press scientists a little for publishable work, they might do science that's of greater interest to others. If you select very harshly on publication records, the academics spend all their time worrying about publishing and real science falls by the wayside.
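(A toy way to see that shape - all distributions below are invented for illustration, not data: if the score being selected on is substance plus a heavy-tailed hedonic component, mild selection still favors substance, but the winners at an extreme selective frontier are mostly just hedonic outliers.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

substance = rng.normal(0, 1, n)               # the thing we actually care about
hedonic_noise = rng.standard_t(df=2, size=n)  # heavy-tailed appeal, unrelated to substance
appeal = substance + hedonic_noise            # the proxy that selection actually acts on

for top_fraction in (0.3, 0.03, 0.001, 0.0001):
    k = int(n * top_fraction)
    winners = np.argsort(appeal)[-k:]         # survivors of selection on appeal
    print(f"top {top_fraction:>7.2%}: mean substance of winners = {substance[winners].mean():+.2f}")
# Mild selection raises the winners' average substance; at the extreme frontier the
# winners are mostly hedonic-noise outliers and their substance falls back toward average.
```

The sketch only shows the qualitative shape: selecting a little on the proxy helps, and the benefit collapses once you select hard enough on a proxy with heavy-tailed noise.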
On my feed yesterday was an essay complaining about how the intense competition to get into Harvard is producing a monoculture of students who've lined up every single standard accomplishment and how these students don't know anything else they want to do with their lives. Gentle, soft competition on a few accomplishments might select genuinely stronger students; hypercompetition for the appearance of strength produces weakness, or just emptiness.
A hypothesis I find plausible is that the Internet, and maybe television before it, selected much more harshly from a much wider field of memes; and also allowed tailoring content more narrowly to narrower audiences. The Internet is making it possible for ideas that are optimized to appeal hedonically-virally within a filter bubble to outcompete ideas that have been even slightly optimized for anything else. We're looking at a collapse of reference to expertise because deferring to expertise costs a couple of hedons compared to being told that all your intuitions are perfectly right, and at the harsh selective frontier there's no room for that. We're looking at a collapse of interaction between bubbles because there used to be just a few newspapers serving all the bubbles; and now that the bubbles have separated there's little incentive to show people how to be fair in their judgment of ideas for other bubbles, it's not the most appealing Tumblr content. Print magazines in the 1950s were hardly perfect, but they could get away with sometimes presenting complicated issues as complicated, because there weren't a hundred blogs saying otherwise and stealing their clicks. Or at least, that's the hypothesis.
It seems plausible to me that basic software for intelligent functioning is being damaged by this hypercompetition. Especially in a social context, but maybe even outside it; that kind of thing tends to slop over. When someone politely presents themselves with a careful argument, does your cultural software tell you that you're supposed to listen and make a careful response, or make fun of the other person and then laugh about how they're upset? What about when your own brain tries to generate a careful argument? Does your cultural milieu give you any examples of people showing how to really care deeply about something (i.e. debate consequences of paths and hew hard to the best one), or is everything you see just people competing to be loud in their identification? The Occupy movement not having any demands or agenda could represent mild damage to a gear of human general intelligence that was culturally transmitted and that enabled processing of a certain kind of goal-directed behavior. And I'm not sure to what extent that is merely a metaphor, versus it being simple fact if we could look at the true software laid out. If you look at how some bubbles are talking and thinking now, "intellectually feral children" doesn't seem like entirely inappropriate language.
Shortly after that conversation with Arthur, it occurred to me that I was pretty much raised and socialized by my parents' collection of science fiction.
My parents' collection of old science fiction.
Isaac Asimov. H. Beam Piper. A. E. van Vogt. Early Heinlein, because my parents didn't want me reading the later books.
And when I did try reading science fiction from later days, a lot of it struck me as... icky. Neuromancer, bleah, what is wrong with this book, it feels damaged, why do people like this, it feels like there's way too much flash and it ate the substance, it's showing off way too hard.
And now that I think about it, I feel like a lot of my writing on rationality would be a lot more popular if I could go back in time to the 1960s and present it there. "Twelve Virtues of Rationality" is what people could've been reading instead of Heinlein's Stranger in a Strange Land, to take a different path from the branching point that found Stranger in a Strange Land appealing.
I didn't stick to merely the culture I was raised in, because that wasn't what that culture said to do. The characters I read didn't keep to the way they were raised. They were constantly being challenged with new ideas and often modified or partially rejected those ideas in the course of absorbing them. If you were immersed in an alien civilization that had some good ideas, you were supposed to consider it open-mindedly and then steal only the good parts. Which... kind of sounds axiomatic to me? You could make a case that this is an obvious guideline for how to do generic optimization. It's just what you do to process an input. And yet "when you encounter a different way of thinking, judge it open-mindedly and then steal only the good parts" is directly contradicted by some modern software that seems to be memetically hypercompetitive. It probably sounds a bit alien or weird to some people reading this, at least as something that you'd say out loud. Software contributing to generic optimization has been damaged.
Later the Internet came along and exposed me to some modern developments, some of which are indeed improvements. But only after I had a cognitive and ethical foundation that could judge which changes were progress versus damage. More importantly, a cognitive foundation that had the idea of even trying to do that. Tversky and Kahneman didn't exist in the 1950s, but when I was exposed [LW · GW] to this new cognitive biases literature, I reacted like an Isaac Asimov character trying to integrate it into their existing ideas about psychohistory, instead of a William Gibson character wondering how it would look on a black and chrome T-Shirt. If that reference still means anything to anyone.
I suspect some culturally transmitted parts of the general intelligence software got damaged by radio, television, and the Internet, with a key causal step being an increased hypercompetition of ideas compared to earlier years. I suspect this independently of any other hypotheses about my origin story. It feels to me like the historical case for this thesis ought to be visible by mere observation to anyone who watched the quality of online discussion degrade from 2002 to 2017.
But if you consider me to be more than usually intellectually productive for an average Ashkenazic genius in the modern generation, then in this connection it's an interesting and scary further observation that I was initially socialized by books written before the Great Stagnation. Or by books written by authors from only a single generation later, who read a lot of old books themselves and didn't watch much television.
That hypothesis doesn't feel wrong to me the way that "oh you just need to not go to college" feels wrong to me.
65 comments
Comments sorted by top scores.
comment by sirjackholland · 2018-11-19T18:59:39.988Z · LW(p) · GW(p)
Alternative hypothesis: the internet encourages people who otherwise wouldn't contribute to the general discourse to contribute to it. In the past, contributing meant writing some kind of article, or at least a letter to the editor, which 1) requires a basic level of literacy and intellectual capacity, and 2) provides a filter, removing the voices of those who can't write something publishers consider worthy of publication (with higher-influence publications having, in general, stricter filters).
Anecdote in point: I have yet to see an internet comment that I couldn't imagine one of my relatives writing (sorry, relatives, but a few of y'all have some truly dumb opinions!). But these relatives I have in mind wouldn't have contributed to the general discourse before the internet was around, so if you don't have That Uncle in your family you may not have been exposed to ideas that bad before seeing YouTube comments.
Last minute edit: I mean that I have yet to see an internet comment that I couldn't imagine one of my relatives writing years and years ago, i.e. I expect that we would have seen 2018 level discourse in 2002 if That Uncle had posted as much in 2002 as in 2018.
↑ comment by Eli Tyre (elityre) · 2020-04-25T00:25:51.014Z · LW(p) · GW(p)
This seems like a really important point. Thanks.
↑ comment by bfinn · 2019-12-02T20:04:52.411Z · LW(p) · GW(p)
I reckon a bit of both - viz.:
(a) The Internet (and TV before it) make it in platforms' interests, via ad revenue, to produce clickbait (soaps/game shows), because humans are more interest-seekers than truth-seekers. This phenomenon is aka 'dumbing down'. And also:
(b) the Internet enables all consumers to broadcast their own stuff regardless of truth/quality. This is another kind of dumbing down; though note TV didn't do this, making it clear that it's a different kind.
↑ comment by bfinn · 2021-02-02T12:18:53.764Z · LW(p) · GW(p)
Further comment on re-reading the essay:
I do think EY's central point is a long-observed one. Essentially that over decades the media has increasingly dumbed down in order to appeal to mass audiences; and this perpetuates the dumbness of audiences.
The second half is not so often observed as the first, with regard to the media, though it is in other spheres. For example, in the UK over the last few decades high school and university curricula have dumbed down (due to competition for students between exam boards and new universities), with the obvious effect of producing dumb students.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2018-11-16T23:11:00.520Z · LW(p) · GW(p)
(Deleted section on why I thought cultural general-intelligence software was not much of the work of AGI:)
...because the soft fidelity of implicit unconscious cultural transmission can store less serially deep and intricate algorithms than the high-fidelity DNA transmission used to store the kind of algorithms that appear in computational neuroscience.
I recommend Terrence Deacon's The Symbolic Species for some good discussion of the surprising importance of the shallow algorithms and parameters that can get transmitted culturally. The human-raised chimpanzee Kanzi didn't become a human, because that takes deeper and more neural algorithms than imitating the apes around you can transmit, but Kanzi was a lot smarter than other chimpanzees in some interesting ways.
But as necessary as it may be to avoid feral children, this kind of shallow soft-software doesn't strike me as something that takes a long time to redevelop, compared to hard-software like the secrets of computational neuroscience.
↑ comment by avturchin · 2018-11-17T10:46:02.539Z · LW(p) · GW(p)
Anatomically modern humans appeared around 300K years ago, but civilisation started only 5K years ago. It seems that this time was needed to polish the training data set for general intelligence.
I read a book about prehistoric art, and it strikes me that the idea of a drawing took tens of thousands of years to consolidate. This idea of drawing later evolved into symbols and text.
↑ comment by gjm · 2018-11-17T14:53:31.006Z · LW(p) · GW(p)
If it took 300k years to develop human software, and 4-13M years to develop human hardware (starting from our common ancestor with chimpanzees), that seems consistent with Eliezer's claim that developing the software shouldn't take all that long _compared with the hardware_. (Eliezer doesn't say "hardware" but "hard-software", but unless I misunderstand he's talking about something fairly close to "software that implements what human brain hardware does".)
[EDITED to add:] On the other hand, you might expect software to evolve faster than hardware, at any given level of underlying complexity/difficulty/depth, because the relevant timescales for selection of memes are shorter than those for genes. So actually I'm not sure how best to translate timelines of human development into predictions for AI development. There's no very compelling reason to assume that "faster for evolution" and "faster for human R&D" are close to being the same thing, anyway.
↑ comment by Benquo · 2018-11-19T21:24:40.157Z · LW(p) · GW(p)
I think you're responding to this as though it were just a metaphor and not noticing the extent to which it might just be meant literally. If we exit the part of human coordination space where we have a civilization, it could easily take another 300,000 years to get it back. That's not a generalized claim about software vs hardware development times. It's a specific claim that the specific "shallow soft-software" Eliezer is referring to might take hundreds of thousands of years to redevelop, regardless of what you might otherwise think about AI software development timelines.
↑ comment by gjm · 2018-11-19T22:39:22.867Z · LW(p) · GW(p)
I'm like 96% sure it was intended to apply to the question of how much of the work in making an AGI is about "cultural general-intelligence software". But yeah, I agree that if we destroy our civilization it could take a long time to get it back. Not just because building a civilization takes a long time; also because there are various resources we've probably consumed most of the most accessible bits of, and not having such easy access to coal and oil and minerals could make building a new civilization much harder. But I'm not sure what hangs on that (as opposed to the related but separate question of whether we would rebuild civilization if we lost it) -- the destruction of human civilization would be a calamity, but I'm not sure it would be a much worse calamity if it took 300k years to repair than if it took "only" 30k years.
↑ comment by Benquo · 2018-11-19T23:03:55.113Z · LW(p) · GW(p)
I think it matters because of what it implies about how hard a target civilization is to reach. Even if the 300k year process could be sped up a lot by knowing what we're aiming for, it's evidence that the end result was a much weaker natural attractor than our current state is, from a starting point of founding civilization at all.
↑ comment by avturchin · 2018-11-18T10:21:21.577Z · LW(p) · GW(p)
I found an interesting article: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4429600/
Some factoids from it: "For example, there are nearly 20 million genomic loci that differ between humans and chimpanzees" (but 99 per cent of genome is non-coding regions).
"Another evolutionary approach has been to focus on genomic loci that are well conserved throughout vertebrate evolution but are strikingly different in humans; these regions have been named “human accelerated regions (HARs)” (Bird et al., 2007; Bush and Lahn, 2008; Pollard et al., 2006; Prabhakar et al., 2008). So far, ∼2700 HARs have been identified, again most of them in noncoding regions: at least ∼250 of these HARs seem to function as developmental enhancers in the brain".
"Comparison of the FOXP2 cDNAs from multiple species indicates that the human FOXP2 protein differs at only 3 amino acid residues from the mouse ortholog, and at 2 residues from the chimpanzee, gorilla, and rhesus macaque orthologs ... Mice carrying humanized FoxP2 show accelerated learning, qualitatively different ultrasonic vocalizations, and increased dendrite length and synaptic plasticity in the medium spiny neurons of the striatum."
So I have an impression that the changes in the genome were rather small, but very effective in fine-tuning the brain by creating new connections between regions, increasing its size, etc. The information content of the changes depends not only on the number of single-nucleotide changes, but also on their exact location within the whole 3-billion-base-pair genome (which needs around 30 bits to encode); the main role, though, was played by these 250 HARs, and inside each HAR the change may be rather small, as in the case of FOXP2.
Multiplying all that suggests that the difference between the chimp and human brain-development programs is around 25,000 bits. Not sure if this calculation is right, because there are many other genes and promoters in play.
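(A back-of-the-envelope version of that arithmetic - the number of changes per HAR below is an assumption invented purely for illustration, not a figure from the paper:)

```python
from math import log2

genome_size = 3e9                    # base pairs in the human genome
bits_per_locus = log2(genome_size)   # ~31.5 bits to specify one position in the genome
brain_hars = 250                     # HARs that appear to act as developmental enhancers in the brain
changes_per_har = 3                  # assumed: a few small, FOXP2-like changes per HAR (illustrative)

total_bits = brain_hars * changes_per_har * bits_per_locus
print(f"{bits_per_locus:.1f} bits per locus, ~{total_bits:,.0f} bits in total")  # roughly 23,600 bits
```

Which lands in the same ballpark as the ~25,000-bit figure above.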
The soft-software, imho, is what I call the "human training dataset", and it includes, first of all, language (and our home environment, all visual production, etc.). The existence of feral children who can't be trained to become human again means that the human brain is a universal learning machine (the idea was discussed [LW · GW] on LW), but its training dataset is outside the hardware of the machine.
Currently, because of the Internet etc., that dataset is changing more than at any time since antiquity, and if the principles of universal thinking live in the dataset, we could lose them, as EY said.
↑ comment by Shohobohaum Za (shohobohaum-za) · 2020-01-18T05:33:36.262Z · LW(p) · GW(p)
I'm a little late to the game here, but I have a small issue with the above.
I don't think it is accurate to estimate the size of changes in such a manner, as there is an enormous complex of transcription factors that create interplay between small changes, some of which we may never see any actual trace of, or which are located outside the genome yet affect the genome. SNPs are important (such as those in FOXP2) but not the be-all-end-all factor for those expressions either - epigenetic factors can drive selection just as effectively as chance mutation creates advantage. Two sides of the same coin, so to speak.
The HARs in question are not only genes, but some of them are connected with multiple sections of the genome in this capacity. They carry with them effects and reactions that are hard to calculate as single instances of information (bit encoding). Activation of some factors may lead to activation/deactivation of other factors. This networking is far too massive to make sense of without intense inquiry (which assuredly they are doing with GWAS on the 250 HARs mentioned above). Which leads to my inquiry - how is it 25,000 bits of difference? We did not see the pathway that effectively created that hardware, and much of it could be conceived as data that is environmental - which is what I suppose you're getting at somewhat, but your rote calculation seems to contradict. Do you simply mean brain development programs in the actual code? I don't think that is as useful a perception, as it limits the frame of reference to a small part of the puzzle. Gene expression is much more affected by environmental stimuli than one might perceive, feral children being an interesting point in that regard.
comment by Benquo · 2018-11-19T21:47:00.226Z · LW(p) · GW(p)
I think you're underestimating how unusual your source culture was in the other direction - it wasn't just short-lived, it was also new. Prior to the mid-20th century, there wasn't a single dominant hegemonic power with a unified intellectual culture centered around a newly consolidated centralized administrative state and media apparatus. Then, shortly after WWII, there was.
There was also a world before the construction of a homogenizing information source (i.e. prior to the internet, broadcast media, the printing press), and that world was also not damaged in the way Neuromancer describes - it wasn't producing the kind of books you read as a kid. But it contained the raw materials used to construct the world that could write those books.
comment by gwern · 2018-11-17T00:19:27.904Z · LW(p) · GW(p)
I felt some deja vu reading this and looked it up. Did anything in particular prompt crossposting it to LW over a year later, or is this just when you got around to it?
comment by Arthur Milchior (Arthur-Milchior) · 2018-11-17T01:48:00.362Z · LW(p) · GW(p)
Do you have any idea how to test your hypothesis? How to test for differences between past and present?
I am entirely convinced that some people don't try to go and steal good ideas from the outgroup. Even people from the «progressive» group. I can easily imagine that it is a general tendency, and not just something I see in the people in my neighborhood.
However, I don't see anything convincing that it is actually getting worse. I'm not a historian, but from what I have heard and understood of the past centuries, open-mindedness was not generally an adjective which could describe most of the people who had to live in those times, even if there were some great exceptions. The time it took to switch from Roman numerals to Arabic numerals seems to show that "stealing good ideas" was not an applied ideal. Thus, I must admit I'm kind of sceptical about the content of this blog post. Or, to say it another way, it looks strangely similar to an article in the recent trend about "how the internet is creating a bubble around you", but rewritten with rationalist wording.
By the way, if there is an easy way to distinguish good ideas from bad ideas, I'd love to have a pointer to it. Which would be necessary to know which ideas to actually steal.
↑ comment by Flipnash · 2018-11-17T08:22:44.239Z · LW(p) · GW(p)
"By the way, if there is an easy way to distinguish good idea from bad idea, I'd love to have a pointer to it. Which would be mandatory to know what idea to actually steal. "
My crack at a solution to this problem was to learn to recognize ideas that are useful then filter those by how moral they are.
I fail all the time at this. I miss things. I fail to grasp the idea or fail to find a use case. I fail to judge the moral consequences of the idea.
I find it easier to find ideas that are useful to a problem i'm immediately facing rather than useful in general. Which narrows my filter bubble to just those related to programming as those are the problems I encounter and think about the most.
↑ comment by Pattern · 2018-11-18T07:27:09.684Z · LW(p) · GW(p)
Two ways to do this:
1) If it's an idea someone else uses, ask them where they use it and how. If it's too general, get a concrete example.
2) If it is clear where it might be applied, test it. (I tested Bayes Theorem on a real life question with numbers I made up, to see if it was a useful tool. I didn't think of the problem when I was looking at the technique; I came across a question and I remembered it, so I used it.)
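(A minimal made-up example of that kind of check, with every number invented on the spot:)

```python
# Made-up question: given a positive result on a cheap home test, how likely is the condition?
p_condition = 0.01           # prior: 1% base rate
p_pos_if_condition = 0.90    # test sensitivity
p_pos_if_healthy = 0.05      # false positive rate

p_positive = p_pos_if_condition * p_condition + p_pos_if_healthy * (1 - p_condition)
posterior = p_pos_if_condition * p_condition / p_positive
print(round(posterior, 3))   # ~0.154 - a positive result is much weaker evidence than it feels
```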
comment by JenniferRM · 2018-12-03T08:53:18.640Z · LW(p) · GW(p)
One thing to keep in mind is sampling biases in social media, which are HUGE.
Even if we just had pure date ordered posts from people we followed, in a heterogeneous social network with long tailed popularity distributions the "median user" sees "the average person they follow" having more friends than them.
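(A toy simulation of that size-biased sampling, with a made-up heavy-tailed popularity distribution: the accounts you actually see are sampled roughly in proportion to their popularity, so the typical followed account looks far more popular than the typical account.)

```python
import random

random.seed(0)
n = 100_000
# Toy network: each account's follower count drawn from a heavy-tailed distribution (capped).
popularity = [min(int(random.paretovariate(1.5)), 5000) for _ in range(n)]

total = sum(popularity)
plain_mean = total / n                                  # popularity of a randomly chosen account
followed_mean = sum(p * p for p in popularity) / total  # popularity of an account sampled by being followed
print(round(plain_mean, 1), round(followed_mean, 1))    # the accounts you see look far more popular
```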
Also, posting behavior tends to also have a long tail, so sloppy prolific writers are more visible than slow careful writers. (Arguably Asimov himself was an example here: he was *insanely* prolific. Multiple books a year for a long time, plus stories, plus correspondence.)
Then, to make the social media sampling challenges worse, the algorithms surface content to mere users that is optimized for "engagement", and what could be more engaging than the opportunity to tell someone they are "wrong on the Internet"? Unless someone is using social media very *very* mindfully (like trying to diagonalize what the recommendation engines think of them) they are going to get whatever causes them to react.
I don't know what is really happening to the actual "average mind" right now, but I don't think many other people know either. If anyone has strong claims here, it makes me very curious about their methodology.
The newsfeed team at Facebook probably has the data to figure a lot of this out, but there is very little incentive for them to be very critical or tell the truth to the public. However, in my experience, the internal cultures of tech companies are often not that far below/behind the LW zeitgeist and I think engineering teams sometimes even go looking for things like "quality metrics" that they can try to boost (counting uses of the word "therefore" or the equivalent idea that uses semantic embedding spaces instead) as a salve for their consciences.
More deeply, like on historical timescales, I think that repeated low level exposure to lying liars improves people's bullshit detectors.
By modern standards, people who first started listening to radio were *insanely gullible* in response to the sound of authoritative voices, both in the US and in Germany. Similarly for TV a few decades later. The very first ads on the Internet (primitive though they were) had incredibly high conversion rates... For a given "efficacy" of any kind of propaganda, more of the same tends to have less effect over time.
I fully expect this current media milieu to be considered charmingly simple, with gullible audiences and hamhanded influence campaigns, relative to the manipulative tactics that will be invented in future decades, because this stuff will stop working :-)
comment by Egon Freeman (egon-freeman) · 2020-06-14T23:18:20.668Z · LW(p) · GW(p)
It's funny to me, now that I look back on 2005 and what I said back then.
I argued that you should throw your TV out the window, because the content delivered there was being "tailored for you" by limited groups with vested interests. I argued at the time, that I would much rather be my own group with vested interest, and that the Internet enabled me to filter input according to my own preferences.
When I think about what you said, today, I realise that I was 100% correct - and how wrong I was to be joyous of it. Through various social algorithms employed by (most notably) Facebook and Google, over the years I've probably isolated myself from things that would've been of tremendous intellectual value for me - not least of which is the simple input of things contrary to my current way of thinking.
So it seems that, in the end, I still have to get off my behind and go looking for things to challenge myself with - I can't count on any outside force to bring these things to me, even by accident. Because, it seems, there are no more accidents.
At this point I should point out that the only reason I'm even here is because I read "HPMoR". It was brought to my attention because I'm into "fanfiction", and there was no reason to even consider the deeper things that it essentially "smuggled" into my perception. To put it another way: the person recommending it to me didn't do so because they thought it'd enrich my mental life - they did so because they thought I'd enjoy this well-written story.
comment by Said Achmiz (SaidAchmiz) · 2018-11-17T00:22:49.943Z · LW(p) · GW(p)
Good post! I agree with almost all of this (I have one objection but it’s not really central). The bit about how your writing would likely have been more popular in the ’60s seems quite plausible.
And when I did try reading science fiction from later days, a lot of it struck me as… icky. Neuromancer, bleah, what is wrong with this book, it feels damaged, why do people like this, it feels like there’s way too much flash and it ate the substance, it’s showing off way too hard.
Amusingly, I very recently read Neuromancer for the first time… and had almost exactly this reaction. An unpopular opinion in our sorts of circles, it seems—which surprised me a bit.
↑ comment by ryan_b · 2018-11-19T01:11:39.892Z · LW(p) · GW(p)
Contra the other responders, I like Neuromancer.
Of course, I felt largely the same way vis-a-vis the damage and emphasis on style, but that’s the whole pitch of the genre: the world is damaged and the punk aesthetic is the only non-corporate culture remaining, and that largely out of spite.
↑ comment by namespace (ingres) · 2018-11-17T21:19:25.589Z · LW(p) · GW(p)
Will second not enjoying Neuromancer very much.
comment by Jacob Falkovich (Jacobian) · 2018-11-19T16:02:54.204Z · LW(p) · GW(p)
there used to be just a few newspapers serving all the bubbles
I think that this in particular is a positive factor for quality of discourse / general intelligence, at least for large segments of the population.
Monoculture TV was trending towards 20-minute sitcom episodes, but now that we have HBO/Netflix many people choose to watch 10-hour seasons with complex stories. Radio and TV had talk shows with 5-minute interviews, now millions choose to listen to 2-hour podcasts. National magazines for nerds had to appeal to all nerds in the nation, but now I can choose to read LessWrong.
Certainly some people choose simpler and stupider. But when there was no choice, the mass media had to appeal to the common denominator - maybe not the 1st percentile of simplicity and stupidity but at most the 20th, not the 50th. This means that most people get smarter things when they have freedom to choose more niche media.
comment by habryka (habryka4) · 2019-11-29T22:10:48.635Z · LW(p) · GW(p)
I kind of have conflicting feelings about this post, but still think it should at least be nominated for the 2018 review.
I think the point about memetically transmitted ideas only really being able to perform a shallow, though maybe still crucial, part of cognition is pretty important and might deserve the nomination on its own.
But the overall point about clickbait and the internet feels also really important to me, but I also feel really conflicted because it kind of pattern-matches to a narrative that I feel performs badly on some reference-class forecasting perspectives. I do think the Goodhart's law points are pretty clear, but I really wish we could do some more systematic study of whether the things that Eliezer is pointing to are real.
So overall, I think I really want this to be reviewed, at least so that we can maybe collectively put some effort into finding more empirical sources of Eliezer's claims in this post, and see whether they hold up. If they do, then I do think that that is of quite significant importance.
comment by steven0461 · 2018-11-17T23:08:29.583Z · LW(p) · GW(p)
It feels to me like the historical case for this thesis ought to be visible by mere observation to anyone who watched the quality of online discussion degrade from 2002 to 2017.
My impression is that politics is more prominent and more intense than it used to be, and that this is harming people's reasonableness, but that there's been no decline outside of that. I feel like I see fewer outright uninformed or stupid arguments than I used to; probably this has to do with faster access to information and to feedback on reasoning. EA and AI risk memes have been doing relatively well in the 2010s. Maybe that's just because they needed some time to germinate, but it's still worth noting.
comment by Raemon · 2019-12-31T02:16:52.122Z · LW(p) · GW(p)
This a first pass review that's just sort of organizing my thinking about this post.
This post makes a few different types of claims:
- Hyperselected memes may be worse (generally) than weakly selected ones
- Hyperselected memes may specifically be damaging our intelligence/social memetic software
- People today are worse at negotiating complex conflicts from different filter bubbles
- There's a particular set of memes (well represented in 1950s sci-fi) that was particularly important, and which are not as common nowadays.
It raises a question which is listed, though not focused on too explicitly in its own right:
- What do you do if you want to have good ideas? (i.e. "drop out of college? read 1950s sci-fi in your formative years?")
It prompts me to separately consider the questions:
- What actually is the internet doing to us? It's surely doing something.
- What sorts of cultures are valuable? What sorts of cultures can be stably maintained? What sorts of cultures cause good intellectual development?
...
Re: the specific claim of "hypercompetition is destroying things", I think the situation is complicated by the "precambrian explosion" of stuff going on right now. Pop music is defeating classical music in relative terms, but, like, in absolute terms there's still a lot more classical music now than in 1400 [citation needed?]. I'd guess this is also true of for tribal FB comments vs letter-to-the-editor-type writings.
- [claim by me] Absolute amounts of thoughtful discourse is probably still increasing
My guess is that "listens carefully to arguments" has just always been rare, and that people have generally been dismissive of the outgroup, and now that's just more prominent. I'd also guess that there's more 1950s style sci-fi today than in 1950. But it might not be, say, driving national projects that required a critical mass of it. (And it might or might not be appearing on bestseller lists?)
If so, the question is less "are things being destroyed / lost" and more "are the relative proportions of things changing in a way that affects the global landscape?"
Jimrandomh once hypothesized, off-the-cuff, "maybe great civilizations are formed during the window between when they develop the ability to filter people for intelligence, and when that filtering mechanism gets somehow co-opted or destroyed", which feels potentially relevant here.
I think most of these questions suggest lines of research to figure out what's actually going on, but I'm not sure if I could figure them out meaningfully within the next couple weeks.
Appendix: Claims this article makes, or considers, or seems to assume:
- Human general intelligence software may have been developed memetically (as opposed to biologically/evolutionarily?)
- Feral Children exist (define feral children?)
- The Internet is selecting harder on a larger population of ideas, and sanity falls off the selective frontier once you select hard enough.
- (see also "scientists publishing papers or college students")
- Radio, television and the internet selects more harshly for hedonic memes
- We're looking at the collapse of deference to expertise
- Print magazines in the 1950s could print more complicated/nuanced takes on things
- Hyperselection of memes might cause habits like:
- When someone presents a careful argument, do you listen and make a careful response? or laugh?
- Does your cultural milieu give you any examples of people debating consequences of paths and choosing the best one?
- Old science fiction provides good socialization (esp. with the habit "when encountering new ideas, check if it's good and incorporate it")
comment by Gurkenglas · 2018-11-17T18:57:20.052Z · LW(p) · GW(p)
This pattern matches for me to how every generation thinks the new form of media will harm the next generation, all the way back to Socrates thinking writing would destroy people's memory.
Does becoming the stereotypically biased geezer seem plausible to you who wrote that he might hit a brick wall at 40 [LW · GW], or is this outside-view thinking asking the wrong questions [LW · GW]?
↑ comment by Rob Bensinger (RobbBB) · 2018-11-17T19:26:58.587Z · LW(p) · GW(p)
If people tend to systematically make a certain mistake, then it's worth asking whether there's some causal factor behind it and whether that could be nudging us toward making the same mistake.
On the other hand, our general ability to solve problems and figure things out presumably is either staying the same, or getting worse, or getting better. That's a factual question that we should be able to learn about, and if (after trying to correct for biases) we did end up reaching a conclusion that resembles an old mistake, well, then it's also possible that the truth resembles an old mistake.
↑ comment by Gurkenglas · 2018-11-17T20:19:30.435Z · LW(p) · GW(p)
Correct. A candidate for a common causal factor of blaming the new media is observing that the young people are wrong.
If we taboo the evidence of subjectively observing the people raised by the new media, what remains is the filter bubble effect. It seems in hindsight like a natural rationalization to reach for, because it's one of the few known downsides of the internet. Eliezer, you wrote the book on noticing when you rationalize. Is this a likely story?
Are some parts of the internet more affected by the new effects of the internet than others? Is there a way to test them for their cognitive function as opposed to thinking in ways the previous generation wouldn't approve of?
comment by Jameson Quinn (jameson-quinn) · 2018-11-17T15:08:32.743Z · LW(p) · GW(p)
In my anecdotal experience, the kids are OK. At least as OK as we were when I was a kid in the 80s reading SF from the 60s and 70s.
If you want me to take this hypothesis more seriously than that, show more evidence.
comment by Adam Zerner (adamzerner) · 2018-11-18T17:12:41.137Z · LW(p) · GW(p)
I think it is worth noting that results probably vary a lot based on who you hang out around on the internet. Eg. us LessWrongers hopefully have improved our culturally transmitted software by hanging out around the rationalistsphere, whereas people who browse pictures of Cardi B on Instagram probably haven't.
So as far as looking for a takeaway that you can apply to your personal life goes, I don't think most readers here need to be too worried. I would say that we should be watchful, but not paranoid. Personally, I spend a bit of time on poker forums discussing hands, and I've been extremely frustrated with the quality of conversation and discourse there. I should probably keep an eye on myself to see if anything has rubbed off on me.
comment by StartAtTheEnd · 2023-10-17T07:55:29.085Z · LW(p) · GW(p)
I don't think that the internet itself is making us dumber, I think that big crowds of people tend to be stupid and that the modern internet creates a lot of such environments.
To say something insightful and concise: groups of people tend to drag down the competition rather than improve themselves. E.g. harming the reputation of the other party rather than developing themselves to higher standards and winning that way.
I'm not hitting the nail entirely on the head with this sentence, but most people should have noticed something similar themselves.
I agree with your example of Goodhart's law mechanisms. When signaling valuable outcomes becomes more important than the things which actually cause said outcomes, it becomes a game of pretend-virtue and pretend-competence. Experts generally can't deceive each other, but in recent times, it's all about appealing to the general population, who are entirely unqualified to judge what experts are doing.
Consider for example a situation where PhD papers were graded by public opinion rather than by professors. Students would have to change their papers to reflect this, and the quality of the papers would suffer gravely as a result.
To generalize further on the previous point, an important issue in the modern world is a lack of selection and proper hierarchies. The rule "90% of everything is garbage" applies, but recent moral values are rejecting any sorts of hierarchies, even between functional and dysfunctional countries, cultures, cities, religions, values, etc.
I disagree that strong competition is bad. What's bad is judging people on shallow criteria which can be gamed. In short, you can't outsource character judgement/evaluation to simple metrics. But I can see an interpretation which is correct: the need to perform well makes weak people sacrifice organic and genuine interests and approaches for meta-gaming. They abandon all personal standards and adopt the dominant strategy (clickbait is one such strategy).
Yes, narrow audiences were a focus in the past, as opposed to broader audiences. This is once again a quality vs quantity tradeoff, and you're right to notice a drop in quality as a result from this.
Your observation about hedonic selection seems correct as well, but one has to wonder what sort of psychological changes are behind this decadence of taste. Perhaps it's just that the internet used to have more intelligent people, with a higher need for novelty and challenges, and that society is increasingly suffering from a sort of exhaustion which calls for hedonism, escapism and other harmful indulgences in material with zero cognitive cost (a fancier way of 'doing nothing'). (Only relevant for those who are interested in how changes in society change the general psychological state of the population.)
I can't help recalling Nietzsche's critiques of decadence, for all of this behaviour seems to run quite parallel to the sort of smallness and lack of taste that he personally fought against.
To wrap it up, I'd call all this a subversion of standards. Everything valuable is rare, but value is mostly decided by public opinion (numbers), and why wouldn't public opinion speak well about itself, and badly about whatever is out of reach for it?
I think all of these issues are caused by an important idea which is missing in modern society: that bad things spawn good things, and that good things spawn bad things. Everything comes at a price, e.g. strength comes at the cost of effort. Modern society is removing the "bad" things which generate the good, not realizing the harm it's doing. An instance of this problem is overprotective parents, but I mean it in a much larger sense, so large that I want to ask you "Are you sure reducing human suffering is the right move? Have you considered the value of suffering?"
As a final note, I don't see why you aren't generalizing your idea further. To the point of using metaphors like "Why wouldn't an untended garden fill up with weeds rather than pretty flowers? Weeds are stronger and multiply more quickly, and beauty doesn't translate to fitness", or perhaps a comparison to game theory problems. Or this one that I like: "We're replacing organic human things with mathematical things, so human taste is now a weaker selector than efficiency."
What I want to know is: How did we keep these problems at bay in the past? Through good mental health? To reject the winning strategy because it's ugly seems like a costly standard to have, and yet that characterizes the past much more than the present.
Thanks for reading. I apologize for my low verbal abilities, but I'm confident in my intuition.
P.S. While software is less important than hardware, it seems that changes to this software can easily cause us to regress. Most of modern human progress is due to the software, which doesn't seem robust against the degradation taking place now. Hopefully we're just approaching a local minimum, but if this issue is caused by modern technology, as Ted Kaczynski hints it is, I'm not so optimistic.
↑ comment by ErioirE (erioire) · 2024-05-14T20:56:08.507Z · LW(p) · GW(p)
The rule "90% of everything is garbage" applies, but recent moral values are rejecting any sorts of hierarchies, even between functional and dysfunctional countries, cultures, cities, religions, values, etc.
When society suppresses attempts to evaluate concepts or situations as objectively better or worse than alternatives, is it any surprise that polarization increases?
If there are no commonly agreed upon benchmarks to calibrate against it becomes a war of whoever can shout loudest/most convincingly.
↑ comment by StartAtTheEnd · 2024-05-14T21:51:34.903Z · LW(p) · GW(p)
I find that subjective measurements are punished harder than objective ones. You are sometimes forgiven for claiming that "science shows X", but personal opinions are rarely allowed to discriminate, even if they, by their very nature, are meant to do exactly that. Example: "I want to date X type of people" or "I wouldn't date X type of people". For almost every category of X, you'll be judged hard for your preferences, even if you didn't consciously choose any of them.
I don't think it's just about shouting the loudest or most convincingly. At least I want to stress that what counts as "convincing" is more emotional than rational, in all cases where the rational is less pleasant to the ear. Some people can see through this and side with the truth, but I think the ratio of them is too small to counter the effect.
Since this is mostly about value, objectivity can't help us. Even if it could (through agreement about metrics), the relationships of real-world data is too complex. War feels terrible, yet it's great for technological advancements. "War is good" is not a common opinion at all, it lost, and the positive effects are rarely even considered. Society tends to think of things as either entirely good or entirely bad, but if you consider 3 or 4 links of cause and effect, such thinking becomes useless. But society generally doesn't look that far, and neither does it like people who do. People who look that far ahead will advocate for terrible things now to bring about good things later (accelerationism, revolution, eugenics, etc). But it will happily make the locally best choice even when it's completely unsustainable.
Anyway - I think making the correct choice requires some willpower, for the same reason that it requires willpower to eat salad rather than a burger. But the average person, to the extent that they're "moral", tends to be weak. No willpower, no backbone, no ability to resist temptation, conflict-shy, afraid to assert themselves. Stronger people suffer from this effect, for they can either make the worse choice, or get called "evil" for making the better choice. To use an example which may be familiar to you, how do you save somebody who is addicted to something harmful or procrastinating on important work? You either aid their destruction, or take their pleasure away from them, and both choices are painful.
↑ comment by ErioirE (erioire) · 2024-05-14T22:28:08.770Z · LW(p) · GW(p)
You're right, "objectively" doesn't fit as well in that statement as I thought.
That is how I intended 'convincing' to be interpreted.
For almost every category of X, you'll be judged hard for your preferences, even if you didn't consciously choose any of them.
It depends on if X is a demographic/group or a variable. "I don't want to date people who are [uneducated/from a drastically different cultural background]" sounds a lot less politically correct than "I want to date people with whom I estimate a high probability of mutual relationship satisfaction." because you don't have to explain your criteria to everyone.
I admit that's more semantic obfuscation of judgement risk markers than it is mitigating the problem.
↑ comment by StartAtTheEnd · 2024-05-14T23:36:55.386Z · LW(p) · GW(p)
I see! I think we largely agree then.
It does depend on how you explain yourself, but in the end, you're just wording the same thing (the same preference) differently, and that's still assuming that you know the reason for your own preferences, and that they have a reason at all.
The logic seems to be "when the truth looks bad, it is, therefore you must pretend otherwise", which adds a useless layer on top of everything obscuring the truth. The truth isn't always more valuable than pleasant lies, but when this constructed social reality starts influencing areas in which it does matter (like medicine, general science and ways of doing things, like parenting), I find that it's harmful.
I'll also admit that I don't find preferences to be a problem at all. Even though most preferences are shallow (occurring before conscious thought). I think both lying about them and inferring something from them is more harmful. All this perceived intent where none exists is what causes aspects of life to be so unappealing. I find most people's perceptions to be unhealthy, by which I mean lacking in innocence, resulting in a sort of oversensitivity or tendency to project or interpret negative signals.
This is sort of abstract, but if we assume that racism is solved by not seeing color, then moral evil can be solved by not looking at the world through such a lens. Favorable and unfavorable outcomes will still exist, the dimension of "pure/corrupt" feelings associated with things will just disappear. This may be throwing out the baby with the bathwater though.
comment by DanArmak · 2020-01-18T10:21:22.796Z · LW(p) · GW(p)
This post raises some reasonable-sounding and important-if-true hypotheses. There seems to be a vast open space of possible predictions, relevant observations, and alternative explanations. A lot of it has good treatment, but not on LW, as far as I know.
I would recommend this post as an introduction to some ideas and a starting point, but not as a good argument or a basis for any firm conclusions. I hope to see more content about this on LW in the future.
comment by Raemon · 2019-12-01T22:02:23.313Z · LW(p) · GW(p)
Initially, I did not nominate this post, for reasons similar to Habryka's note that "it kind of pattern-matches to a narrative that I feel performs badly on some reference-class forecasting perspectives".
But, upon reflection: the hypotheses here do feel "important if true", and moreover the model seems plausible. And, regardless, "What exactly is modern internet culture doing to us?" seems like a really important question, which I'd like to have seriously investigated. It seems like exactly the sort of thing rationality is for: a high stakes question with limited information, potentially with only a limited window to get the answer right.
So, this nomination is not (necessarily) because I think this should be included in the Best of 2018 book, but because I want the claims to get more thorough review/operationalization/thinking-about-what-future-work-is-helpful. (Meanwhile, I've definitely thought a lot about it in the past year.)
...
(Addenda: I also think this might have been the post that crystallized the idea of "hypercompetition can produce worse results" for me, including domains like college admissions and hiring. I think I've gotten that from a few different places but I noticed that point on the re-read here and it's definitely a hypothesis I consider more often now)
↑ comment by bfinn · 2019-12-02T20:19:04.609Z · LW(p) · GW(p)
Re your addendum, to make an almost-obvious point, over-optimizing producing worse results is what large parts of modern life are all about; typically over-optimizing on evolved behaviours. Fat/sugar, porn, watching TV (as a substitute for real life), gambling (risk-taking to seek reward), consumerism and indeed excess money-seeking (accumulating unnecessary resources), etc. The bad results often take the form of addictions.
Though some such things are arguably harmless (e.g. professional sport - building unnecessary muscles/abilities full-time to win a pointless status contest).
comment by Jotto999 · 2018-11-23T21:55:09.551Z · LW(p) · GW(p)
I'm concerned about the way you're operationalizing this (as in, insufficiently). Could you make this into a concrete prediction about the world? What sort of question about this could we post on Metaculus that wouldn't resolve as ambiguous?
comment by Dagon · 2018-11-20T18:59:27.287Z · LW(p) · GW(p)
This has long been my biggest worry for the future. Pleasant/attractive future experiences of intelligent agents (whether human, artificial, or alien) seem unlikely in highly-competitive environments. This is not because competition itself is unpleasant, but because agents will always seek their own utility (or at least existence if expected utility is positive) over other agents'.
It's far more general than current cultural threats - my fear is that Scott's Moloch is a fundamental truth of individual existence. The repugnant conclusion, however repugnant, is the true conclusion of maximizing overall utility within available resources.
The only out I currently see is to give up individuality (or more likely, to have it taken from us by a more powerful utility monster).
↑ comment by StartAtTheEnd · 2023-10-17T08:44:05.963Z · LW(p) · GW(p)
Only lower needs are ugly; higher needs include pleasant and attractive aspects of life. Only simple people think that material wealth is true wealth. Sadly, it seems that we're outnumbered, and that the average human being is at a lower stage of development than in the past. Most people haven't even begun the process of individualization, let alone reached the point where petty things offend their taste.
This scale can also be understood as "lower" and "higher". It's hard to put into words what exactly is higher and lower, but this difference is a major part of linguistics, in which it's visible. "Corrupt", "Filthy", "Indecent", "Unrefined". See how social status, cleanliness, manners, and standards correlate with language with positive evaluations? The vocal minority which indulges in petty identity politics and other such types offend me in the same way. Of course, a tradeoff here is that they can enjoy life much more than I, who considers most things below himself.
I otherwise agree with you, you put it into words much better than I ever could. But I think it's a fundamental truth of groups more than of individual existence. After all, smaller communities tend to be better than big ones. Moloch might be an essential part of size, so that the problem is the utility monster rather than us. Maybe we're not maximizing individual utility? We're working for something else which grants us utility, but which eventually replaces us. If I automate a task, I will feel like I'm increasing my own power, but I'm actually just creating my own replacement, and decreasing my own value in the process.
I have one positive observation to share with you though, it's that optimizing over longer spans of time removes all of these issues. Moloch is only a problem when people are greedy and optimizing immediate reward. If you were a greedy person destined to live for a million years, you'd likely not contribute to anything which reduces the overall quality of the world. Extend this time span to a quadrillion years, and you'd pretty much have solved the problem with human alignment. The world would be an extension of yourself and damaging it in any way would be hurting yourself. It would still be a "us vs them" situation, but the "us" would include all of humanity, rather than just a tiny part of it, so "them" wouldn't be your fellow human beings but something like physics and possibly aliens.
(Apologies if necro-posting is frowned upon on here, and please let me know if it is)
comment by greylag · 2018-11-18T22:00:00.582Z · LW(p) · GW(p)
Neuromancer, bleah, what is wrong with this book, it feels damaged, why do people like this, it feels like there's way too much flash and it ate the substance, it's showing off way too hard.
Hypotheses:
- This is the millennia-long tension between Enlightenment and Romanticism. Romanticism feels deeply wrong to someone on team Enlightenment, especially when stealing Enlightenment’s science fictional tropes!
- A cultural Idea Trap. Great Stagnation gives you Cyberpunk. (Doubtful, suspect events occurred in wrong order)
comment by Adam Zerner (adamzerner) · 2018-11-18T01:21:04.890Z · LW(p) · GW(p)
I hope that the trend of low-quality internet content will reverse itself and settle at a higher-quality equilibrium. As an analogy, think about food. We start out at some baseline, then we introduce fast food and processed garbage; people initially respond by consuming too much of it, but then, once they realize how harmful it is, they start to wise up and look for healthier alternatives. (Well, some people do, to some extent.) Same with smoking. With the internet, I think we're just starting to enter the phase where we realize how harmful it is. I don't know enough to say "I'm confident that the internet will follow the same path", but it certainly seems plausible.
Something that makes me feel more optimistic about the internet, as opposed to food and smoking, is that it is a lot easier to precommit to avoiding things on the internet than it is to precommit to avoiding, say, McDonald's, or a pack of cigarettes. SelfControl is a good example. It allows you to block websites for up to 24 hours at a time. But I see no reason why this sort of thing can't be expanded. Why only 24 hours? Why only blocking websites? What about apps? What about certain types of content, say, on your newsfeed? What about precommitting to, say, five hours per week as opposed to an outright block? I suspect that this sort of software will evolve to become more and more sophisticated.
Of course, there is a tremendous economic incentive for the Facebooks of the world to prevent people from using this sort of productivity software, but it seems like a very hard battle for them to win. How do you prevent people from realizing that they aren't as happy as they used to be before they spent four hours a day mindlessly browsing your site and getting into stupid arguments? How do you stop people from precommitting with the click of a button? Maybe I'm just underestimating the ability of internet companies to manipulate us. After all, they've been winning for years, and humans don't have the best track record of plucking the low-hanging fruit in their personal lives.
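A minimal sketch of what a "weekly budget" version of such precommitment software might look like (my own toy illustration, not how SelfControl actually works; the domain list, file paths, and the /etc/hosts trick are all assumptions made for the example, and real tools hook into the browser or OS instead):

```python
#!/usr/bin/env python3
"""Toy sketch of a "five hours per week" precommitment blocker.

Illustrative only: it logs usage to a JSON file and, once the weekly
budget is spent, redirects the listed domains to 127.0.0.1 via
/etc/hosts (which requires running as root). Real tools hook into the
OS or browser instead.
"""
import json
import sys
import time
from pathlib import Path

BUDGET_MINUTES = 5 * 60                      # hypothetical weekly allowance
DOMAINS = ["facebook.com", "twitter.com"]    # hypothetical distracting sites
STATE = Path.home() / ".weekly_budget.json"  # hypothetical state file
HOSTS = Path("/etc/hosts")
MARK = "# weekly-budget-block"


def load_state():
    state = {"week": None, "minutes_used": 0.0, "session_start": None}
    if STATE.exists():
        state = json.loads(STATE.read_text())
    week = time.strftime("%Y-%W")
    if state.get("week") != week:            # new week: reset the meter
        state = {"week": week, "minutes_used": 0.0, "session_start": None}
    return state


def save_state(state):
    STATE.write_text(json.dumps(state))


def block():
    """Point the distracting domains at localhost until the week rolls over."""
    lines = [l for l in HOSTS.read_text().splitlines() if MARK not in l]
    lines += [f"127.0.0.1 {d} {MARK}" for d in DOMAINS]
    HOSTS.write_text("\n".join(lines) + "\n")


def start(state):
    if state["minutes_used"] >= BUDGET_MINUTES:
        print("Weekly budget already spent; blocking.")
        block()
        return
    state["session_start"] = time.time()
    save_state(state)
    print(f"{BUDGET_MINUTES - state['minutes_used']:.0f} minutes left this week.")


def stop(state):
    if state["session_start"] is not None:
        state["minutes_used"] += (time.time() - state["session_start"]) / 60
        state["session_start"] = None
    save_state(state)
    if state["minutes_used"] >= BUDGET_MINUTES:
        block()


if __name__ == "__main__":
    command = sys.argv[1] if len(sys.argv) > 1 else "start"
    {"start": start, "stop": stop}.get(command, start)(load_state())
```

You'd run `python budget.py start` before a browsing session and `python budget.py stop` afterwards; once the five hours are spent, the listed domains resolve to localhost until the week rolls over.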
Replies from: Benquo, None↑ comment by Benquo · 2018-11-19T11:38:49.331Z · LW(p) · GW(p)
Obesity rates in the US haven’t materially declined. Smoking rates declined because of a massive public health campaign orchestrated in large part through the government. And this only happened because the US’s basic info-processing and coordination mechanisms still functioned well enough to reason about the problem. This does not bode well for the US’s ability to repair damage to its info-processing and coordination mechanisms.
comment by Apollo13 · 2018-11-17T21:51:25.508Z · LW(p) · GW(p)
I really like this; however, the term 'hard selecting' or 'selecting harder' is confusing to me. Could you rephrase it so I can understand it better?
Replies from: Pattern↑ comment by Pattern · 2018-11-18T07:55:59.311Z · LW(p) · GW(p)
Quick fictional example:
There are two schools, let's call them School A and School B, that will only accept the best of the best. School A will only accept people who were in the top 10% of their high school graduating class. School B says: Ha! We will only accept people who were the single top student in their graduating class.
And then later down the line it is discovered that while the graduates/attendees of School B are 'more brilliant', they also have a higher burnout rate and more mental health problems.
Why? Maybe in a graduating class the top 10% are hard workers, but the top person might work too hard.
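A small simulation of the same idea (my own sketch, with made-up numbers): each student's visible record is innate talent plus overwork, and the harder a school selects on that record, the more overwork it drags in along with the talent.

```python
import random

random.seed(0)


def simulate(n_classes=2000, class_size=100):
    school_a = []  # (talent, overwork) pairs admitted by School A
    school_b = []  # (talent, overwork) pairs admitted by School B
    for _ in range(n_classes):
        # Each student's visible record is innate talent plus (nonnegative) overwork.
        students = [(random.gauss(0, 1), max(0.0, random.gauss(0, 1)))
                    for _ in range(class_size)]
        ranked = sorted(students, key=lambda s: s[0] + s[1], reverse=True)
        school_a.extend(ranked[:class_size // 10])  # top 10% of each class
        school_b.append(ranked[0])                  # the single top student

    def report(name, admits):
        talent = sum(t for t, _ in admits) / len(admits)
        overwork = sum(w for _, w in admits) / len(admits)
        burnout = sum(w > 2.0 for _, w in admits) / len(admits)  # crude threshold
        print(f"{name}: mean talent {talent:.2f}, mean overwork {overwork:.2f}, "
              f"burnout rate {burnout:.1%}")

    report("School A (top 10%)", school_a)
    report("School B (top 1)  ", school_b)


simulate()
```

On this toy model School B's admits come out slightly more talented on average, but also markedly more overworked and more likely to cross the burnout threshold, which is one way to unpack "selecting harder".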
comment by FeepingCreature · 2018-11-17T11:17:18.215Z · LW(p) · GW(p)
Eh, the gig economy will fix it.
I can't think of any economic model that would select harder for the ability to take pieces of cognitive architecture and put them together in novel ways. Weren't you the one who said science was going too slow, and that a true Bayesian should be able to solve shallow problems in, let's say, a quarter hour, and more complex ones like unified physics in a week? That doesn't sound like the "old style of work" to me; amusingly, it sounds more like "glib memetics" and "startups". Similarly, the Agile model of development is to accept doing damage but be aware of it as you do so: make the feature usable, then move on, but put a cleanup task in the backlog. At least where I work, the modern style seems more reactive and more demanding of fluidity, and startup/gig work can only increase that. I think we're in a transition phase where the human mind is being effectively operationalized as a target platform, but large parts of the population haven't yet evolved the software to manage being treated as a target, and social systems are taking full advantage. But society is also taking advantage of the increased flexibility on offer here, and in the medium run self-awareness will have to catch up, so that it can keep pace with and frontrun the rapidly evolving memetic environment. At least that's my expectation.
comment by kboon · 2018-11-17T09:56:32.239Z · LW(p) · GW(p)
My cultural software tells me to skip over large parts of any long text, if I can sort of guess what the author is trying to say, even though I know I shouldn't. But I'm an old man by internet standards. I at least remember having an attention span.
Replies from: Benquo
comment by transhumanist_atom_understander · 2023-10-22T21:34:35.645Z · LW(p) · GW(p)
As someone who grew up with Greg Egan on the shelf, I want to note that Greg Egan said basically the same thing about "Neuromancer" (that it cares more about being fashionable than having the characters think through their situation), and "Quarantine" and "Permutation City" were in part responses to cyberpunk, so perhaps all is not lost.
Backing that up with some Greg Egan interview quotes:
From the Karen Burnham interview, on hating "Neuromancer", and on the influence of cyberpunk on "Quarantine":
I read Neuromancer in 1985, because I was voting for the Hugos that year and I thought I ought to read all the nominated novels. I really hated it; aside from the style and the characters, which definitely weren't to my taste, a lot of things about the technology in the book seemed very contrived and unlikely, especially the idea that anyone would plug in a brain-computer interface that they knew a third party could use to harm them.
Over the next few years I read some Rucker and Sterling novels, which I definitely enjoyed more than Gibson. So there was some reasonable stuff written under the cyberpunk banner, but none of it felt very groundbreaking to anyone who'd been reading Dick and Delany, and if it hadn't been wrapped in so much hype I probably would have enjoyed it more. In fact, the way cyberpunk as a movement influenced me most was a sense of irritation with its obsession with hipness. I don't think there's much doubt that “Axiomatic” and the opening sections of Quarantine have a kind of cyberpunk flavour to them, but my thinking at the time would have been less “Maybe I can join the cyberpunk club!” and more “Maybe I can steal back private eyes and brain-computer interfaces for people who think mirror shades are pretentious, and do something more interesting with them.”
From the Marisa O’Keeffe interview, something that corroborates what Eliezer Yudkowsky said about "Neuromancer" characters worrying how things look on a t-shirt:
A lot of cyberpunk said, in effect: “Computers are interesting because cool, cynical men (or occasionally women) in mirrorshades do dangerous things with them.” If that really is the most interesting thing you can imagine about a computer, you shouldn’t be writing SF.
From the Russell Blackford interview, on the influence of cyberpunk on "Permutation City":
I recall being very bored and dissatisfied with the way most cyberpunk writers were treating virtual reality and artificial intelligence in the ’80s; a lot of people were churning out very lame noir plots that utterly squandered the philosophical implications of the technology. I wrote a story called “Dust”, which was later expanded into Permutation City, that pushed very hard in the opposite direction, trying to take as seriously as possible all the implications of what it would mean to be software. In the case of Permutation City that included some metaphysical ideas that I certainly wouldn’t want to repeat in everything I wrote, but the basic notions about the way people will be able to manipulate themselves if they ever become software, which I developed a bit further in Diaspora, seem logically unavoidable to me.
Something depressing is certainly going on in mainstream culture, since for example "The New York Times" hasn't had a review of a Greg Egan book since "Diaspora" in 1998, except to suggest "that Egan doesn’t fully understand how oppression works — or that he is trying to make an inappropriate point".
But science fiction seems alright, if it reacted to "Neuromancer" exactly along the lines of Eliezer Yudkowsky's reaction in this post, producing some of the most beloved (by sci-fi fans) science fiction of the 90s. And I still see every new Alastair Reynolds book in the sci-fi sections of non-specialty bookstores.
comment by Mary Chernyshenko (mary-chernyshenko) · 2019-12-05T21:06:36.645Z · LW(p) · GW(p)
(A minor point: feral children might not "exist", in a way. My zoologist friends say it is quite possible that someone takes an unfit child, or even just one mouth too many, out to the forest and leaves it there. It is possible that a predator finds it and doesn't eat it at once; predators can play with food. And it is possible that the child is found again before it dies. (Perhaps even without the predator part.) But it's no more than that.)
comment by Eli Tyre (elityre) · 2020-04-25T00:21:33.516Z · LW(p) · GW(p)
Is this title intentionally (ironically) clickbaity? Or unintentionally (ironically) clickbaity?
Replies from: Benito↑ comment by Ben Pace (Benito) · 2020-04-25T02:42:34.940Z · LW(p) · GW(p)
Pretty sure it's intentionally ironic.
comment by greylag · 2018-11-18T21:45:53.015Z · LW(p) · GW(p)
the intense competition to get into Harvard is producing a monoculture of students who've lined up every single standard accomplishment and how these students don't know anything else they want to do with their lives
This is Goodhart’s Law [LW · GW] run riot, yes?
Replies from: Benito↑ comment by Ben Pace (Benito) · 2018-11-18T21:47:47.671Z · LW(p) · GW(p)
Yeah, this para was just introducing Goodhart's law in simple language (and without the jargon):
To review, there's a general idea that strong (social) selection on a characteristic imperfectly correlated with some other metric of goodness can be bad for that metric, where weak (social) selection on that characteristic was good. If you press scientists a little for publishable work, they might do science that's of greater interest to others. If you select very harshly on publication records, the academics spend all their time worrying about publishing and real science falls by the wayside.
comment by totallybogus · 2018-11-18T15:17:35.646Z · LW(p) · GW(p)
Is Clickbait Destroying Our General Intelligence? You Won't BELIEVE What Comes Next!
(Personally, I don't buy it. I think persuasion technology - think PowerPoint et al., but also possibly new varieties of e.g. "viral" political advertising and propaganda, powered by the Internet and social media - has the potential to be rather more dangerous than BuzzFeed-style clickbait content. If only because clickbait is still optimizing for curiosity and intellectual engagement, if maybe in a slightly unconventional way compared to, e.g., 1960s sci-fi.)
comment by qmaury · 2022-08-04T19:06:56.640Z · LW(p) · GW(p)
I always thought of the internet in the same way that Night City is described in Neuromancer:
"Night City [is] like a deranged experiment in social Darwinism, designed by a bored researcher who kept one thumb permanently on the fast-forward button."
The internet is just societal evolution set on fast-forward.
comment by Peter Smythe (peter-smythe) · 2019-05-07T11:05:31.373Z · LW(p) · GW(p)
Broke: Optimizing your beliefs for predictive capability.
Woke: Optimizing the inclusive memetic fitness of your beliefs on the internet.
comment by Peter Smythe (peter-smythe) · 2019-05-07T10:47:30.646Z · LW(p) · GW(p)
Hmmmmmm... I think there has been a cultural shift toward optimizing one's connections to include only a memetically purified ingroup built around one's existing beliefs.