Is Clickbait Destroying Our General Intelligence?

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2018-11-16T23:06:29.506Z · score: 115 (52 votes) · LW · GW · 44 comments

(Cross-posted from Facebook.)


Now and then people have asked me if I think that other people should also avoid high school or college if they want to develop new ideas. This always felt to me like a wrong way to look at the question, but I didn't know a right one.

Recently I thought of a scary new viewpoint on that subject.

This started with a conversation with Arthur where he mentioned an idea by Yoshua Bengio about the software for general intelligence having been developed memetically. I remarked that I didn't think duplicating this culturally transmitted software would be a significant part of the problem for AGI development. (Roughly: low-fidelity software tends to be algorithmically shallow. Further discussion moved to comment below.)

But this conversation did get me thinking about the topic of culturally transmitted software that contributes to human general intelligence. That software can be an important gear even if it's an algorithmically shallow part of the overall machinery. Removing a few simple gears that are 2% of a machine's mass can reduce the machine's performance by way more than 2%. Feral children would be the case in point.

A scary question is whether it's possible to do subtler damage to the culturally transmitted software of general intelligence.

I've had the sense before that the Internet is turning our society stupider and meaner. My primary hypothesis is "The Internet is selecting harder on a larger population of ideas, and sanity falls off the selective frontier once you select hard enough."

To review, there's a general idea that strong (social) selection on a characteristic imperfectly correlated with some other metric of goodness can be bad for that metric, where weak (social) selection on that characteristic was good. If you press scientists a little for publishable work, they might do science that's of greater interest to others. If you select very harshly on publication records, the academics spend all their time worrying about publishing and real science falls by the wayside.
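
As a minimal toy illustration of that effect (every distribution and number below is a made-up assumption, not data): give each idea an independent "substance" score and a heavy-tailed "hedonic appeal" score, let the audience rank only on their sum, and watch what happens to the average substance of the survivors as the cutoff tightens.

```python
# Toy model, all parameters invented for illustration: ideas have an
# independent "substance" score (the metric we actually care about) and a
# heavy-tailed "hedonic appeal" score; the feed can only select on their sum.
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
substance = rng.normal(0.0, 1.0, n)      # the metric of goodness
appeal = rng.lognormal(-2.0, 2.0, n)     # usually tiny, occasionally enormous
proxy = substance + appeal               # the only thing selection sees

for top_fraction in (0.5, 0.1, 0.01, 0.001, 0.0001):
    cutoff = np.quantile(proxy, 1 - top_fraction)
    survivors = substance[proxy > cutoff]
    print(f"top {top_fraction:>8.2%} by proxy: mean substance = {survivors.mean():+.2f}")

# Mild selection picks out ideas of above-average substance; at the harshest
# cutoffs the surviving pool is almost entirely the rare ideas with enormous
# appeal, and mean substance falls back toward the population average of zero.
```

The heavy tail is doing all the work in that sketch: as long as the selection is gentle, the proxy still mostly tracks substance, but the extreme tail of the proxy belongs to whatever maximizes appeal alone.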

On my feed yesterday was an essay complaining about how the intense competition to get into Harvard is producing a monoculture of students who've lined up every single standard accomplishment and how these students don't know anything else they want to do with their lives. Gentle, soft competition on a few accomplishments might select genuinely stronger students; hypercompetition for the appearance of strength produces weakness, or just emptiness.

A hypothesis I find plausible is that the Internet, and maybe television before it, selected much more harshly from a much wider field of memes; and also allowed tailoring content more narrowly to narrower audiences. The Internet is making it possible for ideas that are optimized to appeal hedonically-virally within a filter bubble to outcompete ideas that have been even slightly optimized for anything else. We're looking at a collapse of reference to expertise because deferring to expertise costs a couple of hedons compared to being told that all your intuitions are perfectly right, and at the harsh selective frontier there's no room for that. We're looking at a collapse of interaction between bubbles because there used to be just a few newspapers serving all the bubbles; and now that the bubbles have separated there's little incentive to show people how to be fair in their judgment of ideas for other bubbles, it's not the most appealing Tumblr content. Print magazines in the 1950s were hardly perfect, but they could get away with sometimes presenting complicated issues as complicated, because there weren't a hundred blogs saying otherwise and stealing their clicks. Or at least, that's the hypothesis.

It seems plausible to me that basic software for intelligent functioning is being damaged by this hypercompetition. Especially in a social context, but maybe even outside it; that kind of thing tends to slop over. When someone politely presents themselves with a careful argument, does your cultural software tell you that you're supposed to listen and make a careful response, or make fun of the other person and then laugh about how they're upset? What about when your own brain tries to generate a careful argument? Does your cultural milieu give you any examples of people showing how to really care deeply about something (i.e. debate consequences of paths and hew hard to the best one), or is everything you see just people competing to be loud in their identification? The Occupy movement not having any demands or agenda could represent mild damage to a gear of human general intelligence that was culturally transmitted and that enabled processing of a certain kind of goal-directed behavior. And I'm not sure to what extent that is merely a metaphor, versus it being simple fact if we could look at the true software laid out. If you look at how some bubbles are talking and thinking now, "intellectually feral children" doesn't seem like entirely inappropriate language.

Shortly after that conversation with Arthur, it occurred to me that I was pretty much raised and socialized by my parents' collection of science fiction.

My parents' collection of old science fiction.

Isaac Asimov. H. Beam Piper. A. E. van Vogt. Early Heinlein, because my parents didn't want me reading the later books.

And when I did try reading science fiction from later days, a lot of it struck me as... icky. Neuromancer, bleah, what is wrong with this book, it feels damaged, why do people like this, it feels like there's way too much flash and it ate the substance, it's showing off way too hard.

And now that I think about it, I feel like a lot of my writing on rationality would be a lot more popular if I could go back in time to the 1960s and present it there. "Twelve Virtues of Rationality" is what people could've been reading instead of Heinlein's Stranger in a Strange Land, to take a different path from the branching point that found Stranger in a Strange Land appealing.

I didn't stick to merely the culture I was raised in, because that wasn't what that culture said to do. The characters I read didn't keep to the way they were raised. They were constantly being challenged with new ideas and often modified or partially rejected those ideas in the course of absorbing them. If you were immersed in an alien civilization that had some good ideas, you were supposed to consider it open-mindedly and then steal only the good parts. Which... kind of sounds axiomatic to me? You could make a case that this is an obvious guideline for how to do generic optimization. It's just what you do to process an input. And yet "when you encounter a different way of thinking, judge it open-mindedly and then steal only the good parts" is directly contradicted by some modern software that seems to be memetically hypercompetitive. It probably sounds a bit alien or weird to some people reading this, at least as something that you'd say out loud. Software contributing to generic optimization has been damaged.

Later the Internet came along and exposed me to some modern developments, some of which are indeed improvements. But only after I had a cognitive and ethical foundation that could judge which changes were progress versus damage. More importantly, a cognitive foundation that had the idea of even trying to do that. Tversky and Kahneman didn't exist in the 1950s, but when I was exposed [LW · GW] to this new cognitive biases literature, I reacted like an Isaac Asimov character trying to integrate it into their existing ideas about psychohistory, instead of a William Gibson character wondering how it would look on a black and chrome T-Shirt. If that reference still means anything to anyone.

I suspect some culturally transmitted parts of the general intelligence software got damaged by radio, television, and the Internet, with a key causal step being an increased hypercompetition of ideas compared to earlier years. I suspect this independently of any other hypotheses about my origin story. It feels to me like the historical case for this thesis ought to be visible by mere observation to anyone who watched the quality of online discussion degrade from 2002 to 2017.

But if you consider me to be more than usually intellectually productive for an average Ashkenazic genius in the modern generation, then in this connection it's an interesting and scary further observation that I was initially socialized by books written before the Great Stagnation. Or by books written by authors from only a single generation later, who read a lot of old books themselves and didn't watch much television.

That hypothesis doesn't feel wrong to me the way that "oh you just need to not go to college" feels wrong to me.

44 comments

Comments sorted by top scores.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2018-11-16T23:11:00.520Z · score: 28 (10 votes) · LW · GW

(Deleted section on why I thought cultural general-intelligence software was not much of the work of AGI:)

...because the soft fidelity of implicit unconscious cultural transmission can store less serially deep and intricate algorithms than the high-fidelity DNA transmission used to store the kind of algorithms that appear in computational neuroscience.

I recommend Terrence Deacon's The Symbolic Species for some good discussion of the surprising importance of the shallow algorithms and parameters that can get transmitted culturally. The human-raised bonobo Kanzi didn't become a human, because that takes deeper and more neural algorithms than imitating the apes around you can transmit, but Kanzi was a lot smarter than other bonobos in some interesting ways.

But as necessary as it may be to avoid feral children, this kind of shallow soft-software doesn't strike me as something that takes a long time to redevelop, compared to hard-software like the secrets of computational neuroscience.

comment by avturchin · 2018-11-17T10:46:02.539Z · score: 10 (3 votes) · LW · GW

Anatomically modern humans appeared around 300K years ago, but civilisation started only about 5K years ago. It seems that this time was needed to polish the training data set for general intelligence.

I read a book about prehistoric art, and it struck me that the idea of a drawing took tens of thousands of years to consolidate. This idea of drawing later evolved into symbols and text.

comment by gjm · 2018-11-17T14:53:31.006Z · score: 6 (3 votes) · LW · GW

If it took 300k years to develop human software, and 4-13M years to develop human hardware (starting from our common ancestor with chimpanzees), that seems consistent with Eliezer's claim that developing the software shouldn't take all that long _compared with the hardware_. (Eliezer doesn't say "hardware" but "hard-software", but unless I misunderstand he's talking about something fairly close to "software that implements what human brain hardware does".)

[EDITED to add:] On the other hand, you might expect software to evolve faster than hardware, at any given level of underlying complexity/difficulty/depth, because the relevant timescales for selection of memes are shorter than those for genes. So actually I'm not sure how best to translate timelines of human development into predictions for AI development. There's no very compelling reason to assume that "faster for evolution" and "faster for human R&D" are close to being the same thing, anyway.

comment by Benquo · 2018-11-19T21:24:40.157Z · score: 10 (2 votes) · LW · GW

I think you're responding to this as though it were just a metaphor and not noticing the extent to which it might just be meant literally. If we exit the part of human coordination space where we have a civilization, it could easily take another 300,000 years to get it back. That's not a generalized claim about software vs hardware development times. It's a specific claim that the specific "shallow soft-software" Eliezer is referring to might take hundreds of thousands of years to redevelop, regardless of what you might otherwise think about AI software development timelines.

comment by gjm · 2018-11-19T22:39:22.867Z · score: 2 (1 votes) · LW · GW

I'm like 96% sure it was intended to apply to the question of how much of the work in making an AGI is about "cultural general-intelligence software". But yeah, I agree that if we destroy our civilization it could take a long time to get it back. Not just because building a civilization takes a long time; also because there are various resources we've probably consumed most of the most accessible bits of, and not having such easy access to coal and oil and minerals could make building a new civilization much harder. But I'm not sure what hangs on that (as opposed to the related but separate question of whether we would rebuild civilization if we lost it) -- the destruction of human civilization would be a calamity, but I'm not sure it would be a much worse calamity if it took 300k years to repair than if it took "only" 30k years.

comment by Benquo · 2018-11-19T23:03:55.113Z · score: 4 (2 votes) · LW · GW

I think it matters because of what it implies about how hard a target civilization is to reach. Even if the 300k year process could be sped up a lot by knowing what we're aiming for, it's evidence that the end result was a much weaker natural attractor than our current state is, from a starting point of founding civilization at all.

comment by avturchin · 2018-11-18T10:21:21.577Z · score: 9 (2 votes) · LW · GW

I found an interesting article: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4429600/

Some factoids from it: "For example, there are nearly 20 million genomic loci that differ between humans and chimpanzees" (though 99 per cent of the genome consists of non-coding regions).

"Another evolutionary approach has been to focus on genomic loci that are well conserved throughout vertebrate evolution but are strikingly different in humans; these regions have been named “human accelerated regions (HARs)” (Bird et al., 2007; Bush and Lahn, 2008; Pollard et al., 2006; Prabhakar et al., 2008). So far, ∼2700 HARs have been identified, again most of them in noncoding regions: at least ∼250 of these HARs seem to function as developmental enhancers in the brain".

"Comparison of the FOXP2 cDNAs from multiple species indicates that the human FOXP2 protein differs at only 3 amino acid residues from the mouse ortholog, and at 2 residues from the chimpanzee, gorilla, and rhesus macaque orthologs ... Mice carrying humanized FoxP2 show accelerated learning, qualitatively different ultrasonic vocalizations, and increased dendrite length and synaptic plasticity in the medium spiny neurons of the striatum."

So my impression is that the changes in the genome were rather small, but very effective at fine-tuning the brain by creating new connections between regions, increasing its size, etc. The information content of the changes depends not only on the number of single-nucleotide changes but also on their exact locations within the whole 3-billion-base-pair genome (around 30 bits to specify each location). The main role, though, was played by those ~250 HARs, and within each HAR the change may be rather small, as in the case of FOXP2.

Multiplying all that out suggests that the difference between the chimp and human brain-development programs is around 25,000 bits. I'm not sure this calculation is right, because there are many other genes and promoters in play.
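
For concreteness, the back-of-envelope version of that multiplication (the ~60 bits I allow per HAR for the content of the change is my own rough guess):

```python
# Rough information cost of the human-specific brain tweaks, treating the
# ~250 brain-linked HARs as the whole story (a simplification).
from math import log2

bits_per_locus = log2(3e9)                  # ~31.5 bits to name one position in the genome
location_bits = 250 * bits_per_locus        # ~7,900 bits just to say *where* the changes are
content_bits = 250 * 60                     # assumed ~60 bits per HAR for *what* changed
print(round(location_bits + content_bits))  # ~23,000 bits, near the ~25,000 estimate above
```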

The soft-software, imho, is what I call the "human training dataset", and it includes, first of all, language (plus our home environment, all visual production, etc.). The existence of feral children who can't later be trained to become human again suggests that the human brain is a universal learning machine (an idea discussed [LW · GW] on LW), but its training dataset lies outside the hardware of the machine.

Currently, because of the Internet etc., that dataset is undergoing its biggest changes since ancient times, and if the principles of universal thinking live in the dataset, we could lose them, as EY said.

comment by gwern · 2018-11-17T00:19:27.904Z · score: 24 (11 votes) · LW · GW

I felt some deja vu reading this and looked it up. Did anything in particular prompt crossposting it to LW over a year later, or is this just when you got around to it?

comment by omnesia · 2018-11-17T22:31:22.338Z · score: 2 (2 votes) · LW · GW

It might have something to do with an article published in The Atlantic the day before this crosspost, which looks at the stagnation of science. https://www.theatlantic.com/amp/article/575665

comment by Benquo · 2018-11-19T21:47:00.226Z · score: 15 (4 votes) · LW · GW

I think you're underestimating how unusual your source culture was in the other direction - it wasn't just short-lived, it was also new. Prior to the mid 20th Century, there wasn't a single dominant hegemonic power with a unified intellectual culture centered around a newly consolidated centralized administrative state and media apparatus. Then, shortly after WWII, there was.

There was also a world before the construction of a homogenizing information source (i.e. prior to the internet, broadcast media, the printing press), and that world was also not damaged in the way Neuromancer describes - it wasn't producing the kind of books you read as a kid. But it contained the raw materials used to construct the world that could write those books.

comment by Said Achmiz (SaidAchmiz) · 2018-11-17T00:22:49.943Z · score: 15 (6 votes) · LW · GW

Good post! I agree with almost all of this (I have one objection but it’s not really central). The bit about how your writing would likely have been more popular in the ’60s seems pretty likely.

And when I did try reading science fiction from later days, a lot of it struck me as… icky. Neuromancer, bleah, what is wrong with this book, it feels damaged, why do people like this, it feels like there’s way too much flash and it ate the substance, it’s showing off way too hard.

Amusingly, I very recently read Neuromancer for the first time… and had almost exactly this reaction. An unpopular opinion in our sorts of circles, it seems—which surprised me a bit.

comment by ryan_b · 2018-11-19T01:11:39.892Z · score: 5 (3 votes) · LW · GW

Contra the other responders, I like Neuromancer.

Of course, I felt largely the same way vis-a-vis the damage and emphasis on style, but that’s the whole pitch of the genre: the world is damaged and the punk aesthetic is the only non-corporate culture remaining, and that largely out of spite.

comment by namespace (ingres) · 2018-11-17T21:19:25.589Z · score: 3 (2 votes) · LW · GW

Will second not enjoying Neuromancer very much.

comment by gjm · 2018-11-17T14:48:36.133Z · score: 2 (1 votes) · LW · GW

Dunno whether it's an unpopular reaction in any particular circles, but it's pretty much how I felt about _Neuromancer_ too.

comment by ArthurRainbow · 2018-11-17T01:48:00.362Z · score: 13 (9 votes) · LW · GW

Have you any idea how to test your hypothesis? How to test for differences between past and present?

I am entirely convinced that some people don't try to go and steal good ideas from the outgroup, even people from «progressive» groups. I can easily imagine that this is a general tendency, and not just something I see in the people in my neighborhood.

However, I don't see anything convincing showing that it is actually getting worse. I'm not a historian, but from what I have heard and understood of past centuries, "open-minded" was not generally an adjective that described most of the people who lived in those times, even if there were some great exceptions. The time it took to switch from Roman numerals to Arabic numerals seems to show that "stealing good ideas" was not a widely applied ideal. Thus, I must admit I'm kind of sceptical about the content of this blog post. Or, to put it another way, it looks strangely similar to an article from the recent trend of "how the internet is creating a bubble around you", but rewritten in rationalist wording.

By the way, if there is an easy way to distinguish good ideas from bad ideas, I'd love a pointer to it. That would be necessary in order to know which ideas to actually steal.

comment by Flipnash · 2018-11-17T08:22:44.239Z · score: 1 (1 votes) · LW · GW

"By the way, if there is an easy way to distinguish good idea from bad idea, I'd love to have a pointer to it. Which would be mandatory to know what idea to actually steal. "

My crack at a solution to this problem was to learn to recognize ideas that are useful, then filter those by how moral they are.

I fail all the time at this. I miss things. I fail to grasp the idea or fail to find a use case. I fail to judge the moral consequences of the idea.

I find it easier to find ideas that are useful for a problem I'm immediately facing rather than useful in general, which narrows my filter bubble to ideas related to programming, as those are the problems I encounter and think about the most.

comment by Pattern · 2018-11-18T07:27:09.684Z · score: 0 (0 votes) · LW · GW

Two ways to do this:

1) If it's an idea someone else uses, ask them where they use it and how. If it's too general, get a concrete example.

2) If it is clear where it might be applied, test it. (I tested Bayes Theorem on a real life question with numbers I made up, to see if it was a useful tool. I didn't think of the problem when I was looking at the technique; I came across a question and I remembered it, so I used it.)
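
For instance, a worked version with numbers invented purely for illustration: a 10% prior, evidence that's 80% likely if the hypothesis is true and 20% likely if it's false:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \lnot H)\,P(\lnot H)} = \frac{0.8 \times 0.1}{0.8 \times 0.1 + 0.2 \times 0.9} \approx 0.31$$

Plugging in different made-up numbers like that is a quick way to see whether the formula tells you anything your gut didn't.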

comment by JenniferRM · 2018-12-03T08:53:18.640Z · score: 12 (3 votes) · LW · GW

One thing to keep in mind is sampling biases in social media, which are HUGE.

Even if we just had pure date ordered posts from people we followed, in a heterogeneous social network with long tailed popularity distributions the "median user" sees "the average person they follow" having more friends than them.

Also, posting behavior tends to also have a long tail, so sloppy prolific writers are more visible than slow careful writers. (Arguably Asimov himself was an example here: he was *insanely* prolific. Multiple books a year for a long time, plus stories, plus correspondence.)

Then, to make the social media sampling challenges worse, the algorithms surface content to mere users that is optimized for "engagement", and what could be more engaging than the opportunity to tell someone they are "wrong on the Internet"? Unless someone is using social media very *very* mindfully (like trying to diagonalize what the recommendation engines think of them), they are going to get whatever causes them to react.

I don't know what is really happening to the actual "average mind" right now, but I don't think many other people know either. If anyone has strong claims here, it makes me very curious about their methodology.

The newsfeed team at Facebook probably has the data to figure a lot of this out, but there is very little incentive for them to be very critical or tell the truth to the public. However, in my experience, the internal cultures of tech companies are often not that far below/behind the LW zeitgeist and I think engineering teams sometimes even go looking for things like "quality metrics" that they can try to boost (counting uses of the word "therefore" or the equivalent idea that uses semantic embedding spaces instead) as a salve for their consciences.
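
(A crude sketch of the word-counting version of such a metric; the connective list and the normalization here are invented for illustration only:)

```python
# Naive "reasoning density" metric of the sort an engineering team might try
# to boost: count discourse connectives per thousand words.
import re

CONNECTIVES = {"therefore", "because", "however", "thus", "hence", "although"}

def reasoning_density(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in CONNECTIVES)
    return 1000 * hits / len(words)

print(reasoning_density("I was wrong, therefore I updated; however, slowly."))
```

The embedding-space version would swap the word list for a similarity score against exemplars of careful argument, but it's the same kind of move.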

More deeply, like on historical timescales, I think that repeated low level exposure to lying liars improves people's bullshit detectors.

By modern standards, people who first started listening to radio were *insanely gullible* in response to the sound of authoritative voices, both in the US and in Germany. Similarly for TV a few decades later. The very first ads on the Internet (primitive though they were) had incredibly high conversion rates... For a given "efficacy" of any kind of propaganda, more of the same tends to have less effect over time.

I fully expect this current media milieu to be considered charmingly simple, with gullible audiences and hamhanded influence campaigns, relative to the manipulative tactics that will be invented in future decades, because this stuff will stop working :-)

comment by sirjackholland · 2018-11-19T18:59:39.988Z · score: 10 (4 votes) · LW · GW

Alternative hypothesis: the internet encourages people who otherwise wouldn't contribute to the general discourse to contribute to it. In the past, contributing meant writing some kind of article, or at least a letter to the editor, which 1) requires a basic level of literacy and intellectual capacity, and 2) provides a filter, removing the voices of those who can't write something publishers consider worthy of publication (with higher-influence publications having, in general, stricter filters).

Anecdote in point: I have yet to see an internet comment that I couldn't imagine one of my relatives writing (sorry, relatives, but a few of y'all have some truly dumb opinions!). But these relatives I have in mind wouldn't have contributed to the general discourse before the internet was around, so if you don't have That Uncle in your family you may not have been exposed to ideas that bad before seeing YouTube comments.

Last minute edit: I mean that I have yet to see an internet comment that I couldn't imagine one of my relatives writing years and years ago, i.e. I expect that we would have seen 2018 level discourse in 2002 if That Uncle had posted as much in 2002 as in 2018.

comment by steven0461 · 2018-11-17T23:08:29.583Z · score: 10 (6 votes) · LW · GW

It feels to me like the historical case for this thesis ought to be visible by mere observation to anyone who watched the quality of online discussion degrade from 2002 to 2017.

My impression is that politics is more prominent and more intense than it used to be, and that this is harming people's reasonableness, but that there's been no decline outside of that. I feel like I see fewer outright uninformed or stupid arguments than I used to; probably this has to do with faster access to information and to feedback on reasoning. EA and AI risk memes have been doing relatively well in the 2010s. Maybe that's just because they needed some time to germinate, but it's still worth noting.

comment by Jacobian · 2018-11-19T16:02:54.204Z · score: 9 (4 votes) · LW · GW
there used to be just a few newspapers serving all the bubbles

I think that this in particular is a positive factor for quality of discourse / general intelligence, at least for large segments of the population.

Monoculture TV was trending towards 20-minute sitcom episodes, but now that we have HBO/Netflix many people choose to watch 10-hour seasons with complex stories. Radio and TV had talk shows with 5-minute interviews, now millions choose to listen to 2-hour podcasts. National magazines for nerds had to appeal to all nerds in the nation, but now I can choose to read LessWrong.

Certainly some people choose simpler and stupider. But when there was no choice, the mass media had to appeal to the common denominator - maybe not the 1st percentile of simplicity and stupidity but at most the 20th, not the 50th. This means that most people get smarter things when they have freedom to choose more niche media.

comment by greylag · 2018-11-18T21:48:20.847Z · score: 5 (3 votes) · LW · GW

Early Heinlein, because my parents didn't want me reading the later books.

This seems like exceptionally good judgement.

comment by Lambda · 2018-12-12T22:20:19.097Z · score: 1 (1 votes) · LW · GW

What's the boundary between early and late, and why is late bad?

comment by Jotto999 · 2018-11-23T21:55:09.551Z · score: 4 (3 votes) · LW · GW

I'm concerned about the way you're operationalizing this (as in, insufficiently). Could you make this into a concrete prediction about the world? What sort of question about this could we post on Metaculus that wouldn't resolve as ambiguous?

comment by adamzerner · 2018-11-18T17:12:41.137Z · score: 4 (2 votes) · LW · GW

I think it is worth noting that results probably vary a lot based on who you hang out around on the internet. E.g. we LessWrongers have hopefully improved our culturally transmitted software by hanging out around the rationalistsphere, whereas people who browse pictures of Cardi B on Instagram probably haven't.

So as far as looking for a takeaway that you can apply to your personal life goes, I don't think most readers here need to be too worried. I would say that we should be watchful, but not paranoid. Personally, I spend a bit of time on poker forums discussing hands, and I've been extremely frustrated with the quality of conversation and discourse there. I should probably keep an eye on myself to see if anything has rubbed off on me.

comment by greylag · 2018-11-18T22:00:00.582Z · score: 3 (3 votes) · LW · GW

Neuromancer, bleah, what is wrong with this book, it feels damaged, why do people like this, it feels like there's way too much flash and it ate the substance, it's showing off way too hard.

Hypotheses:

  1. This is the millennia-long tension between Enlightenment and Romanticism. Romanticism feels deeply wrong to someone on team Enlightenment, especially when stealing Enlightenment’s science fictional tropes!
  2. A cultural Idea Trap. Great Stagnation gives you Cyberpunk. (Doubtful, suspect events occurred in wrong order)

comment by adamzerner · 2018-11-18T01:21:04.890Z · score: 3 (2 votes) · LW · GW

I hope that the trend of low quality internet content will reverse itself, and settle at a less low quality equilibrium. As an analogy, think about food. We start out at whatever point, then we introduce fast food and processed garbage, people initially respond by consuming too much of it, but then, once they realize how harmful it is, they start to wise up and look for healthier alternatives. (Well, some people do, to some extent.) Same with smoking. With the internet, I think we're just starting to enter the phase where we realize how harmful it is. I don't know enough to say "I'm confident that the internet will follow the same path", but it certainly seems plausible.

Something that makes me feel more optimistic about the internet as opposed to food and smoking is that it is a lot easier to precommit to avoiding things on the internet than it is to precommit to avoiding, say, McDonalds, or a pack of cigarettes. SelfControl is a good example. It allows you to block websites for up to 24 hours at a time. But I see no reason why this sort of thing can't be expanded. Why only 24 hours? Why only blocking websites? What about apps? What about certain types of content, say, on your newsfeed? What about precommitting to, say, five hours per week as opposed to an outright block? I suspect that this sort of software will evolve to be more and more sophisticated.

Of course, there is tremendous economic incentive for the Facebooks of the world to prevent people from using this sort of productivity software, but it seems like a very hard battle to win. How do you prevent people from realizing that they aren't as happy as they used to be before they spent 4 hours a day mindlessly browsing your site and getting into stupid arguments? How do you stop people from precommitting with the click of a button? Maybe I'm just underestimating the ability of internet companies to manipulate us. After all, they've been winning for years, and humans don't have the best track record of plucking low-hanging fruit in their personal lives.

comment by Benquo · 2018-11-19T11:38:49.331Z · score: 10 (2 votes) · LW · GW

Obesity rates in the US haven’t materially declined. Smoking rates declined because of a massive public health campaign orchestrated in large part through the government. And this only happened because the US’s basic info-processing and coordination mechanisms still functioned well enough to reason about the problem. This does not bode well for the US’s ability to repair damage to its info-processing and coordination mechanisms.

comment by Apollo13 · 2018-11-17T21:51:25.508Z · score: 3 (2 votes) · LW · GW

I really like this; however, the term 'hard selecting' or 'selecting harder' is confusing to me. Do you think you could rephrase it for me, so I could understand it better?

comment by Pattern · 2018-11-18T07:55:59.311Z · score: 6 (4 votes) · LW · GW

Quick fictional example:

There are 2 schools, let's call them school A and school B, that will only accept the best of the best. School A will only accept people who were in the top 10% of the people who graduated from their high school. School B says: "Ha! We will only accept people who were the top student in their graduating class."

And then later down the line it is discovered that while the graduates/attendees of school B are 'more brilliant', they also have a higher burnout rate, and more mental health problems.

Why? Maybe in a graduating class the top 10% are hard workers, but the top person might work too hard.

comment by Gurkenglas · 2018-11-17T18:57:20.052Z · score: 3 (2 votes) · LW · GW

This pattern-matches, for me, to how every generation thinks the new form of media will harm the next generation, all the way back to Socrates thinking writing would destroy people's memory.

Does becoming the stereotypically biased geezer seem plausible to you, who wrote that you might hit a brick wall at 40 [LW · GW], or is this outside-view thinking asking the wrong questions [LW · GW]?

comment by Benquo · 2018-11-19T21:26:40.964Z · score: 21 (5 votes) · LW · GW
all the way back to Socrates thinking writing will destroy people's memory

This turned out to be true, of course.

comment by Rob Bensinger (RobbBB) · 2018-11-17T19:26:58.587Z · score: 3 (2 votes) · LW · GW

If people tend to systematically make a certain mistake, then it's worth asking whether there's some causal factor behind it and whether that could be nudging us toward making the same mistake.

On the other hand, our general ability to solve problems and figure things out presumably is either staying the same, or getting worse, or getting better. That's a factual question that we should be able to learn about, and if (after trying to correct for biases) we did end up reaching a conclusion that resembles an old mistake, well, then it's also possible that the truth resembles an old mistake.

comment by Gurkenglas · 2018-11-17T20:19:30.435Z · score: 2 (2 votes) · LW · GW

Correct. A candidate for a common causal factor of blaming the new media is observing that the young people are wrong.

If we taboo the evidence of subjectively observing the people raised by the new media, what remains is the filter bubble effect. It seems in hindsight like a natural rationalization to reach for, because it's one of the few known downsides of the internet. Eliezer, you wrote the book on noticing when you rationalize. Is this a likely story?

Are some parts of the internet more affected by the new effects of the internet than others? Is there a way to test them for their cognitive function as opposed to thinking in ways the previous generation wouldn't approve of?

comment by FeepingCreature · 2018-11-17T11:17:18.215Z · score: 3 (2 votes) · LW · GW

Eh, the gig economy will fix it.

I can't think of any economic model that would more select for the ability to take pieces of cognitive architecture and put them together into novel ways. Weren't you the one who said science was going too slow, and that a true Bayesian should be able to solve shallow problems in, let's say a quarter hour and more complex ones like unified physics in a week? That does not sound "old style of work" to me, but it does moreso sound, amusingly, "glib memetics" - and "startups". Similarly, the Agile model of development is to accept doing damage but be aware of it as you do so - make the feature usable, then move on but put a cleanup task in the backlog. At least where I work, modern-style seems more reactive and demanding of fluidity, and startup/gig work can only increase that. I think we're kind of in a transition phase where the human mind is being effectively operationalized as a target platform, but large parts of the population haven't fully evolved the software to actually manage being treated as a target, and social systems are taking full advantage. But society is also taking advantage of the increased flexibility on offer here, and in the medium run self-awareness will have to catch up to be able to keep up and frontrun the rapidly-evolving memetic environment. At least that's my expectation.

comment by Dagon · 2018-11-20T18:59:27.287Z · score: 2 (1 votes) · LW · GW

This has long been my biggest worry for the future. Pleasant/attractive future experiences of intelligent agents (whether human, artificial, or alien) seem unlikely in highly-competitive environments. This is not because competition itself is unpleasant, but because agents will always seek their own utility (or at least existence if expected utility is positive) over other agents'.

It's far more general than current cultural threats - my fear is that Scott's Moloch is a fundamental truth of individual existence. The repugnant conclusion, however repugnant, is the true conclusion of maximizing overall utility within available resources.

The only out I currently see is to give up individuality (or more likely, to have it taken from us by a more powerful utility monster).

comment by totallybogus · 2018-11-18T15:17:35.646Z · score: 2 (2 votes) · LW · GW

Is Clickbait Destroying Our General Intelligence? You Won't BELIEVE What Comes Next!

(Personally, I don't buy it. I think persuasion technology - think PowerPoint et al., but also possibly new varieties of e.g. "viral" political advertising and propaganda, powered by the Internet and social media - has the potential to be rather more dangerous than BuzzFeed-style clickbait content. If only because clickbait is still optimizing for curiosity and intellectual engagement, if maybe in a slightly unconventional way compared to, e.g., 1960s sci-fi.)

comment by Jameson Quinn (jameson-quinn) · 2018-11-17T15:08:32.743Z · score: 2 (5 votes) · LW · GW

In my anecdotal experience, the kids are OK. At least as OK as we were when I was a kid in the 80s reading SF from the 60s and 70s.

If you want me to take this hypothesis more seriously than that, show more evidence.

comment by kboon · 2018-11-17T09:56:32.239Z · score: 2 (5 votes) · LW · GW

My cultural software tells me to skip over large parts of any long text, if I can sort of guess what the author is trying to say, even though I know I shouldn't. But I'm an old man by internet standards. I at least remember having an attention span.

comment by Benquo · 2018-11-19T11:40:23.470Z · score: 8 (1 votes) · LW · GW

Some of that’s just a rational response to writing targeted at lower levels of literacy.

comment by Peter Smythe (peter-smythe) · 2019-05-07T10:47:30.646Z · score: 1 (1 votes) · LW · GW

Hmmmmmm...I think there has been a cultural shift toward an optimization of one's connections to only include a memetically-purified ingroup based around one's existing beliefs.

comment by greylag · 2018-11-18T21:45:53.015Z · score: 1 (1 votes) · LW · GW

the intense competition to get into Harvard is producing a monoculture of students who've lined up every single standard accomplishment and how these students don't know anything else they want to do with their lives

This is Goodhart’s Law [LW · GW] run riot, yes?

comment by Benito · 2018-11-18T21:47:47.671Z · score: 3 (2 votes) · LW · GW

Yeah, this para was just introducing Goodhart's law in simple language (and without the jargon):

To review, there's a general idea that strong (social) selection on a characteristic imperfectly correlated with some other metric of goodness can be bad for that metric, where weak (social) selection on that characteristic was good. If you press scientists a little for publishable work, they might do science that's of greater interest to others. If you select very harshly on publication records, the academics spend all their time worrying about publishing and real science falls by the wayside.

comment by Peter Smythe (peter-smythe) · 2019-05-07T11:05:31.373Z · score: -1 (2 votes) · LW · GW

Broke: Optimizing your beliefs for predictive capability.

Woke: Optimizing the inclusive memetic fitness of your beliefs on the internet.