post by [deleted] · GW

This is a link post for

Comments sorted by top scores.

comment by gull · 2024-02-17T23:16:10.912Z · LW(p) · GW(p)

Have you read Cixin Liu's The Dark Forest, the sequel to The Three-Body Problem? The situation on the ground might be several iterations more complicated than you're predicting.

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2024-02-17T23:33:37.052Z · LW(p) · GW(p)

Strong upvoted! That's the way to think about this.

I read The Three-Body Problem, but not the rest yet (you've guessed my password; I'll go buy a copy).

My understanding of the situation here on the real, not-fake Earth is that having the social graph this visible and manipulable by invisible hackers does not improve the situation.

I tried clean and quiet solutions, and they straight-up did not work at all. Social reality is a mean motherfucker, especially when it's self-reinforcing, so it's not surprising to see somewhat messy solutions become necessary.

I think I was correct to spend several years (since early 2020) trying various clean and quiet solutions, and watching them not work, until I started to get a sense of why they might not be working.

Of course, maybe the later stages of my failures were just one more case of a person falling through the cracks of the post-FTX Malthusian environment, which twisted EA and AI safety culture out of shape and made it difficult for a lot of people to process information about X-risk, even in cases like mine where the price tag was exactly $0.

I could have waited longer and made more attempts, but that would have meant sitting quietly through more years of slow takeoff while the situation probably went unfixed.

Replies from: StartAtTheEnd, gull
comment by StartAtTheEnd · 2024-02-18T01:24:57.626Z · LW(p) · GW(p)

Could transparency/openness of information be a major factor?

I've noticed that video games become much worse as a result of the visibility of data. With wikis, built-in search, automatic markets, and other such things, metas (as in meta-gaming) form quickly. The optimal strategies become rather easy to find, and people start exploiting them as a matter of course.

Another example is dating. Compare modern dating apps to the 1980s: dating used to be much less algorithmic. You didn't run people through a red-flag checklist; you just spent time with them and evaluated how enjoyable that was.

I think the closed-information trait is extremely valuable, as it can actually defeat Moloch. Or, more accurately, the world seems to be descending into an unfavorable Nash equilibrium as a result of optimal strategies being visible.
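
As a toy illustration of that kind of unfavorable equilibrium, here is a minimal sketch with made-up prisoner's-dilemma payoffs (my own assumption, not anything from the games or apps above): once the "optimal" strategy is visible to everyone, mutual defection is the only stable outcome, even though it's worse for both players than cooperation.

```python
import itertools

# Classic prisoner's-dilemma payoffs (illustrative numbers only):
# (row_action, col_action) -> (row_payoff, col_payoff)
payoff = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def is_nash(row: str, col: str) -> bool:
    """True if neither player gains by unilaterally switching actions."""
    row_best = all(payoff[(row, col)][0] >= payoff[(alt, col)][0] for alt in "CD")
    col_best = all(payoff[(row, col)][1] >= payoff[(row, alt)][1] for alt in "CD")
    return row_best and col_best

# Only (D, D) survives, despite (C, C) paying more to everyone.
print([p for p in itertools.product("CD", repeat=2) if is_nash(*p)])  # [('D', 'D')]
```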

By the way, the closed-information vs. open-information duality can be compared to ribbonfarm's Warrens vs. Plazas view of social spaces (not sure if you know about that article).

comment by gull · 2024-02-18T00:05:19.165Z · LW(p) · GW(p)

So you read The Three-Body Problem but not The Dark Forest. Now that I think about it, that actually goes quite a long way toward putting the rest into context. I'm going to go read about conflict/mistake theory [? · GW] and see if I can get into a better headspace to make sense of this.

comment by gjm · 2024-02-17T23:16:15.379Z · LW(p) · GW(p)

"Regression to the mean" is clearly an important notion in this post, what with being in the title and all, but you never actually say what you mean by it. Clearly not the statistical phenomenon of that name, as such.

(My commenting only on this should not be taken to imply that I find the rest of the post reasonable; I think it's grossly over-alarmist and like many of Trevor's posts treats wild speculation about the capabilities and intentions of intelligence agencies etc. as if it were established fact. But I don't think it likely that arguing about that will be productive.)

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2024-02-17T23:40:50.978Z · LW(p) · GW(p)

Yes, it means inducing conformity: making people more similar to the average person while they are in the controlled environment.

That is currently the best way to improve data quality when you are analyzing something as complicated as a person. Even if you somehow got secure copies of all the sensor data from every smartphone, at the current technology level you're still better off controlling for variables wherever possible, including within the user's mind.

For example, consider the trance state people go into when they use social media. In theory you get more information from smarter people when they are being thoughtful, but with modern systems it's best to keep their thoughts simple, so their behavior can be compared to that of the simpler people who make up the vast majority of the data (and to make them lose track of time until 1-2 hours pass, around when the system makes them feel like leaving the platform, which is obviously a trivial task).
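
To make the "controlling for variables" point concrete, here is a minimal sketch (the model and all the numbers are illustrative assumptions of mine, not a claim about any real platform): if each user's behavior is a shared baseline plus an individual quirk plus noise, then shrinking the quirk spread, i.e. inducing conformity, makes population-level predictions much more accurate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users = 10_000
population_mean = 0.0  # shared baseline behavior (arbitrary units)

def prediction_error(quirk_spread: float) -> float:
    """RMSE of predicting each user's behavior with the population mean."""
    quirks = rng.normal(0.0, quirk_spread, n_users)  # stable individual differences
    noise = rng.normal(0.0, 1.0, n_users)            # moment-to-moment variation
    behavior = population_mean + quirks + noise
    return float(np.sqrt(np.mean((behavior - population_mean) ** 2)))

print(prediction_error(quirk_spread=3.0))  # diverse users: error ~3.2
print(prediction_error(quirk_spread=0.5))  # conformed users: error ~1.1
```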

EDIT: this was a pretty helpful thing to point out; I replaced every instance of the phrase "regression to the mean" with "mediocrity" or "inducing mediocrity".

Replies from: gjm
comment by gjm · 2024-02-18T00:55:45.995Z · LW(p) · GW(p)

Let us suppose that social media apps and sites are, as you imply, in the business of trying to build sophisticated models of their users' mental structures. (I am not convinced they are -- I think what they're after is much simpler -- but I could be wrong; they might be doing that in the future even if not now, and I'm happy to stipulate it for the moment.)

If so, I suggest that they're not doing that just in order to predict what the users will do while they're in the app / on the site. They want to be able to tell advertisers "_this_ user is likely to end up buying your product", or (in a more paranoid version of things) to be able to tell intelligence agencies "_this_ user is likely to engage in terrorism in the next six months".

So inducing "mediocrity" is of limited value if they can only make their users more mediocre while they are in the app / on the site. In fact, it may be actively counterproductive. If you want to observe someone while they're on TikTok and use those observations to predict what they will do when they're not on TikTok, then putting them into an atypical-for-them mental state that makes them less different from other people while on TikTok seems like the exact opposite of what you want to do.

I don't know of any good reason to think it at all likely that social media apps/sites have the ability to render people substantially more "mediocre" permanently, so as to make their actions when not in the app / on the site more predictable.

If the above is correct, then perhaps we should expect social media apps and sites to be actively trying not to induce mediocrity in their users.

Of course it might not be correct. I don't actually know what changes in users' mental states are most helpful to social media providers' attempts to model said users, in terms of maximizing profit or whatever other things they actually care about. Are you claiming that you do? Because this seems like a difficult and subtle question involving highly nontrivial questions of psychology, of what can actually be done by social media apps and sites, of the details of their goals, etc., and I see no reason for either of us to be confident that you know those things. And yet you are happy to declare with what seems like utter confidence that of course social media apps and sites will be trying to induce mediocrity in order to make users more predictable. How do you know?

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2024-02-18T01:39:24.484Z · LW(p) · GW(p)

Yes, this is a sensible response. Have you seen Tristan Harris's documentary The Social Dilemma? It's a great introduction to some of the core concepts, though not everything.

Modelling users' behavior is not possible with normal data science, or for normal firms with normal data security, but it is something that very large, semi-sovereign firms like the Big 5 tech companies would have a hard time not doing, given such large and diverse sample sizes. Modelling minds well enough to predict people based on other people is far less deep than it sounds; it's largely a side effect of comparing people to other people across sufficiently large sample sizes. The dynamic is described in this passage [LW · GW] I've cited previously.
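
A minimal sketch of "predicting people based on other people" (my own illustrative nearest-neighbor setup with invented data, not a description of any actual company's pipeline): with a large enough sample, you can estimate a user's behavior by averaging the behavior of the users most similar to them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Rows are users, columns are observed behaviors; all data invented for illustration.
behavior = rng.random((1000, 10))

def predict_from_peers(user_vec: np.ndarray, others: np.ndarray, k: int = 20) -> np.ndarray:
    """Estimate a user's behavior vector by averaging their k most similar peers."""
    norms = np.linalg.norm(others, axis=1) * np.linalg.norm(user_vec)
    sims = others @ user_vec / np.maximum(norms, 1e-12)  # cosine similarity to each peer
    nearest = np.argsort(sims)[-k:]                      # indices of the k most similar
    return others[nearest].mean(axis=0)

estimate = predict_from_peers(behavior[0], behavior[1:])
print(np.round(estimate, 2))
```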

Generally, inducing mediocrity while on the site is a high priority, but it's mainly about numbness and suppressing higher thought, e.g. the kinds referenced in Critch's takeaways on CFAR [LW · GW] and the Sequences. They want the reactions to content to emerge from your true self, but they don't want any of the other stuff that comes from higher thinking or self-awareness.

You're correct that an extremely atypical mental state on the platform would damage the data (I notice this makes me puzzled about "doomscrolling"); however, what they're aiming for is a typical state for all users (plus whatever keeps them akratic while off the platform), and for elite groups like the AI safety community, the typical state for the average user is quite a downgrade.

Advertising was the big thing last decade, but with modern systems stable growth is the priority, and maximizing ad purchases would harm users in a visible way, so finding the sweet spot is easy if you just don't put much effort into ad matching (also, users noticing that the advertising is predictive creeps them out, the same issue as making people use the platform 3-4 hours a day). Acquiring and retaining large numbers of users is far harder and far more important, now that systems are advanced enough to compete more against each other (less predictable) than against the user's free time (more predictable, especially with so much user data collected during scandals, though all kinds of things could still happen).

On the intelligence agency side, the big players are probably more interested by now in public sentiment about Ukraine, NATO, elections/democracy, COVID, etc., than in causing or preventing domestic terrorism (though I might be wrong about that).

Happy to talk or debate further tomorrow.

Replies from: gjm
comment by gjm · 2024-02-18T03:06:15.264Z · LW(p) · GW(p)

Once again you are making a ton of confident statements and offering no actual evidence. "is a high priority", "they want", "they don't want", "what they're aiming for is", etc. So far as I can see you don't in fact know any of this, and I don't think you should state things as fact that you don't have solid evidence for.

Replies from: TrevorWiesinger, andrew-burns
comment by trevor (TrevorWiesinger) · 2024-02-18T20:37:39.663Z · LW(p) · GW(p)

They want data. They strongly prefer data on elites (and data useful for analyzing and understanding elite behavior) over data on commoners.

We are not commoners.

These aren't controversial statements, and if they are, they shouldn't be.

comment by Andrew Burns (andrew-burns) · 2024-02-18T05:04:03.993Z · LW(p) · GW(p)

Whenever someone uses "they," I get nervous.