Converging toward a Million Worlds

post by Joe Kwon · 2021-12-24T21:33:52.732Z


Epistemic status: an unfleshed-out thought that's been brewing in my mind recently, but one that feels like it might be important. At the very least, I'm hoping to get pointers to writings and conversations that might've already transpired on this topic.

Inspired by recent trends in social media, upcoming tech in VR/AR, and some older posts about the complexity of human values and the distinction between the conscious and unconscious mind.

A lot of technologies deployed today, including social media, have a core incentive to maximize attention. This can be achieved by doing "good" things, e.g. showing interesting and relevant content to users. But I feel wary about this trend of increasing capabilities for capturing attention, specifically the possibility that we might warp-speed into a world in which people are increasingly stripped of agency and farmed according to values they are never given a chance to sufficiently evaluate.

This tugs at something that is inconspicuous (to most) yet crucial to the human experience: if our subconscious and conscious utilities differ, to what extent is this phenomenon of attention capture asymmetrically targeting and exploiting subconscious "feel-goods" and values? To clarify, I think the Experience Machine thought experiment is a fair analogy at the extreme. More concretely: maybe I'm joyful and having lots of laughs from viewing extremely optimized TikTok content for 16 hours a day. And yet, on days when I feel like I have especially high executive control and awareness, I realize that this is a bad behavior to engage in, because I desire certain higher-level goals and a kind of fulfillment that is hindered by being sucked into entertaining content every single day.

In a non-social-media context, mobile video games have a huge userbase nowadays, especially among younger kids. I've played a few, and anecdotally I can say that they're incredibly well optimized to take advantage of subconscious and impulsive behaviors to maximize earnings: various in-app purchases, "gacha" mechanisms that amount to veiled gambling for kids, etc. In middle school and high school I was obsessed with many of these games, and in hindsight I realize that at a time when I hadn't yet considered my values and goals, or experienced and learned more broadly about what life is like, these games enamored me with feelings of artificial accomplishment and status that I couldn't as easily access in my real life.

We understand pretty little about human values (which are incredibly complex). I find it convincing that what we want consciously and what our brain rewards and pushes toward unconsciously are different, and I think the modern world exploits these misalignments in values and is going to cause a larger divergence between them. I worry that we're moving really quickly into a reality where tons of people will experience increased misalignment due to less agency at the conscious/executive level (from the ever-more-powerful technological machinery popping up all around them). This is especially worrying to me when I think about how people are sucked into these games and technologies at such a young age.

The title of this post points to what I think is a very plausible future reality, and to a more general class of this phenomenon of divergence in values. Virtual spaces are becoming a more present, relevant, and engaging part of life for many people. It's incredible how many people's lives are so heavily embedded purely in virtual spaces (lots of kids spend more time on TikTok, metaverse-type games, Discord, etc. than outside their rooms). This isn't necessarily a bad thing, and social media is amazing for all sorts of reasons, but maybe more scrutiny should fall on the consequences of a world in which huge chasms of realities sit atop dramatically distinct optimized values. As the number of influential technology environments and mechanisms grows, and the optimizing power for capturing attention (a function of values, which differ a lot both between and within people) increases, it feels like people will start to get sucked into islands of virtual spaces, with little opportunity to evaluate what's happening to them or whether they're happy with the values they're being pulled toward.

I'm curious which of these ideas and guesses people agree with, which they disagree with, and what I got totally wrong. I'd also welcome general thoughts on how governance structures might keep up with a world that looks like this. Is this okay? Is it scary? Are we prematurely over-optimizing on values that we should really be trying to understand better?

One last small point that's maybe political: the demographics of each such reality, i.e. which groups are most "sucked in" to various virtual worlds, may be significantly dictated by those creating this attention-capture machinery and these spaces. Or it's possible they're creating these things without much thought about the consequences. TikTok is super entertaining and fun, but do the executives of TikTok want their own children spending several hours a day on it, like millions of American kids do, or would they rather their children spend their time differently? I've read that Steve Jobs and Bill Gates limited screen time for their kids. Maybe they were keeping their kids from being warped into a world with a certain set of values and desires that would hinder them from moving toward a different set of goals.

1 comment


comment by Joe Kwon · 2022-02-18T19:13:45.865Z

In case anyone stumbles across this post in the future: I found these older posts arguing both for and against some of the worries I gloss over here. I don't think my post boils down entirely to "recommender systems should be better aligned with human interests", but that is a big theme.

https://forum.effectivealtruism.org/posts/xzjQvqDYahigHcwgQ/aligning-recommender-systems-as-cause-area

https://www.alignmentforum.org/posts/TmHRACaxXrLbXb5tS/rohinmshah-s-shortform?commentId=EAKEfPmP8mKbEbERv