Content generation. Where do we draw the line?

post by Q Home · 2022-08-09T10:51:37.446Z · LW · GW · 7 comments

Contents

  The lost message
    "But AI can generate a better message for me! Even the one I wouldn't initially like!"
  Rights to exist
7 comments

If you want to be affected by other people, if you want to live in a culture with other people, then I believe the four statements below are true:

  1. You can't prefer AI-generated content to human-made content 100% of the time. That would basically mean living in the Matrix without other people. Consumerist solipsism: everything you see is generated, but not by other people.

  2. You can't have 100% control over what you experience. For example, if you read someone's story, you let the author control what you experience. If you don't let anybody control what you experience, you aren't affected by other people.

  3. You can't have 100% control over the value of other people's output. Again, this is about control. An attitude like "right now I feel like consuming human-made content, but any minute I may decide that AI-generated content is more valuable and switch to that" makes the value of people's output completely arbitrary, decided by you on a whim.

  4. You can't have 100% control over other people's image and output. If you create countless variations of someone's personality and exploit/milk them to death, it isn't healthy culture-wise. You're violating and destroying the boundaries that allow the other person's personality to exist (inside your mind or inside the culture).

...

But where do we draw the line?

I'm not sure you can mix the culture of content generation/"AI replacement" with human culture. I feel that with every step that weakens the principles above, the damage to human culture will grow exponentially.

The lost message

Imagine a person you don't know. You don't care about them. Even worse, you're put off by what they're saying and don't want to listen. Or maybe you just aren't interested in the "genre" of their message.

But that person may still have a valuable message for you. And that person still has a chance to reach you:

  1. They share their message with other people.
  2. The message becomes popular in the culture.
  3. You notice the popularity. You check out the message again. Or someone explains it to you.

But if any person can switch to AI-generated content at any minute, transmitting the message may become infinitely harder or outright impossible.

"But AI can generate a better message for me! Even the one I wouldn't initially like!"

Then we're back at square one: you don't want to be affected by other people.

Rights to exist

Consciousness and personality don't exist in a vacuum; they need a medium to be expressed in, such as text messages or drawings.

When you say "hey, I can generate your output in any medium!", you're saying "I can deny you existence, I can lock you out of the world".

I'm not sure it's a good idea/fun future.

...

So, I don't really see where this "content generation" is going in the long run. Or even in the very short run (GPT-4 plus DALL-E or "DALL-E 2" for everyone).

"Do you care that your favorite piece of art/piece of writing was made by a human?" is the most irrelevant question that you have to consider. Prior questions are: do you care about people in general? do you care about people's ability to communicate? do you care about people's basic rights to be seen and express themselves?

If "yes", where do you draw the line and how do you make sure that it's a solid line? I don't care if you think "DALL-E good" or "DALL-E bad", I care where you draw the line. What condition needs to break for you to say "wait, it's not what I wanted, something bad is happening"?

If my arguments miss something, it doesn't matter: just tell me where you draw the line. What would you not want to generate, violate, or control? What would be the deal-breaker for you?

7 comments


comment by Yitz (yitz) · 2022-08-09T19:09:56.837Z · LW(p) · GW(p)

I’m not sure I fully agree with you here (primarily because I don’t see art as essential to individual consciousness, although it is arguably essential to any cohesive larger cultural consciousness), but I’m really intrigued by this line of thinking. With regards to my personal line beyond which I’d be uncomfortable with AI generation, it’s already been passed, I think. I could (and likely will) be made significantly more uncomfortable, but as I don’t foresee any existing form of human digital art remaining intractable for near-term technology, I’ve already made the mental leap into assuming the future demise of the commercial artist, and its associated consequences.

Replies from: yitz
comment by Yitz (yitz) · 2022-08-09T19:18:37.668Z · LW(p) · GW(p)

Thinking about it, the commercial artists whom I expect to last the longest (assuming an eventual takeover of all commercial activities by AI is even possible/can happen without killing humanity) are probably going to be in-person actors and sex workers. Movie actors are already being replaced by deepfake doubles (so far mostly to cover actors who died before the end of a franchise, but that will probably change), while real-world animatronics still feel almost as lifeless as they did in the 80s. If your artwork can’t be depicted in digital form without a significant reduction in quality, then my guess is your job will survive longer than others.

Replies from: TAG
comment by TAG · 2022-08-10T17:37:48.316Z · LW(p) · GW(p)

So you expect sports to be taken over by robots? We already know that a car is faster than a person.

Replies from: yitz
comment by Yitz (yitz) · 2022-08-12T01:02:23.306Z · LW(p) · GW(p)

In many essential ways, sports stars are actors. We don’t watch because moving fast or passing a ball around is inherently interesting, but because of the human element and the drama told through the progress of a good sports game/season.

comment by Don Hussey (don-hussey) · 2022-08-09T16:53:22.637Z · LW(p) · GW(p)

REALLY interesting line of thought. May I ask what prompted this?

Replies from: Q Home
comment by Q Home · 2022-08-10T21:55:45.616Z · LW(p) · GW(p)

Just my emotions! And I had an argument about the value of the artists behind the art (Can people value the source of the art? Is it likely that the majority of people would value it?). Somewhat similar to Not for the Sake of Happiness (Alone) [LW · GW]. I decided to put the topic into a more global context (How long can you keep replacing everything with AI content? What does it mean for the connection between people?). I'm very surprised that what I wrote was interesting to some people. What surprised you in my post?

I'm also interested in applying the idea of "prior knowledge" to values (or to argumentation, but not in a strictly probabilistic way). For example, maybe I don't value (human) art that much, or I'm very uncertain about how much I value it. But after considering some more global/fundamental questions ("prior values", "prior questions"), I may decide that I actually value human art quite a lot in certain contexts. I'm still developing this idea.

I feel (e.g. when reading arguments for why AGI "isn't that scary") that there aren't enough ways to describe disagreements. I hope to find a new way to show how and why people arrive at certain conclusions. In this post I tried to show the "fundamental" reasons for my specific opinion (worrying about AI content generation). I also tried to do a similar thing in a post about Intelligence [LW · GW] (I wanted to know if that type of thinking is rational or irrational).

comment by [deleted] · 2022-08-09T17:42:12.425Z · LW(p) · GW(p)

In a collective consciousness/hivemind, the content is mostly determined statistically. AI mostly operates in this domain, with a lot of focus on temporal context and significance. Such a collective consciousness determines what's applicable when. For example, an AI trained purely on information generated during a specific historical period would have vastly different applicability when set to interact in a different temporal context. The realities of human civilizations are very different in different time periods, but you can always find universal patterns throughout. Such patterns can only be identified on a meta level above the temporally dependent details.

In simpler words, the things that change over short periods of time are often at the superficial level of content and engagement. The things that change over longer periods of time mostly happen on the meta level, on top of that superficial level.

Reality itself is very different from the constructed realities of individuals and of the collective consciousness. That's why language has such a significant role in everything we do as humans.

How does someone who takes our reality for granted talk to someone who doesn't? They can only successfully communicate on points where they agree linguistically, even though the underlying context and depth of the subject may be completely different between the two individuals. Since language is a superficial representation of thoughts, and thoughts originate from our own versions of reality/mental models, the two are ultimately not communicating effectively at all, even though they are both uttering words from their mouths.

So what's the point of talking to someone you can't effectively communicate with? Well, at least one person has to change their own linguistic context to match the context of the other person. So the question becomes: how many mental models/realities does an individual typically exercise? Do they context-switch between the models? Do the models overlap? Are some models purely subsets of a bigger context, even though the encompassing model doesn't exist in that person's mind? This is essentially the root of tribalism: people whose mental models differ from those of out-group members but are shared with in-group members. These models may all exist as subsets of some abstract larger model, even if such a model doesn't actually exist in any individual at a given point in time. I think AIs essentially are these larger abstract models, or an ensemble of them.