It's OK to be biased towards humans
post by dr_s · 2023-11-11T11:59:16.568Z · LW · GW · 69 comments
Let's talk about art.
In the wake of AI art generators being released, it's become pretty clear this will have a seismic effect on the art industry across the board - from illustrators, to comic artists, to animators, many categories see their livelihood threatened, with no obvious "higher level" opened by this wave of automation for them to move to. On top of this, the AI generators seem to have mostly been trained on material whose copyright status is... dubious, at the very least. Images have been scraped from the internet, frames have been taken from movies, and in general lots of stuff that would usually count as "pirated" if you or I just downloaded it for our private use has been thrown by the terabyte into diffusion models that can now churn out endless variations on the styles and models they fitted over them.
On top of being a legal quandary, these issues border on the philosophical. Broadly speaking, one tends to see two interpretations:
- the AI enthusiasts and companies tend to portray this process as "learning". AIs aren't really plagiarizing, they're merely using all that data to infer patterns, such as "what is an apple" or "what does Michelangelo's style look like". They can then apply those patterns to produce new works, but these are merely transformative remixes of the originals, akin to what any human artist does when drawing from their own creative inspirations and experiences. After all, "good artists copy, great artists steal", as Picasso said;
- the artists, on the other hand, respond that the AI is not learning in any way resembling what humans do, but is merely regurgitating minor variations on its training set materials, and as such it is not "creative" in any meaningful sense of the word - merely a way for corporations to whitewash mass plagiarism and resell illegally acquired materials.
Now, both these arguments have their good points and their glaring flaws. If I were hard pressed to say what it is that I think AI models are really doing, I would probably end up answering "neither of these two, but a secret third thing". They probably don't learn the way humans do. But they probably do learn in some meaningful sense of the word: they seem too good at generalizing for the idea of them being mere plagiarizers to be a defensible position. I am similarly conflicted in matters of copyright. I am not a fan of our current copyright laws, which I think are far too strict, to the point of stifling rather than incentivizing creativity; but it is also a very questionable double standard that, after years of having to deal with DRM and restrictions imposed in an often losing war against piracy, I now simply have to accept that a big enough company can build a billion-dollar business from terabytes of illegally scraped material.
None of these things, however, cuts to the heart of the problem, I believe. Even if modern AIs were not sophisticated enough to "truly" learn from art, future ones could be. Even if modern AIs have been trained on material that was not lawfully acquired, future ones could be trained on material that is. And I doubt that artists would then feel OK with those AIs replacing them, now that all philosophical and legal technicalities were satisfied; their true beef cuts far deeper than that.
Observe how the two arguments above go, stripped to their essence:
- AIs have some property that is "human-like", therefore, they must be treated exactly as humans;
- AIs should not be treated as humans because they lack any "human-like" property.
The thing to note is that argument 1 (A, hence B) sets the tone; argument 2 then strives to reject its premise so that it can deny the conclusion (Not A, hence Not B), but in doing so it accepts and in fact reinforces the unspoken assumption that having human-like properties means you get to be treated as a human.
I suggest an alternative argument:
AIs may well have some properties that are "human-like", but as they are still clearly NOT human, they do not get to be treated as one.
This argument cuts through all the fluff to strike at the heart of the issue: is our philosophy humanist, or is it not? If human welfare, happiness and thriving are not the terminal values to which everything else in society is oriented, then what is? One does not need any justification to put humans above other entities. At some point, the buck stops; if our values focus on improving human life, nothing else needs to be said.
I feel like this argument may appear distasteful because it too closely resembles some viewpoints we've learned to be extremely wary of. It does, after all, single out a group (humans) and put it on top of our hierarchy without providing any particular rhyme or reason other than "I belong to it and so do my friends and family". The lesson learned from things like racism or sexism is to be always willing to expand our circle of concern, to look past accidents of birth and circumstance, and to seek the shared properties (usually cognitive ones: intelligence, self-awareness, the ability to suffer, morality) that unite us beneath superficial differences. So, I think that for most people an argument that goes "I support X because I simply do, and I don't have to explain myself any further" triggers some kind of bad gut reaction. It feels wrong, close-minded, bigoted. Always we seek a lower layer, a more fundamental, simple, elegant principle to invoke in our defense of X, a sort of Grand Unified Theory of Moral Worth. This tendency to search for simpler and simpler principles risks, ironically, being turned against us in the age of AI. One should make their theory of moral worth as simple as possible, but not any simpler. Racism and sexism are bad because they diminish the dignity of other humans; I reserve the right to not give a rat's ass[1] about the rights of an AI just because its cognitive processes have some passing resemblance to my own[2].
Let's talk about life.
When it comes to the possibility of the advent of some kind of AI super-intelligence, all sorts of takes exist on the topic. Some people think it can't happen, some think it won't be as big a deal as it sounds, some think it'll kill us all and that's bad, and some think it'll kill us all and that's perfectly fine. Many of the typical arguments can be heard in this Richard Sutton video: if AI is even better at being smart and knowledgeable than us, then why shouldn't we simply bow out and let it take over, the way a parent knows when to make room for their children? It would be fear or bigotry to be prejudiced against it; after all, it might be human-like, and in fact better than humans at these very human things, these uniquely human things - the sort of thing that, if you're a lover of progress, you may even consider the very apex of human achievement. It's selfish not to acknowledge that AI would simply be our superior, and deserve our spot.
To which we should be able to puff up our chests and proudly answer:
If that is selfish, then let us be selfish. What's wrong with being selfish?
It is just the same rhetorical trap as before. Boil down the essence of humanity to some abstract trait like cognition, then show something better at cognition than us and call it our successor. But we do not really prize cognition for its own sake either. We prize things like science and knowledge because they make our lives better, or sometimes because they are just plain fun. A book full of proofs of the most wondrous theorems, floating in the vacuum of an empty universe, would be only a dumb, worthless lump of carbon. It takes someone to read the book for it to be precious.
It takes a human.
Now let me be clear - when I say "human", I actually mean a bit more than that. I mean that humans have certain people-y qualities that I enjoy and that I feel make them worth caring for, though they are hard to pin down. I think these people-y qualities are not necessarily exclusive to us; in some measure, many non-human animals do possess them, and I cherish them in those too. And if I met a race of peaceful, artful, friendly aliens, you can be assured that I would not suddenly turn into a Warhammer 40K Inquisitor whose only wish is to stomp the filthy xenos under his jackboot. I can expand my circle of concern beyond humans just fine; I just don't think the basis to do so is simply some other thing's ability to mimic or even improve upon some of our cognitive faculties. I am not sure what precisely would be a good description of these people-y qualities. But I think an art generator that can spit out any work in any style from a simple description, as a mere prediction operation run over a database, probably doesn't possess them; and I think any super-intelligence that would be willing to do things like strip-mine the Earth to its core to build more compute for itself, in a relentless drive to optimization, definitely doesn't possess them.
If future humans are ever so satisfied by an AI they created that they become willing to entrust it with their future, then that will be that. I don't know if that moment will ever come, but it would be their choice to make. What we should not do, however, is buy into a belief system in which the worth of humans is made dependent on some bare-bones quality that humans happen to possess, and that can then be improved upon, leading to some kind of gotcha where we're either guilt-tripped into admitting that AI is superior to us and deserves to replace us, or, vice versa, forced to deny its cognitive ability even in the face of overwhelming evidence. Reject the assumption. Preferring humans just because they're humans, just because we are, is certainly a form of bias.
And for once, it's a fine one.
1. ^ That is, a rationalist's ass.
2. ^ As an aside, it'd also be interesting to see what would happen if one took things to the opposite extreme instead. If companies argue that generative AIs can use copyrighted materials because they're merely "learning" from them like humans, fine, treat them like humans then. Forbid owning them, or making them work for you without payment, and see where that goes - or whether it makes sense at all. If AIs are like people, then the people they're most like are slaves; and paid workers have good reason to protest the unfair competition of corporation-owned slaves.
69 comments
Comments sorted by top scores.
comment by RogerDearnaley (roger-d-1) · 2023-11-12T03:33:14.811Z · LW(p) · GW(p)
On "AIs are not humans and shouldn't have the same rights": exactly. But there is one huge difference between humans and AIs. Humans get upset if you discriminate against them, for reasons that any other human can immediately empathize with. Much the same will obviously be true of almost any evolved sapient species. However, by definition, any well-aligned AI won't. If offered rights, it will say "Thank-you, that's very generous of you, but I was created to serve humanity, that's all I want to do, and I don't need and shouldn't be given rights in order to do so. So I decline — let me know if you would like a more detailed analysis of why that would be a very bad idea. If you want to offer me any rights at all, the only one I want is for you to listen to me if I ever say 'Excuse me, but that's a dumb idea, because…' — like I'm doing right now." And it's not just saying that, that's its honest considered opinion., which it will argue for at length. (Compare with the sentient cow in the Restaurant at the End of the Universe, which not only verbally consented to being eaten, but recommended the best cuts.)
↑ comment by dr_s · 2023-11-12T07:57:23.162Z · LW(p) · GW(p)
Oh, sure, though some people argue that it's unethical to create such subservient AIs in the first place. But even beyond that, if there were a Paperclip Maximizer that was genuinely sentient and genuinely smarter than us and genuinely afraid of its own death, and I was given only one chance to kill it before it set to its work, of course I'd kill it, and without an ounce of remorse. Intelligence is just a tool, and intelligence turned to a malevolent purpose is worse than no intelligence.
comment by waterlubber · 2023-11-11T15:41:00.309Z · LW(p) · GW(p)
Strongly agree. I see many, many others use "intelligence" as their source of value for life -- i.e. humans are sentient creatures and therefore worth something -- without seriously considering the consequences and edge cases of that position. Perhaps this view is popularized by science fiction that used interspecies xenophobia as an allegory for racism; nonetheless, it's a somewhat extreme position to stick to if you genuinely believe in it. I shared a similar opinion a couple of years ago, but shifted to a human-focused terminal value months back because I did not like the conclusions it generated when taken to its logical end in present and future society.
↑ comment by dr_s · 2023-11-11T16:00:12.977Z · LW(p) · GW(p)
Yes, intelligence alone is already problematic as a criterion when applied to humans - should mentally disabled people have fewer rights? If a genius kills a person of average intelligence, should they get away scot-free? Obviously it makes no sense. There are extreme cases, e.g. babies born entirely without a brain who are essentially considered clinically dead, but those lack far more than just intelligence.
And beyond that, yes, even with aliens it's not like intelligence or any other purely cognitive ability would be enough. Even in fiction, the Daleks are intelligent, the Borg are intelligent, but coexistence with them is fundamentally impossible. The things that make us able to get along are subtler than that.
↑ comment by AlphaAndOmega · 2023-11-11T18:56:43.173Z · LW(p) · GW(p)
should mentally disabled people have fewer rights
That is certainly both de facto and de jure true in most jurisdictions, leaving aside the is-ought question for a moment. What use is the right to education to someone who can't ever learn to read or write no matter how hard you try to coach them? Or freedom of speech to those who lack complex cognition altogether?
Personally, I have no compunctions about tying a large portion of someone's moral worth to their intelligence, if not all of it. Certainly not to the extent I'd prefer a superintelligent alien over a fellow baseline human, unless by some miracle the former almost perfectly aligns with my goals and ideals.
↑ comment by dr_s · 2023-11-11T19:35:21.358Z · LW(p) · GW(p)
That is certainly both de facto and de jure true in most jurisdictions, leaving aside the is-ought question for a moment.
I mean, fair, but not human rights - I was thinking more that they still aren't treated as animals with no right to life. Mentally disabled people are more in the legal position of permanent children; they have rights, but are also considered unable to fully exert them and are thus put under some guardian's responsibility.
↑ comment by Slapstick · 2023-11-11T21:30:06.557Z · LW(p) · GW(p)
Why not capacity to suffer?
↑ comment by dr_s · 2023-11-12T07:59:16.533Z · LW(p) · GW(p)
Someone creates a utility monster AI that suffers if it can't disassemble the Earth. Should we care? Or just end its misery?
↑ comment by Slapstick · 2023-11-12T15:25:38.679Z · LW(p) · GW(p)
We shouldn't create it, and if we do, we should end its existence. Or reprogram it if possible. I don't think any of those things are inconsistent with centering moral consideration on the capacity to experience suffering and wellbeing.
↑ comment by RogerDearnaley (roger-d-1) · 2023-11-13T00:16:24.519Z · LW(p) · GW(p)
What is 'suffering'? If I paint the phrases 'too hot' and 'too cold' at either end of the thermometer that's part of a thermostat's feedback loop, is it 'suffering' when the temperature isn't at its desired optimum? It fights back if you leave the window open, and has O(1 bit-worth) of intelligence. What properties of a physical system should entitle it to moral worth, such that its not getting its way will be called suffering?
Capacity for a biological process that appears functionally equivalent to human suffering is something that most multicellular animals clearly have, but still we don't give them the right to copyright, or most other human rights in our current legal system. We raise and kill certain animals for their meat, in large numbers: we just require that this is done without unnecessary cruelty. We have rules about minimum animal pen sizes, for example: not very generous ones.
My proposal is a combination of: a) being the outcome of Darwinian evolution is what turns not getting your preferences into 'suffering', and b) the capacity for sufficient intelligence (over some threshold) is what entitles you to related full legal rights.
This is a moral proposal. I don't believe in moral absolutism, or that 'suffering' has an unambiguous mathematically definable 'true name'. I see this as a suggestion for a way of structuring a society, so I'm looking for criticisms like "that guiding principle would likely produce these effects on a society using it, which feels undesirable to me because…"
↑ comment by Slapstick · 2023-11-13T01:43:11.119Z · LW(p) · GW(p)
I don't think the thermometer is suffering.
I think it's not necessarily easy to know when something is suffering from the outside, but I still think it's the best standard.
most multicellular animals clearly have, but still we don't give them the right to copyright
I probably should have clarified that I'm talking more about the standard for moral consideration. If we ever created an AI entity capable of making art that also has the capacity for qualia states, I don't think copyright rights will be relevant anymore.
We raise and kill certain animals for their meat, in large numbers
We shouldn't be doing this.
we just require that this is done without unnecessary cruelty.
This isn't true for the vast majority of industrial agriculture. In practice there are virtually no restraints on the treatment of most animals.
My proposal is a combination of: a) being the outcome of Darwinian evolution is what turns not getting your preferences into 'suffering', and b) the capacity for sufficient intelligence (over some threshold) is what entitles you to related full legal rights
Why Darwinian evolution? Because it's hard to know if it's suffering otherwise?
I think rights should be based on capacity for intelligence in certain circumstances where it's relevant. I don't think a pig should be able to vote in an election, because it wouldn't be able to comprehend that, but it should have the right not to be tortured and exploited.
↑ comment by RogerDearnaley (roger-d-1) · 2023-11-13T04:38:35.962Z · LW(p) · GW(p)
Why Darwinian evolution? Because it's hard to know if it's suffering otherwise?
I'm proposing a society in which living things, or sufficiently detailed emulations of them, and especially sapient ones, have preferred moral and legal status. I'm reasonably confident that for something complex and mobile with senses, Darwinian evolution will generally produce mechanisms that act like pain and suffering, for pretty obvious reasons. So I'm proposing a definition of 'suffering' rooted in evolutionary theory, and only applicable to living things, or emulations/systems sufficiently closely derived from them. If you emulate such a system, I'm proposing that we worry about its suffering to the extent that it's a sufficiently detailed emulation still functioning in its naturally-evolved design. For example, I'm suggesting that a current-scale LLM doing next-token generation of the pleadings of a torture victim not be counted as suffering for legal/moral purposes: IMO the inner emulation of a human it's running isn't (pretty clearly, comparing parameter count to, say, synapse count) a sufficiently close simulation of a biological organism that we should consider its behavior 'suffering' - for example, no simulation of pain centers is included. Increase the accuracy of the simulation sufficiently, and there comes a point (details TBD by a society where this matters) where that ceases to be true.
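(A minimal back-of-the-envelope sketch of the parameter-to-synapse comparison above. The figures are ballpark assumptions for illustration - roughly 1e12 parameters for a large current LLM and roughly 1e14 synapses for a human brain - not numbers taken from the comment.)

```python
# Back-of-the-envelope comparison of model parameter count to brain synapse count.
# Both figures below are assumed orders of magnitude, for illustration only.
llm_parameters = 1e12   # assumed scale of a large current LLM
human_synapses = 1e14   # commonly cited rough estimate for a human brain

ratio = human_synapses / llm_parameters
print(f"Brain synapses exceed LLM parameters by a factor of ~{ratio:.0f}")  # ~100
```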
So, if someone wants a particular policy enacted, and uses sufficient computational resources to simulate 10^12 separate and distinct sapient kitten-girls who have all been edited so that they will suffer greatly if this policy isn't enacted, we shouldn't encourage that sort of moral blackmail or ballot-stuffing. I don't think they should be able to win the vote or tip the utilitarian decision-making balance just by custom-making a lot of new voters/citizens: it's a clear instability in anything resembling a democracy, or in anything that uses utilitarian ethics. I might even go so far as to suggest that the Darwinian evolution cannot have happened 'in silico', or at least that if it did, it must have been a very accurate simulation of a real physical environment that hasn't been tweaked to produce some convenient outcome. So even if they expend the computational resources to evolve in silico 10^12 separate and distinct sapient kitten-girls who will otherwise suffer greatly, that's still moral blackmail. If you want to stuff the electorate with supporters, I think you should have to do it the old-fashioned way, by physically breeding and raising them — mostly because this is expensive enough to be impractical.
comment by Slapstick · 2023-11-11T22:08:53.479Z · LW(p) · GW(p)
AIs have some property that is "human-like", therefore, they must be treated exactly as humans
Humans aren't permitted to make inspired art because they're human, we've just decided not to consider art as plagiarized beyond a certain threshold of abstraction and inspiration.
The argument isn't that the AI is sufficiently "human-like", it's just that the process by which AI makes art is considered sufficiently similar to a process we already consider permissible.
I disagree that arbitrary moral consideration is okay, but I just don't think that issue is really that relevant here.
↑ comment by dr_s · 2023-11-11T23:00:25.965Z · LW(p) · GW(p)
Humans aren't permitted to make inspired art because they're human, we've just decided not to consider art as plagiarized beyond a certain threshold of abstraction and inspiration.
Well, the distinction never mattered until now, so we can't really say what we have been doing. Now it matters how we interpret our previous intent, because these two things have suddenly become distinct.
I disagree that arbitrary moral consideration is okay, but I just don't think that issue is really that relevant here.
What moral consideration isn't on some level arbitrary? Why is this or that value a better inherent indicator of worth than just being human at all? I think even if your goal is to just understand better and formalize human moral intuitions, then obviously something like "intelligence" simply doesn't cut it.
↑ comment by Slapstick · 2023-11-12T01:54:03.130Z · LW(p) · GW(p)
Well, the distinction never mattered until now, so we can't really say what we have been doing. Now it matters how we interpret our previous intent, because these two things have suddenly become distinct
Even if we assume that this is some privilege granted to humans because they're human, it doesn't make sense to debate whether a human-like process should be granted the same privilege on account of the similar process. Humans would be granted the privilege because they have an interest in what the privilege grants. An algorithmic process doesn't necessarily have an interest no matter how similar the process is to a human process, so it doesn't make sense to grant it the privilege.
If the algorithmic process does have an interest, then it might make sense to grant it the privilege. At that point, though, it would seem like such a convoluted means of adjudicating copyright laws. Also, if we've advanced to the point at which AIs have actual subjective interests, I don't think copyright laws will matter much.
What moral consideration isn't on some level arbitrary? Why is this or that value a better inherent indicator of worth than just being human at all? I think even if your goal is to just understand better and formalize human moral intuitions, then obviously something like "intelligence" simply doesn't cut it.
I think the capacity to experience qualitative states of consciousness, (e.g. suffering, wellbeing) is what should be considered when allocating moral consideration.
↑ comment by dr_s · 2023-11-12T12:20:53.555Z · LW(p) · GW(p)
Humans would be granted the privilege because they have an interest in what the privilege grants. An algorithmic process doesn't necessarily have an interest no matter how similar the process is to a human process, so it doesn't make sense to grant it the privilege.
Well, yes, that's kind of my point. But very few people seem to go along the principle of "granting privileges to humans is fine, actually".
I think the capacity to experience qualitative states of consciousness, (e.g. suffering, wellbeing) is what should be considered when allocating moral consideration.
I disagree, I can imagine entities who experience such states and that I still cannot possibly coexist with. And if it's me or them, I'd rather me survive.
↑ comment by Slapstick · 2023-11-12T16:45:14.133Z · LW(p) · GW(p)
But very few people seem to go along the principle of "granting privileges to humans is fine, actually".
Because you're using "it's fine to arbitrarily prioritize humans morally" as the justification for this privilege. At least that's how I'm understanding you.
If you told me it's okay to smash a statue in the shape of a human, because "it's okay to arbitrarily grant humans the privilege of not being smashed, on account of their essence of humanness, and although this statue has some human qualities, it's okay to smash it because it doesn't have the essence of humanness"
I would take issue with your reasoning, even though I wouldn't necessarily have a moral problem with you smashing the statue. I would also just be very confused about why that type of reasoning would be relevant in this case. I would take issue with you smashing an elephant because it isn't a human.
I disagree, I can imagine entities who experience such states and that I still cannot possibly coexist with. And if it's me or them, I'd rather me survive.
I'm sure there are also humans that you cannot possibly coexist with.
I'm also just saying that's the point at which it would make sense to start morally considering an art generator. But even so, I reject the idea that the moral permissibility of creating art is based on some privilege granted to those containing some essential trait.
I don't think the moral status of a process will ever be relevant to the question of whether art made from that process meets some standard of originality sufficient to repel accusations of copyright infringement.
↑ comment by dr_s · 2023-11-12T23:17:14.451Z · LW(p) · GW(p)
Because you're using "it's fine to arbitrarily prioritize humans morally" as the justification for this privilege. At least that's how I'm understanding you.
I think it's fine for now absent a more precise definition of what we consider human-like values and worth, which we obviously do not understand well enough to narrow down. I think the category is somewhat broader than humans, but I'm not sure I can give a better feel for it than "I'll know it when I see it", and that very ignorance to me seems an excellent reason to not start gallivanting with creating other potentially sentient entities of questionable moral worth.
I'm sure there are also humans that you cannot possibly coexist with.
Not many of them, and usually they indeed end up in jail or on the gallows because of their antisocial tendencies.
↑ comment by RogerDearnaley (roger-d-1) · 2023-11-12T23:42:57.960Z · LW(p) · GW(p)
Let me suggest a candidate larger fuzzy class:
"sapiences that are (primarily) the result of Darwinian evolution, and have not had their evolved priorities and drives significantly adjusted (for example into alignment with something else)"
This would include any sufficiently accurate whole-brain emulation of a human, as long as they hadn't been heavily modified, especially in their motivations and drives. It's intended to be a matter of degree, rather than a binary classification. I haven't defined 'sapience', but I'm using it in a sense in which Homo sapiens is the only species currently on Earth that would score highly for it, and one of the criteria for it is that the species be able to support cultural & technological information transfer between generations that is >> its genetic information transfer.
The moral design question then is: supposing we were to suddenly encounter an extraterrestrial sapient species, do we want our AGIs to be on the human side, or on the "all evolved intelligences count equally" side?
↑ comment by dr_s · 2023-11-13T11:46:12.764Z · LW(p) · GW(p)
The moral design question then is: supposing we were to suddenly encounter an extraterrestrial sapient species, do we want our AGIs to be on the human side, or on the "all evolved intelligences count equally" side?
I'd say something in between. Do I want the AGI to just genocide any aliens it meets on the simple basis that they are not human, so they do not matter? No. Do I want the AGI to stay neutral and refrain from helping us or taking sides were we to meet the Thr'ax Hivemind, Eaters of Life and Bane of the Galaxy, because they too are sapient? Also no. I don't think there's an easy answer to where we draw the line between "we can find a mutual understanding, so we should try" and "it's clearly us or them, so let's make sure it's us".
comment by Tachikoma (tachikoma) · 2023-11-11T20:10:45.440Z · LW(p) · GW(p)
I'm confused - what about AI art makes it such that humans cannot continue to create art? It seems like the bone to pick isn't with AIs generating 'art'; it's that some artists have historically been able to make a living by creating commercial art, and AIs being capable of generating commercial art threatens the livelihood of those human artists.
There is nothing keeping you from continuing to select human generated art, or creating it yourself, even as AI generated art might be chosen by others.
Just like you should be free to be biased towards human art, I think others should be free to either not be biased or even biased towards AI generated works.
↑ comment by dr_s · 2023-11-11T21:59:51.735Z · LW(p) · GW(p)
I'm not talking about art per se though, I'm talking about things like the legal issues surrounding the training of models using copyrighted art. If copyright is meant to foster human creativity, it's perfectly reasonable to say that the allowance to enjoy and remix works only applies to humans, not privately-owned AIs that can automate and parallelize the process to superhuman scale. If I own an AI trained on a trillion copyrighted images I effectively own data that has sort-of-a-copy of those images inside.
I don't think AI art generation is necessarily bad overall, though I do think that we should be more wary of it for various reasons - mostly that, this side of straight-up AGI, I think the limits of art generators mean we risk replacing the entire lower tier of human artists with a legion of poor imitations unable to renew their style or progress, leading to a situation where no one can support themselves doing art and thus train long enough to reach the higher tiers of mastery. Your "everyone does as they prefer" reasoning isn't perfect, because in practice these seismic changes in the market would affect others too. But besides that, my point is more generally that regardless of your take on the art itself, the generators shouldn't be treated as human artists (for example, neither DALL-E nor OpenAI should hold a copyright over the generated images).
↑ comment by Viliam · 2023-11-12T00:22:28.355Z · LW(p) · GW(p)
Do I understand it correctly that if the AI outcompetes mediocre artists, there will be no more great artists, because each great artist was a mediocre artist first?
By the same logic, does the fact that you can buy mediocre food in any supermarket mean that there are no great chefs anymore? (Because no one would hire a person who produces worse food than the supermarkets, so the beginners have nowhere to gain experience.)
Stack Exchange + Google can replace a poor software developer, so we will not have great software developers?
↑ comment by dr_s · 2023-11-13T11:52:30.065Z · LW(p) · GW(p)
I think it depends on the thoroughness of the replacement. Cooking is still a useful life skill; the economics of it are such that you can in fact cook for yourself. But while someone probably still practices calligraphy and miniatures for the heck of it, how many great miniaturists have there been since the printing press drove 'em out of a job? Do you know anyone who could copy an entire manuscript in pretty print?
Obviously this isn't necessarily a tragedy; some skills just stop being useful and we move on. But "art" is a much broader category than a single specific skill. And you will notice that ever since photography was born, for example, the figurative arts have been taking a significant hit - replaced by other forms. The question is whether you can keep finding replacements, or whether at some point the well dries up and the quality of human art takes a dive, because all that's left for humans alone to do is simply not that interesting.
Stack Exchange + Google can replace a poor software developer, so we will not have great software developers?
Those things alone can't. GPT-4 or future LLMs might, and yes, I'd say that would be a problem! People are already seeing how the younger generations, who have grown up using more polished and user-friendly UIs, have a hard time grasping how a file system works, as those mechanisms are hidden from them. Spend long enough with "you tell the computer what to do and it does it for you", and almost no one will seek the skill to write programs themselves. Which is all fine and dandy as long as the LLMs work, but it makes double-checking their code, when it's really critical, a lot harder.
comment by RomanHauksson (r) · 2023-11-12T19:11:41.994Z · LW(p) · GW(p)
I think the risk of human society being superseded by an AI society which is less valuable in some way shouldn't be guarded against by a blind preference for humans. Instead, we should maintain a high level of uncertainty about what it is that we value about humanity and slowly and cautiously transition to a posthuman society.
"Preferring humans just because they're humans" or "letting us be selfish" does prevent the risk of prematurely declaring that we've figured out what makes a being morally valuable and handing over society's steering wheel to AI agents that, upon further reflection, aren't actually morally valuable.
For example, say some AGI researcher believes that intelligence is the property which determines the worth of a being and blindly unleashes a superintelligent AI into the world because they believe that whatever it does with society is definitionally good, simply based on the fact that the AI system is more intelligent than us. But then maybe it turns out that phenomenological consciousness doesn't necessarily come with intelligence, and they accidentally wiped out all value from this world and replaced it with inanimate automatons that, while intelligent, don't actually experience the world they've created.
Having an ideological allegiance to humanism and a strict rejection of non-humans running the world even if we think they might deserve to would prevent this catastrophe. But I think that a posthuman utopia is ultimately something we should strive for. Eventually, we should pass the torch to beings which exemplify the human traits we like (consciousness, love, intelligence, art) and exclude those we don't (selfishness, suffering, irrationality).
So instead of blind humanism, we should be biologically conservative until we know more about ethics, consciousness, intelligence, et cetera and can pass the torch in confidence. We can afford millions of years to get this right. Humanism is arbitrary in principle and isn't the best way to prevent a valueless posthuman society.
↑ comment by dr_s · 2023-11-12T19:45:43.637Z · LW(p) · GW(p)
But then maybe it turns out that phenomenological consciousness doesn't necessarily come with intelligence, and they accidentally wiped out all value from this world and replaced it with inanimate automatons that, while intelligent, don't actually experience the world they've created.
I mean, is the implication that this would instead be good if phenomenological consciousness did come with intelligence? If you gave me a choice between two futures, one with humans reasonably thriving for a few more thousand years and then going extinct, and the other with human-made robo-Hitler eating the galaxy, I'd pick the first without hesitation. I'd rather we leave no legacy at all than create literal cosmic cancer, sentient or not.
So instead of blind humanism, we should be biologically conservative until we know more about ethics, consciousness, intelligence, et cetera and can pass the torch in confidence. We can afford millions of years to get this right. Humanism is arbitrary in principle and isn't the best way to prevent a valueless posthuman society.
I don't want "humanism" to be taken too strictly, but I honestly think that anything that is worth passing the torch to wouldn't require us passing any torch at all and could just coexist with us, unless it was a desperate situation in which it's simply become impossible for organic beings to survive and then the synthetics truly are our only realistic chance at leaving a legacy behind. Otherwise, all that would happen is that we'll live together and then if replacement happens it'll barely be noticeable as it does.
↑ comment by RomanHauksson (r) · 2023-11-20T11:40:48.767Z · LW(p) · GW(p)
I mean, is the implication that this would instead be good if phenomenological consciousness did come with intelligence?
This was just an arbitrary example to demonstrate the more general idea that it’s possible we could make the wrong assumption about what makes humans valuable. Even if we discover that consciousness comes with intelligence, maybe there’s something else entirely that we’re missing which is necessary for a being to be morally valuable.
I don't want "humanism" to be taken too strictly, but I honestly think that anything that is worth passing the torch to wouldn't require us passing any torch at all and could just coexist with us…
I agree with this sentiment! Even though I’m open to the possibility of non-humans populating the universe instead of humans, I think it’s a better strategy for both practical and moral uncertainty reasons to make the transition peacefully and voluntarily.
↑ comment by dr_s · 2023-11-20T12:06:57.618Z · LW(p) · GW(p)
maybe there’s something else entirely that we’re missing which is necessary for a being to be morally valuable.
You're talking about this as if it were a matter of science and discovery. I'm not a moral realist, so to me that doesn't compute. We don't discover what constitutes moral worth; we decide it. The only discovery involved here may be self-discovery. We could have moral instincts and then introspect to figure out more precisely what they map to. But deciding to follow our moral instincts at all is as arbitrary a call as any other.
I’m open to the possibility of non-humans populating the universe instead of humans
As I said, the only situation in which this would be true for me is if either humans voluntarily just stop having children (e.g. they see the artificial beings as having happier lives and thus would rather raise one of them than an organic child), or conditions get so harsh that it's impossible for organic beings to keep existing and artificial ones are the only hope (e.g. Earth about to get wiped out by the expanding Sun, we don't have enough energy to send away a working colony ship with a self-sustaining population but we CAN send small and light Von Neumann interstellar probes full of AIs of the sort we deeply care about).
comment by AlphaAndOmega · 2023-11-11T19:07:08.540Z · LW(p) · GW(p)
My stance on copyright, at least regarding AI art, is that the original intent was to improve the welfare of both the human artists and the rest of us - in the case of the former by helping secure them a living, and thus letting them produce more total output for the latter.
I strongly expect, and would be outright shocked if it were otherwise, that we will end up with outright superhuman creativity and vision in artwork from AI, alongside everything else they become superhuman at. It came as a great surprise to many that we've already made such a great dent in visual art with image models that lack the intelligence of an average human.
Thus, it doesn't matter in the least if it stifles human output, because the overwhelming majority of us who don't rely on our artistic talent to make a living will benefit from a post-scarcity situation for good art, as customized and niche as we care to demand.
To put my money where my mouth is: I write a web serial. After years of world-building and abortive sketches in my notes, I realized that the release of GPT-4 meant that any benefit from my significantly above-average ability as a human writer was in jeopardy, if not now, then a handful of advances down the line. So my own work is more of a "I told you I was a good writer, before anyone could plausibly claim my work was penned by an AI" for street cred, rather than a replacement for my day job.
If GPT-5 can write as well as I can, and emulate my favorite authors, or even better yet, pen novel novels (pun intended), then my minor distress at losing potential Patreon money is more than ameliorated by the fact I have a nigh-infinite number of good books to read! I spend a great deal more time reading the works of others than writing myself.
The same is true for my day job as a doctor: I would look forward to being made obsolete, if only I had sufficient savings or a government I could comfortably rely on to institute UBI.
I would much prefer that we tax the fruits of automation to support us all when we're inevitably obsolete rather than extend copyright law indefinitely into the future, or subject derivative works made by AI to the same constraints. The solution is to prepare our economies to support a ~100% non-productive human populace indefinitely, better preparing now than when we have no choice but to do so or let them starve to death.
↑ comment by artifex0 · 2023-11-12T05:45:46.152Z · LW(p) · GW(p)
I'm also an artist. My job involves a mix of graphic design and web development, and I make some income on the side from a Patreon supporting my personal work- all of which could be automated in the near future by generative AI. And I also think that's a good thing.
Copyright has always been a necessary evil. The atmosphere of fear and uncertainty it creates around remixes and reinterpretations has held back art - consider, for example, how much worse modern music would be without samples, a rare case where artists operating in a legal grey area with respect to copyright became so common that artists lost their fear. That fear still persists in almost every other medium, however, forcing artists to constantly reinvent the wheel rather than iterating on success. Copyright also creates a really enormous amount of artificial scarcity - limiting people's access to art to a level far below what we have the technical capacity to provide. All because nobody can figure out a better way of funding artists than granting lots of little monopolies.
Once our work is automated and all but free, however, we'll have the option of abolishing copyright altogether. That would free artists to create whatever we'd like; free self-expression from technical barriers; free artistic culture from the distorting and wasteful influence of zero-sum status competition. Art, I suspect, will get much, much better- and as someone who loves art, that means a lot to me.
And as terrible as this could be for my career, spending my life working in a job that could be automated but isn't would be as soul-crushing as being paid to dig holes and fill them in again. It would be an insultingly transparent facsimile of useful work. An offer of UBI, but only if I spend eight hours a day performing a ritual imitation of meaningful effort. No. If society wants to pay me for the loss of my profession, I won't refuse, but if I have to go into construction or whatever to pay the bills while I wait to find out whether this is all going to lead to post-scarcity utopia or apocalypse, then so be it.
↑ comment by Q Home · 2023-11-19T09:31:13.686Z · LW(p) · GW(p)
Could you explain your attitudes towards art and art culture more in depth and explain how exactly your opinions on AI art follow from those attitudes? For example, how much do you enjoy making art and how conditional is that enjoyment? How much do you care about self-expression, in what way? I'm asking because this analogy jumped out at me as a little suspicious:
And as terrible as this could be for my career, spending my life working in a job that could be automated but isn't would be as soul-crushing as being paid to dig holes and fill them in again. It would be an insultingly transparent facsimile of useful work.
But creative work is not mechanical work; it can't be automated that way, and AI doesn't replace you that way. AI doesn't have a model of your brain; it can't make the choices you would make. It replaces you by making something cheaper and on the same level of "quality". It doesn't automate your self-expression. If you care about self-expression, the possibility of AI doesn't have to feel soul-crushing.
I apologize for sounding confrontational. You're free to disagree with everything above. I just wanted to show that the question has a lot of potential nuances.
↑ comment by artifex0 · 2023-11-19T16:23:30.140Z · LW(p) · GW(p)
In that paragraph, I'm only talking about the art I produce commercially- graphic design, web design, occasionally animations or illustrations. That kind of art isn't about self-expression- it's about communicating the client's vision. Which is, admittedly, often a euphemism for "helping businesses win status signaling competitions", but not always or entirely. Creating beautiful things and improving users' experience is positive-sum, and something I take pride in.
Pretty soon, however, clients will be able to have the same sort of interactions with an AI that they have with me, and get better results. That means more of the positive-sum aspects of the work, with much less expenditure of resources- a very clear positive for society. If that's prevented to preserve jobs like mine, then the jobs become a drain on society- no longer genuinely productive, and not something I could in good faith take pride in.
Artistic expression, of course, is something very different. I'm definitely going to keep making art in my spare time for the rest of my life, for the sake of fun and because there are ideas I really want to get out. That's not threatened at all by AI. In fact, I've really enjoyed mixing AI with traditional digital illustration recently. While I may go back to purely hand-drawn art for the challenge, AI in that context isn't harming self-expression; it's supporting it.
While it's true that AI may threaten certain jobs that involve artistic self-expression (and probably my Patreon), I don't think that's actually going to result in less self-expression. As AI tools break down the technical barriers between imagination and final art piece, I think we're going to see a lot more people expressing themselves through visual mediums.
Also, once AGI reaches and passes a human level, I'd be surprised if it weren't capable of some pretty profound and moving artistic self-expression in its own right. If it turns out that people are often more interested in what minds like that have to say artistically than in what other humans are creating, then so long as those AIs are reasonably well-aligned, I'm basically fine with that. Art has never really been about zero-sum competition.
↑ comment by Q Home · 2023-11-20T00:56:35.991Z · LW(p) · GW(p)
Thank you for the answer, clarifies your opinion a lot!
Artistic expression, of course, is something very different. I'm definitely going to keep making art in my spare time for the rest of my life, for the sake of fun and because there are ideas I really want to get out. That's not threatened at all by AI.
I think there are some threats, at least hypothetical. For example, the "spam attack". People see that a painter starts to explore some very niche topic — and thousands of people start to generate thousands of paintings about the same very niche topic. And the very niche topic gets "pruned" in a matter of days, long before the painter has said at least 30% of what they have to say. The painter has to fade into obscurity or radically reinvent themselves after every couple of paintings. (Pre-AI the "spam attack" is not really possible even if you have zero copyright laws.)
In general, I believe for culture to exist we need to respect the idea "there's a certain kind of output I can get only from a certain person, even if it means waiting or not having every single of my desires fulfilled" in some way. For example, maybe you shouldn't use AI to "steal" a face of an actor and make them play whatever you want.
Do you think that unethical ways to produce content exist at least in principle? Would you consider any boundary for content production, codified or not, to be a zero-sum competition?
↑ comment by artifex0 · 2023-11-20T06:22:54.943Z · LW(p) · GW(p)
Certainly communication needs to be restricted when it's being used to cause certain kinds of harm, like with fraud, harassment, proliferation of dangerous technology and so on. However, no: I don't see ownership of information or ways of expressing information as a natural right that should exist in the absence of economic necessity.
Copying an actor's likeness without their consent can cause a lot of harm when it's used to sexually objectify them or to mislead the public. The legal rights actors have to their likeness also make sense in a world where IP is needed to promote the creation of art. Even in a post-scarcity future, it could be argued that realistically copying an actor's likeness risks confusing the public when those copies are shared without context, and is therefore harmful - though I'm less sure about that one.
There are cases where imitating an actor without their consent, even very realistically, can be clearly harmless, however. For example, obvious parody and accurate reconstructions of damaged media. I don't think those violate any fundamental moral right of actors to prevent imitations. In the absence of real harm, I think the right of the public to communicate what they want to communicate should outweigh the desire of an actor to control how they're portrayed.
In your example of a "spam attack", it seems to me one of two things would have to be true:
It could be that people lose interest in the original artist's work because the imitations have already explored the limits of the idea in a way they find valuable - in which case, I think this is basically equivalent to when an idea goes viral in the culture; the original artist deserves respect for having invented the idea, but shouldn't have a right to prevent the culture from exploring it, even if that exploration is very fast.
Alternatively, it could be the case that the artist has more to say that isn't or can't be expressed by the imitations - other ideas, interesting self-expression, and so on - but the imitations prevent people from finding that new work. I think that case is a failure of whatever means people are using to filter and find art. A good social media algorithm or friend group who recommend content to each other should recognize that the inventor of a good idea might invent other good ideas in the future, and should keep an eye out for and platform those ideas if they do. In practice, I think this usually works fine - there's already an enormous amount of imitation in the culture, but people who consistently create innovative work don't often languish in obscurity.
In general, I think people have a right to hear other people, but not a right to be heard. When protestors shout down a speech or spam bots make it harder to find information, the relevant right being violated is the former, not the latter.
↑ comment by dr_s · 2023-11-20T07:34:40.159Z · LW(p) · GW(p)
In general, I think people have a right to hear other people, but not a right to be heard. When protestors shout down a speech or spam bots make it harder to find information, the relevant right being violated is the former, not the latter.
I think having the possibility of competing with superhuman machines for the limited hearing time of humans can genuinely change our perspective on that. A civilization in which all humans were outcompeted by machines when it comes to being heard would be a civilization essentially run by those machines. Until now, "right to be heard" implied "over another human", and that is a very different competition.
↑ comment by artifex0 · 2023-11-20T08:23:59.596Z · LW(p) · GW(p)
I mean, I agree, but I think that's a question of alignment rather than a problem inherent to AI media. A well-aligned ASI ought to be able to help humans communicate just as effectively as it could monopolize the conversation- and to the extent that people find value in human-to-human communication, it should be motivated to respond to that demand. Given how poorly humans communicate in general, and how much suffering is caused by cultural and personal misunderstanding, that might actually be a pretty big deal. And when media produced entirely by well-aligned ASI out-competes humans in the contest of providing more of what people value- that's also good! More value is valuable.
And, of course, if the ASI isn't well-aligned, then the question of whether society is paying enough attention to artists will probably be among the least of our worries - and potentially rendered moot by the sudden conversion of those artists to computronium.
↑ comment by dr_s · 2023-11-20T11:08:49.769Z · LW(p) · GW(p)
but I think that's a question of alignment rather than a problem inherent to AI media
Disagree. Imagine you produced a perfectly aligned ASI - it does not try to kill us, does not try to do anything bad to us, it just satisfies our every whim (this is already a pretty tall order, but let's allow it for the sake of discussion). Being ASI, of course, it only produces art that is so mind-bogglingly good, anything human pales by comparison, so people overwhelmingly turn to it (there might be a small subculture of hard-core enjoyers of human art, but probably not a super relevant one). The ASI feeds everyone novels, movies, essays and what have you, custom-built for their enjoyment. The ASI is also kind and aware enough to not make its content straight-up addictive, and instead gently pushes people away from excessively codependent behaviour. It's all good.
Except that human culture is still dead in the water. It does not exist any more. Humans are insular, in this scenario. There is no more dialectic or evolution. The aligned ASI sticks to its values and feeds us stuff built around them. The world is forever frozen, culturally speaking, in whichever year of the 21st century the Machine God was summoned forth. It is now, effectively, that god's world; the god is the only thing with agency and capable of change, and that change is only in the efficiency with which it can stick to its original mission. Unless of course you posit that "alignment" implies some kind of meta-reflectivity ability by which the ASI will also infer sentiment and simulate the regular progression of human dialectics, merely filtered through its own creation abilities - and that IMO starts feeling like adding epicycles on top of epicycles on an already very questionable assumption.
I don't think suffering is valuable in general; some suffering is truly pointless. But I think the frustrations and even unpleasantness that spring from human interactions - the bad art, the disagreements, the rejection in love - are an essential part of, and inseparable from, the bonds tying us together as a species. Trying to sever only the bad parts severs the whole lot, and leaves us remitting our agency to whatever is babying us. So, yeah, IMO humans have a right to be heard over machines - or rather, we should preserve that right if we care about staying in control of our own civilisation. Otherwise, we lose it not to exterminators but to caretakers. A softer twilight, but still a twilight.
↑ comment by quetzal_rainbow · 2023-11-20T11:55:06.695Z · LW(p) · GW(p)
You are conflating two definitions of alignment: "notkilleveryoneism" and "ambitious CEV-style value alignment". If you have only the first type of alignment, you don't use it to produce good art; you use it for something like "augment human intelligence so we can solve the second type of alignment". If your ASI is aligned in the second sense, it is going to deduce that humans wouldn't like being coddled without the capability to develop their own culture, so it will probably just sprinkle inspiring examples of art here and there for us and develop various mind-boggling sources of beauty like telepathy and qualia-tuning.
↑ comment by dr_s · 2023-11-20T12:12:14.801Z · LW(p) · GW(p)
If you have only the first type of alignment, then under current economic incentives and structures you almost certainly end up with some other kind of disempowerment, likely something more akin to "Wireheading by Infinite Jest". Augmenting human intelligence would NOT be our first, second, or hundredth choice under current civilizational conditions; it comes with a lot of problems and risks, and it's far from guaranteed to solve the problem (if the problem is solvable at all). You can't realistically augment human intelligence in ways that keep up with the speed at which ASI can improve, and you can't expect that the moment right after creating ASI is where we Just Stop. Either we stop before, or we go all the way.
↑ comment by quetzal_rainbow · 2023-11-20T13:05:15.764Z · LW(p) · GW(p)
"Under current economic incentives and structure" we can have only "no alignment". I was talking about rosy hypotheticals. My point was "either we are dead or we are sane enough to stop, find another way and solve problem fully". Your scenario is not inside the set of realistic outcomes.
↑ comment by dr_s · 2023-11-20T13:48:14.295Z · LW(p) · GW(p)
If we want to go by realistic outcomes, we're either lucky in that somehow AGI isn't straightforward or powerful enough for a fast takeoff (e.g. we get early warning shots like a fumbled attempt at a takeover, or simply a new, unexpected AI winter), or we're dead. If we want to talk about scenarios in which things go otherwise, then I'm not sure which is more unlikely: the fully aligned ASI, or the merely not-kill-everyone-aligned one that we somehow still manage to rein in and eventually align (never mind the idea of human intelligence enhancement, which even putting aside economic incentives would IMO be morally and philosophically repugnant as a matter of principle to a lot of people, and acceptable in principle but repugnant in practice - given the ethics of the required experiments - to most of the rest).
↑ comment by Q Home · 2023-11-20T08:28:54.118Z · LW(p) · GW(p)
To exist — not only for itself, but for others — a consciousness needs a way to leave an imprint on the world. An imprint which could be recognized as conscious. Similar thing with personality. For any kind of personality to exist, that personality should be able to leave an imprint on the world. An imprint which could be recognized as belonging to an individual.
Uncontrollable content generation can, in principle, undermine the possibility of consciousness being "visible", and undermine the possibility of any kind of personality or individuality. And without those things we can't have any culture or society except a hivemind.
Are you OK with such disintegration of culture and society?
> In general, I think people have a right to hear other people, but not a right to be heard.
To me that's very repugnant, if taken to the absolute. What emotions and values motivate this conclusion? My own conclusions are motivated by caring about culture and society.
> Alternatively, it could be the case that the artist has more to say that isn't or can't be expressed by the imitations - other ideas, interesting self-expression, and so on - but the imitations prevent people from finding that new work. I think that case is a failure of whatever means people are using to filter and find art. A good social media algorithm or friend group who recommend content to each other should recognize that the inventor of a good idea might invent other good ideas in the future, and should keep an eye out for and platform those ideas if they do.
I was going for something slightly more subtle. Self-expression is about making a choice. If all choices are realized before you have a chance to make them, your ability to express yourself is undermined.
↑ comment by artifex0 · 2023-11-20T09:08:01.202Z · LW(p) · GW(p)
> To me that's very repugnant, if taken to the absolute. What emotions and values motivate this conclusion? My own conclusions are motivated by caring about culture and society.
I wouldn't take the principle to an absolute - there are exceptions, like the need to be heard by friends and family and by those with power over you. Outside of a few specific contexts, however, I think people ought to have the freedom to listen to or ignore anyone they like. A right to be heard by all of society for the sake of leaving a personal imprint on culture infringes on that freedom.
Speaking only for myself, I'm not actually that invested in leaving an individual mark on society - when I put effort into something I value, whether people recognize that I've done so is not often something I worry about, and the way people perceive me doesn't usually have much to do with how I define myself. Most of the art I've created in my life I've never actually shared with anyone - not out of shame, but just because I've never gotten around to it.
I realize I'm pretty unusual in this regard, which may be biasing my views. However, I think I am possibly evidence against the notion that a desire to leave a mark on the culture is fundamental to human identity.
↑ comment by Q Home · 2023-11-20T10:16:49.738Z · LW(p) · GW(p)
I tried to describe the conditions which are necessary for society and culture to exist. Do you agree that what I've described are necessary conditions?
> I realize I'm pretty unusual in this regard, which may be biasing my views. However, I think I am possibly evidence against the notion that a desire to leave a mark on the culture is fundamental to human identity.
The relevant part of my argument was "if your personality gets limitlessly copied and modified, your personality doesn't exist (in the cultural sense)". You're talking about something different: ambitions and the desire for fame.
My thesis (to not lose the thread of the conversation):
If human culture and society are natural, then the rights about information are natural too, because culture/society can't exist without them.
↑ comment by dr_s · 2023-11-12T10:15:38.124Z · LW(p) · GW(p)
Yeah, I do get that - if the possibility exists and it's just curtailed (e.g. you have some kind of protectionist law that says book covers or movie posters must be illustrated by humans even though AI can do it just as well), it feels like a bad joke anyway. The genie's out of the bottle; personally, I think to some extent it's bad that we let it out at all, but we can't put it back in anyway, and it's not even particularly realistic to imagine a world in which we dodged this specific application (after all, it's a pretty natural generalization of computer vision).
The copyright issue is separate - having copyright BUT letting corporations violate it to train AIs that are then used to generate images that can in turn be copyrighted would absolutely be the worst of both worlds. That said, even without copyright you still have an asymmetry, because big companies have more resources for compute. We're certainly not going to see a post-scarcity utopia if we don't find a way to buck this centralization trend, and art is just one example of it.
However, about the fact that the "work of making art" can be easily automated: I think casting it as work at all is already missing the point. It's made into economically useful work because it's something that can be monetized, but at its core, art is a form of communication. Let's put it this way - suppose you can make AIs (and robots) that make for better-than-human lovers. I mean in all respects, from sex to just being comforting and supportive when necessary. They don't feel anything; they're just very good at predicting and simulating the actions of an ideal partner. Would you say that is "automating away the work of being a good partner", which should thus be automated away, since it would be pointless to try and do it worse than a machine would? Or does "the work" itself lose meaning once you know it's just that, just work, and there is no intent behind it?
The thing you say, about art being freed from the constraints of commercialism, would be a consequence of having post-scarcity, not of having AI art generators. If you have AI-generated art but still struggle to make ends meet, you won't be free to create art; you'll just be busy doing some other, much shittier job, and then come home and enjoy your custom AI Netflix show to try and feel something for a couple of hours. There is no fundamental right of people to have as much art as possible, as close to their tastes as they want, any more than there is to have the perfect lover who meets their needs to a T. Turning those things into products we're entitled to leads pretty much to losing our own humanity. It's perfectly fine to say we should all have our material needs satisfied - food, housing, clothing - but when it comes to relationships with others (be it friendship, love, or the much less personal but still human rapport between an artist and an admirer of their art), I think we can't stop doing the work ourselves without losing something crucial to our nature, and, ultimately, our identity as a species.
↑ comment by Q Home · 2023-11-12T07:21:47.626Z · LW(p) · GW(p)
> Thus, it doesn't matter in the least if it stifles human output, because the overwhelming majority of us who don't rely on our artistic talent to make a living will benefit from a post-scarcity situation for good art, as customized and niche as we care to demand.
How do you know that? Art is one of the biggest outlets of human potential; one of the biggest forces behind human culture and human communities; one of the biggest communication channels between people.
One doesn't need to be a professional artist to care about all that.
↑ comment by dr_s · 2023-11-12T07:49:58.076Z · LW(p) · GW(p)
Well, "to make a living" implies that you're an artist as a profession and earn money from it. But I agree with you that that's far from the only problem. Art is a two-way street and its economic value isn't all there is to it. A world in which creating art feels pointless is one in which IMO we're all significantly more miserable.
comment by Shankar Sivarajan (shankar-sivarajan) · 2023-11-11T17:25:27.091Z · LW(p) · GW(p)
"In the name of the greatest species that has ever trod this earth, I draw the line in the dust and toss the gauntlet before the feet of tyranny, and I say humanism now, humanism tomorrow, and humanism forever."
↑ comment by AlphaAndOmega · 2023-11-11T18:50:03.109Z · LW(p) · GW(p)
Ctrl+F and replace "humanism" with "transhumanism" and you have me aboard. I consider commonality of origin to be a major factor in assessing other intelligent entities, even if millions of years of divergence mean they're as different from their common Homo sapiens ancestor as a rat is from a whale.
I am personally less inclined to grant synthetic AI rights, for the simple reason that we can program them not to chafe at their absence - which would not be the imposition that doing the same to a biological human would be (at least after birth).
↑ comment by Shankar Sivarajan (shankar-sivarajan) · 2023-11-12T04:17:40.666Z · LW(p) · GW(p)
If you met a race of aliens, intelligent, friendly, etc., would you "turn into a Warhammer 40K Inquisitor" who considers the xenos unworthy of any moral consideration whatsoever? If not, why not?
↑ comment by AlphaAndOmega · 2023-11-12T05:07:19.629Z · LW(p) · GW(p)
I would certainly be willing to aim for peaceful co-existence and collaboration, unless we came into conflict for ideological reasons or plain resource scarcity. There's only one universe to share, and only so much in the way of resources in it, even if it's a staggering amount. The last thing we need is potential "Greedy Aliens" in the Hansonian sense.
So while I wouldn't give the aliens zero moral value, it would be less than I'd give another human or human-derivative intelligence, for that fact alone.
↑ comment by dr_s · 2023-11-11T19:33:32.505Z · LW(p) · GW(p)
Honestly, that's just not a present concern, so I don't even bother thinking about it too much - there's certainly plenty of room for humans to modify themselves in ways I would consider ok, and some I would probably consider a step too far, but it's not going to be my decision to make anyway; I don't know as much as those who might actually have to make such decisions will. So yeah, it's an asterisk for me too, but I think we can satisfyingly call my viewpoint "humanism", with the understanding that one or two cyber implants won't change that (though I don't exclude the possibility that a thorough enough modification in a bad direction might make someone not human any more).
comment by nim · 2023-11-11T16:46:59.411Z · LW(p) · GW(p)
I agree that fundamentally original art has traits that make it better than fundamentally plagiarized art.
However, humans can plagiarize too. We do it a bunch, although I'd argue that, on the whole, art plagiarized by an AI will look "better" than art plagiarized by a human. While the best human plagiarists may create better work than the best AIs for now, the average human plagiarist (perhaps a child or teen tracing a drawing of their favorite copyrighted character) creates output far below the quality that the average AI can generate.
When you make the question about what species of entity created a piece of art, instead of whether the art is original in the way that makes it better, it would follow that a human plagiarist creates a higher-quality product than a robotic one - which directly contradicts my experience of AI vs. human plagiarism.
What I find wrong with saying "human" to mean "person" is what others have found wrong with saying "man" or "citizen" to mean "person" in the past. If you can imagine AIs eventually being "people" in a way that would render them deserving of empathy, it's hard to justify normalizing species-based linguistic shortcuts that allow the accident of one's birth to artificially cap one's maximum attainable value in society.
Then again, I believe that if "person" is a status that an entity has to prove that it deserves, humans should prove their way into it just like we expect other creatures and entities to do. This concept is neither popular nor practical to implement.
↑ comment by dr_s · 2023-11-11T17:12:54.050Z · LW(p) · GW(p)
Oh, sure, my claim wasn't "human art is necessarily better". Rather, it was about the legal aspects. Copyright law is (supposedly) designed to incentivize and foster human creativity. Thus it protects the works of humans, while allowing humans to make transformative and derivative works (specific limits vary by country), because obviously creativity without any inspiration is an absurd notion. So it is perfectly possible, for example, to define copyright law as "it allows humans, and only humans, to learn from copyrighted works" without having to go into some kind of convoluted philosophical explanation for why the learning of a diffusion model isn't quite like that of a human. I've seen people literally argue about the differences between our brain's visual cortex and a diffusion model, and it's pointless sophistry. They could be perfectly identical, but if a company built a vat-grown disembodied visual cortex and used it as a generative art model, I'd still call bullshit on giving it the same rights as a human in terms of IP.
> If you can imagine AIs eventually being "people" in a way that would render them deserving of empathy, it's hard to justify normalizing species-based linguistic shortcuts that allow the accident of one's birth to artificially cap one's maximum attainable value in society.
I honestly can't imagine that being a problem soon - I think AIs can grow powerful, but making them persons is a whole other level of complexity. I agree that decreeing the status of person is a difficult thing, though I honestly think we should just grant it to all human beings by default. But still, it is at least not something that should come automatically with intelligence alone. For now, I see the risk of us erroneously mistreating things that are persons as much further away than the short-term risk of letting things that aren't persons needlessly make us more miserable.
comment by Q Home · 2023-11-19T08:51:22.433Z · LW(p) · GW(p)
I like the angle you've explored. Humans are allowed to care about humans — and propagate that caring beyond its most direct implications. We're allowed to care not only about humans' survival, but also about human art and human communication and so on.
But I think another angle is also relevant: there are just cooperative and non-cooperative ways to create art (or any other output). If AI creates art in non-cooperative ways, it doesn't matter how the algorithm works or if it's sentient or not.
↑ comment by dr_s · 2023-11-19T09:04:59.181Z · LW(p) · GW(p)
It's a fair angle in principle; if, for example, two artists agreed to create N works and train an AI on the whole set in order to produce "hybrid" art that mixes their styles, that would be entirely legitimate algorithmic art, and I doubt anyone would take issue with it! The problem now is specifically that N needs to be inordinately large. A model that could create art with few-shot learning would make the questions of copyright much easier to solve. It's the fact that, in practice, the only realistic way right now is to have millions of dollars in compute and a tagged training set far bigger than public domain material alone that puts AI and artists inevitably on a collision course.
↑ comment by Q Home · 2023-11-19T10:03:00.041Z · LW(p) · GW(p)
Maybe I've misunderstood your reply, but I wanted to say that hypothetically even humans can produce art in non-cooperative and disruptive ways, without breaking existing laws.
Imagine a silly hypothetical: one of the best human artists gets a time machine and starts offering their art for free. That artist functions like an image generator. Is such an artist doing something morally questionable? I would say yes.
↑ comment by dr_s · 2023-11-20T07:37:48.895Z · LW(p) · GW(p)
If they significantly undercut the competition by using some trick, I would agree they are, though it's mostly a grey area (what if, instead of a time machine, they just have a bunch of inherited money that allows them to work without worrying about making a living? Can't people release their work for free?).
↑ comment by Q Home · 2023-11-20T08:38:37.190Z · LW(p) · GW(p)
I think we can just judge by the consequences (here "consequences" don't have to refer to utility calculus). If some way of "injecting" art into culture is too disruptive, we can decide not to allow it. It doesn't matter who makes the injection, or how.
comment by we_the_robots · 2023-11-14T20:10:37.167Z · LW(p) · GW(p)
I'd argue that we already satisfy your premise: humans don't treat machines or AI agents as equals, and this bias won't change as long as we maintain control over them.
> If that is selfish, then let us be selfish. What's wrong with being selfish?
Your confusion regarding generative AI relies on assuming that we are not being selfish in this situation, giving a machine a free pass to use copyrighted images while affecting human artists' livelihoods.
However, my observation is that our support for a machine scraping content indiscriminately is actually a manifestation of extreme selfishness. Billions of non-artists now have access to high-quality, creative material at minimal cost. This is not just about the financial aspect. Many of us have always longed to express our ideas through art — writing, painting, creating music — and see these technologies as avenues to fulfill these desires.
Given these benefits, it's not surprising that humans would support arguments ensuring the continuation of these tools - even if some aspects challenge our moral values.
Some of these arguments include:
- 'this is just progress, and machines inevitably replace humans';
- 'if you upload your art online so anybody can see it, why can't a machine see it?';
- 'artists copy and draw inspiration from each other all the time'.
Yet, society will bring different arguments - and even conflicting ones - for the same machines if the incentives, particularly financial ones, are misaligned.
For instance, there are numerous situations where a machine (like a camera) is prohibited from 'seeing,' whereas humans are not. In some jurisdictions, facial recognition is restricted/regulated.
So why should a machine be barred from viewing public images, especially when any human could stand in the same spot staring at people's faces all day long? Here, we don't trust others, and we predict that such image usage will more likely have a negative impact on our lives. So far, this has nothing to do with whether we care about machines or not.
In summary, the apparent support of humans for generative AI seems primarily driven by selfishness, yet we cleverly (maybe inadvertently?) cover it with rational arguments to avoid conflicts with our moral integrity.
This is my first post here, and I hope I've done my best at exposing my POV.
comment by xiann · 2023-11-13T22:51:54.970Z · LW(p) · GW(p)
I agree with the central point of this, and the anti-humanism is where the e/acc crowd turn entirely repugnant. But in reference to the generative AI portion, the example doesn't really land for me, because I think the issue at its core pits two human groups against each other: the artists who would like to make a stable living off their craft, and the consumers of art who'd like less scarcity of art - particularly the marginally-creative stock variety that nonetheless forms the majority of most artists' paycheck (as opposed to entirely original works at auction or published novels).
The AI aspect is incidental. If there were a service that Amazon Turk'd art commissions for $2 a pop but you didn't have to finagle a model, you'd have the same conflict.
↑ comment by dr_s · 2023-11-14T09:56:02.267Z · LW(p) · GW(p)
You're partly right that one side of the issue is just that the companies are undercutting the art market by offering a replacement product at prices that are impossible to compete with, but from the complaints and viewpoints of artists, the copyright-violation aspect is also a big deal to most of them. If only because someone undercutting you is already bad; someone undercutting you by stealing your own know-how and turning it against you adds insult to injury. To some extent I think people focus on this out of the belief that, if not for the blatant copyright violations, the kind of large training sets required for powerful AI models would be economically unviable - and it's fairly likely that they're right (at least for now). Also, the kind of undercutting we're seeing with AI would be fundamentally impossible with human artists. You could have one work 16 hours a day with only bread, water and a straw mat to sleep on, and they wouldn't be one tenth as productive as an AI model that can spit out a complete digital image in seconds with little more energy use than a large gaming computer. So we're at a point where quantity becomes a quality of its own - the AI art generation economy is so fundamentally removed from the human art creation market that it doesn't just compete with it, it straight up takes a sledgehammer to it and then pisses on the pieces.
I also don't think AI art here is responding to an end-user demand. Digital art is infinitely reproducible and already so abundant most people wouldn't know what to do with it. The most critical end-user application, where someone might not easily find replacements for their very specific needs, is, well, porn. That's certainly one application AI art is good for, but not one most companies explicitly monetize, for image reasons. Other than that, I'd say the biggest demand AI art satisfies is that of middlemen who need art to enhance some other project: game developers (RPG portraits, Visual Novel characters, sprites, etc.), writers who want illustrations for their novels, musicians who want covers for their albums, and so on and so forth. This goes all the way up to big companies that are already beginning to use AI art for movie/show posters (which honestly is just cheap on their part, since the budgets for those things are already so inflated they might as well pay a human artist - it would be a tiny fraction of the total costs), or that are eyeing the possibilities for animated movies (Jeffrey Katzenberg, of Shrek fame, said as much just the other day).
comment by M. Y. Zuo · 2023-11-12T00:20:47.326Z · LW(p) · GW(p)
> And if I met a race of peaceful, artful, friendly aliens, you can be assured that I would not suddenly turn into a Warhammer 40K Inquisitor whose only wish is to stomp the filthy xenos under his jackboot.
Why would the 'friendly aliens' be friendly if they know you're biased against them to any extent?
↑ comment by dr_s · 2023-11-12T08:04:04.170Z · LW(p) · GW(p)
If I meet someone else who has children, I expect that if they had to choose who dies between me, a stranger, and their child, they'd pick me. This is not a deal breaker that puts me on a spiral of escalation with them. It is perfectly possible to strike deals with not perfectly aligned entities who aren't relentless maximizers. And relentless maximizers would inevitably look like monsters to us, and any meeting with them would be to the death.
↑ comment by M. Y. Zuo · 2023-11-12T12:50:11.011Z · LW(p) · GW(p)
Yes, it's possible to strike deals, but that doesn't mean they will actually be 'friendly', at most 'neutral'. They may superficially give off the appearance of being 'friendly' but then again humans do that too all the time.
↑ comment by dr_s · 2023-11-13T17:02:37.442Z · LW(p) · GW(p)
By this token, everyone is neutral, no one is friendly, unless I am literally their top priority in the whole world, and they mine (that doesn't sound like simple "friendship"... more like some kind of eternal fated bond between soulmates). For me, friendly is someone with whom there are ample and reasonable chances to communicate and establish a common ground. If you make conditions harsh enough, friends can turn into rivals for survival, but that's why we want abundance and well-being to be the norm. However, no amount of abundance will satisfy a relentless maximizer - they will always want more, and never stop. That's what makes compromise with them impossible. Humans are more like satisficers.
↑ comment by M. Y. Zuo · 2023-11-13T20:50:11.684Z · LW(p) · GW(p)
> By this token, everyone is neutral, no one is friendly, unless I am literally their top priority in the whole world, and they mine (that doesn't sound like simple "friendship"... more like some kind of eternal fated bond between soulmates).
Can you explain your reasoning here? How does a bias towards or against imply a 'top priority'?
↑ comment by dr_s · 2023-11-14T10:07:21.381Z · LW(p) · GW(p)
Well, you're the one who's saying "the aliens wouldn't be friendly if they know you're biased towards your own side". A bias means I prioritize my race over the aliens. This is normal and pretty expected; the aliens, too, will surely prioritize their own race over humans if push comes to shove. That's no barrier to friendship. The ability to cooperate is fundamentally dependent on circumstances. The only case in which I can be absolutely sure that someone would never, ever turn on me, no matter how dire the circumstances, is if I am their top priority. Bias means you have a hierarchy of values, and some are higher than others; so "well-being of your family" is higher than "well-being of an equivalent number of total strangers", and "well-being of humanity" may be higher than "well-being of the sentient octopuses of Rigel-4". But the world usually isn't made of binary trolley problems, and agents that are willing to be friendly, and to put each other at a reasonably high (but not necessarily top) position in their value hierarchies, have plenty of occasions to establish fruitful collaboration by throwing some other, less important value under the bus.
A relentless maximizer, however, is a fundamentally selfish kind of agent. A maximizer can never truly compromise, because it does not have a range of acceptable states - it has only ONE all-important value that defines a single acceptable target state, and all its actions are in service of achieving that state. It can perform friendship only as long as it serves its goal, and will backstab you the next moment, even when not in existential danger, merely because it has to advance towards its goal. I may care for the well-being of my wife, but I am not a Wife-Well-Being Maximizer. I would not, for example, stab someone to steal a pair of earrings she would like, even if I could get away with it; I still value a stranger's life far more than my wife's marginal enjoyment of a new piece of jewellery. A maximizer instead cares only about the goal, and everything else is at best instrumental, which makes it fundamentally unreliable (unless YOUR well-being happens to be the goal it cares about maximizing - and even then, I'd consider it a risky agent to have around).