Should we push for requiring AI training data to be licensed?

post by ChristianKl · 2022-10-19T17:49:55.644Z · LW · GW · 9 comments

This is a question post.

Contents

  Answers
    16 zfurman
    5 Dave Orr
    4 qbolec
    2 Douglas_Knight
    2 the gears to ascenscion
    1 Nathan Helm-Burger
9 comments

We frequently speak about AI capability gains being bad because they shorten the timeframe for AI safety research. By that logic, taking steps to decrease AI capabilities would be worthwhile.

At the moment, large language models are trained on a lot of data that the company training the model has not licensed. If there were a requirement to license the training data, that would severely reduce the data available to language models and reduce their capabilities.

It's expensive to fight lawsuits in the United States. Currently, there are artists who feel that their rights are violated by DALL-E 2 using their art as training data. Similar to how Thiel funded the Gawker lawsuits, it would be possible to support artists in a suit against OpenAI to require OpenAI to license the images used to train DALL-E 2. If such a lawsuit were well funded, it would be much more likely that a precedent requiring data licensing gets set, which would slow down AI development.

I'm curious what people who think more about AI safety than I do make of such a move. Would it be helpful?

Answers

answer by Zach Furman (zfurman) · 2022-10-19T20:49:18.282Z · LW(p) · GW(p)

I'm not sure exactly where I land on this, but I think it's important to consider that restricting the data companies can train on could influence the architectures they use. Self-supervised autoregressive models à la GPT-3 seem a lot more benign than full-fledged RL agents, and the latter are a lot less data-hungry than the former (especially in terms of copyrighted data). There are enough other factors here that I'm not completely confident in this analysis, but it's worth thinking about.

comment by Jozdien · 2022-10-19T21:09:39.548Z · LW(p) · GW(p)

I'm leaning toward the current paradigm being preferable to a full-fledged RL one, but I want to add a point: one of my best guesses for proto-AGI involves massive LLMs hooked up to some RL system. This might not require RL capabilities on the same level of complexity as pure RL agents, and RL is still being worked on today.

Replies from: dkirmani
comment by dkirmani · 2022-12-14T11:11:51.349Z · LW(p) · GW(p)

Agree, but LLM + RL is still preferable to MuZero-style AGI.

Replies from: Jozdien
comment by Jozdien · 2022-12-14T18:12:32.130Z · LW(p) · GW(p)

I agree, but this is a question of timelines too. Within the LLM + RL paradigm, we may not need AGI-level RL, or LLMs that can accessibly simulate AGI-level simulacra from self-supervised learning alone; both of those would take longer to reach than the many points requiring only intermediate levels of LLM and RL capability, because people are still working on RL now.

answer by Dave Orr · 2022-10-19T20:37:52.805Z · LW(p) · GW(p)

I'm not a lawyer and this is not legal advice, but I think the current US legal framework isn't going to work to challenge training on publicly available data.

One argument that something is fair use is that it is transformative [1]. Taking an image or text and using it to slightly influence a giant matrix of numbers, in such a way that the original is not recoverable and new kinds of expression are enabled, seems likely to count as transformative.
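To illustrate the "slightly influence a giant matrix of numbers" point, here's a minimal sketch, assuming a toy linear model in NumPy (illustrative only, not how any real image or language model is trained): one training example nudges the weights by a tiny gradient step, rather than being copied into them.

```python
import numpy as np

# Toy stand-ins: a large parameter matrix and one hypothetical training example.
rng = np.random.default_rng(0)
weights = rng.normal(size=(1000, 1000))  # stand-in for a model's parameters
example = rng.normal(size=1000)          # stand-in for one copyrighted input
target = rng.normal(size=1000)           # stand-in for the training target

prediction = weights @ example
grad = np.outer(prediction - target, example)  # gradient of 0.5*||Wx - t||^2 w.r.t. W
weights -= 1e-6 * grad                         # one small SGD step

# The update is a tiny perturbation relative to the whole matrix; the argument
# above is that, across billions of such updates, no individual input survives
# as a recoverable copy.
rel_change = np.linalg.norm(1e-6 * grad) / np.linalg.norm(weights)
print(f"relative change in weights from one example: {rel_change:.2e}")
```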

So if you think that restricting access to public data for training purposes is a promising approach [2], you should probably focus on trying to create a new regulatory framework.

Having said that, this is all US analysis. Other countries have other frameworks and may not have exact analogs of fair use. Perhaps in the EU legal challenges are more viable.

[1] https://www.nolo.com/legal-encyclopedia/fair-use-what-transformative.html

[2] You should think about what the side effects would be like. For instance, this will advantage giant companies that can pay to license or create data, and places that have low respect for law. Whether that's desirable is worth thinking through.

comment by ChristianKl · 2022-10-19T23:32:39.707Z · LW(p) · GW(p)

From the overview, it seems like there's one Supreme Court case and the other cases are from lower courts.

Transformativeness is only one of the factors in the Supreme Court case.

The Supreme Court case does say:

As to the music, this Court expresses no opinion whether repetition of the bass riff is excessive copying, but remands to permit evaluation of the amount taken, in light of the song's parodic purpose and character, its transformative elements, and considerations of the potential for market substitution.

(f) The Court of Appeals erred in resolving the fourth §107 factor, "the effect of the use upon the potential market for or value of the copyrighted work," by presuming, in reliance on Sony, supra, at 451, the likelihood of significant market harm based on 2 Live Crew's use for commercial gain. No "presumption" or inference of market harm that might find support in Sony is applicable to a case involving something beyond mere duplication for commercial purposes. The cognizable harm is market substitution, not any harm from criticism. As to parody pure and simple, it is unlikely that the work will act as a substitute for the original, since the two works usually serve different market functions. The fourth factor requires courts also to consider the potential market for derivative works. See, e. g., Harper & Row, supra, at 568. If the later work has cognizable substitution effects in protectable markets for derivative works, the law will look beyond the criticism to the work's other elements. 2 Live Crew's song comprises not only parody but also rap music. The absence of evidence or affidavits addressing the effect of 2 Live Crew's song on the derivative market for a nonparody, rap version of "Oh, Pretty Woman" disentitled 2 Live Crew, as the proponent of the affirmative defense of fair use, to summary judgment. Pp. 20-25.

If you were to sue over DALL-E 2, you would likely argue that DALL-E 2 does create market harm by creating competition. 

Replies from: dave-orr
comment by Dave Orr (dave-orr) · 2022-10-20T00:27:30.416Z · LW(p) · GW(p)

Creating competition doesn't count as harm -- it has to be direct substitution for the work in question. That's a pretty high bar.

Also, there are things like Stable Diffusion which arguably aren't commercial (the code and model are free), which further undercuts the commercial-purpose angle.

I'm not saying any of this is dispositive -- that's the nature of balancing tests. I think this is going to be a tough row to hoe though, and certainly not a slam dunk to say that copyright should prevent ML training on publicly available data.

(Still not a lawyer, still not legal advice!)

Replies from: ChristianKl
comment by ChristianKl · 2022-10-20T10:16:24.357Z · LW(p) · GW(p)

Let's say an artist draws images in a highly distinctive style. Afterward, a lot of DALL-E 2 images get created in the same style. That would make the style less unique and less valuable.

It's plausible financial harm in a way that doesn't exist in the case on which the Supreme Court ruled.

I don't believe it's a slam dunk either; there's room for the Supreme Court to decide either way. That very uncertainty suggests that spending money on making a stronger legal argument is valuable.

answer by qbolec · 2022-10-20T20:02:32.731Z · LW(p) · GW(p)

It's already happening: https://githubcopilotinvestigation.com/ (which I learned about yesterday from the is-github-copilot-in-legal-trouble post [LW · GW]).
I think it would be an interesting plot twist: humanity saved from AI FOOM by big IT companies having to obey the intellectual property rights they themselves defended for so many years :)

answer by Douglas_Knight · 2022-12-13T20:21:51.149Z · LW(p) · GW(p)

If you want to ban or monopolize such models, push for that directly. Indirectly banning them is evil.

They're already illegal. GPT-3 is based in large part on what appear to be pirated books. (I wonder if Google's models are covered by its settlements with publishers.)

comment by gwern · 2022-12-13T21:52:10.163Z · LW(p) · GW(p)

GPT-3 is based in large part on what appear to be pirated books.

Which is argued to be "transformative" and thus not illegal.

Replies from: ChristianKl, Douglas_Knight
comment by ChristianKl · 2022-12-13T23:29:03.671Z · LW(p) · GW(p)

Even if it's transformative, they should still have to buy one license for each book to be able to use it.

Replies from: gwern
comment by gwern · 2022-12-14T01:46:53.153Z · LW(p) · GW(p)

Why?

Replies from: ChristianKl
comment by ChristianKl · 2022-12-14T11:01:11.391Z · LW(p) · GW(p)

The basic idea of copyright is that if you want to acquire a copy of a book, you need to buy that copy. If they just downloaded from Libgen, they didn't buy the copies of the books they use, and that would be a copyright violation.

That's true whether or not you afterward do something transformative. 

Replies from: gwern
comment by gwern · 2022-12-15T23:10:41.838Z · LW(p) · GW(p)

What a bizarre normative assertion. That copyright violation would be true whether or not they used it to train a model or indeed, deleted it immediately after downloading it. The copyright violation is one thing, and the model is another thing. The license that one would buy has nothing to do with any transformative ML use, and would deny that use if possible (and likely already contains language to the effect of denying as much as possible). There is no more connection than there is in the claim "if you rob a Starbucks, you should buy a pastry first".

Replies from: ChristianKl
comment by ChristianKl · 2022-12-16T11:21:16.525Z · LW(p) · GW(p)

Yes, the copyright violation is true whether or not they used it to train a model. Douglas_Knight's claim is that the copyright violation occurred. If that's true, that makes it possible to sue them over it.
 

comment by Douglas_Knight · 2022-12-13T22:28:19.605Z · LW(p) · GW(p)

No, OpenAI is not arguing this. They are not arguing anything, but just hiding their sources. Maybe they're arguing this about using the public web as training data, but that doesn't cover pirated books.

Yes, a model is transformative, not infringement. But the question was about the training data. Is that infringement? Distributing the Pile is a tort and probably a crime by quantity. Acquiring the training data was a tort and probably a crime. I'm not sure about possessing it. Even if OpenAI is shielded from criminal responsibility, a crime was necessary for the model's creation, and that was not enough to deter it.

Replies from: gwern
comment by gwern · 2022-12-14T01:46:20.987Z · LW(p) · GW(p)

No, OpenAI is not arguing this. They are not arguing anything, but just hiding their sources.

OpenAI is in fact arguing this and wrote one of the primary position papers on the transformative-use position.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2022-12-14T02:47:46.717Z · LW(p) · GW(p)

Does this link say anything about their illegal acquisition of the sources?

It sure looks to me like you and they are lying to distract. I condemn this lying, just as I condemned Christian's proposed lies.

comment by ChristianKl · 2022-12-13T21:28:32.007Z · LW(p) · GW(p)

If you want to ban or monopolize such models, push for that directly.

How would you do that? How would you write the laws?

answer by the gears to ascension (the gears to ascenscion) · 2022-10-20T00:08:41.000Z · LW(p) · GW(p)

Yes, but only because I want to make sure that anyone who runs a copy of my soul gets a license for it from me first. It's not going to slow down capabilities except by differentially moving capabilities to other groups.

answer by Nathan Helm-Burger · 2022-10-28T19:40:34.426Z · LW(p) · GW(p)

Too late. Much too late. The bird has flown the coop; no use shutting the barn doors after the cows have run away. Also, since the majority of the training data is just 'the internet', you can't get rid of this except by monitoring everyone's use of the internet to make sure they aren't making web scrapes for the purpose of later training an AI on that data. If all you do is cause some bureaucratic hassle for US companies, you give an edge to non-US groups and probably also encourage US groups to go elsewhere.

9 comments

Comments sorted by top scores.

comment by shminux · 2022-10-19T18:09:11.699Z · LW(p) · GW(p)

The usual question is... will the US cede the AI research advantage to countries which do not care about US copyright?

Replies from: jessica.liu.taylor, ChristianKl
comment by jessicata (jessica.liu.taylor) · 2022-10-19T20:10:19.415Z · LW(p) · GW(p)

I don't especially think AI capabilities increases are bad on the margin, but if I did I would think of this as a multilateral disarmament problem where those who have the most capabilities (relative to something else, population/economy) and worst technological coordination should disarm first, similar to nukes; that would currently indicate US and UK over China etc. China has more precedent for government control over the economy than the West, so could more easily coordinate AI slowdown.

comment by ChristianKl · 2022-10-19T19:26:07.835Z · LW(p) · GW(p)

Export embargoes on GPUs can be used against those countries, as the US is currently attempting to do with China. "Country X unfairly competes by violating US copyright" is likely a very good argument to motivate the US government to work on embargoes like that.

It's also worth noting that this wouldn't hamper all AI fields. AlphaFold, for example, works well on bioinformatic data that is licensed in a way that lets everyone use it freely.

In addition to just slowing everything down, it also creates pressure for better curation of data, as it makes it easier for companies that sell curated data to operate. Having people make more explicit decisions about which data to include in models might be beneficial for safety.

Replies from: jmh
comment by jmh · 2022-10-19T23:18:51.239Z · LW(p) · GW(p)

It is not clear to me that export controls have been very successful so far -- in pretty much any area.

Replies from: ChristianKl
comment by ChristianKl · 2022-10-19T23:29:38.942Z · LW(p) · GW(p)

Export controls don't completely eliminate exports, but they do make them harder and more expensive. That's especially true if someone needs a lot of GPUs.

Replies from: jmh
comment by jmh · 2022-10-20T00:06:36.896Z · LW(p) · GW(p)

I think the mistake here is thinking that just because something gets harder and more expensive, it will actually slow or materially reduce some stated outcome. The reason is that the costs and burdens may not fall on the targeted locus. In the GPU case, that may just mean that those GPUs don't end up in uses such as consumer markets or gaming systems/cards.

While these are not perfect comparisons, in your view just how effective -- say, in time to build, time to improve, or ability to pursue an activity -- have export controls been regarding:

  1. Putin's ability to conduct his war in Ukraine?
  2. North Korea's nuclear and ballistic programs?
  3. China's tech and military advancements?
  4. Iran's nuclear and ballistic program?

As far as I can see, the export efforts have only imposed costs on the general population (to differing degrees) rather than materially impacting the actual target activity. Perhaps I'm missing something, and each of the above would be much more of a problem if the sanctions/export restrictions had not been put in place.

That said, I would agree that from a pure "on principle" basis, not aiding and abetting something or someone you think is a danger to you is the right thing to do. However, I do think that in this case it comes much closer to virtue signalling than to action producing some binding constraint on the efforts.

Replies from: ChristianKl
comment by ChristianKl · 2022-10-20T10:07:43.355Z · LW(p) · GW(p)

Putin's military seems to be running out of high-precision ammunition and is not doing well on the battlefield currently. It's hard to say how much of this is due to export controls and how much is due to other factors.

North Korea doesn't seem to have a lot of reliable intercontinental missiles. Their tech development seems quite slowed down; it isn't zero, but it also isn't fast.

China's tech advancements are a result of the West outsourcing a lot of tech production to China and not having embargoes. I don't know much about Chinese military tech.

Iran still doesn't have nuclear weapons. It might still get them, but there has certainly been a slowdown in development.

Nuclear and ballistic technology has been well understood for decades; developing AGI without existing designs to copy will be much harder.

comment by Daniel Paleka · 2022-10-19T18:08:55.376Z · LW(p) · GW(p)

Epistemic status: I'd give >10% on Metaculus resolving the following as conventional wisdom[1] in 2026.

  1. Autoregressive-modeling-of-human-language capabilities are well-behaved, scaling laws can help us predict what happens, interpretability methods developed on smaller models scale up to larger ones, ... 
  2. Models-learning-from-themselves have runaway potential, how a model changes after [more training / architecture changes / training setup modifications] is harder to predict than in models trained on 2022 datasets.
  3. Replacing human-generated data with model-generated data was a mistake[2].

 

[1] In the sense that, e.g., "chain of thought improves capabilities" is conventional wisdom in 2022.

[2] In the sense of x-safety. I have no confident insight either way on how abstaining from very large human-generated datasets influences capabilities long-term. If someone has, please refrain from discussing that publicly, of course.

comment by ChristianKl · 2022-10-26T14:36:17.766Z · LW(p) · GW(p)

Discussion by Jason Calacanis and Molly Wood about why they think that artists likely have rights that can be violated by DALL-E 2: