comment by DanielFilan · 2023-10-30T20:16:16.919Z · LW(p) · GW(p)
I don't understand what the relevant tort is supposed to be: my impression is that civil lawsuits are supposed to be used to handle cases where one person has been significantly wrongfully harmed by another. Who's been harmed by the release of Llama2?
Replies from: DanielFilan, nathan-helm-burger
↑ comment by DanielFilan · 2023-10-30T21:54:01.422Z · LW(p) · GW(p)
I think my favourite theory is that open-sourcing Llama 2 means that you now can't tell if text on the internet was written by humans or AI, which is sad (a) for personal reasons and (b) because it sort of messes up your ability to train models on human-generated data. But I'm not sure if this is or should be sufficient for a suit.
Replies from: jkaufman
↑ comment by jefftk (jkaufman) · 2023-10-31T13:04:11.521Z · LW(p) · GW(p)
How is this specifically downstream from open sourcing, since text could already be generated by a closed source LLM?
Replies from: DanielFilan
↑ comment by DanielFilan · 2023-10-31T15:08:40.229Z · LW(p) · GW(p)
Closed source LLMs could conceivably implement watermarking techniques, or log interactions, but this is infeasible with open source LLMs.
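To make the asymmetry concrete, here is a minimal, assumption-laden sketch of the detection side of a "green list" style watermark of the kind discussed in the literature. The hash rule, vocabulary size, and green-list fraction below are invented for illustration and are not any provider's actual scheme; the point is that the statistical test only works if the party controlling sampling biased generation toward green tokens in the first place, which nobody can enforce once the weights are public.

```python
import hashlib
import math

def green_fraction(token_ids, vocab_size=50_000, gamma=0.5):
    """Fraction of tokens that fall in the pseudorandom 'green list' seeded by the preceding token."""
    hits = 0
    for prev, cur in zip(token_ids, token_ids[1:]):
        # Hash (previous token, current token); this is equivalent to asking whether
        # `cur` lies in a gamma-sized green list derived from `prev`.
        digest = hashlib.sha256(f"{prev}:{cur}".encode()).digest()
        bucket = int.from_bytes(digest[:8], "big") % vocab_size
        if bucket < gamma * vocab_size:
            hits += 1
    return hits / max(len(token_ids) - 1, 1)

def detection_z_score(token_ids, vocab_size=50_000, gamma=0.5):
    """Standard deviations above the chance rate gamma; large values suggest watermarked text."""
    n = max(len(token_ids) - 1, 1)
    frac = green_fraction(token_ids, vocab_size, gamma)
    return (frac - gamma) * math.sqrt(n / (gamma * (1 - gamma)))

# Unwatermarked text should score near 0; text sampled with a deliberate green-token bias scores high.
print(detection_z_score([101, 7592, 2088, 2003, 1037, 22172, 102]))
```

A provider serving a model behind an API can apply this kind of sampling bias (or simply log prompts and completions); with open weights, anyone can sample without the bias and without logging, so neither mitigation is available.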
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-30T20:37:51.740Z · LW(p) · GW(p)
Everyone on Earth, is my argument. Proving it safely would be the challenging thing.
Replies from: nathan-helm-burger
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-30T20:43:00.890Z · LW(p) · GW(p)
Wow, my response got a lot of disagreement votes rapidly. I'd love to hear some reasoning from people who disagree. It would be helpful for me to be able to pass an Intellectual Turing Test here. I'm currently failing to understand.
Replies from: quintin-pope, DanielFilan, cfoster0, RamblinDash, ChristianKl
↑ comment by Quintin Pope (quintin-pope) · 2023-10-30T21:33:05.095Z · LW(p) · GW(p)
I strong downvoted and strong disagree voted. I did both because I think what you're describing is a genuinely insane standard to adopt for liability. Holding organizations liable for any action they take which they do not prove is safe is an absolutely terrible idea. It would either introduce enormous costs for doing anything, or allow anyone to be sued for anything they've previously done.
Replies from: jkaufman, nathan-helm-burger, DanielFilan, DanielFilan
↑ comment by jefftk (jkaufman) · 2023-10-31T13:12:50.832Z · LW(p) · GW(p)
"Proving it safely would be the challenging thing."

"Holding organizations liable for any action they take which they do not prove is safe is an absolutely terrible idea."
I think you're talking past each other. I interpret Nathan as saying that he could prove that everyone on earth has been harmed, but that he couldn't do that in a safe manner.
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-30T22:22:05.994Z · LW(p) · GW(p)
Thanks Quintin. That's useful. I think the general standard of holding organizations liable for any action which they do not prove to be safe is indeed a terrible idea. I do think that certain actions may carry higher implicit harms, and should be held to a higher standard of caution.
Perhaps you, or others, will give me your opinions on the following list of actions. Where is a good point, in your opinion, to 'draw the line'? Starting from what I would consider 'highly dangerous and much worse than Llama2' and going down to 'less dangerous than Llama2', here are some related actions.
1. Releasing for free a software product onto the internet explicitly engineered to help create bio-weapons. Advertising this product as containing not only the necessary gene sequences and lab protocols, but also explicit instructions for avoiding government and organization safety screens. Advertising that this product shows multiple ways to create your desired bio-weapon, including using your own lab equipment or deceiving Contract Research Organizations into unwittingly assisting you.
2. Releasing the same software product with the same information, but not mentioning to anyone what it is intended for. Because it is a software product, rather than an ML model, the information is all correct and not mixed in with hallucinations. The user only needs to navigate to the appropriate part of the app to get the information, rather than querying a language model.
3. Releasing a software product not actively intended for the above purposes, but which does happen to contain all that information and can incidentally be used for those purposes.
4. Releasing an LLM which was trained on a dataset containing this information, and which can regurgitate the information accurately. Furthermore, tuning this LLM before release to be happy to help users with requests to carry out a bio-weapons project, and double-checking to make sure that the information given is accurate.
5. Releasing an LLM that happens to contain such information and could be used for such purposes, but the information is not readily available (e.g. it requires jailbreaking or fine-tuning) and is contaminated with hallucinations which can make it difficult to know which portion of the information to trust.
6. Releasing an LLM that could be used for such purposes, but that doesn't yet contain the information. Bad actors would have to acquire the relevant information and fine-tune the model on it in order for the model to be able to effectively assist them in making a weapon of mass destruction.
Where should liability begin?
Replies from: JBlack
↑ comment by JBlack · 2023-10-31T00:57:15.773Z · LW(p) · GW(p)
All of the below is my current personal opinion, and not intended to be normative.
Once you remove the "Advertising" part from (1), you're left with "Releasing for free a software product onto the internet explicitly engineered to help create bio-weapons", which means that (2) seems definitely worthy of criminal liability.
(3) turns that into "Releasing for free a software product onto the internet which contains information that may help create bio-weapons", and is probably about where my fuzzy line for liability comes in. Some cases of (3) should be liable, some should not. If the information in it is just swept up as part of a big pile of publicly available information, then no. If it contains substantial portions of non-public information about bio-weapon development, even as part of a much larger corpus of information not normally available to the public, then probably yes.
However: (4) re-introduces intent and should be criminally liable, since in this scenario it seems that the developer will happily fine-tune specifically for compliance and accuracy of bio-weapon development information. If they just fine-tune for compliance with user requests in general and do some random sampling of accuracy of information across a broad range of requests that don't specifically include bio-weapons development, we're back to the (3) case where they should be liable in some cases but not others.
(5) and (6) seem no worse than Google and Wikipedia and should basically never be liable.
↑ comment by DanielFilan · 2023-10-30T21:44:22.066Z · LW(p) · GW(p)
Also given the rest of the replies, I think he means that it would be challenging for a plaintiff to safely prove that Llama 2 enables terrorists to make bioweapons, not that the alleged harm is "making open-source AI without proving it safe" or such.
Replies from: nathan-helm-burger
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-30T22:27:04.356Z · LW(p) · GW(p)
Thanks Daniel, yes. To be more clear: I have evidence that I do not feel comfortable presenting which I believe would be more convincing than the evidence I do feel comfortable presenting. I am working on finding more convincing evidence which is still safe to present. I am seeking to learn what critics would consider to be cruxy evidence which might lie in the 'safe for public discussion' zone.
↑ comment by DanielFilan · 2023-10-30T21:41:25.070Z · LW(p) · GW(p)
FWIW the heavy negative karma means that his answer is hidden by default, so that readers can't easily see why OP thinks it might make sense to bring a lawsuit against Meta, which seems bad.
↑ comment by DanielFilan · 2023-10-30T20:54:02.902Z · LW(p) · GW(p)
I didn't disagree vote, but I'm a person on Earth, and I don't know how I've yet been harmed by the release of Llama 2. For example, to my knowledge, nobody has used Llama 2 to create a fatal disease and infect me with it, or take over the world and kill me, or say incredibly rude and false things about me, etc. So it doesn't sound like everyone on Earth has been harmed by the release of Llama 2.
Replies from: nathan-helm-burger
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-30T20:58:19.079Z · LW(p) · GW(p)
Yes, that's a fair point. I should clarify that the harm I am saying has been done is "information allowing terrorists to make bioweapons is now more readily comprehensible to motivated laypersons." It's a pretty vague harm, but it sure would be nice (in my view) to be able to take action on this before a catastrophe rather than only after.
Replies from: DanielFilan
↑ comment by DanielFilan · 2023-10-30T21:07:21.557Z · LW(p) · GW(p)
I'm not a lawyer and haven't taken a course on law in my life, but my impression was that you have to wait for an actual bad thing to happen before a civil suit. If I take your proposed harm at face value, it sounds like it would also apply to anybody publishing a virology textbook and selling it without strict KYC measures, which seems like a bad idea.
"[I]t sure would be nice (in my view) to be able to take action on this before a catastrophe rather than only after."
My impression is that this sort of thing is the domain of laws (e.g. mandating liability insurance or criminalizing certain risky actions) rather than lawsuits, but again I stress that I'm just some guy.
↑ comment by cfoster0 · 2023-10-31T15:38:34.948Z · LW(p) · GW(p)
At face value it seems not-credible that "everyone on Earth" has been actually harmed by the release of Llama2. You could try to make a case that there's potential for future harms downstream of Llama2's release, and that those speculative future harms could impact everyone on Earth, but I have no idea how one would claim that they have already been harmed.
Replies from: nathan-helm-burger
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-31T15:52:44.847Z · LW(p) · GW(p)
Yes, that's fair. I'm over-reacting. To me it feels very much like someone is standing next to a NYC subway exit handing out free kits for making dirty nuclear bombs that say things on the side like, "A billion civilians dead with each bomb!", "Make it in your garage with a few tools from the hardware store!", "Go down in history for changing the face of the world!".
I want to stop that behavior even if nobody has yet been killed by the bombs and even if only 1 in 10 kits actually contains correct instructions.
As a bit of evidence that I'm not just totally imagining risks where there aren't any: the recent Executive Order from the US Federal Government contains a lot of detail about improving the regulation of DNA synthesis. I claim that the reason for this is that someone accurately pointed out to them that there are gaping holes in our current oversight, and that AI makes this vulnerability much more dangerous. And their experts then agreed that this was a risk worth addressing.
For example:
" (i) Within 180 days of the date of this order, the Director of OSTP, in consultation with the Secretary of State, the Secretary of Defense, the Attorney General, the Secretary of Commerce, the Secretary of Health and Human Services (HHS), the Secretary of Energy, the Secretary of Homeland Security, the Director of National Intelligence, and the heads of other relevant agencies as the Director of OSTP may deem appropriate, shall establish a framework, incorporating, as appropriate, existing United States Government guidance, to encourage providers of synthetic nucleic acid sequences to implement comprehensive, scalable, and verifiable synthetic nucleic acid procurement screening mechanisms, including standards and recommended incentives. As part of this framework, the Director of OSTP shall:
(A) establish criteria and mechanisms for ongoing identification of biological sequences that could be used in a manner that would pose a risk to the national security of the United States; and
(B) determine standardized methodologies and tools for conducting and verifying the performance of sequence synthesis procurement screening, including customer screening approaches to support due diligence with respect to managing security risks posed by purchasers of biological sequences identified in subsection 4.4(b)(i)(A) of this section, and processes for the reporting of concerning activity to enforcement entities."
↑ comment by RamblinDash · 2023-10-30T20:48:02.440Z · LW(p) · GW(p)
Generally for a court case, a generalized grievance is not enough - you need to have a concrete and particularized harm. What you are talking about sounds most analogous to "Public Nuisance" - I don't find the Wikipedia summary to be very good, and don't have time at this moment to find a better one. But the short version is, to bring a claim for Public Nuisance, you have to either be the government, or else be an individual who has an additional special harm above and beyond the general harm to the public.
Replies from: nathan-helm-burger, nathan-helm-burger
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-31T21:04:45.337Z · LW(p) · GW(p)
Thankfully, it seems like the US Federal Government is more on the same page with me about these risks than I had previously thought.
"(k) The term “dual-use foundation model” means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:
(i) substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons;
(ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or
(iii) permitting the evasion of human control or oversight through means of deception or obfuscation.
Models meet this definition even if they are provided to end users with technical safeguards that attempt to prevent users from taking advantage of the relevant unsafe capabilities. "
Replies from: RamblinDash
↑ comment by RamblinDash · 2023-11-01T16:18:54.876Z · LW(p) · GW(p)
I especially like "or could be easily modified to exhibit" here.
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-30T20:55:47.240Z · LW(p) · GW(p)
Thanks RamblinDash, that's helpful. I agree that this does seem like something the government should consider taking on, rather than making sense as a claim from a particular person at this point. I suppose that if there is a catastrophe that results which can be traced back to Llama2, then the survivors would have an easier case to make.
↑ comment by ChristianKl · 2023-10-31T13:49:06.268Z · LW(p) · GW(p)
You can only bring a lawsuit if there's an existing legal basis for it. No existing country has laws that require you to prove that everything you do is safe.
Free speech laws allow Bruce Schneier to run his Movie-Plot Threat Contest.
You can't even sue gun sellers for recklessness when the guns they sell are used to shoot people.
Replies from: DanielFilan
↑ comment by DanielFilan · 2023-10-31T15:12:00.780Z · LW(p) · GW(p)
"No existing country has laws that require you to prove that everything you do is safe."
Nathan isn't saying that Meta should be sued for not proving that Llama 2 was safe before open-sourcing it. See this comment [LW(p) · GW(p)] and the reply.
comment by Shankar Sivarajan (shankar-sivarajan) · 2023-10-30T22:06:05.829Z · LW(p) · GW(p)
Do you believe it would make sense to sue the Python Software Foundation?
Replies from: nathan-helm-burger
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-30T22:24:11.860Z · LW(p) · GW(p)
Where on the list in this comment do you think the Python Software Foundation lies? I'm pretty sure they're somewhere below level 5 at this point. If they did take actions as a group which took them above level 5, then yes, perhaps.
https://www.lesswrong.com/posts/dL3qxebM29WjwtSAv/would-it-make-sense-to-bring-a-civil-lawsuit-against-meta?commentId=dRiLdnRYQA3bZRE9h [LW(p) · GW(p)]
Replies from: shankar-sivarajan
↑ comment by Shankar Sivarajan (shankar-sivarajan) · 2023-10-31T04:11:14.524Z · LW(p) · GW(p)
Okay, if you're not doing "software that could be used for criminal actions" and instead picking criteria about supposedly dangerous information, my counterexample would be the Wikimedia Foundation. That would be on either Level 2 or 3.
If you want to quibble about "user-generated content" and webpage hosting not being an actual release, fine, consider Kiwix instead.
comment by Amalthea (nikolas-kuhn) · 2023-10-30T20:02:15.467Z · LW(p) · GW(p)
Minor point: It's unclear to me that a model that doesn't contain harmful information in the training set is significantly bad. One problem with current models is that they provide the information in an easily accessible form - so if bad actors have to assemble the information themselves for fine-tuning, that at least makes their job somewhat harder. Also, it seems at least plausible that the fine-tuned model will still underperform compared to one that contained the dangerous information in the original training data.
Replies from: nathan-helm-burger
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-30T20:40:48.633Z · LW(p) · GW(p)
Yes, I agree that it's much less bad if the model doesn't have harmful information in the training set. I do still think that it's easier (for a technically skilled team) to do a web scrape for all information related to topic X and then fine-tune on that information, than it is to actually read and understand the thousands of academic papers and textbooks themselves.