Maybe Anthropic's Long-Term Benefit Trust is powerless
post by Zach Stein-Perlman · 2024-05-27T13:00:47.991Z · LW · GW · 21 comments
Crossposted from AI Lab Watch. Subscribe on Substack.
Introduction
Anthropic has an unconventional governance mechanism: an independent "Long-Term Benefit Trust" elects some of its board. Anthropic sometimes emphasizes that the Trust is an experiment, but mostly points to it to argue that Anthropic will be able to promote safety and benefit-sharing over profit.[1]
But the Trust's details have not been published and some information Anthropic has shared is concerning. In particular, Anthropic's stockholders can apparently overrule, modify, or abrogate the Trust, and the details are unclear.
Anthropic has not publicly demonstrated that the Trust would be able to actually do anything that stockholders don't like.
The facts
There are three sources of public information on the Trust:
- The Long-Term Benefit Trust (Anthropic 2023)
- Anthropic Long-Term Benefit Trust (Morley et al. 2023)
- The $1 billion gamble to ensure AI doesn't destroy humanity (Vox: Matthews 2023)
They say there's a new class of stock, held by the Trust/Trustees. This stock allows the Trust to elect some board members and will allow it to elect a majority of the board by 2027.
But:
- Morley et al.: "the Trust Agreement also authorizes the Trust to be enforced by the company and by groups of the company’s stockholders who have held a sufficient percentage of the company’s equity for a sufficient period of time," rather than the Trustees.
- I don't know what this means.
- Morley et al.: the Trust and its powers can be amended "by a supermajority of stockholders. . . . [This] operates as a kind of failsafe against the actions of the Voting Trustees and safeguards the interests of stockholders." Anthropic: "the Trust and its powers [can be changed] without the consent of the Trustees if sufficiently large supermajorities of the stockholders agree."
- It's impossible to assess this "failsafe" without knowing the thresholds for these "supermajorities." Also, a small number of investors—currently, perhaps Amazon and Google—may control a large fraction of shares. It may be easy for profit-motivated investors to reach a supermajority.
- Maybe there are other issues with the Trust Agreement — we can't see it and so can't know.
- Vox: the Trust "will elect a fifth member of the board this fall," viz. Fall 2023.
- Anthropic has not said whether that happened nor who is on the board these days (nor who is on the Trust these days).
Conclusion
Public information is consistent with the Trust being quite subordinate to stockholders, likely to lose its powers if it does anything stockholders dislike. (Even if stockholders' formal powers over the Trust are never used, that threat could prevent the Trust from acting contrary to the stockholders' interests.)
Anthropic knows this and has decided not to share the information that the public needs to evaluate the Trust. This suggests that Anthropic benefits from ambiguity because the details would be seen as bad. I basically fail to imagine a scenario where publishing the Trust Agreement is very costly to Anthropic—especially just sharing certain details (like sharing percentages rather than saying "a supermajority")—except that the details are weak and would make Anthropic look bad.[2]
Maybe it would suffice to let an auditor see the Trust Agreement and publish their impression of it. But I don't see why Anthropic won't publish it.
Maybe the Trust gives Anthropic strong independent accountability — or rather, maybe it will by default after (unspecified) time- and funding-based milestones. But only if Anthropic's board and stockholders have substantially less power over it than they might—or if they will exercise great restraint in using their power—and the Trust knows this.
Unless I'm missing something, Anthropic should publish the Trust Agreement (and other documents if relevant) and say whether and when the Trust has elected board members. Especially vital is (1) publishing information about how the Trust or its powers can change, (2) committing to publicly announce changes, and (3) clarifying what's going on with the Trust now.
Note: I don't claim that maximizing the Trust's power is correct. Maybe one or more other groups should have power over the Trust, whether to intervene if the Trust collapses or does something illegitimate or just to appease investors. I just object to the secrecy.
Thanks to Buck Shlegeris for suggestions. He doesn't necessarily endorse this post.
- ^
- ^ Unlike with some other policies [LW(p) · GW(p)], the text of the Trust Agreement is crucial; it is a legal document that dictates actors' powers over each other.
21 comments
comment by Zach Stein-Perlman · 2024-05-27T13:02:04.658Z · LW(p) · GW(p)
Minor remark, inspired (but not necessarily endorsed) by Buck:
The whole point of the Trust is to be able to act contrary to the interests of massively incentivized stakeholders. This is fundamentally a hard task, and it would be easy for the Trust Agreement to leave the Trust disempowered for practical purposes even if the people who wrote it weren't trying to sabotage it. And as we saw with OpenAI, it's dangerous to assume that the de facto power structures in an AI company match what's on paper.
(This post is about a sharper, narrower concern — that if you read the relevant document you'd immediately conclude that the Trust has no real power.)
comment by Zach Stein-Perlman · 2024-05-27T13:01:49.073Z · LW(p) · GW(p)
I also wonder about other aspects of Anthropic's relationship with its stockholders:
- Anthropic's obligations to its stockholders & stockholders' power over Anthropic
- What its PBC status entails
- Whether its investors have any special arrangements distinct from normal investment in a PBC
- How stockholders can replace board members
- How shares (or voting power) are currently distributed between Amazon, Google, OP, and others.
comment by Zach Stein-Perlman · 2024-06-01T22:45:02.778Z · LW(p) · GW(p)
New: Anthropic lists the board members and LTBT members on a major page (rather than a blogpost with a specific date), which presumably means they'll keep it up to date and thus we'll quickly learn of changes, hooray:
Anthropic Board of Directors
Dario Amodei, Daniela Amodei, Yasmin Razavi, and Jay Kreps.
LTBT Trustees
Neil Buddy Shah, Kanika Bahl, and Zach Robinson.
(We already knew these are the humans.)
In December 2023, Jason Matheny stepped down from the Trust to preempt any potential conflicts of interest that might arise with RAND Corporation's policy-related initiatives. Paul Christiano stepped down in April 2024 to take a new role as the Head of AI Safety at the U.S. AI Safety Institute. Their replacements will be elected by the Trustees in due course.
Again, this is not news (although it hasn't become well-known, I think) but I appreciate this public note.
comment by Zach Stein-Perlman · 2024-05-29T19:00:06.438Z · LW(p) · GW(p)
Jay Kreps, co-founder and CEO of Confluent, has joined Anthropic's Board of Directors. . . . Jay was appointed to the board by Anthropic's Long-Term Benefit Trust. . . . Separately, Luke Muehlhauser has decided to step down from his Board role to focus on his work at Open Philanthropy.
I'm glad that the Trust elected a board member.
I still really want to know whether this happened on schedule.
I'm interested in what will happen to Luke's seat — my guess is that the Trust's next appointment will fill it.
↑ comment by habryka (habryka4) · 2024-05-29T23:26:08.965Z · LW(p) · GW(p)
The timing makes me think it didn't happen on schedule and they are announcing this now in response to save face and pre-empt bad PR from this post (though I am only like 75% confident that something like that is going on, and my guess is the appointment itself has been in the works for a while). Seems IMO like a bad sign to do that without being clear about the timing and the degree to which a past commitment was violated.
(Also importantly, they said they would appoint a fifth board member, but instead it seems like this board member replaced Luke, so they actually stuck to four.)
↑ comment by Zach Stein-Perlman · 2024-05-30T16:32:48.435Z · LW(p) · GW(p)
Looks like it was timed to come out before today's TIME articles.
↑ comment by Zach Stein-Perlman · 2024-05-29T23:41:18.003Z · LW(p) · GW(p)
I bet the timing is a coincidence or due to internal questions/pressure, not PR concerns. Regardless I should ask someone at Anthropic how this post was received within Anthropic.
The plan was for the Trust to elect a fifth board member and also eventually replace Luke and Daniela. I totally believe Anthropic that Luke's departure was unrelated to Jay's arrival and generally non-suspicious. [Edit: but I do wish he'd been replaced with a safety-focused board member. My weak impression is that OP has the right to fill that seat until the Trust does; probably OP wants to distance itself from Anthropic but just giving up a board seat seems like a bad call.]
↑ comment by ryan_greenblatt · 2024-05-30T00:12:01.608Z · LW(p) · GW(p)
I originally reacted skeptically to this, but I am actually not that skeptical of "they posted now prompted by this, but this was in the works for a while". (I missed "this was in the works for a while" on my first read of your comment.)
↑ comment by habryka (habryka4) · 2024-05-30T00:19:15.347Z · LW(p) · GW(p)
(I missed "this was in the works for a while" on my first read of your comment.)
No, I just gaslit you. I edited it as a clarification when I saw your reaction. Sorry about that, should have left a note that I edited it.
comment by ryan_greenblatt · 2024-05-27T17:50:20.165Z · LW(p) · GW(p)
Observation: it's possible that the board is also powerless with respect to the employees and the leadership.
I think the board probably has less de facto power than employees/leadership by a wide margin in the current regime.
↑ comment by aysja · 2024-05-27T21:36:03.996Z · LW(p) · GW(p)
Why do you think this? The power that I'm primarily concerned about is the power to pause, and I'm quite skeptical that companies like Amazon and Google would be willing to invest billions of dollars in a company which may decide to do something that renders their investment worthless. I.e., I think a serious pause, one on the order of months or years, is essentially equivalent to opting out of the race to AGI. On this question, my strong prior is that investors like Google and Amazon have more power than employees or the trust, else they wouldn't invest.
↑ comment by ryan_greenblatt · 2024-05-27T22:30:28.306Z · LW(p) · GW(p)
They might just (probably correctly) think it is unlikely that the employees will decide to do this.
↑ comment by Dr. David Mathers · 2024-05-28T08:34:58.405Z · LW(p) · GW(p)
People will sometimes invest if they think the expected return is high, even if they also think there is a non-trivial chance that the investment will go to zero. During the FTX collapse many people claimed that this is a common attitude amongst venture capitalists, although maybe Google and Amazon are more risk averse?
comment by Zach Stein-Perlman · 2024-05-30T16:32:40.402Z · LW(p) · GW(p)
The LTBT, whose members have no equity in the company, currently elects one out of the board’s five members. But that number will rise to two out of five this July, and then to three out of five this November.
This is encouraging and makes me not care anymore about seeing the "milestones." (And it explains why OP/investors/whoever didn't bother to replace Luke.) My concerns about investors' power over the Trust remain.
Also:
The LTBT’s first five members were picked by Anthropic’s executives for their expertise in three fields that the company’s co-founders felt were important to its mission: AI safety, national security, and social enterprise. Among those selected were Jason Matheny, CEO of the RAND corporation, Kanika Bahl, CEO of development nonprofit Evidence Action, and AI safety researcher Paul Christiano. [The other two were Neil Buddy Shah of the Clinton Health Access Initiative and formerly GiveWell and Zach Robinson of CEA and EV] (Christiano resigned from the LTBT prior to taking a new role in April leading the U.S. government’s new AI Safety Institute, he said in an email. His seat has yet to be filled.)
From this we can infer that the other four Trustees remain on the Trust, which is weak good news. [Edit: nope, Matheny left months ago due to potential (appearance of) conflict of interest. It's odd that this article doesn't mention that. As of May 31, Christiano and Matheny have not yet been replaced. It is maybe quite concerning if they—the two AI safety experts—are gone and not replaced by AI safety experts, and the Trust is putting non-AI-safety people on the board. Also I'm disappointed that Anthropic didn't cause me to know this before.]
Also:
Amazon and Google, he says, do not own voting shares in Anthropic, meaning they cannot elect board members and their votes would not be counted in any supermajority required to rewrite the rules governing the LTBT. (Holders of Anthropic’s Series B stock, much of which was initially bought by the defunct cryptocurrency exchange FTX, also do not have voting rights, Israel says.)
Google and Amazon each own less than 15% of Anthropic, according to a person familiar with the matter.
So then the question is: who does own voting shares, how are voting shares distributed, and how can this change in the future?
(Also we just got two more examples of Anthropic taking credit for the Trust, and one of the articles even incorrectly says "power ultimately lies with a small, unaccountable group.")
comment by dsj · 2024-05-28T13:49:49.255Z · LW(p) · GW(p)
I don’t know much background here so I may be off base, but it’s possible that the motivation of the trust isn’t to bind leadership’s hands to avoid profit-motivated decision making, but rather to free their hands to do so, ensuring that shareholders have no claim against them for such actions, as traditional governance structures might have provided.
↑ comment by Zac Hatfield-Dodds (zac-hatfield-dodds) · 2024-06-12T16:52:05.533Z · LW(p) · GW(p)
Incorporating as a Public Benefit Corporation already frees directors' hands; Delaware Title 8, §365 requires them to "balance the pecuniary interests of the stockholders, the best interests of those materially affected by the corporation’s conduct, and the specific public benefit(s) identified in its certificate of incorporation".
comment by Sodium · 2024-05-27T21:24:10.121Z · LW(p) · GW(p)
FYI, since I think you missed this: According to the responsible scaling policy update, the Long-Term Benefit Trust would "have sufficient oversight over the [responsible scaling] policy implementation to identify any areas of non-compliance."
↑ comment by Zach Stein-Perlman · 2024-05-27T21:31:53.695Z · LW(p) · GW(p)
This is not relevant to my thesis: maybe the Trust can be overruled or abrogated by stockholders.
I agree that the Trust has some oversight over the RSP:
[Anthropic commitments:]
Share results of ASL evaluations promptly with Anthropic's governing bodies, including the board of directors and LTBT, in order to sufficiently inform them of changes to our risk profile.
Responsible Scaling Officer. There is a designated member of staff responsible for ensuring that our Responsible Scaling Commitments are executed properly. Each quarter, they will share a report on implementation status to our board and LTBT, explicitly noting any deficiencies in implementation. They will also be responsible for sharing ad hoc updates sooner if there are any substantial implementation failures.
(This is nice but much less important than power over board seats.)
comment by Akash (akash-wasil) · 2024-05-27T20:21:36.958Z · LW(p) · GW(p)
Thanks for looking into this! A few basic questions about the Trust:
1. Do we know if trustees can serve multiple terms? See below for a quoted section from Anthropic's site:
Trustees serve one-year terms and future Trustees will be elected by a vote of the Trustees.
2. Do we know what % of the board is controlled by the trustees, and by when it is expected to be a majority?
The Trust is an independent body of five financially disinterested members with an authority to select and remove a portion of our Board that will grow over time (ultimately, a majority of our Board).
3. Do we know if Paul is still a Trustee, or does his new role at USAISI mean he had to step down?
The initial Trustees are:
Jason Matheny: CEO of the RAND Corporation
Kanika Bahl: CEO & President of Evidence Action
Neil Buddy Shah: CEO of the Clinton Health Access Initiative (Chair)
Paul Christiano: Founder of the Alignment Research Center
Zach Robinson: Interim CEO of Effective Ventures US
↑ comment by Zach Stein-Perlman · 2024-05-27T20:27:33.217Z · LW(p) · GW(p)
comment by Review Bot · 2024-05-27T22:12:22.464Z · LW(p) · GW(p)
The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?