OpenAI Boycott, Revisited
post by Jake Dennie · 2024-07-22T01:44:55.094Z
Note: An initial proposal and some good discussion already exist on LW here. I'm writing this as a post rather than a comment due to length, the need for a fresh look, and a specific call to action.
Summary
I think a petition-style boycott commitment could reach enough critical mass to significantly shift OpenAI's corporate policy.
Specifically, I think a modular petition, one that lets each signer choose which goalposts the target must cross before their boycott ends, would be a good way to build a coalition among people concerned about AI safety from different angles.
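To make "modular" concrete, here is a minimal sketch of how such a commitment-petition could record each signer's chosen goalposts. Everything in it is hypothetical (the goalpost names, the Commitment class, its fields); it illustrates the data model, not a proposed implementation:

```python
from dataclasses import dataclass

# Hypothetical goalposts a signer could opt into. Names and wording are
# illustrative only, loosely following the Right to Warn letter's points.
GOALPOSTS = {
    "rtw_1": "Stop enforcing non-disparagement agreements",
    "rtw_2": "Create an anonymous process for raising risk concerns",
    "rtw_3": "Support a culture of open criticism",
    "rtw_4": "Don't retaliate against public whistleblowers",
    "compute": "Honor the promised safety compute commitments",
}

@dataclass
class Commitment:
    signer: str
    required: set[str]  # the goalposts this signer chose

    def boycott_over(self, met: set[str]) -> bool:
        """A signer's boycott ends once all of their chosen goalposts are met."""
        return self.required <= met

# Two signers with different thresholds for ending their boycott.
alice = Commitment("alice", {"rtw_1", "rtw_2"})
bob = Commitment("bob", {"rtw_1", "rtw_2", "rtw_3", "compute"})

met_so_far = {"rtw_1", "rtw_2"}
print(alice.boycott_over(met_so_far))  # True: all her chosen goalposts are met
print(bob.boycott_over(met_so_far))    # False: rtw_3 and compute are still unmet
```

The point of the structure is that signers with very different thresholds can join the same petition without anyone having to agree on a single set of demands.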
Postulates
- OpenAI needs some reform to be a trustworthy leader in the age of AI
- Zvi’s Fallout and Exodus roundups are good summaries, but the main points are:
- The NDA Scandal: forcing employees to sign atypically aggressive non-disparagement and recursive non-disparagement agreements
- Firing Leopold Aschenbrenner for whistleblowing to the board
- Not keeping safety compute commitments
- Multiple safety leaders leaving amid suggestions that the culture no longer respects safety (e.g., Jan Leike)
- There is already a tangible and actionable set of demands advocated by experts in the area: the Right to Warn letter
- Point 4 is arguably a bridge too far and could be left out or weakened (or made optional with a modular petition)
- Consumer subscribers collectively have substantial leverage
- The majority of OpenAI's revenue comes from individual $20/month subscribers, according to FutureSearch (a back-of-envelope leverage sketch follows this list)
- OpenAI is likely sensitive to revenue right now, given the higher interest rate environment and investors' recent focus on the imbalance between AI companies' capital expenditure and revenue (e.g., this Sequoia report)
- OpenAI has shown itself to be fairly reactive to recent PR debacles
- Modern boycotts have a significant success rate at changing corporate policy
- Ethical Consumer details a few successful boycotts per year over the last several years. Boycotts targeting large multinationals, especially publicly traded ones like Microsoft, have historically done particularly well
- We can win broad support, even among average people
- Boycotting a paid subscription won't harm users much
- OpenAI’s latest model is available for free; the paid perks are simply higher usage limits and faster responses
- Switching to Claude is easy, and Claude 3.5 Sonnet is better
- Public sentiment is broadly suspicious of Big Tech and AI in particular
- Polls show substantial bipartisan majorities of Americans would rather “take a careful controlled approach” than “move forward on AI as fast as possible”
- Getting people to commit to something is a good way to galvanize support and spur agency for AI Safety in general
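For a rough sense of the leverage postulate above, here is a back-of-envelope sketch. The subscriber count and cancellation rate are assumptions I've made up for illustration, not figures from the FutureSearch report:

```python
# All inputs are illustrative assumptions, not reported figures.
subscribers = 8_000_000     # assumed ChatGPT Plus subscriber count
price_per_month = 20        # USD per subscriber per month
boycott_fraction = 0.02     # assume 2% of subscribers cancel

annual_subscription_revenue = subscribers * price_per_month * 12
annual_revenue_lost = annual_subscription_revenue * boycott_fraction

print(f"Assumed subscription revenue: ${annual_subscription_revenue:,.0f}/yr")
print(f"Lost to a 2% boycott:         ${annual_revenue_lost:,.0f}/yr")
# Under these assumptions: ~$1.92B/yr in subscriptions, ~$38.4M/yr lost.
```

Even under conservative assumptions, a visible single-digit-percentage cancellation rate translates into tens of millions of dollars per year, which is the kind of number a revenue-sensitive company notices.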
Arguments against and some answers/options
- This unfairly singles out OpenAI
- OpenAI is likely the worst offender and has the most recent negative PR to galvanize support
- OpenAI is seen by the public as the leader. Other labs will follow once one company commits, or be seen publicly as not caring about safety
- There are more important demands than those in the Right to Warn letter. Refusing to restart a subscription if OpenAI acquiesces to Right to Warn would be moving the goalposts, but restarting one would mean helping a still-dangerous company
- This is the most concrete and agreeable set of demands available, and it sets a precedent that the public is watching and willing to act
- A modular petition with different opt-in commitments, Right to Warn demands among them, could create a powerful coalition among those concerned about different aspects of AI Safety
- This will inevitably fail without a well-known advocate and/or well-funded marketing drive
- Ending enough subscriptions to make a real dent in revenue may be out of reach, but even moderate success among those in tech could persuade engineers not to pursue work at OpenAI
- This may be why OpenAI has been so reactive to recent PR issues.
- There are multiple potential points where this could jump into the mainstream even from a grassroots start, especially if the Right to Warn signatories or tech reporters notice and escalate it
Conclusion
If, after feedback, I still think this is a good idea, I’d be interested in any advice or help finding a place to host a commitment-petition, especially one with modular features allowing commitments of different lengths and with different goalposts centered on the same theme.
2 comments
comment by closedAI · 2024-07-22T14:28:41.528Z
But what do you mean by "average people" here? Many feel that AI safety folks are no more than power-hungry parasites with no real capacity to bring any real value to ongoing AI development.
comment by localdeity · 2024-07-23T01:39:48.047Z
In context, I think "average people" means "the individual consumers paying $20/month to OpenAI". Who do you mean by "many"? I doubt that posters at /r/localllama are representative of "average people" by the above definition.