Is it a bad idea to pay for GPT-4?

post by nem · 2023-03-16T20:49:26.502Z · LW · GW · 8 comments

I don't want to accelerate an arms race, and paying for access to GPT seems like a perfect way to be a raindrop in a dangerous flood. My current idea is to pay an equal amount monthly to MIRI. I'll view it as the price being $40 per month, with half going to AI safety research.

Is this indefensible? Let me know. GPT-4 is very useful to me personally and professionally, and familiarity with language models will also be useful if I have enough time to transition into an AI safety career, which I am strongly considering.

If this offsetting approach is a good idea, should we promote it among people who are similarly conflicted?

8 comments

comment by Neel Nanda (neel-nanda-1) · 2023-03-16T20:59:38.310Z · LW(p) · GW(p)

This seems completely negligible to me, given how popular ChatGPT was. I wouldn't worry about it.

comment by Lone Pine (conor-sullivan) · 2023-03-16T20:57:53.135Z · LW(p) · GW(p)

Look up the evidence on the effectiveness of boycotts. My understanding is that they don't work. In particular, it seems unlikely to me that the alignment community (which is small) will have a meaningful impact on OpenAI's actual or perceived success.

Replies from: nem
comment by nem · 2023-03-16T21:14:54.139Z · LW(p) · GW(p)

I have a general principle of not contributing to harm. For instance, I do not eat meat, and I tend to disregard arguments about negligible individual impact. For animal rights issues, it is important to have people who refuse to participate, regardless of whether my decades of abstinence have actually affected the supply chain.

For this issue, however, I am less worried about the principle of it, because after all, a moral stance means nothing in a world where we lose. Reducing the probability of X-risk is a cold calculation, while vegetarianism is an Aristotelian one.

With that in mind, a boycott is one reason not to pay. The other is a simple calculation: is my extra $60 a quarter going to make any minuscule increase in X-risk? Could my $60 push the quarterly numbers just high enough that they round up to the next tens place, and then some member of the team works slightly harder on capabilities because they are motivated by that number? If that risk is 0.00000001%, well, when you multiply by all the people who might ever exist... ya know?
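
To make the shape of that calculation explicit, here is a toy expected-value sketch. Every number in it is a hypothetical placeholder standing in for the figures in the comment above, not an estimate anyone has defended.

```python
# Toy expected-value sketch of the worry described above.
# Every number is a hypothetical placeholder, not a real estimate.

quarterly_cost = 60.0  # dollars paid for the subscription per quarter
delta_p_xrisk = 1e-10  # 0.00000001% -- assumed marginal increase in P(x-risk)
future_people = 1e16   # assumed number of people who might ever exist

expected_lives_lost = delta_p_xrisk * future_people
print(f"Expected future lives lost: {expected_lives_lost:,.0f}")         # 1,000,000
print(f"Per dollar: {expected_lives_lost / quarterly_cost:,.0f} lives")  # 16,667
```

With these placeholders, a probability increment of one part in ten billion still translates into a million expected future lives, which is exactly why tiny probabilities multiplied by astronomical stakes can dominate the calculation.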

Replies from: conor-sullivan, None
comment by Lone Pine (conor-sullivan) · 2023-03-17T12:41:37.947Z · LW(p) · GW(p)

Are you doing anything alignment related? The benefits to you (either in productivity or in keeping you informed) might massively outweigh the marginal benefits to OpenAI's bottom line.

comment by [deleted] · 2023-03-16T21:51:12.438Z · LW(p) · GW(p)

Yes, but you throw away the benefits. Using tools like this effectively might increase the chance you keep your job by 50 percent or more.

comment by Max H (Maxc) · 2023-03-16T21:53:42.165Z · LW(p) · GW(p)

I don't think OpenAI is funding-constrained in any real way at the moment, and using new AI systems for mundane utility seems pretty harmless (more from Zvi [LW · GW]).

This is somewhat galaxy-brained thinking, but if GPT-4 generates enough revenue, perhaps it actually steers OpenAI execs towards slowing down? "If GPT-4 is already generating $X billion on its own, why risk hundreds of millions or billions of dollars more, and a potential safety disaster or PR crisis, to train GPT-5 ASAP?"

Or, even more galaxy-brained, if enough people pay for ChatGPT+ to get mundane utility out of the chatbot, OpenAI will be capacity-constrained, possibly forcing them to raise prices (or at least delay lowering them) and price out some capabilities research that requires API use at scale.

Realistically though, I think the impact of paying for ChatGPT+ is minimal in either direction, even if everyone in your reference class also pays for it.

comment by the gears to ascension (lahwran) · 2023-03-17T00:30:11.596Z · LW(p) · GW(p)

I'd set a deadline for how long you'll use it. I'm only doing one month.

comment by Amalthea (nikolas-kuhn) · 2023-03-16T22:07:07.457Z · LW(p) · GW(p)

I'm gonna pass on the question of whether it's defensible (like you, I'm uneasy at the thought of giving money to OpenAI), but I do like the idea of an "Alignment tax". On general principles, one should expect there to be some ideal proportion of money flowing into alignment/regulation efforts vs. AI development that makes the future maximally safe. So steering towards that proportion seems like the right thing to do.