GPT-3-like models are now much easier to access and deploy than to develop

post by Ben Cottier (ben-cottier) · 2023-01-16T01:39:26.088Z · LW · GW · 3 comments

3 comments


comment by Viliam · 2023-01-16T11:13:56.393Z · LW(p) · GW(p)

Should we expect a future where most people use GPT-like tools to generate text, but 90% of people use the models trained by 2 or 3 large companies?

This could allow amazing thought control of the population. If you want to suppress some ideas, just train your model to be less likely to generate them. As a consequence, the ideas will disappear from many people's articles, blogs, and school essays.

Many people will publicly deny or downplay their use of GPT, so they will unknowingly provide cover for this manipulation. People will underestimate the degree of control if they keep believing, for example, that news articles are still written by human journalists, when in fact the journalist's job will consist of providing a prompt and then choosing the best of a few generated articles.

Similarly, bloggers who generate their texts will be much more productive than bloggers who actually write them. Yes, many readers will reward quality over quantity, but the quality can be achieved in ways other than writing the articles yourself, for example by figuring out interesting prompts (such as "explain Fourier Transform using analogies from Game of Thrones") or by using other tricks to give the blog a unique flavor.

What the companies need (and I do not know how difficult this would be technically) is to reverse-engineer why GPT produced certain outputs. For example, you train a model using some inputs. You ask it some questions and select the inconvenient answers. Then you ask which input texts contributed most strongly to generating the inconvenient answers. You remove those texts from the training set and train a new model. This could even be fully automated, if you can write an algorithm that asks the questions and predicts which answers would be inconvenient.
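Sketching the shape of such an algorithm (this is the idea behind influence functions and TracIn in the research literature; all the names below are placeholders, not a real API, and `model`, `loss_fn`, and the examples are assumed to be supplied by the caller):

```python
# A rough sketch of scoring training texts by how much they pushed the model
# toward an unwanted ("inconvenient") answer, via gradient similarity.
# Illustrative only; assumes a single-device differentiable model and a
# per-example loss function loss_fn(model, example) -> scalar tensor.
import torch
from torch.nn.utils import parameters_to_vector

def loss_grad(model, loss_fn, example):
    """Flattened gradient of the loss on a single example."""
    model.zero_grad()
    loss_fn(model, example).backward()
    return parameters_to_vector(
        [p.grad for p in model.parameters() if p.grad is not None]
    ).detach().clone()

def influence_scores(model, loss_fn, train_examples, probe_example):
    """Dot each training example's gradient with the probe's gradient.
    Large positive scores mark the texts that most reinforced the probe
    answer; those are the candidates to drop before retraining."""
    g_probe = loss_grad(model, loss_fn, probe_example)
    return [torch.dot(loss_grad(model, loss_fn, ex), g_probe).item()
            for ex in train_examples]
```

This brute-force version needs one backward pass per training example per probe, so anything at GPT scale would need approximations, but the score-filter-retrain loop is the one described above.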

Welcome to the glorious future where 99% of people support the party line on their social networks, because that is what their GPT-based comment-generating plugins have produced, and they were too lazy to change it.

comment by ChristianKl · 2023-01-16T23:37:54.372Z · LW(p) · GW(p)

I'm a bit surprised that you talk about someone needing a lot of expertise and training to be able to run BLOOM. Why is it so much harder to use than other open-source software?

Replies from: ben-cottier
comment by Ben Cottier (ben-cottier) · 2023-02-25T12:56:59.774Z · LW(p) · GW(p)

To be clear (sorry if you already understood this from the post): running BLOOM via an API that someone else created is easy. My claim is that someone needs significant expertise to run their own instance of BLOOM. I think the hardest part is setting up multiple GPUs to serve the 176B-parameter model. But looking back, I might have underestimated how straightforward the open-source code for running BLOOM is to get working. Maybe it's basically plug-and-play as long as you get an appropriate A100 GPU instance on the cloud (see the sketch below). I did not attempt to run BLOOM from scratch myself.
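For concreteness, here is roughly what that plug-and-play path looks like with the Hugging Face transformers library (the bigscience/bloom checkpoint is public); treat it as a sketch, since I have not run it against the full model myself, and it assumes the accelerate library plus a node with enough aggregate GPU memory (~350 GB for the 176B weights in bfloat16):

```python
# Minimal sketch: load BLOOM-176B and let accelerate shard it across GPUs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom",
    device_map="auto",          # shard layers across the available GPUs
    torch_dtype=torch.bfloat16,
)

prompt = "The hardest part of running a 176B-parameter model is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```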

I recall that in an earlier draft, my estimate for how many people know how to independently run BLOOM was higher (i.e., implying that running it is easier). I got push-back on that from someone who works at an AI lab (though this person wasn't an ML practitioner themselves). I thought they made a valid point, but I didn't think carefully about whether they were actually right in this case. So I decreased my estimate in response to their feedback.