How will internet forums like LW be able to defend against GPT-style spam?

post by ChristianKl · 2020-07-28T20:12:56.458Z · score: 14 (5 votes) · LW · GW · No comments

This is a question post.



GPT-3 seems to be skilled enough to write forum comments that aren't easy to identify as spam. While OpenAI restricts access to its API, it likely won't take long until other companies develop similar APIs that are more freely available. While this isn't the traditional AI safety question, it does seem to be becoming a significant safety question.


answer by Kaj_Sotala · 2020-07-28T20:39:44.340Z · score: 16 (9 votes) · LW(p) · GW(p)

GPT-generated spam seems like a worse problem for things like product reviews than for a site like LW, where comments are generally evaluated by the quality of their content. If GPT produces low-quality comments, they'll be downvoted; if it produces high-quality comments, then great.

comment by gbear605 · 2020-07-28T21:43:40.948Z · score: 4 (3 votes) · LW(p) · GW(p)

If someone set up a GPT-3 bot that responded to every new LW post, it'd be really interesting to see how good its responses actually were. What would its karma be after a month?

comment by ChristianKl · 2020-07-28T21:47:02.758Z · score: 4 (2 votes) · LW(p) · GW(p)

It could provide a lot of comments that are borderline, some of them containing links for SEO purposes.

comment by Dagon · 2020-07-28T22:19:26.054Z · score: 6 (3 votes) · LW(p) · GW(p)

It seems like it wouldn't take long for borderline comments with non-relevant links to be downvoted. Unless you mean DoS levels of "a lot", which is better addressed by more difficult account creation and restrictions on new posters.

BTW, I assume CAPTCHA is fully broken at this point.

comment by ChristianKl · 2020-07-29T09:28:23.384Z · score: 7 (3 votes) · LW(p) · GW(p)

BTW, I assume CAPTCHA is fully broken at this point.

Whether or not CAPTCHA is broken, a poor Indian can copy-paste a lot of posts per hour for little money.

comment by Dagon · 2020-07-29T17:08:25.045Z · score: 2 (1 votes) · LW(p) · GW(p)

The marginal cost of spam is orders of magnitude higher if you have to pay humans, even very poor ones. In the arms race between spammers and the operators and consumers of content, a fully-automated system, even a fairly large and expensive one, is much scarier than an even more capable semi-automated one.

comment by Zachary Robertson (zachary-robertson) · 2020-07-29T12:21:05.784Z · score: -1 (4 votes) · LW(p) · GW(p)

I think the stereotyping (‘poor Indian’) is unnecessary to your point.

comment by ChristianKl · 2020-07-29T13:21:47.325Z · score: 8 (4 votes) · LW(p) · GW(p)

Why is it stereotyping to say that there are poor Indians? There are Indians who are rich and those who are poor.

In India you can hire poor Indians who speak English, in a big city with good internet connectivity, and pay them very little.

comment by Zachary Robertson (zachary-robertson) · 2020-07-29T13:55:23.915Z · score: 0 (2 votes) · LW(p) · GW(p)

It's stereotyping to assume X will copy-paste a lot of posts per hour for little money where X is actually based on class/race status. Also, it's not central to your point so it seems easy to just remove.

comment by ChristianKl · 2020-07-29T15:14:39.462Z · score: 1 (4 votes) · LW(p) · GW(p)

By that reasoning, if we take Marx's classes of workers and capitalists, it would be stereotyping to say that workers are willing to do things because you pay them money. That doesn't seem to make a lot of sense to me.

Assuming that poor people are more willing to take low-paid jobs might be class-based as well, but it's important information to reason about.

I said nothing about race, only about nationality. Indian Americans fall under minimum wage laws in the US in a way that people of Indian nationality living in India don't.

Also, it's not central to your point so it seems easy to just remove.

It's not central but it helps people have models with gears to be able to visualize supply chains. 

comment by gbear605 · 2020-07-29T22:51:43.833Z · score: -1 (2 votes) · LW(p) · GW(p)

I'd say that it wasn't stereotyping, but saying "poor Indian" instead of "poor person" makes it seem unnecessarily racialized.

comment by ChristianKl · 2020-07-29T23:38:41.823Z · score: 13 (2 votes) · LW(p) · GW(p)

There are many people in the US who are poor but who are still subject to US labor law that requires paying a minimum wage. For this point it's quite useful to use a term that doesn't include them.

There are reasons why India is a good country for outsourcing these tasks. 

It's quite similar to speaking about shipping manufacturing jobs to China. It would be insane for political correctness to push onto LessWrong to the point where you can't speak about which countries are good places for certain jobs.

If we learned anything in Germany, it's that seeing everything in terms of race is a bad idea. The fact that you and Zachary can't see talk about countries without pattern-matching it to race seems illustrative of how screwed up the discourse is. Yielding to that on LessWrong, where clear thinking is a high value, seems very costly.

answer by habryka · 2020-07-28T23:33:42.739Z · score: 14 (5 votes) · LW(p) · GW(p)

We already filter a lot of comments by well-meaning internet citizens who just kind of get confused about what LessWrong is about and are spouting only mostly coherent sentences. So I think we overall won't have much of a problem with moderating this, and our processes deal with it pretty well, at least for this generation of GPT-3 without finetuning (I can imagine finetuned versions of GPT-3 being good enough to cause problems even for us). Karma also helps a lot.

I can imagine being concerned about the next generation of GPT though.

comment by ChristianKl · 2020-07-29T09:13:46.034Z · score: 2 (1 votes) · LW(p) · GW(p)

OpenAI seems to do enough diligence that GPT-3 itself is no concern. If however Yandex, Tencent or Baidu create a similar project, things would look different, so the concern isn't so much GPT-3.

answer by Gurkenglas · 2020-07-29T05:05:41.315Z · score: 3 (2 votes) · LW(p) · GW(p)

The obvious answer to spammers being run by GPT is mods being run by GPT. Ask it whether every comment is high-quality/generated, then act on that as needed to keep the site functional.
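As a minimal sketch of this idea: wrap the comment in a classification prompt, send it to a GPT-style completion API, and act on a yes/no answer. Here `query_model` is a hypothetical callable standing in for whatever API the site operators have access to; the prompt wording and the SPAM/OK labels are illustrative assumptions, not a tested moderation policy.

```python
def build_prompt(comment: str) -> str:
    # Ask the model to judge the comment; the exact wording is an
    # illustrative assumption and would need tuning in practice.
    return (
        "Decide whether the following forum comment is a substantive "
        "contribution or likely machine-generated spam. "
        "Answer SPAM or OK.\n\n"
        f"Comment: {comment}\nAnswer:"
    )


def classify_comment(comment: str, query_model) -> bool:
    """Return True if the comment should be flagged for moderation.

    `query_model` is any callable that sends a prompt string to a
    GPT-style completion API and returns the model's text reply.
    """
    reply = query_model(build_prompt(comment)).strip().upper()
    return reply.startswith("SPAM")
```

Because the model is injected as a callable, the flagging logic can be exercised against a stub before wiring it to a real API, and a flagged comment could be held for human review rather than deleted outright.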

answer by Alex K. Chen · 2020-07-30T12:50:46.963Z · score: 1 (1 votes) · LW(p) · GW(p)

How about integrating with the underlay? FYI, I personally connected some of the team members in the project with each other.
