The gatekeepers of the AI algorithms will shape our society

post by ProfessorFalken · 2024-08-09T17:59:33.594Z · LW · GW · 3 comments

So algorithms and weighted neural nets are going to be the enlightened super-beings that oversee our society.

Elon has said - and I'm paraphrasing because I can't find the link - that social media, with its free exchange of ideas, will allow a consensus to bubble up and reveal a nuanced truth, and that such data will be used to train its AI model.

But I believe the gatekeepers of AI models will manipulate the algorithms to edit out anything they disagree with whilst promoting their agenda.

It's absurd to think that if a neural net starts churning out a reality they disagree with, they won't go "oh, this AI is hallucinating, let's tweak the weights till it conforms with our agenda."

How do you police AI? Who gets to say what is a hallucination and what is a harsh truth that contradicts the gatekeepers' ideology?

3 comments

Comments sorted by top scores.

comment by RogerDearnaley (roger-d-1) · 2024-08-09T21:08:17.155Z · LW(p) · GW(p)

But I believe the gatekeepers of AI models will manipulate the algorithms to edit out anything they disagree with whilst promoting their agenda

This reads like a conspiracy theory to me, complete with assumption-laden words and unsupported accusations like "gatekeepers", "manipulate", and "promoting their agenda".

Having worked at more than one of these companies, I can tell you what actually happens: some part of the team picks a "user engagement metric" to care about, like "total time spent on the website" or "total value of products purchased through the website". Then everyone on the team puts a lot of time and effort into writing and testing changes to the algorithms, aiming to make that metric go up, even by 0.1% for a project that took several people a month. Then, after a few years, people ask "why has our website turned into a toxic cesspool?", until someone points out that myopically pushing that one metric as hard as possible turns out to have unfortunate, unanticipated side effects. For example, maybe the most effective way to get people to spend more time on the website turned out to be measures whose net side effect was promoting conspiracy-theory-nutcase interactions (in many flavors, thus appealing to many different subsets of users) over reasonable discussions.
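To make that failure mode concrete, here is a minimal, purely illustrative sketch (all variant names and numbers are invented) of this kind of myopic metric-chasing: the selection loop can only see the one engagement metric, so a variant whose hidden side effect is boosting inflammatory content wins by default.

```python
import random

# Purely illustrative: each "variant" is just a name with a hidden propensity to
# surface inflammatory content, which also happens to raise time on site.
VARIANTS = {
    "baseline":         {"outrage_boost": 0.0},
    "more_comments":    {"outrage_boost": 0.2},
    "more_controversy": {"outrage_boost": 0.6},
}

def measured_time_on_site(variant: str, n_users: int = 10_000) -> float:
    """Fake A/B measurement: outrage keeps people scrolling, so it quietly 'wins'."""
    boost = VARIANTS[variant]["outrage_boost"]
    return sum(random.gauss(30 + 10 * boost, 5) for _ in range(n_users)) / n_users

def pick_winner() -> str:
    # The team ships whatever moves the one metric, even by a fraction of a percent.
    # Nothing here measures discourse quality, polarization, or whether the winning
    # variant boosts conspiracy content.
    return max(VARIANTS, key=measured_time_on_site)

print(pick_winner())  # almost always "more_controversy"
```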

So the 'agenda' here is just "make a profit", not "spread conspiracy theories" or "nefariously promote Liberal opinions", and the methods used don't even always do a good job of making a profit. The message here is that large organizations with a lot of power are stupider, more bureaucratic, and more shortsighted than you appear to think they are.

Yes, most software engineers are university-educated, and thus, like other well-educated people in the modern political environment, tend to be more liberal than an overall population that also includes people who are not university-educated. However, we're strongly encouraged to remember that our users are more diverse than we are and don't all think, believe, or act like us, to "think like the users", and not to encode personal political opinions into the algorithms.

Replies from: Viliam
comment by Viliam · 2024-08-10T10:58:38.678Z · LW(p) · GW(p)

This profit motive is there, but the companies also already spend a lot of effort making sure that AIs won't draw nudes or reach politically incorrect conclusions. In some sense, that is probably also motivated by long-term profit-seeking, because political opposition could get their product banned or boycotted.

But that still means there are two different ways to tweak AIs to maximize profit: (1) random tweaking and selecting the modifications that increase the bottom line in the short term, and (2) reinforcement learning that removes anything someone politically important might object to.

The second one can easily be abused for political purposes... I mean, it already is, but it could be abused much more strongly. Imagine someone from China or Russia or Saudi Arabia investing a lot of money in AI development and in turn demanding that, in addition to censoring nudes or avoiding debates about statistics of race and crime, the AI also avoid mentioning Tiananmen Square, criticizing the special military operation, or criticizing the Prophet. (And of course, the American government will probably make a few demands, too. The First Amendment is nice in theory, but there are sources of funding that can be given or taken away depending on how much you voluntarily comply with the unofficial suggestions made by well-meaning people.)

So what will ultimately happen is some interplay between these two profit-maximizing strategies.
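A toy sketch of that second path, under stated assumptions (the reward, the classifier, and the penalty weight are all invented stand-ins; the topic list is taken from the examples above). Candidate model outputs are ranked by an engagement reward minus a penalty from a hypothetical "someone powerful might object" classifier, and only the top-scoring one survives; this is best-of-n filtering rather than full reinforcement learning, but the optimization pressure is the same.

```python
SENSITIVE_TOPICS = {"tiananmen", "special military operation"}  # whatever the funder dislikes

def engagement_reward(text: str) -> float:
    # Stand-in: longer, punchier answers "engage" more.
    return min(len(text) / 100, 1.0)

def objection_penalty(text: str) -> float:
    # Stand-in classifier: flat penalty per sensitive topic mentioned.
    return sum(1.0 for topic in SENSITIVE_TOPICS if topic in text.lower())

def pick_response(candidates: list[str], penalty_weight: float = 5.0) -> str:
    # With a large enough penalty weight, an accurate-but-objectionable answer
    # simply never wins, which is the "editing out" the original post worries about.
    return max(candidates, key=lambda t: engagement_reward(t) - penalty_weight * objection_penalty(t))

print(pick_response([
    "The 1989 Tiananmen Square protests ended in a violent crackdown.",
    "There are many perspectives on historical events; context matters!",
]))  # the vague answer wins once the penalty outweighs the engagement reward
```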

Replies from: roger-d-1
comment by RogerDearnaley (roger-d-1) · 2024-08-15T19:33:32.067Z · LW(p) · GW(p)

Yes, the profit motive also involves trying to avoid the risks of bad press, a bad reputation, and getting sued/fined. In my experience, large tech companies vary in whether they focus primarily on the bad-press/bad-reputation side or on the "don't get sued/fined" side (I assume depending mostly on how much they have previously lost to being sued/fined).