A Policy Proposal

post by phdead · 2024-09-29T20:45:34.745Z · LW · GW · 3 comments


(Crosspost of https://phoropter.substack.com/p/a-policy-proposal)

TL;DR: I think that the features used by recommendation systems should be configurable by end users receiving recommendations, and that this ability should be enforced by policy. Just as the GDPR protects a user's ability to choose which cookies are enabled, a user should be able to pick what data goes into any algorithmically generated feed they view. The legislation would also enforce a minimum granularity for dividing feature inputs.

I expect this policy proposal to get a lot of pushback, and this post is about explaining:

  1. The specific problems this proposal tries to solve.
  2. Why this proposal solves those problems.
  3. Why this is feasible on a technological / political level.
  4. What the challenges are for effective implementation of this policy.

Part 1: Why regulate recommendation systems?

There are two reasons a government may want to regulate a recommendation system.

  1. Governments may believe that regulating recommendation systems would be societally beneficial, by enabling users to better control their relationship with technology.
  2. Governments may believe users have a right to control how their data is used.

These are both reasonable. To the first point, we all know people who cannot control their relationship to media to the point that it interferes with their daily life. Many of us wish we used our phones less but are stuck "rotting in bed" for lack of tools to control how addictive our social media experience is.

In my own life, using parental controls on my phone (with a friend as the 'parent') and using apps such as Cold Turkey on my laptop have drastically improved my quality of life. However, these tools are tricky to set up and force an all-or-nothing approach; stronger consumer rights would enable more people to realize the benefits of a healthy relationship with recommendation engines.

To the second point, remember the cold feeling that shivered down your spine when it came out that Facebook was listening in on people's audio messages to better recommend products? That worry almost feels passé now in the age of massive data collection and aggregation, but it would be nice if you could selectively prevent social media websites from using particular pieces of data they have about you while still retaining use of the app.

Part 2: Why control over what features are used?

I've been thinking about this problem for a while now, and settled on this particular solution because I believe it sits in a Goldilocks zone, achieving a lot of benefit at minimal cost:

  1. Policy proposals that restrict what type of data can be used for certain purposes seem likely to fail against the myriad applications of recommendation systems and hinder product quality.
  2. Policy proposals that restrict regulations to children miss the negative impacts these technologies have on adults.
  3. Different people want wildly different things from their technology. This proposal allows full flexibility for end users to choose what they want. Furthermore, most recommendation systems are already built to work with unreliable data from various sources, so this regulation shouldn't decrease performance much (see the sketch after this list).
  4. Enforcing a minimum granularity for dividing the features fed to recommendation systems (e.g. splitting out biometric data, location data, etc.) would guarantee users easy-to-use control.
  5. Disclosing what data recommendation systems use, without specifying how the data is used, allows companies to retain competitive advantage while enforcing consumer protections. In particular, it gives users greater control over their relationship to technology by controlling their recommendations, as well as greater control over how their data is used. For example, with this regulation users could give Facebook their location data for suggesting groups they might wish to join, but not for advertising.
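
To make point 3 concrete, here is a minimal sketch of how a recommender might mask out feature groups a user has disabled. Everything in it (the group names, dimensions, and dot-product scorer) is a hypothetical illustration, not any particular production system:

    import numpy as np

    # Hypothetical feature groups a user could enable or disable.
    GROUPS = ["personal", "location", "user_activity"]
    DIM = 8  # embedding width per group (illustrative)

    def score(item_embedding: np.ndarray,
              user_features: dict[str, np.ndarray],
              enabled_groups: set[str]) -> float:
        """Dot-product score using only the feature groups the user enabled.

        Disabled or missing groups are zeroed out, which is how many
        recommenders already handle unreliable or absent inputs.
        """
        parts = []
        for group in GROUPS:
            if group in enabled_groups and group in user_features:
                parts.append(user_features[group])
            else:
                parts.append(np.zeros(DIM))  # mask the disabled group
        user_vector = np.concatenate(parts)
        return float(user_vector @ item_embedding)

    # Example: a user who shares activity data but not personal or location data.
    rng = np.random.default_rng(0)
    user_features = {g: rng.normal(size=DIM) for g in GROUPS}
    item = rng.normal(size=DIM * len(GROUPS))
    print(score(item, user_features, enabled_groups={"user_activity"}))

In this sketch the masking happens at inference time, so the model would not need to be retrained for each combination of enabled groups.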

Part 3: How could this be implemented?

I think the existence of the GDPR and CCPA implies that this regulation could be implemented through existing legal frameworks. Technologically, recommendation systems can be designed to work with only a subset of the total data per input / example. Note that I don't mean to prevent inference from permitted data: if the system can successfully guess your income bracket from your location, it is allowed to use that inferred information even if you specify that you don't want your personal data included. While users interact with many recommendation systems on a day-to-day basis, I think that with the right wording the process could be quite seamless:

  1. For search applications, this could be part of the search controls.
  2. For suggestions, this could be configured on first usage.
  3. For advertising, you could configure settings for each provider of ads and have them synced over cookies across all devices (a sketch of one possible representation follows this list).
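
As one possible representation of the per-surface settings above (the names and shape here are my own illustration, not an existing standard), a small consent object could be stored client-side, synced via cookies, and consulted by each surface:

    from dataclasses import dataclass, field

    # Hypothetical per-surface consent record; in practice this might be
    # serialized into a cookie or a synced account setting.
    @dataclass
    class FeedConsent:
        surface: str  # "search", "suggestions", or "ads"
        enabled_groups: set[str] = field(default_factory=set)

    @dataclass
    class UserConsent:
        per_surface: dict[str, FeedConsent] = field(default_factory=dict)

        def allows(self, surface: str, group: str) -> bool:
            consent = self.per_surface.get(surface)
            return consent is not None and group in consent.enabled_groups

    # Example from Part 2: location data allowed for group suggestions,
    # but not for advertising.
    consent = UserConsent(per_surface={
        "suggestions": FeedConsent("suggestions", {"location", "user_activity"}),
        "ads": FeedConsent("ads", {"user_activity"}),
    })
    assert consent.allows("suggestions", "location")
    assert not consent.allows("ads", "location")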

I think that defining sets of data that users might want control over, and what goes into those sets, is feasible. Some examples:

  1. Personal Data - age, weight, race, income.
  2. Location Data - explicit location as well as IP address.
  3. User Data - messages on the site, user interactions, etc. This might benefit from greater granularity. I'm sure that if anyone actually runs with this idea, the list of groupings will get discussed and debated for ages; I don't focus on it here (a code sketch of one way to apply these groupings follows).
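
As a sketch of how such groupings could be enforced, assuming the three example groups above and some hypothetical raw field names, any field whose group the user hasn't enabled would be stripped before features ever reach the model:

    # Hypothetical mapping from raw fields to the regulated feature groups.
    FEATURE_GROUPS = {
        "personal": {"age", "weight", "race", "income"},
        "location": {"explicit_location", "ip_address"},
        "user_activity": {"messages", "likes", "follows"},
    }

    def filter_features(raw: dict, enabled_groups: set[str]) -> dict:
        """Drop any raw field whose feature group the user has not enabled."""
        allowed_fields = set().union(
            *(FEATURE_GROUPS[g] for g in enabled_groups if g in FEATURE_GROUPS)
        )
        return {k: v for k, v in raw.items() if k in allowed_fields}

    # Example: with only "location" enabled, income and messages are dropped.
    raw = {"income": 55_000, "ip_address": "203.0.113.7", "messages": ["hi"]}
    print(filter_features(raw, {"location"}))  # {'ip_address': '203.0.113.7'}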

Part 4: Challenges

I anticipate that this regulation will be quite controversial because it threatens the revenues that power the internet. This is a real problem! Ads enable the internet to exist as it is. It's quite possible that legislation like this would significantly hamper the social media / influencer economy. As I said above, I think this argument would bear more weight if this regulation made certain systems illegal. As it stands, the argument basically boils down to "we shouldn't give people control over what they see on the internet because it impacts the profits of certain industries".

At the moment I'm of the opinion that this problem can be solved by showing more ads to users whose ads are less profitable, or by simply barring unprofitable users, with the option to use the site as normal if they enable targeting. Alternatively, vendors could focus on building goodwill between apps that show ads and their users.

I think some people will oppose the regulation for not being extreme enough. I would ask those who believe this to spell out what tighter regulation they support, and what in particular it would achieve. I also think that this regulation should only apply to companies above a certain minimum revenue or number of users.

Conclusion

I think we should seriously consider increased consumer rights for recommendation systems, and propose a framework to realize that. Feedback appreciated!

Currently the main question I have been debating is whether users should be forced to make a choice, or if the options should just be there but quietly default to whatever the provider prefers. I currently lean towards the latter, as I would be loath to add more terms and conditions to using an app or site.

I'm also curious if people with more understanding of policy spheres know what a naive idiot searching for solutions to a certain problem should do when they find the sketch of one. Advice appreciated.

3 comments


comment by habryka (habryka4) · 2024-09-29T21:18:21.474Z · LW(p) · GW(p)

    TL;DR: I think that the features used by recommendation systems should be configurable by end users receiving recommendations, and that this ability should be enforced by policy. Just as the GDPR protects a user's ability to choose which cookies are enabled, a user should be able to pick what data goes into any algorithmically generated feed they view. The legislation would also enforce a minimum granularity for dividing feature inputs.

Using GDPR cookie regulation, one of the most obnoxious regulations of the last decade with an incredibly silly amount of negative externalities, as the central example, does not make me hopeful about your models of what makes good policy.

comment by phdead · 2024-09-29T22:01:27.334Z · LW(p) · GW(p)

I think GDPR cookie regulation is bad because it forces users to make the choice, thus adding an obnoxious layer to using any website. I don't think the granular control itself is the problem? As I say towards the end, I don't think we should force users to choose upon using a website/app, but only allow for more granular control of what data is used in which feeds.

comment by habryka (habryka4) · 2024-09-29T22:43:45.830Z · LW(p) · GW(p)

Supporting GDPR easily doubles the cost of many software projects and introduces unclear liability to a huge number of organizations who cannot afford that. 

It's an incredible pain to basically every organization I know of, including in situations that you really wouldn't expect it to (one example I recently heard: "organization cannot integrate sensitive external complaints about attendees into the admission process of their events because complaints would constitute private information which they then would need to share with the attendees the complaints are about").