Optimizing the news feed
post by paulfchristiano · 2016-12-01T23:23:55.403Z · LW · GW · Legacy · 13 comments
This is a link post for https://sideways-view.com/2016/12/01/optimizing-the-news-feed/
Comments sorted by top scores.
comment by agilecaveman · 2016-12-02T00:31:38.599Z · LW(p) · GW(p)
This is really good; however, I would love some additional discussion of the way the current optimization changes the user.
Keep in mind that when Facebook optimizes "clicks" or "scrolls", it does so by altering user behavior, and thus altering the user's internal S1 model of what is important. This can easily lead to a distortion of reality, beliefs, and self-esteem. There have been many articles and studies correlating Facebook usage with poor mental health. However, simply understanding what "optimization" means here is enough to conclude that this is happening.
While a lot of these issues get lumped under the same umbrella of "digital addiction," I think Facebook is a much more serious problem than, say, video games. Video games do not, as a rule, act through the very social channels that help reduce mental illness. Facebook does.
Another problem is Facebook's internal culture, which as of 4 years ago was very much marked by Kool-Aid that somehow promised unbelievable power (1 billion users, hooray) without necessarily caring about responsibility ("all we want to do is make the world open and connected, why is everyone mad at us").
This problem is also compounded by the fact that Facebook gets a lot of shitty critiques (like the criticism that they run A/B tests at all) and has thus learned to ignore legitimate questions of value learning.
Full disclosure: I used to work at FB.
Replies from: moridinamael, Raemon
↑ comment by moridinamael · 2016-12-02T15:51:22.334Z · LW(p) · GW(p)
It's just baffling to me that this happened, because it seems obvious on its face that "outrageous" or intentionally politically inflammatory material would be an undesirable attractor in interest-space.
My Facebook feed thinks that I'm most interested in the stupidest and most inflammatory individuals and ideas because that's where my eyes linger for reasons that I don't reflectively approve of. I wonder how quickly it would "learn" otherwise if I made an effort to break this pattern.
↑ comment by Raemon · 2016-12-03T14:57:37.324Z · LW(p) · GW(p)
Why do you no longer work at FB?
(It seems like more people who care about things should try working at FB, particularly if there were any learnable path to gaining some degree of influence over the algorithms or the values of the company, but maybe this is just hopelessly naive.)
comment by sarahconstantin · 2016-12-02T02:02:02.432Z · LW(p) · GW(p)
I really like this post. This is a practical way for social media sites to get data on people's reflective preferences as well as their behavioral responses.
Another possible mechanic would be to display a user's behavior to the user. "Here's what you pay the most attention to; here's what you said you want to pay the most attention to; here's how much they differ." Being aware/mindful of one's habitual behavior is traditionally believed to be a useful step in habit change. Of course, it may not be in FB's interest to display that information, but I'd imagine it's possible for apps that use the FB graph to do that as well.
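As a rough illustration of that mechanic, here is a minimal sketch (the topic names, data, and helper functions are hypothetical illustrations, not any real Facebook API) of how an app could compare where a user's attention actually goes with where they said they want it to go:

```python
# Minimal sketch: compare revealed attention with stated preferences.
# All names and data are hypothetical, for illustration only.

def normalize(weights):
    """Scale a dict of nonnegative scores so they sum to 1."""
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()} if total else weights

def attention_vs_stated(attention_seconds, stated_importance):
    """Per-topic gap between revealed attention share and stated preference share."""
    revealed = normalize(attention_seconds)
    stated = normalize(stated_importance)
    topics = set(revealed) | set(stated)
    return {t: revealed.get(t, 0.0) - stated.get(t, 0.0) for t in topics}

# Example: the user says they care most about friends' updates,
# but their attention mostly goes to inflammatory political posts.
gaps = attention_vs_stated(
    attention_seconds={"political outrage": 1200, "friend updates": 300},
    stated_importance={"political outrage": 2, "friend updates": 8},
)
for topic, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{topic}: revealed minus stated attention share = {gap:+.0%}")
```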
Replies from: Lumifer
comment by CarlShulman · 2016-12-02T00:26:05.112Z · LW(p) · GW(p)
A different possibility is identifying vectors in Facebook-behavior space, and letting users alter their feeds accordingly, e.g. I might want to see my feed shifted in the direction of more intelligent users, people outside the US, other political views, etc. At the individual level, I might be able to request a shift in my feed in the direction of individual Facebook friends I respect (where they give general or specific permission).
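A minimal sketch of how such a shift could work, assuming users and posts share an embedding space as in standard collaborative-filtering recommenders (the vectors, dimensions, and names below are hypothetical):

```python
# Minimal sketch: nudge my feed's ranking query from my own embedding
# toward a target embedding (a respected friend, non-US users, etc.).
import numpy as np

def shifted_scores(my_vec, target_vec, post_vecs, alpha=0.5):
    """Score posts for a feed shifted from my embedding toward a target embedding.

    alpha=0 reproduces my usual feed; alpha=1 ranks posts as if I were the target.
    """
    query = (1 - alpha) * my_vec + alpha * target_vec
    return post_vecs @ query  # dot-product relevance scores

# Toy example in a 3-dimensional embedding space.
me = np.array([1.0, 0.0, 0.2])
respected_friend = np.array([0.2, 1.0, 0.5])
posts = np.array([[0.9, 0.1, 0.0],   # post close to my current interests
                  [0.1, 0.9, 0.4]])  # post closer to the friend's interests

print(shifted_scores(me, respected_friend, posts, alpha=0.0))  # my usual ranking
print(shifted_scores(me, respected_friend, posts, alpha=0.7))  # shifted ranking
```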
comment by Wei Dai (Wei_Dai) · 2016-12-02T12:26:52.908Z · LW(p) · GW(p)
I also expect Facebook and similar products to move gradually in this direction over the coming decade. If they don’t, I think that’s probably a bad sign.
How do you expect this to happen? Facebook actually wanting to be a better global citizen instead of maximizing profits? Government regulation? Some sort of market-based solution (Facebook offering a user-aligned paid option that most people will then opt for, or competition from other social networks)? I'd be quite surprised if any of these happened. Am I missing something?
And why isn't it a bad sign that Facebook hasn't already done what you suggested in your post?
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2016-12-02T18:08:40.716Z · LW(p) · GW(p)
How do you expect this to happen?
I think there are two mechanisms:
- Public image is important to companies like Facebook and Google. I don't think that they will charge for a user-aligned version, but I also don't think there would be much cost to ad revenue from moving in this direction. E.g. I think they might cave on the fake news thing, modulo the proposed fixes mostly being terrible ideas. Optimizing for user preferences may be worth it in the interest of a positive public image alone.
- I don't think that Facebook's ownership and engineers are entirely profit-focused; they will sometimes do things just because they feel it makes the world better at modest cost. (I know more people at Google and am less informed about FB.)
Relating the two: if e.g. Google organized its services in this way, if the benefits were broadly understood, and if Facebook publicly continued to optimize for things that its users don't want optimized, I think it could be bad for Facebook's image (with customers, and especially with hires).
I'd be quite surprised if any of these happened.
Does this bear on our other disagreements about how optimistic to be about humanity? Is it worth trying to find a precise statement and making a bet?
I'm probably willing to give > 50% on something like: "Within 5 years, there is a Google or Facebook service that conducts detailed surveys of user preferences about what content to display and explicitly optimizes for those preferences." I could probably also make stronger statements re: scope of adoption.
And why isn't it a bad sign that Facebook hasn't already done what you suggested in your post?
I think these mechanisms probably weren't nearly as feasible 5 years ago as they are today, based on gradual shifts in organization and culture at tech companies (especially concerning ML). And public appetite for more responsible optimization has been rapidly increasing. So I don't think non-action so far is a very strong sign.
Also, Facebook seems to sometimes do things like survey users on how much they like content, and include ad hoc adjustments to their optimization in order to produce more-liked content (e.g. downweighting like-baiting posts). In some sense this proposal is just a formalization of that procedure. I expect in general that formalizing optimizations will become more common over the coming years, due to a combination of the increasing usefulness of ML and cultural change to accommodate ML progress.
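A minimal sketch of what such a formalization might look like, replacing a hand-tuned rule like "downweight like-bait" with an explicit blend of predicted engagement and predicted surveyed approval (the models, weights, and names are hypothetical illustrations, not Facebook's actual system):

```python
# Minimal sketch: rank posts by a mix of predicted engagement and
# predicted surveyed approval, instead of ad hoc per-rule adjustments.

def feed_score(post, engagement_model, approval_model, approval_weight=0.6):
    """Blend predicted engagement with predicted surveyed approval."""
    engagement = engagement_model(post)  # e.g. predicted click/scroll propensity
    approval = approval_model(post)      # e.g. predicted "glad I saw this" survey answer
    return (1 - approval_weight) * engagement + approval_weight * approval

# Toy stand-ins for the two learned models.
def predicted_engagement(post):
    return 0.9 if post["like_bait"] else 0.5

def predicted_approval(post):
    return 0.1 if post["like_bait"] else 0.7

posts = [{"id": "like_bait_post", "like_bait": True},
         {"id": "friend_photo", "like_bait": False}]
ranked = sorted(posts,
                key=lambda p: feed_score(p, predicted_engagement, predicted_approval),
                reverse=True)
print([p["id"] for p in ranked])  # the like-bait post drops below the friend's photo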
Replies from: Wei_Dai, Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2017-09-09T16:58:00.845Z · LW(p) · GW(p)
I'm curious if you occasionally unblock your Facebook newsfeed to check if things have gotten better or worse. I haven't been using Facebook much until recently, but I've noticed a couple of very user-unfriendly "features" that seem to indicate that FB just doesn't care much about its public image. One is suggested posts (e.g., "Popular Across Facebook") that are hard to distinguish from posts from friends, and difficult to ad-block (due to looking just like regular posts in HTML). Another is fake instant message notifications on the mobile app whenever I "friend" someone new, that try to entice me into installing its instant messaging app (only to find out that the "notification" merely says I can now instant message that person). If I don't install the IM app, I get more and more of these fake notifications (2 from one recent "friend" and 4 from another).
Has it always been this bad or even worse in the past? Does it seem to you that FB is becoming more user-aligned, or less?
ETA: I just saw this post near the top of Hacker News, pointing out a bunch of other FB features designed to increase user engagement at the expense of their actual interests. The author seems to think the problem has gotten a lot worse over time.
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2017-09-16T03:43:55.354Z · LW(p) · GW(p)
I think that Facebook's behavior has probably gotten worse over time as part of a general move towards cashing in / monetizing.
I don't think I've looked at my feed in a few years.
On the original point: I think at equilibrium services like Facebook maximize total welfare, then take their cut in a socially efficient way (e.g. as payment). I think the only question is how long it takes to get there.
Replies from: Wei_Dai, Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2019-08-25T19:21:47.857Z · LW(p) · GW(p)
I think at equilibrium services like Facebook maximize total welfare, then take their cut in a socially efficient way (e.g. as payment). I think the only question is how long it takes to get there.
I wonder if you have changed your mind about this at all. Unless I'm misunderstanding you somehow, this seems like an important disagreement to resolve.
↑ comment by Wei Dai (Wei_Dai) · 2017-09-17T01:33:51.997Z · LW(p) · GW(p)
On the original point: I think at equilibrium services like Facebook maximize total welfare, then take their cut in a socially efficient way (e.g. as payment). I think the only question is how long it takes to get there.
Why? There are plenty of theoretical models in economics where at equilibrium total welfare does not get maximized. See this post and the standard monopoly model for some examples. The general impression I get from studying economics is that the conditions under which total welfare does get maximized tend to be quite specific and not easy to obtain in practice. Do you agree? In other words, do you generally expect markets to have socially efficient equilibria and expect Facebook to be an instance of that absent a reason to think otherwise, or do you think there's something special about Facebook's situation?
↑ comment by Wei Dai (Wei_Dai) · 2016-12-05T06:56:13.357Z · LW(p) · GW(p)
I'm probably willing to give > 50% on something like: "Within 5 years, there is a Google or Facebook service that conducts detailed surveys of user preferences about what content to display and explicitly optimizes for those preferences."
The Slate article you linked to seems to suggest that Facebook already did something like that, and then backed off from it:
"Crucial as the feed quality panel has become to Facebook’s algorithm, the company has grown increasingly aware that no single source of data can tell it everything. It has responded by developing a sort of checks-and-balances system in which every news feed tweak must undergo a battery of tests among different types of audiences, and be judged on a variety of different metrics. ..."
"At each step, the company collects data on the change’s effect on metrics ranging from user engagement to time spent on the site to ad revenue to page-load time. Diagnostic tools are set up to detect an abnormally large change on any one of these crucial metrics in real time, setting off a sort of internal alarm that automatically notifies key members of the news feed team."
I think concern about public image can only push a company so far. Presumably none of the complaints we're seeing are news to Facebook. They saw this coming, or should have seen it coming, years ago, and what they've done so far seems like the best predictor of what they'd be willing to do in the future.
If I understand correctly, what you're proposing that's different from what Facebook is already doing are 1) fully automated end-to-end machine learning optimizing only for user preferences and specifically not for engagement/ad revenue, 2) optimizing for preferences-upon-reflection instead of current preferences, and maybe 3) trying to predict and optimize for each user's individual preferences instead of using aggregate surveyed preferences (which is what it sounds like Facebook is currently doing).
1) seems unlikely because Facebook ultimately still cares mostly about engagement/ad revenue and is willing to optimize for user preferences only so far as it doesn't significantly affect their bottom line. So they'll want to either maintain manual control to override user preferences when needed, or not purely target user preferences, or both.
2) might happen to some greater extent. But presumably there are reasons why they haven't done more in this direction already.
3) I think Facebook would be worried that doing this would make them even more vulnerable to charges of creating filter bubbles, undermining democracy, etc.