Book review: The Age of Surveillance Capitalism

post by Richard_Ngo (ricraz) · 2022-02-14T07:50:01.273Z · LW · GW · 5 comments


I recently finished Shoshana Zuboff’s book The Age of Surveillance Capitalism. It’s received glowing reviews, but left me disappointed. Zuboff spends much of the book outraged at the behaviour of big tech corporations, but often neglects to explain what’s actually bad about either the behaviour itself or the outcomes she warns it’ll lead to. The result is far more polemical than persuasive. I do believe that there are significant problems with the technology industry - but mostly different problems from the ones she focuses on. And she neglects to account for the benefits of technology, or explain how we should weigh them against the harms.

Her argument proceeds in three stages, which I’ll address in turn:

  1. Companies like Google and Facebook have an “extraction imperative” to continually “expropriate” more personal data about their users.
  2. They use this for “the instrumentation and instrumentalisation of behaviour for the purposes of modification, prediction, monetisation, and control.”
  3. Ultimately, this will lead to “a form of tyranny” comparable to (but quite different from) totalitarianism, which Zuboff calls instrumentarianism.

On data: I agree that big companies collect a lot of data about their users. That’s a well-known fact. In return, those users get access to a wide variety of high-quality software for free. I, for one, would pay thousands of dollars if necessary to continue using the digital products that are currently free because they’re funded by advertising. So what makes the collection of my data “extraction”, or “appropriation”, as opposed to a fair exchange? Why does it “abandon long-standing organic reciprocities with people”? It’s hard to say. Here’s Zuboff’s explanation:

Industrial capitalism transformed nature’s raw materials into commodities, and surveillance capitalism lays its claims to the stuff of human nature for a new commodity invention. Now it is human nature that is scraped, torn, and taken for another century’s market project. It is obscene to suppose that this harm can be reduced to the obvious fact that users receive no fee for the raw material they supply. That critique is a feat of misdirection that would use a pricing mechanism to institutionalise and therefore legitimate the extraction of human behaviour for manufacturing and sale. It ignores the key point that the essence of the exploitation here is the rendering of our lives as behavioural data for the sake of others’ improved control over us. The remarkable questions here concern the facts that our lives are rendered as behavioural data in the first place; that ignorance is a condition of this ubiquitous rendition; that decision rights vanish before one even knows that there is a decision to make; that there are consequences to this diminishment of rights that we can neither see nor tell; that there is no exit, no voice, and no loyalty, only helplessness, resignation, and psychic numbing.

This is fiery prose; but it’s not really an argument. In more prosaic terms, websites are using my data to serve me ads which I’m more likely to click on. Often they do so by showing me products which I’m more interested in, which I actively prefer compared with seeing ads that are irrelevant to me. This form of “prediction and control” is on par with any other business “predicting and controlling” my purchases by offering me better products; there’s nothing “intrinsically exploitative” about it.

Now, there are other types of prediction and control - such as the proliferation of worryingly addictive newsfeeds and games. But surprisingly, Zuboff talks very little about the harmful consequences of online addiction! Instead she argues that the behaviour of tech companies is wrong for intrinsic reasons. She argues that “there is no freedom without uncertainty” and that predicting our behaviour violates our “right to the future tense” - again taking personalised advertising as her central example. But the degree of personalised prediction is fundamentally the wrong metric to focus on. Some of the products which predict our personal behaviour in the greatest detail - sleep trackers, or biometric trackers - allow us to exercise more control over our own lives, increasing our effective freedom. Whereas many of the addictive games and products which most undermine our control over our lives actually rely very little on personal data - as one example, the Universal Paperclips game is incredibly addictive without even having graphics, let alone personalised algorithms. And the “slot machine”-style intermittent rewards used by mobile apps like Facebook again don’t require much personalisation.

It’s true that personalisation can be used to exacerbate these problems - I’m thinking in particular of TikTok, whose recommendation algorithms are scarily powerful. But there’s also a case to be made that this will improve over time. Simple metrics, like number of views, or number of likes, are easy for companies to optimise for. Whereas figuring out how to optimise for what people really want [EA · GW] is a trickier problem. So it’s not surprising if companies haven’t figured it out yet. But as they do, users will favour the products that give them the best experience (as one example, I really like the premise of the Dispo app). Whether or not those products use personal data is much less important than whether they are beneficial or harmful for their users.
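To make that asymmetry concrete, here is a toy sketch of why the simple metric tends to win by default - this is illustrative only, not drawn from the book or from any real product; the item names and numbers are invented. Engagement is logged for every impression, so ranking by it is a one-liner, whereas a satisfaction signal is sparse and expensive, so a ranker aimed at "what people really want" has to work around missing data and the proxy creeps back in:

```python
# Toy feed-ranking sketch: engagement (clicks) is observed for every item,
# while satisfaction is only reported for some items, so optimising the
# proxy is trivial and optimising the real target requires guesswork.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Item:
    title: str
    click_rate: float                     # cheap signal, logged on every impression
    satisfaction: Optional[float] = None  # expensive signal (surveys), often missing

feed = [
    Item("outrage-bait thread", click_rate=0.9, satisfaction=0.2),
    Item("long-form explainer", click_rate=0.3, satisfaction=0.8),
    Item("friend's photo album", click_rate=0.5),  # satisfaction never measured
]

# Easy objective: sort by the proxy.
by_engagement = sorted(feed, key=lambda x: x.click_rate, reverse=True)

# Harder objective: sort by satisfaction, falling back to the proxy where the
# signal is missing - which is exactly how the proxy creeps back in.
by_satisfaction = sorted(
    feed,
    key=lambda x: x.satisfaction if x.satisfaction is not None else x.click_rate,
    reverse=True,
)

print("engagement ranking:  ", [x.title for x in by_engagement])
print("satisfaction ranking:", [x.title for x in by_satisfaction])
```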

Lastly, we come to the question of longer-term risks. What is Zuboff most worried about? She holds up the example of Skinner’s novel Walden Two, in which behavioural control is used to teach children better self-control and other virtuous behaviour. Her term for a society in which such tools are widely used is “instrumentarian”. This argument is a little strange from the beginning, given that Walden Two was intended as (and usually interpreted as) a utopia, not a dystopia. The idea that technology can help us become better versions of ourselves is a longstanding one; behavioural reinforcement is just one mechanism by which that might occur. I can certainly see why the idea is discomfiting, but I’d like to see an actual argument for why it’s bad - which Zuboff doesn’t provide.

Perhaps the most compelling argument against instrumentarianism from my perspective is that it paves the way for behavioural control technology to become concentrated and used to maintain political power, in particular by totalitarian regimes. But for reasons I don’t understand, Zuboff downplays this risk, arguing that “instrumentarian power is best understood as the precise antithesis of Orwell’s Big Brother”. In doing so, she holds up China as an example of where the West might be headed. Yet China is precisely a case in which surveillance has aided increasing authoritarianism, as seen most notably in the genocide of the Uighurs. Whereas, whatever the faults of big US tech companies in using data to predict consumer behaviour, they have so far stayed fairly independent from exercises of governmental power. So I’m still uncertain about what the actual harms of instrumentarianism are.

Despite this strange dialectic, I do think that Zuboff’s warnings about instrumentarianism contribute to preventing authoritarian uses of surveillance. So, given the importance of preventing surveillance-aided totalitarianism, perhaps I should support Zuboff’s arguments overall, despite my reservations about the way she makes them. But there are other reasons to be cautious about her arguments. As Zuboff identifies, human data is an important component for training AI. Unlike her, though, I don’t think this is a bad thing - if it goes well, AI development has the potential to create a huge amount of wealth and improve the lives of billions. The big question is whether it will go well. One of the key problems AI researchers face is the difficulty of specifying the behaviour we’d like our systems to carry out: the standard approach of training AIs on explicit reward functions often leads to unintended misbehaviour. And the most promising techniques for solving this involve harnessing human data at large scale. So it’s important not to reflexively reject the large-scale collection and use of data to train AIs - because as such systems become increasingly advanced, it’s this data which will allow us to point them in the right directions.
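To give a rough sense of what "harnessing human data at large scale" can look like here, below is a minimal, purely illustrative sketch: rather than hand-writing a reward function, one can fit a reward model to pairwise human preference judgements (a Bradley-Terry-style fit) and use its scores to steer the system. The feature vectors, the simulated judgements, and the training loop are all invented for illustration, not taken from any particular system.

```python
# Purely illustrative: fit a reward model to pairwise human preference
# judgements instead of hand-writing a reward function. All data here is
# simulated; "true_w" stands in for the humans' latent values.
import numpy as np

rng = np.random.default_rng(0)

dim = 5
true_w = rng.normal(size=dim)              # latent "what humans value"
candidates = rng.normal(size=(200, dim))   # candidate behaviours as feature vectors

# Simulated comparisons: for each pair, the human prefers the higher-value option.
pairs = rng.integers(0, len(candidates), size=(500, 2))
prefers_first = (candidates[pairs[:, 0]] @ true_w) > (candidates[pairs[:, 1]] @ true_w)

# Bradley-Terry-style fit: P(a preferred to b) = sigmoid(r(a) - r(b)),
# trained by gradient ascent on the log-likelihood of the observed choices.
diff = candidates[pairs[:, 0]] - candidates[pairs[:, 1]]
w = np.zeros(dim)
for _ in range(2000):
    p_first = 1.0 / (1.0 + np.exp(-diff @ w))
    grad = diff.T @ (prefers_first.astype(float) - p_first) / len(pairs)
    w += 0.5 * grad

# The learned reward scores fresh candidates similarly to the latent values,
# so it can stand in for the reward function we couldn't specify by hand.
test = rng.normal(size=(1000, dim))
corr = np.corrcoef(test @ w, test @ true_w)[0, 1]
print(f"correlation of learned vs. latent scores: {corr:.2f}")
```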

5 comments


comment by AnthonyC · 2022-02-14T15:21:54.483Z · LW(p) · GW(p)

But as they do, users will favour the products that give them the best experience 

 

This is one point I find difficult to believe, or at least difficult to find likely. Most people, who are not unusually savvy, already give much more credence to ads, and much less credence to the degree to which they are actually affected by ads, than they should. Why should that reverse as ads get even better at manipulating us? Why should I expect people to start demonstrating the level of long-term thinking and short-term impulse control and willingness to look weird to their peers that such a shift would need? It's not like we have a great track record of collectively managing this for other addictive but harmful stimuli, whether informational, social, or biochemical.

comment by Joe Kwon · 2022-02-15T21:33:56.413Z · LW(p) · GW(p)

I'm also not sold on this specific part, and I'm really curious what supports the idea. One reason I don't think it's good to rely on this as the default expectation, though, is that I'm skeptical about humans' ability to even know what the "best experience" is in the first place. I wrote a short, rambly post touching in part on my worries about online addiction: https://www.lesswrong.com/posts/rZLKcPzpJvoxxFewL/converging-toward-a-million-worlds [LW · GW]

Basically, I buy into the idea that there are two distinct value systems in humans: a subconscious system whose learning comes mostly from evolutionary pressures, and a conscious/executive system that cares more about "higher-order values", which I unfortunately can't really explicate. Examples of the former: craving sweets, addiction to online games with well-engineered artificial fulfillment. Example of the latter: wanting to work hard, even when it's physically demanding or mentally stressful, to make some type of positive impact for broader society.

And I think today's ML systems asymmetrically exploit the subconscious value system at the expense of the conscious/executive one. Even knowing all this, I really struggle to overcome instances of akrasia, to control my diet, to not drown myself in entertainment consumption, etc. I feel like there should be some kind of attempt to level the playing field, so to speak, between the two value systems. At the very least, people interacting with powerful recommender (or just general) ML systems should have transparency and knowledge about this phenomenon; optimally, they would have complete agency and control over which value system to prioritize, and to what extent.

comment by Tahp · 2022-02-21T01:53:54.453Z · LW(p) · GW(p)

The ad market amounts to an auction for societal control. An advertisement is an instrument by which an entity attempts to change the future behavior of many other entities. Generally it is an instrument for a company to make people buy its stuff. There is also political advertising, which is an instrument to make people take actions in support of a cause or person seeking power. Advertising of any type is not known for making reason-based arguments. I recall from an interview with the author that this influence/prediction market was a major objection to the new order. If there is to be a market where companies and political-power-seekers bid for the ability to change the actions of the seething masses according to their own goals, the author felt that the seething masses should have some say in it.

To me, the major issue here is that of consent. It may very well be that I would happily trade some of my attention to Google for excellent file-sharing and navigation tools. It may very well be that I would trade my attention to Facebook for a centralized place to get updates about people I know. In reality, I was never given the option to do anything else. Google effectively owns the entire online ad market that is not Facebook. Any site which is not big enough to directly sell ads against itself has no choice but to surrender the attention of its readers to Google or not have ads. According to parents I know, Facebook is the only place parents are organizing events for their children, so you need a Facebook page if you want to participate in your community. In the US, Facebook Marketplace is a necessity for anyone trying to buy and sell things on the street. I often want to look up information on a local restaurant, only to find that the only way to do so is on their Instagram page; I don't have an account, so I can't participate in that part of my community. The tools holding society together are run by a handful of private companies, such that I can't participate in my community without subjecting myself to targeted advertising that is trying to make me do things I don't want to do. I find this disturbing.

comment by Ofer (ofer) · 2022-02-16T17:39:31.049Z · LW(p) · GW(p)

Simple metrics, like number of views, or number of likes, are easy for companies to optimise for. Whereas figuring out how to optimise for what people really want [EA · GW] is a trickier problem. So it’s not surprising if companies haven’t figured it out yet.

It's also not surprising for a different reason: The financial interests of the shareholders can be very misaligned with what the users "really want". (Which can cause the company to make the product more addictive, serve targeted ads that exploit users' vulnerabilities, etc.).