Unconscious Economics
post by jacobjacob · 2019-02-27T12:58:50.320Z · LW · GW · 30 comments
Here’s an insight I had about how incentives work in practice, that I’ve not seen explained in an econ textbook/course.
There are at least three ways in which incentives affect behaviour: 1) via consciously motivating agents, 2) via unconsciously reinforcing certain behaviour, and 3) via selection effects. I think perhaps 2) and probably 3) are more important, but much less talked about.
Examples of 1) are the following:
- When content creators get paid for the number of views their videos have... they will deliberately try to maximise view-count, for example by crafting vague, clickbaity titles that many people will click on.
- When salespeople get paid a commission based on how many sales they make, but do not lose any salary due to poor customer reviews... they will selectively boast about and exaggerate the good aspects of a product, and downplay or sneakily circumvent discussion of the downsides.
- When college admissions are partly based on grades, students will work really hard to find the teacher’s password and get good grades, instead of doing things like being independently curious and exploratory, and trying to deeply understand the subject.
One objection you might have to this is something like:
Look at those people without integrity, just trying so hard to optimise whatever their incentives tell them to! I myself, and indeed most people, wouldn’t behave that way.
On the one hand, I would make videos I think are good, and honestly sell products the way I would sell something to a friend, and make sure I understand my textbook instead of just memorising things. I’m not some kind of microeconomic robot!
And on the other hand, even if things were not like this… it’s just really hard to creatively find ways of maximising a target. I don’t know what appeals to ‘the kids’ on YouTube, and I don’t know how to find out except by paying for some huge survey or something... human brains aren’t really designed for maximising like that. I couldn’t optimise in all these clever ways even if I wanted to.
One response to this is:
Without engaging with your particular arguments, we know empirically that the conclusion is false. There’s a wealth of econometrics and micro papers showing how demand shifts in response to price changes. I could dig out plenty of references for you… but heck, just look around.
There’s a $10,000/year daycare close to where I live, and when the moms there take their kids to the cinema, they’ll tell them to pretend they’re 6 and not 7 years old just to get a $3 discount on the tickets.
And I’m pretty confident you’ve had persuasive salespeople peddle you something, and then gone home with a lingering sense of regret in your belly…
Or have you ever seen your friend in a queue somewhere and casually slid in right behind them, just to get into the venue 5 minutes earlier?
All in all, if you give people an opportunity to earn some money or time… they’ll tend to take it!
This might or might not be a good reply.
However, by appealing to 2) and 3), we don’t have to make this response at all. The effects of incentives on behaviour don’t have to be consciously mediated. Rather...
- When content creators get paid for the number of views their videos have, those whose natural way of writing titles is a bit more clickbait-y will tend to get more views, and so over time accumulate more influence and social capital in the YouTube community, which makes it harder for less clickbait-y content producers to compete. No one has to change their behaviour or their strategies that much -- rather, when changing incentives you’re changing the rules of the game, and so the winners will be different. Even for those less fortunate producers, those of their videos which are on the clickbait end of things will tend to give them more views and money, and insofar as they just “try to make videos they like, see what happens, and then do more of what worked”, they will be pushed in this direction. (A toy simulation of this dynamic appears after the list.)
- When salespeople get paid a commission based on how many sales they make, but do not lose any salary due to poor customer reviews… employees of a more Machiavellian character will tend to perform better, which will give them more money and social capital at work… and this will give Machiavellian characteristics more influence over that workplace (before even taking into account returns to scale of capital). They will then be in positions of power to decide on which new policies get implemented, and might choose those that they genuinely think sound most reasonable and well-evidenced. They certainly don’t have to mercilessly optimise for a Machiavellian culture, yet because they have all been pre-selected for such personality traits, they’ll tend to be biased in the direction of choosing such policies. As for their more “noble” colleagues, they’ll find that out of all the tactics they’re comfortable with and able to execute, the more sales-y ones will lead them to get more high-fives from the high-status people in the office, more room in the budget at the end of the month, and so forth.
- When college admissions are partly based on grades… the case is left as an exercise for the reader.
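To make mechanisms 2) and 3) concrete, here is a minimal simulation sketch of the content-creator case (the linear view model and all the numbers are invented for illustration; this is a first-principles toy, not an empirical claim about YouTube):

```python
import random

random.seed(0)

# Each creator has a fixed "clickbait level" in [0, 1] and never changes
# their own strategy -- selection and imitation do all the work.
creators = [random.random() for _ in range(100)]

def views(clickbait):
    # Toy assumption: expected views rise with clickbaitiness, plus noise.
    return 100 * clickbait + random.gauss(0, 10)

for generation in range(500):
    ranked = sorted(creators, key=views)
    # The least-viewed creator quits; a newcomer imitates a random creator
    # from the top half, who has more money and visibility.
    ranked[0] = random.choice(ranked[50:])
    creators = ranked

mean = sum(creators) / len(creators)
print(f"mean clickbait level: {mean:.2f}")  # climbs well above the ~0.5 start
```

The incentive gradient is realised entirely through who survives and who gets copied; every individual strategy is a constant.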
If this is true and important, why don’t standard econ textbooks/courses explain this?
I have some hypotheses which seem plausible, but I don’t think they are exhaustive.
1. Selection pressure for explanations requiring the fewest inferential steps
Microeconomics is pretty counterintuitive (for more on the importance of this, see e.g. this post by Scott Sumner). Writing textbooks that explain it to hundreds of thousands of undergrads, even just using consciously scheming agents, is hard. Now both “selection effects” and “reinforcement learning” are independently difficult concepts, which the majority of students will not have been exposed to, and which aren’t the explanatory path of least resistance (even if they might be really important to the small subset of people who want to use econ insights to build new organisations that, for example, do better than the dire state of the attention economy, such as LessWrong).
2. Focus on mathematical modelling
I did half an MSc degree in economics. The focus was not on intuition, but rather on something like “acquire mathematical tools enabling you to do a PhD”. There was a lot of focus on not messing up the multivariable calculus when solving strange optimisation problems with solutions at the boundary or involving utility functions with awkward kinks.
The extent of this mathematisation was sometimes scary. In a finance class I asked the tutor what practical uses there were for some obscure derivative, which we had spent 45 mins and several pages of stochastic calculus proving theorems about. “Oh,” he said, “I guess a few years ago it was used to scam Italian grandmas out of their pensions”.
In classes when I didn’t bother asking, I mostly didn’t find out what things were used for.
3. Focus on the properties of equilibria, rather than the processes whereby systems move to equilibria
Classic econ joke:
There is a story that has been going around about a physicist, a chemist, and an economist who were stranded on a desert island with no implements and a can of food. The physicist and the chemist each devised an ingenious mechanism for getting the can open; the economist merely said, "Assume we have a can opener"!
Standard micro deals with unboundedly rational agents, and its arsenal of fixed-point theorems and what-not reveals the state of affairs after all maximally rational actions have already been taken. When asked how equilibria manifest themselves, and emerge, in practice, one of my tutors helplessly threw her hands in the air and laughed, “that’s for the macroeconomists to work out!”
There seem to be few attempts to teach students how the solutions to the unbounded theorems are approximated in practice, whether via conscious decision-making, selection effects, reinforcement learning, memetics, or some other mechanism.
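To illustrate what a process-level story can look like, here is a minimal sketch with made-up linear supply and demand curves: a naive trial-and-error price adjustment rule reaches the same equilibrium that the fixed-point machinery asserts directly, without any agent solving an optimisation problem.

```python
def demand(price):  # toy linear demand curve, assumed for illustration
    return 100 - 2 * price

def supply(price):  # toy linear supply curve
    return 3 * price

price = 1.0
for _ in range(100):
    excess_demand = demand(price) - supply(price)
    # Grope toward equilibrium: raise the price when buyers outnumber
    # sellers, lower it when sellers outnumber buyers.
    price += 0.05 * excess_demand

print(f"price: {price:.2f}, quantity: {demand(price):.1f}")  # -> 20.00, 60.0
```

Whether real markets implement something like this adjustment via conscious arbitrage, selection among firms, or reinforcement is exactly the question left unaddressed.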
Thanks to Niki Shams and Ben Pace for reading drafts of this.
30 comments
Comments sorted by top scores.
comment by JenniferRM · 2019-02-27T22:30:02.331Z · LW(p) · GW(p)
David Friedman is awesome. I came to the comments to give a Friedman explanation of one generator of economic rationality, from a different Friedman book than the one "strangepoop" cited :-)
In "Law's Order" (which sort of explores how laws that ignore incentives or produce bad incentives tend to be predictably suboptimal) Friedman points out that much of how people decide what to do is based on people finding someone who seems to be "winning" at something and copy them.
(This take is sort of friendly to your "selectionist #3" option but explored in more detail, and applied in more contexts than to simply explain "bad things".)
Friedman doesn't use the term "mimesis", but this is an extremely long-lived academic keyword with many people who have embellished and refined related theories. For example, Peter Thiel has a mild obsession with Rene Girard who was obsessed with a specific theory of mimesis and how it causes human communities to work in predictable ways. If you want the extremely pragmatic layman's version of the basic mimetic theory, it is simply "monkey see, monkey do" :-P
If you adopt mimesis as THE core process which causes human rationality (which it might well not be, but it is interesting to think of a generator of pragmatically correct beliefs in isolation, to see what its weaknesses are and then look for those weaknesses as signatures of the generator in action), it predicts that no new things in the human behavioral range become seriously optimized in a widespread way until AFTER at least one (maybe many) rounds of behavioral mimetic selection on less optimized random human behavioral exploration, where an audience can watch who succeeds and who fails and copy the winners over and over.
The very strong form of this theory (that it is the ONLY thing) is quite bleak and probably false in general, however some locally applied "strong mimesis" theories might be accurate descriptions of how SOME humans select from among various options in SOME parts of real life where optimized behavior is seen but hard to mechanistically explain in other ways.
Friedman pretty much needed to bring up a form of "economic rationality" in his book because a common debating point regarding criminal law in modern times is that incentives have nothing to do with criminal behaviour, because criminals are mostly not very book smart, and often haven't even looked up (much less remembered) the number of years of punishment that any given crime might carry, and so "can't be affected by such numbers".
(Note the contrast to LW's standard inspirational theorizing about a theoretically derived life plan... around here actively encouraging people to look up numbers [LW · GW] before making major life decisions is common.)
Friedman's larger point is that, for example, if burglary is profitable (perhaps punished by a $50 fine, even when the burglar has already sold their loot for $1500), then a child who has an uncle who has figured out this weird/rare trick and makes a living burgling homes will see an uncle who is rich and has a nice life and gives lavish presents at Christmas and donates a lot to the church and is friends with the pastor... That kid will be likely to mimic that uncle without looking up any laws or anything.
Over a long period of time (assuming no change to the laws) the same dynamic in the minds of many children could lead to perhaps 5% of the economy becoming semi-respected burglars, though it would be easy to imagine that another 30% of the private economy would end up focused on mitigating the harms caused by burglary to burglary victims?
(Friedman does not apply the mimesis model to financial crimes, or risky banking practices. However that's definitely something this theory of behavioral causation leads me to think about. Also, advertising seems to me like it might be a situation where harming random strangers in a specific way counts as technically legal, where the perpetration and harm mitigation of the act have both become huge parts of our economy.)
This theory probably under-determines the precise punishments that should be applied for a given crime, but as a heuristic it probably helps constrain punishment sizes, to avoid punishments that are hilariously too small. It suggests that any punishment is too small which allows there to exist a "viable life strategy" that includes committing a crime over and over and then treating the punishment as a mere cost of business.
If you sent burglars to prison for "life without parole" on first offenses, mimesis theory predicts that it would put an end to burglary within a generation or four, but the costs of such a policy might well be higher than the benefits.
(Also, as Friedman himself pointed out over and over in various ways, incentives matter! If, hypothetically, burglary and murder are BOTH punished with "life without parole on first offense" AND murdering someone makes you less likely to be caught as a burglar, then the murder/burglary pair might be mimetically generated as a combined strategy that is viable even when burglary alone is not... If someone was trying to use data science to tune all the punishments to suppress anti-social mimesis, they should really be tuning ALL the punishments and keeping careful and accurate track of the social costs of every anti-social act as part of the larger model.)
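A minimal sketch of this copy-the-winner dynamic (the wage, loot, fine, and catch rate are all hypothetical numbers chosen for illustration): agents never read the statute book or compute expected values; they just imitate whichever lifestyle visibly earns more, and whether burglary spreads depends only on whether the punishment makes it pay.

```python
import random

random.seed(0)

WAGE, LOOT, FINE, CATCH_RATE = 1000, 1500, 50, 0.8  # hypothetical numbers

def income(strategy):
    if strategy == "honest":
        return WAGE
    # Burglars keep the loot but usually pay the fine when caught.
    return LOOT - (FINE if random.random() < CATCH_RATE else 0)

def average(xs):
    return sum(xs) / len(xs) if xs else float("-inf")

agents = ["honest"] * 95 + ["burglar"] * 5
for year in range(30):
    earnings = [(s, income(s)) for s in agents]
    # Onlookers copy whichever lifestyle looks more successful on average;
    # nobody looks up the law.
    avg = {s: average([e for t, e in earnings if t == s])
           for s in ("honest", "burglar")}
    winner = max(avg, key=avg.get)
    agents = [winner if random.random() < 0.1 else a for a in agents]

print(f"burglars after 30 years: {agents.count('burglar')}/100")
# With FINE = 50 burglary takes over; raise FINE to 5000 and it dies out.
```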
In reality, it does seem to me that mimesis is a BIG source of valid and useful rationality for getting along in life, especially for humans who never enter Piaget's "Stage 4" and start applying formal operational reasoning to some things. It works "good enough" a lot of the time that I could imagine it being a core part of any organism's epistemic repertoire?
Indeed, entire cultures seem to exist where the bulk of humans lack formal operational reasoning. For example, anthropologists who study such things often find that traditional farmers (which was basically ALL farmers, prior to the enlightenment) with very clever farming practices don't actually know how or why their farming practices work. They just "do what everyone has always done", and it basically works...
One keyword that offers another path here is one Piaget himself coined: "genetic epistemology". This wasn't meant in the sense of DNA, but rather in the sense of "generative", like "where and how is knowledge generated". I think stage 4 reasoning might be one real kind of generator (see: science and technology), but I think it is not anything like the most common generator, neither among humans nor among other animals.
comment by johnswentworth · 2020-12-26T18:41:57.916Z · LW(p) · GW(p)
Connection to Alignment
One of the main arguments in AI risk goes something like:
- AI is likely to be a utility maximizer (or goal-directed in some other sense)
- Goodhart, instrumental convergence, etc make powerful goal-directed agents dangerous by default
One common answer to this is "ok, how about we make AI which isn't goal-directed?"
Unconscious Economics says: selection effects will often create the same effect as goal-directedness, even if we're trying to build a non-goal-directed AI.
Discussions around CAIS [LW · GW] are one obvious application. Paul's "you get what you measure" failure-mode [LW · GW] is another. A less-obvious application which I've personally run into [LW · GW] recently: one strategy to deal with inner optimizers [LW · GW] is to design learning algorithms which specifically avoid regions of parameter space in which the trained system will perform optimization. The Unconscious Economics argument says that this won't actually avoid the risk: selection effects from the outer optimizer will push the trained system to misbehave in exactly the same ways, even without an inner optimizer.
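A toy illustration of that last point (everything here is invented for illustration, not a model of any real training setup): each candidate "policy" below is an inert pair of numbers with no optimiser inside it, yet an outer loop that keeps whatever scores best on a proxy metric still selects hard for gaming the proxy.

```python
import random

random.seed(0)

# A "policy" is just two fixed numbers: effort on the real task, and
# effort spent gaming the metric. Nothing inside a policy optimises.
def random_policy():
    return {"task": random.random(), "gaming": random.random()}

def proxy_score(p):  # what the outer optimiser measures
    return p["task"] + p["gaming"]

def true_value(p):   # what we actually wanted
    return p["task"]

population = [random_policy() for _ in range(200)]
for _ in range(2000):
    # Outer loop: replace the worst-measuring policy with a mutated copy
    # of the best-measuring one.
    population.sort(key=proxy_score)
    best = population[-1]
    population[0] = {k: min(1.0, max(0.0, v + random.gauss(0, 0.05)))
                     for k, v in best.items()}

top = max(population, key=proxy_score)
print(f"task effort: {top['task']:.2f}, metric gaming: {top['gaming']:.2f}, "
      f"true value: {true_value(top):.2f}")
# "gaming" is pushed toward 1.0 even though no policy contains an
# optimiser: the goal-directedness lives in the selection pressure.
```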
Connection to the Economics Literature
During the past year I've found and read a bit more of the formal economics literature related to selection-effect-driven economics.
The most notable work seems to be Nelson and Winter's "An Evolutionary Theory of Economic Change", from 1982. It was a book-length attempt to provide a mathematical foundation for microeconomics grounded in selection effects, rather than assuming utility-maximizing agents from the get-go. Reading through that book, it's pretty clear why the perspective hasn't taken over economics: Nelson and Winter's models are not very good. Some of the larger shortcomings:
- They limit themselves to competition between firms, and their models contain details which limit their generalization to other kinds of agents
- They use a "static" notion of equilibrium (i.e. all agents are individually unchanging), rather than a "dynamic" notion (i.e. distribution of agents is unchanging)
- They seem to lack the mathematical skills to prove properties of reasonably general models; instead they rely heavily on simulation
I do not see any of these problems as substantial barriers to a selection-based theory; it's just that Nelson and Winter did not have the mathematical chops to make it happen, and nobody better seems to have come along since.
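For what it's worth, here is a deliberately simple sketch of the static/dynamic distinction (the two firm types and their exit rates are made up; this is not a reconstruction of Nelson and Winter's actual models): individual firms churn forever, yet the population distribution settles down, which is the "dynamic" notion of equilibrium.

```python
import random

random.seed(0)

# Two firm types with different exit rates (made-up numbers).
EXIT_RATE = {"lean": 0.05, "bloated": 0.15}
firms = ["lean"] * 50 + ["bloated"] * 50

for year in range(200):
    survivors = [f for f in firms if random.random() > EXIT_RATE[f]]
    # Every exit is replaced by an entrant imitating a random survivor,
    # so individual firms keep churning even after the mix stabilises.
    entrants = [random.choice(survivors)
                for _ in range(len(firms) - len(survivors))]
    firms = survivors + entrants

print(f"lean firms after 200 years: {firms.count('lean')}/100")
# No individual firm is at rest (entry and exit never stop), but the
# *distribution* converges -- here, toward nearly all "lean".
```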
comment by a gently pricked vein (strangepoop) · 2019-02-27T15:10:28.938Z · LW(p) · GW(p)
It's worth noting that David Friedman's Price Theory clearly states this in the very first chapter, just three paragraphs down:
The second half of the assumption, that people tend to find the correct way to achieve their objectives, is called rationality. This term is somewhat deceptive, since it suggests that the way in which people find the correct way to achieve their objectives is by rational analysis--analyzing evidence, using formal logic to deduce conclusions from assumptions, and so forth. No such assumption about how people find the correct means to achieve their ends is necessary.
One can imagine a variety of other explanations for rational behavior. To take a trivial example, most of our objectives require that we eat occasionally, so as not to die of hunger (exception--if my objective is to be fertilizer). Whether or not people have deduced this fact by logical analysis, those who do not choose to eat are not around to have their behavior analyzed by economists. More generally, evolution may produce people (and other animals) who behave rationally without knowing why. The same result may be produced by a process of trial and error; if you walk to work every day, you may by experiment find the shortest route even if you do not know enough geometry to calculate it. Rationality in this sense does not necessarily require thought. In the final section of this chapter, I give two examples of things that have no minds and yet exhibit rationality.
I don't think it counts as a standard textbook, but it is meant to be a textbook.
On the whole, I think it's perfectly okay for economists to mostly ignore how equilibria are achieved, since, like you pointed out, there are so many juicy results popping out from just the fact that they are achieved on average.
Also, I enjoyed the examples in your post!
↑ comment by jacobjacob · 2019-02-27T17:34:05.450Z · LW(p) · GW(p)
I really appreciate you citing that.
I should have made it clearer, but for reference, the works I've been exposed to:
- Hal Varian's undergrad textbook
- Marginal Revolution University
- Some amount of listening to Econ Talk, reading Investopedia and Wikipedia articles
- MSc degree at LSE
↑ comment by Davidmanheim · 2020-12-25T08:37:21.798Z · LW(p) · GW(p)
I'll note that in my policy PhD program, the points made in the post were well-understood, and even discussed briefly as background in econ classes - despite the fact that we covered a lot of the same econ material you did. That mostly goes to show the difference between policy and "pure" econ, but still.
comment by johnswentworth · 2019-02-27T17:05:54.511Z · LW(p) · GW(p)
Thanks for writing this. Multiple times I've looked for a compact, self-contained explanation of this idea, thinking "surely it's common knowledge within econ?".
↑ comment by jacobjacob · 2019-02-27T17:38:20.638Z · LW(p) · GW(p)
I found myself in a situation like: "if this is common knowledge within econ, writing an explanation would signal I'm not part of econ and hence my econ opinions are low status", but decided to go ahead anyway.
It's good you found it helpful. I'm wondering if equilibria like the above are a mechanism preventing important stuff from being distilled.
↑ comment by ryan_b · 2019-02-28T16:20:30.421Z · LW(p) · GW(p)
writing an explanation would signal I'm not part of econ and hence my econ opinions are low status
I think this is the strongest possible argument for writing something.
A. I don't always know the true status of my opinions on a subject; there's no faster way to get such information than to voice the opinion and collect corrections. I routinely used this trick in school on professors, and it works on most any other kind of expert: if you can't get their attention, make a slightly wrong assertion on purpose, and they will hurry to correct you. Note: it does not usually work on people with applied expertise in finance, which is irritating but the reasons are obvious upon reflection.
B. There's a lot of value in different explanations of the same phenomena. After all, if the common knowledge within econ was sufficiently explanatory, it would be common knowledge for us all already and you would have had no doubts. I find it helps a lot to have multiple people/groups pointing at the same thing, so I can mentally triangulate.
comment by Ben Pace (Benito) · 2020-12-05T03:57:44.991Z · LW(p) · GW(p)
I think it's a common notion that if you were just good enough you wouldn't respond to incentives. I used to think it more myself. It's a key element to realize that the system will still create these outcomes even if nobody is consciously choosing to fall prey to them, and that to 'avoid incentives' in a system like that you'd need to actually model what the incentives are and what outcome they're systematically choosing, via selection effects, by correctly optimizing according to feedback in a complex domain, and so on. One cannot have model-free integrity.
So it feels to me like a very fundamental insight, written up well. (It's also related I think to some of the discussion about lying vs unconscious bias between Scott and Zack and Jessica and so on.)
comment by Pablo (Pablo_Stafforini) · 2019-03-01T17:52:56.872Z · LW(p) · GW(p)
Here’s an insight I had about how incentives work in practice, that I’ve not seen explained in an econ textbook/course.
There are at least three ways in which incentives affect behaviour: 1) via consciously motivating agents, 2) via unconsciously reinforcing certain behaviour, and 3) via selection effects. I think perhaps 2) and probably 3) are more important, but much less talked about.
Jon Elster distinguishes these three different ways in Explaining Social Behavior. He first draws a distinction between 1-2 ("reinforcement") on the one hand, and 3 ("selection"), on the other. He then draws a further distinction between 1 ("conscious rational choice") and 2 ("unintentional choice"). Here are the relevant excerpts from ch. 11 (emphasis in the original; I have added numbers in square brackets to make the correspondence between your distinctions and his more conspicuous):
In this chapter, I discuss explanations of actions in terms of their objective consequences... There are two main ways in which this can happen: by reinforcement [1-2] and by selection [3]... If the consequences of given behavior are pleasant or rewarding, we tend to engage in it more often; if they are unpleasant or punishing it will occur less often. The underlying mechanism could simply be conscious rational choice [1], if we notice the pleasant or unpleasant consequences and decide to act in the future so as to repeat or avoid repeating the experience. Often, however, the reinforcement can happen without intentional choice [2].
comment by johnswentworth · 2020-12-02T19:04:31.627Z · LW(p) · GW(p)
In order to apply economic reasoning in the real world, this is an indispensable concept, and this post is my go-to link for it.
comment by habryka (habryka4) · 2019-03-24T03:46:32.068Z · LW(p) · GW(p)
Promoted to curated: I think this post summarized a pretty core insight that I've seen implicitly referenced in a lot of different posts and discussion over the years, and I think that in itself is a very valuable service.
I also think this post got less exposure than I would like, and that one of the benefits of curation is to highlight very good posts that I think were initially overlooked by readers on LessWrong.
I think the biggest change I would make is also the reason why it got less attention than I think it should have: the degree to which the post starts with an abstract point, without really grounding it in motivation or examples. I think had this post started with one or multiple concrete examples of the phenomenon it is trying to explain, instead of three examples of the precise phenomenon that it is trying to contrast with, I would have been less confused on first reading this post, and I expect others would have been too.
I was also particularly appreciative of the discussion on this post, which raised some valuable points and gave some important references, that made this post a lot more valuable to me. Which I think is also a good reason to curate something.
comment by CarlShulman · 2019-03-28T02:40:26.775Z · LW(p) · GW(p)
There is a literature on firm productivity showing large firm variation in productivity, and average productivity growth by expansion of productive firms relative to less productive firms. E.g. this, this, this, and this.
↑ comment by ESRogs · 2019-03-28T17:42:56.240Z · LW(p) · GW(p)
I'm not totally sure I'm parsing this sentence correctly. Just to clarify, "large firm variation in productivity" means "large variation in the productivity of firms" rather than "variation in the productivity of large firms", right?
Also, the second part is saying that on average there is productivity growth across firms, because the productive firms expand more than the less productive firms, yes?
comment by PeterMcCluskey · 2019-02-27T16:36:17.277Z · LW(p) · GW(p)
Economists usually treat minds as black boxes. That seems to help them develop their models, maybe via helping them to ignore issues such as "I'd feel embarrassed if my mind worked that way".
There doesn't seem to be much incentive for textbooks to improve their effectiveness at persuading the marginal student. The incentives might even be backwards, as becoming a good economist almost requires thinking in ways that seem odd to the average student.
comment by jacobjacob · 2021-01-11T20:05:23.114Z · LW(p) · GW(p)
Author here: I think this post could use a bunch of improvements. It spends a bunch of time on tangential things (e.g. the discussion of Inadequacy and why this doesn't come through in textbooks, spending a while initially setting up a view to then tear down).
But really what would be nice is to have it do a much better job at delivering the core insight. This is currently just done in two bullets + one exercise for the reader.
Even more important would be to include JenniferRM's comment which adds a core mechanism (something like "cultural learning").
Overall, though, I still stand by the importance of the underlying concept; and think it's a crucial part of the toolkit required to apply economic thinking in practice.
comment by Zvi · 2021-01-11T19:26:51.511Z · LW(p) · GW(p)
This points out something true and important that is often not noticed, and definitely is under-considered. That seems very good. The question I ask is, did this cause other people to realize this effect exists, and to remember to notice and think about it more? I don't know either way.
If so, it's an important post, and I'd be moderately excited to include it.
If not, it's not worth the space.
I'm guessing this post could be improved/sharpened relatively easily, if it did get included - it's good, and there's nothing wrong exactly, but feels like it could use some tinkering.
The nominations cite different places than where I would be excited, which is a sign the post is indeed doing work, but I find it interesting that the most remembered takeaway is something like "if people ignore the incentives, the incentives don't agree to ignore you", and the implication that this is a 'why we all can't ignore the incentives', which I think is misplaced but mostly a distinct argument?
↑ comment by Raemon · 2021-01-11T19:38:33.073Z · LW(p) · GW(p)
FYI this post came up in discussions of It's Not The Incentives, It's You (as something that might be the main driver for "what's up with Academia?")
↑ comment by Rohin Shah (rohinmshah) · 2021-01-13T17:43:54.773Z · LW(p) · GW(p)
Note the idea would have come up anyway; this is a pretty core belief of mine, and I believe I was the one to bring it up on the academia post (after which someone linked this post).
Well, to be more accurate, I actually believe a stronger claim than the one in this post, that focuses on imitation as the mechanism for unconscious incentive-following. (Though it looks like comments later made this connection, under the title "cultural learning".)
(I still think this is a good post and will be voting for inclusion; just wanted to clarify impact)
comment by johnswentworth · 2020-02-23T18:42:39.443Z · LW(p) · GW(p)
Just stumbled on an old econ paper specifically about this topic: Alchian's "Uncertainty, Evolution and Economic Theory". I came to it via this essay on Alchian, which mentions that a lot of Alchian's material was just standard thinking from the pre-WWII era which was forgotten by the more macro-focused Keynesian economics paradigm (which dominated from roughly WWII until the Lucas Critique). Through that lens, it's kind of surprising that people haven't revisited economic selection pressure much in the context of modern microfoundations - most current work on microfoundations still assumes rational utility-maximizing agents. It wouldn't surprise me if there's some interesting theory to be discovered there, although it would probably require building a very different mathematical foundation from what's currently used.
comment by ryan_b · 2019-02-27T15:25:53.578Z · LW(p) · GW(p)
Interesting. I had always assumed that the focus in economics was on 1 because 2 and 3 aren't particularly controllable; it is very easy to say "I want X, and will pay $Y" but it is much harder to impose an unconscious reinforcement regime or a selection effect.
comment by ChristianKl · 2019-03-28T16:42:59.468Z · LW(p) · GW(p)
When content creators get paid for the number of views their videos have, those whose natural way of writing titles is a bit more clickbait-y will tend to get more views, and so over time accumulate more influence and social capital in the YouTube community, which makes it harder for less clickbait-y content producers to compete.
It's not that simple. It depends a lot on what YouTube decides to reward. YouTube is free to tune its algorithms to reward or punish clickbaity titles by determining how often they get shown as recommended videos.
YouTube likely measures metrics such as the percentage of visitors who watch the whole video, like it, or leave a comment, all of which might be negatively affected by misleading headlines.
↑ comment by jacobjacob · 2019-03-28T16:45:21.285Z · LW(p) · GW(p)
Thanks for pointing that out, the mention of YouTube might be misleading. Overall this should be read as a first-principles argument, rather than an empirical claim about YouTube in particular.
comment by [deleted] · 2019-03-26T14:31:19.851Z · LW(p) · GW(p)
I have a different hypothesis for the "people aren't like that!" response. It's about signalling high status in order to be given high status. If I claim that "people aren't bad where I come from", it signals that I'm somehow not used to being treated badly, which is evidence that I'm not treated badly, which is evidence that mechanisms for preventing bad behavior are already in place.
This isn't just a random idea, this is introspectively the reason that I keep insisting that people really aren't bad. It's a sermon. An invitation to good people and a threat to bad ones.
The one who gets bullied is the one that openly behaves like they're already being bullied.
comment by TheWakalix · 2019-02-28T04:44:27.392Z · LW(p) · GW(p)
When content creators get paid for the number of views their videos have, those whose natural way of writing titles is a bit more clickbait-y will tend to get more views, and so over time accumulate more influence and social capital in the YouTube community, which makes it harder for less clickbait-y content producers to compete.
Wouldn't this be the case regardless of whether clickbait is profitable?
↑ comment by Vaniver · 2019-02-28T05:45:20.019Z · LW(p) · GW(p)
If instead you had to pay for every view (as in environments where views are costly to provide, such as interviewing candidates for a job), then you would do the opposite of clickbait, attempting to get people to not 'click on your content'. (Or rather, people who didn't attempt to get their audience to self-screen would lose out, because of those costs, to those who did.)
↑ comment by TheWakalix · 2019-03-25T00:38:32.383Z · LW(p) · GW(p)
I agree that there's a monetary incentive for more people to write clickbait, but the mechanism the post described was "naturally clickbaity people will get more views and thus more power," and that doesn't seem to involve money at all.
↑ comment by jacobjacob · 2019-03-25T21:58:37.606Z · LW(p) · GW(p)
Good point, there's selection pressure for things which happen to try harder to be selected for ("click me! I'm a link!"), regardless of whether they are profitable. But this is not the only pressure, and depending on what happens to a thing when it is "selected" (viewed, interviewed, etc.) this pressure can be amplified (as in OP) or countered (as in Vaniver's comment).