Replacing expensive costly signals

post by KatjaGrace · 2018-02-17T00:50:00.500Z · LW · GW · 13 comments

I feel like there is a general problem where people signal something using some extremely socially destructive method, and we can conceive of more socially efficient ways to send the same signal, but anyone who tries an alternative signal invites the suspicion that they would have been especially bad at the traditional one. For instance, an employer might reasonably suspect that a job candidate who did a strange online course instead of normal university would have done especially badly at normal university.

Here is a proposed solution. Let X be the traditional signal, Y be the new signal, and Z be the trait(s) being advertised by both. Let people continue doing X, but subsidize Y on top of X for people with very high Z. Soon Y is a signal of higher Z than X is, and understood by the recipients of the signals to be a better indicator. People who can’t afford to do both should then prefer Y to X, since Y is a stronger signal, and since it is more socially efficient it is likely to be less costly for the signal senders.

If Y is intrinsically no better a signal than X (without your artificially subsidizing great Z-possessors to send it), then in the long run Y might only end up as strong a signal as X, but in the process, many should have moved to using Y instead.

(A possible downside is that people may end up just doing both forever.)
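As a toy illustration of the mechanism (this sketch is mine, not from the post; the trait distribution, thresholds, and subsidy rule are all illustrative assumptions), consider agents with a latent trait Z, where everyone above some ability threshold sends X, and only the very highest-Z agents are subsidized to send Y on top of X:

```python
import random

random.seed(0)

# Toy model (illustrative assumptions, not from the post): agents have a
# latent trait Z; everyone with Z above a threshold sends the traditional
# signal X. We subsidize the highest-Z agents to also send Y, then ask
# what an employer should infer from seeing each signal.

N = 10_000
agents = [random.gauss(0, 1) for _ in range(N)]  # Z for each agent

X_THRESHOLD = 0.0      # assumed: anyone with Z > 0 can afford signal X
SUBSIDY_CUTOFF = 1.5   # assumed: only very high-Z agents get Y subsidized

senders_X = [z for z in agents if z > X_THRESHOLD]
senders_Y = [z for z in senders_X if z > SUBSIDY_CUTOFF]  # Y on top of X

def mean(xs):
    return sum(xs) / len(xs)

# The employer's inference: expected Z conditional on observing each signal.
print(f"E[Z | sent X]: {mean(senders_X):.2f}")
print(f"E[Z | sent Y]: {mean(senders_Y):.2f}")
# Y now indicates higher Z than X does, so once recipients learn this,
# marginal senders should prefer the cheaper signal Y.
```

The key step is the employer's update: once the expected Z conditional on Y is observed to exceed the expected Z conditional on X, the marginal sender's incentive flips toward the cheaper signal.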

For example, if you developed a psychometric and intellectual test that only took half a day and predicted very well how someone would do in an MIT undergraduate degree, you could run it for a while on people who actually do MIT undergraduate degrees, offering prizes for high performance, or just subsidizing taking it at all. After the best MIT graduates have said on their CVs for a while that they also did well on this thing and got a prize, it is hopefully an established metric, and an employer would as happily hire someone with a great result on your test as someone with the degree. At which point an impressive and ambitious high school leaver would take the test, modulo e.g. concerns that the test doesn’t let you hang out with other MIT undergraduates for four years.

I don’t know if this is the kind of problem people actually have with replacing apparently wasteful signaling systems with better things. Or if this doesn’t actually work after thinking about it for more than an hour. But just in case.

13 comments


comment by Qiaochu_Yuan · 2018-02-17T02:25:15.673Z · LW(p) · GW(p)

This doesn't seem to work if the package of signaled traits Z includes conformity, which is e.g. part of the package of traits being signaled by college, and college is probably not unique in having this property. Doing anything other than college is nonconformist, so anyone you can persuade to do it is failing to signal conformity and is actually not maximally conformist. It is really, really hard to create new signals of conformity.

Bryan Caplan explicitly makes the point that college signaling conformity is important in The Case Against Education and it made me realize that I had been really neglecting it as a component of college's signaling package (along with intelligence, which I had previously focused most of my attention on, and conscientiousness).

Edit: Riffing off of Benquo's comment, if you think conformity's just in fact not that important a signal to look for when e.g. hiring talented people, what this suggests is not that you try to create a cheaper signal than college but that you start companies that hire nonconformists, and loudly signal this fact. There's an open question of to what extent such companies would actually be successful, though. Maybe nonconformists work poorly in teams or something.

comment by Benquo · 2018-02-17T02:17:41.586Z · LW(p) · GW(p)

There’s a common narrative, in contemporary American culture, of a bunch of misfits banding together to outcompete the legitimate but wasteful incumbents. Think of the protagonist’s scrappy training in Rocky or The Karate Kid (vs the wealthy antagonist’s fancy training regime), or even in Twister, where the bad-guy tornado scientists drive fancy new black SUVs, while the good-guy tornado scientists drive a beat-up old pickup truck and get their tornado-scanners to fly by cutting up old aluminum cans for wings. Or Star Wars, where the Empire is glossy and standardized and uses gigantic powerful ships and the Rebels are diverse and quirky and just sort of doing the thing. Or Moneyball, especially analogous to college admissions, where the bad guys use expensive talent scouts and money, and the good guys have a small amount of money (for a baseball team) and one guy who knows some statistics.

The trope currently seems to be in the process of being ground into dust by postmodern marketers with no taste, but it was originally a coherent thing, and seems to be talking about the sort of problem you’re discussing. This implies a strategic narrative where the normative response to entrenched rent-extracting meritocracies is not to try to win acceptance by them / change minds, but organize a separate system full of misfits to actually outperform. At least, according to the poets and bards of our age. To some extent we can think of Abraham as having done this, as well as the various founding cultures of the US.

I think this is also the narrative of Clayton Christensen’s The Innovator’s Dilemma.

(Cross-posted this comment from Katja's blog)

comment by Daniel_Armak · 2018-02-17T20:25:52.475Z · LW(p) · GW(p)

I came up with many reasons why this approach might fail. The fact that there are so many suggests that I don't have a very good model and/or may be engaging in motivated reasoning.

In the general case, the recipients of the signals may not understand what is being signalled, or that signalling is involved at all, so they won't accept a substitute signal. E.g., most people are unaware of, or disbelieve, the claim that education serves more as signalling than as teaching. They would not hire people who were merely accepted to MIT and would have received good grades, because they think MIT teaches important job skills.

There are several other potential problems with the given example, which may not be problems with the general approach:

  1. Most employers don't want to innovate in recruiting strategies. They're already trying to innovate in R&D or product design or marketing. It makes sense to be conservative elsewhere, due to limited resources (you need good HR to execute an unconventional recruitment strategy) and to hedge risk. They will not want to be the first to adopt a new strategy unless they think it will be wildly better than the standard one. But hiring non-graduates is only better in that it saves money, and that's not usually a big enough advantage unless the company simply can't afford to hire graduates. (See: startups founded and staffed by college dropouts.)
  2. There are many potential principal-agent problems. An individual recruiter or project manager may not want to make unconventional choices because they'll be blamed personally if it turns out badly ("no-one gets fired for buying IBM"). A team or division lead may not want to hire non-grads because their peers and bosses don't understand their logic, so they'll be looked down on for having a team with more "junior" people. College graduates on the regular career track may be hostile to non-grads because they perceive them as unfairly being allowed to skip the hard work that the graduates put in.
  3. Large companies (and government agencies, etc.) often regulate the positions offered to company employees in terms of job title, compensation, job progression, and requirements like diplomas. A large company may be the best place to experiment with new, nonstandard approaches to hiring, because it can survive a small experiment failing. But it may be less able to do so in practice, because HR and related departments are vertically integrated, and a software programming team doesn't have the formal authority to create a new type of position with customized entry criteria and salary.
comment by ChristianKl · 2018-02-17T19:34:02.634Z · LW(p) · GW(p)

You not only need a test that predicts performance initially, but also one that's resistant to people optimizing for doing well on the test. As far as I understand, we lack a nongameable test for conscientiousness that can be administered in a short timeframe.

Big companies like Google test how effectively their hiring criteria predict employee success.

If you can make a good case that you have developed a superior psychometric tool for evaluating candidates, there are big employers who would want to buy it. While a startup might not have the funds to invest in great psychometric tools, big employers do, and there's a lot of money to be made by getting better at hiring.

While Google might be a forerunner in quantitatively evaluating hiring criteria, doing so should provide enough benefit that over time all the large companies will do it.

When providing new ways of credentialing, I consider it ineffective to ask, "How can we do what the status quo does?" In the status quo, companies still make many bad hiring decisions.

A better question is "How can we better predict people's performance?"

I think that Tetlock provided an answer with his work on experts in politics, where he identified prediction-making as a way to evaluate performance. I described how the same principle can work in medicine in Prediction-based Medicine.

comment by Dagon · 2018-02-21T16:57:27.389Z · LW(p) · GW(p)

This supposes that the target of signaling is aware of the game and has pretty good knowledge of the underlying traits they're looking for, and that they know the underlying traits are also desired by people who judge the judges.

There's a big coordination and switching cost, especially for real-world cases where much of the signaling value is multi-level (hiring managers wanting to impress their bosses with the "quality" of their hires, people wanting to impress their friends with the "desirability" of their mate).

Unless there's a lot of universal knowledge about what others are looking for in signals, any change is risky to the early adopters.

comment by Allen Kim · 2018-02-17T03:06:57.463Z · LW(p) · GW(p)

I think something similar happened in the case of coding bootcamps. One thing I've noticed is that some of those who invested in the old signaling method were incentivized to reject and convince others to reject the new signal. Coding is one of the skills less reliant on signaling so I imagine this would be a bigger problem in other fields.

Edit: Also riffing off Benquo's post, I think it's also quite common for good programmers who felt they were undervalued in the market to start their own startups. On the other hand, coding bootcamps also seem to have "worked" to some extent. I think in general it depends on the risks/rewards of accepting the new signal. It's easier to bear the risk yourself than to convince someone else to, but in the case of the tech industry there was enough incentive to take that chance.

Replies from: cousin_it
comment by cousin_it · 2018-02-17T14:12:13.317Z · LW(p) · GW(p)

I'd be the first to cheer if bootcamp-educated programmers were better than average (due to better selection or better teaching), but they aren't.

Replies from: Raemon
comment by Raemon · 2018-02-17T23:40:03.055Z · LW(p) · GW(p)

Are they worse than average? It seems to me they just need to be "about as good" and be dramatically cheaper.

Replies from: cousin_it
comment by cousin_it · 2018-02-18T02:57:48.979Z · LW(p) · GW(p)

They're cheaper, but not as good on average. Remember, these aren't the people who loved coding since childhood (they all got good jobs already). We're talking about people who had no special love for coding, and then went through a three month bootcamp. Also keep in mind that the quality of bootcamps varies widely. It's definitely not the kind of better signal that Katja is talking about.

Replies from: Raemon
comment by Raemon · 2018-02-18T03:20:18.025Z · LW(p) · GW(p)

Ah, gotcha.

Some confounding things in my worldview:

a) This is based on vague priors rather than empiricism, but I currently draw a strongish distinction between coding bootcamps that you pay to join, vs bootcamps that take N% of your first-year salary. The latter seem more incentivized to only take people who they are fairly confident they can help land a good job (and to actually help land them that job).

b) I still would expect people who make it through a generic pay-to-play bootcamp to be better than people with no training, all things being equal (like, it seems like they'd need to at least sort of know what an API call is, vs a rando who might literally know nothing). It makes sense if the signal isn't that reliable, but statistically it seems like a randomly selected bootcamp has a reasonably large chance of having some kind of standards for who gets to graduate.

I wouldn't be impressed with someone just because they did some random bootcamp, but if I'm sorting through a list of resumes and need to weed it down, and two people have similar-looking experience/side-projects-or-lack-thereof, but one's been to a bootcamp and one hasn't, the bootcamp one seems marginally better to elevate to "actually talk to them" status?

c) I also had been comparing bootcamps more to a 4-year degree (which maybe tests conscientiousness more, but from what I hear doesn't do much to guarantee that you have the sense of causality that you need to program).

comment by Ben Pace (Benito) · 2019-11-27T21:05:05.923Z · LW(p) · GW(p)

I'm nominating this post in conjunction with Qiaochu's comment on it [LW(p) · GW(p)]. 

This post laid out a clear, neat mechanism for replacing expensive costly signals, to be used for education, which seemed like a great innovation to me. Then Qiaochu explained that it fails if one of the signals is for conformity, a key insight (which Qiaochu got from Bryan Caplan's book on education) that has changed how I think about the education problem. I still think back on this post from time to time as having caused me to crystallise that insight.

comment by Ben Pace (Benito) · 2018-02-17T09:06:30.804Z · LW(p) · GW(p)

Promoted to frontpage.

comment by rannur · 2018-02-17T14:22:11.388Z · LW(p) · GW(p)

I'm not sure your proposed solution will work unless we assume people exclusively send signal X or Y. Whoever is subsidizing people to send signal Y needs some relatively simple process for identifying the people they want to subsidize. In the MIT degree example you gave, the process was simply selecting people who were already sending signal X, but you could also try to identify individuals who have an obviously high Z.

Since the subsidizer can identify the people who should be subsidized, so can I as an employer. If you come to me with signal Y, I will check whether you are likely to have been subsidized to send signal Y rather than signal X. If you were likely to have been subsidized, I will accept you, since you had a good incentive to choose Y over X. If you are unlikely to have been subsidized, I will assume you chose to send signal Y because you were unable to send signal X. What is going to become culturally acceptable is sending signal Y in conjunction with a signal that you were incentivized to choose Y over X. Sending signal Y by itself will continue to be seen as a fallback for those who couldn't send X. Effectively, signal Y will be split into two separate signals:

  • Signal Y+ = Signal Y and proof that you had an incentive to send Y over X.
  • Signal Y- = Signal Y, but no argument for why you chose Y over X.

Your subsidy will only increase the number of people sending Y+; employers will continue to see Y- as proof that you couldn't do X or Y+. And the moment you stop the subsidies, no one will have a reason to send signal Y+.
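To make the split concrete (a minimal sketch of my own, with the selection rule and thresholds as illustrative assumptions), suppose the employer can replicate the subsidizer's rule and condition on it:

```python
import random

random.seed(0)

# Toy model of the Y+/Y- split (illustrative assumptions throughout):
# the employer can replicate the subsidizer's selection rule, so a bare
# signal Y is read very differently depending on whether the sender
# plausibly qualified for the subsidy.

N = 10_000
agents = [random.gauss(0, 1) for _ in range(N)]  # latent trait Z

SUBSIDY_CUTOFF = 1.5  # assumed rule: only agents with Z > 1.5 get Y subsidized
X_THRESHOLD = 0.0     # assumed: agents with Z <= 0 couldn't send X at all

# Y+ senders: subsidized, verifiably met the subsidizer's criterion.
# Y- senders: sent Y with no evidence of subsidy, i.e. likely as a fallback.
y_plus = [z for z in agents if z > SUBSIDY_CUTOFF]
y_minus = [z for z in agents if z <= X_THRESHOLD]

def mean(xs):
    return sum(xs) / len(xs)

print(f"E[Z | Y+]: {mean(y_plus):.2f}")   # strong signal while subsidies run
print(f"E[Z | Y-]: {mean(y_minus):.2f}")  # still read as "couldn't do X"
```

On this model, Y only inherits strength for senders whose subsidy eligibility is itself verifiable, which is the crux of the objection.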

If you could somehow make Y+ and Y- hard to distinguish, then it could work, but I can't come up with a mechanism for accomplishing that.