On the concept of "talent-constrained" organizations
post by VipulNaik · 2014-03-14T16:42:11.713Z · LW · GW · Legacy · 28 comments
Some people have claimed that organizations are often talent-constrained. In other words, they're not short on money, but they're not able to find talented people. Specifically, some people, such as biomedical researcher John Todd, have claimed they'd turn down large amounts of additional money in order to be able to hire superstars. Others have claimed that the effective altruism movement is talent-constrained.
I'll use talent-constrained in the following sense: an organization is talent-constrained if it's willing to turn down a substantial amount of additional money in order to be able to hire a superstar, and that additional money it's willing to turn down is enough to hire several people at the current salaries they offer.
My first reaction to claims of talent constraint is: why don't these organizations bid up the price of talent to the levels they claim they're willing to forgo in order to hire it? There could be many possible answers. I'll explore the most salient ones here.
1. Talent constraint because of cash constraint
Some organizations are cash-constrained, so the ways they would use additional cash at the margin differ significantly from the ways they can reallocate existing cash. The fact that they'd be willing to forgo huge amounts of additional money in order to hire new talent therefore doesn't necessarily mean that they can reallocate existing money to bid for superstar talent.
While I agree that this is a common situation, particularly for small organizations, I don't think that talent-constrained is the right description of this situation.
2. Genuine absence of talented people
In some cases, the talented people the organization needs are genuinely very rare. It may simply be that the organization hasn't been approached by anyone impressive enough to hire at a high wage. However, I don't find this explanation very convincing.
People decide whether or not to approach organizations based partly on publicly available information about how much those organizations pay. If the organizations in question don't pay most of their workers high salaries (presumably because these workers aren't superstars), then superstars considering whether to apply may believe they too won't be paid high salaries, and hence may not bother to apply. If organizations care enough about hiring superstars, they need to proactively indicate in their hiring advertisements that they are willing to pay large amounts for superstars.
If no talented people who fit the description the organization needs truly exist, then the real problem is that the organization is simply engaging in wishful thinking. Calling it "talent-constrained" is misleading, because it bemoans the absence of an option that was never available. (The concept of talent constraint may still make sense at a broader societal level: perhaps more people need to train in relevant fields when they are younger, or perhaps licensing or migration restrictions are preventing talented people from being hired.)
3. Talented people would or should be willing to work for low pay
The claim here is that one of the characteristics that defines genuinely talented people is a strong intrinsic motivation to work very hard. Those who are willing to work only in exchange for stellar pay are unlikely to be good cultural fits for the job, and are unlikely to be retained in the field.
This type of explanation may make sense in some cases. For instance, it arguably works in principle for effective altruism organizations: they want to hire people who are genuinely passionate about effective altruism. Demanding a high salary as a precondition of being employed is a negative signal and suggests one is more interested in personal gain than in altruism.
4. Workplace egalitarianism and morale
Significant disparities in the amounts that different people in an organization are paid can be bad for morale. Therefore, even if there are a few highly talented people whose marginal contribution would command high salaries, paying them more would either create workplace friction due to income disparities, or force employers to raise everybody's salaries to a higher level. Neither of these may pass a cost-benefit analysis.
5. Irrationality of funders
The most uncharitable explanation is that employers and their funders are simply irrational. In this view, they have an intrinsic aversion to paying people large amounts of money, and this aversion doesn't stand up to rational scrutiny. The aversion may be displayed by people running the organization, or the people funding them (which may be a larger institution with which the organization is affiliated, or rich individual and foundation donors, or a large number of small donors). For instance, an effective altruism organization that paid a salary of $300,000 to its CEO might lose the support of donors who are repelled by the huge amounts of money made. Research labs at universities may be constrained by the payscales used by the universities. They may also be bureaucratically constrained with respect to reallocating funds from equipment to salaries in order to quickly scoop up a star researcher.
Of the explanations offered, which do you think carries the most weight for specific organizations you know of that claim to be talent-constrained? Are there other explanations that I missed? What do you think of my critiques and discussion of the specific explanations?
Thanks to Jonah Sinick and Ben Todd for comments that inspired this post (I didn't run the actual post by them).
28 comments
Comments sorted by top scores.
comment by gwern · 2014-03-14T17:08:14.709Z · LW(p) · GW(p)
Another suggestion (a variant on #2 which explains why the shortage is worse than one would expect, given how many smart people there are out there): there is a shortage of reliably diagnosable talented people. Hiring requires multiple factors, of which 'having talent' is only one; you must also be able to signal in some way your overall appropriateness and safety.
One of the impressions I get from descriptions of the interview process at Google & Facebook (and previously, Microsoft) is that they were more worried about hiring a flawed candidate than rejecting a talented candidate. The reasoning being that these places are 'o-ring production' sorts of places, where a single person could wreak a lot of havoc, by either commission or omission; so despite their shortage of talent, they're forced to be paranoid in their hiring process and biased towards rejection.
Mere money doesn't solve their problem: they can offer tons of money towards random candidates, but not to the ones which are visibly/reliably talented (which are a small subset of the talented).
Replies from: Richard_Kennaway, asr
↑ comment by Richard_Kennaway · 2014-03-14T22:21:22.516Z · LW(p) · GW(p)
Mere money doesn't solve their problem: they can offer tons of money towards random candidates, but not to the ones which are visibly/reliably talented (which are a small subset of the talented).
A way around that might be to make it known that big salaries are available, but not up front, only by proven merit after being given a job. Does this already happen?
Replies from: CarlShulman, None, gwern
↑ comment by CarlShulman · 2014-03-15T23:29:38.195Z · LW(p) · GW(p)
This actually seems very common in office jobs where you find many workers with million dollar salaries. Wall Street firms, strategy consultancies, and law firms all use models in which salaries expand massively with time, with high attrition along the way: the "up-or-out" model.
Even academia gives tenured positions (which have enormous value to workers) only after trial periods as postdocs and assistant professors.
Main Street corporate executives have to climb the ranks.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-03-17T20:17:03.812Z · LW(p) · GW(p)
4, 5, and 2 in that order. You might think you could bypass 2 by advertising a high enough salary, but keep in mind that merely advertising that a high salary is available gives you problems 4 and 5 immediately; and if you don't advertise a superstar salary and don't have a reputation for paying one, then you may not be approached by any talent who's both money-desiring enough, and strong enough as a talent, to force you to confront the question of whether you actually need to take on disadvantages 4 and 5 for that particular person.
This reply is based on experience.
Replies from: lukeprog, somervta
↑ comment by lukeprog · 2014-04-06T19:26:44.223Z · LW(p) · GW(p)
Based just on my experience at MIRI, I'll add another vote to "4, 5, and 2 in that order," especially if #5 includes funders and if #2 includes gwern's "shortage of reliably diagnosable talented people."
Item #4 is a pretty big deal in practice. 'Nuff said.
I've exhibited #5 throughout my tenure as CEO at MIRI, and perhaps still do. I've been repeatedly resistant to higher salaries and in retrospect I think the Board was right in two cases to be less timid than I was. Now the big worry is funders: the EA movement, in particular, may prefer martyr-ish salaries, though on that point I'm relieved to see that GiveWell's founders still make substantially more than I do.
On #2, consider MIRI's hiring of myself and Nate Soares. Neither of us are "superstars" — at least not yet; we'll try! — but we are clearly good for MIRI at the present stage, and yet I came in with no executive experience and no relevant technical background, and Nate came in with no research publications, having learned logic and model theory a few months before his hiring. There are probably other good hires out there available to MIRI but I just don't know what they look like. And of course in general, the world is not training FAI talent the way it trains, say, programming talent or finance talent. So in MIRI's case there is a pretty unusual "genuine absence of talented people."
↑ comment by somervta · 2014-03-19T01:42:53.619Z · LW(p) · GW(p)
In the context of math talent (as opposed to philosophical/reductionist/naturalist*) at MIRI?
*I'm interested in whether the talent in question is something that is already understood by academia (in the sense that being really good at math in particular ways is well understood to be a quality that is desirable by the community, but the specific type of reductionist philosophical talent that you would be looking for isn't seen that way by academic philosophy in general).
comment by owencb · 2014-03-17T16:37:05.498Z · LW(p) · GW(p)
Thanks for the exploration of the issue.
I spent some time thinking about this question a while ago. My general conclusion was that some version of your factor (4) is doing a lot of the work; I then investigated how this leads to a meaningful distinction between funding constraint and talent constraint. I've just shared my notes here (they were framed for CEA, but should be generally applicable).
The general argument behind expecting (4) to be a big factor doesn't rely on 'fairness' or 'morale'. It can arise even for totally self-interested rational agents. It goes something like this:
- Employing someone is a trade. There is a maximum salary you'd pay for their labour, and a minimum they'd accept. You end up paying them something in the middle, and the trade surplus is split between you.
- Individuals are better able to hide their preferences than larger organisations. If the organisation is known to be paying $X to person Y for their labour, similarly qualified people are likely to ask for salaries closer to $X, in the knowledge that the organisation is happy to pay this rate.
- So paying high salaries to some people shifts the balance of power in salary negotiations in favour of other employees. The employer will capture less of the trade surplus of employment in those cases.
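To make the mechanism concrete, here is a minimal Python sketch of the bargaining argument above. All the numbers, and the assumption that the surplus splits 50/50, are illustrative; nothing here comes from the original comment.

```python
# Illustrative sketch of the salary-negotiation argument above.
# Assumptions (mine, not the comment's): the hire is worth `value_to_org`
# per year to the employer, the candidate has a private reservation salary,
# and the negotiated salary splits the surplus evenly -- except that a
# publicly known peer salary raises the candidate's effective reservation.

def negotiated_salary(value_to_org, reservation, public_anchor=None):
    """Salary lands midway between the candidate's floor and the employer's ceiling."""
    if public_anchor is not None:
        reservation = max(reservation, public_anchor)
    reservation = min(reservation, value_to_org)  # nobody is paid more than they're worth
    return (reservation + value_to_org) / 2

VALUE_TO_ORG = 200_000  # hypothetical value of the hire to the organisation
RESERVATION = 80_000    # hypothetical minimum the candidate would accept

# No public superstar salary: the employer keeps a large share of the surplus.
print(negotiated_salary(VALUE_TO_ORG, RESERVATION))                         # 140000.0
# A publicised $150k superstar salary anchors later negotiations upward,
# so the employer captures less of the surplus on every similar hire.
print(negotiated_salary(VALUE_TO_ORG, RESERVATION, public_anchor=150_000))  # 175000.0
```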
↑ comment by VipulNaik · 2014-03-17T18:40:24.500Z · LW(p) · GW(p)
Thanks for the linked write-up. I think that provides a good theoretical framework for the issue. And maybe you can do a LessWrong post based on your write-up -- that should get more attention, and I'm eager to see what others think of your framing.
comment by CronoDAS · 2014-03-15T09:36:44.250Z · LW(p) · GW(p)
Let's get a little more specific here.
Can anyone here name one currently living individual that MIRI would like to hire away from their current position to work on Friendly AI research, if money were no object? Terence Tao, perhaps? Do you think he would leave his current position as a university professor if you could offer him, say, a ten million dollar annual salary?
Replies from: lukeprog
↑ comment by lukeprog · 2014-04-07T03:50:20.793Z · LW(p) · GW(p)
To do research, someone's got to have some actual interest in the problem space, or they'll end up fiddling around and doing stuff that's good for their interests or their long-term career but not necessarily for what their employer wants. So I don't know who has the capacity to acquire that interest. Tao would be good if he acquired an interest in the subject but I don't know if he could. Gowers at least commented on Baez's summary of the earlier Christiano result, but a short G+ comment isn't that much evidence. I don't currently know of any math superstars who want to work on FAI theory but only for a high salary — if I did, and I thought it would be a good hire, I'd reach out to MIRI's donors and try to solicit targeted donations for the hire.
Replies from: Adele_L, XiXiDu
↑ comment by Adele_L · 2014-04-07T05:45:23.943Z · LW(p) · GW(p)
Vladimir Voevodsky is a math superstar who plausibly could acquire such an interest.
Here is a summary of a recent talk he gave. After winning the Fields Medal in 2002 for his work on motivic cohomology, he felt he was out of big ideas in that field. So he "decided to be rational in choosing what to do" and asked himself “What would be the most important thing I could do for math at this period of development and such that I could use my skills and resources to be helpful?” His first idea was to establish more connections between pure and applied mathematics. He worked on that for two years, and "totally failed." His second idea was to develop tools/software to help mathematicians check their proofs. There had already been lots of work on this subject, and several different software systems for this purpose already existed. So he looked at the existing software. He found that either he could understand a system and see that it wasn't what he wanted, or it just didn't make any sense to him. "There was something obviously missing in the understanding of those." So he took a course at Princeton University on programming languages using the proof assistant Coq. Halfway through the course, he suddenly realized that Martin-Löf types could essentially be interpreted as homotopy types. This led to a community of mathematicians who developed Homotopy Type Theory/Univalent Foundations with him, which is a completely new and self-contained foundation of mathematics.
Andrej Bauer, one of the Homotopy Type theorists, has said "We've already learned the lesson that we don't know how to program computers so they will have original mathematical ideas, maybe some day it will happen, but right now we know how to cooperate with computers. My expectation is that all these separate, limited AI success, like driving a car and playing chess, will eventually converge back, and then we're going to get computers that are really very powerful." Plausibly, Voevodsky himself also has some interest in AI.
So here is a mathematician with:
- a solid track record of solving very difficult problems, and coming up with creative new insights.
- good efforts to make rational decisions about what sort of mathematics he does, yielding an interest in and willingness to completely switch fields if he thinks he can do more important things there.
- an ability to solve practical problems using very abstract mathematics.
I think it would be worth trying to get him interested in FAI problems.
comment by Stefan_Schubert · 2014-03-14T21:56:49.242Z · LW(p) · GW(p)
I think there is a lot to #5. This is, as you hint at, connected with egalitarianism. There is a natural tendency to want to pay people who do the same job roughly the same, and only under strong market pressure will the income of a talented person match the value he provides to the employer. In academia, where I work, it is blatantly obvious that some people contribute vastly more than others, but they are still paid roughly the same. Only in the more competitive American system are wage spreads starting to reflect differences in output. No doubt this is a development that will continue.
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2014-03-15T22:07:29.809Z · LW(p) · GW(p)
#5 + hypocrisy. The employers may be saying "we are offering huge money and the talented people still aren't coming" when their offer actually may not seem like "huge money" to the people who have the necessary skills.
Some changes in life are not reversible. Imagine that you are a talented person and you already have a decent job and make decent money. Would you change it for another job just because it offers you 10% more? I probably wouldn't, because you never know, the new job may actually suck, and returning to the old place may no longer be an option. But for twice the salary, I would take the risk. But the employer might think that's too much, and that their +10% option already is very generous. (In such case maybe the solution would be to offer 10% more, plus a one-time huge extra bonus for staying in the new job for 6 months.)
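A rough expected-value sketch of that intuition, in Python; the probability of the new job going badly and the salary-equivalent values are made-up numbers for illustration, not anything the commenter stated.

```python
# Back-of-the-envelope model of the irreversible job switch described above.
# Assumed numbers: the current job is worth 1.0 (salary-equivalent) per year,
# there is a 30% chance the new job "sucks", and a bad job is only worth 60%
# of its nominal pay once the lost enjoyment is priced in.

P_NEW_JOB_SUCKS = 0.3
BAD_JOB_DISCOUNT = 0.6

def expected_value_of_switch(raise_fraction):
    """Expected yearly value of switching, relative to 1.0 for staying put."""
    pay = 1.0 + raise_fraction
    return (1 - P_NEW_JOB_SUCKS) * pay + P_NEW_JOB_SUCKS * pay * BAD_JOB_DISCOUNT

print(expected_value_of_switch(0.10))  # ~0.97 -- a 10% raise doesn't beat staying at 1.0
print(expected_value_of_switch(1.00))  # ~1.76 -- doubling the salary clearly covers the risk
```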
Another possibility, a bit similar to #3, but not exactly the same -- the talented people may be motivated by things other than money; maybe they already have all the money they need (assuming they aren't effective altruists). They now optimize for other things. To attract them, you would have to offer some of those other things, e.g. shorter working hours, more freedom, etc.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2014-03-16T14:30:32.220Z · LW(p) · GW(p)
Imagine that you are a talented person and you already have a decent job and make decent money. Would you change it for another job just because it offers you 10% more? I probably wouldn't, because you never know, the new job may actually suck, and returning to the old place may no longer be an option.
In other words, the labour market resembles an oligopsony much more than one would guess by looking at the total number of employers alone.
Replies from: Lumifer
↑ comment by Lumifer · 2014-03-16T18:08:56.487Z · LW(p) · GW(p)
I think it's a point about risk aversion, not about the structure of the labour market.
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2014-03-16T20:20:00.325Z · LW(p) · GW(p)
I think there are multiple causes.
People are risk-averse. But even if they weren't, changes usually have transaction costs. For example, if people have to move from one city to another, that costs something. Also, it means that their partner might have to change their job to stay together.
If you want to make a lot of money, you have to specialize in something. That naturally reduces the number of potential employers. You can switch to doing something different, but again, there are transaction costs.
Replies from: Lumifer
↑ comment by Lumifer · 2014-03-16T23:52:43.117Z · LW(p) · GW(p)
All true. And, again, all of that doesn't have much to do with whether the labour market is an oligopsony.
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2014-03-18T10:03:05.201Z · LW(p) · GW(p)
Seems to me that with enough specialization there are few buyers and few sellers. Which of these numbers is smaller probably depends on specific specialization, and may change over time.
comment by CronoDAS · 2014-03-15T09:41:43.383Z · LW(p) · GW(p)
I hate to use a group selection argument, but maybe people think that trying to poach top talent by offering higher salaries will just get them into a bidding war that would be bad for everyone but the people being bid on?
Replies from: leplen
↑ comment by leplen · 2014-03-16T23:02:21.165Z · LW(p) · GW(p)
This isn't really a group selection argument, and it's definitely something that happens.