Literature Review: Distributed Teams

post by Elizabeth (pktechgirl) · 2019-04-16T01:19:27.307Z · score: 96 (34 votes) · LW · GW · 33 comments

Contents

  Introduction
  How does distribution affect information flow?
  How does distribution interact with conflict?
  When are remote teams preferable?
  How to mitigate the costs of distribution
33 comments

Introduction

Context: Oliver Habryka commissioned me to study and summarize the literature on distributed teams, with the goal of improving altruistic organizations. We wanted this to be as rigorous as possible; unfortunately, the rigor ceiling was low, for reasons discussed below. To fill in the gaps, and especially to create a unified model instead of a series of isolated facts, I relied heavily on my own experience on a variety of team types (my favorite of which was an entirely remote company).

This document consists of five parts:

My overall model of worker productivity is as follows:

Highlights and embellishments:

Sources of difficulty:

How does distribution affect information flow?

“Co-location” can mean two things: actually working together side by side on the same task, or working in parallel on different tasks near each other. The former has an information bandwidth that technology cannot yet duplicate. The latter can lead to serendipitous information sharing, but also imposes costs in the form of noise pollution and siphoning brain power for social relations.

Distributed teams require information sharing processes to replace the serendipitous information sharing. These processes are less likely to be developed in teams with multiple locations (as opposed to entirely remote). Worst of all is being a lone remote worker on an otherwise co-located team; you will miss too much information, so it's feasible only occasionally, despite the fact that measured productivity tends to rise when people work from home.

I think relying on co-location over processes for information sharing is similar to relying on human memory over writing things down: much cheaper until it hits a sharp cliff. Empirically that cliff is about 30 meters, or one hallway. After that, process shines.

List of isolated facts, with attribution:

How does distribution interact with conflict?

Distribution increases conflict and reduces trust in a variety of ways.

When are remote teams preferable?

How to mitigate the costs of distribution

Cramton 2016 was an excellent summary paper that I refer to a lot in this write-up. It's not easily available online, but the author was kind enough to share a PDF with me that I can pass on.

My full notes will be published as a comment on this post.

33 comments

Comments sorted by top scores.

comment by Elizabeth (pktechgirl) · 2019-04-16T01:21:18.976Z · score: 25 (7 votes) · LW · GW

Notes have been moved to this post to save scrolling.

comment by Error · 2019-04-16T16:55:16.143Z · score: 2 (1 votes) · LW · GW

Suggestion: Attach or link these, rather than putting them inline in a comment. I like that they're available, but I had to scroll down many screens to find the actual comments.

comment by Raemon · 2019-04-30T02:12:59.068Z · score: 12 (5 votes) · LW · GW

Curated.

(It seemed important that Habryka not be the one to curate this piece, since he had commissioned it. But I independently quite liked it.)

Several things I liked about this post:

  • It told me some concrete things about remote teams. In particular:
    • the notion that you should either go "fully remote" or "not remote"
    • the notion that the benefits of co-locating drop off after a literal 30-meter radius.
  • It gave me some sense of how good the evidence on remote teams is (i.e. not very), while providing a bunch of links to follow up on if I wanted to get an even better sense.
  • LessWrong currently doesn't feel like it rewards serious scholarship as much as it should, so I'd like to generally reward it when it happens. I also think this post did a good job of combining short, easily readable takeaways with the more extensive background literature.
comment by Raemon · 2019-04-30T02:14:16.737Z · score: 15 (6 votes) · LW · GW

Object-level Musings on Peer Review

Note: the following is my personal best guesses about directions LW should go. Habryka disagrees significantly with at least some of the claims here — both on the object and meta levels.

This post also jumped out significantly as... aspiring to higher epistemic standards than the median curated post. This led me to thinking about it through the lens of peer review (which I have previously mused about).

I ultimately want LessWrong to encourage extremely high quality intellectual labor. I think the best way to go about this is through escalating positive rewards, rather than strong initial filters.

Right now our highest reward is getting into the curated section, which... just isn't actually that high a bar. We only curate posts if we think they are making a good point. But if we set the curated bar at "extremely well written and extremely epistemically rigorous and extremely useful", we would basically never be able to curate anything.

My current guess is that there should be a "higher than curated" level, and that the general expectation should be that posts should only be put in that section after getting reviewed, scrutinized, and most likely rewritten at least once. Still, there is something significant about writing a post that is at least worth considering for that level.

This post is one of a few from the past few months that I'd be interested in seeing improved to meet that level. (Another recent example is Kaj's sequence on Multi-Agent Models.)

I do think it'd involve some significant work to meet that bar. Things that I'm currently thinking of (not highly confident that any of this is the right thing, but showcasing what sort of improvements I'm imagining):

  • Someone doing some epistemic spot checks on the claims made here
  • Improving the presentation (right now it's written in a kind of bare-bones notes format)
  • Dramatically improving the notes, to be more readable
  • Improving the diagram of Elizabeth's model of productivity so it's easier to parse.
  • Orienting a bit more around the "the state of management research is shitty" issue. I think (low confidence) it would be a good practice for LessWrong that, if we review a field and find that the evidence base is very shaky, we reflect on what it would take to make the evidence less shaky. (This is beyond the scope of what Habryka originally commissioned, but feels fairly important in the context I'm thinking through here.)

Is it worth putting in all that work for this particular post? Dunno, probably not. But it seems worth periodically reflecting on where the bar should be set, comparing what LessWrong could ultimately be with what it needs to be in practice.

comment by hermanubis · 2019-04-30T02:36:37.573Z · score: 13 (4 votes) · LW · GW

What about getting money involved? Even relatively small amounts can still confer prestige better than an additional tag or homepage section. It seems like rigorous well-researched posts like this are valuable enough that crowdfunding or someone like OpenPhil or CFAR could sponsor a best-post prize to be awarded monthly. If that goes well you could add incentives for peer-review.

comment by Elo · 2019-04-30T02:55:56.317Z · score: 9 (3 votes) · LW · GW

Money might do the opposite. "I did all this work and all I got was... several dollars and cents".

comment by Said Achmiz (SaidAchmiz) · 2019-05-01T01:28:29.984Z · score: 5 (4 votes) · LW · GW

A small amount of money would do the opposite of conferring prestige; it would make the activity less prestigious than it is now.

comment by toonalfrink · 2019-05-02T12:36:54.490Z · score: 11 (4 votes) · LW · GW

My impression is that money can only lower prestige if the amount is low relative to an anchor.

For example a $3000 prize would be high prestige if it's interpreted as an award, but low prestige if it's interpreted as a salary.

comment by ioannes_shade · 2019-05-07T21:10:29.721Z · score: 0 (3 votes) · LW · GW

cf. https://en.wikipedia.org/wiki/Knuth_reward_check

comment by Said Achmiz (SaidAchmiz) · 2019-05-08T06:19:20.035Z · score: 7 (4 votes) · LW · GW

What makes this situation unusual is that being acknowledged by famous computer scientist Donald Knuth to have contributed something useful to one of his works is inherently prestigious; the check is evidence of that reward, not itself the reward. (Note that many of the checks do not even get cashed! A trophy showing that you fixed a bug in Knuth’s code is vastly more valuable than enough money to buy a plain slice of pizza.)

In contrast, Less Wrong is not prestigious. No one will be impressed to hear that you wrote a Less Wrong post. How likely do you think it is that someone who is paid some money for a well-researched LW post will, instead of claiming said money, frame the check and display it proudly?

comment by Davidmanheim · 2019-05-10T06:22:34.477Z · score: -1 (2 votes) · LW · GW

I think you're viewing intrinsic versus extrinsic reward as dichotomous rather than continuous. Knuth awards are on one end of the spectrum, salaries at large organizations are at the other. Prestige isn't binary, and there is a clear interaction between prestige and standards - raising standards can itself increase prestige, which will itself make the monetary rewards more prestigious.

comment by Elizabeth (pktechgirl) · 2019-05-10T14:59:52.273Z · score: 4 (2 votes) · LW · GW

I don't see where Said's comment implies a dichotomous view of prestige. He simply believes the gap between LessWrong and Donald Knuth is very large.

comment by Davidmanheim · 2019-05-22T09:20:13.212Z · score: -1 (2 votes) · LW · GW

Sure, but we can close the global prestige gap to some extent, and in the meantime, we can leverage in-group social prestige, as the current format implicitly does.

comment by Elizabeth (pktechgirl) · 2019-04-30T22:30:31.656Z · score: 5 (2 votes) · LW · GW
Orienting a bit more around the "the state of management research is shitty" issue

Can you say more about this? That seems like a very valuable but completely different post, which I imagine would take an order of magnitude more effort than investigation into a single area.

comment by Raemon · 2019-04-30T22:44:45.356Z · score: 5 (2 votes) · LW · GW

Yeah, there's definitely a version of this that is just a completely different post. I think Habryka had his own opinions here that might be worth sharing.

Some off the cuff thoughts:

  • Within scope for something "close to the original post", I think it'd be useful to have:
    • clearer epistemic status tags for the different claims.
      • Which claims are based on out of date research? How old is the research?
      • Which are based on shoddy research?
      • What's your credence for each claim?
    • More generally, how much stock should a startup founder place in this post? In your opinion, does the state of this research rise to the level of "you should most likely follow this post's advice?" or is it more like "eh, read this post to get a sense of what considerations might be at play but mostly rely on your own thinking?"
  • Broader scope, maybe its own entire post (although I think there's room for a "couple paragraphs" version and an "entire longterm research project" version)
    • Generally, what research do you wish had existed, that would have better informed you here?
    • Are there particular experiments or case studies that seemed (relatively) easy to replicate, that just need to be run again in the modern era with 21st-century communication tech?
comment by Elizabeth (pktechgirl) · 2019-04-30T23:14:51.906Z · score: 2 (1 votes) · LW · GW
clearer epistemic status tags for the different claims....

I find it very hard, possibly impossible, to do the things you ask in this bullet point and synthesis in the same post. If I were going to do that, it would be on a per-paper basis: for each paper, list the claims and how well supported they are.

Generally, what research do you wish had existed, that would have better informed you here?

This seems interesting and fun to write to me. It might also be worth going over my favorite studies.

comment by Raemon · 2019-04-30T23:28:35.203Z · score: 5 (2 votes) · LW · GW
I find it very hard, possibly impossible, to do the things you ask in this bullet point and synthesis in the same post

Hard because of limitations on written word / UX, or intellectual difficulties with processing that class of information in the same pass that you process the synthesis type of information?

(Re: UX – I think it'd work best if we had a functioning side-note system. In the meantime, something that I think would work is to give each claim a rough classification of "high credence, medium or low", including a link to a footnote that explains some of the details)

comment by Elizabeth (pktechgirl) · 2019-05-01T00:37:51.639Z · score: 5 (2 votes) · LW · GW

Data points from papers can either contribute directly to predictions (e.g. we measured it and gains from colocation drop off at 30m), or to forming a model that makes predictions (e.g. the diagram). Credence levels for the first kind feel fine, but like a category error for model-born predictions. It's not quite true that the model succeeds or fails as a unit, because some models are useful in some arenas and not in others, but the thing to evaluate is definitely the model, not the individual predictions.

I can see talking about what data would make me change my model and how that would change predictions, which may be isomorphic to what you're suggesting.

The UI would also be a pain.

comment by Larks · 2019-04-16T16:06:48.856Z · score: 12 (6 votes) · LW · GW

In light of this:

Build over-communication into the process.
In particular, don’t let silence carry information. Silence can be interpreted a million different ways (Cramton 2001).

Thanks for writing this! I found it very interesting, and I like the style. In particular, I hadn't properly appreciated how semi-distributed is worse than either extreme. It's disappointing to hear, but seemingly obvious in retrospect and good to know.

comment by Benito · 2019-04-16T09:15:27.891Z · score: 10 (5 votes) · LW · GW

This is awesome, thanks.

In case it’s of interest to anyone, I recently wrote down some short, explicit models of the costs of remote teams (I did not try to write the benefits). Here’s what I wrote:

  • Substantially increases activation costs of collaboration, leading to highly split focus of staff
  • Substantially increases costs of creating common knowledge (especially in political situations)
  • Substantially increases barriers to building trust (in-person interaction is key for interpersonal trust)
  • Substantially decreases communication bandwidth - both rate and quality of feedback - making subtle, fine-grained and specific positive feedback harder, and making strong negative feedback on bad decisions much easier, leading to risk-aversion.
  • Substantially increases cost of transmitting potentially embarrassing information, and incentivises covering up of low productivity, as it’s very hard for a manager to see the day-to-day and week-to-week output.
comment by Elizabeth (pktechgirl) · 2019-04-16T17:11:50.646Z · score: 2 (2 votes) · LW · GW
Substantially increases activation costs of collaboration, leading to highly split focus of staff

I think this is a mixed blessing rather than a cost. It makes staff members less likely to be working in alignment with one another, but more likely to be working in their personal flow in the Csikszentmihalyi sense of the word. I believe these two things trade off against each other in general, and things moving the efficient frontier are very valuable.

comment by Davidmanheim · 2019-05-09T09:15:48.791Z · score: 9 (5 votes) · LW · GW

This is a fantastic review of the literature, and a very valuable post - thank you!

My critical / constructive note is that I think many of the conclusions here are stated with too much certainty or are overstated. My primary reasons for thinking they should be more hedged are that the literature is so ambiguous, the fundamental underlying effects are unclear, the model(s) proposed in the post do not really account for reasonable uncertainties about what factors matter, and there is almost certainly heterogeneity based on factors that aren't discussed.

comment by Elizabeth (pktechgirl) · 2019-05-09T20:20:48.125Z · score: 6 (3 votes) · LW · GW

Thanks for the kind words.

I'm unclear if you think all conclusions should be hedged like that, or my specific strong conclusions (site visits are good, don't split a team) are insufficiently supported.

comment by Davidmanheim · 2019-05-10T06:55:50.147Z · score: 3 (2 votes) · LW · GW

Somewhere in the middle. Most conclusions should be hedged more than they are, but some specific conclusions here are based on strong assumptions that I don't think are fully justified, and the strength of the evidence and the generality of the conclusions aren't clear.

I think that recommending site visits and not splitting a team are good recommendations in general, but sometimes (rarely) could be unhelpful. Other ideas are contingently useful, but often other factors push the other way. "Make people very accessible" is a reasonable idea that in many contexts would work poorly, especially given Paul Graham's points on makers versus managers. Similarly, the emphasis on having many channels for communication seems to be better than the typical lack of communication, but can be a bad idea for people who need time for deep work, and could lead to furthering issues with information overload.

All of that said, again, this is really helpful research, and points to enough literature that others can dive in and assess these things for themselves.

comment by Elizabeth (pktechgirl) · 2019-05-10T14:53:54.599Z · score: 7 (4 votes) · LW · GW

That makes sense. Neither of those was my intention. I declare at the beginning that the research is crap; repeating it at every point seems excessive. And I assumed people would take the conclusions as "this will address this specific problem" rather than "this is a Pure Good Action that will have no other consequences."

I understand that this isn't how it came across to you, and that's useful data. I am curious how others feel I did on this score.

comment by Benito · 2019-05-02T22:15:19.186Z · score: 4 (2 votes) · LW · GW

Datapoint: Stripe's Fifth Engineering Hub is Remote. HN discussion.

comment by Matthijs Cox (matthijs-cox) · 2019-04-22T11:16:12.628Z · score: 4 (3 votes) · LW · GW

Fascinating.

It seems a certain amount of dynamics is relevant, as indicated by the site visits and retreats. I guess you assume the co-located team is static, i.e. no frequent home working or reshuffling with other teams?

I wonder if it's possible to model the impact of such vibrations and transitions between team formations. For example, the Scaled Agile framework proposes static co-located teams with a higher layer of people continuously transferring information between the teams. The teams retreat into a large event a few times a year. Due to personal circumstances I'd love to know their BS factor.

comment by Elizabeth (pktechgirl) · 2019-04-22T23:10:07.543Z · score: 3 (2 votes) · LW · GW

Teams were typically static for the duration of the studies, although IIRC some were newly formed task-focused teams and would reshuffle after the task was over.

Some studies looked at the effect of WFH in co-located teams. I didn't focus on this because it wasn't Oliver's main question, but from some reading and personal experience:

  • If a team is set up for colocation, you will miss things working from home, which will hurt alignment and social aspects like trust. This scales faster than linearly.
  • Almost everyone reports increased productivity working from home.
  • But some of that comes from being less interruptible, which hurts other people's productivity.
  • Both duration of team and the expectation of working together in the future do good things to morale, trust, and cooperation.

Based on this, I think that:

  • Some WFH is good on the margins.
  • The more access employees have to quiet private spaces at work, the less the marginal gains from WFH (although still some, for things like midday doctors' appointments or just avoiding the commute). I think most companies exaggerate how much these are available.
  • "Core Hours" is a good concept for both days and times in office, because it concentrates the time people need to defensively be in the office to avoid missing things.
  • How Scaled Agile affects morale and trust will be heavily dependent on how people relate to the meta-team. If they view themselves as constantly buffeted between groups of strangers, it will be really bad. If they view the meta-team as their real team, full of people they trust and share a common goal with but don't happen to be working as closely with at this time, it's probably a good compromise.
comment by Elizabeth (pktechgirl) · 2019-04-22T23:11:35.185Z · score: 1 (1 votes) · LW · GW

The most relevant paper I read was Chapter 5 of Distributed Work by Hinds and Kiesler. You can find it in my notes by searching for "Chapter 5: The (Currently) Unique Advantages of Collocated Work".

comment by ryan_b · 2019-04-16T15:45:17.545Z · score: 2 (1 votes) · LW · GW

Excellent work! I particularly like including your notes in the comments.

I have one question about OODA (I see long loops mentioned in the post, but without attribution; I don't see them mentioned in the notes explicitly). Could you talk more about the long-loop conclusion, and how remote work benefits from it?

My naive guess is that the bandwidth issues associated with remote work cause feedback to take longer, which means longer OODA loops are a desirable trait in the worker, but my confidence is not particularly high.

comment by Elizabeth (pktechgirl) · 2019-04-16T17:51:47.138Z · score: 7 (4 votes) · LW · GW

RE: OODA loops as a property of work: let's take the creation of this post as an example. There were broadly four parts to writing it:

1. Talking to Oliver to figure out what he wanted

2. Reading papers to learn facts

3. Relating all the facts to each other

4. Writing a document explaining the relation

Part 1 really benefited from co-location, especially at first. It was heavily back and forth, and so benefited from the higher bandwidth. The OODA loop was at most the time it took either of us to make a statement.

Part 2 didn't require feedback from anyone, but also had a fairly short OODA loop because I had to keep at most one paper in my head at a time, and dropping down to one paragraph wasn't that bad.

Part 3 had a very long OODA loop because I had to load all the relevant facts in my head and then relate them. An interruption before producing a new synthesis meant losing all the work I'd done till that point.

I also needed all available RAM to hold as much as possible at once. Even certain background noise would have been detrimental here.

Part 4 had a shorter minimum OODA loop than part 3, but every interruption meant reloading the data into my brain, so longer was still better.

Does that feel like it answered your questions?

comment by ryan_b · 2019-04-16T21:12:18.025Z · score: 4 (2 votes) · LW · GW

That is much better, but it raises a more specific question: here you described the loop as a property of the task; but then you also wrote

  • Hire people like me
    • long OODA loop

Which seems to mean you are the one with the long loop. I can easily imagine different people having different maximum loop-lengths, beyond which they are likely to fail. Am I correct in interpreting this to mean something like trying to ensure that the remote worker can handle the longest-loop task you have to give them?

comment by Elizabeth (pktechgirl) · 2019-04-17T00:20:48.827Z · score: 6 (3 votes) · LW · GW

I think tasks, environments and people have a range of allowable OODA loops, and that it's very damaging if there isn't an overlap of all three.