post by [deleted]

This is a link post for

Comments sorted by top scores.

comment by ialdabaoth · 2017-10-21T08:33:17.928Z · LW(p) · GW(p)

Oh Jesus Christ, this thing.

Would anyone on LessWrong2 be interested in "Brent Dill's Collected Wisdom and/or Madness About Social Systems: The Sequence"?

Replies from: Raemon, lahwran
comment by Raemon · 2017-10-22T00:45:31.229Z · LW(p) · GW(p)

I've already specifically asked for this, but for common knowledge purposes, yes.

comment by John_Maxwell (John_Maxwell_IV) · 2017-10-20T00:02:15.222Z · LW(p) · GW(p)

There's a lot of literature on what we do wrong, but not a lot of ready-made "techniques" just sitting there to help us get it right—only a scattering of disconnected traditions in things like management or family therapy or politics.

Given the unfortunate state of social science, my guess is that the best sort of evidence re: group rationality is observing which companies succeed in highly competitive industries without having any special advantages, especially those that attribute their success to their corporate culture. This Amazon reviewer thinks Koch Industries is such a company. I haven't read Koch's book, but this interview has interesting quotes like:

...we try to model our company around is what a philosopher scientist [Michael] Polanyi called “The Republic of Science.”

My read of the interview is that Koch succeeded by solving the problem of exploration neglect.

Replies from: Vaniver, gworley
comment by Vaniver · 2017-10-20T02:08:03.634Z · LW(p) · GW(p)

I read The Science of Success a while ago, and thought it was very good; I was somewhat surprised by how simple their approach seemed to be. (Roughly, management is compensated based on the net present value of what they manage, rather than on whether they hit metric targets and so on; this both encourages creativity and makes sure the actual goal flows through all decision-making.)

comment by Gordon Seidoh Worley (gworley) · 2017-10-20T00:38:25.784Z · LW(p) · GW(p)

This also matches with much of modern company valuation being due to intangible rather than physical assets. Companies of course have some advantages as organizations, with much clearer boundaries and death conditions than communities have. This seems to conform with the idea that successful businesses are the best places to look to understand how to run successful orgs of any type, though the challenge remains to figure out what is specific to these businesses and what can generalize to other orgs.

Replies from: romeostevensit
comment by romeostevensit · 2017-10-20T07:08:41.045Z · LW(p) · GW(p)

(decision theory quality * tightness of feedback loops)/proxy divergence = winning
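
A minimal sketch of that heuristic as a toy score (purely illustrative; the function name, the 0-1 scales, and the example numbers are my own assumptions, not the commenter's):

```python
# Toy scoring of the heuristic above; names and scales are illustrative assumptions.

def winning_score(decision_quality: float,
                  feedback_tightness: float,
                  proxy_divergence: float) -> float:
    """(decision theory quality * tightness of feedback loops) / proxy divergence.

    decision_quality and feedback_tightness: subjective ratings in (0, 1].
    proxy_divergence: how far the metrics being optimized have drifted from
    the real goal (higher = worse); must be > 0 to avoid division by zero.
    """
    return (decision_quality * feedback_tightness) / proxy_divergence

# Example: good decisions, slow feedback loops, moderate metric drift.
print(winning_score(0.8, 0.3, 0.5))  # -> 0.48
```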

Replies from: John_Maxwell_IV
comment by whpearson · 2017-10-20T07:21:56.035Z · LW(p) · GW(p)

Another important question: how to make a collective decision. Single people and small committees miss information that other group members might have that's relevant to the choice. Votes and large committees/working groups suck and are very heavyweight. How should decisions be made?

I've got a feeling that people shouldn't try to optimise 1 and 2 too much. It skeeves me out when I see people doing it, like people are trying to pull levers in my head, and I have to spend mental energy figuring out whether I agree with the way they're trying to pull the levers or not. It might be that there is no such thing as an optimal group-cohesion policy, just as you cannot be universally inclusive.

comment by weft · 2017-10-19T21:59:09.615Z · LW(p) · GW(p)

If I could upvote this twice, I would.

I think it's a common-but-harmful thing when people choose a community by maximizing along the "people who think like me / are similar to me / are interested in similar things as I am" axis. I posit that many people would be much happier if they instead chose the best d*mn community they can access (most functional, most fulfilling, etc.), and just satisficed along the "people who think like me" axis.

ETA: I specifically got back into Ballooning after about five years out of it because, after I moved to a big city, I was dissatisfied and unhappy with the level of commitment and intertwinedness in interpersonal relationships and communities. I thought "What is the strongest community I have access to?" and that's where I joined. If one of any number of other activities I enjoy had a better community, I would have gone there instead.

comment by Diffractor · 2017-10-20T04:45:20.615Z · LW(p) · GW(p)

My first stab at it (which I'll be working on over the weekend): collect a big list of drama and -storms, and look for commonalities or overarching patterns, either in the failure modes or in what could have been done to prevent them ahead of time. There are lots of different group failure modes, but a lot of people seem to have an ugh field around even acknowledging the presence of drama, let alone participating in it.

Thus, this seems like a worthwhile thing to throw some effort at, with a special eye towards actually finding the social version of a nuclear reactor control rod.

comment by John_Maxwell (John_Maxwell_IV) · 2017-10-19T23:29:58.213Z · LW(p) · GW(p)

To the best of my knowledge, there's no clear model or technique that tells e.g. the manager of a startup how to balance fostering safety against holding high standards which might require telling people they aren't measuring up.

One way of looking at this problem is to think of the manager as being kind of like a coach. If a coach notices that their athlete is not breaking a sweat, that's a signal to make the workouts more difficult. If a coach notices symptoms of overtraining, that's a signal to make the workouts less difficult.

Another way of looking at it is to consider when safety is useful. I suspect that the optimal level of arousal for idea generation is below the optimal level for completion of well-defined next actions.

comment by Gunnar_Zarncke · 2017-10-20T22:29:33.152Z · LW(p) · GW(p)

I think there are a few other dimensions. Not sure whether you see this as a different category:
size of the community - bigger communities/teams/companies inherently need different organisational means, and there seem to be non-linearities involved, i.e. there are certain optima of organisation (like a single person doing all the paperwork), and growing beyond what can be handled at that size requires leaving a local optimum. This seems to be one core insight of Growing Pains, which I'm currently reading and which is totally relevant (though focussed on businesses).

type of the community - what is the main purpose of the community?

  • mutual support

  • relaxed company

  • getting something done

  • advertising for a cause

I'm uncertain whether this categorization makes sense or whether it should instead be along social/religious/economic lines.

Other relevant links:

- https://aeon.co/essays/like-start-ups-most-intentional-communities-fail-why

- http://lesswrong.com/r/discussion/lw/p23/dragon_army_theory_charter_30min_read/

comment by Raemon · 2017-10-20T04:02:39.833Z · LW(p) · GW(p)

Very grateful for this post; I commit to responding in more detail on Saturday.

Replies from: Raemon, Raemon
comment by Raemon · 2017-10-23T04:00:55.119Z · LW(p) · GW(p)

Epistemic Status: brainstorm

Initial approach: aiming for a descending depth-first brainstorm rather than breadth-first.

Most of this turned out to explore the question of what happens when organizations scale, and it assumes a focus on "groups of 10 or so that are trying to slowly scale".

Background assumptions

As noted elsewhere, a company, or other explicit organization, has a major leg up on "a random diffuse community", and the problem with the self-described "rationality community" is that it has no actual boundaries or goals.

If you are working with a diffuse-group, I think step 1 is to refactor it into something that has boundaries and goals that can be explicitly discussed/agreed upon. (My preferred way to do this is to plant a flag and create a subspace that says "this subspace has specific goals and higher standards in service of those goals.")

Assuming You've Somehow Got a Group Capable-In-Principle of Agreeing Upon Goals

Decide which of your Open Problems matter - I think it might be worth taking stock of the various known problems in group dynamics (or ones you can brainstorm) with a given group, and deciding which of them you actually want to experiment with or improve the state of the art on, and which you're just going to handle with "okay, it seems like some similar-reference-class group has pretty good standards and we'll copy those".

I'd expect "looser groups designed for generic progress" to end up having different issues and pressures than "explicit organizations aiming to accomplish a goal."

[Note: at first I meant the previous sentence as a statement, and upon reflection it is more of a _prediction_ and I'm less certain it's correct, especially depending on what sort of 'loose group' you have going on]

Zero to Ten vs 150 to A Thousand

There's an initial problem you face of getting the right people together to seed the culture. There's a (I think reasonably well understood??) problem of growing that culture - slowly, deliberately, so that at each step the culture has time to reinforce itself before adding more people.

I have a vague sense that there's a quantum shift when you go from "the Leader(s) know everybody" to a super-Dunbar number.

For all the open problems you list, are you more worried about seeding your first 10 people, growing them to 150, or scaling past that?

Finding People vs Training People

I think the EA community is struggling because it seems like you need people who are excellent along a number of dimensions:

  • actually good at their job

  • have strong epistemics

  • have strong ability to cooperate/coordinate

  • value alignment

And there basically aren't enough people who succeed at all four things, so you have to make tradeoffs.

Growing from 10 to 50

I think for most groups Conor might be involved with, the issue currently at stake is "establishing the first 10 or so people, followed by slowly expanding". (I don't know of any EA or rationality groups, professional or otherwise, that have explicit goals you expect people to cooperate on, or that are larger than 50 people.)

If groups of 10ish have been failing to exhibit group rationality, I think a plausibly good strategy is either to focus only on strengthening the group rationality of the existing group, or to find some compromise on "what's a reasonable level of rationality we can share that is enough to start scaling".

I do think, even if you're focused strongly on "make sure the existing group is strong before scaling", that some thought should be given to when and how to scale. I think most groups do eventually need to be big to accomplish things, and trying to optimize a tiny group of people can be similar to a "Meta Trap" where you're never satisfied enough to move on to Stage 2.

(Unless you have a specific task you're trying to do, and the task is clearly optimized for the number of people you have)

The First Growing Pain

I suspect there's a quantum shift before reaching Dunbar's number: the point where the hierarchy becomes more than 2 levels deep, which happens around 50-75 people. That will put strain on the group, and a bunch of Brent-Dill-esque concerns will get further exacerbated.

Pre-First-Growing-Pain

Assuming you're still at 1 or 2 levels of hierarchy, I assume the Hanson/Dill/Vassar/Rao-style concerns ("people respond to local social incentives, leading to warped behavior, politicking, and working at cross purposes") will still be relevant. I'm mostly aware of a bunch of things not to do, or ways in which things will fail by default.

The zeroth level problem is maybe people not even agreeing on how to even pretend to handle that sort of thing. Common solutions:

- pretend it's not a thing, rely on a shared Guess Culture with unwritten rules, and filter for people with the right set of assumptions on what those rules are.

- make sure everyone is at least able to acknowledge that Social Reality is a thing, so that they can talk about it explicitly. (This doesn't necessarily presume any particular manner of resolving issues; it could still be guess culture, but informed guess culture is... well, actually I don't know if it's better or worse.)

Transparency, Lack Thereof, Stress

I have a vague sense that there's a failure mode among orgs within, or a couple degrees removed from, the rationalsphere (idealistic startups of a certain bent; Bridgewater is an example), where the founders try for:

a) self-improvement
b) transparency
c) optional: caring about people's feelings (but not necessarily having any skill at doing so), and trying to resolve things thoroughly

And this gets bad because of:

  • focus on self improvement and transparency results in a lot of criticism, which is hard to do right and leads to lots of stress

  • resolving criticism and dealing with feelings is really exhausting

  • because people aren't actually very good at dealing with feelings, everyone quickly learns that the caring about feelings isn't For Real, but they have to pretend that they're doing it, which is even more exhausting and perhaps threatening

  • The transparency is "real" up until the moment when management has anything important that they need to hide from employees, at which point it rapidly becomes "employees are forced into transparency, management is transparent when convenient."

    (I think it's probably better to straight-up say "management is going to try to be transparent when convenient" from the beginning)

  • transparency mixed with criticism makes everything worse, even more stressful

That's about all I have time for, for now.

comment by Raemon · 2017-10-22T03:43:46.846Z · LW(p) · GW(p)

I spent 30+ minutes, but I had already used up most of my brain for the day and didn't come up with things that were worth spending other people's time on. Yet.

comment by nBrown · 2017-10-20T01:16:29.940Z · LW(p) · GW(p)

Some group-rationality articles:

Quoting Sarah Constantin from The Craft is not the Community:

" “Company culture” is not, as I’ve learned, a list of slogans on a poster. Culture consists of the empirical patterns of what’s rewarded and punished within the company. Do people win promotions and praise by hitting sales targets? By coming up with ideas? By playing nice? These patterns reveal what the company actually values."

Also, a segment from Melting Asphalt's wonderful Crony Beliefs: what is the company culture of your mind?

"By way of analogy, let's consider how beliefs in the brain are like employees at a company. This isn't a perfect analogy, but it'll get us 70% of the way there.

Employees are hired because they have a job to do, i.e., to help the company accomplish its goals. But employees don't come for free: they have to earn their keep by being useful. So if an employee does his job well, he'll be kept around, whereas if he does it poorly — or makes other kinds of trouble, like friction with his coworkers — he'll have to be let go."

comment by alwhite · 2017-10-20T01:14:52.322Z · LW(p) · GW(p)

The first thing that jumps out at me is the phrase "nourish/reward them over time such that their needs are not better met". It would seem that the first step of a rational group is to understand and meet the needs of the members of that group. I also see this as a very complex problem to solve in and of itself.

comment by Vaughn Papenhausen (Ikaxas) · 2017-10-25T04:15:43.532Z · LW(p) · GW(p)

First: thank you for writing this post; I emphatically agree that these are issues that need to be discussed systematically.

Second: I think lots of people in the LW community are already aware of him, but I want to point at Jonathan Haidt as someone who is doing good work on these kinds of problems (would welcome disagreement on this point, as I think I'm a bit too confident in it for my own good).

Third: A problem suggested by Haidt's work to add to this list, in the context of a society/nation-scale group (epistemic status: somewhat half-baked):

To optimize for cohesion (at least in a population containing authoritarians, which is likely unavoidable at a nation scale), a community should emphasize similarities among group members; to optimize for truth-seeking, a community should be viewpoint-diverse (I couldn't find one link that summed up the whole argument on short notice, but I think this comes close). It seems to me that norms that foster truth-seeking are in tension with norms that foster cohesion, to the extent that the former requires diversity while the latter requires sameness. Perhaps this doesn't apply as much to smaller, more intentional communities (in particular because such communities can select for people who value diversity, and against people who are threatened by or uncomfortable with it), but on a nation scale I think it does apply. I would welcome criticism on this as well; the idea is somewhat half-formed, and I have not given up hope that there is a way to reconcile these two goals satisfactorily. I plan to write at least one longer-form, top-level post on this topic at some point.

comment by [deleted] · 2017-10-20T17:35:51.753Z · LW(p) · GW(p)

Replies from: Conor Moreton
comment by Conor Moreton · 2017-10-20T20:09:11.078Z · LW(p) · GW(p)

Lorem ipsum