
comment by JenniferRM · 2017-05-26T08:37:52.444Z · LW(p) · GW(p)

The most common cause of the collapse of high investment intentional communities is romantic drama.

(Maybe the Dragon Barracks are so obviously a boy thing that you're taking for granted that there will be no girls in the house, but all the weird non-gendered pronouns like "a Dragon will brush its teeth" imply either an attempt to have a team composed of both men and women, or else a hilarious level of contempt for the agency of your space monkeys. I'm going to assume that you're imagining mixed-gender living arrangements rather than already starting with verbal de-personalization of presumed uniformly male space monkeys...)

So anyway, assuming men and women in the house at the same time, that's what usually causes things to collapse in the long run.

The two standard failure modes are bonobo egalitarianism that collapses under the accumulation of residual jealousies over time, or else a harem forming around the charismatic cult leader (which isn't necessarily a failure mode... it is just a sign of a cult leader whose stated community goals are a load of hypocritical baloney compared to the real goal of getting more than his "fair share" of tail -- cue the Limp Bizkit song).

There are lots of patches for this sort of thing that have historically worked for various kinds of communities. Requiring celibacy is an obvious one that monasteries often use. Disallowing any romantic statuses except "single" and "closed dyadic marriage" (with a managed "courting" status to mediate the one way transition) is another standard trick.

Whatever the rule is, the standard enforcement mechanism is "ostracism" because the real problem from a social engineering perspective is the accumulation of complicated feelings that slow and redirect the workings of the social machine away from its stated purposes and towards managing the wreckage of new and old love triangles. If you throw away the cogs that are liable to have "complicated feelings" and replace them with non-complicated cogs... then the machine should continue to run as designed?

(I think maybe the romantic mores that were junked in the US in the 1960's arose in the first place because villages are kinda like autopoietic intentional communities. The pragmatically useful norms of village romance, which kept the village from exploding, could be semi-safely junked because (well, obviously "the pill" but also because) cities are anonymous and moderately well mixed... essentially everyone in a city is already pre-ostracized by everyone else, and we each are desperately struggling to create a synthetic village-like community despite the isolating forces of urban mixing. In an already chaotic urban romantic economy, a divorce causing additional minor lesioning of the local social graph is like a dust devil in a hurricane. There might actually be a lot of dust devils caused by hurricane turbulence for all I know, but I'm pretty sure no one cares much because the actual hurricane makes them irrelevant.)

Anyway, for the above reasons, you might want to just say "this is a fraternity and if women want to start a rationalist sorority that can be a separate thing". Alternatively, think about romantic norms up front.

Replies from: Larks, John_Maxwell_IV, Duncan_Sabien
comment by Larks · 2017-05-28T21:46:11.295Z · LW(p) · GW(p)

One idea that is probably necessary but not sufficient is for the Commander (and anyone else with any authority in the house) to have an absolute commitment not to sleep with anyone else in the house.

Edit: with this rule, a different/earlier version of me might have been interested. Without it I would never be.

comment by John_Maxwell (John_Maxwell_IV) · 2017-05-27T05:40:09.102Z · LW(p) · GW(p)

Anyway, for the above reasons, you might want to just say "this is a fraternity and if women want to start a rationalist sorority that can be a separate thing".

Possible advantage of this solution: I've noticed that male bonding gets a lot easier when a group goes from being "almost all guys" to "all guys". (I imagine it would get easier still if you are regularly doing testosterone-elevating things that require coordination with your group of guys, the way sports teams, armies, fraternities, and heavy metal bands do. I suspect men have a pack hunting instinct that gets activated in circumstances like these.)

Replies from: Jacobian, username2
comment by Jacob Falkovich (Jacobian) · 2017-05-31T04:15:52.904Z · LW(p) · GW(p)

Data point to the contrary: I spent two years in a closed military unit with 44 guys and 5 girls (in Israel). Each of the girls went through at least a couple of in-unit boyfriends during that time, but that wasn't a major source of drama. It took quite a bit of suffering to forge the unit bonds (a 4-month combat boot camp to start our service), but by the end of it, people cared about "the unit" as a whole more than about personal drama. I certainly can't imagine that the "bonding" could have been any stronger without the girls there.

Replies from: Jacobian
comment by Jacob Falkovich (Jacobian) · 2017-05-31T15:24:42.418Z · LW(p) · GW(p)

And one final point of support for DA: while I was living in a closed barracks, with five girls, a huge workload, strict rules and significant barriers to exit, I read Ender's Game and thought "this is exactly like my life, and it's awesome".

I agree with some of the critics here that Duncan is overconfident in his ability to make this work. I also agree that there's a limit to how much you can learn from a work of fiction about space monkey superchildren. But a lot of the criticism here is even more overconfident, and it comes from people who have never lived in a DA-like situation, so all the evidence they're basing their criticism on is fictional.

Replies from: Duncan_Sabien, handoflixue
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-03T04:15:55.289Z · LW(p) · GW(p)

It's especially worth noting that the group is highly competent and self-selecting for the environment, too, so we're likely to respond in the same way you did (i.e. if we want to say that your experience "beat outside view," then we're pretty well set up for ours to beat outside view similarly, even if that outside view is somewhat unpromising).

comment by handoflixue · 2017-06-03T02:09:42.151Z · LW(p) · GW(p)

it comes from people who have never lived in a DA-like situation, so all the evidence they're basing their criticism on is fictional.

I've been going off statistics which, AFAIK, aren't fictional. Am I wrong in my assumption that the military, which seems like a decent comparison point, has an above average rate of sexual harassment, sexual assault, bloated budgets, and bureaucratic waste? All the statistics and research I've read suggest that at least the US Military has a lot of problems and should not be used as a role-model.

Replies from: Kaj_Sotala, tristanm
comment by Kaj_Sotala · 2017-06-03T18:14:32.431Z · LW(p) · GW(p)

Counterpoint:

Listening to the ways that various interviewees talked about the military ethos, and used value language to describe their experiences, I found myself thinking, like we Rogerians do, "What is it they are trying to get me to understand? There is something they are trying to be insistent about, but it's not clear; what is it?"

Eventually, I found something between the lines. It's hard to express directly; it works best if we start with what I hypothesize is the other side.

The US military, and probably all militaries ever, have a really quite low tolerance for fuckups. When somebody isn't dependable, when somebody doesn't exercise adequate restraint in their conduct, they get marginalized so they can't do too much damage, or simply gotten rid of.

All these youngsters join up, and have it drummed into them that they have these huge responsibilities to their fellow warriors and their nation, and they must do their jobs right. It's not just that they have to cover their squad mates in fire-fights, but things like, "If you don't clean this surface correctly, the guy who is going to try to land a plane on this deck will die and maybe take a bunch of us with him." And they discover, yes, they have it in them to do their jobs that well, that dependably. They are somebody who pulls their weight and can be counted on.

And furthermore, they discover they are in a whole society of people who are equally determined to be dependable, to pull their weight and be somebody who can be counted on. That can be a down-right rapturous experience; I know, because there's other ways to have at least some of that experience, such as through the performing arts, and having tasted it, I can attest it's positively intoxicating. It's like falling in love. Or maybe it is falling in love: this probably is more the basis of that intense camaraderie shared by veterans who served together than common adversity or common purpose.

Civilian society, as a whole, is, in contrast, replete with fuckups. People who can't get out of their own way enough to be depended on, people who don't take commitments seriously, people who are exploitative, who phone it in, to try to get away with minimal contributions, who don't care about those who rely on their work, who don't want to be relied upon, people who don't want to have self-restraint. We don't get to throw those people out of society, so there they are, being part of civilian society, fucking up, and their fucking up being tolerated.

People in the military, who subscribe to the discipline of speech and courtesy described above, are way, way, way, way, way too polite to actually come out and say, "We're different from civilians because we're not used to putting up with fuckups," but that is what it sounds like is lurking between the lines. It feels like they're trying to apologetically and politely say something that more bluntly put might sound like, "See, among us, fucking up is not okay; being a fuck up is not okay. We have these values and stuff which say it's not okay. And we totally get that that's okay in civilian life, where if you want to be a fuckup, that's your free choice. In our culture, the military culture, we see that as not a legitimate choice. We see that as bad – and comport ourselves accordingly."

If I am correct that this is the subtext, it also explains some of the difficulty that discharged service members can experience reintegrating into civilian society. The go-to explanation for difficulties reintegrating is usually PTSD or other socio/emotional "damage" that prevents reintegration. But that would be how civilian society sees it: "if you can't join us, it must be because you're broken." But what if it's just straight-up acculturative stress, from (re)joining a society with a very different value system, and one which does not support and espouse values that were not merely emotionally important, but plainly and obviously organized the society one left in ways one prized?

Replies from: gwern
comment by gwern · 2017-06-03T19:24:12.919Z · LW(p) · GW(p)

Personally, I don't think that the military helps. The claim is implausible as personality traits are pretty stubborn things. Anecdotes are definitely confounded as militaries these days can be selective (literally administering IQ tests), and young men who enlist will mature as a simple matter of time. Military-style boot camps are one of the juvenile justice interventions we can say don't work well or maybe at all ("Preventing future offending of delinquents and offenders: what have we learned from experiments and meta-analyses?", Mackenzie & Farrington 2015) despite being aimed at the 'youngsters' who ought to most benefit from not being 'fuckups' and being aimed much more explicitly at that goal with a lower bar of success. And the natural experiments I know of like the Vietnam War draft lottery show permanent large harms to income from being drafted (most famously, Angrist 1990), which is certainly not what one would expect from a magical organization which turns fuckup civilians into reliable soldiers and explains why super-competent soldiers have such difficulty comporting in & reintegrating into a civilian life of tragic incompetence everywhere.

Replies from: Duncan_Sabien, Kaj_Sotala, NancyLebovitz
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-03T20:38:00.727Z · LW(p) · GW(p)

Some confounds/conflations in the above? Like, I agree with the truth value of the specific examples you've cited, but I think I disagree with the implicit claim that they're necessarily entangled with the thing Kaj is quoting.

e.g. yes, juvenile military institutions don't prevent people from being delinquent or discourage future criminality, but that's not to say that they don't cause those people, while embedded, to be reliable for object-level tasks and deadlines.

Similarly, the absolute horror and chaos that was Vietnam War combat, and the subsequent shredding of the psyches of people who didn't volunteer to be there, seems fundamentally different from e.g. modern duty on an aircraft carrier or WWII quartermastering. It doesn't seem incoherent or contradictory to say both [military culture promotes reliability] and also [being drafted in Vietnam screws you up, military schools don't fix teenage delinquency].

I also note that both examples cited talk about people who don't self-select in, which—if relevant—wouldn't surprise me.

I think "implausible because personality traits are pretty stubborn" is an overconfident statement—personality traits are pretty stubborn, but being thoroughly embedded in a culture that forces you to practice certain skills and surrounds you with coherent social pressures is also pretty stubborn. And in point of fact, while within that context, culture clearly dominates over personality traits, whatever else happens afterwards.

If I've misunderstood your claims, please forgive and correct—I feel like I might've missed your crux.

comment by Kaj_Sotala · 2017-06-05T08:59:42.898Z · LW(p) · GW(p)

Duncan's comment already touched upon this, but just to highlight it: both of your cited studies are about situations where people were literally forced to join against their will; the Vietnam example additionally has those people exposed to the horror that was Vietnam. Being forced to join something against one's will tends to make people very resistant to the norms advocated there, and even to actively behave in the opposite way as soon as they get out of there. (I'm reminded of all the kids who decided, for many years afterwards, they want to have nothing to do with sports or exercise because they had to suffer through school gym class.) It's not a condition where you'd even expect to get much of the internalized pride in the group norms, and desire to act accordingly, that was discussed in my quote.

I get that you picked those studies to combat the confounding from selection (both in the military screening its candidates and the candidates themselves self-selecting), but the context of this discussion was "is Dragon Army a good idea". Dragon Army participants are also going to be both self-selected and heavily screened for suitability, so whether or not this kind of an intervention would work for the population at large isn't actually the question we're interested in.

comment by NancyLebovitz · 2017-07-18T00:55:58.370Z · LW(p) · GW(p)

An actual military has life-and-death work. This might even be more important than consent.

A military-style "boot camp" for delinquents is a cargo cult by comparison.

comment by tristanm · 2017-06-03T19:24:01.605Z · LW(p) · GW(p)

Unfortunately I think at this point the discussion can only go towards a back and forth on what is good and bad about the military, which can't be very profitable, and this kind of debate has gone on for so long already that it's embedded into popular culture. It's also very heavily culture-warish.

Clearly, the military is adapted for one task, which requires an extraordinary amount of dependability and low likelihood of failure. There's also an extraordinary cost for that low likelihood of failure, which encompasses the things you pointed out. I don't think any society has survived very long being converted into 100% military culture, nor has it survived getting rid of it completely.

Replies from: ChristianKl
comment by ChristianKl · 2017-06-03T21:02:06.746Z · LW(p) · GW(p)

Clearly, the military is adapted for one task, which requires an extraordinary amount of dependability and low likelihood of failure.

Maybe a low likelihood of the certain kinds of errors for which it optimizes, but not in general. An above-average rate of sexual assault is a sign of failure.

Losing track of money in the middle of a war -- money that might go to anyone -- is also a failure (https://www.theguardian.com/world/2007/feb/08/usa.iraq1).

The NSA lost their cyber-weapons (maybe to Russian spies) and now you have civilian targets like hospitals getting attacked because they didn't do their OPSec properly.

The US military accidentally bombs hospitals.

comment by username2 · 2017-05-27T10:54:10.495Z · LW(p) · GW(p)

Romantic entanglements and their fallout are not ruled out by all-male environments, even if the members do not identify as homosexual. So it's still important to consider these issues even if there are no women at all.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2017-05-27T18:56:07.542Z · LW(p) · GW(p)

Can confirm. I was in a fraternity in college with many gay members, some of whom occasionally hooked up and caused manageable levels of drama. This was a relatively recent phenomenon in the history of the fraternity; I think as recently as 10 years before my time nobody was out, and then some people came out after joining.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T09:25:15.472Z · LW(p) · GW(p)

Currently there are both men and women interested (though many more men than women).

All of your points above seem sound at first glance, and yes, it's on the docket to be sorted out. I don't think I want to go full monastery, but there's a decent chance the house itself will end up being activity-restricted in some way.

Thanks for the detailed model-sharing.

Replies from: Raemon, JenniferRM
comment by Raemon · 2017-05-27T05:39:59.852Z · LW(p) · GW(p)

I want to add a strong "romantic entanglements are a big risk" voice.

My worst experiences with rationalists (and possibly some of their worst experiences with me) were when romance/sex conflict came up. It turns out people are really bad at being rational when that happens. (This was exacerbated by a lot of people being inexperienced, which may or may not be the case in Dragon Army, but it makes sense that romance and sex drive are the kind of thing that just overwhelms the prefrontal cortex.)

comment by JenniferRM · 2017-05-30T06:50:00.067Z · LW(p) · GW(p)

I'm glad the model was deemed useful :-) Good luck.

comment by Mass_Driver · 2017-05-26T07:51:01.063Z · LW(p) · GW(p)

1) I agree with the very high-level point that there are lots of rationalist group houses with flat / egalitarian structures, and so it might make sense to try one that's more authoritarian to see how that works. Sincere kudos to you for forming a concrete experimental plan and discussing it in public.

2) I don't think I've met you or heard of you before, and my first impression of you from your blog post is that you are very hungry for power. Like, you sound like you would really, really enjoy being the chief of a tribe, bossing people around, having people look up to you as their leader, feeling like an alpha male, etc. The main reason this makes me uncomfortable is that I don't see you owning this desire anywhere in your long post. Like, if you had said, just once, "I think I would enjoy being a leader, and I think you might enjoy being led by me," I would feel calmer. Instead I'm worried that you have convinced yourself that you are grudgingly stepping up as a leader because it's necessary and no one else will. If you're not being fully honest about your motivations for nominating yourself to be an authoritarian leader, what else are you hiding?

3) Your post has a very high ratio of detailed proposals to literature review. I would have liked to see you discuss other group houses in more detail, make reference to articles or books or blog posts about the theory of cohousing and of utopian communities more generally, or otherwise demonstrate that you have done your homework to find out what has worked, what has not worked, and why. None of your proposals sound obviously bad to me, and you've clearly put some thought and care into articulating them, but it's not clear whether your proposals are backed up by research, or whether you're just reasoning from your armchair.

4) Why should anyone follow you on an epic journey to improve their time management skills if you're sleep-deprived and behind schedule on writing a blog post? Don't you need to be more or less in control of your own lifestyle before you can lead others to improve theirs?

Replies from: Qiaochu_Yuan, Duncan_Sabien, robot-dreams
comment by Qiaochu_Yuan · 2017-05-26T18:25:18.180Z · LW(p) · GW(p)

I don't think I've met you or heard of you before, and my first impression of you from your blog post is that you are very hungry for power. Like, you sound like you would really, really enjoy being the chief of a tribe, bossing people around, having people look up to you as their leader, feeling like an alpha male, etc.

As someone who knows Duncan moderately well in person and has been under his leadership in a few contexts (CFAR instructor training and the recent Dragon Army experiment), I can confirm that this is nowhere close to true. What Duncan is hungry for is for the world to be better, and he thinks as a contingent fact that being the chief of this particular tribe is the best way for him to do that. I agree with Duncan's assessment of himself that if someone else stepped up to do the thing he would breathe an enormous sigh of relief, rather than be in any way jealous.

Why should anyone follow you on an epic journey to improve their time management skills if you're sleep-deprived and behind schedule on writing a blog post?

It depends on how urgent you think Duncan thinks having this blog post out sooner rather than later is. If Duncan were optimizing for looking like he has his shit together he could have either just not mentioned that he was sleep-deprived and behind schedule, or he could have gotten more sleep and fallen further behind schedule. Instead he posted the blog post, and went out of his way to mention that he was sleep-deprived and behind schedule, because he is optimizing for something else.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T09:35:25.683Z · LW(p) · GW(p)

1) Thanks.

2) Nope, you're just way off (though I appreciate the candor). I thought about coming up with some sort of epistemically humble "maybe" or "I can see where you got that impression," but it seems more advisable to simply be direct, and to sound as confident as I am. I've been a leader, and I've been a follower, and I've transitioned in both directions within the same contexts, and there's no special draw there along any of the lines you laid out. In particular, I think the statement "this needs to happen, and no one else is going to do it" is actually true; if some contender wants to stand up and credibly claim they can pull this off better than me, I will IMMEDIATELY hand them the baton and breathe a sigh of relief—my actual favorite place to be is second or third in command.

Feel free to PM me if you're actually curious about my history, or to poke around my reputation within the community, or to ask any of the dozen or so people who've worked with me for a couple of years, or the twenty people who attended the dry run experiment last week (I can point you in their direction more specifically, also through PM).

(I also considered whether to update/change my tone given your first impression, but it seems to be enough of an outlier that I probably won't make any deliberate effort.)

3) I think you and I might disagree fairly strongly on the importance/value/worth of "the literature" in this arena. Part of the whole point here is that I have a solid inside view, developed from a unique set of experiences, that a lot of other people are doing it wrong. I think there's some value in literature review (e.g. the sources that Benquo listed up above seem worth at least an afternoon's perusing), but in three separate fields I've found that my idiosyncratic ideas that everyone said contradicted the literature and wouldn't work did, in fact, work, and produced excellent results; I'm not actually convinced that there's enough EV to justify more than a quick, 80/20 skim of the available info. I'm currently reasoning from my armchair—that's a fair point. But also the whole screed is "let's get down to the business of running experiments and gathering data," and I note again that we did already do a test weekend that gave promising preliminary support to a lot of my models and claims.

4) Another quite sound/reasonable criticism, taking the outside view with no priors to add detail to your model. In point of fact, though, it's been a 90th percentile unusual month (I'm the curriculum director in an org that just ran its most ambitious sprint of events to date, including bringing in a round of new employees whose training I was almost entirely responsible for, and then since that ended I've been churning hard on this project), and it's not particularly strong evidence about other months. Also, I think it's reasonable to posit that one needs to be more or less in control before leading others, but I note it's not obvious—I can clearly envision (for instance) models in which one person sacrifices themselves to push everyone else forward. That's not what I plan to do, but the picture isn't as straightforward as a clever-sounding false equivalency.

Also, lastly, remember the house is supposed to help me, too:

I personally feel that I am operating far below my healthy sustainable maximum capacity, and I'm not alone in that, and something like Dragon Army could help.

I'm not the only one with skills, and a big part of it is creating a construct that I can use to level up and improve. The part where I impose structure is separate from the part where maybe I could leverage social pressure to improve my own workflow.

Replies from: Lumifer
comment by Lumifer · 2017-05-26T16:11:36.168Z · LW(p) · GW(p)

I think the statement "this needs to happen, and no one else is going to do it" is actually true

Can you point to some reasons why you believe that an authoritarian commune is a good idea (besides "let's try and see what this button does")?

in three separate fields I've found that my idiosyncratic ideas that everyone said contradicted the literature and wouldn't work did, in fact, work, and produced excellent results

"Who needs literature, I'm smarter than all of them" is a worrisome attitude. By the way, did you check what the literature actually said? In my experience what "everyone says" literature claims is usually NOT what the literature really claims.

the whole screed is "let's get down to the business of running experiments and gathering data,"

What is the price for the experiment and who will pay it?

Replies from: Duncan_Sabien, Viliam
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T16:17:04.072Z · LW(p) · GW(p)

Er ... I think the whole post above is all about answering your first question? I'm confused, and feel somewhat strawmanned by the summary "let's try it and see what this button does." Because high-commitment, high-structure environments have a long, long history of being actually productive and useful and net-good for a lot of the people that go through them, and ought to be in the toolkit despite their known failure modes, and given the rationalist community's strong predilections towards individualism, prioritizing flexibility and following short-term motivation, and not committing to things, it seemed naive to expect that a high-commitment, high-structure environment would come into existence via committee. Note that, while not super emphasized in the post above, a major assumption is "if I'm right, I should be able to largely put down the baton six months in when the thing is clearly working," i.e. it's more about the structure than the authoritarianism specifically (the authoritarianism being simply a necessary catalyst imo).

The price for the experiment is largely distributed across its members; it's the money involved in housing and whatever difficulty people suffer from giving up a not-insignificant-but-overall-fairly-small fraction of their agency and self-determination. It's roughly analogous, I think, to the price one pays to become a black belt, only condensed down into six months rather than spread across several years.

As far as "who needs literature, I'm smarter than all of them" being worrisome—I'm okay with people being worried. Those people are being actively encouraged to influence things here, and also the whole system is based on iteration, and also I object to the strawmanning again (I've said more than once that there's some value to be had there, but am being summed up as rejecting it entirely), and also I am, in fact, smarter than a lot of them. Not all, but a lot, and it's been proven before in multiple domains, and I'd be an idiot to ignore that.

Replies from: Lumifer
comment by Lumifer · 2017-05-26T16:59:04.650Z · LW(p) · GW(p)

I'm confused, and feel somewhat strawmanned by the summary "let's try it and see what this button does."

That wasn't a summary of your position, that was a straw counterpoint for you to kick :-)

high-commitment, high-structure environments have a long, long history of being actually productive

Well... it's complicated. Such environments are good for producing tools for a purpose. Cogs in a machine, maybe, or mass-produced minds from the same mold, or even cannon fodder if you're unlucky -- note that the military is the prototypical "high-commitment, high-structure" institution.

Having tools is certainly productive from the point of the view of the purpose. And it is true that some (maybe many) people feel that being a tool gives you a purposeful life, better than being pointlessly adrift. But, as I said, it's complicated :-/

it's more about the structure than the authoritarianism specifically

Structure needs to be enforced -- otherwise everyone could easily set up the needed amount of structure in their life themselves. The point of the exercise is, basically, "I will organize your life for you" and that doesn't work in the no-stick all-carrot setups.

I guess the concept I worry about is responsibility: if you will organize my life for me, you become responsible for it while my responsibility diminishes.

I am, in fact, smarter than a lot of them

That's a good thing to be, but not necessarily to believe in :-D

In any case, I'm not saying you should do what the literature says, I'm saying you should know what the literature says, and not on the basis of hearsay either.

The price for the experiment is largely distributed across its members

Yes. The price (I'm mostly speaking about things other than money) is uncertain, in statistical terms it's a random variable with a particular distribution. The question is how far the tail stretches: how bad is the worst-case scenario?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T17:33:26.617Z · LW(p) · GW(p)

Ah, gotcha. Thanks. =)

I think the point of the exercise is less "I will organize your life for you," and more "we will reduce our ability to hide from one another, and therefore all be more likely to conform to our shared sense of that-which-is-endorsed." The "I will organize" part is more "I will get us all together and turn on some of the relevant and hopefully-appropriate spotlights, and then moderate the discussion about which spotlights should turn back off."

I have hopes that we can see the worst-case scenarios coming in time to avert them or eject, and that therefore the effective worst-case scenario is basically something like "I had a rough six months and have to find another room to rent again."

Strong agreement with basically everything you say above.

comment by Viliam · 2017-06-01T12:06:32.889Z · LW(p) · GW(p)

Can you point to some reasons why you believe that an authoritarian commune is a good idea (besides "let's try and see what this button does")?

Because in the real world there are many successful authoritarian organisations? More or less every company you've heard about is de facto authoritarian inside (sure, there are exceptions, too).

Because "our kind" seems to have bias against coordination, and an authoritarian leadership is a possible way to solve it?

who will pay it?

Volunteers.

Replies from: Lumifer
comment by Lumifer · 2017-06-01T15:10:30.950Z · LW(p) · GW(p)

Because in real world there are many successful authoritarian organisations?

The issue isn't so much "authoritarian" as it is the combination of "authoritarian" and "commune".

Communes tend to be totalitarian and this one is explicitly set up as such (high-commitment, full-immersion, etc.) This makes it a dangerous environment -- if people mention noticing the skulls, that's because there are a LOT of skulls. "Authoritarian" means submission to the authority and in a totalitarian context that means total submission.

Authoritarian organizations like companies merely claim about 40 hours of your time per week plus obedience to a set of mostly external rules. And, of course, they pay you recognizing that their claim is a burden on you :-)

I understand where the impulse comes from: grassroots left is notoriously disorganized with the Occupy movement having been, perhaps, the peak of that -- no leadership, no specific demands, lots of talking, zero achieved. But I would be a lot more comfortable with a "normal" goal-directed organization which focuses on external goals and not on molding the minds of its members. I'm very suspicious of mind-molding.

Besides, Duncan's comments throughout the last week left me with grave doubts about his suitability to lead this kind of project. Low credence, of course, since I'm reacting merely to an internet persona and not to someone I know in real life, but my opinion of that persona took a marked turn to the worse.

an authoritarian leadership is a possible way to solve it?

Sure, it's a possible way. I'm concerned with the cost / benefit ratio, though. Plus benevolent God Emperors are in short supply.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2017-06-01T16:42:49.521Z · LW(p) · GW(p)

Communes tend to be totalitarian

Cite? The kinds of communes my friends and acquaintances have lived in, haven't seemed totalitarian at all.

Replies from: Lumifer
comment by Lumifer · 2017-06-01T17:11:07.006Z · LW(p) · GW(p)

Not in the sense that the secret police will check your underwear drawer for forbidden literature, but in the sense that they require conforming in more encompassing and more personal ways than the usual institutions of the society (like a workplace or a college, etc.)

Note that things which are basically shared living arrangements on a smaller or larger scale are sometimes called communes even though they don't require active integration into the life of that mini-society -- I don't have those in mind.

And, of course, this totalitarianism is not a binary variable but an axis with, essentially, a solitary isolated individual at one end and a hive mind on another.

comment by robot-dreams · 2017-05-26T15:52:10.897Z · LW(p) · GW(p)

I agree that 4 is a concern.

I disagree about 2. After having (a) participated in the weekend experiment and (b) done some "back-channel" references on Duncan, my impression is that he hates the fact that leadership will isolate him from the group he really wants to be a part of. I expect that if the experiment is successful, Duncan will eagerly set aside leadership and integrate himself with the group.

comment by zjacobi · 2017-05-28T22:31:45.025Z · LW(p) · GW(p)

I think the troll obliquely raised one good point with their criticism of the example for Rule 6:

For example, a Dragon who has been having trouble getting to sleep but has never informed the other Dragons that their actions are keeping them awake will agree that their anger and frustration, while valid internally, may not fairly be vented on those other Dragons, who were never given a chance to correct their behavior

Let me pose a question to the reader of my comment: would you rather live in a house where you have to constantly verbally ask the other residents to stop doing things that they could have reasonably foreseen would bother you, or would you rather live in a house where people actually used reasonable expectations of what other people want to guide their behavior and therefore acted in a way that preempted causing other people irritation?

Treating something like your sleep disturbances as your responsibility is fine if e.g. you (like me) have lots of trouble falling asleep and something like people whispering 15 metres from your room is keeping you from falling asleep. In that case, those people are doing everything right and really don't know that they're hurting you. It is unreasonable to get angry at them if you haven't explained to them why their behaviour is bad for you.

Sometimes it's less clear though. I sometimes use the microwave after midnight. I know that the microwave can be heard in my room and in my roommate's room. When I use the microwave and think he might be asleep, I stop it before the timer finishes and it beeps loudly. There's not much excuse to wait for my roommate to specifically request that I do this; I'm more than capable of figuring out a) the microwave beeping at the end is loud and the sort of thing that can disrupt sleep and b) there's a way I can stop that from happening. It would show some failure of consideration if I were to shrug off the potential inconvenience the microwave presents for my roommate for the slight benefit of not having to watch the microwave.

This points to one of the failure modes of Tell Culture, where people use it as an excuse to stop doing any thinking about how their actions can affect other people. This actually suggests that one potential experimental house norm could be something like "before taking an action that might affect another Dragon, pause and consider how it might affect them and whether the effect will be a net positive."

What this all comes down to for me is that it seems unfair to ask people to assume goodwill without also asking them to always attempt to act with goodwill.

Replies from: drethelin, Duncan_Sabien
comment by drethelin · 2017-05-29T21:52:04.926Z · LW(p) · GW(p)

I like this comment, but I think what this and the original trollpost miss is that the LW community in general, due to having a lot of people with autism and sensory issues, has a ton of people who actually do NOT have "reasonable expectations of what other people want" to guide their behavior. The OP quoted here is making a common typical-mind type error. Of COURSE it's better to live with people who intuit your preferences and act in accordance with them without being told what they are. But it's obnoxious to shit on attempted solutions to a problem by insisting that morally good people could never have the problem in the first place.

Replies from: zjacobi
comment by zjacobi · 2017-05-30T02:37:30.594Z · LW(p) · GW(p)

Agreed. I have a bunch of social anxiety and dislike it when a certain degree of social smoothness is treated as necessary to be sorted into the category of "good person".

My specific criticism is of people (and I don't just mean other people; I've failed here before) who could (with ease, not with Herculean effort) intuit preferences but use Tell Culture or direct communication norms to completely avoid doing so. This is especially maddening if you have social anxiety, because you're left anxious about bringing the thing up, especially to someone who seems so otherwise socially competent.

Thanks for the chance to clarify my views here!

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T03:33:23.914Z · LW(p) · GW(p)

Yeah, +1 for not "hiding" behind Tell Culture to save effort.

One of the fixes for the anxiety thing is Circling/Focusing/pair debugging culture, which goes a loooooong way toward both a) building the trust and safety required to bring up such issues with less anxiety and b) actually providing Schelling points for when to say it. We're also doing a weekly retrospective where it'll be low-cost and high-support to gently point at such things.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-28T23:14:58.523Z · LW(p) · GW(p)

+1 to all of that, especially the last line.

comment by sapphire (deluks917) · 2017-05-26T16:56:09.947Z · LW(p) · GW(p)

-- A note: I originally sent Duncan criticism privately. I didn't want to add too much negativity to the discussion. But Duncan asked me to post publicly and I will defer to his judgement. It's his project and he is a very capable guy. I really hope DA succeeds; the rationalist community could be doing much better on many metrics. In general I find the model of DA very promising. But I have some serious concerns.

-- The ethics code seems extremely strict.

For example this rule strikes me as extraordinarily hard to follow: "A Dragon will assume good faith in all interactions with other Dragons". As does "A Dragon will be fully present and supportive when interacting with other Dragons in formal/official contexts".

Earlier in the document Duncan said "Discuss, and then agree upon, and then rigidly and rigorously enforce a norm of perfection in all formal undertakings". This implies to me that Duncan intends to enforce the CoC pretty strictly. Should Duncan be confident it's reasonable to expect such large deviations from how humans normally operate? I should note that normal bootcamps do not require as much psychologically from their recruits. Even though bootcamps require obedience, they don't normally require recruits to think a certain way.

Duncan explicitly said he was willing to modify norms that members felt were too hard to follow ("Be ruthless about discarding standards during strategic review; if a member of the group says that X or Y or Z is too high-cost for them to sustain, believe them, and make decisions accordingly."). But he also said that the CoC was unlikely to change. If I thought the CoC was meant more as a set of guidelines than strict rules I would be less worried. But that is not how I interpreted the post.

-- How many people do we expect to leave or get kicked out?

I have moderated some internet communities (and admin an active one now). Temp bans and warnings can only go so far. At some point you have to be willing to pull the trigger and ban people.

The section on reparations reassured me that Duncan was thinking hard about how to keep people from falling off the path. In addition, unlike most internet communities, the DA recruits will be heavily vetted. But in order to enforce the reparations you either have to appeal to social pressure or the threat of kicking people out. I think the standards are very strict, so serious discipline might be needed.

-- Are there practical or ethical problems with this plan?

People who get kicked out of DA are still required to pay rent until they can find a replacement. Assuming they are on the lease, it seems highly unlikely you can kick them out of the house. However, someone who gets kicked out of the house might be pretty negative towards the rest of the group. It's probably a bad situation to keep them around, but maybe they can't easily find a replacement or a new place to live.

Secondly, people who get kicked out might be psychologically unable to remain at the DA barracks. But until they can find someone to replace them they are on the hook for rent. In my personal opinion, joining Dragon Army should be a "good deal" for everyone involved. It's important that the downside of "get kicked out" -> "lose friends, need to find a replacement despite the fact that you got kicked out and maybe can't give DA a good review, on the hook for lots of rent" is manageable. I would really hate to see anyone get hurt. I assume Duncan shares my concerns, but he didn't address them in the post.

In addition, has Duncan looked into the legalities surrounding renter's rights in California (and Berkeley in particular)? This isn't in the post even if he has done the research.

-- Duncan said the following: "I also considered whether to update/change my tone given your first impression, but it seems to be enough of an outlier that I probably won't make any deliberate effort."

It's plausible to me they aren't much of an outlier. I had the same reaction, as did several people I showed Duncan's post to (though other people thought Duncan's post sounded fine). If I didn't know Duncan was the curriculum director at CFAR, I would have thought he was crazy and probably dangerous. Stuff about "living under my thumb", self-comparisons to Tyler Durden, and the Ender's Game quote about "quick, decisive obedience" really worried me. Some of the most shocking stuff, from my perspective, was in the pop culture references. But a number of things in the main text gave off an extremely strong cult vibe. Some examples include the "house salute" and the "various call-and-response patterns surrounding house norms". I should note I am not accusing Duncan of anything; based on his reputation he seems trustworthy. But his tone definitely set off loud alarm bells for me.

--

Again, I am really happy people are considering new rationalist norms. Duncan seems like a very good choice to lead an experimental project. The general strategy of DA seems like a good one. But I wanted to share my concerns.

Replies from: Duncan_Sabien, ChristianKl
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T17:59:36.789Z · LW(p) · GW(p)

+1; general appreciation for your willingness to make the commentary public, so that I and others can interact with it openly.

EDIT: I got distracted dealing with the troll. I still hope to return to this comment, but if I fail to, please know that I am definitely mulling it over and taking its content seriously, and that I again thank you for posting.

comment by ChristianKl · 2017-05-27T11:25:23.252Z · LW(p) · GW(p)

I have moderated some internet communities (and admin an active one now). Temp bans and warnings can only go so far. At some point you have to be willing to pull the trigger and ban people.

In an internet community, you have fewer tools to change behavior than in personal conversations (and I say that having moderated a big personal-development internet forum for years).

As far as personal development frameworks go, ideas like a "code of perfection" can be found in Landmark (/The Four Agreements). On the other hand, the actual verbal techniques advocated are NVC/Circling/Focusing/Internal Double Crux, which have values of authenticity and accepting the emotions that arise in the moment.

Humans sometimes do have instincts to see other people in bad faith. There are two ways to deal with it. ① Suppress it because you have a codex that doesn't allow the instinct to be carried out. ② Bring it authentically to the front and be open about it.

Landmarkish thought would advocate ① while Circling leads to ②. Both can work as cultural norms but they are different and if there's a desire to be in Circling mode, don't have rules that require the other.

Replies from: Decius
comment by Decius · 2017-05-28T07:43:43.294Z · LW(p) · GW(p)

I'm managing/leading an internet gaming community, and the only tools I've ever had to use are selection and conversation.

I've had one person leave because their goal in joining was to acquire enough information and power to cause harm and they were so unsubtle about it that I was able to identify that and stop them. One additional person left because our norms of 'don't cheat' and 'be nice to our friends' were given to him gently by everyone in voice chat every time they were violated.

Oddly enough, both of those people ended up joining a specific competing group that held neither of the norms 'don't cheat' nor 'don't make public rape threats towards people who call out your cheating'.

And my selection method? Be public and pushy about what kind of norms you have, and push away people who don't already have and want to follow those norms.

comment by [deleted] · 2017-05-26T20:43:41.543Z · LW(p) · GW(p)

This post is so thoroughly repulsive and disgusting that I made an account for the sole purpose of pointing out how transparently and obviously perverse this fucked-up proposal is. Naturally I don't have any actual desire to be critical or rude; it's just that nobody else is doing it, so because of my infinite kindness and charity (if you have any doubts, rest assured that my closest friends and colleagues will all attest to my beneficent nature), I find myself obligated to step up to the batting plate, so to speak. Ah, if only someone could release me from this great burden. If only.

The author seems to have missed the part of Ender's Game about the protagonists being children. It's generally not a good thing for adults to role-play as children (the reasons for which are, I hope, sufficiently obvious to not require elaboration). The dominant impression I get from this is that this resembles the antifa movement and the anti-antifa movement: it's a bunch of immature adults LARPing but pretending that they aren't doing so.

Note that despite the author's insistence on the validity of his experience as a CFAR instructor, he fails to actually point to any concrete benefits that people have derived from that instruction -- plausibly because those benefits, when concretely stated without embellishment, are at best underwhelming. Note also that (1) no mention of dealing with problems arising from interpersonal romance appears in the post and (2) the author's reply to the comment that does point out the probable future existence of such problems receives what can at best be termed a cursory and dismissive reply.

This suggests that, contrary to the author's assertion of having amassed a diverse and broad range of skills, and contrary to whatever accolades his colleagues may see fit to place upon him, he hasn't yet attained the level of social awareness of a typical American high school student. It also suggests that the author's ability to model himself and to model others has more-or-less not yet attained the level of sophistication required to view people as more than one-dimensional. I.e., the post seems to suggest an attitude of "I, a good person, will find a bunch of good people, and we'll make these good things happen". I'm pretty sure I've met high school students with a more nuanced (and less optimistic) understanding of human nature.

Naturally, this would be excused if the Berkeley rationalist community were full of people who are actually good people and who tend to get things done. Let's check: Qiaochu Yuan, one of the most mathematically sophisticated members, has to the best of my knowledge hit a dead end in his PhD, and is becoming a CFAR instructor in Seattle, which makes it seem as though he's actually concretely worse off compared to the counterfactual in which the rationalist community didn't exist; Eliezer Yudkowsky has shifted in the direction of posting practically-untrue, self-aggrandizing bullshit on Twitter and Facebook instead of doing anything productive; Arbital is best described as a failure; word is going around that Anna Salamon and Nate Soares are engaging in bizarre conspiratorial planning around some unsubstantiated belief that the world will end in ten years, leading to severe dissatisfaction among the staff of MIRI; despite the efforts of a very valiant man, people have still not realized that autogynephilic men with repressed femininity and a crossdressing fetish pretending to be women aren't actually women; CFAR itself is trending in the direction of adding bureaucracy for bureaucracy's sake; my own personal experience with people branded as "CFAR instructors" has been extremely negative, with them effectively acting arrogant out of proportion to their competence, not to mention their below-average levels of empathy; there was that bizarre scandal last year in which someone was accidentally impregnated and then decided not to abort the child, going against what had previously been agreed upon, and proceeded to shamelessly solicit donations from the rationalist community to support her child; etc., etc., etc.

In effect, there seems to be some sort of self-deception around the fact that the Berkeley rationalist community is by almost all reasonable standards severely dysfunctional, with the best people actually being on the periphery of the community. It's almost as if the author is coming up with the "Dragon Army" in an attempt to help everyone collectively delude themselves into believing they're much better than they are, because he can't bear to actually look at the Berkeley rationalist community and see it for what it is: a pile of garbage. Just like how a child from a broken family might imagine that everyone's getting along. Unfortunately(?), flinching away from the truth doesn't actually make reality go away.

Amusingly, it actually does seem as though the author partially realizes this. Let's review the criteria which the author hopes the members of "Dragon Army" will fulfill after a year's worth of cult membership:

(1) Above-average physical capacity
(2) Above-average introspection
(3) Above-average planning & execution skill
(4) Above-average communication/facilitation skill
(5) Above-average calibration/debiasing/rationality knowledge
(6) Above-average scientific lab skill/ability to theorize and rigorously investigate claims
(7) Average problem-solving/debugging skill
(8) Average public speaking skill
(9) Average leadership/coordination skill
(10) Average teaching and tutoring skill
(11) Fundamentals of first aid & survival
(12) Fundamentals of financial management
(13) At least one of: fundamentals of programming, graphic design, writing, A/V/animation, or similar (employable mental skill)
(14) At least one of: fundamentals of woodworking, electrical engineering, welding, plumbing, or similar (employable trade skill)

"Above-average"? "Average"? Not exactly a high bar. "At least one employable mental skill, and at least one employable trade skill"? Is the correct inference here that the typical participant is actually expected to be not employable at all (i.e., deficient in both categories)? "First aid & survival" -- if there was ever any doubt that this is actually just sophisticated childish role-playing... The fact that I (in contrast with the Berkeley rationalist community) have put very little directed effort into the meta-goal of self-improvement and nevertheless plausibly already satisfy 11 of these 14 criteria, with the other 3 not seeming particularly difficult to attain, is not a good sign!

Despite the fixation on "evolving norms" or whatever, the author seems to be particularly blind to what social reality is actually like and what actually makes communities get along. Consider, e.g., the following quote:

for example, a Dragon who has been having trouble getting to sleep but has never informed the other Dragons that their actions are keeping them awake will agree that their anger and frustration, while valid internally, may not fairly be vented on those other Dragons, who were never given a chance to correct their behavior

Let me pose a question to the reader of my comment: would you rather live in a house where you have to constantly verbally ask the other residents to stop doing things that they could have reasonably foreseen would bother you, or would you rather live in a house where people actually used reasonable expectations of what other people want to guide their behavior and therefore acted in a way that preempted causing other people irritation?

There are two inferences to be made here:

  1. Members of the Berkeley rationalist community are particularly prone to using bureaucratic rule-setting as a way to compensate for their severely below-average social skills, and
  2. Members of the Berkeley rationalist community are particularly low-empathy and embody the worst of individualism, such that they don't actually care whether or not what they're doing might bother others until they're told to stop.

In my personal experience, both inferences are correct. Ultimately, what this comes down to is a bunch of socially inept, near-autistic losers trying to attain, via a combination of bizarre mimicry and a mountain of bureaucracy, the sort of basic social harmony that comes naturally to more competent people. Naturally, and contrary to the author's bizarre, childish idealism, one can expect a hell of a lot of repressed irritation, interpersonal drama, and general unpleasantness from this experiment.

To top off the turd cake with a cherry, the author's science fiction writing is trash:

I felt my stomach twist, felt that same odd certainty, this time wrapped in a layer of the coldest, blackest ice. “You came to kill us,” I said. There was a soft rustle as the others straightened, pressure on my shoulders as the space between us closed. “You came to kill us all.”

Anyone who can vomit that out on a page and feel proud of it isn't fit to lead or teach anything. Period. The world would be concretely better off if the author, and anyone like him, killed themselves.

Replies from: Valentine, Duncan_Sabien, drethelin, Duncan_Sabien, None, grendelkhan, None, The_Jaded_One, The_Jaded_One, a_different_face
comment by Valentine · 2017-05-27T21:12:21.277Z · LW(p) · GW(p)

PSA:

Do not feed trolls.

In ages past, vitriol like this would be downvoted into oblivion. This was out of recognition that norms of good discourse are more important than the content of arguments. Failure to abide by this spreads rot and makes good communal epistemic hygiene even more difficult.

I notice downvoting is disabled now. Which, sadly, means that people will be tempted to engage with this. Which reinforces a norm of having one's dissent noticed by acting like an unapologetic asshole. Which burns the future of this garden.

So as a close second, I advise just thoroughly ignoring 18239018038528017428 unless and until they step up to meet more noble conversational norms. If there are good points to be made here, they should be converted into the truth-seeking style Less Wrong aspires to so that we can all engage with them in a more hygienic way.

I appreciate Duncan's attempts to do that conversion and speak to the converted form of the argument.

But unless and until I see enough evidence to convince me otherwise, I assume 18239018038528017428's intentions are not truth-seeking. I assume they are inflammatory and will not change via civil discourse.

Ergo, request to all:

Do not feed trolls.

PS: I will follow my own advice here and have no intention of replying to 18239018038528017428 unless and until they transpose their discourse into the key of decency. I expect them to reply to me here, probably with more vitriol and some kind of personal attack and/or attempt to discredit me personally. My ignoring them should be taken as my following my own policy. Note that if 18239018038528017428 does reply with vitriol, it will probably be in some way fashioned as an attempt to make my very refusal to engage look like confirmation of their narrative. Please filter your reading of any replies to my message here accordingly.

Replies from: John_Maxwell_IV, Elo, komponisto
comment by John_Maxwell (John_Maxwell_IV) · 2017-05-27T22:20:48.477Z · LW(p) · GW(p)

I'm the person who advocated most strongly for getting the downvote disabled, and I share some of 18239018038528017428's skepticism about the community in the Bay Area, but I strongly agree with Val's comment. There are already a ton of case studies on the internet in how fragile good conversational norms are. I'm going to email Vaniver and encourage him to delete or edit the vitriol out of comments from 18239018038528017428.

(Also ditto everything Val said about not replying to 18239018038528017428)

Replies from: Vaniver, Maxlove, cousin_it
comment by Vaniver · 2017-05-28T00:19:57.623Z · LW(p) · GW(p)

I'm going to email Vaniver and encourage him to delete or edit the vitriol out of comments from 18239018038528017428.

Thanks for that; I had already noticed this thread but a policy of reporting things is often helpful. It seemed like Duncan was handling himself well, and that leaving this up was better than censoring it. It seems easier for people to judge the screed fairly with the author's original tone, and so just editing out the vitriol seems problematic.

With the new site, we expect to have mod tools that will be helpful here, ranging from downvoting making this invisible by default to IP bans and other measures that make creating a new throwaway account difficult.

Replies from: komponisto
comment by komponisto · 2017-05-28T07:11:52.744Z · LW(p) · GW(p)

For the record: at the risk of being a lonely dissenter, I strongly disagree with any notion that any of this discussion should have been censored in any way. (I was even grateful for the current impossibility of downvoting.)

Five years ago, or even two, my opinion would have been quite different. By this point, however, I have undergone a fairly massive update in the direction of thinking people are far, far too sensitive about matters of "tone" and the like. These norms of sensitivity are used to subtly restrict information flow. Ultimately Duncan and everyone else are better off knowing about the numerically-pseudonymous commenter's opinion in all of its gory detail. In fact, I would go so far as to say that the more they engage with this individual, the better; especially since the natural tendency will be to go in the opposite direction, circle the wagons, and dismiss the critic as a low-status outsider -- a behavior pattern that doesn't need more practice, IMHO.

(At any rate, the individual seems contemptuous enough of their targets that I would expect them to disengage on their own before the full value of discussion with them has been extracted.)

Replies from: John_Maxwell_IV, Evan_Gaensbauer, entirelyuseless, FeepingCreature
comment by John_Maxwell (John_Maxwell_IV) · 2017-05-29T03:30:34.295Z · LW(p) · GW(p)

I'm also curious to hear what made you update.

It's true that sensitivity norms can have subtle effects on a conversation, but nastiness norms can too. If you look at the study cited in the "hold off on proposing solutions" essay, you can see a case where politicizing a topic restricts the space of ideas that are explored. (I think this is actually a more natural takeaway from the study than "hold off on proposing solutions".) Nasty conversations also often see evaporative cooling effects where you are eventually just left with hardliners on each side. In general, I think nasty conversations tend to leave any line of reasoning that doesn't clearly support the position of one side or the other under-explored. (This is a pretty big flaw in my opinion, because I think divided opinions are usually an indicator of genuinely mixed evidence. If the evidence is mixed, the correct hypothesis is probably one that finds a way to reconcile almost all of it.) Furthermore I would predict that arguments in nasty conversations are less creative and generally just less well thought through.

Here's another argument. Imagine 18239018038528017428 showed you their draft comment minus the very last sentence. Then they showed you the last sentence "The world would be concretely better off if the author, and anyone like him, killed themselves." Would you tell them to add it in or not? If not, I suspect there's status quo bias, or something like it, in operation here.

Anyway, I think there are better ways to address the issue you describe than going full vitriol. For example, I once worked at a company that had a culture of employees ribbing each other, and sometimes we would rib each other about things other employees were doing wrong that would have been awkward to bring up in a serious manner. I think that worked pretty well.

In fact, I would go so far as to say that the more they engage with this individual, the better; especially since the natural tendency will be to go in the opposite direction, circle the wagons, and dismiss the critic as a low-status outsider -- a behavior pattern that doesn't need more practice, IMHO.

I just want to point out that Duncan did in fact put a tremendous amount of time into engaging with this critic (more time than he put into engaging with any other commenter in this thread, by my estimate).

Replies from: komponisto
comment by komponisto · 2017-05-29T05:53:06.184Z · LW(p) · GW(p)

My other comment should hopefully clarify things, as least with regard to politicization in particular.

To spell out the implications a bit more: the problem with political discourse, the reason it kills minds, is not that it gets heated; rather, it freezes people's mental categories in ways that prevent them from making ontological updates or paradigm shifts of any kind. In effect, people switch from using physical cognition to think about arguments (modus ponens, etc.), to using social cognition instead (who wins, who loses, etc.). (Most people, of course, never use anything but social cognition in arguments; politics makes even "nerds" or "intellectuals" behave like typical humans.)

It is in fact possible for "heated" or even "nasty" discourse to be very information-rich; this makes sense if you realize that what counts as "nasty" depends on social norms. If you encounter discourse from a different social context (even, for example, simply because the speaker has misunderstood the social context and its norms!) you may read it as "nasty", despite the fact that the author was specifically intending to communicate content.

Now, of course I don't consider 18239018038528017428's comment to be optimally worded -- but then, I wouldn't, because I didn't write it. This is the important thing to understand: there is value to be had in getting detailed input on the mental states of people unlike oneself.

I agree that Duncan deserves positive reinforcement for engaging with this critic to the extent he did. But I think it was actually good for him epistemically to do so, not just as a demonstration of his willingness to bend over backwards, and thus of his good social nature.

comment by Evan_Gaensbauer · 2017-06-02T06:32:47.458Z · LW(p) · GW(p)

I'm someone who doesn't live in the Bay Area, has no intention of moving there in the near future, and resents the idea that anyone who wants to be part of what ought to be a worldwide rationality community must eventually move to the Bay Area to do so. I'm part of the rationality and effective altruism communities, and I too have taken community members in the Bay Area to task for acting as though they can solve community coordination problems with new projects when acknowledgement of the underwhelming success or failure of prior projects never seems to take place. I do that on Facebook, though, where my civilian identity and a track record of my behaviour are on display. There are closed groups or chats where things are less open, so it's not as damaging, and even if I make a post on my own Facebook feed for over one thousand people to see, if I say something wrong, at least it's out in the open, so I may face the full consequences of my mistakes.

I know lots of people mentioned in '18239018038528017428's comment. I either didn't know those things about them, or I wouldn't characterize what I did know in such terms. Based on their claims, '18239018038528017428' seems to have more intimate knowledge than I do, and I'd guess is also in or around the Bay Area rationality community as well. Yet they're on this forum anonymously, framing themselves as some underdog taking down high-status community members, when no criterion for high status has been established other than "works at MIRI/CFAR", and what they're doing is just insulting and accusing regular people like the rest of us on the internet. They're not facing the consequences of their actions.

The information provided isn't primarily intended to resolve disputes, which I would think ought to be the best application of truth-seeking behaviour in this regard, and which is a primary, if not the only, purpose of discourse here. The primary purposes of '18239018038528017428's comment were to express frustration, slander certain individuals, and undermine and discredit Duncan's project without evidence to back up their claims. These are at cross-purposes with truth-seeking behaviour.

There's nothing I do that gets policed for tone on the basis of sensitivity that '18239018038528017428' isn't also doing here. While we're talking about norms of sensitivity, let's talk about norms for resolving interpersonal disputes. All the differences between how I and lots of others in the community do it, even if the tone we use isn't always splendid or sensitive, and how '18239018038528017428' does it, are what separate people who have a non-zero respect for norms from those who don't. This is coming from me, a guy who lots of people think probably already flouts social norms too much.

I am unsympathetic to '18239018038528017428', including on the question of whether they're censored. Another reason not to resolve interpersonal disputes like this in public on a website like LessWrong is that most people in online communities don't like seeing this sort of drama dominate discourse, and in particular there are lots of us who don't care for ever more drama from one zip code being all anyone pays attention to. That defies the purpose of this site, and saps the will of people not in the Bay Area to continue engaging with the rationality community. That's not what anyone needs. Since we've established that '18239018038528017428' seems close enough to probably be part of the Berkeley rationality community already, there are plenty of channels, like private group chats, mailing lists, or other apps, where everyone involved can be connected without user '18239018038528017428' needing to out themselves in front of everyone. They could've had a friend do it.

There are plenty of ways they could've accomplished everything they would've wanted without being censored, and without doing it on LessWrong. When they have access to plenty of online spaces which serve the same purpose, there's no reason LW must allow that speech to the chagrin of all other users. While I get that you think a Chesterton's fence for discourse is being torn down here, I don't believe that's what's going on, and I think everyone else on LessWrong who isn't personally involved deserves a say in what is and isn't censored on this site.

Replies from: komponisto
comment by komponisto · 2017-06-06T05:45:55.879Z · LW(p) · GW(p)

You don't seem to be addressing what I said much, if at all; rather, you're mostly giving your reaction to 18239018038528017428's comments. This is demonstrated by the fact that you take for granted various assumptions that it was the purpose of my comment to call into question.

In particular, the speech is not being allowed "to the chagrin of all other users". I am notably non-chagrinned by the speech being allowed, and I advocate that people be less chagrinned by such speech being allowed.

Needless to say, to be allowed is not to be approved.

comment by entirelyuseless · 2017-05-28T13:58:08.906Z · LW(p) · GW(p)

By this point, however, I have undergone a fairly massive update in the direction of thinking people are far, far too sensitive about matters of "tone" and the like.

What convinced you of this?

Replies from: komponisto
comment by komponisto · 2017-05-29T04:06:09.821Z · LW(p) · GW(p)

What convinced you of this?

A constellation of related realizations.

  • A sense that some of the most interesting and important content in my own field of specialization (e.g. the writings of Heinrich Schenker) violates, or is viewed as violating, the "norms of discourse" of what I took to be my "ingroup" or "social context"; despite being far more interesting, engaging, and relevant to my concerns than the vast majority of discourse that obeys those norms.

  • A sense that I myself, despite being capable of producing interesting content, have been inhibited from doing so by the fear of violating social norms; and that this (which is basically a form of cowardice) is likely to also be what is behind the stifled nature of norm-conforming discourse referred to above.

  • A sense that the ability to look beyond discourse norms (and the signaling value of violation or conformity thereto) and read texts for their information content is extremely intellectually valuable, and in particular, makes texts originating in outgroup or fargroup cultures much more accessible -- the epistemic usefulness of which should go without saying.

  • A sense that a generalized version of this principle holds: the ability to conform to discourse norms, despite their information-obstructing nature, yet still succeed in communicating, functions as a signal of high status or tight embeddedness within a community, achieved via countersignaling. In particular, it cannot be successfully imitated by those not already of similar status or embeddedness: the attempt to imitate Level 4 results in Level 1.

  • A sense that discourse norms, and norms of "civility" generally, are the result of optimization for a purpose entirely distinct from the efficient transmission of information. Namely, they are there to reduce the risk of physical violence; in fact they specifically trade off communicative efficiency for this. Hence: politics, diplomacy, law -- the domains in which discourse is most tightly "regulated" and ritualized being specifically those most concerned with the prevention of physical violence, and simultaneously those most notorious for hypocrisy and obscurantism. This, by contrast, does not seem to be what an internet forum concerned with truth-seeking (or even an associated real-life community of minimally-violent individuals living in a society characterized by historically and globally high levels of trust) is supposed to be optimizing for!

Replies from: Valentine, John_Maxwell_IV, entirelyuseless
comment by Valentine · 2017-05-31T01:04:15.031Z · LW(p) · GW(p)

Cool. Let's play.

I notice you make a number of claims, but of the ones I disagree with, none has "crux nature" for me. Which is to say, even if we were to hash out our disagreement such that I come to agree with you on the points, I wouldn't change my stance.

(I might find it worthwhile to do that hashing out anyway if the points turn out to have crux nature for you. But in the spirit of good faith, I'll focus on offering you a pathway by which you could convince me.)

But if I dig a bit, I think I see a hint of a possible double crux. You say:

A sense that discourse norms, and norms of "civility" generally, are the result of optimization for a purpose entirely distinct from the efficient transmission of information.

I agree with a steelman version of this. (I don't think it is literally entirely distinct — but I also doubt you do, and I don't want to pressure you to defend wording that I read as being intended for emphasis rather than precise description.) However, I imagine we disagree about how to value that. I think you mean to imply "…and that's bad." Whereas I would add instead "…and that's good."

In a little more detail, I think that civility helps to prevent many more distortions in communication than it causes, in most situations. This is less needed the more technical a field is (whatever that means): in math departments you can just optimize for saying the thing, and if seeming insults come out in the process then that's mostly okay. But when working out social dynamics (like, say, whether a person who's proposing to lead a new kind of rationalist house is trustworthy and doing a good thing), I think distorted thinking is nearly guaranteed without civility.

At which point I cease caring about "efficient transmission of information", basically because I think (a) the information being sent is secretly laced with social subtext that'll affect future transmissions as well as its own perceived truthiness, and (b) the "efficient" transmission is emotionally harder to receive.

So to be succinct, I claim that:

  • (1) Civility prevents more distortion in communication than it creates for a wide range of discussions, including this one about Dragon Army.
  • (2) I am persuadable as per (1). It's a crux for me. Which is to say, if I come to believe (1) is false, then that will significantly move me toward thinking that we shouldn't preserve civility on Less Wrong.
  • (3) If you disagree with me on (1) and (1) is also a crux for you, then we have a double crux, and that should be where we zoom in. And if not, then you should offer a point where you think I disagree with you and where you are persuadable, to see whether that's a point where I am persuadable.

Your turn!

comment by John_Maxwell (John_Maxwell_IV) · 2017-05-29T06:52:58.499Z · LW(p) · GW(p)

I'm gonna address these thoughts as they apply to this situation. Because you've publicly expressed assent with extreme bluntness, I might conceal my irritation a little less than I normally do (but I won't tell you you should kill yourself).

A sense that some of the most interesting and important content in my own field of specialization (e.g. the writings of Heinrich Schenker) violates, or is viewed as violating, the "norms of discourse" of what I took to be my "ingroup" or "social context"; despite being far more interesting, engaging, and relevant to my concerns than the vast majority of discourse that obeys those norms.

Did he tell people they should kill themselves?

This strikes me as an example of the worst argument in the world. Yes, telling people to kill themselves is an alternative discourse norm, and alternative discourse norms can be valuable; but does it follow that telling people to kill themselves is valuable? Come on. You can easily draw a Venn diagram that refutes this argument. Alternative discourse norms can be achieved while still censoring nastiness.

A sense that I myself, despite being capable of producing interesting content, have been inhibited from doing so by the fear of violating social norms; and that this (which is basically a form of cowardice) is likely to also be what is behind the stifled nature of norm-conforming discourse referred to above.

Telling forum users they should kill themselves is not gonna increase the willingness of people to post to an online forum. In addition to the intimidation factor, it makes Less Wrong look like more of a standard issue internet shithole.

A sense that the ability to look beyond discourse norms (and the signaling value of violation or conformity thereto) and read texts for their information content is extremely intellectually valuable, and in particular, makes texts originating in outgroup or fargroup cultures much more accessible -- the epistemic usefulness of which should go without saying.

This can be a valuable skill and it can still be valuable to censor content-free vitriol.

A sense that a generalized version of this principle holds: the ability to conform to discourse norms, despite their information-obstructing nature, yet still succeed in communicating, functions as a signal of high status or tight embeddedness within a community, achieved via countersignaling. In particular, it cannot be successfully imitated by those not already of similar status or embeddedness: the attempt to imitate Level 4 results in Level 1.

Yes, it takes a lot of effort to avoid telling people that they should kill themselves... Sorry, but I don't really mind using the ability to keep that sort of thought to yourself as a filter.

A sense that discourse norms, and norms of "civility" generally, are the result of optimization for a purpose entirely distinct from the efficient transmission of information. Namely, they are there to reduce the risk of physical violence; in fact they specifically trade off communicative efficiency for this. Hence: politics, diplomacy, law -- the domains in which discourse is most tightly "regulated" and ritualized being specifically those most concerned with the prevention of physical violence, and simultaneously those most notorious for hypocrisy and obscurantism. This, by contrast, does not seem to be what an internet forum concerned with truth-seeking (or even an associated real-life community of minimally-violent individuals living in a society characterized by historically and globally high levels of trust) is supposed to be optimizing for!

If we remove Chesterton's Fences related to violence prevention, I predict the results will not be good for truthseeking. Truthseeking tends to arise in violence-free environments.

Maybe it'd be useful for me to clarify my position: I would be in favor of censoring out the nasty parts while maintaining the comment's information content and probably banning the user who made the comment. This is mainly because I think comments like this create bad second-order effects and people should be punished for making them, not because I want to preserve Duncan's feelings. I care more about trolls being humiliated than censoring their ideas. If a troll delights in taking people down a notch for its own sake, we look like simps if we don't defect in return. Ask any schoolteacher: letting bullies run wild sets a bad precedent. Let me put it this way: bullies in the classroom are bad for truthseeking.

See also http://lesswrong.com/lw/5f/bayesians_vs_barbarians/

Your comment makes you come across as someone who has led a very sheltered upper-class existence. Like, I thought I was sheltered, but it clearly gets a lot more extreme. This stuff is not a one-sided tradeoff like you seem to think!

For obvious reasons, it's much easier to convert a nice website to a nasty one than the other way around. And if you want a rationalist 4chan, we already have that. The potential gains from turning the lesswrong.com domain in to another rationalist 4chan seem small, but the potential losses are large.

Replies from: komponisto
comment by komponisto · 2017-05-29T08:09:48.696Z · LW(p) · GW(p)

Because you've publicly expressed assent with extreme bluntness

Who said anything about "extreme"?

You are unreasonably fixated on the details of this particular situation (my comment clearly was intended to invoke a much broader context), and on particular verbal features of the anonymous critic's comment. Ironically, however, you have not picked up on the extent to which my disapproval of censorship of that comment was contingent upon its particular nature. It consisted, in the main, of angrily-expressed substantive criticism of the "Berkeley rationalist community". (The parts about people killing themselves were part of the expression of anger, and need not be read literally.) The substance of that criticism may be false, but it is useful to know that someone in the author's position (they seemed to have had contact with members of the community) believes it, or is at least sufficiently angry that they would speak as if they believed it.

I will give you a concession: I possibly went too far in saying I was grateful that downvoting was disabled; maybe that comment's proper place was in "comment score below threshold" minimization-land. But that's about as far as I think the censorship needs to go.

Not, by the way, that I think it would be catastrophic if the comment were edited -- in retrospect, I probably overstated the strength of my preference above -- but my preference is, indeed, that it be left for readers to judge the author.

Now, speaking of tone: the tone of the parent comment is inappropriately hostile to me, especially in light of my other comment in which I addressed you in a distinctly non-hostile tone. You said you were curious about what caused me to update -- this suggested you were interested in a good-faith intellectual discussion about discourse norms in general, such as would have been an appropriate reply to my comment. Instead, it seems, you were simply preparing an ambush, ready to attack me for (I assume) showing too much sympathy for the enemy, with whatever "ammunition" my comment gave you.

I don't wish to continue this argument, both because I have other priorities, and also because I don't wish to be perceived as allying myself in a commenting-faction with the anonymous troublemaker. This is by no means a hill that I am interested in dying on.

However, there is one further remark I must make:

Your comment makes you come across as someone who has led a very sheltered upper-class existence

You are incredibly wrong here, and frankly you ought to know better. (You have data to the contrary.)

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2017-05-30T04:27:21.528Z · LW(p) · GW(p)

Well, you've left me pretty confused about the level of importance you place on good-faith discussion norms :P

Replies from: komponisto
comment by komponisto · 2017-05-30T07:14:30.266Z · LW(p) · GW(p)

Positive reinforcement for noticing your confusion. It does indeed seem that we are working from different models -- perhaps even different ontologies -- of the situation, informed by different sets of experiences and preoccupations.

comment by entirelyuseless · 2017-05-29T17:17:08.623Z · LW(p) · GW(p)

All of these are reasonable points, given the fixed goal of obtaining and sharing as much truth as possible.

But people don't choose goals. They only choose various means to bring about the goals that they already have. This applies both to individuals and to communities. And since they do not choose goals at all, they cannot choose goals by the particular method of saying, "from now on our goal is going to be X," regardless of what X is, unless it is already their goal. Thus a community that says, "our goal is truth," does not automatically have the goal of truth, unless it is already their goal.

Most people certainly care much more about not being attacked physically than discovering truth. And most people also care more about not being rudely insulted than about discovering truth. That applies to people who identify as rationalists nearly as much as to anyone else. So you cannot take at face value the claim that LW is "an internet forum concerned with truth-seeking," nor is it helpful to talk about what LW is "supposed to be optimizing for." It is doing what it is actually doing, not necessarily what people say it is doing.

The norm that people should be sensitive about tone exists in service of goals like not being rudely insulted, not in service of truth. And even John Maxwell's argument that "truthseeking tends to arise in violence-free environments" is motivated reasoning; what matters for most people is the absence of violence (including violent words), and the benefits to truth, if there are any, are secondary.

Replies from: komponisto
comment by komponisto · 2017-05-29T22:02:31.554Z · LW(p) · GW(p)

All of these are reasonable points, given the fixed goal of obtaining and sharing as much truth as possible.

Is the implication that they're not reasonable under the assumption that truth, too, trades off against other values?

What the points I presented (perhaps along with other things) convinced me of was not that truth or information takes precedence over all other values, but rather simply that it had been sacrificed too much in service of other values. The pendulum has swung too far in a certain direction.

Above, I made it sound like the overshooting of the target was severe; but I now think this was exaggerated. That quantitative aspect of my comment should probably be regarded as heated rhetoric in service of my point. It's fairly true in my own case, however, which (you'll hopefully understand) is particularly salient to me. Speaking up about my preoccupations is (I've concluded) something I haven't done nearly enough of. Hence this very discussion.

But people don't choose goals.

This is obviously false, as a general statement. People choose goals all the time. They don't, perhaps, choose their ultimate goals, but I'm not saying that truth-seeking is necessarily anybody's ultimate goal. It's just a value that has been underserved by a social context that was ostensibly designed specifically to serve it.

Most people certainly care much more about not being attacked physically than discovering truth.

But not infinitely much. That's why communicational norms differ among contexts; not all contexts are as tightly regulated as politics, diplomacy, and law. What I'm suggesting is that Less Wrong, an internet forum for discovering truth, can afford to occupy a place toward the looser end of the spectrum of communicational norms.

This, indeed, is possible because a lot of other optimization power has already gone into the prevention of violence; the background society does a lot of this work, and the fact that people are confronting each other remotely over the internet does a fair portion of the rest. And contrary to Maxwell's implication, nobody is talking about removing any Chesterton Fences. Obviously, for example, actual threats of violence are intolerable. (That did not occur here -- though again, I'm much less interested in defending the specific comment originally at issue than in discussing the general principles which, to my mind, this conversation implicates.)

The thing is: not all norms are Chesterton Fences! Most norms are flexible, with fuzzy boundaries that can be shifted in one direction or the other. This includes norms whose purpose is to prevent violence. (Not all norms of diplomacy are entirely unambiguous, let alone ordinary rules of "civil discourse".) The characteristic of fences is that they're bright lines, clear demarcations, without any ambiguity as to which side you're on. And just as surely as they should only be removed with great caution, so too should careful consideration guide their erection in the first place. When possible, the work of norms should be done by ordinary norms, which allow themselves to be adjusted in service of goals.

There are other points to consider, as well, that I haven't even gotten into. For example, it looks conceivable that, in the future, technology, and the way it interacts with society, will make privacy and secrecy less possible; and that social norms predicated upon their possibility will become less effective at their purposes (which may include everything up to the prevention of outright violence). In such a world, it may be important to develop the ability to build trust by disclosing more information, rather than less.

Replies from: entirelyuseless
comment by entirelyuseless · 2017-05-29T23:02:53.077Z · LW(p) · GW(p)

I agree with all of this. (Except "this is obviously false," but this is not a real disagreement with what you are saying. When I said people do not choose goals, that was in fact about ultimate goals.)

comment by FeepingCreature · 2017-05-31T11:41:37.155Z · LW(p) · GW(p)

Five years ago, or even two, my opinion would have been quite different. By this point, however, I have undergone a fairly massive update in the direction of thinking people are far, far too sensitive about matters of "tone" and the like.

Yeah, but exposure therapy doesn't work like that. If people are too sensitive, you can't just rub their faces in the thing they're sensitive about and expect them to change. In fact, what you'd want in order to desensitize people is the exact opposite: really tight conversation norms that still let people push slightly outside their comfort zone.

comment by Maxlove · 2017-06-14T07:23:24.421Z · LW(p) · GW(p)

There are already a ton of case studies on the internet in how fragile good conversational norms are.

I need access to these studies!

comment by cousin_it · 2017-05-31T11:36:48.685Z · LW(p) · GW(p)

Out of curiosity, why do you prefer having downvotes disabled? (Here's a comment explaining why I want them back.)

comment by Elo · 2017-05-27T21:50:36.240Z · LW(p) · GW(p)

But unless and until I see evidence otherwise, I assume 18239018038528017428's intentions are not truth-seeking.

Evidence: time and energy put into the comment. Evidence: not staying silent when they could have.

I am not saying the offending comments are valid; instead, I am curious as to why you discounted what I identify as evidence.

Replies from: Valentine
comment by Valentine · 2017-05-27T22:07:04.779Z · LW(p) · GW(p)

Ah, I was using a more colloquial definition of evidence, not a technical one. I misspoke.

What goes through my mind here is, "Trolls spend a lot of time and energy making comments like this one too, and don't stay silent when they could, so I'm not at all convinced that those points are more consistent with a world where they're truth-seeking than they are with a world in which they're just trolling."

I still think that's basically true. So to me those points seem irrelevant.

I think what I mean is something more like, "Unless and until I see enough evidence to convince me otherwise…." I'll go back and edit for that correction.

comment by komponisto · 2017-05-28T07:36:41.019Z · LW(p) · GW(p)

norms of good discourse are more important than the content of arguments

In what represents a considerable change of belief on my part, this now strikes me as very probably false.

Replies from: Valentine, Elo
comment by Valentine · 2017-05-28T19:37:12.857Z · LW(p) · GW(p)

I'm open. Clarify?

Replies from: komponisto, Elo
comment by komponisto · 2017-05-29T04:10:27.731Z · LW(p) · GW(p)

See this comment; most particularly, the final bullet point.

Replies from: Valentine
comment by Elo · 2017-05-28T19:53:57.943Z · LW(p) · GW(p)

I offer these models insofar as they help with communicating about the puzzle:

http://bearlamp.com.au/a-model-of-arguments/

http://bearlamp.com.au/filter-on-the-way-in-filter-on-the-way-out/

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T20:52:50.911Z · LW(p) · GW(p)

Strong support for this person's willingness to contribute the opposite opinion.

Strong support for this person's willingness to take the time to write things up in detail.

Strong appreciation for the trust implicit in this being posted here (i.e. it's a compliment along the lines of "I expect not to be punished for speaking the truth as I see it.")

Some regret/sadness that they're this triggered and vitriolic, and for the tendency toward choosing the worst or straw-est interpretation at every point rather than taking the time to question their own responses and include nuance, but on the other hand, still appreciation for how this contributes to the overall health of the discussion by opening up new threads for debate and ensuring that there isn't an echo chamber (i.e. maybe it takes that level of aggression to accomplish the thing, and a gentler critique wouldn't be taken seriously enough?).

Significant disagreement with the choice to hijack the topic at hand to vent about things that are either mostly or completely unrelated, and make claims that are unsubstantiated or wildly inaccurate, and engage in some specious logic toward the end (e.g. ad hominem fallacy).

Hope to have some time later today to respond to the better points this raises.

Thanks for your contribution.

Replies from: None
comment by [deleted] · 2017-05-26T21:02:47.561Z · LW(p) · GW(p)

The fact that you think it's "ad hominem" itself betrays your inexperience and lack of perception. It's perhaps one of the most relevant and least fallacious arguments to make: your fiction is a direct expression of your aesthetics, and the inference I draw from your fiction is that you do not have good aesthetics, and therefore should not be trying, or even pretending, to do something that by nature requires very good aesthetic sense.

It also indicates a tremendous amount of immaturity and childishness. I could have written something better in high school. That's not a good sign. Your ability to write characters and dialogue is directly tied to your ability to model the world accurately and understand the nuances of human behavior. Ergo, clichéd and trite writing is very damning.

Replies from: Elo, Decius
comment by Elo · 2017-05-26T23:43:29.811Z · LW(p) · GW(p)

Many words. Probably took a while to write. Some unnecessary things, like telling the writer to kill themselves, and some criticism levelled at attributes of his other writing; other writing is pretty irrelevant to the qualities of this piece. You may have some points in this dung heap, but you make it hard to find them. Is it even worth engaging you in conversation?

Replies from: None
comment by [deleted] · 2017-05-27T00:48:06.113Z · LW(p) · GW(p)

Oh, I see. You're what the Eternal September phenomenon is all about. You shouldn't feel ashamed that you aren't cognitively gifted enough to quickly comprehend the salient points I made without substantial expenditure of mental effort, because you were born this way, which also accounts for your overestimation of the amount of time it took for me to write my comments. But please don't pollute the comment space under my comments with your puerile excretions.

Replies from: cata, math_viking
comment by cata · 2017-05-28T07:09:21.648Z · LW(p) · GW(p)

Perhaps your excessive cognition is ironically blinding you to the grandiose mediocrity of your overwrought replies, such as this one here, which sounds like something I would have written in third grade if I wasn't already too smart to have written it then, which, as a truly capable mind might have already conceived, I was.

comment by math_viking · 2017-06-04T04:01:32.273Z · LW(p) · GW(p)

Your original comment, though harsh, at least contained some useful insights. Don't ruin that by posting comments that are nothing more than 6 lines of insults that no one wants to read.

comment by Decius · 2017-05-27T00:57:04.800Z · LW(p) · GW(p)

Part right.

Most of the arguments you set forth are more fallacious and less relevant than not liking all the author's fiction.

But that's because most of the arguments you set forth were of the type "Bay Area rationalists have had a lot of problems and therefore this specific plan will have similar problems."

Replies from: None
comment by [deleted] · 2017-05-27T01:14:41.616Z · LW(p) · GW(p)

Oh, I see. This is the part where you're too attached to your ingroup to realize what a total failure the Berkeley rationalist community is. I bet you also think the Sequences and HPMOR are well-written.

comment by drethelin · 2017-05-29T20:52:53.769Z · LW(p) · GW(p)

This is why we need downvotes.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-27T01:31:47.751Z · LW(p) · GW(p)

[Note: I've typed this comment without refreshing the page, and thus have not seen any of the other responses that may have cropped up in the past few hours, nor taken those responses into account in any way yet. I'm seeing only the original reply, here.]

Part 1 of ?

Repeating my thanks before heading into what will be a mix of concession and disagreement—I have qualms about the way you engaged with this post, but am grateful for the fact that you did engage, at all, rather than just staying quiet, and I want to support the core of that even as I complain about certain aspects of your chosen method.

I think your first paragraph had one clear point: "I, as a smart, perceptive person who sees things others often fail to see, found a lot of this viscerally upsetting, which is probably a sign that there are actual problems." I liked that you added this point, and I think it would've been stronger if you hadn't been so deliberately assholish with the rest of it. I'm going to take the core point seriously as I read further, and see if I can get a clear sense of what it is you see that I don't.

The comment about Ender's Game (paragraph 2) is a misunderstanding on your part, either deliberate or easy to clear up—there's no wargaming in the plan, there's no battle room, there are no other groups of people playacting as other armies. The aesthetic of Dragon Army was, in short: everyone is expected to keep their eyes open and act independently to do what seems right and sane in the moment. Groups should practice coordinating together to build trust and be capable of action-requiring-more-than-one-individual, but the assumption is that an army run by forty minds will trump an army run by one.

In paragraph 3, you make a valid point about the efficacy and usefulness of CFAR, which is indeed worth questioning, and the side you're holding down is not obviously wrong. It's a bit overwrought, given that the phrase "insistence on the validity of his experience as a CFAR instructor" is a clear strawman; I was almost as emphatic about the fact that I've written nerdy fanfic, so I think you were just looking for an opportunity to climb up on a soapbox? That being said, your point about interpersonal romance being a relevant and important factor matches my own intuition, and I wish you had appreciated the fact that I wanted to continue thinking carefully about correct solutions rather than just spam the first ideas that popped into my head.

In paragraph four, you make an entirely unfounded leap that is beneath the quality of what's expected from a poster on this forum. All of your "this suggests" are false handwaving, and I find the rest of your assertions generally laughable, given that there's only one person in this thread so far who's demonstrated deep antisocial behavior, and that you're hurling these insults from a position of anonymity. However, I'm going to continue to take things one paragraph at a time rather than assuming that I've seen your entire position as soon as I've got a mockable straw model, so we'll start fresh with your next point.

Hmmm. In the first sentence of paragraph 5, you and I seem to converge somewhat—we both agree that the Bay Area rationalist community is not living up to its promise, and has too few people doing good and impactful work. I'm glad to share this bit of world-model with you. I note that my idea for what to do about it—try a different sort of house/community—is just one possible strategy among many, and I'm curious if you have other concrete suggestions that you'd be willing to offer. I'm especially curious what you're actually doing, as you seem to have a sort of ... scathing dismissal? ... of everyone else, and I'd expect from your tone that you must be engaged in at least one concretely high-promise project (else it all smacks of rank hypocrisy). Would you be willing to detail a) what you're up to, or b) a few concrete proposals that you suspect are higher promise? At this point, it'd be hard to simply abandon the Dragon Army idea, but if a good enough alternative came along, I would take it. The point is not to be seen to be right, it's to actually make an impact.

I notice that the rest of that paragraph is basically off-topic. Without contributing to the off-topicness, I want to say that I do, indeed, find at least a couple of worthwhile points of agreement within it, but I think most of it is wrong, in addition to being somewhat morally reprehensible re: vicious attacks, and that you're overconfident in your assertions. If you'd like to shoot me a private message, I'd be happy to say where I agree and where I disagree.

Oh, interesting—paragraph six also begins with a claim I have a lot of sympathy for/agreement with. I don't hold it as strongly as you do, but I do think there's a lot of clear dysfunction and self-deception in the community, and I'd like to take steps to correct it. I don't know how to evaluate your claim that the best people are on the periphery (as I'm a weird mix of professionally central and socially somewhat distant), but again—if you'd like to make concrete recommendations about who I should talk to, or direct some of the people you hold in high esteem to comment on this thread, I suspect you're right about there being a lot of untapped value. I do note that Dragon Army is not actually pulling from the central or highest status people, but thus far looks to be made up of a lot of solid, normal, representative rationalists, so I think your claim about trying to delude people is straightforwardly false, as is your assumption that I don't see or don't want to see any warts and flaws. (I believe there are lots of people who will back me up on this, including some who will claim that I've been too hostile or critical. That's partially why I sympathize with the strength of your negativity.)

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-27T01:32:11.525Z · LW(p) · GW(p)

Part 2 of 2

Ah, paragraph seven contains the unword "cult," which I think you're using to say something, but I'd rather you just actually said the thing, instead of applying the empty, stretched, multi-interpretation label. Like, I think if you laid out specific, concrete objections, I and others could benefit from them, but just saying cult is lazy name-calling.

I do somewhat agree with your objections to the list of specific skills attained after a year. I had hoped that the large word DRAFT at the top, plus the repeated statements that the whole plan was to iterate, and that I didn't expect to be able to figure out the right stuff on the first try, would've clued you in to the fact that I, too, am aware that the list is inadequate. Do you have specific suggestions for replacements? Keep in mind, the hard problem is to balance things-that-will-be-generally-useful-for-a-medium-sized-group-of-people against the fact that everyone involved has their own specific career and expertise already. Part of the impetus here is social, part of it is becoming well-rounded, part of it is practicing the skill of gaining/improving skills, and all of that is trying to avoid skating into trivial irrelevancy. Got any ideas?

As a meta note, I think that people who cower behind anonymity don't deserve to make concrete claims about their skill sets without backing them up, so until further notice and on a policy level, I'm treating your claim that you meet 11 out of 14 criteria as a flat-out lie (despite its plausibility overall). You're currently nothing and nobody and have no skills; that will change as soon as you a) reveal yourself or b) demonstrate credibility under this pseudonym.

Your next attempt to strawman things takes a sub-point out of context and deliberately ignores the actual requirement being made, which was that people hold their beliefs and models with skepticism/realize that their internal experience does not represent absolute truth, and that they treat one another with a behaviorist's lens, using revealed preferences and past behavior as predictors, rather than relying on mental summations that may be false or straw. I'm curious whether, setting aside your mockery of a subpoint, you agree with that point.

Interestingly enough, I have reasonable credence in your two inferences. In my experience, members of this community do attempt to install norms to compensate for social failings (and do have a somewhat higher-than-average level of social ineptitude). And also, I think many people in this community are low-empathy and embody the bad side of individualism. However, unlike you, I see that a lot of people are trying damn hard to correct this, and I'm curious whether you think they should be written off for not being good enough already, or whether you have specific suggestions that differ from the ones already being tried. I note that a big part of what Dragon Army intends to do is just try a whole bunch of stuff (including stuff already known to work; there's no premium on novelty), and that I think data will be better than armchair ranting.

I suspect you haven't done much in the way of looking in the mirror when you type the words "repressed irritation, interpersonal drama, and general unpleasantness." Certainly you don't meet any of my standards for "how a decent person behaves." I'm going to try to avoid the fundamental attribution error here, though, and assume that we've hit some combination of a) a bad day, b) the problems of online communication, and c) you being unusually triggered or having run out of some important resources.

I'm not going to engage with the ad hominem attack at the end, which, in addition to being wrong as a tactic, also fails in the specifics. I think that if you compare yourself, who is suggesting suicide as a solution, with OSC, who is definitely wrong about a lot of things but has never gone so far as to claim a fellow human would be better off killing themselves, you'll note that you might be on the wrong side. I'd check my cap for a skull, at least in the context of today's mood.

For anyone else—I welcome calm, reasoned elaboration on any of the on-topic points this person made. When I went through blow-by-blow, there were fewer than I'd hoped, but there are true and valuable and important criticisms here, and I'm glad they've been added to the mix, and I wouldn't mind further discussion of them.

Replies from: None, math_viking, ChristianKl
comment by [deleted] · 2017-05-27T02:03:00.648Z · LW(p) · GW(p)

I liked that you added this point, and I think it would've been stronger if you hadn't been so deliberately assholish with the rest of it.

Sure, but it's fun to be an asshole. I love knocking people down a peg. Especially in public.

The comment about Ender's Game (paragraph 2) is a misunderstanding on your part, either deliberate or easy to clear up

Asserting that this isn't elaborate playacting is not very convincing in light of the fact that your first two proposed group norms are (1) a greeting salute and (2) a call-and-response mechanism. I played the beginning of Final Fantasy XIII two nights ago and thought that was the most cringeworthy stuff I've seen in months, but you managed to top even that.

I wish you had appreciated the fact that I wanted to continue thinking carefully about correct solutions rather than just spam the first ideas that popped into my head.

The more important thing here is that you imagine this as a problem that can be solved, when in fact, if the problem did arise, that would itself preclude it from being easily solved. The "solution" is to not select immature people who you can reasonably expect to get into interpersonal drama, which excludes the vast majority of the rationalist community, which is part of the point of my comment.

if you'd like to make concrete recommendations about who I should talk to

I can suggest that you talk to Satvik Beri, and maybe direct him to my comment as well, although I feel slightly bad for potentially causing him to spend time on this.

Ah, paragraph seven contains the unword "cult," which I think you're using to say something, but I'd rather you just actually said the thing, instead of applying the empty, stretched, multi-interpretation label.

I mean that the Berkeley rationalist community is a cult in the full and unqualified sense of the word "cult". You, as a high priest, naturally disagree.

Your next attempt to strawman things takes a sub-point out of context and deliberately ignores the actual requirement being made, which was that people hold their beliefs and models with skepticism/realize that their internal experience does not represent absolute truth, and that they treat one another with a behaviorist's lens, using revealed preferences and past behavior as predictors, rather than relying on mental summations that may be false or straw.

This is a good thing practically by construction.

My point is that this is almost completely unnecessary in a world where people begin by defaulting to behavior that is very unlikely to bother others. I am also gesturing at the following:

  1. The rationalist community does not default to such behavior, which is an indication of the conjunction of near-autistic social skills and remarkably low empathy, and
  2. The rationalist community does not default to such behavior, but instead of anyone pointing out that this is a reasonable thing to default to (cf. Japanese society), people try to patch it up with legalism, bureaucracy, and a laundry list of rules, which in my experience makes it feel like I'm talking to the low-IQ HR department of a large multinational conglomerate.

The fact that the Berkeley rationalist community seems particularly bad at this is a major red flag in almost every conceivable fashion.

However, unlike you, I see that a lot of people are trying damn hard to correct this, and I'm curious whether you think they should be written off for not being good enough already

I think they should be thrown off a bridge, either metaphorically or literally. I find it detestable to have them near me at all.

I suspect you haven't done much in the way of looking in the mirror when you type the words "repressed irritation, interpersonal drama, and general unpleasantness." Certainly you don't meet any of my standards for "how a decent person behaves." I'm going to try to avoid the fundamental attribution error here, though, and assume that we've hit some combination of a) a bad day, b) the problems of online communication, and c) you being unusually triggered or having run out of some important resources.

Two questions:

  1. Does it look to you like my irritation is "repressed"?
  2. I'm completely anonymous. Exactly what interpersonal drama am I causing here?

I agree that I can be, when I want to be, a very unpleasant person.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-27T02:23:25.180Z · LW(p) · GW(p)

I don't think you actually succeeded in knocking anyone down a peg, though. I'd bet ~$50 that a neutral, outside observer (say, from a different English-speaking country) would say that a) you come off far worse than anyone else in the thread and b) they didn't find your post convincing.

I think our disagreement over the distinction between playacting and not boils down to something like, I believe that the very small nuts-and-bolts of social interaction (jargon, in-jokes, simple trigger-action responses like answering a sneeze with "bless you") are more important than most people give them credit for. In other words, I think the silly theater ends up actually mattering? Or, to be more specific—I think most of it doesn't matter, but some small bits of it end up being really important, and so it's an arena I want to do explicit experimentation with. I want to see whether the small salute actually ends up being relevant to bonding and sense-of-purpose, and no, I don't have a double-blind or anything like that, but I will be asking a bunch of fairly introspective people for their thoughts afterward.

I suspect, from your reaction, that you'd basically assert that this premise is false, and that the ... skin? ... of social interaction is meaningless, at least compared to the actual connections and information conveyed. This seems like a sensible, plausible position to take, but I think your mockery of the alternative hypothesis is unfounded.

I agree that if romance/sex/etc pop up, that would preclude the problem from being easily solved, but where did you get the impression that I was afraid of attempting to solve hard problems? There's definitely a filter to screen out immature or uncontrolled people; while you yourself might make it through, the persona you're currently expressing would've been rejected by the second paragraph of your original response. We've already turned away people for a variety of reasons, and at least one because of exactly this axis.

I appreciate the recommendation that I run things by Satvik. He's a perceptive thinker and I haven't run this by him yet. I wish that you'd responded in specific to more of my requests to draw out your suggestions—you're continuing to clarify your models of the problems, but not offering much in the way of replacements for the things I'm planning to try.

You're still not saying what you actually mean by the word "cult." There's a decent chance I'd agree with you—I've described the Bay Area rationalist community as a cult myself, even recently, when talking to friends and family members. But I was careful to disambiguate exactly what I meant by that, and I can't help but note that your continued refusal to spell it out makes me suspect that you don't actually have a coherent thing to say, and are just trying to score easy points.

I agree again with 1 (low empathy, etc.), though I think the strength of the effect is smaller than you seem to think it is. I think that you're still not believing me when I say I agree with 2? Note that I'm calling you out for unacceptable rudeness in this thread, for instance. I also suspect you have a huge typical mind thing going on, and vastly underestimate how easy it is for people to rub each other wrong while acting in complete good faith in a normal society—the bed example was maybe poorly chosen, but I disagree with you that it's easy to "default to behavior that is very unlikely to bother others." I've been in a wide range of social milieus, and it's much less about the actual behavior and much more about people's cough willingness to pick nits and start fights.

I think that you've lost all moral authority by doubling down on your "people should die for this" claim, and because of that, I think this'll be my last attempt to engage with you as an equal (you're not my equal; at least this facet of your personality is my clear inferior). I will, however, continue to read if you make those concrete suggestions I'm hoping you have somewhere.

In answer to your last two questions: yes, it looks like your irritation is repressed. Not here, because my main hypothesis is that here is where you finally felt safe to vent a ton of irritation that you've been repressing in other arenas, for long amounts of time. Just look back at your first post—maybe a quarter of it was in response to me, and the rest is long-simmering, long-festering frustration about a bunch of other things (some of them valid and some of them not). Textbook repress-then-explode. And 2, your claim that posting anonymously equates to not causing interpersonal drama is again so laughable that unless it's a deliberate joke, you're revealing this persona to be less socially aware than literally the most awkward and inept rationalist I've ever met.

You're not unpleasant so much as just ... not showing yourself to be worth the time. I really hoped I could get more out of you, because I actually know, on a deep level, that I don't have all the answers and the opposition is the first best place to look. But in terms of useful-criticism-per-word, you've been outdone by every other person who's registered reservation or disagreement here.

Replies from: Pimgd
comment by Pimgd · 2017-05-29T10:45:23.273Z · LW(p) · GW(p)

I don't know if I'm neutral (no, because I've had an account here for a while now), but I wouldn't have the same confidence to throw that bet out there like you do. The post in and of itself is not convincing enough for me to say that your idea won't work, but it certainly makes me go "hmm, well, he might have a point there".

Specifically:

  • "Normal" people don't need to explicitly write out all the rules for their housing with regards to social rules.
  • But here there's a large list of rules and activitities and all that with the goal of getting group housing to work properly.
  • Also, here's some examples of the group of people that you want to source your participants from having low social skills.
  • By the way, if you set up a ton of rules then it usually won't work.
  • Thus, there's a pretty big chance that the rules will not work out and that the social skills of the participants will be too low to have the group housing work.

I am not convinced that this is the truth.

However, if I read in a year from now that this is what happened, I would not be surprised.

Basically what I'm saying is I can see 1 or 2 people leaving due to drama despite the rules if you try this, with a chance greater than, I dunno, 10%?

Replies from: JacekLach, Viliam
comment by JacekLach · 2017-05-30T18:20:50.091Z · LW(p) · GW(p)

You're looking at content, not status (as implied by 'knocking someone down a peg'). My immediate reaction to the top-level comment was: "well, they have some good points, but damn are they embarrassing themselves with this language". Possibly shaped by me being generally sceptical about the ideas in the OP.

Insofar as the bet is about the form of the post rather than the content, I think Duncan's pretty safe.

comment by Viliam · 2017-06-01T13:31:35.468Z · LW(p) · GW(p)

"Normal" people don't need to explicitly write out all the rules for their housing with regards to social rules.

I have seen normies having endless fights about trivial things, such as "who should buy toilet paper", that a simple explicit norm could solve. (For example "people keep buying the paper in turns, when you buy one check this box to keep everyone informed" or "Joe buys the paper, everyone else gives Joe $2 each month" or whatever.)

The best case, of course, would be trying to be nice by default, and explicitly solving the situations where the default behavior fails. But that seems like what would quite likely happen in the Dragon Army anyway... or maybe I am just applying the typical mind fallacy here.

Replies from: Lumifer
comment by Lumifer · 2017-06-01T15:26:26.586Z · LW(p) · GW(p)

I have seen normies having endless fights about trivial things

You should take the Hansonian approach. Fights over toilet paper are not about toilet paper.

comment by math_viking · 2017-06-04T03:48:22.965Z · LW(p) · GW(p)

I do somewhat agree with your objections to the list of specific skills attained after a year. I had hoped that the large word DRAFT at the top, plus the repeated statements that the whole plan was to iterate, and that I didn't expect to be able to figure out the right stuff on the first try, would've clued you in to the fact that I, too, am aware that the list is inadequate. Do you have specific suggestions for replacements? Keep in mind, the hard problem is to balance things-that-will-be-generally-useful-for-a-medium-sized-group-of-people against the fact that everyone involved has their own specific career and expertise already. Part of the impetus here is social, part of it is becoming well-rounded, part of it is practicing the skill of gaining/improving skills, and all of that is trying to avoid skating into trivial irrelevancy. Got any ideas?

I'm not the originator of this thread, but that part did resonate with me. I don't think there's anything wrong with those skills, but the combination of choice of skills and the desired level of competency does seem to be decidedly mediocre given the effort and people involved.

1) Above-average physical capacity

What is average? In the US, you could probably be somewhat overweight with no strength, speed, endurance, or agility to speak of and still be "above average."

(2) Above-average introspection

I would expect almost all of the people who volunteer to be part of a rationalist group house to be there or pretty close to there already.

(3) Above-average planning & execution skill (4) Above-average communication/facilitation skill (5) Above-average calibration/debiasing/rationality knowledge

I think my previous comment applies here as well. Perhaps you have a different conception of "average" than I do, but I think if you're going to establish a long-term mini-dictatorship of a group house, you should be aiming for quite a bit higher than "above average."

(6) Above-average scientific lab skill/ability to theorize and rigorously investigate claims

I don't really understand this one. Is your group house actually going to have the ability to practice conducting laboratory experiments? That's a very high-overhead endeavor.

(7) Average problem-solving/debugging skill (8) Average public speaking skill (9) Average leadership/coordination skill (10) Average teaching and tutoring skill

Average? Your goals are to reach average, after a year of dedicated effort? Getting into the 80th percentile of anything numbered 1-10 on this list should require a minimum of effort on the part of dedicated individuals following strict rules, unless you have some specific medical condition interfering.

(11) Fundamentals of first aid & survival

How fundamental is fundamental? This also shouldn't take very long if you are willing to put in the effort and practice a bit (2 weeks, at the outside, though you could learn the true basics in a long weekend). I don't know how it's related to the rest of the goals, though, or why it's important enough to be on the list. Also, you should practice many of these skills in the actual wilderness, which means time away from everything else.

(12) Fundamentals of financial management

Again, I'm not sure what's "fundamental." You could spend 2 days on this, or the entire year.

(13) At least one of: fundamentals of programming, graphic design, writing, A/V/animation, or similar (employable mental skill) (14) At least one of: fundamentals of woodworking, electrical engineering, welding, plumbing, or similar (employable trade skill)

Do you have the ability to teach/practice trade skills at the house? I would expect that learning any of these things, to an employable level, within a year, would require spending time similar to a full-time job somewhere that has infrastructure, in addition to a significant investment of money (at least a few thousand dollars). (I checked some local welding and plumbing classes at community colleges, which is where I'm getting those numbers.)

Someone who already has one of these skills (I'm guessing you'll have a few coders at least) is going to be at a tremendous advantage in terms of time and possibly money compared to someone who is not. 13 and 14 are each going to represent a greater time investment than the others combined, unless you already have them.

As a meta note, I think that people who cower behind anonymity don't deserve to make concrete claims about their skill sets without backing them up, so until further notice and on a policy level, I'm treating your claim that you meet 11 out of 14 criteria as a flat-out lie (despite its plausibility overall). You're currently nothing and nobody and have no skills; that will change as soon as you a) reveal yourself or b) demonstrate credibility under this pseudonym.

I don't know if you care, but I would say I already meet a similar number of these criteria. The only one I definitely don't meet is 14. I'm willing to tie this account to my real name and explain/prove why I meet them (though some of them would be quite difficult to really prove, I could only argue).

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-04T13:22:56.949Z · LW(p) · GW(p)

The problem seems to me to be the tradeoff between going deep and going wide, with the added complexity that going deep on the wrong thing seems strictly worse than going wide, and so we're defaulting to going wide where there's uncertainty.

Put another way, it's unlikely that any of those specific skills are going to be particularly important to any of our longest-term goals, but it also seems counterproductive to just sit there thinking about which direction to go in. I'm usually not the biggest expert in the room, but I usually am the most generally competent in terms of being able to fill holes or solve whatever problem crops up, and it's because I have a habit of just constantly churning and picking up new skills and methods and heuristics wherever I go. I suspect that others would benefit from a similar habit, in particular because once "the right skill" does come along, you have both the affordance to start learning it and a variety of experiences allowing you to learn quickly and efficiently.

That's a claim. Not necessarily supported, but reasonable, I think, and worth trying out.

I note that I disagree that it's easy to break averages in all of these things at once. People who don't actually check their abilities against a standard tend to be wildly overconfident, and people tend to underestimate how long it will take them to learn X or accomplish Y; these things are solidly documented. And while competence does tend to cluster (e.g. "G"), so the picture's not quite as bleak as the second half of this sentence, once you've got a dozen different domains and are shooting to be above the 50% mark in all of them, you're looking at a person who's approximating one in four thousand, and when you try to get a whole group to hit that mark, the challenge is pretty real. I wouldn't be surprised if most people have most of this easy, but I think you're not fully grokking the difficulty of making everybody baseline competent in all of these domains. For instance, you note that many of these skills require only a few weeks, but I don't know if you added up all of those weeks, compared them to the time commitment, and noted that they're all being practiced off-hours and people have their own jobs and lives as well.
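
(To spell out the arithmetic behind that figure: it's just the naive independence assumption, a back-of-the-envelope sketch rather than a claim about how skills actually co-occur.)

```latex
% Chance of clearing the median in all 12 domains at once,
% if each domain were an independent coin flip:
\left(\tfrac{1}{2}\right)^{12} = \frac{1}{4096} \approx \text{1 in 4000}
```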

It's a floor, though, not a ceiling—we're aiming at "world class skill," we're just not naively expecting that getting there is going to be easy, and initial expectations are meant to be exceeded.

Various additional points ...

  • The trade skill goal got scaled back in response to another comment; it was the hardest/sketchiest one to begin with.
  • We will have some ability to practice trade skills at the house, and are adopting a norm of going and seeking professional instruction outside from time to time.
  • I buy that you meet a large number of these criteria; I meet most of them myself. But the ones I don't have are sticky/tricky.
Replies from: math_viking
comment by math_viking · 2017-06-04T20:48:04.511Z · LW(p) · GW(p)

And while competence does tend to cluster (e.g. "G"), so the picture's not quite as bleak as the second half of this sentence, once you've got a dozen different domains and are shooting to be above the 50% mark in all of them, you're looking at a person who's approximating one in four thousand,

I don't think these skills are anywhere near independent. It's also not obvious that they're normally distributed. And the fact that being above the 50% mark in a dozen skills by coincidence is unlikely does not at all tell you how hard it is to gain those skills if you put in some deliberate work.
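
To illustrate the first point (a minimal sketch; the equicorrelated Gaussian model and the 0.5 correlation are assumptions I'm inventing for illustration, not measured facts about these skills):

```python
# Monte Carlo sketch: chance of being above the median in all 12 domains,
# under (a) independence vs (b) a shared "G"-like common factor.
import numpy as np

rng = np.random.default_rng(0)
n_domains, n_people, rho = 12, 200_000, 0.5

# Equicorrelated standard normals via a one-factor model:
# skill = sqrt(rho) * common_factor + sqrt(1 - rho) * idiosyncratic_noise
common = rng.standard_normal((n_people, 1))
noise = rng.standard_normal((n_people, n_domains))
skills = np.sqrt(rho) * common + np.sqrt(1 - rho) * noise

p_corr = (skills > 0).all(axis=1).mean()
p_indep = 0.5 ** n_domains

print(f"independent:        {p_indep:.5f} (~1 in {1 / p_indep:.0f})")
print(f"correlated, rho=.5: {p_corr:.5f} (~1 in {1 / p_corr:.0f})")
```

Under independence you get the 1-in-4096 figure; with even moderate correlation, "above the median in everything" comes out orders of magnitude more common.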

I generally am sympathetic to the argument that stuff can be harder than one assumes, but I am also generally cynical about the "average" level of most of these skills. Most people probably don't even know what "calibration" means precisely enough to test their own level of calibration. I'm not trying to be arrogant here; I pretty much have only heard about the idea of writing down your confidence level for a bunch of predictions and seeing what comes true from the rationalist community and rationalist-adjacent ones.
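
For anyone who hasn't seen the exercise, it's roughly the following (a minimal sketch; the predictions and numbers are made up):

```python
# Calibration check: record (stated probability, outcome) pairs,
# bucket them by stated probability, and compare stated vs. observed frequency.
from collections import defaultdict

predictions = [  # hypothetical data
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),
    (0.7, True), (0.7, False), (0.7, True),
    (0.6, False), (0.6, True), (0.6, False),
]

buckets = defaultdict(list)
for stated_p, came_true in predictions:
    buckets[stated_p].append(came_true)

for stated_p in sorted(buckets):
    outcomes = buckets[stated_p]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated_p:.0%} -> observed {observed:.0%} "
          f"over {len(outcomes)} predictions")
```

A well-calibrated predictor's observed frequencies track the stated ones; most people have no idea whether theirs do until they actually run the exercise.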

To avoid this issue, rather than using terms like "above-average," I would attempt to pin down ahead of time requirements that are as specific as possible for measuring progress in each of the areas you care about.

For instance, you note that many of these skills require only a few weeks, but I don't know if you added up all of those weeks, compared them to the time commitment, and noted that they're all being practiced off-hours and people have their own jobs and lives as well.

I don't think it should take a few weeks each to exceed average in most of these skills. I expect it to take a few weeks total (or 1 day a week for a few months).

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-04T23:39:34.993Z · LW(p) · GW(p)

I'm plausibly interested in betting a few hundred dollars against you, especially if (as seems likely, given your confidence) you were to bet $1000 against my $250 or something like that. If I imagine the hundred closest people I know uttering the above, I think all but one or two of them are wrong/overconfident.

Replies from: math_viking
comment by math_viking · 2017-06-05T05:10:20.935Z · LW(p) · GW(p)

What statement, specifically, would we be betting on? It's certainly plausible that I'm underestimating the difficulty in getting an entire group to above these standards in comparison to getting one person. Though, I think the main issue may be a difference in what we perceive as average, rather than a model of how hard learning these skills is.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-05T13:25:13.832Z · LW(p) · GW(p)

I spent five minutes trying to operationalize, but I couldn't come up with anything that seemed workable. For now, we'll just proceed knowing that at least one of us is wrong. =)

Replies from: math_viking
comment by math_viking · 2017-06-05T15:48:52.650Z · LW(p) · GW(p)

Either way is fine with me, but if you can express in any way what you think "average" is for some of these skills, I would like to know because now I'm really curious.

Thanks for taking so much time to keep responding to a fairly random commenter!

comment by ChristianKl · 2017-05-27T11:52:21.952Z · LW(p) · GW(p)

As a meta note, I think that people who cower behind anonymity don't deserve to make concrete claims about their skill sets without backing them up, so until further notice and on a policy level, I'm treating your claim that you meet 11 out of 14 criteria as a flat-out lie (despite its plausibility overall).

The number of criteria he hits likely depends on the definition of average. The reference class matters a great deal.

comment by [deleted] · 2017-05-26T20:57:39.170Z · LW(p) · GW(p)

(Comment too long to add more directly.)

Somewhere else in the comments, Qiaochu says:

I am super, super in favor of this experiment, and would have enthusiastically participated fully in it something like 2 years ago, before moving to Terabithia. I think it's tackling the biggest things missing from the community and am very excited to see what happens.

Well, given the trajectory of your own life, Qiaochu, I think that actually counts as an argument against "Dragon Army", and really the rationalist community as a whole, being good for the participants. I notice that you've shifted from posting insightful, detailed blog posts to impersonally spamming links to rationalist ingroup bullshit on Facebook all the time -- in some sense it's like you've been trending in the direction of being less and less of a real person as time goes on. (Which, as a friend of mine pointed out, is actually generically very common, like how a smart and quirky high school student goes to Harvard, starts adopting more and more of a "professional" demeanor, becomes progressively less interesting, and eventually dies a mental death far in advance of their physical expiration...)

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-27T01:46:26.074Z · LW(p) · GW(p)

Oh, dear. This is terrible, and I wish you hadn't posted it, because there's literally no value to be had in delivering this sort of message in this sort of way. Disendorse; I claim this is evidence that most of your arguments about social capability should be somewhat discounted, since they're coming from someone unskilled.

Replies from: Raemon, drethelin
comment by Raemon · 2017-05-27T03:10:04.067Z · LW(p) · GW(p)

I honestly think this person has been engaged with enough, at least until they make the kind of concrete claims you've been asking for. I think it's commendable to have responded with the good mix of "look at their plausibly good points while calling them out on their bad points", but at some point it becomes uncommendable to engage with people who are clearly not arguing in good faith.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-27T03:24:59.497Z · LW(p) · GW(p)

Yeah, I'm done replying at this point. +1 for the outside view check, though—if I weren't already done, I would've appreciated your intervention.

comment by drethelin · 2017-05-29T21:17:34.718Z · LW(p) · GW(p)

I disagree.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-29T21:19:24.614Z · LW(p) · GW(p)

Fair. Care to put forth a model? You don't have to; simply weighing in is also a contribution (just a less useful one).

Replies from: drethelin
comment by drethelin · 2017-05-29T21:34:05.480Z · LW(p) · GW(p)

Our ability to concretely describe the effects of social groups on people in general is kind of limited, but things like "person X joined social group Y and now they concretely do behavior Z" are available. If you see people join a group and then become concretely worse (in your own assessment), I think it can be valuable to refer to specifics. I think it can be important and virtuous to convey what you think is a pernicious process, and unfortunately naming someone you personally know is a very effective, if cruel, way to do it. Anecdata, and especially anecdata based on the content of someone's facebook feed, is not a great snapshot of a person at different times, but it's still a source of information.

I'm not sure what you think a better way to deliver this sort of message is, but to some extent any nicer way to do it would be less effective in conveying how bad you think the situation is.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-29T21:55:41.874Z · LW(p) · GW(p)

That seems true and correct to me. I note that my response to this specific comment was ... motivationally entangled? ... with my responses to this person's other comments, and that I was adopting a cross-comment strategy of "try to publicly defend certain norms while engaging with everything else that doesn't violate those norms."

I think it's defensible to say that, in so doing, I lost ... fine-grained resolution? ... on the specific thing being said above, and could've teased out the value that you were able to identify above separate from my defense of a) norms and b) Qiaochu.

Thanks!

comment by grendelkhan · 2017-06-09T21:57:06.188Z · LW(p) · GW(p)

I strongly support this post.

It would be much better if it were less inflammatory. The last sentence, in particular, is reprehensible. But you respond to the substance of the criticism you get, not the criticism you might want or wish to have at a later time. Otherwise you might as well be slashing your own tires. The vast majority of the discussion below is simple tone policing. Someone's telling you that your house is on fire, and you're complaining that they're shouting.

It's correct that it's incredibly troubling that the author didn't even consider romantic drama in designing his bootcamp. It's correct that these are really not impressive outcomes. They're moderately-functional outcomes. Shouldn't there be some sort of control group where people attempt a similar level of life-changing upward momentum on their own and see if it was actually effective to cede their autonomy? It is correct that trying to LARP a bizarre combination of Ender's Game and Fight Club is perhaps not a sign that this person has any idea how grown-ups work.

And most troubling of all, why weren't these issues noted by anyone who Duncan ran this idea by first? Why does it take this level of willingness to break with social norms to notice the skulls? And no, intoning "I Have Noticed The Skulls" doesn't mean you've actually addressed the problem unless you actually address it. Twelfth virtue!

In a broader sense, what the hell happened? I read the Sequences roughly when they came out, commented here occasionally, moved over to SSC and, more often, the associated subreddit. I donate effectively and regularly, I do my best to tax people's bullshit with bets, and I do feats with spaced repetition. Apparently while I was doing that and not being directly involved in the community, it turned into... this. Scott Alexander is getting published in moderately prestigious outlets. AI risk is mainstream. Effective Altruism is considerably more mainstream than it was. But the community at the center of it has, if anything, regressed, from what I've seen here.

Replies from: Lumifer
comment by Lumifer · 2017-06-09T23:00:07.836Z · LW(p) · GW(p)

is perhaps not a sign that this person has any idea how grown-ups work.

Maybe it wasn't designed for grown-ups. To quote Duncan,

I'm currently hoping to create a rationality/epistemology/worldsaving bootcamp for middle schoolers.

comment by [deleted] · 2017-05-28T05:35:03.820Z · LW(p) · GW(p)

Members of the Berkeley rationalist community are particularly low-empathy and embody the worst of individualism, such that they don't actually care whether or not what they're doing might bother others until they're told to stop.

lol

comment by The_Jaded_One · 2017-06-14T19:50:03.340Z · LW(p) · GW(p)

someone was accidentally impregnated and then decided not to abort the child, going against what had previously been agreed upon, and proceeded to shamelessly solicit donations from the rationalist community to support her child

They were just doing their part against dysgenics and should be commended.

comment by The_Jaded_One · 2017-06-14T19:47:28.955Z · LW(p) · GW(p)

word is going around that Anna Salamon and Nate Soares are engaging in bizarre conspiratorial planning around some unsubstantiated belief that the world will end in ten years

Sounds interesting, I'd like to hear more about this.

comment by a_different_face · 2017-05-27T00:47:19.477Z · LW(p) · GW(p)

despite the efforts of a very valiant man, people have still not realized that autogynephilic men with repressed femininity and a crossdressing fetish pretending to be women aren't actually women

Being only on the periphery of the community, I'm extremely curious who said valiant man is (full disclosure: this is so I can avoid them and/or assess why the community has not yet shunned them, as I would hope they'd shun you).

Replies from: None, Fluttershy
comment by [deleted] · 2017-05-27T01:21:15.399Z · LW(p) · GW(p)

Being only on the periphery of the community, I'm extremely curious why your instinctual reaction to a very politically incorrect idea is to shun the people supporting it, and why your model of the world bizarrely concludes that (1) people who live 20+ years as men and then decide, because of their autogynephilic fetish and repressed femininity, that they're better off as women and therefore are women, and (2) people who have severe mental illnesses that cause them to become suicidal upon contemplation of their own bodies are somehow Actually the Opposite Sex in some timeless, eternal manner which becomes true as soon as they realize it's true.

Being only on the periphery of the community, I'm extremely curious why you imagine people who are objectively a bunch of losers who can't seem to accomplish anything of value would be the ones shunning me rather than the other way around. If I were a member of the cultlike "community", sure, social ostracization would be possible. (Thankfully, I'm not.)

Replies from: Decius, Duncan_Sabien, a_different_face
comment by Decius · 2017-05-28T06:46:50.377Z · LW(p) · GW(p)

For someone who thinks that they are immune to being shunned, you sure do use a pseudonym.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-27T01:41:15.999Z · LW(p) · GW(p)

I've had some thoughts and feelings in this vein; skepticism of trans and so forth. I hold that skepticism with skepticism, though, and I do not reach the point of telling the several extremely smart, perceptive, capable, and empathetic trans humans I know that they're e.g. dumb or wrong or sick or confused, when I have no inside view, and I think it's somewhat abhorrent and damaging to the social fabric to start these conversations in any but the most careful and respectful way. That being said, I'd be curious to hear more of the thoughts on the other side of the zeitgeist. If you feel like naming this valiant man in private, I commit to not sharing their name any farther than they themselves say is okay.

Replies from: Zack_M_Davis, komponisto
comment by Zack_M_Davis · 2017-05-27T13:39:15.250Z · LW(p) · GW(p)

If you feel like naming this valiant man in private, I commit to

Hi! 18239018038528017428 is almost certainly referring to me! (I would have predicted that you'd already have known this from Facebook, but apparently that prediction was wrong.)

somewhat abhorrent and damaging to the social fabric to start these conversations in any but the most careful and respectful way.

I tried that first. It turns out that it doesn't work: any substantive, clearly-worded claims just get adversarially defined as insufficiently respectful. I still had something incredibly important to protect (there is a word for the beautiful feeling at the center of my life, and the word is not woman; I want the right to use my word, and I want the right to do psychology in public and get the right answer), so I started trying other things.

Replies from: tcheasdfjkl, entirelyuseless
comment by tcheasdfjkl · 2017-06-01T02:32:26.650Z · LW(p) · GW(p)

Zack, I think the problem (from my perspective) is that you tried being respectful in private, and by the time you started talking about this publicly, you were already being really harsh and difficult to talk to. I never got to interact with careful/respectful you on this topic.

(I understand this may have been emotionally necessary/unavoidable for you. But still, from my perspective there was a missing step in your escalation process. Though I should acknowledge that you spurred me to do some reading & writing I would not otherwise have done, and it's not impossible that your harshness jolted me into feeling the need to do that.)

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2017-06-01T02:49:07.424Z · LW(p) · GW(p)

Yeah, that makes sense. Sorry. Feel free to say more or PM me if you want to try to have a careful-and-respectful discussion now (if you trust me).

Replies from: tcheasdfjkl
comment by tcheasdfjkl · 2017-06-01T03:21:04.177Z · LW(p) · GW(p)

Thanks. I don't think that would be good for me, at least right now, but thanks for the offer.

My thoughts on the matter are mostly in my ITT entry on Ozy's blog and then also in the most recent thread on this topic on their blog. I guess I'd be somewhat curious about your responses to those thoughts.

comment by entirelyuseless · 2017-05-27T16:09:29.709Z · LW(p) · GW(p)

any substantive, clearly-worded claims just get adversarially defined as insufficiently respectful

I agree. E.g. Scott Alexander has said he will ban people from his blog if they do not speak as if the trans theories were true, even if they believe them to be false. But that doesn't mean it is a good option to be as rude as possible, like 18239018038528017428 above. (Obviously I am not saying that you have adopted this approach either.)

comment by komponisto · 2017-05-27T09:30:29.908Z · LW(p) · GW(p)

I do not reach the point of telling the...humans I know that they're e.g. dumb or wrong or sick or confused

If you'll allow me, I would like to raise a red-flag alert at this sentence. It seems poorly worded at best, and in worse scenarios indicative of some potentially-bad patterns of thought.

Presumably, as a member of a community of aspiring rationalists, not to mention the staff of CFAR, telling the people you know when (you think) they're wrong or confused is, or should be...your daily bread. (It goes without saying that this extends to noticing your own confusion or wrongness, and encouraging others to notice it for you when you don't; the norm, as I understand it, is a cooperative one).

Telling people when they might be sick is (if you'll forgive me) hardly something to sneeze at, either. They might want to visit a doctor. Health is, for understandable reasons, generally considered important. (This includes mental health.)

As for dumb, well, I simply doubt that comes up often enough to make the statement meaningful. Whatever may be said about the rationalist community, it does not appear to draw its membership disproportionately from those of specifically low intelligence. Your acquaintances -- whatever their other characteristics -- probably aren't "dumb", so to tell them they are would simply be to assert a falsehood.

So: may I be so bold as to suggest either a reformulation of the thought you were trying to express, or even a reconsideration of the impulse behind it, in the event that the impulse in question wasn't actually a good one?

Replies from: Duncan_Sabien, Fluttershy, ChristianKl
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-27T16:39:05.832Z · LW(p) · GW(p)

This is a fair point. I absolutely do hold as my "daily bread" letting people know when my sense is that they're wrong or confused, but it becomes trickier when you're talking about very LARGE topics that represent a large portion of someone's identity, and I proceed more carefully because of both a) politeness/kindness and b) a greater sense that the other person has probably thought things through.

I don't have the spoons to reformulate the thought right now, but I think your call-out was correct, and if you take it on yourself to moderately steelman the thing I might have been saying, that'll be closer to what I was struggling to express. The impulse behind making the statement in the first place was to try to highlight a valuable distinction between pumping against the zeitgeist/having idiosyncratic thoughts, and just being a total jerk. You can and should try to do the former, and you can and should try to avoid the latter. That was my main point.

Replies from: komponisto, Fluttershy
comment by komponisto · 2017-05-27T22:58:47.767Z · LW(p) · GW(p)

Here's what it looks like to me, after a bit of reflection: you're in a state where you think a certain proposition P has a chance of being true, which it is considered a violation of social norms to assert (a situation that comes up more often than we would like).

In this sort of situation, I don't think it's necessarily correct to go around loudly asserting, or even mentioning, P. However, I do think it's probably correct to avoid taking it upon oneself to enforce the (epistemically-deleterious) social norm upon those weird contrarians who, for whatever reason, do go around proclaiming P. At least leave that to the people who are confident that P is false. Otherwise, you are doing epistemic anti-work, by systematically un-correlating normative group beliefs from reality.

My sense was that you were sort of doing that above: you were seeking to reproach someone for being loudly contrarian in a direction that, from your perspective (according to what you say), may well be the right one. This is against your and your friends' epistemic interests.

(A friendly reminder, finally, that talk of "being a total jerk" and similar is simply talk about social norms and their enforcement.)

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-28T05:17:22.221Z · LW(p) · GW(p)

I was not aiming to do "that above." To the extent that I was/came across that way, I disendorse, and appreciate you providing me the chance to clarify. Your models here sound correct to me in general.

comment by Fluttershy · 2017-05-28T20:21:05.628Z · LW(p) · GW(p)

Your comment was perfectly fine, and you don't need to apologize; see my response to komponisto above for my reasons for saying that. Apologies on my part as there's a strong chance I'll be without internet for several days and likely won't be able to further engage with this topic.

comment by Fluttershy · 2017-05-28T20:16:11.640Z · LW(p) · GW(p)

Duncan's original wording here was fine. The phrase "telling the humans I know that they're dumb or wrong or sick or confused" is meant in the sense of "socially punishing them by making claims in a certain way, when those claims could easily be made without having that effect".

To put it another way, my view is that Duncan is trying to refrain from adopting behavior that lumps in values (boo trans people) with claims (trans people disproportionately have certain traits). I think that's a good thing to do for a number of reasons, and have been trying to push the debate in that direction by calling people out (with varying amounts of force) when they have been quick to slip in propositions about values into their claims.

I'm frustrated by your comment, komponisto, since raising a red-flag alert, saying that something is poorly worded at best, and making a large number of more subtle negative implications about what they've written are all ways of socially discouraging someone from doing something. I think that Duncan's comment was fine, I certainly think that he didn't need to apologize for it, and I'm fucking appalled that this conversation as a whole has managed to simultaneously promote slipping value propositions into factual claims, and promote indirectly encouraging social rudeness, and then successfully assert in social reality that a certain type of overtly abrasive value-loaded proposition making is more cooperative and epistemically useful than a more naturally kind style of non-value-loaded proposition making, all without anyone actually saying something about this.

Replies from: komponisto
comment by komponisto · 2017-05-29T05:18:15.339Z · LW(p) · GW(p)

Your principal mistake lies here:

"socially punishing them by making claims in a certain way, when those claims could easily be made without having that effect

Putting communication through a filter imposes a cost, which will inevitably tend to discourage communication in the long term. Moreover, the cost is not the same for everyone: for some people "diplomatic" communication comes much more naturally than for others. As I indicate in another comment, this often has to do with their status: the higher it is, the less necessary directness becomes, because more people are already preoccupied with mentally modeling them.

I'm frustrated by your comment, komponisto

If we're engaging in disclosures of this sort, I have felt similarly about many a comment of yours, not least the one to which I am replying. In your second paragraph, for example, you engage in passive aggression by deceptively failing to acknowledge that the people you are criticizing would accuse you of the exact same sin you accuse them of (namely, equating "trans people disproportionately have certain traits" and "boo trans people"). That's not a debate I consider myself to be involved in, but I do, increasingly, feel myself to be involved in a meta-dispute about the relative importance of communicative clarity and so-called "niceness", and in that dispute, come down firmly on the side of communicative clarity -- at least as it pertains to this sort of social context.

I read your comment as a tribal cheer for the other, "niceness", side, disingenuously phrased as if I were expected to agree with your underlying assumptions, despite the fact that my comments have strongly implied (and now explicitly state) that I don't.

Replies from: Fluttershy
comment by Fluttershy · 2017-05-30T05:36:48.806Z · LW(p) · GW(p)

Putting communication through a filter imposes a cost, which will inevitably tend to discourage communication in the long term.

As does allowing people to be unduly abrasive. But on top of that, communities where conversations are abrasive attract a lower caliber of person than ones where they aren't. Look at what happened to LW.

Moreover, the cost is not the same for everyone

It's fairly common for this cost to go down with practice. Moreover, it seems like there's an incentive gradient at work here; the only way to gauge how costly it is for someone to act decently is to ask them how costly it is to them, and the more costly they claim it to be, the more the balance of discussion will reward them by letting them impose costs on others via nastiness while reaping the rewards of getting to achieve their political and interpersonal goals with that nastiness.

I'm not necessarily claiming that you or any specific person is acting this way; I'm just saying that this incentive gradient exists in this community, and economically rational actors would be expected to follow it.

communicative clarity and so-called "niceness"

That's a horrible framing. Niceness is sometimes important, but what really matters is establishing a set of social norms that incentivize behaviors in a way that leads to the largest positive impact. Sometimes that involves prioritizing communicative clarity (when suggesting that some EA organizations are less effective than previously thought), and sometimes that involves, say, penalizing people for acting on claims they've made to others' emotional resources (reprimanding someone for being rude when that rudeness could have reasonably been expected to hurt someone and was entirely uncalled for). Note that the set of social norms used by normal folks would have gotten both of these cases mostly right, and we tend to get them both mostly wrong.

Replies from: komponisto, Zack_M_Davis
comment by komponisto · 2017-05-30T07:00:48.547Z · LW(p) · GW(p)

communities where conversations are abrasive attract a lower caliber of person than one where they aren't. Look at what happened to LW.

To whatever extent this is accurate and not just a correlation-causation conversion, this very dynamic is the kind of thing that LW exists (existed) to correct. To yield to it is essentially to give up the entire game.

What it looks like to me is that LW and its associated "institutions" and subcultures are in the process of dissolving and being absorbed into various parts of general society. You are basically endorsing this process, specifically the aspect wherein unique subcultural norms are being overwritten by general societal norms.

The way this comes about is that the high-status members of the subculture eventually become tempted by the prospect of high status in general society, and so in effect "sell out". Unless previously-lower-status members "step up" to take their place (by becoming as interesting as the original leaders were), the subculture dies, either by collapsing due to a power vacuum or simply by being memetically eaten by the general culture as members continue to follow the old leaders into (what looks like) the promised land.

comment by Zack_M_Davis · 2017-05-30T16:36:45.048Z · LW(p) · GW(p)

Moreover, it seems like there's an incentive gradient at work here; the only way to gauge how costly it is for someone to act decently is to ask them how costly it is to them, and the more costly they claim it to be, the more the balance of discussion will reward them by letting them impose costs on others via nastiness while reaping the rewards of getting to achieve their political and interpersonal goals with that nastiness.

I agree that the incentives you describe exist, but the analysis cuts both ways: the more someone claims to have been harmed by allegedly-nasty speech, the more the balance of discussion will reward them by letting them restrict speech while reaping the rewards of getting to achieve their political and interpersonal goals with those speech restrictions.

Interpersonal utility aggregation might not be the right way to think of these kinds of situations. If Alice says a thing even though Bob has told her that the thing is nasty and that Alice is causing immense harm by saying it, Alice's true rejection of Bob's complaint probably isn't, "Yes, I'm inflicting c units of objective emotional harm on others, but modifying my speech at all would entail c+1 units of objective emotional harm to me, therefore the global utilitarian calculus favors my speech." It's probably: "I'm not a utilitarian and I reject your standard of decency."

comment by ChristianKl · 2017-05-27T12:13:02.197Z · LW(p) · GW(p)

In most cases calling someone sick when the person suffers from a mental issue isn't the best way to get them to seek professional help for it.

Replies from: komponisto
comment by komponisto · 2017-05-27T22:58:56.325Z · LW(p) · GW(p)

What is the best way? It's not like you can trick them into it.

A more serious issue, I would have thought, would be that the "professional help" won't actually be effective.

Replies from: ChristianKl
comment by ChristianKl · 2017-05-28T11:28:04.469Z · LW(p) · GW(p)

If you don't have any specific tools, I would advocate a mix of asking questions to help the other person clarify their thinking and providing information.

"Did you symptoms X and Y are signs of clinical mental illness Z?" is likely more effective than telling the person "You have mental illness Z."

If the other person doesn't feel judged but can explore the issue in a safe space where they are comfortable working through an ugh-field, it's more likely that they will end up doing what's right afterwards.

Replies from: komponisto
comment by komponisto · 2017-05-29T10:42:51.560Z · LW(p) · GW(p)

I don't think "Did you know symptoms X and Y are signs of clinical mental illness Z?" is appreciably different from "You very possibly have mental illness Z", which is the practical way that "You have mental illness Z" would actually be phrased in most contexts where this would be likely to come up.

Nevertheless, your first and third paragraphs seem right.

Replies from: ChristianKl
comment by ChristianKl · 2017-05-29T13:49:36.102Z · LW(p) · GW(p)

In a conversation, you get another reaction if you ask a question that indirectly implies that the other person has a mental illness than if you are direct about it. The phrasing of information matters.

comment by a_different_face · 2017-05-27T03:01:17.681Z · LW(p) · GW(p)

This is about behavior, not belief.

I have not disputed "autogynephilic men with repressed femininity and a crossdressing fetish pretending to be women aren't actually women", though neither have I affirmed it.

Regardless, I still would not want you, personally, in any community I'm part of, because your behavior is bad. I'm not interested in debating this; obviously we disagree on what acceptable behavior looks like. Whatever; different strokes for different folks -- clearly this community is not for you, but also you seem to still be here, for some reason.

And I would still want to know who's going around trying to convince people of that statement, so that I could avoid them (for their proselytizing, not for their beliefs) and/or assess why the community has not yet shunned them. (Obviously you can shun the community while it simultaneously shuns you. These are not mutually exclusive.)

So, again, I still want to know who you're talking about. Who are you talking about?

Replies from: Zack_M_Davis, 29f8c80d-235a-47bc-b
comment by Zack_M_Davis · 2017-05-27T12:44:27.953Z · LW(p) · GW(p)

Hi! 18239018038528017428 is almost certainly talking about me! My detailed views are probably more nuanced and less objectionable than you might infer from the discussion in this thread? But to help you assess for yourself why "the community" (whatever that is) has not yet shunned me, maybe start with this comment (which also contains links to my new gender blog).

Replies from: a_different_face
comment by a_different_face · 2017-05-27T15:07:12.773Z · LW(p) · GW(p)

Ah, thanks. Turns out I do know who you are and have already thought about the question of why (and to what extent) the community continues to interact with you to my satisfaction. (And yes, the throwaway's description of you is somewhat misleading, though mostly that's because, from their behavior, I would expect anyone they praise to be terrible without redeeming features).

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2017-05-28T01:28:09.976Z · LW(p) · GW(p)

have already thought about the question of why (and to what extent) the community continues to interact with you to my satisfaction.

For obvious reasons, I'm extremely curious to hear your analysis if you're willing to share. (Feel free to PM me.)

from their behavior, I would expect anyone they praise to be terrible without redeeming features

I don't think that's a good inference! (See the anti-halo effect and "Are Your Enemies Innately Evil?") Even if you think the throwaway's rudeness and hostility makes them terrible, does it really make sense for guilt-by-association to propagate to anyone the throwaway approves of for any reason?

(from the great-grandparent)

This is about behavior, not belief. [...] (for their proselytizing, not for their beliefs)

I think it would be less cruel and more honest to just advocate for punishing people who believe a claim, rather than to advocate for punishing people who argue for the claim while simultaneously insisting that this isn't a punishment for the belief. What would be the point of restricting speech if the goal isn't to restrict thought?

Replies from: a_different_face
comment by a_different_face · 2017-05-30T00:34:10.816Z · LW(p) · GW(p)

For obvious reasons, I'm extremely curious to hear your analysis if you're willing to share. (Feel free to PM me.)

Probably this is going to be too blunt, but it's honest, and I'm assuming you'd prefer that:

Basically, because you are psychotic, not an asshole (or at least, afaict, only an asshole as a consequence). And dealing with people who are behaving poorly because of mental issues is a hard problem, especially in a community where so many people have mental issues of one sort or another.

Again, this doesn't mean I disagree with you (and again neither have I claimed to agree). The fact of your psychosis is not obviously prior to your beliefs. But it is very obviously prior to how you have acted on those beliefs. Or at least it is obvious to me, having spent a great deal of time with friends who behave like you've behaved (in public, at any rate; of course you should discount this evidence given that I haven't interacted with you in person, or at least not much).

Even if you think the throwaway's rudeness and hostility makes them terrible, does it really make sense for guilt-by-association to propagate to anyone the throwaway approves of for any reason?

It's evidence, yes.

I think it would be less cruel and more honest to just advocate for punishing people who believe a claim, rather than to advocate for punishing people who argue for the claim while simultaneously insisting that this isn't a punishment for the belief. What would be the point of restricting speech if the goal isn't to restrict thought?

... This is a much larger conversation for another time. If you have not already internalized "just because I believe something is true does not make it socially acceptable for me to go around trying to convince everyone else that it's true", I don't know that I will be able to briefly explain to you why that is the case.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2017-05-30T04:10:18.026Z · LW(p) · GW(p)

but it's honest, and I'm assuming you'd prefer that

Yes, thank you!

Basically, because you are psychotic

I definitely went through some psychosis states back in February and April, but I seem to be pretty stably back to my old self now. (For whatever that might be worth!) I have a lot of regrets about this period, but I don't regret most of my public comments.

If you have not already internalized "just because I believe something is true does not make it socially acceptable for me to go around trying to convince everyone else that it's true", I don't know that I will be able to briefly explain to you why that is the case.

Oh, I think I understand why; I'm not that socially retarded. Even so—if there's going to be one goddamned place in the entire goddamned world where people put relatively more emphasis on "arguing for true propositions about human psychology because they're true" and relatively less emphasis on social acceptability, shouldn't it be us? I could believe that there are such things as information hazards—I wouldn't publicize instructions on how to cheaply build a suitcase nuke—but this isn't one of them.

Replies from: a_different_face
comment by a_different_face · 2017-05-30T05:14:20.514Z · LW(p) · GW(p)

if there's going to be one goddamned place in the entire goddamned world where people put relatively more emphasis on "arguing for true propositions about human psychology because they're true" and relatively less emphasis on social acceptability, shouldn't it be us?

Sure. And we do put relatively more emphasis. But we have not completely and totally thrown away all social convention. Nor should we: much of it exists for good reason.

comment by 29f8c80d-235a-47bc-b · 2017-05-28T23:35:40.255Z · LW(p) · GW(p)

That seems so obviously true that the idea of shunning someone for fighting against people arguing the opposite seems crazy to me. I thought we just used "she" to be polite, not that we believed them to be women in any meaningful sense.

Replies from: a_different_face
comment by a_different_face · 2017-05-30T00:36:14.586Z · LW(p) · GW(p)

I cannot imagine participating in this community for any length of time and sincerely concluding that the mental state you've described is actually universal.

comment by Fluttershy · 2017-05-28T19:03:09.551Z · LW(p) · GW(p)

assess why the community has not yet shunned them

Hi! I believe I'm the only person to have tried shunning them, which happened on Facebook a month ago (since Zack named himself in the comments, see here and here). The effort more or less blew up in my face: it got a few people to publicly say they were going to exclude me, or try to get others to exclude me, from future community events, and it was also a large (but not the only) factor in getting me to step down from a leadership position in a project I'm spending about half of my time on. To be fair, there are a couple of places where Zack is less welcome now as well (I don't think either of us has been successfully excluded from anything other than privately hosted events we weren't likely to attend anyway), and someone with the viewpoint that shunning him was the wrong thing for me to do also stepped down from an equivalent leadership position in order to maintain a balance. So I guess we're in a stalemate-like de facto ceasefire, though I'd be happy to pick up the issue again.

I still stand by my response to Zack. It would have been better if I'd been skilled enough to convince him to use a less aggressive tone throughout his writing by being gentler myself; that's an area where I'm still trying to grow. I think that collaborative truthseeking is aided rather than hindered by shunning people who call others "delusional perverts" because of their gender. This is, at least in part, because it's easier to keep discussions focused on truthseeking, impact, etc. when there are social incentives in place (i.e. small social nudges that can later escalate to shunning) that disincentivize people from acting in ways that predictably push others into a state where they're hurt enough that they're unable to collaborate, such as by calling them delusional perverts. I know that the process of applying said social incentives (i.e. shunning) doesn't look like truthseeking, but it's instrumental to truthseeking, when done with specificity and sensitivity by people with a well-calibrated set of common social skills.

Replies from: Zack_M_Davis, Elo
comment by Zack_M_Davis · 2017-05-31T22:14:50.222Z · LW(p) · GW(p)

(Just noticed this.)

a large (but not the only) factor in getting me to step down from a leadership position in a project I'm spending about half of my time on. [...] and someone with the viewpoint that shunning him was the wrong thing for me to do also stepped down from an equivalent leadership position in order to maintain a balance.

I wasn't aware of this, but it seems unfortunate. If successfully ostracizing me isn't going to happen anyway, "both of you step down from something that you previously wanted to do" seems like a worse outcome than "neither of you step down."

(For my own part, while I wouldn't invite you to any parties I host at my house, I have no interest in trying to get other people to exclude you from their events. I consider my goal in this whole affair to be simply making it clear that I don't intend to let social pressure influence my writing—a goal at which I think I've succeeded.)

shunning people who call others "delusional perverts" because of their gender

I hadn't bothered addressing this earlier, because I wanted to emphasize that my true rejection was "I don't negotiate with emotional blackmailers; I'm happy to listen and update on substantive criticism of my writing, but appeal to consequences is not a substantive criticism", but since it is relevant, I really think you've misunderstood the point of that post: try reading the second and third paragraphs again.

What I'm trying to do there is highlight my disapproval of the phenomenon where the perceived emotional valence of language overshadows its literal content. I understand very well that the phrase "delusional pervert" constitutes fighting words in a way that "paraphilic with mistaken views" doesn't, but I'm interested in developing the skill of being able to simultaneously contemplate framings with different ideological/emotional charges, especially including framings that make me and my friends look bad (precisely because those are the ones it's most emotionally tempting to overlook). People who aren't interested in this skill probably shouldn't read my blog, as the trigger warning page explains.

(Seriously, why isn't the trigger warning page good enough for you? It's one thing to say my writing should have a label to protect the sensitive, but it's another thing to say that you don't want my thoughts to exist!)

It would have been better if I'd been skilled enough to convince him to use a less aggressive tone throughout his writing by being gentler myself

Not all goals are achievable by sufficiently-skilled gentle social manipulation. If you can show me an argument that can persuade me to change my behavior given _my_ values, then I'll do so. If no such argument exists, then your skill and gentleness don't matter. (At least, I hope I'm not that hackable!)

comment by Elo · 2017-05-28T19:16:42.676Z · LW(p) · GW(p)

it sounds like something happened and there was some miscommunication and things are not fully healed. Would you like help with that?

Replies from: Fluttershy
comment by Fluttershy · 2017-05-30T04:54:27.386Z · LW(p) · GW(p)

I appreciate your offer to talk things out together! To the extent that I'm feeling bad and would feel better after talking things out, I'm inclined to say that my current feelings are serving a purpose, i.e. to encourage me to keep pressing on this issue whenever doing so is impactful. So I prefer to not be consoled until the root issue has been addressed, though that wouldn't have been at all true of the old version of myself. This algorithm is a bit new to me, and I'm not sure if it'll stick.

Overall, I'm not aware that I've caused the balance of the discussion (i.e. pro immediate abrasive truthseeking vs. pro incentives that encourage later collaborative truthseeking & prosociality) to shift noticeably in either direction, though I might have made it sound like I made less progress than I did, since I was sort of ranting/acting like I was looking for support above.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2017-05-31T22:32:58.486Z · LW(p) · GW(p)

encourage me to keep pressing on this issue whenever doing so is impactful. So I prefer to not be consoled until the root issue has been addressed

Is this really a winning move for you? I'm not budging. It doesn't look like you have a coalition that can deny me anything I care about. From my perspective, any activity spreading the message "Zack M. Davis should be shunned because of his writing at http://unremediatedgender.space/" is just free marketing.

comment by Benquo · 2017-05-26T02:21:14.788Z · LW(p) · GW(p)

This seems similar to Leverage in a lot of ways. It seems like it would be really instructive to contrast your plan with Leverage's plan - as initially intended, and as executed - to see what you plan to invest in that they aren't, what you're not doing that they are, and costs and benefits of those differences.

Other contrasting case studies might also add clarity:

  • Esalen
  • kibbutzim
  • the old Singularity Institute house
  • residential colleges
  • fraternities
  • Buddhist monasteries
  • Christian monasteries
  • actual armies
  • actual paramilitary organizations / militias
  • Sea Org

It probably makes sense to 64/4 these with rough sketches from memory/stereotypes/Wikipedia-ing before bothering to do any time-intensive research.

Replies from: Duncan_Sabien, ChristianKl, epursimuove
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T02:23:59.347Z · LW(p) · GW(p)

Yep. I don't have strong ties to Leverage, but I'm talking with a couple of the people and have friends involved who have better models than me. +1 to this point.

comment by ChristianKl · 2017-05-26T11:39:50.738Z · LW(p) · GW(p)

Esalen is worth noting because it's a place that's extremely intellectually productive. There are many different paradigms of bodywork that come out of Esalen.

Esalen is central to the history of Feldenkrais, Rolfing, and a bunch of other paradigms.

If you could build a community that succeeded in doing for rationality what Esalen did for bodywork, that would be a huge success.

Replies from: casebash
comment by casebash · 2017-05-28T06:30:27.750Z · LW(p) · GW(p)

What is Esalen?

Replies from: ChristianKl
comment by ChristianKl · 2017-05-28T09:30:16.666Z · LW(p) · GW(p)

The Wikipedia page is https://en.wikipedia.org/wiki/Esalen_Institute

In his "Cargo Cult Science" speech, Feynman describes the place by saying:

Most people believe so many wonderful things that I decided to investigate why they did. And what has been referred to as my curiosity for investigation has landed me in a difficulty where I found so much junk to talk about that I can’t do it in this talk. I’m overwhelmed. First I started out by investigating various ideas of mysticism, and mystic experiences. I went into isolation tanks (they’re dark and quiet and you float in Epsom salts) and got many hours of hallucinations, so I know something about that. Then I went to Esalen, which is a hotbed of this kind of thought (it’s a wonderful place; you should go visit there). Then I became overwhelmed. I didn’t realize how much there was.

I was sitting, for example, in a hot bath and there’s another guy and a girl in the bath. He says to the girl, “I’m learning massage and I wonder if I could practice on you?” She says OK, so she gets up on a table and he starts off on her foot—working on her big toe and pushing it around. Then he turns to what is apparently his instructor, and says, “I feel a kind of dent. Is that the pituitary?” And she says, “No, that’s not the way it feels.” I say, “You’re a hell of a long way from the pituitary, man.” And they both looked at me—I had blown my cover, you see—and she said, “It’s reflexology.” So I closed my eyes and appeared to be meditating.

comment by epursimuove · 2017-06-11T05:24:09.572Z · LW(p) · GW(p)

64/4 these

What does this mean? Google isn't helping and the only mention I see on LW is this post.

Replies from: Benquo
comment by Benquo · 2017-06-11T07:47:11.402Z · LW(p) · GW(p)

The Pareto Principle says that you can 80:20 many things, i.e. get 80% of the value from 20% of the work. If you 80:20 the 20%, you end up with 64% of the value for 4% of the work.
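(Worked out, assuming the 80/20 split holds again within the top 20% of the work: value = 0.8 × 0.8 = 0.64, i.e. 64%; work = 0.2 × 0.2 = 0.04, i.e. 4%. Iterating a third time would give roughly 0.8³ ≈ 51% of the value for 0.2³ = 0.8% of the work.)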

comment by orthonormal · 2017-05-25T22:13:07.522Z · LW(p) · GW(p)

In the spirit of Murphyjitsu, the most obvious failure mode that you didn't mention is that I expect you to burn out dramatically after a few weeks, from exhaustion or the psychological strain of trying to optimize the experiences of N people. The bootcamp phase is not analogous to anything I've heard of you doing sustainably for an extended period of time.

So, do you expect Dragon Army Barracks to work if Eli has to take over for you in Week Four?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-25T22:24:41.221Z · LW(p) · GW(p)

Hmm, interesting. My self-model is somewhat incapable of burning out during this, due to an ability to run forever on spite (that's only somewhat tongue-in-cheek).

It's a solid point, though. If I condition on burnout, I think that Eli manages or not based on the level of specificity and concreteness that we managed to get in place in the first few weeks. Like, I don't think Eli is competent (yet) to create the thing, but I do think he's competent to oversee its maintenance and preservation. So that seems to put a somewhat higher priority on early systemization and scaffold-building than might have otherwise been in my plan.

Good question.

Edit: also, probably the closest analogue to this in my past is being the sole functioning RA on a dorm hall of ~30 high schoolers in a high-stress school environment. That was probably within the same order of magnitude of juggling, once you account for the fact that my increase in skill since then is balanced by the increase in complexity/responsibility. I did a lot to try to manage the experience of those thirty people.

Replies from: Valentine, Screwtape
comment by Valentine · 2017-05-25T23:19:30.238Z · LW(p) · GW(p)

FWIW, my model of Duncan agrees with his model of himself here. I don't expect him to burn out doing this.

…and even if he does, I expect that the combo of Eli plus the sort of people I imagine being part of Dragon Army would pull it through. Not guaranteed, but with a strong enough chance that I'm basically not worried about a failure mode along the lines of "Flops due to Duncan burnout and subsequent systems failures."

comment by Screwtape · 2017-05-30T15:41:54.350Z · LW(p) · GW(p)

I share your strong preference for being second in command over first, and would like to add a datapoint: I find being first in command really stressful in a way that doesn't hit me or mess with my decision-making until after I relinquish the role, at which point it hits hard. I'm curious whether that happens or has happened to you. (Examples: being the first responder in a medical emergency and keeping everything going right up until the victim arrived at the E.R., then throwing up and shaking for the rest of the night; leading a major college class project for a semester that went really well, then essentially shutting down and hiding in my room for a week.)

If I were trying to do what you seem to be trying to do, I would be setting myself up for a major crash once I'd brought the experiment to a close or handed off the baton. Obviously our minds are different in many ways, but I figured it was worth checking to see if you had that issue and found a solution that might be stealable.

comment by [deleted] · 2017-05-25T23:07:27.816Z · LW(p) · GW(p)

For the next three months, I will embark on my own experiment of living in a high-standards high-group-activity environment. Specifically, a Buddhist temple.

The temple has an even tighter schedule. All residents wake up together at 5 am and go to sleep together at 10 pm. The rest is meditation, study and work, with 4 hours of free time. The weekends are free, so it adds up to being told what to do for 85 hours per week.

Over the years, I have stayed there six times for a week. The first days are usually a fight to adjust to the lower standards of living (the unpleasant valley). As the days go by, I become increasingly energized and sharp. When I leave, I'm in the best state I can be. Not even a CFAR workshop measures up to how much I upgrade in such a short time. And it's not the meditation. I've gone for days without really meditating and I would still upgrade.

This has led me to believe that something about our individualist style of living is profoundly wrong, at least for some people. Seems like a solution to many of our problems lies in collectivism. Think mental health, akrasia, Hufflepuff virtue, etc.

I am really interested in how this is going to fly. Please do post updates. I would also love to share my perspective. I think I'll have some interesting data.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-25T23:12:33.936Z · LW(p) · GW(p)

If you're willing, sharing your perspective in more detail here is welcome (so that all the models are in one place). Else, you're welcome to PM or email me.

comment by ialdabaoth · 2017-05-26T00:15:22.608Z · LW(p) · GW(p)

I have tried similar things.

My strongest recommendation is to beware of internal power struggles. Even if you are fully understood to be in charge, if everyone under you is in a state of emotional mutiny, you WILL become compromised, and you WILL make mistakes, and those mistakes WILL be used to justify further emotional mutiny. This will spiral until you lose everything.

Moreso, some percentage of your trusted minions WILL undergo emotional mutiny. They will discover that they'd rather be somewhere else, doing something else. They'll discover that there are people other than you they'd like in charge of their lives. They will discover that they don't trust you as much as they thought they did. Even if you pick the best people -- hell, ESPECIALLY if you pick the best people, because the best people will have other people vying for their attention, seeking to undermine you from without.

comment by verbiage_ecstatic_duplicate0.33943569543246666 · 2017-06-01T06:57:50.231Z · LW(p) · GW(p)

Chiming in because the problem of helping people level up is close to my heart.

Putting the social dynamics of the experiment aside (since there are plenty of people discussing that aspect), I'd like to offer some good-natured skepticism about the overall approach. ("Good-natured" meaning: I hope you actually do pursue this, because I'm genuinely curious about how it will play out -- assuming the safety concerns others have raised are handled well, of course.)

My skepticism is: this is too meta and too complicated to lead to actual progress.

I spent a few years at a company that tried to inculcate a deliberate process for getting to the right answer, including a culture of radical honesty and formal procedures for making decisions and learning from mistakes. This was a major priority at the company for a long period of time (last I checked, it's still going on), with backing from the entire senior management team, and was enforced by firing people who couldn't or wouldn't skillfully participate. I.e., they took it really seriously and put a lot of effort into it. The people who conceived and implemented it were, in my opinion, extremely smart and competent.

That said, in my opinion the effort spent on this program did more harm than good to the functioning of the company. The values and culture became an end in itself, as opposed to a means for helping achieve goals, and endless amounts of time and energy were spent debating, elucidating, learning, and critiquing the system. Competent professionals ended up becoming ineffectual because they gave up (or were forced out of) their unreflective expertise and got stuck in endless cycles of second-guessing. Some of that self-reflection may have given rise to new levels of skill (in my case, I did in fact feel like I benefited from my time there, although I think that was largely because it was my first job out of college so I didn't have that much to un-learn), but generally people felt disempowered by the initiative rather than improved.

In contrast, for the last few years, I've been running a tiny company where we have very little meta discussion and mostly just do object-level work. I feel 1000x more productive now than I did at my prior job.

My takeaway from this is that the optimal ratio of meta-level tuning to object-level practice is [small number] : [large number]. Meta-level thinking is extremely valuable and important, but I view it as the rudder on a boat: you need to be constantly making adjustments to keep pointing in the right direction, but 99% of the power generation goes into the main engine pointing forward.

If I had to generate a hypothesis as to why the concrete achievements of the rationalist community are less than might be desired, it would be that the community spends way too much of its energy on meta topics instead of on object-level progress. This is understandable, since a) meta-level discussion of rationality is what created the community in the first place, and b) object-level discussion can often be very boring compared to meta-level discussion. (I miss the intellectual stimulation of my previous job, even as I see it as basically a waste of time in terms of actually building a successful company.) While understandable, I think it leads to predictable outcomes: a lot of talk happens but not much gets accomplished.

Looking at the proposed charter, I suspect there will be a very high amount of meta-level discussion, probably significantly more so than at my prior job that I thought was way too meta. That's because a) it's built in to the daily schedule, b) it's built into the mission, which is expected to evolve over time with the participants, and c) it's built into the community that the participants will be drawn from.

In addition to being too meta, I also suspect this experiment is too complex. Experimenting with a bunch of different norms, on top of the code of conduct and daily schedule, seems wildly ambitious to me. In the company I worked for, the set of norms and practices was set in stone by executive fiat, recruits were presented with them prior to accepting jobs, and adherence to them was a major part of performance evaluation; there was still a very high employee churn rate and general agreement that the norms and practices as specified weren't consistently well-practiced throughout the company. The Dragon charter is for a smaller group of people, which makes things easier, but the norms/practices are expected to be a moving target, which makes things harder.

In my personal experiments with self-improvement, I've had the most success with extremely simple plans. My most successful self-intervention to date has been to download a simple habit tracker on my phone, and add a new daily habit, moving on to the next only after successful completion of the prior one for 30 days. When I first started trying to learn new habits, I would add a bunch of new habits at once, and I would always fail. It took me a very long time to get patient enough to only try to change one thing at a time (which requires accepting that I'm going to have habits I don't like in the interim that I don't try to do anything about).
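(For concreteness, here is a minimal sketch of that one-habit-at-a-time rule in Python. It's purely illustrative, not the actual app; the 30-day threshold comes from above, while the class, its names, and the streak-breaking rule are my own assumptions.)

```python
from datetime import date, timedelta

class HabitQueue:
    """Adopt habits strictly one at a time; advance to the next habit
    only after `threshold` consecutive successful days on the current one."""

    def __init__(self, habits, threshold=30):
        self.habits = list(habits)  # habits still to adopt, in order
        self.threshold = threshold  # consecutive successful days required
        self.streak = 0             # current run of successful days
        self.last_success = None    # date of the most recent success

    def current(self):
        return self.habits[0] if self.habits else None

    def record_day(self, day, completed):
        """Log one day's result for the current habit."""
        if not self.habits:
            return
        if not completed:
            self.streak = 0         # a missed day restarts the count
            return
        if self.last_success and (day - self.last_success) > timedelta(days=1):
            self.streak = 0         # a gap in the record also restarts it
        self.streak += 1
        self.last_success = day
        if self.streak >= self.threshold:
            self.habits.pop(0)      # habit installed; move on to the next
            self.streak = 0
            self.last_success = None

# Example: only "floss" is active until it has 30 straight successful days.
q = HabitQueue(["floss", "stretch", "journal"])
q.record_day(date(2017, 6, 1), completed=True)
assert q.current() == "floss"
```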

Similarly, I've been successful growing my current company by having an extremely boring strategy of: ship code, talk to customers, ship code, talk to customers.

Simplicity does not come naturally to me; I like my ideas and strategies to be convoluted, complicated, ambitious, and interesting, and I get very bored with simple, straightforward approaches. So I'm a big believer in simplicity precisely because I've learned the hard way, against all my natural inclinations, that it actually works.

So if I were trying to design a charter, I would pick one or two things that I think would be most likely to have a game-changing impact, and just focus on those things until they worked (or didn't). In contrast, the charter as it exists now feels to me like it has way too many moving pieces. That's just my intuition, of course, but I hope I've given a feel for where that intuition comes from.

Anyway, I admire the ambition in doing a project like this, so I hope my criticism is constructive and useful.

Replies from: Duncan_Sabien, cousin_it
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T07:41:36.903Z · LW(p) · GW(p)

Thanks for the long and detailed response. I enjoyed reading it.

It's interesting that you highlight meta as a dangerous failure mode—I actually strongly agree, which is why the aesthetic is tuned toward stuff like "just exercise" and "housemates should produce visible work." My sense is that, in practice, a strategy of "just do stuff" outstrips a strategy of "think really hard until you find the ideal move," especially when you take into account how many iterations you can get in if you're churning hard.

Hilariously, though, I'm further inside the rationalist bubble than I thought, because I accept your overall summation even though the intent was to be THE OBJECT LEVEL HOUSE (or at least, the house that does stuff even if it goes meta on norms). I still think we're set up to be relatively ahead, but as you point out, that's not necessarily a sufficient bar.

However, I'm much more concerned with:

In addition to being too meta, I also suspect this experiment is too complex. Experimenting with a bunch of different norms, on top of the code of conduct and daily schedule, seems wildly ambitious to me.

That rings very true to me, and has been an active concern of mine for the past couple of weeks. It seems like there are something like a hundred activities/experiments/norms/projects that are worthy of including in this, and something like 1.3 slots per week (and thus not even room for half), and I'm not at all certain how to best pick and choose and prioritize and optimize for success. In part, I'm hoping that if we just throw ourselves in and iterate (see above) we'll do better than if we agonize, but yeah, there are a lot of moving parts, and I wouldn't be surprised if we ended up trying to drastically simplify in like our fifth week house meeting.

If I had to really zero in on basics, I think they are:

  • Never give up on an experiment until its predetermined end date
  • Spend ~20 hours a week actually interacting in the same physical space as housemates (at least a subset)

... those, I think, are the iron core of the project.

Replies from: entirelyuseless
comment by entirelyuseless · 2017-06-01T14:04:21.509Z · LW(p) · GW(p)

Spend ~20 hours a week actually interacting in the same physical space as housemates (at least a subset)

I'm curious why this is so important to you, unless that it's just something to try out. I currently live alone and I like it that way, and I see no reason why spending more time with other people would be such a great thing.

You seem really rigid about excuses, though. I think the tendency will be that people will come up with an excuse that one finds unpleasant or difficult to dispute. For example, when I was in the data science bootcamp in Berkeley, people would very frequently say, "I'm sick and I will be working from home today." Now, a lot of people were in fact sick precisely because of so much physical proximity. But it was very obvious in many cases that the basic reason they were staying home was that they were tired of all the company and felt the need to get away. They did not, however, feel comfortable saying, "I just feel the need to get away."

The same thing was true when I lived in a monastery. You could not say "I just feel like sleeping in this morning," so people said "I didn't come this morning because I didn't feel well." We all knew that this simply meant they were tired and felt like sleeping in. But no one is comfortable confronting someone with the fact that they're not really sick if they say they are.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T16:32:00.503Z · LW(p) · GW(p)

The focus on physical presence is a combination of research showing that it matters (there's some stuff I've collected from Dunbar, for example) and strong personal intuition from past experience. In many ways, it's the core of the thing being tested out, but I have a lot of weight on "it turns out to matter more than just about anything else."

re: excuses, the intention of the house is Not To Do The Stupid Thing.

Clearly, "mental health" days are a real phenomenon—I've taken some myself. And on a larger scale, psych blockers/motivational issues are also real. So it'd be stupid to a) pretend they don't happen, and b) push directly against them all the time, and never look at undercutting them or working around them. This plan pushes directly against them some, with commitments to just show up anyway, but that's not the only tool—one of the things I hope to do is increase the candor of all housemates, at least within the context of the house. This will take some practice and reinforcement, but I much prefer a norm of "Huh. I notice I just really didn't want to show up today" --> figure out what's going on and address it systematically, to a norm of "little white lie that nobody calls out."

It's also worth noting that the house has a pretty high introvert quotient, so there will be a lot of us (myself included) who are motivated to safeguard systems giving one the ability to get away from people for a while.

comment by cousin_it · 2017-06-02T07:52:11.627Z · LW(p) · GW(p)

Thank you for writing that! It's great to see the "too meta" problem spelled out so clearly. It's similar to a situation in programming that has long puzzled me: many people and companies have accumulated processes that they swear by (code review, type systems, continuous integration, agile and whatnot), but at the same time lots of people do amazing work with very little process.

It seems like meta stuff has a way of self-justifying and growing, like a bureaucracy. It's useful if you're stuck and nothing works, but if you're making any progress at all, it's better to steer with the engine so to speak. Radical meta proposals sound attractive to people who have fought their minds to a standstill, but even for such people I think a better idea is starting one small object-level thing on a strict schedule (gym is a good choice), making the mind more mobile for other things in turn.

comment by a_different_face · 2017-05-27T00:45:01.779Z · LW(p) · GW(p)

This is a neat idea!

I expect it to fail. And I kind of wish you wouldn't try: I give maybe a 1/4 chance this fails sufficiently dramatically and publicly that I become less willing to be associated with the community because people start associating it with that failure.

In particular, here is what I expect to happen (~60% confidence it goes down something like this):

  • Someone will start regularly defecting within the first three months. Maybe they don't keep up with their chores, maybe they skip meetings, maybe they fail to get along with someone and they fight, maybe they persist in doing something they've been asked repeatedly not to do, maybe they chafe under your leadership and start practicing malicious compliance. I don't expect intentional defection so much as executive dysfunction, to be clear, but it has the same effect either way.

  • You, personally, will lack the force of character or charisma to fix it. (I haven't met you in person, so this might be way off; I'm just going off your writing and those of your pictures on Facebook I can see. But it takes an extraordinarily good manager to deal with this problem, and there's nothing in your bio which implies you are one.) You also, not being legally their military superior, won't have any actually worthwhile carrots or sticks to offer - this is the core problem, as I see it: you lack the legal authority to properly enforce anything. Also, rationalists are weird, and often don't respond that well to the usual incentives.

  • The rest of the house will lose confidence in your leadership as a consequence.

  • Bad things. I don't actually know what happens at this step - people move out, or just stop playing by your rules and it reverts to a standard if unusually dysfunctional group house, or what.

Unfortunately I don't have fixes to offer you here, other than "try to figure out an enforcement mechanism which will work even on rationalists and which you can legally carry out". I can't think of such an enforcement mechanism, but haven't even put a full five minutes into it. Maybe you already have one in mind and I've missed it. To be clear, I don't think "ostracism" will be remotely sufficient, because of the aforementioned weirdness and the fact that people will have other friends to fall back on. (I guess you could only invite people without other friends, or require them to cut off contact with said friends, but that is a terrible idea.) I also want to say that I've seen a number of other communities either fail or struggle due to lack of an explicitly specified and actually effective enforcement mechanism for their rules.


Tiny side note: I think it's very important that members have regular one-on-one meetings with someone other than you, in case their problems are problems with you which they aren't willing to bring up to your face.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-27T01:51:22.059Z · LW(p) · GW(p)

Thanks for this detailed model. I had a sense of this as a failure mode, but I like the specific way you've expressed it.

I do actually have a fair bit of managerial skill. I dunno if it's better than 1/100, but it's at least in that range. I also completely agree about regular one-on-one meetings with other people; in part, that's what the "pair debugging/rapport building" time commitment is. I wonder if you think it's important that they be with a specific other person, or if you think just fostering lots of one-on-one communication hits the thing you're gesturing toward?

Replies from: a_different_face
comment by a_different_face · 2017-05-27T02:46:41.585Z · LW(p) · GW(p)

A specific other person intuitively sounds better to me, but that might just be because that's how it has been done in organizations I've been in. (It also sounds hard to schedule if it's not a specific person, and it's important that this be a regular thing with the specific topic of "talk about how things are going," not just general time spent together.) Maybe your second in command, maybe a different person from the command structure - I assume there are going to be people other than you with roles like "general household management" (I am thinking of office managers, if you're familiar).

I don't think the pair time accomplishes quite this. Having a specific time set aside for one-on-one meetings specifically as the regular opportunity to bring up issues means issues which might otherwise have stayed at the back of the mind get brought up more. Generic time spent together does not accomplish this. It's approximately the same reason you want scheduled one-on-one meetings with everyone in the house despite presumably spending a lot of time with the people in the house in other contexts.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-27T03:01:06.531Z · LW(p) · GW(p)

Hmmm. It might be good to install as a house norm that everyone has an outside advisor that they commit to checking in with, either once a week or biweekly. Like, someone not directly affiliated with Dragon Army in any way.

Replies from: Decius, malcolmocean, Alicorn
comment by Decius · 2017-05-28T07:27:08.425Z · LW(p) · GW(p)

That's only useful if the outside advisor has some level of veto power. I'd suggest something like allowing them to trigger a discussion meeting /outside of Dragon Army Territory/ with the advised, optionally including the Commander and/or other members, and also at the option of the advisor including legal counsel or a medical practitioner.

Not because I expect anyone to need the safeguards involved, but because making those explicitly part of the Expectations makes it harder to coerce somebody into not getting help. Making coercion of the type "You're fine, no need to waste time leaving your ingroup to try to explain to some /outsider/ what's going on; they won't understand anyway" set off red flags is a feature.

Replies from: Duncan_Sabien
comment by MalcolmOcean (malcolmocean) · 2017-05-28T20:27:09.802Z · LW(p) · GW(p)

I am open to being an outside advisor / buddy / contact etc to individuals within this and/or with the project as a whole.

Replies from: Nisan
comment by Nisan · 2017-05-31T05:36:59.218Z · LW(p) · GW(p)

Me too!

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-31T06:18:26.413Z · LW(p) · GW(p)

Can I get contact info from you? I already have Malcolm's; if there's an email address you can use to send a message to TK17Studios at gmail dot com, I can then offer that address to anyone without an obvious check-in.

Replies from: Nisan
comment by Nisan · 2017-05-31T07:13:56.423Z · LW(p) · GW(p)

Sent.

comment by Alicorn · 2017-05-30T20:52:13.557Z · LW(p) · GW(p)

Throwing in with Malcolm as interested in being an outside sanity check.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-31T06:18:21.712Z · LW(p) · GW(p)

Can I get contact info from you? I already have Malcolm's; if there's an email address you can use to send a message to TK17Studios at gmail dot com, I can then offer that address to anyone without an obvious check-in.

comment by Kaj_Sotala · 2017-05-26T15:14:05.328Z · LW(p) · GW(p)

Caveat/skull: The obvious problem is people attempting to game the system—they notice that ten pushups is way easier than doing the diligent work required to show up on time 95 times out of 100.

Not a full solution, but gesturing in a direction that you might find useful: build the system in such a way that gaming it is encouraged and useful, and that the punishments are somehow self-balancing.

E.g. if the punishment is "do some chores", somebody who figures out that doing the chores is easier than their other obligations is at least clearing the list of all the chores that need to be done. If they run out of chores to do, new tasks can be added to the list, and they can choose whether doing them is still worth it.

I'm here kinda reminded of the evolution of pen'n'paper RPGs, which originally had disadvantages you could buy during character creation to make you more powerful in exchange; of course, people would munchkin by "forgetting" the disadvantages during play. Newer games got past that by making disadvantages give you zero points during character creation (or even cost points!), and instead having them award benefits if you roleplayed them during actual play. In general, games have gotten better the more they have adopted "trying to munchkin the rules automatically leads you to play the game more like it was designed to be played" as a fundamental game design principle.

Not sure how to do the "self-balancing costs" thing, but I am reminded of the bidding systems some houses have for chores, where you offer money for doing some task; if someone else finds the offered amount of money more valuable than the pain of doing the chore, they do it; otherwise you do it yourself.
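(To make the bidding idea concrete, here is a minimal sketch in Python. This is an illustrative assumption on my part rather than a description of any house's actual system; the rule encoded is simply "the housemate whose private cost for the chore is lowest takes it if the offered money covers that cost; otherwise the poster does it themselves.")

```python
class ChoreAuction:
    """Post a chore with an offered payment; a housemate accepts it only
    if the money is worth more to them than the pain of doing the chore."""

    def __init__(self):
        self.postings = []  # (chore, poster, offer) tuples

    def post(self, chore, poster, offer):
        self.postings.append((chore, poster, offer))

    def resolve(self, private_costs):
        """private_costs maps housemate -> {chore: subjective cost in $}.
        Returns {chore: (assignee, payment)}."""
        assignments = {}
        for chore, poster, offer in self.postings:
            bids = [(costs[chore], person)
                    for person, costs in private_costs.items()
                    if person != poster and chore in costs]
            if bids and min(bids)[0] <= offer:
                cost, taker = min(bids)           # cheapest willing housemate
                assignments[chore] = (taker, offer)
            else:
                assignments[chore] = (poster, 0)  # nobody bites; poster does it
        return assignments

# Example: Alice offers $5 for dishes; Bob's pain is $3, Carol's is $8.
auction = ChoreAuction()
auction.post("dishes", poster="alice", offer=5)
print(auction.resolve({"bob": {"dishes": 3}, "carol": {"dishes": 8}}))
# -> {'dishes': ('bob', 5)}
```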

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T15:45:20.930Z · LW(p) · GW(p)

+1 to the general idea; not sure how to implement it myself but it's worth some five-minute timers.

comment by Nisan · 2017-05-27T04:53:44.634Z · LW(p) · GW(p)

Are there people external to the project who are going to keep an eye on this? I think it would be sensible for each participant to have a buddy outside the house who checks in with them regularly. And for each buddy to know who the other buddies are.

Replies from: Duncan_Sabien, Qiaochu_Yuan
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-27T16:51:31.927Z · LW(p) · GW(p)

I've come around somewhat to the outside buddy idea below; I dunno about the buddies knowing each other. That seems to introduce a whole new layer of difficulty, unless you're just talking about, like, an email list.

Replies from: Nisan
comment by Nisan · 2017-05-27T20:58:36.580Z · LW(p) · GW(p)

Cool. Yes, a mailing list sounds even better than the low-tech solution I had in mind, which was "every buddy learns 80% of the names of the other buddies through the grapevine, and they happen to be one or two hops away on the social network".

comment by Qiaochu_Yuan · 2017-05-27T08:56:58.589Z · LW(p) · GW(p)

This seems extreme. Do you not expect that each participant will already have at least one friend outside the house they can talk to about the house if things go poorly, without this needing to be an explicit policy? Or do you worry that things will go so poorly that this won't work for some reason? If so, can you share a more detailed model?

Replies from: jsteinhardt
comment by jsteinhardt · 2017-05-27T17:29:34.887Z · LW(p) · GW(p)

I think there's a difference between a friend that one could talk to (if they decide to), and a friend tasked with the specific responsibility of checking in and intervening if things seem to be going badly.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2017-05-27T18:56:21.780Z · LW(p) · GW(p)

Sure, but what I'd like to know is why Nisan thinks that difference is important in this case.

Replies from: jsteinhardt, Nisan
comment by jsteinhardt · 2017-05-27T20:51:14.016Z · LW(p) · GW(p)

Parts of the house setup pattern-match to a cult; cult members aren't good at realizing when they need to leave, but their friends can probably tell much more easily.

(I don't mean the above as negatively as it sounds connotatively, but it's the most straightforward way to say what I think is the reason to want external people. I also think this reasoning degrades gracefully with the amount of cultishness.)

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-28T05:26:56.708Z · LW(p) · GW(p)

Yep, this is why I'm in favor of the "outside friend" norm. In particular, despite not planning to make a bad cult, if I accidentally do, I'm in favor of it being noticed as soon as possible, so it can either be fixed or dismantled.

comment by Nisan · 2017-05-27T20:49:43.336Z · LW(p) · GW(p)

I'm not proposing a house policy here. I'm suggesting that a Dragon would do well to have regular followups with someone outside the house, and I'm proposing that some members of the wider community offer to be those someones.

In the past I've had regular video calls with a couple people who were doing long-term experiments with their lifestyle; I think it was helpful. I believe such an arrangement was part of the Leverage polyphasic sleep experiment.

Jacob is right: There's a difference between a friend one can reach out to if one needs to, and a friend one is scheduled to talk to once a week. Personally, I struggle to keep up with friends without scheduled meetings, and it sounds like the Dragon Army will be very busy.

Also, there is a difference between reaching out to a friend when things have gone very wrong and one needs to get out; and bringing up a less drastic problem during a weekly check-in. In the first case, you need a couch to crash on and maybe a lawyer. In the second case, you need someone who will listen to you and bring an outside perspective, and maybe refer you to other resources.

Partially, I'm afraid that if this doesn't go well, our community will lose a cohort of promising people. It would be a shame if that happened because we failed to pay attention to how they were doing.

But also, if the experiment goes very well, this arrangement would be a means by which the wider community can learn from what went right.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2017-05-27T22:19:02.104Z · LW(p) · GW(p)

Partially, I'm afraid that if this doesn't go well, our community will lose a cohort of promising people.

I really don't know what you mean by "lose" here (and I'm worried that others will have varying interpretations as well). Do you mean they'll become less promising? Not promising? Leave the community? Go crazy? Die?

Anyway, this seems sensible, but I still want to nudge you and everyone else in the direction of sharing more explicit models of what you think could actually go wrong.

Replies from: Nisan
comment by Nisan · 2017-05-27T23:39:08.105Z · LW(p) · GW(p)

Sorry, I was imagining a scenario where a person has an unpleasant experience and then leaves the community because for the last several months all their close contacts in the community were in the context of an unpleasant living situation. That's bad for the person, and unfortunate for the community as well.

Replies from: Decius
comment by Decius · 2017-05-28T07:16:36.379Z · LW(p) · GW(p)

I see a possible failure mode where a member of a participant's family not into any rationalist community sees the Dragon Army rules and pattern-matches the rules and behavior into 'cult' (not arguing whether that pattern match is correct here, just saying that it might happen).

A family member concerned that their loved one might be involved in a dangerous cult might take extraordinary measures to remove that person from the situation, which might get very ugly.

I'm not sure that a nonparticipating buddy is sufficient to mitigate the risk of 'rescue'.

comment by Benquo · 2017-05-26T02:18:58.629Z · LW(p) · GW(p)

Praise: The focus on actually doing a thing is great.

Criticism: Most of this post was about the methods the house will have, why these are OK, etc. Comparatively little was about what the house is going to be used to accomplish outside itself. This seems worth putting much more up-front thought into, given how much of the point is to make a house that can actually do a thing. Probably your methods and selection criteria are not very well-calibrated for whatever project will turn out to be best - human coordination is much easier when you're coordinating about something in particular.

Obviously you will not know everything perfectly in advance no matter how much planning you do - but planning to accomplish a particular thing is very qualitatively different from planning to accomplish things in general.

Praise: A lot of the details on how to live together well (group exercise, food, time explicitly set aside for checking in) seem really good. If step 1 is just "learn to live well together," that is itself a respectable project, and one most of the Rationalists have failed at. Probably most attempts at this fail; we only observe the old communes that didn't fall apart.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T02:26:25.884Z · LW(p) · GW(p)

I like both your praise and your criticism. re: the criticism, one of the reasons I've held off a bit is a suspicion that I can't actually well-model the sorts of things the house will accomplish once fully formed (that it will be stranger/more surprising than I think). I had some thoughts, like running a talk series at prestigious universities, publishing a book or a movie, creating an org to teach rationality to middle- or high-schoolers and then doing it, building a robot car, trying to develop Veritaserum, etc. but they were all over the map.

Replies from: Benquo, Raemon
comment by Benquo · 2017-05-26T03:12:36.636Z · LW(p) · GW(p)

I can't actually well-model the sorts of things the house will accomplish once fully formed

My best guess is that having a highly specific plan that includes steering/replanning capacity and then totally abandoning it when the wheels hit the road because it turns out to be the wrong thing is way better than having a generic plan.

I had some thoughts, like running a talk series at prestigious universities, publishing a book or a movie, creating an org to teach rationality to middle- or high-schoolers and then doing it, building a robot car, trying to develop Veritaserum

I'd love to see how you'd design a house specifically for any one of these goals. Robot car is the one that I think would give you the most feedback from your internal models during the planning stage, followed by publishing a book or movie. "Create an org" is a bit recursive, and a talk series is probably either too easy or too vague. Not sure what you mean by develop Veritaserum but it seems to strongly overlap with some of Leverage's most plausibly successful research.

I claim with moderate confidence that simply walking through how the house as currently planned might go about building a robot car would substantially improve not just your plans for particular object-level capacity, but general capacity. "How will this organization change its mind?" might be a lot harder to cash out usefully than "How will this organization change its mind about valve design for the fuel injector?".

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T03:50:19.169Z · LW(p) · GW(p)

re: your best guess, that makes sense. It's possible I should just choose one of those plans above (many of which actually have lots of fairly detailed planning behind them already) and run with it for now.

Eli Tyre strongly agrees with your last paragraph, and is (correctly, and appreciated-ly) pushing for the first large-scale project to be determined sooner rather than later.

comment by Raemon · 2017-05-26T02:58:30.446Z · LW(p) · GW(p)

Hmm.

Thing that sticks out to me: you mentioned the value of doing something as a house as opposed to as a company. Some of these seem like the sorts of things one does at-a-company-in-particular (and seem like they'd require the amount of time commitment that a job requires). Is there something that distinguishes doing this as a house vs. doing this as a particularly intensive company?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T03:49:09.031Z · LW(p) · GW(p)

Note that those are deliberately not in the charter itself, because I doubt they're sufficient.

Two things distinguish it—one, starting a company is harder than starting a house, and two, a major part of this is to bind people in a society, and everyone around me already seems to have separate buckets for "my job" and "my life." I think it's important to start leveling up people and getting people moving in the "my life" bucket, and that the "my job" bucket already has plenty of forward momentum and pressure.

comment by IlyaShpitser · 2017-05-30T22:18:19.742Z · LW(p) · GW(p)

[I don't want to be here, but this is important].

To Duncan: I am not going to say you are trying to start a cult group, like some other folks did in this thread. However, I am going to suggest some background readings on cults if you are interested. Cults are a hobby of mine. My favorite cults are Scientology, unofficial Scientology derivatives who kept most parts of the belief system (yes they exist), and the Fellowship of Friends and other Gurdjieff-offshoot cults. Also Carlos Castaneda's group is a fun one. Those are the fun ones to read about.

To people Duncan is talking to: you are a human being, not a space monkey. The space monkey road is not a good road, I speak from personal painful experience. The space monkey road is going to abstract personal growth issues in a way that will be counterproductive for you in the long run, imo.

Replies from: Duncan_Sabien, cousin_it
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-31T18:24:16.009Z · LW(p) · GW(p)

Ilya: if you recommend your top 2-5 sources, I'll commit to reading at least 30,000 words in the next two weeks. (I ask for more than one source in case you propose things I've already read.)

Replies from: IlyaShpitser
comment by IlyaShpitser · 2017-05-31T18:39:06.337Z · LW(p) · GW(p)

Scientology: http://www.xenu.net/ (clambake.org). Lots of interesting links there, including about offshoots.

Castaneda: https://www.amazon.com/Sorcerers-Apprentice-Life-Carlos-Castaneda/dp/1583942068. Also some other stuff online, easy to google.

Live stuff on Robert Burton's Fellowship of Friends: http://robertearlburton.blogspot.com/. Also some exposes are googleable. Also some stuff on wikileaks. I have personal second hand info on this cult (was never in it, but know people who were). The Fellowship of Friends has their main base (Apollo, in Yuba County) in California and preys on educated, high salary types.

There are a ton of Gurdjieff offshoots in various states of virulence/danger. One thing I learned about the concept "cult" is it's a fairly fuzzy concept and sort of dissipates around the edges into fairly benign reading groups/clubs and so on. Probably has to do with how charismatic the main person (almost always male) is. So discussions of whether something is "culty" or not are, to me, kind of silly. If the question is raised at all, probably yes a bit culty.


I like reading lots of heterogenous sources and personal accounts to try to piece together what's happening in places like that, rather than books.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-31T18:45:20.708Z · LW(p) · GW(p)

Thanks! Half of these are brand-new to me; commitment made.

comment by cousin_it · 2017-05-31T10:08:04.611Z · LW(p) · GW(p)

My favorite cult to read about is Rajneeshism. It's very recent, the head guy was almost supernaturally charismatic by all accounts, and the story is hilarious! From the collection of 93 Rolls-Royces to a bioterror attack by poisoning salad bars in an Oregon town with salmonella (yes).

BTW, Scott of slatestarcodex has also chimed in against the OP's proposal:

On third thought, everyone else is right and I am wrong. The Dragon Army group house is a very bad idea, enough so that it’s okay to be forceful in encouraging Duncan to modify it or other people not to join it. This is true even if the required modifications are so hard that they end up sinking the project.

Replies from: IlyaShpitser, Duncan_Sabien, Duncan_Sabien, Duncan_Sabien
comment by IlyaShpitser · 2017-05-31T14:38:34.422Z · LW(p) · GW(p)

Slatestar: "Also, Duncan’s taking the wrong strategy by denying it’s a cult. His pitch should be “Hey, cults seem pretty good at controlling their members, let’s get together a bunch of people who are interested in using cult techniques to become the best people they can be by their own values, and see if we can make it work.”"

And the circle is complete.

Replies from: tristanm
comment by tristanm · 2017-05-31T19:15:33.450Z · LW(p) · GW(p)

I agree with Scott on this. When proposing that we should return to well-explored territory found to be dangerous (which is what I claim cults are), we should at least be honest about the fact that we're returning to old territory, and perhaps argue that it was in fact not as well-explored as we thought and there might be good things to be found there.

But instead, Duncan appears to be arguing that, according to the Pendulum model, we have moved so far past the "old way of doing things" that we skipped over the optimum and are now in another poor solution. He suggests his proposal is a gentle nudge towards the optimum, but this doesn't seem to square with the fact that the "cult" model is the "old way of doing things" that we were previously stuck in. So to me it seems more like "swing even harder in the opposite direction!" when the pendulum should actually be slowing down, moving towards the optimum with less momentum than it had previously.

Replies from: Duncan_Sabien, iamaknave
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T19:01:45.128Z · LW(p) · GW(p)

I disagree with Scott that this qualifies as a cult. Outside post that I think sums up the relevant difference.

I'm also opposed to calling it a cult just because a lot of people took one glance at it and leapt to the most uncharitable stereotype possible.

Replies from: tristanm
comment by tristanm · 2017-06-01T19:13:00.137Z · LW(p) · GW(p)

I agree that "cult" is a loaded and derogatory word and probably should be abandoned in favor of more information-carrying terminology. It might be better described as the centralized authority model. I stand by my claim that the centralized authority model is a return to old territory, though, and this meshes well with Scott's model of the formation of the bi-modal distribution of peoples' priors about this (marginalized groups have probably been exposed more to the centralized authority model than privileged Westerners).

Replies from: Lumifer
comment by Lumifer · 2017-06-01T19:37:23.098Z · LW(p) · GW(p)

"cult" ... might be better described as the centralized authority model.

I don't know about that. There are a lot of organizations with highly centralized authority which are not cults (by any definition). For example, the military.

I would probably define "cult" as an entity which, when faced with the question "Who are you going to believe, me or your lying eyes?" strongly encourages the answer "You, of course you!" In more abstract terms, a cult depends on controlling the information flow to its members, both through isolation and through inculcating high trust for "internal" claims and low trust for all "external" claims.

comment by iamaknave · 2017-05-31T20:56:42.078Z · LW(p) · GW(p)

Cults are not good at getting members to fulfill their own values. Consider the number of cults that valued sexual purity and ended up with a whole lot of rape and child molestation.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T07:27:29.379Z · LW(p) · GW(p)

BTW, Scott of slatestarcodex has updated his post with an "on fourth thought" (in addition to his excellent theory on the dynamic motivating disagreement) that states he's moving away from concern (though not necessarily all the way to "unconcerned"). I'm hoping you would've posted this yourself—having sort of implicitly committed to using Scott's opinion as an advisory authority—if I hadn't done so myself first. Not just trusting him when he's on your side, and so forth.

I'm encouraged by this both because his updated thoughts seem like good ideas and because they sound like he's thought this through more fully than I originally assumed.

Also, if we are going to keep bringing in questionable outside blogging as source material, there's this, which I feel fairly treated by and comes from an author with actual relevant life experience.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-31T18:48:04.801Z · LW(p) · GW(p)

Also, if we are going to keep bringing in questionable outside blogging as source material, there's this, which I feel fairly treated by and includes people with actual life experience rather than those talking out of their butts.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-31T17:22:39.301Z · LW(p) · GW(p)

EDIT: Scott of slatestarcodex has updated his post with an "on fourth thought" that states he's moving away from concern (though not necessarily all the way to "unconcerned").

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-31T18:33:47.305Z · LW(p) · GW(p)

Note: I've also reached out to Scott directly myself.

comment by cousin_it · 2017-05-29T19:08:16.125Z · LW(p) · GW(p)

I think most people can do well by joining the kinds of relationships that are time-tested (marriage, friendship, work, school, gym, army, church...). Given how much trouble it took society to get these halfway working and to find decent boundaries, you should be skeptical that any new ones you invent will work within your lifetime. Especially if they look suspiciously similar to cults, which we already know don't work.

And I'm not even sure why you need to invent new relationships! You might feel like you have huge problems that require one huge hammer to solve, but that feeling is deceptive. Mitigating the problems one by one, with boring well-known fixes, is easier and works better. If you want to get fit, join a gym. If you want to learn something, go to school. These will give you the right amount of structure and your daily dose of socialization, without regimenting your life like a boot camp, and you'll be guided by competent people instead of fumbling your way as a crowd of amateurs.

Replies from: Duncan_Sabien, ChristianKl
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-29T19:25:51.179Z · LW(p) · GW(p)

I think there are a fair number of wrong (or at least underjustified/unfounded) claims in the above. e.g. "cults don't work."

This is largely not a new invention; it is instead a return to structures and values that are known to have worked in the past, and that have been loosened or undermined over the past few decades.

Replies from: cousin_it
comment by cousin_it · 2017-05-29T20:08:13.511Z · LW(p) · GW(p)

I think there are a fair number of wrong (or at least underjustified/unfounded) claims in the above. e.g. "cults don't work."

My opinion of CFAR just fell from "neutral" to "mildly harmful" because they hired someone who's willing to say the above. On old LW (where Eliezer wrote a sequence on avoiding cults and I was contributing decision theory math) this would've been unbelievable. Or maybe I've been missing the signs, not being in the Bay Area.

Replies from: Duncan_Sabien, drethelin
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-29T20:15:44.565Z · LW(p) · GW(p)

You're not thinking or arguing clearly, and are instead leaping to conclusions and pulling from stereotypes.

If you lose respect for CFAR over that, it's the result of your own confusion, and the loss of your endorsement is not one I'd lose sleep over.

One can say "guns are indeed effective" and not be advocating for wanton gun violence. It's a statement about objective reality—guns do things—not a statement about normative values. Similarly, I can argue with your claim "cults don't work" (which is clearly, demonstrably false on at least some axes; cults were in fact successful enough to cause large damage to a lot of people's lives at the very least) without saying "HECK YEAH, GO CULTS."

I'll continue to engage, or not, based on whether or not you respond reasonably to the above. Sorry for the impatience, but I've written thousands upon thousands of words in this thread by now, and I'm not at all in the mood to let people strawman me at this point (even if they want to try to pull a sneaky status move by claiming seniority-on-the-forum and trying to shame a certain kind of statement without any model behind the shaming).

(I also note that you didn't bother to respond AT ALL to my claim that you're making unfounded leaps, nor to my claim that this is in fact a return to previous proven systems rather than an attempt to invent a new one. This makes me think that, in addition to smushing together unrelated things in your arguments, you're not actually here to discuss (i.e., to swap statements back and forth on a topic and actually interact with what the other person is saying), but are instead here to score points or to confirm (rather than falsify) your own models.)

Replies from: cousin_it
comment by cousin_it · 2017-05-29T20:58:03.905Z · LW(p) · GW(p)

If you took my original comment to mean that cults are harmless, that's a bit bizarre.

As for previous proven systems, I'm not sure which ones you mean. The closest analogue is religious or socialist communes, which turn bad too often for my taste. The happiest exception is kibbutzim, which weren't nearly as authoritarian as your idea. Then you have the army, which exists today just fine and whose purpose we know; I'm not sure why we need another one. Then there are boarding schools, sports camps, etc., but these are based on learning from professionals, which you don't have.

Replies from: Duncan_Sabien, Decius
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-29T21:14:31.146Z · LW(p) · GW(p)

sigh.

I took your original comment to be saying "cults don't work."

Then, when I said "they do, though," I took your second comment to be pearl-clutching and saying "well, now I think CFAR must be (slightly) evil or stupid for hiring someone who is willing to say out loud that cults work (gasp)."

You cannot possibly have drawn out of my statements above "Duncan thinks cousin_it thinks cults are harmless."

I'm going to disengage because it's not easy to have discourse with you (i.e., to say things clearly, stick to a topic, expose reasoning, and actually make progress toward truth or convergence). I don't understand how your reasoning process works. I'm finding this subthread frustrating and low-value, and thus far I generally disagree with the specific points I have been able to tease out of what you're saying (and I trust my domain knowledge and expertise more than I trust your skepticism-without-any-concrete-evidence-backing-it-up-from-someone-who's-already-demonstrated-willingness-to-make-unfounded-leaps).

comment by Decius · 2017-05-30T02:42:00.826Z · LW(p) · GW(p)

The Army works just fine, and has goals that aren't ours. Why not steal much of their model /which works and has been proven to work/?

Especially if the problematic aspects of Army culture can be avoided by seeing the skulls on the ground.

Replies from: Lumifer
comment by Lumifer · 2017-05-30T15:06:35.342Z · LW(p) · GW(p)

Militaries have a pretty big stick. You can go to prison for insubordination or disobeying orders; in wartime you might well just be shot for that. The Dragon Army... will give you a stern talking-to?

Replies from: Decius
comment by Decius · 2017-05-30T23:25:10.634Z · LW(p) · GW(p)

.... will banish you from the tribe.

The only person I heard of going to the brig was one who broke into barracks and stole personal property. Falsifying official records or running off to run a side job as a real estate broker was more of a '30 days restriction, 30 days extra duty, reduction in rate to the next inferior rate, forfeiture of 1/2 month's base pay for 2 months' thing.

comment by drethelin · 2017-05-29T20:43:20.299Z · LW(p) · GW(p)

This is why we need downvotes.

Replies from: cousin_it, Duncan_Sabien
comment by cousin_it · 2017-05-29T21:57:38.956Z · LW(p) · GW(p)

Actually I agree. It feels weird to see that one person upvoted my comment without knowing how many would have downvoted it. The same might apply to Duncan's post, from the comments it seems like it was really polarizing, but the score only shows the 28 upvotes. If I may be allowed another reference to old LW, Eliezer used to advocate that people downvote more, ideally without replying. I think he saw it as a defense against noise and then left when the noise became too much.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-29T23:46:35.112Z · LW(p) · GW(p)

You can get a clearer-if-still-imperfect sense from contrasting upvotes on parallel, opposing comments, e.g. the post has 28 upvotes and 1029823904812309481320948blargltroll's comment has 10. I highly doubt this would have ever received sufficient mass of downvotes to become invisible.

Replies from: TezlaKoil
comment by TezlaKoil · 2017-05-30T00:42:02.635Z · LW(p) · GW(p)

You can get a clearer-if-still-imperfect sense from contrasting upvotes on parallel,

I'm fairly certain that P(disagrees with blargtroll | disagrees with your proposal) >> P(agrees with blargtroll | disagrees with your proposal), simply because blargtroll's counterargument is weak and its followups reveal some anger management issues.

For example, I would downvote both your proposal and blargtroll's counterargument if I could - and by the Typical Mind heuristic so would everyone else :)

That said, I think you're right in that this would not have received sufficiently many downvotes to become invisible.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T01:03:01.156Z · LW(p) · GW(p)

First time I've heard it referred to as a heuristic. +1 =P

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-29T21:22:15.968Z · LW(p) · GW(p)

This is a little ambiguous, and it would be more helpful to be concrete.

Replies from: drethelin
comment by drethelin · 2017-05-29T21:40:58.142Z · LW(p) · GW(p)

It's an appeal to authority and someone shitting on an organization based on one line of a lesswrong comment by one member of that organization, with no request for clarification or depth.

comment by ChristianKl · 2017-05-29T21:21:46.325Z · LW(p) · GW(p)

I don't think there's a good argument that a Western Church works that much better than a Yoga Ashram, and the setup of the Dragon Army is relatively similar to a Yoga Ashram.

If you want to learn something, go to school. These will give you the right amount of structure and your daily dose of socialization, without regimenting your life like a boot camp, and you'll be guided by competent people instead of fumbling your way as a crowd of amateurs.

When comparing kids with decent parents, schooled children don't do much better than unschooled children.

When I was at university learning computer programming I quite often used the not-time-tested StackOverflow over the time-tested method of asking the tutor.

Replies from: cousin_it
comment by cousin_it · 2017-05-29T21:39:40.909Z · LW(p) · GW(p)

Churches don't have cohabitation, they're more like clubs, so the risk is lower. And in an ashram you hopefully get taught by a yoga professional, not just bossed around. I don't see the value of OP's proposal compared to either.

I thought homeschooled kids were usually taught by parents? Though I agree that you can learn stuff on your own. The problem is learning a broad range of ideas in a manageable time, not just picking a narrow path through topics that catch your interest, and for that many adults find universities useful. Not to mention you meet many smart people and pick up good memes without realizing it. Somewhere on Tumblr I saw a proposal for improving MIRI's research that said simply "everyone gets a PhD"; I thought there was a lot of truth to that.

Replies from: Duncan_Sabien, ChristianKl
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-29T23:47:54.847Z · LW(p) · GW(p)

I note that you continue to assume/argue as if there will be zero relevant professional expertise, despite the fact that the professional expertise CITED IN THE MAIN POST is far from the only professional expertise that will be brought to bear during the experiment. In our very first dry run outing, we hired professional instruction to learn a new skill—you are factually incorrect in your assertions.

You are doing your level best to make sure to interpret everything here in the strawest, most negative possible light. "just bossed around." I'm starting to assume you literally haven't read the post, because it's rapidly becoming the only possible explanation for your conclusions.

Replies from: cousin_it
comment by cousin_it · 2017-05-30T00:45:03.625Z · LW(p) · GW(p)

You're not setting up a school with yourself as teacher, though. You're setting up a commune with yourself as boss, with rules above and beyond what schools usually require, and which would lead to student rebellion if they were imposed in a university. So if learning is the point, I'd like to understand how your thing is better than a school.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T01:02:18.402Z · LW(p) · GW(p)

To which I reply, if you'd actually like to understand, a good place to start would be "read the post." At least the troll did that much.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T02:13:51.190Z · LW(p) · GW(p)

Also, you miiiiiiiiiight try not leaning so hard on the typical mind fallacy. West Point is a university that people self-select into with far, far, far stricter rules than this, and they don't rebel. Ditto many sorts of monasteries and temples and martial arts dojos and retreat centers (including many that are non-religious), ditto all sorts of invasive practices by organizations attempting to turn around lives-gone-astray or turbocharge the already-successful (e.g. regimens put into place by life coaches or intensive business training groups).

You're confusing "I don't like this" with "this is objectively bad" (or "I would rebel against this" with "no sane person would fail to rebel against this") which—to quote you—on Old LW would have been unbelievable.

Once you make even a single good faith attempt to pass my ideological Turing test (my attempt to pass yours is the multithousand word post above), I'll start taking your criticisms as seriously as I've taken everyone else's.

Replies from: cousin_it
comment by cousin_it · 2017-05-30T08:50:34.426Z · LW(p) · GW(p)

"Life coaches", bullshido dojos and religious brainwashing houses aren't a good group to be in. It seems to me that such places are fine at teaching authority, but not the best way to teach anything else. I wouldn't go to West Point to learn math or even history, I'd go somewhere that focuses on math or history instead. And even for fitness, dojos lose to compartmentalized workouts like lifting or BJJ.

Maybe my mistake is misunderstanding the rationalist community. I know that they are a slightly weird bunch, but it'd take a lot to convince me that a boot camp environment would suit them. In the Russian Army such folks tended to be miserable, whereas in relaxed civilian jobs they thrived. That's part of why I'm replying to you: I feel that nerdy types are vulnerable to proposals like yours but ultimately don't benefit from them. They already have a lot of tension and random windmills going in their minds; putting them in a pressure container makes it worse, compared to doing casual, normal stuff.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T16:52:43.221Z · LW(p) · GW(p)

Your mistake isn't misunderstanding the rationalist community, it's strawmanning and stereotyping and typical minding. If you stopped for one second to think, huh, maybe somebody who's clearly not an idiot and whose models so strongly disagree with mine might see something I don't, and approached this thing with curiosity instead of blunt assertions about how it's terrible and you know better, you could've, I dunno, asked questions about places where you're confused, and I would have answered them, and like many, many other places in this thread, there would've been a process of mutual updating and convergence as I showed you cool conclusions from thinking I'd already done, and you helped me find holes and flaws and make fixes, and both of us came out of the interaction with a clearer view of reality and a stronger ability to do good in the world.

Even now, like a dozen comments in, you refuse to stop it—putting scare quotes around life coaches and attaching the word bullshit to the word dojos and adding brainwashing to the phrase "religious houses." You are not here in good faith; you've got a negative model you're in love with and you're confirmation biasing all over the place. You're every bit as much a troll as the anonymous person was—you're just more subtle about it.

Oh, well.

Replies from: Lumifer, cousin_it
comment by Lumifer · 2017-05-30T17:38:46.092Z · LW(p) · GW(p)

You are not here in good faith

Besides this being pure ad hominem, you seem to understand "good faith" as trying to help you. Let me point out that no one has any obligations to help you or to cooperate with you -- refusal to do so is not bad faith. Pointing out that your endeavour is misguided and doomed to failure (assuming that's a point of view honestly held) is not in bad faith either, even if you do not accept the arguments made.

You are perfectly free to not cooperate with people who won't cooperate with you, but that lack of cooperation on their part is neither malice nor trolling.

You got a lot more defensive over the past few days.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T19:43:59.466Z · LW(p) · GW(p)

I disagree that the summary is ad hominem—I think it is a concrete description of my highest-probability model explanation of cousin_it.

I don't interpret good faith as trying to help me. I do interpret it as trying to help us, where I define "us" as "all of the people on LW and in the rationalist community" specifically, and more broadly as "all humans."

I don't see cousin_it as doing any kind of truth-seeking or curious investigation, nor do I see them as taking a principled stance against something that is actively dangerous (the way the troll did). Instead, they're just throwing out straw criticisms without actually bothering to put in the work to engage with the actual topic at hand. It smacks of either careless antagonism or an attempt to score cheap points, whereas many of the people who are openly and unrepentantly opposed to this project still seem, to me, to be acting in good faith.

Replies from: Lumifer, Duncan_Sabien
comment by Lumifer · 2017-05-30T20:57:54.491Z · LW(p) · GW(p)

I disagree that the summary is ad hominem—I think it is a concrete description of my highest-probability model explanation of cousin_it.

Buzzword compliance aside, this is precisely what ad hominem is: "a ... description of ... ". The subject is your proposal for a commune -- not your beliefs about cousin_it.

I don't interpret good faith as trying to help me. I do interpret it as trying to help us, where I define "us" as "all of the people on LW and in the rationalist community" specifically, and more broadly as "all humans."

That sounds to me like pious crap. I don't see you as different from the 99.9+% of people who are not qualified to judge who is trying to help "all humans" and who is not -- and that's even besides the oft-made observation that the road to hell is never in need of repair.

Let me remind you again -- we are discussing your proposal for a commune, not whose intentions are pure.

As I said, you are free to cooperate or not, but focusing on what you see as personal shortcomings of people who disagree with you seems like a road that leads to bad places. Especially given that you put yourself forward as the Dear Leader of this potential commune.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T21:03:13.199Z · LW(p) · GW(p)

Right. The problem is, only some of us are actually discussing.

In point of fact, most of us are actually discussing, but threeish people have just dropped in to lecture with no willingness, even hypothetical, to change their minds (or at least none credibly demonstrated, as I claim I've credibly demonstrated mine).

EDIT: Also, on reflection, I still think you're either misusing the term ad hominem or mischaracterizing the critique I'm making of cousin_it. I'm not trying to make claims about them as a whole person (e.g. they're bad in general or they lack the ability to engage in good faith in general), which is I think what is required for it to be ad hominem—I have to be making some fundamental attribution, and I'm not. I'm saying that the words they've typed in this thread are inconsistent with someone acting in good faith, which is a claim about observations and causality, and not about character.

Replies from: Lumifer
comment by Lumifer · 2017-05-30T21:09:15.819Z · LW(p) · GW(p)

You have unreasonable expectations for an internet discussion :-P

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T21:14:22.913Z · LW(p) · GW(p)

I thought Less Wrong was special. I actually did.

Replies from: Lumifer
comment by Lumifer · 2017-05-30T21:17:33.434Z · LW(p) · GW(p)

It is. Imagine what would happen if you were to put your proposal onto, say, Reddit. However LW, thankfully, is not a hive mind.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T21:23:09.378Z · LW(p) · GW(p)

I assume you have noted, because you're perceptive, but just to say here—I have repeatedly expressed credible gratitude for the presence of countervailing models and criticisms and so forth, and done at least some significant updating in plain sight. I don't think it would be fair for people to round me off to "was looking for a hive mind."

Replies from: Lumifer
comment by Lumifer · 2017-05-30T21:27:08.815Z · LW(p) · GW(p)

The point here is merely to what degree LW is special and what can you expect from it. I neither said nor implied that you went looking for a hive mind.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T21:40:38.980Z · LW(p) · GW(p)

Yeah, I want to similarly underscore/perhaps redundantly state that you have demonstrated extremely high and consistent credibility when it comes to productively engaging in discourse. With the comment above, I was underscoring a thing that plausibly could've just gone unstated.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T19:46:47.052Z · LW(p) · GW(p)

I agree I got a lot more defensive over the past 36 hours, but you'll note it's confined almost entirely to two specific cases where I feel people are approaching with unjustified confidence in extremely uncharitable models, after all of the other discussion that's gone on (which I feel should've earned me some credibility).

Replies from: Lumifer
comment by Lumifer · 2017-05-30T21:03:51.638Z · LW(p) · GW(p)

unjustified confidence in extremely uncharitable models

From your point of view, maybe -- but it's not the only one.

You seem to be welcoming comments about which parts of your plan to slightly bend, adjust, and repaint, but you are visibly hostile to the idea that your proposal is flawed at its core and cannot be saved regardless of tinkering with its details.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T21:13:08.312Z · LW(p) · GW(p)

Yes—that's because the proposal is not flawed at its core, and I'm not going to pretend that it is to satisfy pearl-clutchers and lecturers. (More accurately: I have greater than 95% confidence that this experiment, conditioned on it meeting existing criteria for launch, does not cause great harm to people on the scale of six months.)

I note that I am willing to engage with my real, extant uncertainty with people who don't approach from a holier-than-thou know-it-all condescending lecturing position. For instance, it's not even clear that the house will actually happen, because it's not clear that there will be enough people who think that it's a good idea. I'm not trying to convince any of the potential members—instead, I'm simply revealing the models, shining as much light on them as possible, so people can neutrally evaluate, and I still have ~33% credence on "there won't be enough justified faith to do it."

If someone were to say "Hmmm. I'm reasonably confident that this proposal is flawed at its core and can't work; here are my objections and here are my questions," I'd engage with them (and this is a credible claim if you look back through this thread). What I won't engage with is people who don't even know me who are trying to pull status moves to put themselves above me (and therefore in a position to judge) from the get-go.

As another way to state my point, I'm credibly offering good faith and charity to the vast majority of critics (all but the bottom 3%). But the people who are coming in with deontologically hostile models are not offering me any good faith and charity in return. And you're right that no one owes me that, but similarly I don't owe them any response other than "yeah, screw you, too."

Replies from: Lumifer
comment by Lumifer · 2017-05-30T21:25:04.672Z · LW(p) · GW(p)

that's because the proposal is not flawed at its core

And how do you know that?

Or, let's put it this way: which evidence short of actually attempting to implement this would persuade you that the proposal is flawed?

who are trying to pull status moves

So, how much do you care about status? Why is it a big deal?

similarly I don't owe them any response

True. But you are offering them a response. This response illustrates how you react to what you believe is unjustified criticism -- and it is not "I disagree. Tap."

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T21:39:29.138Z · LW(p) · GW(p)

The confidence regarding it not being flawed at its core comes from: related past experience; confidence in the individuals involved; the direct evidence of the positive value of norms stolen from Dreamship and Event Horizon; faith in the safety valves of Circling, pair debugging, internal and external check-ins, and commitment to iteration; and the results of having run a trial version that went quite well.

There was evidence I could have gathered from the experimental weekend that would have persuaded me the proposal was flawed, and there were similarly potentially unknown arguments that people here on LW might have offered up that would have been persuasive, too, but at this point, I can't outline concrete predictable evidence that would cause me to not run this (not actually all that ambitious) experiment. It's like the pants ending up in Washington DC—there probably exists evidence that would convince me, but I can't reasonably guess what it might be.

In response to both the status question and the owed-response question, I do believe that people need to adopt a policy of loudly objecting to moves they want to be considered outside the Overton window, especially if those people have some social capital to spend (because they're doing it not only for themselves but also on behalf of the disenfranchised who can't afford to push back). In other words, in part, I'm fighting the two people I think are Doing It Wrong because I want to be publicly seen fighting on behalf of not that. I think that it overall increases rather than decreases my credibility on axes that I think are relevant.

Replies from: Lumifer
comment by Lumifer · 2017-05-31T14:52:21.809Z · LW(p) · GW(p)

I do believe that people need to adopt a policy of loudly objecting to moves they want to be considered outside the Overton window

You are either grandstanding or misusing terms. People's objections to your proposal (including both form and content) are firmly within the Overton Window and are nowhere near its boundaries. I have trouble believing that you actually want as tiny an Overton Window as you imply.

If I may make a suggestion? Stop digging. The narrower you make the range of acceptable thought/speech, the less adequate you look. The more you attack and denigrate people who fundamentally disagree with you, the less credibility you have as a leader.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-31T17:40:53.014Z · LW(p) · GW(p)

Note again that we are on Less Wrong and within the rationalist community, both of which are very much built around norms of reasoning and discourse; I'm not suggesting a tiny Overton window for the world at large or even one that's this constricted on all axes.

But yes—I think both Less Wrong and the rationalist community would be far, far closer to the ideal versions of themselves if they doubled or tripled their callouts-of and refusal-to-engage-with sloppy and biased and inappropriate discourse. Overton window being "things a politician can say on TV"—I want "styles of discourse that a high-status rationality community member can publicly endorse" to not include the stuff cousin_it and handoflixue were doing. My concerns are almost entirely about form, because I think correct form leads to improved content. I could take any of the objections that cousin_it or handoflixue or 128bargl had and recast them into (e.g.) "the sort of sentences Julia Galef or Rob Bensinger would say," and they'd be worth fully engaging with, but in their current form, I claim there's more long-term civilizational value to rejecting them.

I'm entirely okay with losing credibility with people who don't value the above. Those people shouldn't hold me in high esteem—we have at least partially opposing goalsets, and will at least occasionally be actual antagonists relative to one another; I'm actually taking some mild encouragement from how violently people I fundamentally disagree with are disagreeing with this project, because it's weak circumstantial evidence that I'm moving in the correct direction. (i.e. how adequate I look to you is not necessarily an appropriate measure; Sanders and Clinton both frequently made moves that made them look less adequate to some people.)

And I again disagree with your characterization that I'm attacking and denigrating people who fundamentally disagree with me, and I'm surprised that you're rounding things off that carelessly. If you want to see personal attacks and denigration, look at (e.g.) the blog post that cousin_it cited to Kaj. Nothing I've done here comes anywhere close to that—I'm attacking and denigrating specific forms of argument, and specific modes of reasoning. For example, if you look at the time where handoflixue asked a clear and cogent question without any unfounded critical leaps, I gave a multiparagraph answer with lots of concrete detail. I grumbled at them a bit for their other interactions with me, but I didn't treat their point or question any differently because they'd bugged me elsewhere. I have no problem with specific people; it's just that at some point my prior on the VOI of engaging with them drops too low. It's Bayes—one of my fundamental moral principles is that you should trust in revealed preferences, and barring credible reasons to believe someone's made a major personality shift, you should evaluate them as the sum of their actions.

(Also, I think it's not grandstanding if I'm literally practicing what I'm preaching in real time? Like, I'm doing exactly what I claim a person ought to do, not just moralizing with no action behind it.)

Replies from: Lumifer
comment by Lumifer · 2017-05-31T18:10:33.428Z · LW(p) · GW(p)

would be far, far closer to the ideal versions of themselves if they doubled or tripled their callouts-of and refusal-to-engage-with sloppy and biased and inappropriate discourse

I don't think so. I think they would be dead or sufficiently engrossed in navel-gazing to be functionally dead.

I claim there's more long-term civilizational value

So, grandstanding.

Those people shouldn't hold me in high esteem—we ... will at least occasionally be actual antagonists

It's perfectly reasonable to hold one's enemies in high esteem and in fact one of the traditional measures of success is the caliber of enemies you've acquired along the way. For non-fatal competitions you actually want the best, highest-esteem enemies you could find -- they will push you to become better (as opposed to nuisance pests who will only encourage you to stay irritated and smug).

I'm actually taking some mild encouragement

That's the classic "reverse stupidity" argument.

Nothing I've done here comes anywhere close to that

As Alicorn pointed out, the situation is not symmetric. Writing a Tumblr rant is a very different thing from asking multiple people to surrender not insignificant amounts of autonomy to you, as well as become emotionally and financially entangled in a project of yours.

I'm attacking and denigrating specific forms of argument, and specific modes of reasoning

No, you aren't. You actually tend to oscillate between ad hominem attacks and replying to specific criticisms.

Or maybe you don't think of the "you think wrong thoughts expressed in the wrong way and you should be ashamed of yourself" as an attack? Let me assure you that it is.

at some point my prior on the VOI of engaging with them drops too low

If that were so, you would stop engaging with them. But you don't.

ETA

I think it's not grandstanding if I'm literally practicing what I'm preaching in real time?

That's not how it works. If you loudly proclaim that, say, the use of mis-gendered pronouns is a major human rights violation akin to torture (or that letting trans people use the bathrooms they want is the end of Western civilization), you are grandstanding even if you literally throw a temper tantrum in real life.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-31T18:43:31.335Z · LW(p) · GW(p)

I'm now feeling deliberately misunderstood, and if you're doing that on purpose, I ask you to stop.

We disagree about Overton windows; that's good, and cruxy.

According to the definition of grandstanding that Google throws up when you type in the word, you're misusing it (particularly, the word requires you to make claims about my internal state and purpose, i.e. what I'm doing X for, and your best source of data there is my self-report). It's not grandstanding, and I note it's far easier for you to name-call than to actually make a specific critique stick.

It's perfectly reasonable to hold some of your enemies in high esteem—for instance, I note we're disagreeing pretty heavily here, and I have a great deal of respect for you. But it's unfounded to jump from some to all. Many of the people opposed to this idea are not high-caliber thinkers and reasoners, whatever other value they have as human beings.

reversed stupidity

I was extremely careful to say what I actually meant, and then you were extremely careful to strawman me by quoting only part of my words, as if I didn't say "weak circumstantial" right in the same sentence.

Operationalize your claims that I'm making ad hominem attacks, and I'll address them one by one. I predict you'll definitely be able to find 1-3 examples of me sticking a foot across the line, and that they'll be outweighed by a factor of at least five by me doing the thing I claimed I was doing. I predict you will find no examples that are anywhere near as gross as the ones put forth by cousin_it and handoflixue. I'd be willing to monetize this as a bet.

I've stopped engaging with them for their own sake. I have previously explained to you that I think it's important to be seen openly defending good norms, and thus continue to engage with them for myself and everyone else. I think it was pretty lame of you to just ... pretend I hadn't said that, and again strawman by criticizing me for the thing I'm not really doing.

I am losing respect for you in this subthread, but right now it's something like "I had you at 957 points, and I'm worried you're going to drop to 949." Hopefully this is just some combination of a little bit of triggering and the fact that both of us care about getting this right, and not that you endorse overall the tack you're taking any more than I'd endorse the worst 10% of my own reactions on this post.

Replies from: Lumifer
comment by Lumifer · 2017-05-31T19:01:50.465Z · LW(p) · GW(p)

definition of grandstanding

My working definition of grandstanding is basically "declaring that one's words or actions have outstanding significance or impact". Case in point: you being concerned with "long-term civilizational value". I strongly suspect that your cluefulness about long-term civilizational values is... limited.

as if I didn't say "weak circumstantial"

It doesn't help you. Weak circumstantial evidence is still evidence, and under reverse stupidity you just don't have any.

Operationalize your claims that I'm making ad hominem attacks, and I'll address them one by one.

I have no interest in fisking your comments. I offered you an outside view -- if you think it's wrong, there is no reason for me to try to convince you.

I've stopped engaging with them ... and thus continue to engage with them

Pick one, will ya? X-)

I think it's important to be seen openly defending good norms

Maybe, but when you say stuff like "I deny your right to judge and interrogate me" you sound like an idiot. The fact that you were capable of typing that sentence and pressing "Send" is not a good sign.

I am losing respect for you in this subthread

I appreciate your concern, but I think I'll be fine. Really, I will :-P

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T00:33:43.066Z · LW(p) · GW(p)

I'm glad, because you just lost a lot more. I do, indeed, think your outside view is deeply flawed, and I've just lost an illusion about how you in particular are likely to go about engaging in discourse. As an example, you just pulled a fifth-grader-bully trick in the quote

I've stopped engaging with them ... and thus continue to engage with them

that was purposefully thickheaded in ignoring the whole point of that paragraph.

I didn't think you would troll/deliberately mischaracterize, endorsedly, when not triggered-in-the-moment. That was firmly outside of my model of you. Now I know something new about you, and it will be useful to me in the future.

Replies from: Lumifer, jordanlammon
comment by Lumifer · 2017-06-01T01:12:15.891Z · LW(p) · GW(p)

A funny thing about you: the more you talk, the worse you look. You started by presenting a very reasonable image -- you listened and you expressed willingness to take into account people's concerns. A bit more than a week passed and you're already screaming at people IN ALL CAPS, calling them "a jerk" and dropping dark hints about knowledge that "will be useful to [you] in the future". How is your stress tolerance? You are not performing well when people disagree with you.

You also try to be manipulative -- not very successfully, mind you -- by dispensing praise and criticism in order to gain the results you want. Since we're being all frank'n'all, my opinion of your adequacy as a leader went down a lot during this week -- mostly because you wouldn't shut up. I sincerely reiterate my advice to stop digging.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T02:35:59.422Z · LW(p) · GW(p)

I don't mind this whole "the more you talk, the worse you look" thing, because a) it's symmetrical, and b) I'm entirely comfortable being seen for having exactly the preferences and principles I do have.

I've responded sharply, at this point, to exactly four people: a universally acknowledged troll, two people who started out clearly strawmanning me and being heavily anchored on negative opinions without justification, and now you, as you abandon standards in pursuit of scoring points.

I have not willfully misrepresented people, or immediately leapt to unfounded conclusions about their deep character, or engaged in cheap-trick point-scoring tactics against people who didn't shoot first (with one exception that Alicorn called me out on, and I edited), or any of the other behaviors that I don't reflectively endorse. I have certainly pulled none of the subpar junk that you've pulled in this subthread, and I'm proud to have opposed you as you've done it.

As I've noted elsewhere—I don't much care about irrelevant opinions, and as people have demonstrated themselves to be below the bar of what I expect from a LWer and a rationalist, I correspondingly cease to mind what their overall judgment of me is. I generally try to judge how likely a person's opinion is to closely correlate with truth and useful perspective, and while I hold my disregard with skepticism on the meta level, so as to not unfairly write people off, ultimately evidence is evidence. There are some people who simply demonstrate, fairly conclusively, that they aren't going to play fair, think straight, update on evidence, etc., and are literally not worth listening to, in a VOI sense (though they may still be worth opposing in public).

I state again that something like 97% of the participants in this thread do seem like their opinions are likely to closely correlate with truth and provide useful perspective, and I'm grateful for the hours that total strangers have poured into helping me dodge mistakes. This project is something like 50% less likely to fail and 30% more likely to be really successful (relative to where it was a week ago) thanks to those contributions.

And sure—probably most of the neutral parties are shaking their heads somewhat—thinking things like "Duncan's being too aggressive here" or "Duncan's fighting fights not worth fighting" or "I wish he hadn't posted X." But that's coin I'm spending deliberately, in open defense of things I think are worth defending. There's no point in social capital if all you do is hoard it—at some point, people who've accrued it ought to take risks holding lines that others can't afford to defend. If I lose 5% of the respect that I've gained, but also meaningfully embolden others who were too hesitant to defend themselves against bullies by giving them the sense they're not the only ones bothered by poor discourse, that's a purchase I endorse. Freedom from trolls isn't free—turns out even Lumifer will occasionally use Trump-style tactics, if they dislike you enough.

Replies from: Lumifer
comment by Lumifer · 2017-06-01T05:08:12.175Z · LW(p) · GW(p)

LOL. You smell SJW-ish. A white knight selflessly spending his social capital to defend the weak against the bullies. Against "Trump-style tactics" even! And, of course, you will not be denied, for your cause is just.

You are clearly incapable of shutting up so this will be amusing.

So tell me more about things you think are worth defending -- especially from the likes of me. Are we still talking about the mere forms of expression which you disapprove of, or is there some deeper ideology involved? Do you see me as lacking honor, or empathy, or proper morals, or the desire to remake the world, or something else?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T05:17:59.415Z · LW(p) · GW(p)

I note for others reading this comment and wondering why it hasn't been addressed that I've at least temporarily ceased replying to Lumifer and a couple of other posters on a policy level, for reasons surrounding norms of discourse, strawmanning, epistemic humility, presence or absence of good faith, etc. It's possible that the above contains good questions or insights; if someone else chooses to repost/re-ask/rephrase sections of this, I'll likely respond to them.

http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/

Replies from: Lumifer
comment by Lumifer · 2017-06-01T05:22:10.822Z · LW(p) · GW(p)

Oh, good! So I can point out things to you and you won't be able to talk back? :-D

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T05:23:13.622Z · LW(p) · GW(p)

I note for others reading this comment and wondering why it hasn't been addressed that I've at least temporarily ceased replying to Lumifer and a couple of other posters on a policy level, for reasons surrounding norms of discourse, strawmanning, epistemic humility, presence or absence of good faith, etc. It's possible that the above contains good questions or insights; if someone else chooses to repost/re-ask/rephrase sections of this, I'll likely respond to them.

http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/

Replies from: Lumifer
comment by Lumifer · 2017-06-01T05:25:12.442Z · LW(p) · GW(p)

Once more, please :-)

It's been a while since the last time I was officially added to the list of the Enemies of the People and... ritually cast out, I guess? This time there's even a list of high crimes I'm guilty of -- "reasons surrounding norms". Woe is me!

comment by jordanlammon · 2017-06-01T05:12:42.823Z · LW(p) · GW(p)

something to do new thing and that was purposefully paragraph we have assignment help uk to solve the controversy like this

Replies from: Lumifer
comment by Lumifer · 2017-06-01T05:18:34.411Z · LW(p) · GW(p)

SPAMMITY SPAM SPAM

comment by cousin_it · 2017-05-30T18:00:53.439Z · LW(p) · GW(p)

I was hoping you'd show how your community will be better than current authoritarian communities, which I deeply dislike. Instead you insist that current authoritarian communities are fine and we need more of them. Hopefully you see why that's unlikely to change my mind, imperfect as it is. Heck, my dislike for cults was clear from the first comment, which makes your jumping onto it even more weird. A master of soft skills would've chosen literally anything else as an opening. Even now in the middlegame you can still turn it around, though I can understand if you're frustrated and don't want to. My own goal in this conversation is mostly achieved, I can go on or not, up to you.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T19:47:52.477Z · LW(p) · GW(p)

I was hoping you'd show how your community will be better than current authoritarian communities, which I deeply dislike.

Please. Actually. Read. The. Available. Information. Above.

Replies from: cousin_it
comment by cousin_it · 2017-05-30T20:13:23.905Z · LW(p) · GW(p)

I've read your post; it's nothing but red flags. You're literally proposing that DA members greet each other with a salute and trust you more than themselves. The few upsides you mention (participants are smart, time is limited, etc) come across as excuses why you should get power now. Check out nostalgebraist's tumblr for more folks who got the same vibe. Your comments make things worse; you clearly like authoritarian communities in your heart rather than considering them a necessary evil.

comment by ChristianKl · 2017-05-30T08:49:22.438Z · LW(p) · GW(p)

I thought homeschooled kids were usually taught by parents?

I use the phrase unschooling, not homeschooling, but even if a child gets taught by their parents, that still suggests the average teacher is not skilled enough to provide value that lets their students outperform students taught by laypeople.

The problem is learning a broad range of ideas in a manageable time, not just picking a narrow path through topics that catch your interest, and for that many adults find universities useful. Not to mention you meet many smart people and pick up good memes without realizing it.

The same arguments could be made for why the Dragon Army is a good idea.

Replies from: cousin_it
comment by cousin_it · 2017-05-30T10:15:37.328Z · LW(p) · GW(p)

Let's take a random skill from the proposed curriculum, like welding. You could try externally motivated self-study at OP's group house, or you could go to a community college and ask how long they'll take to make you a certified welder. It seems to me that even without the authoritarian LARPing, the first option is a weird hybrid, like a flying submarine. It's more costly than either full self-study (if you can do it) or fully spoon-fed learning at a traditional place for a set term.

The OP's proposal is to dial motivation to 11 and hope that it leads to effective learning. Even if that doesn't backfire, at most it lets you see the next bottleneck, and you don't know how many there are. Traditional schools have solved all of them, and can teach people predictably without requiring much motivation (except for showing up). For well understood skills, I think they are better than rationalist groups in every way.

Replies from: ChristianKl, JacekLach
comment by ChristianKl · 2017-05-30T20:21:56.576Z · LW(p) · GW(p)

Traditional schools have solved all of them, and can teach people predictably without requiring much motivation

Traditional schools know how to teach welding, but when it comes to teaching introspection, or to teaching teaching and tutoring skills, it's less clear.

Teachers who have a master's degree aren't better than their colleagues. As far as we know, those two years spent at university learning to teach better are worthless for actual teaching skill.

I would also doubt that it's easier to learn programming via a community college course than by living together with people who can program well and who are willing to tutor you a bit.

Replies from: cousin_it
comment by cousin_it · 2017-05-30T20:27:20.802Z · LW(p) · GW(p)

I'm sorry to say it, but teaching introspection, rationality, or other skills we don't have reliable tests for is a scam. The fact that more than half of the OP's curriculum consists of such skills is a big red flag. And learning programming doesn't require any of the measures described in the OP; I know it, you know it.

Replies from: ChristianKl
comment by ChristianKl · 2017-05-30T20:35:28.825Z · LW(p) · GW(p)

And learning programming doesn't require any measures described in the OP, I know it, you know it.

Yes, but you make the argument that traditional institutions of learning are superior. For programming, I don't think that's the case.

I'm sorry to say but teaching introspection, rationality or other skills we don't have reliable tests for is a scam.

Do you believe that liberal arts colleges that claim to teach critical thinking are also scams? From my perspective, they are a lot more scammy, because they actually have the money and time to research whether their claims are true.

I think a person who tries a new project with a goal they can't measure well is a lot less scammy than big institutions like a liberal arts college.

Replies from: cousin_it
comment by cousin_it · 2017-05-30T20:44:18.989Z · LW(p) · GW(p)

100% agree that formal education for programming sucks today.

Do you believe that liberal arts college who claim to teach critical thinking are also scams?

Yeah, pretty much. They take your money and then you can get a job doing something else if you're good at that thing.

I think we're mostly in agreement?

comment by JacekLach · 2017-05-30T18:38:39.226Z · LW(p) · GW(p)

I don't think the goal of the OP's proposal is to learn any particular skill. To me it mostly looks like trying to build a tightly-knit group, so that each member can use the others as external motivators and as close friends with whom to discuss life plans and ideas in a depth not really possible between modern colleagues and friends. I.e., the goal is not learning a skill; it's building a mutual support group that actually works.

comment by theotetia · 2017-05-26T21:15:34.940Z · LW(p) · GW(p)

Have you ever lived under obedience? This is often considered a prerequisite for holding command of e.g. a monastery.

Replies from: None, Duncan_Sabien
comment by [deleted] · 2017-05-27T01:28:20.617Z · LW(p) · GW(p)

Would anyone who has lived under obedience write such an astoundingly self-unaware post?

The answer to both questions is no.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-27T01:53:12.431Z · LW(p) · GW(p)

No, I haven't. I've participated in a variety of commitment contexts, none of which were at the level of monastic seriousness.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2017-05-27T05:57:31.069Z · LW(p) · GW(p)

I would guess it's more about the ability to accurately model what it's like to be a subordinate (as opposed to being about commitment).

comment by CronoDAS · 2017-05-26T00:42:31.139Z · LW(p) · GW(p)

I couldn't comment on the linked Medium article, so I'd like to say that, for many students, particularly middle and high school students, it is simply not true that they are in class voluntarily. I was routinely threatened with dire consequences if I didn't go to school, and attempts to remain at home and refuse to go were met with physical force - I was literally pulled out of my bed and taken to the car or bus. School is about as voluntary as the military draft.

Replies from: Duncan_Sabien, robot-dreams
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T01:08:27.686Z · LW(p) · GW(p)

You missed the entire point.

Edit: my original response was unnecessarily brusque and rude, and I apologize. I can elaborate further, but in the meantime, you might squint at the doc again, because it was a particular message about agency aimed at people in exactly your kind of situation.

Replies from: CronoDAS
comment by CronoDAS · 2017-05-26T16:30:39.582Z · LW(p) · GW(p)

The end result of my experiment in school refusal was being put on psychiatric medication. (Which actually did help, if you consider changing my preferences to something more socially acceptable to be helping.)

In hindsight, my best strategy might have been seeking a diagnosis of delayed sleep phase syndrome and requesting accommodations under the Americans with Disabilities Act. (The trigger for all this was that the school changed its starting time from 8:10 AM to 7:40 AM and I was not willing to deal with getting up any earlier.)

I was in a special education school from third to seventh grade, and I was absolutely forced to be physically present at that school as much as any prison inmate was forced to be physically present in prison. They couldn't force me to do schoolwork, and there were times I accepted a loss of privileges as the consequence for not participating, but any attempt to leave would be met by physical force. (The school even had a "time-out room" in which a student who became violent - not an uncommon occurrence - could be locked until he or she had calmed down.)

Participation was indeed a choice. Being physically present was not.

comment by robot-dreams · 2017-05-26T15:20:45.013Z · LW(p) · GW(p)

Going to class was not voluntary for me either. The consequences of not going to class included: parents screaming at me, parents kicking my ass (tiger parent style; we didn't do "grounding" in my household), truancies going onto my "permanent record", a full day of detention on a Saturday, etc. Things that people call "voluntary" don't usually result in physical and emotional damage if you don't do them.

Nonetheless, I skipped class a few times in middle school, and I suffered the consequences as a result. Were the consequences worth the glorious days of freedom that I spent skateboarding near the beach, sitting in a local comic book store marathoning manga, etc.? Maybe; maybe not.

But whether I go to class is a choice that I alone have the freedom to make. My parents and the school can set the consequences, and they can apply a lot of pressure to make particular options more or less appealing, but they can never take away my ability to choose.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2017-05-26T23:10:39.554Z · LW(p) · GW(p)

but they can never take away my ability to choose.

So far! Security mindset.

comment by jbeshir · 2017-05-30T15:48:26.490Z · LW(p) · GW(p)

On the positive side, I think an experiment in a more centrally managed model makes sense, and group activity that has become integrated into routine is an incredibly good commitment device for getting the activity done - the kind of social technology used in workplaces everywhere that people struggle to apply to their other projects and self-improvement efforts. Collaborative self-improvement is good; it was a big part of what I was interested in for the Accelerator Project before that became defunct.

On the skulls side, though, I think the big risk factor that comes to mind for any authoritarian project wasn't addressed directly. You've done a lot of review of failed projects and successful projects, but I don't get the impression you've done much review of abusive projects. The big common element I've seen in abusive projects is that unreasonable demands were made that any sensible person should have 'defected' on: people were asked for things, or placed under demands, which from the outside and in retrospect were in no way worth meeting as the price of staying in the group. And they didn't defect. They stayed in the abusive situation.

A lot of abusive relationships involve people trading off their work performance and prospects, and their outside relationship prospects, in order to live up to commitments made within those relationships, when they should have walked. They concede arguments when they can't find a reason the other person will accept (because the other person rejects everything they say), rather than deciding to defect on the personhood norm of giving reasons. I see people who have been in abusive relationships in the past anxiously worrying about how they will justify themselves in circumstances where I would have been willing to bite the bullet and say, "No, I'm afraid not; I have reasons, but I can't really talk about them." The option of simply putting their foot down without reasons, a costly last resort but an option, is mentally unavailable to them.

What I draw from the case studies of abusive situations I've encountered is that humans have false negatives as well as false positives about 'defection'; that is, people maintain commitments when they should have defected, as well as defecting when they should have maintained commitments. Some of us are more prone to the former, and others to the latter. The people prone to the former are often impressively bad at boundaries, at knowing when to say no, at making a continually updated cost/benefit analysis of their continued presence in an environment, at protecting themselves. Making self-protection a mantra indicates that you've seen part of this, but an overall model of "humans defect on commitments too much", rather than "humans are lousy at knowing when to commit and when not to", seems likely to miss what various ideas will do to the people prone to false negatives.

The rationalist community as a whole is probably mostly people with relatively few false negatives and mostly false positives. Most of us know when to walk, are independent enough to keep an eye on the door when things get worrying, and have no trouble saying "you seem to be under the mistaken impression that I need to give you a reason" if people try to reject our reasons. So I can understand failures in the other direction not being the most salient thing. But the rationalist community as a whole is mostly people who won't be part of this project.

When you select out the minority who are interested in this project, I think you will get a considerably higher rate of people who fail in the direction of backing down if they can't find a reason that (they think) others will accept, in the direction of not having good boundaries, and more generally in the direction of not 'defecting' enough to protect themselves. And I've met enough of them in rationalist-adjacent spaces that I know they're nearby, they're smart, they're helpful, some are reliable, and they're kind of vulnerable.

I think as leader you need to do more than say "protect yourself". I think you need to expect that some people you are leading will /not/ say no when they should, and that you won't successfully filter all of them out before starting, any more than you'll filter out all the people who will fail in any other way. And you need to take responsibility for protecting them, rather than delegating it exclusively to them. To be a bit rough, "protect yourself" seems like trying to avoid a part of the leadership role that isn't actually optional: if you fail in the wrong way you will hurt people, and you as leader are responsible for not failing in that way, and 95% isn't good enough. The drill instructor persona, with its unidirectional emphasis on committing more, does not come off as the sort of person who would do that, and I think that is part of why people who don't know you personally find it kinda alarming in this context.

(The military, of course, from which the stereotype originates, deals with this by simply not giving two shits about causing psychological harm, and is fine either severely hurting people to turn them into what it needs or severely hurting them before spitting them out if they are people who are harmed by what it does.)

On the somewhat more object level, the exit plan discussed seems wildly inadequate, and very likely to be a strong barrier against anyone who isn't one of our exceptional libertines leaving when they should. This isn't a normal house share, and it is significantly more important than in a regular house share that people are not prevented from leaving by financial constraints or inability to find a replacement who's interested. The harsh terms typical of an SF house share are not suitable, I think.

The finding-a-replacement part seems especially impractical: most people trend towards an average of their friends, so if their friends on one side are DA people, and they're unsuited to DA, their other friends are probably even more unsuited to DA on average. I would strongly suggest taking only financial recompense when someone leaves, capped at a limited number of months of rent if a replacement is not secured, and either permitting that recompense to be paid back at a later date after immediate departure, or requiring it as an upfront deposit, to guarantee safety of exit.

If there are financial costs involved with ensuring exit is readily available, there are enough people who think that this is valuable that it should be possible to secure capital for use in that scenario.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T16:42:50.670Z · LW(p) · GW(p)

Strong approval of all of this. The short answer is, I've spent tens of hours working more closely with the people who will actually be involved, looking at all of the issues you raise here. We're all aware of things like the potential for emotional abuse and financial entrapment, and are putting possible solutions into place; I simply didn't feel the need to lengthen the post by another third to include stuff that's only half-in-progress and also largely too detailed/irrelevant to outsiders.

(As a single bite-sized example: the "protect yourself" mantra is there to lay the baseline, but thus far we're also including a) explicit "non-conformity" training in bowing out of activities, coupled with strong norms of socially supporting people who "rule #1" themselves out, and clear ways to resolve anxiety or embarrassment and save face, b) weekly open-ended retrospectives that include room for anonymous feedback as well as public, c) two one-on-ones per week with me in which the number one focus is "how are you, can you be supported in any way," d) outside check-ins with someone completely unrelated to the house, to provide a fresh perspective and safe outlet, and e) regular Circling and pair debugging so that everyone knows "where everyone is" and has a cheap Schelling point for "I need help with X.")

Replies from: Screwtape, handoflixue
comment by Screwtape · 2017-05-30T20:23:11.319Z · LW(p) · GW(p)

This is tangentially related at best, but if you have some high quality non-conformity training I would love to borrow it for my local purposes. I've got some, but still feel like it's the largest weakness in the rationality training I've been doing.

comment by handoflixue · 2017-05-30T19:23:28.028Z · LW(p) · GW(p)

I would be much more inclined to believe you if you would actually discuss those solutions, instead of simply insisting we should "just trust you".

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T20:14:58.096Z · LW(p) · GW(p)

How can you read the parenthetical above and dismiss it as "not discussion" and still claim to be anything other than deontologically hostile?

Replies from: handoflixue
comment by handoflixue · 2017-05-30T20:50:07.283Z · LW(p) · GW(p)

Because basically every cult has a 30 second boilerplate that looks exactly like that?

When I say "discuss safety", I'm looking for a standard of discussion that is above that provided by actual, known-dangerous cults. Cults routinely use exactly the "check-ins" you're describing as a way to emotionally manipulate members. And the "group" check-ins turn into peer pressure. So the only actual safety valve ANYWHERE in there is (D).


You're proposing starting something that looks like a cult. I'm asking you for evidence that you are not, in fact, a cult leader. Thus far, almost all evidence you've provided has been perfectly in line with "you are a cult leader".

If you feel this is an unfair standard of discussion, then this is probably not the correct community for you.


Also, this is very important: You're asking people to sign a legal contract about finances without any way to terminate the experiment if it turns out you are in fact a cult leader. This is a huge red flag, and you've refused to address it.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T21:06:10.128Z · LW(p) · GW(p)

I'm not interested in entering into a discussion where the standard is "Duncan must overcome an assumption that he's a cult leader, and bears all the burden of proof." That's deeply fucked up, and inappropriate given that I willingly created a multi-thousand-word explanation for transparency and critique, and have positively engaged with all but the bottom 3% of commentary (of which I claim you are firmly a part).

I think you're flat-out wrong in claiming that "almost all evidence you've provided has been perfectly in line with 'you are a cult leader.'" The whole original post provided all kinds of models and caveats that distinguish it from the (correctly feared and fought-against) standard cult model. You are engaged in confirmation bias and motivated cognition and stereotyping and strawmanning, and you are the one who is failing to rise to the standard of discussion of this community, and I will not back off from saying it however much people might glare at me for it.

Replies from: tristanm, handoflixue
comment by tristanm · 2017-05-30T22:46:48.839Z · LW(p) · GW(p)

I'm not interested in entering into a discussion where the standard is "Duncan must overcome an assumption that he's a cult leader, and bears all the burden of proof."

While I agree that a lot of the criticism towards you has been hostile or at least pretty uncharitable, I would only point out that I suspect the default tendency most people have is to automatically reject anything that shows even the most minor outward signs of cultishness, and that these heavy prior beliefs will be difficult to overcome. So, it seems more likely that the standard is "outward signs of cultishness indicate a cult, and cults are really bad" rather than "Duncan is a cult leader." (This is sort of similar to the criticisms of the rationality community in general).

I think there are a lot of reasons why people have such heavy priors here, and that they aren't completely unjustified. I myself have them, because I feel that in most cases where I have observed outward signs of cultishness, it turned out these signals were correct in indicating an unhealthy or dangerous situation. I don't think it's necessary to go into detail about them because it would take a huge amount of space and we could potentially get into an endless debate about whether these details bear any similarity to the set-up you are proposing.

So your responses to the people who have these very heavy priors against what you are doing generally seem to be along the lines of "You can't just come in here with your heavy priors and expect that they alone constitute valid evidence that my proposal is a bad idea", and in that regard your rebuttal is valid. However, I do personally feel that, when someone shows up in an argument with a very confident prior belief in something, the charitable response is to assume, at least initially, that they have a possibly valid chain of evidence and reasoning that led them to that belief.

It could be that there is some social collective knowledge (like a history of shared experiences and reasoning) that led up to this belief, and therefore it is generally expected that we shouldn't have to back-track through that reasoning chain (therefore allowing us to make confident statements in arguments without producing the evidence). I think that "cults" are a fairly good example of this kind of knowledge - things people almost universally consider bad, except for cult members themselves, so much so that saying otherwise could be considered taboo.

And this is definitely not to claim that every taboo is a justified taboo. It's also not to say that you haven't argued well or presented your arguments well. I'm only arguing that it's going to be an uphill battle against the naysayers, and that to convince them they are wrong would probably require back-tracking through their chain of reasoning that led to their prior belief. In addition, if you find yourself becoming frustrated with them, just keep the above in mind.

For essentially the above reasons, my model predicts that most of the people who decide to participate in this endeavor will be those who trust you and know you very well, and possibly people who know and trust people who know and trust you very well. Secondly, my model also predicts that most of the participants will have done something similar to this already (the military, bootcamps, martial arts dojos, etc.) and successfully made it through them without burning out or getting distressed about the situation. Thus it predicts that people who don't know you very well or who have never done anything similar to this before are unlikely to participate and are also unlikely to be swayed by the arguments given in favor of it. And even more unfortunately, due to the predicted composition of the participants, we may not be able to learn much about how successful the project will be for people who wouldn't normally be inclined to participate, and so even if the outcome on the first run is successful, it will still be unlikely to sway those people.

I don't place much weight on this model right now and I currently expect something like a 30% chance I will need to update it drastically. For example, you might already be receiving a ton of support from people who have never tried this and who don't know you very well, and that would force me to update right away.

Also, even though I don't know you personally, I generally feel positively towards the rationality community and feel safe in the knowledge that this whole thing is happening within it, because it means that this project is not too disconnected from the wider community and that you have sufficient dis-incentives from actually becoming a cult-leader.

In short: Don't let the negativity you are facing become too much of a burden, just keep in mind that it's possible that many of the most negative critics (besides obvious trolls) are not acting in bad faith, and that it could require more work than is feasible to engage with all of it sufficiently.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T23:05:20.519Z · LW(p) · GW(p)

I like everything you've said here, including the gentle pointers of places where I myself have been uncharitable or naive.

comment by handoflixue · 2017-05-30T21:15:13.922Z · LW(p) · GW(p)

Also, this is very important: You're asking people to sign a legal contract about finances without any way to terminate the experiment if it turns out you are in fact a cult leader. This is a huge red flag, and you've refused to address it.


I would be vastly reassured if you could stop dodging that one single point. I think it is a very valid point, no matter how unfair the rest of my approach may or may not be.

comment by taygetea · 2017-05-27T14:39:10.588Z · LW(p) · GW(p)

This post puts me maybe 50% of the way to thinking this is a good idea, from my previous position.

My largest qualm about this is well-represented by a pattern you seem to show, which starts with saying "Taking care of yourself always comes first, respect yourself", then getting people to actually act on that in simple, low-risk low-involvement contexts, and assuming that means they'll actually be able to do it when it matters. People can show all the signs of accepting a constructed social norm when that norm is introduced, without that meaningfully implying that they'll use it when push comes to shove. Think about how people act when actual conflicts with large fight/flight/freeze responses interact with self-care norms. I suspect some typical-mind, as my model of you is better at that than most people. I think it depends on what "running on spite" cashes out to. This is kind of a known skull, but I think the proposed solution of check-ins is probably insufficient.

My other big concern is what comments like your reply to Peter here imply about your models and implicit relationship to the project. In this comment, you say you'll revise something, but I pretty strongly anticipate you still wanting people to do the thing the original wording implied. This seems to defuse criticism in dangerous ways, by giving other people the impression that you're updating not just the charter, but your aesthetics. Frankly, you don't seem at all likely to revise your aesthetics. And those, ultimately, determine the true rules.

To summarize the nature of my issues here in a few words: aesthetic intuitions have huge amounts of inertia and can't be treated like normal policy positions, and people's self-care abilities (and stress-noticing abilities) cannot be trusted in high-stress environments, even under light to moderate testing.

-Olivia

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-27T16:45:19.936Z · LW(p) · GW(p)

I'm unlikely to revise the aesthetics, but a) the particular operationalization/expression of those aesthetics, and b) the boundary/balance between both the aesthetics and other people's agency are fully open to debate, iteration, and consensus.

The whole point is to test out the aesthetic as it exists, to see whether it produces a better life for people, so it's important not to compromise it until some actual testing has taken place. But imagine e.g. a constructed social norm is approved of, proves to be problematic twice, and has one week left before its originally established "re-evaluate" point—I posit you get much better data out of seeing what happens if you keep the norm firmly in place, see the fallout for a week, watch people grumble and adjust, and then re-evaluate on schedule, than if you just constantly say "NOPE, DIDN'T WORK, SCREW THAT."

I think there's a strong instinct to buck norms and update in the moment, and that this is a pendulum swing thing—it's good that we do this a lot more than we did two decades ago, but it's bad that we do it as much as we do. There's value in learning to live with rules that don't change, or rules that are slightly stupid, and by setting rules firmly in place for e.g. three weeks at a time, I think you capture some of that value, at a low price in terms of loss of the flexibility thing.

Does that seem coherent/a valid response to your qualm?

Another way to say this is that I think the bar for "discard this norm" should be raised one notch higher from (straw description) "it bothered one of us once" to "it bothered several of us several times." If you keep it past the former, I think you see interesting effects in how people shape themselves around one another, and I think there's some valuable effect from transferring some sovereignty back from the individual to the social fabric (i.e. everybody's not just quittable at all times).

Replies from: Decius
comment by Decius · 2017-05-28T07:07:50.785Z · LW(p) · GW(p)

Evaluating whether to change a thing at the moment when it is maximally annoying (as would be the case in ad-hoc votes) will have different results from evaluating it at a predetermined time.

I'd suggest evaluating the policy of 'demand that an approved norm stay in place until the scheduled vote' at the first scheduled vote following each scheduled vote in which a norm was dropped that people had wanted to drop mid-cycle but couldn't because of the policy.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-28T07:49:36.585Z · LW(p) · GW(p)

Your suggestion makes sense for an experiment, but misses the whole point of this experiment. This, to me, seems like exactly the unpleasant valley dynamic. "We tried holding ourselves to a standard of 'we finish the experiments that we start,' but we got a couple of experiments in and we didn't like it. Let's stop."

Replies from: Decius
comment by Decius · 2017-05-28T08:14:32.506Z · LW(p) · GW(p)

"Last fortnight, we canceled [Idea which appeared to be horrible seconds after implementing it], which we continued for an entire fortnight because of our policy. Today we look at all available evidence and must decide if the meta-experiment generates benefits greater than the costs."

If you have no norm for evaluating that rule explicitly, it doesn't mean that you won't evaluate it. Maybe evaluating it every time it applies is excessive, but pretending that you won't quickly learn to put exit clauses in experiments that are likely to need them 'notwithstanding any other provision' is failing to accurately predict.

Replies from: ChristianKl
comment by ChristianKl · 2017-05-28T09:27:40.190Z · LW(p) · GW(p)

failing to accurately predict.

I think you miss the point that Duncan wants to train the ability to operate outside one's comfort zone by following through on goals that are set. A norm being very annoying wouldn't be a reason to drop it before the scheduled vote. The norm would have to actually create substantial harm.

Replies from: Decius
comment by Decius · 2017-05-28T21:06:48.597Z · LW(p) · GW(p)

I read that "this is causing substantial harm" would be insufficient to cancel a norm, but expect that "this is creating a physical hazard" would be enough to reject the norm mid-cycle. The problem is that every edge has edge cases, and if there's a false negative in a midterm evaluation of danger...

Maybe I'm concluding that the paramilitary aesthetic will be more /thing/ than others expect. In my observation, authoritarian paramilitary-styled groups are much more /thing/ than other people expect them to be. (My own expectations, OTOH, are expected to be accurate, because subjectivity.)

Replies from: ChristianKl
comment by ChristianKl · 2017-05-28T21:27:06.408Z · LW(p) · GW(p)

Duncan's rule one is "A Dragon will protect itself".

I don't think whether something is physical would be the prime distinction but whether the harm is substantial. If following a norm would likely result in someone losing his job, that isn't physical harm but substantial harm that likely warrants violating the norm.

comment by Decius · 2017-05-26T00:00:34.033Z · LW(p) · GW(p)

"roughly 90 hours a month (~1.5hr/day plus occasional weekend activities)" My math says that those weekend activities total the 1.5 hours every day has and also 10 additional hours every weekend.

"Any Dragon who leaves during the experiment is responsible for continuing to pay their share of the lease/utilities/house fund, unless and until they have found a replacement person the house considers acceptable, or have found three potential viable replacement candidates and had each one rejected. After six months, should the experiment dissolve, the house will revert to being simply a house, and people will bear the normal responsibility of "keep paying until you've found your replacement." "

It seems counterproductive to have people who have left the experiment living in the same house until they are replaced. Exit terms such as 'two months notice, or less if a suitable replacement can be found or otherwise agreed' are less coercive.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T01:12:24.665Z · LW(p) · GW(p)

Yeah, your exit norm is more what I was looking for. Thanks for the rework/reword ... I'll update it to something more like that soon.

The actual number we're shooting for is 30h/week, but not 30 hours every week. More like 20 hours most weeks and 40 or 50 every now and then.

Replies from: Decius
comment by Decius · 2017-05-27T01:16:00.002Z · LW(p) · GW(p)

21 hours most weeks is 3 hours per day, or 2 hours during each weekday and ~10 for the weekend. Just making sure that your daily and weekly estimates don't contain math errors, not saying anything about the sufficiency of those numbers.
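
For completeness, a quick check of those splits (illustrative arithmetic only):

```python
# 21 hours/week spread evenly over 7 days:
print(21 / 7)        # 3.0 hours/day

# Or 2 hours on each of the 5 weekdays, with the remainder on the weekend:
print(21 - 2 * 5)    # 11 hours, i.e. roughly the "~10" above
```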

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-27T01:36:04.079Z · LW(p) · GW(p)

Oh, goodness, you're actually completely right. I just dumbbrained. The goal is 21 hours per week, on average, but with most weeks having more like 12 hours and some having more like 40.
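
As a hedged illustration of that corrected plan (the three-light-weeks-to-one-heavy-week mix below is my own example, not a figure from the thread):

```python
# With three weeks at ~12 hours, the fourth week would need
# 4*21 - 3*12 = 48 hours to hold a 21-hour weekly average;
# at 40 hours it comes out slightly under target.
light, heavy = 12, 40
print((3 * light + heavy) / 4)    # 19.0 hours/week average
```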

The numbers are somewhat higher in the beginning both a) because it's easier to relax expectations than to tighten them, and b) because I do suspect we want to frontload the togetherness and do more individual stuff after norming and bonding.

comment by Lumifer · 2017-05-26T02:20:29.716Z · LW(p) · GW(p)

For the record: not for me. At all.

Replies from: Vaniver, Duncan_Sabien
comment by Vaniver · 2017-05-27T07:08:35.126Z · LW(p) · GW(p)

I am Jack's complete lack of surprise.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T07:53:33.051Z · LW(p) · GW(p)

I'm curious whether the not for me is "there are different kinds of people and different kinds of brains and different kinds of personalities, and they actually sometimes need different nutrients, and this one is bad for Lumifers," or whether it's "there's something fundamentally broken here that I'm particularly sensitive to but others are more able to tolerate."

If the latter, I'd love it if you ever manage to put it into words. The goal is to avoid as many of the Stupid Things as possible.

Replies from: None, Lumifer, entirelyuseless, ksv
comment by [deleted] · 2017-06-01T00:37:48.751Z · LW(p) · GW(p)

So, I have actually lived in a semi-authoritarian culture, and have a somewhat unique experience of watching a population with high rates of autism function under that culture (and let's not deny the high rates of autism in this subculture). While this doesn't sound like "cult" to me, I can think of a couple of ways gratuitous harm could occur even if everyone is operating in good faith.

  1. Person A harms Person B. Person B realizes that their violation threshold is much lower than they thought when they signed on, and they want to bring it up for discussion, but you and Person A have a much better rapport than you and Person B. And Person B was uniquely attracted to this because they need their self-care to largely be outsourced to a group structure. So they don't actually have the skills they need to be agenty outside of group expectations, and simply continue to be harmed while being unable to bring it to anyone's attention until it's much too late to repair the relationships. I'd like to present myself as someone who has gotten feedback along the lines of "you're competent and mature" and who still does this sort of thing. It's not something that's easily predicted by the person or by people observing them.

  2. As mentioned in (1), simply outsourcing functionality to a group structure can leave people helpless when they have to act against the group or act without the group. I don't see much thought put towards transition plans for people when they leave DAB. Relating back to the childhood and adolescent experiences I claimed gave me insight into this, I have seen a lot of people flail once their version of the role you're taking here is gone. And they get hurt. This applies even more to people who've required extra structure to function, as in the case of autism (and I am one of those autistic kids that flailed). You might say that people are accepting that they will get no transition help once they leave the immersive, structured environment you're creating, but it seems naive to not at least prep them for the struggles they might have.

2a. Transition is even more important given that this is a necessarily isolating endeavor. The things you're proposing take a ton of time! People will be making a lot of interpersonal sacrifices to participate, and that will degrade whatever safety net they think they'll have if they leave.

Personally, I'm trying really really hard to separate criticisms from an aesthetic distaste and the fact that this looks like things I have been actively harmed by, when the people in charge were people who loved me and had my best interests at heart. So, apologies, because this comment is definitely biased by that.

As far as "there are different kinds of people and this is bad for helldalgos" goes, this is bad because I would do something like this if I tried to participate: outsource most of my functionality to group norms, overstate my ability to be transparent enough to function in a high trust environment like this, end up hiding rule violations, feel guilty, become dishonest, and have periodic emotional and mental breakdowns where I burn all of my relationships in the house to the ground. The fact that I behave like this under authoritarian structures might be a problem, but it's not one that's fixed all at once by starting an immersive group project where someone is in charge of me. I said a few hours ago to someone else that I would definitely participate if I didn't have so many roots where I live now and if I could actually stand living in the Bay, but upon reflection, I think not.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T00:50:24.834Z · LW(p) · GW(p)

This is outstanding, and I appreciate you taking the time to write it up.

I think 1) is an interesting and important dynamic that I had not previously thought about, and I'm curious if you have concrete thoughts as to how to repair it. I think that simply acknowledging it, and committing to accede to opinions-other-than-my-own in evaluating whether it's going on, is an important first step but only gets like 15% of the way there. Similarly, I think norms of regular retrospectives and Circling-type environments will make it marginally more likely that people can bring this stuff forward and get it addressed, but not entirely because anxiety, status, etc.

My first brainstorm there produces things like anonymous feedback structures, "interrupting" norms where people are free to call things to a halt, requests-to-speak-privately and requests-for-third-party-mediation as strong guaranteed "yesses," and maybe something like a norm that people can call for discussion or mediation to halt until their ideological Turing test has been passed? e.g. I can't just brush past your claim of harm; you have an absolute right to stop things from moving forward until you are satisfied that I at least understand the magnitude of your internal experience, even if I disagree with your description of what happened externally.

As for 2), it's an ongoing conversation; back-and-forth in these comments has already produced a lot of clarity on both non-defecty, structured ways of leaving, and also urgent, ejector-seat methods. (I've been a little slow to post concrete details because I claim the person clamoring for them loudest isn't engaging in good faith, but I'd be happy to PM). My current sense, though, is that these structures, while they should be put in place as soon as possible, should also be discussed with the group, rather than emerging entirely under my models.

Thanks again, particularly for your separating criticisms from aesthetic distaste—I feel you absolutely succeeded at that goal, and I felt that your comment was both a) actually valuable and b) entirely constructive.

Replies from: None
comment by [deleted] · 2017-06-01T02:34:54.399Z · LW(p) · GW(p)

I'm not sure how to solve it except to avoid authoritarian structures, which is obviously counterproductive for your project. I would recommend taking any opportunity you have to exhibit through actions that fairness can be expected despite your existing rapport with someone. The things you suggested are helpful but not comprehensive. You could screen for anxiety, but this behavior can be found in people who wouldn't otherwise consider themselves anxious. And it's not entirely fueled by anxiety, either.

I like the "interrupting" norm idea; I can see it becoming prone to a weaponized-victimhood sort of abuse but that would be easier to see and stop from the outside than the dynamic it could solve. And if someone is constantly claiming that they've been harmed, that's probably a good sign that DAB isn't a healthy environment for them anyways.

I would be louder about insisting on plans for various types of leaving if I had more of a stake in this project. If I were planning to participate or someone I cared about was, I would be insisting on it with a similar degree of intensity as the other comments you're referencing. That's a major part of what will keep it from being what some people are calling abusive, but that I think belongs under the wider umbrella of "dysfunctional." You're right that it should be collaborative, and I don't expect graceful exit plans to leap fully formed from your skull, but yeah. I endorse that level of intensity in terms of expressing just how important exit plans are.

I should admit that as of a few hours ago I have an ongoing bet about the year-long success of this project, and I took the pessimistic view (for reasons other than abuse or harm). I was also incredibly worried about people getting hurt when I read through the document and some other comments. But, having talked to some other people that know you and reading other things you've said, I am definitely less worried about harm through intent or negligence than I was. I am still pretty concerned about harm through ignorance or dynamics inherent in the interaction between the people and the system.

Replies from: Duncan_Sabien, Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T03:02:09.807Z · LW(p) · GW(p)

Also, excellent post from Slatestarscratchpad that sums up (I think) something like 85% of the fundamental disagreement:

One thing that’s seemed striking to me in this Dragon Army discussion is the priors on different people’s threat assessments.

I remember when I was younger, I used to want to meet my friends from the Internet, and my parents were horrified, and had all of these objections like “What if they’re pedophiles who befriended you so they could molest you?” or “What if they’re kidnappers who befriended you so they could kidnap you?”, or less lurid possibilities like “What if they’re creepy drug people and they insist on bringing you along to their creepy drug abuse sessions and won’t let you say no?”

And I never developed a good plan that countered their concerns, like “I will bring pepper spray so I can defend myself”. It was more about rolling my eyes and telling them that never happened in real life. I’ve now met hundreds of Internet friends, and I was absolutely right - it’s never happened, and any effort I put into developing a plan would have been effort wasted.

I’m not claiming there are no Internet pedophiles or kidnappers. I’m saying that based on my own Internet communities, and my threat-detection abilities, and the base rate, I was pretty sure it was more in the realm of terrorism (the kind of stuff you hear about on the news) than the realm of car accidents (the stuff that happens to real people and that you must be guarding yourself against at every moment).

This is also how I think of people turning out to be abusers. It’s possible that anyone I date could turn out to be an abuser, just like it’s possible I could be killed by a terrorist, but it’s not something likely enough that I’m going to take strong precautions against it. This is obviously a function of my personal situations, but it’s a real function of my personal situation, which like my Internet-friend-meeting has consistently been confirmed over a bunch of different situations.

(Please don’t give me the “that’s just male privilege!” speech; men and women get abused at roughly similar rates. I do think that probably women are socialized to fear abuse much more, and that’s a big part of this, and probably other axes of marginalization contribute more)

One interesting thing about Tumblr and the SJ-sphere in particular is that because it comes disproportionately from marginalized communities, it has this sort of natural prior of “people often turn out to be abusers, every situation has to be made abuser-proof or else it will be a catastrophe”. I once dated someone I knew on Tumblr who did a weird test on me where (sorry, won’t give more details) they deliberately put me in a situation where I could have abused them to see what I would do. When they told me about this months later, I was pretty offended - did I really seem so potentially-abusive that I had to be specifically cleared by some procedure? And people explained to me that there’s this whole other culture where somebody being an abuser is, if not the norm, at least high enough to worry about with everyone.

I’m not sure what percent of the population is more like me vs. more like my date. But I think there’s a failure mode where someone from a high-trust culture starts what they think is a perfectly reasonable institution, and someone from a low-trust culture says “that’s awful, you didn’t make any effort to guard against abusers!”. And then the person from the high-trust culture gets angry, because they’re being accused of being a potential abuser, which to them sounds as silly as being accused of being a potential terrorist. If you told your Muslim friend you wouldn’t hang out with him without some safeguards in case he turned out to be a terrorist, my guess is he’d get pretty upset. At the very least it would engender the “stop wasting my time” reaction I had when my parents made me develop anti-pedophile plans before meeting my Internet friends.

And then the person from the low-trust culture gets angry, because the person has just dismissed out of hand (or even gotten angry about) a common-sense attempt to avoid abuse, and who but an abuser would do something like that?

I think it’s interesting that the Dragon Army idea received more positive feedback or constructive criticism on LW (where it was pitched to, and which is probably culturally more similar to me) and more strongly negative feedback on Tumblr (which is more full of marginalized people and SJ-aligned people, and also maybe more full of abusers as judged by the number who get called out all the time).

Replies from: None
comment by [deleted] · 2017-06-01T03:37:07.631Z · LW(p) · GW(p)

Yeah, I saw that earlier. In my case, I'm not panicked (or at least, I quickly became not panicked) about rampant abuse, and I also have not been directly exposed to a lot of abuse. My concerns are more about ways I've been directly exposed to harm by authoritarianism with good intentions. It's no coincidence that that is what I was inclined to bring up. Since I'm probably not unique, there's probably something worth taking seriously in every complaint. But everyone is probably weighting their own concerns the most. So that summarizes to something like:

-abuse is often perpetuated in structures that share significant characteristics with DAB, and you should think about specific plans to avoid abusing people

-there are unique systemic issues with authoritarian structures that facilitate unsustainable dysfunction even when no individual person is deviating much from normal behavior

-sex and romance will cause problems and it might be worth restricting behavior inside the house

-etc (I have not read every single complaint)

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T03:45:10.492Z · LW(p) · GW(p)

+1 to all this. In particular, if my pendulum swing model is correct, the new position of the pendulum (extreme aversion to the risk of abuse) is a result of the pendulum's previous stuck point being "a lot of people suffering abuse in these kinds of environments."

I'm proposing swinging back toward the old norm and trying not to cross the ideal point, and I agree it's a hard problem. Posts like yours are excellent for improving models and reducing risk as a result.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T02:43:32.986Z · LW(p) · GW(p)

I think it's okay for people to bet against it; we're going to have a large betting norm within the house. If nobody bet against, I wouldn't have anybody to bet with!

Exit plans are now #1 on the "to finalize" list, and have had multiple eyes on. I strongly endorse the way that LW has focused me toward that part of things, which I was underweighting. However, I also note that some people SHOULD ultimately still be dissatisfied with the exit norms, and therefore choose not to participate. Like, that's part of the definition of high-stakes, high-commitment—it's not for everybody, and in fact if everybody were on board with the exit norms being sufficient it ... wouldn't be much of anything?

The key, in my opinion, is being clear clear clear clear clear, and that particular part of it was not clear enough to the potential participants, and it will be now.

Thanks again for your willingness to write things up.

comment by Lumifer · 2017-05-26T15:49:07.965Z · LW(p) · GW(p)

Mostly the former. I am an individualist and dislike collectivism. As befits a proper individualist :-) I also recognize that people are different and what's good for me is not necessarily good for thee. I can survive and function in collectivist environments like you propose, but I don't like them and don't see a good reason for me to be there.

As to the latter, it's hard to do a pre-mortem on something that's still in flux. Communes of different kinds -- from monasteries to kibbutzim and hippies -- have been around for many centuries and clearly some people like them and find them useful. There's enough history (which I'm not all that familiar with) to learn where the common pitfalls lie and what major trade-offs you would be facing. I can't recommend a book, but I'm sure there's a few.

Generally speaking, I would expect the most likely mode of failure to be the way power dynamics develop. Authority and power are complicated and deadly -- tightly-knit communities can go very bad quickly this way (consult your favourite cult group horror story). Adding sex to the mix generally makes things... more volatile. The rationalist community doesn't strike me as being particularly capable of managing power issues.

comment by entirelyuseless · 2017-05-26T15:17:22.884Z · LW(p) · GW(p)

I agree with Lumifer.

Emotionally, the whole proposal strikes me as cultlike in a bad way. I can't defend that as a factual claim since I only skimmed the post (precisely because it is not relevant to me), but I am pretty sure that living in such a situation even for a short while would make me feel very, very bad.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T15:34:42.522Z · LW(p) · GW(p)

Same question posed to you—to the best of your ability to tell, is this a bug in the system, a bug in you personally, or a simple instance of diff'rent strokes for diff'rent folks? And if a bug in the system, can you point straight at it?

Replies from: handoflixue
comment by handoflixue · 2017-05-30T19:43:36.701Z · LW(p) · GW(p)

Speaking entirely for myself: You are proposing a dangerous venture. The path is littered with skulls. Despite this, you have not provided any concrete discussion of safety. When people have brought the subject up, you've deflected.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T19:49:01.060Z · LW(p) · GW(p)

I suspect you haven't actually poked around in all of the comments—I can point to multiple places where I've provided concrete discussion of safety, if you spend five minutes looking and can't find it.

comment by simple_name (ksv) · 2017-05-31T13:14:28.826Z · LW(p) · GW(p)

The biggest concern/red flag for me is one aspect of the authoritarian nature of the project. I would be perfectly fine with fully outsourcing decisions (giving higher intellectual status) but not with being a subordinate in full generality. What I'm trying to point at is the difference between "What should I do? He said to do "x" and I trust his expertise so this is my best option and I'm going to make myself do it if unpleasant" and someone forcing me to do the thing.

Which of the two would be my intuitive reaction depends mostly on your character/attitude, and this is something that is completely missing from the discussion so far. Hopefully that is because people know you, so they are sure it wouldn't be a problem; but your comments here only show competence, and don't exclude arrogance, or enjoying power too much and beginning to boss people around. I found the comparisons to military bootcamps and the talk of tyrants concerning, as this somewhat paints the image of "someone shouting at people to do stuff", which I expect to have severe negative effects and build up resentment quickly. In other words, it seems to me that constraining your image strictly to the one who decides what is to be done, as opposed to someone who also enforces the execution, would reduce the risk of failure of the experiment. Enforcing by regulating incentives should be fine, as it won't speak to System 1 and provoke the low-level "Who are you to tell me what to do" reaction.

Maybe this is an obvious point, that having a nice and respectful leader is better than a powerful tyrant, but I'm not sure how far I can generalize from my own preferences, so I decided to share anyway. Apologies if this doesn't make sense or wastes your time; I'm new to posting here.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-31T17:21:25.984Z · LW(p) · GW(p)

This is a clear and cogent point, and thanks for posting it.

I suspect the authoritarian stuff is a necessary catalyst, to get the group cohered together and working, and after an initial period it becomes less and less useful. For instance, I think a major part of the thing is getting everyone to be in the same room at the same times, and that happens fastest and becomes ingrained easiest if someone's just dictating the time (after reasonably accounting for everyone's constraints and preferences).

But once everyone's all in the same room, I don't think it makes too much sense for an authoritarian to dictate what happens. Like, I think the useful thing is something along the lines of "well, if you all can't decide where we're going to eat, then we're getting pizza"—my plan is to set a minimum bar of "this is a useful thing to be doing," and to demand that we do at least that, but to in no way restrict people from coming up with something better/more effective/more worthwhile.

So, we start off by having morning exercise and weekly dinner, and then over time, people who are chafing at the morning exercise get to say, "Hey, you know what would be a better use of this slot of togetherness that is taken as a given? Doing X or Y or Z." The authoritarianism is there to support the scaffold, but is not there to say what grows on it, except in the most general sense of "let's try to improve" and "let's lean toward important stuff rather than trivial."

I also note that I'm somewhat overemphasizing the authoritarian bit, because I expect it's the most difficult piece to swallow, and I want to really really really really really make sure that I don't undersell how strict things will end up being. It seems way worse to lose a couple of people who would've liked it because I made it sound too restrictive than to include people who are going to be trapped and unhappy because I didn't give them enough warning.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T03:03:02.373Z · LW(p) · GW(p)

An excellent post from Slatestarscratchpad (quoted in full in my comment above) sums up (I think) something like 85% of the fundamental disagreement that's fueling the more heated clashes.

Replies from: taygetea, Raemon
comment by taygetea · 2017-06-03T09:02:11.708Z · LW(p) · GW(p)

I think people tend to need a decent amount of evidence before they start talking about someone looking potentially abusive. Then the crux is "does this behavior seem normal or like a predictive red flag?". In those cases, your lived experience directly influences your perception. Someone's actions can seem perfectly fine to most people. But if some others experience spooky hair-raising flashes of their questionably abusive father or a bad ex, that's evidence. The people who didn't think anything was weird brush off the others as oversensitive, risk averse, or paranoid. Then those raising alarms think of everyone else as callous, imperceptive, or malicious. It's not just people who don't alieve the correct base rates. Certainly those people exist, though they're much more plentiful on Tumblr than in person or on LW. It's very non-obvious whether a strong reaction is correct.

Neither side can truly accept the other's arguments. It's a bad situation when both sides consider the other's reasoning compromised beyond repair. That brings politics and accusations of bad faith on all sides. But there is a fact of the matter, and the truth is actually unclear. Anyone thinking at enough of a distance from the issue should have honest uncertainty. I suspect you're particularly prone to refusing to let the conflicting experience of others be seen by your deep internal world-models, to strongly underestimating the validity and reliability of that type of evidence. That would cause what you say to be parsed as bad faith, which other people then respond to in kind. That would cause a positive feedback loop where your prior shifts even further away from them having useful things to say. Then you'd end up a frog boiled in a pot of drama nobody else is experiencing. I'm not sure this is what's happening, but it looks plausible.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-03T13:55:08.045Z · LW(p) · GW(p)

Strong endorsement of all of this.

comment by Raemon · 2017-06-02T19:05:01.715Z · LW(p) · GW(p)

I like this, and am curious if it caused anyone who was embroiled in the more intense discussions to change their mind or actions?

comment by Qiaochu_Yuan · 2017-05-31T00:26:57.613Z · LW(p) · GW(p)

I would like everyone posting criticism, especially heated criticism, to keep very firmly in mind that Duncan did not have to write this. Whatever your opinion of him, at least make sure you've factored in the evidence that he wrote this whole, weird thing, complete with references to Ender's Game, Fight Club, etc. instead of writing either 1) nothing or 2) something much more reassuring.

There are critics who think Duncan is incompetent and overconfident, and about this hypothesis I can say at least that it is consistent with Duncan having written this post. Then there are critics who think Duncan is, I dunno, evil or power-hungry or something, and I think those people are mostly failing to see what is in front of them.

Replies from: None, handoflixue, cousin_it
comment by [deleted] · 2017-05-31T04:31:57.101Z · LW(p) · GW(p)

a

comment by handoflixue · 2017-05-31T00:47:30.559Z · LW(p) · GW(p)

The whole point of him posting this was to acknowledge that he is doing something dangerous, and that we have a responsibility to speak up. To quote him exactly: "good and moral people have to stop and wonder whether their friends and colleagues with slightly weaker epistemics and slightly less-honed allergies to evil are getting hoodwinked".

His refusal to address basic safety concerns simply because he was put off by my tone is very strong evidence to me that people are indeed being hoodwinked. I don't care if the danger to them is because he's incompetent, overconfident, evil, or power-hungry. I care that people might get hurt.

(I would actually favor the hypothesis that he is incompetent/overconfident. Evil people have more sensible targets to go after.)

Replies from: None, Duncan_Sabien
comment by [deleted] · 2017-05-31T04:36:04.796Z · LW(p) · GW(p)

a

Replies from: Mirzhan_Irkegulov
comment by Mirzhan_Irkegulov · 2017-06-04T16:56:29.637Z · LW(p) · GW(p)

Join https://www.reddit.com/r/SneerClub/

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-04T17:49:12.724Z · LW(p) · GW(p)

Er. It's a bit awkward, given that I'm at least somewhat sympathetic to the claim about Diego, and it's not valid or justified to put me in the same category, but the side of the story that claims Diego is evil primarily cites his conduct within romantic relationships (including courtship and breakup), financial relationships (such as sharing a lease with housemates), and some bits about his willingness to cooperate or defect in social interactions. The details and their interpretations belong somewhere other than this thread.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-31T18:17:37.255Z · LW(p) · GW(p)

I think you're confusing "refusal to address basic safety concerns to handoflixue directly" with "refusal to address basic safety concerns at all." I deny your right to judge and interrogate me, because of your failure to exhibit clear thinking and good discourse. I've engaged with those very same points in many other comment threads, though—there are literally only three people in this entire thread for whom I've determined that the EV of digging into their perspective is not worth it.

I note that there's a bet waiting in the wings to lend your harsh words credibility. You could charitably offer to donate your winnings to salving the pain of the people you claim to care about.

Replies from: Mycroft65536, handoflixue, handoflixue
comment by Mycroft65536 · 2017-06-01T04:02:56.250Z · LW(p) · GW(p)

Duncan,

I think you're dramatically underestimating how your responses are being read by third parties. Your style of response to handoflixue specifically has made at least one person I've spoken to decide to avoid giving you well-thought-out criticism, out of fear of you yelling at them and being very confrontational.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T04:51:29.459Z · LW(p) · GW(p)

shrug

If you stumble upon a schoolyard fight, and immediately assume that the person you see punching is fundamentally violent and has high odds of attacking you, I think you're skipping an important step of checking to see whether they're the bully or whether they're defending themselves. Most of us have had the experience (either direct or vicarious) of being absolutely infuriated by the people who try to pretend like there's a perfect symmetry between the punch thrown by the aggressor and the punch thrown by the defender—it's not hypocritical to both support "not starting fights" and "being willing to end them."

I am aware of the risk of losing people around the edges, yeah. But I can't do anything except point to the scores and scores of other responses (it might be over a hundred by now) in which I've thanked people for critique, responded in depth, updated visibly in real time, etc.

People get anxious, and maybe they disengage. But anyone who's not going to be openly and unjustifiably uncharitable has nothing to fear from me in particular. I'm not going to not stand up for myself against bullies and trolls, even if it costs me some quiet whispers that would've contained good content.

Everything is tradeoffs. To put it another way: The person who's refusing to give me their well-thought-out criticism is a) unable because of costs/time constraints to look further and see that my claim that they have nothing to fear is credible, b) themselves jumping to unfounded conclusions based on less data than they have available to them, or c) still convinced, after following the whole chain, that I'm at fault.

If a), then fair play—this is nobody's first priority except mine, and I don't feel entitled to everyone's opinions; it's perfectly reasonable to have a policy of not spending a lot of time if your first impression is strongly negative.

If b), and they have time to look but are choosing not to and running with a strawman without questioning their own conclusions, then ... well ... it probably wouldn't have gone well anyway.

If c), they've followed the whole chain in chronological order and they still think I'm at fault, then that just means we have strongly differing priors on right and wrong/acceptable and unacceptable, and once you get down to values on that level, I don't know how well we'd be able to pass one another's ITTs anyway.

To the best of my ability to judge, handoflixue's earlier comments (e.g. above and below this comment) were absolutely dripping with assumption-of-evil-intent, outright insults, unfounded leaps to Harsh Judgments of my fundamental character, poor logic, fallacious smears, and so on and so forth. They dropped into the thread after there were already over a hundred comments, including many where I'd demonstrated credible evidence of good faith and willingness to change my mind, which they completely ignored. They continued to ask loaded, unfair questions and set up strawmen over and over and over, with at least a dozen posts containing both deontological hostility and bad epistemics. They then offered a single token apology conditional on "if" their tone had been too harsh (rather than just saying "sorry, I crossed the line," as I myself have done in these comments at least twice), and dropped the overtly hostile tone while continuing to subtly insinuate that I'm a bad actor in every post.

(I note that in the places where they didn't do this, I answered them in the same way I was answering everyone else, up until deciding to disengage on a policy level.)

Given that my stated role model is Ender Wiggin, if somebody thinks handoflixue's approach is okay, or thinks that I shouldn't have defended myself, then it shouldn't be surprising that I claim, as my personal opinion, that their moral compass is drastically askew. There's a different question about whether I've marginally erred, e.g. by being 15% too defensive, but that shouldn't trigger someone who's not going to be hostile in the first place to be afraid.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T04:55:18.339Z · LW(p) · GW(p)

To put it another way: The person who's refusing to give me their well-thought-out criticism is a) unable because of costs/time constraints to look further and see that my claim that they have nothing to fear is credible, b) themselves jumping to unfounded conclusions based on less data than they have available to them, or c) still convinced, after following the whole chain, that I'm at fault.

If a), then fair play—this is nobody's first priority except mine, and I don't feel entitled to everyone's opinions; it's perfectly reasonable to have a policy of not spending a lot of time if your first impression is strongly negative.

If b), and they have time to look but are choosing not to and running with a strawman without questioning their own conclusions, then ... well ... it probably wouldn't have gone well anyway.

If c), they've followed the whole chain in chronological order and they still think I'm at fault, then that just means we have strongly differing priors on right and wrong/acceptable and unacceptable, and once you get down to values on that level, I don't know how well we'd be able to pass one another's ITTs anyway.

handoflixue's earlier comments were absolutely dripping with assumption-of-evil-intent, outright insults, unfounded leaps to harsh judgments of my fundamental character, poor logic, fallacious smears, and so on and so forth. They dropped into the thread after there were already over a hundred comments, including many where I'd demonstrated credible evidence of good faith and willingness to change my mind, which they completely ignored. They continued to ask loaded, unfair questions and set up strawmen over and over and over, with at least a dozen posts containing both deontological hostility and bad epistemics. They then offered a single apology conditional on an "if" (rather than just saying "sorry, I was too harsh," as I myself have done in these comments at least twice), and dropped the overtly hostile tone while continuing to subtly insinuate that I'm a bad actor in every post.

If somebody thinks that's okay, or thinks that I shouldn't have defended myself, then that's somebody whose moral framework is, in my personal opinion, drastically askew. There's a different question about whether I've marginally erred, e.g. by being 15% too defensive, but that shouldn't trigger someone who's not going to be hostile in the first place to be afraid.

Replies from: Lumifer
comment by Lumifer · 2017-06-01T05:36:43.933Z · LW(p) · GW(p)

handoflixue's earlier comments were absolutely dripping with assumption-of-evil-intent, outright insults, unfounded leaps to harsh judgments of my fundamental character, poor logic, fallacious smears, and so on and so forth. They dropped into the thread after there were already over a hundred comments, including many where I'd demonstrated credible evidence of good faith and willingness to change my mind, which they completely ignored. They continued to ask loaded, unfair questions and set up strawmen over and over and over, with at least a dozen posts containing both deontological hostility and bad epistemics.

Just pondering this passage. Interesting.

comment by handoflixue · 2017-05-31T19:00:51.836Z · LW(p) · GW(p)

Fine. Reply to my OP with links to where you addressed other people with those concerns. Stop wasting time blustering and insulting me - either you're willing to commit publicly to safety protocols, or you're a danger to the community.

If nothing else, the precedent of letting anyone recruit for their cult as long as they write a couple thousand words and paint it up in geek aesthetics is one I think actively harms the community.

But, you know what? I'm not the only one shouting "THIS IS DANGEROUS. PLEASE FOR THE LOVE OF GOD RECONSIDER WHAT YOU'RE DOING." Go find one of them, and actually hold a conversation with someone who thinks this is a bad idea.

I just desperately want you to pause and seriously consider that you might be wrong. I don't give a shit if you engage with me.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T00:27:53.672Z · LW(p) · GW(p)

I note for others reading this comment and wondering why it hasn't been addressed that I've ceased replying to handoflixue and a couple of other posters on a policy level, for reasons surrounding norms of discourse, strawmanning, epistemic humility, presence or absence of good faith, etc. It's possible that the above contains good questions or insights; if someone else chooses to repost/re-ask/rephrase sections of this, I'll likely respond to them.

http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/

comment by handoflixue · 2017-05-31T19:11:44.284Z · LW(p) · GW(p)

Also: If you refuse to give someone evidence of your safety, you really don't have the high ground to cry when that person refuses to trust you.

comment by cousin_it · 2017-05-31T10:06:14.875Z · LW(p) · GW(p)

Scott just chimed in against:

On third thought, everyone else is right and I am wrong. The Dragon Army group house is a very bad idea, enough so that it’s okay to be forceful in encouraging Duncan to modify it or other people not to join it. This is true even if the required modifications are so hard that they end up sinking the project.

comment by iamaknave · 2017-05-30T19:20:57.048Z · LW(p) · GW(p)

One particularly dangerous failure mode is that people may lose the capacity to recognize when the situation is toxic, unhealthy or counter-productive. The sunk cost fallacy is a powerful thing, as are the effects of strong emotional attachment. You may want to consider having a mandatory short vacation period from the house. This will allow people to take some space to get perspective on the house.

You also may want to mandate external social supports such as therapy, external friend groups, etc.

Replies from: jimrandomh, Duncan_Sabien
comment by jimrandomh · 2017-05-30T19:25:10.395Z · LW(p) · GW(p)

Agree. I think there are ways to do this without making it seem scary or unnatural, like "everyone visits family for a week around Thanksgiving".

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-31T17:26:15.738Z · LW(p) · GW(p)

This seems entirely sensible. We've already decided to institute the outside friend check-in, and occasional vacation is an obvious extension of that.

comment by gwillen · 2017-05-26T10:26:37.211Z · LW(p) · GW(p)

I find this project very interesting! I can imagine an alternate-universe version of me being super excited to join it. I think it's even possible that the this-universe version of me could benefit a lot from joining it. (I would see most of the benefit for myself in solving Problem 2, I think.)

But... I think there is not more than an 80% chance I would make it 6 months in such an environment without hitting the eject button to preserve my own sense of (physical or psychological) safety. (That is, a chance of at least 20% that I would hit the eject button.) I do think it's great that Code of Conduct rule #1 encourages people to protect their own safety even at the cost of leaving the project. (Although for people of limited economic means this might be hard to execute, given the need to find a replacement, so probably "has the means to deal with needing to leave if the project doesn't work out" is a screening factor.)

It's possible this is just a fact about me, more than about the project. But I don't have the sense that a lot of other members of the rationalosphere would well tolerate, say, an actual military boot camp environment, which feels a lot like the direction this is aimed. It's possible I'm misunderstanding the degree of control you / the project expects to exert over the lives of the participants. But I know that I got happier when I adopted the rule that adulthood means never letting anybody force me to do anything that feels unsafe, even if refusing has significant costs. (For comparison, my largest concern about going to a CFAR workshop was that being subjected to a "comfort zone expansion" exercise, while in remote woods, with complete strangers, on a sunk cost of thousands of dollars, would be a high-stakes problem if I didn't like how it went. Pete Michaud correctly disabused me of this concern during the interview.) Again, perhaps this just means that Dragon Army is not for me. But I'm curious what you think about it. It seems hard to imagine I could go 6 months of committing to try to perfectly execute all the stated rules plus one experimental norm per week without ending up in at least one situation where following the rules felt unsafe.

Separately, I'm interested in whether you think Problem 4 could be tackled separately from an all-consuming project like Dragon Army. I feel like I have seen the "desperately hoping nobody will bail after the third meeting" thing a lot before, but usually the context is "a bunch of people vaguely want to get a thing done but nobody has really committed to it", in which context bailing after the third meeting is not violating any norms or agreements. Without making any new norms, one already has the option of actually asking for explicit commitments, rather than just seeing who shows up, and I think this option is not used often enough. I guess the failure mode of trying to solve Problem 4 alone is, if you ask for explicit commitments, you discover that people just won't give them in the first place. Dragon Army seems like a big hammer to solve this but maybe it's the only way?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T10:59:54.152Z · LW(p) · GW(p)

I think the main issue here is culture. Like, I agree with you that I think most members of the rationalsphere wouldn't do well in a military bootcamp, and I think this suggests a failing of the rationalist community—a pendulum that swung too far, and has weakened people in a way that's probably better than the previous/alternative weakness, but still isn't great and shouldn't be lauded. I, at least, would do fine in a military bootcamp. So, I suspect, would the rationalists I actually admire (Nate S, Anna S, Eli T, Alex R, etc). I suspect Eliezer wouldn't join a military bootcamp, but conditional on him having chosen to do so, I suspect he'd do quite well, also. There's something in there about being able to draw on a bank of strength/go negative temporarily/have meta-level trust that you can pull through/not confuse pain with damage/not be cut off from the whole hemisphere of strategies that require some amount of battering.

It makes sense to me that our community's allergic to it—many people entered into such contexts before they were ready, or with too little information, or under circumstances where the damage was real and extreme. But I think "AVOID AT ALL COSTS! RED FLAG! DEONTOLOGICAL REJECTION!" is the wrong lesson to take from it, and I think our community is closer to that than it is to a healthy, carefully considered balance.

Similarly, I think the people-being-unreliable thing is a bullshit side effect/artifact of people correctly identifying flexibility and sensitivity-to-fluctuating-motivation as things worth prioritizing, but incorrectly weighting the actual costs of making them the TOP priorities. I think the current state of the rationalist community is one that fetishizes freedom of movement and sacrifices all sorts of long-term, increasing-marginal-returns sorts of gains, and that a few years from now, the pendulum will swing again and people will be doing it less wrong and will be slightly embarrassed about this phase.

(I'm quite emphatic about this one. Of all the things rationalists do, this one smacks the most of a sort of self-serving, short-sighted immaturity, the exact reason why we have the phrase "letting the perfect be the enemy of the good.")

I do think Problem 4 can probably be solved incrementally/with a smaller intervention, but when I was considering founding a house, one of my thoughts was "Okay, good—in addition to all the other reasons to do this, it'll give me a context to really turn a bazooka on that one pet peeve."

Replies from: Vaniver, Qiaochu_Yuan, catch223, handoflixue, gwillen, John_Maxwell_IV
comment by Vaniver · 2017-05-26T18:32:36.221Z · LW(p) · GW(p)

I suspect Eliezer wouldn't join a military bootcamp, but conditional on him having chosen to do so, I suspect he'd do quite well, also.

Eliezer wasn't able to complete high school, for what I suspect are related reasons. (The sleep thing may have contributed, but I think it was overdetermined.)

I think I would have been extremely miserable if I had gone through boot camp at 18; I think I would have been able to bear going through it by ~25.

comment by Qiaochu_Yuan · 2017-05-26T18:41:59.065Z · LW(p) · GW(p)

I think a relatively tight analogy can be made between attitudes towards the authoritarianism of a military bootcamp and attitudes towards romantic relationships. Like, if you go through a string of really bad relationships with partners who consistently abused you, you might update that there's something inherently abusive about relationships and that you just shouldn't be in one again, ever, because your autonomy is too important. On the other hand there is such a thing as a healthy relationship, even a healthy relationship in which you have less than perfect autonomy because you've made some commitments that you're following through on, and you might be lucky enough to find yourself in one in the future if you're open to the possibility and search carefully for someone to commit to.

I think I disagree that the pendulum will swing back in the future though. The rationality community being the way it is now, prioritizing flexibility the way it does now, probably has the property that it attracts people who are prioritizing flexibility and turns off people who are looking for reliability. So if anything I expect the problem to get worse over time unless someone makes a deliberate effort to attract looking-for-reliability sorts of people - hopefully Dragon Army can do this.

Replies from: Lumifer
comment by Lumifer · 2017-05-26T20:57:20.244Z · LW(p) · GW(p)

a relatively tight analogy can be made between attitudes towards the authoritarianism of a military bootcamp and attitudes towards romantic relationships

I don't get the analogy. So, if you go through a string of really bad military bootcamps? But you need to stay open to the possibility of a really good bootcamp that you can and should commit to?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2017-05-26T21:17:40.531Z · LW(p) · GW(p)

Yes, but using "military bootcamp" as a symbol of broader kinds of authorities you could submit to, e.g. schools, employers, governments, and keeping in mind that people are learning about how authorities work based on others' experiences and not just their own.

comment by catch223 · 2017-05-28T23:33:49.952Z · LW(p) · GW(p)

As someone who's done the whole military thing (am I alone?), I agree with your view that most members of the rationalsphere would struggle immensely in bootcamp, both in terms of physicality and culture (I'm referring mostly to the Army and Marines here, which focus on actual combat training vs. the Air Force and Navy that don't).

I totally agree that you would have 0 problems (other than patience with the stupid parts) as you have a high degree of physical ability, emotional resilience, and general cognitive ability. You would very likely excel. I could say the same of Val and Pete, and I'm sure Eli would do well (I don't know the others you listed well enough to venture a guess).

I have never met Eliezer. However, I suspect he would struggle a great deal and be unlikely to succeed, from what I've read and been told. I can't imagine Eliezer playing, say, football well either. My model of him just says he's simply not optimized for that kind of environment, where his intellectual strengths would be limited and his weaknesses amplified. It's just not a remotely optimal environment for someone who is (according to my model of him) built like a race car: extreme performance within strict parameters (flat track, maintenance, etc.).

And that's okay. The military enlisted system at least typically focuses on taking both physical and intellectual generalists and training them to perform a specific job. It's all about the averages. The cockpit is decidedly not adjusted for individual needs or specialized performance for the vast majority of military personnel.

I do hope you're at least somewhat right about the long-term, increasing-marginal-returns sorts of gains, since that's my current strategy for achieving high impact on important matters.

comment by handoflixue · 2017-05-30T19:49:05.811Z · LW(p) · GW(p)

Similarly, I think the people-being-unreliable thing is a bullshit side effect

You may wish to consider that this community has a very high frequency of disabilities which render one non-consensually unreliable.

You may wish to consider that your stance is especially insulting towards those members of our community.

You may wish to reconsider making uncharitable comments about those members of our community. In case it is unclear: "this one smacks the most of a sort of self-serving, short-sighted immaturity" is not a charitable statement.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-31T17:14:06.254Z · LW(p) · GW(p)

Oh, I missed this one in the shuffle. Note that you chose to quote less than half a sentence, because if you quoted the whole sentence you'd have a heck of a time setting up the strawman you wanted to knock down.

Replies from: tcheasdfjkl
comment by tcheasdfjkl · 2017-06-01T03:04:59.869Z · LW(p) · GW(p)

Hi Duncan, I'm a relative newcomer (this is my first LW thread, though I've participated in rationalsphere discussions elsewhere), so this may not carry much weight, but I want to somewhat agree with handoflixue here.

One of my stronger reactions to your post is "this is an impossible set of expectations for me and a lot of others". Which is fine, obviously you can have expectations that some people can't live up to, and of course it is very good that you are making these expectations very clear.

But I sort of get the sense that you are a person who is fundamentally capable of being reliable and regularly making good life choices pretty easily, and that you sort of don't get that for a lot of people these things are really hard even if they understand what the right choice is and are legitimately trying their best to do that.

This is based only partly on your post and somewhat more on a mini-talk which (IIRC) you gave at a CFAR community night where you posed the question "does it even make sense for people to seek out advanced rationality techniques such as the ones discussed here when they're not displaying basic rationality such as eating a reasonable diet and sleeping enough?". Even then, this question struck me as dangerously wrong-headed, and now that you are proposing to be in charge of people, this seems to take on more importance.

Advanced rationality techniques, at least when applied to one's self-conception and life choices, are basically therapy. "Failures of basic rationality" are often better described as "mental health issues". Therapy is how you deal with mental health issues. People with mental health issues need more therapy/advanced rationality, not less! I've seen it hypothesized that one reason we have so many mentally ill rationalists is because people with mental health issues must learn rationality in order to function, at least to some degree that is more than most people need.

I don't actually know you, so my information is pretty incomplete, but my impression is that if someone fails to act in a way you (and they!) think is reasonable, you're likely to become baffled and frustrated and try to deal with the problem by imposing stricter expectations & consequences. This might work for some people, but for many, it will just make them miserable and less productive because they will be angry at themselves for failing at things that they "should" be able to do.

I think it's likely that your way of dealing with this is basically to screen out the people who are likely to react poorly to your approach, in addition to causing others like me to self-select out. That's fine, I guess, though I would still be on the lookout for this sort of issue as a possible failure mode, and maybe also just demonstrate more compassionate awareness that things like reliability are actually almost impossible for some people, and maybe not attribute all of this to having the wrong culture or mindset.

(My general opinion of your project is "this sounds scary and I want to stay very far away from it, and this makes me somewhat wary of the people involved, and I wouldn't recommend participation to people I know, at the same time I am really curious about how this will go so selfishly I'm a little glad it's happening so I can gain information from it".)

Replies from: Duncan_Sabien, fubarobfusco, Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T03:30:54.188Z · LW(p) · GW(p)

Thanks for the long comment. I really appreciate your candor and perspective—I do think I get the fact that other minds don't work like mine, but you're right in sniffing out that a lot of that knowledge is top-down and parts of me are still instinctively typical-minding a lot. I work hard to remind myself, e.g. I have triggers on certain words or feelings that cause me to review memories of specific times when my assumptions about what was going on in someone else's head were blindingly false.

I think I generally agree with you that there's a large overlap between rationality and therapy, and I'm intrigued by the hypothesis re: mentally ill rationalists; it seems pretty plausible.

Here's my actual plan if someone fails to act in a way that seems reasonable. Note that this is the "everything but the kitchen sink" option, including aaaaaallll of the steps one might take, and that for smaller disagreements, this can be done as a speed run or stepwise.

  • Determine whether to follow up in the moment or later based on the needs of the activity, determine whether to follow up in private, in group, or via delegation based on the apparent needs of the person.
  • Start by asking. What did they think was going on? What were their thought processes? Assume from the outset that people act in consistent, coherent ways, and that basically everyone is trying to make the world a better place.
  • Try to pass their ideological Turing test. In other words, try to reflect back to them the priorities they were holding and the goals they were attempting to achieve, and keep falsifying my hypotheses until they give a clear endorsement of my summary.
  • Ask them to model me, in return (note: one important subthread of how the house will run is a check-in along the lines of "is Duncan clear, consistent, and model-able?"). See if they can predict what my priorities were, and if they have a sense of what I'm reacting to. Do not make this some kind of sick high-pressure quiz dynamic ... if they shrug and say "dunno," I'll just explain.
  • Try to lay out, from as bird's-eye a perspective as possible, the conflicting goalsets. Point at the causal chains that brought them into conflict, and highlight my model of where things are broken. Ask them if they have a different model/let them update my picture with a better sense.
  • Form a new plan for the future; explicitly discuss weighing the goals against one another, and how they ought to stack up. Possibly include other people in the discussion at this point, particularly if the defection seemed to have externalities.
  • Assume that plan failed. Come up with a plausible explanation for why; try to patch the first or second obvious holes. Form an intention going forward.
  • Check whether reparations need to be made. Hopefully, there's a standard formula (as in the pushups example). If not, do a similar process of attempting to converge on a good face-saving/balance-restoring action. If there isn't a clear satisfactory solution, default to a compromise and schedule a future check-in.
  • Through all of this, run things by others if either party thinks that'd be beneficial. Also consider things like anxiety/introversion, and have the conversation at a deliberate time rather than forcing it if it's not urgent.

So yeah, in a sense, this might result in stricter expectations and consequences, but not in a blind, top-down way. In situations where there needs to be an immediate response, I'll take an action/give an order and expect it to work, but I'll want to revisit any such quick authoritarian moves after the fact, to explain my thinking and confirm absence of undue harm (and apologize/make amends of my own if necessary).

Overall, though, the idea is to build a high trust environment, and trust goes both ways and is easier to lose than to gain. The thing I want people in the house to actually be justified in believing is "Duncan always has good intentions and is making decisions from some kind of a model. He'll explain when he can, and if he doesn't, it's because he has another model saying why he can't, and he'll instead explain both models once the thing is over."

The idea being that I prove trustworthiness in situations 1-8, and people grant me a little leeway in situation 9. But 1-8 definitely have to come first.

comment by fubarobfusco · 2017-06-01T03:26:09.087Z · LW(p) · GW(p)

Advanced rationality techniques, at least when applied to one's self-conception and life choices, are basically therapy. "Failures of basic rationality" are often better described as "mental health issues". Therapy is how you deal with mental health issues. People with mental health issues need more therapy/advanced rationality, not less! I've seen it hypothesized that one reason we have so many mentally ill rationalists is because people with mental health issues must learn rationality in order to function, at least to some degree that is more than most people need.

This reminds me of Romeo's comment over here:

http://lesswrong.com/lw/oym/how_id_introduce_lesswrong_to_an_outsider/dryk

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T03:17:35.239Z · LW(p) · GW(p)

Um. Quick reply before I go further—I'm really, really confident either that the community talk night thing you're remembering wasn't me, or that the quote doesn't resemble what I said. I strongly agree with you that that's a dangerously wrong-headed way to try carving up the world.

Replies from: tcheasdfjkl
comment by tcheasdfjkl · 2017-06-01T03:28:46.884Z · LW(p) · GW(p)

Oh, sorry for that mistake, then! Probably it was someone else. feels mildly embarrassed

I'm glad to hear you agree with my assessment of that way of thinking. In that case not very much of my comment actually stands.

comment by gwillen · 2017-05-26T18:32:18.534Z · LW(p) · GW(p)

Thank you for your thoughtful response!

comment by John_Maxwell (John_Maxwell_IV) · 2017-05-28T01:51:39.667Z · LW(p) · GW(p)

I suspect Eliezer wouldn't join a military bootcamp, but conditional on him having chosen to do so, I suspect he'd do quite well, also.

Doesn't Eliezer delete comments on Facebook that suggest exercise as a means of weight loss?

Replies from: Decius
comment by Decius · 2017-05-28T08:02:00.757Z · LW(p) · GW(p)

That's not because he didn't do the exercise. Bootcamp doesn't care if you lose weight; they only care if you execute the weight loss program. If you don't meet the body proportion standards, you just have to perform extra exercise.

Replies from: catch223
comment by catch223 · 2017-05-28T23:14:06.331Z · LW(p) · GW(p)

Bootcamp (i.e. the military) cares very much about both losing sufficient weight to meet the standard as well as the ability to perform at a basic level of physical fitness. The different U.S. military services have differing standards, but the general requirements are all comparable.

In an environment where the food supply is tightly controlled and there is constant movement, people tend to lose a lot of weight quite rapidly.

However, if you don't meet the body proportion standards after a certain time, you will be separated from the military.

Replies from: Decius
comment by Decius · 2017-05-30T02:35:51.097Z · LW(p) · GW(p)

Part of the program is separating people who don't lose weight. That doesn't mean they care about the height/weight, only that the next box is 'process for separation'.

There's not a lot other than adherence to procedure that most of the military actually does care about.

Replies from: catch223
comment by catch223 · 2017-05-31T01:58:16.027Z · LW(p) · GW(p)

I'm not sure if I'm totally missing your point, or if you're making a point that's a distinction without a difference.

In Army basic training, there are two standards one must meet:

  1. height/weight, adjusted for age and gender
  2. PT test, which consists of push-ups, sit-ups, and a 2-mile run, with scoring adjusted for age and gender

Failing either one will get you chaptered out of the Army within certain timeframes. There is a lot of fine print for specific situations (basic training has some extra cushion), but that's the ground truth. These same principles apply to the military at large, but the standards and fine print differ.

I don't know how that squares with: "That doesn't mean they care about the height/weight."

In an organization so devoted to adherence to procedure, what the procedures are set up to be is often a pretty strong indicator of what the organization cares about...

Replies from: Decius
comment by Decius · 2017-06-29T03:59:59.491Z · LW(p) · GW(p)

No individual cares about anything other than the procedures. Thus, the organization as a whole cares only about the procedures. With the procedures that exist, the behavior looks similar to caring about fitness, but there is also a procedure to change procedure.

If the organization cared about fitness, the procedure to change the height/weight standards would be based on fitness. As it is, it is more based on politics. Therefore I conclude that the Army cares more about politics and procedures than fitness, and any behavior that looks like caring about fitness is incidental to their actual values.

comment by katydee · 2017-05-26T02:29:19.386Z · LW(p) · GW(p)

With respect to power dynamics point one and two, there is another person known to the community who is perhaps more qualified and already running something which is similar in several respects - Geoff Anders of Leverage Research. So I don't think this is precisely the only group making an attempt to hit this sort of thing, though I still find it novel and interesting.

(disclaimer: I was at the test weekend for this house and am likely to participate)

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T03:12:35.446Z · LW(p) · GW(p)

Yeah, Geoff and Leverage have a lot I would love to look at and emulate, but I haven't been running under the assumption that I'd just ... be allowed to. I'm beginning some conversations that are exciting and promising.

That being said, I do think that the overall goals are somewhat different. Leverage (as far as I can tell) is building a permanent superteam to actually do stuff. I think Dragon Army is building a temporary superteam that will do stuff in the short and medium term, but is more focused on individual leveling up and sending superhero graduates out into the world to do lots and lots of exploring and tackle a wide number of strategies. My model of Leverage is looking for the right thing to exploit on, whereas I'm looking for how to create competent people, and while there's a lot of overlap those are not the same Polaris.

I similarly think Geoff is highly competent and certainly outstrips me in some ways (and possibly is net more qualified), but I'd posit I outstrip him in a roughly similar number of ways, and that he's better matched for what Leverage is doing and I'm better matched for what DA is doing (sort of tautologically, since we're each carving out mountains the way we think makes the most sense). I think the best of all would be if Geoff and I end up in positions of mutual respect and are able to swap models and resources, but I acknowledge he's a good five years my senior and has no reason to treat me as an equal yet.

EDIT: Also note that Geoff is disqualified by virtue of already being busy, and as for "just join Leverage," well ... they've never really expressed interest in me up to this point, so I figured I wouldn't bother them unless I was no longer employed day-to-day.

Replies from: Benquo
comment by Benquo · 2017-05-26T03:15:45.035Z · LW(p) · GW(p)

What do you think are the key advantages & disadvantages of your Polaris vs Leverage's? How does this relate to methods?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T03:47:35.171Z · LW(p) · GW(p)

I dunno about "key." Open-ended brainstorm, keeping in mind that my models of Leverage are vague and straw and NO insult is intended if I get things wrong ...

Leverage advantages - provides a discriminator that lets you tell more accurately who fits and who doesn't, sounds better if your goal is to accrue funding, is better if your goal is to return money to an investor, provides your participants with a strong mission that they can write in their hearts rather than a vague one that might be hard to grasp, gives you a narrowing principle that helps you discard certain kinds of growth as irrelevant/boondoggle with reasonably high confidence

Leverage disadvantages - seems (from my limited outside vantage point) to require people to more closely conform to the shape of the leader/take on a singular mission rather than allowing for different colors in the spectrum, seems to fall prey to the intellectual property and get-there-first problems that encourage isolation from the broader network of allies, (maybe) requires you to somewhat distort what you're doing to please investors, (maybe) requires you to strike the balance between deciding-too-soon and being-decision-paralyzed because you have to cohere around a smaller number of goals at a time

Dragon Army advantages - adheres (slightly) more closely to what the average rationalist wants and thus opens you up to a (slightly) wider range of participants, causes members to gain leadership and facilitation skills of necessity rather than accidentally/luckily, (somewhat more) forces people to confront the question what do you really want instead of giving them an easy out by handing them a distracting answer, doesn't require as much funding, biases toward action rather than running the risk of spiraling up into the meta

Dragon Army disadvantages - more vulnerable to strawmanning and skepticism because it is less coherent and clear, much more vulnerable to confusion or failure if I get hit by a bus because the models all live in my head and aren't yet interactable, runs the risk of losing people who are impatient and feel like they're lost in triviality, is less viscerally rewarding (jack of all tradesing, that is) than getting gold medals as a master, needs a longer runway/longer start time because it's more explicitly about culture building and less about objective checkpoints that you can meet on the fly

incomplete

Note that I CANNOT STRESS ENOUGH that my models of Leverage are VAGUE AND PROBABLY WRONG and also note that I'm sleep-deprived and I am aware that this may not really answer your question.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T07:49:38.914Z · LW(p) · GW(p)

Oh, also: AFAIK, Leverage is actually fairly low on precommitment, i.e. if someone were to want everyone to get together in the same room at the same time on a regular basis, they would have to go around and win the argument something like forty times, and at any time someone who'd previously been convinced could just say, "actually, never mind, I changed my mind and I have better things to do again," and there aren't any ... initially consensual, eventually coercive? ... structures in place.

Nothing, in short, to get people across the unpleasant valley except their own epistemics and willpower ... no firm, unyielding scaffold that can be built such that others can rely on it. So, Leverage has the advantage of not having the failures of such a system (e.g. people getting trapped and wasting time), and Dragon Army has the advantages of having the benefits of such a system (Actual Reliability that doesn't require inordinate upfront costs, the ability to develop an ecology of affordances over time upon a Schelling point of togetherness).

comment by handoflixue · 2017-05-30T11:10:29.203Z · LW(p) · GW(p)

First, you seem to think that "Getting Useful Things Done" and "Be 99.99% Reliable" heavily correlate. The military is infamous for bloated budgets, coordination issues, and high rates of sexual abuse and suicide. High-pressure startups largely fail, and are well known for burning people out. There is a very obvious failure state to this sort of rigid, high-pressure environment and... you seem unaware of it.

Second, you seem really unaware of alternate organizational systems that actually DO get things done. The open source community is largely a loose model of "80% reliable" components, and yet great things get built by these collaborations. Rome wasn't built in a day, and neither was Linux.

"we often convince ourselves that 90% or 99% is good enough, when in fact what's needed is something like 99.99%."

Third, and most bluntly: I don't think you have the slightest knowledge of Fault Tolerant Design, or how to handle Error Cases, if you would say something like this. I write software that can rely on its inputs working maybe 80% of the time. This is accounting software, so it is NOT allowed to fuck up on corner cases. And I do it just fine. 80% is perfectly sufficient, if you know how to build a system that fails safely.
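
To make that concrete, here is a minimal sketch of the kind of fail-safe handling I mean, in Python. The record format and field names are hypothetical, but the principle is the one above: validate every input, quarantine anything malformed, and never let a bad record corrupt the totals.

```python
from decimal import Decimal, InvalidOperation

def safe_total(raw_entries):
    """Sum ledger amounts from an unreliable feed, failing safely.

    Malformed records are quarantined for review rather than guessed at,
    so even ~80%-reliable input cannot corrupt the running total.
    """
    total = Decimal("0")
    quarantine = []
    for entry in raw_entries:
        try:
            # Hypothetical record shape: {"amount": "<decimal string>"}
            amount = Decimal(str(entry["amount"]))  # rejects non-numeric garbage
        except (KeyError, TypeError, InvalidOperation) as err:
            quarantine.append((entry, repr(err)))  # hold for human review
        else:
            total += amount
    return total, quarantine

# Two of these five records are malformed -- roughly the "80% reliable" case.
records = [{"amount": "12.50"}, {"amount": None}, {"amount": "3.99"},
           {}, {"amount": "7.01"}]
total, held = safe_total(records)
print(total)      # 23.50  (computed only from the valid records)
print(len(held))  # 2      (bad records held for review, not silently ignored)
```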

I think this makes you a uniquely bad candidate for this sort of endeavor, because the first iteration of this experiment is going to be running at maybe 80% reliability. You're going to have a ton of bugs to iron out, and the first run needs someone who can work with 80%. And you've been pretty blunt that you're inept in that area.

Fourth, your thresholds for success are all nebulous. I'd really expect testable predictions, ideally ones that are easy for the community to evaluate independent of your own opinions. It seems like the goal of this exercise should be to produce data, more than results.


All that said, I do value the focus on iteration. I think you will be prone to making more mistakes, and inflicting more unnecessary suffering on participants, but I do not think you have any sort of malicious intent. And with no one else really stepping up to run this sort of experiment... well, if people are willing to make that sacrifice, I'm happy to learn from them?

But I think you dramatically overestimate your ability, and you're underselling how badly the first version is going to go. There are going to be bugs. You are going to need to learn to deal with the 80% that you get.

And on top of that, well, the consequences for failure are actually worse than being homeless, since you're also responsible for finding a replacement. That's a really huge risk to ask people to take, when you yourself have absolutely nothing at stake.

I think your heart may well be in the right place, but the idea as currently conceived is actively harmful, and desperately needs to build in much better safety protocols. It also needs to be much clearer that this is an initial draft, that it will go badly as people try to figure this out, and that initial participants are going to be suffering through an unoptimized process.


Finally: You don't have a fail-safe for the case where the whole idea proves non-viable. As it stands right now, you kick everyone out but leave them on the hook for rent until they've run 3 replacement candidates by you. In the meantime, you enjoy a rent-free house.

It really feels like it needs an "ABORT" button where the participants can pull the plug if things get out of control, if you turn out power-mad, or if it just turns out a significant number of participants badly estimated how this would go.

The fact that you have nothing on the line, and no fail-safe / abort clause... really, really worries me?


TL;DR: Your plan is dangerous and you haven't given nearly enough thought to keeping people safe. Scrap what you have and rebuild it from the ground up with the notion of this being a safe experiment (and I want to emphasize both the word "safe" and the word "experiment" - you should be expecting the initial version of this to fail at producing results, and instead largely to produce data on how to do this better in the future).

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T16:45:27.902Z · LW(p) · GW(p)

Nah.

(Having exchanged half a dozen comments with cousin_it, I now recognize the pattern of a) you're defaulting to the least charitable interpretation at every possible split point, b) many of your claims and conclusions are flat-out false, c) you're incredibly confident that you're correct about all of your assumptions and are including zero nuance or uncertainty, and therefore d) this thread will produce nothing of value. I feel no need to convince people who a, b, and c, especially those who are unable to distinguish object level standards from meta level ones. Contrast your post with jbeshir's, for instance, which is also highly critical but in an entirely constructive way that doesn't make the same mistakes.)

Yes, we have noticed the skulls. (http://slatestarcodex.com/2017/04/07/yes-we-have-noticed-the-skulls/)

Replies from: Kaj_Sotala, handoflixue, cousin_it
comment by Kaj_Sotala · 2017-05-30T19:10:58.109Z · LW(p) · GW(p)

Datapoint: I thought handoflixue's comment was much more reasonable and less uncharitable than cousin_it's opening comment was; in particular, the points about needing an explicit abort procedure sounded very reasonable, and it makes me slightly worried to see you making a comment that implies you're just disregarding them. (Only slightly, because of my personal trust in you and your abilities; I expect that people who don't know you will get much more worried.)

EDIT: I wrote this comment before reading your reply to jbeshir's comment; your response there further reduces my worry.

Replies from: cousin_it, Duncan_Sabien
comment by cousin_it · 2017-05-30T20:59:31.488Z · LW(p) · GW(p)

Kaj, I'm surprised. What do you think of this? Especially Ctrl+F "self-insert" and "horns effect".

Replies from: Kaj_Sotala, Duncan_Sabien
comment by Kaj_Sotala · 2017-05-31T10:25:31.183Z · LW(p) · GW(p)

What do you think of this?

Not knowing the author, I can't say much else than "someone freaked out"? I see mostly a strong emotional reaction, which looks to me similar to a bunch of other strong emotional reactions that people have had when they've pattern-matched things in the rationalist community to their stereotype of a cult, without really understanding the community (or necessarily cults either).

Replies from: cousin_it, cousin_it, cousin_it
comment by cousin_it · 2017-05-31T11:07:21.394Z · LW(p) · GW(p)

Ah, now I see why some smart folks were okay with Duncan's idea. They pattern-matched criticisms of it to criticisms of the rationalist community! That's sneaky, even Scott fell prey to it, though he came around quickly (check his tumblr).

It seems like the only way "weird" groups can defend against such radicalization over time is by adopting "normie" ideas. I've been advocating that for a while, but I know it's a hard sell here because many rationalists feel hurt by normies.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2017-05-31T12:35:09.254Z · LW(p) · GW(p)

They pattern-matched criticisms of it to criticisms of the rationalist community!

Well, what else can you say to a criticism that's mostly an emotional outburst? That post was using every opportunity it could to interpret Duncan's post in a maximally uncharitable light and turn stuff into ad hominems, such as "yes, dude, I too had a job in college". I searched for the "self-insert" phrase like you asked me to, and it brought up a line where the author expressed not liking Duncan's writing. What substantive point am I supposed to take out of someone's literary preferences? (Also, the author mischaracterizes "A Weekend with the Legion" - to the extent that it's a self-insert fic, it's one of joining a rationalist group house, not founding one, and I'm not sure where the "mary sue" thing came from.)

For me personally, a lot of what Duncan wrote resonated in me a lot in that I've long wished to live in a society that would be arranged kind of like he described Dragon Army, and it seemed clear that he'd seen the same things and worked off a similar model. Whereas few of the criticisms seemed to understand those intuitions/emotional needs that I presume we're both operating out of, so ended up missing the mark. E.g. I'm totally willing to buy it when he says that he doesn't actually want to be the leader, both because I've met him, and also because not wanting to be the leader is a major part of why I'm not trying to create a similar project myself now that I've read his post (that, and because it would be too difficult to explain to people without them pattern-matching it into cults).

Replies from: cousin_it, cousin_it
comment by cousin_it · 2017-05-31T13:57:40.637Z · LW(p) · GW(p)

It feels weird saying this to you, and please don't take it too seriously, but if you feel an emotional need to live in a commune with salutes, push-up punishments and restrictions on criticism, have you considered that your emotions might be wrong (from an outside perspective)? For example, many of my emotions are wrong; that's why I don't text my exes while drunk.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2017-05-31T14:43:37.073Z · LW(p) · GW(p)

No offense taken.

The things you mentioned seem to me more incidental than essential features of the commune; also, I'm not saying that I would agree with Duncan on exactly everything regarding the design - for one, I thought Ender's Game was an okay book but didn't see what all the fuss about it was. :) But then again, it's his project, and I'm sure that my ideal aesthetics wouldn't be his ideal aesthetics either.

The core things that do appeal to me are... well, this is a little hard to verbalize, since, like him, I'm operating more off a system 1, pattern-matching basis rather than any explicit first principles. But things like: agreement with the sense that the pendulum of modern society has swung a little too far with regard to individualism and commitment, a sense that there is genuine value in being part of a group where everyone is genuinely, entirely committed to the project and each other's welfare ("One for all, all for one"), where people are willing to try whatever weird things work without needing to worry about what outsiders might think, and generally having a strong supportive social structure that offers you help when you're struggling, pushes you to become the best possible version of yourself when you might otherwise slack off, and provides frequent feedback of how you're doing regardless.

I think I'd be much happier off in a situation like that, rather than the current situation where it feels like I mostly have to figure out everything myself and it's a constant struggle to find allies for any project that would make things better and which I can't pull off just by myself.

But sure, I'm open to the possibility that I'm wrong in this and such an environment wouldn't actually be good for me, or that I'm reading too much into Duncan's post and that the intuitions he's operating out of are actually substantially different from the ones I'm having.

Replies from: cousin_it, Lumifer
comment by cousin_it · 2017-05-31T15:06:24.447Z · LW(p) · GW(p)

If the problem is lack of supporting structure in modern life, surely the answer is joining traditional institutions, not more costly and risky social experiments?

Replies from: Vaniver, Kaj_Sotala
comment by Vaniver · 2017-05-31T21:25:36.623Z · LW(p) · GW(p)

surely the answer is joining traditional institutions

I think this depends on how much alignment you can expect to have with traditional institutions. Quakers let in gays and atheists, but the politics of the typical member grated; joining the Mormons would involve celibacy until God calls up the prophet and tells them that being gay is okay (which I cautiously expect in less than ten years) and lying about beliefs in the supernatural. Joining the military involves participating in 'wars' that I disagree with strenuously, and when I was the right age to do it "don't ask don't tell" was still official policy (and, I later learned from an acquaintance who did go to the Academy I would've gone to, being openly atheistic was seen as an invitation for hazing by some of the instructors).

Replies from: cousin_it
comment by cousin_it · 2017-05-31T21:56:41.090Z · LW(p) · GW(p)

I'm not inviting people to join the Mormons. The OP's curriculum would be better covered by joining a gym, meditation group, public speaking club or graphic design course, which don't have the problems you mention.

Replies from: Vaniver, Lumifer
comment by Vaniver · 2017-06-01T00:09:25.780Z · LW(p) · GW(p)

I brought up the Mormons because I seriously considered joining them (and rejected it for the above reasons).

I think you're fundamentally misunderstanding the nutrient being sought out if you think that the list of four things you mention (individually or all together) would actually satisfy the relevant hunger.

Replies from: cousin_it
comment by cousin_it · 2017-06-01T07:28:36.185Z · LW(p) · GW(p)

I thought the point was learning skills and interacting with people. If the real point is filling a tribe shaped hole in your soul, I can only repeat my question to Kaj. Are you sure that yearning for a tribe is an emotion that serves your interests?

Replies from: Kaj_Sotala, Vaniver
comment by Kaj_Sotala · 2017-06-01T16:53:02.973Z · LW(p) · GW(p)

Are you sure that yearning for a tribe is an emotion that serves your interests?

Given how yearning for a tribe is a "powerful, fundamental, and extremely pervasive motivation" (old paper, but later research has only served to further confirm the general notion), I would guess yes; for me personally, "being in a tribe" seems very much like the strongest unmet terminal goal that I have.

Replies from: cousin_it
comment by cousin_it · 2017-06-01T16:59:08.081Z · LW(p) · GW(p)

That seems like proving too much, since I don't yearn for a tribe. Are you sure you aren't confusing your social needs for a specific dream of fulfilling them?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2017-06-01T17:11:26.613Z · LW(p) · GW(p)

A motivation can be "extremely pervasive" without being universal (very few things in psychology are truly universal). You may not share the yearning, but I've certainly run into plenty of people who do.

Are you sure you aren't confusing your social needs for a specific dream of fulfilling them?

That is possible, and I have made that kind of mistake before, but if there's an alternative way of fulfilling them, I haven't found it.

comment by Vaniver · 2017-06-03T01:25:45.313Z · LW(p) · GW(p)

I thought the point was learning skills and interacting with people. If the real point is filling a tribe shaped hole in your soul

It seems to me like there are flavors of 'interacting with people' that require tribe-mates.

Are you sure that yearning for a tribe is an emotion that serves your interests?

Having a tribe is one of my interests.

comment by Lumifer · 2017-06-01T01:17:15.719Z · LW(p) · GW(p)

I think you misunderstand the point. The goal is not to develop skills, the goal is to create an emotional web of support that comes from being a bona fide member of a tightly-knit tribe. You don't (normally) get that at a gym or a public speaking group.

comment by Kaj_Sotala · 2017-05-31T15:19:39.917Z · LW(p) · GW(p)

Possibly excluding some religious communities, which I wouldn't want to join because I'm not religious, I don't know of any traditional institutions that would provide general life support. Schools have some support structures in place that are aimed at helping you do better at school, martial arts training helps you become better at martial arts, etc. Which traditional institution is one that you can just join, and which is aimed at making all of its members become the best versions of themselves in all respects?

(By the way, I forgot to reply to this in the earlier comment, but I think that interpreting "start from the assumption of good faith when interacting with other members of the house" as "no criticizing the leader" is... not a particularly charitable interpretation.)

Replies from: cousin_it, tristanm
comment by cousin_it · 2017-05-31T15:39:10.990Z · LW(p) · GW(p)

When deciding who to put in power and how much power to give them, the principle of charity is harmful.

It seems to me that institutions that claim to make you better in every way are always scams. The fact that a school will teach you only welding, and will give you a welder certificate in a certain number of weeks if you keep showing up, is a feature. If you join two or three institutions according to your interests, you'll be fully booked in both self-improvement and social interaction, and it's still less costly or risky than joining an authoritarian commune.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2017-06-01T15:56:47.366Z · LW(p) · GW(p)

When deciding who to put in power and how much power to give them, the principle of charity is harmful.

There's healthy skepticism and then there's twisting words wildly beyond any reasonable interpretation...

Also, the level of skepticism should be proportionate to the level of authority requested; it makes sense to be more skeptical the more power someone wants. But my reading of the original post agrees with Sinal's, who compares the level of authoritarianism to that of a Boy Scout troop leader. The original post has stuff like the first rule of conduct for a Dragon being to protect themselves; it mentions that people can "hard veto" proposed experimental norms, and that people are free to leave the experiment if they wish. Duncan's authority seems to be limited to upholding policies that were agreed upon by group consensus and running them for a limited time; he has mentioned in the comments that he can be removed from power using the kind of procedures one would expect, e.g. a majority vote. The specific examples of his "tyrannical" powers that were given were things like deciding that a specific meeting will be held on Tuesdays even though not everyone wants the meeting to be on a Tuesday.

The Boy Scout troop leader probably has more power over his scouts than Duncan has in the house, and I doubt we'd consider people obviously unsuitable to be scout leaders for the sin of suggesting that scouts should assume good intent in their dealings with each other.

You're talking like joining this commune would be a huge, enormous risk, and I just don't see that. Sure there's a risk, but it's on the same order as joining any other commune or moving in with new roommates - you risk having a miserable time if it turns out you're not a good fit for each other, and things may be inconvenient for a while as you look for a new place to live.

Personally I made the mistake of moving in with wildly incompatible roommates at least once, and have on other occasions lived with people I'd strongly have preferred not to live with. Yes, it sucked a lot and made me much more miserable than I probably would have been otherwise. But then I moved out, and I don't think I've suffered any lasting consequences; despite the unpleasantness, I still don't consider it a risk on the order of "has to absolutely be avoided".

It seems to me that institutions that claim to make you better in every way are always scams. The fact that a school will teach you only welding, and will give you a welder certificate in a certain number of weeks if you keep showing up, is a feature.

Agreed that this is a feature: sometimes one really does only want to learn welding. But if you want to learn dancing and everyone's only teaching welding, with all the places that claim to teach dancing actually being scams... then that's a major problem for you, and suggests that you'd get a lot out of it if someone did found a dancing school that actually taught dancing and wasn't a scam.

Replies from: cousin_it
comment by cousin_it · 2017-06-01T17:04:40.985Z · LW(p) · GW(p)

I think claiming to teach skills that aren't taught by any traditional institutions is fishy. (This isn't an isolated demand; I've argued with CFAR folks that they should prioritize research into testing rationality, instead of jumping head first into teaching it.)

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2017-06-01T17:08:24.341Z · LW(p) · GW(p)

Duncan's project isn't really about teaching skills, though.

Replies from: Duncan_Sabien, cousin_it
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T17:19:49.089Z · LW(p) · GW(p)

Yeah, when we want to learn things beyond the expertise of a house member (such as when we learned to use firearms during the weekend experiment) we bring in professional help.

comment by cousin_it · 2017-06-01T17:24:45.151Z · LW(p) · GW(p)

The post says it will help you achieve three goals, of which self-improvement is the most important, and gives a list of 15 skills it will help you learn (many of which are fishy by my standard above).

comment by tristanm · 2017-05-31T17:12:10.061Z · LW(p) · GW(p)

Which traditional institution is one that you can just join, and which is aimed at making all of its members become the best versions of themselves in all respects?

I think what you're referring to is something like the Holy Grail of institutions. So if someone claims that they've found the global optimum of institutions, the right reaction should be one of heavy skepticism. It's not wrong to seek the global optimum, but when someone proposes that it exists in some well-explored territory based on a somewhat simple model, the argument they should present for it would probably look something like 1) We overlooked some seemingly trivial, but serious details that would have fixed the major issues we had previously and/or 2) Iterating on this idea for a while will not result in diminishing gains for a considerable time.

What we have in society right now is a bunch of local optima for specific needs. I think we should be prepared for the scenario in which the global optimum looks weird, and is composed of a sort of hodgepodge of various fixes and hacks and specific set-ups to meet different requirements for different people. And I know this looks ugly, but that's typically what solutions that are the output of optimization processes look like. I consider a single hierarchical institution to be a simple model, and therefore consider it unlikely that such an ambitious goal will be reached using such a solution.

So, based on my above model of institutions, I place low probability on a solution that consists of a simple model that is already well-explored, or that comes without a considerable amount of details tacked-on that have been found through consistent iteration and optimization. Right now I think this experiment will have to be run with significant fail-safe mechanisms in place and outside observation, so that this process can actually take place.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2017-06-01T16:00:08.433Z · LW(p) · GW(p)

without a considerable amount of details tacked-on that have been found through consistent iteration and optimization.

Isn't starting from a simple model and then iterating and optimizing (i.e. exactly what Duncan is proposing) the only way to get to that point?

Replies from: tristanm
comment by tristanm · 2017-06-01T17:25:11.155Z · LW(p) · GW(p)

It's not obvious to me that Duncan is proposing that. See my comment here. To me, it seems more like iterating and optimizing towards the minimum would get you something far from both the extremes of the libertarian egalitarian model and the one-person-in-charge-of-everything model.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2017-06-01T18:23:03.479Z · LW(p) · GW(p)

I mentioned in another comment that Duncan's role seems to be "upholding policies that were agreed upon by group consensus and running them for a limited time"; this does seem like it's pretty distant from both rampant individualism and one-person-in-charge-of-everything to me.

I'm not sure of how to interpret your referenced comment; you seem to be talking about the "old model" being "cults", but I don't know what you mean by cults - I interpret a "cult" to be something like "a small group rallied around a charismatic leader with absolute authority", but I don't think that has been the predominant mode of social organization at any point in history?

Replies from: tristanm
comment by tristanm · 2017-06-01T19:24:27.732Z · LW(p) · GW(p)

I interpret "cult" as applicable to both small and large groups and not dependent on whether the leader has charisma or not (It could also refer to small tribes with chieftains, dictatorships, absolute monarchies, etc.). And I think in this regard it has been the predominant mode of social organization throughout history.

But after seeing Scott's "on fourth thought" I have become more convinced that Duncan is moving in the direction of placing limits on his power and making sure the appropriate safeguards are in place, which has updated me away from seeing the pendulum as swinging too far in the opposite direction. I think the question remains whether or not continued updates and iterations will involve further limitations on his authority.

comment by Lumifer · 2017-05-31T15:16:58.302Z · LW(p) · GW(p)

being part of a group where everyone is genuinely, entirely committed to the project and each other's welfare ("One for all, all for one"), where people are willing to try whatever weird things might work without needing to worry about what outsiders might think, and generally having a strong supportive social structure that offers you help when you're struggling, pushes you to become the best possible version of yourself when you might otherwise slack off, and provides frequent feedback on how you're doing regardless.

Sure. You are describing a church group, or maybe an entire sect/denomination (see e.g. pretty much all early Protestant movements).

Is it a good idea? As usual, it depends :-/ Sometimes it works out and sometimes it doesn't. Sometimes you spend a safe and content life doing good work, and sometimes you find yourself killing evil abominations like Catholics.

Besides, such groups evolve and usually not in a good direction. Becoming bureaucratic and ossified is relatively harmless, but being taken over by sociopaths (as per ribbonfarm) can be much worse.

comment by cousin_it · 2017-05-31T13:34:30.859Z · LW(p) · GW(p)

Ok. If you don't mind, I'll use you as an interpreter for Duncan, since he doesn't answer questions much. Can you explain why the idea of a group house with salutes, push-up punishments, restrictions on criticism etc. appeals to you? Is there any evidence that it would help learn skills more effectively, compared to taking a class? Why do you feel that the obvious dangers aren't dangers, apart from knowing Duncan personally (many real world tyrants were reportedly charming in person) and seeing the list of excuses that's identical to that of every other cult?

comment by cousin_it · 2017-05-31T10:58:08.789Z · LW(p) · GW(p)

I resisted playing the fallacy game with Duncan because he's clearly just parroting stuff, but I expected better from you. Okay, let's go. "You're being emotional" and "you're pattern matching" are examples of the bulverism fallacy. Your turn.

comment by cousin_it · 2017-05-31T10:37:53.518Z · LW(p) · GW(p)

OK. I'm even more surprised about you now but let's drop this.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-31T03:02:22.535Z · LW(p) · GW(p)

This person's post, while containing some overlap with the more true and useful criticism here, is not the sort of thing I expect people to cite on LW, and not, I think, a useful entry in the back-and-forth here.

On the other hand, the difference in our levels of endorsement of it explains a lot about why our interaction went south in a hurry.

Quoting Qiaochu:

I would like everyone posting criticism, especially heated criticism, to keep very firmly in mind that Duncan did not have to write this. Whatever your opinion of him, at least make sure you've factored in the evidence that he wrote this whole, weird thing, complete with references to Ender's Game, Fight Club, etc. instead of writing either 1) nothing or 2) something much more reassuring.

There are critics who think Duncan is incompetent and overconfident, and about this hypothesis I can say at least that it is consistent with Duncan having written this post. Then there are critics who think Duncan is, I dunno, evil or power-hungry or something, and I think those people are mostly failing to see what is in front of them.

Replies from: Alicorn
comment by Alicorn · 2017-05-31T04:44:16.610Z · LW(p) · GW(p)

I was tentatively willing to give you some benefit of the doubt even though I don't know you, but I'm really disappointed that you feel the need to score points against a rationalist-adjacent person posting to her Tumblr about how your post looks to her from her outside vantage point. I brought a similar-amount-of-adjacent friend to the seder and it freaked her out. Rationalist shit looks bizarre from a couple steps away. You do not have to slam my friend for not being impressed with you.

Replies from: drethelin, Duncan_Sabien
comment by drethelin · 2017-05-31T05:41:38.498Z · LW(p) · GW(p)

That's kind of unfair, considering the sheer amount of point-scoring going on in the original post.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-31T04:45:37.987Z · LW(p) · GW(p)

Fair point. I will edit the above to remove point-scoring criticism; if this person wanted to be exposed to it, they would've posted here directly. I'll ask you to leave your comment so it's clear what originally occurred.

That being said, they certainly have no qualms about tearing into me. Like, my response to them was not a response to "I am unimpressed" or "I have a negative reaction to this," and I think it's a little disingenuous or unfair of you to summarize their content thusly. It's ... an asymmetric expectation of charity? Holding a double standard? Or something like that. I'd hope you'd offer feedback to them similar to what you said to me here, to see how they respond.

Replies from: Alicorn
comment by Alicorn · 2017-05-31T05:32:43.539Z · LW(p) · GW(p)

I know her and she has earned some charity from me. You're a stranger soliciting a line of credit. Also, her task is "opine on Tumblr" and yours is "benevolent dictatorship". If you want me to convey to her that your feelings were hurt I could do that for you, I suppose.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-31T05:59:55.240Z · LW(p) · GW(p)

It's less that my feelings were hurt (they were, a little, but I've developed a pretty thick skin around "strangers are wrong about me"), and more that you're saying, to me, "hey, please don't be uncharitable or overly critical or focus on point-scoring," and I think the point-scoring exhibited in that post would cause me, in your shoes, to make a symmetric point to my friend. It's a consistency thing, of supporting the norms I want to see in all places, ignoring partisan or loyalty lines (being willing to call out my allies as much as I'm willing to call out a stranger or an enemy).

I guess if I were to ask you to convey a message, it would be "this person thinks you've jumped to unfounded conclusions, and wonders what odds you'd put on 'I might be wrong.'"

Replies from: Alicorn
comment by Alicorn · 2017-05-31T06:40:24.963Z · LW(p) · GW(p)

I don't really see the situations as symmetrical or calling for identical norms.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T20:18:57.591Z · LW(p) · GW(p)

Thanks. As Lumifer has pointed out, I have become more defensive in the past 36 hours, but I claim it's almost entirely limited to the two individuals who have shown themselves to be deontologically hostile and extremely overconfident in their models. There's obviously wiggle room in there to say "Eh, even given that, Duncan, I think you're overreacting," but if so, it's because I feel that after a hundred comments and a multi-thousand-word post (that I didn't have to make at all, in the first place) I deserve some credit, à la I've clearly demonstrated willingness to engage positively with criticism, update publicly, admit when I'm wrong, and so on and so forth (and therefore don't like comments that presuppose me not being all those things).

comment by handoflixue · 2017-05-30T19:24:18.808Z · LW(p) · GW(p)

I have absolutely no confidence that I'm correct in my assertions. In fact, I was rather expecting your response to address these things. Your original post read as a sketch, with a lot of details withheld to keep things brief.

The whole point of discussion is for us to identify weak points, and then for you to go into more detail to reassure us that they have been well addressed (and to open those solutions up to critique, where we might identify further weak points). If you can't provide more detail right now, you could say "that's in progress, but it's definitely something we will address in the Second Draft" and then actually do that.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T20:12:35.287Z · LW(p) · GW(p)

I've said "that's in progress, but it's definitely something we will address in the Second Draft" all over these comments. You jumped into the discussion two days in and just ... didn't bother to read? I feel defensive and upset over this, because a big part of doing this whole thing out in the public view was to build credibility as a good actor who listens and updates, and I feel like you just completely ignored all the evidence of that as you started to write your critique.

And in that critique, you used a bunch of phrases like "I don't think you have the slightest knowledge of Fault Tolerant Design" and "you haven't given nearly enough thought to keeping people safe" and "you yourself have absolutely nothing at stake" and "you seem really unaware of X" and "you're a uniquely bad candidate" and "the idea as conceived is actively harmful" and on and on and on. You cannot pretend that this language does not indicate strong confidence. Words have meaning.

And most of those things presuppose stuff about my internal state, or my experience, or actions I have or have not taken, and assert those things as fact or extremely likely probability, rather than putting in any kind of hedge or owning "I could be wrong about this" or whatever. You take all sorts of things that you cannot possibly know, and instead of asking about them, build up a structure in which they're taken as given and Everything Is Bad. You do say "it seems to me" a few times, so some credit is due there, but overall, your post was overwhelmingly assertive and aggressive and lecturing/condescending, in stark contrast to the vast majority of the critical feedback (and in stark resemblance to the few comments I've responded to with hostility).

You did not come across as trying to identify weak points and then find out what I thought about them; you came across as trying to tell me that I'm bad/dumb/evil.

For the record: all of your points are under consideration, many of them have been completed to satisfaction within the group, and those which remain are either a) highlighted elsewhere in the comments here by me saying "Yeah, that's a solid point, we should do something about that," or b) have, on reflection, been ranked as low-priority.

Replies from: handoflixue
comment by handoflixue · 2017-05-30T21:12:43.473Z · LW(p) · GW(p)

In the absence of a sound rebuttal to the concerns that I brought up, you're correct: I'm quite confident that you are acting in a way that is dangerous to the community.

I had, however, expected you to have the fortitude to actually respond to my criticisms.

In the absence of a rebuttal, I would hope you have the ability to update on this being more dangerous than you originally assumed.


Bluntly: After reading your responses, I don't think you have the emotional maturity necessary for this level of authority. You apparently can't handle a few paragraphs of criticism from an online stranger with no investment in the situation. Why should I possibly expect you to be more mature when dealing with an angry participant whose housing depends on your good will?


On the off chance that you're actually open to feedback, and not just grandstanding to look good...

1) I apologize if my tone was too harsh. You are attempting something very dangerous, on a path littered with skulls. I had expected you were prepared for criticism.

2) Commit to posting a second draft or addendum, which addresses the criticisms raised here.

3) Reply to my original post, point by point. Linking me to other places in the thread is fine.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T21:18:14.677Z · LW(p) · GW(p)

Screw you; it's not "on the off chance," it's been overwhelmingly demonstrated and backed up by multiple people in this thread. You're attempting to highlight "emotional maturity" in a way that means "I want you to let me be socially dominant over you, despite the fact that I'm violating norms of good faith and discourse."

Tolerance is not a moral absolute; it is a peace treaty. Tolerance is a social norm because it allows different people to live side-by-side without being at each other’s throats. It means that we accept that people may be different from us, in their customs, in their behavior, in their dress, in their sex lives, and that if this doesn’t directly affect our lives, it is none of our business. But the model of a peace treaty differs from the model of a moral precept in one simple way: the protection of a peace treaty only extends to those willing to abide by its terms. It is an agreement to live in peace, not an agreement to be peaceful no matter the conduct of others. A peace treaty is not a suicide pact.

In fact, what I have is sufficient emotional maturity to notice when I'm being bullied, and not roll over, even if it's somewhat socially frowned upon for the bullied to fight back openly. i.e. I reflectively endorse both the calmness and openness with which I've reacted to the majority of commenters, and the degree to which I have risen to and matched your hostility rather than just letting you punch unfairly.

I'll do 3) if and only if you rewrite your original point to include a generally normal amount of epistemic uncertainty/humility for claims made on LessWrong about a person you don't know well, after that person's demonstrated willingness to be transparent and to update.

Replies from: handoflixue, handoflixue
comment by handoflixue · 2017-05-30T21:48:32.781Z · LW(p) · GW(p)

And just to be clear: I don't give a shit about social dominance. I'm not trying to bully you. I'm just blunt and skeptical. I wouldn't be offended in the least if you mirrored my tone. What does offend me is the fact that you've spent all this time blustering about my tone, instead of addressing the actual content.

(I emphasize "me" because I do acknowledge that you have offered a substantial reply to other posters)

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T22:10:56.387Z · LW(p) · GW(p)

I don't want to mirror your tone because I think your tone is both socially corrosive and epistemically unsound. I've at least in part been fighting you so hard because I want to publicly defend a stance that the way you've acted in this thread is unacceptable. Saying "I'm just blunt and skeptical" is not a complete description of the posts you've made; others in this thread have been blunt and skeptical without jumping to conclusions, lecturing, and being wildly overconfident that their map is accurate enough to justify throwing excrement around.

I think you've fallen far short of the standard of a place like LW in this thread, and I want that opinion known to anyone trying to model me.

Replies from: handoflixue
comment by handoflixue · 2017-05-30T22:21:03.210Z · LW(p) · GW(p)

You seem to feel that publicly shaming me is important. Should participants in your group also expect to be publicly shamed if they fall short of your standards / upset you?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T23:28:56.805Z · LW(p) · GW(p)

With the caveat that I'm attempting to shame the way you're going about engaging in discourse much more than I'm shaming the core of you as a person (really, you're the one operating on the level of the fundamental attribution error within this particular thread; look in a mirror)—yes, absolutely. Part of having standards is making it socially unacceptable to fall grossly short of them.

That's modified by things like the "saving face" section above, and by the clear intention for all of us to grow and improve, me included—none of us are getting it right on the first try, and you have to scaffold growth, and reward with gentle affirmation the people who are willing to try to change for the better.

It's further modified by the fact that people who don't like these standards can simply not join, and I've spent now well in excess of 100 hours making my models crystal clear to those who are considering opting in (so that their decision can be fully informed).

But yeah—anybody who's falling as far short as you absolutely deserves to be called out for it, and given a choice between "do these concrete things differently" or "lose social points." Since you've repeatedly refused to stop jumping to conclusions and ignoring evidence that I'm acting in good faith and am not an idiot—since you've refused to do concrete things differently—yeah, I wholeheartedly endorse you losing social points, and people updating the way they assume interactions with you will go as a result.

Replies from: handoflixue
comment by handoflixue · 2017-05-31T00:37:16.157Z · LW(p) · GW(p)

I've changed my tone and apologized.

You've continued to dismiss and ridicule me.

You've even conceded to others that I'm a cut above the "other trolls" here, and have input from others that I'm trying to raise concerns in good faith.

What more do you want?

comment by handoflixue · 2017-05-30T21:40:57.094Z · LW(p) · GW(p)

Alright. As a test of epistemic uncertainty:

I notice that you didn't mention a way for participants to end the experiment, if it turns out abusive / cult-like. How do you plan to address that?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T21:59:14.258Z · LW(p) · GW(p)

I think the problem here is the same as the problem of enforcing repayment of loans. If someone borrows a bunch of money, and then later has no money to repay, how should society respond?

Obviously, the thing is not simply "demand money." Similarly, though, there can't be no standard of requiring recompense, because that sets up a really bad incentive.

So my current plan is (in addition to really heavily highlighting that people need to think this through/talk with their advisors/visualize failure/ensure they have a buffer sufficient for likely amounts of damage) to set up something like the following norms:

  • If you conclusively determine that you need to drop from the experiment, no one is allowed to argue or convince; this is referred to as "rule-one-ing out," and is a thing that we will explicitly practice in small doses in the hope that this will transfer over to larger spaces.
  • If you drop, you retain full access to the kitchen, bathrooms, lawn, living room, etc., but agree to physically avoid house activities (and those house activities will, e.g., relocate so as not to use shared rooms that you live in). You're also welcome to leave, but you maintain the same sort of "normal" financial obligation that people have when they suddenly vanish, i.e. you're still paying for your slot for a little while.
  • "A little while" means that you agree to put forth good-faith effort to find a viable replacement. I said "three potential replacements" as an initial guess to point toward "it's harder to replace yourself here than in normal houses; there should be some limit to your obligation if we say 'no' to your first two choices; you're definitely not on the hook forever." It's possible that the answer should be "two" or something else.
  • In the event that this fails, something like "you're on the hook, financially, for rent payments in the 2-6 week window from the time you drop," which seems like a non-Draconian and fairly boilerplate norm ("this month, and next month too if 'this month' ends really soon").
  • In the event that this fails, I was planning to just ... secretly and quietly absorb the blow? This is made worse by your demand that it be explicit (some things are better as back pocket options), but whatever—few people will see this part. The idea is that OBVIOUSLY (unless you're starting from the presumption that Duncan is evil) you have to make accommodations for a person who is (by the time they reach this step) both emotionally and financially exhausted/compromised, and part of the whole point of having a large community is that it creates flexibility to absorb blows like that (the damage is spread out enough that it becomes manageable on an individual level).

So at that point, yeah—people could just straight-up defect on the house, and the idea was NOT to blare that from the rooftops, because now there's a clear incentive for defectors to just defect and impose costs on everyone else. That would've been better left as an obvious implicit norm that's universal among decent people.

On a broader, whole-house level, we're having open retrospectives every week, with opportunities for both nonymous and anonymous feedback and discussion. I put the odds of this going that far south in under six months at far less than 1%, but in the event that a majority of people decide the thing is bad, it'll be at most six days before they have a chance to all say so, at the obvious Schelling point for coordination, at which point there'll be a clearly decisive mass of consensus and I'll just—be overruled. This is further made more likely to happen if-it-needs-to-happen by the fact that elsewhere in the thread I've committed to instituting a requirement that people check in weekly with outside advisors, and by the fact that there are multiple strong/iconoclastic/independent/healthily self-protective personalities in the mix who would have little to no fear in openly opposing me if they needed to, and by the fact that there's a known second-in-command who's a good coordinator in the event that things need to happen without me being looped in (noble coup).

In short, the obvious stuff.

Replies from: handoflixue, handoflixue, Duncan_Sabien
comment by handoflixue · 2017-05-30T22:33:56.862Z · LW(p) · GW(p)

I notice I am very confused as to why you keep reiterating actual talking points from actual known-dangerous cults in service of "providing evidence that you're not a cult."

For instance, most cults have a charismatic ("well known") second-in-command who could take over should there be some scandal involving the initial leader. Most cults have written thousands of words about how they're different from other cults. Most cults get very indignant when you accuse them of being cults.

On the object level: Why do you think people will be reassured by these statements, when they fail to differentiate you from existing cults?

Stepping up a level: how much have you read about cults and abusive group dynamics?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T23:24:54.969Z · LW(p) · GW(p)

On the object level: because a plurality if not a majority of actual, real humans have indeed been reassured by them, including some who were open critics and said things like "I traveled 50% of the distance toward 'this is a good idea' [just from this post]." It's worth noting that I'm not going to refrain from saying true things that cults have also said; reversed stupidity is not intelligence and the thrust of this post was never "differentiate myself from cults," it was "here's a thing I want to try."

On the discourse level: still jumping to conclusions left and right. "When Duncan said well known, he must have meant charismatic, obviously." False—Eli Tyre is many, many good things, but "charismatic" is not usually a compliment given to him. Furthermore, I note that you decided to ignore all of the other object-level content in favor of picking one nit (based on false assumptions), so I'm taking that as "you had nothing good to criticize in that other stuff, and so you decided not to say anything at all," i.e. you're unable to say "good point" and update incrementally.

Stepping up a level: since you're inclined to view everything I say in the worst possible light and to leap uncharitably to conclusions, I claim that I'm justified in theorizing that literally no answer would've satisfied you (had I said 10 hours, you'd have been smugly dismissive of my lack of research; had I said 1000 you'd have said 'well, you obviously weren't paying attention'), and that it was a bullshit question to begin with.

We're done; I anticipate that other skeptics in this thread (like decius and lumifer and deluks and taygetea, for example) will provide me with the overwhelming majority of the value you might offer, and at a fraction of the cost in you're-doing-a-bunch-of-the-things-the-sequences-exist-to-warn-against.

Replies from: handoflixue, handoflixue
comment by handoflixue · 2017-05-31T00:38:22.531Z · LW(p) · GW(p)

Also, as far as "we're done" goes: I agreed to rewrite my original post - not exactly a small time commitment, still working on it in fact. Are you seriously reneging on your original agreement to address it?

comment by handoflixue · 2017-05-31T00:07:28.229Z · LW(p) · GW(p)

See, now you're the one leaping to conclusions. I didn't say that all of your talking points are actual talking points from actual cults. I am confused why even some of them are.

If you can point me to someone who felt "I wrote thousands of words" is, in and of itself, a solid argument for you being trustworthy, please link me to it. I need to do them an epistemic favor.

I was using "charismatic" in the sense of having enough of it to hold the group together. If he doesn't have enough charisma to do that, then he's kinda worthless as a commanding officer, neh?

Your claim is false. I wanted to know at what level to hold this conversation. I legitimately can't tell if you're waving a bunch of "this is a cult" red flags because you're trying to be honest about the risks here, because you don't realize they're red flags, or because you're playing N-Dimensional chess and these red flags are somehow all part of your plan.

comment by handoflixue · 2017-05-30T22:28:10.028Z · LW(p) · GW(p)

Can you elaborate on the notion that you can be overruled? Your original post largely described a top-down Authoritarian model, with you being Supreme Ruler.

How would you handle it if someone identifies the environment as abusive, and therefore refuses to suggest anyone else join such an environment?

You discuss taking a financial hit, but I've previously objected that you have no visible stake in this. Do you have a dedicated savings account that can reasonably cover that hit? What if the environment is found abusive, and multiple people leave?

Anyone entering your group is signing a legal contract binding them to pay rent for six months. What legal commitments are you willing to make regarding exit protocols?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T23:15:30.231Z · LW(p) · GW(p)

I notice that you are unusually unable to notice yourself jumping to conclusions. As a challenge, can you find the conclusions you're still jumping to, above, without curiosity or caveat? Note the plural on "conclusions."

  1. An excellent question whose answer I'm interested in exposing to literally anyone other than you, the troll, and cousin_it. Also, a question that has been openly and actively discussed and is not yet fully finalized, but boils down to "pretty close to the obvious stuff about voting majorities."

  2. I am not requiring, and have not at any point required, that "people should proselytize this, and encourage others to join." So, I wouldn't object or find it unreasonable if someone didn't encourage others to join.

  3. You've previously talked out of your butt without ever expressing curiosity as to my visible stake in this. So, repeat my answer to 1: a fine question, which everyone is encouraged to feel curiosity about, and which I'd be motivated and eager to discuss with the potential participants and everyone except you, the troll, and cousin_it.

  4. Similarly, an excellent question that I don't think is any of your business, though I continue to endorse the fact that I've voluntarily made it the good 97% of LessWrong's business. And I know this is giving away part of the answer, but you just assumed that people would be signing lease agreements with me rather than with the owner of whatever house we rent (and therefore that I would have some fully controlling role in determining exit protocols, rather than simply being a coordinator and a negotiator).

Replies from: handoflixue
comment by handoflixue · 2017-05-30T23:57:46.187Z · LW(p) · GW(p)

I used the word visible to make it clear that there might be some stake which is not visible to me. If you have made your stakes visible in this thread, I'll admit I missed it - can you please provide a link?

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T22:02:03.208Z · LW(p) · GW(p)

Furthermore, locking all of this into place in formal language was not a thing I was going to do by myself, but rather was going to be a collaborative, consensus-based process engaged in by the group as a whole, which is obvious if you look at all the other places in this thread and in the original post where I say that we're going to discuss and iterate and figure things out together.

Or, for example, by the fact that I chose Dragon Army as the model, and not (as has come up elsewhere) Salamander Army.

comment by cousin_it · 2017-05-31T09:41:53.900Z · LW(p) · GW(p)

You shouldn't quote Scott for support, because he just wrote this:

On third thought, everyone else is right and I am wrong. The Dragon Army group house is a very bad idea, enough so that it’s okay to be forceful in encouraging Duncan to modify it or other people not to join it. This is true even if the required modifications are so hard that they end up torpedoing the project.

Link

comment by MaryCh · 2017-06-02T15:58:52.487Z · LW(p) · GW(p)

First, thank you for writing the post so fully and readably - it is really impressive! And I hope you do go ahead and do this, in whatever way you decide upon. But even if I were fully convinced that the setup was safe (which I am) and that the results would be exactly as intended, in the most useful and generally good way, I wouldn't join.

Because I think that when people become parents, they suddenly find themselves in a world that is much more uncertain. You can't reliably say that you will sleep through the night, for example, even when the kid mostly does. And this is already hard enough to get used to - I know from experience - and it is also hard to begin anew (though this might be less so for men). Imagine having actually trained yourself to be 100% in control of what you do, or even having let other people know that you are that kind of person. It's just not robust.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-02T17:37:06.572Z · LW(p) · GW(p)

Thanks for the comment. This is unique among perspectives given so far, and I liked seeing it a lot.

comment by Viliam · 2017-06-01T18:02:30.860Z · LW(p) · GW(p)

Reading the comments... well, this escalated quickly.

I can imagine this going either horribly right or horribly wrong. So I'd appreciate it if a group of volunteers actually did the experiment, instead of everyone just offering their preferred analogy for what should happen. Preferably with good safety mechanisms, of which I can imagine two, both already mentioned in this debate:

(1) Give members a mandatory time off, once in a while, to spend with their friends outside the "Army". Not just a weekend, but a full week, once in a while.

(2) If possible, it would be good to reduce the financial impact of leaving the group as much as possible. In a perfect world, there would be none. But of course, if you want to live in the same house, that costs money. It would be nice if the group could somehow collect extra money, as insurance, to allow people to leave without financial consequences. Perhaps make everyone pay 10% or 20% extra for the house?

There is always a tension between freedom and commitment, and between individual freedom and group cooperation. It seems generally good to err on the side of freedom, because people in positions of power often have a bias in favor of less freedom (for others, of course), so this is how we balance it. On the other hand, akrasia -- almost a proverbial trait of wannabe rationalists -- is often an inability to follow one's own commitments. That is already damaging for individuals, and it makes group activity almost impossible. It would be nice to be able to overcome this, and to enter high-commitment situations (with limited scope, for limited time). Otherwise, we lose a lot of potential.

I can imagine myself benefitting from some kind of commitment enforcement, and rational life coaching in general. Of course, the devil is in the details. That's where things can go wrong easily. But if we can create enough safeguards, I support trying this, because there is so much to win.

A possible approach could be to select in advance two or three people trusted by the rationalist community as supervisors of the project. The supervisors would not participate in the project directly, but would have regularly scheduled meetings with members, individually, outside of the project, where the members could provide their opinions, and after hearing all of them, the supervisors would post an anonymized summary report on LW.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T18:29:47.718Z · LW(p) · GW(p)

This is all generally sensible. +1

EDIT: Except for the part about posting an anonymized summary report on LW. It's entirely reasonable to have outside advisors and supervisors (in the sense of "well, if the thing's as good as I say it'll be, then I have no reason to want to hide"). However, it's silly to pretend that the house grants LW any kind of oversight, or specifically seeks LW's approval—I posted here because I thought LW would be a) mildly interested and b) would, in exchange for the mild interestingness be willing to provide some solid, concrete criticism, but that's pretty much as far as it goes.

comment by ChristianKl · 2017-05-26T10:33:57.971Z · LW(p) · GW(p)

A "culture of abundance" in which food and leftovers within the house are default available to all, with exceptions deliberately kept as rare as possible

That reminds me of an event during a retreat where a cake couldn't get baked, because the chocolate that had been brought for baking the cake was consumed beforehand. It was even baking chocolate.

It seems like good cooking or baking leads to people buying specific ingredients and it's bad if they can't count on those ingredients not being consumed before the planned meal.

Replies from: Duncan_Sabien, Sinal
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T10:46:49.980Z · LW(p) · GW(p)

Yeah, I think notes saying "do not eat" will suffice; the key is just to get people to use that coin only when it's for a specific plan.

Replies from: evand
comment by evand · 2017-05-26T13:11:22.155Z · LW(p) · GW(p)

You might also want a mechanism to handle "staples" that individuals want. I have a few foods / ingredients I like to keep on hand at all times, and be able to rely on having. I'd have no objections to other people eating them, but if they did I'd want them to take responsibility for never leaving the house in a state of "no X on hand".

comment by Sinal · 2017-05-27T05:06:58.183Z · LW(p) · GW(p)

The food policy strikes me as one of the more trivial and unimportant parts of the proposal. I'm not saying you're taking it too seriously -- I think that shared living spaces should have clear rules about who gets to eat what. It's just that this particular food policy seems easy to change without changing the core "authoritarian" structure of the Dragon Barracks.

Funny story by the way, I really like it.

Replies from: ChristianKl
comment by ChristianKl · 2017-05-27T11:32:59.360Z · LW(p) · GW(p)

Funny story by the way, I really like it.

To add to the story, the person who wanted to bake the cake had built the oven for baking it beforehand, out of parts such as an old washing machine.

comment by handoflixue · 2017-05-31T19:56:59.069Z · LW(p) · GW(p)

Concerns about your philosophy

1) You focus heavily on 99.99% reliability. That's 1-in-10,000. If we only count weekdays, that's 1 absence every ~40 years, or about one per working lifetime. If we count weekends too, that's 1 absence every ~27 years, or 3 per lifetime (see the sketch after this list). Do you really feel like this is a reasonable standard, or are you being hyperbolic and over-correcting? If the latter, what would you consider an actual reasonable number?

2) Why does one person being 95% reliable cause CFAR workshops to fail catastrophically? Don't you have backups / contingencies? I'm not trying to be rude, I'm just used to working with vastly less fragile, more fault-tolerant systems, and I'm noticing I am very confused when you discuss workshops failing catastrophically.

the problem is that any rate of tolerance of real defection (i.e. unmitigated by the social loop-closing norms above) ultimately results in the destruction of the system.

3) Numerous open source programs have been written via a web of one-shot and low-reliability contributors. In general, there's plenty of examples of successful systems that tolerate significantly more than 0.01% defection. Could you elaborate on why you think these systems "close the loop", or aren't destroyed? Could you elaborate on why you think your own endeavors can't work within those frameworks? The framing seems solidly a general purpose statement, not just a statement on your own personal preferences, but I acknowledge I could be misreading this.

4) You make a number of references to the military, and a general philosophy of "Obedience to Authority". Given the high rate of sexual assault and pointless bureaucracy in the actual military, that seems like a really bad choice of role model for this experiment. How do you plan to avoid the well known failure states of such a model?

5) You raise a lot of interesting points about Restitution, but never actually go into details. Is that coming in a future update?

every attempt by an individual to gather power about themselves is at least suspect, given regular ol' incentive structures and regular ol' fallible humans

6) You seem to acknowledge that you're making an extraordinary claim here when you say "I've noticed the skulls". Do you think your original post constitutes extraordinary proof? If not, why are you so upset that some people consider you suspect, and are, as you invited them to do, grilling you and trying to protect the community from someone who might be hoodwinking members?

7) Do you feel comfortable with the precedent of allowing this sort of recruiting post from other people (i.e. me)? I realize I'm making a bit of an ask here, but if I, handoflixue, had written basically this post and was insisting you should trust me that I'm totally not running a cult... would you actually trust me? Would you be okay with the community endorsing me? I am using myself specifically as an example here, because I think you really do not trust me - but I also have the karma / seniority to claim the right to post such a thing if you can :)
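For what it's worth, here is a minimal sketch of the arithmetic behind point 1 (my own illustration, not anything from the original post; the per-day framing and the 260/365 days-per-year counts are assumptions):

```python
# Convert a per-day reliability figure into the average interval between absences.

def years_between_absences(reliability: float, days_per_year: float) -> float:
    """Expected years between failures, given a per-day success probability."""
    failure_rate = 1.0 - reliability             # e.g. 0.0001 for 99.99% reliability
    days_between_failures = 1.0 / failure_rate   # one failure per this many days
    return days_between_failures / days_per_year

print(years_between_absences(0.9999, 260))  # weekdays only: ~38.5 years per absence
print(years_between_absences(0.9999, 365))  # every day: ~27.4 years per absence
```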

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T02:47:58.148Z · LW(p) · GW(p)

I note for others reading this comment and wondering why it hasn't been addressed that I've ceased replying to handoflixue and a couple of other posters on a policy level, for reasons surrounding norms of discourse, strawmanning, epistemic humility, presence or absence of good faith, etc. It's possible that the above contains good questions or insights; if someone else chooses to repost/re-ask/rephrase sections of this, I'll likely respond to them.

http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/

comment by MalcolmOcean (malcolmocean) · 2017-05-28T20:30:52.305Z · LW(p) · GW(p)

I want to publicly express my strong support for this experiment/meta-experiment.

I think that my support is particularly noteworthy as I'm presently a core member of a different taking-each-other-seriously co-living experiment that is profoundly different in its philosophy. (Mine is not in Berkeley, nor rationalist.) Therefore some people might assume that I would be opposed to Dragon Army Barracks.

Things in common between the experiment I'm part of and Dragon Army Barracks:

  • is "high-commitment, high-standards, high-investment"
  • is trying to actually make & achieve something together
  • is addressing the unanchored abandoned loneliness thing
  • has consciously explicated commitments and assumptions
  • is intended to produce a high-level of consistent excellence and ability to effectively collaborate

Things that are different:

  • We're very far from authoritarian or hierarchical. We're also not egalitarian, consensus-based, or even democratic per se... but we have essentially zero telling-other-people-what-to-do
  • Our basic collective navigating framework is [Kegan-5 / fluid mode / post-rational], rather than [Kegan-4 / systematic mode / rational] (good summary of this distinction)
  • Our focus is almost entirely on the meta-level of building the new cultural platform we're building. We don't have any expectations of each other on the levels of specific object-level projects or explicit behavioral norms (aside from ones necessary for the house's function)

I think that these differences are core to why I am part of this project that I'm part of, and why I consider it to be the most valuable investment I could be making with my time and energy. I am, therefore, non-Berkeley-residence aside, not going to be applying to DA. As I said above though, I strongly support Dragon Army Barracks as an experiment and potentially as an ongoing resource to individual and collective growth.

Reasons why I think that DA is a good idea:

  • Expected value of high amounts of worthwhile object-level output. As Sebastian Marshall says, "the gains made from living more purposefully are forever - the time you've spent well remains well-spent even if you fall off for a while sometimes. Most people don't even try, which is why most people don't succeed."
  • I expect it will also produce a lot of developmental progress for people involved; that if you were to be able to sort rationalists by amount of growth in a year, the Dragons would all be in the top quartile, and would occupy many of the top 10 slots. This, even if the experiment were to end after 6 months.
  • The DA Barracks is an intervention that is attempting to produce change on a very fundamental level of the system that is a group house. This is a powerful leverage point (see Donella Meadows' article... I would say this is around a 2 or 3, and most group houses have only done mild experiments at the 4-6 level.)
  • I agree with and/or resonate with the six points that Duncan makes in Section 2 of this document.
  • The project-level value of learning here is also very high: this will greatly inform future experiments, whatever their leadership basis.
  • If I had kids, I would absolutely sign them up for any summer camps or classes Duncan was running. I think the amount of power he would have in relation to them would be similar to the amount of power he'll have in this situation.

A final reason is this: I think that we as humanity need to rapidly make progress on being able to effectively coordinate in non-hierarchical ways, which is what the project I'm part of is about. Corollarily, humanity is kind of mediocre at doing this in many contexts. Therefore if non-hierarchical projects aren't emphatically directed towards solving that challenge itself, I expect them to be outperformed by projects that are leveraging existing understanding about how to coordinate effectively in hierarchical ways. i.e. in this case, Dragon Army Barracks.

Replies from: Qiaochu_Yuan, ChristianKl
comment by Qiaochu_Yuan · 2017-05-29T00:06:18.782Z · LW(p) · GW(p)

I really, really wish Kegan levels didn't come in an order, so a claim to be at a higher Kegan level than someone else didn't look so starkly like a claim to superiority. It's turning me off even trying to take them seriously, because everyone who uses them looks like they're just self-aggrandizing to me.

Replies from: malcolmocean
comment by MalcolmOcean (malcolmocean) · 2017-05-29T21:27:28.994Z · LW(p) · GW(p)

I'm totally with you in wishing that Kegan levels weren't getting socially entangled with claims to superiority!

...but that can't be achieved in the way you describe: they would be a fundamentally different thing if they didn't come in the order they do. It's not a personality typing system; it's a model of human development over time. Probably some people who are talking about them are self-aggrandizing; people are known to do that with just about everything they can get their hands on.

I suspect your heuristic of not trusting people who brag about their Kegan levels is a decently good one, since that kind of bragging could reasonably be expected to be self-aggrandizing in just the way you're describing here.

I first learned about the CDT (constructive-developmental theory) model from a conversation I had with someone who used to work with Kegan, and who readily noted that he was not himself consistently operating out of stage 5. Robert Kegan has said that about himself too, which I found surprising and originally interpreted as being a failure mode in the opposite direction—false humility or something. But now it strikes me as not that unlikely. There's a big difference between being able to recognize abstractly (or in others) what it means to be subject to one's own interpretations & ideologies, and being able to actually not do it.

There's an unfortunate phenomenon here, where the value of the concept gets diluted because the people who are finding the Kegan models helpful but aren't claiming to be at higher Kegan levels than others... are harder to notice.

Anyway, I realize that I may sound like I'm making a superiority claim here myself. I will address that directly, kind of like Duncan is doing re: skulls above.

My understanding—based more on reading things like this than Kegan's own work—is that the "fluid mode" (~=K-5) does have capabilities that the "systematic mode" (~=K-4) does not; much like multivariate calculus can be used to re-derive the equation for the volume of a sphere, but not the reverse. Is multivariate calculus superior to sphere equations? In functional senses yes, but not in a social status way. And also not in all domains! It's certainly slower if you just need to calculate the volumes of a bunch of spheres.
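
(To make the analogy concrete: this is just the textbook one-way derivation, not anything from Kegan's work. Integrating in spherical coordinates,

$$V = \int_0^{2\pi}\int_0^{\pi}\int_0^{R} \rho^2 \sin\varphi \, d\rho \, d\varphi \, d\theta = 2\pi \cdot 2 \cdot \frac{R^3}{3} = \frac{4}{3}\pi R^3,$$

whereas knowing the formula $\frac{4}{3}\pi R^3$ on its own gives you no way to reconstruct the machinery of integration.)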

I've spent a considerable amount of time over the past year working to develop the ability to operate in the fluid mode, and I think that that makes a lot of sense for me and many other people, but I don't think that that's highest priority for everyone right now. Hence my strong support for Dragon Army.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-29T21:32:42.039Z · LW(p) · GW(p)

I like the "my understanding" paragraph a lot. In particular, while I think I have some limited, flickering access to K5, I notice that operations which come out of being solidly K4 often cause me to outstrip/outperform people who are entirely in K5, which seems to me to be something analogous to "I'm successfully calculating the volumes of a bunch of spheres and you're just stuck there mired in re-derivation."

i.e. relative strengths in different domains.

Replies from: ChristianKl
comment by ChristianKl · 2017-05-29T22:16:50.621Z · LW(p) · GW(p)

I'm not sure what it means to be entirely K5. To me the phrase sounds like Chapman's description of the postmodernists who are at K3, tried to skip K4 entirely, and are left without any real access to the ability to use a system.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-29T22:21:51.662Z · LW(p) · GW(p)

Fair. "People who overwhelmingly operate from a thing where I'm comfortable applying the label K5," where overwhelmingly means 90+% and comfortable means 90+%.

comment by ChristianKl · 2017-05-28T20:36:13.481Z · LW(p) · GW(p)

Our basic collective navigating framework is Kegan-5 / fluid mode / post-rational, rather than Kegan-4 / systematic mode / rational (good summary of this distinction)

How do you filter for people who are Kegan-5 when you are seeking to accept members?

Replies from: malcolmocean
comment by MalcolmOcean (malcolmocean) · 2017-05-29T21:27:31.015Z · LW(p) · GW(p)

We don't! The individual members aren't necessarily each Kegan-5 themselves, but the person spearheading the project (who is in her 70s) certainly is. And so, therefore, are our models, our equivalent to a "charter", etc.

It's also the case that the mode of interaction that we're training here is fluid as opposed to systematic, which shows up in the ways that we make agreements, commitments, and the general way-we-do-things-here. I was very much operating in (and committed to!) systematic mode when I first joined several years ago, and I'm still getting comfortable with this. It's challenging but worth it, and we're working to build a bridge to meta-rationality to make that learning process easier.

I think that Duncan's intended context will potentially be (a) an awesome place to go from Kegan-3 to Kegan-4, and (b) an awesome place to operate in an exceedingly high-functioning Kegan-4 way. It asks that of its members. I don't expect it to create a demand for most Dragons to operate in a Kegan-5 way, which is the core difference between it and the project I'm a part of.

Replies from: ChristianKl
comment by ChristianKl · 2017-05-29T21:49:49.404Z · LW(p) · GW(p)

Is there more information available about your project publicly? Or some information I can get non-publicly?

Replies from: malcolmocean
comment by MalcolmOcean (malcolmocean) · 2017-05-30T11:55:45.426Z · LW(p) · GW(p)

Not officially at this stage; we're in a process of overhauling a lot of things, including answers to questions like "who are we?" and "what are we calling ourselves?"

That said, this category of posts on my blog has a lot of content about our philosophy, models, culture, etc.

comment by PeterBorah · 2017-05-25T21:50:04.269Z · LW(p) · GW(p)

Somewhat scattered reactions:

  • I am really interested to see the result of this experiment.

  • I think the underlying models are extremely plausible, with the next bullet point as a possible exception.

  • I am aesthetically very skeptical of phrases like "absolutely reliable" (in Problem 4). I don't think it's possible for something to be absolutely reliable, and it seems dangerous/brittle to commit to achieving something unachievable. However, this may be primarily an aesthetic issue, since I think the solution presented in Problem 3 is very sensible.

  • I don't buy claim 4, "It does actually require a tyrant". I agree that it isn't always possible to achieve consensus. I don't think that hierarchical authority is the only way to solve that problem. Democratic Centralism is a well-tested alternative, for instance.

  • I find the code of conduct worrisome, at least as presented. The rules seem likely to encourage hypocrisy and dishonesty, since they make psychologically implausible demands which in many cases are undetectable at time of infraction. This could potentially be mitigated by norms encouraging confession/absolution for sins, but otherwise I expect this to have corrosive effects.

  • I am totally uninterested in joining the experiment, despite my interest in its outcome. I would likely be interested in substantially more time-boxed activities with similar expectations.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-25T21:57:50.862Z · LW(p) · GW(p)

"norms encouraging confession/absolution for sins" is a somewhat ... connotation-laden ... phrase, but that's a big part of it. For instance, one of the norms I want to build is something surrounding rewarding the admission of a mistake (the cliff there is people starting to get off on making mistakes to get rewarded, but I think we can dodge it), and a MAJOR part of the regular check-ins and circles and pair debugs will be a focus on minimizing the pain and guilt of having slipped up, plus high-status people leading the way by making visible their own flaws and failings.

+1 for noticing and concern. Do you have any concrete tweaks or other suggestions that you think might mitigate?

Also: "absolute" is probably the wrong word, yeah. What I'm gesturing toward is the qualitative difference between 99% and 99.99%.

Replies from: Valentine, PeterBorah
comment by Valentine · 2017-05-25T23:16:16.510Z · LW(p) · GW(p)

I am aesthetically very skeptical of phrases like "absolutely reliable" (in Problem 4). I don't think it's possible for something to be absolutely reliable, and it seems dangerous/brittle to commit to achieving something unachievable. However, this may be primarily an aesthetic issue, since I think the solution presented in Problem 3 is very sensible.

[…]

Also: "absolute" is probably the wrong word, yeah. What I'm gesturing toward is the qualitative difference between 99% and 99.99%.

There's definitely a qualitative shift for me when something moves from "This is very likely to happen" to "This is a fact in the future and I'll stop wondering whether it'll happen."

While I think it's good to remember that 0 and 1 are not probabilities, I also think it's worthwhile to remember that in a human being they can be implemented as something kind of like probabilities. (Otherwise Eliezer's post wouldn't have been needed!) Even if in a Bayesian framework we're just moving the probability beyond some threshold (like Duncan's 99.99%), it feels to me like a discrete shift to dropping the question about whether it'll happen.

I think that's a fine time to use a word like "absolute", even if only aesthetically.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-25T23:22:58.295Z · LW(p) · GW(p)

Yeah, there's some switch from "am maintaining uncertainty" to "am willing to be certain and absorb the cost of an unpleasant surprise." Or from "would not be surprised by failure" to "have decided to be surprised by failure."

comment by PeterBorah · 2017-05-25T22:11:36.004Z · LW(p) · GW(p)

Those sound like good ideas for mitigating the corrosive effects I'm worried about.

My personal aesthetic vastly prefers opportunity framings over obligation framings, so my hypothetical version of the dragon army would present things as ideals to aspire to, rather than a code that must not be violated. (Eliezer's Twelve Virtues of Rationality might be a reasonable model.) I think this would have less chance of being corrosive in the way I'm concerned about. However, for the same reason, it would likely have less force.

Re: absolute. I agree that there can be a qualitative difference between 99% and 99.99%. However, I'm skeptical of systems that require 99.99% reliability to work. Heuristically, I expect complex systems to be stable only if they are highly fault-tolerant and degrade gracefully. (Again, this may still be just an aesthetic difference, since your proposed system does seem to have fault-tolerance and graceful degradation built in.)

Replies from: evand, Duncan_Sabien
comment by evand · 2017-05-25T23:54:06.058Z · LW(p) · GW(p)

However, I'm skeptical of systems that require 99.99% reliability to work. Heuristically, I expect complex systems to be stable only if they are highly fault-tolerant and degrade gracefully.

On the other hand... look at what happens when you simply demand that level of reliability, put in the effort, and get it. From my engineering perspective, that difference looks huge. And it doesn't stop at 99.99%; the next couple nines are useful too! The level of complexity and usefulness you can build from those components is breathtaking. It's what makes the 21st century work.

I'd be really curious to see what happens when that same level of uncompromising reliability is demanded of social systems. Maybe it doesn't work, maybe the analogy fails. But I want to see the answer!

Replies from: Lumifer, JacekLach
comment by Lumifer · 2017-05-30T17:45:35.464Z · LW(p) · GW(p)

to see what happens when that same level of uncompromising reliability is demanded of social systems

Who exactly will be doing the demanding, and what would be the price for not delivering?

Authoritarian systems are often capable of delivering short-term reliability by demanding the head of everyone who fails ("making the trains run on time"). Of course pretty soon they are left without any competent professionals.

comment by JacekLach · 2017-05-30T17:35:23.798Z · LW(p) · GW(p)

Do you have examples of systems that reach this kind of reliability internally?

Most high-9 systems work by taking lots of low-9 components and relying on them not all failing at the same time. I.e., if you have 10 95% systems that fail completely independently and you only need one of them to work, that gets you about thirteen nines (a failure probability of 0.05^10, roughly 10^-13).

Expecting a person to be 99% reliable is ridiculous. That's like two sick days per year, ignoring all other possible causes of failing to complete a task. Instead you should build systems and organisations that have slack, so that one person failing at a particular point in time doesn't make a project/org fail.
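
As a sanity check on the arithmetic, here's a minimal illustrative sketch (mine, not from the thread); note that the full-independence assumption is doing all of the work:

```python
# Reliability of redundant vs. chained systems, assuming fully independent failures.

def parallel_reliability(p_each: float, n: int) -> float:
    """n redundant components; the system works if at least one of them works."""
    return 1 - (1 - p_each) ** n

def serial_reliability(p_each: float, n: int) -> float:
    """n chained steps that must all succeed for the system to work."""
    return p_each ** n

# Ten independent 95% systems, any one of which suffices:
print(parallel_reliability(0.95, 10))  # ~1 - 1e-13, i.e. about thirteen nines

# The flip side: one 95%-reliable person across ten chained commitments:
print(serial_reliability(0.95, 10))    # ~0.60
```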

Replies from: evand
comment by evand · 2017-05-30T19:57:56.918Z · LW(p) · GW(p)

Well, in general, I'd say achieving that reliability through redundant means is totally reasonable, whether in engineering or people-based systems.

At a component level? Lots of structural components, for example. Airplane wings stay attached at fairly high reliability, and my impression is that while there is plenty of margin in the strength of the attachment, it's not like the underlying bolts are being replaced because they failed with any regularity.

I remember an aerospace discussion about a component (a pressure switch, I think?). NASA wanted documentation for 6 9s of reliability, and expected some sort of very careful fault tree analysis and testing plan. The contractor instead used an automotive component (brake system, I think?), and produced documentation of field reliability at a level high enough to meet the requirements. Definitely an example where working to get the underlying component that reliable was probably better than building complex redundancy on top of an unreliable component.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-25T22:29:18.736Z · LW(p) · GW(p)

Yeah. I've got a couple brilliant and highly capable friends/allies/advisors who also STRONGLY prefer opportunity framings over obligation framings. I think that's one of the things where the pendulum has overcorrected, though—I think the rationality community as a whole is rather correctly allergic to obligation framings, because of bad experiences with badly made obligations in the past, but I think we're missing out on an important piece of the puzzle. You can run a successful thing that's, like, "we'll do this every week for twelve weeks, show up as much as you like!" and you can run a successful thing that's, like, "we'll do this if we get enough people to commit for twelve weeks!" and I think the two styles overlap but there's a LOT of non-overlap, and the Bay Area rationalists are missing half of that.

Replies from: PeterBorah
comment by PeterBorah · 2017-05-25T22:37:54.605Z · LW(p) · GW(p)

"we'll do this if we get enough people to commit for twelve weeks!"

I actually totally buy this. There are some things where you just have to commit, and accept the obligations that come with that.

My hesitation primarily comes from the fact that the code of conduct seems intended to be pervasive. It even has requirements that happen entirely inside your own mind. These seem like bad features for an obligation-based system.

My model is that obligation-based systems work best when they're concrete and specific, and limited to specific times and circumstances. "Commit to performing specified activities twice a week for twelve weeks" seems good, while "never have a mental lapse of type x" seems bad.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-25T22:51:58.406Z · LW(p) · GW(p)

That makes sense, yeah. I'm hoping the cure comes both from the culture-of-gentleness we referenced above, and the above-board "Yep, we're trying to restructure our thinking here" and people choosing intelligently whether to opt in or opt out.

Good place to keep an eye out for problems, though. Yellow flag.

Edit: also, it's fair to note that the bits that go on inside someone's head often aren't so much "you have to think X" as they are "you can't act on ~X if that's what you're thinking." Like, the agreement that, however frustrated you might FEEL about the fact that people were keeping you up, you're in a social contract not to VENT at them, if you didn't first ask them to stop. Similarly, maybe you don't have the emotional resources to take the outside view/calm down when triggered, but you're aware that everyone else will act like you should, and that your socially-accepted options are somewhat constrained. You can still do what feels right in the moment, but it's not endorsed on a broad scale, and may cost.

Replies from: PeterBorah
comment by PeterBorah · 2017-05-25T22:58:23.853Z · LW(p) · GW(p)

it's fair to note that the bits that go on inside someone's head often aren't so much "you have to think X" as they are "you can't act on ~X if that's what you're thinking."

This framing does bother me less, so that is a fair clarification. However, I don't think it applies to some of them, particularly:

will not form negative models of other Dragons without giving those Dragons a chance to hear about and interact with them

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-25T23:11:32.805Z · LW(p) · GW(p)

True. Updated the wording on that one to reflect the real causality (notice negative model --> share it); will look at the others with this lens again soon. Thanks.

comment by Dagon · 2017-05-25T21:29:47.862Z · LW(p) · GW(p)

I applaud the experiment, and the writeup! Do you have a place where you'll publish metrics (people contacted, interest level, etc. before starting, and self-reported or objective measures of your stated objectives every week)?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-25T21:38:55.469Z · LW(p) · GW(p)

That's not been formally set, but yes—that's the biggest ask we get from interested outsiders, and it's clearly one of the "obvious things" that we ought to do, so it's been part of the plan for a while now. We just have to hammer out the details once the group is set.

Depending on interest, we may publish those updates here on LW, or make them available through my blog or FB, or some other option we haven't thought of yet.

Replies from: AndHisHorse
comment by AndHisHorse · 2017-05-26T12:39:43.772Z · LW(p) · GW(p)

From the skeptical side, I would strongly suggest committing to a publicly visible schedule for updates, reports on transitions (e.g. out of bootcamp), and a final report. The outside world would be well served by knowing how this turns out, and having a schedule which is evidently independent of considerations such as "is this currently going well" would do a great deal to reassure us that we will know in time.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T15:47:04.856Z · LW(p) · GW(p)

I do note that, while I'd like to collect data and make that data available to other humans trying to do cool stuff in the world, I'm not particularly concerned with assuaging all skeptics/reassuring people who, from the outside, think that it's bad. This post is sort of my one big push to do that, after which I planned to shrug and just let people make the judgments they're gonna make.

A schedule is still a solid structure just along the "do this properly" axis, though.

Replies from: Decius
comment by Decius · 2017-05-27T01:19:54.361Z · LW(p) · GW(p)

If you don't commit to publishing negative results, I commit to refusing to trust any positive results you publish.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-27T01:38:01.463Z · LW(p) · GW(p)

That's absolutely fair. The point I'm trying to make is that it's not about publishable results either way. Like, yes, I'd like to ship useful information to the outside world, but that's a distant second priority to making good things happen on the ground.

What I do commit to is not making the choice to publish based on whether things are good or bad. I commit to publishing if and only if a) I have spare time and cycles, and b) there's something useful for others to hear.

Replies from: Decius
comment by Decius · 2017-05-28T06:59:02.878Z · LW(p) · GW(p)

The only way there would be nothing useful to learn is if there was a complete failure due to circumstances outside of the influence of anyone involved, such as an earthquake that halted the plan. Even then a quick note to that effect would be of use.

comment by Screwtape · 2017-05-30T17:24:40.918Z · LW(p) · GW(p)

0) This is not for me, not because of a bug in the proposed structure but because I don't know you and don't know any of the people recommending you. Two people immediately came to mind whom, if they proposed this with themselves in your place, I would join in most circumstances, and three more whom I would probably follow into something like this over my current situation.

1) You can't name something Dragon Army and not expect nerd pedantry, but this is pedantry with a point behind it. Dragon Army (in the book) distributed leadership down as much as possible. Each toon leader had more degrees of freedom from Ender's plans, each toon had a second who was expected to make decisions, and soldiers were more free to question their toon leaders. I know Dragon Army (the name) has a certain positive association in rationalist circles, but what you're describing sounds more like Salamander Army. This is meant as nerd pedantry more than disagreement with your proposed goals or metrics (Salamander was doing really well in the standings after all) but the difference between Salamander and Dragon hierarchy seems important in this context. Dragon Army won by having a dozen good commanders all thinking at once, Salamander won by having one or two good commanders and being able to expect sharp obedience from everyone under them.

2) The second-highest-value change (the highest is brought up in point 0) would be some form of "I Told You So" and accountability. I find I am much happier to submit to doing things I think are incorrect if my dissension has been recorded and I can point at it later. Something like an internal prediction market is probably overkill and would erode confidence in leadership in a bad way, but a norm where someone could say "I'm 70% confident this treehouse won't support enough weight if we nail it like that" and someone quickly sticks that in a Google form might be fast enough not to interrupt things (see the sketch below). This may or may not help with general cohesion or be relevant to the people who are actually probably joining.

This is sort of related to how often "sure, I'll do it the way you said, as long as I have it in writing that I think it's dumb" has saved me by covering my rear; it also provides an important check on an incompetent leader. But mostly I'd want it because then the nagging thought "this is a bad idea" is out of my head and I can forget about it for a while. It's sort of like how singing a song out loud sometimes stops it being stuck in your head.
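
A minimal sketch of what such a dissent log might look like; the structure and names here are hypothetical illustrations, not anything Screwtape or Duncan actually proposed:

```python
# Hypothetical dissent log: record a dissenting prediction in one line,
# resolve it later for "I told you so" accountability.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Dissent:
    who: str
    claim: str
    confidence: float                  # e.g. 0.7 for "70% confident"
    came_true: Optional[bool] = None   # None until resolved

log: List[Dissent] = []

def record(who: str, claim: str, confidence: float) -> None:
    """Fast enough not to interrupt things: one line, then move on."""
    log.append(Dissent(who, claim, confidence))

def resolve(entry: Dissent, came_true: bool) -> None:
    entry.came_true = came_true

record("A", "this treehouse won't support enough weight if we nail it like that", 0.7)
```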

3) "Internal economy trading effort for money and so on" Can I pay someone to do my lateness-apology push ups for me? That's a joking example, but given the likelihood of having large income discrepancies something of that nature may come up, and it might be worth having a framework for it. In the same ballpark, intense cooperation seems like it might be odd in non-DA associated things. Examples; what happens if one member applies for a job at a company another member works for? What happens if one member commits a crime and asks other members to be their alibi? I don't really expect either of those examples to actually come up, but they are examples where organizations structurally similar to what you're proposing can do very well for its members in ways that maybe aren't good for the surrounding social structures.

4) If I knew that this general sort of setup was working well for all concerned, I wouldn't consider it lasting indefinitely with the same leader to be a bad thing. That said, since you stated an intention to only lead it for about a year, 'temporary' leaders leading indefinitely is pretty strongly associated with this general sort of setup no longer working well for all concerned. If this started today, and you were still leading it in two years, I'd take that as evidence something has gone wrong. This gets lessened greatly if individual people are regularly rotating out of the group and all have wonderful praises for it.

All of the above is even more true for romantic/sexual relations between the leadership and the rank-and-file.

5) I'm strongly in favour of this being tried, and I'll be reading any updates with great interest. Good luck!

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T17:42:46.351Z · LW(p) · GW(p)

Thanks for the detailed comment!

1) Yeah, I'm emphasizing the more authoritarian parts, because those are the more dangerous/aversive ones, but in fact Dragon Army is the source of the aesthetic. I agree with almost everything you said in 1), and that's what the house is supposed to be like. Don't forget, though, that while Ender distributed authority as broadly as possible, he was firmly, absolutely in command, in the end. When he spoke, they moved. The key thing was that a) he used that as rarely as possible and b) he didn't undercut his toon leaders when he exercised central authority.

2) Yeah, absolutely. We've already installed a norm of making direct, one-to-one bets, and are almost certainly going to install prediction markets and "I told you so" structures. In particular, I think the people originally opposed to a given failed experiment should be given greater weight in the next decision, if their predictions about that experiment came true. It's tough to balance this against "creating perverse incentives," but I think we can manage it.

3) Yes. It's tricky, because we have to work out rates-of-exchange between e.g. rich and poor participants, but an internal economy is something I hope to create with second-priority urgency (i.e. in the first couple of months).

4) I'm not committed to ceasing after a year, if all is going swimmingly, but essentially I want to open that question up to the group itself after six months.

5) Thanks!

Replies from: Screwtape
comment by Screwtape · 2017-05-30T19:57:26.070Z · LW(p) · GW(p)

My curiosity is satisfied by your answers to 2-4, but I want to dig a little deeper into 1 if you don't mind.

The source of the aesthetic is Dragon Army but emphasizing Salamander since those are the pieces more likely to be found off-putting makes sense to me. If someone's on the fence, they probably shouldn't go forward. That said, you may have overemphasized your ideal here. Ender was not firmly, absolutely in command; his toon leaders took up body-guarding him over his direct objections in a way that they wouldn't have for a more authoritarian commander. Would you consider such a mutiny to be a sign you'd failed, or a sign you'd succeeded? (I strongly don't expect body-guarding to be relevant, but I can imagine similar well-intentioned disagreements.)

Also, since you are changing the emphasis like this I wonder what your plans are for any Nikolai Delphikis* or Beans** that wind up involved? "Screen or vet people carefully so we don't have any" is noted as probably a good idea, but is also insufficient.

*By Nikolai, I mean someone who would be happy following a confident leader, but feels out of their depth being expected to constantly adapt without sufficient direction. A potentially good Salamander member who read the Salamander description and was surprised by the Dragon direction it took. Maybe even someone who looks very Dragon-like in most situations, but finds themselves the least-improving member of what you set up. On the one hand, if you're pulling from the rationalist population this seems an unexpected direction to find errors in; on the other hand, I have unexpectedly had the experience of finding myself the slowest and least agenty person in a group, and it was demoralizing in a way that made me empathize with the fictional Nikolai.

**By Bean, I mean someone who gets involved expecting more degrees of freedom or a higher position on the hierarchy than they wind up with. Bean put himself in Dragon Army knowing he was coming right out of launch, knowing he was small, and knowing Ender would have no reason to pay particular attention to this particular rookie, and then got upset that he wasn't given any authority or special notice. If you have at least fifteen people not counting yourself or your second, I'd be willing to make a 1:1 bet that you are going to wind up with someone wanting more degrees of freedom or more authority than you want to give them.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T22:07:25.877Z · LW(p) · GW(p)

I actually take the text of Ender's Game pretty seriously as a model; I think it offers a lot of good perspective on human morality and interaction. So I actually have the example of the toon leaders bodyguarding Ender as a salient ... um ... parable? ... in my head already, and would view that as a sign I'd succeeded.

We've already got a Bean; his name is Eli Tyre. His position as second-in-command didn't exist through the whole eight months of planning this until 12 hours before I posted the charter. Similarly, the more responsibility others can credibly take, the less I have to do myself; the only block here is credibly believing that the people taking power will do the right thing on all the levels of meta, or setting up scaffolds such that damage-from-mistakes is minimized and survivable.

As for Nikolais, the first priority is the sign of the derivative (are you progressing positively), the second priority is the derivative (is your progress steep), and a distant, distant third is your actual position (are you in fact now good at X). A major part of the point of the house is to make everyone, myself included, feel a bit like Nikolai? i.e. we want everyone to be at the edge of their growth. But similarly, we want every Nikolai to have a Bean ... hence the tight-knit, do-things-together, check-in one-on-one social structure.

I ... think that answered your questions? Let me know if I missed something important.

comment by Dapple · 2017-05-27T18:44:33.365Z · LW(p) · GW(p)

I think it's a solid proposal.

One major caveat, I think, is that it's a structure that wouldn't work for most people in the rationality community. Calling most of them libertines incompatible with such a strict framework wouldn't be too far from the truth. But those are the views of a very distant outsider who doesn't know the deeper views/feelings of the Berkeleyans you refer to, and is only familiar at a superficial glance.

But for a niche group of strongly driven baby rationalists lacking for direction/purpose who aren't opposed to operating within a strict structure, I don't know how this wouldn't be an ideal framework to use.

As a former military enlisted, I think all the military comparisons made are valid. Allow me to include one more: I believe that, also like the military, there will be a high turnover rate; once people get what they want out of the community, they leave. As I alluded to earlier, the appeal of joining is acquiring skills in discipline/organization/direction. Once those are acquired, there is very little left to motivate people to stay. But, in both cases, this isn't really a bad thing either. If everyone leaves after the one-year commitment but reflects on the experience positively, then it would still be considered a success.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-28T05:28:47.914Z · LW(p) · GW(p)

Yeah. In most-but-not-all of my conceptions of the house, I imagine "leaving" the post of guy-in-charge after a year, if not six months. Maybe not leaving the context as a whole, but "turning over" as far as roles are concerned.

Replies from: Decius
comment by Decius · 2017-05-28T07:02:33.252Z · LW(p) · GW(p)

It's hard to go from being the boss of someone to being their subordinate, and vice versa. I think it's more plausible to shift into an advisory, strategic, consultant, or executive role rather than swap.

comment by passinglunatic · 2017-05-27T00:40:09.522Z · LW(p) · GW(p)
  1. Sounds awful to me. I would absolutely hate to live somewhere where I was regularly told what to do and/or expected to fit in with rituals. I tolerate this kind of thing at work because I have to.

  2. What will you say when people come to you saying "I'm not sure this is really worth it for me"? I personally don't think self-improvement is a very stable overall goal. In my cursory acquaintance, most cults/high-demand living situations tend to believe in "something greater" - often something quite ridiculous, but nonetheless something bigger than the individual. Perhaps it is important to have something which seems to trump feelings of personal discomfort.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-27T01:52:27.447Z · LW(p) · GW(p)

Basically what I tell people (in answer to 2) is "ABSOLUTELY trust that instinct. This requires pretty high confidence that this is the right move, and DEFINITELY high confidence that if it's the wrong move you won't take significant damage over the six month period. If you're unsure, the answer should be 'no.'"

comment by ChristianKl · 2017-05-26T10:27:55.914Z · LW(p) · GW(p)

If, of course, the expectation is that everybody shows up on Tuesday and Thursday evenings, and the cost of not doing so is not being present in the house, suddenly the situation becomes simple and workable.

Does this means that a person who's ill and needs to be a week in the hospital will get kicked out? What about a person who's absent for a funeral of a relative? Business trips?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T10:49:48.616Z · LW(p) · GW(p)

The number of excuses for not being present is basically the most restrictive list you'd expect—if you're literally not in town, if you're sick, if you're attending to a personal tragedy. The idea is not to make the house anyone's first priority; it's to make it something like everyone's third priority (but actually above all but a couple of things).

So, no missing exercise because of a party, no missing it because you kinda need to work late, etc. Maybe missing for a once-in-a-year opportunity like a talk or a concert that you've been looking forward to for ages, with specific recompense to your housemates for the cost imposed by your absence? But in short, it's the thing that other stuff has to work around, not vice-versa.

Replies from: Decius, gwillen, ChristianKl
comment by Decius · 2017-05-27T01:05:01.555Z · LW(p) · GW(p)

Losing one's job to avoid missing a house meeting (needed to work late) is the kind of bad priority that should be addressed.

Perhaps some kind of explicit measure where housemates judge and excuse or not each case on a case-by-case basis, including a measure to request leave in advance as well as in arrears?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-27T01:48:47.967Z · LW(p) · GW(p)

Sorry, I should've been more clear. "Kinda" was the important operational word there, and you're correct to point out that the priorities could easily be construed as clearly bad.

I think your latter norm is basically what's going to happen. The key thing I want to avoid is the slippery slope whereby there's no clear line of "this counts as a defection." I think needing to work late is 100% acceptable. What I was pointing at was something like, "I could wrap this up by coming in early tomorrow, or I could defect on the standing group exercise appointment ..."

I want to thank you for the number of concrete, clear criticisms you're making, and the manner in which you're making them. I like your style.

Replies from: Decius
comment by Decius · 2017-05-28T07:58:18.654Z · LW(p) · GW(p)

A defection would be any case in which a member did not arrive on time or participate fully. Period.

I'm suggesting that there be a formal process by which a member arrives late, performs ten pushups, and joins the event in progress. At the conclusion of the event, he says "My Uber driver was involved in a minor collision on my way here and that delayed me for too long to arrive on time." and (by secret ballot?) the Army votes and some adequate margin of them excuse the failure.

The other aspect I suggested is that a Dragon might say "[event] is next week and I would like to attend but it conflicts with exercise. May I be excused from exercise for [event]?". Again, the Army would vote and decide if the absence is excused.

I'm at a loss as to what to do to sanction a member who is not excused. The military has a long list of 'corrective actions' and 'punishments' that they can apply only because they don't constitute 'kidnapping' or other crimes. I guess you could possibly make those '[task] or removal from the Army', but that runs straight into the eviction problem. I think that it's absolutely critical that there's a credible threat underlying the discipline, precisely so that it is less likely to be needed, and the only one I find plausible is ejection, which becomes complicated because of housing law and morality.

comment by gwillen · 2017-05-26T19:11:16.042Z · LW(p) · GW(p)

OK, this sounds quite a bit less authoritarian than I was picturing. I basically expected that you were planning to require this to be essentially everyone's first priority, maybe tied with paid employment at best, and even then requiring that paid employment take specific forms that don't conflict with the experiment. (I had definitely framed it this way in my head when I was asking my other question in this thread.) I don't know if I'm the only one.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-27T02:01:48.296Z · LW(p) · GW(p)

This is part of why I'm glad the conversation is unfolding as it is—probably not a lot of people will read literally every comment, but for anyone who's confused, we have a clear record of where I was wrong and changed my mind, or where I was unclear and people raised confusions.

I think DA should be third or fourth, with obvious things that might come ahead of it being work, family, pre-existing strong friendships, romance, and lifelong core passions.

comment by ChristianKl · 2017-05-26T11:20:29.696Z · LW(p) · GW(p)

It's likely useful to be clear about the process beforehand. Don't plan for 100% but have a process for what happens when things go sideways.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T15:04:10.200Z · LW(p) · GW(p)

Yeah, that's a major section of the post above. Specifics to be hammered out with the actual group, in the first weekend.

Replies from: ChristianKl
comment by ChristianKl · 2017-05-27T12:30:40.671Z · LW(p) · GW(p)

I'm not personally interested in living in the US, but if I were, the specifics would be important to me.

A lot of personal development seminars in the style of CFAR require a person to be at a specific location from Thursday to Sunday.

Many forms of travel mean that a person is gone for a week and would be absent. I don't know about your target audience, but being strongly location-bound can be a problem for some people.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-27T16:49:24.348Z · LW(p) · GW(p)

Many house members work for CFAR, and will be gone from Wednesday through Monday as a result. Many other house members travel for business or personal growth. The "I'm literally out of town" excuse is fully valid and supported, and if someone ends up trying to game the system by constantly traveling, I think the requirements of the house are still forcing them to grow in an interesting way. =)

comment by DaystarEld · 2017-05-26T04:30:33.479Z · LW(p) · GW(p)

I wanted to comment in order to, at the very least, publicly say that I love pretty much everything about this, and am crossing both fingers for its resounding success, both for the good it will do and the lessons we can all learn from it even if its success ends up looking different from how it's currently envisioned.

The main point of potential limitation that you acknowledge is that the time investment and rigid scheduling leave a lot of people out of luck: either because their job is not a standard 9-5 that would allow for predictable morning or evening availability, or because they have their own projects to work on. This can be seen as a plus, of course, since the house is going to be committing to long-term and serious group projects, which is more beneficial in both directions for those who aren't currently committed to other endeavors.

So I will be interested in seeing how things adapt if someone in the house, for example, levels up a bit, finds a new job that changes their ability to commit to house requirements, or discovers or commits to a desire to create, say, a long-running series of blog posts/videos/web serial/etc., but at the potential sacrifice of group projects, if not house-wide projects.

The ideal outcome in such a case might be, if possible, "wait until the current year is up then find a replacement and go do my own thing." Since people are likely to grow attached to the people and living situation of such a tight-knit "army," however, there's going to be some internal friction and conflict for many.

This ties into the larger idea, which I love, of "individual leveling up and sending superhero graduates out into the world to do lots and lots of exploring and tackle a wide number of strategies." I don't know how a "graduate" would be determined, other than potentially just by the person themselves.

Do you foresee yourself basically having a sit-down with one of your Dragons in a year or so and saying, "Hey X, I think you've grown a lot in your time with us, I'm so happy you were part of this, and I think we're hitting some pretty strong diminishing returns on what we can teach you going forward. Meanwhile I've got about a hundred people banging on the door to join us, but no room for them. You have three months to think seriously about what you're going to do next, and then we'll help you find a new place to live, preferably nearby if you want to stay close?"

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T05:07:13.507Z · LW(p) · GW(p)

Yes.

Replies from: DaystarEld
comment by DaystarEld · 2017-05-27T01:13:48.625Z · LW(p) · GW(p)

Cool. Do you plan on tracking each member's progress? Maybe a file for each member, with a score for "physical capacity, introspection, planning and execution skill," etc? Or are you planning on keeping things less concrete than that?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-27T04:13:06.219Z · LW(p) · GW(p)

It's one of the early questions for the group to solve via consensus; I'll probably take notes privately and there'll probably be a public status/progress board.

comment by JasonGross · 2017-05-26T04:18:12.430Z · LW(p) · GW(p)

Positive:

After reading the "What is Dragon Army [Barracks]?", my emotional response was "oooh, maybe I want to join!", whereas before, my emotional tone was "looks interesting and I want to see what happens"+"long-term social commitment tied to housing, ahhhhhhhh"

Less positive:

"its members are willing to hold doubt in reserve and act with full force in spite of reservations—if they're willing to trust me more than they trust their own sense of things (at least in the moment, pending later explanation and recalibration on my part or theirs or both)." Owwwww. This is not a thing I think I'm capable of, and this is not a thing I think I want to twist myself into being capable of. That said, there's an approximation to this (which might or might not be what you are actually pointing at), which I could easily see myself doing: it could become the case that my sense of things would frequently be "Duncan has more domain knowledge and better intuitions and a better sense of things than I do here", and I could possibly act with full force in spite of reservations when that is my sense of things (and am in fact doing something like that right now by posting here rather than on your FB wall), but, at least for me, trust is not yet a decision or a choice, but a thing that is built.

Relatedly, I think point 5 in the code of conduct is where I have the most internal pushback; committing to being unconditionally fully present and supportive, even if, e.g., I'm emotionally blown out or ~triggered, seems ... violating.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T05:10:35.112Z · LW(p) · GW(p)

It's more the latter approximation, but it is nonzero the first thing, which is a skill I think extremely worth building for transfer to other arenas (e.g. "I have no reason to be greater than 1% confident that this strategy to ameliorate AI x-risk will work, but also it will only work if I try full-force for six months, and I don't have any better options...").

Note that rule 1 (protect yourself) supersedes rule 5 (maybe that wasn't clear), and there will be ways to regain face; the house is committed to not doing the Stupid Thing.

comment by CronoDAS · 2017-05-26T00:06:47.558Z · LW(p) · GW(p)

I am frequently only 95% reliable or less. This is likely a bad thing and has led me to compensate in what are probably a lot of bad ways. Among them are a general reluctance to make commitments and a fear of responsibility. Is this something fixable or something I should deal with and work around?

Replies from: Duncan_Sabien, Decius, JacekLach
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T01:10:56.230Z · LW(p) · GW(p)

I don't think reluctance to make commitments here is actually a bad patch—there's something really mature and refreshing about someone who says, "Look, I find myself needing to cancel a lot, so I don't want to PROMISE I'll be there, K?"

I think it's fixable, but it's also possible that it's not a super urgent priority/the most important thing for you. I'd evaluate THAT question first, and then if you decide it is important, take things slowly as you try to improve it. Expect to make mistakes, look for what actually works rather than what should work, etc.

comment by Decius · 2017-05-27T01:22:14.877Z · LW(p) · GW(p)

Having a well-calibrated belief in your own reliability is better than being overconfident in yourself.

Making yourself more reliable is also an improvement. Whether that improvement is worth the cost is beyond my ability to guess.

comment by JacekLach · 2017-05-30T18:48:13.249Z · LW(p) · GW(p)

TBH I strongly disagree with OP's suggestion that 95% reliability is low / bad, at least read literally. I personally fail verbal 'soft commitments' ("I expect this will be done by end of week") at way more than a 5% rate; probably more like 20-30%. Part of it is being in a business where hidden complexity strikes at any time and estimating is hard; part of it is cultural communication norms.

If you ignore soft commitments, then the easy way to improve reliability is to make fewer hard commitments. Instead of "I'll definitely be there at 9 am sharp", say "I'll do my best to be there at 9 am". Manage expectations. Then if you have to message them 30 minutes beforehand that you're stuck in traffic / running late, your reliability is not impacted.

For stuff with really hard acceptance criteria (you actually have to be there for 9 am, because the plane won't wait), the right way to improve reliability is to build fault tolerant systems; make a soft commitment to be there an hour before, or have more people work on a problem than you expect to be necessary.

comment by Valentine · 2017-05-25T23:54:25.874Z · LW(p) · GW(p)

I really like this. I enjoy your aesthetic and ambition.

[…]But something magical does accrue when you make the jump from 99% to 100%[…]

There's something about this whole section that nags me. I really, really like the aesthetic… and yet… there's something about how it's phrased here that inspires a wish in me to argue with you about what you said.

I think what you're trying to get at here is how, when you convert a "shades of grey" perspective into a "No, this either hits these standards or it doesn't" kind of discrete clarity, it's possible to switch from approximation to precision. And when you chain together steps that each have to work, you can tell what the output is much more clearly if you're getting each step to give a binary "Yes, I'm working properly" or "Nope, not quite meeting the defined standard."

And I think you're using this to suggest that Dragon Army should be a system with discretely clear standards and with each component of the system (i.e., each person) either (a) definitely meeting that standard or (b) recognizing where they don't and then building up to that standard. This makes the whole system dependable in a way you just cannot do if there are no clear discrete standards or if the system is lax about some component not meeting the standards (e.g., giving someone a pass for merely "trying").

I think this is what you mean when you say, "[…]the 'absolute' part is important." The word "absolute" is standing in for having these standards and endeavoring for literally 100% of the (finitely many, discrete) components of the system 100% meeting those standards.

Confirm/deny/improve?

All of the above was meant to point at reasons why I suspect trusting individuals responding to incentives moment-by-moment to be a weaker and less effective strategy than building an intentional community that Actually Asks Things Of Its Members.

Yep, I agree. Free markets are a terrible strategy for opposing Moloch.

It's worth noting that the people most closely involved with this project (i.e. my closest advisors and those most likely to actually sign on as housemates) have been encouraged to spend a significant amount of time explicitly vetting me with regards to questions like "does this guy actually think things through," "is this guy likely to be stupid or meta-stupid," "will this guy listen/react/update/pivot in response to evidence or consensus opposition," and "when this guy has intuitions that he can't explain, do they tend to be validated in the end?"

I just want to add public corroboration on this point. Yes, Duncan encouraged along these lines. My own answers round to "is good" in each case. I'm really just flat-out not worried about him leading Dragon Army.

And it doesn't quite solve things to say, "well, this is an optional, consent-based process, and if you don't like it, don't join," because good and moral people have to stop and wonder whether their friends and colleagues with slightly weaker epistemics and slightly less-honed allergies to evil are getting hoodwinked. In short, if someone's building a coercive trap, it's everyone's problem.

I really like that you point out things like this.

Should the experiment prove successful past its first six months, and worth continuing for a full year or longer, by the end of the first year every Dragon shall have a skill set including, but not limited to[…]

I like the list, overall. I can give you a more detailed commentary in person rather than digging in here. Let me know if you'd prefer it done in public here. (Just trying not to overly tax public attention with personal impressions.)

[…]we are trying not to fall prey to GOODHART'S DEMON.

Heh. That reference made me laugh. :-) I like that as a focus, as will surprise you not at all.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T05:25:12.150Z · LW(p) · GW(p)

Confirm. More particularly, I'm pointing at something like "being able to rely on a plot that requires ten things to go right" (e.g. a CFAR workshop).

Feel free to add any number of additional thoughts and personal impressions—I like the idea of being able to say "But I did due diligence! We argued everything right out in the open fora!"

comment by Raemon · 2017-05-25T23:46:50.248Z · LW(p) · GW(p)

I'm in the "would probably like to be in round 2 of the experiment" camp (I think there's probably a frustratingly large number of people in that camp and a frustratingly small number in the "let's do this!" camp. I hope there's enough of the latter for this to work.

My main question (which you may or may not yet know the answer to) is: "If you don't have a Duncan, what version of this experiment would you recommend running?"

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-28T05:22:40.038Z · LW(p) · GW(p)

Get your house together and start a norm of "ironclad try-things experiments" lasting no less than two weeks and no more than four (with overlap or not being up to you). So, you have a regular house meeting where you all say, "What thing do we want to try, and top-down make ourselves keep trying past the first possible warning signs, because we suspect there's value on the far side of the valley?"

And then you run something like eight full experiments before you abandon the meta-norm.

Replies from: Raemon
comment by Raemon · 2017-05-28T06:22:21.111Z · LW(p) · GW(p)

Cool. Makes sense.

comment by handoflixue · 2017-05-31T20:06:40.299Z · LW(p) · GW(p)

Concerns about you specifically as a leader

1) This seems like an endeavor that has a number of very obvious failure modes. Like, the intentional community community apparently bans this sort of thing, because it tends to end badly. I am at a complete loss to name anything that really comes close, and hasn't failed badly. Do you acknowledge that you are clearly treading in dangerous waters?

2) While you've said "we've noticed the skulls", there have been at least three failure modes raised in the comments which you had to append to address (outsider safety check-ins, an abort/exit strategy, and the issue of romantic entanglement). Given that we've already found three skulls you didn't notice, don't you think you should take some time to reconsider the chances that you've missed further skulls?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T02:47:45.125Z · LW(p) · GW(p)

I note for others reading this comment and wondering why it hasn't been addressed that I've ceased replying to handoflixue and a couple of other posters on a policy level, for reasons surrounding norms of discourse, strawmanning, epistemic humility, presence or absence of good faith, etc. It's possible that the above contains good questions or insights; if someone else chooses to repost/re-ask/rephrase sections of this, I'll likely respond to them.

http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/

comment by handoflixue · 2017-05-31T19:39:13.618Z · LW(p) · GW(p)

Genuine Safety Concerns

I'm going to use "you have failed" here as a stand-in for all of "you're power hungry / abusive", "you're incompetent / overconfident", and simply "this person feels deeply misled." If you object to that term, feel free to suggest a different one, and then read the post as though I had used that term instead.

1) What is your exit strategy if a single individual feels you have failed? (note that asking such a person to find a replacement roommate is clearly not viable - no decent, moral person should be pushing someone in to that environment)

2) What is your exit strategy if a significant minority of participants feels you have failed? (i.e. enough to make the rent hit significant on you, not enough to outvote you)

3) What is your exit strategy if a majority of participants feel you have failed? (I realize you addressed this one somewhere in the nest, but the original post doesn't mention it, and says that you're the top of the pack and the exception to an otherwise flat power structure, so it's unclear if a simple majority vote actually overrules you)

4) What legal commitments are participants making? How do those commitments change if they decide you have failed? (i.e. are you okay with 25% of participants all dropping out of the program, but still living in the house? Under what conditions can you evict participants from their housing?)

5) What if someone wants to drop out, but can't afford the cost of finding new housing?

6) It sounds like you're doing this with a fairly local group, most of whom know each other. Since a large chunk of the community will be tied up in this, are you worried about peer pressure? What are you doing to address this? (i.e. if someone leaves the experiment, they're also not going to see much of their friends, who are still tied up spending 20+ hours a week on this)

Questions I think you're more likely to object to

(Please disregard if you consider these disrespectful, but I think they are valid and legitimate questions to ask of someone who is planning to assume not just leadership, but a very Authoritarian leadership role)

7) You seem to encounter significant distress in the face of people who are harshly critical of you. How do you think you'll handle it if a participant freaks out and feels like they are trapped in an abusive situation?

8) In this thread, you've often placed your self-image and standards of respect/discourse as significantly more important than discussion of safety issues. Can you offer some reassurances that safety is, in fact, a higher priority than appearances?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T02:48:08.027Z · LW(p) · GW(p)

I note for others reading this comment and wondering why it hasn't been addressed that I've ceased replying to handoflixue and a couple of other posters on a policy level, for reasons surrounding norms of discourse, strawmanning, epistemic humility, presence or absence of good faith, etc. It's possible that the above contains good questions or insights; if someone else chooses to repost/re-ask/rephrase sections of this, I'll likely respond to them.

http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/

comment by Jacob Falkovich (Jacobian) · 2017-05-30T19:14:53.156Z · LW(p) · GW(p)

We will have an internal economy whereby people can trade effort for money and money for time and so on and so forth, because heck yeah.

The last time I lived in an actual barracks, we did exactly that and it worked out great. In brief, all chores were assigned values in a currency and auctioned out to members. Members with less currency had priority in taking over tasks. If no one volunteered, the person with the least currency had to do it. Eventually, these points were used for some bartering of mutual services.

There's more detail and more bad jokes in the blog post I wrote about it.
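
(If it helps to make that mechanism concrete, here is a minimal sketch of such a chore auction. The member names, point values, and exact tie-breaking rule are my own illustrative assumptions, not details from the barracks system or the blog post.)

```python
# Minimal chore-auction sketch: chores carry point values,
# low-balance members get priority, and unclaimed chores fall
# to whoever has the fewest points. Doing a chore earns its value.

def assign_chore(value, balances, volunteers):
    if volunteers:
        # Priority goes to the volunteer with the fewest points.
        doer = min(volunteers, key=lambda m: balances[m])
    else:
        # Nobody volunteered: conscript the poorest member.
        doer = min(balances, key=balances.get)
    balances[doer] += value  # the doer earns the chore's value
    return doer

balances = {"alice": 3, "bob": 0, "carol": 5}
who = assign_chore(value=2, balances=balances, volunteers={"alice", "carol"})
print(who, balances)  # alice (3 < 5) does it; her balance becomes 5
```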

comment by oge · 2017-05-26T15:06:04.304Z · LW(p) · GW(p)

Hey Duncan, where can I sign up for this?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T15:44:49.245Z · LW(p) · GW(p)

Huh, I had not imagined Oge alongside the other probable housemates, but I'm imagining it now and it's certainly interesting.

Send me an email with more details about your interest?

comment by ChristianKl · 2017-05-26T10:55:40.390Z · LW(p) · GW(p)

At least one of: fundamentals of woodworking, electrical engineering, welding, plumbing, or similar (employable trade skill)

How much time investment do you think it needs to pick up one of those skills?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T11:06:14.361Z · LW(p) · GW(p)

Y'know, that was the section I was least confident in. I think I'm updating my assertion to something like "will have logged an initial 20 hours, enough to understand the territory and not feel identity-blocked from moving forward if desired."

I suspect you're looking at at least 100 hours to even begin to be competent to do informal contract work in any of those fields, and probably more like 1000+ hours of training. Some of them require certification, as well.

Replies from: evand, gwillen, ChristianKl
comment by evand · 2017-05-26T13:06:27.086Z · LW(p) · GW(p)

Those numbers sound like reasonable estimates and goals. Having taught classes at TechShop, I can say that first handful of hours is important. 20 hours of welding instruction ought to be enough that you know whether you like it and can build some useful things, but probably not enough to get even an intro-level job. It should give you a clue as to whether signing up for a community college class is a good idea or not.

Also I'm really confused by your inclusion of EE in that list; I'd have put it on the other one.

comment by gwillen · 2017-05-26T19:06:55.113Z · LW(p) · GW(p)

I was assuming "fundamentals of" didn't imply getting the skill to the point that one actually would be employable with it, just that one would get enough of the basics to do the skill and continue to practice it. That level seems eminently achievable. The greater level does seem challenging.

comment by ChristianKl · 2017-05-26T12:07:25.990Z · LW(p) · GW(p)

Can you share why you consider this to be a goal important enough to put into the list?

Replies from: Duncan_Sabien, drethelin
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T15:03:10.273Z · LW(p) · GW(p)

Yeah. It's something like, conclusive proof that one is not falling into traps of fixed mindset? And also it's a productive sort of comfort-zone exploration, where intellectual nerdy nerds like myself are getting their hands literally dirty in a way that's likely to be healthy and rounding.

comment by drethelin · 2017-05-29T21:08:52.798Z · LW(p) · GW(p)

Personally I think having some people living in a house who know how to improve and maintain it is a good way to avoid many of the potential long-term problems of living in what's likely to be a century-old building.

comment by Lulie · 2018-02-10T22:04:28.373Z · LW(p) · GW(p)
... a strong sense based off both research and personal experience that physical proximity matters, and that you can't build the correct kind of strength and flexibility and trust into your relationships without actually spending significant amounts of time with one another in meatspace on a regular basis, regardless of whether that makes tactical sense given your object-level projects and goals.
But I'm going to hold off on going into those in detail until people insist on hearing about them or ask questions/pose hesitations that could be answered by them.

Why does physical proximity matter?

It seems intuitively true to me, but what really is it about physical proximity that makes such a big difference?

My guesses are around 'information bandwidth': seeing how someone interacts in the physical world is a large channel of information, which you don't get from online interaction. (At least, we won't until VR becomes more sophisticated, and can do things like track and render our eye movements.)

Replies from: Benito
comment by Ben Pace (Benito) · 2018-02-10T23:29:28.976Z · LW(p) · GW(p)

A key part of how we do trust is the massive amount of information gained from seeing people in a wide variety of social contexts.

You can assess a person's abilities to deal with a friend who is upset; to behave well in one-on-one settings, small parties, or formal events; to deal themselves with being under great stress; to correctly pick up on subtle social cues that someone could use some help / is overstepping their bounds. I've made assumptions about people's reliability and trustworthiness after interactions via text/skype, yet seen them act in distinctly weird ways at social gatherings that made others uncomfortable, and I learned I'd made false assumptions of generality.

We have a lot of heuristics based on facial expressions and socialising cues that are hard to explicate (or even notice).

comment by artemium · 2017-06-01T06:16:55.955Z · LW(p) · GW(p)

This is a good idea that should definitely be tested. I completely agree with Duncan that modern society, and especially our community, is intrinsically too allergic to authoritarian structure, despite strong historical proof that this kind of organisation can be quite effective.

I would consider joining in myself, but given my location that isn't an option.

I do think that in order to build a successful organisation based on authority, the key factors are the personal qualities and charisma of the leader; the rules play a smaller part.

As long as the project is based on voluntary participation, I don't see why anyone should find it controversial. Wish you all the best.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T06:24:50.626Z · LW(p) · GW(p)

Thanks for the support. Hope to funnel some interesting data back to the rest of the world.

comment by [deleted] · 2017-05-31T04:26:23.680Z · LW(p) · GW(p)

a

Replies from: drethelin, MaryCh, cousin_it, Duncan_Sabien
comment by drethelin · 2017-05-31T05:59:20.374Z · LW(p) · GW(p)

THIS IS WHY WE NEED DOWNVOTES.

Replies from: metatroll, gwillen
comment by metatroll · 2017-05-31T06:52:14.788Z · LW(p) · GW(p)

Downvote Army, Theory and Charter (2300 year read)

Replies from: username2
comment by username2 · 2017-06-02T15:50:28.670Z · LW(p) · GW(p)

Thank you metatroll; you're our only hope.

comment by gwillen · 2017-06-03T08:45:15.059Z · LW(p) · GW(p)

I don't disagree that downvotes are valuable, but I think what was needed here was moderator action. It's much too late for that now, though. (And I'm not blaming the moderators -- I've been in their shoes and their job is very difficult. There would have been plenty of blame heaped on them if they'd done what I think is the right thing.)

comment by MaryCh · 2017-06-01T14:36:46.937Z · LW(p) · GW(p)

Why do people insist on "criticizing people" instead of "criticizing proposals"? This is like the fundamental attribution error, only intentional.

comment by cousin_it · 2017-05-31T09:20:44.536Z · LW(p) · GW(p)

As someone who has also criticized Duncan strongly, I don't think telling him to kill himself ended up helping your goals, whatever they are. There's no point trying to influence people and doing it poorly.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-31T04:38:48.548Z · LW(p) · GW(p)

Make a bet; put up or shut up. $1000 to your $100 that no one opting in to Dragon Army experiences significant emotional distress as a result of its requirements, bet evaluated at the end of the first six months.

comment by Sinal · 2017-05-27T05:24:08.780Z · LW(p) · GW(p)

This post makes me miss my days in marching band, or in the Boy Scouts. Honestly it doesn't sound all that authoritarian. Can you not accomplish the same thing using a traditional organization and a meeting place? Why does it have to be a house?

Replies from: John_Maxwell_IV, Qiaochu_Yuan
comment by John_Maxwell (John_Maxwell_IV) · 2017-05-27T07:27:00.289Z · LW(p) · GW(p)

This post makes me miss my days in marching band, or in the Boy Scouts. Honestly it doesn't sound all that authoritarian.

I agree with the sentiment. It seems that most things in modern culture which demand commitment and/or group cohesion, like marching band or the Boy Scouts, are at least a few decades old. I suspect this is because we have developed cultural antibodies against the creation of new things like this (as evidenced by some of the comments in this thread).

When Tocqueville visited the United States in the 1830s, it was the Americans' propensity for civic association that most impressed him as the key to their unprecedented ability to make democracy work. "Americans of all ages, all stations in life, and all types of disposition," he observed, "are forever forming associations. There are not only commercial and industrial associations in which all take part, but others of a thousand different types--religious, moral, serious, futile, very general and very limited, immensely large and very minute... Nothing, in my view, deserves more attention than the intellectual and moral associations in America."

Source: http://xroads.virginia.edu/~HYPER/DETOC/assoc/bowling.html (This quote is part of a longer essay about declining social capital in the US)

If we've lost cultural memories about how to create new associations, early attempts to get "association culture" going again may fail. But they seem like very worthwhile experiments. (I suppose if people dislike learning things the hard way, they might be able to read about the early history of successful associations and glean some best practices?)

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2017-05-27T23:09:43.373Z · LW(p) · GW(p)

Exploring this intuition more deeply: Successful communities are known to do community stuff like sing together and have secret handshakes. If a person in a fledgling community proposes doing something like this for the first time, people in our culture are apt to shut them down by saying it's weird or (if done for the explicit purpose of community-building) inauthentic. The skeptics miss the fact that the weirdness is a feature, not a bug. Doing weird stuff with other people builds deep friendships, in the same way sharing private thoughts and fears builds deep friendships. Then somewhere along the line the weird stuff starts to become a tradition, and the fact that it's a tradition builds group cohesion in a different way.

Hypothesis for why the antibodies exist: People noticed that there were standard methods for creating in-group identification, and these methods were exploited by con artists, advertisers, managers trying to get their employees to work harder, teachers trying to get their students to behave, etc. Antibodies formed in response.

Replies from: Kaj_Sotala, Duncan_Sabien
comment by Kaj_Sotala · 2017-05-28T16:11:22.346Z · LW(p) · GW(p)

Hypothesis for why the antibodies exist: People noticed that there were standard methods for creating in-group identification, and these methods were exploited by con artists, advertisers, managers trying to get their employees to work harder, teachers trying to get their students to behave, etc. Antibodies formed in response.

Given that the standard response to "weird" groups that demand cohesion/commitment seems to be "that sounds like a cult", it feels like these antibodies could have developed after the cult scares, which Wikipedia tells me showed up seriously in the 1970s.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2017-05-29T00:02:04.918Z · LW(p) · GW(p)

Yeah, this is my hypothesis. Vietnam and Watergate probably seriously contributed to a general erosion of trust in authorities as well.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2017-05-30T05:05:40.271Z · LW(p) · GW(p)

Wasn't "Generation X" supposed to have been really cynical compared to the generations that preceded it? When I search for "cynical generation" on Google I get a bunch of results about how Millennials are really cynical too. So maybe it was a permanent shift.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-28T05:25:43.681Z · LW(p) · GW(p)

I like both of these posts a lot. Thanks for adding them—they helped me make explicit something implicit that I felt very strongly.

comment by Qiaochu_Yuan · 2017-05-27T09:00:16.253Z · LW(p) · GW(p)

Can you not accomplish the same thing using a traditional organization and a meeting place? Why does it have to be a house?

A couple of reasons occur to me. First, everyone's real goddamn busy. If you already live in a rationalist house and also have a job there's not gonna be a ton of time or attention left in your life for other stuff as big as what Duncan wants Dragon Army to be. Second, Duncan wants people to do things like exercise with each other first thing in the morning before heading off to work, and it seems really annoyingly difficult to coordinate something like this with anyone other than the people you live with.

In general it's just way, way easier to coordinate all sorts of activities with the people you live with than with anybody else. My most direct experience with this was living in a fraternity and seeing the difference between the brothers who did and didn't live in the house; there was a big difference in terms of social accessibility and bonding, and accordingly we strongly encouraged people to live in the house when at all possible.

comment by ChristianKl · 2017-05-26T10:51:10.720Z · LW(p) · GW(p)

A Dragon will take responsibility for its actions, emotional responses, and the consequences thereof, e.g. if late will not blame bad luck/circumstance, if angry or triggered will not blame the other party. [...] a Dragon who has been having trouble getting to sleep but has never informed the other Dragons that their actions are keeping them awake will agree that their anger and frustration, while valid internally, may not fairly be vented on those other Dragons

This sounds like you need an agreement about what it means to blame and what it means to vent at another person. Circling norms would suggest that it's good if a person expresses their emotions.

One solution would be that if a person goes into venting mode, the reaction of the other person would be to go into Circling/NVC or Focusing queries, and the venting person is responsible for answering the queries.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-26T11:04:43.864Z · LW(p) · GW(p)

Circling and Double Crux facilitation are absolutely top priorities, for interpersonal communication. I strongly agree with your suggestion.

comment by GuySrinivasan · 2017-05-26T02:31:49.454Z · LW(p) · GW(p)

Love it. Reminds me of my strong preference to make rules for myself and follow them even when it seems locally silly over trying to continually make good decisions in the moment.

comment by Decius · 2017-05-26T00:04:50.923Z · LW(p) · GW(p)

I would also add the rules that cover the edge cases:

A Dragon does not skirt the letter or intent of the rule, or attempt to comply minimally with either.

comment by Qiaochu_Yuan · 2017-05-25T23:57:57.898Z · LW(p) · GW(p)

I am super, super in favor of this experiment, and would have enthusiastically participated fully in it something like 2 years ago, before moving to Terabithia. I think it's tackling the biggest things missing from the community and am very excited to see what happens.

comment by sohois · 2017-06-01T11:27:05.581Z · LW(p) · GW(p)

I've got a seemingly obvious flaw to point out; in fact, it appears so obvious to me that I would be surprised if it hadn't been addressed in the original post or one of the subsequent comments and I simply skipped over it. Nonetheless, it may be of use.

I feel that the whole experiment is rather undermined by selection bias. I think it's a fair assumption that, were the experiment successful, you would want this method tried elsewhere; you would want "Dragon Houses" to pop up anywhere there is a sufficient rationalist community. However, it would be an error to think that the Dragon House model would work elsewhere once you aren't picking out the people most suitable for living there, and have to expand to more general types. Again, I feel it is a fair assumption that some people will simply be a lot more suited to authoritarian communities, such as the Army, than others. If you can pre-approve for authoritarian types, and eject those who don't fit as you identify them, then it seems far more likely that the community will survive, but it could still be inferior to another model that does not select so heavily.

Is this merely a proof of concept? That is, will you run the Dragon House for a short period of time under perfect conditions, to ensure that it is not a complete disaster and does not result in the 'toxic cult' dangers that others have outlined, before expanding the experiment? In that case the selection bias would be removed in the second run and you could ascertain the general effectiveness of the model.

Apologies for the poor construction of the above; I struggled somewhat to put it into words, but I hope you can comprehend regardless.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T16:37:09.660Z · LW(p) · GW(p)

No, this is a valid point.

I'm not, in fact, looking to build an exportable model. I'd like to export pieces of a model—specific activities that worked, norms that are high-impact, cool insights and so forth. But I don't particularly want or expect other Dragon Army houses to pop up elsewhere.

It's not a proof-of-concept so much as something that I, myself, want to have access to/experience/be a part of. If I can make it work even just this one time, in this one place, that's enough.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-31T04:57:24.270Z · LW(p) · GW(p)

Making this top-level instead of troll-feeding:

Make a bet; put up or shut up. $1000 to your $100 that no one opting in to Dragon Army experiences significant emotional distress as a result of its requirements, bet evaluated at the end of the first six months.

I extend this offer to cousin_it and handoflixue (not to 18blahblah because they're not representing themselves as a real person).

Replies from: handoflixue, Alicorn, cousin_it, entirelyuseless
comment by handoflixue · 2017-05-31T19:24:21.517Z · LW(p) · GW(p)

And it doesn't quite solve things to say, "well, this is an optional, consent-based process, and if you don't like it, don't join," because good and moral people have to stop and wonder whether their friends and colleagues with slightly weaker epistemics and slightly less-honed allergies to evil are getting hoodwinked. In short, if someone's building a coercive trap, it's everyone's problem.

I don't want to win money. I want you to take safety seriously OR stop using LessWrong as your personal cult recruiting ground. Based on that quote, I thought you wanted this too.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T00:26:01.945Z · LW(p) · GW(p)

Point of fact/order: I have recruited ZERO people as a result of this post; that was never its intention, I already had a set of ~20 people plausibly interested and THIS IS WHY I CONTINUE TO ENGAGE WITH EVERYONE OTHER THAN YOU, STOP SLIPPING STRAWMANNED NEGATIVE STEREOTYPES INTO THE VAST MAJORITY OF YOUR COMMUNICATION HOLY CRAAAAAAAAP.

Only one new person has expressed interest, and has greater than thirty percent odds of getting in; by this point, I feel justified in saying you're a jerk; get somebody else to post your reservations if you want them addressed. You have BY FAR earned the right to be ignored.

(I'm curious what sort of mental process leads you to be overconfident in a false/straw conclusion ten times in a row, and yet still not pause and do any sort of meta-check the eleventh time, but alas, I shall not find out.)

Replies from: None
comment by [deleted] · 2017-06-01T01:22:37.371Z · LW(p) · GW(p)

You flipping out in response to text comments, despite having the luxury of time and privacy to compose your responses, doesn't bode well for how you'd react to a member screaming in your face about how you hoodwinked them into an abusive arrangement.

You may feel that handoflixue is strawmanning you, assuming bad faith, etc, but the person screaming in your face could do much much worse, even if you did everything right! If you can't handle this level of criticism gracefully, you're not fit to lead anything like your proposal.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T02:19:09.410Z · LW(p) · GW(p)

You can connect those dots, but I do not. In particular, I'm less flipping out at handoflixue in particular, and more loudly signaling strong rejection of what they're doing. In other words, it's much, much, much more about "everyone else" at this point than it is about handoflixue, with whom I made a policy-level decision not to cooperate many comments back. I reject the implicit assumption in your post that "always be quiet/calm/nice/polite back" is actually a good rule—in real life, Gandhi only wins against an enemy who's willing to update, and however much handoflixue has indeed rolled back their tone, they haven't even tried to stop strawmanning and jumping to conclusions.

You can certainly disagree with me about whether these policies I'm following are net good or optimized in ways you'd endorse, and that's entirely cool—the point is not to please everyone, but to be 1) principled, 2) consistent, and 3) transparent. Nobody who would enter this experiment (i.e. be intrinsically interested AND make it past all the filters) will end up behaving as poorly as handoflixue. That's kind of the whole point of filters—to screen out people who embrace and endorse unacceptable-according-to-the-subgroup behavior.

Replies from: Lumifer, None
comment by Lumifer · 2017-06-01T02:35:52.257Z · LW(p) · GW(p)

I'm less flipping out at handoflixue in particular, and more loudly signaling strong rejection of what they're doing

So you're screaming at people to virtue signal?

comment by [deleted] · 2017-06-01T03:01:17.081Z · LW(p) · GW(p)

I think you're being wildly optimistic about your vetting procedures. I don't think you can reliably predict how people will react in high-stress situations with your filters.

in real life, Gandhi only wins against an enemy who's willing to update

Well too bad, because your hypothetical screaming roommate isn't willing to update, and they're screaming in your face at 2am anyway. Can you defuse the situation? Or will you end up with, at best, a messy eviction that's traumatizing for all parties involved?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T03:10:42.437Z · LW(p) · GW(p)

I ask, on a meta level: was this question rhetorical?

Because I suspect there's literally no answer I could give that would satisfy you (but I hold that suspicion lightly, and will believe you if you tell me I'm wrong).

The thing is, any true and honest answer to "how will you defuse that situation?" is something like 20% principles and 80% context/subtlety/reactions in the moment.

The answer to your question is yes, I can defuse the situation, and the confidence in the yes comes from the fact that I have defused such situations before—when other people caused them. I've also defused such situations when caused by me, but outside of the context of fights within a romantic relationship (which I claim is a special case, and where I also played the role of defuser more than the role of exploder when I couldn't just head things off at the pass in the first place), the last time I caused such a situation was about sixteen years ago. I learned how to not cause them.

And no, I can't give you a blow-by-blow that will sound convincing, because again, it's all context. And in other places in this post, where I listed general principles and heuristics, people who were already predisposed to be hostile pointed out that, from their perspective, it sounded a lot like empty platitudes.

So I'm curious what the point of the question was, and if it was to honestly ask, I'm curious what sort of answer would actually satisfy you.

I'll take silence to mean "it was a rhetorical question."

Replies from: None
comment by [deleted] · 2017-06-01T11:40:48.179Z · LW(p) · GW(p)

Yes, that particular question was rhetorical.

But my more general point is that I think you're wildly overconfident in your ability to manage difficult social situations, because I think very few people could successfully navigate the issues that will arise if this goes wrong, and you haven't given me enough reason to think that you're extraordinarily good. What little I know of you (this comments section) points towards you being a fairly regular person who gets upset when people pummel you with unfair criticism and reacts in fairly regular ways. I am not convinced that is good enough to undertake a dangerous and BINDING venture.

Since I think it would take an extraordinary person to pull off a soft landing if this goes catastrophically wrong, it would take rather extraordinary evidence to convince me that you are such a person. The sort of answer that would satisfy me is of the sort that involves a good number of other people testifying that they know based on experience that you would be able to handle the worst-case scenarios.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T16:34:51.332Z · LW(p) · GW(p)

You're welcome to have your prior of "I think very few people could navigate these kinds of social issues" cause you to bet, in each specific case, that the answer is "no;" default skepticism is clearly the logical strategy there.

But I don't know where you got the impression that I was trying to update your opinion/satisfy you, or even providing evidence that potentially could. Like, sure, if you conceptualize all of this as "Duncan trying to impress the general public and get them to endorse him personally," then this is a pretty poor showing—but that wasn't what this post was for. It's not one of the targets and never was.

I was seeking as many concrete, object-level criticisms and ideas as I could find, and that's it. I get that you're not convinced of me personally, but I was never trying to convince you in the first place; you thinking I'm overconfident or not is pretty noisy evidence and not worth optimizing for. (Further, I posit that attempting to be convincing via internet comments would be a fool's errand anyway.)

The simple fact is, I am indeed one of those very few people. I can point to three or four other people who are better just in my own small social circle, but they're busy doing other things and can't take the time to start a house. You'll note that there are a few people openly testifying as to my ability in these comments, and also that the biggest testimony is "deciding to join in," which twenty people did on an experimental weekend and around ten are planning to do for the long haul.

comment by Alicorn · 2017-05-31T05:37:03.002Z · LW(p) · GW(p)

Evaluated at the end of six months how?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-31T05:56:27.662Z · LW(p) · GW(p)

Participant (or dropout) self-report, including (if necessary) a reasonable buffer time of like a month for e.g. someone who left the house to get un-headstuck if they feel headstuck.

comment by cousin_it · 2017-06-08T09:33:31.209Z · LW(p) · GW(p)

After thinking about it for a while, I'm willing to accept some bets, but not this one. Most of my objections here have been about learning skills, not emotional distress. How about we agree on objective tests for some of the skills you listed (e.g. 10 pull-ups on video, welding certificate, $X earned on graphic design commissions) and make a 50:50 bet for $500 that less than 50% of the planned participant-skill pairings will come true in a year? (I'm open to bargaining about every part of this.) If it puts a fire under you and many people learn tangible skills as a result, for me it will be money well spent.

comment by entirelyuseless · 2017-05-31T14:53:25.197Z · LW(p) · GW(p)

You might not want to accept in order to limit total risk, but I would like to take the $100 side of this bet if you are agreeable.

This is not because I think your project is guaranteed to fail, but because there is definitely a much larger than 10% chance of at least one person saying that they suffered significant emotional distress. Your implied odds here are very overconfident. If there are e.g. six participants, a 98% per-person chance of not saying that they had such distress implies an approximately 12% chance of someone saying that they did.
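
(For concreteness, a quick check of that arithmetic, using the commenter's illustrative six participants and assuming the six reports are independent:

$$P(\text{at least one report}) = 1 - 0.98^6 \approx 1 - 0.886 \approx 0.114$$

That is roughly an 11-12% chance, above the roughly 9% break-even probability implied by staking $1000 against $100.)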

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-31T17:29:31.530Z · LW(p) · GW(p)

Yeah, I have to limit total financial risk, but I can at least offer you a bet of something like $200 to $20.

Note that I'm very specifically betting about emotional distress because of the unique aspects of the experiment, and not simply "no one will be distressed at any time." Like, I'm not betting at ten to one odds that no one's going to have a rough month or get angry at one another, I'm just betting that it won't be because of any of the ways in which Dragon Army is different from casual group housing.

Replies from: entirelyuseless, eeuuah
comment by entirelyuseless · 2017-06-01T02:28:14.318Z · LW(p) · GW(p)

$200 to $20 is fine, as well as the limitation to the specific aspects of this group house, as long as the fact of this matter is judged based on the self assessment of the participants. (That is, if some participant says it was because of those aspects, I win the bet, without further investigation into the accuracy of their claim.)

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T02:44:25.771Z · LW(p) · GW(p)

Yeah, it absolutely should come from the participants, free of investigatory pressures. Confirmed!

comment by eeuuah · 2017-05-31T20:25:45.448Z · LW(p) · GW(p)

For an experiment that is going to have an explicit cost in the tens of thousands of dollars, and an even higher implicit cost, $1000 doesn't seem like very much to bet on an aspect of it which you are confident in.

Not that the experiment would necessarily be an overall failure if some participants experienced great emotional stress and washed out. An org with sufficiently high performance pressure should expect washouts.

(For what it's worth, I am sympathetic to the sort of thing you're trying to do here, and would be interested in participating in a similar experiment, but am very turned off by particular elements of your approach.)

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T00:27:15.678Z · LW(p) · GW(p)

Betting $1000 has a low cost, but betting $1000 multiple times adds up; I don't make very much money and won't be making any through this experiment (indeed, I expect to lose money funding various little projects).

Replies from: eeuuah
comment by eeuuah · 2017-06-01T17:53:51.590Z · LW(p) · GW(p)

Yeah that's reasonable. I read your post as being unwilling to bet even $1000 overall, my b if that was a misinterpretation.

I didn't expect you would be making any money on this venture (social organizing is usually expensive) - I expect that anyone willing to put together a venture like this is doing it because they think the outcome will be good, not because they think it will be personally profitable.

Regardless, I look forward to seeing what comes of your experiment.

comment by philh · 2017-05-30T10:57:01.769Z · LW(p) · GW(p)

I'm curious whether and how you're planning to accommodate Dragons with disabilities (physical or mental)? Specifically regarding the skills you expect them to be above-average at by the end of the year, but also in general.

(If the answer is something like "Dragon army is probably not a good fit for helping them become their most awesome", I think that's totally fine.)

(This is a long comment thread, and you might have answered my question already. If so, feel free to just tell me so, and I'll try to search it out. I did do an in-page search for the word "disabled".)

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-30T16:48:51.807Z · LW(p) · GW(p)

Thus far, we're not prepared to help people with major disabilities become their most awesome. We've got a few minor physical and mental issues in the mix, but they're all the sorts of things that require on-the-spot accommodation or exemption from fewer than 10% of our activities.

comment by vackosar · 2017-07-11T19:45:52.004Z · LW(p) · GW(p)

Sounds good. I think you should attempt this on a small scale and with fewer rules, and report back collectively. Personally I would be open to trying out some vanilla variation on this.

The name sounds a little bit childish. I would rather choose something more serious and mature, like the name of some mineral, chemical, geographical feature, or similar.

comment by Larks · 2017-06-01T03:07:24.362Z · LW(p) · GW(p)

The section on goals reminded me a little of Plato's Republic. The perfect society involves sacrificing all wealth, art, free expression, and what does it offer in return?

Victory in war against similar-sized enemies.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-01T03:15:15.312Z · LW(p) · GW(p)

Ack erk

I do have a concept of a scale of luxuries/art/expression and so forth, where it goes from a beautiful, rich, well-lived life on one end to lotus eating/masturbation on the other. But one of the first things I learned as I tried to explain this concept to others was that the judgment aspect was deeply personal, and that e.g. what I would identify as lotus eating in myself might be the-bare-minimum-to-make-life-worthwhile in someone else. Trying to tell someone else that what they were doing was lotus eating was just an irretrievably asshole move; even asking could cross the line if it carried with it overtones of judgment or pressure.

I strongly disagree with Plato, at least as you've summarized him here. I agree that that is a horror and a travesty.

Replies from: Larks
comment by Larks · 2017-06-02T02:03:03.335Z · LW(p) · GW(p)

I think Plato fans would probably argue I'm being somewhat unfair. If nothing else, the society described was intended as a metaphor for the virtuous person, not necessarily as an actually good society in itself.

More relevantly, I didn't intend for this to be a major criticism of your endeavor. I think if you can avoid sexual conflict (for which I recommend celibacy on your part) this could be worthwhile for (some) people.

Replies from: Screwtape
comment by Screwtape · 2017-06-02T20:15:38.828Z · LW(p) · GW(p)

Clarifying, because I've been turning the sexual conflict part over in my head for the past few days; do you mean celibacy as in not being with anyone else in the project, or celibacy as in not being with anyone at all for the duration of the project? The former makes sense, though I'm only about 75% sure I agree; the latter seems really odd.

comment by ChristianKl · 2017-05-29T14:15:34.844Z · LW(p) · GW(p)

Duncan, how much time do you plan to invest yourself into this project? Do you plan to reduce your work load at CFAR to make time for the Dragon Army?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-29T16:30:34.972Z · LW(p) · GW(p)

No plans to reduce CFAR workload; it'll probably take the vast majority of my other time.

comment by username2 · 2017-06-02T15:55:09.201Z · LW(p) · GW(p)

If the supreme commander of the Dragon Army is inclined to spend so much energy against a few abrasive critics on LW, imagine how much energy will have to be exerted once the War begins in earnest. Or will the Barracks be a secret location?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-06-02T17:36:03.464Z · LW(p) · GW(p)

I think the answer is "all the energy."

More seriously, a major driving force behind both the post and the comments is transparency—people who are interested in refining their model of me have a much better model now than they might've two weeks ago, including where I draw the line and several of my different modes when dealing with things I think are unfair (e.g. quite different styles of response to 182blargl and handoflixue). I'm pretty "pro" being able-to-be-predicted, and I generally endorse my stands even though there are 3-6 places where I think I got marginally too heated, so this is decent data.

comment by casebash · 2017-05-28T06:03:22.900Z · LW(p) · GW(p)

What's Goodhart's Demon?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2017-05-28T06:29:16.827Z · LW(p) · GW(p)

A riff on Goodhart's Law, which is that any measure which becomes a target ceases to be a good measure, or more broadly the dynamic that's behind "teaching to the test."
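
(A toy illustration of that dynamic, with all names and numbers invented for the example rather than taken from the thread: the moment a proxy score becomes the thing being optimized, it decouples from the real goal.)

```python
# Toy Goodhart's Law demo: optimizing a proxy metric
# ("test score") instead of the real goal ("skill").

def true_skill_gain(hours_studying):
    # What we actually care about: real learning.
    return hours_studying

def test_score(hours_studying, hours_memorizing_answers):
    # The metric rewards genuine study AND pure test-gaming.
    return hours_studying + 3 * hours_memorizing_answers

budget = 10  # hours available

# Before the score becomes a target: all hours go to real study.
honest = (test_score(budget, 0), true_skill_gain(budget))

# "Teaching to the test": all hours go to gaming the metric.
gamed = (test_score(0, budget), true_skill_gain(0))

print(honest)  # (10, 10) -- the score tracks actual skill
print(gamed)   # (30, 0)  -- the score soars while skill vanishes
```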