Comment by unreal on The Relationship Between the Village and the Mission · 2019-05-14T03:01:57.014Z · score: 10 (5 votes) · LW · GW

I'm realizing that I need to make the following distinction here:

Village 1) There is a core of folks in the village who are doing a hard thing (Mission), and also their friends, family, and neighbors who support them and each other but are not directly involved in the Mission.

Village 2) There is a village with only ppl who are doing the direct Mission work. Other friends, family, etc. do not make their homes in the village.

I weakly think it's possible for 1 to be good.

I think 2 runs into lots of problems and is what my original comment was speaking against.

Comment by unreal on The Relationship Between the Village and the Mission · 2019-05-12T23:33:52.156Z · score: 27 (8 votes) · LW · GW

[ comment copied from Facebook / I didn't read the full article before making this comment ]

i am somewhat anti-"Mission-centered Village."

i think entanglement between mission and livelihood already causes problems. (you start feeling that the mission is good b/c it feeds you, and that an attack on the mission is an attack on your ability to feed yourself / earn income)

entanglement between mission and family/home seems like it causes more of those problems. (you start feeling that if your home is threatened in any way, this is a threat to the mission, and if you feel that your mission is threatened, it is a threat to your home.)

avoiding mental / emotional entanglement in this way i think would require a very high bar: a mind well-trained in the art of { introspection / meditation / small-identity / surrender / relinquishment } or something in that area. i suspect <10 ppl in the community meet that bar?

Comment by unreal on The Relationship Between the Village and the Mission · 2019-05-12T23:32:50.719Z · score: 13 (5 votes) · LW · GW

[ comment copied from Facebook / I didn't read the full article before making this comment ]

agree with a lot of this, esp the part about not trying to welcome everyone / lower barrier to entry to the point that there's no commitment involved

i think a successful village will require a fair amount of commitment and sacrifice, in terms of time, effort, opportunity cost, and probably money

if everyone is looking to maximize their own interests, while pursuing a village, i think this will drain resources to the point that nothing gets done or things fall apart. a weak structure will beget a fragile village. and i think a fragile village can easily be net harmful.

at the same time, it's good to be considerate to people who can't contribute a whole lot due to disability or financial insecurity.

Comment by unreal on Dependability · 2019-04-08T17:26:22.773Z · score: 2 (1 votes) · LW · GW

Hmm, I wonder if there's something about meditation in a monastic setting, where one strives to follow all the rules, that does something.

Because I'm pretty sure a number of the residents of the monastery here have become much more reliable after a year or two of being here.

It might be context-dependent too, but I'm not as worried about that problem for myself. I feel above-average at the generalization skill and think I can take some useful things out of a specific context into other contexts.

Comment by unreal on Dependability · 2019-03-30T12:27:06.570Z · score: 2 (1 votes) · LW · GW

I don't think I've properly conveyed what I mean by Dependability, judging by the totality of the comments. Or, maybe I've conveyed what I mean by Dependability, but I did not properly explain that I want to achieve it in a specific way. I'm looking to gain the skill through compassion and equanimity. A monastic lifestyle seems appropriate for this.

I also did not at all explain why I'm specifically disadvantaged in this area, compared to the average person. And I think that would bring clarity too, if I explained that.

Comment by unreal on Dependability · 2019-03-28T19:58:24.643Z · score: 19 (6 votes) · LW · GW

I will try to explain where my disagreement is.

1. Concept space is huge. There are more concepts than there are words for concepts. (There are many possible frames from which to conceptualize a concept too, which continues to explode the number of ways to think about any given concept.)

2. Whenever I try to 'coin' a term, I'm not trying to redefine an old concept. I have a new concept, often belonging in a particular new frame. This new concept contains a lot of nuance and specificity, different from any old words or concepts. I want to relay MY concept, which contains and implies a bunch of models I have about the world. Old words would fail to capture any of this—and would also fail to properly imply that I want to relay something confusingly but meaningfully precise.

3. I'm not 'making up' these concepts from nothing. I'm not 'thinking of ways to add complexity' to concepts. My concepts are already that complex. I'm merely sharing a concept I already have, that is coming forth from my internal, implicit models—and I try to make them explicit so others can know what concepts I already implicitly, subconsciously use to conceptualize the world. And my concepts are unique because the set of models I have are different from yours. And when I feel I've got a concept that feels particularly important in some way, I want to share it.

4. I want to understand people's true, implicit concepts—which are probably always full of nuance and implicit models. I am endlessly interested in people's precise, unique concepts. It's like getting a deep taste of someone's worldview in a single bite-sized piece. I like getting tastes of people's worldviews because everyone has a unique set of models and data, and that complexity is reflected in their concepts. Their concepts—which always start implicit and nonverbal, if they can learn to verbalize them and communicate them—are rich and layered. And I want them. (Also I think it is a very, very valuable skill to be able to explicate your implicit concepts and models. LessWrong seems like a good place to practice.)

5. "But what about building upon human knowledge, which requires creating a shared language? What about figuring out which concepts are best and building on those?" I agree this is a good goal to have. The platform of LessWrong is already built to prune concept space (with multiple ways for concepts to be promoted or demoted).

But I do think this goal is "at odds" with my goal of sharing my concepts, learning others' concepts, and diving into the depths of concept space. What I want here is to be in the "whiteboarding" phase where lots of ideas and thoughts are allowed to surface, and maybe it's their first time really seeing the light, but I get feedback, and other people have associated thoughts and share those. And it's a generative sort of phase, rather than a pruning phase.

It seems plausible my posts should stay in my 'blog' and off the front page? I don't fully understand the point of front page vs blog personally. But I'd be happy to keep my posts in the corner of "my blog" and do the 'whiteboarding' thing there.

If any of the mods want to discuss this dilemma with me (I'd prefer doing this offline), I'd be into getting more opinions on this.

Comment by unreal on Dependability · 2019-03-27T04:37:54.176Z · score: 4 (2 votes) · LW · GW

There's some overlap with conscientiousness, but dependability doesn't include being organized, being efficient, caring about achievement or perfection, being hardworking, being careful, being thorough, or appearing competent.

Grit seems important for trying and follow-through in particular!

Comment by unreal on Dependability · 2019-03-27T00:11:40.572Z · score: 2 (3 votes) · LW · GW

I guess I disagree :P

Dependability

2019-03-26T22:49:37.402Z · score: 60 (22 votes)
Comment by unreal on Active Curiosity vs Open Curiosity · 2019-03-24T06:36:54.135Z · score: 2 (1 votes) · LW · GW

I've been watching a bunch of videos on this, and I'm finding them quite interesting so far.

http://iainmcgilchrist.com/videos/

Also I agree lots of precision and discernment are useful to maintain here. It could get "floppy" real fast if people aren't careful with their concepts / models.

Comment by unreal on Rest Days vs Recovery Days · 2019-03-22T06:57:17.262Z · score: 8 (5 votes) · LW · GW

Connotations of Rest that I find relevant:

  • lack of anxiety
  • PSNS activation
  • relaxed body (while not necessarily inactive or passive body)
  • a state that you can be in indefinitely, in theory (whereas Recover suggests temporary)
  • meditative (vs medicative)
  • not trying to do anything / not needing anything (whereas Recover suggests goal orientation)
  • Rest feels more sacred than Recovery

Concept that I want access to that "Recover" doesn't fit as well with:

Comment by unreal on Active Curiosity vs Open Curiosity · 2019-03-21T19:42:35.121Z · score: 10 (2 votes) · LW · GW

Iain McGilchrist came out with a book on brain hemispheres and their specialized roles called The Master and His Emissary. This summary was useful: https://www.reddit.com/r/streamentry/comments/b39n4x/the_divided_brain_and_awakening_theorycommunity/

The Left Hemisphere handles narrow focus (like a bird trying to pick out a seed among a bunch of pebbles and dirt), while the Right Hemisphere handles broad, open focus (the same bird keeping some attention on the background for predators). The LH is associated with tool use and manipulation of objects. The RH is associated with exploration and experiential data gathering.

I don't immediately know how the hemispheres may be involved in the types of Curiosity. But a plausible hypothesis might be that Active Curiosity would be more left-brained and Open Curiosity would be more right-brained.

Comment by unreal on Rest Days vs Recovery Days · 2019-03-21T01:28:49.100Z · score: 6 (3 votes) · LW · GW

It's not that you're just doing whatever you "feel" like, in a generic sense. You're doing something like Focusing on your stomach in particular

Yes, this is right.

I also predict the stomach is where most people should be Focusing, for getting proper Rest. I think there's some kind of ongoing battle between the head and the stomach, and people/society tends to favor the head.

But I get mileage out of doing Focusing on all kinds of areas.

Comment by unreal on Rest Days vs Recovery Days · 2019-03-21T01:05:01.668Z · score: 12 (3 votes) · LW · GW

So some very general links (since 'improving productivity on chores and future planning' sounds like it could mean a lot of things):

Overall, I've gotten large gains out of designing my life such that work feels like water flowing downhill rather than me trying to trudge uphill.

I use Policy-Based Intentions a fair amount, as a way to save willpower. I'm like a game designer trying to design the maze that my mouse is running in, if that makes sense. And I try to make it easy for the mouse to make the correct decisions depending on the situation.

Comment by unreal on Rest Days vs Recovery Days · 2019-03-20T19:33:36.337Z · score: 11 (5 votes) · LW · GW

I think Kaj is right. But in general, video games / TV feel like they help me escape the present moment, avoid thinking about something or feeling my body, and keep me in my head. Video games also have that feeling of fake productivity which makes them feel like a compulsive "pretend work." (Aka pica.)

I guess I also should have distinguished "reading for pleasure" and "productive reading." I was advocating for the former and not so much the latter.

Once, I did a spontaneous picnic where I put a blanket outside somewhere nice and brought a basket of food and a book. And I just lounged outside, reading [Annihilation] and eating and looking at nature. If I imagine having TV instead, I feel like I lose the ability to choose where my attention goes freely. With a book, I can pause or daydream and take my time with it more easily.

But really it's up to you what counts as Restful. I can imagine watching video interviews being Restful for some reason. Or listening to podcasts. I'm less sure what Restful video games for me would be.

Comment by unreal on Rest Days vs Recovery Days · 2019-03-20T19:18:37.242Z · score: 4 (2 votes) · LW · GW

I would experiment with that in the following ways:

  • Try not doing any projects and see how that is (This seems good for what Zvi / Ben describe as an emergency check / Sabbath as alarm.)
  • When you feel like working on a project, do so but periodically check "Do I still feel good about doing this right now? Is this yummy? Do I want to be doing this?" Do the check and then follow what seems good in the moment.

Rest Days vs Recovery Days

2019-03-19T22:37:09.194Z · score: 114 (50 votes)
Comment by unreal on Active Curiosity vs Open Curiosity · 2019-03-17T00:07:39.143Z · score: 8 (4 votes) · LW · GW

Open curiosity does not actively seek to understand, which is why I call the other one 'active'.

I suspect concentrated and diffuse curiosity are both referring to types of active curiosity. Open curiosity is talking about something different.

Active Curiosity vs Open Curiosity

2019-03-15T16:54:45.389Z · score: 68 (26 votes)
Comment by unreal on Active Curiosity vs Open Curiosity · 2019-03-15T13:11:43.489Z · score: 8 (4 votes) · LW · GW

yes, this is basically what I'm referring to

Comment by unreal on Policy-Based vs Willpower-Based Intentions · 2019-02-28T22:10:31.135Z · score: 9 (4 votes) · LW · GW

Oh yeah. I do think the nature of the task is an important factor. It's not like you can willy-nilly choose policy-based or willpower-based. I did not mean to present it as though you had a choice between them.

I was more describing that there are (at least) two different ways to create intentions, and these are two that I've noticed.

But you said that you can't use this on everything, so maybe the policies that I would need willpower to install just happen to be different from the policies that you would need willpower to install.

This seems likely true.

It's not that I don't have policies, it's that this description sounds like you can just... decide to change a policy, and then have that happen automatically.

It is true that I can immediately change certain policies such that I don't need to practice the new way. I just install the new way, and it works. But I can't install large complex policies all in one go. I will explain.

the Lyft thing sounded complicated to memorize and I would probably need to consciously think about it on several times when I was actually doing the tipping before I had it committed into memory.

With zero experience of Lyft tipping, I would not just be able to think up a policy and then implement it. Policy-driven intentions are collaborations between my S1 and S2, so S2 can't be doing all the work alone. But maybe after a few Lyft rides, I notice confusion about how much to tip. Then maybe I think about that for a while or do some reading. Eventually I notice I need a policy because deciding each time is tiring or effortful.

I notice I feel fine tipping a bit each time when I have a programming job. I feel I can afford it, and I feel better about it. So I create and install a policy to tip $1 each time and run with that; I make room for exceptions when I feel like it.

Later, I stop having a programming job, and now I feel bad about spending that money. So I create a new if-then clause. If I have good income, I will tip $1. If not, I will tip $0. That code gets rewritten.

Later, I notice my policy is inadequate for handling situations where I have heavy luggage (because I find myself in a situation where I'm not tipping people who help me with my bag, and it bothers me a little). I rewrite the code again to add a clause about adding $1 when that happens.

Policy re-writes are motivated by S1 emotions telling me they want something different. They knock on the door of S2. S2 is like, I can help with that! S2 suggests a policy. S1 is relieved and installs it. The change is immediate.
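The succession of "code rewrites" described above can be sketched as a literal if-then rule. This is purely illustrative; the conditions and the $1 amounts are the ones from the example, and the function name is made up:

```python
# Sketch of the 'installed' tipping policy after all three rewrites:
# an income clause plus a luggage clause, as described in the comment.

def tip_policy(has_good_income: bool, driver_helped_with_luggage: bool) -> int:
    """Return the tip in dollars for one Lyft ride under the current policy."""
    tip = 1 if has_good_income else 0  # second rewrite: tip only with good income
    if driver_helped_with_luggage:     # third rewrite: add $1 for luggage help
        tip += 1
    return tip

print(tip_policy(has_good_income=True, driver_helped_with_luggage=False))   # 1
print(tip_policy(has_good_income=False, driver_helped_with_luggage=True))   # 1
```

Each S1-prompted rewrite corresponds to editing this function once; afterward it runs without further deliberation, which is the point of the metaphor.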

Comment by unreal on Policy-Based vs Willpower-Based Intentions · 2019-02-28T21:05:01.648Z · score: 4 (2 votes) · LW · GW

That's interesting!

How do other people handle the tipping thing? Whether for a driver or at a restaurant? Are you kind of deciding each time?

How do you handle the question of "who pays for a meal" with acquaintances / new people / on dates? My policy in this area is to always offer to split.

How do you handle whether to give money to homeless people or if someone is trying to offer you something on the street? My policy here is to always say no.

I'm curious what other people are doing here because I assumed most people use policies to handle these things.

Comment by unreal on Policy-Based vs Willpower-Based Intentions · 2019-02-28T20:58:41.571Z · score: 8 (4 votes) · LW · GW

I have not much considered group intention-setting. This seems super interesting to explore too.

Phenomenologically, I feel it kind of as... the agreements or intentions of the group (in a circle) recede into the background, to form the water we're all in together. Like it gets to relax in the VERY BACK of my mind and also I'm aware of it being in the back of other people's minds.

And from that shared container / background, I "get to move around" but it's like I am STARTING with a particular set of assumptions.

Other potential related examples:

  • I'm at a Magic tournament. I know basically what to expect—what people's goals are, what people's behaviors will be, what the rules of the game are and how to enforce them. It's very easy for me to move here because a lot of the assumptions are set in place for me.
  • I'm in church as a kid. Similar to the above. But maybe less agreeable to me or more opaque to me. I get this weird SENSE that there are ways I'm supposed to behave, but I'm not totally sure what they are. I'm just trying to do what everyone else seems to be doing... This is not super comfortable. If I act out of line, a grownup scolds me, is one way I know where the lines are.

Potential examples of group policy-based intentions:

  • I have a friend I regularly get meals with. We agree to take turns paying for each other, explicitly.
  • I have a friend, and our implicit policy is to tell each other as soon as something big happens in our lives.

As soon as a third person is added to the dynamic, I think it gets trickier to ensure it's a policy-based intention. (Technology might provide many exceptions?) As soon as one person feels a need to remind themselves of the thing, it stops being a policy-based intention.

Willpower-based intentions in groups feel like they contain a bunch of things like rules, social norms, etc.

Comment by unreal on Policy-Based vs Willpower-Based Intentions · 2019-02-28T20:42:29.336Z · score: 4 (2 votes) · LW · GW

There is definitely this sense that exerting force or willpower feels like an EXTERNAL pressure even if that pressure does not have an external source that I could point to or even gesture at. But it /feels/ external or 'not me'.

I have some trauma related to this. I could've gone into the trauma stuff more, but I think it would have made the post less accessible and also more confusing, rather than less. So I didn't. :P

Comment by unreal on Policy-Based vs Willpower-Based Intentions · 2019-02-28T20:38:18.188Z · score: 2 (1 votes) · LW · GW

oh. I must have messed that up. I am OK with this being on the front page. I have definitely noticed some bugs here and there. Esp around the account settings page and trying to change my moderation guidelines. But I think I maybe just messed up the checkbox. Is it default checked to 'not ok'? Because if so, I left it alone thinking it was checked to 'is ok to promote'.

Comment by unreal on Policy-Based vs Willpower-Based Intentions · 2019-02-28T17:25:33.953Z · score: 5 (3 votes) · LW · GW

I enjoyed that article! Seems worth including the link in my article too. Thanks.

Your definition of intention seems different from my use of "willpower-based intention." My 'willpower-based intention' always has a conscious element and cannot do things like "work in the background without my awareness at all." It's maybe quite related to the thing in your forehead.

My policy-based intentions feel kind of like pulling up my inner code guts, making a little rewrite or alteration, and putting them back into my guts. This is a conscious process (the installation), but then the change runs automatically, without holding conscious intentions.

I'm very bad at using these to create personal habits, like drinking water every day or taking vitamins every day. I don't think these count. They require willpower after a while.

But maybe I one-time decide the best configuration of spices on the spice rack or how my kitchen is arranged. Then it is automatic for me to place things back where they belong after using them, and it is also automatic for me to want to organize things so they're back where they belong when they get messed up.

These 'desires' for things to be a certain way live in my belly. And it feels like my belly carries motivations and behaviors that I can ride out.

It feels relaxing to have a policy I can lean on, and to carry out the policy. Like water running downhill.

You could maybe think of it as 'intentions you already want to do anyway'. But with policies, your conscious mind can also make alterations / rewrite that code directly. Without any need for convincing, arguing, pushing. It is more of a collaboration I am in between elephant / rider—coming up with good policies makes us feel good and relaxed.

Policy-Based vs Willpower-Based Intentions

2019-02-28T05:17:55.302Z · score: 62 (18 votes)
Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-02T04:41:59.506Z · score: 8 (3 votes) · LW · GW

I was assuming the list comes out once -> I learn enough to understand what types of posts get what voting patterns (or, I learn that the data doesn't actually tell me very much, which might be more likely), but after that I don't need any more lists of posts.

I don't care if it has my own posts on it, really. I care more about 'the general pattern' or something, and I imagine I can either get that from one such list, or I'll figure out I just won't get it (because the data doesn't have discernible patterns / it's too noisy).

Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-02T00:06:05.289Z · score: 8 (3 votes) · LW · GW

I prefer the one-time cost vs the many-time cost.

Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-01T23:25:16.373Z · score: 11 (5 votes) · LW · GW

That makes sense.

But it's really confusing for my models of the post.

Cause there is a real difference between (lots of 2-users voted on this vs. a few 5-users voted on this). Those feel very different to me, and I'd adjust my views accordingly as to whether the post was, in fact, successful.

I get that you're trying to make "lots of 2-users" and "a few 5-users" basically amount to the same value, which is why you're scaling it this way.

But if a post ACTUALLY only has 2-users upvoting it and almost no 5-users, and other posts have 5-users voting on it but very few 2-users, that seems ... worth noting.

Although, you could prob achieve the same by publishing an analysis of upvote/downvote patterns.

You could, for instance, release a list of posts, ranked by various such metrics. (Ratio of low:high user votes. Ratio of high:low user votes. Etc. Etc.)

That would be interesting!

Comment by unreal on Monopoly: A Manifesto and Fact Post · 2018-06-01T23:14:38.979Z · score: 7 (2 votes) · LW · GW

The book The Fine Print covers a lot of examples of "special privileges granted by the government" in a number of industries (rail, telecom, energy). I read it a long time ago, so I don't remember a ton from it, but I mention it in case anyone's interested in more concrete examples of this.

Comment by unreal on Monopoly: A Manifesto and Fact Post · 2018-06-01T23:12:15.346Z · score: 11 (2 votes) · LW · GW

Really glad you wrote this post. I think it's trying to speak to something I've been concerned with for a while—a thing that feels (to me) like a crux for a lot of current social movements and social ills in the States (including the social justice movement, black lives matter, growing homelessness / decreasing standards of living for the poorest people). And of course, the whole shit-pile that is our health care system.

Some Questions / Further Comments:

(Please respond to each point as a separate thread, so that threads are segregated by topic / question.)

1) My guess is that under "Services and construction", where you list "transportation", you mean a different "transportation" than the one in the graph, which has "Transportation and Warehousing" as its own category? I'd appreciate clarification / disambiguation in the article.

2) I agree with your point RE: intangibles, that they correlate / go together with monopoly. But it's difficult for me to tell HOW MUCH they 'go together'. And whether it is strictly 'a bad sign'. While I'm not a huge fan of how patents sometimes play out, I am a fan of branding. While you can't just try to transfer the effect of Coca-Cola's branding to your new product, I think you can, in fact, try to compete on branding.

(It would be terrible if someone tried to take exclusive rights over the use of the color red in logos or something, though. Hopefully that doesn't ever happen.)

And, honestly, I think the 'value' of their branding might not be too inaccurately priced, in some sense? (Even if the product reduces in quality, I think the branding has value beyond trying to measure quality of product.) I also don't know whether 'intangibles' includes things like 'excellent customer service', but if it does, that seems like true value, not 'fake value'. Even though it doesn't directly cash out into more product.

Over time, I think more of what we consider valuable should be in intangibles? Seems like a sign of people having enough useful things that they can now afford to put money into "nice experiences." And in many ways, people value having fewer choices because it cashes out into less effort.

3) Similarly, 'company culture'—while it is 'dark matter' as Robin Hanson says—seems appropriate to value highly in some cases. I don't think most 'monopoly situations' are a result of some company just having a really good, un-copyable company culture, but in general, I do expect it to be very difficult to transfer / copy really excellent company cultures. And as a result, I do expect something monopolistic-looking to emerge as a result of—not shady dealings or exclusive privileges facilitated by government—but as a natural consequence of very few companies, in fact, being really good places to work.

I would really like to be able to disambiguate between the situations where: There are only 3 main firms in this industry. Is it because those 3 firms are in fact providing outsized value in a way that's hard to compete with? Or, is this happening because the government made some poor decisions that favored certain companies for not-very-good reasons, and they leveraged this into an effective monopoly?

Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-01T22:03:37.540Z · score: 3 (3 votes) · LW · GW

That is too many numbers to parse! I only care about the # of ppl who've interacted with the post. Can I just have THAT number as a tooltip? That would mostly resolve my concern here.

Also, it's kind of weird to me that I have 5 vote power given I've only really interacted with this site for... a few months? And you guys have, like, 6? 7? Are you sure your scaling is right here? :/

Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-01T22:01:44.686Z · score: 6 (2 votes) · LW · GW

Would you still be sad if your strong vote was maxed at 5?

1:15 is a big difference! But 1:5 is a lot less. And 1:3 is even less!

Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-01T21:43:55.858Z · score: 19 (5 votes) · LW · GW

some thoughts before i try this out:

I am worried about this thing where I both want to know: how many ppl liked a thing vs how strongly ppl liked a thing. More for posts than for comments. For posts, if I see a number like 100, I am very confused about how many ppl liked it. It seems to range between 20-50. But if... the vote power actually goes up to 15. Then... I will be confused about whether it's like... 10-50. That's... a big difference to me.

I'd almost like it if, for posts, it were 1 = normal / 2 = strong for ppl with lower karma, and 1 = normal / 3 = strong for people with more karma? Or something that reduces the possible range for "how many ppl liked the post."
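The compressed-weight scheme floated above could be sketched like this. The numbers are the ones the comment mentions, and the karma cutoff between tiers is a placeholder assumption:

```python
# Sketch of the proposed vote-weight compression: weak votes always count 1;
# strong votes count 2 for lower-karma users and 3 for higher-karma users.

HIGH_KARMA_CUTOFF = 1000  # assumed tier boundary, purely illustrative

def vote_weight(karma: int, strong: bool) -> int:
    """Weight of a single vote under the proposed compressed scheme."""
    if not strong:
        return 1  # all weak votes count the same, regardless of karma
    return 3 if karma >= HIGH_KARMA_CUTOFF else 2

print(vote_weight(karma=50, strong=True))    # 2
print(vote_weight(karma=5000, strong=True))  # 3
```

The design goal is exactly what the comment asks for: a post's score divided by the maximum weight bounds the voter count much more tightly than with weights up to 15.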

There's also a clear dynamic where people with 4-6 karma tend to check LW more frequently / early, so ... um... karma tends to go up more quickly at the beginning and then kind of tapers off, but it's like...

I dunno, it's kind of misleading to me.

Why do you top out at 16 instead of 5? I'm just ... confused by this.

Kind of wish all 'weak votes' were 1, too, and karma scores only kick in if you strong vote.

Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-01T21:35:30.407Z · score: 8 (3 votes) · LW · GW

that link seems broken

Comment by unreal on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-29T21:24:54.426Z · score: 24 (5 votes) · LW · GW

I am fascinated by this conversation/disagreement about Ender's Game. I think it might be really important. I am upvoting both comments.

Some things it makes me consider:

a) When is violence / attacking the outgroup justified?

b) Would it have been abusive if the children hadn't been lied to? (I lean no. But given that they were lied to, I lean yes.)

c) Is it OK to sometimes frame "the default ways of the universe" as a kind of outgroup, in order to motivate action 'against' them? Ender's Game was about another sentient lifeform. But in ways, the universe has "something vaguely resembling" anthropomorphizable demons that tend to work against human interests. (We, as a community, have already solidified Moloch as one. And there are others.) In a way, we ARE trying to mobilize ourselves 'against the outgroup'—with that outgroup being kind of nebulous and made-up, but still trying to point at real forces that threaten our existence/happiness.

Q for benquo:

How do you feel about sports (or laser tag leagues)?

Comment by unreal on Duncan Sabien on Moderating LessWrong · 2018-05-29T20:47:35.704Z · score: 32 (6 votes) · LW · GW

If you found out some of those cons (or some close version of them) were necessary in order to achieve those pros, would anything shift for you?

For instance, if you see people acting to work on/improve/increase the cons... would you see those people as acting badly/negatively if you knew it was the only realistic way to achieve the pros?

(This is just in the hypothetical world where this is true. I do not know if it is.)

Like, what if we just live in a "tragic world" where you can't achieve things like your pros list without... basically feeding people's desire for community and connection? And what if people's desire for connection often ends up taking the form of wanting to live/work/interact together? Would anything shift for you?

(If my hypothetical does nothing, then could you come up with a hypothetical that does?)

Comment by unreal on Moderating LessWrong: A Different Take · 2018-05-28T21:09:13.778Z · score: 21 (5 votes) · LW · GW

That makes sense.

It would look less like you were emotionally compromised if you tried to do the double crux thing in addition to pointing out the norms violations. E.g., "I think you're over the line in these ways. [List of ways] But, if you did have some truth to what you're saying, would it be this? [attempt at understanding their argument / what they are trying to protect]"

(Maybe you have done this, and I missed it.)

But if you haven't done this, why not?

Alternatively, another move would be, "I feel ___ about engaging with your arguments because they strike me as really uncharitable to the post. Instead I would like to just call out what I think are a list of norms you are violating, which are important to me for wanting to engage with your points."

^This calls attention to the fact that you are avoiding engaging with the critique on your post. (There are plenty of other ways to do this, I just gave one possible example.)

Does that move seem reasonable / executable?

(I'm noticing that if you felt you "should" do these things, it would be an unreasonable pressure. I think you are absolutely NOT obligated to engage in these ways. I'm pointing at these moves because they would cause me, and likely others, to respect you more in the arena of online debate. I already respect you plenty in lots of other arenas, so. This is like extra?)

Comment by unreal on Moderating LessWrong: A Different Take · 2018-05-28T06:35:27.037Z · score: 35 (8 votes) · LW · GW

Weird, I was expecting you to disagree. I was trying to illustrate what I thought you were missing in your own arguments around this.

In the disputes I've seen you engage in, this is kind of what it looks like is happening. (Except you're not a mod, just the author of the post.)

Comment by unreal on Moderating LessWrong: A Different Take · 2018-05-28T05:07:33.453Z · score: 38 (9 votes) · LW · GW

I think it's fine for participants to engage this way.

If a moderator gets embroiled in a disagreement where one side is saying "You're criticizing me wrong" vs. "I'm trying to criticize you for X," then this can get real awkward.

If the criticism itself has (potentially) some truth or validity, but the moderator doesn't acknowledge any of that and instead keeps trying to have a conversation about how the criticism is wrong/improper by LW's standards, then the way this looks is:

a) A moderator is trying to dodge being criticized

b) They are using the mantle of "upholding LW's standards" to hide behind while dodging double cruxing at the object level

c) They aren't acknowledging the overall situation, and so it's unclear whether the mod is aware of how this all looks and whether they're doing it on purpose, or if they're feeling defensive and using principles to (subconsciously) dodge criticism

Here, it is valid to care about more than just whether the mod is technically correct about the criticism's wrongness! The mod might be correct on the points they're making. But they're also doing something weird in the conversation, where it really seems like they're trying to dodge something. Possibly subconsciously. And the viewers are left to wonder whether that's actually happening or if they're mistaken. But it's awkward for a random viewer to try to "poke the bear" here, given the power differential.

Even worse is if someone does try to "poke the bear" and the mod reacts by denying any accusation of motivated reasoning while continuing to leave the dynamic unacknowledged, then claiming that this is a culture that should be better than that.

In my head, it is obvious why this is all bad for a mod to do. So I didn't explain quite why it's bad. I can try if someone asks.

Comment by unreal on Moderating LessWrong: A Different Take · 2018-05-27T19:23:55.562Z · score: 11 (3 votes) · LW · GW

Where does the piece say that?

Comment by unreal on Duncan Sabien on Moderating LessWrong · 2018-05-26T21:56:14.323Z · score: 9 (4 votes) · LW · GW

If you only plan on annotating past discussions that have long since died, I mind a lot less. But for a discussion that is still live or potentially live, it feels like standing on a platform and shouting through a loudspeaker. I'd advocate for only annotating comments without any activity within the past X months.

Comment by unreal on Duncan Sabien on Moderating LessWrong · 2018-05-26T21:08:08.895Z · score: 5 (5 votes) · LW · GW
add mod annotations to those threads, saying that these things are over the line

i find this idea very distasteful

Comment by unreal on Duncan Sabien on Moderating LessWrong · 2018-05-26T09:16:35.064Z · score: 31 (13 votes) · LW · GW

Maybe this "social contract" is a fine thing for LessWrong to uphold.

But rationalists should not uphold it, in all places, at all times. In fact, in places where active truth-seeking occurs, this contract should be deliberately (consensually) dropped.

Double Cruxing often involves showing each other the implementation details. I open up my compartments and show you their inner workings. This means sharing emotional states and reactions. My cruxes are here, here, and here, and they're not straightforward, System-2-based propositions. They are fuzzy and emotionally-laden expectations, movies in my head, urges in my body, visceral taste reactions.

The sixth virtue is empiricism. The roots of knowledge are in observation and its fruit is prediction. What tree grows without roots? What tree nourishes us without fruit? If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.” Though they argue, one saying “Yes”, and one saying “No”, the two do not anticipate any different experience of the forest. Do not ask which beliefs to profess, but which experiences to anticipate. Always know which difference of experience you argue about. Do not let the argument wander and become about something else, such as someone’s virtue as a rationalist. Jerry Cleaver said: “What does you in is not failure to apply some high-level, intricate, complicated technique. It’s overlooking the basics. Not keeping your eye on the ball.” Do not be blinded by words. When words are subtracted, anticipation remains.

That is, ultimately, about implementation details (and sharing them). It's about phenomenology. And that extends to the subjective experience of not only the five senses, but emotions, thoughts, and unnamed aspects of experience.

If you don't want to open up your implementation details to me, that is cool. But we're not going to go to the depths of truth-seeking together without it. Which, again, might be fine for this forum, but I don't think that makes this place "better" for truth-seeking; I think it makes it worse.

Comment by unreal on The Berkeley Community & The Rest Of Us: A Response to Zvi & Benquo · 2018-05-26T08:07:19.893Z · score: 20 (4 votes) · LW · GW

Also, the story is basically: for a while there was a LessWrong meetup, but then this got dropped and transformed into an EA Meetup. Then there were only EA meetups for a while. Then I started RRG and brought rationality back as its own hub, creating the Seattle Rationality FB group as well. The rationality community grew. Now there are multiple rationalist group houses including a new hub. People did leave for Berkeley, but weekly RRG is still going afaik, and there is still an active community, although its composition is perhaps quite different now.

Comment by unreal on The Berkeley Community & The Rest Of Us: A Response to Zvi & Benquo · 2018-05-26T08:03:07.766Z · score: 20 (4 votes) · LW · GW

I'm confused by either your Seattle timeline or your use of the term "Rationality Reading Group."

As far as I know, I started the Rationality Reading Group in 2015, after my Jan CFAR Workshop. We read through a bunch of the Sequences.

I left Seattle in late 2016 and left RRG in some other capable hands. To this day, RRG (afaik) is still going and hasn't had any significant breaks, unless they did and I just didn't know about it.

In any case, I'd appreciate some kind of update to your post such that it is either more accurate or less confusing...

Moderating LessWrong: A Different Take

2018-05-26T05:51:40.928Z · score: 41 (11 votes)
Comment by unreal on Prune · 2018-03-01T00:49:33.663Z · score: 27 (5 votes) · LW · GW

I teach a class on Creative Focusing, and it's basically an exercise in lowering the Gates.

The feeling I get is one of knowingly jumping off a cliff into the unknown. I call this "Surrendering to the unknown."

I open my mouth and let my gut generate poetry in real-time, on the fly.

There's still often some Pruning active, but I can move closer or further from the edge—like releasing the water more quickly or slowly, in your metaphor. It's a dial I can tune.

It is a bit similar to putting on masks, as written about in Impro. Also similar to blending in IFS.

Comment by unreal on Focusing · 2018-02-27T07:00:38.459Z · score: 10 (2 votes) · LW · GW

hmm. i don't really have much to say on that prediction. maybe it's falsifiable. i find the comparison a bit odd.

i consider there to be two main ways of getting to know oneself.

inside view methods, like introspection; and outside view methods, like observing our own behavior over time and noticing patterns or analyzing dreams, thoughts, tastes.

they both seem useful in getting to know myself. does that match what you predict?

Comment by unreal on Focusing · 2018-02-27T00:42:25.424Z · score: 11 (2 votes) · LW · GW

I slightly object then to this phrase "I’ll start by explaining my most gears-like model for why focusing works"

yes, it is accurate, in that it's YOUR most gears-like model, but to me this reads like a misuse of the term 'gears-like'

'gears-like' implies—if it turned out to be some other thing or work some other way, you'd be shocked and would have to consciously check the evidence (the inside of the box) again.

later you include the right to claim it as a fake framework, which feels more like what it actually is.

Comment by unreal on Meta-tations on Moderation: Towards Public Archipelago · 2018-02-26T00:25:13.323Z · score: 5 (1 votes) · LW · GW

Yup! That totally makes sense (the stuff in the link) and the thing about the coins.

Also not what I'm trying to talk about here.

I'm not interested in sharing posteriors. I'm interested in sharing the methods by which people arrive at their posteriors (this is what Double Crux is all about).

So in the fair/unfair coin example in the link, the way I'd "change your mind" about whether a coin flip was fair would be to ask, "You seem to think the coin has a 39% chance of being unfair. What would change your mind about that?"

Suppose the answer is, "Well, it depends on what happens when the coin is flipped," and let's say this is also a Double Crux for me.

At this point we'd have to start sharing our evidence or gathering more evidence to actually resolve the disagreement. And once we did, we'd both converge towards one truth.
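The coin example above can be made concrete. Here's a minimal sketch (my own illustration, not from the comment) of the point about converging toward one truth: two people with the same prior who pool the same evidence necessarily end up with the same posterior. The 0.39 prior comes from the comment; the 0.8 heads probability for the "unfair" hypothesis is an assumption chosen for illustration.

```python
# Hypothetical illustration: Bayesian updating on whether a coin is fair.
# Prior P(unfair) = 0.39 is taken from the example above; the assumption
# that an unfair coin lands heads with probability 0.8 is mine.

def posterior_unfair(prior_unfair, flips, p_heads_if_unfair=0.8):
    """Update P(unfair) after observing a sequence of flips ('H'/'T')."""
    p_unfair = prior_unfair
    for flip in flips:
        # Likelihood of this flip under each hypothesis.
        like_unfair = p_heads_if_unfair if flip == "H" else 1 - p_heads_if_unfair
        like_fair = 0.5  # a fair coin is 50/50 either way
        # Bayes' rule, normalized over the two hypotheses.
        numer = like_unfair * p_unfair
        p_unfair = numer / (numer + like_fair * (1 - p_unfair))
    return p_unfair

# Shared evidence: both parties observe the same eight flips.
shared_evidence = "HHHHHHTH"
alice = posterior_unfair(0.39, shared_evidence)
bob = posterior_unfair(0.39, shared_evidence)
# Same prior + same evidence => same posterior: the disagreement resolves.
```

The update is deterministic, so once the evidence is actually shared (rather than just the posteriors), convergence is automatic, which is the "one truth" point made above.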

Comment by unreal on Meta-tations on Moderation: Towards Public Archipelago · 2018-02-25T23:43:53.389Z · score: 13 (3 votes) · LW · GW

Aw Geez, well if you happen to explain your views somewhere I'd be happy to read them. I can't find any comments of yours on Sabien's Double Crux post or on the post called Contra Double Crux.

Comment by unreal on Meta-tations on Moderation: Towards Public Archipelago · 2018-02-25T23:30:20.807Z · score: 12 (3 votes) · LW · GW

I do not know how to operationalize this into a bet, but I would if I could.

My bet would be something like...

If a person can Belief Report / do Focusing on their beliefs (this might already eliminate a bunch of people)

Then I bet some lower-level belief-node (a crux) could be found that would alter the upper-level belief-nodes if the value/sign/position/weight of that cruxy node were to be changed.

Note: Belief nodes do not have to be binary (0 or 1). They can be fuzzy (0-1). Belief nodes can also be conjunctive.

If a person doesn't work this way, I'd love to know.

Comment by unreal on Meta-tations on Moderation: Towards Public Archipelago · 2018-02-25T23:15:20.389Z · score: 11 (4 votes) · LW · GW

[ I responded to an older, longer version of cousin_it's comment here, which was very different from what it looks like at present; right now, my comment doesn't make a whole lot of sense without that context, but I'll leave it I guess ]

This is a fascinating, alternative perspective!

If this is what LW is for, then I've misjudged it and don't yet know what to make of it.

To me, the game isn't about changing minds, but about exchanging interesting ideas to mutual benefit. Zero-sum tugs of war are for political subreddits.

I disagree with the frame.

What I'm into is having a community steered towards seeking truth together. And this is NOT a zero-sum game at all. Changing people's minds so that we're all more aligned with truth seems infinite-sum to me.

Why? Because the more groundwork we lay for our foundation, the more we can DO.

Were rockets built by people who just exchanged interesting ideas for rocket-building but never bothered to check each other's math? We wouldn't have gotten very far if this is where we stayed. So resolving each layer of disagreement led to being able to coordinate on how to build rockets and then building them.

Similarly with rationality. I'm interested in changing your mind about a lot of things. I want to convince you that I can and am seeing things in the universe that, if we can agree on them one way or another, would then allow us to move to the next step, where we'd unearth a whole NEW set of disagreements to resolve. And so forth. That is progress.

I'm willing to concede that LW might not be for this thing, and that seems maybe fine. It might even be better!

But I'm going to look for the thing somewhere, if not here.

Circling

2018-02-16T23:26:54.955Z · score: 110 (52 votes)

Slack for your belief system

2017-10-26T08:19:27.502Z · score: 54 (26 votes)

Being Correct as Attire

2017-10-24T10:04:10.703Z · score: 15 (5 votes)

Typical Minding Guilt/Shame

2017-10-24T09:39:35.498Z · score: 25 (11 votes)