Posts

Unreal's Shortform 2019-08-03T21:11:22.475Z · score: 12 (2 votes)
Dependability 2019-03-26T22:49:37.402Z · score: 65 (23 votes)
Rest Days vs Recovery Days 2019-03-19T22:37:09.194Z · score: 116 (53 votes)
Active Curiosity vs Open Curiosity 2019-03-15T16:54:45.389Z · score: 68 (26 votes)
Policy-Based vs Willpower-Based Intentions 2019-02-28T05:17:55.302Z · score: 62 (18 votes)
Moderating LessWrong: A Different Take 2018-05-26T05:51:40.928Z · score: 41 (11 votes)
Circling 2018-02-16T23:26:54.955Z · score: 122 (57 votes)
Slack for your belief system 2017-10-26T08:19:27.502Z · score: 64 (29 votes)
Being Correct as Attire 2017-10-24T10:04:10.703Z · score: 15 (5 votes)
Typical Minding Guilt/Shame 2017-10-24T09:39:35.498Z · score: 25 (11 votes)

Comments

Comment by unreal on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-17T18:40:44.306Z · score: 4 (2 votes) · LW · GW

I see great need for some way to indicate "not-an-accident but also not necessarily conscious or endorsed." And ideally the term doesn't have a judgmental or accusatory connotation.

This seems pretty hard to do actually. Maybe an acronym?

Alice lied (NIANOA) to Bob about X.

Not Intentionally And Not On Accident

Comment by unreal on Machine Learning Analogy for Meditation (illustrated) · 2019-08-13T17:36:04.821Z · score: 5 (2 votes) · LW · GW

Thanks! This was helpful analysis.

I suspect my slight trigger (1/10) set off other people's triggers. And I'm more triggered now as a result (but still only like 3/10).

I'd like to save this thread as an example of a broader pattern I think I see on LW, which makes having conversations here more unpleasant than is probably necessary? Not sure though.

Comment by unreal on Machine Learning Analogy for Meditation (illustrated) · 2019-08-13T14:52:17.447Z · score: 2 (1 votes) · LW · GW

I acknowledge that it's likely somehow because of how I worded things in my original comment. I wish I knew how to fix it.

Comment by unreal on Machine Learning Analogy for Meditation (illustrated) · 2019-08-12T21:21:47.677Z · score: 12 (5 votes) · LW · GW

Yeah I'm not implying that System 2 is useless or irrelevant for actions. Just that it seems more indirect or secondary.

Also please note that overall I'm probably confused about something, as I mentioned. And my comments are not meant to open up conflict, but rather I'm requesting a clarification on this particular sentence and what frame / ontology it's using:

If you're experiencing "motivational issues", then it stands to reason that it might be useful to keep an eye on which thoughts are leading to actions and which are not.

I would like to expand the words 'thoughts' and 'useful' here.

People seem to be responding to me as though I'm trying to start an argument, and this is really not what I'm going for. Sharing my POV is just an attempt to help close the inferential gap in the right direction.

Comment by unreal on Machine Learning Analogy for Meditation (illustrated) · 2019-08-12T14:44:34.893Z · score: 6 (3 votes) · LW · GW

Hmmm. No.

Basically, as far as I know, System 1 is more or less directly responsible for all actions.

You can predict what actions a person will take BEFORE they are mentally conscious of it at all. You can do this by measuring galvanic skin response or watching their brain activity.

The mentally conscious part happens second.

But like, I'm guessing that, for some reason, what I'm saying here is already obvious, and Abram just means something else, and I'm trying to figure out what.

Comment by unreal on Machine Learning Analogy for Meditation (illustrated) · 2019-08-10T16:49:54.640Z · score: 13 (4 votes) · LW · GW

My inferential distance from this post is high, I think. So excuse me if this question doesn't even make sense.

If you're experiencing "motivational issues", then it stands to reason that it might be useful to keep an eye on which thoughts are leading to actions and which are not.

I think I don't understand what you mean by 'thoughts' ?

I view 'thoughts' as not having very much to do with action in general. They're just like... incidental post-hoc things. Why is it useful to track which thoughts lead to actions? /blink

Comment by unreal on Machine Learning Analogy for Meditation (illustrated) · 2019-08-10T14:50:29.808Z · score: 5 (3 votes) · LW · GW

I also liked reading the handwritten version.

Comment by unreal on Unreal's Shortform · 2019-08-10T12:36:58.635Z · score: 5 (2 votes) · LW · GW

Yeah, I think it's more the latter.

I would classify situation 3 as a lack of the model-building skill proper, which includes having my models affect me on an alief level. Although introspection would help me notice what my aliefs are to begin with.

Comment by unreal on Unreal's Shortform · 2019-08-09T20:42:02.806Z · score: 22 (7 votes) · LW · GW

When I talk about "model-building skill" I think I mean three separate skills:

Skill A) Model-building skill proper

Skill B) Introspection skill

Skill C) Communication skill

There are probably a lot of people who are decent at model-building proper. (Situation #1)

I'm imagining genius-level programmers. But then when you try to get some insight into what their models actually are or why they do things one way vs another, it's opaque. They don't know how to explicate any of it. They get annoyed by having to try to verbalize their models or even know what they are—they'd rather get back to coding and are frustrated having to "waste time" convincing other people.

Then there are other people who might be decent at model-building and introspecting on their models, but when they try to communicate their models to you, it comes out as gibberish (at least to your ears). And asking them questions doesn't seem to go anywhere. (Situation #2)

Then there's the situation where people are really articulate and able to communicate very clear, explicit, verbal models—but when it comes to implementing those models on a somatic-emotional level, they run into trouble. But it sounds like they have the model-building skill because they can talk about their models and share genuine insights about them. (Situation #3)

An example is having a bunch of insightful theories about social dynamics, but when actually in a situation where they could put those theories into practice, there is some kind of block. The models are not acting like felt models.

...

I've been in Situation #3 and Situation #1 before.

Overcoming Situation #3 is a scary thing. Being able to see, make sense of, and articulate models (from afar) was a way of distancing myself from reality. It was a preemptive defense mechanism. It helped me feel superior / knowledgable / satisfied. And then I continued to sit and watch rather than participate, engage, run experiments, etc. Or I'd play with "toy models" like games or structured activities or imaginings or simulations.

To be fair to toy models, I learned quite a lot / built many useful models from playing games, so I don't regret doing a lot of that. And, there's also a lot to be gained from trial by fire, so to speak.

For Situation #1, the solution mostly came from learning to introspect properly and also learning to take myself seriously.

Once I honestly felt like I had meaningful things to share with other people, it became easier to bother trying. (It's hard to do this if you're surrounded by people who have high standards for "interesting" or "useful." Being around people who were more appreciative helped me overcome some stuck beliefs around my personal worth.)

I also had a block around "convincing other people of anything." So while I could voice models, I couldn't use them to "convince people" to change anything about their lives or way of being. It made me a worse teacher and also meant my models always came across as "nice/cool but irrelevant." And not in a way that was easy for anyone to detect properly, especially myself.

Comment by unreal on Unreal's Shortform · 2019-08-03T21:11:22.625Z · score: 28 (8 votes) · LW · GW

#trauma #therapy

My working definition of trauma = sympathetic nervous system (SNS) activation + dorsal vagal complex (DVC) activation

The SNS is in control of fight or flight responses. Activation results in things like increased heart rate / blood pressure, dilated pupils, faster breathing, and slowed digestion.

The DVC plays a role in the freeze response that exists in many vertebrates. In vertebrates, the freeze response results in immobility. In humans, the freeze response is associated with de-activated language centers in the brain (based on MRI research on trauma flashbacks). From my own extrapolations, it also seems associated with dissociation / depersonalization (sometimes) and lessened ability to orient to one's surroundings.

The dorsal branch of the vagus originates in the dorsal motor nucleus and is considered the phylogenetically older branch.[3] This branch is unmyelinated and exists in most vertebrates. This branch is also known as the “vegetative vagus” because it is associated with primal survival strategies of primitive vertebrates, reptiles, and amphibians.[3] Under great stress, these animals freeze when threatened, conserving their metabolic resources.
https://www.wikiwand.com/en/Polyvagal_theory

The vagus system acts by inhibiting the SNS.

In other words, a traumatic incident involves "fully pushing the gas pedal while simultaneously fully pushing the brake." On the one hand, the body is trying to engage fight or flight, but in cases where neither fighting nor fleeing feel like options, the body then tries to engage freeze.

a) This releases a bunch of stress hormones.

So, according to The Body Keeps the Score, cortisol is a hormone that signals the body to STOP releasing stress hormones. In people with PTSD, cortisol levels are low. And stress hormones fail to return to baseline after the threat has passed, meaning they experience a prolonged stress response.

b) This helpless freeze response becomes a learned response in similar situations in the future.

The concept of "learned helplessness" is very likely related to the experience of trauma. (Interestingly, the Wikipedia article doesn't even mention the word "trauma.")

From the "Early Key Experiments" section on the Learned Helplessness Wiki page:

American psychologist Martin Seligman initiated research on learned helplessness in 1967 at the University of Pennsylvania as an extension of his interest in depression.[4][5] This research was later expanded through experiments by Seligman and others. One of the first was an experiment by Seligman & Maier: In Part 1 of this study, three groups of dogs were placed in harnesses. Group 1 dogs were simply put in harnesses for a period of time and were later released. Groups 2 and 3 consisted of "yoked pairs". Dogs in Group 2 were given electric shocks at random times, which the dog could end by pressing a lever. Each dog in Group 3 was paired with a Group 2 dog; whenever a Group 2 dog got a shock, its paired dog in Group 3 got a shock of the same intensity and duration, but its lever did not stop the shock. To a dog in Group 3, it seemed that the shock ended at random, because it was his paired dog in Group 2 that was causing it to stop. Thus, for Group 3 dogs, the shock was "inescapable".
In Part 2 of the experiment the same three groups of dogs were tested in a shuttle-box apparatus (a chamber containing two rectangular compartments divided by a barrier a few inches high). All of the dogs could escape shocks on one side of the box by jumping over a low partition to the other side. The dogs in Groups 1 and 2 quickly learned this task and escaped the shock. Most of the Group 3 dogs – which had previously learned that nothing they did had any effect on shocks – simply lay down passively and whined when they were shocked.[4]

It seems very likely they ended up traumatizing the dogs in that experiment by caging them up and then shocking them. The dogs subsequently don't "flee" even when the cage is open. They've ingrained their freeze response.

Similarly in humans, traumatized people become trapped within their own mental constraints when faced with a triggering situation. They rationalize why they can't change things, but the rationalization is likely "after-the-fact". They're frozen on a nervous-system level (which happens first), and they're justifying the freeze with stories (which happens after).

Not to say that the stories are irrelevant or don't affect things. I think they do.

c) In future similar situations, the "body" believes it's still stuck in the past / that the traumatic incident is still happening in some way. It tries to recreate the original pattern over and over.

[ My use of the word "body" is a bit shifty here. I might mean something like System 1 (Kahneman) or Self 2 (Inner Game of Tennis) or whatever part of you stores "feeling beliefs" (Bio-Emotive) or "core beliefs / belief reports" (Connection Theory). When I say "body" in quotes, this is what I mean. Whatever I mean, it probably involves the limbic system. ]

I don't fully understand what's going on here, but here are a couple of super-bad made-up stories with a bunch of missing gears:

Story #1: The "body", feeling trapped, keeps trying to relive the experience in an effort to "finally find some way out". Similar to the function of dreaming, which is to try to "rehearse" some event but try out a bunch of variations quickly.

Often, reliving trauma doesn't actually end up with a different outcome, however. In many cases, the traumatized person ends up reinforcing the original trauma narrative.

Somatic Experiencing is a therapy modality that tries to relive a trauma with a new narrative where the person is not helpless, and it does so by engaging the "body" (rather than the intellect).

Story #2: The "body" has a bunch of trapped "energy" from the original trauma, and the "energy" needs a way to release.

The person usually fails to find a way to release the energy without the help of therapy or ritual or some kind of processing technique.

The Bio-Emotive process releases "energy" in the form of sobbing and vocalizing. It engages the emotional system through simple language ("I feel sad and helpless"). It engages the meaning-making / narrative system through simple story ("He left me to die"). There can be a large, loud emotional and physical release from this process.

( Lots of approaches have been developed for dealing with trauma, but I'm mentioning two as direct examples of how my Stories might fit given current practices. )

---

I've been thinking about this lately because I'm about to give a talk on trauma, so I've been re-reading sections of Waking the Tiger and The Body Keeps the Score.

I highly recommend the latter book as a pretty comprehensive view of the current state of trauma research. The former book is by Peter Levine, who developed Somatic Experiencing therapy and figured out a lot of stuff through trial-and-error with his clients. His language is less "science-y sounding" or something, but the book contains helpful exercises and the correct "mindset" for being with trauma.

Comment by unreal on Integrity and accountability are core parts of rationality · 2019-07-15T22:41:19.397Z · score: 13 (7 votes) · LW · GW

This post is relevant to my post on Dependability.

I'm at MAPLE in order to acquire a certain level of integrity in myself.

The high-reaching goal is to acquire a level of integrity that isn't much influenced by short/medium-term incentives, such that I can trust myself to be in integrity even when I'm not in an environment that's conducive to that.

But that's probably a long way off, and in the meantime, I am just working on how to be the kind of person that can show up on time to things and say what needs to be said when it's scary and take responsibility for my mistakes and stuff.

I thumbs-up anyone who attempts to become more in integrity over time! Seems super worthwhile.

Comment by unreal on The Relationship Between the Village and the Mission · 2019-05-14T03:01:57.014Z · score: 10 (5 votes) · LW · GW

I'm realizing that I need to make the following distinction here:

Village 1) There is a core of folks in the village who are doing a hard thing (the Mission), along with their friends, family, and neighbors, who support them and each other but are not directly involved in the Mission.

Village 2) There is a village with only people who are doing direct Mission work. Other friends, family, etc. do not make their homes in the village.

I weakly think it's possible for 1 to be good.

I think 2 runs into lots of problems and is what my original comment was speaking against.

Comment by unreal on The Relationship Between the Village and the Mission · 2019-05-12T23:33:52.156Z · score: 27 (8 votes) · LW · GW

[ comment copied from Facebook / I didn't read the full article before making this comment ]

i am somewhat anti-"Mission-centered Village."

i think entanglement between mission and livelihood already causes problems. (you start feeling that the mission is good b/c it feeds you, and that an attack on the mission is an attack on your ability to feed yourself / earn income)

entanglement between mission and family/home seems like it causes more of those problems. (you start feeling that if your home is threatened in any way, this is a threat to the mission, and if you feel that your mission is threatened, it is a threat to your home.)

avoiding mental / emotional entanglement in this way i think would require a very high bar: a mind well-trained in the art of { introspection / meditation / small-identity / surrender / relinquishment } or something in that area. i suspect <10 ppl in the community meet that bar?

Comment by unreal on The Relationship Between the Village and the Mission · 2019-05-12T23:32:50.719Z · score: 14 (6 votes) · LW · GW

[ comment copied from Facebook / I didn't read the full article before making this comment ]

agree with a lot of this, esp the part about not trying to welcome everyone / lower barrier to entry to the point that there's no commitment involved

i think a successful village will require a fair amount of commitment and sacrifice, in terms of time, effort, opportunity cost, and probably money

if everyone is looking to maximize their own interests, while pursuing a village, i think this will drain resources to the point that nothing gets done or things fall apart. a weak structure will beget a fragile village. and i think a fragile village can easily be net harmful.

at the same time, it's good to be considerate to people who can't contribute a whole lot due to disability or financial insecurity.

Comment by unreal on Dependability · 2019-04-08T17:26:22.773Z · score: 2 (1 votes) · LW · GW

Hmm, I wonder if there's something about meditation in a monastic setting, where everyone strives to follow all the rules, that does something.

Because I'm pretty sure a number of the residents of the monastery here have become much more reliable after a year or two of being here.

It might be context-dependent too, but I'm not as worried about that problem for myself. I feel above-average at the generalization skill and think I can take some useful things out of a specific context into other contexts.

Comment by unreal on Dependability · 2019-03-30T12:27:06.570Z · score: 2 (1 votes) · LW · GW

I don't think I've properly conveyed what I mean by Dependability, judging by the totality of the comments. Or, maybe I've conveyed what I mean by Dependability, but I did not properly explain that I want to achieve it in a specific way. I'm looking to gain the skill through compassion and equanimity. A monastic lifestyle seems appropriate for this.

I also did not at all explain why I'm specifically disadvantaged in this area, compared to the average person. And I think that would bring clarity too, if I explained that.

Comment by unreal on Dependability · 2019-03-28T19:58:24.643Z · score: 25 (7 votes) · LW · GW

I will try to explain where my disagreement is.

1. Concept space is huge. There are more concepts than there are words for concepts. (There are many possible frames from which to conceptualize a concept too, which continues to explode the number of ways to think about any given concept.)

2. Whenever I try to 'coin' a term, I'm not trying to redefine an old concept. I have a new concept, often belonging in a particular new frame. This new concept contains a lot of nuance and specificity, different from any old words or concepts. I want to relay MY concept, which contains and implies a bunch of models I have about the world. Old words would fail to capture any of this—and would also fail to properly imply that I want to relay something confusingly but meaningfully precise.

3. I'm not 'making up' these concepts from nothing. I'm not 'thinking of ways to add complexity' to concepts. My concepts are already that complex. I'm merely sharing a concept I already have, that is coming forth from my internal, implicit models—and I try to make them explicit so others can know what concepts I already implicitly, subconsciously use to conceptualize the world. And my concepts are unique because the set of models I have are different from yours. And when I feel I've got a concept that feels particularly important in some way, I want to share it.

4. I want to understand people's true, implicit concepts—which are probably always full of nuance and implicit models. I am endlessly interested in people's precise, unique concepts. It's like getting a deep taste of someone's worldview in a single bite-sized piece. I like getting tastes of people's worldviews because everyone has a unique set of models and data, and that complexity is reflected in their concepts. Their concepts—which always start implicit and nonverbal, if they can learn to verbalize them and communicate them—are rich and layered. And I want them. (Also I think it is a very, very valuable skill to be able to explicate your implicit concepts and models. LessWrong seems like a good place to practice.)

5. "But what about building upon human knowledge, which requires creating a shared language? What about figuring out which concepts are best and building on those?" I agree this is a good goal to have. The platform of LessWrong is already built to prune concept space (with multiple ways for concepts to be promoted or demoted).

But I do think this goal is "at odds" with my goal of sharing my concepts, learning others' concepts, and diving into the depths of concept space. What I want here is to be in the "whiteboarding" phase where lots of ideas and thoughts are allowed to surface, and maybe it's their first time really seeing the light, but I get feedback, and other people have associated thoughts and share those. And it's a generative sort of phase, rather than a pruning phase.

It seems plausible my posts should stay in my 'blog' and off the front page? I don't fully understand the point of front page vs blog personally. But I'd be happy to keep my posts in the corner of "my blog" and do the 'whiteboarding' thing there.

If any of the mods want to discuss this dilemma with me (I'd prefer doing this offline), I'd be into getting more opinions on this.

Comment by unreal on Dependability · 2019-03-27T04:37:54.176Z · score: 4 (2 votes) · LW · GW

There's some overlap with conscientiousness, but dependability doesn't include being organized, being efficient, caring about achievement or perfection, being hardworking, being careful, being thorough, or appearing competent.

Grit seems important for trying and follow-through in particular!

Comment by unreal on Dependability · 2019-03-27T00:11:40.572Z · score: 2 (3 votes) · LW · GW

I guess I disagree :P

Comment by unreal on Active Curiosity vs Open Curiosity · 2019-03-24T06:36:54.135Z · score: 2 (1 votes) · LW · GW

I've been watching a bunch of videos on this, and I'm finding them quite interesting so far.

http://iainmcgilchrist.com/videos/

Also I agree lots of precision and discernment are useful to maintain here. It could get "floppy" real fast if people aren't careful with their concepts / models.

Comment by unreal on Rest Days vs Recovery Days · 2019-03-22T06:57:17.262Z · score: 8 (5 votes) · LW · GW

Connotations of Rest that I find relevant:

  • lack of anxiety
  • PSNS activation
  • relaxed body (while not necessarily inactive or passive body)
  • a state that you can be in indefinitely, in theory (whereas Recover suggests temporary)
  • meditative (vs medicative)
  • not trying to do anything / not needing anything (whereas Recover suggests goal orientation)
  • Rest feels more sacred than Recovery

Concept that I want access to that "Recover" doesn't fit as well with:

Comment by unreal on Active Curiosity vs Open Curiosity · 2019-03-21T19:42:35.121Z · score: 10 (2 votes) · LW · GW

Iain McGilchrist came out with a book on brain hemispheres and their specialized roles called The Master and His Emissary. This summary was useful: https://www.reddit.com/r/streamentry/comments/b39n4x/the_divided_brain_and_awakening_theorycommunity/

The Left Hemisphere handles narrow focus (like a bird trying to pick out a seed among a bunch of pebbles and dirt), while the Right Hemisphere handles broad, open focus (the same bird keeping some attention on the background for predators). The LH is associated with tool use and manipulation of objects. The RH is associated with exploration and experiential data gathering.

I don't immediately know how the hemispheres may be involved in the types of Curiosity. But a plausible hypothesis might be that Active Curiosity would be more left-brained and Open Curiosity would be more right-brained.

Comment by unreal on Rest Days vs Recovery Days · 2019-03-21T01:28:49.100Z · score: 6 (3 votes) · LW · GW

It's not that you're just doing whatever you "feel" like, in a generic sense. You're doing something like Focusing on your stomach in particular

Yes, this is right.

I also do predict the stomach is where most people should be Focusing, for getting proper Rest. I think there's some kind of ongoing battle between the head and the stomach, and people/society tends to favor the head.

But I get mileage out of doing Focusing on all kinds of areas.

Comment by unreal on Rest Days vs Recovery Days · 2019-03-21T01:05:01.668Z · score: 13 (4 votes) · LW · GW

So some very general links (since 'improving productivity on chores and future planning' sounds like it could mean a lot of things):

Overall, I've gotten large gains out of designing my life such that work feels like water flowing downhill rather than me trying to trudge uphill.

I use Policy-Based Intentions a fair amount, as a way to save willpower. I'm like a game designer trying to design the maze that my mouse is running in, if that makes sense. And I try to make it easy for the mouse to make the correct decisions depending on the situation.

Comment by unreal on Rest Days vs Recovery Days · 2019-03-20T19:33:36.337Z · score: 14 (7 votes) · LW · GW

I think Kaj is right. But in general, video games / TV feel like they help me escape the present moment, avoid thinking about something or feeling my body, and keep me in my head. Video games also have that feeling of fake productivity, which makes them feel like compulsive "pretend work." (Aka pica.)

I guess I also should have distinguished "reading for pleasure" and "productive reading." I was advocating for the former and not so much the latter.

Once, I did a spontaneous picnic where I put a blanket outside somewhere nice and brought a basket of food and a book. And I just lounged outside, reading Annihilation and eating and looking at nature. If I imagine having TV instead, I feel like I lose the ability to choose where my attention goes freely. With a book, I can pause or daydream and take my time with it more easily.

But really it's up to you what counts as Restful. I can imagine watching video interviews being Restful for some reason. Or listening to podcasts. I'm less sure what Restful video games would be for me.

Comment by unreal on Rest Days vs Recovery Days · 2019-03-20T19:18:37.242Z · score: 14 (4 votes) · LW · GW

I would experiment with that in the following ways:

  • Try not doing any projects and see how that is (This seems good for what Zvi / Ben describe as an emergency check / Sabbath as alarm.)
  • When you feel like working on a project, do so but periodically check "Do I still feel good about doing this right now? Is this yummy? Do I want to be doing this?" Do the check and then follow what seems good in the moment.

Comment by unreal on Active Curiosity vs Open Curiosity · 2019-03-17T00:07:39.143Z · score: 8 (4 votes) · LW · GW

Open curiosity does not actively seek to understand. Which is why I call the other one 'active'.

I suspect concentrated and diffuse curiosity are both referring to types of active curiosity. Open curiosity is talking about something different.

Comment by unreal on Active Curiosity vs Open Curiosity · 2019-03-15T13:11:43.489Z · score: 8 (4 votes) · LW · GW

yes, this is basically what I'm referring to

Comment by unreal on Policy-Based vs Willpower-Based Intentions · 2019-02-28T22:10:31.135Z · score: 9 (4 votes) · LW · GW

Oh yeah. I do think the nature of the task is an important factor. It's not like you can willy-nilly choose policy-based or willpower-based. I did not mean to present it as though you had a choice between them.

I was more describing that there are (at least) two different ways to create intentions, and these are two that I've noticed.

But you said that you can't use this on everything, so maybe the policies that I would need willpower to install just happen to be different from the policies that you would need willpower to install.

This seems likely true.

It's not that I don't have policies, it's that this description sounds like you can just... decide to change a policy, and then have that happen automatically.

It is true that I can immediately change certain policies such that I don't need to practice the new way. I just install the new way, and it works. But I can't install large complex policies all in one go. I will explain.

the Lyft thing sounded complicated to memorize and I would probably need to consciously think about it several times when I was actually doing the tipping before I had it committed into memory.

With zero experience of Lyft tipping, I would not just be able to think up a policy and then implement it. Policy-driven intentions are collaborations between my S1 and S2, so S2 can't be doing all the work alone. But maybe after a few Lyft rides, I notice confusion about how much to tip. Then maybe I think about that for a while or do some reading. Eventually I notice I need a policy because deciding each time is tiring or effortful.

I notice I feel fine tipping a bit each time when I have a programming job. I feel I can afford it, and I feel better about it. So I create and install a policy to tip $1 each time and run with that; I make room for exceptions when I feel like it.

Later, I stop having a programming job, and now I feel bad about spending that money. So I create a new if-then clause. If I have good income, I will tip $1. If not, I will tip $0. That code gets rewritten.

Later, I notice my policy is inadequate for handling situations where I have heavy luggage (because I find myself in a situation where I'm not tipping people who help me with my bag, and it bothers me a little). I rewrite the code again to add a clause about adding $1 when that happens.

Policy re-writes are motivated by S1 emotions telling me they want something different. They knock on the door of S2. S2 is like, I can help with that! S2 suggests a policy. S1 is relieved and installs it. The change is immediate.
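
( To literalize the code metaphor: here's a minimal sketch of what one of these policies looks like in my head. The function, conditions, and dollar amounts are all made up for illustration. )

```python
# Hypothetical sketch: the tipping policy as literal if-then code.
# Amounts and conditions are made up; the point is that a "rewrite"
# is a one-time edit, after which the policy runs automatically.

def lyft_tip(have_good_income: bool, helped_with_luggage: bool) -> int:
    """Return the tip in dollars for one ride."""
    tip = 1 if have_good_income else 0  # rewrite #1: income clause
    if helped_with_luggage:             # rewrite #2: luggage clause
        tip += 1
    return tip
```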

Comment by unreal on Policy-Based vs Willpower-Based Intentions · 2019-02-28T21:05:01.648Z · score: 4 (2 votes) · LW · GW

That's interesting!

How do other people handle the tipping thing? Whether for a driver or at a restaurant? Are you kind of deciding each time?

How do you handle the question of "who pays for a meal" with acquaintances / new people / on dates? My policy in this area is to always offer to split.

How do you handle whether to give money to homeless people or if someone is trying to offer you something on the street? My policy here is to always say no.

I'm curious what other people are doing here because I assumed most people use policies to handle these things.

Comment by unreal on Policy-Based vs Willpower-Based Intentions · 2019-02-28T20:58:41.571Z · score: 8 (4 votes) · LW · GW

I have not much considered group intention-setting. This seems super interesting to explore too.

Phenomenologically, I feel it kind of as... the agreements or intentions of the group (in a circle) recede into the background, to form the water we're all in together. Like it gets to relax in the VERY BACK of my mind and also I'm aware of it being in the back of other people's minds.

And from that shared container / background, I "get to move around" but it's like I am STARTING with a particular set of assumptions.

Other potential related examples:

  • I'm at a Magic tournament. I know basically what to expect—what people's goals are, what people's behaviors will be, what the rules of the game are and how to enforce them. It's very easy for me to move here because a lot of the assumptions are set in place for me.
  • I'm in church as a kid. Similar to the above. But maybe less agreeable to me or more opaque to me. I get this weird SENSE that there are ways I'm supposed to behave, but I'm not totally sure what they are. I'm just trying to do what everyone else seems to be doing... This is not super comfortable. If I act out of line, a grownup scolds me; that's one way I know where the lines are.

Potential examples of group policy-based intentions:

  • I have a friend I regularly get meals with. We agree to take turns paying for each other, explicitly.
  • I have a friend, and our implicit policy is to tell each other as soon as something big happens in our lives.

As soon as a third person is added to the dynamic, I think it gets trickier to ensure it's a policy-based intention. (Technology might provide many exceptions?) As soon as one person feels a need to remind themselves of the thing, it stops being a policy-based intention.

Willpower-based intentions in groups feel like they contain a bunch of things: rules, social norms, etc.

Comment by unreal on Policy-Based vs Willpower-Based Intentions · 2019-02-28T20:42:29.336Z · score: 4 (2 votes) · LW · GW

There is definitely this sense that exerting force or willpower feels like an EXTERNAL pressure even if that pressure does not have an external source that I could point to or even gesture at. But it /feels/ external or 'not me'.

I have some trauma related to this. I could've gone into the trauma stuff more, but I think it would have made the post less accessible and also more confusing, rather than less. So I didn't. :P

Comment by unreal on Policy-Based vs Willpower-Based Intentions · 2019-02-28T20:38:18.188Z · score: 2 (1 votes) · LW · GW

oh. I must have messed that up. I am OK with this being on the front page. I have definitely noticed some bugs here and there. Esp around the account settings page and trying to change my moderation guidelines. But I think I maybe just messed up the checkbox. Is it default checked to 'not ok'? Because if so, I left it alone thinking it was checked to 'is ok to promote'.

Comment by unreal on Policy-Based vs Willpower-Based Intentions · 2019-02-28T17:25:33.953Z · score: 5 (3 votes) · LW · GW

I enjoyed that article! Seems worth including the link in my article too. Thanks.

Your definition of intention seems different from my use of "willpower-based intention." My 'willpower-based intention' always has a conscious element and cannot do things like "work in the background without my awareness at all." It's maybe quite related to the thing in your forehead.

My policy-based intentions feel kind of like pulling up my inner code guts, making a little rewrite or alteration, and putting them back into my guts. This is a conscious process (the installation), but then the change runs automatically, without holding conscious intentions.

I'm very bad at using these to create personal habits, like drinking water every day or taking vitamins every day. I don't think these count. They require willpower after a while.

But maybe I one-time decide the best configuration of spices on the spice rack or how my kitchen is arranged. Then it is automatic for me to place things back where they belong after using them, and it is also automatic for me to want to organize things so they're back where they belong when they get messed up.

These 'desires' for things to be a certain way live in my belly. And it feels like my belly carries motivations and behaviors that I can ride out.

It feels relaxing to have a policy I can lean on, and to carry out the policy. Like water running downhill.

You could maybe think of it as 'intentions you already want to do anyway'. But with policies, your conscious mind can also make alterations / rewrite that code directly. Without any need for convincing, arguing, pushing. It is more of a collaboration I am in between elephant / rider—coming up with good policies makes us feel good and relaxed.

Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-02T04:41:59.506Z · score: 8 (3 votes) · LW · GW

I was assuming the list comes out once -> I learn enough to understand what types of posts get what voting patterns (or, I learn that the data doesn't actually tell me very much, which might be more likely), but after that I don't need any more lists of posts.

I don't care if it has my own posts on it, really. I care more about 'the general pattern' or something, and I imagine I can either get that from one such list, or I'll figure out I just won't get it (because the data doesn't have discernible patterns / it's too noisy).

Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-02T00:06:05.289Z · score: 8 (3 votes) · LW · GW

I prefer the one-time cost vs the many-time cost.

Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-01T23:25:16.373Z · score: 11 (5 votes) · LW · GW

That makes sense.

But it's really confusing for my models of the post.

Cause there is a real difference between (lots of 2-users voted on this vs. a few 5-users voted on this). Those feel very different to me, and I'd adjust my views accordingly as to whether the post was, in fact, successful.

I get that you're trying to make "lots of 2-users" and "a few 5-users" basically amount to the same value, which is why you're scaling it this way.

But if a post ACTUALLY only has 2-users upvoting it and almost no 5-users, and other posts have 5-users voting on it but very few 2-users, that seems ... worth noting.

Although, you could prob achieve the same by publishing an analysis of upvote/downvote patterns.

You could, for instance, release a list of posts, ranked by various such metrics. (Ratio of low:high user votes. Ratio of high:low user votes. Etc. Etc.)

That would be interesting!
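
( Sketching the kind of metric I mean, with made-up data. Assume each post comes with the list of vote-power values that hit it; the cutoffs for "low" and "high" are arbitrary here. )

```python
# Made-up data: each post maps to the vote-power values that hit it.
posts = {"Post A": [2, 2, 2, 2, 2, 5], "Post B": [5, 5, 5, 2]}

def low_high_ratio(powers, low_max=2, high_min=5):
    """Ratio of low-power votes to high-power votes on a post."""
    low = sum(1 for p in powers if p <= low_max)
    high = sum(1 for p in powers if p >= high_min)
    return low / high if high else float("inf")

# Rank posts by how much they were carried by low-power voters.
for name in sorted(posts, key=lambda n: low_high_ratio(posts[n]), reverse=True):
    print(name, round(low_high_ratio(posts[name]), 2))
```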

Comment by unreal on Monopoly: A Manifesto and Fact Post · 2018-06-01T23:14:38.979Z · score: 7 (2 votes) · LW · GW

The book The Fine Print covers a lot of examples of "special privileges granted by the government" in a number of industries (rail, telecom, energy). I read it a long time ago, so don't remember a ton from it. But in case anyone's interested in more concrete examples of this.

Comment by unreal on Monopoly: A Manifesto and Fact Post · 2018-06-01T23:12:15.346Z · score: 11 (2 votes) · LW · GW

Really glad you wrote this post. I think it's trying to speak to something I've been concerned with for a while—a thing that feels (to me) like a crux for a lot of current social movements and social ills in the States (including the social justice movement, black lives matter, growing homelessness / decreasing standards of living for the poorest people). And of course, the whole shit-pile that is our health care system.

Some Questions / Further Comments:

(Please respond to each point as a separate thread, so that threads are segregated by topic / question.)

1) My guess is that under "Services and construction", where you list "transportation", you mean a different "transportation" than the one in the graph, which has "Transportation and Warehousing" as its own category? I'd appreciate clarification / disambiguation in the article.

2) I agree with your point RE: intangibles, that they correlate / go together with monopoly. But it's difficult for me to tell HOW MUCH they 'go together'. And whether it is strictly 'a bad sign'. While I'm not a huge fan of how patents sometimes play out, I am a fan of branding. While you can't just try to transfer the effect of Coca-Cola's branding to your new product, I think you can, in fact, try to compete on branding.

(It would be terrible if someone tried to take exclusive rights over the use of the color red in logos or something, though. Hopefully that doesn't ever happen.)

And, honestly, I think the 'value' of their branding might not be too inaccurately priced, in some sense? (Even if the product reduces in quality, I think the branding has value beyond trying to measure quality of product.) I also don't know whether 'intangibles' includes things like 'excellent customer service', but if it does, that seems like true value, not 'fake value'. Even though it doesn't directly cash out into more product.

Over time, I think more of what we consider valuable should be in intangibles? Seems like a sign of people having enough useful things that they can now afford to put money into "nice experiences." And in many ways, people value having fewer choices because it cashes out into less effort.

3) Similarly, 'company culture'—while it is 'dark matter' as Robin Hanson says—seems appropriate to value highly in some cases. I don't think most 'monopoly situations' are a result of some company just having a really good, un-copyable company culture, but in general, I do expect it to be very difficult to transfer / copy really excellent company cultures. And as a result, I do expect something monopolistic-looking to emerge as a result of—not shady dealings or exclusive privileges facilitated by government—but as a natural consequence of very few companies, in fact, being really good places to work.

I would really like to be able to disambiguate between the situations where: There are only 3 main firms in this industry. Is it because those 3 firms are in fact providing outsized value in a way that's hard to compete with? Or, is this happening because the government made some poor decisions that favored certain companies for not-very-good reasons, and they leveraged this into an effective monopoly?

Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-01T22:03:37.540Z · score: 3 (3 votes) · LW · GW

That is too many numbers to parse! I only care about the # of ppl who've interacted with the post. Can I just have THAT number as a tooltip? That would mostly resolve my concern here.

Also, it's kind of weird to me that I have 5 vote power given I've only really interacted with this site for... a few months? And you guys have, like, 6? 7? Are you sure your scaling is right here? :/

Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-01T22:01:44.686Z · score: 6 (2 votes) · LW · GW

Would you still be sad if your strong vote was maxed at 5?

1:15 is a big difference! But 1:5 is a lot less. And 1:3 is even less!

Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-01T21:43:55.858Z · score: 19 (5 votes) · LW · GW

some thoughts before i try this out:

I am worried about this thing where I both want to know: how many ppl liked a thing vs how strongly ppl liked a thing. More for posts than for comments. For posts, if I see a number like 100, I am very confused about how many ppl liked it. It seems to range between 20-50. But if... the vote power actually goes up to 15. Then... I will be confused about whether it's like... 10-50. That's... a big difference to me.

I'd almost like it if, for posts, it were 1 = normal / 2 = strong for ppl with lower karma, and 1 = normal / 3 = strong for people with more karma? Or something that reduces the possible range for "how many ppl liked the post."

There's also a clear dynamic where people with 4-6 karma tend to check LW more frequently / early, so ... um... karma tends to go up more quickly at the beginning and then kind of tapers off, but it's like...

I dunno, it's kind of misleading to me.

Why do you top out at 16 instead of 5? I'm just ... confused by this.

Kind of wish all 'weak votes' were 1, too, and karma scores only kick in if you strong vote.
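
( Putting those two wishes together as a sketch; the karma threshold is made up: )

```python
# Hypothetical vote-weight scheme, not the actual LW one:
# weak votes always count 1; strong votes scale with karma but cap at 3.

def vote_weight(karma: int, strong: bool) -> int:
    if not strong:
        return 1                         # all weak votes are 1
    return 2 if karma < 1000 else 3      # strong: 2 for lower karma, 3 for higher
```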

Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-01T21:35:30.407Z · score: 8 (3 votes) · LW · GW

that link seems broken

Comment by unreal on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-29T21:24:54.426Z · score: 24 (5 votes) · LW · GW

I am fascinated by this conversation/disagreement about Ender's Game. I think it might be really important. I am upvoting both comments.

Some things it makes me consider:

a) When is violence / attacking the outgroup justified?

b) Would it have been abusive if the children hadn't been lied to? (I lean no. But given that they were lied to, I lean yes.)

c) Is it OK to sometimes frame "the default ways of the universe" as a kind of outgroup, in order to motivate action 'against' them? Ender's Game was about another sentient lifeform. But in some ways, the universe has "something vaguely resembling" anthropomorphizable demons that tend to work against human interests. (We, as a community, have already solidified Moloch as one. And there are others.) In a way, we ARE trying to mobilize ourselves 'against the outgroup'—with that outgroup being kind of nebulous and made-up, but still trying to point at real forces that threaten our existence/happiness.

Q for benquo:

How do you feel about sports (or laser tag leagues)?

Comment by unreal on Duncan Sabien on Moderating LessWrong · 2018-05-29T20:47:35.704Z · score: 32 (6 votes) · LW · GW

If you found out some of those cons (or some close version of them) were necessary in order to achieve those pros, would anything shift for you?

For instance, if you see people acting to work on/improve/increase the cons... would you see those people as acting badly/negatively if you knew it was the only realistic way to achieve the pros?

(This is just in the hypothetical world where this is true. I do not know if it is.)

Like, what if we just live in a "tragic world" where you can't achieve things like your pros list without... basically feeding people's desire for community and connection? And what if people's desire for connection often ends up taking the form of wanting to live/work/interact together? Would anything shift for you?

(If my hypothetical does nothing, then could you come up with a hypothetical that does?)

Comment by unreal on Moderating LessWrong: A Different Take · 2018-05-28T21:09:13.778Z · score: 21 (5 votes) · LW · GW

That makes sense.

It would look less like you were emotionally compromised if you tried to do the double crux thing in addition to pointing out the norms violations. E.g., "I think you're over the line in these ways. [List of ways] But, if you did have some truth to what you're saying, would it be this? [attempt at understanding their argument / what they are trying to protect]"

(Maybe you have done this, and I missed it.)

But if you haven't done this, why not?

Alternatively, another move would be, "I feel ___ about engaging with your arguments because they strike me as really uncharitable to the post. Instead I would like to just call out what I think are a list of norms you are violating, which are important to me for wanting to engage with your points."

^This calls attention to the fact that you are avoiding engaging with the critique of your post. (There are plenty of other ways to do this; I just gave one possible example.)

Does that move seem reasonable / executable?

(I'm noticing that if you felt you "should" do these things, it would be an unreasonable pressure. I think you are absolutely NOT obligated to engage in these ways. I'm pointing at these moves because they would cause me, and likely others, to respect you more in the arena of online debate. I already respect you plenty in lots of other arenas, so. This is like extra?)

Comment by unreal on Moderating LessWrong: A Different Take · 2018-05-28T06:35:27.037Z · score: 35 (8 votes) · LW · GW

Weird, I was expecting you to disagree. I was trying to illustrate what I thought you were missing in your own arguments around this.

In the disputes I've seen you engage in, this is kind of what it looks like is happening. (Except you're not a mod, just the author of the post.)

Comment by unreal on Moderating LessWrong: A Different Take · 2018-05-28T05:07:33.453Z · score: 38 (9 votes) · LW · GW

I think it's fine for participants to engage this way.

If a moderator gets embroiled in a disagreement where one side is saying "You're criticizing me wrong" and the other side is saying "I'm trying to criticize you for X," then this can get really awkward.

If the criticism itself has (potentially) some truth or validity, but the moderator doesn't acknowledge any of that and instead keeps trying to have a conversation about how the criticism is wrong/improper by LW's standards, then the way this looks is:

a) A moderator is trying to dodge being criticized

b) They are using the mantle of "upholding LW's standards" to hide behind and dodging double cruxing at the object level

c) They aren't acknowledging the overall situation, and so it's unclear whether the mod is aware of how this all looks and whether they're doing it on purpose, or if they're feeling defensive and using principles to (subconsciously) dodge criticism

Here, it is valid to care about more than just whether the mod is technically correct about the criticism's wrongness! The mod might be correct on the points they're making. But they're also doing something weird in the conversation, where it really seems like they're trying to dodge something. Possibly subconsciously. And the viewers are left to wonder whether that's actually happening or if they're mistaken. But it's awkward for a random viewer to try to "poke the bear" here, given the power differential.

Even worse is if someone does try to "poke the bear" and the mod reacts by denying any accusations of motivated reasoning while continuing to leave the dynamic unacknowledged, and then claims that this is a culture that should be better than that.

In my head, it is obvious why this is all bad for a mod to do. So I didn't explain quite why it's bad. I can try if someone asks.

Comment by unreal on Moderating LessWrong: A Different Take · 2018-05-27T19:23:55.562Z · score: 11 (3 votes) · LW · GW

Where does the piece say that?

Comment by unreal on Duncan Sabien on Moderating LessWrong · 2018-05-26T21:56:14.323Z · score: 9 (4 votes) · LW · GW

If you only plan on annotating past discussions that have long since died, I mind a lot less. But for a discussion that is still live or potentially live, it feels like standing on a platform and shouting through a loudspeaker. I'd advocate for only annotating comments without any activity within the past X months.