Posts

3 Cultural Infrastructure Ideas from MAPLE 2019-11-26T18:56:48.921Z · score: 56 (25 votes)
Unreal's Shortform 2019-08-03T21:11:22.475Z · score: 12 (2 votes)
Dependability 2019-03-26T22:49:37.402Z · score: 67 (24 votes)
Rest Days vs Recovery Days 2019-03-19T22:37:09.194Z · score: 117 (54 votes)
Active Curiosity vs Open Curiosity 2019-03-15T16:54:45.389Z · score: 68 (26 votes)
Policy-Based vs Willpower-Based Intentions 2019-02-28T05:17:55.302Z · score: 62 (18 votes)
Moderating LessWrong: A Different Take 2018-05-26T05:51:40.928Z · score: 41 (11 votes)
Circling 2018-02-16T23:26:54.955Z · score: 129 (60 votes)
Slack for your belief system 2017-10-26T08:19:27.502Z · score: 65 (30 votes)
Being Correct as Attire 2017-10-24T10:04:10.703Z · score: 15 (5 votes)
Typical Minding Guilt/Shame 2017-10-24T09:39:35.498Z · score: 25 (11 votes)

Comments

Comment by unreal on The Five Main Muscles for a Full Range of Natural Movement, Dynamic Alignment & Balance. · 2019-12-06T05:38:41.091Z · score: 12 (3 votes) · LW · GW

I appreciate seeing this post here! I am very interested in this sort of topic, generally.

I'm confused why the post has such a low karma score. If nothing else, it seems like a useful reference for human anatomy.

One thing this post suffers from is that it's overwhelming for a noob to look at. Personally, I'd much rather just hire someone to teach me all this in person, if at all possible.

That said, it still seems like a great reference for parts of human anatomy, and it contains a very interesting hypothesis. I wish LessWrong talked more about this stuff, as it seems very important for humans and how humans think.

Writing about anything RE: biology, life, anatomy, etc. seems difficult because it's all very 3D in nature, and it's best to have good visualizations, which are not always available. That said, I am grateful that you put all this together. It seems like it took a lot of work. And I hope to see more in the future.

Comment by unreal on 3 Cultural Infrastructure Ideas from MAPLE · 2019-12-05T19:38:35.544Z · score: 4 (2 votes) · LW · GW

Worth noting here that the Schedule at MAPLE is very conducive to creating these low-stakes contexts. In fact, inside the Schedule, you are always in such a context...

There is a world-saving mission at MAPLE, but at MAPLE, it does not define people's worth or whether they deserve care / attention or whether they belong in the community. I think the issue with both the EA and rationalist communities is that people's "output" is too easily tied to their sense of worth. I could probably write many words on this phenomenon in the Bay community.

It is hard to convey in mere words what MAPLE has managed to do here. There is a clearer separation between "your current output level" and "your deserving-ness / worthiness as a human." It was startling to experience this separation occurring on a visceral level within me. Now I'm much more grounded, self-confident, and less likely to take things personally, and this shift feels permanent and also ongoing.

Comment by unreal on Circling · 2019-12-03T18:19:55.877Z · score: 13 (3 votes) · LW · GW

Upon re-reading this post, I want to review this sentence:

In my experience, being in an SNS-activated state really primes me for new information in a way that being calm (PSNS activation) does not.

I think this is still true, but I also suspect that being in a certain calm, open PSNS state is good for integrating new information.

I don't understand this fully yet. But some things:

  • Many therapeutic modalities attempt to get me into a particular open, peaceful, "all-seeing", perceptive state. Often related to compassion + curiosity. Referred to as "Self" in IFS. From here, I have been able to integrate many things that were previously "too hard" or "overwhelming."
  • In Circling, I have sometimes been basically doing CoZE and going right up to the fence of my fears. Maybe looking at someone actively caring about me / understanding me while I feel shame / fear / self-judgment. For me, this is a very activating situation, like reaching the peak of a roller coaster. From here, I have made some of my biggest updates / experienced my largest releases. And I attributed that to the level of activation / fear, in contrast with the "drop"—there's this big juxtaposition between my feared/projected/storied reality and what is happening in front of me right now.

These phenomena are mostly still a mystery to me.

Comment by unreal on 3 Cultural Infrastructure Ideas from MAPLE · 2019-11-27T23:05:13.624Z · score: 5 (2 votes) · LW · GW

I think growth-training programs actually do work for the former.

E.g. My CFAR workshop wasn't something I decided to go to because I was thinking about training leadership. But it nonetheless helped unlock some of this "entry level leadership" thing. Much of the same happens with Circling and other workshops that help unblock people.

So far what seems to work here is training programs that do any kind of developmental training / leveling up. Ideally they work on you regardless of what stage you happen to be at and just help propel you to the next stage.

Of course, not all the people who go through those programs end up interested in leadership, but this is probably fine, and I suspect trying to pre-screen for 'leadership potential' is a waste of effort, and you should just ride selection effects. (Similar to how people who emigrate correlate with having skill, resourcefulness, and gumption.)

Comment by unreal on 3 Cultural Infrastructure Ideas from MAPLE · 2019-11-27T22:23:27.341Z · score: 2 (1 votes) · LW · GW

I feel very compelled by this! I would love to help figure out how to approach this bottleneck. I have some ideas.

My sense is that there are some useful funnels already in place that one could take advantage of for finding potential people, and there are effective, growth-y training programs one could also take advantage of. There are maybe bottlenecks in money + space in specific training programs + getting the right people to the right training programs.

Comment by unreal on 3 Cultural Infrastructure Ideas from MAPLE · 2019-11-27T20:37:26.743Z · score: 9 (4 votes) · LW · GW

It feels tractable to me. I feel like there are lots of levers to play around with.

Pieces I suspect may be load-bearing:

  • Honest selection effects. This means sending accurate, honest messages, to attract people who are good fits and pass under the radar of those who aren't. (With some flexibility at the edges, as some people might be on the fence / seem like not-fits but only on the surface; those people can run some cheap experiments, like visiting for a few days.)
  • There needs to be a bigger point to it all. I don't think this can all just be for the sake of "my own health" or "I feel less stress with a schedule" or something like this. These personal motivations don't stand up to enough pressure. At MAPLE, everything is ultimately for the sake of training awakening and leadership. You signed up in order to grow in these ways, and so you're devoting yourself to the training. And more than that, the point of training is to become a person who can help others / do good things when you leave—someone who can be of benefit, be reliable, is trustworthy, is compassionate. You'll sacrifice some optionality if there's an inspiring, higher purpose to the sacrifice. If it only feels like "well I guess I could give up on some sleep in order to exercise because it's good for me...?", then it could often go either way. When there's a higher purpose that's bigger than me, there's always a North Star to be following, even if I'm not always on track.
  • Reasonable, skillful leadership. This thing probably doesn't work very well if decisions are all based on consensus or something. So you'd want to find at least a few pretty reliable, trustworthy, reasonable people to lead / hold the important roles. Power should be spread around, but it seems fine for there to be a "final say-so" person, who exercises end-of-the-line power, but does so infrequently. There are various ways to play with this. The important thing is having a few good leaders (maybe even just one? but this feels less robust to me), who people would be willing to follow, and they should divide roles between them in a sensible way. One person can be end-of-the-line decision-maker tie-breaker (probably the person with the Vision).
  • Using commitments wisely. If the leaders all have buy-in (because they put in the most effort, money, etc.), but the followers don't, the thing will probably fall apart. Get commitments from people, preferably in writing. And then make sure commitments really mean something, in general. Include integrity in your list of virtues. Leaders should consistently demonstrate they care about commitments (big or small) and that when they themselves break commitments, they take that seriously. People should not break their commitments, but also they shouldn't be shamed if they do. A broken commitment is like a death. It's no one's fault, but it's also worth trying to prevent. Occasionally, it may be correct to break a commitment, but there should be an acknowledgement of its suboptimality (e.g. perhaps it should have been differently made originally, or never made).
  • Feedback culture. It should be welcome and encouraged and also normal to give and receive feedback from each other, daily. Ideal feedback should be kindly given, rather than given out of annoyance, superiority, disappointment, shaming, or guilting. Feedback is ideally received as a gift. It is OK to fail in giving/receiving feedback well, because people can give feedback on how you give/receive feedback. At MAPLE, it's part of the written commitment that you will give/receive feedback. (Part of me suspects this works so well at MAPLE because of the meditation training, which helps people feel more equanimous and calms egoic reactivity. If meditation training is important for growing the skill of giving/receiving feedback easefully, that might be a major constraint.)
  • Financial viability. One thing about Dragon Army that I didn't like was that Duncan seemed to be holding most of the financial burden, and his willingness and ability to provide financial support seemed cruxy to the thing staying afloat. Now I understand better that it's possible to fundraise for projects like this and also apply for grants. My sense now is that if people don't want to give you money for such a project, maybe it's better to just not do it? On the other hand, if your project has visionary and trustworthy leadership—then you can probably find people interested in funding it, even if they're not directly involved. If your project is inspiring and beneficial to others, you'll probably find donors. I think it's better not to rely on the residing community members as the only source of financial support. (Leverage seems to work this way?)

Comment by unreal on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-17T18:40:44.306Z · score: 4 (2 votes) · LW · GW

I see great need for some way to indicate "not-an-accident but also not necessarily conscious or endorsed." And ideally the term doesn't have a judgmental or accusatory connotation.

This seems pretty hard to do actually. Maybe an acronym?

Alice lied (NIANOA) to Bob about X.

Not Intentionally And Not On Accident

Comment by unreal on Machine Learning Analogy for Meditation (illustrated) · 2019-08-13T17:36:04.821Z · score: 5 (2 votes) · LW · GW

Thanks! This was helpful analysis.

I suspect my slight trigger (1/10) set off other people's triggers. And I'm more triggered now as a result (but still only like 3/10.)

I'd like to save this thread as an example of a broader pattern I think I see on LW, which makes having conversations here more unpleasant than is probably necessary? Not sure though.

Comment by unreal on Machine Learning Analogy for Meditation (illustrated) · 2019-08-13T14:52:17.447Z · score: 2 (1 votes) · LW · GW

I acknowledge that it's likely somehow because of how I worded things in my original comment. I wish I knew how to fix it.

Comment by unreal on Machine Learning Analogy for Meditation (illustrated) · 2019-08-12T21:21:47.677Z · score: 12 (5 votes) · LW · GW

Yeah I'm not implying that System 2 is useless or irrelevant for actions. Just that it seems more indirect or secondary.

Also please note that overall I'm probably confused about something, as I mentioned. And my comments are not meant to open up conflict, but rather I'm requesting a clarification on this particular sentence and what frame / ontology it's using:

If you're experiencing "motivational issues", then it stands to reason that it might be useful to keep an eye on which thoughts are leading to actions and which are not.

I would like to expand the words 'thoughts' and 'useful' here.

People seem to be responding to me as though I'm trying to start an argument, and this is really not what I'm going for. Sharing my POV is just to try to help close inferential gap in the right direction.

Comment by unreal on Machine Learning Analogy for Meditation (illustrated) · 2019-08-12T14:44:34.893Z · score: 6 (3 votes) · LW · GW

Hmmm. No.

Basically, as far as I know, System 1 is more or less directly responsible for all actions.

You can predict what actions a person will take BEFORE they are mentally conscious of it at all. You can do this by measuring galvanic skin response or watching their brain activity.

The mentally conscious part happens second.

But I'm guessing that, for some reason, what I'm saying here is already obvious, and Abram just means something else, and I'm trying to figure out what.

Comment by unreal on Machine Learning Analogy for Meditation (illustrated) · 2019-08-10T16:49:54.640Z · score: 13 (4 votes) · LW · GW

My inferential distance from this post is high, I think. So excuse me if this question doesn't even make sense.

If you're experiencing "motivational issues", then it stands to reason that it might be useful to keep an eye on which thoughts are leading to actions and which are not.

I think I don't understand what you mean by 'thoughts'?

I view 'thoughts' as not having very much to do with action in general. They're just like... incidental post-hoc things. Why is it useful to track which thoughts lead to actions? /blink

Comment by unreal on Machine Learning Analogy for Meditation (illustrated) · 2019-08-10T14:50:29.808Z · score: 5 (3 votes) · LW · GW

I also liked reading the handwritten version.

Comment by unreal on Unreal's Shortform · 2019-08-10T12:36:58.635Z · score: 5 (2 votes) · LW · GW

Yeah, I think it's more the latter.

I would classify situation 3 as a lack of the model-building skill proper, which includes having my models affect me on an alief level. Although introspection would help me notice what my aliefs are to begin with.

Comment by unreal on Unreal's Shortform · 2019-08-09T20:42:02.806Z · score: 22 (7 votes) · LW · GW

When I talk about "model-building skill" I think I mean three separate skills:

Skill A) Model-building skill proper

Skill B) Introspection skill

Skill C) Communication skill

There are probably a lot of people who are decent at model-building proper. (Situation #1)

I'm imagining genius-level programmers. But then when you try to get some insight into what their models actually are or why they do things one way vs another, it's opaque. They don't know how to explicate any of it. They get annoyed by having to try to verbalize their models or even know what they are—they'd rather get back to coding and are frustrated having to "waste time" convincing other people.

Then there are other people who might be decent at model-building and introspecting on their models, but when they try to communicate their models to you, it comes out as gibberish (at least to your ears). And asking them questions doesn't seem to go anywhere. (Situation #2)

Then there's the situation where people are really articulate and able to communicate very clear, explicit, verbal models—but when it comes to implementing those models on a somatic-emotional level, they run into trouble. But it sounds like they have the model-building skill because they can talk about their models and share genuine insights about them. (Situation #3)

An example is having a bunch of insightful theories about social dynamics, but when actually in a situation where they could put those theories into practice, there is some kind of block. The models are not acting like felt models.

...

I've been in Situation #3 and Situation #1 before.

Overcoming Situation #3 is a scary thing. Being able to see, make sense of, and articulate models (from afar) was a way of distancing myself from reality. It was a preemptive defense mechanism. It helped me feel superior / knowledgable / satisfied. And then I continued to sit and watch rather than participate, engage, run experiments, etc. Or I'd play with "toy models" like games or structured activities or imaginings or simulations.

To be fair to toy models, I learned quite a lot / built many useful models from playing games, so I don't regret doing a lot of that. And, there's also a lot to be gained from trial by fire, so to speak.

For Situation #1, the solution mostly came from learning to introspect properly and also learning to take myself seriously.

Once I honestly felt like I had meaningful things to share with other people, it became easier to bother trying. (It's hard to do this if you're surrounded by people who have high standards for "interesting" or "useful." Being around people who were more appreciative helped me overcome some stuck beliefs around my personal worth.)

I also had a block around "convincing other people of anything." So while I could voice models, I couldn't use them to "convince people" to change anything about their lives or way of being. It made me a worse teacher and also meant my models always came across as "nice/cool but irrelevant." And not in a way that was easy for anyone to detect properly, especially myself.

Comment by unreal on Unreal's Shortform · 2019-08-03T21:11:22.625Z · score: 28 (8 votes) · LW · GW

#trauma #therapy

My working definition of trauma = sympathetic nervous system (SNS) activation + dorsal vagal complex (DVC) activation

The SNS is in control of fight or flight responses. Activation results in things like increased heart rate / blood pressure, dilated pupils, faster breathing, and slowed digestion.

The DVC plays a role in the freeze response that exists in many vertebrates. In vertebrates, the freeze response results in immobility. In humans, the freeze response is associated with de-activated language centers in the brain (based on MRI research on trauma flashbacks). From my own extrapolations, it also seems associated with dissociation / depersonalization (sometimes) and lessened ability to orient to one's surroundings.

The dorsal branch of the vagus originates in the dorsal motor nucleus and is considered the phylogenetically older branch.[3] This branch is unmyelinated and exists in most vertebrates. This branch is also known as the “vegetative vagus” because it is associated with primal survival strategies of primitive vertebrates, reptiles, and amphibians.[3] Under great stress, these animals freeze when threatened, conserving their metabolic resources.
https://www.wikiwand.com/en/Polyvagal_theory

The vagus system acts by inhibiting the SNS.

In other words, a traumatic incident involves "fully pushing the gas pedal while simultaneously fully pushing the brake." On the one hand, the body is trying to engage fight or flight, but in cases where neither fighting nor fleeing feel like options, the body then tries to engage freeze.

a) This releases a bunch of stress hormones.

So, according to The Body Keeps the Score, cortisol is a hormone that signals the body to STOP releasing stress hormones. In people with PTSD, cortisol levels are low. And stress hormones fail to return to baseline after the threat has passed, meaning they experience a prolonged stress response.

b) This helpless freeze response becomes a learned response in similar situations in the future.

The concept of "learned helplessness" is very likely related to the experience of trauma. (Interestingly, the Wikipedia article doesn't even mention the word "trauma.")

From the "Early Key Experiments" section on the Learned Helplessness Wiki page:

American psychologist Martin Seligman initiated research on learned helplessness in 1967 at the University of Pennsylvania as an extension of his interest in depression.[4][5] This research was later expanded through experiments by Seligman and others. One of the first was an experiment by Seligman & Maier: In Part 1 of this study, three groups of dogs were placed in harnesses. Group 1 dogs were simply put in harnesses for a period of time and were later released. Groups 2 and 3 consisted of "yoked pairs". Dogs in Group 2 were given electric shocks at random times, which the dog could end by pressing a lever. Each dog in Group 3 was paired with a Group 2 dog; whenever a Group 2 dog got a shock, its paired dog in Group 3 got a shock of the same intensity and duration, but its lever did not stop the shock. To a dog in Group 3, it seemed that the shock ended at random, because it was his paired dog in Group 2 that was causing it to stop. Thus, for Group 3 dogs, the shock was "inescapable".
In Part 2 of the experiment the same three groups of dogs were tested in a shuttle-box apparatus (a chamber containing two rectangular compartments divided by a barrier a few inches high). All of the dogs could escape shocks on one side of the box by jumping over a low partition to the other side. The dogs in Groups 1 and 2 quickly learned this task and escaped the shock. Most of the Group 3 dogs – which had previously learned that nothing they did had any effect on shocks – simply lay down passively and whined when they were shocked.[4]

It seems very likely they ended up traumatizing the dogs in that experiment by caging them up and then shocking them. The dogs subsequently don't "flee" even when the cage is open. They've ingrained their freeze response.

Similarly in humans, traumatized people become trapped within their own mental constraints when faced with a triggering situation. They rationalize why they can't change things, but the rationalization is likely "after-the-fact". They're frozen on a nervous-system level (which happens first), and they're justifying the freeze with stories (which happens after).

Not to say that the stories are irrelevant or don't affect things. I think they do.

c) In future similar situations, the "body" believes it's still stuck in the past / that the traumatic incident is still happening in some way. It tries to recreate the original pattern over and over.

[ My use of the word "body" is a bit shifty here. I might mean something like System 1 (Kahneman) or Self 2 (Inner Game of Tennis) or whatever part of you stores "feeling beliefs" (Bio-Emotive) or "core beliefs / belief reports" (Connection Theory). When I say "body" in quotes, this is what I mean. Whatever I mean, it probably involves the limbic system. ]

I don't fully understand what's going on here, but here are a couple of super-bad made-up stories with a bunch of missing gears:

Story #1: The "body", feeling trapped, keeps trying to relive the experience in an effort to "finally find some way out". Similar to the function of dreaming, which is to try to "rehearse" some event but try out a bunch of variations quickly.

Often, reliving trauma doesn't actually end up with a different outcome, however. In many cases, the traumatized person ends up reinforcing the original trauma narrative.

Somatic Experiencing is a therapy modality that tries to relive a trauma with a new narrative where the person is not helpless, and it does it by engaging the "body" (rather than the intellect).

Story #2: The "body" has a bunch of trapped "energy" from the original trauma, and the "energy" needs a way to release.

The person usually fails to find a way to release the energy without the help of therapy or ritual or some kind of processing technique.

The Bio-Emotive process releases "energy" in the form of sobbing and vocalizing. It engages the emotional system through simple language ("I feel sad and helpless"). It engages the meaning-making / narrative system through simple story ("He left me to die"). There can be a large, loud emotional and physical release from this process.

( Lots of approaches have been developed for dealing with trauma, but I'm mentioning two as direct examples of how my Stories might fit given current practices. )

---

I've been thinking about this lately because I'm about to give a talk on trauma, so I've been re-reading sections of Waking the Tiger and The Body Keeps the Score.

I highly recommend the latter book as a pretty comprehensive view of where we are on trauma research lately. The former book is by Peter Levine, who developed Somatic Experiencing therapy and figured out a lot of stuff through trial-and-error with his clients. His language is less "science-y sounding" or something, but it contains helpful exercises and the correct "mindset" for being with trauma.

Comment by unreal on Integrity and accountability are core parts of rationality · 2019-07-15T22:41:19.397Z · score: 13 (7 votes) · LW · GW

This post is relevant to my post on Dependability.

I'm at MAPLE in order to acquire a certain level of integrity in myself.

The high-reaching goal is to acquire a level of integrity that isn't much-influenced by short/medium-term incentives, such that I can trust myself to be in integrity even when I'm not in an environment that's conducive for that.

But that's probably a long way off, and in the meantime, I am just working on how to be the kind of person that can show up on time to things and say what needs to be said when it's scary and take responsibility for my mistakes and stuff.

I thumbs-up anyone who attempts to become more in integrity over time! Seems super worthwhile.

Comment by unreal on The Relationship Between the Village and the Mission · 2019-05-14T03:01:57.014Z · score: 10 (5 votes) · LW · GW

I'm realizing that I need to make the following distinction here:

Village 1) There is a core of folks in the village who are doing a hard thing (Mission), and also their friends, family, and neighbors who support them and each other but are not directly involved in the Mission.

Village 2) There is a village with only ppl who are doing the direct Mission work. Other friends, family, etc. do not make their homes in the village.

I weakly think it's possible for 1 to be good.

I think 2 runs into lots of problems and is what my original comment was speaking against.

Comment by unreal on The Relationship Between the Village and the Mission · 2019-05-12T23:33:52.156Z · score: 27 (8 votes) · LW · GW

[ comment copied from Facebook / I didn't read the full article before making this comment ]

i am somewhat anti-"Mission-centered Village."

i think entanglement between mission and livelihood already causes problems. (you start feeling that the mission is good b/c it feeds you, and that an attack on the mission is an attack on your ability to feed yourself / earn income)

entanglement between mission and family/home seems like it causes more of those problems. (you start feeling that if your home is threatened in any way, this is a threat to the mission, and if you feel that your mission is threatened, it is a threat to your home.)

avoiding mental / emotional entanglement in this way i think would require a very high bar: a mind well-trained in the art of { introspection / meditation / small-identity / surrender / relinquishment } or something in that area. i suspect <10 ppl in the community meet that bar?

Comment by unreal on The Relationship Between the Village and the Mission · 2019-05-12T23:32:50.719Z · score: 14 (6 votes) · LW · GW

[ comment copied from Facebook / I didn't read the full article before making this comment ]

agree with a lot of this, esp the part about not trying to welcome everyone / lower barrier to entry to the point that there's no commitment involved

i think a successful village will require a fair amount of commitment and sacrifice, in terms of time, effort, opportunity cost, and probably money

if everyone is looking to maximize their own interests, while pursuing a village, i think this will drain resources to the point that nothing gets done or things fall apart. a weak structure will beget a fragile village. and i think a fragile village can easily be net harmful.

at the same time, it's good to be considerate to people who can't contribute a whole lot due to disability or financial insecurity.

Comment by unreal on Dependability · 2019-04-08T17:26:22.773Z · score: 2 (1 votes) · LW · GW

Hmm, I wonder if there's something about meditation in a monastic setting, where you have to strive to follow all the rules, that does something.

Because I'm pretty sure a number of the residents of the monastery here have become much more reliable after a year or two of being here.

It might be context-dependent too, but I'm not as worried about that problem for me. I feel above-average at the generalization skill and think I can take some useful things out of a specific context into other contexts.

Comment by unreal on Dependability · 2019-03-30T12:27:06.570Z · score: 2 (1 votes) · LW · GW

I don't think I've properly conveyed what I mean by Dependability, judging by the totality of the comments. Or, maybe I've conveyed what I mean by Dependability, but I did not properly explain that I want to achieve it in a specific way. I'm looking to gain the skill through compassion and equanimity. A monastic lifestyle seems appropriate for this.

I also did not at all explain why I'm specifically disadvantaged in this area, compared to the average person. And I think that would bring clarity too, if I explained that.

Comment by unreal on Dependability · 2019-03-28T19:58:24.643Z · score: 25 (7 votes) · LW · GW

I will try to explain where my disagreement is.

1. Concept space is huge. There are more concepts than there are words for concepts. (There are many possible frames from which to conceptualize a concept too, which continues to explode the number of ways to think about any given concept.)

2. Whenever I try to 'coin' a term, I'm not trying to redefine an old concept. I have a new concept, often belonging in a particular new frame. This new concept contains a lot of nuance and specificity, different from any old words or concepts. I want to relay MY concept, which contains and implies a bunch of models I have about the world. Old words would fail to capture any of this—and would also fail to properly imply that I want to relay something confusingly but meaningfully precise.

3. I'm not 'making up' these concepts from nothing. I'm not 'thinking of ways to add complexity' to concepts. My concepts are already that complex. I'm merely sharing a concept I already have, that is coming forth from my internal, implicit models—and I try to make them explicit so others can know what concepts I already implicitly, subconsciously use to conceptualize the world. And my concepts are unique because the set of models I have are different from yours. And when I feel I've got a concept that feels particularly important in some way, I want to share it.

4. I want to understand people's true, implicit concepts—which are probably always full of nuance and implicit models. I am endlessly interested in people's precise, unique concepts. It's like getting a deep taste of someone's worldview in a single bite-sized piece. I like getting tastes of people's worldviews because everyone has a unique set of models and data, and that complexity is reflected in their concepts. Their concepts—which always start implicit and nonverbal, if they can learn to verbalize them and communicate them—are rich and layered. And I want them. (Also I think it is a very, very valuable skill to be able to explicate your implicit concepts and models. LessWrong seems like a good place to practice.)

5. "But what about building upon human knowledge, which requires creating a shared language? What about figuring out which concepts are best and building on those?" I agree this is a good goal to have. The platform of LessWrong is already built to prune concept space (with multiple ways for concepts to be promoted or demoted).

But I do think this goal is "at odds" with my goal of sharing my concepts, learning others' concepts, and diving into the depths of concept space. What I want here is to be in the "whiteboarding" phase where lots of ideas and thoughts are allowed to surface, and maybe it's their first time really seeing the light, but I get feedback, and other people have associated thoughts and share those. And it's a generative sort of phase, rather than a pruning phase.

It seems plausible my posts should stay in my 'blog' and off the front page? I don't fully understand the point of front page vs blog personally. But I'd be happy to keep my posts in the corner of "my blog" and do the 'whiteboarding' thing there.

If any of the mods want to discuss this dilemma with me (I'd prefer doing this offline), I'd be into getting more opinions on this.

Comment by unreal on Dependability · 2019-03-27T04:37:54.176Z · score: 4 (2 votes) · LW · GW

There's some overlap with conscientiousness, but dependability doesn't include being organized, being efficient, caring about achievement or perfection, being hardworking, being careful, being thorough, or appearing competent.

Grit seems important for trying and follow-through in particular!

Comment by unreal on Dependability · 2019-03-27T00:11:40.572Z · score: 2 (3 votes) · LW · GW

I guess I disagree :P

Comment by unreal on Active Curiosity vs Open Curiosity · 2019-03-24T06:36:54.135Z · score: 2 (1 votes) · LW · GW

I've been watching a bunch of videos on this, and I'm finding them quite interesting so far.

http://iainmcgilchrist.com/videos/

Also I agree lots of precision and discernment are useful to maintain here. It could get "floppy" real fast if people aren't careful with their concepts / models.

Comment by unreal on Rest Days vs Recovery Days · 2019-03-22T06:57:17.262Z · score: 8 (5 votes) · LW · GW

Connotations of Rest that I find relevant:

  • lack of anxiety
  • PSNS activation
  • relaxed body (while not necessarily inactive or passive body)
  • a state that you can be in indefinitely, in theory (whereas Recovery suggests temporary)
  • meditative (vs medicative)
  • not trying to do anything / not needing anything (whereas Recovery suggests goal orientation)
  • Rest feels more sacred than Recovery

Concept that I want access to that "Recover" doesn't fit as well with:

Comment by unreal on Active Curiosity vs Open Curiosity · 2019-03-21T19:42:35.121Z · score: 10 (2 votes) · LW · GW

Iain McGilchrist came out with a book on brain hemispheres and their specialized roles called The Master and His Emissary. This summary was useful: https://www.reddit.com/r/streamentry/comments/b39n4x/the_divided_brain_and_awakening_theorycommunity/

The Left Hemisphere handles narrow focus (like a bird trying to pick out a seed among a bunch of pebbles and dirt), while the Right Hemisphere handles broad, open focus (the same bird keeping some attention on the background for predators). The LH is associated with tool use and manipulation of objects. The RH is associated with exploration and experiential data gathering.

I don't immediately know how the hemispheres may be involved in the types of Curiosity. But a plausible hypothesis might be that Active Curiosity would be more left-brained and Open Curiosity would be more right-brained.

Comment by unreal on Rest Days vs Recovery Days · 2019-03-21T01:28:49.100Z · score: 6 (3 votes) · LW · GW

It's not that you're just doing whatever you "feel" like, in a generic sense. You're doing something like Focusing on your stomach in particular

Yes, this is right.

I also predict the stomach is where most people should be Focusing, for getting proper Rest. I think there's some kind of ongoing battle between the head and the stomach, and people/society tends to favor the head.

But I get mileage out of doing Focusing on all kinds of areas.

Comment by unreal on Rest Days vs Recovery Days · 2019-03-21T01:05:01.668Z · score: 13 (4 votes) · LW · GW

So some very general links (since 'improving productivity on chores and future planning' sounds like it could mean a lot of things):

Overall, I've gotten large gains out of designing my life such that work feels like water flowing downhill rather than me trying to trudge uphill.

I use Policy-Based Intentions a fair amount, as a way to save willpower. I'm like a game designer trying to design the maze that my mouse is running in, if that makes sense. And I try to make it easy for the mouse to make the correct decisions depending on the situation.

Comment by unreal on Rest Days vs Recovery Days · 2019-03-20T19:33:36.337Z · score: 14 (7 votes) · LW · GW

I think Kaj is right. But in general, video games / TV feel like they help me escape the present moment, avoid thinking about something or feeling my body, and keep me in my head. Video games also have that feeling of fake productivity which makes them feel like a compulsive "pretend work." (Aka pica.)

I guess I also should have distinguished "reading for pleasure" and "productive reading." I was advocating for the former and not so much the latter.

Once, I did a spontaneous picnic where I put a blanket outside somewhere nice and brought a basket of food and a book. And I just lounged outside, reading [Annihilation] and eating and looking at nature. If I imagine having TV instead, I feel like I lose the ability to choose where my attention goes freely. With a book, I can pause or daydream and take my time with it more easily.

But really it's up to you what counts as Restful. I can imagine watching video interviews being Restful for some reason. Or listening to podcasts. I'm less sure what Restful video games for me would be.

Comment by unreal on Rest Days vs Recovery Days · 2019-03-20T19:18:37.242Z · score: 14 (4 votes) · LW · GW

I would experiment with that in the following ways:

  • Try not doing any projects and see how that is (This seems good for what Zvi / Ben describe as an emergency check / Sabbath as alarm.)
  • When you feel like working on a project, do so but periodically check "Do I still feel good about doing this right now? Is this yummy? Do I want to be doing this?" Do the check and then follow what seems good in the moment.

Comment by unreal on Active Curiosity vs Open Curiosity · 2019-03-17T00:07:39.143Z · score: 8 (4 votes) · LW · GW

Open curiosity does not actively seek to understand. Which is why I call the other one 'active'.

I suspect concentrated and diffuse curiosity are both referring to types of active curiosity. Open curiosity is talking about something different.

Comment by unreal on Active Curiosity vs Open Curiosity · 2019-03-15T13:11:43.489Z · score: 8 (4 votes) · LW · GW

yes, this is basically what I'm referring to

Comment by unreal on Policy-Based vs Willpower-Based Intentions · 2019-02-28T22:10:31.135Z · score: 9 (4 votes) · LW · GW

Oh yeah. I do think the nature of the task is an important factor. It's not like you can willy-nilly choose policy-based or willpower-based. I did not mean to present it as though you had a choice between them.

I was more describing that there are (at least) two different ways to create intentions, and these are two that I've noticed.

But you said that you can't use this on everything, so maybe the policies that I would need willpower to install just happen to be different from the policies that you would need willpower to install.

This seems likely true.

It's not that I don't have policies, it's that this description sounds like you can just... decide to change a policy, and then have that happen automatically.

It is true that I can immediately change certain policies such that I don't need to practice the new way. I just install the new way, and it works. But I can't install large complex policies all in one go. I will explain.

the Lyft thing sounded complicated to memorize and I would probably need to consciously think about it several times when I was actually doing the tipping before I had it committed into memory.

With zero experience of Lyft tipping, I would not just be able to think up a policy and then implement it. Policy-driven intentions are collaborations between my S1 and S2, so S2 can't be doing all the work alone. But maybe after a few Lyft rides, I notice confusion about how much to tip. Then maybe I think about that for a while or do some reading. Eventually I notice I need a policy because deciding each time is tiring or effortful.

I notice I feel fine tipping a bit each time when I have a programming job. I feel I can afford it, and I feel better about it. So I create and install a policy to tip $1 each time and run with that; I make room for exceptions when I feel like it.

Later, I stop having a programming job, and now I feel bad about spending that money. So I create a new if-then clause. If I have good income, I will tip $1. If not, I will tip $0. That code gets rewritten.

Later, I notice my policy is inadequate for handling situations where I have heavy luggage (because I find myself in a situation where I'm not tipping people who help me with my bag, and it bothers me a little). I rewrite the code again to add a clause about adding $1 when that happens.

Policy re-writes are motivated by S1 emotions telling me they want something different. They knock on the door of S2. S2 is like, I can help with that! S2 suggests a policy. S1 is relieved and installs it. The change is immediate.
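
If it helps, here's that if-then "code" as a literal toy sketch (the function and its inputs are made up for illustration; the amounts are just the ones from my example):

```python
def lyft_tip(good_income: bool, heavy_luggage: bool) -> int:
    """My tipping 'policy' after both rewrites described above."""
    tip = 1 if good_income else 0  # rewrite #1: tip depends on income
    if heavy_luggage:
        tip += 1                   # rewrite #2: extra $1 when the driver helps with bags
    return tip
```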

Comment by unreal on Policy-Based vs Willpower-Based Intentions · 2019-02-28T21:05:01.648Z · score: 4 (2 votes) · LW · GW

That's interesting!

How do other people handle the tipping thing? Whether for a driver or at a restaurant? Are you kind of deciding each time?

How do you handle the question of "who pays for a meal" with acquaintances / new people / on dates? My policy in this area is to always offer to split.

How do you handle whether to give money to homeless people or if someone is trying to offer you something on the street? My policy here is to always say no.

I'm curious what other people are doing here because I assumed most people use policies to handle these things.

Comment by unreal on Policy-Based vs Willpower-Based Intentions · 2019-02-28T20:58:41.571Z · score: 8 (4 votes) · LW · GW

I haven't much considered group intention-setting. This seems super interesting to explore too.

Phenomenologically, I feel it kind of as... the agreements or intentions of the group (in a circle) recede into the background, to form the water we're all in together. Like it gets to relax in the VERY BACK of my mind and also I'm aware of it being in the back of other people's minds.

And from that shared container / background, I "get to move around" but it's like I am STARTING with a particular set of assumptions.

Other potential related examples:

  • I'm at a Magic tournament. I know basically what to expect—what people's goals are, what people's behaviors will be, what the rules of the game are and how to enforce them. It's very easy for me to move here because a lot of the assumptions are set in place for me.
  • I'm in church as a kid. Similar to the above. But maybe less agreeable to me or more opaque to me. I get this weird SENSE that there are ways I'm supposed to behave, but I'm not totally sure what they are. I'm just trying to do what everyone else seems to be doing... This is not super comfortable. If I act out of line, a grownup scolds me; that's one way I know where the lines are.

Potential examples of group policy-based intentions:

  • I have a friend I regularly get meals with. We agree to take turns paying for each other, explicitly.
  • I have a friend, and our implicit policy is to tell each other as soon as something big happens in our lives.

As soon as a third person is added to the dynamic, I think it gets trickier to ensure it's a policy-based intention. (Technology might provide many exceptions?) As soon as one person feels a need to remind themselves of the thing, it stops being a policy-based intention.

Willpower-based intentions in groups feel like they contain a bunch of things like rules, social norms, etc.

Comment by unreal on Policy-Based vs Willpower-Based Intentions · 2019-02-28T20:42:29.336Z · score: 4 (2 votes) · LW · GW

There is definitely this sense that exerting force or willpower feels like an EXTERNAL pressure even if that pressure does not have an external source that I could point to or even gesture at. But it /feels/ external or 'not me'.

I have some trauma related to this. I could've gone into the trauma stuff more, but I think it would have made the post less accessible and also more confusing, rather than less. So I didn't. :P

Comment by unreal on Policy-Based vs Willpower-Based Intentions · 2019-02-28T20:38:18.188Z · score: 2 (1 votes) · LW · GW

oh. I must have messed that up. I am OK with this being on the front page. I have definitely noticed some bugs here and there. Esp around the account settings page and trying to change my moderation guidelines. But I think I maybe just messed up the checkbox. Is it default checked to 'not ok'? Because if so, I left it alone thinking it was checked to 'is ok to promote'.

Comment by unreal on Policy-Based vs Willpower-Based Intentions · 2019-02-28T17:25:33.953Z · score: 5 (3 votes) · LW · GW

I enjoyed that article! Seems worth including the link in my article too. Thanks.

Your definition of intention seems different from my use of "willpower-based intention." My 'willpower-based intention' always has a conscious element and cannot do things like "work in the background without my awareness at all." It's maybe quite related to the thing in your forehead.

My policy-based intentions feel kind of like pulling up my inner code guts, making a little rewrite or alteration, and putting them back into my guts. This is a conscious process (the installation), but then the change runs automatically, without holding conscious intentions.

I'm very bad at using these to create personal habits, like drinking water everyday or taking vitamins everyday. I don't think these count. They require willpower after a while.

But maybe I one-time decide the best configuration of spices on the spice rack or how my kitchen is arranged. Then it is automatic for me to place things back where they belong after using them, and it is also automatic for me to want to organize things so they're back where they belong when they get messed up.

These 'desires' for things to be a certain way live in my belly. And it feels like my belly carries motivations and behaviors that I can ride out.

It feels relaxing to have a policy I can lean on, and to carry out the policy. Like water running downhill.

You could maybe think of it as 'intentions you already want to do anyway'. But with policies, your conscious mind can also make alterations / rewrite that code directly. Without any need for convincing, arguing, pushing. It's more of a collaboration between elephant and rider—coming up with good policies makes us feel good and relaxed.

Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-02T04:41:59.506Z · score: 8 (3 votes) · LW · GW

I was assuming the list comes out once -> I learn enough to understand what types of posts get what voting patterns (or, I learn that the data doesn't actually tell me very much, which might be more likely), but after that I don't need any more lists of posts.

I don't care if it has my own posts on it, really. I care more about 'the general pattern' or something, and I imagine I can either get that from one such list, or I'll figure out I just won't get it (because the data doesn't have discernible patterns / it's too noisy).

Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-02T00:06:05.289Z · score: 8 (3 votes) · LW · GW

I prefer the one-time cost vs the many-time cost.

Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-01T23:25:16.373Z · score: 11 (5 votes) · LW · GW

That makes sense.

But it's really confusing for my models of the post.

Cause there is a real difference between (lots of 2-users voted on this vs. a few 5-users voted on this). Those feel very different to me, and I'd adjust my views accordingly as to whether the post was, in fact, successful.

I get that you're trying to make "lots of 2-users" and "a few 5-users" basically amount to the same value, which is why you're scaling it this way.

But if a post ACTUALLY only has 2-users upvoting it and almost no 5-users, and other posts have 5-users voting on it but very few 2-users, that seems ... worth noting.

Although, you could prob achieve the same by publishing an analysis of upvote/downvote patterns.

You could, for instance, release a list of posts, ranked by various such metrics. (Ratio of low:high user votes. Ratio of high:low user votes. Etc. Etc.)

That would be interesting!
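
For concreteness, here's a rough sketch of one such metric (the weight cutoffs for "low" and "high" users are made up, since I don't know the actual vote-power table):

```python
def low_to_high_ratio(upvote_weights):
    """upvote_weights: one vote-power value per upvoter on a post.
    Posts could be ranked by how much of their score comes from
    low-power vs high-power voters."""
    low = sum(1 for w in upvote_weights if w <= 2)   # hypothetical "2-users"
    high = sum(1 for w in upvote_weights if w >= 5)  # hypothetical "5-users"
    return low / high if high else float("inf")
```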

Comment by unreal on Monopoly: A Manifesto and Fact Post · 2018-06-01T23:14:38.979Z · score: 7 (2 votes) · LW · GW

The book The Fine Print covers a lot of examples of "special privileges granted by the government" in a number of industries (rail, telecom, energy). I read it a long time ago, so I don't remember a ton from it. But I mention it in case anyone's interested in more concrete examples of this.

Comment by unreal on Monopoly: A Manifesto and Fact Post · 2018-06-01T23:12:15.346Z · score: 11 (2 votes) · LW · GW

Really glad you wrote this post. I think it's trying to speak to something I've been concerned with for a while—a thing that feels (to me) like a crux for a lot of current social movements and social ills in the States (including the social justice movement, black lives matter, growing homelessness / decreasing standards of living for the poorest people). And of course, the whole shit-pile that is our health care system.

Some Questions / Further Comments:

(Please respond to each point as a separate thread, so that threads are segregated by topic / question.)

1) My guess is that under "Services and construction", where you list "transportation", you mean a different "transportation" than the one in the graph, which has "Transportation and Warehousing" as its own category? I'd appreciate clarification / disambiguation in the article.

2) I agree with your point RE: intangibles, that they correlate / go together with monopoly. But it's difficult for me to tell HOW MUCH they 'go together'. And whether it is strictly 'a bad sign'. While I'm not a huge fan of how patents sometimes play out, I am a fan of branding. While you can't just try to transfer the effect of Coca-Cola's branding to your new product, I think you can, in fact, try to compete on branding.

(It would be terrible if someone tried to take exclusive rights over the use of the color red in logos or something, though. Hopefully that doesn't ever happen.)

And, honestly, I think the 'value' of their branding might not be too inaccurately priced, in some sense? (Even if the product reduces in quality, I think the branding has value beyond trying to measure quality of product.) I also don't know whether 'intangibles' includes things like 'excellent customer service', but if it does, that seems like true value, not 'fake value'. Even though it doesn't directly cash out into more product.

Over time, I think more of what we consider valuable should be in intangibles? Seems like a sign of people having enough useful things that they can now afford to put money into "nice experiences." And in many ways, people value having fewer choices because it cashes out into less effort.

3) Similarly, 'company culture'—while it is 'dark matter' as Robin Hanson says—seems appropriate to value highly in some cases. I don't think most 'monopoly situations' are a result of some company just having a really good, un-copyable company culture, but in general, I do expect it to be very difficult to transfer / copy really excellent company cultures. And as a result, I do expect something monopolistic-looking to emerge as a result of—not shady dealings or exclusive privileges facilitated by government—but as a natural consequence of very few companies, in fact, being really good places to work.

I would really like to be able to disambiguate between the situations where: There are only 3 main firms in this industry. Is it because those 3 firms are in fact providing outsized value in a way that's hard to compete with? Or, is this happening because the government made some poor decisions that favored certain companies for not-very-good reasons, and they leveraged this into an effective monopoly?

Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-01T22:03:37.540Z · score: 8 (4 votes) · LW · GW

That is too many numbers to parse! I only care about the # of ppl who've interacted with the post. Can I just have THAT number as a tooltip? That would mostly resolve my concern here.

Also, it's kind of weird to me that I have 5 vote power given I've only really interacted with this site for... a few months? And you guys have, like, 6? 7? Are you sure your scaling is right here? :/

Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-01T22:01:44.686Z · score: 6 (2 votes) · LW · GW

Would you still be sad if your strong vote was maxed at 5?

1:15 is a big difference! But 1:5 is a lot less. And 1:3 is less still!

Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-01T21:43:55.858Z · score: 24 (6 votes) · LW · GW

Some thoughts before I try this out:

I am worried about this thing where I both want to know: how many ppl liked a thing vs how strongly ppl liked a thing. More for posts than for comments. For posts, if I see a number like 100, I am very confused about how many ppl liked it. It seems to range between 20-50. But if... the vote power actually goes up to 15. Then... I will be confused about whether it's like... 10-50. That's... a big difference to me.

I'd almost like it if, for posts, 1 = normal and 2 = strong for ppl with lower karma, and for people with more karma, 1 = normal and 3 = strong? Or something that reduces the possible range for "how many ppl liked the post."

There's also a clear dynamic where people with 4-6 karma tend to check LW more frequently / early, so ... um... karma tends to go up more quickly at the beginning and then kind of tapers off, but it's like...

I dunno, it's kind of misleading to me.

Why do you top out at 16 instead of 5? I'm just ... confused by this.

Kind of wish all 'weak votes' were 1, too, and karma scores only kick in if you strong vote.
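
Something in this direction, say (the karma cutoff is invented, just to show the shape of the scheme I'm wishing for):

```python
def vote_weight(karma: int, strong: bool) -> int:
    """Compressed scheme: weak votes are always 1, and strong votes top
    out at 3 instead of 15+. The 1000-karma cutoff is invented."""
    if not strong:
        return 1                      # weak votes don't reveal voter karma
    return 3 if karma >= 1000 else 2  # strong: 2 for lower karma, 3 for higher
```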

Comment by unreal on Strong Votes [Update: Deployed] · 2018-06-01T21:35:30.407Z · score: 8 (3 votes) · LW · GW

that link seems broke

Comment by unreal on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-29T21:24:54.426Z · score: 24 (5 votes) · LW · GW

I am fascinated by this conversation/disagreement about Ender's Game. I think it might be really important. I am upvoting both comments.

Some things it makes me consider:

a) When is violence / attacking the outgroup justified?

b) Would it have been abusive if the children hadn't been lied to? (I lean no. But given that they were lied to, I lean yes.)

c) Is it OK to sometimes frame "the default ways of the universe" as a kind of outgroup, in order to motivate action 'against' them? Ender's Game was about another sentient lifeform. But in some ways, the universe has "something vaguely resembling" anthropomorphizable demons that tend to work against human interests. (We, as a community, have already solidified Moloch as one. And there are others.) In a way, we ARE trying to mobilize ourselves 'against the outgroup'—with that outgroup being kind of nebulous and made-up, but still trying to point at real forces that threaten our existence/happiness.

Q for benquo:

How do you feel about sports (or laser tag leagues)?