Ruby's Short-Form Feed

post by Ruby · 2019-02-23T21:17:48.972Z · score: 11 (4 votes) · 2 comments

This is an imitation of Raemon's experiment for short-form content on LW2.0. I'll use the comment section of this post to share relatively short and informal write-ups of ideas I'm thinking about - closer to the style I'd adopt for Facebook, but still serious in content.

When writing full-on posts, I feel the need to pay a lot of attention to structure, content, and editing, to include all supporting material, to be rigorous, etc. It feels like there's a quality bar for full-on posts. It would probably be fine to write things which are a little rougher, but it still feels bad to write to a lower level of polish than the rest of the posts on the Frontpage.

The result is that I don't share many thoughts I think are likely worth sharing [1]. I'm hoping that this feed will offer an affordance for low-friction writing. It might also end up serving as a place where I share early drafts which later become refined posts.

A nice name would be: Ruby's Random Ramblings about Rationality. Well, it's a very nice alliteration but a little misleading - it probably won't be that random or rambly.

Please don't create top-level comments here, but feel free to reply to comments.

[1] Also, sometimes it's nice to share what you're thinking about with others without worrying too much about how valuable it really is.

2 comments

comment by Ruby · 2019-02-28T19:35:53.312Z · score: 10 (2 votes)

Over the years, I've experienced a couple of very dramatic yet rather sudden and relatively "easy" shifts around major pain points: strong aversions, strong fears, inner conflicts, or painful yet deeply ingrained beliefs. My post Identities are [Subconscious] Strategies contains examples. It's not surprising to me that these are possible, but my S1 says they're supposed to require a lot of effort: major existential crises, hours of introspection, self-discovery journeys, drug trips, or dozens of hours with a therapist.

Having recently undergone a really big one, I noted my surprise again. Surprise, of course, is a property of bad models. (Actually, the recent shift occurred precisely because of this line of thought: I noticed I was surprised and dug in, leading to an important S1 shift. Your strength as a rationalist and all that.) Attempting to come up with a model which wasn't as surprised, this is what I've got:

The shift involved S1 models. The S1 models had been there a long time, maybe a very long time. When that happens, they begin to seem like how the world just *is*. If emotions arise from those models, and those models are so entrenched they become invisible as models, then the emotions too begin to be taken for granted - a natural way to feel about the world.

Yet the longevity of the models doesn’t mean that they’re deep, sophisticated, or well-founded. They might be very simplistic, ignoring a lot of real-world complexity. They might have been acquired in formative years, before one had learned much epistemic skill. They haven’t been reviewed, because it was hardly noticed that they were beliefs/models at all rather than just “how the world is”.

Now, if you have a good dialog with your S1, if your S1 is amenable to new evidence and reasoning, then you can bring up the models in question and discuss them with your S1. If your S1 is healthy (and not entangled with threats), it will be open to new evidence. It might very readily update in the face of that evidence. “Oh, obviously the thing I’ve been thinking was simplistic and/or mistaken. That evidence is incompatible with the position I’ve been holding.” If the models shift, then the feelings shift.

Poor models held by an epistemically healthy "agent" can rapidly change when presented with the right evidence. This is perhaps not surprising.

Actually, I suspect that difficulty updating often comes from S1 models that are instances of the broccoli error: “If I updated to like broccoli then I would like broccoli, but I don’t like broccoli, so I don’t want that.” “If I updated that people aren’t out to get me then I wouldn’t be vigilant, which would be bad since people are out to get me.” Then the mere attempt to persuade you that broccoli is pretty good / people are benign is perceived as threatening and hence resisted.

So maybe a lot of S1 willingness to update is very dependent on S1 trusting that it is safe, that you’re not going to take away any important, protective beliefs or models.

If there are occasions where I achieve rather large shifts in my feelings with relatively little effort, maybe it is just that I’ve gotten to a point where I’m good enough at locating the S1 models/beliefs that are causing inner conflict, good enough at feeling safe messing with my S1 models, and good enough at presenting the right reasoning/evidence to S1.

comment by Ruby · 2019-05-15T17:59:25.601Z · score: 4 (2 votes)

For my own reference.

Brief timeline of notable events for LW2:

  • 2017-09-20 LW2 Open Beta launched
  • (2017-10-13 There is No Fire Alarm published)
  • (2017-10-21 AlphaGo Zero Significance post published)
  • 2017-10-28 Inadequate Equilibria first post published
  • (2017-12-30 Goodhart Taxonomy published) <- maybe part of the January spike?
  • 2018-03-23 Official LW2 launch and switching of www.lesswrong.com to point to the new site.

Events in parentheses are possible draws which spiked traffic at those times.