Posts

Novum Organum: Introduction 2019-09-19T22:34:21.223Z · score: 43 (10 votes)
Open & Welcome Thread - September 2019 2019-09-03T02:53:21.771Z · score: 9 (3 votes)
LessWrong Updates - September 2019 2019-08-29T22:12:55.747Z · score: 43 (12 votes)
[Resource Request] What's the sequence post which explains you should continue to believe things about a particle moving that's moving beyond your ability to observe it? 2019-08-04T22:31:37.063Z · score: 7 (1 votes)
Open & Welcome Thread - August 2019 2019-08-02T23:56:26.343Z · score: 13 (5 votes)
Do you fear the rock or the hard place? 2019-07-20T22:01:48.392Z · score: 43 (14 votes)
Why did we wait so long for the bicycle? 2019-07-17T18:45:09.706Z · score: 48 (18 votes)
Causal Reality vs Social Reality 2019-06-24T23:50:19.079Z · score: 37 (28 votes)
LW2.0: Technology Platform for Intellectual Progress 2019-06-19T20:25:20.228Z · score: 27 (7 votes)
LW2.0: Community, Culture, and Intellectual Progress 2019-06-19T20:25:08.682Z · score: 28 (5 votes)
Discussion Thread: The AI Does Not Hate You by Tom Chivers 2019-06-17T23:43:00.297Z · score: 34 (9 votes)
Welcome to LessWrong! 2019-06-14T19:42:26.128Z · score: 82 (35 votes)
LessWrong FAQ 2019-06-14T19:03:58.782Z · score: 57 (16 votes)
An attempt to list out my core values and virtues 2019-06-09T20:02:43.122Z · score: 26 (6 votes)
Feedback Requested! Draft of a New About/Welcome Page for LessWrong 2019-06-01T00:44:58.977Z · score: 30 (5 votes)
A Brief History of LessWrong 2019-06-01T00:43:59.408Z · score: 20 (12 votes)
The LessWrong Team 2019-06-01T00:43:31.545Z · score: 22 (6 votes)
Site Guide: Personal Blogposts vs Frontpage Posts 2019-05-31T23:08:07.363Z · score: 34 (9 votes)
A Quick Taxonomy of Arguments for Theoretical Engineering Capabilities 2019-05-21T22:38:58.739Z · score: 29 (6 votes)
Could humanity accomplish everything which nature has? Why might this not be the case? 2019-05-21T21:03:28.075Z · score: 8 (2 votes)
Could humanity ever achieve atomically precise manufacturing (APM)? What about a much-smarter-than-human-level intelligence? 2019-05-21T21:00:30.562Z · score: 8 (2 votes)
Data Analysis of LW: Activity Levels + Age Distribution of User Accounts 2019-05-14T23:53:54.332Z · score: 27 (9 votes)
How do the different star-types in the universe (red dwarf, etc.) related to habitability for human-like life? 2019-05-11T01:01:52.202Z · score: 6 (1 votes)
How many "human" habitable planets/stars are in the universe? 2019-05-11T00:59:59.648Z · score: 6 (1 votes)
How many galaxies could we reach traveling at 0.5c, 0.8c, and 0.99c? 2019-05-08T23:39:16.337Z · score: 6 (1 votes)
How many humans could potentially live on Earth over its entire future? 2019-05-08T23:33:21.368Z · score: 9 (3 votes)
Claims & Assumptions made in Eternity in Six Hours 2019-05-08T23:11:30.307Z · score: 46 (13 votes)
What speeds do you need to achieve to colonize the Milky Way? 2019-05-07T23:46:09.214Z · score: 6 (1 votes)
Could a superintelligent AI colonize the galaxy/universe? If not, why not? 2019-05-07T21:33:20.288Z · score: 6 (1 votes)
Is it definitely the case that we can colonize Mars if we really wanted to? Is it reasonable to believe that this is technically feasible for a reasonably advanced civilization? 2019-05-07T20:08:32.105Z · score: 8 (2 votes)
Why is it valuable to know whether space colonization is feasible? 2019-05-07T19:58:59.570Z · score: 6 (1 votes)
What are the claims/arguments made in Eternity in Six Hours? 2019-05-07T19:54:32.061Z · score: 6 (1 votes)
Which parts of the paper Eternity in Six Hours are iffy? 2019-05-06T23:59:16.777Z · score: 18 (5 votes)
Space colonization: what can we definitely do and how do we know that? 2019-05-06T23:05:55.300Z · score: 31 (9 votes)
What is corrigibility? / What are the right background readings on it? 2019-05-02T20:43:45.303Z · score: 6 (1 votes)
Speaking for myself (re: how the LW2.0 team communicates) 2019-04-25T22:39:11.934Z · score: 47 (17 votes)
[Answer] Why wasn't science invented in China? 2019-04-23T21:47:46.964Z · score: 78 (26 votes)
Agency and Sphexishness: A Second Glance 2019-04-16T01:25:57.634Z · score: 27 (14 votes)
On the Nature of Agency 2019-04-01T01:32:44.660Z · score: 30 (10 votes)
Why Planning is Hard: A Multifaceted Model 2019-03-31T02:33:05.169Z · score: 37 (15 votes)
List of Q&A Assumptions and Uncertainties [LW2.0 internal document] 2019-03-29T23:55:41.168Z · score: 25 (5 votes)
Review of Q&A [LW2.0 internal document] 2019-03-29T23:15:57.335Z · score: 25 (4 votes)
Plans are Recursive & Why This is Important 2019-03-10T01:58:12.649Z · score: 61 (24 votes)
Motivation: You Have to Win in the Moment 2019-03-01T00:26:07.323Z · score: 49 (21 votes)
Informal Post on Motivation 2019-02-23T23:35:14.430Z · score: 29 (16 votes)
Ruby's Public Drafts & Working Notes 2019-02-23T21:17:48.972Z · score: 11 (4 votes)
Optimizing for Stories (vs Optimizing Reality) 2019-01-07T08:03:22.512Z · score: 45 (15 votes)
Learning-Intentions vs Doing-Intentions 2019-01-01T22:22:39.364Z · score: 58 (21 votes)
Four factors which moderate the intensity of emotions 2018-11-24T20:40:12.139Z · score: 60 (18 votes)
Combat vs Nurture: Cultural Genesis 2018-11-12T02:11:42.921Z · score: 36 (11 votes)

Comments

Comment by ruby on LessWrong Updates - September 2019 · 2019-09-20T00:50:10.596Z · score: 4 (2 votes) · LW · GW

UPDATE

  • Link Previews are now live.
  • Subscriptions and New Editor are still being worked on. The new editor will hopefully hit the "opt into experimental features" status soon.
  • Convert Comments to Posts is hitting some difficulties.
Comment by ruby on Ruby's Public Drafts & Working Notes · 2019-09-15T05:45:08.075Z · score: 4 (2 votes) · LW · GW

Converting this from a Facebook comment to LW Shortform.

A friend complains about recruiters who send repeated emails saying things like "just bumping this to the top of your inbox," when they have no right to prioritize their emails over everything else my friend might be receiving (messages from friends, travel plans, etc.). The truth is they're simply paid to spam.

Some discussion of repeated messaging behavior ensued. These are my thoughts:

I feel conflicted about repeatedly messaging people. All the following being factors in this conflict:

  • Repeatedly messaging can mean making yourself the asshole who gets through someone's unfortunate asshole filter.
  • There's an angle from which repeatedly, manually messaging people is a costly signal that their response would be valuable to you. Admittedly this might not filter in the desired ways.
  • I know that many people are in fact disorganized and lose emails, or otherwise don't have systems for getting back to you, such that failure to get back to you doesn't mean they didn't want to.
  • Other people have extremely good systems. I'm always impressed by the super busy, super well-known people who reliably get back to you after three weeks. Systems. I don't always know where someone falls between "has no systems, relies on other people to message repeatedly" and "has impeccable systems but due to volume of emails will take two weeks."
    • The overall incentives are such that most people probably shouldn't generally reveal which they are.
  • Sometimes the only way to get things done is to bug people. And I hate it. I hate nagging, but given other people's unreliability, it's either you bugging them or a good chance of not getting some important thing.
    • A wise, well-respected, business-experienced rationalist told me many years ago that if you want something from someone, you should just email them every day until they do it. It feels like this is the wisdom of the business world. Yet . . .
  • Sometimes I sign up for a free trial of an enterprise product and, my god, if you give them your email after having expressed the tiniest interest, they will keep emailing you forever with escalatingly attention-grabby and entitled subject lines. (Like recruiters but much worse.) If I were being smart, I'd have a system which filters those emails, but I don't, and so they are annoying. I don't want to pattern match to that kind of behavior.
    • Sometimes I think I won't pattern match to that kind of spam because I'm different and my message is different, but then the rest of the LW team cautions me that such differences exist in my mind but not necessarily in the mind of the recipient I'm annoying.
    • I suspect as a whole they lean too far in the direction of avoiding being assholes, at the risk of not getting things done, while I'm biased in the reverse direction. I suspect this comes from my most recent previous work experience being in the "business world" where ruthless, selfish, asshole norms prevail. It may be I dial it back from that but still end up seeming brazen to people with less immersion in that world; probably, overall, cultural priors and individual differences heavily shape how messaging behavior is interpreted.

So it's hard. I try to judge on a case-by-case basis, but I'm usually erring in one direction or the other, with a corresponding fear of the opposite failure.

A heuristic I heard in this space is to message repeatedly but with an exponential delay factor each time you don't get a response: message again after one week; if you don't get a reply, message again after another two weeks, then four weeks, etc. Eventually, you won't be bugging whoever it is.
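A minimal sketch of that heuristic in Python (my own illustration; the function name and parameters are hypothetical, not from any existing tool):

```python
from datetime import date, timedelta

def follow_up_schedule(first_sent: date, initial_gap_days: int = 7, max_attempts: int = 4):
    """Return follow-up dates where the gap doubles after each unanswered message."""
    schedule = []
    current = first_sent
    gap = initial_gap_days
    for _ in range(max_attempts):
        current += timedelta(days=gap)
        schedule.append(current)
        gap *= 2  # one week, then two, then four, then eight...
    return schedule

# Follow-ups land roughly 1, 3, 7, and 15 weeks after the first message.
print(follow_up_schedule(date(2019, 9, 15)))
```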


Comment by ruby on Ruby's Public Drafts & Working Notes · 2019-09-14T00:21:20.747Z · score: 23 (5 votes) · LW · GW

Selected Aphorisms from Francis Bacon's Novum Organum

I'm currently working to format Francis Bacon's Novum Organum as a LessWrong sequence. It's a moderate-sized project as I have to work through the entire work myself, and write an introduction which does Novum Organum justice and explains the novel move of taking an existing work and posting it on LessWrong (short answer: NovOrg is some serious hardcore rationality and contains central tenets of the LW foundational philosophy notwithstanding being published back in 1620, not to mention that Bacon and his works are credited with launching the modern Scientific Revolution).

While I'm still working on this, I want to go ahead and share some of my favorite aphorisms from it so far:

3. . . . The only way to command reality is to obey it . . .

9. Nearly all the things that go wrong in the sciences have a single cause and root, namely: while wrongly admiring and praising the powers of the human mind, we don’t look for true helps for it.

Bacon sees the unaided human mind as entirely inadequate for scientific progress. He sees the way forward as constructing tools/infrastructure/methodology to help the human mind think/reason/do science.


10. Nature is much subtler than are our senses and intellect; so that all those elegant meditations, theorizings and defensive moves that men indulge in are crazy—except that no-one pays attention to them. [Bacon often uses a word meaning ‘subtle’ in the sense of ‘fine-grained, delicately complex’; no one current English word will serve.]

24. There’s no way that axioms •established by argumentation could help us in the discovery of new things, because the subtlety of nature is many times greater than the subtlety of argument. But axioms •abstracted from particulars in the proper way often herald the discovery of new particulars and point them out, thereby returning the sciences to their active status.

Bacon repeatedly hammers that reality has a surprising amount of detail such that just reasoning about things is unlikely to get at truth. Given the complexity and subtlety of nature, you have to go look at it. A lot.

28. Indeed, anticipations have much more power to win assent than interpretations do. They are inferred from a few instances, mostly of familiar kinds, so that they immediately brush past the intellect and fill the imagination; whereas interpretations are gathered from very various and widely dispersed facts, so that they can’t suddenly strike the intellect, and must seem weird and hard to swallow—rather like the mysteries of faith.

Anticipations are what Bacon calls making theories by generalizing principles from a few specific examples and then reasoning from those [ill-founded] general principles. This is the method of Aristotle and of science up until that point, which Bacon wants to replace. Interpretations is his name for his inductive method, which generalizes only very slowly, building out increasingly large sets of examples/experiments.

I read Aphorism 28 as saying that Anticipations have much lower inferential distance since they can be built from simple examples with which everyone is familiar. In contrast, if you build up a theory based on lots of disparate observations that aren't universal, you now have lots of inferential distance and people find your ideas weird and hard to swallow.


All quotations cited from: Francis Bacon, Novum Organum, in the version by Jonathan Bennett presented at www.earlymoderntexts.com


Comment by ruby on How Can People Evaluate Complex Questions Consistently? · 2019-09-13T22:04:12.373Z · score: 8 (4 votes) · LW · GW

jimrandomh correctly pointed out to me that precision can have its own value for various kinds of comparison. I think he's right. If A and B are each biased estimators of 'a' and 'b', but their bias is consistent (lowering accuracy) while their precision is high, then I can make comparisons between A/a and B/b over time and against each other in ways I couldn't if the estimators were less biased but noisier.
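A toy simulation of that point (my own sketch, just to make the claim concrete):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 10.0, 12.0  # the true quantities, with a < b

# Biased but precise: both estimates are shifted by a constant +3 with tiny noise.
A_biased = a + 3 + rng.normal(0, 0.1, 1000)
B_biased = b + 3 + rng.normal(0, 0.1, 1000)

# Unbiased but noisy: centered on the truth, with large spread.
A_noisy = a + rng.normal(0, 5, 1000)
B_noisy = b + rng.normal(0, 5, 1000)

# The biased-but-precise pair almost always preserves the true ordering a < b...
print((A_biased < B_biased).mean())  # ~1.0
# ...while the unbiased-but-noisy pair gets the comparison wrong much of the time.
print((A_noisy < B_noisy).mean())    # ~0.6
```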

Still, the assumption here is that the estimator is tracking a real, fixed thing.

If I were to try to improve my estimator, I'd look at the process it implements as a whole and try to improve that, rather than just trying to make the answer come out the same.

Comment by ruby on How Can People Evaluate Complex Questions Consistently? · 2019-09-13T22:00:07.808Z · score: 6 (3 votes) · LW · GW

Posting some thoughts I wrote up when first engaging with the question for 10-15 minutes.


The question is phrased as: How Can People Evaluate Complex Questions Consistently? I will be reading a moderate amount into this exact phrasing, specifically that it describes a project whose primary aim is increasing the consistency of answers to questions.


The project strikes me as misguided. It seems to me definitely the case that consistency is an indicator of accuracy, because if your "process" is reliably picking out a fixed situation in the world, then this process will give roughly the same answers when applied over time. Conversely, if I keep getting back disparate answers, then likely whatever answering process is being executed isn't picking up a consistent feature of reality.


Examples:
1) I have a measuring tape and I measure my height. Each time I measure myself, my answer falls within a centimeter range. Likely I'm measuring a real thing in the world with a process that reliably detects it. We know how my brain and my height get entangled, etc.
2) We ask different philosophers about the ethics of euthanasia. They give widely varying answers for widely varying reasons. We might grant that there exists one true answer here, but the philosophers are not all using reliable processes for accessing that true answer. Perhaps some are, but clearly not all are, since they're not converging, which makes it hard to trust any of them.


Under my picture, it really is accuracy that we care about almost all of the time. Consistency/precision is an indicator of accuracy, and lack of consistency is suggestive of lack of accuracy. If you are well entangled with a fixed thing, you should get a fixed answer. Yet, having a fixed answer is not sufficient to guarantee that you are entangled with the fixed thing of interest. ("Thing" is very broad here and includes abstract things like the output of some fixed computation, e.g. morality.)


The real solution/question then is how to increase accuracy, i.e. how to increase your entanglement with reality. Trying to increase consistency separately from accuracy (even at the expense of it!) is mixing up an indicator and symptom with the thing which really matters: whether you're actually determining how reality is.


It does seem we want a consistent process for sentencing and maybe pricing (but that's not so much about truth as it is about "fairness" and planning, where we fear that differing amounts (sentence durations) are not sourced in legitimate differences between cases). But even this could be cast in the light of accuracy too: suppose there is some "true, correct, fair" sentence for a given crime; then we want a process that actually gets that answer. If the process actually works, it will consistently return that answer, which is a fixed aspect of the world. Or we might just choose the thing we want to be entangled with (our system of laws) to be a more predictable/easily-computable one.


I've gotten a little rambly, so let me focus again. I think consistency and precision are important indicators to pay attention to when assessing truth-seeking processes, but when it comes to making improvements, the question should be "how do I increase accuracy/entanglement?" not "how do I increase consistency?" Increasing accuracy is the legitimate method by which you increase consistency. Attempting to increase consistency rather than accuracy is likely a recipe for making accuracy worse, because you're now focusing on the wrong thing.

Comment by ruby on Shallow Review of Consistency in Statement Evaluation · 2019-09-13T04:01:41.203Z · score: 8 (4 votes) · LW · GW

I had a look over Uncertain Judgements: Eliciting Experts' Probabilities, mostly reading through the table of contents and jumping around to read the bits which seemed relevant.

The book is pretty much exactly what the title says: it's all about how to accurately elicit experts' opinions, whatever those opinions might be (as opposed to trying to get experts to be accurate). Much probability/statistics theory is explained (especially Bayesianism), as well as a good deal of heuristics-and-biases material like anchoring-and-adjustment, the affect heuristic, and inside/outside view.

Some points:

  • A repeated point is that experts, notwithstanding their subject expertise, are often not trained in probability and probabilistic thinking such that they're not very good by default at reporting estimates.
    • Part of this is that most people are familiar with probability only in terms of repeatable, random events that are nicely covered by frequentist statistics, and don't know how to give subjective probability estimates well. (The book calls subjective probabilities "personal probabilities".)
    • A suggested solution is giving experts appropriate training, calibration training, etc. in advance of trying to elicit their estimates.
  • There's discussion of coherence (in the sense of conforming to the basic probability theorems). An interesting point is that while it's easy to see if probabilities of mutually exclusive events add up to greater than 1, it can be harder to see if several correlations one believes in are inconsistent (say, resulting in a covariance matrix that isn't positive-definite); see the sketch after this list. Each believed correlation on its own can seem fine to a person even though in aggregate they don't work.
  • Another interesting point is that people are good at reporting the frequency of their own observations of things, but bad at seeing or correcting for the fact that sampling biases can affect what they end up observing.
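Here's the coherence-check sketch referenced above (my own illustration, assuming NumPy): three pairwise correlations that each sound plausible alone, but that cannot jointly come from any real distribution, detectable because the implied correlation matrix has a negative eigenvalue.

```python
import numpy as np

# Three pairwise correlations that each sound plausible on their own:
# corr(X, Y) = 0.9, corr(Y, Z) = 0.9, corr(X, Z) = -0.5
R = np.array([
    [ 1.0, 0.9, -0.5],
    [ 0.9, 1.0,  0.9],
    [-0.5, 0.9,  1.0],
])

# A valid correlation matrix must be positive semi-definite.
eigenvalues = np.linalg.eigvalsh(R)
print(eigenvalues)                    # the smallest eigenvalue is negative
print(np.all(eigenvalues >= -1e-12))  # False: jointly, these beliefs are incoherent
```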

On the whole, kinda interesting stuff on how to actually get at experts' true beliefs, but nothing really specifically on the topic of getting consistent estimates. The closest thing to that seems to be the parts on getting coherent probability estimates from people, though generally the book mixes "accurately elicit experts' beliefs" with "get experts to have accurate, unbiased beliefs."


Comment by ruby on An1lam's Short Form Feed · 2019-09-12T22:24:37.982Z · score: 2 (1 votes) · LW · GW

Cool. Looking forward to it!

Comment by ruby on An1lam's Short Form Feed · 2019-09-12T02:23:23.216Z · score: 2 (1 votes) · LW · GW

Sorry for the delayed reply on this one.

I do think we agree on rather a lot here. A few thoughts:

1. It seems there are separate questions of "how you model yourself / role models and heroes / personal identity" and separate questions of pedagogy.

You might strongly seek unifying principles and elegant theories but believe the correct way to arrive at these and understand these is through lots of real-world messy interactions and examples. That seems pretty right to me.

2. Your examples in this comment do make me update on the importance of engineering types and engineering feats. It makes me think that LessWrong indeed focuses too much on heroes of "understanding" when there are heroes of "making things happen", which is rather a key part of rationality too.

A guess might be that this is downstream of what was focused on in the Sequences and the culture that they set. If I'm interpreting Craft and the Community correctly, Eliezer never saw the Sequences as covering all of rationality or all of what was important, just his own particular sub-art that he created in the course of trying to do one particular thing.

That's my dream—that this highly specialized-seeming art of answering confused questions, may be some of what is needed, in the very beginning, to go and complete the rest.

Seemingly, answering confused questions is more science-y than engineering-y and would place focus on great scientists like Feynman. Unfortunately, the community has not yet supplemented the Sequences with the rest of the art of human rationality, and so most of the LW culture is still downstream of the Sequences alone (mostly). Given that, we can expect the culture is missing major key pieces of what would be the full art, e.g. whatever skills are involved in being Jeff Dean and John Carmack.

My perceived disagreement is more around how much I trust/enjoy theory for its own sake vs. with an eye towards practice.

About that you might be correct. Personally, I do think I enjoy theory even without application. I'm not sure if my mind secretly thinks all topics will find their application, but having applications (beyond what is needed to understand) doesn't feel key to my interest, or something.

Comment by ruby on An1lam's Short Form Feed · 2019-09-12T01:54:31.874Z · score: 2 (1 votes) · LW · GW
You're looking at the wrong thing. Don't look at the topic of their work; look at their cognitive style and overall generativity.

By generativity do you mean "within-domain" generativity?

Carmack is many levels above Pearl.

To unpack which "levels" I was grading on, it's something like a blend of "importance and significance of their work" and "difficulty of the problems they were solving"; admittedly that's still pretty vague. On those dimensions, it seems entirely fair to compare across topics and assert that Pearl was solving more significant and more difficult problem(s) than Carmack. And for that, "style" isn't especially relevant. (This can also be true even if Carmack solved many more problems.)

But I'm curious about your angle - when you say that Carmack is many levels above Pearl, which specific dimensions is that on (generativity and style?) and do you have any examples/links for those?

Comment by ruby on An1lam's Short Form Feed · 2019-09-12T01:46:40.283Z · score: 2 (1 votes) · LW · GW

Seems you're referring to this? https://en.wikipedia.org/wiki/TRIZ

Comment by ruby on Ruby's Public Drafts & Working Notes · 2019-09-09T06:25:50.449Z · score: 13 (3 votes) · LW · GW

A random value walks into a bar. A statistician swivels around in her chair, one tall boot unlaced and an almost full Manhattan sitting a short distance from her right elbow.

"I've been expecting you," she says.

"Have you been waiting long?" respond the value.

"Only for a moment."

"Then you're very on point."

"I've met enough of your kind that there's little risk of me wasting time."

"I assure you I'm quite independent."

"Doesn't mean you're not drawn from the same mold."

"Well, what can I do for you?"

"I was hoping to gain your confidence..."

Comment by ruby on LessWrong Updates - September 2019 · 2019-09-04T22:03:53.492Z · score: 8 (4 votes) · LW · GW

Thanks for flagging that. We're going to disable hover link-previews on mobile since they don't work very well there; should be fixed soon. (And thanks for being opted into beta features.)

Comment by ruby on Open & Welcome Thread - September 2019 · 2019-09-04T02:49:57.733Z · score: 2 (1 votes) · LW · GW

Aside: I approve of you messaging here since here was indeed a place you could reach us.

Comment by ruby on September Bragging Thread · 2019-09-04T02:48:29.980Z · score: 7 (4 votes) · LW · GW

Successfully dislodged!

Comment by ruby on An1lam's Short Form Feed · 2019-09-02T17:29:22.642Z · score: 3 (6 votes) · LW · GW

This is really interesting, I'm glad you wrote this up. I think there's something to it.

Some quick comments:

  • I generally expect there to exist simple underlying principles in most domains which give rise to messiness (and often the messiness seems a bit less messy once you understand them). Perceiving "messiness" does also often feel to me like lack of understanding whereas seeing the underlying unity makes me feel like I get whatever the subject matter is.
  • I think I would like it if LessWrong had more engineers/inventors as role models and that it's something of an oversight that we don't. Yet I also feel like John Carmack probably isn't remotely near the level of Pearl (I'm not that familiar with Carmack's work): pushing forward video game development doesn't compare to neatly figuring out what exactly causality itself is.
    • There might be something to the idea that all truly monumental engineering breakthroughs depended on something like a "scientific" breakthrough. Something like Faraday and Maxwell figuring out theories of electromagnetism is actually a bigger deal than Edison(/others) figuring out the lightbulb, the radio, etc. There are cases of lauded people who are a little more ambiguous on the scientist/engineer dichotomy. Turing? Shannon? Tesla? Shockley et al. with the transistor seems kind of like an engineering breakthrough, and it seems there could be love for that. I wonder if Feynman gets more recognition because, as an educator, we got a lot more of the philosophy underlying his work. Just rambling here.
  • A little on my background: I did an EE degree which was very practically focused. My experience is that I was taught how to apply a lot of equations and make things in the lab, but most courses skimped on providing the real understanding, which left me overall worse as an engineer. The math majors actually understood Linear Algebra, the physicists actually understood electromagnetism, and I knew enough to make some neat things in the lab and pass tests, but I was worse off for not having a deeper "theoretical" understanding. So I feel like I developed more of an identity as an engineer, but came to feel that to be a really great engineer I needed to get the core science better*.

*I have some recollection that Tesla could develop a superior AC electric system because he understood the underlying math better than Edison, but this is a vague recollection.

Comment by ruby on Eli's shortform feed · 2019-09-02T16:44:04.425Z · score: 2 (1 votes) · LW · GW

Someone smart once made a case like this to me in support of a specific substance (can't remember which) as a nootropic, though I'm a bit skeptical.

Comment by ruby on Ruby's Public Drafts & Working Notes · 2019-09-02T16:42:36.030Z · score: 10 (4 votes) · LW · GW

Hypothesis that becomes very salient from managing the LW FB page: "likes and hearts" are a measure of how much people already liked your message/conclusion*.

*And also a measure of how well written it is/how alluring the title is/how actually insightful/how easy to understand, etc. But it also seems that the most popular posts are those which are within the Overton window, have less inferential distance, and a likable message. That's not to say they can't have tremendous value, but it does make me think that the most popular posts are not going to be the same as the most valuable posts, and optimizing for likes is not going to be the same as optimizing for value.

**And maybe this seems very obvious to many already, but it just feels so much more concrete when I'm putting three posts out there a week (all of which I think are great) and seeing which get the strongest response.

***This effect may be strongest at the tails.

****I think this effect would affect Gordon's proposed NPS-rating too.

*****I have less of this feeling on LW proper, but definitely far from zero.

Comment by ruby on Ruby's Public Drafts & Working Notes · 2019-09-02T03:36:34.665Z · score: 13 (6 votes) · LW · GW

It feels like the society I interact with dislikes expression of negative emotions, at least in the sense that expressing negative emotions is kind of a big deal - if someone expresses a negative feeling, it needs to be addressed (fixed, ideally). The discomfort with negative emotions and consequent response acts to a fair degree to suppress their expression. Why mention something you're a little bit sad about if people are going to make a big deal out of it and try to make you feel better, etc., etc.?


Related to the above (with an ambiguously directed causal arrow) is that we lack reliable ways to communicate about negative emotions with something like nuance or precision. If I imagine starting a conversation with a friend by saying "I feel happy", I expect to be given space to clarify the cause, nature, and extent of my happiness. Having clarified these, my friend will react proportionally. Yet when I imagine saying "I feel sad", I expect this to be perceived as "things are bad, you need sympathy, support, etc.", and the whole stage of "clarify cause, nature, extent" is skipped, proceeding instead to a fairly large reaction.


And I wish it wasn't like that. I frequently have minor negative emotions which I think are good, healthy, and adaptive. They might persist for one minute, five minutes, half a day, etc. The same as with my positive emotions. When I get asked how I am, or I'm just looking to connect with others by sharing inner states, then I want to be able to communicate my inner state - even when it's negative - and be able to communicate it precisely. I want to be given space to say "I feel sad on the half-hour scale because relatively minor bad thing X happened" vs "I'm sad on the weeks scale because a major negative life event happened." And I want to be able to express the former without it being a big deal, just a normal thing: sometimes slightly bad things happen and you're slightly sad.

Comment by ruby on How good a proxy for accuracy is precision? · 2019-08-31T04:47:42.014Z · score: 5 (3 votes) · LW · GW

This seems to be a question about the correlation of the two over all processes that generate estimates, which seems very hard to assess. Even supposing you had this correlation over processes, I'm guessing that once you have a specific process in mind, you get to condition on what you know about it in a way that just screens off the prior.

In a given domain though, maybe there are useful priors one could have given what one knows about the particular process. I'll try to think of examples.

Comment by ruby on A Game of Giants [Wait But Why] · 2019-08-30T20:21:06.112Z · score: 7 (6 votes) · LW · GW

I think he's not obviously doing something all that wrong from a learning/modeling perspective. Trying to figure out the answer yourself using your own models (and all the data you've passively accrued yourself) before looking up the answer seems like a rather good way to think and learn.

If he then checks his answer before sharing it with others, and is clear about the process he used, then that's not clearly bad.

(I do think some things he's saying are mistaken/misleading, but the process described in the footnote seems okay.)

Comment by ruby on Matt Goldenberg's Short Form Feed · 2019-08-30T06:13:38.989Z · score: 2 (1 votes) · LW · GW

Just wanted to quickly assert strongly that I wouldn't characterize my post cited above as being only about value disagreements (value disagreements might even be a minority of applicable cases).

Consider Alice and Bob who are aligned on the value of not dying. They are arguing heatedly over whether to stay where they are vs run into the forest.

Alice: "If we stay here the axe murderer will catch us!" Bob: "If we go into the forest the wolves will eat us!!" Alice: "But don't you see, the axe murderer is nearly here!!!"

Same value, still a rock and hard place situation.

Comment by ruby on LessWrong Updates - September 2019 · 2019-08-29T22:26:02.412Z · score: 6 (3 votes) · LW · GW

Hints & Tips

Some good things to know:

  • LessWrong has a Frontpage Post vs Personal Blogpost distinction that helps ensure the relevance of content shown to users by default. If you want to see more content (including community-specific and niche-interest posts), you can enable the Include Personal Blogposts checkbox on the homepage.
  • Want to regularly get LessWrong's Curated Posts in your inbox or RSS reader? You can subscribe on the homepage beneath the Recently Curated section.
  • Don't like the recommendations you're seeing at the top of the homepage? Click the gear icon to modify or hide the From the Archive and Continue Reading sections.
  • Generally have site questions? Try the FAQ or comment here.
Comment by ruby on Intentional Bucket Errors · 2019-08-23T04:10:59.477Z · score: 11 (9 votes) · LW · GW

"Bucket Errors" seem to me to be pretty much the same idea as explained in the less reference post Fallacies of Compression (except the post introducing the former uses "compression"/conflation of different variables to explain psychological resistance to changing one's mind).

In other words, the concept at hand here is compression of the map. On my reading, your post is making the point that compression of your map is sometimes a feature, not a bug - and not just for space reasons.

This is evoking thoughts about compression of data for me, and how PCA and related "compression" techniques often make it easier to see relationships and understand things.

Comment by ruby on Raemon's Scratchpad · 2019-08-17T20:18:30.114Z · score: 9 (5 votes) · LW · GW

I believe I'm one of the people who commented on your strong focus on using the Double-Crux framework recently, but on reflection I think I can clarify my thoughts. I think generally there's a lot to be said for sticking to the framework as explicitly formulated until you learn how to do the thing reliably, and there's a big failure mode of thinking you can skip to the post-formal stage. I think you're right to push on this.

The complication is that I think the Double-Crux framework is still nascent (at least in common knowledge; I believe Eli has advanced models and instincts, but those are hard to communicate and absorb), which means I see us being in a phase of "figuring out how to do Double-Crux right" where the details of the framework are fuzzy and you might be missing pieces, parts of the algorithm, etc.

The danger is then that if you're too rigid in sticking to your current conception of the formal framework of Double-Crux, you might lack the flexibility to see where your theory is failing in practice and where you need to update what you think Double-Crux even should be.

I perceive something of a shift (could be wrong here) where, after some conversations, you started paying more attention to the necessity of model-sharing as a component of Double-Crux, maybe as a preliminary stage to finding cruxes, and this wasn't emphasized before. That's the kind of flexibility I think is needed to realize when the current formalization is insufficient and deviation from it is warranted as part of the experimentation/discovery/development/learning/testing/etc.

Comment by ruby on What experiments would demonstrate "upper limits of augmented working memory?" · 2019-08-16T04:42:38.631Z · score: 4 (3 votes) · LW · GW

Related but not quite the same thing: because of the general inability to visualize or see things beyond 3D (and usually only 2D, because making 3D graphs is a pain, especially in print), we resort to various tricks to examine high dimensional data in lower dimensions. This includes making multiple 2D, two-variable plots comparing only the interactions of pairs out of a larger set of variables (obviously we lose out on more complicated relationships that way), and projecting higher dimensional stuff onto lower dimensions (PCA often being used to do this). Probably other techniques too, but the two above are quite common.

This tutorial does a good job describing the general idea of the above paragraph plus the two techniques described.
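As a rough sketch of both tricks (a grid of pairwise 2D plots, and a PCA projection down to 2D), assuming scikit-learn and matplotlib are available; the dataset is just for illustration:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)  # 4-dimensional data we can't plot directly
n_features = X.shape[1]

# Trick 1: a grid of 2D plots, one for each pair of variables.
fig, axes = plt.subplots(n_features, n_features, figsize=(8, 8))
for i in range(n_features):
    for j in range(n_features):
        axes[i, j].scatter(X[:, j], X[:, i], c=y, s=5)

# Trick 2: project the 4D data onto the two directions of greatest variance.
X_2d = PCA(n_components=2).fit_transform(X)
plt.figure()
plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y, s=5)
plt.show()
```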

Arguably these are external tools which are being used to get around the limitations of human visualization/working memory capacity, tools which are more powerful than just having paper store plain information for you. These are much worse than if you could see in higher dimensions, but they're better than nothing.

Possibly there are generalizations to these kinds of techniques which could work for more general working-memory constraints? Probably they're not researched enough.

Comment by Ruby on [deleted post] 2019-08-15T03:57:45.203Z

^ acknowledged, though I am curious what specific behaviors you have in mind by concern-trolling and whether you can point to any examples on LessWrong.

Reflecting on the conversations in this thread, I'm thinking/remembering that my attention and your (plus others') attention were on different things: if I'm understanding correctly, most of your attention has been on discussions with a political element (money and power) [1], yet I have been focused on (in my mind) pretty much apolitical discussions which have little to do with money or power.

I would venture (though I am not sure), that the norms and moderation requirements/desiderata for those contexts are different and can be dealt with differently. That is, that when someone makes a fact post about exercise or productivity, or someone writes about something to do with their personal psychology, or even someone is conjecturing about society in general-- these cases are all very different from when bad behavior is being pointed out, e.g. in Drowning Children.

I haven't thought much about the latter case; it feels like such posts, while important, are an extreme minority on LessWrong. One in a hundred. The other ninety-nine are not very political at all, unless raw AI safety technical stuff is actually political. I feel much less concerned that there are social pressures pushing to censor views on those topics. I am more concerned that people overall have productive conversations they find on net enjoyable and worthwhile, and this leads me to want to state that it is, all else equal, virtuous to be more "pleasant and considerate" in one's discussions; and, all else equal, one ought to invest in keeping the tone of discussions collaborative/cooperative/not-at-war, etc.

And the question is: maybe I can't actually think about these putatively "apolitical" discussions separately from discussions of more political significance. Maybe whatever norms/virtues we set in the former will dictate how conversations about the latter are allowed to proceed. We have to think about the policies for all types of discussions all at once. I could imagine that being true, though it's not clear to me that it definitely is.

I'm curious what you think.

[1] At one point in the thread you said I'd missed the most important case, and I think this was relative to your focus.

Comment by ruby on Open & Welcome Thread - August 2019 · 2019-08-07T18:51:56.856Z · score: 2 (1 votes) · LW · GW

Welcome!

I'm not aware of a single sequence on those topics, but you might like to start with these "Akrasia Tactics Reviews" posts which link to different interventions/tricks and where people have reported their efficacy for them.

Akrasia Tactics Review

Akrasia Tactics Review 2: The Akrasia Strikes Back

Akrasia Tactics Review 3: The Return of the Akrasia

There have of course been many more posts on Akrasia since then, I'd advise using search (also Google search for "LessWrong Akrasia" for a different algorithm) to hunt around. Sorry about the lack of single Sequence.

An idea I've been thinking about is creating a LW Index (like a book's index, but for the site's content, very similar to tagging) where you could look up Akrasia and find all relevant posts sorted/filtered, etc. as one pleases. This would help retain easy access to our past store of knowledge.


Comment by ruby on [Resource Request] What's the sequence post which explains you should continue to believe things about a particle moving that's moving beyond your ability to observe it? · 2019-08-05T18:05:45.280Z · score: 2 (1 votes) · LW · GW

Good reference! Thanks!

Comment by ruby on [Resource Request] What's the sequence post which explains you should continue to believe things about a particle moving that's moving beyond your ability to observe it? · 2019-08-05T18:05:22.719Z · score: 2 (1 votes) · LW · GW

Just what I was looking for, thanks!

Comment by ruby on Keeping Beliefs Cruxy · 2019-08-03T19:12:07.647Z · score: 2 (1 votes) · LW · GW

Note: I'm pretty behind on sleep. This writing quality is likely sub-par for me.

Yeah, that picture seems correct. I think you do need to do a bunch of model sharing as the first stage in a double-crux like that, and, relatedly, you can't have your cruxes (your **substantive** cruxes) prepared in advance and likely can't even generate them on your own, even given hours to prepare (or days or weeks or months, depending on the thing).

It's not obvious that if I see no reason for the improbable position I will get much from reflecting on my own "single-player" cruxes before the model sharing. For the balloons example, sure, I could try to imagine from scratch (but I don't know much about the person, Guatemala, etc., so I might not do a very good job without more research). It certainly seems unlikely any cruxes I might generate will coincide with theirs and end up being double-cruxes. Seems best to quickly get to the part where they explain their reasoning/models and I see whether I'm convinced / we talk about what would convince me that their reasoning is/isn't correct.

If the majority of double-cruxes are actually like this (that is, I see a reason for something that to you seems low-probability), then that means for those disagreements you won't be able to have cruxes prepared in advance beyond the generic "I don't have models that would suggest this at all, perhaps because I have little info on the target domain". It perhaps means that for all instances where you believe something that other people reasonably have a low prior on, you alone should be prepared with your cruxes, and this basically means there's asymmetrical preparedness except when people are arguing about two specific "low prior probability/complex" beliefs, e.g. minimalism vs information-density.

In the double-crux workshops I've attended, there were attempts to find people who had strong disagreements (clashes of models) about given topics, but this struggled because often one person would believe a strange/low-prior probability thing which others haven't thought about and so basically no one had pre-existing models to argue back with.

Comment by ruby on Keeping Beliefs Cruxy · 2019-08-03T16:20:06.808Z · score: 2 (1 votes) · LW · GW

I was the friend in the story. Here's something which might be part of the picture of resolving this. I had an argument for wanting a party with the structure: A -> B, I also believed A, so I concluded B was true.

We could have double-cruxed over the entailment A -> B or if you already agreed to that, then over the truth of A, where A can be made up of many propositions.

The shift is perhaps going from the double-crux being about a conclusion of what action to take to being about whether an argument (entailment) plus its premises are true. These of course are my cruxes, once/if I can uncover this structure in my beliefs (which it seems like you should be able to do).

Overall though, I'm a bit confused by "negatives" and "null options". In all cases you're arguing about how the world is. You can just always ask "what is my crux that the world is not that way?", "what are the positive beliefs that cause me to think the world is not the way my interlocutor does?"

Comment by ruby on Drive-By Low-Effort Criticism · 2019-08-02T02:04:01.314Z · score: 15 (5 votes) · LW · GW

I think with that approach there are a great many results you'd fail to achieve. People can get animals to do remarkable things with shaping and I would wager that you can't do them at all otherwise.

From the Wikipedia article on Shaping (psychology):

We first give the bird food when it turns slightly in the direction of the spot from any part of the cage. This increases the frequency of such behavior. We then withhold reinforcement until a slight movement is made toward the spot. This again alters the general distribution of behavior without producing a new unit. We continue by reinforcing positions successively closer to the spot, then by reinforcing only when the head is moved slightly forward, and finally only when the beak actually makes contact with the spot. ... The original probability of the response in its final form is very low; in some cases it may even be zero. In this way we can build complicated operants which would never appear in the repertoire of the organism otherwise. By reinforcing a series of successive approximations, we bring a rare response to a very high probability in a short time. ... The total act of turning toward the spot from any point in the box, walking toward it, raising the head, and striking the spot may seem to be a functionally coherent unit of behavior; but it is constructed by a continual process of differential reinforcement from undifferentiated behavior, just as the sculptor shapes his figure from a lump of clay.

Humans are more sophisticated than birds, but producing highly complex and abstruse truths in a format understandable to others is also a lot more complicated than getting a bird to put its beak in a particular spot. I think all the same mechanics are at work. If you want to get someone (including yourself) to do something as complex and difficult as producing valuable, novel, correct expositions of true things on LessWrong, you're going to have to reward the predictable intermediary steps.

We don't go to five year olds and say "the desired result is that you can write fluently, therefore no positive feedback on your marginal efforts until you can do so; in fact, I'm going to strike your knuckles every time you make a spelling error or anything which isn't what we hope to see from you when you're 12; we will only reward the final desired result and you can back-propagate from that to figure out what's good." That's really only a recipe for children who are unwilling to put any effort into learning to write, not for those who progressively put in effort over years to learn what it even looks like to be a competent writer.

This is beyond my earlier point that verifying results in our cases is often much harder than verifying that good steps were being taken.

Comment by ruby on Drive-By Low-Effort Criticism · 2019-08-02T01:52:50.115Z · score: 10 (3 votes) · LW · GW

Strong upvote for clear articulation of points I wanted to see made.

The most obvious benefit is that it is far easier for the party doing the rewarding and punishing: very little cognitive effort is required to assess whether a given result is positive or negative, in stark contrast to the large amounts of effort necessary to decide whether a given strategy has positive or negative expectation.

This part isn't obviously/exactly correct to me. If we're talking about posts and comments on LessWrong, it can be quite hard for me to assess whether a given post is correct or not (although even incorrect posts are often quite valuable parts of the discourse). It might also take a lot of information/effort to arrive at the belief that the strategy of "invest more effort, generate more ideas" ultimately leads to more good ideas, such that incentivizing generation itself is good. However, once I hold that belief, it's relatively easy to apply it. I see someone investing effort in adding to communal knowledge in a way that is plausibly correct/helpful; I then encourage this pro-social contribution despite the fact that evaluating whether the post was actually correct or not* can be extremely difficult.

*"Correct or not" is a bit binary, but even assessing the overall "quality" or "value" of a post doesn't make it much easier to assess. Far harder than number of rabbits. However, if a post doesn't seem obviously wrong (or even if it's clearly wrong but because understandable mistake many people might make), I can often confidently say that it is contributing to communal knowledge (often via the discussion it sparks or simply because someone could correct a reasonable misunderstanding) and I overall want to encourage more of whatever generated it. I'm happy to get more posts like that, even if I seek push for refinements in the process, say.

(Reacts or separate upvote/downvote vs agree/disagree buttons will hopefully make it easier in the future to encourage effort even while expressing that I think something is wrong.)

Comment by ruby on How to Ignore Your Emotions (while also thinking you're awesome at emotions) · 2019-08-01T19:56:29.059Z · score: 2 (1 votes) · LW · GW

Related strange fact: I can voluntarily twitch/flex/wriggle my right pectoral muscle but not my left one. I just can't seem to get it to happen despite the other being easy. Also, I'm left-handed.

I would be surprised if my body was only capable of twitching one side but the not the other, so I'd guess it is some kind of trained neural control, but not that I know how to train it.

Comment by ruby on Drive-By Low-Effort Criticism · 2019-08-01T06:03:03.777Z · score: 37 (10 votes) · LW · GW
One thing I do want to note is that while I think you're pointing at a real phenomena, I don't actually think the two examples you gave for my post are quite pointing at the right thing.

This itself serves as an interesting example. Even if a particular author isn't bothered by certain comments (due to an existing relationship, being unusually stoic, etc), it is still possible for others to perceive those comments as aversive/hostile/negative.

This is a feature of reality worth noticing, even before we determine what the correct response to it is. It would suggest you could have a world with many LessWrong members discussing in a way that they all enjoy, yet which appears hostile and uncivil to the outside world, who assume those participating are doing so despite being upset. This possibly has bad consequences for getting new people to join (those who aren't here). You might expect this if a Nurture-native person were exposed to a Combat culture.

If that's happening a lot, you might do any of the following:

1) shift your subculture to represent the dominant outside one

2) invest in "cultural-onboarding" so that new people learn to understand people aren't unhappy with the comments they're receiving (of course, we want this to be true)

3) create different spaces: ones for new people who are still acculturating, and others for the veterans who know that a blunt critical remark is a sign of respect.

The last one mirrors how most interpersonal relationships progress. At first you invest heavily in politeness to signal your positive intent and friendliness; progressively, as the prior of friendliness is established, fewer overt signals are required and politeness requirements drop; eventually, the prior of friendliness is so high that it's possible to engage in countersignalling behaviors.

A fear I have is that veteran members of blunt and critical spaces (sometimes LW) have learned that critical comments don't have much interpersonal significance and pose little reputational or emotional risk to them. That might be the rational [1] prior from their perspective, given their experience. A new member to the space, bringing priors from the outside world, may rationally infer hostility and attack when they read a casually and bluntly written critical comment. Rather than reading it as someone engaging positively with their post and wanting to discuss, they just feel slighted, unwelcome, and discouraged. This picture remains true even if the person is not usually sensitive or defensive about what they know is well-intentioned criticism. The perception of attack can be the result of appropriate priors about the significance [2] of different actions.

If this picture is correct and we want to recruit new people to LessWrong, we need to figure out some way of ensuring that people know they're being productively engaged with.

--------------------

Coming back to this post. Here there was private information which shifted what state of affairs the cited comments were Bayesian evidence for. Most people wouldn't know that Raemon had requested Unreal copy over the comment from FB (where he'd posted it only partially), or that Raemon has been housemates with Qiaochu for years. In other words, Raemon has strongly established relationships with those commenters and knows them to be friendly to him-- but that's not universal knowledge. The OP's assessment might be very reasonable if you lacked that private info (knowing it myself already, it's hard for me to simulate not knowing it). This is also info it's not at all reasonable to expect all readers of the site to know.

I think it's very unfortunate if someone incorrectly thinks someone else is being attacked or disincentivized from contributing. It's worth thinking about how one might avoid it. There are obviously bad solutions, but that doesn't mean there aren't better ones than just ignoring the problem.

--------------------

[1] Rational as in the sense of reaching the appropriate conclusion with the data available.

[2] By significance I mean what is it Bayesian evidence for.

Comment by ruby on The Importance of Those Who Aren't Here · 2019-08-01T00:25:00.254Z · score: 6 (4 votes) · LW · GW

Strong upvote. This is a consideration that's occurred to me several times in the past weeks, but I hadn't voiced it yet at any point. I'm glad now to have this excellent clear reference for when I want to mention this.

Comment by ruby on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-08-01T00:17:50.356Z · score: 4 (2 votes) · LW · GW
I think if you're judging the impact on that value, then both "freedom of speech" and "not driving people away" begin to trade off against each other in important ways.

Yes, that I agree with, and I'm happy with that framing of it.

I suppose the actual terminal goal is a thing that ought to be clarified and agreed upon. The about page has:

To that end, LessWrong is a place to 1) develop and train rationality, and 2) apply one’s rationality to real-world problems.

But that's pretty brief, doesn't explicitly mention truth, and doesn't distinguish between "uncover important intellectual truths" and "cause all its members to have maximally accurate maps" or something.

Elsewhere, I've talked at length about the goal of intellectual progress for LessWrong. That's also unclear about what specific tradeoffs are implied when pursuing truths.

Important questions; probably the community should discuss them more. (I thought my posting a draft of the new about page would spark this discussion, but it didn't.)

Comment by ruby on Information empathy · 2019-07-30T14:05:05.662Z · score: 2 (1 votes) · LW · GW

My impression is that Theory of Mind, as a technical term, means exactly the thing you've defined. I was about to comment saying the existing term, to me, is fine (and is perhaps better).

Comment by ruby on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-30T01:18:57.108Z · score: 6 (3 votes) · LW · GW

I just reread your post and have a couple more comments.

Jill: The problem is twofold. Firstly, people find it annoying to retread the same conversation over and over. More importantly, this topic usually leads to demon conversations, and I fear that continued discussion of the topic at the rate it's currently discussed could lead to a schism. Both of these outcomes go against our value of being a premier community that attracts the smartest people, as they're actually driving these people away!
Jill: Yes, truthseeking is very important. However, it's clear that just choosing one value as sacred, and not allowing for tradeoffs, can lead to very dysfunctional belief systems. I believe you've pointed at a clear tension in our values as they're currently stated: the tension between freedom of speech and truth, and the value of making a space that people actually want to have intellectual discussions at.

I think it's one thing to say that instrumentally the value of truth is maximized by placing some restrictions on people's ability to express things (e.g. no repeating the same argument again and again, you have to be civil), and a very different thing to treat something like attracting people as a top-level value to be traded off against the value of truth.

My prediction [justification needed] is that if you allow appeals to "but that would be unpopular/drive people away" to be as important as "is it true/does it cause accurate updates?", you will no longer be a place of truth-seeking, and politics will eat you, something, something. Even allowing the question "will it drive people away?" instrumentally for truth is dangerous, but perhaps safer if ultimately you're judging by the impact on truth.

Sorry, I'll work on explaining why I have that prediction. It seems sometimes once a model has become embedded deep enough, it gets difficult to express succinctly in words.

Comment by ruby on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-30T01:00:44.262Z · score: 9 (2 votes) · LW · GW

I think I understand this picture and could pass your ITT (maybe), but I think your proposed org will fail in all but exceptional circumstances for reasons I don't have an immediate great articulation for.

I'll attempt to offer something, but I might need to stew on it longer (plus it's probably a rather long conversation if we were to try to properly resolve it; I'd be up for chatting sometime, or a public Double-Crux or the like. Feel free to reply to this one, but the next round should probably happen elsewhere).

A thing I emphatically agree with is that people are usually covertly pursuing other goals when working on products together. I lean a bit "cynical" here and think it's "expressing trauma and unmet needs" plus typical monkey status competition stuff. Much of the latter is a) subconscious and instinctive (for the reasons given in Elephant in the Brain/Trivers), and b) not stuff you can ever admit to and still succeed at, due to its zero-sum, adversarial nature. I'll collectively call these a person's Other Goals because they're "other" than the stated goal of building a product.

I think that people's (sub)conscious pursuit of Other Goals does interfere with their ability to work on the product, but I think it's perilous for an organization's solution to be to try to ensure everyone is satisfied on their Other Goals enough to work on the product without distraction/compromise. Individuals should attempt to achieve integration/inner harmony, etc., but if an organization tries to create this for them as its primary strategy for dealing with Other Goals, I foresee that opening being exploited ruthlessly by the Other Goals to the detriment of the product. [elaboration/justification needed]

I favor solving the Other Goals problem by being a culture/system which rewards and punishes you for helping or hindering the product. Want to be listened to more? Have good ideas for the product! This requires an emotional maturity of sorts from members, who need to be able to contribute to the actual goal even if it means neglecting their pursuit of Other Goals. ("I concede you're right about minimalism because I care about doing the correct thing for the product and not just winning." Rationalist circles do well here because they reward one socially for this behavior, thereby aligning product goals and Other Goals.)

This isn't to say feelings, emotions, needs, etc. should never be mentioned or dealt with. They should, but carefully, and (as far as the organization is concerned) only secondarily to the mission of creating the product. Definitely, I think people should explicitly deal with interpersonal issues that arise ("I feel disregarded because you never listen and always interrupt" or "I don't feel like I'm getting enough feedback on whether my work is valued"). Definitely, definitely, people should take care not to harm their collaborators psychologically or covertly do zero-sum things. There are also many times when it's good for a person to share what's going on for them and receive support. But all of this only within the context of organizational values that say the product comes first and foremost, and that if something appears to be sucking attention away from the product in a way that is net harmful, that something will be cut.

As I understood it, your envisioned organization makes needs ("Other Goals" in my parlance) first-class concerns in a way I expect the product to lose out to [elaboration needed]. Crucially, to me, it is the product which is far more fragile [elaboration needed].


Comment by ruby on Ruby's Public Drafts & Working Notes · 2019-07-29T17:53:11.683Z · score: 2 (1 votes) · LW · GW

Failed replications notwithstanding, I think there's something to Fixed vs Growth Mindset. In particular, the idea that a Fixed Mindset leads to failure being demoralizing, since failure is then evidence that you are a failure, rings true.

Comment by ruby on Mapping of enneagram to MTG personality types · 2019-07-29T17:36:18.200Z · score: 2 (1 votes) · LW · GW

Interesting attempt!

I identify strongly as Enneagram Type 1 (I feel like the textbook case; the extreme fit is what persuaded me to take a woo personality system seriously), but I also score/feel extremely low on Red. (There's a word-association-identification test somewhere, and that's one place where Red was my minimum.)

A hypothesis I have is that although I do value freedom/flexibility/autonomy (Red stuff), I never feel at much risk of having those taken away (because of my Black?) and so feel no special need to protect them, no need to rebel, etc.

Comment by ruby on Hazard's Shortform Feed · 2019-07-29T17:30:38.268Z · score: 2 (1 votes) · LW · GW

This has been my model too, deriving from EitB. But it's probably not just about preventing over-saturation; it also benefits those who are more skilled at covert signaling to promote a norm that disadvantages those who have the object-level skills but not the covert-signaling skills.

Comment by ruby on Ruby's Public Drafts & Working Notes · 2019-07-28T18:02:27.128Z · score: 10 (5 votes) · LW · GW

Narrative Tension as a Cause of Depression

I only wanted to budget a couple of hours for writing today. Might develop further and polish at a later time.

Related to and an expansion of Identities are [Subconscious] Strategies

Epistemic status: This is non-experimental psychology, my own musings. Presented here is a model derived from thinking about human minds a lot over the years, knowing many people who’ve experienced depression, and my own depression-like states. Treat it as a hypothesis; see if it matches your own data and whether it generates helpful suggestions.

Clarifying “narrative”

In the context of psychology, I use the term narrative to describe the simple models of the world that people hold with varying degrees of implicit vs explicit awareness. They are simple in the sense of being short, being built of concepts which are basic to humans (e.g. people, relationships, roles, but not physics and statistics), and containing unsophisticated blackbox-y causal relationships like “if X then Y, if not X then not Y.”

Two main narratives

I posit that people carry two primary kinds of narratives in their minds:

  • Who I am (the role they are playing), and
  • How my life will go (the progress of their life)

The first specifies the traits they possess and the actions they should take. It’s a role to be played. It’s something people want to be for themselves and want to be seen to be by others. Many roles only work when recognized by others, e.g. the cool kid.

The second encompasses wants, needs, desires, and expectations. It specifies a progression of events and general trajectory towards a desired state.

The two narratives function as a whole. A person believes that by playing a certain role they will attain the life they want. An example: a 17-year-old with a penchant for biology decides they are destined to be a doctor (perhaps there are many in the family); they expect to study hard for the SATs, go pre-med, go to medical school, become a doctor; once they are a doctor, they expect to have a good income, live in a nice house, attract a desirable partner, be respected, and be a good person who helps people.

The structure here is “be a doctor” -> “have a good life” and it specifies the appropriate actions to take to live up to that role and attain the desired life. One fails to live up to the role by doing things like failing to get into med school, which I predict would be extremely distressing to someone who’s predicated their life story on that happening.

Roles needn’t be professional occupations. A role could be “I am the kind, fun-loving, funny, relaxed person who everyone loves to be around”; it specifies certain kinds of behavior and precludes others (e.g. being mean, getting stressed or angry). This role could be attached to a simple causal structure of “be kind, fun-loving, popular” -> “people like me” -> “my life is good.”

Roles needn’t be something that someone has achieved. They are often idealized roles towards which people aspire, attempting to always take actions consistent with achieving those roles, e.g. not yet a doctor but studying for it, not yet funny but practicing.

I haven’t thought much about this angle, but you could tie in self-worth here. A person derives their self-worth from living up to their narrative, and believes they are worthy of the life they desire when they succeed at playing their role.

Getting others to accept our narratives is crucial for most people. I suspect that even when it seems like narratives are held for the self, we're really constructing them for others, and it's just much simpler to have a single narrative than to maintain both "this is my self-narrative for myself" and "this is the self-narrative I want others to believe about me", a la Trivers/Elephant in the Brain.

Maintaining the narrative

A hypothesis I have is that one of the core ways people choose their actions is by reference to which actions would maintain their narrative. Further, most events that occur to people are evaluated with reference to whether the event helps or damages the narrative. How upsetting is it to be passed over for a promotion? It might depend on whether your self-narrative is “high-achiever” or “team-player and/or stoic.”

Sometimes it’s just about maintaining the how my life will go element: “I’ll move to New York City, have two kids and a dog, vacation each year in Havana, and volunteer at my local Church” might be a story someone has been telling themselves for a long time. They work towards it and will become distressed if any part of it starts to seem implausible.

You can also see narratives as specifying the virtues that an individual will try to act in accordance with.

Narrative Tension

Invariably, some people encounter difficulty living up to their narratives. What of the young sprinter who believes their desired future requires them to win Olympic gold yet is failing to perform? Or the aspiring parent who in their mid-thirties is struggling to find a co-parent? Or the person who believes they should be popular, yet is often excluded? Or the start-up founder wannabe who’s unable to obtain funding yet again for their third project?

What happens when you are unable to play the role you staked your identity on?

What happens when the life you’ve dreamed of seems unattainable?

I call this narrative tension: the tension between reality and the story one wants to be true. In milder amounts, when hope is not yet lost, it can be a source of tremendous drive. People work longer and harder, anything to keep the dream alive.

Yet if the attempts fail (or it was already definitively over), then they have to reconcile themselves to the fact that they cannot live out that story. They are not that person, and their life isn’t going to look like that.

It is crushing.

Heck, even just the fear that this might be the case, even when the narrative could in fact still be entirely achievable, can be crushing.

Healthy and Unhealthy Depression

Related: Eliezer on depression and rumination

I can imagine that depression could serve an important adaptive function when it occurs in the right amounts and at the right times. A person confronted with the possible death of their narratives either: a) reflects and determines they need to change their approach, or b) grieves and seeks to construct new narratives to guide their life. This is facilitated by a withdrawal from their normal life and a disengagement from typical activities. Sometimes the subconscious mind forces this on a person who would otherwise drive themselves into the ground vainly trying to cling to a narrative that won’t float.

Yet I could see this all failing if a person refuses to grieve and refuses to modify their narrative. If their attitude is “I’m a doctor in my heart of hearts and I could never be anything else!” then they’ll fail to consider whether being a dentist or nurse or something else might be the next best thing for them. A person who’s only ever believed (implicitly or explicitly) that being the best is the only strategy for them to be liked and respected won’t even ponder how other people who aren’t the best in their league ever get liked or respected, and whether they might do the same.

Depressed people think things like:

  • I am a failure.
  • No one will ever love me.
  • I will never be happy.

One lens on this might be that some people are unwilling to give up a bucket error whereby they lump their life satisfaction/achievement of their values together with the achievement of one specific narrative. So once they believe the narrative is dead, they believe all is lost.

They get stuck. They despair.

It’s despair which I’ve begun to see as the hallmark of depression, present to some degree or other in all the people I’ve personally known to be depressed. They see no way forward. Stuck.

[Eliezer's hypothesis of depressed individuals wanting others to validate their retelling of past events seems entirely compatible with people wanting to maintain narratives and seeking indication that others still accept their narrative, e.g. of being good person.]

Narrative Therapy

To conjecture on how the models here could be used to help: I think the first order of business is to try to uncover a person’s narratives, i.e. everything they model about who they’re supposed to be and how their life should look and progress. The examples I’ve given here are simplified. Narratives are simple relative to full causal models of reality, but a person’s self-narrative will still have many pieces, distributed over parts of their mind, often partitioned by context, etc. I expect doing this to require time, effort, and skill.

Eventually, once you’ve got the narrative models exposed, they can be investigated and supplemented with full causal reasoning. “Why don’t we break down the reasons you want to be a doctor and see what else might be a good solution?” “Why don’t we list out all the different things that make people likable and see which ones you might be capable of?”

I see CBT and ACT as each offering elements of this. CBT attempts to expose many of one’s simple implicit models and note where the implied reasoning is fallacious. ACT instructs people to identify their values and find the best way to live up to them, even if they can’t get their first-choice way of doing so, e.g. “you can’t afford to travel, but you can afford to eat foreign cuisine locally.”

My intuition though is that many people are extremely reluctant to give up any part of their narrative and very sensitive to attempts to modify any part of it. This makes sense if they’re in the grips of a bucket error where making any allowance feels like giving up on everything they value. The goal of course is to achieve flexible reasoning.

Why this additional construct?

Is it really necessary to talk about narratives? Couldn’t I have just described people wanting things and making plans? Of course people get upset when they fail to get what they want and their plans fail!

I think the narratives model is important for highlighting a few elements:

  1. The kind of thinking used here is very roles-based in a very deep way: what kind of person I am, what do I do, how do I relate to others and they relate to me.
  2. The thinking is very simplistic, likely a result of originating heavily from System 1. This thinking does not employ a person’s full ability to causally model the world.
  3. Because of 2), the narratives are much more inflexible than a person’s general thinking. Everything is all or nothing, compromises are not considered, it’s that narrative or bust.

Comment by ruby on The AI Timelines Scam · 2019-07-22T20:02:49.163Z · score: 4 (2 votes) · LW · GW

Agree, good question.

I was going to say much the same. I think it kind of is a noncentral fallacy too, but not one that strikes me as problematic.

Perhaps I'd add that I feel the argument/persuasion Eliezer is making doesn't really rest on trying to import my valence towards "swindle" over to this. I don't have that much valence towards a funny, obscure word.

I guess it has to be said that it's a noncentral noncentral fallacy.

Comment by ruby on Raemon's Scratchpad · 2019-07-21T23:11:12.196Z · score: 7 (3 votes) · LW · GW

Me: *makes joke*

Vaniver: I want you to post it on LessWrong so I can downvote it.

Comment by ruby on Dialogue on Appeals to Consequences · 2019-07-21T23:09:01.753Z · score: 2 (1 votes) · LW · GW

Yeah, granted that it's going to be rough.

5x seems consistent with the raw activity numbers, though. Eyeballing it, it seems like 4x more activity in terms of comments and commenters. The number of posts is pretty close.

Comment by ruby on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-21T23:00:57.897Z · score: 2 (1 votes) · LW · GW

Thanks, fixed! It's a little bit repetitive with everything else I've written lately, but maybe I'm getting it clearer with each iteration.

Comment by ruby on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-21T22:52:35.197Z · score: 16 (4 votes) · LW · GW

I hereby proclaim that "feelings of safety" be shortened to "fafety." The domain of worrying about fafety is now "fafety concerns."

Problem solved. All in a day's work.

Comment by ruby on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-21T22:49:32.600Z · score: 2 (1 votes) · LW · GW
I agree with this, but it's quite a mouthful to deal with

Yeah, but there's a really big difference! You can't give up that precision.

This seems rightish, but off in really important ways that I can't articulate.

Nods. Also agree that "collective responsibility" is not the most helpful concept to talk about.

Note that this issue is explicitly addressed in the original dialogue. If someone's feelings are hurting the discourse, they need to take responsibility for that just as much as I need to take responsibility for hurting their feelings.

Indeed, the fact that people can say "It feels like your need for safety is getting in the way of truth-seeking" is crucial for it to have any chance.

My expectation, though, based on related real-life experience, is that if appealing to your need for safety is an option, there will be people who abuse it and use it to suck up a lot of time and attention. Technically someone could deny their claim and move on, but this will happen much later than is optimal, and in the meantime everyone's attention has been sucked into a great drama. Attempts to say "your safety is disrupting truth-seeking" get accused of being attempts to oppress someone, etc.

This is all imagining how it would go with typical humans. I'm guessing you're imagining better-than-typical people in your org who won't have the same failure mode, so maybe it'll be fine. I'm mostly anchored on how I expect that approach to go if applied to most humans I've known (especially those really into caring about feelings, who'd be likely to sign up for it).