Comment by raemon on Drowning children are rare · 2019-07-17T06:51:48.259Z · score: 5 (2 votes) · LW · GW

A friend also recently mentioned to me that they'd gotten this email, and yes, this does significantly change my outlook here.

Doublecrux is for Building Products

2019-07-17T06:50:26.409Z · score: 10 (1 votes)
Comment by raemon on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-17T05:52:42.333Z · score: 3 (1 votes) · LW · GW

I'm not sure I agree with all the details of this (it's not obvious to me that humans are friendly if you scale them up) but I agree that the orientation towards clarity likely has important analogues to the AI Alignment problem.

Comment by raemon on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-17T05:51:26.021Z · score: 3 (1 votes) · LW · GW

Thanks. This all makes sense. I think I have a bunch more thoughts but for the immediate future will just let that sink in a bit.

Comment by raemon on Integrity and accountability are core parts of rationality · 2019-07-17T04:28:03.640Z · score: 3 (1 votes) · LW · GW

I do think people (including myself) tend towards adopting politically expedient beliefs when there is pressure to do so (esp. when their job, community, or narrative is on the line).

This is based in part on personal experience, and on developing the skill of noticing what motions my brain makes in what circumstances.

Comment by raemon on Integrity and accountability are core parts of rationality · 2019-07-17T04:15:42.674Z · score: 3 (1 votes) · LW · GW

Can't answer for habryka, but my current guess of what you're pointing at here is something like: "the sort of drive towards consistency is part of an overall pattern that seems net harmful, and the correct action is more like stopping and thinking than like 'trying to do better at what you were currently doing'."

(You haven't yet told me if this comment was successfully passing your ITT, but it's my working model of your frame)

I think habryka (and separately, but not coincidentally, me) has a belief that he's the sort of person for whom looking for opportunities to improve consistency is beneficial. I'm not sure whether you're disagreeing with that, or whether your point is more that the median LessWronger will take the wrong advice from this?

[Assuming I've got your frame right, I obviously disagree quite a bit – but I'm not sure what to do about it locally, here]

Comment by raemon on Open Thread July 2019 · 2019-07-16T22:52:17.316Z · score: 3 (1 votes) · LW · GW

Addendum: my Strategies of Personal Growth post is also particularly downstream of CFAR. (I realize that much of it is something you can find elsewhere. My perspective is that the main product CFAR provides is a culture that makes it easier to orient towards this sort of thing, and stick with it. CFAR iterates on "what combination of techniques can you present to a person in 4 days that best helps jump-start them into that culture?", and they chose that feedback-loop-cycle after exploring others and finding them less effective)

One salient thing from the Strategies of Personal Growth perspective (which I attribute to exploration by CFAR researchers) is that many of the biggest improvements you can gain come from healing and removing psychological blockers.

Comment by raemon on What is your Personal Knowledge Management system? · 2019-07-16T22:43:41.900Z · score: 5 (2 votes) · LW · GW

I endorse the tips/tricks section here (and it seems like the most important bit because different individuals have idiosyncrasies that make different tools useful).

Comment by raemon on Open Thread July 2019 · 2019-07-16T22:29:34.838Z · score: 6 (3 votes) · LW · GW

I think I should actually punt this question to Kaj_Sotala, since they are his posts, and the meta rule is that authors get to set the norms on their posts. But:

a) if I had written the posts, I would see them as "yes, now these are actually at the stage where the sort of critique Said does is more relevant." I still think it'd be most useful if you came at it from the frame of "What product is Kaj trying to build, and if I think that product isn't useful, are there different products that would better solve the problem that Kaj's product is trying to solve?"

b) relatedly, if you have criticism of the Sunset at Noon content I'd be interested in that. (this is not a general rule about whether I want critiques of that sort. Most of my work is downstream of CFAR paradigm stuff, and I don't want most of my work to turn into a debate about CFAR. But it does seem interesting to revisit SaN through the "how content that Raemon attributes to CFAR holds up to Said" lens)

c) Even if Kaj prefers you not to engage with them (or to engage only in particular ways), it would be fine under the meta-rules for you to start a separate post and/or discussion thread for the purpose of critiquing. I actually think the most useful thing you might do is write a more extensive post that critiques the sequence as a whole.

Comment by raemon on Open Thread July 2019 · 2019-07-16T21:16:10.616Z · score: 6 (3 votes) · LW · GW

On the "is there something worth teaching there" front, I think you're just wrong, and obviously so from my perspective (since I have, in fact, learned things. Sunset at Noon is probably the best writeup of what CFAR-descended things I've learned and why they're valuable to me).

This doesn't mean you're obligated to believe me. I put moderate probability on "There is variation on what techniques are useful for what people, and Said's mind is shaped such that the CFAR paradigm isn't useful, and it will never be legible to Said that the CFAR paradigm is useful." But, enough words have been spent trying to demonstrate things to you that seem obvious to me that it doesn't seem worth further time on it.

The Multi-Agent Model of Mind is the best current writeup of (one of) the important elements of what I think of as the CFAR paradigm. I think it'd be more useful for you to critique that than to continue this conversation.

Comment by raemon on Integrity and accountability are core parts of rationality · 2019-07-16T20:18:39.840Z · score: 3 (1 votes) · LW · GW

I think this is the most helpful encapsulation I've gotten of your preferred meta-frame.

I think I mostly just agree with it now that it's spelled out a bit better. (I think I have some disagreements about how exactly rationalist forums should relate to this, and what moods are useful. But in this case I basically agree that the actions you suggest at the end are the right move, and it seems better to focus on that).

Comment by raemon on Open Thread July 2019 · 2019-07-16T19:55:18.467Z · score: 10 (2 votes) · LW · GW

Noting that I do agree with this particular claim.

I see the situation as:

  • There are, in fact, good reasons that it's hard to communicate and demonstrate some things, and that hyperfocus on "what can be made legible to a third party" results in a lot of looking under street lamps, rather than where the value actually is. I have very different priors than Said on how suspicious CFAR's actions are, as well as different firsthand experience that leads me to believe there's a lot of value in CFAR's work that Said presumably dismisses.
    • [this is not "zero suspicion", but I bet my suspicion takes a very different shape than Said's]
  • But, it's still important for group rationality to have sound game theory re: what sort of ideas gain what sort of momentum. An important meta-agreement/policy is for people and organizations to be clear about the epistemic status of their ideas and positions.
    • I think it takes effort to maintain the right epistemic state, as groups and as individuals. So I think it would have been better if CFAR explicitly stated in their handbook, or in a public blogpost*, that "yes, this limits how much people should trust us, and right now we think it's more important for us to focus on developing a good product than trying to make our current ideas legibly trustworthy."
    • As habryka goes into here, I think there are some benefits for researchers to focus internally for a while. The benefits are high bandwidth communication, and being able to push ideas farther and faster than they would if they were documenting every possibly-dead-end-approach at every step of the way. But, after a few years of this, it's important to write up your findings in a more public/legible way, both so that others can critique it and so that others can build on it.
      • CFAR seems overdue for this. But, also, CFAR has had lots of staff turnover by now, so it may be less useful to think in terms of "what CFAR ought to do" than in terms of "what people who are invested in the CFAR paradigm should do." (The Multi-Agent Models sequence is a good step here. I think good next steps would be someone writing up several other aspects of the CFAR paradigm with a similar degree of clarity, and good steps after that would be to think about what good critique/evaluation would look like)
      • I see this as needing something of a two-way contract, where:
        • Private researchers credibly commit to doing more public writeups (even though it's a lot of work that often won't result in immediate benefit), and at the very least writing up quick, clear epistemic statuses of how people should relate to the research in the meanwhile.
        • Third party skeptics develop a better understanding of what sort of standards are reasonable, and "cutting researchers exactly the right amount of slack." I think there's good reason at this point to be like "geez, CFAR, can you actually write up your stuff and put reasonable disclaimers on things and not ride a wave of vague illegible endorsement?" But my impression is that even if CFAR did all the right things and checked all the right boxes, people would still be frustrated, because the domain CFAR is trying to excel at is in fact very difficult, and a rush towards demonstrability wouldn't be useful. And I think good criticism needs to understand that.

*I'm not actually sure they haven't made a public blogpost about this.

Comment by raemon on Integrity and accountability are core parts of rationality · 2019-07-16T18:25:57.335Z · score: 3 (1 votes) · LW · GW

I think you're focusing on the "competence" here when the active ingredient was more the "position of power" thing.

Comment by raemon on Integrity and accountability are core parts of rationality · 2019-07-16T03:03:49.511Z · score: 7 (3 votes) · LW · GW

Part of the point as I saw it was that being accountable to a group limits the complexity of the types of moral logic you can be guided by.

i.e., if I'm accountable to all employees at work, my moral principles have to be simpler, and probably have to account for asymmetric justice. This doesn't necessarily mean I shouldn't be accountable to all the employees at work (if I'm their employer, or a fellow employee). But I saw the point of this post as "be wary of how exactly you operationalize that."

Comment by raemon on Integrity and accountability are core parts of rationality · 2019-07-16T02:58:41.466Z · score: 10 (5 votes) · LW · GW

I like this frame.

A related thing it brings to mind is something like "if you speak in support of something [a project or org] because you believe in it, and then later change your mind about it and think it's less good or harmful, you've done something bad to the commons by lending your credibility and then leaving that inertia there."

(This could be woven directly into the ritual frame by having something kinda like swearing an oath in court, where you say "I'm making these claims to the best of my ability as a rationalist, upon my word. Furthermore, if I am to change my mind about these claims, I promise to make a good faith effort to do so publicly". Or variations on that)

Comment by raemon on Raemon's Shortform · 2019-07-15T23:19:09.895Z · score: 6 (3 votes) · LW · GW

Possible UI:

What if the RecentDiscussion section specifically focused on comments from old posts, rather than comments from posts which currently appear in Latest Posts? This might be useful because you can already see updates to current discussions (since comments turn green when unread, and/or comment counts go up), but can't easily see new comments on older posts.

(You could also have multiple settings that handled this differently, but I think this might be a good default setting to ensure comments on old posts get a bit more visibility)
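To make the idea concrete, here's a minimal TypeScript sketch of what such a default filter could look like. All of the types, field names, and the 14-day cutoff are hypothetical illustrations, not anything from the actual LessWrong codebase.

```typescript
interface Post {
  id: string;
  postedAt: Date;
  inLatestPosts: boolean; // hypothetical flag: post is currently visible in Latest Posts
}

interface Comment {
  id: string;
  postId: string;
  postedAt: Date;
}

// Hypothetical default: Recent Discussion surfaces only comments whose parent
// post is old enough to have fallen off Latest Posts, since activity on those
// posts is otherwise easy to miss.
function recentDiscussionFeed(
  comments: Comment[],
  postsById: Map<string, Post>,
  minPostAgeDays = 14,
): Comment[] {
  const cutoff = Date.now() - minPostAgeDays * 24 * 60 * 60 * 1000;
  return comments
    .filter((c) => {
      const post = postsById.get(c.postId);
      return post !== undefined && !post.inLatestPosts && post.postedAt.getTime() < cutoff;
    })
    .sort((a, b) => b.postedAt.getTime() - a.postedAt.getTime());
}
```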

Comment by raemon on Commentary On "The Abolition of Man" · 2019-07-15T23:08:57.105Z · score: 7 (4 votes) · LW · GW

This

The Chest-Magnanimity-Sentiment--these are the indispensable liaison officers between cerebral man and visceral man. It may even be said that it is by this middle element that man is man; for by his intellect he is a mere spirit and by his appetite mere animal.

and

This reminded me of Bayesians vs. Barbarians, with a new dimension added; it is not that the Barbarians gain from having less in their head, it is that the Bayesians lost because they forgot to develop their chests.

Feels like it's getting at something real, but I'd be interested in checking how this grounds out in something physiologically real. What are the gears inside "develop your chest?"

Comment by raemon on Why artificial optimism? · 2019-07-15T23:00:39.315Z · score: 3 (1 votes) · LW · GW

Hmm, okay that makes sense. [I think there might be other models for what's going on here but agree that this model is plausible and doesn't require the multi-agent model]

Comment by raemon on Why artificial optimism? · 2019-07-15T22:47:35.053Z · score: 3 (1 votes) · LW · GW

I think I'm asking the same question of Said: "how is this the same phenomenon as someone saying 'I'm fine', if not relying on [something akin to] the multi-agent model of mind?" Otherwise it looks like it's built out of quite different parts, even if they have some metaphorical similarities.

Comment by raemon on Why artificial optimism? · 2019-07-15T22:28:51.266Z · score: 3 (1 votes) · LW · GW

I think there might be something similar going on in the group optimism bias vs the individual one, but that this depends somewhat on whether you accept the multi-agent model of mind.

Comment by raemon on How Should We Critique Research? A Decision Perspective · 2019-07-15T22:24:45.468Z · score: 5 (2 votes) · LW · GW

Hmm. Maybe it does. I guess what I normally do when writing a distillation is check whether the abstract was sufficient.

(I have not yet read the post in full, but predicted that if I had, I'd want something that looked more like this as a distillation)

I generally want two things out of a distillation:

  • An 80/20 of the post that does at least some work to clarify any assumptions the post is working within, and gives some examples of the higher level details of the post that get across some pieces of the post's generators, as well as its content. (This is mostly for people who don't have time to read the whole post)
  • A skimmable document that I can use to refer back to the post in medium resolution, after I've actually read it. Where the point is to keep markers for each major concept the post introduces within a single visual field, which I can easily use to extend my working memory.
Comment by raemon on LW authors: How many clusters of norms do you (personally) want? · 2019-07-15T21:49:13.244Z · score: 11 (5 votes) · LW · GW

LW team has discussed a few options for changing the rules for strong votes, with possibilities including:

  • All strong votes require a (short) explanation
  • Strong vote power decays if you use it all the time
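For the second option, here's a rough sketch of one shape a decay rule could take. The exponential form, the half-life, and the floor of one point are invented for illustration; they're not anything the LW team has actually specified.

```typescript
// Illustrative only: strong-vote weight decays exponentially with how many
// strong votes the user has cast in a recent window, recovering as the window
// clears. The base weight, half-life, and lookback window are made-up parameters.
function strongVoteWeight(
  baseWeight: number,          // the user's normal strong-vote power
  recentStrongVotes: number,   // strong votes cast in the lookback window
  halfLife = 10,               // number of recent strong votes that halves the weight
): number {
  const decayed = baseWeight * Math.pow(0.5, recentStrongVotes / halfLife);
  return Math.max(1, Math.round(decayed)); // never drops below a normal vote
}

// e.g. strongVoteWeight(8, 0) === 8, while strongVoteWeight(8, 10) === 4
```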
Comment by raemon on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-15T21:24:29.850Z · score: 19 (6 votes) · LW · GW

Okay. I think I have a somewhat better handle on some of the nuances here and how various pieces of your worldview fit together. I think I'd previously been tracking and responding to a few distinct disagreements I had, and it'd make sense if those disagreements didn't land because I wasn't tracking the entirety of the framework at once.

Let me know how this sounds as an ITT:

  • Thinking and building a life for yourself
    • Much of civilization (and the rationalsphere as a subset of it and/or memeplex that's influenced and constrained by it) is generally pointed in the wrong direction. This has many facets, many of which reinforce each other:
      • Schools systematically teach people to associate reason with listening-to/pleasing-teachers, or moving-words-around unconnected from reality. [Order of the Soul]
      • Society systematically pushes people to live apart from each other, and to work until they need (or believe they need) palliatives, in a way that doesn't give them space to think [Sabbath Hard and Go Home]
      • Relatedly, society provides structure that incentivizes you to advance in arbitrary hierarchies, or to tread water and barely stay afloat, without reflection on what you actually want.
    • By contrast, for much of history, there was a much more direct connection between what you did, how you thought, and how your own life was bettered. If you wanted a nicer home, you built a nicer home. This came with many overlapping incentive structures that reinforced something closer to living healthily and generating real value.
    • (I'm guessing a significant confusion was me seeing this whole section as only moderately connected rather than central to the other sections)
  • We desperately need clarity
    • There's a collection of pressures, in many-but-not-all situations, to keep both facts and decision-making principles obfuscated, and to warp language in a way that enables that. This is often part of an overall strategy (sometimes conscious, sometimes unconscious) to maneuver groups for personal gain.
    • It's important to be able to speak plainly about forces that obfuscate. It's important to lean fully into clarity and plainspeak, not just taking marginal steps towards it, both because clear language is very powerful intrinsically, and because there's a sharp dropoff as soon as ambiguity leaks in (moving the conversation to higher simulacrum levels, at which point it's very hard to recover clarity)
  • [Least confident] The best focus is on your own development, rather than optimizing systems or other people
    • Here I become a lot less confident. This is my attempt to summarize whatever's going on in our disagreement about my "When coordinating at scale, communicating has to reduce gracefully to about 5 words" thing. I had an impression that this seemed deeply wrong, confusing, or threatening to you. I still don't really understand why. But my best guesses include:
      • This is putting the locus of control in the group, at a moment-in-history where the most important thing is reasserting individual agency and thinking for yourself (because many groups are doing the wrong-things listed above)
      • Insofar as group coordination is a lens to be looked through, it's important that groups are working in a way that respects everyone's agency and ability to think (to avoid falling into some of the failure modes associated with the first bullet point), and simplifying your message so that others can hear/act on it is part of an overall strategy that is causing harm
      • Possibly a simpler "people can and should read a lot and engage with more nuanced models, and most of the reason you might think that they can't is because school and hierarchical companies warped your thinking about that?"

And then, in light of all that, something is off with my mood when I'm engaging with individual pieces of that, because I'm not properly oriented around the other pieces?

Does that sound right? Are there important things left out or gotten wrong?

Comment by raemon on How Should We Critique Research? A Decision Perspective · 2019-07-15T20:42:02.487Z · score: 5 (2 votes) · LW · GW

I'm too busy to do so today, but I'd appreciate it if someone wrote up a comment that distilled this down into a summary.

Comment by Raemon on [deleted post] 2019-07-15T20:21:29.882Z

[meta note, replying to you because we don't yet have a good process for notifications that don't rely on replying to a person:

I don't know that this went anywhere important enough to publish, but fwiw, since my model of you puts at least some value on things being public and I don't personally object, if you wanted me to turn this from a draft into a public post that'd be fine.]

Comment by raemon on Reclaiming Eddie Willers · 2019-07-15T20:01:28.853Z · score: 3 (1 votes) · LW · GW

In case it matters, the original quote was "vampires", not snakes.

Comment by raemon on Open Thread July 2019 · 2019-07-15T19:52:24.278Z · score: 7 (3 votes) · LW · GW

Quick note for your model of how people interpret various kinds of writing, my initial read of your comment was to put a 60% probability on "Zack is currently undergoing a pendulum swing in the direction away from calling people out on lying, and overcompensating." (which was wrong and/or overconfident on my part)

Comment by raemon on Reclaiming Eddie Willers · 2019-07-15T19:49:24.698Z · score: 12 (2 votes) · LW · GW

Some thoughts from a different aspect of Hufflepuff-ness (the "niceness" aspect)

There's a kind of midwestern person who grows up in a small town where everyone is nice all the time. The Being Nice provides a clear benefit in the form of, well, the place is nice. It provides a less legible benefit of "the process of everyone being nice to each other builds group cohesion."

And that person comes to the big city. And being "nice" is no longer adaptive. And the people of the city also haven't quite figured out how to adapt to it. (Something from Vaniver's recent post feels relevant here. The Tao of being in a midwestern town is not the same as the Tao of being in a big city. But also, big cities are full of people who came from places with oddly specific Tao, and they haven't all figured out the Big City Tao, which often means the whole thing is afflicted by a vague pathology)

The Nice person starts out by trying to be nice.

Alas: "A Hufflepuff surrounded by Slytherins will surely wither and die as if they were surrounded by vampires."

The Nice person helps the people around them, at first hoping/assuming that they will be helped in turn, which never happens. Eventually the Nice person becomes a burnt out version of themselves, unhappy. And maybe they leave the Big City, or maybe they stay and are unhappy, or maybe eventually they find a small enclave of Nice people, or maybe they stop being Nice, or maybe they manage to keep being nice and just sort of accept that others won't be nice back.

But, the thing that would have been particularly valuable is if the Nice person had realized, and internalized, that Being Nice in the big city needs to look quite different from Being Nice in the midwestern town.

Being Nice in the big city requires backbone, in a way that Being Nice in the midwestern town was fundamentally about not needing backbone. [Maybe. I haven't actually lived in a midwestern town so I'm not sure I grok it]. My sense is that in the small town, the fact that everyone can trust each other without having to have their guard up is part of the magic that is going on.

In any case, in the big city, you need your guard up. And you need backbone to enforce your boundaries to avoid getting consumed. But more interestingly, you need backbone to be nice.

In Hufflepuff Leadership, I described this as:

There's an important skill, early on in the Hufflepuff Skill Tree, which is something like "Collaborative Leadership."
The Hufflepuff strategy of "everyone pitching in to keep things nice" requires a mechanism to cause there to be a lot of people pitching in. If you're going to attempt to keep a place nice this way, you need such a mechanism. This requires a certain kind of leadership.
It doesn't need to feel like bossing people around – it can feel like "people making friends and helping each other out". But it does require a certain kind of assertiveness.
If you're pitching in and helping out just because you like to, and are okay with the notion that others might not do so, coolio. But if your goal is to keep a place nice, instead of making it nice for this particular afternoon, this skill is really important.

With followup:

If you're the sort of person in a space where people come-and-go a lot, and as such, it's continuously important to be building Fight Entropy Capacity...
...and you have a natural impulse to, say, see that the garbage needs taking out and then Do So...
...then whenever possible, you should try replacing that impulse with something like the following:
— Find another person who can see the garbage from where they're sitting
— Say "Hey, want to help me take out the garbage?" (this works best if there's multiple bins, recycling, etc, so you legitimately could use some help)
— Show them how to actually do so (since it's often not clear what to do with a full garbage bag), and then where to get a new bag for the newly empty bin.
— End the interaction, not with a pitch for them to help take out the garbage themselves in the future, but to ask other people for help the way you just asked them, and to show them how to do it, so that the body of people who've ever thought about how to keep the space clean can grow.

I'm not sure that description actually quite works (and in any case it requires a number of social skills). It also applies differently in domains other than "take out the garbage."

But the core idea is that, in the big city, or anywhere that doesn't have a preexisting, self-propagating Niceness Meme, Being Nice requires leadership. Leadership requires agency, and willingness to deal with conflict, and the ability to actually figure out what's right for yourself.

And there's an important move people need to learn (not just for "being nice"), which is to backpropagate the awareness that "In this environment, niceness requires leadership" into their aesthetic of why Being Nice is good and beautiful and right.

Being Nice in the big city is still good and beautiful and right. But it's a different kind of good-and-beautiful-and-right.

I think there's something similar going on with loyalty. (I think the discussion in the other comment threads here are already roughly grappling with the right questions here)

Comment by raemon on LW authors: How many clusters of norms do you (personally) want? · 2019-07-15T17:19:40.803Z · score: 3 (1 votes) · LW · GW

Yeah I was like "wtf bro?".

Comment by raemon on Raemon's Shortform · 2019-07-14T18:40:38.034Z · score: 5 (2 votes) · LW · GW

Weird thoughts on 'shortform'

1) I think most of the value of shortform is "getting started writing things that turn out to just be regular posts, in an environment that feels less effortful."

2) Relatedly, "shortform" isn't quite the right phrase, since a lot of things end up being longer. "Casual" or "Off-the-cuff" might be better?

Comment by raemon on Diversify Your Friendship Portfolio · 2019-07-14T18:38:04.174Z · score: 5 (2 votes) · LW · GW

I think part of the point of the OP was to get outside that still relatively narrow set of subcultures.

Comment by raemon on Raemon's Shortform · 2019-07-14T18:36:44.494Z · score: 11 (2 votes) · LW · GW

Just spent a weekend at the Internet Intellectual Infrastructure Retreat. One thing I came away with was a slightly better sense of forecasting and prediction markets, and how they might be expected to unfold as an institution.

I initially had a sense that forecasting, and predictions in particular, was sort of "looking at the easy to measure/think about stuff, which isn't necessarily the stuff that's connected to what matters most."

Tournaments over Prediction Markets

Prediction markets are often illegal or sketchily legal. But prediction tournaments are not, so this is how most forecasting is done.

The Good Judgment Project

Held an open tournament, the winners of which became "Superforecasters". Those people now... I think basically work as professional forecasters, who rent out their services to companies, NGOs and governments that have a concrete use for knowing how likely a given country is to go to war, or something. (I think they'd been hired sometimes by Open Phil?)

Vague impression that they mostly focus on geopolitics stuff?

High Volume and Metaforecasting

Ozzie described a vision where lots of forecasters are predicting things all the time, which establishes how calibrated they are. This lets you do things like "have one good forecaster with a good track record make lots of predictions, and have another meta-forecaster evaluate a small sample of their predictions to sanity check that they are actually making good predictions", which could get you a lot of predictive power for less work than you'd expect.
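Here's a minimal sketch of that spot-checking idea, assuming the sampled predictions have already resolved. The Brier score is a standard calibration metric, but the sample size and the data shapes here are illustrative choices, not a description of any particular platform.

```typescript
interface ResolvedPrediction {
  probability: number; // forecaster's stated probability of the event
  outcome: 0 | 1;      // how the question actually resolved
}

// Draw n items without replacement using a partial Fisher-Yates shuffle.
function randomSample<T>(items: T[], n: number): T[] {
  const copy = [...items];
  const k = Math.min(n, copy.length);
  for (let i = 0; i < k; i++) {
    const j = i + Math.floor(Math.random() * (copy.length - i));
    [copy[i], copy[j]] = [copy[j], copy[i]];
  }
  return copy.slice(0, k);
}

// Mean Brier score: lower is better; always guessing 50% scores 0.25.
function brierScore(predictions: ResolvedPrediction[]): number {
  const total = predictions.reduce(
    (sum, p) => sum + (p.probability - p.outcome) ** 2,
    0,
  );
  return total / predictions.length;
}

// The meta-forecasting idea, roughly: rather than re-judging every prediction,
// audit a small random sample and check its score is consistent with the
// forecaster's claimed track record.
function spotCheck(allPredictions: ResolvedPrediction[], sampleSize = 20): number {
  return brierScore(randomSample(allPredictions, sampleSize));
}
```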

This seemed interesting, but I still had some sense of "But how do you get all these people making all these predictions? The prediction markets I've seen don't seem to accomplish very interesting things, for reasons Zvi discussed here." Plus I'd heard that sites like Metaculus end up being more about gaming the operationalization rules than actually predicting things accurately.

Automation

One thing I hadn't considered is that Machine Learning is already something like high volume forecasting, in very narrow domains (i.e. lots of bots predicting which video you'll click on next). One of Ozzie's expectations is that over time, as ML improves, it'll expand the range of things that bots can predict. So some of the high volume can come from automated forecasters.

Neural nets and the like might also be able to assist in handling the tricky "operationalization bits", where you take a vague prediction like "will country X go to war against country Y" and turn that into the concrete observations that would count for such a thing. Currently this takes a fair amount of overhead on Metaculus. But maybe at some point this could get partly automated.

(there wasn't a clear case for how this would happen AFAICT, just 'i dunno neural net magic might be able to help.' I don't expect neural-net magic to help here in the next 10 years but I could see it helping in the next 20 or 30. I'm not sure if it happens much farther in advance than "actual AGI" though)

I [think] part of the claim was that for both the automated-forecasting and automated-operationalization, it's worth laying out tools, infrastructure and/or experiments now that'll set up our ability to take advantage of them later.

Sweeping Visions vs Near-Term Practicality, and Overly Narrow Ontologies

An aesthetic disagreement I had with Ozzie was:

My impression is that Ozzie is starting with lots of excitement for forecasting as a whole, and imagining entire ecosystems built out of it. And... I think there's something important and good about people being deeply excited for things, exploring them thoroughly, and then bringing the best bits of their exploration back to the "rest of the world."

But when I look at the current forecasting ecosystem, it looks like the best bits of it aren't built out of sweeping infrastructural changes, they're built of small internal teams building tools that work for them, or consulting firms of professionals that hire themselves out. (Good Judgment project being one, and the How To Measure Anything guy being another)

The problem with large infrastructural ecosystems is this general problem you also find on Debate-Mapping sites – humans don't actually think in clean boxes that are easy to fit into database tables. They think in confused thought patterns that often need to meander, explore special cases, and don't necessarily fit whatever tool you built for them to think in.

Relatedly: every large company I've worked at has built internal tools of some sort, even for domains that seem like they sure ought to be able to be automated and sold at scale. Whenever I've seen someone try to purchase enterprise software for managing a product map, it's either been a mistake, or the enterprise software has required a lot of customization before it fit the idiosyncratic needs of the company.

Google sheets is really hard to beat as a coordination tool (but a given google sheet is hard to scale)

So for the immediate future I'm more excited by hiring forecasters and building internal forecasting teams than ecosystem-type websites.

Comment by raemon on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-14T02:06:26.000Z · score: 3 (1 votes) · LW · GW

Okay. I'm not confident I understand the exact thing you're pointing at but I think this comment (and reference to the sabbath conversation) helped orient me a bit in the direction towards understanding your frame. I think this may need to gestate a bit before I'm able to say more.

Comment by raemon on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-13T07:43:03.485Z · score: 3 (1 votes) · LW · GW

Put another way: the comment you just wrote seems (roughly) like a different way I might have attempted to explain my views in the OP. So I'm not sure if the issue is there still something subtle I'm not getting, or if I communicated the OP in a way that made it seem like your comment here wasn't a valid restatement of my point, or something other thing?

Comment by raemon on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-13T07:35:19.555Z · score: 3 (1 votes) · LW · GW

I think one issue is that I don't have a word or phrase that quite communicates the thing I was trying to point to. When I said "sit upright in alarm", the actions I meant to be coupled with that look more like this:

The behavior I'd have wanted my parents to exhibit would probably have started with working out - with friends and community members - and with me and my sister - and first, with each other - a shared model and language for talking about the problem, before we started to do anything about it.

As opposed to either ignoring the problem, or blaming something haphazardly, or imposing screen limits without reflection, or whatever.

I'm not sure of a phrase that communicates the right motion. I agree that alarm fatigue is a thing (and basically said so right after posting the OP). "Sitting up, taking notice, and directing your attention strategically" sort of does it but in an overwrought way. If you have a suggestion for a short handle for "the sort of initial mental motion you wish your parents had done, as well as the sort of initial mental motion you wish people would take in situations like this", I'd be interested to hear it.

The thing prompting the OP was the fact that I've noticed people (in a few settings) using the word "lying" in a way that a) seemed false [by the definition of lying that seems most common to me, i.e. including both 'deliberateness' and usually at least a small bit of 'blameworthiness'], and b) seemed like specifically people were making a mistake relating to "wishing they had a word that directed people's attention better", and it seeming unfair to ask them to stop without giving them a better tool to direct people's attention.

Comment by raemon on Benito's Shortform Feed · 2019-07-13T07:24:28.546Z · score: 5 (2 votes) · LW · GW

I'm not claiming you can literally do this all the time. [Ah, an earlier draft of the previous comment emphasized that this was all "things worth pushing for on the margin", and explicitly not something you were supposed to sacrifice all other priorities for. I think I then rewrote the post and forgot to emphasize that clarification]

I'll try to write up better instructions/explanations later, but to give a rough idea of the amount of work I'm talking about: I'm saying "spend a bit more time than you normally do in 'doublecrux mode'". [This can be, like, an extra half hour sometimes when having a particularly difficult conversation].

When someone seems obviously wrong, or you seem obviously right, ask yourself "which cruxes are most loadbearing", and then:

  • Be mindful as you do it, to notice what mental motions you're actually performing that help. Basically, do Tuning Your Cognitive Strategies to the double crux process, to improve your feedback loop.
  • When you're done, cache the results. Maybe by writing it down, or maybe just by thinking harder about it so you remember it better.

The point is not to have fully mapped out cruxes of all your beliefs. The point is that you generally have practiced the skill of noticing what the most important cruxes are, so that a) you can do it easily, and b) you keep the results computed for later.

Comment by raemon on LW authors: How many clusters of norms do you (personally) want? · 2019-07-13T03:33:40.885Z · score: 5 (2 votes) · LW · GW

For convenience of skimming, could you post this as a top-level answer?

Comment by raemon on Benito's Shortform Feed · 2019-07-13T02:12:48.472Z · score: 14 (4 votes) · LW · GW

I'd been working on a sequence explaining this all in more detail (I think there's a lot of moving parts and inferential distance to cover here). I'll mostly respond in the form of "finish that sequence."

But here's a quick paragraph that more fully expands what I actually believe:

  • If you're building a product with someone (metaphorical product or literal product), and you find yourself disagreeing, and you explain "This is important because X, which implies Y", and they say "What!? But, A, therefore B!" and then you both keep repeating those points over and over... you're going to waste a lot of time, and possibly build a confused frankenstein product that's less effective than if you could figure out how to successfully communicate.
    • In that situation, I claim you should be doing something different, if you want to build a product that's actually good.
    • If you're not building a product, this is less obviously important. If you're just arguing for fun, I dunno, keep at it I guess.
  • A separate, further claim is that the reason you're miscommunicating is because you have a bunch of hidden assumptions in your belief-network, or the frames that underly your belief network. I think you will continue to disagree and waste effort until you figure out how to make those hidden assumptions explicit.
    • You don't have to rush that process. Take your time to mull over your beliefs, do focusing or whatever helps you tease out the hidden assumptions without accidentally crystallizing them wrong.
    • This isn't an "obligation" I think people should have. But I think it's a law-of-the-universe that if you don't do this, your group will waste time and/or your product will be worse.
      • (Lots of companies successfully build products without dealing with this, so I'm not at all claiming you'll fail. And meanwhile there's lots of other tradeoffs your company might be making that are bad and should be improved, and I'm not confident this is the most important thing to be working on)
      • But among rationalists, who are trying to improve their rationality while building products together, I think resolving this issue should be a high priority, which will pay for itself pretty quickly.
  • Thirdly: I claim there is a skill to building up a model of your beliefs, and your cruxes for those beliefs, and the frames that underly your beliefs... such that you can make normally implicit things explicit in advance. (Or, at least, every time you disagree with someone about one of your beliefs, you automatically flag what the crux for the belief was, and then keep track of it for future reference). So, by the time you get to a heated disagreement, you already have some sense of what sort of things would change your mind, and why you formed the beliefs you did.
    • You don't have to share this with others, esp. if they seem to be adversarial. But understanding it for yourself can still help you make sense of the conversation.
    • Relatedly, there's a skill to detecting when other people are in a different frame from you, and helping them to articulate their frame.
  • Literal companies building literal products can alleviate this problem by only hiring people with similar frames and beliefs, so they have an easier time communicating. But, it's
  • This seems important because weird, intractable conversations have shown up repeatedly...
    • in the EA ecosystem
      • (where even though people are mostly building different products, there is a shared commons that is something of a "collectively built product" that everyone has a stake in, and where billions of dollars and billions of dollars worth of reputation are at stake)
    • on LessWrong the website
      • (where everyone has a stake in a shared product of "how we have conversations together" and what truthseeking means)
    • on the LessWrong development team
      • where we are literally building a product (a website), and often have persistent, intractable disagreements about UI, minimalism, how shortform should work, whether Vulcan is a terrible shitshow of a framework that should be scrapped, etc.
Comment by Raemon on [deleted post] 2019-07-13T01:36:13.638Z

A short response for the time being (hopefully will write up a better overall explanation soon), is that I didn't mean "prioritize keeping frames explicit over all other things", and I also meant some subtler things by "keep things explicit" than I think I successfully communicated.

I meant something more like "on the margin, I think people should be prioritizing keeping their beliefs, cruxes and frames more explicit than they currently do."

(This may mean I come up with a different phrase, although I do think any phrase will need to be compacted down to around 5 words, and lose some nuance in the process)

I find myself wanting to try to clarify the details, but I think it makes more sense to start the explanation from scratch in a top level comment. For now, just noting that I agree with Benito about the failure modes of "making things too explicit too quickly".

Comment by raemon on LW authors: How many clusters of norms do you (personally) want? · 2019-07-13T00:45:33.185Z · score: 5 (2 votes) · LW · GW

Ah. What I meant (but perhaps failed to communicate clearly) is that I wanted each author to list the norms that they personally want, rather than the norms they think other people want.

The "how many norms does everyone want" is a fact that should arise emergently from the process of everyone sharing their own individual preferences.

Comment by raemon on Meta-tations on Moderation: Towards Public Archipelago · 2019-07-12T18:34:27.692Z · score: 7 (3 votes) · LW · GW

An idea Ben and I came up with was having an off-topic comment section of a post. Authors get to decide what is "on topic" for a discussion, and there's an easily accessible button that labels a comment "off topic". Off topic comments move to a hidden-by-default section at the bottom of the comments. Clicking it once unveils it and leaves it unveiled for the reader in question (and it has some kind of visual cue to let you know that you've entered off-topic world).

My new belief: this option should be called "collapse". Rather than having a new element in the comments section, it just forces a comment to be collapsed-by-default, and sorted to the bottom of the page (independent of how much karma it has), possibly not showing up in the Recent Discussion section.

This has two benefits of a) not having to create any new sections that take up conceptual space on the site, instead using existing site mechanics, b) is more ideologically neutral than "on-topic / off-topic", which would have been a bit misleading/deceptive about what sort of uses the offtopic button might have.
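As a sketch of how little machinery the "collapse" version needs, here's an illustrative comment-ordering function. The field names (including forceCollapsed) are hypothetical and don't refer to the real LessWrong schema; this is just one way the described behavior could be expressed.

```typescript
interface CommentItem {
  id: string;
  karma: number;
  forceCollapsed: boolean; // set when an author uses the hypothetical "collapse" button
}

// Illustrative ordering only: normal comments sort by karma as usual, while
// force-collapsed comments always sink below them (sorted by karma among
// themselves), matching the "collapsed-by-default, bottom of the page" idea.
function orderComments(comments: CommentItem[]): CommentItem[] {
  return [...comments].sort((a, b) => {
    if (a.forceCollapsed !== b.forceCollapsed) {
      return a.forceCollapsed ? 1 : -1;
    }
    return b.karma - a.karma;
  });
}
```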

Comment by raemon on LW authors: How many clusters of norms do you (personally) want? · 2019-07-12T18:20:37.392Z · score: 3 (1 votes) · LW · GW

So, I would still like to know "what cluster of norms do you actually prefer?" An important part of this question was "what is there actual demand for, and/or supply of?"

Comment by raemon on How much background technical knowledge do LW readers have? · 2019-07-12T01:54:20.047Z · score: 7 (4 votes) · LW · GW

Quick note: I've frontpaged this despite it being relatively meta because... I dunno, it seemed like a particularly object level sort of meta post? (Not sure I have a consistent principle for this, but it seemed good to at least note when I take an action that's not obviously in-line with our stated frontpage principles)

Comment by raemon on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-11T08:07:16.301Z · score: 3 (1 votes) · LW · GW

I'll try to write up a post that roughly summarizes the overall thesis I'm trying to build towards here, so that it's clearer how individual pieces fit together.

But a short answer to the "why would I want a clear handle for 'sitting upright in alarm'" is that I think it's at least sometimes necessary (or at the very least, inevitable), for this sort of conversation to veer into politics, and what I want is to eventually be able to discuss politics-qua-politics sanely and truth-trackingly.

My current best guess (although very lightly held) is that politics will go better if it's possible to pack rhetorical punch into things for a wider variety of reasons, so people don't feel pressure to say misleading things in order to get attention.

Comment by raemon on The AI Timelines Scam · 2019-07-11T08:05:54.191Z · score: 7 (3 votes) · LW · GW

Ah, that all makes sense.

Comment by raemon on The AI Timelines Scam · 2019-07-11T07:37:52.691Z · score: 13 (6 votes) · LW · GW

There is a two-step form of judo required to first learn to make 50 year plans and then secondarily restrict yourself to shorter-term plans. It is not one move, and I often see "but timelines are short" used to prevent someone from learning the first move.

Is there a reason you need to do 50 year plans before you can do 10 year plans? I'd expect the opposite to be true.

(I happen to currently have neither a 50 nor 10 year plan, apart from general savings, but this is mostly because it's... I dunno kinda hard and I haven't gotten around to it or something, rather than anything to do with timelines.)

Comment by raemon on Raemon's Shortform · 2019-07-10T22:41:16.837Z · score: 3 (1 votes) · LW · GW

Here is a quick, off-the-cuff summary of the overall thesis I'm building towards (with the "Rationalization" and "Sitting Bolt Upright in Alarm" post, and other posts and conversations that have been in the works). It'd be helpful to know which elements are most confusing, or seem most wrong, or what-not.

  • The rationalsphere isn't great at applying rationality to its own internal politics
    • We don't seem to do much better than average. This seems like something that's at least pretty sad, even if it's a true brute fact about the world.
    • There have been some efforts to fix this fact, but most of it has seemed (to me) to be missing key facts about game theory, common knowledge, theory of mind, and some other topics that I see as necessary to solve the problem.
  • Billions of dollars are at stake, which creates important distortions that need addressing
    • The rationality and EA communities are valuable, in large part, because there is an opportunity for important ideas to influence the world-stage, moving millions or billions of dollars (or causing millions of dollars worth of stuff to happen). But, when billions of dollars are at stake, you start to attract world-class opportunists trying to coopt you (and community members start feeling pressure to conform to social reality on the world-stage), which demands world-class ability to handle subtle political pressures to preserve that value.
      • [epistemic status: I'm not sure whether I endorse the rhetoric here. Maybe you don't need to be world class, but you probably need to be at least 75th percentile, and/or become more illegible to the forces that would try to coopt you]
  • Default strategies I've observed seem doomy
    • One of the default failure modes I've seen is that, when people don't pay attention to a given call-for-clarity about "hey, we seem to be acting in ways that distort truth in predictable ways", the response is to jump all the way to statements like "EA has a lying problem," which I think is both untrue and anti-helpful for preserving a truthseeking space.
      • (In that case Sarah later wrote up a followup post that was more reasonable and Benquo wrote up a post that articulated the problem more clearly. [Can't find the links offhand]. But it was a giant red flag for me that getting people to pay attention required sensationalizing the problem. It seemed to me that this was following an incentive gradient identical to that which political news has followed, which seems roughly as bad for truthseeking as the original problem Sarah was trying to address was)
    • The "Rationalization/Sitting-bolt-upright" post was intended to provide an outlet for that sort of impulse that was at less counterproductive (in the interim before figuring out a more robust solution).
  • By default, people use language for both truthseeking and for politics. It takes special effort to keep things truth-focused
    • A primary lesson I learned from the sequences is that most people's beliefs are not about truth at all. ("Science as attire", "Fable of Science and Politics", etc.) Most of the places where the rationalsphere seems most truth-tracking are where it sidesteps this issue, rather than really solving it. Attempting to directly jump towards "well we just use words for truth, not politics" sounds to me about as promising as writing the word 'cold' on a fridge.
    • Relatedly, I think people struggle to stay in a truthseeking frame when they are feeling defensive. One person being defensive makes it 2-30x harder to remain truth-oriented. Multiple people being defensive adds that difficulty up at least linearly, and potentially compounds it in weirder ways. I think this is challenging enough that it requires joint effort to avoid.
      • A truthseeking space that can actually discuss politics sanely needs both individuals who are putting special effort to avoid being defensive, and conversation partners that practice avoiding unnecessarily* provoking defensiveness.
        • *where by "unnecessary" I mean: "if your subject matter is inherently difficult to hear, you shouldn't avoid saying it. But you should avoid saying it with rhetoric that is especially liable to inflame the conversation. (i.e. "i think your project is net-harmful" is fine. "I think your project is stupid and can't believe you wasted our time on it" is making the conversation 20x harder, unnecessarily.)
          • Yes, this is hard and doesn't come naturally to everyone. But I think it's at least approximately as hard as learning to avoid getting defensive is (and I would guess the low-hanging fruit is actually comparatively easy). I think if a truthseeking space doesn't ask people to at least pick up the low-hanging fruit here, it will be less effective as a truthseeking space.
      • I don't think this is necessary for all conversations, but it's escalatingly important the less the participants trust each other and the higher the stakes.
  • Communicating between frames/aesthetics/ontologies is very hard
    • Common knowledge of 'Double Crux' has made it somewhat easier to resolve gnarly disagreements, but I still frequently observe rationalists (myself included) just completely talking past each other, not noticing, and then either getting really frustrated, or assuming bad faith when the actual problem is significantly different world models.
    • Tools that I know of for communicating include:
      • Classical Debate
      • Writing a comprehensive thesis
      • Writing a series of bite size sequence posts that walk people through your frame
      • Doublecrux
    • Of those:
      • Classical debate explicitly pushes things in a political point-scoring frame. For discussions that are already political, I think this is a non-starter for reaching agreement.
      • My sense of 'comprehensive theses' is that they attempt to address every objection at once. But they still fail to anticipate the vast gulfs that occur when people are not looking at things through the same ontology. They also tax people's working memory and sometimes their simple willingness to read.
      • Bite sized sequences that slowly bridge inferential distance work okay. (This is, indeed, how LessWrong successfully communicated across a massive disconnect in frame/ontology that many current-rationalists were once missing). But, well, they're huge amounts of work.
      • Doublecrux is essentially two people "writing a sequence", but in realtime, where your job is to con
    • In the original Double Crux post, Duncan acknowledges that there wasn't a very clear sense (at least at the time) of how to actually look for cruxes. I'd go on to say that DoubleCrux doesn't expressly acknowledge differences in frame, aesthetic or ontology. It notes that the goal is to build a shared causal graph, but there at least aren't written-up, common knowledge guidelines for what to do if (not only) your causal graphs are just totally different, but also the lens through which you filter evidence is entirely different, and your mechanism for adjusting your lens is entirely different.
    • Even vanilla Double Crux (contrasted with what I'd call "Ontology Doublecrux" or "Aesthetic Doublecrux") is hard to teach, and doesn't have good public online examples.
  • We need de-escalation techniques that work when people are starting from a position of 'mutually assumed bad faith.'
Comment by raemon on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-10T19:34:16.731Z · score: 3 (1 votes) · LW · GW

(I edited the comment, curious if it's clearer now)

Comment by raemon on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-10T19:02:27.789Z · score: 3 (1 votes) · LW · GW

Ah, sorry for not being clearer. Yes, that's actually the point I meant to be making. It's inappropriate (and factually wrong) for Bob to lead with "hey Alice you lied here". (I was trying to avoid editorializing too much about what seemed appropriate, and focus on why the two situations are different)

I agree that the correct opening move is "that statement is incorrect", etc.

One further complication, though, is that it might be that Alice and Bob have talked a lot about whether Alice is incorrect, looked for cruxes, etc, and after several months of this Bob still thinks Alice is being motivated and Alice still thinks her model just makes sense. (This was roughly the situation in the OP)

From Bob's epistemic state, he's now in a world where it looks like Alice has a pattern of motivation that needs to be addressed, and Alice is non-cooperative because Alice disagrees (and it's hard to tell the difference between "Alice actually disagrees" and "Alice is feigning disagreement for political convenience"). I don't think there's any simple thing that can happen next, and [for good or for ill] what happens next is probably going to have something to do with Alice and Bob's respective social standing.

I think there are practices and institutions one could develop to help keep the topic in the domain of epistemics instead of politics, and there are meta-practices Alice and Bob can try to follow if they both wish for it to remain in the domain of epistemics rather than politics. But there is no special trick for it.

Comment by raemon on Hazard's Shortform Feed · 2019-07-10T18:59:12.350Z · score: 5 (2 votes) · LW · GW

I like this post, and think it'd be fine to crosspost to LW.

Comment by raemon on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-10T18:21:51.982Z · score: 4 (2 votes) · LW · GW

Attempting to answer more concretely and principled-ly about what makes sense to distinguish here

I think you are pointing to some difference between the two, but I'm not sure what it is. Maybe the difference is that motivated errors are more covert than lies are, more plausible deniability (including narrative coherence) is maintained, and this plausible deniability is maintained through keeping the thing unconscious, while a plausible semicoherent narrative is maintained in consciousness?

Reflecting a bit more, I think there are two important distinctions to be made:

Situation A) Alice makes a statement, which is false, and either Alice knows beforehand it's false, or Alice realizes it's false as soon as she pays any attention to it after the fact. (this is slightly different from how I'd have defined "lie" yesterday, but after 24 hours of mulling it over I think this is the correct clustering)

Situation B) Alice makes a statement which is false, which to Alice appears locally valid, but which is built upon some number of premises or arguments that are motivated.

...

[edit:]

This comment ended up quite long, so a summary of my overall point:

Situation B is much more complicated than Situation A.

In Situation A, Alice only has one inferential step to make, and Alice and Bob have mutual understanding (although not common knowledge) of that one inferential step. Bob can say "Alice, you lied here" and have the conversation make sense.

In Situation B, Alice has many inferential steps to make, and if Bob says "Alice, you lied here", Alice (even if rational and honest) needs to include probability mass on "Bob is wrong, Bob is motivated, and/or Bob is a malicious actor."

These are sufficiently different epistemic states for Alice to be in that I think it makes sense to use different words for them.

...

Situation A

In situation A, if Bob says "Hey, Alice, you lied here", Alice thinks internally either "shit I got caught" or "oh shit, I *did* lie." In the first case, Alice might attempt to obfuscate further. In the second case, Alice hopefully says "oops", admits the falsehood, and the conversation moves on. In either case, the incentives are *mostly* clear and direct to Alice – try to avoid doing this again, because you will get called on it.

If Alice obfuscates, or pretends to be in Situation B, she might get away with it this time, but identifying the lie will still likely reduce her incentive to make similar statements in the future (since, at the very least, she'll have to do work defending herself).

Situation B

In situation B, if Bob says "Hey Alice, you lied here", Alice will say "what the hell? No?".

And then a few things happen, which I consider justified on Alice's part:

From Alice's epistemic position, she just said a true thing. If Bob just claimed that true thing was a lie, Alice now has several major hypotheses to consider:

  • Alice actually said a false thing
    • maybe the argument that directly supports the statement rests on faulty reasoning, or Alice is mistaken about the facts.
    • maybe somewhere in her background models/beliefs/ontology are nodes that are false due to motivated reasoning
    • maybe somewhere in her background models/beliefs/ontology are nodes that are false for non-motivated reasons
  • Alice actually said a true thing
    • Bob's models/beliefs/ontology are wrong, because *Bob* is motivated, causing Bob to incorrectly think Alice's statement was false
    • Bob's models/beliefs/ontology are wrong, for non-motivated reasons
    • Bob is making some kind of straightforward local error about the claim in question (maybe he's misunderstanding her or defining words differently from her)
    • Bob's models are fine... but Bob is politically motivated. He is calling Alice a liar, not to help truthseek, but to cast aspersions on Alice's character. (this might be part of an ongoing campaign to harm Alice, or just a random "Bob is having a bad day and looking to dump his frustration on someone else")
  • Alice said a partially true, partially false thing (or, some other variation of "it's complicated").
    • Maybe Bob is correctly noticing that Alice has a motivated component to her belief, but in fact, the belief is still true, and most of her reasoning is still correct, and Bob is factually wrong about the statement being a lie.
    • Maybe Alice and Bob's separate world models are pointing in different directions, which is making different aspects of Alice's statement salient to each of them. (They might both be motivated, or non-motivated). If they talk for a while they may both eventually learn to see the situation through different frames that broaden their understanding.

This is a much more complicated set of possibilities for Alice to evaluate. Incentives are getting applied here, but they could push her in a number of ways.

If Alice is a typical human and/or junior rationalist, she's going to be defensive, which will make it harder for her to think clearly. She will be prone to exaggerating the probability of options that aren't her fault. She may see Bob as socially threatening her – not as a truthseeking collaborator trying to help, but as a malicious actor out to harm her.

If Alice is a perfectly skilled rationalist, she'll hopefully avoid feeling defensive, and will not exaggerate the probability of any of the options for motivated reasons. But over half the options are still "this is Bob's problem, not Alice's, and/or they are both somewhat confused together".

Exactly how the probabilities fall out depends on the situation, on how much Alice trusts her own reasoning, and on how much she trusts Bob's reasoning. But even perfect-rationalist Alice should have nonzero probability on "Bob is the one who is wrong, perhaps maliciously, here".
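
To make the "probability mass" framing a bit more concrete, here's a toy sketch. Everything in it is invented for illustration (the function name, the way trust maps to weights, the numbers); it isn't a claim about how anyone should actually compute this. The one thing it's meant to show is the point above: unless Alice completely distrusts her own reasoning or completely trusts Bob's, the "Bob is the one who is wrong" bucket keeps nonzero mass.

```python
# Toy illustration only: split Alice's probability mass across the three
# hypothesis clusters above, as a crude function of how much she trusts her
# own reasoning vs. Bob's. The weighting scheme and numbers are invented.

def allocate_hypothesis_mass(trust_self: float, trust_bob: float) -> dict:
    """trust_self, trust_bob in [0, 1]; returns a normalized distribution."""
    raw = {
        "my statement was false (my error, motivated or not)": trust_bob * (1 - trust_self),
        "my statement was true (Bob's error, motivated or malicious)": trust_self * (1 - trust_bob),
        "it's complicated (we're both partly right / partly confused)":
            trust_self * trust_bob + (1 - trust_self) * (1 - trust_bob),
    }
    total = sum(raw.values())
    return {hypothesis: mass / total for hypothesis, mass in raw.items()}

# Example: Alice trusts her own reasoning somewhat more than Bob's.
for hypothesis, mass in allocate_hypothesis_mass(trust_self=0.8, trust_bob=0.6).items():
    print(f"{mass:.2f}  {hypothesis}")
# Even here, about 0.32 of the mass lands on "Bob's error": it only vanishes
# if trust_self is 0 or trust_bob is 1.
```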

And if the answer is "Alice's belief is built on some kind of motivated reasoning", that's not something that can be easily resolved. If Alice is wrong but luckily so (the chain of motivated beliefs is only 1-2 nodes deep), she can check whether those nodes make sense and maybe discover she is wrong. But...

  • if she checks 1-2 nodes deep and she's not obviously wrong, that isn't clear evidence she's in the clear, since she might still be using motivated cognition to check for motivated cognition
  • if Alice is a skilled enough rationalist to easily check for motivated cognition, going 1-2 nodes deep still isn't very reassuring. If the problem was that "many of Alice's older observations were due to confirmation bias, and she no longer directly remembers those events but has cached them as prior probabilities", that's computationally intractable to check in the moment.

And meanwhile, until Alice has verified that her reasoning was motivated, she needs to retain probability mass on Bob being the wrong one.

Takeaways

Situation B seems extremely different to me from Situation A. It makes a lot of sense to me for people to use different words or phrases for the two situations.

One confounding issue is that obfuscating liars in Situation A have an incentive to pretend to be in Situation B. But there's still a fact-of-the-matter of what mental state Alice is in, which changes what incentives Alice will and should respond to.

"Rationalizing" and "Sitting Bolt Upright in Alarm."

2019-07-08T20:34:01.448Z · score: 29 (9 votes)

LW authors: How many clusters of norms do you (personally) want?

2019-07-07T20:27:41.923Z · score: 40 (9 votes)

What product are you building?

2019-07-04T19:08:01.694Z · score: 37 (20 votes)

How to handle large numbers of questions?

2019-07-04T18:22:18.936Z · score: 13 (3 votes)

Opting into Experimental LW Features

2019-07-03T00:51:19.646Z · score: 21 (5 votes)

How/would you want to consume shortform posts?

2019-07-02T19:55:56.967Z · score: 20 (6 votes)

What's the most "stuck" you've been with an argument, that eventually got resolved?

2019-07-01T05:13:26.743Z · score: 15 (4 votes)

Do children lose 'childlike curiosity?' Why?

2019-06-29T22:42:36.856Z · score: 43 (13 votes)

What's the best explanation of intellectual generativity?

2019-06-28T18:33:29.278Z · score: 30 (8 votes)

Is your uncertainty resolvable?

2019-06-21T07:32:00.819Z · score: 32 (17 votes)

Welcome to LessWrong!

2019-06-14T19:42:26.128Z · score: 78 (30 votes)

Ramifications of limited positive value, unlimited negative value?

2019-06-09T23:17:37.826Z · score: 10 (5 votes)

The Schelling Choice is "Rabbit", not "Stag"

2019-06-08T00:24:53.568Z · score: 91 (28 votes)

Seeing the Matrix, Switching Abstractions, and Missing Moods

2019-06-04T21:08:28.709Z · score: 32 (20 votes)

FB/Discord Style Reacts

2019-06-01T21:34:27.167Z · score: 77 (19 votes)

What is required to run a psychology study?

2019-05-29T06:38:13.727Z · score: 32 (10 votes)

What are some "Communities and Cultures Different From Our Own?"

2019-05-12T22:03:42.590Z · score: 31 (14 votes)

The Relationship Between the Village and the Mission

2019-05-12T21:09:31.513Z · score: 129 (35 votes)

How much do major foundations grant per hour of staff time?

2019-05-05T19:57:42.756Z · score: 24 (6 votes)

Bay Summer Solstice 2019

2019-05-03T04:49:10.287Z · score: 27 (6 votes)

Open Problems in Archipelago

2019-04-16T22:57:07.704Z · score: 48 (15 votes)

Robin Hanson on Simple, Evidence Backed Models

2019-04-16T22:22:19.784Z · score: 44 (11 votes)

How do people become ambitious?

2019-04-04T19:12:26.826Z · score: 58 (18 votes)

LW Update 2019-04-02 – Frontpage Rework

2019-04-02T23:48:11.555Z · score: 52 (12 votes)

What would you need to be motivated to answer "hard" LW questions?

2019-03-28T20:07:48.747Z · score: 48 (14 votes)

Do you like bullet points?

2019-03-26T04:30:59.104Z · score: 55 (20 votes)

The Amish, and Strategic Norms around Technology

2019-03-24T22:16:04.974Z · score: 114 (44 votes)

You Have About Five Words

2019-03-12T20:30:18.806Z · score: 58 (23 votes)

Renaming "Frontpage"

2019-03-09T01:23:05.560Z · score: 44 (13 votes)

How much funding and researchers were in AI, and AI Safety, in 2018?

2019-03-03T21:46:59.132Z · score: 42 (8 votes)

LW2.0 Mailing List for Breaking API Changes

2019-02-25T21:23:03.476Z · score: 12 (3 votes)

How could "Kickstarter for Inadequate Equilibria" be used for evil or turn out to be net-negative?

2019-02-21T21:36:07.707Z · score: 25 (9 votes)

If a "Kickstarter for Inadequate Equlibria" was built, do you have a concrete inadequate equilibrium to fix?

2019-02-21T21:32:56.366Z · score: 51 (17 votes)

Avoiding Jargon Confusion

2019-02-17T23:37:16.986Z · score: 50 (18 votes)

The Hamming Question

2019-02-08T19:34:33.993Z · score: 31 (10 votes)

Should questions be called "questions" or "confusions" (or "other")?

2019-01-22T02:45:01.211Z · score: 17 (6 votes)

What are the open problems in Human Rationality?

2019-01-13T04:46:38.581Z · score: 77 (26 votes)

LW Update 2019-1-09 – Question Updates, UserProfile Sorting

2019-01-09T22:34:31.338Z · score: 30 (6 votes)

Open Thread January 2019

2019-01-09T20:25:02.716Z · score: 24 (6 votes)

Events in Daily?

2019-01-02T02:30:06.788Z · score: 16 (5 votes)

What exercises go best with 3 blue 1 brown's Linear Algebra videos?

2019-01-01T21:29:37.599Z · score: 30 (8 votes)

Thoughts on Q&A so far?

2018-12-31T01:15:17.307Z · score: 26 (7 votes)

Can dying people "hold on" for something they are waiting for?

2018-12-27T19:53:35.436Z · score: 27 (9 votes)

Solstice Album Crowdfunding

2018-12-18T20:51:31.183Z · score: 39 (11 votes)

How Old is Smallpox?

2018-12-10T10:50:33.960Z · score: 39 (13 votes)

LW Update 2018-12-06 – All Posts Page, Questions Page, Posts Item rework

2018-12-08T21:30:13.874Z · score: 18 (3 votes)

What is "Social Reality?"

2018-12-08T17:41:33.775Z · score: 39 (9 votes)

LW Update 2018-12-06 – Table of Contents and Q&A

2018-12-08T00:47:09.267Z · score: 58 (14 votes)

On Rationalist Solstice and Epistemic Caution

2018-12-05T20:39:34.687Z · score: 59 (22 votes)