Posts

The feeling of breaking an Overton window 2021-02-17T05:31:40.629Z
“PR” is corrosive; “reputation” is not. 2021-02-14T03:32:24.985Z
Where do (did?) stable, cooperative institutions come from? 2020-11-03T22:14:09.322Z
Reality-Revealing and Reality-Masking Puzzles 2020-01-16T16:15:34.650Z
We run the Center for Applied Rationality, AMA 2019-12-19T16:34:15.705Z
AnnaSalamon's Shortform 2019-07-25T05:24:13.011Z
"Flinching away from truth” is often about *protecting* the epistemology 2016-12-20T18:39:18.737Z
Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality” 2016-12-12T19:39:50.084Z
CFAR's new mission statement (on our website) 2016-12-10T08:37:27.093Z
CFAR’s new focus, and AI Safety 2016-12-03T18:09:13.688Z
On the importance of Less Wrong, or another single conversational locus 2016-11-27T17:13:08.956Z
Several free CFAR summer programs on rationality and AI safety 2016-04-14T02:35:03.742Z
Consider having sparse insides 2016-04-01T00:07:07.777Z
The correct response to uncertainty is *not* half-speed 2016-01-15T22:55:03.407Z
Why CFAR's Mission? 2016-01-02T23:23:30.935Z
Why startup founders have mood swings (and why they may have uses) 2015-12-09T18:59:51.323Z
Two Growth Curves 2015-10-02T00:59:45.489Z
CFAR-run MIRI Summer Fellows program: July 7-26 2015-04-28T19:04:27.403Z
Attempted Telekinesis 2015-02-07T18:53:12.436Z
How to learn soft skills 2015-02-07T05:22:53.790Z
CFAR fundraiser far from filled; 4 days remaining 2015-01-27T07:26:36.878Z
CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype 2014-12-26T15:33:08.388Z
Upcoming CFAR events: Lower-cost bay area intro workshop; EU workshops; and others 2014-10-02T00:08:44.071Z
Why CFAR? 2013-12-28T23:25:10.296Z
Meetup : CFAR visits Salt Lake City 2013-06-15T04:43:54.594Z
Want to have a CFAR instructor visit your LW group? 2013-04-20T07:04:08.521Z
CFAR is hiring a logistics manager 2013-04-05T22:32:52.108Z
Applied Rationality Workshops: Jan 25-28 and March 1-4 2013-01-03T01:00:34.531Z
Nov 16-18: Rationality for Entrepreneurs 2012-11-08T18:15:15.281Z
Checklist of Rationality Habits 2012-11-07T21:19:19.244Z
Possible meetup: Singapore 2012-08-21T18:52:07.108Z
Center for Modern Rationality currently hiring: Executive assistants, Teachers, Research assistants, Consultants. 2012-04-13T20:28:06.071Z
Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 2012-03-29T20:48:48.227Z
How do you notice when you're rationalizing? 2012-03-02T07:28:21.698Z
Urges vs. Goals: The analogy to anticipation and belief 2012-01-24T23:57:04.122Z
Poll results: LW probably doesn't cause akrasia 2011-11-16T18:03:39.359Z
Meetup : Talk on Singularity scenarios and optimal philanthropy, followed by informal meet-up 2011-10-10T04:26:09.284Z
[Question] Do you know a good game or demo for demonstrating sunk costs? 2011-09-08T20:07:55.420Z
[LINK] How Hard is Artificial Intelligence? The Evolutionary Argument and Observation Selection Effects 2011-08-29T05:27:31.636Z
Upcoming meet-ups 2011-06-21T22:28:40.610Z
Upcoming meet-ups: 2011-06-11T22:16:09.641Z
Upcoming meet-ups: Buenos Aires, Minneapolis, Ottawa, Edinburgh, Cambridge, London, DC 2011-05-13T20:49:59.007Z
Mini-camp on Rationality, Awesomeness, and Existential Risk (May 28 through June 4, 2011) 2011-04-24T08:10:13.048Z
Learned Blankness 2011-04-18T18:55:32.552Z
Talk and Meetup today 4/4 in San Diego 2011-04-04T11:40:05.167Z
Use curiosity 2011-02-25T22:23:54.462Z
Make your training useful 2011-02-12T02:14:03.597Z
Starting a LW meet-up is easy. 2011-02-01T04:05:43.179Z
Branches of rationality 2011-01-12T03:24:35.656Z
If reductionism is the hammer, what nails are out there? 2010-12-11T13:58:18.087Z

Comments

Comment by annasalamon on Covid: CDC Issues New Guidance on Opening Schools · 2021-02-17T21:58:56.189Z · LW · GW

In terms of concrete measures for lower-covid-risk school reopening, I think it's worth underlining massive use of HEPA filters. They are not that expensive, and should reduce aerosol density and thereby covid risk rather a lot. Worth trumpeting also for grocery stores, pharmacies, offices, etc.

I like your overall point more, though, and feel a bit inane commenting about something else without responding to that.

Comment by annasalamon on Covid: CDC Issues New Guidance on Opening Schools · 2021-02-17T21:45:09.794Z · LW · GW

The big thing is that a clear majority thinks schools should not wait for teachers to get vaccinated, let alone for students to get vaccinated, before reopening.

Wait, how do you get this? Am I misreading the chart above this sentence? It looks to me like in the poll you quote, 55% think schools should wait, which seems to contradict your sentence.

Comment by annasalamon on The feeling of breaking an Overton window · 2021-02-17T16:34:50.468Z · LW · GW

Sometimes unconscious/visceral fears are kind of what I thought they were:

If I look down from a tall height, I experience: [vertigo, a slight increase in my heart rate, some similar change in my breathing, a slight increase in my tendency to "freeze" my muscles, a shift in my attention toward the height/downness, etc.].

I might've thought this response was a consciously chosen strategy for not falling. Except that it also occurs when I'm walking on a glass floor in a well-engineered building I trust. Still, I conceptualize it as something like "in-built fear of heights; designed to prevent falling but based partly on a bunch of visceral cues that persist even when my conscious mind knows I won't fall". My lead "rationalization" for the response ("I'm breathing more shallowly and refusing to walk to the edge of the cliff with you because I don't want to fall off the cliff") is at least partly post-hoc, and is causally downstream of a more visceral reaction... that is basically also evolved for not falling off cliffs.

And sometimes unconscious/visceral reactions are just not at all what I thought they were:

Some years ago, I was waiting impatiently for a friend to leave an event we'd both been at, so that I could "go home to do my important work with its urgent deadline." Then I took a better look at the feeling, and realized that I was actually cold and in need of a washroom. I found a washroom and some warmth, and felt suddenly at ease and untroubled about my task. I've related differently to such feelings ever since.

Similarly, some years ago I was hanging out with my cousins around a holiday, and I felt "bored" (as I saw it) "because" they weren't talking about anything interesting, and I couldn't share with them my thoughts about anything interesting, such as AI risk. And then we switched board games, and I started laughing more, and I realized that my previous "bored" had actually been made of "socially uncomfortable". And I got a bit better at identifying the "socially uncomfortable" visceral response, and calling it by that name instead of some other.

The interesting thing for me about the cashier incident above was that the rationalizations my brain produced around the "don't share my views" impulse seemed pretty transparently unsound (in my situation, in contrast to e.g. Oliver's similar instance in-thread, or any number of previous times when I had more plausible ... rationalizations? reasons? for not breaking an Overton window). If I was concerned about the cashier's welfare: yes, she might find it uncomfortable to see our views, but "withholding practically relevant information from someone specifically asking about it" does not seem kind. If I was concerned about causing some sort of social drama that might do me harm: it didn't seem like there was much plausible harm. (The risk that she might object to selling me the groceries did not occur to me, I think correctly. The 24-hour Safeway just outside of Berkeley may have had different customers than the downtown Berkeley Trader Joe's that Oliver and Ben Pace mention. She seemed peaceable/stable/normal. I was with my husband, which is probably safer than alone. Etc.) There probably was still more social drama in telling the truth vs in evading, but its absolute amount seemed pretty small, to the point where the normal ratios at which I try to buy [human decency / communication / being helpful and honest] vs [avoiding harm to myself] seemed to pretty clearly favor talking.

So it was more a glass floor situation than an actually-being-near-a-cliff situation. Useful for elucidating things. And I'm not yet sure what the analog of "avoid falling" is, that this reaction is actually triggered by cues of. Is there a pretty in-built visceral thing for "don't break Overton windows", that is quasi-independent of conscious knowledge that you're safe? Is its true name "don't break Overton windows", or something else? What's up with the way the impulse in me oscillated between selfish rationalizations ("she might harm me") and morality-related rationalizations ("it's wrong to upset people")?

Comment by annasalamon on The feeling of breaking an Overton window · 2021-02-17T14:53:14.668Z · LW · GW

Totally. I was not AFAICT worried at the time about limited supply buying, or not very worried; the Safeway we were getting things from did not seem out of much and I hadn't heard people complain about shortages/buying yet as far as I can recall.

Comment by annasalamon on “PR” is corrosive; “reputation” is not. · 2021-02-15T01:22:55.454Z · LW · GW

To be honest I am not sure what exactly is being advised.

I am basically advising that you treat the concept of PR, and the word “PR”, the way you would treat a skilled but incredibly sleazy used car salesman. You may sometimes wish to deal with him anyway, if you can’t practically locate any other way to buy a car. But you’ll want to be very very alert to what’s being slipped into “your” “beliefs”, while you do so.

Sort of like if you were using a concept from Scientology to navigate a personal psychological issue.

Do you think trying to be 'honorable' will suffice to avoid bad outcomes?

I think that attention to “honor”, “reputation”, “brand”, etc. will get us most but not all of what we might hope for from PR, and also some things that PR itself won’t give, such as some kinds of longer-term freedom, grounding, and ability to think.

I would advise using this concept first (just, very simply, substituting the word “reputational concerns” for “PR concerns” in conversations, and seeing where this substitution gets you).

I don’t think it’ll do everything PR would do. And I’m not saying you should never care about the residual (although I am saying that the sleazy car salesman may have tricked us into sometimes thinking the residual matters more than it does).

Comment by annasalamon on “PR” is corrosive; “reputation” is not. · 2021-02-14T21:47:05.533Z · LW · GW

It's much easier to resolve disagreements about what counts as good PR.

I mostly disagree. I mean, maybe this applies in comparison to “honor” (not sure), but I don’t think it applies in comparison to “reputation” in many of the relevant senses. A person or company could reasonably wish to maintain a reputation as a maker of solid products that don’t break, or as a reliable fact-checker, or some other such specific standard. And can reasonably resolve internal disagreements about what is and isn’t likely to maintain this reputation.

If it was actually easy to resolve disagreements about PR, I suspect we wouldn’t be so spooked by it, or so prone to deferring to outside “PR consultants”.

I… my thoughts aren’t coherent enough here to let me know how to write a short comment, so I’m gonna write a long one, while noting aloud that this is a bad sign about my models here.

But: it isn’t just a matter of deferring to polls. Partly because with “bad PR” or scandals, there’s a dynamicness to the mob. It isn’t about people’s fixed standards or comparisons that you could get by consulting polls. (Or just by consulting a friend or two, the way you probably do when you are faced with normal ethical questions and you want help remembering what the usual standards are.) It’s some spooky other thing, involving dynamics that evolve, and experts that you pay to be a bit distanced somehow from your not-knowing and to be able to tell other people that of course you consulted an expert so that they won't shun you after the whole thing explodes.

It seems like it's easier for organizations to coordinate around PR

So, that does seem true, at least in the sense that lots of organizations and groups talk about “PR”; so there’s some tautological sense in which it’s gotta be easier for organizations to somehow end up talking about that. (Lately. It wasn't so in past centuries, FWIW. Possibly they just didn't know how.)

But I am a bit unclear on why.

One hypothesis that matches my own introspection, at least for small- to medium-sized organizations of the sort I’ve been involved in, is that we attend to PR because it’s somehow part of the received “everyone knows you should pay attention to PR” morality that was imparted to us, right next to “don’t drink and drive” and “get a college degree” and "remember to feel a sense of 'wow' if somebody mentions Harvard". And not because of inside-view/directly-perceivable advantages to attending to PR (vs reputation/brand/honor).

Of course, this just kicks the can down the road — why would this morality have been imparted to us? I’m honestly not sure. I don’t trust it. I do not personally notice myself or others making any worse decisions when we instead attend to "reputation".

Comment by annasalamon on “PR” is corrosive; “reputation” is not. · 2021-02-14T04:01:15.621Z · LW · GW

Thanks; I find this comment helpful and interesting, like part of a puzzle.

Comment by annasalamon on Is MIRI actually hiring and does Buck Shlegeris still work for you? · 2021-02-13T16:25:18.512Z · LW · GW

I'm not an official MIRI spokesperson here or something, but I'm a MIRI board member and I do a lot of work with MIRI (without being on payroll). My take, with which someone else may turn out to correctly disagree:

  1. We have not been hiring as much since the pandemic began. We are, however, still hiring for some niche positions, especially ones that can easily be remote and/or that do not require the in-person internships for getting started that we often used.

  2. Separately, some but not all of our research programs are additionally not hiring as much lately for non-pandemic reasons, due to being in more of a "regroup and see what makes sense in terms of research directions, before possibly making hires and expanding again" place than they were last year. We should probably update our job ads but haven't been quite sure how, and have therefore been procrastinating.

  3. Buck is mostly no longer working at MIRI, but is doing some contracting, including, I think, some work processing job applicants.

  4. Despite #1 and 2, I am aware of three hires made over the last few months, though all for niche positions assisting distinct people with distinct things. Also, various other bits of MIRI may turn out to be hiring at different times in ways that are hard for me and perhaps also them to predict. I do not have an official role in any of this, but if you'd like to talk 1-on-1 I'd be happy to, and it'd let me be aware of you if job openings later come up in parts of MIRI I'm connected to.

Comment by annasalamon on Still Not in Charge · 2021-02-10T19:33:10.864Z · LW · GW

If the social substrate people are in makes it easy to form binding contracts, people won't defect in prisoner's dilemmas. Maybe I'm using the wrong words; I'm trying to agree with your point. I don't mean "coordination ability" to be a property just of the individuals; it's a property of them and their context.

Comment by annasalamon on Still Not in Charge · 2021-02-10T17:26:31.185Z · LW · GW

Yes; the test Zvi mentions seems like it actually tests "folks have utility functions and good coordination ability". (Like, good ability to form binding contracts, or make trades.)

Comment by annasalamon on Still Not in Charge · 2021-02-10T17:17:20.375Z · LW · GW

It just... seems like it must be pretty hard to get to the top without having some kind of longterm planning going on (even if it's purely manipulative)

I think I would bet against the quoted sentence, though I'm uncertain. The crux for me is whether the optimization-force that causes a single person to end up "at the top" (while many others don't) is mostly that person's own optimization-force (vs a set of preferences/flinches/optimization-bits distributed in many others, or in the organization as a whole, or similar).

(This overlaps with jaspax's comment; but I wanted to state the more general version of the hypothesis.)

See also Kaj's FB post from this morning.

Comment by annasalamon on Still Not in Charge · 2021-02-09T20:31:35.354Z · LW · GW

Why and when does self-interest (your "utility function hypothesis") ever arise? (As opposed to people effectively being a bunch of not-very-conscious flinchy reflexes that can find their way to a local optimum, but can't figure out how to jump between optima?)

I keep feeling a sense of both interest/appreciation and frustration/this-isn't-quite-it-yet when I read your posts, and the above seems like one of the main gaps for me.

Comment by annasalamon on Making Vaccine · 2021-02-04T01:52:48.698Z · LW · GW

Neat! Will you also try commercial antibody tests on your mucus, or is that known to not-work?

Comment by annasalamon on Motive Ambiguity · 2021-01-22T05:01:02.336Z · LW · GW

You know how there's standard advice to frame desires / recommendations / policy proposals / etc. in positive rather than negative terms? (E.g., to say "It's good to X" and not "It's bad to Y"?)

I bet this is actually good advice, and that it's significantly about reducing the "doing costly things just to visibly not be like Y" dynamic Zvi is talking about. I bet it happens both between multiple people (as in Zvi's examples) and within a person (as in e.g. the examples in "pain is not the unit of effort").

Comment by annasalamon on AnnaSalamon's Shortform · 2021-01-13T19:55:58.876Z · LW · GW

An acquaintance recently started a FB post with “I feel like the entire world has gone mad.”

My acquaintance was maybe being a bit humorous; nevertheless, I was reminded of this old joke:

As a senior citizen was driving down the freeway, his car phone rang. Answering, he heard his wife's voice urgently warning him, "Herman, I just heard on the news that there's a car going the wrong way on 280. Please be careful!"

"Hell," said Herman, "It's not just one car. It's hundreds of them!"

I guess it’s my impression that a lot of people have the “I feel large chunks of the world have gone mad” thing going, who didn’t have it going before (or not this much or this intensely). (On many sides, and not just about the Blue/Red Trump/Biden thing.) I am curious whether this matches others’ impressions. (Or if anyone has studies/polls/etc. that might help with this.)

Separately but relatedly, I would like to be on record as predicting that the amount of this (of people feeling that large numbers of people are totally batshit on lots of issues) is going to continue increasing across the next several years. And is going to spread further beyond a single axis of politicization, to happen almost everywhere.

I’m very open to bets on this topic, if anybody has a suitable operationalization.

I’m also interested in thinking on what happens next, if a very large increase of this sort does occur.

Comment by annasalamon on AnnaSalamon's Shortform · 2021-01-06T06:35:26.432Z · LW · GW

I just read this tweet, which claims that the author's nieces and nephews (who are teenagers) think that Helen Keller probably didn't exist, based on basically not believing things they can't directly verify. (The author seems to think this is a common thing for today's American teenagers.)

This is more extreme than I would have predicted, although in a direction I would have predicted. I have no idea if this is in fact true and common (vs made-up/exaggerated and/or uncommon). Is there anyone here who knows some American teenagers (or other teenagers, really) and is willing to ask them about this for me?

Comment by annasalamon on The Costs of Reliability · 2020-12-22T04:48:26.019Z · LW · GW

I've heard it said (and am inclined to believe) that contemporary firms maintain less slack than their analogs did in the past (more just-in-time purchasing, etc.). Under this model, I guess they would need to maintain greater reliability, and to pay greater "costs of reliability"?

Comment by annasalamon on Motive Ambiguity · 2020-12-21T05:13:52.954Z · LW · GW

Oh man; that article is excellent and I hadn't seen it. If anyone's wondering whether to click the link: highly recommend.

Comment by annasalamon on Motive Ambiguity · 2020-12-21T04:59:48.975Z · LW · GW

[Epistemic status: I’m not confident of any of this; I just want better models and am trying to articulate mine in case that helps. Also, all of my comments on this post are as much a response to the book “Moral Mazes” as to the OP.]

Let’s say that A is good, and that B is also good. (E.g. equality and freedom, or diversity and families, or current lives saved and rationality, or any of a huge number of things.) Let’s consider how the desire-for-A and the desire-for-B might avoid having their plans/goal-achievement disrupted by one another.

In principle, you could build a larger model that explains how to trade off between A and B — a model that subsumes A and B as special cases of a more general good. And then the A-desire and the B-desire could peacefully co-exist and share influence within this larger structure, without disrupting each others’ ability to predict-and-control, or to achieve their goals. (And thereby, they could both stably remain part of your psyche. Or part of your organization. Or part of your subcultural movement. Or part of your overarching civilization’s sense of moral decency. Or whatever. Without one part of your civilization’s sense of moral decency (or etc.) straining to pitch another part of that same sense of moral decency overboard.)

Building a larger model subsuming both the A-is-good and B-is-good models is hard, though. It requires a bunch of knowledge/wisdom/culture to find a workable model of that sort. Especially if you want everybody to coordinate within the same larger model (so that the predict-and-control thing can keep working). A simpler thing you could attempt instead is to just ban desire B. Then desire-for-B won’t get in the way of your attempt to coordinate around achieving desire A. (Or, in more degenerate cases, it won’t get in the way of your attempt to coordinate around you-the-coordinator staying coordinating, with all specific goals mostly forgotten about.) This “just abolish desire B” thing is much simpler to design. So this simpler strategy (“disown and dissociate from one of the good things”) can be reinvented even in ignorance, and can also be shared/evangelized for pretty easily, without needing to share a whole culture.

Separately: once upon a time, there used to be a shared deep culture that gave all humans in a given tribe a whole bunch of shared assumptions about how everything fit together. In that context, it was easier to create/remember/invoke common scaffolds allowing A-desire and B-desire to work together without disrupting each others’ ability to do predictability-and-control. You did not have to build such scaffolds from scratch.

Printing presses and cities and travel/commerce/conversation between many different tribes, and individuals acquiring more tools for creating new thoughts/patterns/associations, and… social media… later made different people assume different things, or fewer things. It became extra-hard to create shared templates in which A-desire and B-desire can coordinate. And so we more often saw social movements / culture wars in which the teams (which each have some memory of some fragment of what’s good) are bent on destroying one another, lest the other destroy their ability to do prediction-and-control in preservation of their own fragment of what’s good. “Humpty Dumpty sat on a wall…”

(Because the ability to do the simpler “dissociate from desire B, ban desire B” move does not break down as quickly, with increasing cultural diversity/fragmentation, as the ability to do the more difficult “assimilate A and B into a common larger good” move.)

Comment by annasalamon on Motive Ambiguity · 2020-12-21T01:11:20.862Z · LW · GW

Extending the E-F-G thing: perhaps we could say “every cause/movement/organization wants to become a pile of defanged pica and ostentatious normalcy (think: Rowling’s Dursleys) that won’t be disruptive to anyone”, as a complementary/slightly-contrasting description to “every cause wants to be a cult”.

In the extreme, this “removing of all impulses that’ll interfere with predictability-and-control” is clearly not useful for anything. But in medium-sized amounts, I think predictability/controllability-via-compartmentalization can actually help with creating physical goods, as with the surgeon or poker player or tennis player who has an easier time when they are not in touch with an intense desire for a particular outcome. And I think we see it sometimes in large amounts — large enough that they are net-detrimental to the original goal of the person/cause/business/etc.

Maybe it’s something like:

  • Being able to predict and control one’s own actions, or one’s organization’s actions, is in fact useful. You can use this to e.g. take three coordinated actions in sequence that will collectively but not individually move you toward a desired outcome, such as putting on your shoes in order to walk to the store in order to be able to buy pasta in order to be able to cook it for dinner. (I do not think one can do this kind of multi-step action nearly as well without prediction-and-control of one’s behavior.)

  • Because it is useful, we build apparatuses that support it. (“Egos” within individual humans; structures of management and deferral and conformity within organizations and businesses and social movements.)

  • Even though prediction-and-control is genuinely useful, a central planning entity doing prediction-and-control will tend to overestimate the usefulness of its having more prediction-and-control, and to underestimate the usefulness of aspects of behavior that it does not control. This is because it can see what it’s trying to do, and can’t see what other people are trying to do. Also, its actions are specifically those that its own map says will help, and others’ actions are those which their own maps say will help, which will bring in winner’s curse-type dynamics. So central planning will tend to over-invest in increasing its own control, and to under-invest in allowing unpredictability/disruption/alternate pulls on behavior.

  • … ? [I think the above three bullet points are probably a real thing that happens. But it doesn’t seem to take my imagination all the way to full-on moral mazes (for organizations), or to individuals who are full-on trying to prop up their ego at the expense of everything. Maybe it does and I’m underestimating it? Or maybe there are added steps after my third bullet point of some sort?]

Comment by annasalamon on Motive Ambiguity · 2020-12-21T00:26:40.582Z · LW · GW

Also: it seems to me that “G” might be the generator of the thing Zvi calls “Moloch’s Army.” Zvi writes:

Moloch’s Army …. I still can’t find a way into this without sounding crazy. The result of this is that the sequence talks about maze behaviors and mazes as if their creation and operation are motivated by self-interest. That’s far from the whole picture.

There is a mindset that instinctively and unselfishly opposes everything of value. This mindset is not only not doing calculations to see what it would prefer or might accomplish. It does not even believe in the concept of calculation (or numbers, or logic, or reason) at all. It cares about virtues and emotional resonances, not consequences. To do this is to have the maze nature. This mindset instinctively promotes others that share the mindset, and is much more common and impactful among the powerful than one would think. Among other things, the actions of those with this mindset are vital to the creation, support and strengthening of mazes.

Until a proper description of that is finished, my job is not done. So far, it continues to elude me. I am not giving up.

For whatever it’s worth, I am also inclined to think that something like “Moloch’s Army” describes something important in the world. As sort-of-mentioned, Atlas Shrugged more or less convinced me of this by highlighting a bunch of psychological dynamics that, once highlighted, I seemed to see in myself and others. But I am still confused about it (whether it’s real; what it’s made of insofar as there is a real thing like that). And G is my best current attempt to derive it.

Comment by annasalamon on Motive Ambiguity · 2020-12-21T00:20:27.158Z · LW · GW

Here is a different model (besides the zero-sum effort tradeoffs model) of why value-losses such as those in the OP might be common and large. The different model is something like “compartmentalization has large upsides for coordination/predictability/simplicity, and is also easier than most other ways of getting control/predictability”. Or in more detail: having components of a (person/organization/anything) that act on anything unexpected means having components of a (person/organization/anything) that are harder to control, which decreases the (person/organization/etc.)’s ability to pull off maneuvers that require predictability, and is costly. (I think this might be Zvi’s model from not-this-post, but I’m not sure, and I’d like to elaborate it in my own words regardless.)

Under this model, real value is actually created via enforcing this kind of predictability (at least, if the organization is being used to make value at all), although at real cost.

Examples/analogies that (correctly or not) are parts of why I find this “compartmentalization/simplicity has major upsides” model plausible:

A. I read/heard somewhere that most of our muscles are used to selectively inhibit other muscles, so as to be able to do fine motor coordination. And that this is one of the differences between humans and chimps, where humans have piles of muscles inhibiting each other to allow fine motor skill, and chimps went more for uncoordinated strength. (Can someone confirm whether this is true?) (The connection may be opaque here. But it seems to me that most of our psychologies are a bit like this — we could’ve had simple, strongly felt, drives and creative impulses, but civilized humans are instead bunches of macros that selectively track and inhibit other macros; and this seems to me to have been becoming more and more true across the last few thousand years in the West.)

B. If I imagine hiring someone for CFAR who has a history of activism along the lines of [redacted, sorry I'm a coward but at least it's better than omitting the example entirely], I feel pause, not because of "what if the new staff member puts some of their effort into that instead of into CFAR's goals" but because of "how it makes it more difficult and higher-overhead to coordinate within CFAR, and leaves us with a bunch of, um, what shows up on my internal radar as 'messes we have to navigate' all the time, where I have to somehow trick them into going along with the program, and the overhead of this makes it harder to think and talk and get things done together." (To be clear, parts of this seem bad to me, and this isn't how I would try to strategize toward me and CFAR doing things; in particular it seems to highlight some flaw in my current ontology to parse divergent opinions as 'messes I have to navigate, to trick them into going along with the program'. I, um, do not want you to think I am endorsing this and to get to blame or attack me for it, but I do want to get to talk about it.)

C. I think a surgeon would typically be advised not to try to operate on their own child, because it is somehow harder to have steady hands and mind (highly predictable-to-oneself and coordinated behavior) if a strong desire/fear is activated (even one as aligned with “do good surgery on my child” as the desire/fear for one’s child’s life). (Is this true? I haven’t fact-checked it. I have heard poker players say that it’s harder to play well for high stakes. Also the book “The inner game of tennis” claims that wanting to win at tennis impairs most adults’ ability to learn tennis.)

D. In the OP's "don't ask what the wine costs, it would ruin the evening" example: it seems to me that there really is a dynamic where asking what the wine costs can at least mildly harm my own experience of the evening, and that for me (and I imagine quite a few others), the harm is not that asking the wine's price reveals a stable, persistent fact that the asker cares about money. Rather, the harm is that asking it breaks the compartmentalization that was part of how I knew how to be "in flow" for the evening. Like, after the asking, I'm thinking about money, or thinking about others thinking about money, and I'm somehow less good at candlelight and music and being with my and others' experiences when that is happening. (This is why Zvi describes it as "slightly socially awkward" — awkwardness is what it feels like when a flow is disrupted.) (We can tell that the thing that's up here in my experience of the evening isn't about longer-term money-indicators, partly because I have no aversion to hearing the same people talk about caring about money in most other contexts.) (I'm sure straight money-signaling, as in Zvi's interpretation, also happens with some people about the wine. But the different "compartmentalization is better for romantic evenings" dynamic I'm describing can happen too.)

E. This is the example I care most about, and am least likely to do justice to. Um: there’s a lot of pain/caring that I find myself dissociating from, most of the time. (Though I can only see it in flashes.) For example, it’s hard for me to think about death. Or about AI risk, probably because of the “death” part. Or about how much I love people. Or how I hope I have a good life, and how much I want children. I can think words about these things, but I tend to control my breathing while doing so, to become analytic, to look at things a bit from a distance, to sort of emulate the thoughts rather than have them.

It seems to me my dissociating here is driven less by raw pain/caring being unpleasant (although sometimes it is), and more by the fact that when I am experiencing raw pain/caring it is harder to predict/plan/control my own behavior, and that lack of predictability is at least somewhat scary and risky. Plus it is somehow tricky for other people to be around, such that I would usually feel impolite doing it and avoid visibly caring in certain ways for that reason. (See example F.)

F. [Kind of like E, but as an interpersonal dynamic] When other people show raw caring, it’s hard for me to avoid dissociating. Especially if it’s to do with something where… the feeling inside my head is something like “I want this, I am this, but I can’t have this. It isn’t mine. Fear. Maybe I’m [inadequate/embarrassing/unable forever]?” Example: a couple days ago, some friends and I watched “It’s a Wonderful Life”, which I hadn’t seen before. And afterward a friend and I were raw and talking, and my friend was, I can’t remember, but talking about wanting to be warm and human or something. And it was really hard for me not to just dissociate — I kept having all kinds of nonsense arguments pop into my head for why I should think about my laundry, why I get analytic-and-in-control-of-the-conversation, why I should interrupt him. And later on, my friend was “triggered” about a different thing, and I noticed it was the same [fear/blankness/tendency-to-want-to-dissociate] in me, in response to those other active currents. And I commented on it to my friend, and we noticed that the thing I was instinctively doing in response to that fear in me, was kind of sending my friend “this is weird/bad what you’re doing” signals. So. Um. Maybe there’s a thing where, once people start keeping raw pain/caring/love/anything at distance, if they run into other people who aren’t, they send those people “you’re being crazy/bad” signals whenever those other people aren’t keeping their own raw at a distance. And so we socialize each other to dissociate.

(This connects still to the compartmentalization-as-an-aid-to-predictability thesis, because part of the trouble with e.g. somebody else talking about death, or being raw, or triggered, is that it makes it harder for me to dissociate, and so makes me less predictable/controllable to me.)

G. This brings me to an alternate possible mechanics of Zvi's "carefully not going out of one's way not to poison the river with the widget factory" example. If lots of people at WidgetCorp wanted to contribute to (the environment / good things broadly), but are dissociated from their desire, it might mess with their dissociation (and, thus, their control and predictability-to-themselves of their own behavior, plus WidgetCorp's ability to predict and control them) if anybody else visibly cares about the river (or even, visibly does a thing one could mistake as caring about the river). And so we get the pressure that Zvi mentions, here and in his "moral mazes" sequence. (And we can analogously derive a pressure not to be a "goody two-shoes" among kids who kind of want to be good still, but also kind of want to be free from that wanting. And the pressure not to be too vulnerably sincere in one's romantic/sexual encounters, and to instead aspire to cynicism. And more generally (in the extreme, at least) to attack anyone who acts from intact caring. Sort of like an anti-epistemology, but more exactly like an anti-caring.)

Comment by annasalamon on Motive Ambiguity · 2020-12-21T00:15:37.176Z · LW · GW

I wish I had a better model of how common it is to actually have people destroying large amounts of value on purpose, for reasons such as those in the OP. And if it is common, I wish I had a clearer model of why. (I suspect it's common. I've suspected this since reading Atlas Shrugged ~a year ago. But I'm not sure, and I don't have a good mechanistic model, and in my ignorance of the 'true names' of this stuff it seems really easy to blatantly misunderstand.)

To try to pinpoint what I don’t understand:

  • I agree that we care about each other's motives. And that we infer these from actions. And that we care about others' models of our motives, and that this creates funny second-order incentives for our actions.
  • I also agree that there are scenarios, such as those in the OP, where these second-order incentives can lead a person to destroy a bunch of value (by failing to not-poison the river, etc.)
  • I’m uncertain of the frequency and distribution of such incentive-impacts. Are these second-order incentives mostly toward “actions with worse physical consequences”, or are they neutral or positive in expectation (while still negative in some instances)? (I agree there are straight-forward examples where they’re toward worse, and that Zvi lists some of these. But there are also examples the other way. Like, in Zvi’s widget-factory example, I could imagine the middle manager choosing the policy that will avoid poisoning the water (whether or not he directly cares about it) so that other people will think he is the sort of person who cares about good things, and will be more likely to ally with him in contexts where you want someone who cares about good things (marriage; friendships; some jobs).)
  • If the distribution does have a large number of cases where second-order incentives push toward destroying value — why, exactly?

Differently put: in the OP, Zvi writes “none of this assumes a zero-sum mentality. At all.” But, if we aren’t assuming a zero-sum mentality, why would the middle manager’s boss (in the widgets example) want to make sure he doesn’t care about the environment? Like, one possibility is that the boss thinks things are zero-sum, and expects a tradeoff between “is this guy liable to worry about random non-company stuff in future situations” and “is this guy doing what’ll help the company” in future cases. But that seems like an example of the boss expecting zero-sum situations, or at least of expecting tradeoffs. And Zvi is saying that this isn’t the thing.

(And one possibility for why such dynamics would be common, if they are, is if it is common to have zero-sum situations where “putting effort toward increasing X” would interfere with a person’s ability to increase Y. But I think this isn’t quite what Zvi is positing.)

Comment by annasalamon on Motive Ambiguity · 2020-12-20T19:49:27.114Z · LW · GW

Okay, this seems true to me (i.e., it seems true to me that some real value is being created by displaying flexibility, willingness to compromise, etc.). (And thanks; I missed it when reading Luke's post above, but it clicked better when reading your reply.)

The thing is, there's somehow a confusing set of games that get played sometimes in cases like the restaurant example that are not about these esteem benefits, but are instead some sort of pica of "look how much I'm sacrificing; now clearly I love you hugely, and I am the victim here unless you give me something similar really a lot, and you owe me" or "look how hard we are working on the start-up; clearly it won't be our fault when the deadline is missed" or various other weird games that seem bad. I guess Luke is referring to this with his phrase about "but if the esteem is the main goal the sacrificer is exhibiting unhealthy codependent behavior." But what is that, exactly?

Comment by annasalamon on Motive Ambiguity · 2020-12-20T17:54:05.489Z · LW · GW

Yoav, I think there might be a difference like the one you’re gesturing at, but if so, I don’t think Zvi’s formalism quite captures it. If someone can find a formalism that does capture it, I’m interested. (Making that need for a fuller formalism explicit, is sort of what I’m hoping for with the examples.)

For example, I disagree, if I reason formally/rigidly, with “in almost none of these does "the protagonist chooses the worse action because it is worse". sleeping in a more risky part of the forest isn't strictly worse, there are benefits to it. spending time finding a rare flower isn't worse than using a common flower since a rare flower is likely to have more value.”

Re: the flowers, I can well imagine a situation where the boy chooses the [takes more work and has more opportunity cost to gather ("rarer")] flower because it [visibly takes more cost to gather it], and "because it has more costs" is IMO pretty clearly an example of "because it is worse" in the sense in the OP (similar to: "because it costs more rubles to buy this scarf-that-is-not-better"). To make a pure example: It's true that the flower's rarity itself makes it more valuable to look at (since folks have seen it less) — but we can imagine a case where it is slightly uglier and slightly less pretty-smelling, to at least mostly offset this, so that a naive observer from a different region who did not know what was rare/common would generally prefer the other. Anyhow, in that scenario it still seems pretty plausible that the boy's romantic gesture would work better with the rarer flower, as the girl says to her gossipy girlfriends "He somehow brought me 50 [rareflowers]!" And they're like "What? Where did he possibly get those from?" And she's like "Yeah. I don't even like [rareflowertype] all that much, but still, gathering those! I guess he must really like me!" (I.e., in this scenario, the boy having spent hours of his time roving around seeking flowers, which seems naively/formally like a cost, is itself a thing that the girl is appreciating.)

Similarly, “riskier part of the forest” means “part of the forest with less safety” — and while, yes, the forest-part surely has other positive features, I can well imagine a context where the “has less safety” is itself the main attraction to the kids (so they can demonstrate their daring). (And “has less safety / has greater risk of injury” seems formally like an example of “worse”. If it isn’t, I need better models of what “worse” means here.)

If these are actually disanalogous, maybe you could spell out the disanalogy more? I realize I didn't engage here with your point about "challenge" and "effort" (which seem like costs on some reckoning, but costs that we sometimes give a positive-affect term to, and for a reason).

Comment by annasalamon on Motive Ambiguity · 2020-12-19T18:20:27.076Z · LW · GW

I tried looking for situations that have many of the same formal features, but that I am glad exist (whereas I intuitively dislike the examples in the OP and wish they happened less). I got:

  1. Some kids set out to spend the night outdoors somewhere. They consider spending it in a known part of the woods, or in an extra scary/risky-seeming part of the woods. They choose the latter because it is risky. (And because they care more about demonstrating to themselves and each other that they can tolerate risk, than about safety.)

  2. A boy wants to show a girl that he cares about her, as he asks her on a first date. He has no idea which flowers she does/doesn't like. He considers getting her some common (easy to gather) flowers, or seeking out and giving her some rare flowers. (Everybody in town knows which flowers are common and which are rare.) He decides on the rare flowers, specifically because it'll cause her to know that he spent more time gathering them, which will send a louder "hey I'm interested in you" signal. (This is maybe the same as your gift-giving example, which I feel differently good/bad about depending on its context.)

  3. Critch used to recommend (probably still does) that if a person has e.g. just found out they’re allergic to chocolate, and is planning to try to give up chocolate but expecting to find this tricky, that they go buy unusually fancy/expensive/delicious chocolate packages, open them up, smell them, and then visibly throw them away without taking a bite. (Thus creating more ability to throw out similar things later, if they are e.g. given fancy chocolates as a gift.) For this exercise, the more expensive/fancy/good the chocolates are (and thus, the larger the waste in buying them to throw away), the better.

  4. Some jugglers get interested in a new, slippery kind of ball that is particularly difficult to juggle. It is not any more visually beautiful to watch a person juggle (at least, if you’re an untrained novice who doesn’t know how the slippery balls work) — it is just harder. Many of the jugglers, when faced with the choice between training on an easier kind of ball or training on the difficult slippery ones, choose to train on the difficult slippery ones, specifically because it is harder (i.e., specifically because it’ll take more time/effort/attention to learn to successfully juggle them).

  5. My anonymous friend Alex was bullied a lot in school. Then, at age 18, Alex was in a large group house with me and… made cookies… all through the afternoon. A huge batch of cookies. Twelve hungry people, including me, sat around smelling the cookies, expecting the cookies. (Food-sharing was ubiquitous in this group house.) Then, when the cookies were made and we showed up in the kitchen to ask for them, Alex… said they were the boss of the cookies (having made them) and that no one could have any! (Or at least, not for several hours.) A riot practically broke out. I was pretty upset! Partly in myself, and partly because lots of other people were pretty upset! But later Alex said they were still glad they did this, basically to show themselves that they didn’t always have to lose conflicts with other people, or have to be powerless in social contexts. And I still know Alex, and haven’t known them to do anything similar again. So I think I in fact feel positively about this all things considered. And the costs/value-destruction was pretty intrinsic to how Alex’s self-demonstration worked — Alex had no other reason to prefer “making everybody wait hours without cookies” to “letting people eat the cookies”, besides that people strongly dis-prefered waiting. (This is a true story.)

(I don’t have a specific thesis in sharing these. It’s just a step for me in trying to puzzle out what the dynamics in the OP’s examples actually are, and I want to share my scratch work. More scratch work coming probably.)

Comment by annasalamon on Motive Ambiguity · 2020-12-19T01:55:30.348Z · LW · GW

Thanks for writing this; I really appreciated getting to read this post, especially the examples, which seem helpful for trying to bring something into focus.

Comment by annasalamon on Covid 12/10: Vaccine Approval Day in America · 2020-12-10T23:57:27.873Z · LW · GW

Oh; sorry. I thought your "again" was referring to the earlier covid wave.

Comment by annasalamon on Covid 12/10: Vaccine Approval Day in America · 2020-12-10T21:48:49.881Z · LW · GW

This is a nitpick, but to me it seems to overstate things to say:

and the United States once again has a far worse Covid problem than Europe

According to worldometers, the United States has so far had ~903 covid deaths for every 1M people, while in Europe, the UK has had 912 deaths/1M; Italy has had 1,036; Spain 1,012; Germany 253; France 871; Greece 324; etc. The US has a far worse problem than Germany or Greece or Finland, but a comparably bad problem to the UK or Italy or Spain or France. I guess I feel a bit picky on this point because I've seen a lot of news articles that assume the US is worse because they fail to do "per capita", and because there seems to be a general attempt to discredit our institutions so that they'll collapse, which doesn't obviously seem crazy or misguided to me, but doesn't obviously seem right either given the downside risk.
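
For concreteness, here is a minimal sketch (in Python) of the per-capita normalization I mean; the death totals and populations below are rough, illustrative round numbers, not worldometers' exact figures:

    # Rough illustrative numbers only -- not worldometers' exact data.
    covid_deaths = {"US": 297_000, "Germany": 21_000}
    population = {"US": 330_000_000, "Germany": 83_000_000}

    for country in covid_deaths:
        per_million = covid_deaths[country] / population[country] * 1_000_000
        print(f"{country}: {per_million:.0f} deaths per 1M people")

Skipping that division makes a country of ~330M people look worse than any single European country almost automatically, whatever the actual rates.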

(Or do you think the death estimates are wrong?)

Comment by annasalamon on Cultural accumulation · 2020-12-09T15:25:34.066Z · LW · GW

I'm hoping the operationalization is more about "would you get a working (and decent) bicycle if you tried from this drawing (+ maybe the stuff you would obviously figure out while trying from the drawing)" and less about "does it have every one of the fancy improvements that modern bikes have".

Comment by annasalamon on Cultural accumulation · 2020-12-06T20:45:42.191Z · LW · GW

I would quite like to see this bet, basically for inquiry's sake. (Though it would cost 1-2 days of habryka's time.)

Comment by annasalamon on Rule Thinkers In, Not Out · 2020-11-28T22:59:54.624Z · LW · GW

The short version of my current stance on Vassar is that:

(1) I would not trust him to conform to local rules or norms. He also still seems to me to precipitate psychotic episodes in his interlocutors surprisingly often, to come closer to advocating physical violence than I would like (e.g. this tweet), and to have conversational patterns that often disorient his interlocutors and leave them believing different things while talking to Michael than they do a bit later.

(2) I don't have overall advice that people ought to avoid Vassar, in spite of (1), because it now seems to me that he is trying to help himself and others toward truth, and I think we're bottlenecked on that enough that I could easily imagine (2) overshadowing (1) for individuals who are in a robust place (e.g., who don't feel like they are trapped or "have to" talk to a person or do a thing) and who are choosing who they want to talk to. (There were parts of Michael's conversational patterns that I was interpreting as less truth-conducive a couple years ago than I am now. I now think that this was partly because I was overanchored on the (then-recent) example of Brent, as well as because I didn't understand part of how he was doing it, but it is possible that it is current-me who is wrong.) (As one example of a consideration that moved me here: a friend of mine whose epistemics I trust, and who has known Vassar for a long time, said that she usually in the long-run ended up agreeing with her while-in-the-conversation self, and not with her after-she-left-the-conversation self.)

Also I was a bit discomfited when my previous LW comment was later cited by folks who weren't all that LW-y in their conversational patterns as a general "denouncement" of Vassar, although I should probably have predicted this, so, that's another reason I'd like to try to publicly state my revised views. To be clear, I do not currently wish to "denounce" Vassar, and I don't even think that's what I was trying to do last time, although I think the fault was mostly mine that some people read my previous comment as a general denouncement.

Also, to be clear, what I am saying here is just that on the strength of my own evidence (which is not all evidence), (1) and (2) seem true to me. I am not at all trying to be a court here, or to evaluate any objections anyone else may have to Vassar, or to claim that there are no valid objections someone else might have, or anything like that. Just to share my own revised impression from my own limited first-hand observations.

Comment by annasalamon on Pain is not the unit of Effort · 2020-11-28T16:31:12.137Z · LW · GW

Not a direct response to the post, but on the same broad topic:

It seems to me that many people (e.g., me and several people I've discussed this with) have not so much an intrinsic aversion to pain as a fear of how we'll act if in pain. (As the main effect. I do also simply like not being in pain, but the fear of how the pain will impact my actions is/was often the larger of these two.) So, for example, a person will avoid seeking out bad news about their project not so much because they mind the pain as such, but because they aren't sure whether they'll act funny or have trouble working or similar if they're suddenly sad. Or a person will try to manage their moods not so much to avoid the mood as such, as to avoid being grouchy toward those near them, or to avoid being a downer at the party.

In my experience, increases in my ability to try in deep/effective ways have several times come via decreases in how afraid I was of being sad/upset (and/or increases in my ability to act well despite being sad/upset). Acquiring less of a need to manage my own mood was important and useful for me. When trying to put up a pretense of "everything is okay and I'm fine," I couldn't think properly. (I still have some of that, and it is still a barrier to thinking, but less.)

This has seemed true for me despite it also seeming true that when I am trying my best, I am often/usually free and happy. (And that if I look for where I have a "posture of pain", I can often thereby locate a place where my form is poor and is wasting my energy.) Trying well for me has often involved a sort of happiness.... but first it has often involved pain/fear/similar as I integrate what I do not know how to integrate.

(None of this is intended as advice. I don't know you, whoever you are who is reading this, and I don't have a good grasp of how you're currently put-together and kind-of-stable.)

Comment by annasalamon on Pain is not the unit of Effort · 2020-11-28T15:41:42.001Z · LW · GW

I appreciate you pointing this out. I'm not sure if you're already saying this or not, but IMO we on LW should work hard (on LW, at least) not to promote beliefs that are meant to be useful, as though they are meant to be true. Otherwise, we'll get into a muddle where moralism / desire not to harm others makes it difficult to acquire and share true observations about the world.

E.g., maybe I'll be afraid to say "my anonymous friend Bob seems to me to work exceedingly hard, and exceedingly effectively, while being very unhappy" lest I retraumatize people or make their antidotes ineffective.

A proposed fix to your "counterbalancing beliefs": call them "heuristics" or "questions-to-oneself," and phrase them as questions rather than truth-claims. E.g.:

  • 1'. If it hurts, is there some way the specifics of the pain/tiredness can lead me to notice wasted effort / improvable form?
  • 2'. Are there ways I can let go of some of the pain/tiredness? If I was really trying here, might I be happier?

I do personally get mileage from questions like 1' and 2'. I think the thing you're after with the antidotes (whose spirit I appreciate) is to make sure that we don't preferentially look for ways to be more effective that cause pain (rather than ways to be more effective that relieve pain, or that are neutral on the pain dimension). So we can look for the search strategies directly.

(Also, thanks for the post! Some good discussion on a tricky and important topic, IMO.)

Comment by annasalamon on AGI Predictions · 2020-11-22T05:12:51.612Z · LW · GW

IMO, we decidedly do not "basically have it covered."

That said, IMO it is generally not a good idea for a person to try to force themselves on problems that will make them crazy, desperate need or no.

I am often tempted to downplay how much catastrophe-probability I see, basically to decrease the odds that people decide to make themselves crazy in the direct vicinity of alignment research and alignment researchers.

And on the other hand, I am tempted by the HPMOR passage:

"Girls?" whispered Susan. She was slowly pushing herself to her feet, though Hermione could see her limbs swaying and quivering. "Girls, I'm sorry for what I said before. If you've got anything clever and heroic to try, you might as well try it."

(To be clear, I have hope. Also, please just don't go crazy and don't do stupid things.)

Comment by annasalamon on Where do (did?) stable, cooperative institutions come from? · 2020-11-04T05:44:31.455Z · LW · GW

Thanks; this resonates for me and I hadn't thought of it here.

The guess that makes sense to me along these lines: maybe it's less about individual vulnerability to attack/etc., and more that they can somehow sense that the fundamentals of our collective situation are not viable (environmental collapse, AI, social collapse, who knows, from that visceral perspective I imagine them to have), and yet they don't have a frame for understanding the "this can't keep working," and so it lands in the "in denial" bucket and their "serious person" is fake. (I don't think the "fake" comes from "scared" alone; I think you need also "in denial about it." For example, I think military units in war often do not feel fake, although their people are scared.)

(Alternative theory for scared: maybe it is just that we are lacking tribe.)

Comment by annasalamon on Where do (did?) stable, cooperative institutions come from? · 2020-11-04T05:07:14.120Z · LW · GW

Thanks. I buy the death spirals thing. I'm not sure I buy the "OK in the private sector but not the public sector b/c no competitive process there" thing -- do you have a story for why the public sector remained okay for ~200 years (if it did)? Also, particular newspapers and academic institutions have competitors, and seem to me also to be in decline.

Comment by annasalamon on Open & Welcome Thread – November 2020 · 2020-11-04T05:04:30.683Z · LW · GW

"And the explosive mood had rapidly faded into a collective sentiment which might perhaps have been described by the phrase: Give us a break!

Blaise Zabini had shot himself in the name of Sunshine, and the final score had been 254 to 254 to 254."

Comment by annasalamon on Where do (did?) stable, cooperative institutions come from? · 2020-11-04T00:16:01.001Z · LW · GW

The first half is fine, but replace "altruistically" with "selfishly".... They figure out how to make a living... [emphasis mine]

At first glance, if we're talking about a thing that requires cooperative effort from many people across time, this seems like a heck of a principal-agent problem. What keeps everybody's incentives aligned? Why does each of us trying selfishly to make a living result in a working firefighting group (or whatever) instead of a tug-of-war? I understand the "invisible hand" when many different individuals are individually putting up goods/services for sale; I do not understand it as an explanation for how hundreds of people get coordinated into working institutions.

My 0-3 is an attempt to understand how something-like-selfishness (or something-like-altruism, or whatever) could stitch the people together into a thingy that could produce good stuff despite the principal-agent problem / coordination difficulty.

Comment by annasalamon on Where do (did?) stable, cooperative institutions come from? · 2020-11-03T23:50:18.231Z · LW · GW

Thanks for the SF crime link; you may be right. Multiple (but far from all) friends of mine in SF have been complaining about being more often accosted, having greater fear of mugging than previously, etc.; but that is a selection of crimes and is not conclusive evidence.

Comment by annasalamon on Where do (did?) stable, cooperative institutions come from? · 2020-11-03T23:25:51.352Z · LW · GW

There actually seems to be far more subcultures being formed than there ever were before

DaystarEld, what are your favorite current happening scenes? (Where new art/science/music/ways of making sense of the world/neat stuff is being created?) Would love leads on where to look.

Comment by annasalamon on Where do (did?) stable, cooperative institutions come from? · 2020-11-03T23:13:05.831Z · LW · GW

Thanks. Under this hypothesis, we should see an improvement in the quality of private-sector institutions. (Whereas, under some competing hypotheses, Google and other private-sector companies should also have trouble creating institutional cultures in the 0-3 sense.) Thoughts on which we see?

Also, thoughts on David Chapman's claim that subcultures (musical scenes, hobby groups, political movements, etc.) have been vanishing? Do you also hypothesize this brain drain to affect hobby groups?

Comment by annasalamon on Where do (did?) stable, cooperative institutions come from? · 2020-11-03T23:02:15.445Z · LW · GW

Good question. I'm not sure if this will make sense, but: this is somehow the sort of place where I would expect people's stab-in-the-dark faculties ("blindsight", some of us call it at CFAR) to have some shot at seeing the center of the thing, and where by contrast I would expect that trying to analyze it with explicit concepts that we already know how to point to would... find us some very interesting details, but nonetheless risk having us miss the "main point," whatever that is.

Differently put: "what is up with institutional cultures lately?" is a question where I don't yet have the right frame/ontology. And so, if we try to talk from concepts/ontologies we do have, I'm afraid we'll slide off of the thing. Whereas, if we tune in to something like that tiny note of discord Eliezer talks about (or if we pan out a lot, and ask what our taste says is/isn't most relevant to the situation, or ask ourselves what does/doesn't feel most central), we may have a better shot.

Comment by annasalamon on Where do (did?) stable, cooperative institutions come from? · 2020-11-03T22:54:02.139Z · LW · GW

Thanks! Fixed.

Comment by annasalamon on Where do (did?) stable, cooperative institutions come from? · 2020-11-03T22:37:21.126Z · LW · GW

One hypothesis for why it has gotten harder to form institutional cultures (I am assuming here that it has):

I’ll call this the “Geeks, MOPS, and sociopaths” model. Under this model (put forward e.g. in the essay of Ben Hoffman’s that I linked above), it has somehow become easier and more common for people to successfully ape the appearances of an institutional culture, while not actually being true to it (and so, while betraying it in longer-term or harder-to-trace ways).

In the example of the NYT, this could occur in several ways:

  • People getting jobs within the NYT who believe less sincerely in the old journalistic ethic (though they perhaps believe in looking like they believe in whatever is popular);
  • Alternative press outlets (Washington Post, or whoever) arising that believe less sincerely in journalistic ethics (or anything like this) than the NYT, but who parasitize the “kind of like journalistic ethics” brand by aping its appearance to readers;
  • Leadership of the NYT being more interested in bending the NYT’s brand (and its internal culture/ethics) to however people today happen to be evaluating which newspaper to trust, in ways that boost those leaders’ personal [$/prestige/political power] but that harm the longer-term legacy of the institution (because future people, who are under the sway of different fads, won’t see it this way).

Related argument: the 4-hour documentary / propaganda film “Century of the Self” argues that the dispersion of game theory (“it’s virtuous to think about my self-interest and e.g. defect in prisoners’ dilemmas”) and of marketing/focus groups/“public relations” (“my brand can figure out how other people are making sense of the world on a pre-conscious level, by using techniques similar to Gendlin’s Focusing on them, and can thereby figure out how to be perceived as having a certain ethic/culture/institution by hacking their detectors”) led to more of this sort of aping, and replaced institutional cultures that might’ve helped past people do real work with LARPing and “lifestyles.”

This is also quite related to Goodhart’s law. But under this hypothesis, the dynamics have somehow changed so that [individuals/organizations who are trying to appear to have virtues] are able to successfully fool the detectors of [individuals/organizations that are trying to detect whether they have virtues]. It does not explain why that would have changed.

Comment by annasalamon on Open & Welcome Thread - June 2020 · 2020-06-06T16:40:59.056Z · LW · GW

I'm glad you're trying, and am sorry to hear it is so hard; that sounds really hard. You might try the book "How to Have Impossible Conversations." I don't endorse every bit of it, but there's some good stuff in there IMO, or at least I got mileage from it.

Comment by annasalamon on Hanson vs Mowshowitz LiveStream Debate: "Should we expose the youth to coronavirus?" (Mar 29th) · 2020-04-05T18:03:14.427Z · LW · GW

Yes; thanks; I now agree that this is plausible, which I did not at the time of making my above comment.

Comment by annasalamon on Hanson vs Mowshowitz LiveStream Debate: "Should we expose the youth to coronavirus?" (Mar 29th) · 2020-03-29T20:16:20.149Z · LW · GW

​I think we are unlikely to hit herd immunity levels of infection in the US in the next 2 years. I want to see Robin and Zvi discuss whether they think that also or not, since this bears on the value of Robin's proposal (and lots of other things).

Comment by AnnaSalamon on [deleted post] 2020-02-17T22:18:08.885Z

Add lots of sleep and down-time, and activities with a clear feedback loop to the physical world (e.g. washing dishes or welding metals or something).