Comment by rana-dexsin on Covid 12/31: Meet the New Year · 2021-01-01T05:41:07.610Z

The WHO redefinition part looked weird to me, so I tried to verify it. The 13 November text checks out at the Internet Archive—though note that the screenshot shows only the beginning of the entry. The full entry contained many more paragraphs of text, but I don't see it correcting the weird definition of “herd immunity” that it establishes at the beginning.

However, the current text as I see it live on 31 December (last updated today, apparently) is significantly different. It gives a lot of space to the benefits of vaccination, but does not phrase this in a way that ignores other sources of immunity the way the 13 November text did, and it makes clearer that “herd immunity through vaccination” is a normative claim about what actions should be taken, not a positive claim about what herd immunity actually is. Here's the current first paragraph, emphasis mine:

'Herd immunity', also known as 'population immunity', is the indirect protection from an infectious disease that happens when a population is immune either through vaccination or immunity developed through previous infection. WHO supports achieving 'herd immunity' through vaccination, not by allowing a disease to spread through any segment of the population, as this would result in unnecessary cases and deaths.

The rest of the new text more or less matches this change from the 13 November version; there is a bit about “The fraction of the population that must be vaccinated against COVID-19 to begin inducing herd immunity is not known”, but that's several paragraphs in and I read it as pretty well-contextualized to “given that the plan is to vaccinate until we reach that point”. Here's the first sentence from the third paragraph, emphasis mine:

Vaccines train our immune systems to create proteins that fight disease, known as ‘antibodies’, just as would happen when we are exposed to a disease but – crucially – vaccines work without making us sick.

The part I emphasized in that sentence is actually identical in the 13 November text, but badly contextualized. (The other differences in the third paragraph are immaterial to the distinction under question, consisting only of additional explanatory text—I assume to help readers who don't have a basic gears-model of immune response and viral transmission readily in memory.)

Importantly, and to restate something from above, the third and all subsequent paragraphs are missing from the right-hand screenshot in the post, and it doesn't look like normal truncation at a glance—the whitespace at the bottom of the screenshot visually implies that the second paragraph was the end of the entry in that version, which is false.

IA snapshots show that the 13 November text was in place up through 27 December—perhaps not a small blip in terms of Internet time—but it does seem to have been corrected.

I was not able to verify the 9 June text, since the IA shows no snapshots of this URL before October. Perhaps the URL was different at the time; I would appreciate a hard reference if anyone has one.

Comment by rana-dexsin on 100 Tips for a Better Life · 2020-12-23T23:57:07.428Z

I think it's sort of inevitable that general-vectors lists like this will have a lot of entries that have the “this is much easier to do when you're already in a good position” property, but that the underlying effect is much more a divergent-feedback property of the environment and not specific to the list. So I'd say something like:

  1. It's important not to get stuck in the victim mindset where you give up and/or rebel because you can't do the same things to obtain wins that are easy for people in better situations. In more collective, adversarial situations, the balance of social emotions may skew toward doing otherwise as a tactic, but communities where that's a steady state trend unhealthy in the medium to long term, and I don't think there are many cases where deciding it on your own is actually a win.
  2. If you're in a worse situation than allows the direct use of an idea, but not so much worse that there's an uncrossable gap, most of these degrade gracefully to “maybe keep an eye out for this”. I can't afford a second monitor right now (this is true in reality), but I can remember to revisit the idea if I have more money later. But someone who won't realistically be in a position to own any computing devices for the next decade should discard that item entirely.
  3. Adjacent to (2), if a gap looks uncrossable but you want it not to be, consider that some of that might be an illusion, and that you might be able to improve your imagination and look for possibilities you've missed. Extending your range of thought is something that's encouraged a lot here. If you hold on too strongly to “you shouldn't even be talking about things like that”, that can set you up to fall into #47 (which I think is one of the more universal ones).
  4. All the same, calibration to “what level and type of things people are in a position to care about right now” is one of the big implicit cultural and situational specificity elements I mentioned in passing elsewhere. If you're way off from the implied target audience for too much of the list, maybe it's not worth bothering. #31 (which I also think is one of the more universal ones) sort of implies this. (However, I don't think it's practical to expect a list of anything more than platitudes to make no such assumptions.)
  5. … but to combine (4) with (3), lines of thought go very differently depending on whether you use “you shouldn't even be talking about that” or “I don't care about this list right now” as an interpretation. The latter opens up more agency for doing something about it.
  6. If what you mean is more like “hey, are you even thinking about the possibility that some of these might be impossible”, then I would agree with you that it's generally a good idea to notice the context dependence when composing things like this (which is in fact why I mentioned it elsewhere), but stopping at that idea doesn't lead to much. If you want a different outcome, starting by clarifying in your own mind what that would be like helps more; for instance, “I would like to see similar lists with different implied audiences” is not a bad idea (though there are ways of instantiating it unproductively).
  7. All of the above, themselves, of course assume a certain amount of value compatibility…
Comment by rana-dexsin on 100 Tips for a Better Life · 2020-12-23T22:47:27.990Z

It's pretty USA-centric, at least. Doing this in other jurisdictions where the balance of rights and the dominant informal relationship between the public and the police are different could be much worse.

Comment by rana-dexsin on 100 Tips for a Better Life · 2020-12-23T09:46:47.944Z

[Epistemic status: experience-based synthesis, likely biased]

Most of these seem reasonably sane, of course with varying levels of cultural and situational slant and specificity (as one would expect from any list like this). One of them, however, strikes me as actively dangerous in a way worth mentioning:

  52. If you want to become funny, try just saying stupid shit until something sticks.

Doing this visibly in more sensitive or conformist social groups can be a disaster. Gaining a reputation for saying erratic things can make you the person that no one can take anywhere because you might ruin the environment at any time, and then you're in the hole. Depending on your interpersonal goals, it may be that exiting a group like that would be a net benefit for you, but even if that's true for you, you may want to examine those options first before playing roulette with your status.

Bouncing things off yourself doesn't have the same problem, but seems like a much weaker way of developing a quality which is fundamentally social; it can work if you have an internal sense of what's funny but haven't “found” it for conscious access, but it doesn't work if you were miscalibrated to start with. Bouncing things off trusted friends can work, but at that point you're more likely to have already had that option saliently in mind. (Well, if you didn't and you're reading this, now you do.)

More specifically, I think people who are socially oblivious and think that humor will improve their standing may be likely to jump at #52, and if they are in the above situation, get hurt, with the hazard having been invisible due to the obliviousness. One might then ask why they would get marginally hurt if they were already likely to make social errors—but I think it's possible to get by in such cases with (perhaps not consciously noticed) conditioned broad inhibitions instead… until you read something like this as “permission”.

Comment by rana-dexsin on Covid 12/10: Vaccine Approval Day in America · 2020-12-10T18:24:16.579Z

There was a Pew Research survey this week on who will get the vaccine.

To confirm, this is about who intends to seek out the vaccine, yes? The study mentioned by this article on the Pew Research site? Some of the text (especially the Twitter screenshot) was easy to initially read as “to whom the system will choose to allow the vaccine” instead.

Comment by rana-dexsin on Covid 12/3: Land of Confusion · 2020-12-04T20:50:52.938Z

In that case, I would like to register, without requesting any specific action, that one of the reasons I tend to quietly accept a lot of the scattershot secondary jabs as part of Zvi's style is that these posts are Personal-Blog-categorized crossposts. He writes earlier that “I am happy that the community gets use out of them, but they are not designed for the front page of LessWrong or its norms.” Giving most of the last few sections full front page status as-is feels very costly to me. If I could provide a finger-snap option, I'd go with “front-page a separate post that only pulls the more critical and topical information from this one”, but of course someone would have to actually do that. (In theory I would offer to do such derivation, but I won't have spare mental space for something of that order for several days yet, which is presumably too late.)

And along the lines of another comment subtree, from my perspective most (not all) of the wave of Twitter links on this post feels worse in terms of “more heat than light” than what I see from an ad-hoc resample of previous posts in the series—in a way it feels like the series has been leaning further and further almost-but-not-quite toward the “Agreed that doing what I did here on a regular basis would be quite bad” zone from the one post in the series that had a more clearly marked digression on US election results.

(Half-contrasting opinion-observation: I'm aware of and mostly am okay with the level of stylistic pointedness in Zvi's other writing such as “More Dakka” and the Moral Mazes series, but I don't think I've elsewhere observed the one-two punch of that plus edging against more-heat-than-light topics arriving from Twitter.)

Comment by rana-dexsin on Persuasion Tools: AI takeover without AGI or agency? · 2020-11-24T09:40:27.222Z

Did your friend manage to get out of the mistaken suggestion-patterns later, and if so, how? (If it's appropriate to reveal, of course.)

Comment by rana-dexsin on Survey of Deviant Ideas · 2020-11-23T09:04:27.662Z

Isn't this extremely social-context-dependent? Do you mean “almost no other LW readers would agree with you on”? Or “almost nobody in the (poorly-defined) ‘mainstream’ would agree with you on”? Or “almost nobody in your ‘primary’ social group (whatever that is) would agree with you on”? Or “almost nobody in the world (to what threshold? that's a lot of people!) would agree with you on”?

Edited to add: To make the concrete connection explicit, I can think of a number of things I believe that I wouldn't dare say out loud on LW, and a number of things I believe that I wouldn't dare say out loud in another very different social setting I'm attached to, but they don't intersect much. I'm not sure I can think of much I believe where I have no social group that would agree with me.

Comment by rana-dexsin on AGI Predictions · 2020-11-21T07:24:09.008Z

What level of background in AI alignment are you assuming/desiring for respondents? Is it just “all readers” where the assumption is that any cultural osmosis etc. is included in what you're trying to measure?

Comment by rana-dexsin on Where do (did?) stable, cooperative institutions come from? · 2020-11-06T20:42:40.300Z

I strong-upvoted this out of the negative because it seemed disproportionate for it to be there; I think it has some flaws as an answer and it might've been better as a comment, but there are other answers that are just as shaky on an explanatory level. (Though I don't think some of the adversarial framing is doing it any favors.)

My intuition is that there's a strong underlying point here, even if the surface markers have run afoul of some memetic antibodies. I'd love to see a better framing of this and better-explained actual counters if they're there; if the latter are part of local canon, they haven't propagated to me. “spirals of inequality leads to spirals of distrust” as a central thesis certainly plays well enough with some of the EEA-psychology and historical-cycles modeling on the surface.

Comment by rana-dexsin on Location Discussion Takeaways · 2020-11-04T06:36:58.869Z

I'll try to reword/expand here what I read Zack as saying/implying, without presently agreeing or disagreeing with it (except for one meta bit below):

“mingyuan's post implies that the main threat from cancel culture is being personally (perhaps physically) attacked. However, the main problem with attempting to center a rationalist community in an area that is sufficiently affected by cancel culture occurs well before the point where being personally attacked is likely. The problem is that people reflexively censor what they say, in such a way that the community stops being able to coordinate on anything that is true but cancellable, or even potentially-true but cancellable.¹ This would cause arrived-at consensus about reality to be distorted in a way that no longer reflects rationalist ideals and makes the resultant community no longer worthy of² the name. Because this has such an impact, centering on individual “social safety” as described in the original post is misleading and distracting when thinking about how to defend a rationalist community from the effects of cancel culture, in that it may lead to accepting solutions that preserve such individual safety but destroy defining aspects of the community in the process.”

¹ Zack's first comment has other references about how this can create an equilibrium that's difficult to break, and why partial answers work poorly, but I'm not confident enough to copy those here, and I think they may have been distracting as originally interleaved with the main argument.

² I considered writing “accurately described by” instead of “worthy of” here, but I place more salience on the emotion/motivation aspect in that part of Zack's comment.

Zack, how accurate is this? habryka, does that help?

(My read on the meta-aspect of the very first part is that I interpret the “I guess cities are maybe worse on the cancel culture dimension because if you're hidden in the middle of nowhere it's harder for people to credibly threaten to physically attack you.” part of mingyuan's post as less salient, and more intended as a potentially nonrepresentative example, compared to the “But, is there anywhere, physically, that one can go to escape cancel culture? My instinct is no”, which dominates my felt-sense of that section. So (without intending to judge whether this is good or bad) I think Zack is responding at something of an angle to the thrust of the original post.)

Comment by rana-dexsin on Things are allowed to be good and bad at the same time · 2020-10-17T16:38:44.652Z

On the face of it, the initial conditions which you're consciously pushing against sound like a milder “everyone does it a little” version of the “splitting” behavior that shows up more pathologically in e.g. borderline personality disorder, maybe with a dash of the “Mental Mountains” post from SSC. Does that sound like an accurate description?

Comment by rana-dexsin on "Win First" vs "Chill First" · 2020-10-01T17:24:54.970Z

[Epistemic status: anecdata and perspective generation]

I think it's not right in the general case, but it may be more right than not as an approximation here, since what's described might be indicative of defaults regarding intensity. In my experience, default intensities do feel roughly bimodal among my peers, and in fact one of my current life strategy issues is to figure out how not to fall too far into line with the less-intense subset that currently dominates my social graph.

Another read on that might be that even when the resultant intensities differ widely between activities and situations and may overlap or cross over between a “win-first” individual and a “chill-first” individual, there's still an underlying difference in something like focus, salience, or differential habituation to up-regulation versus down-regulation of intensity.

Comment by rana-dexsin on Open & Welcome Thread - September 2020 · 2020-09-25T04:47:32.313Z

Is there visible reporting on this?

Comment by rana-dexsin on The Best Toy In The Park · 2020-08-26T09:35:19.952Z

There is little conscious depth, i.e. depth that we can introspect, experience or enjoy. We don't think much about which specific centimeter we'll place our foot at, we just feel the correct motion and perform it.

[Epistemic status: personal observation of mental states which are difficult to describe well]

This doesn't quite match my experience (though I haven't had much of this experience for a while, so take this with some extra salt). What I remember is being able to have deep conscious interaction with an ongoing complex motor process like that, but in a less synchronous way.

Activities like playing board games involve conscious manipulation in the same subjective timeline as the main flow of action: you consciously think about what move to make, then you reach out to make it, then you consciously observe what your opponents are doing, then repeat (depending on the game, of course). Activities like playing music or running, by contrast, involve primarily unconscious cycles as the “main” flow of action, but the conscious mind can still watch it happen and then reach out and touch it in parallel, placing constraints and nudges and altering parameters.

What it doesn't get is waited on for a say in every microdecision, because those are happening too fast—but consciously remembering a finer-grained history lets you try to extrapolate what nudges to give to create the pattern you want next time. This is how I would realize the loop of deliberate practice in motor skills—which, I just now notice, makes the “(consciously) think, then act” pattern again, but one level of temporal chunking up. And it's possible to have a conscious say in an upcoming microdecision if the conscious mind predicts it far enough in advance and the unconscious mind has enough spare processing power that the information can be integrated in time.

Comment by rana-dexsin on Survey Results: 10 Fun Questions for LWers · 2020-08-21T03:14:08.103Z

Maybe, but I think any change to the result caused by people randomizing is inherently part of the actual result here. But then, any change to the result caused by people thinking they shouldn't randomize because it would hamper the result is also part of the result.

Comment by rana-dexsin on Survey Results: 10 Fun Questions for LWers · 2020-08-19T08:17:15.601Z

I did roll a four-sided die for the first question, in fact. (Well, to be more precise, I rolled a six-sided die after precommitting to myself that I would continue rolling until the answer was in [1, 4].) Now I'm glad I did.
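(Aside for the curious: the reroll-until-in-range trick is ordinary rejection sampling, and it's easy to sketch. A minimal Python illustration—function names are my own, purely for exposition:)

```python
import random

def roll_d6():
    """A fair six-sided die."""
    return random.randint(1, 6)

def roll_d4_via_d6():
    """Simulate a fair four-sided die with a six-sided one:
    keep rolling until the result lands in [1, 4].

    A rejected roll carries no information about the eventual
    accepted value, so each of 1-4 remains equally likely (1/4)."""
    while True:
        result = roll_d6()
        if result <= 4:
            return result
```

Each roll is accepted with probability 4/6, so on average it takes 6/4 = 1.5 rolls per accepted value.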

Comment by rana-dexsin on WordPress Destroys Editing Process, Seeking Alternatives · 2020-08-18T20:01:44.589Z

The original post contains information implying that Zvi is not self-hosting. “There might be a way to switch back to the old editor if I payed them the ransom money to upgrade my account to Business so I could use plugins, but …” The post is also itself marked as crossposted from the blog, and the URL is at the official hosting offering.

Comment by rana-dexsin on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-23T21:22:34.628Z

This initially felt to me like it ignored some of the ramifications of its parent comment, but I'm also not sure the parent comment intended to imply them. So I would like to put forth the more specific idea that the line of action “there is a power imbalance, therefore, we have to amplify our motions by a large factor to counteract it, which is safe because we know we can't do any real damage to them” may not be universally wrong but is still dangerous and, for those acting on the sort of charitability norms ESRogs/ricraz describe, requires a lot of extra scrutiny. Specifically, I think nonrigorously with medium confidence that:

  • This line of action can create a violence cascade if some of the assumptions are wrong. (And in this concrete context specifically, it is not clear to me that the assumptions are right enough.)
  • In the case of “soft power” (as opposed to, for instance, physical violence, where damage is more readily objectively measurable and is often decisive by way of shutting down capacity), this is much more true when there is a lot of “fog of war” going on, where perceptions of who has power over what and whom don't have a lot of consensus. It is very easy to assume you're in the weak position when you actually have more power than you think, and even if that power is only in some spheres, it can do lasting damage.
  • Some of the possible lasting damage is polarization cascades which operate independently of whether you can damage someone's reputation in the “mainstream”: if each loosely-defined party over-updates on decrements to an opposing party's reputation just among itself, this opens up a positive feedback loop.
  • In the case of decentralized Internet communities, it's hard to tell how large the amplification factor is actually going to be unless there's actually a control loop involved (such as a leader with the social credentials to say “our demands have been met, now we will stop shouting”).
  • In the presence of the ability of soft-power actions to “go viral” quickly and out of control from tiny sources, unilateralist's curse amplifies all of the above for even very localized decisions about when to “put the hurt on”.

I think with less confidence that the existing polarization cascades across the Internet involve a growing memetic strain that incentivizes strategic perception of self as weak in the public sphere, so there's some amount of “if you think you're in the weak position and should hit back, it might also be your corrupted hardware emulating status-acquiring behavior” in there too.

At this point the specific SSC articles “Be Nice, At Least Until You Can Coordinate Meanness” and “The Toxoplasma of Rage” come to mind, but I don't remember clearly enough whether they directly support any of this, and given Scott's current position, I don't feel like it would be appropriate for me to try to check directly.

I do think there are plausibly more concrete points against a “mistake theory”-like interpretation of the events. For instance, Scott reported the reporter describing an NYT policy, and others say no such policy actually exists. But the reporter could have misspoken, which would still be a legitimate grievance against the reporter but would frame things in a different light. Or Scott could have subtly misrepeated the information; I am sure he tries to be careful, but does he get every such fact exactly right under the large stresses of an apparent threat?

So, I generally endorse “tread cautiously here”.

I also think Scott's own suggestions of sending polite, private feedback to the NYT expressing disapproval of revealing Scott's name are not unusually dangerous and do not have much potential for creating cascading damage per above, especially since “news organizations should be able to deal with floods of private feedback” is a well-established norm. So this shouldn't be interpreted as a reason to suppress that.

Comment by Rana Dexsin on [deleted post] 2020-06-16T00:00:56.896Z

I've put a few cycles into trying to come up with a better way to point at the thing/model I'm thinking of. (I say “thing/model” because in the domain of social psychology especially, Strange Loops between a phenomenon and people's models of the phenomenon cause them to not be that cleanly separable. Is there a word for that that I'm missing?) I haven't gotten through much of it, but in the meantime, I've also just noticed that a recent second-level comment by Vaniver on their own “How alienated should you be?” post has a description that seems to come from a similar observation/interpretation of the world to the part of mine I'm trying to point at, and the main post goes into more detail. So that may help. I think there is a streak of variants of this idea in LW already, and it's possible that what I really want to do is go through the archives and find the best-aligned existing posts on the subject to link to…

Comment by rana-dexsin on Simulacra Levels and their Interactions · 2020-06-15T17:56:35.886Z

… that said, I also notice that in a universe in which this were intentional, it could also be a nice medium-level demonstration of Level 2 thinking: “If I name my post with something important-sounding in the title, more people will read it, and I will (probabilistically) gain points.”

Comment by rana-dexsin on Simulacra Levels and their Interactions · 2020-06-15T17:53:47.627Z

Minor presentational quibble:

The original intent of this post was to go on to analysis of other issues surrounding Covid-19. I was hoping to make clear what I meant by the more disputed statements in my Covid-19 model summary from two weeks ago, and also how and why I believe those dynamics occurred, and what dynamics one can expect going forward. But this post is long enough, so I’ve pushed that into future posts.

It'd be nice if the title were updated to match this, since this means it's no longer an accurate summary of the content.

Comment by Rana Dexsin on [deleted post] 2020-06-10T12:40:38.534Z

[This is condensed and informalized from a much longer and more explicit comment which I'm not sure would have been worth wading through, which still seemed hazy in important ways, and which seemed like it needed me to open more boxes than I have energy for right now. This one still seems hazy, but hopefully it wears it more on its sleeve. I should also declare up front that I have a bunch of weird emotional warping around this topic; hopefully I'm working around enough of it for this to still be useful.]

I think you're interpreting “this is not how human psychology works” in a noncentral way compared to how Bob Jacobs is likely to have meant it, or maybe asserting your examples of psychology working that way more as normative than as positive claims.

I have a completely different tack in mind: how do we know that the sort of mental maneuvers you describe don't become harmful in their aggregate effects when too many people do them, or do them without coordinating enough, or something along those lines?

I would like to point out the following:

The person literally could not process what she said because it was so far from what he was expecting, and she felt foolish for saying it. Her injury then swelled up, even though it had already been a while since the break.

“felt foolish” together with the consequences looks like a description of an alief-based and alief-affecting social feedback mechanism. How safe is it for individuals to unilaterally train themselves out of such mechanisms? Some detachment from it seems to be part of emotional maturity, but that's coupled with a lot of other mediating material. Further detachment seems to be part of various spiritual traditions—also coupled with even more mediating material. That's not very promising for any implied “there's no such thing as too much”.

More specifically, I would like to consider the possibility that many of the sort of false aliefs you're talking about act more like restraining bolts imposed by the social cohesion subunit of a human mind, “because” humans are not safe under amplification with regard to social values. (And notably, “hypercompetent individuals are good for society” is by no means a universal more.)

Or: individual coherence and social cohesion seem to be at odds often enough for that to be a way for “not-winning due to being too coherent” to sneak in through crazy backdoors in the environment, absent unbounded handling-of-detachment resources which are not in evidence and at some point may be unimplementable within human bounds.

[… this was going to be more-edited, but I've accidentally hit Submit, and I don't want to do too much frantic editing, so I've just cleaned up a few pieces. I think this is still just-about worth enough to leave up; I'll try to come back to it later if it's deemed worth talking about.]

Comment by rana-dexsin on Is ethics a memetic trap ? · 2020-04-23T19:30:56.022Z

Spelling note: “de Beauvoir” is written correctly the first time, but quite incorrectly the second time.

Comment by rana-dexsin on Holiday Pitch: Reflecting on Covid and Connection · 2020-04-23T16:23:37.937Z

I would be very worried about that last idea turning into a “performance” in terms of social instinct. In what other social contexts do we have twenty or thirty people actively doing something while any number of people watch? When they're on stage…

Comment by rana-dexsin on Open & Welcome Thread - March 2020 · 2020-03-17T01:29:24.718Z

At least a decade ago, excessive aversion to group identity was recognized as a potentially undesirable aspect of the rationality community as it existed then, so you're in (past? present?) good company on that front…

Comment by rana-dexsin on Slack Budget: 3 surprise problems per week · 2020-02-26T19:58:00.772Z

I think “tiered levels of reserves with increasing costs” is a more general description, from which both “frequently dip into short-term reserves during normal operations” (even biological cycles of sleeping and eating involve that) and “hesitate before dipping into longer-term reserves because it might signal either something else avoidably wrong or the need to change other plans to compensate” fall out at different points on the continuum.

Comment by rana-dexsin on How was your decade? · 2019-12-29T08:25:49.070Z

The person “I” “was” in 2009 is subjectively a grandparent to me now.

I believe I am now slowly progressing toward the level of maturity “I” “should have” had ten years ago. Most of the intervening time has consisted of a combination of stagnation and disastrous missteps.

I'm not sure there's much advice I'd give those past selves in practice, since most of the important bits I think they would have inevitably misinterpreted out of context. What I would most want to do is try to compress their necessary experiences so they'd require less overall time, leaving someone like me showing up in, say, 2014 instead; this would have given a very high positive delta in expected value of my life. There are certain people and situations I would have warned them not to wait on; there are certain elements of material life I would have warned them to attend to more thoroughly to avoid me being mired in their executive debt; and there are certain gestalts which I would like to transmit to them which I would have to think for a very long time to be able to put into a form that can be unpacked from words. But the specifics are all too high-context to be useful here.

If I tried to generalize the most useful bits, I would say something like: seek out ways in which your developmental environment didn't provide useful examples of things that people who are good at having the sort of life you want to have do, and then force yourself through any discomfort necessary to acquire new examples, with a heavy lean toward visceral examples and experience, as well as not being afraid to try synthesizing examples yourself (while avoiding clinging to these as authoritative just because you made them in a way comfortable to you). For people with more directly material desires than mine, this may read as applying more to superficial cultural traits, in which case I would still say, don't be afraid to assimilate if the people you're assimilating to are admirable to you. Contrariwise, make sure they aren't just expressing attributes you like on the surface, because that signal will get drowned out by posers being louder than achievers (used broadly, including in senses that don't read to the dominant culture as “achievement”); look for deeper evidence that they are good at following through.

The problem with all this is that upon rereading it, it sounds like pretty vapid Normal Life Advice, and this reminds me of how I currently think these sorts of personal retrospectives are best handled in the context of people who've already been following each other's life tracks, are loosely aligned, and have enough of a subconscious-layer (“system-1”, I suppose) emotional bond with each other to avoid most of the message getting lost to internal misalignment friction. Which would imply that if you don't have such people, having them can have a very high mutual long-term value, which in turn is just another phrasing of “make and keep close friends, dammit”, which is yet another piece of Normal Life Advice…

The doing is always the problem, isn't it?

Comment by rana-dexsin on AI Safety "Success Stories" · 2019-09-07T19:30:45.897Z · LW · GW

Aside: If you want all alliteration, “Pivotal Performer/Predictor” (depending on whether tool or oracle) and “Rapid Researcher” might be alternative names for types 2 and 5.

Comment by rana-dexsin on Diversify Your Friendship Portfolio · 2019-07-10T07:15:43.178Z · LW · GW

[Epistemic status: synthesis of observation, intuition, advice from other people]

I don't think the “rather than” in that second paragraph is workable. Strong ties usually grow out of weak ties, so if you don't have a broad buffer of weak ties (or if it goes away, or if you let it go away), your replenishment pool for strong ties also goes away. Even strong ties frequently don't last forever, so if you have only strong ties, you're in an unstable position in the long term. Sometimes strong ties can give you access to more weak ties, but sometimes they can't, and even when they can, you still have to step up to take advantage of this.

I also vaguely think the investment metaphor might go wrong places for reasons similar to what Dagon mentions, but I don't think I can unpack that now.

Comment by rana-dexsin on Welcome and Open Thread June 2019 · 2019-06-29T05:23:54.286Z · LW · GW

I'm looking for some clarification/feelings on the social norms here surrounding reporting typo/malapropism-like errors in posts. So far I've been sending a few by PM the way I'm used to doing on some other sites, as a way of limiting potential embarrassment and not cluttering the comments section with things that are easily fixed, but I notice some people giving that feedback in the comments instead. Is one or the other preferred?

I also have the impression that that sort of feedback is generally wanted here in the first place, due to precise, correct writing being considered virtuous, but I'm not confident of this. Is this basically right, or should I be holding back more?

Comment by rana-dexsin on Welcome and Open Thread June 2019 · 2019-06-29T05:20:13.353Z · LW · GW

How many of you are there, and what is your dosh-distimming schedule like these days?

Comment by rana-dexsin on Welcome and Open Thread June 2019 · 2019-06-03T11:41:01.207Z · LW · GW

What sort of better are you hoping to become?

Comment by rana-dexsin on FB/Discord Style Reacts · 2019-06-02T03:25:10.575Z · LW · GW

“Wariness, thoughtfully following, should think about this more.”

Comment by rana-dexsin on FB/Discord Style Reacts · 2019-06-02T02:55:32.362Z · LW · GW

I intuitively believe that anonymous reactions will be more likely to lead to gaming, becoming a way to snipe or brigade from the sidelines in a more emotionally impactful way than downvotes and upvotes. Being able to weight the reactions by status is important.

There is also less pushback possible against toxic anonymous uses of emoji-like reactions, because they often encode emotions less abstractly than votes do, and norms like “you should vote based on certain criteria that promote the purpose of the space” don't translate well to “you should emote based on certain criteria” (even though the latter does happen in human societies).

A place where I see private information as potentially beneficial, in a way that isn't reflected in any previous reaction systems I've seen, is actually “reacting user reveals reaction only to comment owner”. This would be to a PM response as a visible reaction would be to a comment response, and would serve a similar function when someone doesn't feel comfortable revealing a potentially low-status emotional reaction to the group nor being clear enough about it to raise the interaction stakes, but where such information especially in aggregate could still be useful. If a lot of people have a good or bad feeling about something, but few of them feel comfortable showing it in public, that can be very useful dynamics information.

(My previous comment's caveats about how I'm not sure how well any of this works in a comment-tree situation apply.)

Comment by rana-dexsin on FB/Discord Style Reacts · 2019-06-02T02:46:39.244Z · LW · GW


Comment by rana-dexsin on FB/Discord Style Reacts · 2019-06-02T02:36:05.820Z · LW · GW

My experience in other circles with Slack and Discord is that the niche of emoji reactions is primarily non-interrupting room-sensing (there are also sillier uses in casual social contexts, but they don't seem relevant here). I don't feel any pressure to specifically have read something, and I haven't observed people reading anything into failure to provide a reaction. The rare exception to the latter is when there's clearly an active conversation going on that someone's already clearly been active in, which can be handled by explicitly signaling departure, which was a norm in those circumstances anyway.

Non-interrupting room-sensing in a fast-flowing channel environment has generally struck me as beneficial. Being able to quickly find the topic-flow of the current conversation is important, and reactions do not have to be scanned for topic introductions. Reactions encode leafness: you can't reply to a reaction easily, which also means giving a reaction cannot induce social pressure to reply to it. They encode weaker ties to the individual: people with the same reaction are stacked together, and it takes an extra effort to look at the list of reacting users. Differentially, reactions can also signal level of involvement: someone “conversing” in only reactions may not be up for thinking about the conversation hard enough to produce text responses, but is able to listen and give base emotional feedback (which seems to be the most relevant to the proposed uses here). It serves a similar function to scanning people's facial expressions in a physical meeting room.

I'm very unclear on how these patterns would play out in a longer-form, more delay-tolerant environment like a comment tree. Some of the room-sensing interpretation makes less sense the less the timescale of the reactions corresponds to unconscious-emotion synchronization; there's a lot of lost flow context.

Comment by rana-dexsin on Feature Request: Self-imposed Time Restrictions · 2019-05-21T02:02:55.303Z · LW · GW

Since this seems to be an akrasia/executive-related problem, I suspect just having links to possible addons to use (and ideally, example configurations) easily accessible could be disproportionately ameliorative compared to its implementation cost, both via the reminder that compulsive browsing and mitigations for it both exist, and via the social signaling that this is an approved way of browsing that won't make you weird. Though I'm not sure about the possible noise it creates, depending on what easy options you have for placement/hiding.

Comment by rana-dexsin on Why I've started using NoScript · 2019-05-18T20:36:02.369Z · LW · GW

I think it depends a lot on how you frame it, and analogies work much less well than people expect because of ways the Internet is very different from previous environments.

The intuitive social norms surrounding the store clerk involve the clerk having socially normal memory performance and a social conscience surrounding how they use that memory. What if the store clerk were writing down everything you did in the store, including every time you picked your nose, your exact walking path, every single item you looked at and put back, and what you were muttering to your shopping companion? What if that list were quickly sent off to an office across the country, where they would try to figure out any number of things like “which people look suspicious” and “where to display which items”? What if the clerk followed you around the entire store with their notepad when it's a giant box store with many departments? For the cross-site case, imagine that the office also receives detailed notes about you from the clerks at just about every other place you go, because those ones wound up with more profitable store layouts and lower theft rates and the other shops gradually went out of business.

There are other analogy framings still; consider one with security cameras instead, and whether it feels different, and what different assumptions might be in play. But in all of those cases, relying on misplaced assumptions about humanlike capability, motivation, and agency is something to be wary of. (Fortunately, I think a lot of people here should be familiar with that one!)

Comment by rana-dexsin on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-04-25T17:49:15.447Z · LW · GW

Extending this: trust problems could impede the flow of information in the first place in such a way that the introspective access stops being an amplifier across a system boundary. An AI can expose some code, but an AI that trusts other AIs to expose their code honestly, rather than choosing what code to show based on what will make the conversation partner do something they want, seems like it'd be exploitable; and an AI that always exposes its own code honestly may also be exploitable.

Human societies do “creating enclaves of higher trust within a surrounding environment of lower trust” a lot, and it does improve coordination when it works right. I don't know which way this would swing for super-coordination among AIs.

Comment by rana-dexsin on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-04-25T17:41:09.673Z · LW · GW

But jointly constructing a successor with compromise values and then giving them the reins is something humans can sort of do via parenting, there's just more fuzziness and randomness and drift involved, no? That is, assuming human children take a bunch of the structure of their mindsets from what their parents teach them, which certainly seems to be the case on the face of it.

Comment by rana-dexsin on Alignment Newsletter One Year Retrospective · 2019-04-18T20:21:16.553Z · LW · GW

Speculative followup: seeing a few other people say similar things here and contrasting it with what seems to have been implied in the retrospective itself makes me guess there's a seriousness split between LW and email "subscribers". Does the former have passersby dominating the reader set (especially since it'll be presented to people who are on LW for some other reason), whereas anyone who cares more deeply and specifically will primarily consume the newsletter by email?

Comment by rana-dexsin on Alignment Newsletter One Year Retrospective · 2019-04-17T17:42:12.855Z · LW · GW

I browse this newsletter occasionally via LW; I am not subscribed by email. I am not so far seriously involved in AI research, and I don't wind up understanding most of it in detail, but I have a longer-term interest in such issues, and I want to keep a fraction of a bird's eye on the state of the field if possible, so that if I start in on deeper such activities a few years from now, I can re-skim the archives and try to catch up.

Comment by rana-dexsin on Degrees of Freedom · 2019-04-04T15:15:23.258Z · LW · GW

But how do the two things in the last paragraph mix if I have (1) a preference for others to judge me well, (2) a belief that others will judge me well if they believe I am doing what they believe is optimal for what they think my beliefs and preferences should be, and (3) a belief that the extrapolated cost of convincing them that I am doing such a thing without actually doing the thing is so incredibly high as to make plans involving that almost never show up in decision-making processes?

Put another way, it seems like the two definitions can collapse in a sufficiently low-privacy conformist environment—which can be unified with the emotion of “freedom”, but at least in most Western contexts, that seems infrequent. The impression I get is that most people obvious-patch around this by trying to extrapolate “what a version of me completely removed from peer pressures would prefer” and using that as the preference baseline, but I both think and feel that that's incoherent. (Further meta, I also get the impression that many people don't feel that it's incoherent even if they would agree cognitively that it is, and that that leads to a lot of worldmodel divergence down the line.)

(I realize this might be a bit off-track from its parent comment, but I think it's relevant to the broader discussion.)

Comment by rana-dexsin on Renaming "Frontpage" · 2019-03-11T10:49:03.126Z · LW · GW

“Default” and “Common” feel wrong, but perhaps “Core” has a place somewhere? “This is what we're here for; the rest is in support of it.”

Comment by rana-dexsin on The Pavlov Strategy · 2018-12-26T03:22:36.428Z · LW · GW

Is the “Chaos” part meant to be a link? It doesn't seem to go anywhere.

Comment by rana-dexsin on The Bat and Ball Problem Revisited · 2018-12-13T23:10:26.798Z · LW · GW

The bat and ball problem I answer in what I'll call one conscious time-step with the correct “five cents”, but it happens too fast for me to verify how (beyond the usual trouble with verifying internal reflection). I would speculate, in decreasing order of intuitive probability, that in order to get the answer, either (a) I've seen an exactly analogous “trick” problem before and am pattern-matching on that or (b) I'm doing the algebra quickly using my seemingly well-developed mathematical intuition. I can also imagine (c) I'm leaping to the “wrong” answer, then trying to verify it, noticing it's wrong, and correcting it, all in the same subconscious flash, but that feels off. Imagining the “ten cents” answer doesn't actually feel compelling; it just feels wrong. (It feels like a similar emotion to noticing I've gotten the wrong amount of change, in fact.)

The widgets problem I do a noticeable double-take on, but it's rapidly corrected within one conscious time-step; the “100” is a momentary flicker before my brain settles on the correct answer. Imagining “100” afterwards feels wrong, but less immediately so than “ten cents” did. It feels like I have a bias there toward answering “how many widgets can you produce in a fixed time” questions, so I might have an echo of the misreading “how many widgets can 100 machines produce in [assumed to be the same amount of time as before, since no contrary time value is presented to override this]”.

The lily pads question takes me a conscious time-step longer to answer than either of the other two; the initial flash is “inconclusive”, and then I see myself rechecking the part where the quantity doubles every step before answering “47”. (I notice I didn't remember that the steps were days, only remembering that there was a time unit; I don't know if that's relevant.) Imagining “24” afterwards feels some intermediate level of wrong between “ten cents” and “100”; my mental graph of the growth curve puts the expected value 24 at “way too low” intuitively before I can compute the actual exponent.
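For reference, the three answers discussed above (“five cents”, “5 minutes” rather than 100, and “47” days) check out with a few lines of arithmetic; this is just an illustrative verification, not part of the original comment:

```python
# Quick arithmetic check of the three CRT answers discussed above.

# 1. Bat and ball: bat + ball = 1.10 and bat = ball + 1.00,
#    so 2*ball + 1.00 = 1.10 and the ball costs five cents.
ball = (1.10 - 1.00) / 2

# 2. Widgets: 5 machines make 5 widgets in 5 minutes, so each machine
#    makes one widget per 5 minutes; 100 machines therefore make
#    100 widgets in those same 5 minutes (not in 100 minutes).
per_machine_minutes = 5 * 5 / 5          # minutes per widget per machine
minutes_for_100 = per_machine_minutes * 100 / 100

# 3. Lily pads: the patch doubles daily and covers the lake on day 48,
#    so it covered half the lake one doubling earlier, on day 47.
half_coverage_day = 48 - 1
```

The intuitive-wrong answers (“ten cents”, “100 minutes”, “24 days”) each come from pattern-matching the surface numbers instead of the stated relations.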

Comment by rana-dexsin on What is "Social Reality?" · 2018-12-09T04:38:29.637Z · LW · GW

I wonder if Chris_Leong was trying to deliver a meta-joke-based answer by pointing out that any consensus definition of “social reality” is itself a part of social reality.

Comment by rana-dexsin on Anyone use the "read time" on Post Items? · 2018-12-04T00:43:21.230Z · LW · GW

Thanks for clarifying. In that case, I don't count that as a gesture for word count in the sense that I was hoping, because it's far too heavy and requires flow-breaking motion tracking of an unpredictable expand/collapse.

Comment by rana-dexsin on Anyone use the "read time" on Post Items? · 2018-12-02T05:52:45.641Z · LW · GW

I use it as a proxy, but I'd like word count better. T3t implied that there's already a gesture for word count, but I don't know what it is, so maybe that's also not discoverable enough as it stands.