Posts

Comments

Comment by rana-dexsin on AI Safety "Success Stories" · 2019-09-07T19:30:45.897Z · score: 2 (2 votes) · LW · GW

Aside: If you want all alliteration, “Pivotal Performer/Predictor” (depending on whether tool or oracle) and “Rapid Researcher” might be alternative names for types 2 and 5.

Comment by rana-dexsin on Diversify Your Friendship Portfolio · 2019-07-10T07:15:43.178Z · score: 4 (4 votes) · LW · GW

[Epistemic status: synthesis of observation, intuition, advice from other people]

I don't think the “rather than” in that second paragraph is workable. Strong ties usually grow out of weak ties, so if you don't have a broad buffer of weak ties (or if it goes away, or if you let it go away), your replenishment pool for strong ties also goes away. Even strong ties frequently don't last forever, so if you have only strong ties, you're in an unstable position in the long term. Sometimes strong ties can give you access to more weak ties, but sometimes they can't, and even when they can, you still have to step up to take advantage of this.

I also vaguely think the investment metaphor might go wrong places for reasons similar to what Dagon mentions, but I don't think I can unpack that now.

Comment by rana-dexsin on Welcome and Open Thread June 2019 · 2019-06-29T05:23:54.286Z · score: 3 (2 votes) · LW · GW

I'm looking for some clarification/feelings on the social norms here surrounding reporting typo/malapropism-like errors in posts. So far I've been sending a few by PM the way I'm used to doing on some other sites, as a way of limiting potential embarrassment and not cluttering the comments section with things that are easily fixed, but I notice some people giving that feedback in the comments instead. Is one or the other preferred?

I also have the impression that that sort of feedback is generally wanted here in the first place, due to precise, correct writing being considered virtuous, but I'm not confident of this. Is this basically right, or should I be holding back more?

Comment by rana-dexsin on Welcome and Open Thread June 2019 · 2019-06-29T05:20:13.353Z · score: 3 (2 votes) · LW · GW

How many of you are there, and what is your dosh-distimming schedule like these days?

Comment by rana-dexsin on Welcome and Open Thread June 2019 · 2019-06-03T11:41:01.207Z · score: 2 (2 votes) · LW · GW

What sort of better are you hoping to become?

Comment by rana-dexsin on FB/Discord Style Reacts · 2019-06-02T03:25:10.575Z · score: 1 (1 votes) · LW · GW

“Wariness, thoughtfully following, should think about this more.”

Comment by rana-dexsin on FB/Discord Style Reacts · 2019-06-02T02:55:32.362Z · score: 5 (3 votes) · LW · GW

I intuitively believe that anonymous reactions will be more likely to lead to gaming, becoming a way to snipe or brigade from the sidelines in a more emotionally impactful way than downvotes and upvotes. Being able to weight the reactions by status is important.

There is also less room to push back against toxic anonymous uses of emoji-like reactions, because they often encode emotions less abstractly than votes do, and norms like “you should vote based on certain criteria that promote the purpose of the space” don't translate well to “you should emote based on certain criteria” (even though the latter does happen in human societies).

A place where I see private information as potentially beneficial, in a way that isn't reflected in any previous reaction systems I've seen, is actually “reacting user reveals reaction only to comment owner”. This would be to a PM response as a visible reaction would be to a comment response, and would serve a similar function when someone doesn't feel comfortable either revealing a potentially low-status emotional reaction to the group or being explicit enough about it to raise the interaction stakes, but where such information, especially in aggregate, could still be useful. If a lot of people have a good or bad feeling about something, but few of them feel comfortable showing it in public, that aggregate can be very useful information about group dynamics.

(My previous comment's caveats about how I'm not sure how well any of this works in a comment-tree situation apply.)

Comment by rana-dexsin on FB/Discord Style Reacts · 2019-06-02T02:46:39.244Z · score: 2 (2 votes) · LW · GW

“Agree.”

Comment by rana-dexsin on FB/Discord Style Reacts · 2019-06-02T02:36:05.820Z · score: 15 (5 votes) · LW · GW

My experience in other circles with Slack and Discord is that the niche of emoji reactions is primarily non-interrupting room-sensing (there are also sillier uses in casual social contexts, but they don't seem relevant here). I don't feel any pressure to specifically have read something, and I haven't observed people reading anything into failure to provide a reaction. The rare exception to the latter is when someone has clearly been active in an ongoing conversation; that can be handled by explicitly signaling departure, which was a norm in those circumstances anyway.

Non-interrupting room-sensing in a fast-flowing channel environment has generally struck me as beneficial. Being able to quickly find the topic-flow of the current conversation is important, and reactions do not have to be scanned for topic introductions. Reactions encode leafness: you can't reply to a reaction easily, which also means giving a reaction cannot induce social pressure to reply to it. They encode weaker ties to the individual: people with the same reaction are stacked together, and it takes an extra effort to look at the list of reacting users. Differentially, reactions can also signal level of involvement: someone “conversing” in only reactions may not be up for thinking about the conversation hard enough to produce text responses, but is able to listen and give base emotional feedback (which seems to be the most relevant to the proposed uses here). It serves a similar function to scanning people's facial expressions in a physical meeting room.

I'm very unclear on how these patterns would play out in a longer-form, more delay-tolerant environment like a comment tree. Some of the room-sensing interpretation makes less sense the less the timescale of the reactions corresponds to unconscious-emotion synchronization; there's a lot of lost flow context.

Comment by rana-dexsin on Feature Request: Self-imposed Time Restrictions · 2019-05-21T02:02:55.303Z · score: 3 (2 votes) · LW · GW

Since this seems to be an akrasia/executive-related problem, I suspect just having links to possible addons to use (and ideally, example configurations) easily accessible could be disproportionately ameliorative compared to its implementation cost, both via the reminder that compulsive browsing exists and can be mitigated, and via the social signaling that this is an approved way of browsing that won't make you weird. Though I'm not sure about the possible noise it creates, depending on what easy options you have for placement/hiding.

Comment by rana-dexsin on Why I've started using NoScript · 2019-05-18T20:36:02.369Z · score: 6 (4 votes) · LW · GW

I think it depends a lot on how you frame it, and analogies work much less well than people expect because of ways the Internet is very different from previous environments.

The intuitive social norms surrounding the store clerk involve the clerk having socially normal memory performance and a social conscience surrounding how they use that memory. What if the store clerk were writing down everything you did in the store, including every time you picked your nose, your exact walking path, every single item you looked at and put back, and what you were muttering to your shopping companion? What if that list were quickly sent off to an office across the country, where they would try to figure out any number of things like “which people look suspicious” and “where to display which items”? What if the clerk followed you around the entire store with their notepad when it's a giant box store with many departments? For the cross-site case, imagine that the office also receives detailed notes about you from the clerks at just about every other place you go, because those ones wound up with more profitable store layouts and lower theft rates and the other shops gradually went out of business.

There are other analogy framings still; consider one with security cameras instead, and whether it feels different, and what different assumptions might be in play. But in all of those cases, relying on misplaced assumptions about humanlike capability, motivation, and agency is something to be wary of. (Fortunately, I think a lot of people here should be familiar with that one!)

Comment by rana-dexsin on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-04-25T17:49:15.447Z · score: 7 (4 votes) · LW · GW

Extending this: trust problems could impede the flow of information in the first place in such a way that the introspective access stops being an amplifier across a system boundary. An AI can expose some code, but an AI that trusts other AIs to expose their code faithfully, rather than choosing what code to show based on what will make the conversation partner do something they want, seems like it'd be exploitable; and an AI that always exposes its own code faithfully may also be exploitable.

Human societies do “creating enclaves of higher trust within a surrounding environment of lower trust” a lot, and it does improve coordination when it works right. I don't know which way this would swing for super-coordination among AIs.

Comment by rana-dexsin on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-04-25T17:41:09.673Z · score: 7 (3 votes) · LW · GW

But jointly constructing a successor with compromise values and then giving them the reins is something humans can sort of do via parenting, there's just more fuzziness and randomness and drift involved, no? That is, assuming human children take a bunch of the structure of their mindsets from what their parents teach them, which certainly seems to be the case on the face of it.

Comment by rana-dexsin on Alignment Newsletter One Year Retrospective · 2019-04-18T20:21:16.553Z · score: 1 (1 votes) · LW · GW

Speculative followup: seeing a few other people say similar things here and contrasting it with what seems to have been implied in the retrospective itself makes me guess there's a seriousness split between LW and email "subscribers". Does the former have passersby dominating the reader set (especially since it'll be presented to people who are on LW for some other reason), whereas anyone who cares more deeply and specifically will primarily consume the newsletter by email?

Comment by rana-dexsin on Alignment Newsletter One Year Retrospective · 2019-04-17T17:42:12.855Z · score: 6 (4 votes) · LW · GW

I browse this newsletter occasionally via LW; I am not subscribed by email. I am not so far seriously involved in AI research, and I don't wind up understanding most of it in detail, but I have a longer-term interest in such issues, and I want to keep a fraction of a bird's eye on the state of the field if possible, so that if I start in on deeper such activities a few years from now, I can re-skim the archives and try to catch up.

Comment by rana-dexsin on Degrees of Freedom · 2019-04-04T15:15:23.258Z · score: 3 (2 votes) · LW · GW

But how do the two things in the last paragraph mix if I have (1) a preference for others to judge me well, (2) a belief that others will judge me well if they believe I am doing what they believe is optimal for what they think my beliefs and preferences should be, and (3) a belief that the extrapolated cost of convincing them that I am doing such a thing without actually doing the thing is so incredibly high as to make plans involving that almost never show up in decision-making processes?

Put another way, it seems like the two definitions can collapse in a sufficiently low-privacy conformist environment—which can be unified with the emotion of “freedom”, but at least in most Western contexts, that seems infrequent. The impression I get is that most people obvious-patch around this by trying to extrapolate “what a version of me completely removed from peer pressures would prefer” and using that as the preference baseline, but I both think and feel that that's incoherent. (Further meta, I also get the impression that many people don't feel that it's incoherent even if they would agree cognitively that it is, and that that leads to a lot of worldmodel divergence down the line.)

(I realize this might be a bit off-track from its parent comment, but I think it's relevant to the broader discussion.)

Comment by rana-dexsin on Renaming "Frontpage" · 2019-03-11T10:49:03.126Z · score: 1 (1 votes) · LW · GW

“Default” and “Common” feel wrong, but perhaps “Core” has a place somewhere? “This is what we're here for; the rest is in support of it.”

Comment by rana-dexsin on The Pavlov Strategy · 2018-12-26T03:22:36.428Z · score: 1 (1 votes) · LW · GW

Is the “Chaos” part meant to be a link? It doesn't seem to go anywhere.

Comment by rana-dexsin on The Bat and Ball Problem Revisited · 2018-12-13T23:10:26.798Z · score: 1 (1 votes) · LW · GW

The bat and ball problem I answer in what I'll call one conscious time-step with the correct “five cents”, but it happens too fast for me to verify how (beyond the usual trouble with verifying internal reflection). I would speculate, in decreasing order of intuitive probability, that in order to get the answer, either (a) I've seen an exactly analogous “trick” problem before and am pattern-matching on that or (b) I'm doing the algebra quickly using my seemingly well-developed mathematical intuition. I can also imagine (c) I'm leaping to the “wrong” answer, then trying to verify it, noticing it's wrong, and correcting it, all in the same subconscious flash, but that feels off. Imagining the “ten cents” answer doesn't actually feel compelling; it just feels wrong. (It feels like a similar emotion to noticing I've gotten the wrong amount of change, in fact.)

The widgets problem I do a noticeable double-take on, but it's rapidly corrected within one conscious time-step; the “100” is a momentary flicker before my brain settles on the correct answer. Imagining “100” afterwards feels wrong, but less immediately so than “ten cents” did. It feels like I have a bias there toward answering “how many widgets can you produce in a fixed time” questions, so I might have an echo of the misreading “how many widgets can 100 machines produce in [assumed to be the same amount of time as before, since no contrary time value is presented to override this]”.

The lily pads question takes me a conscious time-step longer to answer than either of the other two; the initial flash is “inconclusive”, and then I see myself rechecking the part where the quantity doubles every step before answering “47”. (I notice I didn't remember that the steps were days, only remembering that there was a time unit; I don't know if that's relevant.) Imagining “24” afterwards feels some intermediate level of wrong between “ten cents” and “100”; my mental graph of the growth curve puts the expected value 24 at “way too low” intuitively before I can compute the actual exponent.
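For concreteness, the correct answers to the three problems discussed above can be checked with a few lines of arithmetic (a minimal sketch; the variable names are mine):

```python
# Bat and ball: together they cost $1.10, and the bat costs $1.00
# more than the ball. So b + (b + 1.00) = 1.10, giving 2b = 0.10.
ball = (1.10 - 1.00) / 2
assert abs(ball - 0.05) < 1e-9  # five cents, not ten

# Widgets: 5 machines make 5 widgets in 5 minutes, so each machine
# makes one widget per 5 minutes. 100 machines therefore make
# 100 widgets in those same 5 minutes, not 100.
minutes = 100 / (100 * (1 / 5))  # widgets / (machines * widgets-per-machine-minute)
assert minutes == 5

# Lily pads: the patch doubles daily and covers the lake on day 48,
# so it was half-covered exactly one doubling earlier.
half_covered_day = 48 - 1
assert half_covered_day == 47
```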

Comment by rana-dexsin on What is "Social Reality?" · 2018-12-09T04:38:29.637Z · score: 6 (5 votes) · LW · GW

I wonder if Chris_Leong was trying to deliver a meta-joke-based answer by pointing out that any consensus definition of “social reality” is itself a part of social reality.

Comment by rana-dexsin on Anyone use the "read time" on Post Items? · 2018-12-04T00:43:21.230Z · score: 2 (2 votes) · LW · GW

Thanks for clarifying. In that case, I don't count that as a gesture for word count in the sense that I was hoping, because it's far too heavy and requires flow-breaking motion tracking of an unpredictable expand/collapse.

Comment by rana-dexsin on Anyone use the "read time" on Post Items? · 2018-12-02T05:52:45.641Z · score: 8 (3 votes) · LW · GW

I use it as a proxy, but I'd like word count better. T3t implied that there's already a gesture for word count, but I don't know what it is, so perhaps it's not discoverable enough as it stands.

Comment by rana-dexsin on Open Thread November 2018 · 2018-11-27T07:42:07.013Z · score: 2 (2 votes) · LW · GW

It would be appreciated (and pleasingly symmetrical). Thanks for the response.

Comment by rana-dexsin on Open Thread November 2018 · 2018-11-26T08:44:30.777Z · score: 3 (3 votes) · LW · GW

How do I report a top-level post to the moderators? I see a kebab menu for comments, but I don't see anything like that for top-level posts, neither on the front page nor on the post page. The specific situation is that there currently seem to be multiple spam posts in the “all posts” queue, but I'd also like to know how to do this in general for future reference.

Comment by rana-dexsin on Embedded Agents · 2018-10-30T06:02:01.975Z · score: 8 (3 votes) · LW · GW

This sounds similar in effect to what philosophy of mind calls “embodied cognition”, but it takes a more abstract tack. Is there a recognized background link between the two ideas already? Is that a useful idea, regardless of whether it already exists, or am I off track?

Comment by rana-dexsin on An Ontology of Systemic Failures: Dragons, Bullshit Mountain, and the Cloud of Doom · 2018-09-13T13:56:44.727Z · score: 9 (2 votes) · LW · GW

If I'm not the first, was this posted before? I don't see the same suggestion elsewhere in the comments, at least…

And the part I'm worried about above is that the poetic view will lead to conflationary thinking about the categories along the way, rendering the model a lot less useful; sure, a dragon can cause multiple symptoms, but that's not the central image that comes to mind (at least to me), and trying to get a grip on something like this as an intuition pump gets fragile if you lean into what sounds compelling.

Comment by rana-dexsin on An Ontology of Systemic Failures: Dragons, Bullshit Mountain, and the Cloud of Doom · 2018-09-11T14:47:56.755Z · score: 5 (3 votes) · LW · GW

I like the basic idea of the classification. I suggest “Hydra” instead of “Dragon”, since you specifically mention multiple seemingly independent heads/symptoms. If I were to only read the comments, I would think a Dragon was just a particularly large or difficult Bug; I don't know if that means people are letting the definition slip in that direction.

I think I need to chew on this more and think about how much usefully breaks down along these lines. As I read this, you're describing a 2×2 matrix of bimodal traits (few vs. many causes, few vs. many effects) and correlating each cell with good strategies for dealing with problems that have those traits. Is that accurate? But there's also a very distinct feeling that each of these categories evokes (especially given the names), and I'm not as sure that the feeling is correlated with the purported criteria; I have an intuitive guess that it's more correlated with perceptions of agency over problems, which may have only a skewed relation to the “number” of causes and effects (insofar as that's meaningful in the first place).