Posts

Manifold “exploring real cash prizes” 2024-04-23T21:07:13.111Z
Do you consider your current, non-superhuman self aligned with “humanity” already? 2022-06-25T04:15:08.088Z
[Scribble] Bad Reasons Behind Different Systems and a Story with No Good Moral 2022-05-09T05:21:53.221Z
Does the “ugh field” phenomenon sometimes occur strongly enough to affect immediate sensory processing? 2022-05-03T13:42:46.225Z
Rana Dexsin's Shortform 2021-06-28T03:55:40.930Z

Comments

Comment by Rana Dexsin on What's with all the bans recently? · 2024-04-06T16:47:44.942Z · LW · GW

feature proposal: when someone is rate limited, they can still write comments. their comments are auto-delayed until the next time they'd be unratelimited. they can queue up to k comments before it behaves the same as it does now. I suggest k be 1. I expect this would reduce the emotional banneyness-feeling by around 10%.

If (as I suspect is the case) one of the in-practice purposes or benefits of a limit is to make it harder for an escalation spiral to continue via comments written in a heated emotional state, delaying the reading destroys that effect compared to delaying the writing. If the limited user is in a calm state and believes it's worth it to push back, they can save their own draft elsewhere and set their own timer.

Comment by Rana Dexsin on Partial value takeover without world takeover · 2024-04-05T13:48:54.986Z · LW · GW

![[Pasted image 20240404223131.png]]

Oops? Would love to see the actual image here.

Comment by Rana Dexsin on AI #58: Stargate AGI · 2024-04-04T15:49:17.102Z · LW · GW

Not LLMs yet, but McDonalds is rolling out automated order kiosks,

That Delish article is from 2018! (And tangentially, I've been using those as my preferred way to order things at McDonald's for a long while now, mostly because I find digital input far crisper than human contact through voice in a noisy environment.)

The subsequent “Ingrid Jacques” link goes to a separate tweet that links to the Delish article, but it's not the Ingrid Jacques tweet, which itself is from 2024. I think the “tyson brody” tweet it links to instead might be a reply to the Ingrid Jacques one, but if so, that's hidden to me, probably because I'm not logged in.

Comment by Rana Dexsin on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-04T00:45:16.149Z · LW · GW

My (optimistic?) expectation is that it ends up (long run) a bit like baking.

Home/local music performance also transitioned into more of a niche with the rise of recorded music as a default, compared to when live performance was the only option, didn't it?

At that scale its more "celebrity-following", but that is also something the AI would not have - I don't know how big a deal that is.

While I doubt it will be the same thing for a transformer-era generative model due to the balance of workflow and results (and the resultant social linkages) being so different, it seems worth pointing out as a nearby reality anchor that virtual singers have had celebrity followings for a while, with Hatsune Miku and the other Crypton Future Media characters being the most popular where I tread. In fact, I fuzzily remember a song said to be about a Vocaloid producer feeling like their own name was being neglected by listeners in favor of the virtual singer's (along the lines of mentally categorizing the songs as “Miku songs” rather than “(producer) songs”), but I can't seem to source the interpretation now to verify my memory; it might've been “Unknown Mother-Goose”.

Comment by Rana Dexsin on AI #55: Keep Clauding Along · 2024-03-15T17:40:50.359Z · LW · GW

I don't see it as an elephant overall, but I can see how you could push it to be the head of one: the head is facing to the right, the rightmost curve outlines the trunk, the upper left part of the main ‘object’ is an ear, and some of the vertical white shapes in the bottom left quadrant can be interpreted as tusks.

Comment by Rana Dexsin on The Gemini Incident · 2024-02-23T09:44:15.582Z · LW · GW

While I appreciate the attempt to bring in additional viewpoints, the “Sign-in Required” is currently an obstacle.

Comment by Rana Dexsin on The Altman Technocracy · 2024-02-16T13:35:36.711Z · LW · GW

I claim that very few people actually understand what they are using and what effects it has on their mind.

How would you compare your generative-AI focus to the “toddlers being given iPads” transition, which seems to have already happened?

Comment by Rana Dexsin on Rana Dexsin's Shortform · 2024-02-12T21:48:17.859Z · LW · GW

This SMBC from a few years ago including an “entropic libertarian” probably isn't pointing at what people call “e/acc”… right? My immediate impression is that it rhymes though. I'm not sure how to feel about that.

Comment by Rana Dexsin on Status-oriented spending · 2024-01-26T06:23:34.192Z · LW · GW

The first sentence here is very confusing and I think inverts a comparison—I think you mean “would make the world enough worse off”.

Comment by Rana Dexsin on Status-oriented spending · 2024-01-25T08:22:25.388Z · LW · GW

The first somewhat contrary thing that comes to mind here is whether visible spending that looks like a status grab or is class-dissonant would also impact your social capital in terms of being able to source (loaned or gifted) money from your networks in case of a crunch or shock. If your friends will feel “well I sure would've liked to have X, but I was the ‘responsible’ one and you weren't, so now I'm not going to put money in when you're down” and that's what you rely on as a safety net, then maybe you do need to pay attention to that kind of self-policing. If you're reliant on less personal sources of credit, insurance, etc. or if your financially relevant social groups are themselves receptive to your ideas on not caring as much about class policing, then the self-policing can be mainly self-sabotage like you say.

Comment by Rana Dexsin on Is principled mass-outreach possible, for AGI X-risk? · 2024-01-22T00:06:53.166Z · LW · GW

Facepalm at self. You're right, of course. I think I confused myself about the overall context after reading the end-note link there and went off at an angle.

Now to leave the comment up for history and in case it contains some useful parts still, while simultaneously thanking the site designers for letting me un-upvote myself. 😛

Comment by Rana Dexsin on Is principled mass-outreach possible, for AGI X-risk? · 2024-01-21T20:51:55.959Z · LW · GW

(Epistemic status: mostly observation through heavy fog of war, partly speculation)

From your previous comment:

The "educated savvy left-leaning online person" consensus (as far as I can gather) is something like: "AI art is bad, the real danger is capitalism, and the extinction danger is some kind of fake regulatory-capture hype techbro thing which (if we even bother to look at the LW/EA spaces at all) is adjacent to racists and cryptobros".

So clearly you're aware of / agree with this being a substantial chunk of what's happening in the “mass social media” space, in which case…

Given this, plus anchoring bias, you should expect and be very paranoid about the "first thing people hear = sets the conversation" thing.

… why is this not just “お前はもう死んでいる” (“you are already dead”; that is, you are already cut off from this strategy due to things that happened before you could react) right out of the gate, at least for that (vocal, seemingly influential) subpopulation?

What I observe in many of my less-technical circles (which roughly match the above description) is that as soon as the first word exits your virtual mouth that implies that there's any substance to any underlying technology itself, good or bad or worth giving any thought to at all (and that's what gets you on the metatextual level, the frame-clinging plus some other stuff I want to gesture at but am not sure whether that's safe to do right now), beyond “mass stealing to create a class divide”, you instantly lose. At best everything you say gets interpreted as “so the flood of theft and soulless shit is going to get even worse” (and they do seem to be effectively running on a souls-based model of anticipation even if their overt dialogue isn't theistic, which is part of what creates a big inferential divide to start with). But you don't seem to be suggesting leaning into that spin, so I can't square what you're suggesting with what seem to be shared observations. Also, the less loud and angry people are still strongly focused on “AI being given responsibility it's not ready for”, so as soon as you hint at exceeding human intelligence, you lose (and you don't then get the chance to say “no no, I mean in the future”, you lose before any further words are processed).

Now, I do separately observe a subset of more normie-feeling/working-class people who don't loudly profess the above lines and are willing to e.g. openly use some generative-model art here and there in a way that suggests they don't have the same loud emotions about the current AI-technology explosion. I'm not as sure what main challenges we would run into with that crowd, and maybe that's whom you mean to target. I still think getting taken seriously would be tricky, but they might laugh at you more mirthfully instead of more derisively, and low-key repetition might have an effect. I do kind of worry that even if you start succeeding there, then the x-risk argument can get conflated with the easier-to-spread “art theft”, “laundering bias”, etc. models (either accidentally, or deliberately by adversaries) and then this second crowd maybe gets partly converted to that, partly starts rejecting you for looking too similar to that, and partly gets driven underground by other people protesting their benefiting from the current-day mundane-utility aspect.

I also observe a subset of business-oriented people who want the mundane utility a lot but often especially want to be on the hype train for capital-access or marketing reasons, or at least want to keep their friends and business associates who want that. I think they're kind of constrained in what they can openly say or do and might be receptive to strategic thinking about x-risk but ultimately dead ends for acting on it—but maybe that last part can be changed with strategic shadow consensus building, which is less like mass communication and where you might have more leeway and initial trust to work with. Obviously, if someone is already doing that, we don't necessarily see it posted on LW. There's probably some useful inferences to be drawn from events like the OpenAI board shakeup here, but I don't know what they are right now.

FWIW, I have an underlying intuition here that's something like “if you're going to go Dark Arts, then go big or go home”, but I don't really know how to operationalize that in detail and am generally confused and sad. In general, I think people who have things like “logical connectives are relevant to the content of the text” threaded through enough of their mindset tend to fall into a trap analogous to the “Average Familiarity” xkcd or to Hofstadter's Law when they try truly-mass communication unless they're willing to wrench things around in what are often very painful ways to them, and (per the analogies) that this happens even when they're specifically trying to correct for it.

Comment by Rana Dexsin on TurnTrout's shortform feed · 2024-01-10T16:31:28.884Z · LW · GW

You're right; I'd forgotten about the indicator. That makes sense and that is interesting then, huh.

Comment by Rana Dexsin on TurnTrout's shortform feed · 2024-01-05T02:56:15.206Z · LW · GW

and I'm faintly surprised it knows so much about it

GPT-4 via the API, or via ChatGPT Plus? Didn't they recently introduce browsing to the latter so that it can fetch Web sources about otherwise unknown topics?

Comment by Rana Dexsin on AI Girlfriends Won't Matter Much · 2023-12-24T00:58:53.334Z · LW · GW

The porno latent space has been explored so thoroughly by human creators that adding AI to the mix doesn't change much.

Something about this feels off to me. One of the salient possibilities in terms of technology affecting romantic relationships, I think, is hyperspecificity in preferences, which seems like it has a substantial social component to how it evolves. In the case of porn, with (broadly) human artists, the r34 space still takes a substantial delay and cost to translate a hyperspecific impulse into hyperspecific porn, including the cost of either having the skills and taking on the workload mentally (if the impulse-haver is also the artist) or exposing something unusual plus mundane coordination costs plus often commission costs or something (if the impulse-haver is asking a different artist).

With interactively usable, low-latency generative AI, an impulse-haver could not only do a single translation step like that much more easily, but iterate on a preference and essentially drill themselves a tunnel out of compatibility range. No? That seems like the kind of thing that makes an order-of-magnitude difference. Or do natural conformity urges or starting distributions stop that from being a big deal? Or what?

Having written that, I now wonder what circumstances would cause people to drill tunnels toward each other using the same underlying technology, assuming the above model were true…

Comment by Rana Dexsin on Scale Was All We Needed, At First · 2023-12-18T01:31:43.434Z · LW · GW

The public continued to react as they have to AI for the past year—confused, fearful, and weary.

Confirm word: “weary” or “wary”? Both are plausible here, but the latter gels better with the other two, so it's hard to tell whether it's a mistake.

Comment by Rana Dexsin on Is being sexy for your homies? · 2023-12-15T10:53:49.990Z · LW · GW

That last reminds me of Gwern's “The Melancholy of Subculture Society” with regard to creating a profusion of smaller status ladders to be on.

Comment by Rana Dexsin on Open Thread – Winter 2023/2024 · 2023-12-08T11:57:24.415Z · LW · GW

The Latin noun “instauratio” is feminine, so “magna” uses the feminine “-a” ending to agree with it. “forum” in Latin is neuter, so “magnum” would be the corresponding form of the adjective. (All assuming nominative case.)

Comment by Rana Dexsin on Speaking to Congressional staffers about AI risk · 2023-12-07T08:31:46.878Z · LW · GW

I'm not sure I understand the direction of reasoning here. Overestimating the difficulty would mean that it will actually be easier than they think, which would be true if they expected a requirement of high charisma but the requirement were actually absent, or would be true if the people who ended up doing it were of higher charisma than the ones making the estimate. Or did you mean underestimating the difficulty?

Comment by Rana Dexsin on ChatGPT 4 solved all the gotcha problems I posed that tripped ChatGPT 3.5 · 2023-11-30T04:32:15.756Z · LW · GW

I disagree with the last paragraph and think that “Normally” is misleading as stated in the OP; I think it's clear when talking about numbers in a general sense that issues with representations of numbers as used in computers aren't included except as a side curiosity or if there's a cue to the effect that that's what's being discussed.

Comment by Rana Dexsin on Lying Alignment Chart · 2023-11-30T03:38:45.045Z · LW · GW

“structure” feels off the mark for labeling the vertical axis; it feels like it wants to denote the structure of or within the (broadly defined) utterance, but instead it's mapping to part of the structure around it. If I consider some possible replacements:

  • “cause” feels much closer but maybe too specific.
  • “impetus” feels weird for a reason I can't immediately place; maybe because it implies too much of an intentional stance when that's the very thing under question?
  • “process” currently feels best to me of the ones I've considered.
  • “generation”, “root”, “initiation”, “emplacement” are other possibilities.
  • “intent” interestingly doesn't feel as bad as “impetus”, perhaps because it centers on a slightly different communicative mark that is nonetheless interpretable as the right thing.

Comment by Rana Dexsin on Public Weights? · 2023-11-02T07:31:07.090Z · LW · GW

Something I haven't yet personally observed in threads on this broad topic is the difference in risk modeling from the perspective of the potential malefactor. You note that outside a hackathon context, one could “take a biology class, read textbooks, or pay experienced people to answer your questions”—but especially that last one has some big-feeling risks associated with it. What happens if the experienced person catches onto what you're trying to do, stops answering questions, and alerts someone? The biology class is more straightforward, but still involves the risky-feeling action of talking to people and committing in ways that leave a trail. The textbooks have the lowest risk of those options but also require you to do a lot more intellectual work to get from the base knowledge to the synthesized form.

This restraining effect comes only partly in the form of real social risks to doing things that look ‘hinky’, and much more immediately in the form of psychological barriers from imagined such risks. People who are of the mindset to attempt competent social engineering attacks often report them being surprisingly easy, but most people are not master criminals and shy away from doing things that feel suspicious by reflex.

When we move to the LLM-encoded knowledge side of things, we get a different risk profile. Using a centralized, interface-access-only LLM involves some social risk to a malefactor via the possibility of surveillance, especially if the surveillance itself involves powerful automatic classification systems. Content policy violation warnings in ChatGPT are a very visible example of this; many people have of course posted about how to ‘jailbreak’ such systems, but it's also possible that there are other hidden tripwires.

For a published-weights LLM being run on local, owned hardware through generic code that's unlikely to contain relevant hidden surveillance, the social risk of experimenting drops into the negligible range, and someone who understands the technology well enough may also understand this instinctively. Getting a rejection response when you haven't de-safed the model enough isn't potentially making everyone around you more suspicious or adding to a hidden tripwire counter somewhere in a Microsoft server room. You get unlimited retries that are punishment-free from this psychological social-risk-modeling perspective, and they stay punishment-free pretty much up until the point where you start executing on a concrete plan for harm in other ways that are likely to leave suspicious ripples.

Structurally this feels similar to untracked proliferation of other mixed-use knowledge or knowledge-related technology, but it seems worth having the concrete form written out here for potential discussion.

This is the main driving force behind why my intuition agrees with you that the accessibility of danger goes up a lot with a published-weights LLM. Emotionally I also agree with you that it would be sad if this meant it were too dangerous to continue open distribution of such technology. I don't currently have a well-formed policy position based on any of that.

Comment by Rana Dexsin on Can I take ducks home from the park? · 2023-09-15T15:31:11.300Z · LW · GW

So from what I can see, this was just one trial per (prompt, model) pair? That seems pretty brittle; it might be more informative to look at the distribution of scores over eleven responses each or something, especially if we don't care so much about the average as whether a user can take the most helpful response after several queries.

Comment by Rana Dexsin on a rant on politician-engineer coalitional conflict · 2023-09-04T21:10:44.630Z · LW · GW

far more than a highly effective not non-technical MBA would be

Is this meant to have only a single negation? I'm not sure how the sentence as written works in context.

Comment by Rana Dexsin on Rana Dexsin's Shortform · 2023-07-19T17:24:00.126Z · LW · GW

Currently thinking about the notion that an idea or meme can be dangerous or poorly adapted to a population, despite being true and coherent in itself, because interactions with the preexisting landscape cause it to be metabolized into a different, wrong and/or toxic idea.

A principle in engineering that feels similar is “design for manufacturability”, where a design can be theoretically sound but not yield adequate quality when put through the limitations of the actual fabrication process, including the possibility of breaking the manufacturing equipment if you try. In this case, the equivalent of the fabrication process is the interaction inside people's minds, so “design for mentalization”, perhaps?

Comment by Rana Dexsin on When people say robots will steal jobs, what kinds of jobs are never implied? · 2023-07-15T15:09:35.967Z · LW · GW

I think it would be helpful to have more context around the initial question, then. Do I infer correctly that your model of the phenomenon under discussion includes something like “there exists broad media consensus around which jobs literally can ever versus literally cannot ever be done (in a sufficiently competent and socially accepted way) by machines[1]”? Because I'm not sure such a consensus exists meaningfully enough to come to boundary conclusions about, especially because to the extent that there is one, it seems like the recent spate of developments has blown up its stability for a while yet. It would've made more sense to ask what jobs machines can never do twenty years ago, when fields like writing and visual art would have been popular examples—examples where we now have clear economic-displacement unrest.

As for the specific example, the “I'm terrified!” quote by Rabbi Joshua Franklin in this other CNN article about a GPT-generated sermon seems like it's in the general vein of “machines will steal jobs”. I'm not sure whether the intent of your question would consider this truer counterevidence to the first example (perhaps because it's an article published by a major mass media organization), cherrypicking (perhaps because it wasn't positioned in such a way as to get into the popular mindset the same way as some other jobs—I don't know whether this is true), or irrelevant (perhaps because you're pointing at something other than the trend of mass media articles that I associate with the “machines will steal jobs” meme).

There's also a separate underlying filter here regarding which jobs are salient in various popular consciousnesses in the first place, and I'm not sure how that's meant to interact with the question either…


  1. I'm using “machines” rather than “robots” here primarily because I think pure software replacements are part of the same discursive trends and are substantially similar in the “human vs automation” tradeoff ways despite physical differences and related resource differences. ↩︎

Comment by Rana Dexsin on When people say robots will steal jobs, what kinds of jobs are never implied? · 2023-07-15T12:00:54.557Z · LW · GW

There are some sources that put a substantial dent in your example. A recent one that's relevant to current AI trends is an experimental GPT-backed church service (article via Ars Technica) in Germany as part of a convention of Protestants, but some years ago there was already Mindar, a robot Buddhist preacher (article via CNN) in Japan, via a collaboration between the Kodaiji temple and a robotics professor at Osaka University.

Comment by Rana Dexsin on Open Thread: June 2023 (Inline Reacts!) · 2023-06-19T03:16:53.982Z · LW · GW

Putting this in the OT because I'm risking asking something silly and basic here—after reading “Are Bayesian methods guaranteed to overfit?”, it feels like there should exist a Bayesian-update analogue to Kelly betting: underfitting enough to preserve some property that's important because the environment is unpredictable, catastrophic losses have distortionary effects, etc., where fitting to the observations ‘as well as possible’ is analogous to playing purely for expected value and thus turns out to be ‘wrong’ in the kind of iterated games we wind up in, in reality. Dweomite's comment is part of what inspired this, because of the way it enumerated reasons having to do with limited training data.
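
For reference, the single-bet Kelly result I'm leaning on (standard textbook form, nothing new): with win probability $p$ and net odds $b$ (you win $b$ per unit staked), the Kelly fraction is

$$f^{*} = p - \frac{1-p}{b},$$

whereas the naive expected-value maximizer stakes everything whenever $bp - (1-p) > 0$. The analogue I'm imagining would be whatever plays the role of that deliberate shortfall from the single-shot optimum on the Bayesian-update side.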

Is this maybe an existing well-known concept that I missed the boat on, or something that's already known to be unworkable or undefinable for some reason? Or what?

Comment by Rana Dexsin on Phone Number Jingle · 2023-05-25T20:40:59.356Z · LW · GW

That's what I did, though with do on 1 and mi′ on 0 (treating it as 10). I'm not sure what similarity metric is most relevant here, but in case of near-collisions, small additions of supporting rhythm or harmony could help nudge some sequences apart; swing time would give added contrast to even vs odd positions, for instance. Anyway, it sounds like this might not generalize well across people…
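
As a minimal sketch of the mapping I'm describing (the C-major tonic and the MIDI rendering here are just illustrative choices, not anything I actually used):

```python
# Digits map to major-scale degrees: 1 -> do, 2 -> re, ..., 7 -> ti, 8 -> do', 9 -> re', 0 -> mi' (degree 10).
# Semitone offsets above the tonic for scale degrees 1..10.
SCALE_SEMITONES = [0, 2, 4, 5, 7, 9, 11, 12, 14, 16]

def digits_to_midi(phone_number: str, tonic: int = 60) -> list[int]:
    """Turn the digits of a phone number into MIDI note numbers (tonic 60 = middle C)."""
    notes = []
    for ch in phone_number:
        if ch.isdigit():
            degree = 10 if ch == "0" else int(ch)  # treat 0 as scale degree 10
            notes.append(tonic + SCALE_SEMITONES[degree - 1])
    return notes

print(digits_to_midi("867-5309"))  # [72, 69, 71, 67, 64, 76, 74]
```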

Aside: this is a time when I'm really appreciating two-axis voting. The vote-signal of “that's mildly interesting but really doesn't seem like it'd work” is very useful to see in a compact form, even though the written responses contain similar information in more detail.

Comment by Rana Dexsin on Phone Number Jingle · 2023-05-24T07:19:29.033Z · LW · GW

I'm actually writing from experience there! That's how my memory still mainly encodes the few main home telephone numbers from when I was growing up (that is, I remember the imputed melodies before the digits in any other form), but it's possible they were unusually harmonious. I don't think it was suggested by family, just something I did spontaneously, and I am not at all sure how well this would carry more generally… it may also be relevant that I had a bunch of exposure to Chinese numeric musical notation.

Comment by Rana Dexsin on Phone Number Jingle · 2023-05-23T23:22:52.948Z · LW · GW

Assigning digits to scale degrees feels like an interesting alternative here, especially for the musically-inclined.

Comment by Rana Dexsin on Yudkowsky on AGI risk on the Bankless podcast · 2023-03-13T05:41:18.026Z · LW · GW

I did in fact go back and listen to that part, but I interpreted that clarifying expansion as referring to the latter part of your quoted segment only, and the former part of your quoted segment to be separate—using cryptocurrency as a bridging topic to get to cryptography afterwards. Anyway, your interpretation is entirely reasonable as well, and you probably have a much better Eliezer-predictor than I do; it just seemed oddly unconservative to interpolate that much into a transcript proper as part of what was otherwise described as an error correction pass.

Comment by Rana Dexsin on Yudkowsky on AGI risk on the Bankless podcast · 2023-03-13T04:23:52.158Z · LW · GW

We have people in crypto[graphy] who are good at breaking things, and they're the reason why anything is not on fire. Some of them might go into breaking AI systems instead, because that's where you learn anything.

Was there out-of-band clarification that Eliezer meant “cryptography” here (at 01:28:41)? He verbalized “crypto”, and I interpreted it as “cryptocurrency” myself, partly to tie things in with both the overall context of the podcast and the hosts' earlier preemptively-retracted question which was more clearly about cryptocurrency. Certainly I would guess that the first statement there is informally true either way, and there's a lot of overlap. (I don't interpret the “cryptosystem” reference a few sentences later to bias it much, to be clear, due to that overlap.)

Comment by Rana Dexsin on Abusing Snap Circuits IC · 2023-03-05T22:57:05.341Z · LW · GW

If you weren't already aware, look up “circuit bending” if you want to see a grungier DIY/hacker-style subcultural activity around producing interesting sounds via ad-hoc manipulations of electronics.

Comment by Rana Dexsin on Meta "open sources" LMs competitive with Chinchilla, PaLM, and code-davinci-002 (Paper) · 2023-03-05T00:40:48.260Z · LW · GW

Tiberium at HN seems to think not. Copied and lightly reformatted, with the 4chan URLs linkified:

It seems that the leak originated from 4chan [1]. Two people in the same thread had access to the weights and verified that their hashes match [2][3] to make sure that the model isn't watermarked. However, the leaker made a mistake of adding the original download script which had his unique download URL to the torrent [4], so Meta can easily find them if they want to.

I haven't looked at the linked content myself yet.

Comment by Rana Dexsin on Sydney can play chess and kind of keep track of the board state · 2023-03-03T14:59:57.051Z · LW · GW

Long before we get to the “LLMs are showing a number of abilities that we don't really understand the origins of” part (which I think is the most likely here), a number of basic patterns in chess show up in the transcript semi-directly depending on the tokenization. The full set of available board coordinates is also countable and on the small side. Enough games and it would be possible to observe that “. N?3” and “. N?5” can come in sequence but the second one has some prerequisites (I'm using the dot here to point out that there's adjacent text cues showing which moves are from which side), that if there's a “0-0” there isn't going to be a second one in the same position later, that the pawn moves “. ?2” and “. ?1” never show up… and so on. You could get a lot of the way toward inferring piece positions by recognizing the alternating move structure and then just taking the last seen coordinates for a piece type, and a layer of approximate-rule-based discrimination would get you a lot further than that.
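
As a rough, purely illustrative sketch of that “take the last seen coordinates for a piece type” heuristic (not a claim about what the model does internally, and the parsing is deliberately simplified):

```python
import re

def approximate_positions(moves: list[str]) -> dict[tuple[str, str], str]:
    """Crude board-state guess: remember the last destination square seen for each
    (side, piece letter), relying only on the moves alternating White/Black.
    Castling, promotions, disambiguation, and pawn captures are ignored."""
    last_square: dict[tuple[str, str], str] = {}
    for i, move in enumerate(moves):
        side = "white" if i % 2 == 0 else "black"
        m = re.match(r"([KQRBN]?)([a-h][1-8])", move.replace("x", ""))
        if m:
            piece = m.group(1) or "P"  # no leading letter means a pawn move
            last_square[(side, piece)] = m.group(2)
    return last_square

print(approximate_positions(["e4", "e5", "Nf3", "Nc6", "Bb5"]))
# {('white', 'P'): 'e4', ('black', 'P'): 'e5', ('white', 'N'): 'f3',
#  ('black', 'N'): 'c6', ('white', 'B'): 'b5'}
```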

Comment by Rana Dexsin on Meta "open sources" LMs competitive with Chinchilla, PaLM, and code-davinci-002 (Paper) · 2023-02-25T11:08:15.021Z · LW · GW

I wonder whether, when approving applications for the full models for research, they watermark the provided data somehow to be able to detect leaks. Would that be doable by using the low-order bits of the weights or something, for instance?
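
To make “low-order bits” concrete, here is a hypothetical sketch of the kind of thing I mean (I have no idea whether Meta does anything like this; the tag string and bit layout are made up for illustration):

```python
import numpy as np

def embed_watermark(weights: np.ndarray, tag: bytes) -> np.ndarray:
    """Write the bits of `tag` into the least significant mantissa bits of the
    first len(tag)*8 float32 weights. The perturbation is ~1e-7 relative, so
    model behavior should be essentially unchanged."""
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
    marked = weights.astype(np.float32).copy()
    raw = marked.view(np.uint32)
    raw[: bits.size] = (raw[: bits.size] & ~np.uint32(1)) | bits
    return marked

def read_watermark(weights: np.ndarray, n_bytes: int) -> bytes:
    """Recover n_bytes of tag from the mantissa LSBs."""
    raw = weights.astype(np.float32).view(np.uint32)
    bits = (raw[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

w = np.random.randn(1024).astype(np.float32)
print(read_watermark(embed_watermark(w, b"grantee-42"), 10))  # b'grantee-42'
```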

Comment by Rana Dexsin on Cyborg Psychologist · 2023-02-16T00:09:04.822Z · LW · GW

I mean in the paragraph “If anyone wants to check it out, the app can be found at: https://cyborg-psychologiy.com”. I suppose I should have said “the main pseudo-link”. 🙂 What you gave in this last comment points to the blog post, not (directly) to the app.

Comment by Rana Dexsin on Cyborg Psychologist · 2023-02-15T22:36:01.015Z · LW · GW

The main link might be broken due to an extraneous ‘i’.

Comment by Rana Dexsin on In Defense of Chatbot Romance · 2023-02-11T22:35:32.026Z · LW · GW

Without knowing what implications this might have, I notice that the first two points against “People might neglect real romance” are analogous to arguments against “People won't bother with work if they have a basic income” based on a “scarcity decompensation threshold” model: avoiding getting trapped in a really bad relationship/job by putting a floor on alternatives, and avoiding having so little confidence/money that you can't put in the activation energy to engage with the pool/market to begin with.

Comment by Rana Dexsin on Acting Normal is Good, Actually · 2023-02-11T01:51:16.168Z · LW · GW

Comment by Rana Dexsin on Duckbill Masks Are Great · 2023-02-07T21:36:21.004Z · LW · GW

I wonder if anyone's tried targeting the avian furry market with these, since those would seem to be the most obvious class of “people who might not mind looking like a duck”. I can't seem to get a good grip on that intersection via search engines, mostly because I get a lot more results related to the non-protective masks more visibly associated with that group.

Comment by Rana Dexsin on Misleading Fast Charging Specs · 2023-02-05T22:26:11.858Z · LW · GW

I don't know, do you think they're trying to say that if connected to one of the 250kW chargers they wouldn't be able to hit their maximum charging speed?[1]

Let's rephrase the question from the publisher side: when compiling a specification sheet, which sets of conditions do you choose to include? This is naturally a balance, where comprehensiveness and comprehensibility trade off, along with other factors like difficulty of testing. For a more extreme example, consider a world in which Level 3 chargers regularly showed up in capacities advertised to 1 kW granularity[2]: does the manufacturer take every data point between 1 kW and 350 kW and put them in a massive table and graph (and hope that J. Random Carbuyer knows how to read it)?[3]

The Kia EV6 spec seems to base its independent-axis points on anchors of public perception of Level 3 chargers at the time of publication. 350 kW is the top capacity of Level 3 charger commonly available to my knowledge (this shows up in various factoids on the Web for instance), and 50 kW I think may be the lowest or among the lowest. The consumer is, yes, left to guess what the curve in between looks like, but those bounds are good enough to establish an intuition and make rough plans from. These also correspond loosely to two useful anchor edges of the situation envelope: “living in a proven electric-vehicle-friendly area where 350 kW chargers are likely to be built if they're not here already / wanting to know what kind of charge time I can expect if I make sure to go to the good station” and “living in an area where only weaker chargers are getting any traction thus far / wanting to know what kind of charge time I might have to budget for if I just choose any old place”.[4]

A specific motivational factor that I would imagine here is implied range of observation. This comes into play both on the factual level, with facts like “they've tested that this car works with a 350 kW charger at all”[5] and “they know that this charging time stays true for a 350 kW charger”, and on the social status and emotional comfort levels that derive from those, along the lines of “this manufacturer is up to date with the progress of charging technology and is likely to know what they're doing with the modern stuff”. The latter, while to some degree self-serving to the manufacturer, is not misleading provided that the design and/or test conditions did in fact exist.

Comparability is another motivational factor that I would anticipate here—that the consumer would want to see “how do cars X, Y, and Z behave under whichever charging conditions I'm currently holding in mind”[6]. However, there wasn't much evidence of this in the set of specs under discussion. Possibly this is because they come from different manufacturers, where I would expect comparability to be a stronger force for multiple specs from the same source.

So: there's reasons the 350 kW condition could be advantageous as a point to provide, and I see no pointed, strong reason that they would want to put a 250 kW row in as well. But that doesn't mean they can't, either! Alternate approaches do occur in your list of examples. Let's look at two of them:

  • The VW ID.3 brochure is more technical-looking overall than the Kia spec sheet, and provides its DC3 (which I believe is the same as “Level 3 fast DC charging”) charge time based on slightly different charger capacities for the different battery options. I don't know why exactly, and I think this makes it require more mental steps to compare them, but assuming the reader will infer an inequality is a valid alternative and seems closer to what you're expecting.

  • The Hyundai IONIQ page specifies the inequality explicitly: “>250kW”. Quibbles over the use of > vs ≥ aside, this again seems closer to what you're imagining above as far as picking a tier beyond which further benefit cuts out as the reference point to provide.

From a more speculative plausibility-modeling perspective, I see that you calculate (and I also calculate) an 86% ratio between the P3 peak measurement and the charger advertised capacity condition for both of those cars. Let's suppose that the charging capacity conditions in the spec sheet were in fact based on an anticipated benefit ceiling per above. I didn't see in the P3 report where exactly they were measuring the power usage, but that ratio could easily be explained by transmission losses between two different measuring points, safety headroom, or similar natural discrepancies. If we assume that Hyundai is taking into account a 50 kW granularity of advertisement of charger capacity, then for their Kona model, the ceiling to that granularity of 75 kW / 0.86 also matches the 100 kW figure in the consumer spec sheet—harmlessly leaving extra room in the same way as with the 350 kW anchor in the Kia spec, since maybe a 90 kW charger would be just as fast.
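
Spelling that Kona arithmetic out explicitly (same numbers as above):

$$\frac{75\ \text{kW}}{0.86} \approx 87.2\ \text{kW}, \qquad \left\lceil \frac{87.2}{50} \right\rceil \times 50\ \text{kW} = 100\ \text{kW}.$$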

But that wouldn't explain the three cars where (after revision) the figures match better, so let's look at those one by one:

  • The Fiat 500e… well, I actually can't seem to get at the spec page anymore after some kind of misgesture that causes it to redirect to the Fiat USA homepage, which doesn't list a 500e. I don't have time to figure out how to untangle right now, so that one's inconclusive for now.

  • The Mini Cooper SE brochure presents the figure you cite in a different place from the Kia and Hyundai spec sheets. Taking the yellow horizontal bars as the if-then separator, the preconditions are just Level 1/2/3, and the “up to 50 kW” text is below the “Level 3” bar, in the corresponding result cell. Note specifically that this doesn't say “if you connect it to a charger that advertises a maximum of 50 kW, the peak power drawn will be 50 kW”! If that turned out to be false, you could even make a reasonable case that this statement were more misleading from a consumer perspective (with some wishy-washiness around whether the consumer should be assumed to be aware of natural discrepancies per above). I didn't see a part of the P3 report that described attempting those conditions, so I don't know how the Mini Cooper would physically perform there.

  • The Tesla Model 3 page does something similar and is subject to a further vertical integration constraint: the heading of that cell doesn't say “Level 3” but rather says “Supercharging Max”, where I believe Supercharging refers specifically to Tesla's charging methods and facilities. Presenting a front in which the consumer has less discrepant information to reconcile from different suppliers is one of the benefits of vertical integration, and it seems reasonable that Tesla would take advantage of that.

From a broader communications-analysis perspective, I think the intended audience of the P3 report is different. If I look at the front page of the P3 website, it does not seem designed to provide a Consumer-Reports-like publication to me, and their “About” page implies a B2B/intra-industry context with “We advise our clients strategically in the areas of technology strategy, business process optimization and organizational development.”. So aside from the promise vs observation and independent vs dependent axis confusions, there's a big difference in communication context there.


  1. I actually think it is plausible that it would not! I don't know the details of how e.g. voltage negotiation, reference points of power measurement, or safety handling around transients work for high-capacity electric vehicle charging, but in general I would expect the effective tier requirement for the charger to be higher than the power delivered to the battery, with the uncertainty mostly revolving around how much higher. But I don't think this is necessary for my argument that their spec sheet is constructed reasonably; I think my primary position holds even if we assume that the same maximum charging speed would be reliably attained by a Kia EV6 connected to a 250 kW charger. ↩︎

  2. A rough glance around the Web suggests that something like 50 kW granularity of ‘tiers’ of Level 3 charger is common. ↩︎

  3. Longer datasheets with more specialized and technical audiences do do this sort of thing, though! Electrical components with current versus voltage curves, or power consumption versus clock speed, for instance. ↩︎

  4. Level 2 and Level 1 charging times are sufficiently far off, and their environmental prerequisites sufficiently far off as well, that they correspond to distinct behavior types and distinct rough planning regimes (at work / at home overnight rather than gas-station-style top-ups). This is presumably the motivation behind the level categorization in the first place. ↩︎

  5. You could consider this redundant with other marketing messages surrounding interchangeability of charging stations, but for anecdote, I personally observed a relative several years ago fretting over whether a higher-capacity USB flash storage device might require more experience or finesse to operate properly than a lower-capacity one, and I would be unsurprised if analogous forces came into play for a large chunk of the target market of household electric vehicles. (The route is even more plausible in this case, if I'm allowed to speculate; what if, say, a 350 kW charger puts out a within-spec but sharper voltage rise time than the previously tested 250 kW one, and this pushes the envelope of part of the charging circuit in an unanticipated way?) ↩︎

  6. From a marketing-ease standpoint, the conditions the consumer anchors on could well be based on whichever sheet came up first, and the reading-mechanical strictness could even get to the point of “scan the sheet for a specific number and give up or feel frustrated if it's not there”, though neither of these are necessary. ↩︎

Comment by Rana Dexsin on Misleading Fast Charging Specs · 2023-02-05T05:08:43.728Z · LW · GW

On the Kia EV6 page you link first, I think the position of the 350 kW value you quoted being part of the initial conditions rather than an expected draw is pretty clear. The interpretation I'm pointing at is “if connected to a charger with a capacity of 350 kW, the expected time is approximately 18 minutes”—the 350 kW is on the LHS of the conditional as signaled by its position in the text. By comparison to nearby text, the entry immediately above the one you quoted states 73 minutes under the condition of being connected to a Level 3 charger (the same fast-DC type) with a lower capacity tier of 50 kW, and other entries above that one display corresponding “if provided with” → “duration will be” pairs for weaker and easier-to-install charging pathways, down to household AC taking the longest duration. This would make no sense under an interpretation of each of the wattage values all reflecting the battery's internal acceptance rate. Note that the text as you quoted it is not visible to me as a straight-running block of plain text in the page linked; instead, “DC Fast Charge Time (10-80% @ 350 kW via Electric Vehicle Supply Equipment) Level 3 Charger” is the opening line of an expandable box whose body content is “Approx. 18 min.”, and that interaction/presentation provides more clarity to the conditional structure.

The Tesla Model 3 page states “max” for its 250 kW figure, whereas the P3 page is clear that the car “only achieves an average charging power of 146 kW” (emphasis mine), and the associated graph does show 250 kW being drawn at the initial charge state of 10%, then decreasing as the battery charge increases.

The Hyundai Kona and IONIQ pages are similar to the Kia EV6 page: the kilowatts are in the if-part of the conditional, and the measurements listed in the body cells to be relied on as outputs are the minutes.

The VW ID.3 brochure I also read as having the kilowatts in the if-part, though the text is laid out less clearly. Also, I only see your 125 kW figure mentioned in the context of the 77 kWh battery option whereas the P3 report specifies that they used one of the 58 kWh battery options.

In general, what information flow will the median consumer use? Not “do a bunch of division that's going to be wrong because of other variability in how batteries work”. “Show me how long it takes with this type of charger” is the information that's closest to their planning needs. The Tesla page is unusual here compared to the others, but “will a higher-capacity charging feed than X reduce my charging time” is a plausible second most relevant question and is answered better by the max than by the average (if you provide less than a 250 kW capacity, some part of the charge curve will take extended time by comparison to the nominal one).

Comment by Rana Dexsin on SolidGoldMagikarp (plus, prompt generation) · 2023-02-05T03:15:33.468Z · LW · GW

“ForgeModLoader” has an interestingly concrete plausible referent in the loader component of the modding framework Forge for Minecraft. I believe in at least some versions its logfiles are named beginning with that string exactly, but I'm not sure where else that appears exactly (it's often abbreviated to “FML” instead). “FactoryReloaded” also appears prominently in the whitespace-squashed name (repository and JAR file names in particular) of the mod “MineFactory Reloaded” which is a Forge mod. I wonder if file lists or log files were involved in swinging the distribution of those?

Comment by Rana Dexsin on How it feels to have your mind hacked by an AI · 2023-01-22T13:20:21.984Z · LW · GW

Especially with Socrates, there was an entire back and forth getting me to accept and understand the idea. It wasn't just a wise sentence, there was a full conversation.

If that seems like a significant distinguisher to you, it might be of more interest if you were to demonstrate it in that light—ideally with a full transcript, though of course I understand that that may be more troubling to share.

Comment by Rana Dexsin on How it feels to have your mind hacked by an AI · 2023-01-16T19:23:24.163Z · LW · GW

So you mean different modes of subjective experience? That's quite relevant in terms of how to manage such things from the inside, yes. But what I meant by “predict” above was as applied to the entire system—and I model “intuitive feeling of normal” as largely prediction-based as well, which is part of what I was getting at. People who are raised with different environmental examples of “what normal people do” wind up with different such intuitions. I'm not quite sure where this is going, admittedly.

Comment by Rana Dexsin on How it feels to have your mind hacked by an AI · 2023-01-15T15:23:01.857Z · LW · GW

This was and is already true to a lesser degree with manipulative digital socialization. The less of your agency you surrender to network X, the more your friends who have given their habits to network X will be able to work at higher speed and capacity with each other and won't bother with you. But X is often controlled by a powerful and misaligned entity.

And of course these two things may have quite a lot of synergy with each other.

Comment by Rana Dexsin on How it feels to have your mind hacked by an AI · 2023-01-15T13:04:28.520Z · LW · GW

I don't think I understand what you mean. To the extent that I might understand it I tentatively think I don't agree, but would you find it useful to describe the distinction in more detail, perhaps with examples?

Comment by Rana Dexsin on How it feels to have your mind hacked by an AI · 2023-01-15T11:52:54.538Z · LW · GW

As an autistic person, I've always kinda felt like I was making my way through life by predicting how a normal person would act.

I would tend to say that ‘normal’ people also make their way through life by predicting how normal people would act, trained by having observed a lot of them. That's what (especially childhood) socialization is. Of course, a neurotypical brain may be differently optimized for how this information is processed than other types of brains, and may come with different ‘hooks’ that mesh with the experience in specific ways; the binding between ‘preprogrammed’ instinct and social conditioning is poorly understood but clearly exists in a broad sense and is highly relevant to psychological development.

Separately, though:

And I seriously had to stop and think about all 3 of these responses for hours. It is wild how profound these AI manage to be, just from reading my message.

Beware how easy it is to sound Deep and Wise! This is especially relevant in this context since the tendency to conflate social context or framing with the inner content of a message is one of the main routes to crowbarring minds open. These are similar to Daniel Dennett's “deepities”. They are more like mirrors than like paintings, if that makes any sense—and most people when confronted with the Voice of Authority have an instinct to bow before the mirror. (I know I have that instinct!) But also, I am not an LLM (that I am aware of) and I would guess that I can come up with a nearly unlimited amount of these for any situation that are ultimately no more useful than as content-free probes. (In fact I suspect I have been partially trained to do so by social cues around ‘intelligence’, to such an extent that I actively suppress it at times.)