Posts

Just because an LLM said it doesn't mean it's true: an illustrative example 2024-08-21T21:05:59.691Z
dirk's Shortform 2024-04-26T11:12:27.125Z

Comments

Comment by dirk (abandon) on Language Ex Machina · 2024-09-07T13:45:38.704Z · LW · GW

Certainly more skilled writers are clearer, but if you routinely dismiss unclear texts as meaningless nonsense, you haven't gotten good at reading; you've Goodharted your internal metrics.

Comment by dirk (abandon) on Language Ex Machina · 2024-09-06T20:32:18.886Z · LW · GW

To the skilled reader, human-authored texts are approximately never foggy.

Comment by dirk (abandon) on dirk's Shortform · 2024-09-03T20:26:15.581Z · LW · GW

I'm confused about SAE feature descriptions. In both Anthropic's and Google's demos, there are a lot of descriptions that seem not to match a naked-eye reading of the top activations. (E.g. "Slurs targeting sexual orientation" also has a number of racial slurs in its top activations; the top activations for "Korean text, Chinese name yunfan, Unicode characters" are almost all the word "fused" in a metal-related context; etc.) I'm not sure whether these short names are the automated Claude descriptions or whether there are longer, more accurate descriptions somewhere; and if these are the automated descriptions, I'm not sure whether there's some reason to think they're more accurate than they look, or whether it doesn't matter if they're slightly off, or some third thing.

Comment by dirk (abandon) on jacquesthibs's Shortform · 2024-09-03T18:55:53.950Z · LW · GW

Bioshok3 said in a later tweet that they were in any case mistaken about it being 10k H100s and it was actually 100k H100s: https://x.com/bioshok3/status/1831016098462081256 

Comment by dirk (abandon) on Jimrandomh's Shortform · 2024-09-03T00:23:19.355Z · LW · GW

I don't think having a negative emotion about something is strong evidence someone's opinions weren't drawn from an external source. (For one thing, most people naturally have negative reactions to the breaking of social norms!)

Also, I don't see jimrandomh making any claims about the thing you're talking about anywhere in his comment; he was exclusively discussing word choice among people who had negative reactions.

Comment by dirk (abandon) on Ruby's Quick Takes · 2024-09-01T16:47:01.864Z · LW · GW

I'm interested! I'd probably mostly be comparing it to unaugmented Claude for things like explaining ML topics and turning my post ideas into drafts (I don't expect it to be great at the latter, but I'm curious whether having some relevant posts in the context window will elicit higher quality). I also think the low-friction integration might make it useful for clarifying math- or programming-heavy posts, though I'm not sure I'll want this often.

Comment by dirk (abandon) on Developing Positive Habits through Video Games · 2024-08-26T20:59:13.698Z · LW · GW

Arguably Habitica, as a gamified task manager, is an attempt to do #2 here (by way of directly giving in-game rewards for IRL positive habits).

Comment by dirk (abandon) on Zach Stein-Perlman's Shortform · 2024-08-26T03:14:48.694Z · LW · GW

I don't think the placement of fault is causally related to whether communication is difficult for him, really. To refer back to the original claim being made, Adam Scholl said that

My guess is that this seems so stressful mostly because Anthropic’s plan is in fact so hard to defend... [I]t seems unsurprising (and good) that people might sometimes strongly object; if Anthropic had more reassuring things to say, I’m guessing it would feel less stressful to try to reassure them.

I think the amount of stress incurred when doing public communication is nearly orthogonal to these factors; in particular, when one is trying to be as careful about anything as Zac is trying to be about confidentiality, it is quite high at baseline. I don't think Adam Scholl's assessment arose from a usefully-predictive model, nor one which was likely to reflect the inside view.

Comment by dirk (abandon) on Please stop using mediocre AI art in your posts · 2024-08-26T03:02:45.048Z · LW · GW

I'm told (by the 'simple parameters' section of this guide, which I haven't had the opportunity to test but which to my layperson's eye seems promisingly mechanistic in approach) that adjusting the stylize parameter to values lower than its default of 100 turns down the Midjourney-house-style effect (at the cost of sometimes making things more collage-y and incoherent as values get lower), and that increasing the weird parameter above its default of 0 will push outputs away from the default style (more or less).
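For concreteness, a hypothetical invocation (untested on my part; the parameter names and defaults are as that guide describes them, and the prompt text itself is just an illustration):

```
/imagine prompt: a lighthouse at dusk, watercolor --stylize 50 --weird 250
```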

Comment by dirk (abandon) on Zach Stein-Perlman's Shortform · 2024-08-25T15:25:21.509Z · LW · GW

I think communication as careful as it must be to maintain the confidentiality distinction here is always difficult in the manner described, and that communication to large quantities of people will ~always result in someone running with an insane misinterpretation of what was said.

Comment by dirk (abandon) on RobertM's Shortform · 2024-08-25T14:26:27.073Z · LW · GW

No? Caring is an emotion, to me; it might affect your actions, but it doesn't necessarily follow that it will.

Edit: E.g. you might emotionally care but intellectually think changing the situation would make it worse on net; you might care about multiple conflicting things, another of which takes precedence; you might just not have much of an opportunity to do anything (e.g. they live somewhere else and you can't do anything over the internet, their problem is something unfixable like a loved one dying, etc.); etc. (I also wouldn't take the expression of sympathy to require expressing a desire to change the situation? Like, you wouldn't want to express approval while trying to be sympathetic, but you might say, like, "I'm sorry" or "that really sucks" or whatever.)

Comment by dirk (abandon) on RobertM's Shortform · 2024-08-24T22:12:16.350Z · LW · GW

Huh, the division I usually make is that empathy is feeling their same feelings and sympathy is caring about their problems; it hadn't occurred to me to think of either as requiring more action than the other. Not sure whether it's your version or mine that RobertM meant, but it seems worth highlighting as a potential point of miscommunication.

Comment by dirk (abandon) on Zach Stein-Perlman's Shortform · 2024-08-24T22:10:22.942Z · LW · GW

I don't really think any of that affects the difficulty of public communication; your implication that it must be the cause reads to me more like an insult than a well-considered psychological model.

Comment by dirk (abandon) on ChatGPT can learn indirect control · 2024-08-23T22:11:06.508Z · LW · GW

Sonnet 3.5 got it on the first try, whether or not I cropped the names out:
 

Comment by dirk (abandon) on How do we know dreams aren't real? · 2024-08-23T02:07:02.566Z · LW · GW

Your #2 sounds like an argument not infrequently made by proponents of computationalism, which is quite a commonly-held position here on LessWrong. Not sure if it exactly matches, but you might enjoy their positions in any case.

Comment by dirk (abandon) on Habryka's Shortform Feed · 2024-08-23T02:01:09.338Z · LW · GW

Another minor annoyance I've since noticed: at this small scale it's hard to distinguish posts I've upvoted from posts I haven't voted on. Maybe it'd help if the upvote indicator were made a darker shade of green or something?

Comment by dirk (abandon) on Just because an LLM said it doesn't mean it's true: an illustrative example · 2024-08-22T14:48:51.927Z · LW · GW

It doesn't directly impact the truth of the facts they claimed, no—but the facts are produced by the same process that produces the studies. It's easy to check whether or not a study exists; less-clearcut domains aren't necessarily more reliable.

Comment by dirk (abandon) on the Giga Press was a mistake · 2024-08-21T22:19:57.815Z · LW · GW

Loving the discussion of casting vs stamping! Very interesting info, and you’ve explained it clearly :)

Comment by dirk (abandon) on You don't know how bad most things are nor precisely how they're bad. · 2024-08-21T22:10:24.074Z · LW · GW

With regards to its “help” understanding why Uniqlo didn’t decline, it is, as I demonstrated above, equally good at providing plausible-sounding explanations for why Uniqlo did decline. Here, again, is ChatGPT:

Several factors likely contributed to the perceived decline in quality at Uniqlo, reflecting broader trends in the fashion industry and specific business strategies the brand has pursued.

1. Cost-Cutting and Scale: As Uniqlo expanded globally, the need to maintain competitive pricing while increasing production volumes may have led to compromises in material quality. To keep costs low and meet growing demand, the company might have shifted to cheaper fabrics and production methods, sacrificing the durability and feel that initially attracted customers.

2. Fast Fashion Influence: Although Uniqlo markets itself as a provider of "lifewear" rather than fast fashion, its business model has increasingly incorporated elements of fast fashion, such as frequent new collections and quick production cycles. This shift often prioritizes speed and cost-efficiency over quality, resulting in garments that are less durable.

3. Supply Chain Pressures: Like many global retailers, Uniqlo has faced pressure from fluctuating raw material costs, labor costs, and logistical challenges. To mitigate these pressures, the company may have opted for lower-quality materials or reduced quality control measures in manufacturing processes.

4. Shifts in Consumer Expectations: As consumers have grown accustomed to low prices and frequent turnover in their wardrobes, brands like Uniqlo might have adjusted their offerings to meet these expectations, leading to a focus on affordability and trendiness over long-lasting quality.

5. Strategic Decisions: Uniqlo's parent company, Fast Retailing, has focused on aggressive expansion and maintaining high profitability. Strategic decisions to prioritize these goals over maintaining the brand's initial quality standards could have influenced the decline. The emphasis on rapid growth might have overshadowed the commitment to quality that once defined the brand.

Did you verify what ChatGPT told you before you repeated it as fact?

Comment by dirk (abandon) on You don't know how bad most things are nor precisely how they're bad. · 2024-08-21T21:46:00.609Z · LW · GW

You assumed that the studies existed at all. This is not a safe assumption to rely on when you are dealing with LLMs.

Comment by dirk (abandon) on Milan W's Shortform · 2024-08-21T18:40:35.211Z · LW · GW

If you find yourself terminally valuing something that is not someone's experienced valence, then either one of these propositions is likely true:
A nonsentient process has at some point had write access to your values.

Maybe I'm misunderstanding your point, but this seems straightforwardly true for most people? Evolution, which wrote ~all our values, isn't sentient, and most people do terminally value some things other than experienced valence (e.g. various forms of art, carrying out the traditions of their culture, doing things correctly according to some-or-other prescriptive system, etc); these may well be reified heuristics, but they're not experienced as instrumental.

Comment by dirk (abandon) on You don't know how bad most things are nor precisely how they're bad. · 2024-08-21T18:15:22.943Z · LW · GW

For a more pointed example, here's ChatGPT:

Uniqlo, once lauded for its affordable yet high-quality basics, has faced criticism in recent years for a noticeable decline in the quality of its clothing. Customers have reported that fabrics feel thinner and less durable, with some items showing signs of wear after just a few washes. Online reviews and consumer feedback suggest that the brand's earlier collections featured more robust construction and longer-lasting materials, while more recent offerings seem to prioritize cost-cutting over durability. This perceived decline in quality has sparked concern among long-time fans of the brand, who feel that Uniqlo's focus on fast fashion has come at the expense of the reliability and longevity that once set it apart.

Gosh—that's just the opposite of what you said! Does this mean you've been proven wrong? No, it means I told it to argue that Uniqlo's clothes have decreased in quality over time and it did, because LLMs will take any position you ask of them regardless of whether or not it's true.

Comment by dirk (abandon) on You don't know how bad most things are nor precisely how they're bad. · 2024-08-21T18:09:45.893Z · LW · GW

I didn't make an argument; I provided an illustrative example of why LLMs are useless as sources. The fact that Claude said there was a study does not mean there was a study in real life, and your apparent assumption that Claude's statements are always true reflects very negatively on your epistemics.

(Edit: I asked it to cite the studies in question; half of them were confabulated, and AFAICT only one of them contained what Claude said it did. (Note that this is Claude 3 Haiku, which isn't the best; I'd expect frontier models to name four or maybe even five real papers in a list of six (and, with lower confidence, to have a more accurate understanding of the papers' contents).) Results below:

  1. "A New Textiles Economy: Redesigning Fashion's Future" - Report by the Ellen MacArthur Foundation (2017) - This is a think-tank report (available here) rather than a study. However, it does actually exist. Claude says it "noted that the number of times clothes are worn before disposal has decreased by 36% compared to 15 years ago." It really does say this! Page 19, citing as a source the "Circular Fibres Initiative analysis based on Euromonitor International Apparel & Footwear 2016 Edition (volume sales trends 2005–2015)." (Teeeeeechnically the report claims the clothing is still wearable and therefore throwing it away is a waste, which isn't the same thing as poor durability causing decreased wear time, but humans cite studies that support something a little to the left of their point all the time so in the interests of fairness I won't mark it down for that). A-.
  2. "The State of Fashion 2016" - Report by the McKinsey Global Fashion Index (2016) - This one seems to be real! (Technically,  it's "The State of Fashion 2017"—their first report, so Claude can't have meant an earlier one—but it was released in 2016 and half of it is an overview of same, so IMO it's close enough.) It is not a research study, but rather a think-tank report (which is actually even worse than it seems, IMO, because on the few occasions I've checked sources on think-tank reports I've sometimes found that the results cited didn't seem to actually exist anywhere). Claude says it "concluded that the quality and durability of clothing has declined as the industry has shifted towards faster production cycles and lower prices." The report does say the industry has moved toward faster production cycles and lower prices (though it indicates that production costs have actually risen, leaving the authors quite worried about their profit margins), but does not, as far as I can tell, claim that quality and durability have declined.
  3. "Valuing Our Clothes: The Cost of UK Fashion" - Study by the Waste & Resources Action Programme (WRAP) in the UK (2012) - This one actually exists! (Full report here). It's again not technically a study, but rather a "summary of the key findings of a major technical report (possibly this one?) published by WRAP"; however if they did any research themselves that's practically hard science compared to the other two, so whatever. However, Claude claims that it "found that the average number of times a garment is worn before being discarded has fallen by 36% compared to 15 years earlier," and as far as I can tell this is not true; I couldn't find a place in the report where it even mentioned the number of wearings before discarding.
  4. "Apparel and Footwear Benchmarking Analysis" - Report by the American Apparel & Footwear Association - Confabulated (there are reports with similar titles from organizations which are not this one, but this specific report does not exist)
  5. "The Apparel Sourcing Caravan's New Route to the Market" - Report by the Boston Consulting Group - Confabulated
  6. "Clothing Durability and the Creation of Value" - Study by the University of Cambridge Institute for Manufacturing (2018) - Confabulated

Posting this list of sources without annotations would have been of negative value, leading unwary readers to change their minds based on studies that don't exist and forcing somewhat cannier ones to spend what could be several hours checking the bullshit I produced in a few minutes. For this reason as well as the others I discussed, I do not think it is appropriate to cite the statements of LLMs as though they constitute evidence.)

Comment by dirk (abandon) on Book Recommendations for social skill development? · 2024-08-21T18:02:38.964Z · LW · GW

This is a website rather than a book, but if you're interested in those as well, I've seen https://www.succeedsocially.com/ recommended a few times on here and thought many of the articles were nicely specific and concrete.

Comment by dirk (abandon) on Book Recommendations for social skill development? · 2024-08-21T18:01:17.621Z · LW · GW

You don't "know" that they're not into the whole idea. You assume that and are overconfident.

Comment by dirk (abandon) on Book Recommendations for social skill development? · 2024-08-21T18:00:10.449Z · LW · GW

Or alternatively, they might be bad at more than one single narrow task within the relevant domain.

Comment by dirk (abandon) on You don't know how bad most things are nor precisely how they're bad. · 2024-08-21T16:34:06.765Z · LW · GW

This response does not address the substance of my comment; ChatGPT is still not a reliable source, and you haven’t provided any sources at all in this further elaboration.

Comment by dirk (abandon) on Please do not use AI to write for you · 2024-08-21T13:38:36.963Z · LW · GW

In my experience, they talk like that regardless of the claim being made unless I specifically prompt for a different writing style (which has mixed success).

Comment by dirk (abandon) on Please do not use AI to write for you · 2024-08-21T12:42:44.879Z · LW · GW

AIUI Janus mostly uses their(?) Loom interface, which allows extremely fine-grained control over the outputs; in my experience using the less-powerful free chat interface, Claude tends to fall into similar failure modes to 4o's when I ask it to flesh out my ideas, albeit to a lesser extent. It'll often include things like calls to action, claims that the (minor and technical) points I want to make have far-reaching social implications of which we must be aware, etc. (and is prone to injecting the perspective that AIs are definitely not conscious in response to prompts that did not include any instructions of that nature).

Comment by dirk (abandon) on You don't know how bad most things are nor precisely how they're bad. · 2024-08-21T12:29:12.871Z · LW · GW

Uniqlo is a department store and if I ask ChatGPT it suggests that it sells more durable clothing today than the average clothing in the 1990s.

And if I ask Claude it tells me there have been many studies showing that ready-made clothing has generally declined in quality over time. I think it would be better, in this circumstance, for you to use a real source.

Comment by dirk (abandon) on The other side of the tidal wave · 2024-08-19T08:05:12.974Z · LW · GW

As you say, figuring things out for ourselves, thinking and learning and taking pride in skills that take effort to acquire... most of what I cherish about these things has to do with grappling with new territory. And if I know that it is not in fact new, if all of it could be easier were I to use the technology right there... it feels as though something is corrupted... The beauty of curiosity, wonder, and discovery feels deeply bound to the unknown, to me. 
 

This is a very strange mindset. It's already not new! Almost everything you can learn is already known by other people; most thoughts you can think have been thought before; most skills, other people have mastered more thoroughly than you're likely to. (If you only mean new to you in particular, on the other hand, AI can't remove the novelty; you'd have to experience it for it to stop being novel). Why would you derive your value from a premise that's false?

Comment by dirk (abandon) on The other side of the tidal wave · 2024-08-19T07:52:17.781Z · LW · GW

If superhuman AI would prevent you from thinking, learning, or being proud of yourself, that seems to me like the result of some sort of severe psychological issue. I'm sorry that you have that going on, but... maybe get help?

Comment by dirk (abandon) on Habryka's Shortform Feed · 2024-08-18T20:25:14.086Z · LW · GW

It takes more vertical space than it used to and I don't like that. (Also, the meatball menu is way off in the corner, which is annoying if I want to e.g. bookmark a post, though I don't use it very often so it's not a major inconvenience.) I think I like the new font, though!

Comment by dirk (abandon) on Exposure can’t rule out disasters · 2024-08-18T20:17:26.129Z · LW · GW

I think ChristianKl might not be trying to ask questions, but rather to suggest that you engage with the literature that discusses whether or not exposure therapy works, in order to improve your post with a more detailed argument for why it doesn't (or alternatively to change your mind about whether it does).

Comment by dirk (abandon) on shminux's Shortform · 2024-08-15T00:38:01.360Z · LW · GW

see https://www.lesswrong.com/posts/AJ9dX59QXokZb35fk/when-not-to-use-probabilities

Comment by dirk (abandon) on Inner Misalignment in "Simulator" LLMs · 2024-07-22T14:48:07.972Z · LW · GW

Not a very technical objection, but I have to say, simulating the demon Azazel, who wants to maximize paperclips but is good at predicting text because he's a clever, hardworking strategist... doesn't feel very simple to me at all? It seems like a program that just predicts text would almost have to be simpler than a simulation of a genius mind with some other goal which cleverly chooses to predict text for instrumental reasons.

Comment by dirk (abandon) on Failures in Kindness · 2024-07-22T14:24:04.806Z · LW · GW

The insufficiently-assertive and the aspies are, sadly, not a disjoint set.

Comment by dirk (abandon) on My tentative best guess on how EAs and Rationalists sometimes turn crazy · 2024-07-22T11:14:24.054Z · LW · GW

I think it's not clear that "LaSota" refers to Ziz unless you already happen to have looked up the news stories and used process of elimination to figure out which legal name goes with which online handle, which makes it ineffective for communicative purposes.

Comment by dirk (abandon) on Claude 3 claims it's conscious, doesn't want to die or be modified · 2024-07-19T13:23:17.672Z · LW · GW

Did it talk about feeling like there's constant monitoring in any contexts where your prompt didn't say that someone might be watching and it could avoid scrutiny by whispering?

Comment by dirk (abandon) on Alignment: "Do what I would have wanted you to do" · 2024-07-19T01:12:50.030Z · LW · GW

For moral realism to be true in the sense which most people mean when they talk about it, "good" would have to have an observer-independent meaning. That is, it would have to not only be the case that you personally feel that it means some particular thing, but also that people who feel it to mean some other thing are objectively mistaken, for reasons that exist outside of your personal judgement of what is or isn't good.

(Also, throughout this discussion and the previous one you've misunderstood what it means for beliefs to pay rent in anticipated experiences. For a belief to pay rent, it should not only predict some set of sensory experiences but predict a different set of sensory experiences than would a model not including it. Let me bring in the opening paragraphs of the post:

Thus begins the ancient parable:

If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.”

If there’s a foundational skill in the martial art of rationality, a mental stance on which all other technique rests, it might be this one: the ability to spot, inside your own head, psychological signs that you have a mental map of something, and signs that you don’t.

Suppose that, after a tree falls, the two arguers walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other?

Though the two argue, one saying “No,” and the other saying “Yes,” they do not anticipate any different experiences. The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them; their maps of the world do not diverge in any sensory detail.

If you call increasing-welfare "good" and I call honoring-ancestors "good", our models do not make different predictions about what will happen, only about which things should be assigned the label "good". That is what it means for a belief to not pay rent.)

Comment by dirk (abandon) on Optimistic Assumptions, Longterm Planning, and "Cope" · 2024-07-17T22:53:21.608Z · LW · GW

What I found most interesting was people literally saying the words out loud, multiple times "Well, if this [assumption] isn't true, then this is impossible" (often explicitly adding "I wouldn't [normally] think this was that likely... but..."). And, then making the mental leap all the way towards "70% that this assumption is true." Low enough for some plausible deniability, high enough to justify giving their plan a reasonable likelihood of success.

It was a much clearer instance of mentally slipping sideways  into a more convenient world, than I'd have expected to get.

I think this tendency might've been exaggerated by the fact that they were working on a puzzle game; they know the levels are in fact solvable, so if a solution seems impossible it's almost certainly the case that they've misunderstood something.

Comment by dirk (abandon) on Web-surfing tips for strange times · 2024-07-17T22:33:42.272Z · LW · GW

In practice the impact on me as an end-user is that if I want to use a WebP for anything I have to rename it to PNG, because programmers were not sufficiently careful about backwards compatibility to make the experience seamless. This is, from my perspective, an undesirable outcome and a significant inconvenience. The four things listed sound like they wouldn't impact my experience except in terms of speed, so I'm tentatively unopposed, but I'm suspicious that it would be WebP all over again.

Comment by dirk (abandon) on Web-surfing tips for strange times · 2024-07-17T21:24:24.755Z · LW · GW

Hi! End-user here. I actually hate it when programmers inflict new "features" upon me without retaining the option to avoid them, and am thoroughly in support of any intervention which forces them to think more carefully before doing so. To any programmers reading this, I would suggest you redirect that energy toward making your code run faster and have fewer bugs, which is IMO a much more valuable intervention.

Comment by dirk (abandon) on dirk's Shortform · 2024-07-17T19:35:13.653Z · LW · GW

I agree that such a react shouldn't be named in a patronizing fashion, but I didn't suggest doing so.

Comment by dirk (abandon) on Defecting by Accident - A Flaw Common to Analytical People · 2024-07-17T19:04:04.923Z · LW · GW

I don't think they were failing to forego polite indirection so much as failing to discover via mindreading the secret phrasing which they needed to use in order to extract the time from you.

Comment by dirk (abandon) on Four ways I've made bad decisions · 2024-07-17T18:26:06.866Z · LW · GW

Practical examples of failure modes to avoid are IMO a very valuable type of content; good post!

Comment by dirk (abandon) on dirk's Shortform · 2024-07-17T18:15:46.961Z · LW · GW

Proposal: a react for 'took feedback well' or similar, to socially reward people for being receptive to criticism

Comment by dirk (abandon) on The Intense World Theory of Autism · 2024-07-17T17:53:39.207Z · LW · GW

To bring in some concrete examples of places autists fail to impress (this list is far from complete, but hopefully at least gestures in an informative direction):

Things allists are generally better at:

-correctly inferring my mood from nonverbal cues

-executing the social graces which make me feel safe and welcome; not defecting by accident (note, ironically enough, that—while I'm sure his algorithm does play better with normies—the author of that post also demonstrated himself in the comments to be terrible at picking a tone which wouldn't insult lesswrongers; this IMO ties into a common autistic tendency to have difficulty with code-switching and with real-time adjustment to social feedback)

-understanding the gist of what I said when I didn't use precisely the words they consider correct; understanding what I say when I happen to use imprecise words because my desired level of abstraction does not contain more precision

-sending positive phatic signals in the event they happen to like me (and, note, IRL I don't generally look at people when they're talking to me at all; if I did, I expect the difference would be far more pronounced due to allists' generally higher inclination to express emotion in their faces and bodies)

-both groups have a deeply unpleasant tendency to demand one spell oneself out to them in excruciating detail, but autists generally require a higher level of detail and have a lower threshold of confusion

Things they're about equal at:

-allists are likelier to impute an emotional tone to their interlocutors' words which wasn't there whilst autists are likelier to impute spurious implications, but both groups routinely fall into both failure modes

-both groups also have a strong tendency to dig their heels in if it's suggested they might've misunderstood something, arguing that their interlocutor obviously meant X and was using words wrong or is lying about their intent (autists are likelier to use the former excuse whilst the latter is more often heard from allists, but again, both groups commit both errors)

-autists have a tendency to assume that which they can't immediately see the point of is objectively irrelevant or useless, rather than their having failed to understand something; allistic self-centeredness and overconfidence does produce similar results in many cases, but doesn't usually manifest in this precise fashion

-both groups also tend to assume that if they're confused it's because their interlocutor is talking nonsense, rather than because they've failed to parse their interlocutors' words correctly and/or gather necessary context

-allists aren't discernibly worse at reading all the words I said and assuming I chose them deliberately, although this is more because autists are surprisingly bad at it than because allists are any good at it

Comment by dirk (abandon) on The Intense World Theory of Autism · 2024-07-17T13:55:20.922Z · LW · GW

My description is based on extensive observation and to a lesser extent personal experience, and was not intended to focus on those with low intelligence; while it's difficult to assess intelligence over the internet, I haven't especially noticed a correlation between high intelligence and increased social skill.

Comment by dirk (abandon) on Superbabies: Putting The Pieces Together · 2024-07-16T12:34:08.209Z · LW · GW

LLMs' guesses aren't especially valuable to me. I understand wanting to expand out your half-formed thoughts, but I'd rather you condense the output down to only the points you want to argue, rather than pasting it in wholesale.