
Comment by localdeity on The AI apocalypse myth. · 2023-09-08T18:47:22.316Z · LW · GW

AIs have a symbiotic relationship with humans. If AIs were to exterminate all humans they would also simultaneously be committing mass suicide.

Today that's probably true, but if the capabilities of AI-controllable systems keep increasing, eventually they'll reach a point where they could maintain and extend themselves and the mining, manufacturing, and electrical infrastructures supporting them.  At that point, it would not be mass suicide, and might be (probably will eventually be) an efficiency improvement.

Humans are in symbiotic relationships with plants and animals. You can imagine what would happen if a group of humans decided it would be really interesting to get rid of all vegetation and animals -- that story wouldn't end well for those thrill seekers. Instead, we grow plants and animals and make sure they are in abundance.

People are working on lab-grown meat and other ways to substitute for the meat one currently gets from farming livestock.  If they succeed in making something that's greater than or equal to meat on all dimensions and also cheaper, then it seems likely that nearly all people will switch to the new alternative, and get rid of nearly all of that livestock.  If one likewise develops superior replacements for milk and everything else we get from animals... Then, if someone permanently wiped out all remaining animals, some people would be unhappy for sentimental reasons, and there's maybe some research we'd miss out on, but by no means would it be catastrophic.

Some portion of the above has already happened with horses.  When cars became the superior option in terms of performance and economics, the horse population declined massively.

Plants seem to have less inefficiency than animals, but it still seems plausible that we'll replace them with something superior in the future.  Already, solar panels are better than photosynthesis at taking energy from the sun—to the point where it's more efficient (not counting raw material cost) to have solar panels absorb sunlight which powers lamps that shine certain frequencies of light on plants, than to let that sunlight shine on plants directly.  And we're changing the plants themselves via selective breeding and, these days, genetic engineering.  I suspect that, at some point, we'll replace, say, corn with something that no longer resembles corn—possibly by extensive editing of corn itself, possibly with a completely new designer fungus or something.

Comment by localdeity on adamzerner's Shortform · 2023-08-30T05:38:01.195Z · LW · GW

The first explanation that comes to mind is that people usually go through school, wherein they spend all day with people the same age as them (plus adults, who generally don't socialize with the kids), and this continues through any education they do.  Then, at the very least, this means their starting social group is heavily seeded with people their age, and e.g. if friend A introduces friend B to friend C, the skew will propagate even to those one didn't meet directly from school.

Post-school, you tend to encounter more of a mix of ages, in workplaces, activity groups, meetups, etc.  Then your social group might de-skew over time.  But it would probably take a long time to completely de-skew, and age 30 is not especially long after school, especially for those who went to grad school.

There might also be effects where people your age are more likely to be similar in terms of inclination and capability to engage in various activities.  Physical condition, monetary resources, having a committed full-time job, whether one has a spouse and children—all can make it easier or harder to do things like world-traveling and sports.

Comment by localdeity on Assume Bad Faith · 2023-08-28T21:08:21.345Z · LW · GW

I don't know that that statement is false. I just have no knowledge at all about the state of the bar tonight.

The technical name for a statement made with no concern for its truth or falsehood is bullshit.

Comment by localdeity on Digital brains beat biological ones because diffusion is too slow · 2023-08-27T08:38:35.040Z · LW · GW

I'm counting the time it takes to (a) develop the 250 IQ humans [15-50 years], (b) have them grow to adulthood and become world-class experts in their fields [25-40 years], (c) do their investigation and design in mice [10-25 years], and (d) figure out how to incorporate it into humans nonfatally [5-15 years].

Then you'd either grow new humans with the super-neurons, or figure out how to change the neurons of existing adults.  The former is usually easier with genetics, but I don't think you could dial the power up to maximum in one generation without drastically changing how mental development goes in childhood, with a high chance of causing most children to develop severe psychological problems.  The 250 IQ researchers would be good at addressing this, of course, perhaps even at evaluating the early signs of those problems (to allow faster iteration); but I think they'd still have to spend 10-50 years iterating with human children before fixing the crippling bugs.

So I think it might be faster to solve the harder problem of replacing an adult's neurons with backwards-compatible, adjustable super-neurons—ones that can interface with the old neurons but also use the new method to connect to each other, initially working at the same speed, so that you can dial them up progressively and learn to fix the problems as they come up.  Harder to set up—maybe 5-10 extra years—but once you have it, I'd say 5-15 years before you've successfully dialed people up to "maximum".

Comment by localdeity on Digital brains beat biological ones because diffusion is too slow · 2023-08-26T19:22:47.919Z · LW · GW

So I think in the long run, the only way biological brains win is if we simply do not build AGI.

Depends on how long you're talking about.  It seems plausible to me that, if we got a bunch of 250 IQ humans, then they could in fact do a major redesign of neurons.  However, I would expect all this to take at least 100 years (if not aided by superintelligent AI), which is longer than most AI timelines I've seen (unless we bring AI development to a snail's pace or a complete stop).

Comment by localdeity on Walk while you talk: don't balk at "no chalk" · 2023-08-26T00:05:22.830Z · LW · GW

There is also some evidence of general-purpose cognitive benefits to walking.  Reposting a comment below:

There have been studies on the subject, having people walk or not (and walk in varying conditions) and measuring their performance on some intellectual or creative task, and concluding that (a) walking does help and (b) the type of walk probably matters.  First citation I found:

Four experiments demonstrate that walking boosts creative ideation in real time and shortly after. In Experiment 1, while seated and then when walking on a treadmill, adults completed Guilford’s alternate uses (GAU) test of creative divergent thinking and the compound remote associates (CRA) test of convergent thinking. Walking increased 81% of participants’ creativity on the GAU, but only increased 23% of participants’ scores for the CRA. In Experiment 2, participants completed the GAU when seated and then walking, when walking and then seated, or when seated twice. Again, walking led to higher GAU scores. Moreover, when seated after walking, participants exhibited a residual creative boost. Experiment 3 generalized the prior effects to outdoor walking. Experiment 4 tested the effect of walking on creative analogy generation. Participants sat inside, walked on a treadmill inside, walked outside, or were rolled outside in a wheelchair. Walking outside produced the most novel and highest quality analogies. The effects of outdoor stimulation and walking were separable. Walking opens up the free flow of ideas, and it is a simple and robust solution to the goals of increasing creativity and increasing physical activity.

Comment by localdeity on If we had known the atmosphere would ignite · 2023-08-18T14:37:08.380Z · LW · GW

Somewhat related scenario: There were concerns about the Large Hadron Collider before it was turned on.  (And, I vaguely remember reading, to a lesser extent about a prior supercollider.)  Things like "Is this going to create a mini black hole, a strangelet, or some other thing that might swallow the earth?".  The strongest counterargument is generally "Cosmic rays with higher energies than this have been hitting the earth for billions of years, so if that was a thing that could happen, it would have already happened."

One potential counter-counterargument, for some experiments, might have been "But cosmic rays arrive at high speed, so their products would leave Earth at high speed and dissipate in space, whereas the result of colliding particles with equal and opposite momenta would be stationary relative to the earth and would stick around."  I can imagine a few ways that might be wrong; don't know enough to say which are relevant.

LHC has a webpage on it:

Comment by localdeity on Summary of and Thoughts on the Hotz/Yudkowsky Debate · 2023-08-16T17:43:07.416Z · LW · GW

Oh man.  My brain generates "Was this fixed with a literal s/Holtz/Hotz/ sed command, as opposed to s/Holtz/Hotz/g ?"  Because it seems that, on lines where the name occurs twice or more, the first instance is correctly spelled and the later instances are (edit: sometimes) not.
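For anyone unfamiliar with the distinction: without the trailing `g` flag, `sed`'s substitute command replaces only the first match on each line, which would produce exactly the observed pattern:

```shell
# Without /g, only the FIRST occurrence on each line is replaced:
printf 'Holtz debated Holtz today\n' | sed 's/Holtz/Hotz/'
# -> Hotz debated Holtz today

# With /g, every occurrence on the line is replaced:
printf 'Holtz debated Holtz today\n' | sed 's/Holtz/Hotz/g'
# -> Hotz debated Hotz today
```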

Comment by localdeity on Summary of and Thoughts on the Hotz/Yudkowsky Debate · 2023-08-16T16:59:56.269Z · LW · GW

Consistent typo: Holtz should be Hotz.

Comment by localdeity on Monthly Roundup #9: August 2023 · 2023-08-08T06:39:52.359Z · LW · GW

The reason I prefer a white car is that it absorbs less heat via sunlight.  A source says "Studies have shown the difference in temperature between a white car and a black car left in the sun can be as much as 5-6 degrees after just one hour."

Comment by localdeity on Stomach Ulcers and Dental Cavities · 2023-08-06T01:50:02.254Z · LW · GW

For those wondering about the safety of introducing new bacteria that "outcompete" existing bacteria, Aaron Silverbook's proposal has a safety review FAQ that, indeed, answered a bunch of my questions, starting with an anonymous question in the comments sidebar:

Q: Why is the GMO bacteria guaranteed to outcompete the native bacteria? Does it reproduce more quickly or something?

A: JH1140 secretes Mutacin 1140, a natural bacteriocin which kills native S mutans

Which is a pretty reassuring answer compared to the other alternatives one might imagine, and lends itself to generalization.

Comment by localdeity on Elizabeth's Shortform · 2023-08-04T02:55:16.499Z · LW · GW

I asked ChatGPT:

Someone wrote the following message:


What did the author likely do, which "drove someone to shout ..."?


Based on the context provided in the message, it seems that the author likely engaged in persistent efforts to fix a coordination or maintenance problem related to reading and writing. They might have been advocating for or trying to implement a system or activity (e.g., a reading or writing initiative) that they believed most people would prefer (X) over the current situation (Y).

However, despite some people endorsing the idea of reading and writing (X), they didn't have enough genuine interest or willingness to invest in the transition costs or maintenance efforts required for the change. This lack of true commitment led to the frustration of the person shouting at the author, expressing that nobody else liked reading or writing as much as the author did. Essentially, the author was trying to solve a coordination problem (or maintenance problem) that didn't have enough genuine support from others, and this led to tension and frustration.

Which is essentially what seems reasonable to guess, though it's not very specific.  My first guess as to specifics is "Elizabeth tried to organize a weekly gathering where people would pick a paper, read it, write up their thoughts, and discuss it at the meeting, and couldn't get people to commit the time necessary, and ended up questioning someone along the lines of 'Well, several people said it was good to practice these skills, and that the summaries are valuable public services, so why aren't they ...?', leading to the incident at the end."  Other variations that came to mind included hiring a writing teacher for a group, or some kind of large-scale book buying, though neither of those involves both reading and writing.

Comment by localdeity on Is Light Drinking Protective? · 2023-07-31T18:43:54.330Z · LW · GW

n=1: "because I highly value my brain as it is, and alcohol (and other drugs) seem to mess with the brain in ways that are presumably bad" (this opinion came from middle school), and, later, "because I have two uncles who've struggled with alcoholism".  Also, some people have assumed I'm a Mormon (I'm an atheist).

Comment by localdeity on Why You Should Never Update Your Beliefs · 2023-07-29T04:37:12.959Z · LW · GW

Corollary: If you see death coming, or e.g. you have a near miss and know it was only by chance that you survived, then now’s a good time to change your beliefs. Which, actually, seems to be a thing people do. (Though there are other reasons for that.)

Comment by localdeity on Underwater Torture Chambers: The Horror Of Fish Farming · 2023-07-26T09:28:24.756Z · LW · GW

You quote 3-8 billion per day, but the other numbers you mention are annual figures.  3-8 billion per day would be ~1-3 trillion per year.  Seems your first reaction may have been more accurate.
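Converting the per-day figure to a per-year figure makes the mismatch explicit (a quick sanity check, in billions):

```shell
# 3-8 billion per day, times 365 days, in billions per year:
echo $(( 3 * 365 ))   # low end: 1095 billion, i.e. ~1.1 trillion per year
echo $(( 8 * 365 ))   # high end: 2920 billion, i.e. ~2.9 trillion per year
```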

Comment by localdeity on Clever arguers give weak evidence, not zero · 2023-07-18T17:25:51.437Z · LW · GW

Given that someone is being a clever arguer, the evidence in their argument should be weighed relative to what you might have expected they could come up with.

If a "clever arguer" says "Ten witnesses, named [...], all report seeing Bob murder Joe, and there are also multiple security cameras that caught it", then that's pretty damn strong evidence (assuming one follows up with the witnesses and gets the footage).

If a "clever arguer" says "Bob once got into a fight in middle school, so we see he has violent tendencies", and that's the best he managed to come up with, then it probably makes sense to update away from the conclusion that Bob murdered Joe.

Zvi has written about things of this ilk, vaguely connected to "bounded distrust".  I'll see if I can find a link... Ok, this is a decent example of the general principle, although the counterparty isn't a "clever arguer":

Then we need to consider what we saw relative to what we expected to see. In general, no news is good news. If ‘nothing happens’ regarding Omicron, that continuously makes us less worried, whereas most news will make us more worried. Getting a constant string of bad news is expected, but how much of it did we get, how fast and how bad?


The person linking to this ["Gauteng hospitalizations" going up rapidly] thought it was bad news, but given the rate at which cases are increasing, it looks to me like good news. Not easy to interpret, but the hospitalization rate per infection is what matters here. Note also that positive test rate is now >20%, which means a higher percentage of cases are being missed than before.

Comment by localdeity on A Hill of Validity in Defense of Meaning · 2023-07-17T07:21:37.824Z · LW · GW

I'll address this first:

More abstractly, what I've generally noticed is:

  • These sorts of people are not very interested in actually developing substantive theory or testing their claims in strong ways which might disprove them.
  • Instead they are mainly interested in providing a counternarrative to progressive theories.
  • They often use superficial or invalid psychometric methods.
  • They often make insinuations that they have some deep theory or deep studies, but really actually don't.

These things are bad, but, apart from point 2, I would ask: how do they compare to the average quality of social science research?  Do you have high standards, or do you just have high standards for one group?  I think most of us spend at least some time in environments where the incentive gradients point towards the latter.  Beware isolated demands for rigor.

Research quality being what it is, I would recommend against giving absolute trust to anyone, even if they appear to have earned it.  If there's a result you really care about, it's good to pick at least one study and dig into exactly what they did, and to see if there are other replications; and the prior probability of "fraud" probably shouldn't go below 1%.

As for point 2—if you were a researcher with heretical opinions, determined to publish research on at least some of them, what would you do?  It seems like a reasonable strategy is to pick something heretical that you're confident you can defend, and do a rock-solid study on it, and brace for impact.  Is it still the case that disproving the blank-slate hypothesis would constitute progress in some academic subfields?  If so, then expect people to continue trying it.

Now, digging into the examples:

Here's a classical example; an IQ researcher who is so focused on providing a counternarrative to motivational theories that he uses methods which are heavily downwards biased to "prove" that IQ test scores don't depend on effort.

The study says there was "a meta-analysis concluding that small monetary incentives could improve test scores by 0.64 SDs" (roughly 10 IQ points); looks to be Duckworth et al. 2011.  The guy says it seemed sketchy—the studies had small N, weird conditions, and/or fraudulent researchers.  Looking at table S1 from Duckworth, indeed, N is <100 on most of the studies; "Bruening and Zella (1978)" sticks out as having a large effect size and a large N, and, when I google for more info about that, I find that Bruening was convicted by an NIMH panel of scientific fraud.  Checks out so far.
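As a quick check of the SD-to-IQ conversion (IQ is conventionally scaled so that 1 SD = 15 points):

```shell
# Convert an effect size in SDs to IQ points by multiplying by 15:
awk 'BEGIN { printf "0.64 SD = %.1f IQ points\n", 0.64 * 15 }'
# -> 0.64 SD = 9.6 IQ points
```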

The guy ran a series of studies, the last of which offered incentives of nil, £2, and £5-£10 for test performance, with the smallest subgroup being N=150, taken from the adult population via "Prolific Academic".  He found that £2 and £5-£10 had similar effects, those being apparently 0.2 SD and 0.15 SD respectively, which would be 3 IQ points or a little less.  (Were the "small monetary incentives" from Duckworth of that size?  The Duckworth table shows most of the studies as being in the $1-$9 or <$1 range; looks like yes.)  So, at least as a "We suspected these results were bogus, tried to reproduce them, and got a much smaller effect size", this seems all in order.

Now, you say:

IQ test effort correlates with IQ scores, and they investigate whether it is causal using incentives. However, as far as I can tell, their data analysis is flawed, and when performed correctly the conclusion reverses.

[...] Incentives increase effort, but they only have marginal effects on performance. Does this show that effort doesn't matter?  No, because incentives also turn out to only have marginal effects on effort! Surely if you only improve effort a bit, you wouldn't expect to have much influence on scores. We can solve this by a technique called instrumental variables. Basically, we divide the effect of incentives on scores by the effect of incentives on effort.

Your analysis essentially proposes that, if there were some method of increasing effort by 3-4x as much as he managed to increase it, then maybe you could in fact increase IQ scores by 10 points.  This assumes that the effort-to-performance causation would stay constant as you step outside the tested range.  That's possible, but... I'm quite confident there's a limit to how much "effort" can increase your results on a timed multiple-choice test, that you'll hit diminishing marginal returns at some point (probably even negative marginal returns, if the incentive is strong enough to make many test-takers nervous), and extrapolating 3-4x outside the achieved effect seems dubious.  (I also note that the 1x effect here means increasing your self-evaluated effort from 4.13 to 4.28 on a scale that goes up to 5, so a 4x effect would mean going to 4.73, approaching the limits of the scale itself.)
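The instrumental-variables division described in the quoted passage is a Wald estimator, and can be sketched as follows (the effect sizes below are hypothetical, chosen only to illustrate the calculation; they are not the paper's actual estimates):

```shell
# Wald IV estimator: effect of incentives on scores,
# divided by effect of incentives on effort.
# Both numbers here are HYPOTHETICAL, for illustration only.
awk 'BEGIN {
  ds = 0.15   # assumed: incentive effect on test scores, in SD
  de = 0.38   # assumed: incentive effect on self-reported effort, in SD
  printf "Wald IV estimate: %.2f\n", ds / de
}'
# -> Wald IV estimate: 0.39
```

The extrapolation worry in the text is exactly that this ratio is only valid near the observed range of effort; dividing two small effects implicitly assumes the effort-to-performance slope stays constant far outside it.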

You say, doing your analysis:

For study 2, I get an effect of 0.54. For study 3, I get an effect of 0.37. For study 4, I get an effect of 0.39. The numbers are noisy for various reasons, but this all seems to be of a similar order of magnitude to the correlation in the general population, so this suggests the correlation between IQ and test effort is due to a causal effect of test effort increasing IQ scores.

That is interesting... Though the correlation between test effort and test performance in the studies is given as 0.27 and 0.29 in different samples, so, noise notwithstanding, your effects are consistently larger by a decent margin.  That would suggest that something more than the simple causation is going on.

The authors say:

6.1. Correlation and direction of causality

Across all three samples and cognitive ability tests (sentence verification, vocabulary, visual-spatial reasoning), the magnitude of the association between effort and test performance was approximately 0.30, suggesting that higher levels of motivation are associated [with] better levels of test performance. Our results are in close accord with existing literature [...]

As is well-known, the observation of a correlation is a necessary but not sufficient condition for causality. The failure to observe concomitant increases in test effort and test performance, when test effort is manipulated, suggests the absence of a causal effect between test motivation and test performance.

That last sentence is odd, since there was in fact an increase in both test effort and test performance.  Perhaps they're equivocating between "low effect" and "no effect"?  (Which is partly defensible in that the effect was not statistically significant in most of the studies they ran.  I'd still count it as a mark against them.)  The authors continue:

Consequently, the positive linear association between effort and performance may be considered either spurious or the direction of causation reversed – flowing from ability to motivation. Several investigations have shown that the correlation between test-taking anxiety and test performance likely flows from ability to test-anxiety, not the other way around (Sommer & Arendasy, 2015; Sommer, Arendasy, Punter, Feldhammer-Kahr, & Rieder, 2019). Thus, if the direction of causation flows from ability to test motivation, it would help explain why effort is so difficult to shift via incentive manipulation.

6.2. Limitations & future research

We acknowledge that the evidence for the causal direction between effort and ability remains equivocal, as our evidence rests upon the absence of evidence (absence of experimental incentive effect). Ideally, positive evidence would be provided. Indirect positive evidence may be obtained by conducting an experiment, whereby half the subjects are given a relatively easy version of the paper folding task (10 easiest items) and the other half are given a relatively more difficult version (10 most difficult items). It is hypothesized that those given the relatively easier version of the paper folding task would then, on average, self-report greater levels of test-taking effort. Partial support for such a hypothesis is apparent in Table 1 of this investigation. Specifically, it can be seen that there is a perfect correspondence between the difficulty of the test (synonyms mean 73.4% correct; sentence verification mean 53.8% correct; paper folding mean 43.3%) and the mean level of reported effort (synonyms mean effort 4.42; sentence verification mean 4.11; paper folding mean 3.83).

That is a pretty interesting piece of evidence for the "ability leads to self-reported effort" theory.

Overall... The study seems to be a good one: doing a large replication study on prior claims.  The presentation of it... The author on Twitter said "testing over N= 4,000 people", which is maybe what you get if you add up the N from all the different studies, but each study is considerably smaller; I found that somewhat misleading, but suspect that's a common thing when authors report multiple studies at once.  On Twitter he says "We conclude that effort has unequivocally small effects", which omits caveats like "our results are accurate to the degree that alternative incentives do not yield appreciably larger effects" which are in the paper; this also seems like par for the course for science journalism (not to mention Twitter discourse).  And they seem to have equivocated in places between "low effect" and "no effect".  (Which I suspect is also not rare, unfortunately.)

Now.  You presented this as:

Here's a classical example; an IQ researcher who is so focused on providing a counternarrative to motivational theories that he uses methods which are heavily downwards biased to "prove" that IQ test scores don't depend on effort.

The "focused on providing a counternarrative" part is plausibly correct.  However, the "uses methods which are heavily downwards biased to "prove" [...]" is not.  The "downwards biased methods" are "offering a monetary incentive of £2-£10, which turned out to be insufficient to change effort much".  The authors were doing a replication of Duckworth, in which most of the cited studies had a monetary incentive of <$10—so that part is correctly matched—and they used high enough N that Duckworth's claimed effect size should have shown up easily.  They also preregistered the first of their incentive-based studies (with the £2 incentive), and the later ones were the same but with increased sample size, then increased incentive.  In other words, they did exactly what they should have done in a replication.  To claim that they chose downwards-biased methods for the purpose of proving their point seems quite unfair; those methods were chosen by Duckworth.

This seems to be a data point of the form "your priors led you to assume bad faith (without having looked deeply enough to discover this was unjustified), which then led you to take this as a case to justify those priors for future cases".  (We will see more of these later.)  Clearly this could be a self-reinforcing loop that, over time, could lead one's priors very far astray.  I would hope anyone who posts here would recognize the danger of such a trap.

Second example.  "Simon Baron-Cohen playing Motte-Bailey with the "extreme male brain" theory of autism."  Let's see... It seems uncontroversial (among the participants in this discussion) that there are dimensions on which male and female brains differ (on average), and on which autists are (on average) skewed towards the male side, and that this includes the empathizing and systematizing dimensions.

You quote Baron-Cohen as saying "According to the ‘extreme male brain’ theory of autism, people with autism or AS should always fall in the [extreme systematizing range]", and say that this is obviously false, since there exist autists who are not extreme systematizers—citing a later study coauthored by Baron-Cohen himself, which puts only ~10% of autists into the "Extreme Type S" category.  You say he's engaging in a motte-and-bailey.

After some reading, this looks to me like a case of "All models are wrong, but some are useful."  The same study says "Finally, we demonstrate that D-scores (difference between EQ and SQ) account for 19 times more of the variance in autistic traits (43%) than do other demographic variables including sex.  Our results provide robust evidence in support of both the E-S and EMB theories."  So, clearly he's aware that 57% of the variance is not explained by empathizing-systematizing.  I think it would be reasonable to cast him as saying "We know this theory is not exactly correct, but it makes some correct predictions."  Indeed, he counts the predictions made by these theories:

An extension of the E-S theory is the Extreme Male Brain (EMB) theory (11). This proposes that, with regard to empathy and systemizing, autistic individuals are on average shifted toward a more “masculine” brain type (difficulties in empathy and at least average aptitude in systemizing) (11). This may explain why between two to three times more males than females are diagnosed as autistic (12, 13). The EMB makes four further predictions: (vii) that more autistic than typical people will have an Extreme Type S brain; (viii) that autistic traits are better predicted by D-score than by sex; (ix) that males on average will have a higher number of autistic traits than will females; and (x) that those working in science, technology, engineering, and math (STEM) will have a higher number of autistic traits than those working in non-STEM occupations.

Note also that he states the definition of EMB theory as saying "autistic individuals are on average shifted toward a more “masculine” brain type".  You say "Sometimes EMB proponents say that this isn’t really what the EMB theory says. Instead, they make up some weaker predictions, that the theory merely asserts differences “on average”."  This is Baron-Cohen himself defining it that way.

Would it be better if he used a word other than "theory"?  "Model"?  You somewhat facetiously propose "If the EMB theory had instead been named the “sometimes autistic people are kinda nerdy” theory, then it would be a lot more justified by the evidence".  How about, say, the theory that "There are processes that masculinize the brain in males; and some of those processes going into overdrive is a thing that causes autism"?  (Which was part of the original paper: "What causes this shift remains unclear, but candidate factors include both genetic differences and prenatal testosterone.")  That is, in fact, approximately what I found when I googled for people talking about the EMB theory—and note that the article is critical of the theory:

This hypothesis, called the ‘extreme male brain’ theory, postulates that males are at higher risk for autism as a result of in-utero exposure to steroid hormones called androgens. This exposure, the theory goes, accentuates the male-like tendency to recognize patterns in the world (systemizing behavior) and diminishes the female-like capacity to perceive social cues (socializing behavior). Put simply, boys are already part way along the spectrum, and if they are exposed to excessive androgens in the womb, these hormones can push them into the diagnostic range.

That is the sense in which an autistic brain is, hypothetically, an "extreme male brain".  I guess "extremely masculinized brain" would be a bit more descriptive to someone who doesn't know the context.

The problem with a motte-and-bailey is that someone gets to go around advancing an extreme position, and then, when challenged by someone who would disprove it, he avoids the consequences by claiming he never said that, he only meant the mundane position.  According to you, the bailey is "they want to talk big about how empathizing-systematizing is the explanation for autism".  According to the paper, it was 43% of the explanation for autism, and the biggest individual factor?  Seems pretty good.

Has Baron-Cohen gone around convincing people that empathizing-systematizing is the only factor involved in autism?  I suspect that he doesn't believe it, he didn't mean to claim it, almost no one (except you) understood him as claiming it, and pretty much no one believes it.  Maybe he picked a suboptimal name, which lent itself to misinterpretation.  Do you have examples of Baron-Cohen making claims of that kind, which aren't explainable as him taking the "This theory is not exactly correct, but it makes useful predictions" approach?

The context here is explaining why you've "become horrified at what [you] once trusted", which you now call "supposed science".  I'm... underwhelmed by what I've seen.

Back to Damore...

I think Damore's point, in bringing it up, was that the stress in (some portion of) tech jobs may be a reason there are fewer women than men in tech.

You may or may not be right that this is what he meant.

...I thought it was overkill to cite four quotes on that issue, but apparently not.  Such priors!

(I think it's a completely wrong position, because the sex difference in neuroticism is much smaller (by something like 2x) than the sex difference in tech interests and tech abilities, and presumably the selection effect for neuroticism on career field is also much smaller than that of interests. So I'm not sure your reading on it is particularly more charitable, only uncharitable in a different direction; assuming a mistake rather than a conflict.)

It seems you're saying Damore mentions A but not B, and B is bigger, therefore Damore's "comprehensive" writeup is not so, and this omission is possibly ill-motivated.  But, erm, Damore does mention B, twice:

  • [Women, on average have more] Openness directed towards feelings and aesthetics rather than ideas. Women generally also have a stronger interest in people rather than things, relative to men (also interpreted as empathizing vs. systemizing).
    ○ These two differences in part explain why women relatively prefer jobs in social or artistic areas. More men may like coding because it requires systemizing and even within SWEs, comparatively more women work on front end, which deals with both people and aesthetics.


  • Women on average show a higher interest in people and men in things
    ○ We can make software engineering more people-oriented with pair programming and more collaboration. Unfortunately, there may be limits to how people-oriented certain roles at Google can be and we shouldn't deceive ourselves or students into thinking otherwise (some of our programs to get female students into coding might be doing this).

This suggests that casting aspersions on Damore's motives is not gated by "Maybe I should double-check what he said to see if this is unfair".

I think the anxiety/stress thing is more relevant for top executive roles than for engineer roles; a population-level difference is more important at the extremes.  Damore does talk about leadership specifically:

We always ask why we don't see women in top leadership positions, but we never ask why we see so many men in these jobs. These positions often require long, stressful hours that may not be worth it if you want a balanced and fulfilling life.


(Incidentally, imagine if Damore had claimed the opposite—"Women are less prone to anxiety and can handle stress more easily."  Wouldn't that also lead to accusations that Damore was saying we can ignore women's problems?)

The correct thing to claim is "We should investigate what people are anxious/stressed about". Jumping to conclusions that people's states are simply a reflection of their innate traits is the problem.

Well, he lists one source of stress above, and he does recommend to "Make tech and leadership less stressful".

I don't think this is at the heart of Zack's adventure? Zack's issues were mainly about leading rationalists jumping in to rationalize things in the name of avoiding conflicts.

And why would these rationalists care so much about avoiding these conflicts, to the point of compromising the intellectual integrity that seems so dear to them?  Fear that they'd face the kind of hostility and career-ruining accusations directed at Damore, and things downstream of fears like that, seems like a top candidate explanation.

Anyway, making weighty claims about people is core to what differential psychology is about.

Um.  Accusations are things you make about individuals, occasionally organizations.  I hope that the majority of differential psychology papers don't consist of "Bob Jones has done XYZ bad thing".

It's possible that some of my claims about Damore are false, in which case we should discuss that and fix the mistakes. However, the position that one should just keep quiet about claims about people simply because they are weighty would also seem to imply that we should keep quiet about claims about trans people and masculinity/femininity, or race and IQ, or, to make the Damore letter more relevant, men/women and various traits related to performance in tech.

You are equivocating between reckless claims of misconduct / malice by an individual, and heavily cited claims about population-level averages that are meant to inform company policy.  Are you seriously stating an ethical principle that anyone who makes the latter should expect to face the former and it's justified?

Somewhat possible this is true. I think nerdy communities like LessWrong should do a better job at communicating the problems with various differential psychology findings and communicating how they are often made by conservatives to promote an agenda. If they did this, perhaps Damore would not have been in this situation.

I think Damore was aware that there are people who use population-level differences to justify discriminating against individuals, and that's why he took pains to disavow that.  As for "the problems with various differential psychology findings"—do you think that some substantial fraction, say at least 20%, of the findings he cited were false?

Comment by localdeity on A Hill of Validity in Defense of Meaning · 2023-07-16T03:55:08.952Z · LW · GW

The way I had imagined the situation is, someone working with the Googlegeist had noticed that a lot of women reported anxiety or whatever, and had decided they need to work with women to figure out what's going on here, to solve it. And then James Damore felt that this was one instance of people looking at a disparity and claiming injustice, and that since he finds it biologically inevitable that women would be anxious, this shouldn't be treated as indicative of an external problem, but instead should be medicalized and treated psychologically (or psychiatrically?). [italics added]

As a side note, I consider the italicized part a rather weighty accusation.  I think one should therefore be careful about making such an accusation.  I guess, in this case, you were just honestly reporting the contents of your brain on the matter, not necessarily making an accusation.

Still, I think this to some extent illustrates an epistemic environment where it's normal to throw around damaging accusations whose truth value is somewhere between "extremely uncharitable interpretation" and "objectively false".  Precisely the type that got Damore fired, in other words.  Do we have such an environment even among rationalists?  That is at the heart of Zack's adventure.

(Incidentally, imagine if Damore had claimed the opposite—"Women are less prone to anxiety and can handle stress more easily."  Wouldn't that also lead to accusations that Damore was saying we can ignore women's problems?)

Anyway, on to object level.  I think Damore's point, in bringing it up, was that the stress in (some portion of) tech jobs may be a reason there are fewer women than men in tech.  Reasons to think this:

  • The title of the super-section containing the "neuroticism" quote is "Possible non-bias causes of the gender gap in tech".
  • The super-section is preceded by "For the rest of this document, I’ll concentrate on the extreme stance that all differences in outcome are due to differential treatment [italics added] and the authoritarian element that’s required to actually discriminate to create equal representation."
  • The last sentence in the section ("Personality differences") is "We need to stop assuming that gender gaps imply sexism."
  • As already quoted, he says that the anxiety thing implies that "Mak[ing] tech and leadership less stressful" would be a "non-discriminatory way to reduce the gender gap".

If Damore had said "Here are some issues women reported; and we should discount these reports because women are extra-anxious", then your model would be well-founded.  I don't see him saying anything like that in the document, though.  In the whole document, Damore doesn't mention anything reported by women on Googlegeist, other than the anxiety thing.  (I would be surprised if he, being an engineer and not in HR or leadership, had access to the arbitrary text field submissions from the other employees; I would guess he saw aggregated results on numerical questions, plus any items leadership chose to share with everyone.)  Googlegeist itself is mentioned only two other times in the document; both times it's him suggesting something be done with future Googlegeist surveys.

He does mention another item as a (primarily) women's issue, although the source is a 2006 paper rather than Googlegeist.  Again, he does advocate doing something about it (with caveats):

Non-discriminatory ways to reduce the gender gap


  • Women on average look for more work-life balance while men have a higher drive for status on average
    ○ Unfortunately, as long as tech and leadership remain high status, lucrative careers, men may disproportionately want to be in them. Allowing and truly endorsing (as part of our culture) part time work though can keep more women in

Now, at the end, he says this:

Philosophically, I don't think we should do arbitrary social engineering of tech just to make it appealing to equal portions of both men and women. For each of these changes, we need principled reasons for why it helps Google; that is, we should be optimizing for Google—with Google's diversity being a component of that. For example, currently those willing to work extra hours or take extra stress will inevitably get ahead and if we try to change that too much, it may have disastrous consequences. Also, when considering the costs and benefits, we should keep in mind that Google's funding is finite so its allocation is more zero-sum than is generally acknowledged.

The most uncharitable reader could say "Aha, so he's laid the groundwork to not follow through with anything that actually helps women, keeping the status quo, and everything he's said before is just a trick."  If the reader comes in with that kind of implicit assumption about Damore's character, then they'll probably stick with it; all I can say is, evidence for such a belief does not come from the document.  (Incidentally, I've met Damore at a party; I read him as a well-meaning nerd, who thought that if he made a sufficiently comprehensive, careful, well-cited, and constructively oriented writeup, he could cut through the hostility and they'd work out some solutions that would make everyone happier.  The result is really tragic in that light.)

I think, to come up with your conclusion, you have to do a lot of reading into the text, and a lot of not reading the actual text.  Which, I think, was par for the course for most negative takes on Damore.  I am surprised and somewhat perturbed by your report that you originally supported Damore, and wonder what happened since then.  Perhaps memory faded and "osmosis" brought in others' takes?

Comment by localdeity on A Hill of Validity in Defense of Meaning · 2023-07-15T22:46:10.985Z · LW · GW

(Let's not forget James Damore's memo, who cited research on greater female neuroticism as a justification for ignoring women's issues with their workplace.)

I don't think that's true, and if anything it looks to be the opposite.  Original document; the relevant quotes about neuroticism and what to do about it seem to be:

Personality differences

  • Neuroticism (higher anxiety, lower stress tolerance).
    ○ This may contribute to the higher levels of anxiety women report on Googlegeist and to the lower number of women in high stress jobs.


Non-discriminatory ways to reduce the gender gap


  • Women on average are more prone to anxiety
    ○ Make tech and leadership less stressful.  Google already partly does this with its many stress reduction courses and benefits.
Comment by localdeity on AI #19: Hofstadter, Sutskever, Leike · 2023-07-06T14:01:54.896Z · LW · GW

I would certainly insist upon the following: If it is safe to

It is very important that we

Comment by localdeity on What in your opinion is the biggest open problem in AI alignment? · 2023-07-05T10:49:57.610Z · LW · GW

Formal verification for specific techniques may be possible, and is desirable.

Formal verification for an entire overall plan... Let's suppose we wanted a formal proof of some basic sanity checks of the plan.  For example: if the plan is followed, then, as of 2100, there will be at least 8 billion humans alive and at least as happy and free as they are today.  I mean, forget "happy" and "free"—how can you even define a "human" in formal mathematical language?  Are they defined as certain arrangements of subatomic particles?  Such a definition would presumably be unmanageably long.  And if "human" is not defined, then that leaves you open to ending up with, say, 8 billion dumb automatons programmed to repeat "I'm happy and free".

You might try relying on some preexisting process to decide if something is a human.  If it's a real-world process, like polling a trusted group of humans or sending a query to a certain IP address, this is vulnerable to manipulation of the real world (coercing the humans, hacking the server).  You might try giving it a neural net that's trained to recognize humans—the neural net can be expressed as a precise mathematical object—but then you're vulnerable to adversarial selection, and might end up with bizarre-looking inanimate objects that the net thinks are human.  (Plus there's the question of exactly how you take a real-world human and get something that's fed into the neural net.  If the input to the net is pixels, then how is the photo taken, and can that be manipulated?)

Keep one's eyes open for opportunities, I guess, but it seems likely that the scope of formal verification will be extremely limited.  I expect it would be most useful in computer security, where the conclusion, the "thing to be proven", is a statement about objects that have precise definitions.  Though they might well be too long and complex for human verification even then; there are nice illustrations of how serious misbehavior can be inserted into innocuous-looking, relatively short snippets of programs (i.e. mathematically precise statements).

Comment by localdeity on Through a panel, darkly: a case study in internet BS detection · 2023-07-02T21:43:32.395Z · LW · GW

All else equal, AIs are slightly more likely to generate true facts than false ones

AIs are much more likely to generate false facts than true ones.

I think there might be some English ambiguity here.  Suppose there are 81 names that the AI might come up with when writing the sentence "$NAME invented plastic", and maybe the correct name has a 20% chance of being picked, and each of the incorrect names has a 1% chance of being picked.  Then it's simultaneously true that:

  1. The correct name is 20x as likely to be picked as any individual incorrect name.
  2. It is much more likely that the name picked will be incorrect than that it will be correct.
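To make the ambiguity concrete, here's a minimal numerical sketch of the hypothetical 81-name distribution above (the numbers are the made-up ones from the example, not real model statistics):

```python
# Hypothetical distribution from the example: 81 candidate names, the
# correct one picked with probability 0.20, each of the 80 incorrect
# ones with probability 0.01.
p_correct = 0.20
p_each_wrong = 0.01
n_wrong = 80

# Sanity check: the probabilities sum to 1.
assert abs(p_correct + n_wrong * p_each_wrong - 1.0) < 1e-9

# Claim 1: the correct name is 20x as likely as any single wrong name.
print(round(p_correct / p_each_wrong))  # 20

# Claim 2: the name picked is nonetheless more likely wrong than right.
p_any_wrong = n_wrong * p_each_wrong
print(p_any_wrong > p_correct)          # True (0.8 vs 0.2)
```

Both claims hold simultaneously; the English phrase "more likely to generate true facts than false ones" doesn't distinguish between them.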
Comment by localdeity on I Think Eliezer Should Go on Glenn Beck · 2023-06-30T14:21:50.457Z · LW · GW

Hmmmm... can we get the "P(AI-related extinction) < 5%" position branded as libertarian?  Cement it as the position of a tiny minority.

Comment by localdeity on Model, Care, Execution · 2023-06-30T06:39:37.861Z · LW · GW

I really disagree with some of what seem to be the implicit premises of this post: mainly, that caring for someone includes proactively taking responsibility for their problems

No, their problems are theirs, and respecting this is the drama-and-conflict-minimizing strategy. There are other, better ways to care for others— but violating their sovereignty is not it

I think this is at least somewhat addressed by this section:

Note that none of the above is about whose fault it is that Choni overslept his interview—it’s his responsibility to make sure he wakes up in time, and you’re not to blame for his failure to do so. From the perspective of The Official Natural Law Code On Roommates’ Rights and Responsibilities, you have in no way violated a stricture or crossed a boundary (if anything, waking Choni is frowned upon by The Code).

But insofar as we (Ricki and Avital) aspire to live cooperatively and provide support for one another and for others, talking over our choices using MCE has been especially fruitful for us.

As I interpret it, the default is "leave me alone" (plus, I guess, anything you've explicitly agreed to, like a rotation of dishwashing duties), and any more intimate involvement with their lives is something you opt into.  Which seems all right prima facie.  As long as person A doesn't surprise people by unilaterally doing the "more intimate" thing, or unilaterally expecting others to do it.

Comment by localdeity on Contra Anton 🏴‍☠️ on Kolmogorov complexity and recursive self improvement · 2023-06-30T06:04:27.200Z · LW · GW

(Reaction to the first sentence: "Is this going to be an argument that would imply that humans can't improve their own intelligence?")

Yeah, his first wrong statement in the argument is "a more intelligent program p2 necessarily has more complexity than a less intelligent p1".  I would use an example along the lines of "p1 has a hundred data points about the path of a ball thrown over the surface of the Moon, and uses linear interpolation; p2 describes that path using a parabola defined by the initial position and velocity of the projectile and the gravitational pull at the surface of the Moon".  Or "rigid projectiles A and B will collide in a vacuum, and the task is to predict their paths; p1 has data down to the atom about projectile A, and no data at all about projectile B; p2 has the mass, position, and velocity of both projectiles".  Or, for that matter, "p1 has several megabytes of incorrect data which it incorporates into its predictions".

It seems he may have confused himself into assuming that p1 is the most intelligent possible program of Kolmogorov complexity k1.  (He later says "... then we have a contradiction since k1 was supposed to be the minimal expression of intelligence at that level".  Wrong; k1 was supposed to be the minimal expression of that particular intelligence p1, not the minimal expression of some set of possible intelligences.)  Then it would follow that any more intelligent (i.e. better-predicting, by his definition) program must be more complex.
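The Moon-projectile counterexample can be sketched in code (all numbers invented for illustration): p1 memorizes a coarsely rounded lookup table and interpolates, p2 is the short parabola formula, and the shorter description is the better predictor.

```python
# p1 memorizes 100 data points rounded to one decimal and linearly
# interpolates; p2 is just the parabola y(t) = v*t - g*t^2/2.  The
# better predictor need not be the more complex program.
g = 1.62          # lunar surface gravity, m/s^2
v = 10.0          # initial vertical velocity, m/s

def true_y(t):
    return v * t - 0.5 * g * t ** 2

# p1's "knowledge": 100 samples, rounded (i.e. slightly corrupted).
table = [(i * 0.1, round(true_y(i * 0.1), 1)) for i in range(100)]

def p1(t):
    # Linear interpolation in the rounded table.
    for (t0, y0), (t1, y1) in zip(table, table[1:]):
        if t0 <= t <= t1:
            return y0 + (y1 - y0) * (t - t0) / (t1 - t0)
    raise ValueError("out of range")

def p2(t):
    return v * t - 0.5 * g * t ** 2

# Rough "description lengths": the whole table vs. the formula.
len_p1 = len(repr(table))
len_p2 = len("v*t - 0.5*g*t**2")

test_points = [0.05 + i * 0.1 for i in range(99)]
err = lambda p: max(abs(p(t) - true_y(t)) for t in test_points)

print(len_p1 > len_p2)    # True: the table is far longer than the formula
print(err(p1) > err(p2))  # True: and p1 is the worse predictor
```

This is exactly the failure of "more intelligent implies more complex": p2 predicts strictly better while having a much shorter description.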

Comment by localdeity on The Dictatorship Problem · 2023-06-21T01:58:44.227Z · LW · GW

Of course, Trump has already been arrested, and the revolution hasn't happened, so this isn't the case here, apparently.

I said "if Trump ends up in jail"; I meant as an outcome, like if that were his sentence; I would also count it if he were held in jail awaiting trial for months.  From what I've read, he hasn't spent a single night in jail and is still out giving speeches.

Local consequentialism - locally (both spatially and temporally) optimizing can have disastrous global effects. Today, we can't arrest an aspiring dictator. And so he, in 5 years, wins the election. Now he's in power, and we have the problem we hoped to avoid.

Surely you're not saying that the point of arresting him is to prevent him from winning an election.  Surely you're not saying that.

I believe we're discussing the merits of a general taboo against prosecuting presidential candidates unless the crime is particularly legible to the public.

Do you think such a taboo is likely to increase or decrease the risk from dictators taking over?  Maybe you could claim that would-be dictators are more likely than good candidates to have committed crimes, and thus removing the taboo selects against dictatorial candidates; I guess that's possible.  On the other hand, if there is no such taboo, then a dictator who has already been elected is more likely to appoint cronies who will prosecute his political opponents for whatever might stick to them—even if they don't stick, the prosecution itself can be damaging and onerous.  The second thing seems bigger to me.  The U.S. government has lots of interlock to limit the damage that the occasionally elected bad president can do, and I think that's a much better security model than doing everything possible to minimize the chance of electing him.

If I'm elected because you are too scared to arrest me

Do you think the taboo would enable would-be dictators to commit crimes that make them more likely to get elected?  Such crimes are conceivable, I guess, but the impactful ones (like tampering with voting machines) seem likely to be legible.  The current example is... keeping a bunch of classified documents he shouldn't have?  I don't see how that helps win an election.

Incentivizing self-modification against your values - if we reward people willing to invent conspiracy theories in their heads by not arresting their ideological leader, they are, both consciously and subconsciously, motivated to do just that, because they know you will back down.

Yeah, this is unfortunate.  Though if we back down because there's a taboo, or because we want to portray our country as better than others—rather than because we're scared of violence, based on an evaluation of how violence-prone the population is—then there's no need for self-modification (though some might engage in it anyway).  I guess there's a danger of self-modification shrinking the range of what crimes are "legible" to the public.  But I don't expect that to go very far and result in substantial damage due to additional crime by presidential candidates.  Overall, I think the benefits of the taboo exceed these risks.

Comment by localdeity on The Dictatorship Problem · 2023-06-19T16:12:49.549Z · LW · GW

Nobody arrested their political opponent (the politicians aren't the ones doing the arresting).

Biden nominated Merrick Garland as attorney general, who chose Jack Smith, who is doing the prosecuting.  The separations here are not going to impress someone who thinks the Democrats are using the system to attack the enemy they hate.

Why should public figures have immunity from being arrested unless >80% of the population agrees?

I wouldn't extend this to all public figures, just those who are serious candidates for an election for leader of the country.  The logic is similar to that which some have said underlies the justification for democracy: given an armed populace, voting is a less-bloody substitute for a violent revolution.  80% is a number I made up, but the point is that if there is any serious chance that arresting them leads to a violent revolution, then don't arrest them.

Comment by localdeity on The Dictatorship Problem · 2023-06-19T07:24:09.591Z · LW · GW

Bringing it full circle, there was an incident where someone at Fox News put a caption below two pictures of Biden and Trump: "Wannabe dictator speaks at the White House after having his political rival arrested".  It was taken down and Fox apologized, but some conservatives are saying "Ha ha—no, really".

Against that description, Washington Post says: "The Biden administration has maintained that it has no role in the federal Trump prosecution, with Attorney General Merrick Garland turning to a special counsel to avoid a conflict of interest."

I looked for more, and Fortune says:

Garland said Friday that Trump’s announcement of his presidential candidacy and President Joe Biden’s likely 2024 run were factors in his decision to appoint Jack Smith, a veteran prosecutor, to be the special counsel. Garland said the appointment would allow prosecutors to continue their work “indisputably guided” only by the facts and the law.


The Justice Department described Smith as a registered independent, an effort to blunt any attack of perceived political bias. Trump is a Republican, and Biden is a Democrat.

“Throughout his career, Jack Smith has built a reputation as an impartial and determined prosecutor who leads teams with energy and focus to follow the facts wherever they lead,” Garland said. “As special counsel, he will exercise independent prosecutorial judgment to decide whether charges should be brought.”

Is that enough?  It may well be that Jack Smith is determined to uphold principles and keep his personal opinions out of his work, and might also be that his personal opinions are neutral, but neither of these will be particularly legible to the public.  (His Wikipedia page doesn't show anything obviously politically relevant from him, except that he married someone who produced a documentary about Michelle Obama.)

Also, political bias isn't the only relevant dimension; if one in Garland's position were determined to get Trump taken down, one could pick a prosecutor who was politically neutral but "tough on crime", or even tough on that particular type of crime, not to mention personally disliking Trump.  And one might be able to use one's network to find a prosecutor whose private opinions were what one wanted; with a good network, one could probably even find a tough-on-crime Republican who disliked Trump but hadn't said so publicly.

Well, what then?  Is there no way to prosecute someone like Trump without risking it looking like your side is inappropriately using the legal system against your opposition?  Well... maybe not.  I do think it'd be different if Trump were, say, caught on video (deepfakes aside—let's say there are plenty of witnesses) punching someone unprovoked.  But for a case like this—no one has been injured, precedent is murky, and it's easy for Trump to tell stories about it where he did nothing wrong.

I think arresting him was a mistake.  It's bad if some people end up being "above the law", untouchable to an extent because of the political implications of prosecuting them; but those implications, the precedent it sets in many people's eyes, seem worse.  There are cases where we say, e.g., that although some speech is bad, wrong, and net negative, those in power shouldn't be trusted to decide which speech qualifies, and thus they must let it all be: we adopt a rule that limits tyranny, and accept that this means we must tolerate some genuinely bad stuff.  For the same reason, there should be such a strong taboo against arresting political opponents that one just doesn't do it—unless they're committing crimes so obvious and serious that, say, >80% of the public agrees he should be arrested.

As it stands, The Independent says that an ABC news poll says:

His favourability correlated with how people felt about charges brought against him. Around 47 per cent of people said the charges against Mr Trump were politically motivated, compared to 37 per cent who did not see politics behind the indictments.


Nearly half – 48 per cent of Americans – said Mr Trump should have been charged in the cases while 35 per cent voted against it.

(Amusingly, this implies that many of those who are uncertain whether the charges are politically motivated, and likely some who believe they are politically motivated, do believe Trump should have been charged.)  It also seems to have fired up Trump's supporters, and increased his chances of winning the Republican primary.  I suspect that, if Trump ends up in jail, that will lead to riots.

Ironically, this seems to have been bad for anti-Trump Democrats, and I think this was foreseeable; which in turn is decent evidence that the prosecution is not a considered political move, more a case of individuals doing what they think is their official duty.  I'm tempted to wonder if Trump anticipated and deliberately provoked this; I think that type of thing has served him well in the past.  It's hard to distinguish between "the guy is terminally childish" and "the guy has excellent political instincts"; probably both.

Comment by localdeity on Matt Taibbi's COVID reporting · 2023-06-15T18:04:40.759Z · LW · GW

To take some version of the opposite side: If we managed to figure out that, say, there was an X% chance per year of lab-leaking something like COVID, and a Y% chance per year of natural origin + wet market crossover producing something like COVID... that would determine the expected-value badness of lab practices and wet market practices, and the respective urgencies of doing something about them.  It wouldn't matter which specific thing happened in 2019.  (For an analogy, if the brakes on your car stopped working for 30 seconds while you were on the highway, this would be extremely concerning and warrant fixing, regardless of whether you managed to avoid crashing in that particular incident.)

That said, it seems unlikely that we'll get decent estimates on X and Y, and much more unlikely that there would be mainstream consensus on such estimates.  More likely, if COVID is proven to have come from a lab leak, then people will do something serious about bio-lab safety, and if it's proven not to have come from a lab leak, then people will do much less about bio-lab safety; this one data point will be taken as strong evidence about the danger.  So, getting an answer is potentially useful for political purposes.

(Remember: SARS 1 leaked from a lab 4 times.  That seems to me like plenty of evidence that lab leaks are a real danger, unless you think labs have substantially improved practices since then.)

Comment by localdeity on Why libertarians are advocating for regulation on AI · 2023-06-15T06:17:43.100Z · LW · GW

Berkson's Bias seems to be where you're getting a subset of people that are some combination of trait X and trait Y; that is, to be included in the subset, X + Y > threshold.  Here, "> threshold" seems to mean "willing to advocate for regulations".  It seems reasonably clear that "pessimism (about the default course of AI)" would make someone more willing to advocate for regulations, so we'll call that X.  Then Y is ... "being non-libertarian", I guess, since probably the more libertarian someone is, the more they hate regulations.  Is that what you had in mind?

I would probably put it as "Since libertarians generally hate regulations, a libertarian willing to resort to regulations for AI must be very pessimistic about AI."
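The selection effect can be simulated; here's a toy sketch with made-up numbers, where X = pessimism about AI and Y = non-libertarianism are independent in the general population, and only people with X + Y above a threshold end up advocating for regulations:

```python
import random

random.seed(0)

# Independent traits in the general population (arbitrary units).
population = [(random.gauss(0, 1), random.gauss(0, 1))
              for _ in range(100_000)]
# Selection: only those with X + Y above a threshold become advocates.
advocates = [(x, y) for x, y in population if x + y > 1.5]

def corr(pairs):
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

print(round(corr(population), 2))  # ~0: independent in the population
print(round(corr(advocates), 2))   # strongly negative among advocates
```

Within the selected group the traits are negatively correlated even though they're independent overall: among advocates, the less pessimistic someone is, the less libertarian they must be to have cleared the threshold. That's Berkson's bias.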

Comment by localdeity on Is the confirmation bias really a bias? · 2023-06-14T15:07:32.025Z · LW · GW

You might be interested in Gigerenzer's "bias bias" paper (reviewed here):

Behavioral economics began with the intention of eliminating the psychological blind spot in rational choice theory and ended up portraying psychology as the study of irrationality. In its portrayal, people have systematic cognitive biases that are not only as persistent as visual illusions but also costly in real life—meaning that governmental paternalism is called upon to steer people with the help of “nudges.” These biases have since attained the status of truisms. In contrast, I show that such a view of human nature is tainted by a “bias bias,” the tendency to spot biases even when there are none. This may occur by failing to notice when small sample statistics differ from large sample statistics, mistaking people’s random error for systematic error, or confusing intelligent inferences with logical errors. Unknown to most economists, much of psychological research reveals a different portrayal, where people appear to have largely fine-tuned intuitions about chance, frequency, and framing. A systematic review of the literature shows little evidence that the alleged biases are potentially costly in terms of less health, wealth, or happiness. Getting rid of the bias bias is a precondition for psychology to play a positive role in economics.

An example from the paper:

Unsystematic Error Is Mistaken for Systematic Error

The classic study of Lichtenstein et al. [about causes of death] illustrates the second cause of a bias bias: when unsystematic error is mistaken for systematic error. One might object that systematic biases in frequency estimation have been shown in the widely cited letter-frequency study (Kahneman, 2011; Tversky and Kahneman, 1973). In this study, people were asked whether the letter K (and each of four other consonants) is more likely to appear in the first or the third position of a word. More people picked the first position, which was interpreted as a systematic bias in frequency estimation and attributed post hoc to the availability heuristic. After finding no single replication of this study, we repeated it with all consonants (not only the selected set of five, each of which has the atypical property of being more frequent in the third position) and actually measured availability in terms of its two major meanings, number and speed, that is, by the frequency of words produced within a fixed time and by time to the first word produced (Sedlmeier et al., 1998). None of the two measures of availability was found to predict the actual frequency judgments. In contrast, frequency judgments highly correlated with the actual frequencies, only regressed toward the mean. Thus, a reanalysis of the letter-frequency study provides no evidence of the two alleged systematic biases in frequency estimates or of the predictive power of availability.

Comment by localdeity on Snake Eyes Paradox · 2023-06-12T19:51:20.543Z · LW · GW

Because I'm not a real mathematician, I'm not going to find the actual limit, but just show that the limit is at least 50%.

Note that 1 + 2 + 4 + ... + 2^(n-1) = 2^n - 1.  Therefore, if we have a bunch of blue-eyed groups of size 1, 2, 4, ..., 2^(n-1), and one red-eyed group of size 2^n, then the overall fraction of snakes that are red-eyed is 2^n / (2^n + 2^n - 1), which, if we divide the numerator and denominator by 2^n, comes out to 1 / (2 - 1/(2^n)).  This is slightly above 1/2, and the limit as n -> ∞ is exactly 1/2.

Comment by localdeity on The Dictatorship Problem · 2023-06-12T12:04:43.203Z · LW · GW

Consider how people say, for example, that it's impossible to revolt against the government using just personal firearms, given that the government has nukes, fighter jets etc.

People do say that kind of thing.  Counterarguments:

  • Successful revolts don't need to be capable of defeating the army in a fair fight.  All you need to do is make it sufficiently painful for them to keep fighting that they give up.  I think the Middle East has modern examples of this.
  • A revolt may have some portion of the army on its side, and another portion might refuse to fight their own people.  Nukes in particular—I would be extremely astonished if any government used a large nuke, killing a bunch of civilians, when putting down a rebellion.  (Maybe they'd use very small tactical nukes—equivalent to large conventional bombs—in situations where there'd be no civilian casualties, but I suspect (and hope) that there'd still be strong resistance to breaking the nuclear taboo.  And would there even be an advantage to doing so?  Are the tactical nukes cheaper than the equivalents?  Heh, someone has looked into it: probably not.)

Comment by localdeity on What are brains? · 2023-06-10T16:54:38.720Z · LW · GW

Regarding the first part, here's what comes to mind: Long before brains evolved any higher capacities (for "conscious", "self-reflective", etc. thought), they evolved to make their hosts respond to situations in "evolutionarily useful" ways.  If you see food, some set of neurons fire and there's one group of responses; if you see a predator, a different set of neurons fire.

Then you might define "food (as perceived by this organism)" to be "what tends to make this set of neurons fire (when light reflects off it (for certain ranges of light) and reaches the eyes of this organism)".  Boundary conditions (like something having a color that's on the edge of what is recognized as food) are probably resolved "stochastically": whether something that's near the border of "food" actually fires the "food" neurons probably depends significantly on silly little environmental factors that normally don't make a difference; we tend to call this "random" and say that this almost-food thing has a 30% chance of making the "food" neurons fire.

There probably are some self-reinforcing things that happen, to try[1] to make the neurons resolve one way or the other quickly, and to some extent quick resolution is more important than accuracy.  (See Buridan's principle: "A discrete decision based upon an input having a continuous range of values cannot [always] be made within a bounded length of time.")  Also, extremely rare situations are unimportant, evolutionarily speaking, so "the API does not specify the consequences" for exactly how the brain will respond to strange and contrived inputs.

("This set of neurons fires" is not a perfectly well-defined and uniform phenomenon either.  But that doesn't prevent evolution from successfully making organisms that make it happen.)

Before brains (and alongside brains), organisms could adapt in other ways.  I think the advantage of brains is that they increase your options, specifically by letting you choose and execute complex sequences of muscular responses to situations in a relatively cheap and sensitive way, compared to rigging up Rube Goldberg macroscopic-physical-event machines that could execute the same responses.

Having a brain with different groups of neurons that execute different responses, and having certain groups fire in response to certain kinds of situations, seems like a plausibly useful way to organize the brain.  It would mean that, when fine-tuning how group X of neurons responds to situation Y, you don't have to worry about what impacts your changes might have in completely different situations ABC that don't cause group X to fire.

I suspect language was ultimately built on top of the above.  First you have groups of organisms that recognize certain things (i.e. they have certain groups of neurons that fire in response to perceiving something in the range of that thing) and respond in predictable ways; then you have organisms that notice the predictable behavior of other organisms, and develop responses to that; then you have organisms noticing that others are responding to their behavior, and doing certain things for the sole purpose[1] of signaling others to respond.

Learning plus parent-child stuff might be important here.  If your helpless baby responds (by crying) in different ways to different problems, and you notice this and learn the association, then you can do better at helping your baby.

Anyway, I think that at least the original notion of "a thing that I recognize to be an X" is ultimately derived from "a group of neurons that fire (reasonably reliably) when sensory input from something sufficiently like an X enters the brain".  Originally, the neuronal connections (and the concepts we might say they represented) were probably mostly hardcoded by DNA; later they probably developed a lot of "run-time configuration" (i.e. the DNA lays out processes for having the organism learn things, ranging from "what food looks like" [and having those neurons link into the hardcoded food circuit], through learning to associate mostly-arbitrary "language" tokens to concepts that existing neuron-groups recognize, to having general-purpose hardware for describing and pondering arbitrary new concepts).  But I suspect that the underlying "concept X <--> a group of neurons that fires in response to perceiving something like X, which gates the organism's responses to X" organization principle remains mostly intact.

  1. ^

    Anthropomorphic language shorthand for the outputs of evolutionary selection

Comment by localdeity on [deleted post] 2023-06-09T15:41:24.840Z

Wow, that article has some delicious allegations.

Upon careful consideration, Shapiro’s accounting for the origins of EMDR is questionable. This is because saccades during everyday functioning are physiologically invisible (Moses & Hart, 1987). Rosen (1995) addressed this concern by asking six individuals if they could experience eye movements while walking around and thinking of positive and negative thoughts. None were successful.

After publication of Rosen’s challenge to Shapiro’s origin story she alerted members of an EMDR listserv (September 12, 1996) that a responsive critique would be published by a “world renowned perceptual psychology researcher.” Shapiro was referring to Robert Welch [...]

Welch’s praise of Shapiro’s sensitivity and diligence, following as it did Shapiro’s praise of his expertise, occurred without either party disclosing a likely conflict of interest: they had a relationship and married (Carey, 2019). Remarkably, a similar failure to disclose involved Shapiro’s earlier marriage in 1969 to Gerald Puk (retrieved March 1, 2021) when both were students in Brooklyn, New York. [...]

Licensed in New York State and without academic credentials (PsycInfo, retrieved on March 1, 2021) Puk was not on the faculty at the Professional School of Psychological Studies in California: yet somehow he became a member of Shapiro’s dissertation committee (Shapiro, 1988). As with Welch, Shapiro’s relationship history with Puk remained undisclosed to relevant parties (Anne Hanley, dissertation committee member, personal communication March 10, 2021).

It was in 1985 that Shapiro published an article in Holistic Life Magazine and discussed Neuro-Linguistic Programming (NLP) theories on various topics including the importance of eye movement patterns (Shapiro, 1985, pp. 41–43):

Neuro-Linguistic Programming is a technique developed over eight years ago. . .. It has been dubbed the “Super-Achievers” technology because the research team studied the most successful people they could find in law, medicine, business and psychology to see what made them so successful. .. In NLP, the key is that since people share the same neurological system, responses are predictable, verifiable, and repeatable. In other words, Neuro-Linguistic Programming is scientifically rather than merely theoretically based.

One of the findings of the Neuro-Linguistic Programming research is that all people cross-culturally (with the exception of the Basque nationality) show how they are thinking by the way their eyes move. . . Even without their saying a word, if you watch their eyes carefully, you can determine whether they are seeing a picture, hearing, or feeling something. As a further refinement, you can tell if they are remembering something or constructing it. Thousands have learned to walk on red-hot coals without injury, using Neuro-Linguistic Programming.. . Using Neuro-Linguistic Programming, people are shown how to tap into their own unlimited source of personal power, get rid of even the basic fear of fire and change their physiology to walk across the coals. The major dilemma that people are confronted with in Neuro-Linguistic Programming is the question of manipulation and free will. Since the powerful technology allows you to practically “read minds” and have people respond automatically in any way you choose, there is a distinct ethical issue.

(For those who aren't familiar: Wiki on firewalking)

Of course, it is possible that a person who appears to be generally dishonest, and over-credulous (and/or consciously dishonest) about the magic powers of NLP, might have stumbled upon a genuinely correct technique.  But it would seem prudent to, at the very least, discount any evidence that came from that person and anyone connected to her.

Comment by localdeity on AI #15: The Principle of Charity · 2023-06-08T14:59:37.189Z · LW · GW

If all it is doing is letting you issue commands to a computer, sure, fine. But if it’s letting you gain skills or writing to your memory, or other neat stuff like that, what is to keep the machine (or whoever has access to it) from taking control and rewriting your brain?

This brings to mind the following quotes from Sid Meier's Alpha Centauri (1999).

Neural Grafting

"I think, and my thoughts cross the barrier into the synapses of the machine—just as the good doctor intended. But what I cannot shake, and what hints at things to come, is that thoughts cross back. In my dreams the sensibility of the machine invades the periphery of my consciousness. Dark. Rigid. Cold. Alien. Evolution is at work here, but just what is evolving remains to be seen."
– Commissioner Pravin Lal, "Man and Machine"


Mind-Machine Interface

"The Warrior's bland acronym, MMI, obscures the true horror of this monstrosity. Its inventors promise a new era of genius, but meanwhile unscrupulous power brokers use its forcible installation to violate the sanctity of unwilling human minds. They are creating their own private army of demons."
– Commissioner Pravin Lal, "Report on Human Rights"

Comment by localdeity on AI #15: The Principle of Charity · 2023-06-08T12:34:16.768Z · LW · GW

Even indoors, everyone is coughing and our heads don’t feel right. I can’t think fully straight.

I highly recommend ordering an air purifier if you haven't already.  (In California we learned the utility of this from past wildfire seasons.)  Coway Airmega seems to be a decent brand.

Comment by localdeity on A mind needn't be curious to reap the benefits of curiosity · 2023-06-02T21:49:22.302Z · LW · GW

Maybe kindness is also like this: there might be benefits to behaving kindly, in some situations. But a mind behaving kindly (pico-pseudokindly?) need not value kindness for its own sake, nor have any basic drive or instinct to kindness.

I feel like this is common enough—"are they helping me out here just because they're really nice, or because they want to get in my good graces or have me owe them a favor?"—that authors often have fictional characters wonder if it's one or the other.  And real people certainly express similar concerns about, say, whether someone donates to charity for signaling purposes or for "altruism".

Also reminds me:

"You don't see nice ways to do the things you want to do," Harry said. His ears heard a note of desperation in his own voice. "Even when a nice strategy would be more effective you don't see it because you have a self-image of not being nice."

"That is a fair observation," said Professor Quirrell. "Indeed, now that you have pointed it out, I have just now thought of some nice things I can do this very day, to further my agenda."

Harry just looked at him.

Professor Quirrell was smiling. "Your lesson is a good one, Mr. Potter. From now on, until I learn the trick of it, I shall keep diligent watch for cunning strategies that involve doing kindnesses for other people. Go and practice acts of goodwill, perhaps, until my mind goes there easily."

Cold chills ran down Harry's spine.

Professor Quirrell had said this without the slightest visible hesitation.

Comment by localdeity on What's the consensus on porn? · 2023-06-01T14:26:20.382Z · LW · GW

This looks to be a correlational study.  As an exercise, let's try thinking of confounding effects that would point in both directions.

  1. If I had ED, then I imagine (a) I would try looking within porn to see if there was some way to sustain arousal, and (b) the problems with ED might mean I'd have less sex with a partner (or even break up, in marginal relationships; or lack the confidence to form new ones), and have to satisfy my sex drive by myself more often.  Thus, ED causes more porn use, and we'd expect them to be correlated.  Therefore, a null result means there must be a counter effect: porn use must reduce ED!
  2. (a) If I had ED, and I believed the "common wisdom" that porn causes ED, then I would avoid porn; in other words, ED causes less porn use.  (b) I would guess that a lower sex drive can cause both ED and reduced porn use.  Both of these effects imply anticorrelation, and therefore the study's result of a null correlation means porn use must cause ED.

Some of these might be investigated and/or controlled for.  Let's imagine controlling for 1(b), by looking at single men.  Now let's try to imagine what could screw up the analysis.  Consider these two worlds:

  1. For those with a high sex drive, ED is no handicap to relationships, because one compensates with oral sex and other measures.  But having low sex drive and ED will lead to a breakup.  Therefore, by restricting our sample to single men, we're creating an extra correlation between ED and low sex drive; and low sex drive causes less porn use, so we expect this to yield an anticorrelation between ED and porn use (and therefore null result means porn use causes ED).
  2. Having a high sex drive has no effect on whether ED causes you and your partner to break up.  Therefore, null result means null causation.

In general, if there's a piece of the causal chain that you don't know about and that could go either way, then whatever analysis you do can't yield the correct answer in both worlds.  If you have enough data and accurate measurements of all relevant variables, then you might be able to account for all confounding effects and isolate the causation you want.
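To make the "opposing arrows can cancel" point concrete, here's a toy simulation with entirely invented coefficients (not a model of the actual studies): a latent sex drive raises porn use and lowers ED, porn use causally raises ED, and with the right magnitudes the observed correlation between porn use and ED comes out near zero despite a genuine causal effect.

```python
# Toy model (invented coefficients, not fitted to any study): a latent
# confounder can mask a genuine causal effect, yielding a null correlation.
import random

random.seed(1)
n = 100_000
porn, ed = [], []
for _ in range(n):
    drive = random.gauss(0, 1)      # latent sex drive (the confounder)
    p = drive + random.gauss(0, 1)  # higher drive -> more porn use
    # Porn use causally raises ED (+0.5); drive lowers it (-1.0).
    e = 0.5 * p - 1.0 * drive + random.gauss(0, 1)
    porn.append(p)
    ed.append(e)

def corr(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    sx = (sum((x - mx) ** 2 for x in xs) / len(xs)) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / len(ys)) ** 0.5
    return cov / (sx * sy)

# The causal effect of porn on ED is +0.5 by construction, yet:
print(round(corr(porn, ed), 1))  # near 0: the confounder cancels it
```

A study that measured only porn use and ED in this world would report a null association and (if careless) conclude there is no causal effect.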

Checking out the actual study... they controlled for exactly two things: age and education.  The sample was also restricted to "sexually active" men, which it doesn't seem to define.  (Does it mean they're currently in a relationship?  Have had sex in the last 12 months?  If a bad case of ED means a man hasn't had sex in years despite wanting to, does this exclude him from the study?  Surely such men are the most important ones for the study's goals?)  They did ask about sex drive... but in study 1, they lumped in "lack of sexual desire" with "sexual difficulties" that include ED, and in study 2, they asked specifically about a reduction in sex drive in the last 12 months, but seemingly nothing about overall sex drive.

Well, I was going to say: (a) if the "high sex drive protects relationships from ED" hypothesis is true (which I just made up; I suspect it's a real but weak effect), then this would leave us with a sample where ED is extra-correlated with high sex drive, which could be relevant; more importantly, (b) within a relationship, I expect things like "the man becomes less attracted to his partner" or "emotional conflicts or other relationship problems interfere with the attraction" (which have many possible causes, and I think are not rare) to cause both "instances of ED when the man tries to have sex with his partner" and "the man to use porn more often".  I expect (b) is a significant effect.

And even if there were a study that controlled for all the above, I can come up with more effects, at least some of which would be plausibly significant.  Not to mention, controlling for something requires measuring it, and I'm not sure things like "the man feeling emotionally distant in a way that may interfere with attraction" could be accurately quantified in survey questions.  This is why I want studies with a randomized intervention.

Technically, the title of the study asks about an "association"—that is, a correlation—and it delivered on that.  But I don't think anyone seriously cares about the association except insofar as it sheds light on causation.  (If they truly didn't care about causation, then why do any controls?)  Thus, despite the study's size, in terms of causality it looks pretty impotent.

Discussion and Conclusions

[...] The only significant relationship was observed in the 2011 Croatian sample (Study 1) between pornography use and ED. The direction of this association is unclear, as pornography use may also be a way to cope with sexual difficulties or decreased sexual satisfaction.

Ya think?!  (And are they using the word "association" to mean "causation" in that sentence?  I'm not sure what a "directed association" would be otherwise.  Is this a motte and bailey on the word "association"?  The paper comes from a Croatian university—maybe a translation issue?  Google says half of Croatians speak English fluently.)  Surely this was foreseeable.  I guess the charitable explanation is that they would have done followups to tease out the details if it looked like there was a big effect.

The study shows there can't be a huge effect that outweighs all confounders, but I think some potential confounders are at least medium-sized... and I do worry that the "sexually active" criterion might have excluded a lot of central examples of the phenomenon they're supposed to be investigating.  In study 1, only 52% of men who took the survey met all their criteria (including filling out the whole survey), and in study 2 it's only 26%.  Also, lumping in low sex drive with ED, as study 1 did, would probably reduce the correlation of ED with porn use (given that low sex drive probably reduces porn use).

Of course, nobody listens to science.

I'm afraid I agree with those who don't listen to whatever this study is an exemplar of.  I want to believe their conclusion, and I think it seems likely based on my "armchair theorizing and amateur observation", but their contribution has not really updated my beliefs or confidence.  Actually, I feel a little more worried than I did before.

Comment by localdeity on What's the consensus on porn? · 2023-05-31T06:34:40.163Z · LW · GW

One can imagine all kinds of confounding effects, where some other thing makes someone more likely to use porn, and also is (or causes) something else that is good or bad.  And the confounding effects are likely to be different in different subpopulations (e.g. young university kids; people who were raised religious and might not be anymore; people in relationships, some of which are going better than others; old people, some of whom are losing their sex drive).  So I would put very little trust in any study that didn't involve a randomized intervention.  (Which isn't a guarantee of quality—it's only one dimension.)  And I think there are plenty of people making claims based on ... well, let's just say their standards for empirical rigor are much lower than mine.

Wiki has an article, which is somewhat interesting to look through.  This is a bit hilarious coming after the above:

Studies have looked into both negative effects of pornography as well as potential benefits or positive effects of pornography. A large percentage of studies suffer from methodological issues. In one meta-study by researchers at Middlesex University in England, over 40,000 papers and articles were submitted to the team for review: 276 or 0.69% were suitable for consideration due to the low quality of research within the field.

It could be worth looking into those 276, but I'm not going to do so before posting this comment.  (Also I wouldn't be surprised if many of those were bad for reasons the researchers didn't catch.)  So, um, I think we're left with armchair theorizing and amateur observation.  Let's see.

Ways it could go wrong:

  • There are apparently people who get addicted to porn, so maybe it's harmful for them.  On the other hand, I think people who get addicted to things often have bad stuff happening in their life already.[1]  On the first hand, even if the addiction is caused by bad stuff, it's possible that the addictive behavior makes the situation worse.  On the second hand, if you were going to be addicted to something, porn is probably way less bad than, say, drugs or gambling.
  • Obviously, porn has its realistic and its unrealistic elements, and people who can't tell the difference and don't know it are likely to do ill-advised things.
  • For some, the usage of porn has baggage because of an upbringing that thought it (or perhaps masturbation or sex more generally) was shameful, or because other people in their life have opinions about their porn usage, or it connects to relationship problems (e.g. the partners have differing sex drives, or one wants a type of sex the other is unwilling to do, and uses masturbation with porn to compensate).  Recommendations there would be situation-specific, although in general I'd say these situations already have a conflict, and the other option (i.e. abstaining from porn, or masturbation in general) may carry its own downsides (frustration, resentment) and isn't necessarily better.
  • I've seen claims that some people get bored with "vanilla" porn, check out something a little spicier, get bored, etc., and end up in pretty extreme places.  I'm sure this has happened.  I don't think it's common.  I also suspect that those people had an underlying tendency and many of them would have arrived at similar places via in-person sex, if their environment made this easy (and would have been frustrated, and done who knows what else, if their environment did not).

Ways it could go well:

  • Obviously, people find it pleasurable or otherwise rewarding.
  • If one's sex drive would otherwise lead one to do unsafe, immoral, or otherwise bad things, this may be a better alternative.  (To some extent one can say this about masturbation regardless of whether it involves porn.)
  • If one wants to expand the range of, say, body types or demographics one is interested in, porn is an easy way to explore that.

There is plenty of speculation about long-term effects, habit forming, and so on.  (I personally look out for the possibility of becoming dependent on porn, and make a point of masturbating using only my imagination reasonably frequently.)  I don't think there are large effects that reliably happen, otherwise I'd probably have heard about it.  (There's a whole "No Fap" movement that some subscribe to.  I think some people claim it improves their motivation / energy.)  Probably, if there are such effects, they affect some people much more than others.

Overall, I'd say "seems probably harmless; it's probably worth having some awareness of failure modes and paying attention to yourself, but beyond that, do what thou wilt".

  1. ^

    Valentine had an interesting post where he said "This is the basic core of addiction. Addictions are when there's an intolerable sensation but you find a way to bear its presence without addressing its cause. The more that distraction becomes a habit, the more that's the thing you automatically turn to when the sensation arises."  The idea of being addicted to escape from a thing you're avoiding, rather than being particularly addicted to the specific form of escape, rings true to me.

Comment by localdeity on Open Thread With Experimental Feature: Reactions · 2023-05-25T21:47:55.408Z · LW · GW

Probably inspired by the /r/changemyview subreddit.

Comment by localdeity on What are the limits of the weak man? · 2023-05-18T03:13:53.945Z · LW · GW

In some cases, your goal is to figure out the "correct" political position to hold (based on your ethics, goals, and other beliefs).  In that scenario, what any particular person believes is, logically speaking, irrelevant[1].  (If you're debating a certain person, then it's probably rude and possibly against the rules of the debate (if there are any) for you to spend all your time talking about positions your opponent doesn't hold, and never engage with their actual views; so that's a reason to talk about their actual views, but not a reason to believe them any more than other potential views you find equally plausible.)

In other cases, your goal is to decide if some political movement is a good one, or whether to support a political party or coalition or group.  In that case, questions like "How many of you actually support position X vs position Y" are relevant.  (And "How much better is X than Y" is also relevant, if there are enough supporters of those positions to be worth considering.)

Truthseekers do well to bear in mind the difference between these goals, and when questions bear on one but not the other.

  1. ^

    Except to the extent that person X believing Y is taken as evidence that Y is true.  That would apply where X is known to be an expert on Y.

Comment by localdeity on How I apply (so-called) Non-Violent Communication · 2023-05-17T14:59:57.625Z · LW · GW

It occurs to me that "peacemaker communication" would be historically accurate, conveys what seems appropriate, and seems much better at avoiding controversial implications.

Comment by localdeity on How I apply (so-called) Non-Violent Communication · 2023-05-17T14:38:27.187Z · LW · GW

There is something to that.  However...

I would like it to be possible for people to say things like "Bob is wrong", "Bob is lying to you", "Bob's products don't work very well / are flawed", "Bob's studies have bad methodology", "Bob has made horrible decisions as a leader and should be voted out", etc.  And when they do so, if Bob gets offended and escalates to violence, I want there to be a very strong presumption that Bob is absolutely wrong to do so, that this effectively proves the criticism was well-founded (not because that's logically necessarily true, but because it disincentivizes violence).  If Bob hints that he may escalate to violence, I want there to be a strong presumption that he is wrong to do so and that this proves the criticism right.  If any onlookers (possibly aligned with Bob, possibly not) say, "Hey, um, you might not want to say that, it carries some risk of escalating to violence", I want the culture to provide a strong answer of "No, Bob will not do that—or if he does, it proves to everyone that he's monstrous and we'll throw him in jail faster than you can say 'uncivilized'.  Civilians should act like there's no risk to speaking up, and we will do our best to make this a correct decision."

This ethos seems difficult to reconcile with enshrining the idea "Unless you're very careful about what you say and how you phrase it, you may end up saying things that may provoke someone into violence" into the name of your philosophy.  Like, it is possible to "expect good behavior, punish not-good behavior, but also practice how to handle bad behavior"; but to call non-careful speech violent (either implicitly, or biting the bullet and making it explicit as you do) seems to imply it's your fault for making Bob punch you.  Which is kind of true in a causal sense, but not in a "blame" sense.[1]  Calling it provoking—"non-provoking communication"—would be somewhat better, though I'm not entirely happy with it.  "How To Communicate With Uncivilized People Who Are Dangerously Prone To Violence" would be ideal in this sense.

Rosenberg seems to have developed and exercised his philosophy around people who are in fact dangerously prone to violence.  His lecture talks about growing up with some race riots that killed people, and then (either that or another one) talks about visiting somewhere like Iraq and having someone scream "Murderer!" at him because he was an American, and getting the guy to calm down and have a valuable conversation.

To be sure, one probably will encounter, in life, a decent number of people who are dangerously prone to violence.  Many of them you can probably get a good guess about, from quick observation, but not all.  So it is useful to have such skills, and additionally some of them help make conversations more productive in general (the central example being to state specific observations, rather than leading with controversial interpretations of not-stated evidence).  But, for abovementioned reasons, I don't want the terminology to have any shred of implication that escalating from speech to violence is justifiable.

  1. ^

    This raises something of a parallel with the whole "What was she wearing?  To what extent is it her fault she got sexually assaulted?" thing.

Comment by localdeity on Bayesian Networks Aren't Necessarily Causal · 2023-05-16T00:52:36.386Z · LW · GW

I expected there to be some wordplay on "casual"/"causal" somewhere, but I'm not sure if I saw any.  This is obviously a central component of such a post's value proposition.

Comment by localdeity on How I apply (so-called) Non-Violent Communication · 2023-05-15T14:35:49.429Z · LW · GW

Do you mean that saying "my method of communication is non-violent communication" implies that everyone else is communicating violently?

That kind of thing, yes.  I should mention that I have no systematic perspective here—I've had several acquaintances mention they've learned about NVC, and seen various internet discussions, but I have no idea what the "usual" or "average" usage is like.  (I'll also mention that I think at least some of the central techniques are good ones—I've elsewhere encountered the formulation of "I statements", as in "I see X" and "When Y, I feel Z".)

I have seen a few people say that abusive people have used NVC as a tool.  Essentially using it as a way of communicating, legitimizing, and lending weight to their unreasonable desires.  Googling, I found this (I don't endorse everything this article says, and much of it is "I have no idea how often what you're saying happens in practice", but posed as a hypothetical it makes sense):

Consider this situation:

An abuser has an emotional need for respect. He experiences it as deeply hurtful when his partner has conversations with other men. When she talks to other men anyway, he feels betrayed. He says “When you talk to other men, I feel hurt because I need mutual respect.”

Using NVC principles, how do you say that what he is doing is wrong?

My memory traces also include (a) someone saying he and his ~10-year-old kid took a class on it, and (b) a boss using it with her employees [although the source I found on this seems to be a hypothetical], both cases leading to discovering how to use it for emotional blackmail.  (I think someone opined that NVC shouldn't be used when there is a power disparity; I wonder if this is common advice.)

I mean, to some extent any improved communication technique is going to be a tool that increases the options available to an abuser, especially when their interlocutor isn't very sophisticated.  The existence of misusers doesn't prove it's bad on net.

Still, having the name be as virtuous-sounding as "Nonviolent Communication" seems to make a couple of things easier:

  1. pressuring someone to go along with it ("How could you possibly object to this?", or someone anticipating that response and staying silent)
  2. practitioners and teachers being blithely unaware of possible misuses by themselves or others

It may also turn away some of the more scrupulous, who instinctively avoid a label that sounds like it encourages failure mode 2 above (I think I'm in this category); and some who perceive the naming choice as a manipulative move.

And problems with a naming choice seem to matter more for a philosophy that is deeply concerned with the use of language.  (Another case of this comes to mind, which I'll avoid mentioning because it's political.)

Again, I have no idea how often NVC is used well vs used badly.  But it does seem to me that very thoughtful practitioners of it should at least be aware of the naming issue, and that it would reflect badly on such practitioners if they'd never thought of it (sorry) and on the practice in general if none of the "top brass" (so to speak) were aware of it either.  Hence my wanting to probe on it.

Comment by localdeity on How I apply (so-called) Non-Violent Communication · 2023-05-15T12:07:28.604Z · LW · GW

Apologies for the somewhat offtopic question, but...

avoid phrasings [...] that come across as accusatory or potentially unfair

With that being a central tenet of NVC, do you perceive anything odd about going around saying "My method of communication is non-violent communication"?

(I tried to find out Marshall Rosenberg's rationale for using the name.  Hmm, it looks like the Wikipedia article has been updated.  In the video it points to, Rosenberg says he doesn't like the name for multiple reasons, and lists two of them; neither is the one I'm thinking of, but it's possible he'd thought of it too.  Of the listed alternatives, "giraffe language" would not suffer from this issue; "Rosenberg communication" would also work.)

Comment by localdeity on Bayesian Networks Aren't Necessarily Causal · 2023-05-14T16:19:56.174Z · LW · GW

First crucial point which this post is missing: the first (intuitively wrong) net reconstructed represents the probabilities using 9 parameters (i.e. the nine rows of the various truth tables), whereas the second (intuitively right) represents the probabilities using 8. That means the second model uses fewer bits; the distribution is more compressed by the model. So the "true" network is favored even before we get into interventions.

Is this always going to be the case?  I feel like the answer is "not always", but I have no empirical data or theoretical argument here.
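To make the parameter-counting concrete: in a discrete Bayesian network, each node's conditional probability table contributes (cardinality − 1) × product(parent cardinalities) free parameters.  Here's a minimal sketch of that count (these are illustrative graphs of my own, not the post's actual networks): for three binary variables, a chain A → B → C needs 1 + 2 + 2 = 5 parameters, while a collider A → C ← B needs 1 + 1 + 4 = 6, even though both are three-node DAGs.  So different structures over the same variables genuinely can differ in parameter count.

```python
from math import prod

def count_params(cards, parents):
    """Free parameters of a discrete Bayesian network.

    cards: {node: cardinality}; parents: {node: list of parent nodes}.
    Each node's CPT has (cardinality - 1) independent entries per
    combination of parent values.
    """
    return sum(
        (cards[node] - 1) * prod(cards[p] for p in parents[node])
        for node in cards
    )

binary = {"A": 2, "B": 2, "C": 2}

# Chain A -> B -> C: 1 + 2 + 2 = 5 free parameters.
chain = {"A": [], "B": ["A"], "C": ["B"]}
# Collider A -> C <- B: 1 + 1 + 4 = 6 free parameters.
collider = {"A": [], "B": [], "C": ["A", "B"]}

print(count_params(binary, chain))     # 5
print(count_params(binary, collider))  # 6
```

(Note that `math.prod` of an empty sequence is 1, so root nodes contribute just cardinality − 1.)  Whether the intuitively causal structure always wins on this count is exactly the open question above.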

Comment by localdeity on A brief collection of Hinton's recent comments on AGI risk · 2023-05-11T23:09:41.753Z · LW · GW

The guy is 75 years old.  Many people would have retired 10+ years ago.  Any effort he's putting in is supererogatory as far as I'm concerned.  One can hope for more, of course, but let there be no hint of obligation.

Comment by localdeity on 10 great reasons why Lex Fridman should invite Eliezer and Robin to re-do the FOOM debate on his podcast · 2023-05-10T21:56:22.633Z · LW · GW

Explicitly asking for upvotes is probably bad internet etiquette.  (And it doesn't really help if one thinks "it's justified in this case".  I'm sure lots of e.g. religious people would think the same.)  It so happens that there was a post about this on the EA Forum recently, which among other things describes adjacent approaches that are less bad: