sarahconstantin's Shortform

post by sarahconstantin · 2024-10-01T16:24:17.329Z · LW · GW · 74 comments


Comments sorted by top scores.

comment by sarahconstantin · 2024-10-07T15:58:01.224Z · LW(p) · GW(p)
  • Psychotic “delusions” are more about holding certain genres of idea with a socially inappropriate amount of intensity and obsession than holding a false idea. Lots of non-psychotic people hold false beliefs (eg religious people). And, interestingly, it is absolutely possible to hold a true belief in a psychotic way.
  • I have observed people during psychotic episodes get obsessed with the idea that social media was sending them personalized messages (quite true; targeted ads are real) or the idea that the nurses on the psych ward were lying to them (they were).
  • Preoccupations with the revelation of secret knowledge, with one’s own importance, with mistrust of others’ motives, and with influencing others' thoughts or being influenced by others' thoughts are classic psychotic themes.
    • And it can be a symptom of schizophrenia when someone’s mind gets disproportionately drawn to those themes. This is called being “paranoid” or “grandiose.”
    • But sometimes (and I suspect more often with more intelligent/self-aware people) the literal content of their paranoid or grandiose beliefs is true!
      • sometimes the truth really has been hidden!
      • sometimes people really are lying to you or trying to manipulate you!
      • sometimes you really are, in some ways, important! sometimes influential people really are paying attention to you!
      • of course people influence each other's thoughts -- not through telepathy but through communication!
    • a false psychotic-flavored thought is "they put a chip in my brain that controls my thoughts." a true psychotic-flavored thought is "Hollywood moviemakers are trying to promote progressive values in the public by implanting messages in their movies."
      • These thoughts can come from the same emotional drive, they are drawn from dwelling on the same theme of "anxiety that one's own thoughts are externally influenced", they are in a deep sense mere arbitrary verbal representations of a single mental phenomenon...
      • but if you take the content literally, then clearly one claim is true and one is false.
      • and a sufficiently smart/self-aware person will feel the "anxiety-about-mental-influence" experience, will search around for a thought that fits that vibe but is also true, and will come up with something a lot more credible than "they put a mind-control chip in my brain", but one that fundamentally comes from the same motive.
  • There’s an analogous, easier-to-recognize phenomenon with depression.
    • A depressed person’s mind is unusually drawn to obsessing over bad things. But this obviously doesn’t mean that no bad things are real or that no depressive’s depressing claims are true.
    • When a depressive literally believes they are already dead, we call that Cotard's Delusion, a severe form of psychotic depression. When they say "everybody hates me" we call it a mere "distorted thought". When they talk accurately about the heat death of the universe we call it "thermodynamics." But it's all coming from the same emotional place.
  • In general, mental illnesses, and mental states generally, provide a "tropism" towards thoughts that fit with certain emotional/aesthetic vibes.
    • Depression makes you dwell on thoughts of futility and despair
    • Anxiety makes you dwell on thoughts of things that can go wrong
    • Mania makes you dwell on thoughts of yourself as powerful or on the extreme importance of whatever you're currently doing
    • Paranoid psychosis makes you dwell on thoughts of mistrust, secrets, and influencing/being influenced
  • You can, to some extent, "filter" your thoughts (or the ones you publicly express) by insisting that they make sense. You still have a bias towards the emotional "vibe" you're disposed to gravitate towards; but maybe you don't let absurd claims through your filter even if they fit the vibe. Maybe you grudgingly admit the truth of things that don't fit the vibe but technically seem correct.
    • this does not mean that the underlying "tropism" or "bias" does not exist!!!
    • this does not mean that you believe things "only because they are true"!
    • in a certain sense, you are doing the exact same thing as the more overtly irrational person, just hiding it better!
      • the "bottom line" in terms of vibe has already been written, so it conveys no "updates" about the world
      • the "bottom line" in terms of details may still be informative because you're checking that part and it's flexible
  • "He's not wrong but he's still crazy" is a valid reaction to someone who seems to have a mental-illness-shaped tropism to their preoccupations.
    • eg if every post he writes, on a variety of topics, is negative and gloomy, then maybe his conclusions say more about him than about the truth concerning the topic;
      • he might still be right about some details but you shouldn't update too far in the direction of "maybe I should be gloomy about this too"
    • Conversely, "this sounds like a classic crazy-person thought, but I still separately have to check whether it's true" is also a valid and important move to make (when the issue is important enough to you that the extra effort is worth it). 
      • Just because someone has a mental illness doesn't mean every word out of their mouth is false!
      • (and of course this assumption -- that "crazy" people never tell the truth -- drives a lot of psychiatric abuse.)

link: https://roamresearch.com/#/app/srcpublic/page/71kfTFGmK

comment by davekasten · 2024-10-07T21:57:49.429Z · LW(p) · GW(p)

I once saw a video on Instagram of a psychiatrist recommending to other psychiatrists that they purchase ear scopes to check out their patients' ears, because:
1.  Apparently it is very common for folks with severe mental health issues to imagine that there is something in their ear (e.g., a bug, a listening device)
2.  Doctors usually just say "you are wrong, there's nothing in your ear" without looking
3.  This destroys trust, so he started doing cursory checks with an ear scope
4.  Far more often than he expected (I forget exactly, but something like 10-20%ish), there actually was something in the person's ear -- usually just earwax buildup, but occasionally something else like a dead insect -- that was indeed causing the sensation, and he gained a clinical pathway to addressing his patients' discomfort that he had previously lacked

comment by trevor (TrevorWiesinger) · 2024-10-08T05:17:49.320Z · LW(p) · GW(p)

This reminds me of dath ilan's hallucination diagnosis from page 38 of Yudkowsky and Alicorn's glowfic But Hurting People Is Wrong.

It's pretty far from meeting dath ilan's standard, though; in fact, an x-ray would be more than sufficient: anyone capable of putting something in someone's ear would obviously vastly prefer to place it somewhere harder to check, whereas nobody could defeat an x-ray machine, since metal parts are unavoidable.

This concern pops up in books on the Cold War (employees at every org and every company regularly suffer from mental illnesses at somewhere around their base rates, but things get complicated at intelligence agencies where paranoid/creative/adversarial people are rewarded and even influence R&D funding) and an x-ray machine cleanly resolved the matter every time.

comment by tailcalled · 2024-10-07T17:04:22.712Z · LW(p) · GW(p)

Tangential, but...

Schizophrenia is the archetypal definitely-biological mental disorder, but recently for reasons relevant to the above, I've been wondering if that is wrong/confused. Here's my alternate (admittedly kinda uninformed) model:

  • Psychosis is a biological state or neural attractor, which we can kind of symptomatically characterize, but which really can only be understood at a reductionistic level.
  • One of the symptoms/consequences of psychosis is getting extreme ideas at extreme amounts of intensity.
  • This symptom/consequence then triggers a variety of social dynamics that give classic schizophrenic-like symptoms such as, as you say, "preoccupation with the revelation of secret knowledge, with one’s own importance, with mistrust of others’ motives, and with influencing others' thoughts or being influenced by other's thoughts"

That is, if you suddenly get an extreme idea (e.g. that the fly that flapped past you is a sign from god that you should abandon your current life), you would expect dynamics like:

  • People get concerned for you and try to dissuade you, likely even conspiring in private to do so (and even if they're not conspiring, it can seem like a conspiracy). In response, it might seem appropriate to distrust them.
  • Or, if one interprets it as them just lacking the relevant information, one needs to develop some theory of why one has access to special information that they don't.
  • Or, if one is sympathetic to their concern, it would be logical to worry about one's thoughts getting influenced.

But these sorts of dynamics can totally be triggered by extreme beliefs without psychosis! This might also be related to how Enneagram type 5 (the rationalist type) is especially prone to schizophrenia-like symptoms.

(When I think "in a psychotic way", I think of the neurological disorder, but it seems like the way you use it in your comment is more like the schizophrenia-like social dynamic?)

  • In general, mental illnesses, and mental states generally, provide a "tropism" towards thoughts that fit with certain emotional/aesthetic vibes.
    • Depression makes you dwell on thoughts of futility and despair
    • Anxiety makes you dwell on thoughts of things that can go wrong
    • Mania makes you dwell on thoughts of yourself as powerful or on the extreme importance of whatever you're currently doing
    • Paranoid psychosis makes you dwell on thoughts of mistrust, secrets, and influencing/being influenced

Also tangential, this is sort of a "general factor" model of mental states. That often seems applicable, but recently my default interpretation of factor models has been that they tend to get at intermediary variables and not root causes.

Let's take an analogy with computer programs. If you look at the correlations in which sorts of processes run fast or slow, you might find a broad swathe of processes whose performance is highly correlated, because they are all predictably CPU-bound. However, when these processes are running slow, there will usually be some particular program that is exhausting the CPU and preventing the others from running. This problematic program can vary massively from computer to computer, so it is hard to predict or model in general, but often easy to identify in the particular case by looking at which program is most extreme.
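(An illustrative toy simulation of that analogy -- every number, name, and distribution below is invented for the sketch, not taken from the comment. It assumes each computer has exactly one CPU-hogging program that slows everything else down by a shared amount:)

```python
import random

def simulate_fleet(n_computers=300, n_processes=6, seed=0):
    """Toy model: on each computer, one 'hog' process exhausts the CPU.
    Every process's wall-clock runtime = its own baseline work plus a
    slowdown shared by the whole computer (the hog's CPU load)."""
    rng = random.Random(seed)
    runtimes, cpu_shares, hogs = [], [], []
    for _ in range(n_computers):
        hog = rng.randrange(n_processes)   # root cause: differs per computer
        load = rng.uniform(5.0, 10.0)      # how hard the hog loads the CPU
        # Everyone's runtime is inflated by the same shared load...
        runtimes.append([1.0 + rng.random() + load for _ in range(n_processes)])
        # ...but CPU accounting shows one extreme outlier per computer.
        cpu_shares.append([load if p == hog else rng.random()
                           for p in range(n_processes)])
        hogs.append(hog)
    return runtimes, cpu_shares, hogs

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

runtimes, cpu_shares, hogs = simulate_fleet()
# Any two processes' runtimes correlate strongly across computers -- the
# "general factor" -- even though it reflects an intermediary variable
# (total CPU load), not a single universal root cause.
r = pearson([row[0] for row in runtimes], [row[1] for row in runtimes])
# On any particular computer, though, the root cause is easy to find:
# it is the process with the most extreme CPU share.
found = [max(range(len(row)), key=row.__getitem__) for row in cpu_shares]
accuracy = sum(f == h for f, h in zip(found, hogs)) / len(hogs)
```

In this sketch the strong runtime correlations emerge purely from the shared load, while the culprit varies from machine to machine -- matching the point that a factor model captures the intermediary variable, and the root cause is found by looking at the most extreme particular case.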

comment by Dagon · 2024-10-07T19:42:50.608Z · LW(p) · GW(p)

Thank you, this is interesting and important.  I worry that it overstates similarity of different points on a spectrum, though.

in a certain sense, you are doing the exact same thing as the more overtly irrational person, just hiding it better!

In a certain sense, yes.  In other, critical senses, no.  This is a case where quantitative differences are big enough to be qualitative.  When someone is clinically delusional, there are a few things which distinguish it from the more common wrong ideas.  Among them, the inability to shut up about it when it's not relevant, and the large negative impact on relationships and daily life.  For many many purposes, "hiding it better" is the distinction that matters.

I fully agree that "He's not wrong but he's still crazy" is valid (though I'd usually use less-direct phrasing).  It's pretty rare that "this sounds like a classic crazy-person thought, but I still separately have to check whether it's true" happens to me, but it's definitely not never.

comment by kave · 2024-10-07T18:31:41.280Z · LW(p) · GW(p)

the idea that social media was sending them personalized messages

I imagine they were obsessed with false versions of this idea, rather than obsessing about targeted advertising?

comment by sarahconstantin · 2024-10-08T03:36:45.548Z · LW(p) · GW(p)

no! it sounded like "typical delusion stuff" at first until i listened carefully and yep that was a description of targeted ads.

comment by AprilSR · 2024-10-07T21:39:43.877Z · LW(p) · GW(p)

For a while I ended up spending a lot of time thinking about specifically the versions of the idea where I couldn't easily tell how true they were... which I suppose I do think is the correct place to be paying attention to?

comment by Amalthea (nikolas-kuhn) · 2024-10-07T19:18:13.342Z · LW(p) · GW(p)

One has to be a bit careful with this though. E.g. someone experiencing or having experienced harassment may have a seemingly pathological obsession with the circumstances and people involved in the situation, but it may be completely proportional to the way that it affected them - it only seems pathological to people who didn't encounter the same issues.

comment by Seth Herd · 2024-10-11T17:47:53.636Z · LW(p) · GW(p)

If it's not serving them, it's pathological by definition, right?

So obsessing about exactly those circumstances and types of people could be pathological if it's done beyond what will actually protect them in the future, factoring in the emotional cost of all that obsessing.

Of course we can't just stop patterns of thought as soon as we decide they're pathological. But deciding it doesn't serve me so I want to change it is a start.

Yes, it's proportional to the way it affected them - but most of the effect is in the repetition of thoughts about the incident and fear of future similar experiences. Obsessing about unpleasant events is natural, but it often seems pretty harmful itself.

Trauma is a horrible thing. There's a delicate balance between supporting someone's right and tendency to obsess over their trauma while also supporting their ability to quit re-traumatizing themselves by simulating their traumatic event repeatedly.

comment by Amalthea (nikolas-kuhn) · 2024-10-11T18:45:14.330Z · LW(p) · GW(p)

If it's not serving them, it's pathological by definition, right?

This seems way too strong, otherwise any kind of belief or emotion that is not narrowly in pursuit of your goals is pathological.

I completely agree that it's important to strike a balance between revisiting the incident and moving on.

but most of the effect is in the repetition of thoughts about the incident and fear of future similar experiences.

This seems partially wrong. The thoughts are usually consequences of the damage that is done, and they can be unhelpful in their own right, but they are not usually the problem. E.g. if you know that X is an abuser and people don't believe you, I wouldn't go so far as saying your mental dissonance about it is the problem.

comment by Michael Roe (michael-roe) · 2024-10-08T16:54:34.048Z · LW(p) · GW(p)

Some psychiatry textbooks classify “overvalued ideas” as distinct from psychotic delusions.

Depending on how wide you make the definition, a whole ragbag of diagnoses from the DSM-5 amount to overvalued ideas (e.g., the anorexia nervosa patient's overvalued belief that they are fat).

comment by sarahconstantin · 2024-10-10T14:32:16.066Z · LW(p) · GW(p)
  • “we” can’t steer the future.
  • it’s wrong to try to control people or stop them from doing locally self-interested & non-violent things in the interest of “humanity’s future”, in part because this is so futile.
    • if the only way we survive is if we coerce people to make a costly and painful investment in a speculative idea that might not even work, then we don’t survive! you do not put people through real pain today for a “someday maybe!” This applies to climate change, AI x-risk, and socially-conservative cultural reform.
  • most cultures and societies in human history have been so bad, by my present values, that I’m not sure they’re not worse than extinction, and we should expect that most possible future states are similarly bad;
  • history clearly teaches us that civilizations and states collapse (on timescales of centuries) and the way to bet is that ours will as well, but it’s kind of insane hubris to think that this can be prevented;
  • the literal species Homo sapiens is pretty resilient and might avoid extinction for a very long time, but have you MET Homo sapiens? this is cold fucking comfort! (see e.g. C. J. Cherryh’s vision in 40,000 in Gehenna for a fictional representation not far from my true beliefs — we are excellent at adaptation and survival but when we “survive” this often involves unimaginable harshness and cruelty, and changing into something that our ancestors would not have liked at all.)
  • identifying with species-survival instead of with the stuff we value now is popular among the thoughtful but doesn’t make any sense to me;
  • in general it does not make sense, to me, to compromise on personal values in order to have more power/influence. you will be able to cause stuff to happen, but who cares if it’s not the stuff you want?
  • similarly, it does not make sense to consciously optimize for having lots of long-term descendants. I love my children; I expect they’ll love their children; but go too many generations out and it’s straight-up fantasyland. My great-grandparents would have hated me.  And that’s still a lot of shared culture and values! Do you really have that much in common with anyone from five thousand years ago?
  • Evolution is not your friend. God is not your friend. Everything worth loving will almost certainly perish. Did you expect it to last forever?
  • “I love whatever is best at surviving” or “I love whatever is strongest” means you don’t actually care what it’s like. It means you have no loyalty and no standards. It means you don’t care so much if the way things turn out is hideous, brutal, miserable, abusive… so long as it technically “is alive” or “wins”. Fuck that.
  • I despise sour grapes. If the thing I want isn’t available, I’m not going to pretend that what is available is what I want.
  • I am not going to embrace the “realistic” plan of allying with something detestable but potent. There is always an alternative, even if the only alternative is “stay true to your dreams and then get clobbered.”

Link to this on my Roam

comment by tailcalled · 2024-10-10T20:21:12.315Z · LW(p) · GW(p)
  • it’s wrong to try to control people or stop them from doing locally self-interested & non-violent things in the interest of “humanity’s future”, in part because this is so futile.
    • if the only way we survive is if we coerce people to make a costly and painful investment in a speculative idea that might not even work, then we don’t survive! you do not put people through real pain today for a “someday maybe!” This applies to climate change,  AI x-risk, and socially-conservative cultural reform.

How does "this is so futile" square with the massive success of taxes and criminal justice? From what I've heard, states have managed to reduce murder rates by 50x. Obviously that's stopping people from something violent rather than non-violent, but what's the aspect of violence that makes it relevant? Or e.g. how about taxes which fund change to renewable energy? The main argument for socially-conservative cultural reform is fertility, but what about taxes that fund kindergartens, they sort of seem to have a similar function?

The key trick to make it correct to try to control people or stop them is to be stronger than them. 

comment by Raemon · 2024-10-10T19:01:18.371Z · LW(p) · GW(p)

I think this prompts some kind of directional update in me. My paraphrase of this is:

  • it’s actually pretty ridiculous to think you can steer the future
  • It’s also pretty ridiculous to choose to identify with what the future is likely to be.

Therefore…. Well, you don’t spell out your answer. My answer is "I should have a personal meaning-making resolution to 'what would I do if those two things are both true,' even if one of them turns out to be false, so that I can think clearly about whether they are true."

I’ve done a fair amount of similar meaning-making work through the lens of Solstice 2022 and 2023. But that was more through the lens of ‘nearterm extinction’ than ‘inevitability of value loss’, which does feel like a notably different thing.

So it seems worth doing some thinking and pre-grieving about that.

I of course have some answers to ‘why value loss might not be inevitable’, but it’s not something I’ve yet thought about through an unclouded lens.

comment by sarahconstantin · 2024-10-13T22:10:43.172Z · LW(p) · GW(p)

Therefore, do things you'd be in favor of having done even if the future will definitely suck. Things that are good today, next year, fifty years from now... but not like "institute theocracy to raise birth rates", which is awful today even if you think it might "save the world".

comment by Raemon · 2024-10-13T22:23:38.795Z · LW(p) · GW(p)

Ah yeah that’s a much more specific takeaway than I’d been imagining.

comment by Chris_Leong · 2024-10-11T03:26:21.826Z · LW(p) · GW(p)

I honestly feel that the only appropriate response is something along the lines of "fuck defeatism"[1].

This comment isn't targeted at you, but at a particular attractor in thought space.

Let me try to explain why I think rejecting this attractor is the right response rather than engaging with it.

I think it's mostly that I don't think that talking about things at this level of abstraction is useful. It feels much more productive to talk about specific plans. And if you have a general, high-abstraction argument that plans in general are useless, but I have a specific argument why a specific plan is useful, I know which one I'd go with :-).

Don't get me wrong, I think that if someone struggles for a certain amount of time to try to make a difference and just hits wall after wall, then at some point they have to call it. But "never start" and "don't even try" are completely different.

It's also worth noting, that saving the world is a team sport. It's okay to pursue a plan that depends on a bunch of other folk stepping up and playing their part.

  1. ^

    I would also suggest that this is the best way to respond to depression rather than "trying to argue your way out of it".

comment by sarahconstantin · 2024-10-11T13:54:18.847Z · LW(p) · GW(p)

I'm not defeatist! I'm picky.

And I'm not talking specifics because i don't want to provoke argument.

comment by Myron Hedderson (myron-hedderson) · 2024-10-11T14:14:28.051Z · LW(p) · GW(p)

We can't steer the future

What about influencing? If, in order for things to go OK, human civilization must follow a narrow path which I individually need to steer us down, we're 100% screwed because I can't do that. But I do have some influence. A great deal of influence over my own actions (I'm resisting the temptation to go down a sidetrack about determinism, assuming you're modeling humans as things that can make meaningful choices), substantial influence over the actions of those close to me, some influence over my acquaintances, and so on until very extremely little (but not 0) influence over humanity as a whole. I also note that you use the word "we", but I don't know who the "we" is. Is it everyone? If so, then everyone collectively has a great deal of say about how the future will go, if we collectively can coordinate. Admittedly, we're not very good at this right now, but there are paths to developing this civilizational skill further than we currently have. So maybe the answer to "we can't steer the future" is "not yet we can't, at least not very well"?
 

  • it’s wrong to try to control people or stop them from doing locally self-interested & non-violent things in the interest of “humanity’s future”, in part because this is so futile.
    • if the only way we survive is if we coerce people to make a costly and painful investment in a speculative idea that might not even work, then we don’t survive! you do not put people through real pain today for a “someday maybe!” This applies to climate change,  AI x-risk, and socially-conservative cultural reform.

Agree, mostly. The steering I would aim for would be setting up systems wherein locally self-interested and non-violent things people are incentivized to do have positive effects for humanity's future. In other words, setting up society such that individual and humanity-wide effects are in the same direction with respect to some notion of "goodness", rather than individual actions harming the group, or group actions harming or stifling the individual. We live in a society where we can collectively decide the rules of the game, which is a way of "steering" a group. I believe we should settle on a ruleset where individual short-term moves that seem good lead to collective long-term outcomes that seem good. Individual short-term moves that clearly lead to bad collective long-term outcomes should be disincentivized, and if the effects are bad enough then coercive prevention does seem warranted (e.g., a SWAT team to prevent a mass shooting). And similarly for groups stifling individuals' ability to do things that seem to them to be good for them in the short term. And rules that have perverse incentive effects that are harmful to the individual, the group, or both? Definitely out. This type of system design is like a haiku: very restricted in what design choices are permissible, but not impossible in principle. Seems worth trying because if successful, everything is good with no coercion. If even a tiny subsystem can be designed (or the current design tweaked) in this way, that by itself is good.
And the right local/individual move to influence the systems of which you are a part towards that state, as a cognitively-limited individual who can't hold the whole of complex systems in their mind and accurately predict the effect of proposed changes out into the far future, might be as simple as saying "in this instance, you're stifling the individual" and "in this instance you're harming the group/long-term future" wherever you see it, until eventually you get a system that does neither. Like arriving at a haiku by pointing out every time the rules of haiku construction are violated.

comment by Tao Lin (tao-lin) · 2024-10-10T23:24:11.254Z · LW(p) · GW(p)

I disagree a lot! Many things have gotten better! Are suffrage, abolition, democracy, property rights, etc. not significant? All the random stuff that e.g. The Better Angels of Our Nature claims has gotten better.

Either things have improved in the past or they haven't, and people trying to "steer the future" either have or haven't been influential in those improvements. I think things have improved, and I think there's definitely not strong evidence that trying to steer the future was always useless. Because trying to steer the future is very important and motivating, I try to do it.

Yes the counterfactual impact of you individually trying to steer the future may or may not be insignificant, but people trying to steer the future is better than no one doing that!

comment by sarahconstantin · 2024-10-13T22:04:22.526Z · LW(p) · GW(p)

"Let's abolish slavery," when proposed, would make the world better now as well as later.

I'm not against trying to make things better!

I'm against doing things that are strongly bad for present-day people to increase the odds of long-run human species survival.

comment by tailcalled · 2024-10-10T19:41:14.418Z · LW(p) · GW(p)
  • “I love whatever is best at surviving” or “I love whatever is strongest” means you don’t actually care what it’s like. It means you have no loyalty and no standards. It means you don’t care so much if the way things turn out is hideous, brutal, miserable, abusive… so long as it technically “is alive” or “wins”. Fuck that.

Proposal: For any given system, there's a destiny based on what happens when it's developed to its full extent. Sight is an example of this, where both human eyes and octopus eyes and cameras have ended up using lenses to steer light, despite being independent developments.

"I love whatever is the destiny" is, as you say, no loyalty and no standards. But, you can try to learn what the destiny is, and then on the basis of that decide whether to love or oppose it.

Plants and solar panels are the natural destiny for earthly solar energy. Do you like solarpunk? If so, good news, you can love the destiny, not because you love whatever is the destiny, but because your standards align with the destiny.

comment by Raemon · 2024-10-10T20:19:25.932Z · LW(p) · GW(p)

People who love solarpunk don't obviously love computronium dyson spheres tho

comment by tailcalled · 2024-10-10T20:30:49.249Z · LW(p) · GW(p)

That is true, though:

1) Regarding tiling the universe with computronium as destiny is Gnostic [LW · GW] heresy.

2) I would like to learn more about the ecology of space infrastructure. Intuitively it seems to me like the Earth is much more habitable than anywhere else, and so I would expect sarah's "this is so futile" point to actually be inverted when it comes to e.g. a Dyson sphere, where the stagnation-inducing worldwide regulation will by default be stronger than the entropic pressure.

More generally, I have a concept I call the "infinite world approximation", which I think held until ~WWI. Under this approximation, your methods have to be robust against arbitrary adversaries, because they could invade from parts of the ecology you know nothing about. However, this approximation fails for Earth-scale phenomena, since Earth-scale organizations could shoot down any attempt at space colonization.

comment by Eli Tyre (elityre) · 2024-10-13T01:40:54.519Z · LW(p) · GW(p)

Are you saying this because you worship the sun?

comment by tailcalled · 2024-10-13T07:48:24.946Z · LW(p) · GW(p)

I would more say the opposite: Henri Bergson (better known for inventing vitalism) convinced me that there ought to be a simple explanation for the forms life takes, and so I spent a while performing root cause analysis on that, and ended up with the sun as the creator.

comment by Unnamed · 2024-10-11T18:01:01.854Z · LW(p) · GW(p)

This post reads like it's trying to express an attitude or put forward a narrative frame, rather than trying to describe the world.

Many of these claims seem obviously false, if I take them at face value and take a moment to consider what they're claiming and whether it's true.

e.g., On the first two bullet points it's easy to come up with counterexamples. Some successful attempts to steer the future, by stopping people from doing locally self-interested & non-violent things, include: patent law ("To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries") and banning lead in gasoline. As well as some others that I now see that other commenters have mentioned.

comment by Said Achmiz (SaidAchmiz) · 2024-10-13T22:42:08.636Z · LW(p) · GW(p)

history clearly teaches us that civilizations and states collapse (on timescales of centuries) and the way to bet is that ours will as well, but it’s kind of insane hubris to think that this can be prevented;

It seems like it makes some difference whether our civilization collapses the way that the Roman Empire collapsed, the way that the British Empire collapsed, or the way that the Soviet Union collapsed. “We must prevent our civilization from ever collapsing” is clearly an implausible goal, but “we should ensure that a successor structure exists and is not much worse than what we have now” seems rather more reasonable, no?

comment by Mitchell_Porter · 2024-10-10T20:44:13.029Z · LW(p) · GW(p)

Is it too much to declare this the manifesto of a new philosophical school, Constantinism?

Replies from: sarahconstantin
comment by sarahconstantin · 2024-10-10T22:48:53.397Z · LW(p) · GW(p)

wait and see if i still believe it tomorrow!

Replies from: sarahconstantin
comment by sarahconstantin · 2024-10-15T16:03:13.884Z · LW(p) · GW(p)

I don't think it was articulated quite right -- it's more negative than my overall stance (I wrote it when unhappy) and a little too short-termist.

I do still believe that the future is unpredictable, that we should not try to "constrain" or "bind" all of humanity forever using authoritarian means, and that there are many many fates worse than death and we should not destroy everything we love for "brute" survival.

And, also, I feel that transience is normal and only a bit sad. It's good to save lives, but mortality is pretty "priced in" to my sense of how the world works. It's good to work on things that you hope will live beyond you, but Dark Ages and collapses are similarly "priced in" as normal for me. Sara Teasdale: "You say there is no love, my love, unless it lasts for aye; Ah folly, there are episodes far better than the play!" If our days are as a passing shadow, that's not that bad; we're used to it.

I worry that people who are not ok with transience may turn themselves into monsters so they can still "win" -- even though the meaning of "winning" is so changed it isn't worth it any more.

Replies from: nc
comment by nc · 2024-10-16T15:07:07.075Z · LW(p) · GW(p)

I do think this comes back to the messages in On Green [LW · GW] and also why the post went down like a cup of cold sick - rationality is about winning [LW · GW]. Obviously nobody on LW wants to "win" in the sense you describe, but on the margin I think they'd choose more winning over more harmony.

The future will probably contain less of the way of life I value (or something entirely orthogonal), but then that's the nature of things.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2024-10-16T23:20:42.627Z · LW(p) · GW(p)

I think 2 cruxes IMO dominate the discussion a lot that are relevant here:

  1. Will a value lock-in event happen, especially soon, such that once values are locked in, they are basically impossible to change?

  2. Is something like the vulnerable world hypothesis correct about technological development?

If you believe 1 or 2, I can see why people would disagree with Sarah Constantin's statement here.

comment by ZY (AliceZ) · 2024-10-11T00:11:25.798Z · LW(p) · GW(p)

I have been having some similar thoughts on the main points here for a while and thanks for this.

I guess to me what needs attention is when people do things along the lines of "benefit themselves and harm other people". That harm has a pretty strict definition, though I know we may always be able to give borderline examples. This definitely includes the abuse of power in our current society and culture, and any current risks. (For example, restricting to just AI, content warning: https://www.iwf.org.uk/media/q4zll2ya/iwf-ai-csam-report_public-oct23v1.pdf. This is very sad to see.) On the other hand, with regard to climate change (which can also be current) or AI risks, we should also be concerned when corporations or developers neglect known risks or pursue science and development irresponsibly. I think it is not wrong to work on these; I just don't believe in "do not solve the other current risks and only work on future risks."

Regarding some comments saying our society is "getting better": sure, but the baseline is a very low bar (slavery, for example). There are still many, many, many examples in different societies of how things are still very systematically messed up.

comment by StartAtTheEnd · 2024-10-12T11:57:53.543Z · LW(p) · GW(p)

You seem to dislike reality. Could it not be that the worldview which clashes with reality is wrong (or rather, in the wrong), rather than reality being wrong/in the wrong? For instance that "nothing is forever" isn't a design flaw, but one of the required properties that a universe must have in order to support life?

comment by sarahconstantin · 2024-10-28T17:33:57.651Z · LW(p) · GW(p)

"weak benevolence isn't fake": https://roamresearch.com/#/app/srcpublic/page/ic5Xitb70

  • there's a class of statements that go like:
    • "fair-weather friends" who are only nice to you when it's easy for them, are not true friends at all
    • if you don't have the courage/determination to do the right thing when it's difficult, you never cared about doing the right thing at all
    • if you sometimes engage in motivated cognition or are sometimes intellectually lazy/sloppy, then you don't really care about truth at all
    • if you "mean well" but don't put in the work to ensure that you're actually making a positive difference, then your supposed "well-meaning" intentions were fake all along
  • I can see why people have these views.
    • if you actually need help when you're in trouble, then "fair-weather friends" are no use to you
    • if you're relying on someone to accomplish something, it's not enough for them to "mean well", they have to deliver effectively, and they have to do so consistently. otherwise you can't count on them.
    • if you are in an environment where people constantly declare good intentions or "well-meaning" attitudes, but most of these people are not people you can count on, you will find yourself caring a lot about how to filter out the "posers" and "virtue signalers" and find out who's true-blue, high-integrity, and reliable.
  • but I think it's literally false and sometimes harmful to treat "weak"/unreliable good intentions as absolutely worthless.
    • not all failures are failures to care enough/try hard enough/be brave enough/etc.
      • sometimes people legitimately lack needed skills, knowledge, or resources!
      • "either I can count on you to successfully achieve the desired outcome, or you never really cared at all" is a long way from true.
      • even the more reasonable, "either you take what I consider to be due/appropriate measures to make sure you deliver, or you never really cared at all" isn't always true either!
        • some people don't know how to do what you consider to be due/appropriate measures
        • some people care some, but not enough to do everything you consider necessary
        • sometimes you have your own biases about what's important, and you really want to see people demonstrate a certain form of "showing they care" otherwise you'll consider them negligent, but that's not actually the most effective way to increase their success rate
    • almost everyone has a finite amount of effort they're willing to put into things, and a finite amount of cost they're willing to pay. that doesn't mean you need to dismiss the help they are willing and able to provide.
      • as an extreme example, do you dismiss everybody as "insufficiently committed" if they're not willing to die for the cause? or do you accept graciously if all they do is donate $50?
      • "they only help if it's fun/trendy/easy/etc" -- ok, that can be disappointing, but is it possible you should just make it fun/trendy/easy/etc? or just keep their name on file in case a situation ever comes up where it is fun/trendy/easy and they'll be helpful then?
    • it's harmful to apply this attitude to yourself, saying "oh I failed at this, or I didn't put enough effort in to ensure a good outcome, so I must literally not care about ideals/ethics/truth/other people."
      • like...you do care any amount. you did, in fact, mean well.
        • you may have lacked skill;
        • you may have not been putting in enough effort;
        • or maybe you care somewhat but not as much as you care about something else
        • but it's probably not accurate or healthy to take a maximally-cynical view of yourself where you have no "noble" motives at all, just because you also have "ignoble" motives (like laziness, cowardice, vanity, hedonism, spite, etc).
          • if you have a flicker of a "good intention" to help people, make the world a better place, accomplish something cool, etc, you want to nurture it, not stomp it out as "probably fake".
          • your "good intentions" are real and genuinely good, even if you haven't always followed through on them, even if you haven't always succeeded in pursuing them.
          • you don't deserve "credit" for good intentions equal to the "credit" for actually doing a good thing, but you do deserve any credit at all.
          • basic behavioral "shaping" -- to get from zero to a complex behavior, you have to reward very incremental simple steps in the right direction.
            • e.g. if you wish you were "nicer to people", you may have to pat yourself on the back for doing any small acts of kindness, even really "easy" and "trivial" ones, and notice & make part of your self-concept any inclinations you have to be warm or helpful.
            • "I mean well and I'm trying" has to become a sentence you can say with a straight face. and your good intentions will outpace your skills so you have to give yourself some credit for them.
    • it may be net-harmful to create a social environment where people believe their "good intentions" will be met with intense suspicion.
      • it's legitimately hard to prove that you have done a good thing, particularly if what you're doing is ambitious and long-term.
      • if people have the experience of meaning well and trying to do good but constantly being suspected of insincerity (or nefarious motives), this can actually shift their self-concept from "would-be hero" to "self-identified villain"
        • which is bad, generally
          • at best, identifying as a villain doesn't make you actually do anything unethical, but it makes you less effective, because you preemptively "brace" for hostility from others instead of confidently attracting allies
          • at worst, it makes you lean into legitimately villainous behavior
      • OTOH, skepticism is valuable, including skepticism of people's motives.
      • but it can be undesirable when someone is placed in a "no-win situation", where from their perspective "no matter what I do, nobody will believe that I mean well, or give me any credit for my good intentions."
      • if you appreciate people for their good intentions, sometimes that can be a means to encourage them to do more. it's not a guarantee, but it can be a starting point for building rapport and starting to persuade. people often want to live up to your good opinion of them.
Replies from: johnswentworth, Algon
comment by johnswentworth · 2024-10-29T03:38:29.220Z · LW(p) · GW(p)

... this can actually shift their self-concept from "would-be hero" to "self-identified villain"

  • which is bad, generally
    • at best, identifying as a villain doesn't make you actually do anything unethical, but it makes you less effective, because you preemptively "brace" for hostility from others instead of confidently attracting allies
    • at worst, it makes you lean into legitimately villainous behavior

Sounds like it's time for a reboot of the ol' "join the dark side" essay.

Replies from: Raemon
comment by Raemon · 2024-10-29T20:41:30.626Z · LW(p) · GW(p)

I want to register in advance, I have qualms I’d be interested in talking about. (I think they are at least one level more interesting than the obvious ones, and my relationship with them is probably at least one level more interesting than the obvious relational stance)

comment by Algon · 2024-10-29T22:22:14.741Z · LW(p) · GW(p)

it may be net-harmful to create a social environment where people believe their "good intentions" will be met with intense suspicion.

The picture I get of Chinese culture from their fiction makes me think China is kinda like this. A recurrent trope was "If you do some good deeds, like offering free medicine to the poor, and don't do a perfect job, like treating everyone who says they can't afford medicine, then everyone will castigate you for only wanting to seem good. So don't do good." Another recurrent trope was "it's dumb, even wrong, to be a hero/you should be a villain." (One annoying variant is "kindness to your enemies is cruelty to your allies", which is used to justify pointless cruelty.) I always assumed this was a cultural antibody formed in response to communists doing terrible things in the name of the common good.

comment by sarahconstantin · 2024-10-25T17:49:18.783Z · LW(p) · GW(p)

links 10/25/24: https://roamresearch.com/#/app/srcpublic/page/10-25-2024

 

comment by sarahconstantin · 2024-11-12T19:34:31.865Z · LW(p) · GW(p)

neutrality (notes towards a blog post): https://roamresearch.com/#/app/srcpublic/page/Ql9YwmLas

  • "neutrality is impossible" is sort-of-true, actually, but not a reason to give up.
    • even a "neutral" college class (let's say a standard algorithms & data structures CS class) is non-neutral relative to certain beliefs
      • some people object to the structure of universities and their classes to begin with;
      • some people may object on philosophical grounds to concepts that are unquestionably "standard" within a field like computer science.
      • some people may think "apolitical" education is itself unacceptable.
        • to consider a certain set of topics "political" and not mention them in the classroom is, implicitly, to believe that it is not urgent to resolve or act on those issues (at least in a classroom context), and therefore it implies some degree of acceptance of the default state of those issues.
      • our "neutral" CS class is implicitly taking a stand on certain things and in conflict with certain conceivable views. but, there's a wide range of views, including (I think) the vast majority of the actual views of relevant parties like students and faculty, that will find nothing to object to in the class.
    • we need to think about neutrality in more relative terms:
      • what rule are you using, and what things are you claiming it will be neutral between?
  • what is neutrality anyway and when/why do you want it?
    • neutrality is a type of tactic for establishing cooperation between different entities.
      • one way (not the only way) to get all parties to cooperate willingly is to promise they will be treated equally.
      • this is most important when there is actual uncertainty about the balance of power.
        • eg the Dutch Republic was the first European polity to establish laws of religious tolerance, because it happened to be roughly evenly divided between multiple religions and needed to unite to win its independence.
    • a system is neutral towards things when it treats them the same.
      • there are lots of ways to treat things the same:
        • "none of these things belong here"
          • eg no religion in "public" or "secular" spaces
            • is the "public secular space" the street? no-hijab rules?
            • or is it the government? no 10 Commandments in the courthouse?
        • "each of these things should get equal treatment"
          • eg Fairness Doctrine
        • "we will take no sides between these things; how they succeed or fail is up to you"
          • e.g. "marketplace of ideas", "colorblindness"
    • one can always ask, about any attempt at procedural neutrality:
      • what things does it promise to be neutral between?
        • are those the right or relevant things to be neutral on?
      • to what degree, and with what certainty, does this procedure produce neutrality?
        • is it robust to being intentionally subverted?
    • here and now, what kind of neutrality do we want?
      • thanks to the Internet, we can read and see all sorts of opinions from all over the world. a wider array of worldviews are plausible/relevant/worth-considering than ever before. it's harder to get "on the same page" with people because they may have come from very different informational backgrounds.
      • even tribes are fragmented. even people very similar to one another can struggle to synch up and collaborate, except in lowest-common-denominator ways that aren't very productive.
      • narrowing things down to US politics, no political tribe or ideology is anywhere close to a secure monopoly. nor are "tribes" united internally.
      • we have relied, until now, on a deep reserve of "normality" -- apolitical, even apathetic, Just The Way Things Are. In the US that means, people go to work at their jobs and get paid for it and have fun in their free time. 90's sitcom style.
        • there's still more "normality" out there than culture warriors tend to believe, but it's fragile. As soon as somebody asks "why is this the way things are?" unexamined normality vanishes.
          • to the extent that the "normal" of the recent past was functional, this is a troubling development...but in general the operation of the mind is a good thing!
          • we just have more rapid and broader idea propagation now.
            • why did "open borders" and "abolish the police" and "UBI" take off recently? because these are simple ideas with intuitive appeal. some % of people will think "that makes sense, that sounds good" once they hear of them. and now, way more people are hearing those kinds of ideas.
      • when unexamined normality declines, conscious neutrality may become more important.
        • conscious neutrality for the present day needs to be aware of the wide range of what people actually believe today, and avoid the naive Panglossianism of early web 2.0.
          • many people believe things you think are "crazy".
          • "democratization" may lead to the most popular ideas being hateful, trashy, or utterly bonkers.
          • on the other hand, depending on what you're trying to get done, you may very well need to collaborate with allies, or serve populations, whose views are well outside your comfort zone.
        • neutrality has things to offer:
          • a way to build trust with people very different from yourself, without compromising your own convictions;
            • "I don't agree with you on A, but you and I both value B, so I promise to do my best at B and we'll leave A out of it altogether"
          • a way to reconstruct some of the best things about our "unexamined normality" and place them on a firmer foundation so they won't disappear as soon as someone asks "why?"
  • a "system of the world" is the framework of your neutrality: aka it's what you're not neutral about.
    • eg:
      • "melting pot" multiculturalism is neutral between cultures, but does believe that they should mostly be cosmetic forms of diversity (national costumes and ethnic foods) while more important things are "universal" and shared.
      • democratic norms are neutral about who will win, but not that majority vote should determine the winner.
      • scientific norms are neutral about which disputed claims will turn out to be true, but not on what sorts of processes and properties make claims credible, and not about certain well-established beliefs
    • right now our system-of-the-world is weak.
      • a lot of it is literally decided by software affordances. what the app lets you do is what there is.
        • there's a lot that's healthy and praiseworthy about software companies and their culture, especially 10-20 years ago. but they were never prepared for that responsibility!
    • a stronger system-of-the-world isn't dogmatism or naivety.
      • were intellectuals of the 20th, the 19th, or the 18th centuries childish because they had more explicit shared assumptions than we do? I don't think so.
        • we may no longer consider some of their frameworks to be true
        • but having a substantive framework at all clearly isn't incompatible with thinking independently, recognizing that people are flawed, or being open to changing your mind.
        • "hedgehogs" or "eternalists" are just people who consider some things definitely true.
          • it doesn't mean they came to those beliefs through "blind faith" or have never questioned them.
          • it also doesn't mean they can't recognize uncertainty about things that aren't foundational beliefs.
        • operating within a strongly-held, assumed-shared worldview can be functional for making collaborative progress, at least when that worldview isn't too incompatible with reality.
      • mathematics was "non-rigorous", by modern standards, until the early 20th century; and much of today's mathematics will be considered "non-rigorous" if machine-verified proofs ever become the norm. but people were still able to do mathematics in centuries past, most of which we still consider true.
        • the fact that you can generate a more general framework, within which the old framework was a special case; or in which the old framework was an unprincipled assumption of the world being "nicely behaved" in some sense; does not mean that the old framework was not fruitful for learning true things.
          • sometimes, taking for granted an assumption that's not literally always true (but is true mostly, more-or-less, or in the practically relevant cases) can even be more fruitful than a more radically skeptical and general view.
    • an *intellectual* system-of-the-world is the framework we want to use for the "republic of letters", the sub-community of people who communicate with each other in a single conversational web and value learning and truth.
      • that community expanded with the printing press and again with the internet.
      • it is radically diverse in opinion.
      • it is not literally universal. not everybody likes to read and write; not everybody is curious or creative. a lot of the "most interesting people in the world" influence each other.
        • everybody in the old "blogosphere" was, fundamentally, the same sort of person, despite our constant arguments with each other; and not a common sort of person in the broader population; and we have turned out to be more influential than we have ever been willing to admit.
      • but I do think of it as a pretty big and growing tent, not confined to 300 geniuses or anything like that.
        • "The" conversation -- the world's symbolic information and its technological infrastructure -- is something anybody can contribute to, but of course some contribute more than others.
        • I think the right boundary to draw is around "power users" -- people who participate in that network heavily rather than occasionally.
          • e.g. not all academics are great innovators, but pretty much all of them are "power users" and "active contributors" to the world's informational web.
          • I'm definitely a power user; I expect a lot of my readers are as well.
      • what do we need to not be neutral about in this context? what belongs in an intellectual system-of-the-world?
        • another way of asking this question: about what premises are you willing to say, not just for yourself but for the whole world and for your children's children, "if you don't accept this premise then I don't care to speak to you or hear from you, forever?"
          • clearly that's a high standard!
          • I have many values differences with, say, the author of the Epic of Gilgamesh, but I still want to read it. And I want lots of other people to be able to read it! I do not want the mind that created it to be blotted out of memory.
          • that's the level of minimal shared values we're talking about here. What do we have in common with everyone who has an interest in maintaining and extending humanity's collective record of thought?
        • lack of barriers to entry is not enough.
          • the old Web 2.0 idea was "allow everyone to communicate with everyone else, with equal affordances." This is a kind of "neutrality" -- every user account starts out exactly the same, and anybody can make an account.
            • I think that's still an underrated principle. "literally anybody can speak to anybody else who wants to listen" was an invention that created a lot of valuable affordances. we forget how painfully scarce information was when that wasn't true!
          • the problem is that an information system only works when a user can find the information they seek. And in many cases, what the user is seeking is true information.
          • mechanisms intended to make high quality information (reliable, accurate, credible, complete, etc) preferentially discoverable, are also necessary
            • but they shouldn't just recapitulate potentially-biased gatekeeping.
              • we want evaluative systems that, at least a priori, an ancient Sumerian could look at and say "yep, sounds fair", even if the Sumerian wouldn't like the "truths" that come out on top in those systems.
              • we really can't be parochial here. social media companies "patched" the problem of misinformation with opaque, partisan side-taking, and they suffered for it.
              • how "meta" do we have to get about determining what counts as reliable or valid? well, more meta than just picking a winning side in an ongoing political dispute, that's for sure.
                • probably also more "meta" than handpicking certain sources as trustworthy, the way Wikipedia does.
    • if we want to preserve and extend knowledge, the "republic of letters" needs intentional stewardship of the world's information, including serious attempts at neutrality.
      • perceived bias, of course, turns people away from information sources.
      • nostalgia for unexamined normality -- "just be neutral, y'know, like we were when I was young" -- is not a credible offer to people who have already found your nostalgic "normal" wanting.
      • rigorous neutrality tactics -- "we have so structured this system so that it is impossible for anyone to tamper with it in a biased fashion" -- are better.
        • this points towards protocols.
          • h/t Venkatesh Rao
          • think: zero-knowledge proofs, formal verification, prediction markets, mechanism design, crypto-flavored governance schemes, LLM-enabled argument mapping, AI mechanistic-interpretability and "showing its work", etc
        • getting fancy with the technology here often seems premature when the "public" doesn't even want neutrality; but I don't think it actually is.
          • people don't know they want the things that don't yet exist.
          • the people interested in developing "provably", "rigorously", "demonstrably" impartial systems are exactly the people you want to attract first, because they care the most.
          • getting it right matters.
            • a poorly executed attempt either fizzles instantly; or it catches on but its underlying flaws start to make it actively harmful once it's widely culturally influential.
        • OTOH, premature disputes on technology and methods are undesirable.
          • remember there aren't very many of you/us. that is:
            • pretty much everybody who wants to build rigorous neutrality, no matter why they want it or how they want to implement it, is a potential ally here.
              • the simple fact of wanting to build a "better" world that doesn't yet exist is a commonality, not to be taken for granted. most people don't do this at all.
              • the "softer" side, mutual support and collegiality, are especially important to people whose dreams are very far from fruition. people in this situation are unusually prone to both burnout and schism. be warm and encouraging; it helps keep dreams alive.
              • also, the whole "neutrality" thing is a sham if we can't even engage with collaborators with different views and cultural styles.
            • also, "there aren't very many of us" in the sense that none of these envisioned new products/tools/institutions are really off the ground yet, and the default outcome is that none of them get there.
              • you are playing in a sandbox. the goal is to eventually get out of the sandbox.
              • you will need to accumulate talent, ideas, resources, and vibe-momentum. right now these are scarce, or scattered; they need to be assembled.
              • be realistic about influence.
                • count how many people are at the conference or whatever. how many readers. how many users. how many dollars. in absolute terms it probably isn't much. don't get pretentious about a "movement", "community", or "industry" before it's shown appreciable results.
                • the "adjacent possible" people to get involved aren't the general public, they're the closest people in your social/communication graph who aren't yet participating. why aren't they part of the thing? (or why don't you feel comfortable going to them?) what would you need to change to satisfy the people you actually know?
                  • this is a better framing than speculating about mass appeal.
Replies from: Viliam
comment by Viliam · 2024-11-13T09:46:00.386Z · LW(p) · GW(p)

even a "neutral" college class (let's say a standard algorithms & data structures CS class) is non-neutral relative to certain beliefs

Things that many people consider controversial: evolution, sex education, history. But even for mathematical lessons, you will often find a crackpot who considers a given topic controversial. (-1)×(-1) = 1? 0.999... = 1?
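For what it's worth, the second of those identities has a one-line derivation from the geometric series (a standard argument, though it presupposes defining an infinite decimal as a limit, which is exactly the step the crackpots dispute):

\[
0.\overline{9} \;=\; \sum_{n=1}^{\infty} \frac{9}{10^{n}} \;=\; 9 \cdot \frac{1/10}{1 - 1/10} \;=\; 1.
\]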

some people object to the structure of universities and their classes to begin with

In general, unschooling.

In my opinion, the important functionality of schools is: (1) separating reliable sources of knowledge from bullshit, (2) designing a learning path from "I know nothing" to "I am an expert" where each step only requires the knowledge of previous steps, (3) classmates and teachers to discuss the topic with.

Without these things, learning is difficult. If an autodidact stumbles on some pseudoscience in the library, even if they later figure out that it was bullshit, it is a huge waste of time. Picking up random books on a topic and finding out that I don't understand the things they expect me to already know is disappointing. Finding people interested in the same topic can be difficult.

But everything else about education is incidental. No need to walk into the same building. No need to only have classmates of exactly the same age. The learning path doesn't have to be linear; it could be a directed acyclic graph. Generally, no need to learn a specific topic at a specific age, although it makes sense to learn the topics that are prerequisites to a lot of knowledge as soon as possible. Grading is incidental; you need some feedback, but IMHO it would be better to split the knowledge into many small pieces, and grade each piece as "you get it" or "you don't".
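The prerequisite structure described above can be sketched as a small directed graph: a nonlinear web of topics still admits a linear study order, which is just a topological sort. The topic names below are hypothetical, purely for illustration:

```python
from graphlib import TopologicalSorter

# Hypothetical prerequisite graph: each topic maps to the set of
# topics that must be learned first. The graph is not linear
# ("probability" and "calculus" are independent branches).
prereqs = {
    "arithmetic": set(),
    "algebra": {"arithmetic"},
    "calculus": {"algebra"},
    "probability": {"algebra"},
    "statistics": {"probability", "calculus"},
}

# static_order() yields a valid study order: every topic appears
# only after all of its prerequisites.
order = list(TopologicalSorter(prereqs).static_order())
print(order)
```

Any cycle in the prerequisites (which would make the curriculum unlearnable) raises a `CycleError`, so the same machinery also checks that a proposed learning path is coherent.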

...and the conclusion of my thesis is that a good educational system would focus on the essentials, and be liberal about everything else. However, there are people who object to the very things I consider essential. An educational system that would seem incredibly free to me would still seem oppressive to them.

neutrality is a type of tactic for establishing cooperation between different entities.

That means you can have a system neutral towards selected entities (the ones you want in the coalition), but not others. For example, you can have religious tolerance towards an explicit list of churches.

This can lead to a meta-game where some members of the coalition try to kick someone out because they are no longer necessary, while other members strategically keep them in, not necessarily because they love them, but because "if they are kicked out today, tomorrow it could be me; better avoid this slippery slope".

Examples: Various cults in the USA that are obviously destructive but enjoy a lot of legal protection. Leftists establishing an exception for "Nazis", and then expanding the definition to make it apply to anyone they don't like. Similarly, the right calling everything they don't like "communism". And everyone on the internet calling everything "religion".

"we will take no sides between these things; how they succeed or fail is up to you"

Or the opposite of that: "the world is biased against X, therefore we move towards true neutrality by supporting X".

is it robust to being intentionally subverted?

So, situations like: the organization is nominally politically neutral, but the human at an important position has political preferences... so far that is normal and maybe unavoidable, but what if there are multiple humans like that, all with the same political preference? If they start acting in a biased way, is it possible for other members to point it out... without being accused, in turn, of "bringing politics" into the organization?

As soon as somebody asks "why is this the way things are?" unexamined normality vanishes.

They can easily create a subreddit r/anti-some-specific-way-things-are and now the opposition to the idea is forever a thing.

a way to reconstruct some of the best things about our "unexamined normality" and place them on a firmer foundation so they won't disappear as soon as someone asks "why?"

Basically, we need a "FAQ for normality". The old situation was that people who were interested in a topic knew why things are a certain way, and others didn't care. If you joined the group of people who are interested, sooner or later someone explained it to you in person.

But today, someone can make a popular YouTube video containing some false explanation, and overnight you have tons of people who are suddenly interested in the topic and believe a falsehood... and the people who know how things are just don't have the capacity to explain that to someone who lacks the fundamentals, believes a lot of nonsense, has strong opinions, and is typically very hostile to someone trying to correct them. So they just give up. But now we have the falsehood established as an "alternative truth", and the old process of teaching the newcomers no longer works.

The solution for "I don't have the capacity to communicate with so many ignorant and often hostile people" is to make an article or a YouTube video with an explanation, and just keep posting the link. Some people will pay attention, some people won't, but it no longer takes a lot of your time, and it protects you from the emotional impact.

There are things for which we don't have a good article to link, or the article is not widely known. We could fix that. In theory, school was supposed to be this kind of FAQ, but that doesn't work in a dynamic society where new things happen after you are out of school.

a lot of it is literally decided by software affordances. what the app lets you do is what there is.

Yeah, I often feel that having some kind of functionality would improve things, but the functionality is simply not there.

To some degree this is caused by companies having a monopoly on the ecosystem they create. For example, if I need some functionality for e-mail, I can make an open-source e-mail client that has it. (I think historically spam filters started like this.) If I need some functionality for Facebook... there is nothing I can do about it, other than leave Facebook, and coordinating that with others is a problem.

Sometimes this is on purpose. Facebook doesn't want me to be able to block the ads and spam, because they profit from it.

but having a substantive framework at all clearly isn't incompatible with thinking independently, recognizing that people are flawed, or being open to changing your mind.

Yeah, if we share a platform, we may start examining some of its assumptions, and maybe at some moment we will collectively update. But if everyone assumes something else, it's the Eternal September of civilization.

If we can't agree on what is addition, we can never proceed to discuss multiplication. And we will never build math.

I think the right boundary to draw is around "power users" -- people who participate in that network heavily rather than occasionally.

Sometimes this is reflected by the medium. For example, many people post comments on blogs, but only a small fraction of them write blogs. By writing a blog you join the "power users", and the beauty of it is that it is free for everyone, and yet most people keep themselves out voluntarily.

(A problem coming soon: many fake "power users" powered by LLMs.)

I have many values differences with, say, the author of the Epic of Gilgamesh, but I still want to read it.

There is a difference between reading for curiosity and reading to get reliable information. I may be curious about e.g. Aristotle's opinion on atoms, but I am not going to use it to study chemistry.

In some way, I treat some people's opinions as information about the world, and other people's opinions as information about them. Both are interesting, but in a different way. It is interesting to know my neighbor's opinion on astrology, but I am not using this information to update on astrology; I only use it to update on my neighbor.

So I guess I have two different lines: whether I care about someone as a person, and whether I trust someone as a source of knowledge. I listen to both, but I process the information differently.

this points towards protocols.

Thinking about the user experience, I think it would be best if the protocol already came with three default implementations: as a website, as a desktop application, and as a smartphone app.

A website doesn't require me to install anything; I just create an account and start using it. The downside is that the website has an owner, who can kick me out of the website. Also, I cannot verify the code; a malicious owner could probably take my password (unless we figure out some way to avoid this that isn't too inconvenient). Ideally, there would be multiple websites talking to each other, in a way that is as transparent for the user as possible.

A smartphone app, because that's what most people use most of the day, especially when they are outside.

A desktop app, because that provides most options for the (technical) power user. For example, it would be nice to keep an offline archive of everything I want, delete anything I no longer want, export and import data.

comment by sarahconstantin · 2024-10-14T23:48:24.518Z · LW(p) · GW(p)

links, 10/14/2024

  • https://milton.host.dartmouth.edu/reading_room/pl/book_1/text.shtml [[John Milton]]'s Paradise Lost, annotated online [[poetry]]
  • https://darioamodei.com/machines-of-loving-grace [[AI]] [[biotech]] [[Dario Amodei]] spends about half of this document talking about AI for bio, and I think it's the most credible "bull case" yet written for AI being radically transformative in the biomedical sphere.
    • one caveat is that I think if we're imagining a future with brain mapping, regeneration of macroscopic brain tissue loss, and understanding what brains are doing well enough to know why neurological abnormalities at the cell level produce the psychiatric or cognitive symptoms they do...then we probably can do brain uploading! it's really weird to single out this one piece as pie-in-the-sky science fiction when you're already imagining a lot of similarly ambitious things as achievable.
  • https://venture.angellist.com/eli-dourado/syndicate [[tech industry]] when [[Eli Dourado]] picks startups, they're at least not boring! i haven't vetted the technical viability of any of these, but he claims to do a lot of that sort of numbers-in-spreadsheets work.
  • https://forum.effectivealtruism.org/topics/shapley-values [? · GW] [[EA]] [[economics]] how do you assign credit (in a principled fashion) to an outcome that multiple people contributed to? Shapley values! It seems extremely hard to calculate in practice, and subject to contentious judgment calls about the assumptions you make, but maybe it's an improvement over raw handwaving.
  • https://gwern.net/maze [[Gwern Branwen]] digs up the "Mr. Young" studying maze-running techniques in [[Richard Feynman]]'s "Cargo Cult Science" speech. His name wasn't Young but Quin Fischer Curtis, and he was part of a psychology research program at UMich that published little and had little influence on the outside world, and so was "rebooted" and forgotten. Impressive detective work, though not a story with a very satisfying "moral".
  • https://en.m.wikipedia.org/wiki/Cary_Elwes [[celebrities]] [[Cary Elwes]] had an ancestor who was [[Charles Dickens]]' inspiration for Ebenezer Scrooge!
  • https://feministkilljoys.com/2015/06/25/against-students/ [[politics]] an old essay by [[Sara Ahmed]] in defense of trigger warnings in the classroom and in general against the accusations that "students these days" are oversensitive and illiberal.
    • She's doing an interesting thing here that I haven't wrapped my head around. She's not making the positive case "students today are NOT oversensitive or illiberal" or "trigger warnings are beneficial," even though she seems to believe both those things. she's more calling into question "why has this complaint become a common talking point? what unstated assumptions does it perpetuate?" I am not sure whether this is a valid approach that's an alternative to the forms of argument I'm more used to, or a sign of weakness (a thing she's doing only because she cannot make the positive case for the opposite of what her opponents claim).
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10080017/ [[cancer]][[medicine]] [[biology]] cancer preventatives are an emerging field
    • NSAIDS and omega-3 fatty acids prevent 95% of tumors in a tumor-prone mouse strain?!
    • also we're targeting [[STAT3]] now?! that's a thing we're doing.
      • ([[STAT3]] is a major oncogene but it's a transcription factor, it lives in the cytoplasm and the nucleus, this is not easy to target with small molecules like a cell surface protein.)
  • https://en.m.wikipedia.org/wiki/CLARITY [[biotech]] make a tissue sample transparent so you can make 3D microscopic imaging, with contrast from immunostaining or DNA/RNA labels
  • https://distill.pub/2020/circuits/frequency-edges/ [[AI]] [[neuroscience]] a type of neuron in vision neural nets, the "high-low frequency detector", has recently also been found to be a thing in literal mouse brain neurons (h/t [[Dario Amodei]]) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10055119/
  • https://mosaicmagazine.com/essay/israel-zionism/2024/10/the-failed-concepts-that-brought-israel-to-october-7/ [[politics]][[Israel]][[war]] an informative and sober view on "what went wrong" leading up to Oct 7
    • tl;dr: Hamas consistently wants to destroy Israel and commit violence against Israelis, they say so repeatedly, and there was never going to be a long-term possibility of living peacefully side-by-side with them; Netanyahu is a tough talker but kind of a procrastinator who's kicked the can down the road on national security issues for his entire career; catering to settlers is not in the best interests of Israel as a whole (they provoke violence) but they are an unduly powerful voting bloc; Palestinian misery is real but has been institutionalized by the structure of the Gazan state and the UN which prevents any investment into a real local economy; the "peace process" is doomed because Israel keeps offering peace and the Palestinians say no to any peace that isn't the abolition of the State of Israel.
    • it's pretty common for reasonable casual observers (eg in America) to see Israel/Palestine as a tragic conflict in which probably both parties are somewhat in the wrong, because that's a reasonable prior on all conflicts. The more you dig into the details, though, the more you realize that "let's live together in peace and make concessions to Palestinians as necessary" has been the mainstream Israeli position since before 1948. It's not a symmetric situation.
  • [[von Economo neurons]] are spooky [[neuroscience]] https://en.wikipedia.org/wiki/Von_Economo_neuron
    • only found in great apes, cetaceans, and humans
    • concentrated in the [[anterior cingulate cortex]] and [[insular cortex]] which are closely related to the "sense of self" (i.e. interoception, emotional salience, and the perception that your e.g. hand is "yours" and it was "you" who moved it)
    • the first to go in [[frontotemporal dementia]]
    • https://www.nature.com/articles/s41467-020-14952-3 we don't know where they project to! they are so big that we haven't tracked them fully!
    • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3953677/
  • https://www.wired.com/story/lee-holloway-devastating-decline-brilliant-young-coder/ the founder of Cloudflare had [[frontotemporal dementia]] [[neurology]]
  • [[frontotemporal dementia]] is maybe caused by misfolded proteins being passed around neuron-to-neuron, like prion disease! [[neurology]]
Replies from: Viliam, Raemon, MichaelDickens
comment by Viliam · 2024-10-15T13:00:11.651Z · LW(p) · GW(p)

she's more calling into question "why has this complaint become a common talking point? what unstated assumptions does it perpetuate?" I am not sure whether this is a valid approach that's alternate to the forms of argument I'm more used to, or a sign of weakness

It is good to have one more perspective, and perhaps also good to develop a habit of going meta. So that when someone tells you "X", in addition to asking yourself "is X actually true?", you also consider questions like "why is this person telling me X?", "what could they gain in this situation by making me think more about X?", "are they perhaps trying to distract me from some other Y?".

Because there are such things as filtered evidence, availability bias, limited cognition; and they all can be weaponized. While you are trying really hard to solve the puzzle the person gave you, they may be using your inattention to pick your pockets.

In extreme cases, it can even be a good thing to dismiss the original question entirely. Like, if you are trying to leave an abusive religious cult, and the leader gives you a list of "ten thousand extremely serious theological questions you need to think about deeply before you make the potentially horrible mistake of damning your soul by leaving this holy group", you should not actually waste your time thinking about them, but keep planning your escape.

Now the opposite problem is that some people get so addicted to the meta that they are no longer considering the object level. "You say I'm wrong about something? Well, that's exactly what the privileged X people love to do, don't they?" (Yeah, they probably do. But there is still a chance that you are actually wrong about something.)

tl;dr -- mentioning the meta, great; but completely avoiding the object level, weakness

So, how much meta is the right amount of meta? Dunno, that's a meta-meta question. At some point you need to follow your intuition and hope that your priors aren't horribly wrong.

The more you dig into the details, though, the more you realize that "let's live together in peace and make concessions to Palestinians as necessary" has been the mainstream Israeli position since before 1948. It's not a symmetric situation.

The situation is not symmetric, I agree. But also, it is too easy to underestimate the impact of the settlers. I mean, if you include them in the picture, then the overall Israeli position becomes more like: "Let's live together in peace, and please ignore these few guys who sometimes come to shoot your family and take your homes. They are an extremist minority that we don't approve of, but for complicated political reasons we can't do anything about them. Oh, and if you try to defend yourself against them, chances are our army might come to defend them. And that's also something we deeply regret."

It is much better than the other side, but in my opinion still fundamentally incompatible with peace.

comment by Raemon · 2024-10-15T00:40:32.993Z · LW(p) · GW(p)

kinda meta, but I find myself wondering if we should handle Roam [[ tag ]] syntax in some nicer way. Probably not but it seems nice if it managed to have no downsides.

Replies from: gwern, sarahconstantin
comment by gwern · 2024-10-15T01:59:09.956Z · LW(p) · GW(p)

It wouldn't collide with normal Markdown syntax use. (I can't think of any natural examples, aside from bracket use inside links, like [[editorial comment]](URL), which could be special-cased by looking for the parentheses required for the URL part of a Markdown link.) But it would be ambiguous where the wiki links point to (Sarah's Roam wiki? English Wikipedia?), and if it pointed to somewhere other than LW2 wiki entries, then it would also be ambiguous with that too (because the syntax is copied from Mediawiki and so the same as the old LW wiki's links).

And it seems like an overloading special case you would regret in the long run, compared to something which rewrote them into regular links. Adds in a lot of complexity for a handful of uses.

comment by sarahconstantin · 2024-10-15T02:15:36.212Z · LW(p) · GW(p)

I thought about manually deleting them all but I don't feel like it.

Replies from: MichaelDickens
comment by MichaelDickens · 2024-10-15T04:18:13.431Z · LW(p) · GW(p)

I don't know how familiar you are with regular expressions, but you could do this with a two-pass regular expression search-and-replace. (I used Emacs regex format; your preferred editor might use a different format. Notably, in Emacs [ is a literal bracket but ( is a literal parenthesis, for some reason.)

  1. replace "^(https://.*? )([[.*?]] )*" with "\1"
  2. replace "[[(.*?)]]" with "\1"

This first deletes any tags that occur right after a hyperlink at the beginning of a line, then removes the brackets from any remaining tags.
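For anyone who'd rather do this outside Emacs, here is the same two-pass idea in Python's `re` module. The sample line is made up, the brackets need escaping in this flavor (unlike the Emacs version), and `\S+` stands in for the URL match:

```python
import re

text = "https://example.com [[AI]] [[biotech]] some commentary with [[Dario Amodei]] inline"

# Pass 1: delete any [[tag]] groups that immediately follow a link at the start of a line.
step1 = re.sub(r"^(https://\S+ )((?:\[\[.*?\]\] )*)", r"\1", text, flags=re.MULTILINE)

# Pass 2: unwrap remaining tags, keeping the text inside the brackets.
step2 = re.sub(r"\[\[(.*?)\]\]", r"\1", step1)

print(step2)  # https://example.com some commentary with Dario Amodei inline
```

Same effect: tags trailing a link get dropped entirely, tags in running text get unwrapped.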

comment by MichaelDickens · 2024-10-15T04:08:01.404Z · LW(p) · GW(p)

RE Shapley values, I was persuaded by this comment [EA(p) · GW(p)] that they're less useful than counterfactual value in at least some practical situations.
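To make the contrast concrete, here's a toy two-player game in Python (made-up numbers) where Shapley credit and naive counterfactual credit come apart:

```python
from itertools import permutations

# Toy coalition game: the value produced by each subset of players.
value = {
    frozenset(): 0,
    frozenset({"A"}): 0,
    frozenset({"B"}): 0,
    frozenset({"A", "B"}): 100,  # A and B only produce value together
}
players = ["A", "B"]

def shapley(player):
    """Average marginal contribution of `player` over all join orders."""
    orders = list(permutations(players))
    total = 0
    for order in orders:
        before = frozenset(order[:order.index(player)])
        total += value[before | {player}] - value[before]
    return total / len(orders)

print(shapley("A"), shapley("B"))  # 50.0 50.0
```

Naive counterfactual credit gives each player the full 100 (remove either one and the whole value disappears), which double-counts; Shapley splits the surplus 50/50 and always sums to the total. The catch is the sum over permutations, which grows factorially in the number of players -- the "extremely hard to calculate in practice" part.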

comment by sarahconstantin · 2024-11-08T15:02:35.514Z · LW(p) · GW(p)

links 11/08/2024: https://roamresearch.com/#/app/srcpublic/page/11-08-2024

 

comment by sarahconstantin · 2024-11-06T15:37:29.766Z · LW(p) · GW(p)

links 11/6/2024: https://roamresearch.com/#/app/srcpublic/page/11-06-2024

comment by sarahconstantin · 2024-11-05T17:02:17.187Z · LW(p) · GW(p)

links 11/05/2024: https://roamresearch.com/#/app/srcpublic/page/11-05-2024

comment by sarahconstantin · 2024-10-11T15:18:11.631Z · LW(p) · GW(p)

https://roamresearch.com/#/app/srcpublic/page/10-11-2024

 

  • https://www.mindthefuture.info/p/why-im-not-a-bayesian [[Richard Ngo]] [[philosophy]] I think I agree with this, mostly.
    • I wouldn't say "not a Bayesian" because there's nothing wrong with Bayes' Rule and I don't like the tribal connotations, but lbr, we don't literally use Bayes' rule very often and when we do it often reveals just how much our conclusions depend on problem framing and prior assumptions. A lot of complexity/ambiguity necessarily "lives" in the part of the problem that Bayes' rule doesn't touch. To be fair, I think "just turn the crank on Bayes' rule and it'll solve all problems" is a bit of a strawman -- nobody literally believes that, do they? -- but yeah, sure, happy to admit that most of the "hard part" of figuring things out is not the part where you can mechanically apply probability.
  • https://www.lesswrong.com/posts/YZvyQn2dAw4tL2xQY/rationalists-are-missing-a-core-piece-for-agent-like [LW · GW] [[tailcalled]] this one is actually interesting and novel; i'm not sure what to make of it. maybe literal physics, with like "forces", matters and needs to be treated differently than just a particular pattern of information that you could rederive statistically from sensory data? I kind of hate it but unlike tailcalled I don't know much about physics-based computational models...[[philosophy]]
  • https://alignbio.org/ [[biology]] [[automation]] datasets generated by the Emerald Cloud Lab! [[Erika DeBenedectis]] project. Seems cool!
  • https://www.sciencedirect.com/science/article/abs/pii/S0306453015009014?via%3Dihub [[psychology]] the forced swim test is a bad measure of depression.
    • when a mouse trapped in water stops struggling, that is not "despair" or "learned helplessness." these are anthropomorphisms. the mouse is in fact helpless, by design; struggling cannot save it; immobility is adaptive.
      • in fact, mice become immobile faster when they have more experience with the test. they learn that struggling is not useful and they retain that knowledge.
    • also, a mouse in an acute stress situation is not at all like a human's clinical depression, which develops gradually and persists chronically.
    • https://www.sciencedirect.com/science/article/abs/pii/S1359644621003615?via%3Dihub the forced swim test also doesn't predict clinical efficacy of antidepressants well. (admittedly this study was funded by PETA, which thinks the FST is cruel to mice)
  • https://en.wikipedia.org/wiki/Copy_Exactly! [[semiconductors]] the Wiki doesn't mention that Copy Exactly was famously a failure. even when you try to document procedures perfectly and replicate them on the other side of the world, at unprecedented precision, it is really really hard to get the same results.
  • https://neuroscience.stanford.edu/research/funded-research/optimization-african-killifish-platform-rapid-drug-screening-aggregate [[biology]] you know what's cool? building experimentation platforms for novel model organisms. Killifish are the shortest-lived vertebrate -- which is great if you want to study aging. they live in weird oxygen-poor freshwater zones that are hard to replicate in the lab. figuring out how to raise them in captivity and standardize experiments on them is the kind of unsung, underfunded accomplishment we need to celebrate and expand WAY more.
  • https://www.nature.com/articles/513481a [[biology]] [[drug discovery]] ever heard of curcumin doing something for your health? resveratrol? EGCG? those are all natural compounds that light up a drug screen like a Christmas tree because they react with EVERYTHING. they are not going to work on your disease in real life.
  • https://en.wikipedia.org/wiki/Fetal_bovine_serum [[biotech]] this cell culture medium is just...cow juice. it is not consistent batch to batch. this is a big problem.
  • https://www.nature.com/articles/s42255-021-00372-0 [[biology]] mice housed at "room temperature" are too cold for their health; they are more disease-prone, which calls into question a lot of experimental results.
  • https://calteches.library.caltech.edu/51/2/CargoCult.htm [[science]] the famous [[Richard Feynman]] "Cargo cult science" essay is about flawed experimental methods!
    • if your rat can smell the location of the cheese in the maze all along, then your maze isn't testing learning.
    • errybody want to test rats in mazes, ain't nobody want to test this janky-ass maze!
  • https://fastgrants.org/ [[metascience]] [[COVID-19]] this was cool, we should bring it back for other stuff
  • https://erikaaldendeb.substack.com/cp/147525831 [[biotech]] engineering biomanufacturing microbes for surviving on Mars?!
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8278038/ [[prediction markets]] DARPA tried to use prediction markets to predict the success of projects. it didn't work! they couldn't get enough participants.
  • https://www.citationfuture.com/ [[prediction markets]] these guys do prediction markets on science
  • https://jamesclaims.substack.com/p/how-should-we-fund-scientific-error [[metascience]] [[James Heathers]] has a proposal for a science error detection (fraud, bad research, etc) nonprofit. We should fund him to do it!!
  • https://en.wikipedia.org/wiki/Elisabeth_Bik [[metascience]] [[Elizabeth Bik]] is the queen of research fraud detection. pay her plz.
  • https://substack.com/home/post/p-149791027 [[archaeology]] it was once thought that Gobekli Tepe was a "festival city" or religious sanctuary, where people visited but didn't live, because there wasn't a water source. Now, they've found something that looks like water cisterns, and they suspect people did live there.
    • I don't like the framing of "hunter-gatherer" = "nomadic" in this post.
      • We keep pushing the date of agriculture farther back in time. We keep discovering that "hunter-gatherers" picking plants in "wild" forests are actually doing some degree of forest management, planting seeds, or pulling undesirable weeds. Arguably there isn't a hard-and-fast distinction between "gathering" and "gardening". (Grain agriculture where you use a plow and completely clear a field for planting your crop is qualitatively different from the kind of kitchen-garden-like horticulture that can be done with hand tools and without clearing forests. My bet is that all so-called hunter-gatherers did some degree of horticulture until proven otherwise, excepting eg arctic environments)
      • what the water actually suggests is that people lived at Gobekli Tepe for at least part of the year. it doesn't say what they were eating.
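On the Richard Ngo bullet above: the claim that conclusions hinge on framing and priors can be shown with a toy Bayes' rule calculation (the numbers are purely illustrative):

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    num = p_evidence_given_h * prior
    return num / (num + p_evidence_given_not_h * (1 - prior))

# Identical evidence (a 4:1 likelihood ratio), two different priors:
print(posterior(0.50, 0.8, 0.2))  # 0.8
print(posterior(0.01, 0.8, 0.2))  # ~0.039
```

The mechanical step is trivial; everything contentious lives in the prior and in deciding what counts as P(E|H) in the first place, which Bayes' rule doesn't touch.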
Replies from: gwern
comment by gwern · 2024-10-12T02:01:35.557Z · LW(p) · GW(p)

everybody want to test rats in mazes, ain't nobody want to test this janky-ass maze!

One of the interesting things I found when I finally tracked down the source is that one of the improved mazes before that was a 3D maze where mice had to choose vertically, keeping them in the same position horizontally, because otherwise they apparently were hearing some sort of subtle sound whose volume/direction let them gauge their position and memorize the choice. So Hunter created a stack of T-junctions, so each time they were another foot upwards/downwards, but at the same point in the room and so the same distance away from the sound source.

comment by sarahconstantin · 2024-11-15T18:13:22.682Z · LW(p) · GW(p)

links 11/15/2024: https://roamresearch.com/#/app/srcpublic/page/11-15-2024

  • https://www.reddit.com/r/self/comments/1gleyhg/people_like_me_are_the_reason_trump_won/  a moderate/swing-voter (Obama, Trump, Biden) explains why he voted for Trump this time around:
    • he thinks Kamala Harris was an "empty shell" and unlikable and he felt the campaign was manipulative and deceptive.
    • he didn't like that she seemed to be a "DEI hire", but doesn't have a problem with black or female candidates generally, it's just that he resents cynical demographic box-checking.
      • this is a coherent POV -- he did vote for Obama, after all. and plenty of people are like "I want the best person regardless of demographics, not a person chosen for their demographics."
        • hm. why doesn't it seem natural to portray Obama as a "DEI hire"? his campaign made a bigger deal about race than Harris's, and he was criticized a lot for inexperience.
          • One guess: it's laughable to think Obama was chosen by anyone besides himself. He was not the Democratic Party's anointed -- that was Hillary. He's clearly an ambitious guy who wanted to be president on his own initiative and beat the odds to get the nomination. He can't be a "DEI hire" because he wasn't a hire at all.
          • another guess: Obama is clearly smart, speaks/writes in complete sentences, and welcomes lots of media attention and talks about his policies, while Harris has a tendency towards word salad, interviews poorly, avoids discussing issues, etc.
          • another guess: everyone seems to reject the idea that people prefer male to female candidates, but I'm still really not sure there isn't a gender effect! This is very vibes-based on my part, and apparently the data goes the other way, so very uncertain here.
  • https://trevorklee.substack.com/p/if-langurs-can-drink-seawater-can  Trevor Klee on adaptations for drinking seawater
Replies from: Viliam
comment by Viliam · 2024-11-15T20:57:16.408Z · LW(p) · GW(p)

Seems to me that Obama had the level of charisma that Hillary did not. (Neither do Biden or Harris). Bill Clinton had charisma, too. (So did Bernie.)

Also, imagine that you had a button that would make everyone magically forget about the race and gender for a moment. I think that the people who voted for Obama would still feel the same, but the people who voted for Hillary would need to think hard about why, and probably their only rationalization would be "so that Trump does not win".

I am not an American, so my perception of American elections is probably extremely unrepresentative, but it felt like Obama was about "hope" and "change", while Hillary was about "vote for Her, because she is a woman, so she deserves to be the president".

I'm still really not sure there isn't a gender effect!

I guess there are people (both men and women) who in principle wouldn't vote for a woman leader. But there are also people who would be happy to give a woman a chance. Not sure which group is larger.

But the wannabe woman leader should not make her campaign about her being a woman. That feels like admitting that she has no other interesting qualities. She needs to project the aura of a competent person who just happens to be female.

In my country, I have voted for a woman candidate twice (1, 2), but they never felt like "DEI hires". One didn't have any woke agenda, the other was pro- some woke topics, but she never made them about her. (It was like "this is what I will support if you elect me", not "this is what I am".)

Replies from: abandon
comment by dirk (abandon) · 2024-11-15T21:06:07.299Z · LW(p) · GW(p)

I voted for Hillary and wouldn't need to think hard about why: she's a democrat, and I generally prefer democrat policies.

comment by sarahconstantin · 2024-11-14T19:08:22.326Z · LW(p) · GW(p)

links 11/14/2024: https://roamresearch.com/#/app/srcpublic/page/11-14-2024

  • https://archive.org/details/byte-magazine  retro magazines
  • https://www.ribbonfarm.com/2019/09/17/weirding-diary-10/#more-6737 Venkatesh Rao on the fall of the MIT Media Lab
    • this stung a bit!
    • i have tended to think that the stuff with "intellectual-glamour" or "visionary" branding is actually pretty close to on-target. not always right, of course, often overhyped, but often still underinvested in even despite being highly hyped.
      • (a surprising number of famous scientists are starved for funding. a surprising number of inventions featured on TED, NYT, etc were never given resources to scale.)
    • I also am literally unconvinced that "Europe's kindergarten" was less sophisticated than our own time! but it seems like a fine debate to have at leisure, not totally sure how it would play out.
    • he's basically been proven right that energy has moved "underground" but that's not a mode i can work very effectively in. if you have to be invited to participate, well, it's probably not going to happen for me.
    • at the institutional level, he's probably right that it's wise to prepare for bad times and not get complacent. again, this was 2019; a lot of the bad times came later. i miss the good times; i want to believe they'll come again.
comment by sarahconstantin · 2024-11-13T17:19:33.145Z · LW(p) · GW(p)

links 11/13/2024: https://roamresearch.com/#/app/srcpublic/page/11-13-2024

 

comment by sarahconstantin · 2024-10-08T15:20:55.710Z · LW(p) · GW(p)

links 10/8/24 https://roamresearch.com/#/app/srcpublic/page/10-08-2024

comment by sarahconstantin · 2024-11-01T16:20:07.688Z · LW(p) · GW(p)

links 11/01/2024: https://roamresearch.com/#/app/srcpublic/page/11-01-2024

comment by sarahconstantin · 2024-10-01T16:24:18.442Z · LW(p) · GW(p)

links 10/1/24

https://roamresearch.com/#/app/srcpublic/page/10-01-2024

comment by sarahconstantin · 2024-11-18T19:25:29.830Z · LW(p) · GW(p)

links 11/18/2024: https://roamresearch.com/#/app/srcpublic/page/11-18-2024

Replies from: Viliam
comment by Viliam · 2024-11-19T16:09:43.528Z · LW(p) · GW(p)

i want to read his nonfiction

It would have been nice to read A Journal of the Plague Year during covid.

comment by sarahconstantin · 2024-11-07T16:33:57.183Z · LW(p) · GW(p)

links 11/07/2024: https://roamresearch.com/#/app/srcpublic/page/11-07-2024

comment by sarahconstantin · 2024-10-30T14:35:00.839Z · LW(p) · GW(p)

links 10/30/2024: https://roamresearch.com/#/app/srcpublic/page/10-30-2024


comment by sarahconstantin · 2024-10-29T14:59:50.365Z · LW(p) · GW(p)

links 10/29/2024: https://roamresearch.com/#/app/srcpublic/page/10-29-2024

comment by sarahconstantin · 2024-10-23T15:26:20.380Z · LW(p) · GW(p)

links 10/23/24:

https://roamresearch.com/#/app/srcpublic/page/10-23-2024

  • https://eukaryotewritesblog.com/2024/10/21/i-got-dysentery-so-you-dont-have-to/  personal experience at a human challenge trial, by the excellent Georgia Ray
  • https://catherineshannon.substack.com/p/the-male-mind-cannot-comprehend-the
    • I...guess this isn't wrong, but it's a kind of Take I've never been able to relate to myself. Maybe it's because I found Legit True Love at age 22, but I've never had that feeling of "oh no the men around me are too weak-willed" (not in my neck of the woods they're not!) or "ew they're too interested in going to the gym" (gym rats are fine? it's a hobby that makes you good-looking, I'm on board with this) or "they're not attentive and considerate enough" (often a valid complaint, but typically I'm the one who's too hyperfocused on my own work & interests) or "they're too show-offy" (yeah it's irritating in excess but a little bit of show-off energy is enlivening).
    • Look: you like Tony Soprano because he's competent and lives by a code? But you don't like it when a real-life guy is too competitive, intense, or off doing his own thing? I'm sorry, but that's not how things work.
      • Tony Soprano can be light-hearted and always have time for the women around him because he is a fictional character. In real life, being good at stuff takes work and is sometimes stressful.
      • My husband is, in fact, very close to this "Tony Soprano" ideal -- assertive, considerate, has "boyish charm", lives by a "code", is competent at lots of everyday-life things but isn't too busy for me -- and I guarantee you would not have thought to date him because he's also nerdy and argumentative and wouldn't fit in with the yuppie crowd.
      • Also like. This male archetype is a guy who fixes things for you and protects you and makes you feel good. In real life? Those guys get sad that they're expected to give, give, give and nobody cares about their feelings. I haven't watched The Sopranos, but my understanding is that Tony is in therapy because the strain of this life is getting to him. This article doesn't seem to have a lot of empathy for what it's like to actually be Tony...and you probably should, if you want to marry him.
  • https://fas.org/publication/the-magic-laptop-thought-experiment/ from Tom Kalil, a classic: how to think about making big dreams real.
  • https://paulgraham.com/yahoo.html Paul Graham's business case studies!
  • https://substack.com/home/post/p-150520088 a celebratory reflection on the recent Progress Conference. Yes, it was that good.
  • https://en.m.wikipedia.org/wiki/Hecuba  in some tellings (not Homer's), Hecuba turns into a dog from grief at the death of her son.
  • https://www.librariesforthefuture.bio/p/lff
    • a framework for thinking about aging: "1st gen" is delaying aging, which is where the field started (age1, metformin, rapamycin), while "2nd gen" is pausing (stasis), repairing (reprogramming), or replacing (transplanting) cells/tissues. 2nd gen usually uses less mature technologies (eg cell therapy, regenerative medicine), but may have a bigger and faster effect size.
    • "function, feeling, and survival" are the endpoints that matter.
      • biomarkers are noisy and speculative early proxies that we merely hope will translate to a truly healthier life for the elderly. apply skepticism.
  • https://substack.com/home/post/p-143303463 I always like what Maxim Raginsky has to say. you can't do AI without bumping into the philosophy of how to interpret what it's doing.
comment by sarahconstantin · 2024-10-09T14:45:27.807Z · LW(p) · GW(p)

links 10/9/24 https://roamresearch.com/#/app/srcpublic/page/yI03T5V6t

comment by sarahconstantin · 2024-10-07T14:08:16.899Z · LW(p) · GW(p)

links 10/7/2024

https://roamresearch.com/#/app/srcpublic/page/yI03T5V6t

comment by sarahconstantin · 2024-10-04T14:32:05.585Z · LW(p) · GW(p)

links 10/4/2024

https://roamresearch.com/#/app/srcpublic/page/10-04-2024

comment by sarahconstantin · 2024-10-02T16:01:58.688Z · LW(p) · GW(p)

links 10/2/2024:

https://roamresearch.com/#/app/srcpublic/page/10-02-2024