sarahconstantin's Shortform

post by sarahconstantin · 2024-10-01T16:24:17.329Z · LW · GW · 119 comments

Comments sorted by top scores.

comment by sarahconstantin · 2024-10-07T15:58:01.224Z · LW(p) · GW(p)
  • Psychotic “delusions” are more about holding certain genres of idea with a socially inappropriate amount of intensity and obsession than about holding a false idea. Lots of non-psychotic people hold false beliefs (eg religious people). And, interestingly, it is absolutely possible to hold a true belief in a psychotic way.
  • I have observed people during psychotic episodes get obsessed with the idea that social media was sending them personalized messages (quite true; targeted ads are real) or the idea that the nurses on the psych ward were lying to them (they were).
  • Preoccupation with the revelation of secret knowledge, with one’s own importance, with mistrust of others’ motives, and with influencing others' thoughts or being influenced by others' thoughts, are classic psychotic themes.
    • And it can be a symptom of schizophrenia when someone’s mind gets disproportionately drawn to those themes. This is called being “paranoid” or “grandiose.”
    • But sometimes (and I suspect more often with more intelligent/self-aware people) the literal content of their paranoid or grandiose beliefs is true!
      • sometimes the truth really has been hidden!
      • sometimes people really are lying to you or trying to manipulate you!
      • sometimes you really are, in some ways, important! sometimes influential people really are paying attention to you!
      • of course people influence each others' thoughts -- not through telepathy but through communication!
    • a false psychotic-flavored thought is "they put a chip in my brain that controls my thoughts." a true psychotic-flavored thought is "Hollywood moviemakers are trying to promote progressive values in the public by implanting messages in their movies."
      • These thoughts can come from the same emotional drive, they are drawn from dwelling on the same theme of "anxiety that one's own thoughts are externally influenced", they are in a deep sense mere arbitrary verbal representations of a single mental phenomenon...
      • but if you take the content literally, then clearly one claim is true and one is false.
      • and a sufficiently smart/self-aware person will feel the "anxiety-about-mental-influence" experience, will search around for a thought that fits that vibe but is also true, and will come up with something a lot more credible than "they put a mind-control chip in my brain", even though it is fundamentally coming from the same motive.
  • There’s an analogous but easier to recognize thing with depression.
    • A depressed person’s mind is unusually drawn to obsessing over bad things. But this obviously doesn’t mean that no bad things are real or that no depressive’s depressing claims are true.
    • When a depressive literally believes they are already dead, we call that Cotard's Delusion, a severe form of psychotic depression. When they say "everybody hates me" we call it a mere "distorted thought". When they talk accurately about the heat death of the universe we call it "thermodynamics." But it's all coming from the same emotional place.
  • In general, mental illnesses, and mental states generally, provide a "tropism" towards thoughts that fit with certain emotional/aesthetic vibes.
    • Depression makes you dwell on thoughts of futility and despair
    • Anxiety makes you dwell on thoughts of things that can go wrong
    • Mania makes you dwell on thoughts of yourself as powerful or on the extreme importance of whatever you're currently doing
    • Paranoid psychosis makes you dwell on thoughts of mistrust, secrets, and influencing/being influenced
  • You can, to some extent, "filter" your thoughts (or the ones you publicly express) by insisting that they make sense. You still have a bias towards the emotional "vibe" you're disposed to gravitate towards; but maybe you don't let absurd claims through your filter even if they fit the vibe. Maybe you grudgingly admit the truth of things that don't fit the vibe but technically seem correct.
    • this does not mean that the underlying "tropism" or "bias" does not exist!!!
    • this does not mean that you believe things "only because they are true"!
    • in a certain sense, you are doing the exact same thing as the more overtly irrational person, just hiding it better!
      • the "bottom line" in terms of vibe has already been written, so it conveys no "updates" about the world
      • the "bottom line" in terms of details may still be informative because you're checking that part and it's flexible
  • "He's not wrong but he's still crazy" is a valid reaction to someone who seems to have a mental-illness-shaped tropism to their preoccupations.
    • eg if every post he writes, on a variety of topics, is negative and gloomy, then maybe his conclusions say more about him than about the truth concerning the topic;
      • he might still be right about some details but you shouldn't update too far in the direction of "maybe I should be gloomy about this too"
    • Conversely, "this sounds like a classic crazy-person thought, but I still separately have to check whether it's true" is also a valid and important move to make (when the issue is important enough to you that the extra effort is worth it). 
      • Just because someone has a mental illness doesn't mean every word out of their mouth is false!
      • (and of course this assumption -- that "crazy" people never tell the truth -- drives a lot of psychiatric abuse.)

link: https://roamresearch.com/#/app/srcpublic/page/71kfTFGmK

Replies from: davekasten, tailcalled, Dagon, kave, nikolas-kuhn, michael-roe
comment by davekasten · 2024-10-07T21:57:49.429Z · LW(p) · GW(p)

I once saw a video on Instagram of a psychiatrist recommending to other psychiatrists that they purchase ear scopes to check out their patients' ears, because:
1.  Apparently it is very common for folks with severe mental health issues to imagine that there is something in their ear (e.g., a bug, a listening device)
2.  Doctors usually just say "you are wrong, there's nothing in your ear" without looking
3.  This destroys trust, so he started doing cursory checks with an ear scope
4.  Far more often than he expected (I forget exactly, but something like 10-20%ish), there actually was something in the person's ear -- usually just earwax buildup, but occasionally something else like a dead insect -- that was indeed causing the sensation, and he gained a clinical pathway to addressing his patients' discomfort that he had previously lacked

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2024-10-08T05:17:49.320Z · LW(p) · GW(p)

This reminds me of dath ilan's hallucination diagnosis from page 38 of Yudkowsky and Alicorn's glowfic But Hurting People Is Wrong.

It's pretty far from meeting dath ilan's standard, though; in fact, an x-ray would be more than sufficient: anyone capable of putting something in someone's ear would obviously vastly prefer to place it somewhere harder to check, whereas nobody would be capable of defeating an x-ray machine, since metal parts are unavoidable.

This concern pops up in books on the Cold War (employees at every org and every company regularly suffer from mental illnesses at somewhere around their base rates, but things get complicated at intelligence agencies, where paranoid/creative/adversarial people are rewarded and even influence R&D funding), and an x-ray machine cleanly resolved the matter every time.

comment by tailcalled · 2024-10-07T17:04:22.712Z · LW(p) · GW(p)

Tangential, but...

Schizophrenia is the archetypal definitely-biological mental disorder, but recently for reasons relevant to the above, I've been wondering if that is wrong/confused. Here's my alternate (admittedly kinda uninformed) model:

  • Psychosis is a biological state or neural attractor, which we can kind of symptomatically characterize, but which really can only be understood at a reductionistic level.
  • One of the symptoms/consequences of psychosis is getting extreme ideas at extreme amounts of intensity.
  • This symptom/consequence then triggers a variety of social dynamics that give classic schizophrenic-like symptoms such as, as you say, "preoccupation with the revelation of secret knowledge, with one’s own importance, with mistrust of others’ motives, and with influencing others' thoughts or being influenced by others' thoughts"

That is, if you suddenly get an extreme idea (e.g. that the fly that flapped past you is a sign from god that you should abandon your current life), you would expect dynamics like:

  • People get concerned for you and try to dissuade you, likely even conspiring in private to do so (and even if they're not conspiring, it can seem like a conspiracy). In response, it might seem appropriate to distrust them.
  • Or, if one interprets it as them just lacking the relevant information, one needs to develop some theory of why one has access to special information that they don't.
  • Or, if one is sympathetic to their concern, it would be logical to worry about one's thoughts getting influenced.

But these sorts of dynamics can totally be triggered by extreme beliefs without psychosis! This might also be related to how Enneagram type 5 (the rationalist type) is especially prone to schizophrenia-like symptoms.

(When I think "in a psychotic way", I think of the neurological disorder, but it seems like the way you use it in your comment is more like the schizophrenia-like social dynamic?)

  • In general, mental illnesses, and mental states generally, provide a "tropism" towards thoughts that fit with certain emotional/aesthetic vibes.
    • Depression makes you dwell on thoughts of futility and despair
    • Anxiety makes you dwell on thoughts of things that can go wrong
    • Mania makes you dwell on thoughts of yourself as powerful or on the extreme importance of whatever you're currently doing
    • Paranoid psychosis makes you dwell on thoughts of mistrust, secrets, and influencing/being influenced

Also tangential, this is sort of a "general factor" model of mental states. That often seems applicable, but recently my default interpretation of factor models has been that they tend to get at intermediary variables and not root causes.

Let's take an analogy with computer programs. If you look at the correlations in which sorts of processes run fast or slow, you might find a broad swathe of processes whose performance is highly correlated, because they are all predictably CPU-bound. However, when these processes are running slow, there will usually be some particular program that is exhausting the CPU and preventing the others from running. This problematic program can vary massively from computer to computer, so it is hard to predict or model in general, but often easy to identify in the particular case by looking at which program is most extreme.
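
To make that analogy concrete, here is a minimal toy simulation (my own sketch, not something from the comment; the program names and numbers are invented for illustration). All monitored processes share one CPU, so their speeds come out strongly correlated -- a "general factor" of slowness -- even though the specific program exhausting the CPU differs from computer to computer:

```python
import numpy as np

rng = np.random.default_rng(0)

n_computers = 500
monitored = ["browser", "editor", "compiler", "indexer"]   # processes we observe
hogs = ["crypto_miner", "runaway_backup", "leaky_daemon"]  # possible root causes (hypothetical names)

speeds, culprits = [], []
for _ in range(n_computers):
    culprit = rng.choice(hogs)        # each computer has its own root cause
    hog_load = rng.uniform(0.0, 0.9)  # fraction of CPU the culprit eats
    free_cpu = 1.0 - hog_load
    # All monitored processes are CPU-bound, so they slow down together,
    # plus a little independent noise per process.
    speeds.append(free_cpu * rng.uniform(0.8, 1.2, size=len(monitored)))
    culprits.append(culprit)

speeds = np.array(speeds)

# High pairwise correlations between the monitored processes: the "general factor".
print(np.round(np.corrcoef(speeds, rowvar=False), 2))

# But the factor only captures the shared bottleneck (free CPU); the root cause
# varies by computer and is easy to identify case-by-case, hard to model in general.
for hog in hogs:
    print(hog, "was the culprit on", sum(c == hog for c in culprits), "computers")
```

The correlation matrix tells you about the shared bottleneck, but diagnosing a particular slow computer still means finding its particular hog -- which is the point about factor models capturing intermediary variables rather than root causes.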

comment by Dagon · 2024-10-07T19:42:50.608Z · LW(p) · GW(p)

Thank you, this is interesting and important.  I worry that it overstates similarity of different points on a spectrum, though.

in a certain sense, you are doing the exact same thing as the more overtly irrational person, just hiding it better!

In a certain sense, yes.  In other, critical senses, no.  This is a case where quantitative differences are big enough to be qualitative.  When someone is clinically delusional, there are a few things which distinguish it from the more common wrong ideas.  Among them, the inability to shut up about it when it's not relevant, and the large negative impact on relationships and daily life.  For many many purposes, "hiding it better" is the distinction that matters.

I fully agree that "He's not wrong but he's still crazy" is valid (though I'd usually use less-direct phrasing).  It's pretty rare that "this sounds like a classic crazy-person thought, but I still separately have to check whether it's true" happens to me, but it's definitely not never.

comment by kave · 2024-10-07T18:31:41.280Z · LW(p) · GW(p)

the idea that social media was sending them personalized messages

I imagine they were obsessed with false versions of this idea, rather than obsessed with targeted advertising?

Replies from: sarahconstantin, AprilSR
comment by sarahconstantin · 2024-10-08T03:36:45.548Z · LW(p) · GW(p)

no! it sounded like "typical delusion stuff" at first until i listened carefully and yep that was a description of targeted ads.

comment by AprilSR · 2024-10-07T21:39:43.877Z · LW(p) · GW(p)

For a while I ended up spending a lot of time thinking about specifically the versions of the idea where I couldn't easily tell how true they were... which I suppose I do think is the correct place to be paying attention to?

comment by Amalthea (nikolas-kuhn) · 2024-10-07T19:18:13.342Z · LW(p) · GW(p)

One has to be a bit careful with this though. E.g. someone experiencing or having experienced harassment may have a seemingly pathological obsession with the circumstances and people involved in the situation, but it may be completely proportional to the way that it affected them - it only seems pathological to people who didn't encounter the same issues.

Replies from: Seth Herd
comment by Seth Herd · 2024-10-11T17:47:53.636Z · LW(p) · GW(p)

If it's not serving them, it's pathological by definition, right?

So obsessing about exactly those circumstances and types of people could be pathological if it's done more than will protect them in the future, factoring in the emotional cost of all that obsessing.

Of course we can't just stop patterns of thought as soon as we decide they're pathological. But deciding it doesn't serve me so I want to change it is a start.

Yes, it's proportional to the way it affected them - but most of the effect is in the repetition of thoughts about the incident and fear of future similar experiences. Obsessing about unpleasant events is natural, but it often seems pretty harmful itself.

Trauma is a horrible thing. There's a delicate balance between supporting someone's right and tendency to obsess over their trauma and supporting their ability to quit re-traumatizing themselves by simulating their traumatic event repeatedly.

Replies from: nikolas-kuhn
comment by Amalthea (nikolas-kuhn) · 2024-10-11T18:45:14.330Z · LW(p) · GW(p)

If it's not serving them, it's pathological by definition, right?

This seems way too strong; otherwise any kind of belief or emotion that is not narrowly in pursuit of your goals would be pathological.

I completely agree that it's important to strike a balance between revisiting the incident and moving on.

but most of the effect is in the repetition of thoughts about the incident and fear of future similar experiences.

This seems partially wrong. The thoughts are usually consequences of the damage that is done, and they can be unhelpful in their own right, but they are not usually the problem. E.g. if you know that X is an abuser and people don't believe you, I wouldn't go so far as saying your mental dissonance about it is the problem.

comment by Michael Roe (michael-roe) · 2024-10-08T16:54:34.048Z · LW(p) · GW(p)

Some psychiatry textbooks classify “overvalued ideas” as distinct from psychotic delusions.


Depending on how wide you make the definition, a whole rag-bag of diagnoses from the DSM-5 are overvalued ideas (e.g., anorexia nervosa, where the overvalued idea is being fat).

comment by sarahconstantin · 2024-12-09T20:43:08.663Z · LW(p) · GW(p)

"Most people succumb to peer pressure", https://roamresearch.com/#/app/srcpublic/page/u3919iPfj

  • Most people will do very bad things, including mob violence, if they are peer-pressured enough.
  • It's not literally everyone, but there is no neurotype or culture that is immune to peer pressure.
    • Immunity to peer pressure is a rare accomplishment.
    • You wouldn't assume that everyone in some category would be able to run a 4-minute mile or win a math olympiad. It takes a "perfect storm" of talent, training, and motivation.
    • I'm not sure anybody "just" innately lacks the machinery to be peer-pressured. That's a common claim about autistics and loners, but I really don't think it fits observation. Lots of people "don't fit in" in one way, but are very driven to conform in other social contexts or about other topics.
    • Evidence that any culture (or subculture), present or past, didn't have peer pressure seems really weak.
      • there are environments where being independent-minded or high-integrity is valorized, but most of them still have covert peer-pressure dynamics.
    • Possibly all robust resistance to peer pressure is intentionally cultivated?
      • In other words, maybe it's not enough for a person to just not happen to feel a pull towards conformity. That just means they haven't yet encountered the triggers that would make them inclined to conform.
      • If someone really can't be peer-pressured, maybe they have to actually believe that peer pressure is bad and make an active effort to resist it. Even that doesn't always succeed, but it's a necessary condition.
  • upshot #1: It may be appropriate to be suspicious of claims like "I just hang out with those people, I'm not influenced by them." Most people, in the long run, do get influenced by their peer group.
    • otoh I also don't think cutting off contact with anyone "impure", or refusing to read stuff you disapprove of, is either practical or necessary. we can engage with people and things without being mechanically "nudged" by them.
    • maybe the distinction between engaging in any way and viewing someone as your ingroup is important?
    • or maybe we just have to Get Good at resisting peer pressure (even though that's super hard and rare.) Otherwise the next time some terrible thing happens to be popular, we'll go along with it.
      • like...basic realism here. most things don't last forever, it is an extraordinary claim to say that your virtue would survive any change in your culture.
  • upshot #2: "would probably have been a collaborator in Nazi Germany" is not actually that serious an accusation. it just means "like the majority of the population, not at all heroic." in good circumstances, non-heroes make perfectly fine friends and neighbors. in bad circumstances, they might murder you. that's what makes the circumstances bad!
    • and don't be too quick to assume that someone who's never been in bad circumstances would be a hero. it's just hard to tell ahead of time.
Replies from: D0TheMath, localdeity, vanessa-kosoy, leogao, Nutrition Capsule, D0TheMath, myron-hedderson, Raemon, paragonal, InquilineKea
comment by Garrett Baker (D0TheMath) · 2024-12-10T16:00:09.514Z · LW(p) · GW(p)

otoh I also don't think cutting off contact with anyone "impure", or refusing to read stuff you disapprove of, is either practical or necessary. we can engage with people and things without being mechanically "nudged" by them.

I think the reason not to do this is peer pressure itself. Ideally you should have the bad pressures from your peers cancel out, and in order to accomplish this you need your peers to be somewhat decorrelated from each other, and you can't really do that if all your peers and everyone you listen to are in the same social group.

comment by localdeity · 2024-12-09T22:31:49.995Z · LW(p) · GW(p)

What is categorized as "peer pressure" here?  Explicit threats to report you to authorities if you don't conform?  I'm guessing not.  But how about implicit threats?  What if you've heard (or read in the news) stories about people who don't conform—in ways moderately but not hugely more extreme than you—having their careers ruined?  In any situation that you could call "peer pressure", I imagine there's always at least the possibility of some level of social exclusion.

The defining questions for that aspect would appear to be "Do you believe that you would face serious risk of punishment for not conforming?" and "Would a reasonable person in your situation believe the same?".  Which don't necessarily have the same answer.  It might, indeed, be that people whom you observe to be "conformist" are the ones who are oversensitive to the risk of social exclusion.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2024-12-10T14:47:29.017Z · LW(p) · GW(p)

We call it "peer pressure" when it is constraining individuals (or at least some of them) without providing perceived mutual value. It is the same mechanism that leads to people collaborating for the common good. The interesting question is which forces or which environments lead to a negative-sum game.

comment by Vanessa Kosoy (vanessa-kosoy) · 2024-12-10T09:44:12.241Z · LW(p) · GW(p)

I kinda agree with the claim, but disagree with its framing. You're imagining that peer pressure is something extraneous to the person's core personality, which they want to resist but usually fail. Instead, the desire to fit in, to be respected, liked and admired by other people, is one of the core desires that most (virtually all?) people have. It's approximately on the same level as e.g. the desire to avoid pain. So, people don't "succumb to peer pressure", they (unconsciously) choose to prioritize social needs over other considerations.

At the same time, the moral denouncing of groupthink is mostly a self-deception defense against hostile telepaths [LW · GW]. With two important caveats:

  • Having "independent thinking" as part of the ethos of a social group is actually beneficial for that group's ability to discover true things. While the members of such a group still feel the desire to be liked by other members, they also have the license to disagree without being shunned for it, and are even rewarded for interesting dissenting opinions.
  • Hyperbolic discounting seems to be real, i.e. human preferences are time-inconsistent. For example, you can be tempted to eat candy when one is placed in front of you, while also taking measures to avoid such temptation in the future. Something analogous might apply to peer pressure (a numeric sketch of the time-inconsistency follows below).
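
A minimal numeric sketch of that time-inconsistency (my own illustration, not part of the comment; the dollar amounts and the discount parameter k are arbitrary assumptions). Hyperbolic discounting values a reward A delayed by D days at roughly V = A / (1 + kD), which makes the preference between a smaller-sooner and a larger-later reward flip as both get closer:

```python
def hyperbolic_value(amount, delay_days, k=1.0):
    """Perceived present value under hyperbolic discounting: V = A / (1 + k * delay)."""
    return amount / (1 + k * delay_days)

# Viewed up close: $100 tomorrow beats $110 the day after (50.0 vs ~36.7).
print(hyperbolic_value(100, 1), hyperbolic_value(110, 2))

# The same pair viewed 30 days in advance: now the larger, later reward wins (~3.13 vs ~3.33).
print(hyperbolic_value(100, 31), hyperbolic_value(110, 32))
```

The reversal is the analogue of taking measures today against a temptation you expect to give in to tomorrow.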
Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2024-12-10T14:53:29.637Z · LW(p) · GW(p)

The desire to fit in, to be respected, liked and admired by other people, is one of the core desires that most (virtually all?) people have. It's approximately on the same level as e.g. the desire to avoid pain.

I think the comparison to pain is correct in the sense that some part of the brain (brainstem) is responding to bodily signals in the same mechanistic way as it is to pain signals. The desire to fit in is grounded in something. Steven Byrnes suggests a mechanism in Neuroscience of human social instincts: a sketch [LW · GW]. 

comment by leogao · 2024-12-10T18:33:41.274Z · LW(p) · GW(p)

I won't claim to be immune to peer pressure but at least on the epistemic front I think I have a pretty legible track record of believing things that are not very popular in the environments I've been in.

comment by Nutrition Capsule · 2024-12-10T16:23:01.572Z · LW(p) · GW(p)

As for a specific group of people resistant to peer pressure - psychopaths. Psychopaths don't conform to peer pressure easily - or any kind of pressure, for that matter. Many of them are in fact willing to murder, sit in jail, or otherwise become very ostracized if it aligns with whatever goals they have in mind. I'd wager that the fact that a large percentage of psychopaths literally end up jailed speaks for itself - they just don't mind the consequences that much.

This is easily explained due to psychopaths being fearless and mostly lacking empathy. As far as I recall, some physiological correlates exist - psychopaths have a low cortisol response to stressors compared to normies. On top of the apparent fact that they are indifferent towards others' feelings, some brain imaging data supports this as well.

What they might be more vulnerable to is that peer pressure sometimes goes hand in hand with power and success. Psychopaths like power and success, and they might therefore play along with rules to get more of what they want. That might look like caving in to peer pressure, but judging by how the pathology is contemporarily understood, I'd still say it's not the pressure itself, but the benefits aligned with succumbing to it.

comment by Garrett Baker (D0TheMath) · 2024-12-10T15:55:24.986Z · LW(p) · GW(p)

there is no neurotype or culture that is immune to peer pressure

Seems like the sort of thing that would correlate pretty robustly with big-5 agreeableness, and in that sense there are neurotypes immune to peer pressure.

Edit: One may also suspect a combination of agreeableness and non-openness

comment by Myron Hedderson (myron-hedderson) · 2024-12-10T19:36:38.309Z · LW(p) · GW(p)

"Peer pressure" is a negatively-valanced term that could be phrased more neutrally as "social consequences". Seems to me it's good to think about what the social consequences of doing or not doing a thing will be (whether to "give in to peer pressure", and act in such a way as to get positive reactions from other people/avoid negative reactions, or not), but not to treat conforming when there is social pressure as inherently bad. It can lead to mob violence. Or, it can lead to a simplified social world which is easier for everyone to navigate, because you're doing things that have commonly understood meanings (think of teaching children to interact in a polite way). Or it can lead to great accomplishments, when someone internalizes whatever leads to status within their social hierarchy. Take away the social pressure to do things that impress other people, and lots of people might laze about doing the minimum required to have a nice life on the object-level, which in a society as affluent as the modern industrialized world is not much. There are of course other motivations for striving for internalized goals, but like, "people whose opinion I care about will be impressed" is one, and it does mean some good stuff gets done.

Someone who is literally immune to peer pressure to the extent that social consequences do not enter their mind as a thing that might happen or get considered at all in their decision-making, will probably face great difficulties in navigating their environment and accomplishing anything. People will try fairly subtle social pressure tactics, they will be disregarded as if they hadn't happened, and the person who tried it will either have to disengage from the not-peer-pressurable person, or escalate to more blunt control measures that do register as a thing this person will pay attention to.

Even if I'm right about "is immune to peer pressure" not being an ideal to aim for, I still do acknowledge that being extremely sensitive to what others may think has downsides, and when taken to extremes you get "I can't go to the store because of social anxiety". A balanced approach would be aiming to avoid paranoia while recognizing social pressure when someone is attempting to apply some, without immediately reacting to it, and being able to think through how to respond on a case-by-case basis. This is a nuanced social skill. "This person is trying to blackmail me by threatening social exclusion through blacklisting or exposing socially damaging information about me if I don't comply with what they want" requires a different response than "this person thinks my shirt looks tacky and their shirt looks cool. I note their sense of fashion, and how much importance they attach to clothing choices, and may choose to dress so as to get a particular reaction from them in future, without necessarily agreeing with/adopting/internalizing their perspective on the matter", which in turn is different from "everyone in this room disagrees with me about thing X (or at least says they disagree; preference falsification is a thing), should I say it anyway?".

The key, I would think, is to raise people to understand what social pressure is and its various forms, and that conformance is a choice they get to make rather than a thing they have to do or they'll suffer social death. Choices have consequences, but the worst outcomes I've seen from peer pressure are when people don't want to do the thing that is being peer-pressured towards, but don't treat "just don't conform" as an option they can even consider and ask what the consequences would be.

comment by Raemon · 2024-12-09T22:01:08.796Z · LW(p) · GW(p)

otoh I also don't think cutting off contact with anyone "impure", or refusing to read stuff you disapprove of, is either practical or necessary. we can engage with people and things without being mechanically "nudged" by them.

Is there a particular reason to believe this? Or is it more of a hope?

Replies from: sarahconstantin, Viliam
comment by sarahconstantin · 2024-12-10T00:17:16.469Z · LW(p) · GW(p)

it's an introspection/lived-experience/anecdotes from other people kind of thing, i don't have data, but yes i do believe this is true.

comment by Viliam · 2024-12-09T22:56:10.342Z · LW(p) · GW(p)

I think what might help is engaging with different kinds of people. A group's pressure is weaker if you also meet people who openly believe that the group is a group of idiots. You can voice your concerns without fearing disapproval; but even if some things are difficult to explain to outsiders, at least you have a mental model of someone who would disagree.

But I also suspect that some people would just develop a different persona for each group, and let themselves be peer-pressured towards different extremes on different occasions.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2024-12-10T14:44:11.967Z · LW(p) · GW(p)

some people would just develop a different persona for each group

That is possible, but maybe more likely only if the groups are very clearly separate, such as when you are in a faraway country for a long time. But if you are, e.g., in a multi-cultural city where there are many, maybe even overlapping, groups, or where you can't easily tell which group you are dealing with, it is more difficult to "overfit" and easier to learn a more general strategy. I think universal morality is something like the general case of this.

Replies from: Viliam
comment by Viliam · 2024-12-10T21:45:41.812Z · LW(p) · GW(p)

Julian Jaynes would say that this is how human consciousness as we know it today has evolved.

Which makes me wonder, what would he say about the internet bubbles we have today. Did we perhaps already reach peak consciousness, and now the pendulum is swinging back? (Probably not, but it's an interesting thought.)

comment by paragonal · 2024-12-10T15:30:31.959Z · LW(p) · GW(p)

 Most people will do very bad things, including mob violence, if they are peer-pressured enough.

Shouldn't this be weighted against the good things people do if they are peer-pressured? I think there's value in not conforming, but if all cultures have peer pressure, there needs to be a careful analysis of the pros and cons instead of simply striving for immunity from it.

 I'm not sure anybody "just" innately lacks the machinery to be peer-pressured.

My first thought here isn't autists but psychopaths.

comment by InquilineKea · 2024-12-10T00:21:34.844Z · LW(p) · GW(p)

My fear is that this will extend to many aspects of the Trump administration (just look at how it's vetting people based on who they voted for/if they believe in the 2020 election results), esp b/c some people who work in the government are now deleting their old tweets...

comment by sarahconstantin · 2024-10-10T14:32:16.066Z · LW(p) · GW(p)
  • “we” can’t steer the future.
  • it’s wrong to try to control people or stop them from doing locally self-interested & non-violent things in the interest of “humanity’s future”, in part because this is so futile.
    • if the only way we survive is if we coerce people to make a costly and painful investment in a speculative idea that might not even work, then we don’t survive! you do not put people through real pain today for a “someday maybe!” This applies to climate change,  AI x-risk, and socially-conservative cultural reform.
  • most cultures and societies in human history have been so bad, by my present values, that I’m not sure they’re not worse than extinction, and we should expect that most possible future states are similarly bad;
  • history clearly teaches us that civilizations and states collapse (on timescales of centuries) and the way to bet is that ours will as well, but it’s kind of insane hubris to think that this can be prevented;
  • the literal species Homo sapiens is pretty resilient and might avoid extinction for a very long time, but have you MET Homo sapiens? this is cold fucking comfort! (see e.g. C. J. Cherryh’s vision in 40,000 in Gehenna for a fictional representation not far from my true beliefs — we are excellent at adaptation and survival but when we “survive” this often involves unimaginable harshness and cruelty, and changing into something that our ancestors would not have liked at all.)
  • identifying with species-survival instead of with the stuff we value now is popular among the thoughtful but doesn’t make any sense to me;
  • in general it does not make sense, to me, to compromise on personal values in order to have more power/influence. you will be able to cause stuff to happen, but who cares if it’s not the stuff you want?
  • similarly, it does not make sense to consciously optimize for having lots of long-term descendants. I love my children; I expect they’ll love their children; but go too many generations out and it’s straight-up fantasyland. My great-grandparents would have hated me.  And that’s still a lot of shared culture and values! Do you really have that much in common with anyone from five thousand years ago?
  • Evolution is not your friend. God is not your friend. Everything worth loving will almost certainly perish. Did you expect it to last forever?
  • “I love whatever is best at surviving” or “I love whatever is strongest” means you don’t actually care what it’s like. It means you have no loyalty and no standards. It means you don’t care so much if the way things turn out is hideous, brutal, miserable, abusive… so long as it technically “is alive” or “wins”. Fuck that.
  • I despise sour grapes. If the thing I want isn’t available, I’m not going to pretend that what is available is what I want.
  • I am not going to embrace the “realistic” plan of allying with something detestable but potent. There is always an alternative, even if the only alternative is “stay true to your dreams and then get clobbered.”

Link to this on my Roam

Replies from: tailcalled, Raemon, Chris_Leong, myron-hedderson, tao-lin, tailcalled, SaidAchmiz, Unnamed, Mitchell_Porter, AliceZ, StartAtTheEnd
comment by tailcalled · 2024-10-10T20:21:12.315Z · LW(p) · GW(p)
  • it’s wrong to try to control people or stop them from doing locally self-interested & non-violent things in the interest of “humanity’s future”, in part because this is so futile.
    • if the only way we survive is if we coerce people to make a costly and painful investment in a speculative idea that might not even work, then we don’t survive! you do not put people through real pain today for a “someday maybe!” This applies to climate change,  AI x-risk, and socially-conservative cultural reform.

How does "this is so futile" square with the massive success of taxes and criminal justice? From what I've heard, states have managed to reduce murder rates by 50x. Obviously that's stopping people from something violent rather than non-violent, but what's the aspect of violence that makes it relevant? Or e.g. how about taxes which fund change to renewable energy? The main argument for socially-conservative cultural reform is fertility, but what about taxes that fund kindergartens, they sort of seem to have a similar function?

The key trick to make it correct to try to control people or stop them is to be stronger than them. 

comment by Raemon · 2024-10-10T19:01:18.371Z · LW(p) · GW(p)

I think this prompts some kind of directional update in me. My paraphrase of this is:

  • it’s actually pretty ridiculous to think you can steer the future
  • It’s also pretty ridiculous to choose to identify with what the future is likely to be.

Therefore…. Well, you don’t spell out your answer. My answer is "I should have a personal meaning-making resolution to 'what would I do if those two things are both true,' even if one of them turns out to be false, so that I can think clearly about whether they are true."

I’ve done a fair amount of similar meaning-making work through the lens of Solstice 2022 and 2023. But that was more through the lens of ‘nearterm extinction’ than ‘inevitability of value loss’, which does feel like a notably different thing.

So it seems worth doing some thinking and pre-grieving about that.

I of course have some answers to ‘why value loss might not be inevitable’, but it’s not something I’ve yet thought about through an unclouded lens.

Replies from: sarahconstantin
comment by sarahconstantin · 2024-10-13T22:10:43.172Z · LW(p) · GW(p)

Therefore, do things you'd be in favor of having done even if the future will definitely suck. Things that are good today, next year, fifty years from now... but not like "institute theocracy to raise birth rates", which is awful today even if you think it might "save the world".

Replies from: Raemon
comment by Raemon · 2024-10-13T22:23:38.795Z · LW(p) · GW(p)

Ah yeah that’s a much more specific takeaway than I’d been imagining.

comment by Chris_Leong · 2024-10-11T03:26:21.826Z · LW(p) · GW(p)

I honestly feel that the only appropriate response is something along the lines of "fuck defeatism"[1].

This comment isn't targeted at you, but at a particular attractor in thought space.

Let me try to explain why I think rejecting this attractor is the right response rather than engaging with it.

I think it's mostly that I don't think that talking about things at this level of abstraction is useful. It feels much more productive to talk about specific plans. And if you have a general, high-abstraction argument that plans in general are useless, but I have a specific argument why a specific plan is useful, I know which one I'd go with :-).

Don't get me wrong, I think that if someone struggles for a certain amount of time to try to make a difference and just hits wall after wall, then at some point they have to call it. But that's completely different from "never start" or "don't even try".

It's also worth noting, that saving the world is a team sport. It's okay to pursue a plan that depends on a bunch of other folk stepping up and playing their part.

  1. ^

    I would also suggest that this is the best way to respond to depression rather than "trying to argue your way out of it".

Replies from: sarahconstantin
comment by sarahconstantin · 2024-10-11T13:54:18.847Z · LW(p) · GW(p)

I'm not defeatist! I'm picky.

And I'm not talking specifics because i don't want to provoke argument.

comment by Myron Hedderson (myron-hedderson) · 2024-10-11T14:14:28.051Z · LW(p) · GW(p)

We can't steer the future

What about influencing? If, in order for things to go OK, human civilization must follow a narrow path which I individually need to steer us down, we're 100% screwed because I can't do that. But I do have some influence. A great deal of influence over my own actions (I'm resisting the temptation to go down a sidetrack about determinism, assuming you're modeling humans as things that can make meaningful choices), substantial influence over the actions of those close to me, some influence over my acquaintances, and so on until very extremely little (but not 0) influence over humanity as a whole. I also note that you use the word "we", but I don't know who the "we" is. Is it everyone? If so, then everyone collectively has a great deal of say about how the future will go, if we collectively can coordinate. Admittedly, we're not very good at this right now, but there are paths to developing this civilizational skill further than we currently have. So maybe the answer to "we can't steer the future" is "not yet we can't, at least not very well"?
 

  • it’s wrong to try to control people or stop them from doing locally self-interested & non-violent things in the interest of “humanity’s future”, in part because this is so futile.
    • if the only way we survive is if we coerce people to make a costly and painful investment in a speculative idea that might not even work, then we don’t survive! you do not put people through real pain today for a “someday maybe!” This applies to climate change,  AI x-risk, and socially-conservative cultural reform.

Agree, mostly. The steering I would aim for would be setting up systems wherein locally self-interested and non-violent things people are incentivized to do have positive effects for humanity's future. In other words, setting up society such that individual and humanity-wide effects are in the same direction with respect to some notion of "goodness", rather than individual actions harming the group, or group actions harming or stifling the individual. We live in a society where we can collectively decide the rules of the game, which is a way of "steering" a group. I believe we should settle on a ruleset where individual short-term moves that seem good lead to collective long-term outcomes that seem good. Individual short-term moves that clearly lead to bad collective long-term outcomes should be disincentivized, and if the effects are bad enough then coercive prevention does seem warranted (e.g., a SWAT team to prevent a mass shooting). And similarly for groups stifling individuals' ability to do things that seem to them to be good for them in the short term. And rules that have perverse incentive effects that are harmful to the individual, the group, or both? Definitely out. This type of system design is like a haiku - very restricted in what design choices are permissible, but not impossible in principle. Seems worth trying because if successful, everything is good with no coercion. If even a tiny subsystem can be designed (or the current design tweaked) in this way, that by itself is good. And the right local/individual move to influence the systems of which you are a part towards that state, as a cognitively-limited individual who can't hold the whole of complex systems in their mind and accurately predict the effect of proposed changes out into the far future, might be as simple as saying "in this instance, you're stifling the individual" and "in this instance you're harming the group/long-term future" wherever you see it, until eventually you get a system that does neither. Like arriving at a haiku by pointing out every time the rules of haiku construction are violated.

comment by Tao Lin (tao-lin) · 2024-10-10T23:24:11.254Z · LW(p) · GW(p)

I disagree a lot! Many things have gotten better! Are suffrage, abolition, democracy, property rights, etc. not significant? All the random stuff that e.g. The Better Angels of Our Nature describes has gotten better.

Either things have improved in the past or they haven't, and either people trying to "steer the future" in some sense have been influential on these improvements or they haven't. I think things have improved, and I think there's definitely not strong evidence that people trying to steer the future were always useless. Because trying to steer the future is very important and motivating, I try to do it.

Yes, the counterfactual impact of you individually trying to steer the future may or may not be significant, but people trying to steer the future is better than no one doing that!

Replies from: sarahconstantin
comment by sarahconstantin · 2024-10-13T22:04:22.526Z · LW(p) · GW(p)

"Let's abolish slavery," when proposed, would make the world better now as well as later.

I'm not against trying to make things better!

I'm against doing things that are strongly bad for present-day people to increase the odds of long-run human species survival.

comment by tailcalled · 2024-10-10T19:41:14.418Z · LW(p) · GW(p)
  • “I love whatever is best at surviving” or “I love whatever is strongest” means you don’t actually care what it’s like. It means you have no loyalty and no standards. It means you don’t care so much if the way things turn out is hideous, brutal, miserable, abusive… so long as it technically “is alive” or “wins”. Fuck that.

Proposal: For any given system, there's a destiny based on what happens when it's developed to its full extent. Sight is an example of this: human eyes, octopus eyes, and cameras have all ended up using lenses to steer light, despite being independent developments.

"I love whatever is the destiny" is, as you say, no loyalty and no standards. But, you can try to learn what the destiny is, and then on the basis of that decide whether to love or oppose it.

Plants and solar panels are the natural destiny for earthly solar energy. Do you like solarpunk? If so, good news, you can love the destiny, not because you love whatever is the destiny, but because your standards align with the destiny.

Replies from: Raemon, elityre
comment by Raemon · 2024-10-10T20:19:25.932Z · LW(p) · GW(p)

People who love solarpunk don't obviously love computronium dyson spheres tho

Replies from: tailcalled
comment by tailcalled · 2024-10-10T20:30:49.249Z · LW(p) · GW(p)

That is true, though:

1) Regarding tiling the universe with computronium as destiny is Gnostic [LW · GW] heresy.

2) I would like to learn more about the ecology of space infrastructure. Intuitively it seems to me like the Earth is much more habitable than anywhere else, and so I would expect sarah's "this is so futile" point to actually be inverted when it comes to e.g. a Dyson sphere, where the stagnation-inducing worldwide regulation will by default be stronger than the entropic pressure.

More generally, I have a concept I call the "infinite world approximation", which I think held until ~WWI. Under this approximation, your methods have to be robust against arbitrary adversaries, because they could invade from parts of the ecology you know nothing about. However, this approximation fails for Earth-scale phenomena, since Earth-scale organizations could shoot down any attempt at space colonization.

comment by Eli Tyre (elityre) · 2024-10-13T01:40:54.519Z · LW(p) · GW(p)

Are you saying this because you worship the sun?

Replies from: tailcalled
comment by tailcalled · 2024-10-13T07:48:24.946Z · LW(p) · GW(p)

I would more say the opposite: Henri Bergson (better known for inventing vitalism) convinced me that there ought to be a simple explanation for the forms life takes, and so I spent a while performing root cause analysis on that, and ended up with the sun as the creator.

comment by Said Achmiz (SaidAchmiz) · 2024-10-13T22:42:08.636Z · LW(p) · GW(p)

history clearly teaches us that civilizations and states collapse (on timescales of centuries) and the way to bet is that ours will as well, but it’s kind of insane hubris to think that this can be prevented;

It seems like it makes some difference whether our civilization collapses the way that the Roman Empire collapsed, the way that the British Empire collapsed, or the way that the Soviet Union collapsed. “We must prevent our civilization from ever collapsing” is clearly an implausible goal, but “we should ensure that a successor structure exists and is not much worse than what we have now” seems rather more reasonable, no?

comment by Unnamed · 2024-10-11T18:01:01.854Z · LW(p) · GW(p)

This post reads like it's trying to express an attitude or put forward a narrative frame, rather than trying to describe the world.

Many of these claims seem obviously false, if I take them at face value and take a moment to consider what they're claiming and whether it's true.

e.g., On the first two bullet points it's easy to come up with counterexamples. Some successful attempts to steer the future, by stopping people from doing locally self-interested & non-violent things, include: patent law ("To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries") and banning lead in gasoline. As well as some others that I now see that other commenters have mentioned.

comment by Mitchell_Porter · 2024-10-10T20:44:13.029Z · LW(p) · GW(p)

Is it too much to declare this the manifesto of a new philosophical school, Constantinism?

Replies from: sarahconstantin
comment by sarahconstantin · 2024-10-10T22:48:53.397Z · LW(p) · GW(p)

wait and see if i still believe it tomorrow!

Replies from: sarahconstantin
comment by sarahconstantin · 2024-10-15T16:03:13.884Z · LW(p) · GW(p)

I don't think it was articulated quite right -- it's more negative than my overall stance (I wrote it when unhappy) and a little too short-termist.

I do still believe that the future is unpredictable, that we should not try to "constrain" or "bind" all of humanity forever using authoritarian means, and that there are many many fates worse than death and we should not destroy everything we love for "brute" survival.

And, also, I feel that transience is normal and only a bit sad. It's good to save lives, but mortality is pretty "priced in" to my sense of how the world works. It's good to work on things that you hope will live beyond you, but Dark Ages and collapses are similarly "priced in" as normal for me. Sara Teasdale: "You say there is no love, my love, unless it lasts for aye; Ah folly, there are episodes far better than the play!" If our days are as a passing shadow, that's not that bad; we're used to it.

I worry that people who are not ok with transience may turn themselves into monsters so they can still "win" -- even though the meaning of "winning" is so changed it isn't worth it any more.

Replies from: nc
comment by nc · 2024-10-16T15:07:07.075Z · LW(p) · GW(p)

I do think this comes back to the messages in On Green [LW · GW] and also why the post went down like a cup of cold sick - rationality is about winning [LW · GW]. Obviously nobody on LW wants to "win" in the sense you describe, but people do prefer more winning over more harmony on the margin, I think.

The future will probably contain less of the way of life I value (or something entirely orthogonal), but then that's the nature of things.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2024-10-16T23:20:42.627Z · LW(p) · GW(p)

I think 2 cruxes dominate a lot of the discussion relevant here:

  1. Will a value lock-in event happen, especially soon, in a way such that once the values are locked in, it's basically impossible to change them?

  2. Is something like the vulnerable world hypothesis correct about technological development?

If people believed 1 or 2, I could see why they disagreed with Sarah Constantin's statement here.

comment by ZY (AliceZ) · 2024-10-11T00:11:25.798Z · LW(p) · GW(p)

I have been having some similar thoughts on the main points here for a while and thanks for this.

I guess to me what needs attention is when people do things along the lines of "benefit themselves and harm other people". That harm has a pretty strict definition, though I know we may always be able to give borderline examples. This definitely includes the abuse of power in our current society and culture, and any current risks etc. (For example, restricting to just AI, and with a warning on the content: https://www.iwf.org.uk/media/q4zll2ya/iwf-ai-csam-report_public-oct23v1.pdf. And this is very sad to see.) On the other hand, with regards to climate change (which can also be current) or AI risks, it probably should also be concerning when corporations or developers neglect known risks or pursue science/development irresponsibly. I think it is not wrong to work on these, but I just don't believe in "do not solve the other current risks and only work on future risks."

On some comments that were saying our society is "getting better" - sure, but the baseline is a very low bar (slavery for example). There are still many, many, many examples in different societies of how things are still very systematically messed up.

comment by StartAtTheEnd · 2024-10-12T11:57:53.543Z · LW(p) · GW(p)

You seem to dislike reality. Could it not be that the worldview which clashes with reality is wrong (or rather, in the wrong), rather than reality being wrong/in the wrong? For instance that "nothing is forever" isn't a design flaw, but one of the required properties that a universe must have in order to support life?

comment by sarahconstantin · 2024-10-28T17:33:57.651Z · LW(p) · GW(p)

"weak benevolence isn't fake": https://roamresearch.com/#/app/srcpublic/page/ic5Xitb70

  • there's a class of statements that go like:
    • "fair-weather friends" who are only nice to you when it's easy for them, are not true friends at all
    • if you don't have the courage/determination to do the right thing when it's difficult, you never cared about doing the right thing at all
    • if you sometimes engage in motivated cognition or are sometimes intellectually lazy/sloppy, then you don't really care about truth at all
    • if you "mean well" but don't put in the work to ensure that you're actually making a positive difference, then your supposed "well-meaning" intentions were fake all along
  • I can see why people have these views.
    • if you actually need help when you're in trouble, then "fair-weather friends" are no use to you
    • if you're relying on someone to accomplish something, it's not enough for them to "mean well", they have to deliver effectively, and they have to do so consistently. otherwise you can't count on them.
    • if you are in an environment where people constantly declare good intentions or "well-meaning" attitudes, but most of these people are not people you can count on, you will find yourself caring a lot about how to filter out the "posers" and "virtue signalers" and find out who's true-blue, high-integrity, and reliable.
  • but I think it's literally false and sometimes harmful to treat "weak"/unreliable good intentions as absolutely worthless.
    • not all failures are failures to care enough/try hard enough/be brave enough/etc.
      • sometimes people legitimately lack needed skills, knowledge, or resources!
      • "either I can count on you to successfully achieve the desired outcome, or you never really cared at all" is a long way from true.
      • even the more reasonable, "either you take what I consider to be due/appropriate measures to make sure you deliver, or you never really cared at all" isn't always true either!
        • some people don't know how to do what you consider to be due/appropriate measures
        • some people care some, but not enough to do everything you consider necessary
        • sometimes you have your own biases about what's important, and you really want to see people demonstrate a certain form of "showing they care", otherwise you'll consider them negligent, but that's not actually the most effective way to increase their success rate
    • almost everyone has a finite amount of effort they're willing to put into things, and a finite amount of cost they're willing to pay. that doesn't mean you need to dismiss the help they are willing and able to provide.
      • as an extreme example, do you dismiss everybody as "insufficiently committed" if they're not willing to die for the cause? or do you accept graciously if all they do is donate $50?
      • "they only help if it's fun/trendy/easy/etc" -- ok, that can be disappointing, but is it possible you should just make it fun/trendy/easy/etc? or just keep their name on file in case a situation ever comes up where it is fun/trendy/easy and they'll be helpful then?
    • it's harmful to apply this attitude to yourself, saying "oh I failed at this, or I didn't put enough effort in to ensure a good outcome, so I must literally not care about ideals/ethics/truth/other people."
      • like...you do care any amount. you did, in fact, mean well.
        • you may have lacked skill;
        • you may have not been putting in enough effort;
        • or maybe you care somewhat but not as much as you care about something else
        • but it's probably not accurate or healthy to take a maximally-cynical view of yourself where you have no "noble" motives at all, just because you also have "ignoble" motives (like laziness, cowardice, vanity, hedonism, spite, etc).
          • if you have a flicker of a "good intention" to help people, make the world a better place, accomplish something cool, etc, you want to nurture it, not stomp it out as "probably fake".
          • your "good intentions" are real and genuinely good, even if you haven't always followed through on them, even if you haven't always succeeded in pursuing them.
            • you don't deserve "credit" for good intentions equal to the "credit" for actually doing a good thing, but you do deserve some credit.
          • basic behavioral "shaping" -- to get from zero to a complex behavior, you have to reward very incremental simple steps in the right direction.
            • e.g. if you wish you were "nicer to people", you may have to pat yourself on the back for doing any small acts of kindness, even really "easy" and "trivial" ones, and notice & make part of your self-concept any inclinations you have to be warm or helpful.
            • "I mean well and I'm trying" has to become a sentence you can say with a straight face. and your good intentions will outpace your skills so you have to give yourself some credit for them.
    • it may be net-harmful to create a social environment where people believe their "good intentions" will be met with intense suspicion.
      • it's legitimately hard to prove that you have done a good thing, particularly if what you're doing is ambitious and long-term.
      • if people have the experience of meaning well and trying to do good but constantly being suspected of insincerity (or nefarious motives), this can actually shift their self-concept from "would-be hero" to "self-identified villain"
        • which is bad, generally
          • at best, identifying as a villain doesn't make you actually do anything unethical, but it makes you less effective, because you preemptively "brace" for hostility from others instead of confidently attracting allies
          • at worst, it makes you lean into legitimately villainous behavior
      • OTOH, skepticism is valuable, including skepticism of people's motives.
      • but it can be undesirable when someone is placed in a "no-win situation", where from their perspective "no matter what I do, nobody will believe that I mean well, or give me any credit for my good intentions."
      • if you appreciate people for their good intentions, sometimes that can be a means to encourage them to do more. it's not a guarantee, but it can be a starting point for building rapport and starting to persuade. people often want to live up to your good opinion of them.
Replies from: johnswentworth, Algon
comment by johnswentworth · 2024-10-29T03:38:29.220Z · LW(p) · GW(p)

... this can actually shift their self-concept from "would-be hero" to "self-identified villain"

  • which is bad, generally
    • at best, identifying as a villain doesn't make you actually do anything unethical, but it makes you less effective, because you preemptively "brace" for hostility from others instead of confidently attracting allies
    • at worst, it makes you lean into legitimately villainous behavior

Sounds like it's time for a reboot of the ol' "join the dark side" essay.

Replies from: Raemon
comment by Raemon · 2024-10-29T20:41:30.626Z · LW(p) · GW(p)

I want to register in advance, I have qualms I’d be interested in talking about. (I think they are at least one level more interesting than the obvious ones, and my relationship with them is probably at least one level more interesting than the obvious relational stance)

comment by Algon · 2024-10-29T22:22:14.741Z · LW(p) · GW(p)

it may be net-harmful to create a social environment where people believe their "good intentions" will be met with intense suspicion.

The picture I get of Chinese culture from their fiction makes me think China is kinda like this. A recurrent trope was "If you do some good deeds, like offering free medicine to the poor, and don't do a perfect job, like treating everyone who says they can't afford medicine, then everyone will castigate you for only wanting to seem good. So don't do good." Another recurrent trope was "it's dumb, even wrong, to be a hero/you should be a villain." (One annoying variant is "kindness to your enemies is cruelty to your allies", which is used to justify pointless cruelty.) I always assumed this was a cultural antibody formed in response to communists doing terrible things in the name of the common good.

comment by sarahconstantin · 2024-10-25T17:49:18.783Z · LW(p) · GW(p)

links 10/25/24: https://roamresearch.com/#/app/srcpublic/page/10-25-2024

 

comment by sarahconstantin · 2024-11-12T19:34:31.865Z · LW(p) · GW(p)

neutrality (notes towards a blog post): https://roamresearch.com/#/app/srcpublic/page/Ql9YwmLas

  • "neutrality is impossible" is sort-of-true, actually, but not a reason to give up.
    • even a "neutral" college class (let's say a standard algorithms & data structures CS class) is non-neutral relative to certain beliefs
      • some people object to the structure of universities and their classes to begin with;
      • some people may object on philosophical grounds to concepts that are unquestionably "standard" within a field like computer science.
      • some people may think "apolitical" education is itself unacceptable.
        • to consider a certain set of topics "political" and not mention them in the classroom is, implicitly, to believe that it is not urgent to resolve or act on those issues (at least in a classroom context), and therefore it implies some degree of acceptance of the default state of those issues.
      • our "neutral" CS class is implicitly taking a stand on certain things and in conflict with certain conceivable views. but, there's a wide range of views, including (I think) the vast majority of the actual views of relevant parties like students and faculty, that will find nothing to object to in the class.
    • we need to think about neutrality in more relative terms:
      • what rule are you using, and what things are you claiming it will be neutral between?
  • what is neutrality anyway and when/why do you want it?
    • neutrality is a type of tactic for establishing cooperation between different entities.
      • one way (not the only way) to get all parties to cooperate willingly is to promise they will be treated equally.
      • this is most important when there is actual uncertainty about the balance of power.
        • eg the Dutch Republic was the first European polity to establish laws of religious tolerance, because it happened to be roughly evenly divided between multiple religions and needed to unite to win its independence.
    • a system is neutral towards things when it treats them the same.
      • there are lots of ways to treat things the same:
        • "none of these things belong here"
          • eg no religion in "public" or "secular" spaces
            • is the "public secular space" the street? no-hijab rules?
            • or is it the government? no 10 Commandments in the courthouse?
        • "each of these things should get equal treatment"
          • eg Fairness Doctrine
        • "we will take no sides between these things; how they succeed or fail is up to you"
          • e.g. "marketplace of ideas", "colorblindness"
    • one can always ask, about any attempt at procedural neutrality:
      • what things does it promise to be neutral between?
        • are those the right or relevant things to be neutral on?
      • to what degree, and with what certainty, does this procedure produce neutrality?
        • is it robust to being intentionally subverted?
    • here and now, what kind of neutrality do we want?
      • thanks to the Internet, we can read and see all sorts of opinions from all over the world. a wider array of worldviews are plausible/relevant/worth-considering than ever before. it's harder to get "on the same page" with people because they may have come from very different informational backgrounds.
      • even tribes are fragmented. even people very similar to one another can struggle to synch up and collaborate, except in lowest-common-denominator ways that aren't very productive.
      • narrowing things down to US politics, no political tribe or ideology is anywhere close to a secure monopoly. nor are "tribes" united internally.
      • we have relied, until now, on a deep reserve of "normality" -- apolitical, even apathetic, Just The Way Things Are. In the US that means, people go to work at their jobs and get paid for it and have fun in their free time. 90's sitcom style.
        • there's still more "normality" out there than culture warriors tend to believe, but it's fragile. As soon as somebody asks "why is this the way things are?" unexamined normality vanishes.
          • to the extent that the "normal" of the recent past was functional, this is a troubling development...but in general the operation of the mind is a good thing!
          • we just have more rapid and broader idea propagation now.
            • why did "open borders" and "abolish the police" and "UBI" take off recently? because these are simple ideas with intuitive appeal. some % of people will think "that makes sense, that sounds good" once they hear of them. and now, way more people are hearing those kinds of ideas.
      • when unexamined normality declines, conscious neutrality may become more important.
        • conscious neutrality for the present day needs to be aware of the wide range of what people actually believe today, and avoid the naive Panglossianism of early web 2.0.
          • many people believe things you think are "crazy".
          • "democratization" may lead to the most popular ideas being hateful, trashy, or utterly bonkers.
          • on the other hand, depending on what you're trying to get done, you may very well need to collaborate with allies, or serve populations, whose views are well outside your comfort zone.
        • neutrality has things to offer:
          • a way to build trust with people very different from yourself, without compromising your own convictions;
            • "I don't agree with you on A, but you and I both value B, so I promise to do my best at B and we'll leave A out of it altogether"
          • a way to reconstruct some of the best things about our "unexamined normality" and place them on a firmer foundation so they won't disappear as soon as someone asks "why?"
  • a "system of the world" is the framework of your neutrality: aka it's what you're not neutral about.
    • eg:
      • "melting pot" multiculturalism is neutral between cultures, but does believe that they should mostly be cosmetic forms of diversity (national costumes and ethnic foods) while more important things are "universal" and shared.
      • democratic norms are neutral about who will win, but not that majority vote should determine the winner.
      • scientific norms are neutral about which disputed claims will turn out to be true, but not on what sorts of processes and properties make claims credible, and not about certain well-established beliefs
    • right now our system-of-the-world is weak.
      • a lot of it is literally decided by software affordances. what the app lets you do is what there is.
        • there was a lot that was healthy and praiseworthy about software companies and their culture, especially 10-20 years ago. but they were never prepared for that responsibility!
    • a stronger system-of-the-world isn't dogmatism or naivety.
      • were intellectuals of the 20th, the 19th, or the 18th centuries childish because they had more explicit shared assumptions than we do? I don't think so.
        • we may no longer consider some of their frameworks to be true
        • but having a substantive framework at all clearly isn't incompatible with thinking independently, recognizing that people are flawed, or being open to changing your mind.
        • "hedgehogs" or "eternalists" are just people who consider some things definitely true.
          • it doesn't mean they came to those beliefs through "blind faith" or have never questioned them.
          • it also doesn't mean they can't recognize uncertainty about things that aren't foundational beliefs.
        • operating within a strongly-held, assumed-shared worldview can be functional for making collaborative progress, at least when that worldview isn't too incompatible with reality.
      • mathematics was "non-rigorous", by modern standards, until the early 20th century; and much of today's mathematics will be considered "non-rigorous" if machine-verified proofs ever become the norm. but people were still able to do mathematics in centuries past, most of which we still consider true.
        • the fact that you can generate a more general framework (within which the old framework was a special case, or in which the old framework was an unprincipled assumption that the world is "nicely behaved" in some sense) does not mean that the old framework was not fruitful for learning true things.
          • sometimes, taking for granted an assumption that's not literally always true (but is true mostly, more-or-less, or in the practically relevant cases) can even be more fruitful than a more radically skeptical and general view.
    • an *intellectual* system-of-the-world is the framework we want to use for the "republic of letters", the sub-community of people who communicate with each other in a single conversational web and value learning and truth.
      • that community expanded with the printing press and again with the internet.
      • it is radically diverse in opinion.
      • it is not literally universal. not everybody likes to read and write; not everybody is curious or creative. a lot of the "most interesting people in the world" influence each other.
        • everybody in the old "blogosphere" was, fundamentally, the same sort of person, despite our constant arguments with each other; and not a common sort of person in the broader population; and we have turned out to be more influential than we have ever been willing to admit.
      • but I do think of it as a pretty big and growing tent, not confined to 300 geniuses or anything like that.
        • "The" conversation -- the world's symbolic information and its technological infrastructure -- is something anybody can contribute to, but of course some contribute more than others.
        • I think the right boundary to draw is around "power users" -- people who participate in that network heavily rather than occasionally.
          • e.g. not all academics are great innovators, but pretty much all of them are "power users" and "active contributors" to the world's informational web.
          • I'm definitely a power user; I expect a lot of my readers are as well.
      • what do we need to not be neutral about in this context? what belongs in an intellectual system-of-the-world?
        • another way of asking this question: about what premises are you willing to say, not just for yourself but for the whole world and for your children's children, "if you don't accept this premise then I don't care to speak to you or hear from you, forever?"
          • clearly that's a high standard!
          • I have many values differences with, say, the author of the Epic of Gilgamesh, but I still want to read it. And I want lots of other people to be able to read it! I do not want the mind that created it to be blotted out of memory.
          • that's the level of minimal shared values we're talking about here. What do we have in common with everyone who has an interest in maintaining and extending humanity's collective record of thought?
        • lack of barriers to entry is not enough.
          • the old Web 2.0 idea was "allow everyone to communicate with everyone else, with equal affordances." This is a kind of "neutrality" -- every user account starts out exactly the same, and anybody can make an account.
            • I think that's still an underrated principle. "literally anybody can speak to anybody else who wants to listen" was an invention that created a lot of valuable affordances. we forget how painfully scarce information was when that wasn't true!
          • the problem is that an information system only works when a user can find the information they seek. And in many cases, what the user is seeking is true information.
          • mechanisms intended to make high quality information (reliable, accurate, credible, complete, etc) preferentially discoverable, are also necessary
            • but they shouldn't just recapitulate potentially-biased gatekeeping.
              • we want evaluative systems that, at least a priori, an ancient Sumerian could look at and say "yep, sounds fair", even if the Sumerian wouldn't like the "truths" that come out on top in those systems.
              • we really can't be parochial here. social media companies "patched" the problem of misinformation with opaque, partisan side-taking, and they suffered for it.
              • how "meta" do we have to get about determining what counts as reliable or valid? well, more meta than just picking a winning side in an ongoing political dispute, that's for sure.
                • probably also more "meta" than handpicking certain sources as trustworthy, the way Wikipedia does.
    • if we want to preserve and extend knowledge, the "republic of letters" needs intentional stewardship of the world's information, including serious attempts at neutrality.
      • perceived bias, of course, turns people away from information sources.
      • nostalgia for unexamined normality -- "just be neutral, y'know, like we were when I was young" -- is not a credible offer to people who have already found your nostalgic "normal" wanting.
      • rigorous neutrality tactics -- "we have structured this system so that it is impossible for anyone to tamper with it in a biased fashion" -- are better.
        • this points towards protocols.
          • h/t Venkatesh Rao
          • think: zero-knowledge proofs, formal verification, prediction markets, mechanism design, crypto-flavored governance schemes, LLM-enabled argument mapping, AI mechanistic-interpretability and "showing its work", etc. (a toy sketch of one of these, a prediction-market pricing rule, appears just after this outline)
        • getting fancy with the technology here often seems premature when the "public" doesn't even want neutrality; but I don't think it actually is.
          • people don't know they want the things that don't yet exist.
          • the people interested in developing "provably", "rigorously", "demonstrably" impartial systems are exactly the people you want to attract first, because they care the most.
          • getting it right matters.
            • a poorly executed attempt either fizzles instantly; or it catches on but its underlying flaws start to make it actively harmful once it's widely culturally influential.
        • OTOH, premature disputes on technology and methods are undesirable.
          • remember there aren't very many of you/us. that is:
            • pretty much everybody who wants to build rigorous neutrality, no matter why they want it or how they want to implement it, is a potential ally here.
              • the simple fact of wanting to build a "better" world that doesn't yet exist is a commonality, not to be taken for granted. most people don't do this at all.
              • the "softer" side, mutual support and collegiality, is especially important to people whose dreams are very far from fruition. people in this situation are unusually prone to both burnout and schism. be warm and encouraging; it helps keep dreams alive.
              • also, the whole "neutrality" thing is a sham if we can't even engage with collaborators with different views and cultural styles.
            • also, "there aren't very many of us" in the sense that none of these envisioned new products/tools/institutions are really off the ground yet, and the default outcome is that none of them get there.
              • you are playing in a sandbox. the goal is to eventually get out of the sandbox.
              • you will need to accumulate talent, ideas, resources, and vibe-momentum. right now these are scarce, or scattered; they need to be assembled.
              • be realistic about influence.
                • count how many people are at the conference or whatever. how many readers. how many users. how many dollars. in absolute terms it probably isn't much. don't get pretentious about a "movement", "community", or "industry" before it's shown appreciable results.
                • the "adjacent possible" people to get involved aren't the general public, they're the closest people in your social/communication graph who aren't yet participating. why aren't they part of the thing? (or why don't you feel comfortable going to them?) what would you need to change to satisfy the people you actually know?
                  • this is a better framing than speculating about mass appeal.
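
To make the "protocols" bullet slightly more concrete (as flagged in the outline above), here is a toy sketch of one item from that list: a prediction market priced by Hanson's logarithmic market scoring rule. All names and parameters below are made up for illustration; the point is just that quotes come from a fixed public formula rather than from any editor's judgment, which is one flavor of "rigorous neutrality".

```python
import math

class LMSRMarket:
    """Toy logarithmic market scoring rule (LMSR) market maker.

    Prices are a deterministic function of the net shares sold, so anyone
    can verify a quote; no human gatekeeper decides whose bets count.
    """
    def __init__(self, outcomes, b=100.0):
        self.b = b                           # liquidity parameter
        self.q = {o: 0.0 for o in outcomes}  # net shares sold per outcome

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q.values()))

    def prices(self):
        """Current implied probabilities; they always sum to 1."""
        z = sum(math.exp(x / self.b) for x in self.q.values())
        return {o: math.exp(x / self.b) / z for o, x in self.q.items()}

    def buy(self, outcome, shares):
        """Charge the trader the change in the cost function."""
        before = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - before

market = LMSRMarket(["yes", "no"])
print(round(market.buy("yes", 50), 2))  # what the trader pays (~28.09 here)
print(market.prices())                  # "yes" now priced above 0.5
```

A real deployment needs subsidy, settlement, and manipulation-resistance on top of this; the sketch only shows why a mechanical pricing rule is the kind of thing an ancient Sumerian could agree is "fair" without trusting its operators.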
Replies from: Viliam
comment by Viliam · 2024-11-13T09:46:00.386Z · LW(p) · GW(p)

even a "neutral" college class (let's say a standard algorithms & data structures CS class) is non-neutral relative to certain beliefs

Things that many people consider controversial: evolution, sex education, history. But even for mathematics lessons, you will often find a crackpot who considers a given topic controversial. (-1)×(-1) = 1? 0.999... = 1?
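
Both identities do have short, standard derivations; a minimal sketch (which, of course, rarely convinces the people who consider these topics controversial):

$$
(-1)\cdot(-1) + (-1) \;=\; (-1)\cdot\big((-1) + 1\big) \;=\; (-1)\cdot 0 \;=\; 0
\;\;\Longrightarrow\;\; (-1)\cdot(-1) = 1,
$$

$$
0.\overline{9} \;=\; \sum_{n=1}^{\infty}\frac{9}{10^{n}} \;=\; \frac{9/10}{1 - 1/10} \;=\; 1.
$$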

some people object to the structure of universities and their classes to begin with

In general, unschooling.

In my opinion, the important functionality of schools is: (1) separating reliable sources of knowledge from bullshit, (2) designing a learning path from "I know nothing" to "I am an expert" where each step only requires the knowledge of previous steps, (3) classmates and teachers to discuss the topic with.

Without these things, learning is difficult. If an autodidact stumbles on some pseudoscience in the library, even if they later figure out that it was bullshit, it is a huge waste of time. Picking up random books on a topic and finding out that I don't understand the things they expect me to already know is disappointing. Finding people interested in the same topic can be difficult.

But everything else about education is incidental. No need to walk into the same building. No need to only have classmates of exactly the same age. The learning path doesn't have to be linear; it could be a directed graph. Generally, no need to learn a specific topic at a specific age, although it makes sense to learn the topics that are prerequisites to a lot of knowledge as soon as possible. Grading is incidental; you need some feedback, but IMHO it would be better to split the knowledge into many small pieces, and grade each piece as "you get it" or "you don't".

...and the conclusion of my thesis is that a good educational system would focus on the essentials, and be liberal about everything else. However, there are people who object to the very things I consider essential. The educational system that would seem incredibly free to me would still seem oppressive to them.
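
To make the "learning path as a directed graph" point above concrete: topics plus prerequisite edges form a DAG, and any topological order of that DAG is a valid, non-linear learning path. A minimal sketch with a made-up prerequisite graph:

```python
from graphlib import TopologicalSorter

# Hypothetical prerequisite DAG: each topic maps to the topics it depends on.
prerequisites = {
    "counting": set(),
    "addition": {"counting"},
    "multiplication": {"addition"},
    "fractions": {"multiplication"},
    "algebra": {"multiplication", "fractions"},
}

# Any topological order is a valid learning path: every topic appears
# only after all of its prerequisites have been covered.
print(list(TopologicalSorter(prerequisites).static_order()))
# ['counting', 'addition', 'multiplication', 'fractions', 'algebra']
```

Grading each small piece as "you get it" / "you don't" then amounts to marking nodes of this graph as done.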

neutrality is a type of tactic for establishing cooperation between different entities.

That means you can have a system neutral towards selected entities (the ones you want in the coalition), but not others. For example, you can have religious tolerance towards an explicit list of churches.

This can lead to a meta-game where some members of the coalition try to kick someone out because they are no longer necessary, and other members strategically keep someone in, not necessarily because they love them, but because "if they are kicked out today, tomorrow it could be me; better avoid this slippery slope".

Examples: various cults in the USA that are obviously destructive but enjoy a lot of legal protection. Leftists establishing an exception for "Nazis", and then expanding the definition to make it apply to anyone they don't like. Similarly, the right calling everything they don't like "communism". And everyone on the internet calling everything "religion".

"we will take no sides between these things; how they succeed or fail is up to you"

Or the opposite of that: "the world is biased against X, therefore we move towards true neutrality by supporting X".

is it robust to being intentionally subverted?

So, situations like: the organization is nominally politically neutral, but a human in an important position has political preferences... so far that is normal and maybe unavoidable, but what if there are multiple humans like that, all with the same political preference? If they start acting in a biased way, is it possible for other members to point it out... without getting accused in turn of "bringing politics" into the organization?

As soon as somebody asks "why is this the way things are?" unexamined normality vanishes.

They can easily create a subreddit r/anti-some-specific-way-things-are and now the opposition to the idea is forever a thing.

a way to reconstruct some of the best things about our "unexamined normality" and place them on a firmer foundation so they won't disappear as soon as someone asks "why?"

Basically, we need a "FAQ for normality". The old situation was that people who were interested in a topic knew why things are a certain way, and others didn't care. If you joined the group of people who are interested, sooner or later someone explained it to you in person.

But today, someone can make a popular YouTube video containing some false explanation, and overnight you have tons of people who are suddenly interested in the topic and believe a falsehood... and the people who know how things are just don't have the capacity to explain that to someone who lacks the fundamentals, believes a lot of nonsense, has strong opinions, and is typically very hostile to someone trying to correct them. So they just give up. But now we have the falsehood established as an "alternative truth", and the old process of teaching the newcomers no longer works.

The solution for "I don't have the capacity to communicate with so many ignorant and often hostile people" is to make an article or a YouTube video with the explanation, and just keep posting the link. Some people will pay attention, some people won't, but it no longer takes a lot of your time, and it protects you from the emotional impact.

There are things for which we don't have a good article to link, or the article is not known to many. We could fix that. In theory, school was supposed to be this kind of FAQ, but that doesn't work in a dynamic society where new things happen after you are out of school.

a lot of it is literally decided by software affordances. what the app lets you do is what there is.

Yeah, I often feel that having some kind of functionality would improve things, but the functionality is simply not there.

To some degree this is caused by companies having a monopoly on the ecosystem they create. For example, if I need some functionality for e-mail, I can make an open-source e-mail client that has it. (I think historically spam filters started like this.) If I need some functionality for Facebook... there is nothing I can do about it, other than leaving Facebook, but there is a problem with coordinating that.

Sometimes this is on purpose. Facebook doesn't want me to be able to block the ads and spam, because they profit from it.

but having a substantive framework at all clearly isn't incompatible with thinking independently, recognizing that people are flawed, or being open to changing your mind.

Yeah, if we share a platform, we may start examining some of its assumptions, and maybe at some moment we will collectively update. But if everyone assumes something else, it's the Eternal September of civilization.

If we can't agree on what is addition, we can never proceed to discuss multiplication. And we will never build math.

I think the right boundary to draw is around "power users" -- people who participate in that network heavily rather than occasionally.

Sometimes this is reflected by the medium. For example, many people post comments on blogs, but only a small part of them writes blogs. By writing a blog you join the "power users", and the beauty of it is that it is free for everyone and yet most people keep themselves out voluntarily.

(A problem coming soon: many fake "power users" powered by LLMs.)

I have many values differences with, say, the author of the Epic of Gilgamesh, but I still want to read it.

There is a difference between reading for curiosity and reading to get reliable information. I may be curious about e.g. Aristotle's opinion on atoms, but I am not going to use it to study chemistry.

In some way, I treat some people's opinions as information about the world, and other people's opinions as information about them. Both are interesting, but in a different way. It is interesting to know my neighbor's opinion on astrology, but I am not using this information to update on astrology; I only use it to update on my neighbor.

So I guess I have two different lines: whether I care about someone as a person, and whether I trust someone as a source of knowledge. I listen to both, but I process the information differently.

this points towards protocols.

Thinking about the user experience, I think it would be best if the protocol already came with three default implementations: as a website, as a desktop application, and as a smartphone app.

A website doesn't require me to install anything; I just create an account and start using it. The downside is that the website has an owner, who can kick me out of the website. Also, I cannot verify the code. A malicious owner could probably take my password (unless we figure out some way to avoid this that isn't too inconvenient). Ideally there would be multiple websites talking to each other, in a way that is as transparent for the user as possible.

A smartphone app, because that's what most people use most of the day, especially when they are outside.

A desktop app, because that provides the most options for the (technical) power user. For example, it would be nice to keep an offline archive of everything I want, delete anything I no longer want, and export and import data.

comment by sarahconstantin · 2024-12-10T18:16:04.203Z · LW(p) · GW(p)

links 12/10/24: https://roamresearch.com/#/app/srcpublic/page/12-10-2024

  • https://hedy.org/hedy Hedy, an educational Python variant that works in multiple languages and has tutorials starting from zero
  • https://www.bitsaboutmoney.com/archive/debanking-and-debunking/ Patrick McKenzie on "debanking"
    • tl;dr: yes, lots of legal businesses get debanked; no, he disagrees with some of the crypto advocates' characterization of the situation
    • in more detail:
      • you can lose bank account access, despite doing nothing unethical, for mundane business/credit-risk related reasons like "you are using your checking account as a small business bank account and transferring a lot of money in and out" or "you are a serial victim of identity theft".
        • this is encouraged by banking regulators but fundamentally banks would do something like this regardless.
      • FinCEN, the US Treasury's anti-money-laundering arm, shuts down a lot of innocent businesses that do some kind of financial activity (like buying and selling gift cards) without proper KYC/AML controls. A lot of bodegas get shut down.
        • this is 100% a gov't-created issue and it's kind of tragic.
      • FDIC, which guarantees bank deposits in the event of a bank run, is also tasked with making rules against banks doing things that might lead to bank runs.
        • You know what might cause a run on a bank? A bunch of crypto-holders suddenly finding out their assets are worthless or gone, and wanting to cash out. To some extent, FDIC's statutory mandate does entitle it to tell banks not to serve the crypto sector too heavily, because crypto is risky.
        • Another thing the FDIC is entitled to do is regulate banking products to ensure that consumers are not misled into thinking their money is in an FDIC-insured institution when it isn't. Under that mandate, a lot of crypto-based consumer banking/trading products have gotten shut down.
        • This does amount to "FDIC doesn't like crypto", but it is in fact FDIC's job to regulate banking in ways related to preventing consumers from losing their savings. Patrick McKenzie is fine with this; given the picture he presents, if you are not fine with this, it basically means you're not fine with the existence of the FDIC. (Which is not an unheard-of position; it belongs in the same category as objecting to other New Deal innovations like going off the gold standard and creating the welfare state.)
      • Separately, in the Obama administration, Operation Choke Point happened: the FDIC claimed that a wide variety of politically disfavored businesses (guns, pornography, fireworks, etc.) were risky...because of the regulatory risk of the FDIC disapproving of them.
        • unlike the crypto regulation, this is totally unrelated to things like bank run risk that are in FDIC's official mandate. It is simply using FDIC to punish businesses that someone in the government doesn't like. Patrick McKenzie considers it a "lawless" abuse of power.
      • The Fed & Treasury's refusal to allow Facebook to issue the Libra cryptocurrency was similarly politically motivated. Senators blamed Facebook (and the Cambridge Analytica scandal) for Trump's election and warned the CEOs of Visa, MasterCard, and Stripe not to engage with Libra. Patrick McKenzie also views this as the "naked exercise of power."
      • Politically motivated debanking of individuals is clearly possible -- it happened in Canada with the truckers' convoy. However, Patrick McKenzie does not think it is routine in the US today. It is a risk rather than a common reality.
      • However, he wants to insist that the "crypto agenda" of "crypto should be treated on an equal playing field with USD by the banking sector" is not going to protect ordinary people from getting debanked for being, say, bodega owners or gun enthusiasts or conservatives or pornographers. He views it as a crypto-specific lobbying agenda, pretty much separate from the civil-rights/authoritarianism issue of political debanking.
  • https://austinvernon.site/blog/datacenterpv.html Austin Vernon's outline of how off-grid, solar-powered datacenters could work and be cost-effective
Replies from: Viliam
comment by Viliam · 2024-12-11T15:58:50.054Z · LW(p) · GW(p)

Another interesting part from the "debanking" article:

[Sam Bankman-Fried] orchestrated a sequential privilege escalation attack on the system that is the United States of America, via consummate skill at understanding how power works, really works, in the United States. They rooted trusted institutions and used each additional domino’s weight against the next. A full recounting of the political strategy alone could easily fill a book. [...] One major reason why crypto has experienced what feels like performative outrage from Democrats since 2022 is that they are trying to demonstrate that crypto did not successfully buy them.

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2024-12-11T16:14:14.389Z · LW(p) · GW(p)

This is talking about dem voters or generally progressive citizens, not dem politicians, correct?

Replies from: Viliam
comment by Viliam · 2024-12-11T16:34:11.666Z · LW(p) · GW(p)

Nope, politicians. SBF donated tons of money to Democrats (and a smaller ton of money to Republicans, just to be sure).

comment by sarahconstantin · 2024-12-13T16:50:29.960Z · LW(p) · GW(p)

links 12/13/2024:

 https://arxiv.org/pdf/2407.00695 Minimo, an RL agent for jointly learning both conjectures and proofs in Peano from "intrinsic motivation"

  • what is "intrinsic motivation" in RL?
    • https://arxiv.org/pdf/2203.02298 intrinsic motivation mechanisms include:
      • reward shaping, i.e. comparing the expected value of two possible states, so that the agent gets an incremental "reward" when it moves to a state with higher expected value
      • rewards based on novelty rather than expected success, such as assigning more reward to visiting novel states, or assigning more reward to states with high prediction error relative to the agent's model of the world (a toy code sketch of both mechanisms appears at the end of this links list)
  • https://github.com/p-doom/gc-minimo gc-Minimo, the "goal-conditional" version that involves subgoals
  • https://pdoom.org/ AI organization, research aimed at AGI; young, educated European team, they seem smart (to my unsophisticated eye) and idealistic (they want to share/open-source as much as possible, in contrast to secretive for-profit AI labs)
  • https://news.mit.edu/2024/noninvasive-imaging-method-can-penetrate-deeper-living-tissue-1211  new non-invasive laser imaging technique; label-free; about 700 μm deep.
    • aka, not useful for subcutaneous imaging in living mammals, but possibly quite useful for non-destructive imaging of organoids (mentioned in the article) or maybe invertebrates, embryoids, other small living things;
    • maybe also nondestructive imaging of surface cells in live mammals:
      • skin
      • eyes
      • surgically exposed tissues
        • when you're operating on a tumor, it's important to make sure you have clean margins; would tumor cells look different under this sort of "metabolic" imaging?
  • https://xenaproject.wordpress.com/2024/12/11/fermats-last-theorem-how-its-going/ ongoing project to translate a proof of Fermat's Last Theorem into Lean.
    • https://xenaproject.wordpress.com/what-is-the-xena-project/ the Xena Project is a project to get undergraduate math majors to formalize things in Lean.
      • "One could imagine things like formally verified course notes, which would later turn into some searchable database, and then to a tool which attempts example sheet questions by applying theorems from the course".
      • "No available system currently has all of an undergraduate pure mathematics degree, so undergraduates can even contribute to research projects. Over ten Imperial maths undergraduates have contributed to Lean’s maths library, and there are plenty of students at other universities in the UK and beyond who have also got involved."
  • https://reactormag.com/the-vampire-p-h-lee/ eerie, touching short story: what if, in early-2010's Tumblr, there were active vampire and werewolf communities?
  • https://www.za-zu.com/blog/playbook how to cold-email at scale. apparently if you just send a bajillion emails from one account it can get marked as spam; there are methods to circumvent this.
  • https://en.m.wikipedia.org/wiki/Kray_twins celebrity-esque 1960's British gangsters.
  • https://www.nature.com/articles/s41591-024-03306-x
    • today in What Can't The Hypothalamus Do: stimulate the lateral hypothalamus and you get improved walking in recovery from spinal cord injury in mice, rats, and 2 humans.
      • appears to be specific to Vglut2 neurons (as shown by optogenetics)
      • got the patients to be able to climb stairs and walk 50 m, when they couldn't before, after 3 months of rehab (they had both had their spinal injury for many years prior without being able to walk/climb).
      • you can see from the EMG data that both patients have way more leg muscle activation when trying to walk or raise their knees from a lying position when the DBS is on vs off
    • how crazy is this? the standard lists of things the lateral hypothalamus does don't include motor function. mostly it's autonomic stuff, arousal, hunger, and motivation/mood.
  • https://www.cognition.ai/blog/devin-generally-available  this worries me from a mundane security point of view, though maybe I'm excessively paranoid; do you really want an AI agent autonomously mucking about in your code repo and pushing changes? I've heard the argument that this doesn't really introduce more risk than a new junior developer (who might likewise be error-prone or even a crook) but my mind is not at ease.
  • https://ideaharbor.xyz/ a cute site where people can post project ideas. some of them are not, y'know, possible. "Batteries that can store the internet in them for when your connection goes down."
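
As flagged above under the Minimo link, here is a toy sketch of the two intrinsic-motivation mechanisms (potential-based reward shaping and a count-based novelty bonus). It is a generic illustration with made-up names, not anything taken from the Minimo paper:

```python
from collections import defaultdict

class IntrinsicRewardWrapper:
    """Adds two intrinsic bonuses on top of the environment's extrinsic reward:
    (1) potential-based shaping: gamma * phi(s') - phi(s), which pays the agent
        for moving to states that its value estimate phi considers better;
    (2) count-based novelty: a bonus that shrinks as a state is revisited.
    """
    def __init__(self, phi, gamma=0.99, novelty_scale=1.0):
        self.phi = phi                  # hypothetical state-value ("potential") function
        self.gamma = gamma
        self.novelty_scale = novelty_scale
        self.visits = defaultdict(int)  # visit counts per state

    def reward(self, s, s_next, extrinsic):
        shaping = self.gamma * self.phi(s_next) - self.phi(s)
        self.visits[s_next] += 1
        novelty = self.novelty_scale / self.visits[s_next] ** 0.5
        return extrinsic + shaping + novelty

# Usage on a made-up chain environment where phi(s) = s:
wrapper = IntrinsicRewardWrapper(phi=lambda s: float(s))
print(wrapper.reward(s=0, s_next=1, extrinsic=0.0))  # positive, despite zero extrinsic reward
```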
Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2024-12-13T18:43:37.940Z · LW(p) · GW(p)

I’ve long had a tentative rule-of-thumb that:

  • medial hypothalamus neuron groups are mostly “tracking a state variable”;
  • lateral hypothalamus neuron groups are mostly “turning on a behavior” (especially a “consummatory [LW · GW] behavior”).

(…apart from the mammillary areas way at the posterior end of the hypothalamus. They’re their own thing.)

State variables are things like hunger, temperature, immune system status, fertility, horniness, etc.

I don’t have a great proof of that, just some indirect suggestive evidence. (Orexin, contiguity between lateral hypothalamus and PAG, various specific examples of people studying particular hypothalamus neurons.) Anyway, it’s hard to prove directly because changing a state variable can lead to taking immediate actions. And it’s really just a rule of thumb; I’m sure there’s exceptions, and it’s not really a bright-line distinction anyway.

The literature on the lateral hypothalamus is pretty bad. The main problem IIUC is that LH is “reticular”, i.e. when you look at it under the microscope you just see a giant mess of undifferentiated cells. That appearance is probably deceptive—appropriate stains can reveal nice little nuclei hiding inside the otherwise-undifferentiated mess. But I think only one or a few such hidden nuclei are known (the example I’m familiar with is “parvafox”).

Replies from: sarahconstantin
comment by sarahconstantin · 2024-12-13T20:39:37.253Z · LW(p) · GW(p)

plausible...but surely walking isn't "consummatory"? And turning on the DBS doesn't cause "automatic/involuntary" walking movements.

Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2024-12-13T21:16:51.693Z · LW(p) · GW(p)

Yeah, the word “consummatory” isn’t great in general (see here [LW · GW]), maybe I shouldn’t have used it. But I do think walking is an “innate behavior”, just as sneezing and laughing and flinching and swallowing are. E.g. decorticate rats can walk. As for human babies, they’re decorticate-ish in effect for the first months but still have a “walking / stepping reflex” from day 1 I think.

There can be an innate behavior, but also voluntary cortex control over when and whether it starts—those aren’t contradictory, IMO. This is always true to some extent—e.g. I can voluntarily suppress a sneeze. Intuitively, yeah, I do feel like I have more voluntary control over walking than I do over sneezing or vomiting. (Swallowing is maybe the same category as walking?) I still want to say that all these “innate behaviors” (including walking) are orchestrated by the hypothalamus and brainstem, but that there’s also voluntary control coming via cortex→hypothalamus and/or cortex→brainstem motor-type output channels.

I’m just chatting about my general beliefs.  :)  I don’t know much about walking in particular, and I haven’t read that particular paper (paywall & I don’t have easy access).

comment by sarahconstantin · 2024-12-03T20:23:53.541Z · LW(p) · GW(p)

"three cultures of self-criticism" https://roamresearch.com/#/app/srcpublic/page/zzRZnCLd_

  • non-self-critical culture (Barbarians):
    • baseline assumptions:
      • people generally think they are okay and good, and they are generally right.
      • self-criticism is rare.
      • if someone is being self-critical, guilty, ashamed, etc, that indicates an unusual problem.
    • implications:
      • intense self-criticism will be taken as evidence of something actually wrong with the person -- either they really did screw up quite badly, or they have poor judgment.
      • criticism is direct and overt.
        • if someone objects to what you've done, they'll tell you straight out, and expect that this will clear the air and lead to a resolution of the problem.
        • "negative" judgments are not necessarily intended, or expected, to be painful; the listener may very well disagree with the judgment or find it helpful feedback.
        • as a corollary, nobody assumes that an ambiguous comment or facial expression is a hint at criticism or disapproval. The default assumption is that people are fine with you, that you're fine, and if there's a problem it'll become obvious.
  • pro-self-criticism culture (Puritans):
    • baseline assumptions:
      • people are generally deeply flawed; we are constantly screwing up, sinning, etc. this is the universal or near-universal human condition, not something limited to unusually bad people. but it really is genuinely Bad and Not Okay.
      • people tend to be complacent -- by default we engage in far too little self-criticism. We are screwing up without knowing it. We let ourselves off the hook, make excuses for ourselves, ignore warning signs. It takes active, continual effort to be vigilant against our own flaws.
    • implications:
      • intense self-criticism and guilt is normative. virtuous people will not think well of themselves. in fact, if someone does think well of themselves, that means they're lazy and have low standards.
        • corollary: an intensely self-critical or guilty person is not assumed to be an unusually bad person or to have a mental health problem; they are just doing what we're all supposed to do!
      • criticism can be harsh and intentionally painful, because the assumption is that it needs to be "strong enough" to overcome natural human complacency
      • it's also common to read criticism into subtle or ambiguous signs. the assumption is that there are always more problems than the obvious ones; it's never safe to presume things are fine.
  • counter-self-criticism culture (Therapy Patients):
    • baseline assumptions:
      • people generally are too self-critical. most people are basically fine but torture themselves over minutiae.
      • complacency -- failing to self-criticize enough about genuine faults -- is literally monstrous. complacent people are rare, and pathological; we might call them sociopaths. you absolutely would not want to be one, and you're almost certainly not.
      • "healing" or "growth" means learning to quiet the overactive inner critic. this is very difficult; people need help with it.
      • everybody always needs validation and reassurance that they're ok, and the kindest thing you can do for anyone is give them permission not to worry or self-criticize. the cruelest thing you can do is trigger their insecurities and intensify their (already painful) self-criticism.
    • implications:
      • self-criticism is not normative; it's an affliction we all suffer from and long to be freed from.
        • like sin in pro-self-criticism cultures, misery in counter-self-criticism culture is seen as Genuinely Terrible, Deeply Not Okay, but also a part of the human condition, not a sign that something has gone unusually wrong with you. you're mentally ill, like everyone else.
      • criticism is mild and gentle, or suppressed altogether, because it's assumed everybody is already torturing themselves and doesn't need other people piling on.
        • corollary: it's common to read a lot of criticism or disapproval into subtle or ambiguous signals because it's assumed that people are holding back their true negative opinions. The absence of reassurance or validation is considered a sign of severe, harsh disapproval.
  • relationships:
    • Barbarians see Puritans as totally excessive, and see Therapy Patients as trying to counteract a problem that one can just...not have.
    • Puritans see both Barbarians and Therapy Patients as dangerously complacent.
    • Therapy Patients see Puritans as a familiar enemy -- something they understand but reject and want to get away from, like an unhappy childhood home -- and see Barbarians as incomprehensible, alien, insane, not-even-human. 
comment by sarahconstantin · 2024-12-03T17:34:32.399Z · LW(p) · GW(p)

links 12/03/2024: https://roamresearch.com/#/app/srcpublic/page/12-03-2024

  • https://sashachapin.substack.com/p/my-mind-transformed-completely-and Sasha Chapin on how meditation changed him
    • it doesn't seem clear to me whether this is better or not!
    • reduced anxiety seems great, but reduced sense of narrative drama is a big cost. part of what makes life seem meaningful to me is the sense of being part of a story, and if anything I feel like my current arc involves gaining abilities to envision myself as inside a narrative.
  • https://www.wired.com/story/murderbot-she-wrote-martha-wells/ Martha Wells seems like a lovely person
  • https://www.orcasciences.com/articles  recommended by Ben Reinhardt, great example of rigorous analyses of potential future technologies.
    • https://www.orcasciences.com/articles/checking-my-prejudices-on-materials-decarbonization eg: where does it make economic sense to use electrochemical or biological manufacturing? (compared to "thermochemical", fossil-fuel-powered). For biomanufacturing, only for complex molecules like proteins; for electrochemical processing, mostly metals and things with big voltage potentials in the chemical reaction (zinc, cobalt, copper, lithium, etc) but not simple organic molecules (methane, ethanol, etc)
  • https://www.biotech.senate.gov/press-releases/interim-report/ "US National Security Commission on Emerging Biotechnology", a congressional advisory committee led by Jason Kelly of Gingko Bioworks
    • their purpose seems to be getting biotech-friendly policies through congress, with the rationale that this is good for national security/defense.
    • a lot of naive boosterism about biomanufacturing without engaging with the question of "is this better than alternative manufacturing techniques?"
  • https://www.aria.org.uk/request-for-opps/ new opportunities for program managers at ARIA: lead a scientific research program!
comment by sarahconstantin · 2024-12-04T18:27:50.150Z · LW(p) · GW(p)

links 12/4/2024: https://roamresearch.com/#/app/srcpublic/page/12-04-2024

Replies from: Viliam
comment by Viliam · 2024-12-05T15:22:05.515Z · LW(p) · GW(p)

Gena Gorlin hosts a discussion on "psychological safety"

Good point in the comments: different people consider different (sometimes opposite) things necessary for psychological safety. For some, it means they can speak candidly about whatever they think and feel. For others, it means that some things cannot be said in their presence.

I think you can have both, as long as the relationship is one-sided, e.g. in therapy, where the client can say anything and the therapist is careful about their feedback.

But this wouldn't work at a workplace or any other larger group... unless you split people into "those who are safe" and "those who have a duty to make them feel safe", and even then, maybe someone in the former group could make someone else from the same group feel unsafe.

You make a good point that it is not enough for your boss to tell you "you can speak freely"; you must also believe that it is true. (I also have a negative experience here: I was told to speak freely; I did; it had consequences.) The offer would probably sound more credible if other colleagues were already speaking freely. Also, if you generally don't feel like your job is at risk somehow. For example, if your performance is below the median (and by definition, half of the team is), you might believe that neither your performance nor your candor alone would get you fired, but their combination would.

comment by sarahconstantin · 2024-10-14T23:48:24.518Z · LW(p) · GW(p)

links, 10/14/2024

  • https://milton.host.dartmouth.edu/reading_room/pl/book_1/text.shtml [[John Milton]]'s Paradise Lost, annotated online [[poetry]]
  • https://darioamodei.com/machines-of-loving-grace [[AI]] [[biotech]] [[Dario Amodei]] spends about half of this document talking about AI for bio, and I think it's the most credible "bull case" yet written for AI being radically transformative in the biomedical sphere.
    • one caveat is that I think if we're imagining a future with brain mapping, regeneration of macroscopic brain tissue loss, and understanding what brains are doing well enough to know why neurological abnormalities at the cell level produce the psychiatric or cognitive symptoms they do...then we probably can do brain uploading! it's really weird to single out this one piece as pie-in-the-sky science fiction when you're already imagining a lot of similarly ambitious things as achievable.
  • https://venture.angellist.com/eli-dourado/syndicate [[tech industry]] when [[Eli Dourado]] picks startups, they're at least not boring! I haven't vetted the technical viability of any of these, but he claims to do a lot of that sort of numbers-in-spreadsheets work.
  • https://forum.effectivealtruism.org/topics/shapley-values [? · GW] [[EA]] [[economics]] how do you assign credit (in a principled fashion) to an outcome that multiple people contributed to? Shapley values! They seem extremely hard to calculate in practice, and subject to contentious judgment calls about the assumptions you make, but maybe they're an improvement over raw handwaving. (a toy brute-force computation appears at the end of this links list.)
  • https://gwern.net/maze [[Gwern Branwen]] digs up the "Mr. Young" studying maze-running techniques in [[Richard Feynman]]'s "Cargo Cult Science" speech. His name wasn't Young but Quin Fischer Curtis, and he was part of a psychology research program at UMich that published little and had little influence on the outside world, and so was "rebooted" and forgotten. Impressive detective work, though not a story with a very satisfying "moral".
  • https://en.m.wikipedia.org/wiki/Cary_Elwes [[celebrities]] [[Cary Elwes]] had an ancestor who was [[Charles Dickens]]' inspiration for Ebenezer Scrooge!
  • https://feministkilljoys.com/2015/06/25/against-students/ [[politics]] an old essay by [[Sara Ahmed]] in defense of trigger warnings in the classroom and in general against the accusations that "students these days" are oversensitive and illiberal.
    • She's doing an interesting thing here that I haven't wrapped my head around. She's not making the positive case "students today are NOT oversensitive or illiberal" or "trigger warnings are beneficial," even though she seems to believe both those things. she's more calling into question "why has this complaint become a common talking point? what unstated assumptions does it perpetuate?" I am not sure whether this is a valid approach that's an alternative to the forms of argument I'm more used to, or a sign of weakness (a thing she's doing only because she cannot make the positive case for the opposite of what her opponents claim).
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10080017/ [[cancer]][[medicine]] [[biology]] cancer preventatives are an emerging field
    • NSAIDs and omega-3 fatty acids prevent 95% of tumors in a tumor-prone mouse strain?!
    • also we're targeting [[STAT3]] now?! that's a thing we're doing.
      • ([[STAT3]] is a major oncogene, but it's a transcription factor that lives in the cytoplasm and the nucleus, so it's not as easy to target with small molecules as a cell-surface protein.)
  • https://en.m.wikipedia.org/wiki/CLARITY [[biotech]] make a tissue sample transparent so you can do 3D microscopic imaging, with contrast from immunostaining or DNA/RNA labels
  • https://distill.pub/2020/circuits/frequency-edges/ [[AI]] [[neuroscience]] a type of neuron in vision neural nets, the "high-low frequency detector", has recently also been found to be a thing in literal mouse brain neurons (h/t [[Dario Amodei]]) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10055119/
  • https://mosaicmagazine.com/essay/israel-zionism/2024/10/the-failed-concepts-that-brought-israel-to-october-7/ [[politics]][[Israel]][[war]] an informative and sober view on "what went wrong" leading up to Oct 7
    • tl;dr: Hamas consistently wants to destroy Israel and commit violence against Israelis, they say so repeatedly, and there was never going to be a long-term possibility of living peacefully side-by-side with them; Netanyahu is a tough talker but kind of a procrastinator who's kicked the can down the road on national security issues for his entire career; catering to settlers is not in the best interests of Israel as a whole (they provoke violence) but they are an unduly powerful voting bloc; Palestinian misery is real but has been institutionalized by the structure of the Gazan state and the UN which prevents any investment into a real local economy; the "peace process" is doomed because Israel keeps offering peace and the Palestinians say no to any peace that isn't the abolition of the State of Israel.
    • it's pretty common for reasonable casual observers (eg in America) to see Israel/Palestine as a tragic conflict in which probably both parties are somewhat in the wrong, because that's a reasonable prior on all conflicts. The more you dig into the details, though, the more you realize that "let's live together in peace and make concessions to Palestinians as necessary" has been the mainstream Israeli position since before 1948. It's not a symmetric situation.
  • [[von Economo neurons]] are spooky [[neuroscience]] https://en.wikipedia.org/wiki/Von_Economo_neuron
    • only found in great apes, cetaceans, and humans
    • concentrated in the [[anterior cingulate cortex]] and [[insular cortex]] which are closely related to the "sense of self" (i.e. interoception, emotional salience, and the perception that your e.g. hand is "yours" and it was "you" who moved it)
    • the first to go in [[frontotemporal dementia]]
    • https://www.nature.com/articles/s41467-020-14952-3 we don't know where they project to! they are so big that we haven't tracked them fully!
    • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3953677/
  • https://www.wired.com/story/lee-holloway-devastating-decline-brilliant-young-coder/ the founder of Cloudflare had [[frontotemporal dementia]] [[neurology]]
  • [[frontotemporal dementia]] is maybe caused by misfolded proteins being passed around neuron-to-neuron, like prion disease! [[neurology]]
Replies from: Viliam, Raemon, MichaelDickens
comment by Viliam · 2024-10-15T13:00:11.651Z · LW(p) · GW(p)

she's more calling into question "why has this complaint become a common talking point? what unstated assumptions does it perpetuate?" I am not sure whether this is a valid approach that's alternate to the forms of argument I'm more used to, or a sign of weakness

It is good to have one more perspective, and perhaps also good to develop a habit to go meta. So that when someone tells you "X", in addition to asking yourself "is X actually true?" you also consider questions like "why is this person telling me X?", "what could they gain in this situation by making me think more about X?", "are they perhaps trying to distract me from some other Y?".

Because there are such things as filtered evidence, availability bias, limited cognition; and they all can be weaponized. While you are trying really hard to solve the puzzle the person gave you, they may be using your inattention to pick your pockets.

In extreme cases, it can even be a good thing to dismiss the original question entirely. Like, if you are trying to leave an abusive religious cult, and the leader gives you a list of "ten thousand extremely serious theological questions you need to think about deeply before you make the potentially horrible mistake of damning your soul by leaving this holy group", you should not actually waste your time thinking about them, but keep planning your escape.

Now the opposite problem is that some people get so addicted to the meta that they are no longer considering the object level. "You say I'm wrong about something? Well, that's exactly what the privileged X people love to do, don't they?" (Yeah, they probably do. But there is still a chance that you are actually wrong about something.)

tl;dr -- mentioning the meta, great; but completely avoiding the object level, weakness

So, how much meta is the right amount of meta? Dunno, that's a meta-meta question. At some point you need to follow your intuition and hope that your priors aren't horribly wrong.

The more you dig into the details, though, the more you realize that "let's live together in peace and make concessions to Palestinians as necessary" has been the mainstream Israeli position since before 1948. It's not a symmetric situation.

The situation is not symmetric, I agree. But also, it is too easy to underestimate the impact of the settlers. I mean, if you include them in the picture, then the overall Israeli position becomes more like: "Let's live together in peace, and please ignore these few guys who sometimes come to shoot your family and take your homes. They are an extremist minority that we don't approve of, but for complicated political reasons we can't do anything about them. Oh, and if you try to defend yourself against them, chances are our army might come to defend them. And that's also something we deeply regret."

It is much better than the other side, but in my opinion still fundamentally incompatible with peace.

comment by Raemon · 2024-10-15T00:40:32.993Z · LW(p) · GW(p)

kinda meta, but I find myself wondering if we should handle Roam [[ tag ]] syntax in some nicer way. Probably not but it seems nice if it managed to have no downsides.

Replies from: gwern, sarahconstantin
comment by gwern · 2024-10-15T01:59:09.956Z · LW(p) · GW(p)

It wouldn't collide with normal Markdown syntax use. (I can't think of any natural examples, aside from bracket use inside links, like [[editorial comment]](URL), which could be special-cased by looking for the parentheses required for the URL part of a Markdown link.) But it would be ambiguous where the wiki links point to (Sarah's Roam wiki? English Wikipedia?), and if it pointed to somewhere other than LW2 wiki entries, then it would also be ambiguous with that too (because the syntax is copied from Mediawiki and so the same as the old LW wiki's links).

And it seems like an overloading special case you would regret in the long run, compared to something which rewrote them into regular links. Adds in a lot of complexity for a handful of uses.

comment by sarahconstantin · 2024-10-15T02:15:36.212Z · LW(p) · GW(p)

I thought about manually deleting them all but I don't feel like it.

Replies from: MichaelDickens
comment by MichaelDickens · 2024-10-15T04:18:13.431Z · LW(p) · GW(p)

I don't know how familiar you are with regular expressions, but you could do this with a two-pass regular expression search and replace: (I used Emacs regex format; your preferred editor might use a different format. Notably, in Emacs [ is a literal bracket but ( is a literal parenthesis, for some reason.)

  1. replace "^(https://.*? )([[.*?]] )*" with "\1"
  2. replace "[[(.*?)]]" with "\1"

This first deletes any tags that occur right after a hyperlink at the beginning of a line, then removes the brackets from any remaining tags.
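If you'd rather script the cleanup than run the replaces in an editor, here is a minimal Python sketch of the same two passes (Python's re needs the brackets escaped, unlike the Emacs patterns above; the sample line is made up):

```python
import re

def strip_roam_tags(text: str) -> str:
    # Pass 1 (mirrors the first replace): drop [[tag]] groups that
    # immediately follow a URL at the start of a line.
    text = re.sub(r"^(https://.*? )(\[\[.*?\]\] )*", r"\1", text, flags=re.MULTILINE)
    # Pass 2 (mirrors the second replace): unwrap any remaining [[tag]].
    return re.sub(r"\[\[(.*?)\]\]", r"\1", text)

print(strip_roam_tags("https://example.com/x [[AI]] [[biology]] notes about [[Richard Feynman]]"))
# -> https://example.com/x notes about Richard Feynman
```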

comment by MichaelDickens · 2024-10-15T04:08:01.404Z · LW(p) · GW(p)

RE Shapley values, I was persuaded by this comment [EA(p) · GW(p)] that they're less useful than counterfactual value in at least some practical situations.

comment by sarahconstantin · 2024-12-18T17:01:05.585Z · LW(p) · GW(p)

links 12/18/2024: https://roamresearch.com/#/app/srcpublic/page/12-18-2024

 

  • https://hearth.ai/thesis  keeping track of people you know. as an inveterate birthday-forgetter and someone too prone to falling out of touch with friends, I bet there are ways for AI tools to do helpful things here.
  • https://www.statista.com/chart/33684/number-of-confirmed-human-h5n1-cases-by-exposure-source H5N1 cases by state. mostly California, mostly livestock handlers. 61 cases so far.
  • https://www.theintrinsicperspective.com/p/consciousness-is-a-great-mystery Erik Hoel says that "consciousness researchers" straightforwardly agree on what consciousness is.
    • Consciousness is:
      • the subjective experience of perceiving; Thomas Nagel's "what it is like to be a bat"; qualia
      • awake states (as opposed to dreamless sleep, anaesthesia, coma, etc)
      • things we are mentally aware of (perceptions, thoughts, emotions, etc) as opposed to things we are not aware of (most autonomic processes, blindsight, "subconscious" motives)
    • the fact that we do not have a scientific account of what consciousness is made of doesn't mean consciousness doesn't exist or is inherently mystical or incoherent. Isaac Newton had never heard of "H2O" but he knew what water is. The point of science is to give explanations for the things we know about experientially but don't fully understand.
    • A "theory of consciousness" would allow us to, given some monitoring data of brain activity in an organism, determine whether the organism is conscious or not, and what it is conscious of.
      • is the anaesthesia patient conscious?
      • is the locked-in patient conscious?
      • which animals have consciousness?
    • I've long had a vague sense of suspicion around consciousness research and the idea of qualia, but I've never really been able to put my finger on why.
      • When defined crisply like this, it does seem clear that consciousness is a real, mundane thing (if a nurse says "the patient is unconscious" there's no confusion about what that means).
      • But why is consciousness mysterious? why is it a "hard problem"?
        • David Chalmers' "hard problem of consciousness" refers to the difficulty of explaining how physical processes give rise to subjective experiences. Even if you explained a lot of brain mechanisms that have to go on for us to consciously experience something, would that really cross the explanatory gap?
          • I think this is what has turned me off "consciousness", because I don't get why there's supposed to be a gap.
            • If we had some full explanation based on patterns of brain activity, like "you consciously perceive a bright light precisely when the foo blergs the bar", then...I think there wouldn't be any mystery left!
            • I agree that e.g. "you see a bright light when the visual cortex is stimulated" is not enough, because you don't see it if you're unconscious, and we don't have a necessary-and-sufficient physical correlate of consciousness. but, like, Erik Hoel and apparently a lot of mainstream neuroscientists are saying that we could find such a thing.
          • I guess you could keep asking "ok, the foo blerging the bar produces the phenomenon we experience as consciousness, but why does it?" and it would be hard to come up with any experimental way to even approach that question...
            • but that's an "explanatory gap" that comes up everywhere and we're usually happy to live with.
            • it also depends what kind of "why" you want.
              • if you're asking "why does it produce consciousness" as in "what's the efficient cause?" or "how does it work to produce consciousness?" then I think all how-does-it-work questions are going to have to be about physical (or algorithmic) processes. and if you say "well but my subjective experience is not even really commensurate with these kinds of objectively observable processes, it's a different sort of thing, how can it ever emerge from them" then...you are SOL? "how" questions will never satisfy you?
              • if you're asking "why does it produce consciousness" in a final-cause sense, like what is the use of consciousness, then I think we can have fruitful ideas. "why don't organisms operate on pure blindsight" is an interesting question! (pace Peter Watts, i think it must have some evolutionary function or we wouldn't have it.)
          • I think p-zombies are stupid: obviously, just because you can verbally say you're "imagining" something exactly the same down to every physical detail, but magically different in its properties, doesn't mean it's possible!
      • ok, so: my beef with "consciousness studies" is primarily with the non-physicalists who say that even if we had a perfect neural correlate of consciousness, we still wouldn't understand consciousness as a subjective experience. but what I didn't realize, is that there are neuroscientists interested in consciousness who just want to find that neural correlate, and don't necessarily have any weird philosophical assumptions.
  • https://www.science.org/doi/10.1126/science.abj3259
    • The global neuronal workspace theory of consciousness says that consciousness is produced by an "interconnected network of prefrontal-parietal areas and many high-level sensory cortical areas."
      • early sensory processing is unconscious.
      • stimuli are sometimes attended to (made conscious), a process which involves sending (pre-processed) signals about the stimuli through the prefrontal and parietal areas which control executive function, and distributing them to a bunch of other areas of the brain as part of the current working context.
    • IIT is an information-theoretic theory of consciousness; it says that consciousness is measured by the power of a neuronal network to influence itself. "The more cause-effect power a system has, the more conscious it is."
Replies from: Viliam
comment by Viliam · 2024-12-19T11:05:31.483Z · LW(p) · GW(p)

keeping track of people you know. as an inveterate birthday-forgetter and someone too prone to falling out of touch with friends, I bet there are ways for AI tools to do helpful things here.

Facebook already reminded me when my friends had birthdays, but recently I noticed that it also offers to write a congratulation comment for me; I just need to make a single click to send it. Now, Facebook has an obvious incentive to keep me returning to their page every day, so they are not going to fully automate this.

The next necessary functionality would be to write automated replies. I think that could be achieved by LLMs, I just need some service to do it automatically. That way I could have a rich social life, without the need to interact with humans.

Replies from: sarahconstantin
comment by sarahconstantin · 2024-12-19T13:03:09.569Z · LW(p) · GW(p)

I don't want automatic messages; that seems too inhuman. I do want things like reminders to follow up with people I haven't talked to for a while, with context awareness for social appropriateness. like, i wouldn't know how to reach out to my roommate/best friend from college; we haven't talked in 16 years! but maybe the right app could keep that from happening in the first place, or create a new normalized type of social behavior that's "reaching out after a long time apart" or whatever.

Replies from: Viliam
comment by Viliam · 2024-12-19T14:36:38.343Z · LW(p) · GW(p)

The description on the page you linked -- "augments the brain's ability to reason on a) who am I, b) who are you, and c) who are you to me, now and over time" -- leaves a lot to the imagination. Sounds like a chatbot that will talk to you about your contacts?

i wouldn't know how to reach out to my roommate/best friend from college; we haven't talked in 16 years!

Maybe try finding out their birthday (on social networks, by online research, or maybe ask a mutual friend), and then set up a reminder. "Happy birthday, we haven't seen each other for a while, how are you?" Sounds to me like a socially appropriate thing (but I am not an expert).

Also, spend 5 minutes by the clock writing a list of people you would like to stay in contact with.

Now, I guess the question is how to set up a system that will let you store the data and provide the reminders. The easiest version would be a spreadsheet where you enter the names and birthdays, and some system that will read it and prepare notifications for you. A more complicated version would allow you to write more data about the person (how do we know each other, what kinds of activities did we do together, when was the last time we talked), and group the people by categories. You could make an AI go through your e-mail archive and compile an initial report on the person.

I would probably feel very uncomfortable doing this online, because it would feel like I am making reports on people, and the owner of the software will most likely sell the data to any third party. I would want this as a desktop application, maybe connected to a small phone app, to set up the reminders. But many people seem to prefer online solutions as more convenient, privacy be damned.

(The phone reminders could be like: "Today, XY has a birthday; you have their phone number, e-mail, and Less Wrong account. Your relationship status is: you have met a few times at a LW meetup. Topics you usually discuss: AI, kitten videos.")
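A minimal sketch of the "spreadsheet plus reminders" version described above, assuming a made-up contacts.csv with name, birthday (MM-DD), and notes columns (the file name and columns are my own invention, not any particular product):

```python
import csv
from datetime import date

def birthday_reminders(path: str = "contacts.csv") -> list[str]:
    """Return a reminder string for everyone whose birthday is today."""
    today = date.today().strftime("%m-%d")
    reminders = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["birthday"] == today:
                reminders.append(f"Today, {row['name']} has a birthday. Notes: {row['notes']}")
    return reminders

if __name__ == "__main__":
    for reminder in birthday_reminders():
        print(reminder)
```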

comment by sarahconstantin · 2024-12-06T22:40:58.893Z · LW(p) · GW(p)
  • https://www.natesilver.net/p/part-ii-the-failed-rebrand-of-kamala Nate Silver on the failures of the Harris campaign
    • tl;dr: he thinks they defaulted to a weak message of "generic Democrat" because they lacked the conviction to push any other distinctive brand (and in some cases the situation made alternatives infeasible).
  • https://www.biorxiv.org/content/10.1101/2024.08.29.610411v1  you can generate novel proteins with RFDiffusion and a new model called ChemNet by selecting for properties of a reaction site that indicate a better catalyst of a particular chemical reaction.
    • We're getting closer to designing new proteins to solve particular (chemical reaction) problems.
  • https://worksinprogress.co/issue/the-world-of-tomorrow/ excellent Virginia Postrel article on progress aesthetics and why we have to go beyond nostalgia for the retro-future.
  • https://minjunes.ai/posts/sleep/index.html how could we mimic the effects of the "short sleeper gene" so that everyone could get by on less sleep?
  • https://www.complexsystemspodcast.com/episodes/defrauding-government-jetson-leder-luis/ Patrick McKenzie and Jetson Leder-Luis on defrauding the government.
    • the optimal amount of fraud is not zero; anti-fraud enforcement trades off against ease of use and we (as a nation) generally don't want to make it super hard to get government benefits
    • nonetheless benefits fraud does indeed happen. kind of a lot. "let's bill Medicare for stuff we don't do" or "let's take unemployment insurance for fake SSNs" or "let's take PPP funds for anything and everything, they literally said that we wouldn't have to pay back the 'loan'"
    • the US government is much more upset about any amount of money going to terrorists or foreign enemies than it is about larger amounts of money going to ordinary crooks or just people who are ineligible for the benefits in question. we almost have two processes for these types of "fraud"?
    • Jetson thinks government fraud-detection agencies are underfunded.
  • https://www.complexsystemspodcast.com/episodes/fraud-choice-patrick-mckenzie/ Patrick McKenzie on fraud
    • most fraud prevention is managed by the financial sector, which is generally a good thing (far less expensive than court cases)
      • though it does often lead to the industry not really caring whether you are a fraudster or a fraud victim. either way you're a risk, which the bank doesn't like.
    • "one reason to buy services from the financial industry and not from the government is that the financial industry finds the statement “stealing from businesses is wrong” to be straightforwardly uncontroversial. A business owner would need to put some thought into whether they trust your local police department or district attorney to have the same belief. I apologize to non-American readers of this piece who believe I am spouting insanity. It has been an interesting few years in the United States."
      • I am an American and this sounds kind of Big If True to me too.
    • the reason firms put up annoying hurdles for their customers is often to screen for fraudsters. I already knew this, but somehow I did not realize that when they ask you for a phone call, they are not doing this because they hate you for being shy/neurodivergent; that, too, is a way to screen out scammers using fake identities.
  • https://chrislakin.blog/p/bounty-your-bottleneck Chris Lakin claims he can completely solve (psychological) insecurity through coaching. He's very young and new at this, but the pay-for-results model is unusually client-friendly.
  • https://screwworm.org/ these people want to use gene drives to eradicate screwworm, a parasite that infects animals in South America.
  • https://christopherrufo.com/p/counterrevolution-blueprint Chris Rufo is a troll on Twitter, but this is a pretty sober/earnest proposal for how all affirmative action, racial quotas, etc can be eliminated from the Federal Government. I am not qualified to opine on whether this is feasible or whether it will have harmful unintended consequences.
  • https://en.wikipedia.org/wiki/Adragon_De_Mello example of a "child prodigy" who was pushed into it by his emotionally abusive father and didn't like it at all
  • https://parthchopra.substack.com/p/what-i-learned-working-at-a-high  somewhere hidden behind the business-speak of this article, there is clearly an actual story about some Shit That Went Wrong. but unfortunately he is likely not free to disclose it and I am not familiar enough with this company to know what it was.
  • https://www.medrxiv.org/content/10.1101/2024.05.16.24307494v1.full.pdf this is the OpenWater tFUS study on depression. Not sham-controlled, things like this fail to replicate all the time, but they do register an effect.
  • https://www.darpa.mil/work-with-us/heilmeier-catechism good advice for how to write proposals

     

Replies from: Viliam
comment by Viliam · 2024-12-07T14:30:36.649Z · LW(p) · GW(p)

they lacked the conviction to push any other distinctive brand (and in some cases the situation made alternatives infeasible).

I guess it is difficult to promote the brand of Tough No-Nonsense Prosecutor in the age of Defund The Police.

Which is really unfortunate, because it seems like "defund the police" was actually what woke white people wanted. Black people were probably horrified by the idea of giving up and letting crime grow exponentially in the places where they live. Unfortunately, the woke do not care about the actual opinions of the people they speak for.

why we have to go beyond nostalgia for the retro-future

A part of this is the natural "hype - disappointment" cycle. The 21st century is better, but we were promised that it would be 100x better, and it is only maybe 10x better, so now we feel that it sucks. What we would need, psychologically, is probably some disaster that would first threaten to destroy us, but then we would overcome it, and then feel happy that now the future is better than we expected.

But we had covid, which kinda fits this pattern, except the popular reaction was the opposite: instead of "thanks to the amazing science and technology of the 21st century, we have eradicated a pandemic in a year", the popular wisdom of the cool people became "it was never dangerous in the first place, the evil Americans just tried to scare us". Instead of admiring the mRNA vaccines, people seem outraged that we didn't just let more people die naturally.

Another thing is that people are bad at noticing gradual change. If you could teleport 10 or 20 years in the future, you would be shocked. But if you advance to the future one day at a time, it mostly feels like nothing happens. (Even the proverbial flying cars would be a huge disappointment if we at first got cars that can only fly 1 cm above the surface, and then every year they could get 1 cm higher.)

Jetson thinks government fraud-detection agencies are underfunded.

Maybe the people who profit from the fraud want it that way, and lobby against the funding?

A business owner would need to put some thought into whether they trust your local police department or district attorney to have the same belief. I apologize to non-American readers of this piece who believe I am spouting insanity. It has been an interesting few years in the United States.

Uhm, our experience in Eastern Europe is that the police were never optimizing for us, and quite often were optimizing against us.

comment by sarahconstantin · 2024-12-09T19:56:00.679Z · LW(p) · GW(p)

links 12/9/24

  • https://gasstationmanager.github.io/ai/2024/11/04/a-proposal.html
    • a proposal that tentatively makes a lot of sense to me, for making LLM-generated code more robust and trustworthy.
    • the goal: give a formal specification (in e.g. Lean) of what you want the code to do; let the AI generate both the code and a proof that it meets the specification.
    • as a means to this end, a crowdsourced website called "Code With Proofs: The Arena", like LeetCode, where "players" can compete to submit code + proofs to solve coding challenges. This provides a source of training data for LLMs, producing both correct and incorrect (code, proof) pairs for each problem specification. A model can then be trained "given a problem specification, produce code that provably meets the specification".
      • In real life, the model would probably use the proof assistant's verifier directly at inference time, to ensure it only returned code + proofs that the automatic verifier confirmed were valid. It could use the error messages and intermediate feedback of the verifier to more efficiently search for code + proofs that were likely to be correct. (A rough sketch of this generate-and-verify loop is at the end of this list of links.)
  • https://en.wikipedia.org/wiki/Post-quantum_cryptography  I know nothing about this field but it sure looks like the cryptography people have come a long way towards being ready, if and when quantum computers start being able to break RSA
  • https://en.m.wikipedia.org/wiki/Freik%C3%B6rperkultur  the German tradition of public nudity
  • https://theconversation.com/japanese-scientists-were-pioneers-of-ai-yet-theyre-being-written-out-of-its-history-243762  this piece is gratuitously anti-Big Tech, but does present an interesting part of the history of neural networks.
    • In general I wonder why Americans tend to be blind to Japanese scientific/technological innovation these days! A lot of great stuff was invented in Japan!
  • https://scratch.mit.edu/projects/editor/?tutorial=getStarted  a popular kids' programming language designed for making games and animations.
  • https://bayesshammai.substack.com/p/conditional-on-getting-to-trade-your Ricki Heicklen on adverse selection
  • https://www.complexsystemspodcast.com/episodes/teaching-trading-ricki-heicklen/ Patrick McKenzie and Ricki Heicklen on teaching trading. (It's mostly focused on the kind of quant finance you might see at a firm like Jane Street, not about managing your personal stock portfolio.)
  • https://www.nature.com/articles/s41592-024-02523-z new nucleotide transformer model just dropped. Can be fine-tuned to do things like predict whether a sequence is a promoter, enhancer, or splice site.
  • https://thecausalfallacy.com/p/disorder-at-the-starbucks I'm more civil-libertarian, but Charles Fain Lehman seems to be the thoughtful tough-on-crime advocate to keep an eye on.
    • I tend to think that the public will demand a certain level of safety and pleasantness in their environments no matter what, and it's the civil libertarian's job to find a way to deliver that without infringing anybody's rights and while avoiding undue cruelty/harm to those suspected of crime or viewed as "disorderly." If the public is unsatisfied, they will demand "tough on crime" policies sooner or later; we need to ensure that when they do, we end up with something reasonable and effective rather than overkill.
    • In that context, Lehman does seem concerned with using the least-harsh solutions where available. He recognizes that usually, if you want to deter a fairly mild public nuisance, you don't need to arrest or jail anybody, you just have cops and ordinary citizens tell troublemakers to knock it off, with escalating to tougher enforcement being an option that's usually not needed. We're on the same page that (valid) rules should be enforced, and that enforcement ultimately has to be backed by physical force, but ideally we wouldn't resort to force often. That's a reasonable basis for beginning to negotiate on policy.
    • OTOH his picture of reducing crime is entirely about calling for more enforcement, rather than addressing other points of failure like the lack of accountability (eg qualified immunity) for police generally. Lack of funding and tight restrictions on enforcement activities are not the only reason police might fail to enforce laws and catch criminals; sometimes they are gang-affiliated themselves, or are not bothering to do their jobs, in the fashion typical of any employee with infinite job security. When a police department is seriously dysfunctional, you're not going to get better public safety by giving it more funding and more freedom to operate.
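Back to the code-with-proofs item above: a minimal sketch of the inference-time generate-and-verify loop, under my own assumptions about the interface. generate_candidate stands in for an LLM call and run_verifier for a proof-assistant check (e.g. invoking Lean on the generated file); neither name comes from the proposal itself.

```python
from typing import Callable, Optional, Tuple

def prove_and_code(
    spec: str,
    generate_candidate: Callable[[str, str], Tuple[str, str]],  # (spec, feedback) -> (code, proof)
    run_verifier: Callable[[str, str, str], Tuple[bool, str]],  # (spec, code, proof) -> (ok, error_log)
    max_attempts: int = 10,
) -> Optional[Tuple[str, str]]:
    """Search for a (code, proof) pair that the verifier accepts for `spec`."""
    feedback = ""
    for _ in range(max_attempts):
        code, proof = generate_candidate(spec, feedback)
        ok, error_log = run_verifier(spec, code, proof)
        if ok:
            return code, proof  # only verifier-approved code is ever returned
        feedback = error_log    # feed the verifier's errors back into the next attempt
    return None  # give up rather than return unverified code
```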
Replies from: Viliam
comment by Viliam · 2024-12-09T22:31:21.714Z · LW(p) · GW(p)

Scratch is awesome for kids. My kids love it. My older daughter has afternoon lessons at school, and I help her debug her projects if there is a problem. I am not sure how I would teach her, if I had to start from zero.

I found a few videos on how to make games in Scratch, and I learned a lot about Scratch from them, but sometimes the author's algorithm uses a mathematical expression that seems a bit too complicated for a small child. For example, how to make a moving object stop right before the wall. Like, if it moves 10 pixels each turn, and the wall is 5 pixels ahead, you want it to go 5 pixels at the last step; neither 10 nor 0. The author's solution is to go 10 pixels forward, and then "repeat 10 times: if there is a collision with the wall, go 1 pixel back". (Collisions of pictures are a primitive operation in Scratch.) That sounds trivial, but because the speed could be 10 pixels per turn or -10 pixels per turn, and it's not even guaranteed to be an integer, the algorithm becomes "repeat ceil(abs(V)) times: if there is a collision with the wall, go V/ceil(abs(V)) pixels back", at which point my daughter just says "I don't get it". (This is not a problem with Scratch per se; you could limit the speed to an integer, and maybe avoid the absolute value by using an if-statement and doing the positive and negative values separately; and maybe ceil(abs(V)) could be a local variable. I am just saying that the videos are generally great... but you get one or two moments of this per video.)
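For comparison, the same stop-before-the-wall logic written out in Python rather than Scratch blocks; touching_wall() and move() stand in for Scratch's collision primitive and motion block (both names are made up):

```python
import math

def step_and_back_off(v: float, touching_wall, move) -> None:
    """Move v pixels, then back off until no longer overlapping the wall.

    v may be positive or negative and need not be an integer, which is why
    the back-off step is v / ceil(abs(v)) rather than a fixed 1 pixel.
    """
    move(v)
    n = math.ceil(abs(v))
    for _ in range(n):
        if touching_wall():
            move(-v / n)  # step back a fraction of the original move
```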

In a bookstore I found a translation of Carol Vorderman's Computer Coding For Kids, which seems good (so it's going to be a Christmas present); the first 1/3 of the book is Scratch, the remaining 2/3 are Python.

.

I like the definition of disorder as domination of public space for private purposes. As I see it, the problem with informal systems of preventing disorder is that some people are resistant to shame; specifically:

  • assholes
  • criminals
  • homeless
  • mentally ill
  • drug addicts
  • teenagers, when encouraged by other teenagers (unless you happen to know their parents)

Once your neighborhood becomes a favorite place of these, you either need a strong community (the kind that can summon a group of adult men with baseball bats, who would ask the disorderly people to kindly leave and never set their foot in this neighborhood again), or you have to call the police. Or you give up your public space.

comment by sarahconstantin · 2024-11-25T15:52:55.537Z · LW(p) · GW(p)

links 11/25/2024

Replies from: Viliam
comment by Viliam · 2024-11-26T22:32:14.230Z · LW(p) · GW(p)

a particular kind of cis women who feel entitled to be extremely rude and intrusive because they assume "women = inherently benign".

Seems to me that some women believe that when they do something, it is fundamentally different from when a man does exactly the same thing. (Something like the fundamental attribution error [? · GW], or xkcd#385 but with reversed genders.) For example, if the woman gets angry and yells at someone, it is because that person was really annoying, or she was tired, etc. Simply, she acted that way because of external reasons. But if she sees a man get angry and yell at someone, it's obvious: men are inherently aggressive. (Or maybe, if she is a good feminist, it's because men are privileged.) This way, she can condemn a certain type of behavior and be really emotional about it... and then go and do exactly the same thing -- because in her mind, it is not the same thing at all.

Or to use the example from the article, men are inherently rude and intrusive; she faced an interesting situation and was naturally curious about it. To be curious about an interesting thing is a perfectly normal and healthy human reaction.

EDIT: I find it interesting - and sad - how the author insists, also in other articles, that their unpleasant experiences must be related to being trans, as opposed to simply being things that sometimes happen to men.

For example, "when a cis woman tells a trans person to follow sexist societal rules, she does so to demonstrate her own power". In my experience (as a cis man), when someone reminds me to follow societal rules, it is typically a woman; men usually don't give a fuck about societal rules, they only warn you if you annoy them personally. Just remember elementary school: who was the first to tell the teachers when someone did something improper?

comment by sarahconstantin · 2024-11-08T15:02:35.514Z · LW(p) · GW(p)

links 11/08/2024: https://roamresearch.com/#/app/srcpublic/page/11-08-2024

 

comment by sarahconstantin · 2024-11-06T15:37:29.766Z · LW(p) · GW(p)

links 11/6/2024: https://roamresearch.com/#/app/srcpublic/page/11-06-2024

comment by sarahconstantin · 2024-11-05T17:02:17.187Z · LW(p) · GW(p)

links 11/05/2024: https://roamresearch.com/#/app/srcpublic/page/11-05-2024

comment by sarahconstantin · 2024-10-11T15:18:11.631Z · LW(p) · GW(p)

https://roamresearch.com/#/app/srcpublic/page/10-11-2024

 

  • https://www.mindthefuture.info/p/why-im-not-a-bayesian [[Richard Ngo]] [[philosophy]] I think I agree with this, mostly.
    • I wouldn't say "not a Bayesian" because there's nothing wrong with Bayes' Rule and I don't like the tribal connotations, but lbr, we don't literally use Bayes' rule very often and when we do it often reveals just how much our conclusions depend on problem framing and prior assumptions. A lot of complexity/ambiguity necessarily "lives" in the part of the problem that Bayes' rule doesn't touch. To be fair, I think "just turn the crank on Bayes' rule and it'll solve all problems" is a bit of a strawman -- nobody literally believes that, do they? -- but yeah, sure, happy to admit that most of the "hard part" of figuring things out is not the part where you can mechanically apply probability. (A tiny worked example of the prior-dependence point is at the end of this list.)
  • https://www.lesswrong.com/posts/YZvyQn2dAw4tL2xQY/rationalists-are-missing-a-core-piece-for-agent-like [LW · GW] [[tailcalled]] this one is actually interesting and novel; i'm not sure what to make of it. maybe literal physics, with like "forces", matters and needs to be treated differently than just a particular pattern of information that you could rederive statistically from sensory data? I kind of hate it but unlike tailcalled I don't know much about physics-based computational models...[[philosophy]]
  • https://alignbio.org/ [[biology]] [[automation]] datasets generated by the Emerald Cloud Lab! [[Erika DeBenedectis]] project. Seems cool!
  • https://www.sciencedirect.com/science/article/abs/pii/S0306453015009014?via%3Dihub [[psychology]] the forced swim test is a bad measure of depression.
    • when a mouse trapped in water stops struggling, that is not "despair" or "learned helplessness." these are anthropomorphisms. the mouse is in fact helpless, by design; struggling cannot save it; immobility is adaptive.
      • in fact, mice become immobile faster when they have more experience with the test. they learn that struggling is not useful and they retain that knowledge.
    • also, a mouse in an acute stress situation is not at all like a human's clinical depression, which develops gradually and persists chronically.
    • https://www.sciencedirect.com/science/article/abs/pii/S1359644621003615?via%3Dihub the forced swim test also doesn't predict clinical efficacy of antidepressants well. (admittedly this study was funded by PETA, which thinks the FST is cruel to mice)
  • https://en.wikipedia.org/wiki/Copy_Exactly! [[semiconductors]] the Wiki doesn't mention that Copy Exactly was famously a failure. even when you try to document procedures perfectly and replicate them on the other side of the world, at unprecedented precision, it is really really hard to get the same results.
  • https://neuroscience.stanford.edu/research/funded-research/optimization-african-killifish-platform-rapid-drug-screening-aggregate [[biology]] you know what's cool? building experimentation platforms for novel model organisms. Killifish are the shortest-lived vertebrate -- which is great if you want to study aging. they live in weird oxygen-poor freshwater zones that are hard to replicate in the lab. figuring out how to raise them in captivity and standardize experiments on them is the kind of unsung, underfunded accomplishment we need to celebrate and expand WAY more.
  • https://www.nature.com/articles/513481a [[biology]] [[drug discovery]] ever heard of curcumin doing something for your health? resveratrol? EGCG? those are all natural compounds that light up a drug screen like a Christmas tree because they react with EVERYTHING. they are not going to work on your disease in real life.
  • https://en.wikipedia.org/wiki/Fetal_bovine_serum [[biotech]] this cell culture medium is just...cow juice. it is not consistent batch to batch. this is a big problem.
  • https://www.nature.com/articles/s42255-021-00372-0 [[biology]] mice housed at "room temperature" are too cold for their health; they are more disease-prone, which calls into question a lot of experimental results.
  • https://calteches.library.caltech.edu/51/2/CargoCult.htm [[science]] the famous [[Richard Feynman]] "Cargo cult science" essay is about flawed experimental methods!
    • if your rat can smell the location of the cheese in the maze all along, then your maze isn't testing learning.
    • errybody want to test rats in mazes, ain't nobody want to test this janky-ass maze!
  • https://fastgrants.org/ [[metascience]] [[COVID-19]] this was cool, we should bring it back for other stuff
  • https://erikaaldendeb.substack.com/cp/147525831 [[biotech]] engineering biomanufacturing microbes for surviving on Mars?!
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8278038/ [[prediction markets]] DARPA tried to use prediction markets to predict the success of projects. it didn't work! they couldn't get enough participants.
  • https://www.citationfuture.com/ [[prediction markets]] these guys do prediction markets on science
  • https://jamesclaims.substack.com/p/how-should-we-fund-scientific-error [[metascience]] [[James Heathers]] has a proposal for a science error detection (fraud, bad research, etc) nonprofit. We should fund him to do it!!
  • https://en.wikipedia.org/wiki/Elisabeth_Bik [[metascience]] [[Elisabeth Bik]] is the queen of research fraud detection. pay her plz.
  • https://substack.com/home/post/p-149791027 [[archaeology]] it was once thought that Gobekli Tepe was a "festival city" or religious sanctuary, where people visited but didn't live, because there wasn't a water source. Now, they've found something that looks like water cisterns, and they suspect people did live there.
    • I don't like the framing of "hunter-gatherer" = "nomadic" in this post.
      • We keep pushing the date of agriculture farther back in time. We keep discovering that "hunter-gatherers" picking plants in "wild" forests are actually doing some degree of forest management, planting seeds, or pulling undesirable weeds. Arguably there isn't a hard-and-fast distinction between "gathering" and "gardening". (Grain agriculture where you use a plow and completely clear a field for planting your crop is qualitatively different from the kind of kitchen-garden-like horticulture that can be done with hand tools and without clearing forests. My bet is that all so-called hunter-gatherers did some degree of horticulture until proven otherwise, excepting eg arctic environments)
      • what the water actually suggests is that people lived at Gobekli Tepe for at least part of the year. it doesn't say what they were eating.
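(Going back to the Bayes item at the top of this list, a tiny worked example of the prior-dependence point; the test numbers are invented. For a disease test, $P(D \mid +) = \frac{P(+ \mid D)P(D)}{P(+ \mid D)P(D) + P(+ \mid \neg D)P(\neg D)}$. With sensitivity $P(+ \mid D) = 0.9$ and false-positive rate $P(+ \mid \neg D) = 0.05$, a prior of $P(D) = 0.01$ gives a posterior of $0.009 / (0.009 + 0.0495) \approx 0.15$, while a prior of $P(D) = 0.2$ gives $0.18 / (0.18 + 0.04) \approx 0.82$. Same rule, same evidence; the conclusion is mostly the prior.)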
Replies from: gwern
comment by gwern · 2024-10-12T02:01:35.557Z · LW(p) · GW(p)

everybody want to test rats in mazes, ain't nobody want to test this janky-ass maze!

One of the interesting things I found when I finally tracked down the source is that one of the improved mazes before that was a 3D maze where mice had to choose vertically, keeping them in the same position horizontally, because otherwise they apparently were hearing some sort of subtle sound whose volume/direction let them gauge their position and memorize the choice. So Hunter created a stack of T-junctions, so each time they were another foot upwards/downwards, but at the same point in the room and so the same distance away from the sound source.

comment by sarahconstantin · 2024-12-20T20:56:10.331Z · LW(p) · GW(p)

links 12/20/2024: https://roamresearch.com/#/app/srcpublic/page/12-20-2024

comment by sarahconstantin · 2024-12-16T18:08:25.450Z · LW(p) · GW(p)

links 12/16/2024: https://roamresearch.com/#/app/srcpublic/page/12-16-2024

https://people.mpi-sws.org/~dg/teaching/lis2014/modules/ifc-1-volpano96.pdf the Volpano-Smith-Irvine security type system assigns security levels to variables (like "high" and "low" security). You can either use type checking or information theory inequalities to verify properties like "information can't flow from low to high security."
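Not the paper's actual type system, just a toy illustration of its core rule (no flows from high to low), with made-up labels and a made-up checker; it also ignores implicit flows through control flow, which the real system handles via the program counter's label:

```python
HIGH, LOW = "high", "low"
RANK = {LOW: 0, HIGH: 1}

def assignment_ok(target_label: str, source_labels: list[str]) -> bool:
    """Allow x := e only if x's label is at least as high as every label in e,
    so information never flows from high-security data into a low-security variable."""
    return all(RANK[s] <= RANK[target_label] for s in source_labels)

print(assignment_ok(LOW, [LOW, LOW]))    # low := low + low   -> True (allowed)
print(assignment_ok(LOW, [HIGH]))        # low := high        -> False (explicit flow rejected)
print(assignment_ok(HIGH, [LOW, HIGH]))  # high := low + high -> True (allowed)
```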

comment by sarahconstantin · 2024-11-21T18:24:59.403Z · LW(p) · GW(p)

links 11/21/2024: https://roamresearch.com/#/app/srcpublic/page/11-21-2024

 

comment by sarahconstantin · 2024-11-15T18:13:22.682Z · LW(p) · GW(p)

links 11/15/2024: https://roamresearch.com/#/app/srcpublic/page/11-15-2024

  • https://www.reddit.com/r/self/comments/1gleyhg/people_like_me_are_the_reason_trump_won/  a moderate/swing-voter (Obama, Trump, Biden) explains why he voted for Trump this time around:
    • he thinks Kamala Harris was an "empty shell" and unlikable and he felt the campaign was manipulative and deceptive.
    • he didn't like that she seemed to be a "DEI hire", but doesn't have a problem with black or female candidates generally; it's just that he resents cynical demographic box-checking.
      • this is a coherent POV -- he did vote for Obama, after all. and plenty of people are like "I want the best person regardless of demographics, not a person chosen for their demographics."
        • hm. why doesn't it seem natural to portray Obama as a "DEI hire"? his campaign made a bigger deal about race than Harris's, and he was criticized a lot for inexperience.
          • One guess: it's laughable to think Obama was chosen by anyone besides himself. He was not the Democratic Party's anointed -- that was Hillary. He's clearly an ambitious guy who wanted to be president on his own initiative and beat the odds to get the nomination. He can't be a "DEI hire" because he wasn't a hire at all.
          • another guess: Obama is clearly smart, speaks/writes in complete sentences, and welcomes lots of media attention and talks about his policies, while Harris has a tendency towards word salad, interviews poorly, avoids discussing issues, etc.
          • another guess: everyone seems to reject the idea that people prefer male to female candidates, but I'm still really not sure there isn't a gender effect! This is very vibes-based on my part, and apparently the data goes the other way, so very uncertain here.
  • https://trevorklee.substack.com/p/if-langurs-can-drink-seawater-can  Trevor Klee on adaptations for drinking seawater
Replies from: Viliam
comment by Viliam · 2024-11-15T20:57:16.408Z · LW(p) · GW(p)

Seems to me that Obama had the level of charisma that Hillary did not. (Neither do Biden or Harris). Bill Clinton had charisma, too. (So did Bernie.)

Also, imagine that you had a button that would make everyone magically forget about race and gender for a moment. I think that the people who voted for Obama would still feel the same, but the people who voted for Hillary would need to think hard about why, and probably their only rationalization would be "so that Trump does not win".

I am not an American, so my perception of American elections is probably extremely unrepresentative, but it felt like Obama was about "hope" and "change", while Hillary was about "vote for Her, because she is a woman, so she deserves to be the president".

I'm still really not sure there isn't a gender effect!

I guess there are people (both men and women) who in principle wouldn't vote for a woman leader. But there are also people who would be happy to give a woman a chance. Not sure which group is larger.

But the wannabe woman leader should not make her campaign about her being a woman. That feels like admitting that she has no other interesting qualities. She needs to project the aura of a competent person who just happens to be female.

In my country, I have voted for a woman candidate twice (1, 2), but they never felt like "DEI hires". One didn't have any woke agenda, the other was pro- some woke topics, but she never made them about her. (It was like "this is what I will support if you elect me", not "this is what I am".)

Replies from: abandon
comment by dirk (abandon) · 2024-11-15T21:06:07.299Z · LW(p) · GW(p)

I voted for Hillary and wouldn't need to think hard about why: she's a democrat, and I generally prefer democrat policies.

comment by sarahconstantin · 2024-11-14T19:08:22.326Z · LW(p) · GW(p)

links 11/14/2024: https://roamresearch.com/#/app/srcpublic/page/11-14-2024

  • https://archive.org/details/byte-magazine  retro magazines
  • https://www.ribbonfarm.com/2019/09/17/weirding-diary-10/#more-6737 Venkatesh Rao on the fall of the MIT Media Lab
    • this stung a bit!
    • i have tended to think that the stuff with "intellectual-glamour" or "visionary" branding is actually pretty close to on-target. not always right, of course, often overhyped, but often still underinvested in even despite being highly hyped.
      • (a surprising number of famous scientists are starved for funding. a surprising number of inventions featured on TED, NYT, etc were never given resources to scale.)
    • I also am literally unconvinced that "Europe's kindergarten" was less sophisticated than our own time! but it seems like a fine debate to have at leisure, not totally sure how it would play out.
    • he's basically been proven right that energy has moved "underground" but that's not a mode i can work very effectively in. if you have to be invited to participate, well, it's probably not going to happen for me.
    • at the institutional level, he's probably right that it's wise to prepare for bad times and not get complacent. again, this was 2019; a lot of the bad times came later. i miss the good times; i want to believe they'll come again.
comment by sarahconstantin · 2024-11-13T17:19:33.145Z · LW(p) · GW(p)

links 11/13/2024: https://roamresearch.com/#/app/srcpublic/page/11-13-2024

 

comment by sarahconstantin · 2024-10-08T15:20:55.710Z · LW(p) · GW(p)

links 10/8/24 https://roamresearch.com/#/app/srcpublic/page/10-08-2024

comment by sarahconstantin · 2024-11-01T16:20:07.688Z · LW(p) · GW(p)

links 11/01/2024: https://roamresearch.com/#/app/srcpublic/page/11-01-2024

comment by sarahconstantin · 2024-10-01T16:24:18.442Z · LW(p) · GW(p)

links 10/1/24

https://roamresearch.com/#/app/srcpublic/page/10-01-2024

comment by sarahconstantin · 2024-11-26T18:59:05.446Z · LW(p) · GW(p)

links 11/26/2024: https://roamresearch.com/#/app/srcpublic/page/11-26-2024

  • https://chrislakin.blog/archive  sensible, but not actionable for me, advice on becoming less insecure.
  • https://abundance.institute pro-progress think tank, where Eli Dourado works
  • The Myth of Er is the final scene of Plato's Republic.
    • it is a very strange story. in the afterlife, the good are rewarded in heaven and the bad are punished in hell; and then everyone lines up to choose their new reincarnated life. they get to see how each possible life will play out. people who have led unhappy lives often prefer to reincarnate as animals. people who were only virtuous out of habit and went to heaven often choose to be all-powerful tyrants, not realizing how this will backfire and hurt them. people who have learned philosophy are more likely to choose lives of virtue; they also "forget less" about their past lives by drinking from Lethe.
      • so in one sense it's straightforwardly a pitch for philosophy...but it has more moving parts than would seem to be necessary just to make that point.
        • most myths/stories about "good is rewarded, evil is punished" don't have this homeostatic mechanism where the good are most likely to turn bad (since Heaven makes them complacent) and the bad are more likely to turn good (since Hell makes them wish for a better next life.) why put that in?
      • how does this whole reincarnation thing relate to the rest of the Republic, which is ambiguous between being a plan for an ideal city and a metaphor for the ideal internal organization of the soul?
    • https://beccatarnas.com/2013/10/17/the-myth-of-er/
  • http://strangehorizons.com/fiction/the-spindle-of-necessity/
  • war in the Middle East
  • what went wrong with Ginkgo Bioworks?
  • https://www.isomorphiclabs.com/ AI-for-bio company
  • https://www.maximumnewyork.com/p/political-capital-savings-plan
    • I'm sure Daniel Golliher is doing a healthy thing but I struggle to get on board myself.
      • I think he's probably right that in order to actually make a political impact you have to pick a very small issue (like basketball courts in your city) to spend a lot of time on and you have to, um, have friends.
      • I looked into public art one time -- how do people get their murals etc into public spaces? -- and the answer was, simply, that they are full time on that project. they live eat sleep and breathe public art. now, do I like pretty things? yes. do I care so much about public art in particular that i would want to be full time on it? no.
      • Given that I don't want to spend my life on the issues "small enough" that i could actually shift them, it is absolutely rational for me not to participate in politics and to find it an uncongenial place! i can make a way bigger impact, much faster, with the reputational capital (and literal money) I've built up in more SV-adjacent circles than I can by grinding on NYC neighborhood issues.
  • https://www.nature.com/articles/s41593-024-01784-3
    • Is connectomics actually useful for anything? here’s strong evidence for “yes.”
    • Mapping how neurons connect and using graph clustering gives you (anatomically sensible) functional distinctions into systems like “oculomotor” (which governs eye movements) and “axial” (which governs movements along the body axis.)
    • Looking at the spectrum of the graph also predicts a chunky “wiring diagram”. Simulating the dynamics of this wiring diagram recapitulates real electrophysiology. In other words, just doing mathy graph stuff allowed the researchers to infer a modular organization at an intermediate scale between neurons and gross anatomy, a useful scale for predicting neural behavior. This is literally “cutting reality at the joints”. (A toy illustration of this kind of graph clustering is at the end of this list.)
    • One thing that has frustrated me as an amateur learning neuroscience is that we have a microscale (cells) and a macroscale (brain anatomy) but function — the brain’s ability to carry out specific tasks — has to happen at some kind of meso-scale regarding the interaction of groups of neurons. Clearly there’s redundancy — it’s possible for two different neuron-by-neuron patterns of activity to reflect “the same” functional behavior — so we need a “unit of function” that’s “all the activity patterns that do the same thing” — probably that coincides somewhat with spatial co-location, similar cell type, etc, but not at all necessarily! Only once you have “units of function” can you talk about the brain like a machine, know what its “state” is and how that “state” would change under specific interventions, simulate it efficiently, etc.
    • To understand brain function, we’d need to be able to discern human-interpretable “parts” of brain activity, like “remembering your grandmother just is the fizz blorking the buzz”…but we don’t seem to know what the “fizz”, the “buzz” or “blorking” are. We’d need to have “chunky things” in the brain-activity space, the way molecules, cells, or anatomical structures are “chunky things” at the micro and macro scales. And I felt like “what am I missing? does anybody in neuroscience even care about chunky-things? am wrong to care? or do I just have the wrong keyword?”
    • This paper definitely seems like an example of “chunky things neuroscience”, which is encouraging!
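To make the "mathy graph stuff" concrete, here is a toy sketch of clustering a connectivity matrix with scikit-learn's spectral clustering; the matrix and the module count are invented, and this is only a cartoon of what the paper actually does:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Toy symmetric "synaptic connectivity" matrix: 6 neurons, with neurons 0-2
# wired mostly to each other and neurons 3-5 wired mostly to each other.
adjacency = np.array([
    [0, 5, 4, 0, 0, 1],
    [5, 0, 6, 1, 0, 0],
    [4, 6, 0, 0, 1, 0],
    [0, 1, 0, 0, 7, 5],
    [0, 0, 1, 7, 0, 6],
    [1, 0, 0, 5, 6, 0],
])

# affinity="precomputed" tells sklearn to treat the matrix as graph weights.
labels = SpectralClustering(n_clusters=2, affinity="precomputed", random_state=0).fit_predict(adjacency)
print(labels)  # e.g. [0 0 0 1 1 1] -- the two wiring modules are recovered
```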
Replies from: CstineSublime
comment by CstineSublime · 2024-11-29T03:21:35.599Z · LW(p) · GW(p)

It's been a while since I've read Plato's Republic, but isn't the Myth of Er just an abstraction of the way people make decisions based on (perceived) justice and injustice in their everyday life? Just in the same way that Socrates says it is easier to read large print than small print, so he scales up justice from an individual to the titular Kallipolis, so too the day-to-day determinism of choices motivated by what we consider 'fair' or 'just' is easier to see when multiplied over endless cycles of lives than over days and nights.

Is it possible that Plato was saying that day to day we experience this homeostatic mechanism? (if you are rational enough to observe the patterns of how your choices affect your personal circumstances?).

An example from the Republic itself: if I remember correctly, the entire dialogue starts because Socrates is in effect kidnapped after the end of a festival, because his interlocutors find him so darn entertaining. This would appear to be unjust - but not unexpected, because he is Socrates, who has this reputation for being engaging and wise, even if it is not the 'right' or 'just' way to treat him. How then should he behave in future, knowing that this is the potential cost of his social behavior? And the Myth of Er says that Odysseus kept to himself, sought neither virtue nor tyranny. That's probably the wrong reading. It's been a while since I've read it.

 

comment by sarahconstantin · 2024-11-18T19:25:29.830Z · LW(p) · GW(p)

links 11/18/2024: https://roamresearch.com/#/app/srcpublic/page/11-18-2024

Replies from: Viliam
comment by Viliam · 2024-11-19T16:09:43.528Z · LW(p) · GW(p)

i want to read his nonfiction

It would have been nice to read A Journal of the Plague Year during covid.

comment by sarahconstantin · 2024-11-07T16:33:57.183Z · LW(p) · GW(p)

links 11/07/2024: https://roamresearch.com/#/app/srcpublic/page/11-07-2024

comment by sarahconstantin · 2024-10-30T14:35:00.839Z · LW(p) · GW(p)

links 10/30/2024: https://roamresearch.com/#/app/srcpublic/page/10-30-2024

 

comment by sarahconstantin · 2024-10-29T14:59:50.365Z · LW(p) · GW(p)

links 10/29/2024: https://roamresearch.com/#/app/srcpublic/page/10-29-2024

comment by sarahconstantin · 2024-10-23T15:26:20.380Z · LW(p) · GW(p)

links 10/23/24:

https://roamresearch.com/#/app/srcpublic/page/10-23-2024

  • https://eukaryotewritesblog.com/2024/10/21/i-got-dysentery-so-you-dont-have-to/  personal experience at a human challenge trial, by the excellent Georgia Ray
  • https://catherineshannon.substack.com/p/the-male-mind-cannot-comprehend-the
    • I...guess this isn't wrong, but it's a kind of Take I've never been able to relate to myself. Maybe it's because I found Legit True Love at age 22, but I've never had that feeling of "oh no the men around me are too weak-willed" (not in my neck of the woods they're not!) or "ew they're too interested in going to the gym" (gym rats are fine? it's a hobby that makes you good-looking, I'm on board with this) or "they're not attentive and considerate enough" (often a valid complaint, but typically I'm the one who's too hyperfocused on my own work & interests) or "they're too show-offy" (yeah it's irritating in excess but a little bit of show-off energy is enlivening).
    • Look: you like Tony Soprano because he's competent and lives by a code? But you don't like it when a real-life guy is too competitive, intense, or off doing his own thing? I'm sorry, but that's not how things work.
      • Tony Soprano can be light-hearted and always have time for the women around him because he is a fictional character. In real life, being good at stuff takes work and is sometimes stressful.
      • My husband is, in fact, very close to this "Tony Soprano" ideal -- assertive, considerate, has "boyish charm", lives by a "code", is competent at lots of everyday-life things but isn't too busy for me -- and I guarantee you would not have thought to date him because he's also nerdy and argumentative and wouldn't fit in with the yuppie crowd.
      • Also like. This male archetype is a guy who fixes things for you and protects you and makes you feel good. In real life? Those guys get sad that they're expected to give, give, give and nobody cares about their feelings. I haven't watched The Sopranos but my understanding is that Tony is in therapy because the strain of this life is getting to him. This article doesn't seem to have a lot of empathy with what it's like to actually be Tony...and you probably should, if you want to marry him.
  • https://fas.org/publication/the-magic-laptop-thought-experiment/ from Tom Kalil, a classic: how to think about making big dreams real.
  • https://paulgraham.com/yahoo.html Paul Graham's business case studies!
  • https://substack.com/home/post/p-150520088 a celebratory reflection on the recent Progress Conference. Yes, it was that good.
  • https://en.m.wikipedia.org/wiki/Hecuba  in some tellings (not Homer's), Hecuba turns into a dog from grief at the death of her son.
  • https://www.librariesforthefuture.bio/p/lff
    • a framework for thinking about aging: "1st gen" is delaying aging, which is where the field started (age1, metformin, rapamycin), while "2nd gen" is pausing (stasis), repairing (reprogramming), or replacing (transplanting), cells/tissues. 2nd gen usually uses less mature technologies (eg cell therapy, regenerative medicine), but may have a bigger and faster effect size.
    • "function, feeling, and survival" are the endpoints that matter.
      • biomarkers are noisy and speculative early proxies that we merely hope will translate to a truly healthier life for the elderly. apply skepticism.
  • https://substack.com/home/post/p-143303463 I always like what Maxim Raginsky has to say. you can't do AI without bumping into the philosophy of how to interpret what it's doing.
comment by sarahconstantin · 2024-10-09T14:45:27.807Z · LW(p) · GW(p)

links 10/9/24 https://roamresearch.com/#/app/srcpublic/page/yI03T5V6t

comment by sarahconstantin · 2024-10-07T14:08:16.899Z · LW(p) · GW(p)

links 10/7/2024

https://roamresearch.com/#/app/srcpublic/page/yI03T5V6t

comment by sarahconstantin · 2024-10-04T14:32:05.585Z · LW(p) · GW(p)

links 10/4/2024

https://roamresearch.com/#/app/srcpublic/page/10-04-2024

comment by sarahconstantin · 2024-10-02T16:01:58.688Z · LW(p) · GW(p)

links 10/2/2024:

https://roamresearch.com/#/app/srcpublic/page/10-02-2024