Posts

2025 Color Trends 2024-10-07T21:20:03.962Z
sarahconstantin's Shortform 2024-10-01T16:24:17.329Z
Fun With The Tabula Muris (Senis) 2024-09-20T18:20:01.901Z
The Great Data Integration Schlep 2024-09-13T15:40:02.298Z
Fun With CellxGene 2024-09-06T22:00:03.461Z
AI for Bio: State Of The Field 2024-08-30T18:00:02.187Z
LLM Applications I Want To See 2024-08-19T21:10:03.101Z
All The Latest Human tFUS Studies 2024-08-09T22:20:04.561Z
Multiplex Gene Editing: Where Are We Now? 2024-07-16T20:50:04.590Z
Superbabies: Putting The Pieces Together 2024-07-11T20:40:05.036Z
The Incredible Fentanyl-Detecting Machine 2024-06-28T22:10:01.223Z
Permissions in Governance 2019-08-02T19:50:00.592Z
The Costs of Reliability 2019-07-20T01:20:00.895Z
Book Review: Why Are The Prices So Damn High? 2019-06-28T19:40:00.643Z
Circle Games 2019-06-06T16:40:00.596Z
Pecking Order and Flight Leadership 2019-04-29T20:30:01.168Z
The Forces of Blandness and the Disagreeable Majority 2019-04-28T19:44:42.177Z
Degrees of Freedom 2019-04-02T21:10:00.516Z
Personalized Medicine For Real 2019-03-04T22:40:00.351Z
The Tale of Alice Almost: Strategies for Dealing With Pretty Good People 2019-02-27T19:34:03.906Z
Humans Who Are Not Concentrating Are Not General Intelligences 2019-02-25T20:40:00.940Z
The Relationship Between Hierarchy and Wealth 2019-01-23T02:00:00.467Z
Book Recommendations: An Everyone Culture and Moral Mazes 2019-01-10T21:40:04.163Z
Contrite Strategies and The Need For Standards 2018-12-24T18:30:00.480Z
The Pavlov Strategy 2018-12-20T16:20:00.542Z
Argue Politics* With Your Best Friends 2018-12-15T19:00:00.549Z
Introducing the Longevity Research Institute 2018-12-14T20:20:00.532Z
Player vs. Character: A Two-Level Model of Ethics 2018-12-14T19:40:00.520Z
Norms of Membership for Voluntary Groups 2018-12-11T22:10:00.975Z
Playing Politics 2018-12-05T00:30:00.996Z
“She Wanted It” 2018-11-11T22:00:01.645Z
Things I Learned From Working With A Marketing Advisor 2018-10-09T00:10:01.320Z
Fasting Mimicking Diet Looks Pretty Good 2018-10-04T19:50:00.695Z
Reflections on Being 30 2018-10-02T19:30:01.585Z
Direct Primary Care 2018-09-25T18:00:01.747Z
Tactical vs. Strategic Cooperation 2018-08-12T16:41:40.005Z
Oops on Commodity Prices 2018-06-10T15:40:00.499Z
Monopoly: A Manifesto and Fact Post 2018-05-31T18:40:00.479Z
Mental Illness Is Not Evidence Against Abuse Allegations 2018-05-13T19:50:42.645Z
Introducing the Longevity Research Institute 2018-05-08T03:30:00.768Z
Wrongology 101 2018-04-25T00:00:00.991Z
Good News for Immunostimulants 2018-04-16T16:10:00.575Z
Is Rhetoric Worth Learning? 2018-04-06T22:03:47.918Z
Naming the Nameless 2018-03-22T00:35:55.634Z
"Cheat to Win": Engineering Positive Social Feedback 2018-02-05T23:16:50.858Z
The Right to be Wrong 2017-11-28T23:43:24.210Z
Distinctions in Types of Thought 2017-10-10T03:36:06.820Z
Why I Quit Social Media 2017-09-26T00:58:28.379Z
Performance Trends in AI 2017-01-28T08:36:59.679Z
Life Extension Possibilities 2017-01-24T01:54:32.556Z

Comments

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-15T16:03:13.884Z · LW · GW

I don't think it was articulated quite right -- it's more negative than my overall stance (I wrote it when unhappy) and a little too short-termist.

I do still believe that the future is unpredictable, that we should not try to "constrain" or "bind" all of humanity forever using authoritarian means, and that there are many many fates worse than death and we should not destroy everything we love for "brute" survival.

And, also, I feel that transience is normal and only a bit sad. It's good to save lives, but mortality is pretty "priced in" to my sense of how the world works. It's good to work on things that you hope will live beyond you, but Dark Ages and collapses are similarly "priced in" as normal for me. Sara Teasdale: "You say there is no love, my love, unless it lasts for aye; Ah folly, there are episodes far better than the play!" If our days are as a passing shadow, that's not that bad; we're used to it.

I worry that people who are not ok with transience may turn themselves into monsters so they can still "win" -- even though the meaning of "winning" is so changed it isn't worth it any more.

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-15T02:15:36.212Z · LW · GW

I thought about manually deleting them all but I don't feel like it.

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-14T23:48:24.518Z · LW · GW

links, 10/14/2024

  • https://milton.host.dartmouth.edu/reading_room/pl/book_1/text.shtml [[John Milton]]'s Paradise Lost, annotated online [[poetry]]
  • https://darioamodei.com/machines-of-loving-grace [[AI]] [[biotech]] [[Dario Amodei]] spends about half of this document talking about AI for bio, and I think it's the most credible "bull case" yet written for AI being radically transformative in the biomedical sphere.
    • one caveat is that I think if we're imagining a future with brain mapping, regeneration of macroscopic brain tissue loss, and understanding what brains are doing well enough to know why neurological abnormalities at the cell level produce the psychiatric or cognitive symptoms they do...then we probably can do brain uploading! it's really weird to single out this one piece as pie-in-the-sky science fiction when you're already imagining a lot of similarly ambitious things as achievable.
  • https://venture.angellist.com/eli-dourado/syndicate [[tech industry]] when [[Eli Dourado]] picks startups, they're at least not boring! i haven't vetted the technical viability of any of these, but he claims to do a lot of that sort of numbers-in-spreadsheets work.
  • https://forum.effectivealtruism.org/topics/shapley-values [[EA]] [[economics]] how do you assign credit (in a principled fashion) to an outcome that multiple people contributed to? Shapley values! It seems extremely hard to calculate in practice, and subject to contentious judgment calls about the assumptions you make, but maybe it's an improvement over raw handwaving.
  • https://gwern.net/maze [[Gwern Branwen]] digs up the "Mr. Young" studying maze-running techniques in [[Richard Feynman]]'s "Cargo Cult Science" speech. His name wasn't Young but Quin Fischer Curtis, and he was part of a psychology research program at UMich that published little and had little influence on the outside world, and so was "rebooted" and forgotten. Impressive detective work, though not a story with a very satisfying "moral".
  • https://en.m.wikipedia.org/wiki/Cary_Elwes [[celebrities]] [[Cary Elwes]] had an ancestor who was [[Charles Dickens]]' inspiration for Ebenezer Scrooge!
  • https://feministkilljoys.com/2015/06/25/against-students/ [[politics]] an old essay by [[Sara Ahmed]] in defense of trigger warnings in the classroom and in general against the accusations that "students these days" are oversensitive and illiberal.
    • She's doing an interesting thing here that I haven't wrapped my head around. She's not making the positive case "students today are NOT oversensitive or illiberal" or "trigger warnings are beneficial," even though she seems to believe both those things. She's more calling into question "why has this complaint become a common talking point? what unstated assumptions does it perpetuate?" I am not sure whether this is a valid approach that's an alternative to the forms of argument I'm more used to, or a sign of weakness (a thing she's doing only because she cannot make the positive case for the opposite of what her opponents claim).
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10080017/ [[cancer]][[medicine]] [[biology]] cancer preventatives are an emerging field
    • NSAIDs and omega-3 fatty acids prevent 95% of tumors in a tumor-prone mouse strain?!
    • also we're targeting [[STAT3]] now?! that's a thing we're doing.
      • ([[STAT3]] is a major oncogene but it's a transcription factor, it lives in the cytoplasm and the nucleus, this is not easy to target with small molecules like a cell surface protein.)
  • https://en.m.wikipedia.org/wiki/CLARITY [[biotech]] make a tissue sample transparent so you can do 3D microscopic imaging, with contrast from immunostaining or DNA/RNA labels
  • https://distill.pub/2020/circuits/frequency-edges/ [[AI]] [[neuroscience]] a type of neuron in vision neural nets, the "high-low frequency detector", has recently also been found to be a thing in literal mouse brain neurons (h/t [[Dario Amodei]]) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10055119/
  • https://mosaicmagazine.com/essay/israel-zionism/2024/10/the-failed-concepts-that-brought-israel-to-october-7/ [[politics]][[Israel]][[war]] an informative and sober view on "what went wrong" leading up to Oct 7
    • tl;dr: Hamas consistently wants to destroy Israel and commit violence against Israelis, they say so repeatedly, and there was never going to be a long-term possibility of living peacefully side-by-side with them; Netanyahu is a tough talker but kind of a procrastinator who's kicked the can down the road on national security issues for his entire career; catering to settlers is not in the best interests of Israel as a whole (they provoke violence) but they are an unduly powerful voting bloc; Palestinian misery is real but has been institutionalized by the structure of the Gazan state and the UN which prevents any investment into a real local economy; the "peace process" is doomed because Israel keeps offering peace and the Palestinians say no to any peace that isn't the abolition of the State of Israel.
    • it's pretty common for reasonable casual observers (eg in America) to see Israel/Palestine as a tragic conflict in which probably both parties are somewhat in the wrong, because that's a reasonable prior on all conflicts. The more you dig into the details, though, the more you realize that "let's live together in peace and make concessions to Palestinians as necessary" has been the mainstream Israeli position since before 1948. It's not a symmetric situation.
  • [[von Economo neurons]] are spooky [[neuroscience]] https://en.wikipedia.org/wiki/Von_Economo_neuron
    • only found in great apes, cetaceans, and humans
    • concentrated in the [[anterior cingulate cortex]] and [[insular cortex]] which are closely related to the "sense of self" (i.e. interoception, emotional salience, and the perception that your e.g. hand is "yours" and it was "you" who moved it)
    • the first to go in [[frontotemporal dementia]]
    • https://www.nature.com/articles/s41467-020-14952-3 we don't know where they project to! they are so big that we haven't tracked them fully!
    • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3953677/
  • https://www.wired.com/story/lee-holloway-devastating-decline-brilliant-young-coder/ a co-founder of Cloudflare had [[frontotemporal dementia]] [[neurology]]
  • [[frontotemporal dementia]] is maybe caused by misfolded proteins being passed around neuron-to-neuron, like prion disease! [[neurology]]
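For what it's worth, the Shapley-value credit assignment in the EA link above can be brute-forced for tiny examples. The game and player names here are made up for illustration; the factorial-time loop over all orderings is exactly why it's "extremely hard to calculate in practice":

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Each player's Shapley value: their average marginal contribution
    over all n! orderings of the players. O(n!) -- fine for toys,
    hopeless for large groups."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    n = factorial(len(players))
    return {p: t / n for p, t in totals.items()}

# Hypothetical game: A and B jointly produce 100; C contributes nothing.
def v(coalition):
    return 100.0 if {"A", "B"} <= coalition else 0.0

shapley_values(["A", "B", "C"], v)  # {'A': 50.0, 'B': 50.0, 'C': 0.0}
```

Note that the answer depends entirely on how you define `value()` over coalitions, which is where the "contentious judgment calls" live.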
Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-13T22:10:43.172Z · LW · GW

Therefore, do things you'd be in favor of having done even if the future will definitely suck. Things that are good today, next year, fifty years from now... but not like "institute theocracy to raise birth rates", which is awful today even if you think it might "save the world".

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-13T22:04:22.526Z · LW · GW

"Let's abolish slavery," when proposed, would make the world better now as well as later.

I'm not against trying to make things better!

I'm against doing things that are strongly bad for present-day people to increase the odds of long-run human species survival.

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-11T15:18:11.631Z · LW · GW

https://roamresearch.com/#/app/srcpublic/page/10-11-2024

 

  • https://www.mindthefuture.info/p/why-im-not-a-bayesian [[Richard Ngo]] [[philosophy]] I think I agree with this, mostly.
    • I wouldn't say "not a Bayesian" because there's nothing wrong with Bayes' Rule and I don't like the tribal connotations, but lbr, we don't literally use Bayes' rule very often and when we do it often reveals just how much our conclusions depend on problem framing and prior assumptions. A lot of complexity/ambiguity necessarily "lives" in the part of the problem that Bayes' rule doesn't touch. To be fair, I think "just turn the crank on Bayes' rule and it'll solve all problems" is a bit of a strawman -- nobody literally believes that, do they? -- but yeah, sure, happy to admit that most of the "hard part" of figuring things out is not the part where you can mechanically apply probability.
  • https://www.lesswrong.com/posts/YZvyQn2dAw4tL2xQY/rationalists-are-missing-a-core-piece-for-agent-like [[tailcalled]] this one is actually interesting and novel; i'm not sure what to make of it. maybe literal physics, with like "forces", matters and needs to be treated differently than just a particular pattern of information that you could rederive statistically from sensory data? I kind of hate it but unlike tailcalled I don't know much about physics-based computational models...[[philosophy]]
  • https://alignbio.org/ [[biology]] [[automation]] datasets generated by the Emerald Cloud Lab! [[Erika DeBenedectis]] project. Seems cool!
  • https://www.sciencedirect.com/science/article/abs/pii/S0306453015009014?via%3Dihub [[psychology]] the forced swim test is a bad measure of depression.
    • when a mouse trapped in water stops struggling, that is not "despair" or "learned helplessness." these are anthropomorphisms. the mouse is in fact helpless, by design; struggling cannot save it; immobility is adaptive.
      • in fact, mice become immobile faster when they have more experience with the test. they learn that struggling is not useful and they retain that knowledge.
    • also, a mouse in an acute stress situation is not at all like a human's clinical depression, which develops gradually and persists chronically.
    • https://www.sciencedirect.com/science/article/abs/pii/S1359644621003615?via%3Dihub the forced swim test also doesn't predict clinical efficacy of antidepressants well. (admittedly this study was funded by PETA, which thinks the FST is cruel to mice)
  • https://en.wikipedia.org/wiki/Copy_Exactly! [[semiconductors]] the Wiki doesn't mention that Copy Exactly was famously a failure. even when you try to document procedures perfectly and replicate them on the other side of the world, at unprecedented precision, it is really really hard to get the same results.
  • https://neuroscience.stanford.edu/research/funded-research/optimization-african-killifish-platform-rapid-drug-screening-aggregate [[biology]] you know what's cool? building experimentation platforms for novel model organisms. Killifish are the shortest-lived vertebrate -- which is great if you want to study aging. they live in weird oxygen-poor freshwater zones that are hard to replicate in the lab. figuring out how to raise them in captivity and standardize experiments on them is the kind of unsung, underfunded accomplishment we need to celebrate and expand WAY more.
  • https://www.nature.com/articles/513481a [[biology]] [[drug discovery]] ever heard of curcumin doing something for your health? resveratrol? EGCG? those are all natural compounds that light up a drug screen like a Christmas tree because they react with EVERYTHING. they are not going to work on your disease in real life.
  • https://en.wikipedia.org/wiki/Fetal_bovine_serum [[biotech]] this cell culture medium is just...cow juice. it is not consistent batch to batch. this is a big problem.
  • https://www.nature.com/articles/s42255-021-00372-0 [[biology]] mice housed at "room temperature" are too cold for their health; they are more disease-prone, which calls into question a lot of experimental results.
  • https://calteches.library.caltech.edu/51/2/CargoCult.htm [[science]] the famous [[Richard Feynman]] "Cargo cult science" essay is about flawed experimental methods!
    • if your rat can smell the location of the cheese in the maze all along, then your maze isn't testing learning.
    • errybody want to test rats in mazes, ain't nobody want to test this janky-ass maze!
  • https://fastgrants.org/ [[metascience]] [[COVID-19]] this was cool, we should bring it back for other stuff
  • https://erikaaldendeb.substack.com/cp/147525831 [[biotech]] engineering biomanufacturing microbes for surviving on Mars?!
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8278038/ [[prediction markets]] DARPA tried to use prediction markets to predict the success of projects. it didn't work! they couldn't get enough participants.
  • https://www.citationfuture.com/ [[prediction markets]] these guys do prediction markets on science
  • https://jamesclaims.substack.com/p/how-should-we-fund-scientific-error [[metascience]] [[James Heathers]] has a proposal for a science error detection (fraud, bad research, etc) nonprofit. We should fund him to do it!!
  • https://en.wikipedia.org/wiki/Elisabeth_Bik [[metascience]] [[Elisabeth Bik]] is the queen of research fraud detection. pay her plz.
  • https://substack.com/home/post/p-149791027 [[archaeology]] it was once thought that Gobekli Tepe was a "festival city" or religious sanctuary, where people visited but didn't live, because there wasn't a water source. Now, they've found something that looks like water cisterns, and they suspect people did live there.
    • I don't like the framing of "hunter-gatherer" = "nomadic" in this post.
      • We keep pushing the date of agriculture farther back in time. We keep discovering that "hunter-gatherers" picking plants in "wild" forests are actually doing some degree of forest management, planting seeds, or pulling undesirable weeds. Arguably there isn't a hard-and-fast distinction between "gathering" and "gardening". (Grain agriculture where you use a plow and completely clear a field for planting your crop is qualitatively different from the kind of kitchen-garden-like horticulture that can be done with hand tools and without clearing forests. My bet is that all so-called hunter-gatherers did some degree of horticulture until proven otherwise, excepting eg arctic environments)
      • what the water actually suggests is that people lived at Gobekli Tepe for at least part of the year. it doesn't say what they were eating.
Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-11T13:54:18.847Z · LW · GW

I'm not defeatist! I'm picky.

And I'm not talking specifics because i don't want to provoke argument.

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-10T22:48:53.397Z · LW · GW

wait and see if i still believe it tomorrow!

Comment by sarahconstantin on Why I’m not a Bayesian · 2024-10-10T14:57:56.970Z · LW · GW

I think I agree with this post directionally.

You cannot apply Bayes' Theorem until you have a probability space; many real-world situations, especially the ones people argue about, do not have well-defined probability spaces, including a complete set of mutually exclusive and exhaustive possible events, which are agreed upon by all participants in the argument. 

You will notice that, even on LessWrong, people almost never have Bayesian discussions where they literally apply Bayes' Rule.  It would probably be healthy to try to literally do that more often! But making a serious attempt to debate a contentious issue "Bayesianly" typically looks more like Rootclaim's lab leak debate, which took a lot of setup labor and time, and where the result of quantifying the likelihoods was to reveal just how heavily your "posterior" conclusion depends on your "prior" assumptions, which were outside the scope of debate.
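To make the prior-sensitivity point concrete, here's a toy binary-hypothesis application of Bayes' rule (all numbers hypothetical): the same evidence, carrying a 10:1 likelihood ratio in favor of H, takes a 50% prior to a ~91% posterior but a 1% prior only to ~9%.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    # Bayes' rule for a binary hypothesis H given evidence E:
    # P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Identical evidence (likelihood ratio 10:1 for H), different priors:
posterior(0.50, 0.10, 0.01)  # ~0.909
posterior(0.01, 0.10, 0.01)  # ~0.092
```

The mechanical step is trivial; all the disagreement gets pushed into `prior` and into how you framed the hypothesis space in the first place.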

I think prediction markets are good, and I think Rootclaim-style quantified debates are worth doing occasionally, but what we do in most discussion isn't Bayesian and can't easily be made Bayesian.

I am not so sure about preferring models to propositions. I think what you're getting at is that we can make much more rigorous claims about formal models than about "reality"... but most of the time what we care about is reality. And we can't be rigorous about the intuitive "mental models" that we use for most real-world questions. So if your take is "we should talk about the model we're using, not what the world is", then...I don't think that's true in general.

In the context of formal models, we absolutely should consider how well they correspond to reality. (It's a major bias of science that it's more prestigious to make claims within a model than to ask "how realistic is this model for what we care about?") 

In the context of informal "mental models", it's probably good to communicate how things work "in your head" because they might work differently in someone else's head, but ultimately what people care about is the intersubjective commonalities that can be in both your heads (and, for all practical purposes, in the world), so you do have to deal with that eventually.

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-10T14:32:16.066Z · LW · GW
  • “we” can’t steer the future.
  • it’s wrong to try to control people or stop them from doing locally self-interested & non-violent things in the interest of “humanity’s future”, in part because this is so futile.
    • if the only way we survive is if we coerce people to make a costly and painful investment in a speculative idea that might not even work, then we don’t survive! you do not put people through real pain today for a “someday maybe!” This applies to climate change,  AI x-risk, and socially-conservative cultural reform.
  • most cultures and societies in human history have been so bad, by my present values, that I’m not sure they’re not worse than extinction, and we should expect that most possible future states are similarly bad;
  • history clearly teaches us that civilizations and states collapse (on timescales of centuries) and the way to bet is that ours will as well, but it’s kind of insane hubris to think that this can be prevented;
  • the literal species Homo sapiens is pretty resilient and might avoid extinction for a very long time, but have you MET Homo sapiens? this is cold fucking comfort! (see e.g. C. J. Cherryh’s vision in 40,000 in Gehenna for a fictional representation not far from my true beliefs — we are excellent at adaptation and survival but when we “survive” this often involves unimaginable harshness and cruelty, and changing into something that our ancestors would not have liked at all.)
  • identifying with species-survival instead of with the stuff we value now is popular among the thoughtful but doesn’t make any sense to me;
  • in general it does not make sense, to me, to compromise on personal values in order to have more power/influence. you will be able to cause stuff to happen, but who cares if it’s not the stuff you want?
  • similarly, it does not make sense to consciously optimize for having lots of long-term descendants. I love my children; I expect they’ll love their children; but go too many generations out and it’s straight-up fantasyland. My great-grandparents would have hated me.  And that’s still a lot of shared culture and values! Do you really have that much in common with anyone from five thousand years ago?
  • Evolution is not your friend. God is not your friend. Everything worth loving will almost certainly perish. Did you expect it to last forever?
  • “I love whatever is best at surviving” or “I love whatever is strongest” means you don’t actually care what it’s like. It means you have no loyalty and no standards. It means you don’t care so much if the way things turn out is hideous, brutal, miserable, abusive… so long as it technically “is alive” or “wins”. Fuck that.
  • I despise sour grapes. If the thing I want isn’t available, I’m not going to pretend that what is available is what I want.
  • I am not going to embrace the “realistic” plan of allying with something detestable but potent. There is always an alternative, even if the only alternative is “stay true to your dreams and then get clobbered.”

Link to this on my Roam

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-09T14:45:27.807Z · LW · GW

links 10/9/24 https://roamresearch.com/#/app/srcpublic/page/yI03T5V6t

Comment by sarahconstantin on Overview of strong human intelligence amplification methods · 2024-10-08T19:30:40.544Z · LW · GW

Neuronal activity could certainly affect gene regulation! so yeah, I think it's possible (which is not a strong claim...lots of things "regulate" other things, that doesn't necessarily make them effective intervention points)

Comment by sarahconstantin on Overview of strong human intelligence amplification methods · 2024-10-08T18:44:03.304Z · LW · GW

ditto

we have really not fully explored ultrasound and afaik there is no reason to believe it's inherently weaker than administering signaling molecules. 

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-08T15:20:55.710Z · LW · GW

links 10/8/24 https://roamresearch.com/#/app/srcpublic/page/10-08-2024

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-08T03:36:45.548Z · LW · GW

no! it sounded like "typical delusion stuff" at first until i listened carefully and yep that was a description of targeted ads.

Comment by sarahconstantin on 2025 Color Trends · 2024-10-08T03:35:06.255Z · LW · GW

they're in the substack post

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-07T15:58:01.224Z · LW · GW
  • Psychotic “delusions” are more about holding certain genres of idea with a socially inappropriate amount of intensity and obsession than holding a false idea. Lots of non-psychotic people hold false beliefs (eg religious people). And, interestingly, it is absolutely possible to hold a true belief in a psychotic way.
  • I have observed people during psychotic episodes get obsessed with the idea that social media was sending them personalized messages (quite true; targeted ads are real) or the idea that the nurses on the psych ward were lying to them (they were).
  • Preoccupation with the revelation of secret knowledge, with one’s own importance, with mistrust of others’ motives, and with influencing others' thoughts or being influenced by other's thoughts, are classic psychotic themes.
    • And it can be a symptom of schizophrenia when someone’s mind gets disproportionately drawn to those themes. This is called being “paranoid” or “grandiose.”
    • But sometimes (and I suspect more often with more intelligent/self-aware people) the literal content of their paranoid or grandiose beliefs is true!
      • sometimes the truth really has been hidden!
      • sometimes people really are lying to you or trying to manipulate you!
      • sometimes you really are, in some ways, important! sometimes influential people really are paying attention to you!
      • of course people influence each others' thoughts -- not through telepathy but through communication!
    • a false psychotic-flavored thought is "they put a chip in my brain that controls my thoughts." a true psychotic-flavored thought is "Hollywood moviemakers are trying to promote progressive values in the public by implanting messages in their movies."
      • These thoughts can come from the same emotional drive, they are drawn from dwelling on the same theme of "anxiety that one's own thoughts are externally influenced", they are in a deep sense mere arbitrary verbal representations of a single mental phenomenon...
      • but if you take the content literally, then clearly one claim is true and one is false.
      • and a sufficiently smart/self-aware person will feel the "anxiety-about-mental-influence" experience, will search around for a thought that fits that vibe but is also true, and will come up with something a lot more credible than "they put a mind-control chip in my brain", but is fundamentally coming from the same motive.  
  • There’s an analogous but easier to recognize thing with depression.
    • A depressed person’s mind is unusually drawn to obsessing over bad things. But this obviously doesn’t mean that no bad things are real or that no depressive’s depressing claims are true.
    • When a depressive literally believes they are already dead, we call that Cotard's Delusion, a severe form of psychotic depression. When they say "everybody hates me" we call it a mere "distorted thought". When they talk accurately about the heat death of the universe we call it "thermodynamics." But it's all coming from the same emotional place.
  • In general, mental illnesses, and mental states generally, provide a "tropism" towards thoughts that fit with certain emotional/aesthetic vibes.
    • Depression makes you dwell on thoughts of futility and despair
    • Anxiety makes you dwell on thoughts of things that can go wrong
    • Mania makes you dwell on thoughts of yourself as powerful or on the extreme importance of whatever you're currently doing
    • Paranoid psychosis makes you dwell on thoughts of mistrust, secrets, and influencing/being influenced
  • You can, to some extent, "filter" your thoughts (or the ones you publicly express) by insisting that they make sense. You still have a bias towards the emotional "vibe" you're disposed to gravitate towards; but maybe you don't let absurd claims through your filter even if they fit the vibe. Maybe you grudgingly admit the truth of things that don't fit the vibe but technically seem correct.
    • this does not mean that the underlying "tropism" or "bias" does not exist!!!
    • this does not mean that you believe things "only because they are true"!
    • in a certain sense, you are doing the exact same thing as the more overtly irrational person, just hiding it better!
      • the "bottom line" in terms of vibe has already been written, so it conveys no "updates" about the world
      • the "bottom line" in terms of details may still be informative because you're checking that part and it's flexible
  • "He's not wrong but he's still crazy" is a valid reaction to someone who seems to have a mental-illness-shaped tropism to their preoccupations.
    • eg if every post he writes, on a variety of topics, is negative and gloomy, then maybe his conclusions say more about him than about the truth concerning the topic;
      • he might still be right about some details but you shouldn't update too far in the direction of "maybe I should be gloomy about this too"
    • Conversely, "this sounds like a classic crazy-person thought, but I still separately have to check whether it's true" is also a valid and important move to make (when the issue is important enough to you that the extra effort is worth it). 
      • Just because someone has a mental illness doesn't mean every word out of their mouth is false!
      • (and of course this assumption -- that "crazy" people never tell the truth -- drives a lot of psychiatric abuse.)

link: https://roamresearch.com/#/app/srcpublic/page/71kfTFGmK

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-07T14:08:16.899Z · LW · GW

links 10/7/2024

https://roamresearch.com/#/app/srcpublic/page/yI03T5V6t

Comment by sarahconstantin on Nathan Helm-Burger's Shortform · 2024-10-04T14:47:18.103Z · LW · GW

Honestly this Pliny person seems rude. He entered a server dedicated to interacting with this modified AI; instead of playing along with the intended purpose of the group, he tried to prompt-inject the AI to do illegal stuff (that could risk getting the Discord shut down for TOS-violationy stuff?) and to generally damage the rest of the group's ability to interact with the AI.  This is troll behavior.  

Even if the Discord members really do worship a chatbot or have mental health issues, none of that is helped by a stranger coming in and breaking their toys, and then "exposing" the resulting drama online.

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-04T14:32:05.585Z · LW · GW

links 10/4/2024

https://roamresearch.com/#/app/srcpublic/page/10-04-2024

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-02T16:01:58.688Z · LW · GW

links 10/2/2024:

https://roamresearch.com/#/app/srcpublic/page/10-02-2024

Comment by sarahconstantin on The Great Data Integration Schlep · 2024-10-02T15:59:07.637Z · LW · GW

I agree that if the AI can run its own experiments (via robotic actuators) it can do R&D prototyping independently of existing private/corporate data, and that's potentially the whole game. 

My current impression is that, as of 2024, we're starting to see enough investment into AI-controlled robots that in a few years it would be possible to get an "AI experimenter", albeit in the restricted set of domains where experiments can be automated easily. (biological experiments that are basically restricted to pipetting aqueous solutions and imaging the results? definitely yes. most sorts of benchtop electronics prototyping and testing? I imagine so, though I don't know for sure. the full range of reactions/syntheses a chemist can run at a lab bench? probably not for some time; creating a "mechanical chemist" is a famously hard problem since methods are so varied, though obviously it's not in principle impossible.)

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-01T16:24:18.442Z · LW · GW

links 10/1/24

https://roamresearch.com/#/app/srcpublic/page/10-01-2024

Comment by sarahconstantin on What's up with self-esteem? · 2019-07-18T20:37:04.379Z · LW · GW

My current theory is that self-esteem isn't about yourself at all!

Self-esteem is your estimate of how much help/support/contribution/love you can get from others.

This explains why a person needs to feel a certain amount of "confidence" before trying something that is obviously their best bet. By "confidence" we basically just mean "support from other people or the expectation of same." The kinds of things that people usually need "confidence" to do are difficult and involve the risk of public failure and blame, even if they're clearly the best option from an individual perspective.

Comment by sarahconstantin on The AI Timelines Scam · 2019-07-11T14:11:00.368Z · LW · GW

Basically, AI professionals seem to be trying to manage the hype cycle carefully.

Ignorant people tend to be more all-or-nothing than experts. By default, they'll see AI as "totally unimportant or fictional", "a panacea, perfect in every way" or "a catastrophe, terrible in every way." And they won't distinguish between different kinds of AI.

Currently, the hype cycle has gone from "professionals are aware that deep learning is useful" (c. 2013) to "deep learning is AI and it is wonderful in every way and you need some" (c. 2015?) to "maybe there are problems with AI? burn it with fire! Nationalize! Ban!" (c. 2019).

Professionals who are still working on the "deep learning is useful for certain applications" project (which is pretty much where I sit) are quite worried about the inevitable crash when public opinion shifts from "wonderful panacea" to "burn it with fire." When the public opinion crash happens, legitimate R&D is going to lose funding, and that will genuinely be unfortunate. Everyone savvy knows this will happen. Nobody knows exactly when. There are various strategies for dealing with it.

Accelerate the decline: this is what Gary Marcus is doing.

Carve out a niche as an AI Skeptic (who is still in the AI business himself!). Then, when the funding crunch comes, his companies will be seen as "AI that even the skeptic thinks is legit" and have a better chance of surviving.

Be Conservative: this is a less visible strategy but a lot of people are taking it, including me.

Use AI only in contexts that are well justified by evidence, like rapid image processing to replace manual classification. That way, when the funding crunch happens, you'll be able to say you're not just using AI as a buzzword, you're using well-established, safe methods that have a proven track record.

Pivot Into Governance: this is what a lot of AI risk orgs are doing.

Benefit from the coming backlash by becoming an advisor to regulators. Make a living not by building the tech but by talking about its social risks and harms. I think this is actually a fairly weak strategy because it's parasitic on the overall market for AI. There's no funding for AI think tanks if there's no funding for AI itself. But it's an ideal strategy for the cusp period when we're shifting from blind enthusiasm to blind panic.

Preserve Credibility: this is what Yann LeCun is doing and has been doing from day 1 (he was a deep learning pioneer and promoter even before the spectacular empirical performance results came in).

Try to forestall the backlash. Frame AI as good, not bad, and try to preserve the credibility of the profession as long as you can. Argue (honestly but selectively) against anyone who says anything bad about deep learning for any reason.

Any of these strategies may involve saying true things! In fact, assuming you really are an AI expert, the smartest thing to do in the long run is to say only true things, and use connotation and selective focus to define your rhetorical strategy. Reality has no branding; there are true things to say that comport with all four strategies. Gary Marcus is a guy in the "AI Skeptic" niche saying things that are, afaik, true; there are people in that niche who are saying false things. Yann LeCun is a guy in the "Preserve AI Credibility" niche who says true things; when Gary Marcus says true things, Yann LeCun doesn't deny them, but criticizes Marcus's tone and emphasis. Which is quite correct; it's the most intellectually rigorous way to pursue LeCun's chosen strategy.

Comment by sarahconstantin on The AI Timelines Scam · 2019-07-11T13:45:55.486Z · LW · GW

Re: 2: nonprofits and academics have even more incentives than business to claim that a new technology is extremely dangerous. Think tanks and universities are in the knowledge business; they are more valuable when people seek their advice. "This new thing has great opportunities and great risks; you need guidance to navigate and govern it" is a great advertisement for universities and think tanks. Which doesn't mean AI, narrow or strong, doesn't actually have great opportunities and risks! But nonprofits and academics aren't immune from the incentives to exaggerate.

Re: 4: I have a different perspective. The loonies who go to the press with "did you know psychiatric drugs have SIDE EFFECTS?!" are not really a threat to public information to the extent that they are telling the truth. They are a threat to the perceived legitimacy of psychiatrists. This has downsides (some people who could benefit from psychiatric treatment will fear it too much) but fundamentally the loonies are right that a psychiatrist is just a dude who went to school for a long time, not a holy man. To the extent that there is truth in psychiatry, it can withstand the public's loss of reverence, in the long run. Blind reverence for professionals is a freebie, which locally may be beneficial to the public if the professionals really are wise, but is essentially fragile. IMO it's not worth trying to cultivate or preserve. In the long run, good stuff will win out, and smart psychiatrists can just as easily frame themselves as agreeing with the anti-psych cranks in spirit, as being on Team Avoid Side Effects And Withdrawal Symptoms, Unlike All Those Dumbasses Who Don't Care (all two of them).

Comment by sarahconstantin on Rule Thinkers In, Not Out · 2019-06-08T17:16:34.988Z · LW · GW

Some examples of valuable true things I've learned from Michael:

  • Being tied to your childhood narrative of what a good upper-middle-class person does is not necessary for making intellectual progress, making money, or contributing to the world.
  • Most people (esp. affluent ones) are way too afraid of risking their social position through social disapproval. You can succeed where others fail just by being braver even if you're not any smarter.
  • Fiddly puttering with something that fascinates you is the source of most genuine productivity. (Anything from hardware tinkering, to messing about with cost spreadsheets until you find an efficiency, to writing poetry until it "comes out right".) Sometimes the best work of this kind doesn't look grandiose or prestigious at the time you're doing it.
  • The mind and the body are connected. Really. Your mind affects your body and your body affects your mind. The better kinds of yoga, meditation, massage, acupuncture, etc, actually do real things to the body and mind.
  • Science had higher efficiency in the past (late 19th-to-mid-20th centuries).
  • Examples of potentially valuable medical innovations that never see wide application are abundant.
  • A major problem in the world is a 'hope deficit' or 'trust deficit'; otherwise feasible good projects are left undone because people are so mistrustful that it doesn't occur to them that they might not be scams.
  • A good deal of human behavior is explained by evolutionary game theory; coalitional strategies, not just individual strategies.
  • Evil exists; in less freighted, more game-theoretic terms, there exist strategies which rapidly expand, wipe out other strategies, and then wipe themselves out. Not *all* conflicts are merely misunderstandings.
  • How intersubjectivity works; "objective" reality refers to the conserved *patterns* or *relationships* between different perspectives.
  • People who have coherent philosophies -- even opposing ones -- have more in common in the *way* they think, and are more likely to get meaningful stuff done together, than they can with "moderates" who take unprincipled but middle-of-the-road positions. Two "bullet-swallowers" can disagree on some things and agree on others; a "bullet-dodger" and a "bullet-swallower" will not even be able to disagree, they'll just not be saying commensurate things.
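The "evil as a self-extinguishing strategy" point above can be made concrete with a standard evolutionary-game-theory toy model (my own illustrative sketch, not anything from the original discussion): in replicator dynamics for a one-shot Prisoner's Dilemma, a small minority of defectors spreads by exploiting cooperators, wipes the cooperators out, and then is left with only the meager defect-defect payoff — the strategy destroys the surplus it fed on. All payoff numbers here are the conventional textbook values, chosen only for illustration.

```python
# Illustrative sketch: replicator dynamics for a Prisoner's Dilemma,
# showing a strategy that rapidly expands by exploiting others and then
# impoverishes the whole population once it has won.

# Payoff matrix: PAYOFF[my_move][their_move], moves 0=cooperate, 1=defect.
PAYOFF = [[3.0, 0.0],   # I cooperate: 3 vs a cooperator, 0 vs a defector
          [5.0, 1.0]]   # I defect:    5 vs a cooperator, 1 vs a defector

def step(x_defect, dt=0.1):
    """One discrete replicator step; x_defect is the defector share.

    Returns the new defector share and the mean payoff of the
    pre-step population."""
    x_coop = 1.0 - x_defect
    f_coop = PAYOFF[0][0] * x_coop + PAYOFF[0][1] * x_defect
    f_def = PAYOFF[1][0] * x_coop + PAYOFF[1][1] * x_defect
    mean_f = x_coop * f_coop + x_defect * f_def
    # Replicator dynamics: a strategy's share grows in proportion to
    # its fitness advantage over the population average.
    return x_defect + dt * x_defect * (f_def - mean_f), mean_f

x = 0.01  # defectors start as a 1% minority
initial_payoff = None
for t in range(2000):
    x, mean_f = step(x)
    if initial_payoff is None:
        initial_payoff = mean_f

print(f"final defector share: {x:.3f}")
print(f"mean payoff: {initial_payoff:.2f} -> {mean_f:.2f}")
```

Defection takes over (share goes to ~1.0) while the mean payoff collapses from near the cooperative value of 3 down to the mutual-defection value of 1 — "rapidly expand, wipe out other strategies, then wipe themselves out," in miniature.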


Comment by sarahconstantin on Tactical vs. Strategic Cooperation · 2018-08-12T20:54:48.703Z · LW · GW

I'm not actually asking for people to do a thing for me, at this point. I think the closest to a request I have here is "please discuss the general topic and help me think about how to apply or fix these thoughts."

I don't think all communication is about requests (that's a kind of straw-NVC) only that when you are making a request it's often easier to get what you want by asking than by indirectly pressuring.

Comment by sarahconstantin on Are ethical asymmetries from property rights? · 2018-08-12T19:10:04.372Z · LW · GW

That's flattering to Rawls, but is it actually what he meant?

Or did he just assume that you don't need a mutually acceptable protocol for deciding how to allocate resources, and you can just skip right to enforcing the desirable outcome?

Comment by sarahconstantin on Monopoly: A Manifesto and Fact Post · 2018-06-04T02:55:57.250Z · LW · GW

Can you explain why return on cash vs. return on equity matters?

Comment by sarahconstantin on Three types of "should" · 2018-06-03T19:00:23.027Z · LW · GW

I'm struck by the assumption in this essay that you have a clear distinction between your own values and other people's.

I think that having a clear sense of personal identity can be difficult and not everyone may be able to hold on to their own perspective. I am concerned that this might be especially hard in an era of social media, when opinions are shared almost as soon as they are formed. Think of a blog/tumblr/fb that consists almost entirely of content copied from other sources: it is nominally a space curated/created by "you", but really it is a lot of other people's thoughts aggregated with very little personal modification. That could be a recipe for really poor internal coherence.

It's pretty standard psychologist's advice to have a journal where you write truly private reflections, shared with literally nobody else. I imagine this helps in constructing a self with boundaries.

Relatedly, "self-affirmation" (really kind of a misnomer: it means writing essays about what values are priorities for you) has a large psychology literature showing lots of good effects, and I find it extremely helpful for my own thoughts. A lot of self-help seems to boil down to "sit down and write reflections on what your priorities are." Complice is this in productivity-app form, The Desire Map is this in book form, etc.

Comment by sarahconstantin on Monopoly: A Manifesto and Fact Post · 2018-06-03T18:39:30.088Z · LW · GW

Note that the examples in the essay of mechanisms that produce inefficiency are union work rules, non-compete agreements between firms, tariffs, and occupational licensing laws. The former three are not federal regulations on industries, and so would not show up in a comparison of industry dynamism vs. regulatory stringency.

Comment by sarahconstantin on Monopoly: A Manifesto and Fact Post · 2018-06-03T18:29:42.971Z · LW · GW

Ok, this is a counterargument I want to make sure I understand.

Is the following a good representation of what you believe?

When you divide GDP by the price of a commodity with nearly fixed supply (like gold or land), we'd expect that price to rise over time in a society that's getting richer: if you have better tech and more abundant goods, but not more gold or land, other goods become cheaper relative to gold or land. Thus, a GDP/gold or GDP/land ratio that doesn't increase over time is totally consistent with a society of increasing "true" wealth, and doesn't indicate stagnation.

Comment by sarahconstantin on Monopoly: A Manifesto and Fact Post · 2018-06-03T18:22:26.868Z · LW · GW

I agree that Carnegie's US Steel is not the type of "monopoly" that I consider socially harmful. I seem to remember that there is empirical evidence (though I don't know where) that monopolies due to superior product quality/price are actually fragile, and long-term monopolies must be maintained by legal privileges to survive. (If anybody remembers where, I'd appreciate a reference.)

Comment by sarahconstantin on Personal relationships with goodness · 2018-05-15T00:57:10.654Z · LW · GW

In this context, thinking about whether you are "good" is not "constructive."

Thinking about whether you're doing something "constructive" is, by contrast, extremely constructive.

Comment by sarahconstantin on Personal relationships with goodness · 2018-05-15T00:43:29.738Z · LW · GW

Here's my trajectory:

1.) Worry a lot about "I'm not good"

2.) Improve in some dimensions, also refactor my moral priorities so that I no longer believe some of my 'bad traits' are really bad

3.) Still worry a lot about "I'm not good" where "good" refers to some eldritch horror that I no longer literally endorse

4.) Learn the mental motion of going "fuck it", where I just rest my brain and self-soothe. Do that until I deeply do not give a fuck whether I'm good or not.

5.) Notice a mild but consistent desire to do things that are, not "good", but "constructive" -- i.e. contribute to the construction of a nice thing that takes time and effort to complete.

6.) Notice that the people around me mostly like it when I do "constructive" things, and call them "good."

Comment by sarahconstantin on Introducing the Longevity Research Institute · 2018-05-14T15:27:00.875Z · LW · GW

I'm a little more optimistic about calorie restriction mimetics than Aubrey, but I think everybody sensible has pretty low confidence about this.

Comment by sarahconstantin on Introducing the Longevity Research Institute · 2018-05-14T15:22:06.719Z · LW · GW

Practical constraints. The main contributor to the cost of a lifespan study is the cost of upkeep for the mice -- so it's proportional to number of mice and length of the study. Testing 50 compounds at once means raising 50x the money at once (which is out of reach at the moment) and may also run into constraints of the capacity of labs/CROs.

Comment by sarahconstantin on Introducing the Longevity Research Institute · 2018-05-14T15:18:15.221Z · LW · GW

Yep, that is my position.

(I've talked a bunch with Aubrey de Grey and he is very much supportive of the LRI's program. We're complements, not substitutes.)

Comment by sarahconstantin on Mental Illness Is Not Evidence Against Abuse Allegations · 2018-05-14T15:14:42.764Z · LW · GW

Thanks; I think I was just wrong here, I didn't think of that.

Comment by sarahconstantin on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-13T19:12:30.563Z · LW · GW

This is not normal behavior on her part. This is domestic violence. The standard advice is to leave people who hit you. Possibly after clearly stating that you are not okay with being hit and you will leave if it continues, and giving her a chance to change her ways. Maybe she should work with a professional to help with her anger problems. But there is a significant risk that a person who regularly attacks you will escalate.

Comment by sarahconstantin on Introducing the Longevity Research Institute · 2018-05-08T21:01:29.704Z · LW · GW

Vaniver is right.

The mainstream biogerontology perspective is that there's an evolutionarily conserved "survival program", probably developed for surviving famines, that can slow the aging process somewhat. This is the stuff you'll find in Cynthia Kenyon's research, for instance. The hope is that you can find drugs that stimulate these pathways, and thereby slow down the incidence of age-related diseases. This is the approach LRI is taking.

The SENS position, as I understand it, is that this won't work. As you go up from yeast to nematodes to flies to mice, "long-lived" mutants gain proportionally less lifespan, and perhaps by the time we get to humans these genetic alterations (or drugs that simulate them) won't extend lifespan at all. SENS instead wants to work on reversing the damage caused by aging.

I don't know with high confidence whether SENS's skepticism is right; but even if they are, their research program seems to involve a lot of open questions in basic science that would take a long time to resolve.

Give to SENS if you want to invest in basic research that might one day reverse aging altogether; give to LRI to accelerate translational research into treatments that might lead to modest healthspan extension in the next decade or two. (Or give to both!) They're complementary strategies.

Comment by sarahconstantin on Noticing the Taste of Lotus · 2018-05-06T17:23:50.799Z · LW · GW

I really don't relate to the externalization people use about "lotus-eating", like "Facebook is making me addicted, even though I want to be productive." Implicitly that means the "real" me is into "good" meaningful stuff. And that's not how it feels. It feels like I have very strong drives toward the bad stuff (like "contacting exes to annoy them"), and Facebook is just a tool that enables me to do what I want. That's why I deleted my account a year ago: some of my wants harm other people. But the wanting is mine.

In fact, sometimes I feel like "I want to do something cravey but I don't have anything cravey to do!" That comes up pretty often, tbh: food is only cravey when I'm hungry, videogames and shopping do nothing for me, I quit social media, etc.

Comment by sarahconstantin on Models of human relationships - tools to understand people · 2018-04-26T21:21:53.569Z · LW · GW

I can usually tailor the level of jargon correctly. What I can't do that well is figure out how to not make my presence burdensome -- I can feel that I need to "come up with something to say" that makes it worth talking to me, and I'm not great at coming up with those quickly. (When a kid says "tell me a story", I can't do that either. I'm great at discussions, where you have to speak off the cuff in relation to some subject, but open-ended improv is hell.)

Comment by sarahconstantin on Models of human relationships - tools to understand people · 2018-04-25T18:11:23.071Z · LW · GW

I really like this.

Let me try to apply it to an example in my own life. I'm frequently telling people about a project I'm working on. I'd like it to be well received, to make a good impression, and also to enlist help or advice.

This is probably consultation, collaboration, or delegation, depending on whom I'm talking to, right?

And "how to win people to your way of thinking" clearly seems to apply.

"Never say 'you're wrong'" confuses me -- yes, there are people you can't afford to flatly contradict, but what do you do if you actually need to accomplish a task and the thing they're suggesting seems like a bad idea? There are cases where "do it their way without complaint" is unwise. So far I've been trying to ask a lot of questions to make sure I haven't misunderstood them, but sooner or later you'll inevitably encounter someone who really is wrong.

"Let the other person do most of the talking" -- I use this often (it's also a good social anxiety hack to take the pressure off myself!) but it seems to be more difficult in a scenario where you only have a few minutes of their time and need to "pitch" an idea. Is it wrong to launch into a quick summary in such cases?

"Get the other person saying 'yes, yes' right away" -- I know to transition gradually from claims that I know will be agreed with towards claims that might be more controversial or doubtful, but I think I probably err too much on the side of never bringing up things that I don't expect to get agreement on. Any advice on how to incrementally push further without skipping all the way to becoming shocking/offensive?

"Try honestly to see things from the other person's point of view" -- this is just straight up hard for me, especially if I'm talking to a stranger and am also just trying to keep track of the content of what I'm saying and he's saying, while avoiding social faux pas. It seems about as hard as "remember to multiply three-digit numbers in your head while you have your conversation!" Am I missing something?

Comment by sarahconstantin on Good News for Immunostimulants · 2018-04-20T15:46:35.197Z · LW · GW

And...yep, 33% objective response rates, which is great. https://www.google.com/amp/s/immuno-oncologynews.com/2018/04/20/dynavax-immunotherapy-and-keytruda-fight-head-and-neck-cancer-trial-shows/%3famp

Comment by sarahconstantin on Good News for Immunostimulants · 2018-04-20T15:41:32.033Z · LW · GW

Wanted to make a testable prediction that would be resolved soon.

Comment by sarahconstantin on Some Simple Observations Five Years After Starting Mindfulness Meditation · 2018-04-20T15:40:09.412Z · LW · GW

You took the update “subjective emotional states aren’t very important, because they can happen when objectively everything is fine.” From the same observation, I took the update “objective conditions aren’t very important, because I can still feel lousy when objectively everything is fine, or great when it isn’t.” Is there a reason you took the former approach?

Comment by sarahconstantin on Good News for Immunostimulants · 2018-04-16T22:36:38.034Z · LW · GW

"You can't pick winners in drug development" rhymes with a cluster of memes that are popular in the zeitgeist today:

  • "Complicated things can't be understood from first principles"
  • "Collecting a lot of data without models is better than building models"
  • "People don't engage in abstract reasoning much, they do things by feel and instinct"
  • "Don't overthink it"
  • "What it means to be human" refers to what distinguishes us from machines, not what distinguishes us from animals

Once you clarify any of these claims down to a specific proposition, sometimes they're true. But there is a general sense that you can get social approval from saying things whose upshot is "Thinking: it's not that great after all!"