Posts

Neutrality 2024-11-13T23:10:05.469Z
Bigger Livers? 2024-11-08T21:50:09.814Z
Join my new subscriber chat 2024-11-06T02:30:11.059Z
Metastatic Cancer Treatment Since 2010: The Success Stories 2024-11-04T22:50:09.386Z
Thinking in 2D 2024-10-20T19:30:05.842Z
2025 Color Trends 2024-10-07T21:20:03.962Z
sarahconstantin's Shortform 2024-10-01T16:24:17.329Z
Fun With The Tabula Muris (Senis) 2024-09-20T18:20:01.901Z
The Great Data Integration Schlep 2024-09-13T15:40:02.298Z
Fun With CellxGene 2024-09-06T22:00:03.461Z
AI for Bio: State Of The Field 2024-08-30T18:00:02.187Z
LLM Applications I Want To See 2024-08-19T21:10:03.101Z
All The Latest Human tFUS Studies 2024-08-09T22:20:04.561Z
Multiplex Gene Editing: Where Are We Now? 2024-07-16T20:50:04.590Z
Superbabies: Putting The Pieces Together 2024-07-11T20:40:05.036Z
The Incredible Fentanyl-Detecting Machine 2024-06-28T22:10:01.223Z
Permissions in Governance 2019-08-02T19:50:00.592Z
The Costs of Reliability 2019-07-20T01:20:00.895Z
Book Review: Why Are The Prices So Damn High? 2019-06-28T19:40:00.643Z
Circle Games 2019-06-06T16:40:00.596Z
Pecking Order and Flight Leadership 2019-04-29T20:30:01.168Z
The Forces of Blandness and the Disagreeable Majority 2019-04-28T19:44:42.177Z
Degrees of Freedom 2019-04-02T21:10:00.516Z
Personalized Medicine For Real 2019-03-04T22:40:00.351Z
The Tale of Alice Almost: Strategies for Dealing With Pretty Good People 2019-02-27T19:34:03.906Z
Humans Who Are Not Concentrating Are Not General Intelligences 2019-02-25T20:40:00.940Z
The Relationship Between Hierarchy and Wealth 2019-01-23T02:00:00.467Z
Book Recommendations: An Everyone Culture and Moral Mazes 2019-01-10T21:40:04.163Z
Contrite Strategies and The Need For Standards 2018-12-24T18:30:00.480Z
The Pavlov Strategy 2018-12-20T16:20:00.542Z
Argue Politics* With Your Best Friends 2018-12-15T19:00:00.549Z
Introducing the Longevity Research Institute 2018-12-14T20:20:00.532Z
Player vs. Character: A Two-Level Model of Ethics 2018-12-14T19:40:00.520Z
Norms of Membership for Voluntary Groups 2018-12-11T22:10:00.975Z
Playing Politics 2018-12-05T00:30:00.996Z
“She Wanted It” 2018-11-11T22:00:01.645Z
Things I Learned From Working With A Marketing Advisor 2018-10-09T00:10:01.320Z
Fasting Mimicking Diet Looks Pretty Good 2018-10-04T19:50:00.695Z
Reflections on Being 30 2018-10-02T19:30:01.585Z
Direct Primary Care 2018-09-25T18:00:01.747Z
Tactical vs. Strategic Cooperation 2018-08-12T16:41:40.005Z
Oops on Commodity Prices 2018-06-10T15:40:00.499Z
Monopoly: A Manifesto and Fact Post 2018-05-31T18:40:00.479Z
Mental Illness Is Not Evidence Against Abuse Allegations 2018-05-13T19:50:42.645Z
Introducing the Longevity Research Institute 2018-05-08T03:30:00.768Z
Wrongology 101 2018-04-25T00:00:00.991Z
Good News for Immunostimulants 2018-04-16T16:10:00.575Z
Is Rhetoric Worth Learning? 2018-04-06T22:03:47.918Z
Naming the Nameless 2018-03-22T00:35:55.634Z
"Cheat to Win": Engineering Positive Social Feedback 2018-02-05T23:16:50.858Z

Comments

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-11-18T19:25:29.830Z · LW · GW

links 11/18/2024: https://roamresearch.com/#/app/srcpublic/page/11-18-2024

Comment by sarahconstantin on Neutrality · 2024-11-18T19:20:46.063Z · LW · GW

My intuition is to get less excited by single projects (e.g. a Double Crux bot) until someone has brought them all together & created momentum behind some kind of "big" agglomeration of people + resources in the "neutrality tools" space.

Comment by sarahconstantin on Neutrality · 2024-11-18T19:18:54.327Z · LW · GW

I didn't know about all the existing projects, and I appreciate the resource! Concrete >> vague in my book; I just didn't actually know much about concrete examples.

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-11-15T18:13:22.682Z · LW · GW

links 11/15/2024: https://roamresearch.com/#/app/srcpublic/page/11-15-2024

  • https://www.reddit.com/r/self/comments/1gleyhg/people_like_me_are_the_reason_trump_won/  a moderate/swing-voter (Obama, Trump, Biden) explains why he voted for Trump this time around:
    • he thinks Kamala Harris was an "empty shell" and unlikable and he felt the campaign was manipulative and deceptive.
    • he didn't like that she seemed to be a "DEI hire"; he doesn't have a problem with black or female candidates generally, but he resents cynical demographic box-checking.
      • this is a coherent POV -- he did vote for Obama, after all. and plenty of people are like "I want the best person regardless of demographics, not a person chosen for their demographics."
        • hm. why doesn't it seem natural to portray Obama as a "DEI hire"? his campaign made a bigger deal about race than Harris's, and he was criticized a lot for inexperience.
          • One guess: it's laughable to think Obama was chosen by anyone besides himself. He was not the Democratic Party's anointed -- that was Hillary. He's clearly an ambitious guy who wanted to be president on his own initiative and beat the odds to get the nomination. He can't be a "DEI hire" because he wasn't a hire at all.
          • another guess: Obama is clearly smart, speaks/writes in complete sentences, and welcomes lots of media attention and talks about his policies, while Harris has a tendency towards word salad, interviews poorly, avoids discussing issues, etc.
          • another guess: everyone seems to reject the idea that people prefer male to female candidates, but I'm still really not sure there isn't a gender effect! This is very vibes-based on my part, and apparently the data goes the other way, so very uncertain here.
  • https://trevorklee.substack.com/p/if-langurs-can-drink-seawater-can  Trevor Klee on adaptations for drinking seawater
Comment by sarahconstantin on sarahconstantin's Shortform · 2024-11-14T19:08:22.326Z · LW · GW

links 11/14/2024: https://roamresearch.com/#/app/srcpublic/page/11-14-2024

  • https://archive.org/details/byte-magazine  retro magazines
  • https://www.ribbonfarm.com/2019/09/17/weirding-diary-10/#more-6737 Venkatesh Rao on the fall of the MIT Media Lab
    • this stung a bit!
    • i have tended to think that the stuff with "intellectual-glamour" or "visionary" branding is actually pretty close to on-target. not always right, of course, often overhyped, but often still underinvested in despite the hype.
      • (a surprising number of famous scientists are starved for funding. a surprising number of inventions featured on TED, NYT, etc were never given resources to scale.)
    • I'm also genuinely unconvinced that "Europe's kindergarten" was less sophisticated than our own time! but it seems like a fine debate to have at leisure, not totally sure how it would play out.
    • he's basically been proven right that energy has moved "underground" but that's not a mode i can work very effectively in. if you have to be invited to participate, well, it's probably not going to happen for me.
    • at the institutional level, he's probably right that it's wise to prepare for bad times and not get complacent. again, this was 2019; a lot of the bad times came later. i miss the good times; i want to believe they'll come again.
Comment by sarahconstantin on sarahconstantin's Shortform · 2024-11-13T17:19:33.145Z · LW · GW

links 11/13/2024: https://roamresearch.com/#/app/srcpublic/page/11-13-2024

Comment by sarahconstantin on Eli's shortform feed · 2024-11-12T20:05:19.762Z · LW · GW

 I agree that more people should be starting revenue-funded/bootstrapped businesses (including ones enabled by software/technology).  

The meme is that if you're starting a tech company, it's going to be a VC-funded startup. This is, I think, a meme put out by VCs themselves, including Paul Graham/YCombinator, and it conflates new software projects and businesses generally with a specific kind of business model called the "tech startup".  

Not every project worth doing should be a business (some should be hobbies or donation-funded) and not every business worth doing should be a VC-funded startup (some should be bootstrapped and grow from sales revenue.) 

The VC startup business model requires rapid growth and expects 30x returns over a roughly 5-10 year time horizon. That simply doesn't include every project worth doing. Some businesses are viable but are simply not likely to grow that much or that fast; some projects shouldn't be expected to be profitable at all and need philanthropic support.

I think the narrative that "tech startups are where innovation happens" is...badly incomplete, but still a hell of a lot more correct than "tech startups are net destructive". 

Think about new technologies; then think about where they were developed. That process can sometimes happen end-to-end within a startup, but more often I think innovative startups are founded around IP developed while the founders were in academia, or around a new use for open-source tools or tools developed within big companies. There simply isn't time to solve particularly hard technical problems if you have to get to profitability and 30x growth in 5 years. The startup format is primarily designed for finding product-market fit -- i.e. putting together existing technologies, packaging them as a "product" with a narrative about what and who it's for, tweaking it until you find a context where people will pay for it, and then making the whole thing bigger and bigger. You can do that in 5 years. But no, you can't do literally all of society's technological innovation within that narrow context!

(Part of the issue is that we still technically count very big tech companies as "startups" and they certainly qualify as "Silicon Valley", so if you conflate all of "tech" into one big blob it includes the kind of big engineering-heavy companies that have R&D departments with long time horizons. Is OpenAI a "tech startup"? Sure, in that it's a recently founded technology company. But it is under very different financial constraints from a YC startup.)

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-11-12T19:34:31.865Z · LW · GW

neutrality (notes towards a blog post): https://roamresearch.com/#/app/srcpublic/page/Ql9YwmLas

  • "neutrality is impossible" is sort-of-true, actually, but not a reason to give up.
    • even a "neutral" college class (let's say a standard algorithms & data structures CS class) is non-neutral relative to certain beliefs
      • some people object to the structure of universities and their classes to begin with;
      • some people may object on philosophical grounds to concepts that are unquestionably "standard" within a field like computer science.
      • some people may think "apolitical" education is itself unacceptable.
        • to consider a certain set of topics "political" and not mention them in the classroom is, implicitly, to believe that it is not urgent to resolve or act on those issues (at least in a classroom context), and therefore it implies some degree of acceptance of the default state of those issues.
      • our "neutral" CS class is implicitly taking a stand on certain things and in conflict with certain conceivable views. but, there's a wide range of views, including (I think) the vast majority of the actual views of relevant parties like students and faculty, that will find nothing to object to in the class.
    • we need to think about neutrality in more relative terms:
      • what rule are you using, and what things are you claiming it will be neutral between?
  • what is neutrality anyway and when/why do you want it?
    • neutrality is a type of tactic for establishing cooperation between different entities.
      • one way (not the only way) to get all parties to cooperate willingly is to promise they will be treated equally.
      • this is most important when there is actual uncertainty about the balance of power.
        • eg the Dutch Republic was the first European polity to establish laws of religious tolerance, because it happened to be roughly evenly divided between multiple religions and needed to unite to win its independence.
    • a system is neutral towards things when it treats them the same.
      • there are lots of ways to treat things the same:
        • "none of these things belong here"
          • eg no religion in "public" or "secular" spaces
            • is the "public secular space" the street? no-hijab rules?
            • or is it the government? no 10 Commandments in the courthouse?
        • "each of these things should get equal treatment"
          • eg Fairness Doctrine
        • "we will take no sides between these things; how they succeed or fail is up to you"
          • e.g. "marketplace of ideas", "colorblindness"
    • one can always ask, about any attempt at procedural neutrality:
      • what things does it promise to be neutral between?
        • are those the right or relevant things to be neutral on?
      • to what degree, and with what certainty, does this procedure produce neutrality?
        • is it robust to being intentionally subverted?
    • here and now, what kind of neutrality do we want?
      • thanks to the Internet, we can read and see all sorts of opinions from all over the world. a wider array of worldviews are plausible/relevant/worth-considering than ever before. it's harder to get "on the same page" with people because they may have come from very different informational backgrounds.
      • even tribes are fragmented. even people very similar to one another can struggle to synch up and collaborate, except in lowest-common-denominator ways that aren't very productive.
      • narrowing things down to US politics, no political tribe or ideology is anywhere close to a secure monopoly. nor are "tribes" united internally.
      • we have relied, until now, on a deep reserve of "normality" -- apolitical, even apathetic, Just The Way Things Are. In the US that means: people go to work at their jobs and get paid for it and have fun in their free time. 90's sitcom style.
        • there's still more "normality" out there than culture warriors tend to believe, but it's fragile. As soon as somebody asks "why is this the way things are?" unexamined normality vanishes.
          • to the extent that the "normal" of the recent past was functional, this is a troubling development...but in general the operation of the mind is a good thing!
          • we just have more rapid and broader idea propagation now.
            • why did "open borders" and "abolish the police" and "UBI" take off recently? because these are simple ideas with intuitive appeal. some % of people will think "that makes sense, that sounds good" once they hear of them. and now, way more people are hearing those kinds of ideas.
      • when unexamined normality declines, conscious neutrality may become more important.
        • conscious neutrality for the present day needs to be aware of the wide range of what people actually believe today, and avoid the naive Panglossianism of early web 2.0.
          • many people believe things you think are "crazy".
          • "democratization" may lead to the most popular ideas being hateful, trashy, or utterly bonkers.
          • on the other hand, depending on what you're trying to get done, you may very well need to collaborate with allies, or serve populations, whose views are well outside your comfort zone.
        • neutrality has things to offer:
          • a way to build trust with people very different from yourself, without compromising your own convictions;
            • "I don't agree with you on A, but you and I both value B, so I promise to do my best at B and we'll leave A out of it altogether"
          • a way to reconstruct some of the best things about our "unexamined normality" and place them on a firmer foundation so they won't disappear as soon as someone asks "why?"
  • a "system of the world" is the framework of your neutrality: aka it's what you're not neutral about.
    • eg:
      • "melting pot" multiculturalism is neutral between cultures, but does believe that they should mostly be cosmetic forms of diversity (national costumes and ethnic foods) while more important things are "universal" and shared.
      • democratic norms are neutral about who will win, but not that majority vote should determine the winner.
      • scientific norms are neutral about which disputed claims will turn out to be true, but not on what sorts of processes and properties make claims credible, and not about certain well-established beliefs.
    • right now our system-of-the-world is weak.
      • a lot of it is literally decided by software affordances. what the app lets you do is what there is.
        • there's a lot that was healthy and praiseworthy about software companies and their culture, especially 10-20 years ago. but they were never prepared for that responsibility!
    • a stronger system-of-the-world isn't dogmatism or naivety.
      • were intellectuals of the 20th, the 19th, or the 18th centuries childish because they had more explicit shared assumptions than we do? I don't think so.
        • we may no longer consider some of their frameworks to be true
        • but having a substantive framework at all clearly isn't incompatible with thinking independently, recognizing that people are flawed, or being open to changing your mind.
        • "hedgehogs" or "eternalists" are just people who consider some things definitely true.
          • it doesn't mean they came to those beliefs through "blind faith" or have never questioned them.
          • it also doesn't mean they can't recognize uncertainty about things that aren't foundational beliefs.
        • operating within a strongly-held, assumed-shared worldview can be functional for making collaborative progress, at least when that worldview isn't too incompatible with reality.
      • mathematics was "non-rigorous", by modern standards, until the early 20th century; and much of today's mathematics will be considered "non-rigorous" if machine-verified proofs ever become the norm. but people were still able to do mathematics in centuries past, most of which we still consider true.
        • the fact that you can generate a more general framework, within which the old framework was a special case; or in which the old framework was an unprincipled assumption of the world being "nicely behaved" in some sense; does not mean that the old framework was not fruitful for learning true things.
          • sometimes, taking for granted an assumption that's not literally always true (but is true mostly, more-or-less, or in the practically relevant cases) can even be more fruitful than a more radically skeptical and general view.
    • an *intellectual* system-of-the-world is the framework we want to use for the "republic of letters", the sub-community of people who communicate with each other in a single conversational web and value learning and truth.
      • that community expanded with the printing press and again with the internet.
      • it is radically diverse in opinion.
      • it is not literally universal. not everybody likes to read and write; not everybody is curious or creative. a lot of the "most interesting people in the world" influence each other.
        • everybody in the old "blogosphere" was, fundamentally, the same sort of person, despite our constant arguments with each other; and not a common sort of person in the broader population; and we have turned out to be more influential than we have ever been willing to admit.
      • but I do think of it as a pretty big and growing tent, not confined to 300 geniuses or anything like that.
        • "The" conversation -- the world's symbolic information and its technological infrastructure -- is something anybody can contribute to, but of course some contribute more than others.
        • I think the right boundary to draw is around "power users" -- people who participate in that network heavily rather than occasionally.
          • e.g. not all academics are great innovators, but pretty much all of them are "power users" and "active contributors" to the world's informational web.
          • I'm definitely a power user; I expect a lot of my readers are as well.
      • what do we need to not be neutral about in this context? what belongs in an intellectual system-of-the-world?
        • another way of asking this question: about what premises are you willing to say, not just for yourself but for the whole world and for your children's children, "if you don't accept this premise then I don't care to speak to you or hear from you, forever?"
          • clearly that's a high standard!
          • I have many values differences with, say, the author of the Epic of Gilgamesh, but I still want to read it. And I want lots of other people to be able to read it! I do not want the mind that created it to be blotted out of memory.
          • that's the level of minimal shared values we're talking about here. What do we have in common with everyone who has an interest in maintaining and extending humanity's collective record of thought?
        • lack of barriers to entry is not enough.
          • the old Web 2.0 idea was "allow everyone to communicate with everyone else, with equal affordances." This is a kind of "neutrality" -- every user account starts out exactly the same, and anybody can make an account.
            • I think that's still an underrated principle. "literally anybody can speak to anybody else who wants to listen" was an invention that created a lot of valuable affordances. we forget how painfully scarce information was when that wasn't true!
          • the problem is that an information system only works when a user can find the information they seek. And in many cases, what the user is seeking is true information.
          • mechanisms intended to make high-quality information (reliable, accurate, credible, complete, etc.) preferentially discoverable are also necessary
            • but they shouldn't just recapitulate potentially-biased gatekeeping.
              • we want evaluative systems that, at least a priori, an ancient Sumerian could look at and say "yep, sounds fair", even if the Sumerian wouldn't like the "truths" that come out on top in those systems.
              • we really can't be parochial here. social media companies "patched" the problem of misinformation with opaque, partisan side-taking, and they suffered for it.
              • how "meta" do we have to get about determining what counts as reliable or valid? well, more meta than just picking a winning side in an ongoing political dispute, that's for sure.
                • probably also more "meta" than handpicking certain sources as trustworthy, the way Wikipedia does.
    • if we want to preserve and extend knowledge, the "republic of letters" needs intentional stewardship of the world's information, including serious attempts at neutrality.
      • perceived bias, of course, turns people away from information sources.
      • nostalgia for unexamined normality -- "just be neutral, y'know, like we were when I was young" -- is not a credible offer to people who have already found your nostalgic "normal" wanting.
      • rigorous neutrality tactics -- "we have structured this system so that it is impossible for anyone to tamper with it in a biased fashion" -- are better.
        • this points towards protocols.
          • h/t Venkatesh Rao
          • think: zero-knowledge proofs, formal verification, prediction markets, mechanism design, crypto-flavored governance schemes, LLM-enabled argument mapping, AI mechanistic-interpretability and "showing its work", etc. (a tiny commit-reveal sketch appears after this list.)
        • getting fancy with the technology here often seems premature when the "public" doesn't even want neutrality; but I don't think it actually is.
          • people don't know they want the things that don't yet exist.
          • the people interested in developing "provably", "rigorously", "demonstrably" impartial systems are exactly the people you want to attract first, because they care the most.
          • getting it right matters.
            • a poorly executed attempt either fizzles instantly; or it catches on but its underlying flaws start to make it actively harmful once it's widely culturally influential.
        • OTOH, premature disputes on technology and methods are undesirable.
          • remember there aren't very many of you/us. that is:
            • pretty much everybody who wants to build rigorous neutrality, no matter why they want it or how they want to implement it, is a potential ally here.
              • the simple fact of wanting to build a "better" world that doesn't yet exist is a commonality, not to be taken for granted. most people don't do this at all.
              • the "softer" side, mutual support and collegiality, are especially important to people whose dreams are very far from fruition. people in this situation are unusually prone to both burnout and schism. be warm and encouraging; it helps keep dreams alive.
              • also, the whole "neutrality" thing is a sham if we can't even engage with collaborators with different views and cultural styles.
            • also, "there aren't very many of us" in the sense that none of these envisioned new products/tools/institutions are really off the ground yet, and the default outcome is that none of them get there.
              • you are playing in a sandbox. the goal is to eventually get out of the sandbox.
              • you will need to accumulate talent, ideas, resources, and vibe-momentum. right now these are scarce, or scattered; they need to be assembled.
              • be realistic about influence.
                • count how many people are at the conference or whatever. how many readers. how many users. how many dollars. in absolute terms it probably isn't much. don't get pretentious about a "movement", "community", or "industry" before it's shown appreciable results.
                • the "adjacent possible" people to get involved aren't the general public, they're the closest people in your social/communication graph who aren't yet participating. why aren't they part of the thing? (or why don't you feel comfortable going to them?) what would you need to change to satisfy the people you actually know?
                  • this is a better framing than speculating about mass appeal.
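
As one tiny example of the "protocols" flavor above -- a commit-reveal sketch of my own, not something from any named project: each participant publishes a hash of their answer before any answers are revealed, so nobody (moderators included) can quietly adjust a position after seeing which way the wind blows.

```python
import hashlib
import secrets

def commit(answer):
    """Return (commitment, nonce): publish the commitment now, keep the nonce."""
    nonce = secrets.token_hex(16)
    commitment = hashlib.sha256((nonce + answer).encode()).hexdigest()
    return commitment, nonce

def verify(commitment, nonce, answer):
    """After the reveal, anyone can check that the answer wasn't changed."""
    return hashlib.sha256((nonce + answer).encode()).hexdigest() == commitment

c, n = commit("yes")
assert verify(c, n, "yes")      # honest reveal checks out
assert not verify(c, n, "no")   # a swapped answer is detectable by anyone
```

This is obviously a toy; the point is that the guarantee comes from the procedure itself, not from trusting whoever runs it.
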
Comment by sarahconstantin on Eli's shortform feed · 2024-11-08T22:05:06.267Z · LW · GW

Shreeda Segan is working on building it, as a cashflow business. they need $10K to get to the MVP. https://manifund.org/projects/hire-a-dev-to-finish-and-launch-our-dating-site

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-11-08T15:02:35.514Z · LW · GW

links 11/08/2024: https://roamresearch.com/#/app/srcpublic/page/11-08-2024

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-11-07T16:33:57.183Z · LW · GW

links 11/07/2024: https://roamresearch.com/#/app/srcpublic/page/11-07-2024

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-11-06T15:37:29.766Z · LW · GW

links 11/6/2024: https://roamresearch.com/#/app/srcpublic/page/11-06-2024

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-11-05T17:02:17.187Z · LW · GW

links 11/05/2024: https://roamresearch.com/#/app/srcpublic/page/11-05-2024

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-11-01T16:20:07.688Z · LW · GW

links 11/01/2024: https://roamresearch.com/#/app/srcpublic/page/11-01-2024

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-30T14:35:00.839Z · LW · GW

links 10/30/2024: https://roamresearch.com/#/app/srcpublic/page/10-30-2024

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-29T14:59:50.365Z · LW · GW

links 10/29/2024: https://roamresearch.com/#/app/srcpublic/page/10-29-2024

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-28T17:33:57.651Z · LW · GW

"weak benevolence isn't fake": https://roamresearch.com/#/app/srcpublic/page/ic5Xitb70

  • there's a class of statements that go like:
    • "fair-weather friends" who are only nice to you when it's easy for them, are not true friends at all
    • if you don't have the courage/determination to do the right thing when it's difficult, you never cared about doing the right thing at all
    • if you sometimes engage in motivated cognition or are sometimes intellectually lazy/sloppy, then you don't really care about truth at all
    • if you "mean well" but don't put in the work to ensure that you're actually making a positive difference, then your supposed "well-meaning" intentions were fake all along
  • I can see why people have these views.
    • if you actually need help when you're in trouble, then "fair-weather friends" are no use to you
    • if you're relying on someone to accomplish something, it's not enough for them to "mean well", they have to deliver effectively, and they have to do so consistently. otherwise you can't count on them.
    • if you are in an environment where people constantly declare good intentions or "well-meaning" attitudes, but most of these people are not people you can count on, you will find yourself caring a lot about how to filter out the "posers" and "virtue signalers" and find out who's true-blue, high-integrity, and reliable.
  • but I think it's literally false and sometimes harmful to treat "weak"/unreliable good intentions as absolutely worthless.
    • not all failures are failures to care enough/try hard enough/be brave enough/etc.
      • sometimes people legitimately lack needed skills, knowledge, or resources!
      • "either I can count on you to successfully achieve the desired outcome, or you never really cared at all" is a long way from true.
      • even the more reasonable, "either you take what I consider to be due/appropriate measures to make sure you deliver, or you never really cared at all" isn't always true either!
        • some people don't know how to do what you consider to be due/appropriate measures
        • some people care some, but not enough to do everything you consider necessary
        • sometimes you have your own biases about what's important, and you really want to see people demonstrate a certain form of "showing they care", otherwise you'll consider them negligent -- but that's not actually the most effective way to increase their success rate
    • almost everyone has a finite amount of effort they're willing to put into things, and a finite amount of cost they're willing to pay. that doesn't mean you need to dismiss the help they are willing and able to provide.
      • as an extreme example, do you dismiss everybody as "insufficiently committed" if they're not willing to die for the cause? or do you accept graciously if all they do is donate $50?
      • "they only help if it's fun/trendy/easy/etc" -- ok, that can be disappointing, but is it possible you should just make it fun/trendy/easy/etc? or just keep their name on file in case a situation ever comes up where it is fun/trendy/easy and they'll be helpful then?
    • it's harmful to apply this attitude to yourself, saying "oh I failed at this, or I didn't put enough effort in to ensure a good outcome, so I must literally not care about ideals/ethics/truth/other people."
      • like...you do care some amount. you did, in fact, mean well.
        • you may have lacked skill;
        • you may have not been putting in enough effort;
        • or maybe you care somewhat but not as much as you care about something else
        • but it's probably not accurate or healthy to take a maximally-cynical view of yourself where you have no "noble" motives at all, just because you also have "ignoble" motives (like laziness, cowardice, vanity, hedonism, spite, etc).
          • if you have a flicker of a "good intention" to help people, make the world a better place, accomplish something cool, etc, you want to nurture it, not stomp it out as "probably fake".
          • your "good intentions" are real and genuinely good, even if you haven't always followed through on them, even if you haven't always succeeded in pursuing them.
          • you don't deserve "credit" for good intentions equal to the "credit" for actually doing a good thing, but you do deserve some credit.
          • basic behavioral "shaping" -- to get from zero to a complex behavior, you have to reward very incremental simple steps in the right direction.
            • e.g. if you wish you were "nicer to people", you may have to pat yourself on the back for doing any small acts of kindness, even really "easy" and "trivial" ones, and notice & make part of your self-concept any inclinations you have to be warm or helpful.
            • "I mean well and I'm trying" has to become a sentence you can say with a straight face. and your good intentions will outpace your skills so you have to give yourself some credit for them.
    • it may be net-harmful to create a social environment where people believe their "good intentions" will be met with intense suspicion.
      • it's legitimately hard to prove that you have done a good thing, particularly if what you're doing is ambitious and long-term.
      • if people have the experience of meaning well and trying to do good but constantly being suspected of insincerity (or nefarious motives), this can actually shift their self-concept from "would-be hero" to "self-identified villain"
        • which is bad, generally
          • at best, identifying as a villain doesn't make you actually do anything unethical, but it makes you less effective, because you preemptively "brace" for hostility from others instead of confidently attracting allies
          • at worst, it makes you lean into legitimately villainous behavior
      • OTOH, skepticism is valuable, including skepticism of people's motives.
      • but it can be undesirable when someone is placed in a "no-win situation", where from their perspective "no matter what I do, nobody will believe that I mean well, or give me any credit for my good intentions."
      • if you appreciate people for their good intentions, sometimes that can be a means to encourage them to do more. it's not a guarantee, but it can be a starting point for building rapport and starting to persuade. people often want to live up to your good opinion of them.
Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-28T16:16:13.549Z · LW · GW

links 10/28/2024: https://roamresearch.com/#/app/srcpublic/page/10-28-2024

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-25T17:49:18.783Z · LW · GW

links 10/25/24: https://roamresearch.com/#/app/srcpublic/page/10-25-2024

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-23T15:26:20.380Z · LW · GW

links 10/23/24: https://roamresearch.com/#/app/srcpublic/page/10-23-2024

  • https://eukaryotewritesblog.com/2024/10/21/i-got-dysentery-so-you-dont-have-to/  personal experience at a human challenge trial, by the excellent Georgia Ray
  • https://catherineshannon.substack.com/p/the-male-mind-cannot-comprehend-the
    • I...guess this isn't wrong, but it's a kind of Take I've never been able to relate to myself. Maybe it's because I found Legit True Love at age 22, but I've never had that feeling of "oh no the men around me are too weak-willed" (not in my neck of the woods they're not!) or "ew they're too interested in going to the gym" (gym rats are fine? it's a hobby that makes you good-looking, I'm on board with this) or "they're not attentive and considerate enough" (often a valid complaint, but typically I'm the one who's too hyperfocused on my own work & interests) or "they're too show-offy" (yeah it's irritating in excess but a little bit of show-off energy is enlivening).
    • Look: you like Tony Soprano because he's competent and lives by a code? But you don't like it when a real-life guy is too competitive, intense, or off doing his own thing? I'm sorry, but that's not how things work.
      • Tony Soprano can be light-hearted and always have time for the women around him because he is a fictional character. In real life, being good at stuff takes work and is sometimes stressful.
      • My husband is, in fact, very close to this "Tony Soprano" ideal -- assertive, considerate, has "boyish charm", lives by a "code", is competent at lots of everyday-life things but isn't too busy for me -- and I guarantee you would not have thought to date him because he's also nerdy and argumentative and wouldn't fit in with the yuppie crowd.
      • Also like. This male archetype is a guy who fixes things for you and protects you and makes you feel good. In real life? Those guys get sad that they're expected to give, give, give and nobody cares about their feelings. I haven't watched The Sopranos but my understanding is that Tony is in therapy because the strain of this life is getting to him. This article doesn't seem to have a lot of empathy with what it's like to actually be Tony...and you probably should, if you want to marry him.
  • https://fas.org/publication/the-magic-laptop-thought-experiment/ from Tom Kalil, a classic: how to think about making big dreams real.
  • https://paulgraham.com/yahoo.html Paul Graham's business case studies!
  • https://substack.com/home/post/p-150520088 a celebratory reflection on the recent Progress Conference. Yes, it was that good.
  • https://en.m.wikipedia.org/wiki/Hecuba  in some tellings (not Homer's), Hecuba turns into a dog from grief at the death of her son.
  • https://www.librariesforthefuture.bio/p/lff
    • a framework for thinking about aging: "1st gen" is delaying aging, which is where the field started (age1, metformin, rapamycin), while "2nd gen" is pausing (stasis), repairing (reprogramming), or replacing (transplanting), cells/tissues. 2nd gen usually uses less mature technologies (eg cell therapy, regenerative medicine), but may have a bigger and faster effect size.
    • "function, feeling, and survival" are the endpoints that matter.
      • biomarkers are noisy and speculative early proxies that we merely hope will translate to a truly healthier life for the elderly. apply skepticism.
  • https://substack.com/home/post/p-143303463 I always like what Maxim Raginsky has to say. you can't do AI without bumping into the philosophy of how to interpret what it's doing.
Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-15T16:03:13.884Z · LW · GW

I don't think it was articulated quite right -- it's more negative than my overall stance (I wrote it when unhappy) and a little too short-termist.

I do still believe that the future is unpredictable, that we should not try to "constrain" or "bind" all of humanity forever using authoritarian means, and that there are many many fates worse than death and we should not destroy everything we love for "brute" survival.

And, also, I feel that transience is normal and only a bit sad. It's good to save lives, but mortality is pretty "priced in" to my sense of how the world works. It's good to work on things that you hope will live beyond you, but Dark Ages and collapses are similarly "priced in" as normal for me. Sara Teasdale: "You say there is no love, my love, unless it lasts for aye; Ah folly, there are episodes far better than the play!" If our days are as a passing shadow, that's not that bad; we're used to it.

I worry that people who are not ok with transience may turn themselves into monsters so they can still "win" -- even though the meaning of "winning" is so changed it isn't worth it any more.

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-15T02:15:36.212Z · LW · GW

I thought about manually deleting them all but I don't feel like it.

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-14T23:48:24.518Z · LW · GW

links 10/14/2024:

  • https://milton.host.dartmouth.edu/reading_room/pl/book_1/text.shtml [[John Milton]]'s Paradise Lost, annotated online [[poetry]]
  • https://darioamodei.com/machines-of-loving-grace [[AI]] [[biotech]] [[Dario Amodei]] spends about half of this document talking about AI for bio, and I think it's the most credible "bull case" yet written for AI being radically transformative in the biomedical sphere.
    • one caveat is that I think if we're imagining a future with brain mapping, regeneration of macroscopic brain tissue loss, and understanding what brains are doing well enough to know why neurological abnormalities at the cell level produce the psychiatric or cognitive symptoms they do...then we probably can do brain uploading! it's really weird to single out this one piece as pie-in-the-sky science fiction when you're already imagining a lot of similarly ambitious things as achievable.
  • https://venture.angellist.com/eli-dourado/syndicate [[tech industry]] when [[Eli Dourado]] picks startups, they're at least not boring! i haven't vetted the technical viability of any of these, but he claims to do a lot of that sort of numbers-in-spreadsheets work.
  • https://forum.effectivealtruism.org/topics/shapley-values [[EA]] [[economics]] how do you assign credit (in a principled fashion) to an outcome that multiple people contributed to? Shapley values! It seems extremely hard to calculate in practice, and subject to contentious judgment calls about the assumptions you make, but maybe it's an improvement over raw handwaving. (a toy computation is sketched at the end of this list.)
  • https://gwern.net/maze [[Gwern Branwen]] digs up the "Mr. Young" studying maze-running techniques in [[Richard Feynman]]'s "Cargo Cult Science" speech. His name wasn't Young but Quin Fischer Curtis, and he was part of a psychology research program at UMich that published little and had little influence on the outside world, and so was "rebooted" and forgotten. Impressive detective work, though not a story with a very satisfying "moral".
  • https://en.m.wikipedia.org/wiki/Cary_Elwes [[celebrities]] [[Cary Elwes]] had an ancestor who was [[Charles Dickens]]' inspiration for Ebenezer Scrooge!
  • https://feministkilljoys.com/2015/06/25/against-students/ [[politics]] an old essay by [[Sara Ahmed]] in defense of trigger warnings in the classroom and in general against the accusations that "students these days" are oversensitive and illiberal.
    • She's doing an interesting thing here that I haven't wrapped my head around. She's not making the positive case "students today are NOT oversensitive or illiberal" or "trigger warnings are beneficial," even though she seems to believe both those things. she's more calling into question "why has this complaint become a common talking point? what unstated assumptions does it perpetuate?" I am not sure whether this is a valid approach that's an alternative to the forms of argument I'm more used to, or a sign of weakness (a thing she's doing only because she cannot make the positive case for the opposite of what her opponents claim).
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10080017/ [[cancer]][[medicine]] [[biology]] cancer preventatives are an emerging field
    • NSAIDs and omega-3 fatty acids prevent 95% of tumors in a tumor-prone mouse strain?!
    • also we're targeting [[STAT3]] now?! that's a thing we're doing.
      • ([[STAT3]] is a major oncogene but it's a transcription factor, it lives in the cytoplasm and the nucleus, this is not easy to target with small molecules like a cell surface protein.)
  • https://en.m.wikipedia.org/wiki/CLARITY [[biotech]] make a tissue sample transparent so you can make 3D microscopic imaging, with contrast from immunostaining or DNA/RNA labels
  • https://distill.pub/2020/circuits/frequency-edges/ [[AI]] [[neuroscience]] a type of neuron in vision neural nets, the "high-low frequency detector", has recently also been found to be a thing in literal mouse brain neurons (h/t [[Dario Amodei]]) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10055119/
  • https://mosaicmagazine.com/essay/israel-zionism/2024/10/the-failed-concepts-that-brought-israel-to-october-7/ [[politics]][[Israel]][[war]] an informative and sober view on "what went wrong" leading up to Oct 7
    • tl;dr: Hamas consistently wants to destroy Israel and commit violence against Israelis, they say so repeatedly, and there was never going to be a long-term possibility of living peacefully side-by-side with them; Netanyahu is a tough talker but kind of a procrastinator who's kicked the can down the road on national security issues for his entire career; catering to settlers is not in the best interests of Israel as a whole (they provoke violence) but they are an unduly powerful voting bloc; Palestinian misery is real but has been institutionalized by the structure of the Gazan state and the UN which prevents any investment into a real local economy; the "peace process" is doomed because Israel keeps offering peace and the Palestinians say no to any peace that isn't the abolition of the State of Israel.
    • it's pretty common for reasonable casual observers (eg in America) to see Israel/Palestine as a tragic conflict in which probably both parties are somewhat in the wrong, because that's a reasonable prior on all conflicts. The more you dig into the details, though, the more you realize that "let's live together in peace and make concessions to Palestinians as necessary" has been the mainstream Israeli position since before 1948. It's not a symmetric situation.
  • [[von Economo neurons]] are spooky [[neuroscience]] https://en.wikipedia.org/wiki/Von_Economo_neuron
    • only found in great apes, cetaceans, and humans
    • concentrated in the [[anterior cingulate cortex]] and [[insular cortex]] which are closely related to the "sense of self" (i.e. interoception, emotional salience, and the perception that your e.g. hand is "yours" and it was "you" who moved it)
    • the first to go in [[frontotemporal dementia]]
    • https://www.nature.com/articles/s41467-020-14952-3 we don't know where they project to! they are so big that we haven't tracked them fully!
    • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3953677/
  • https://www.wired.com/story/lee-holloway-devastating-decline-brilliant-young-coder/ the founder of Cloudflare had [[frontotemporal dementia]] [[neurology]]
  • [[frontotemporal dementia]] is maybe caused by misfolded proteins being passed around neuron-to-neuron, like prion disease! [[neurology]]
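
To make the Shapley idea above concrete, here's a minimal brute-force sketch of my own (the "glove game" is a standard textbook example, not anything from the linked forum page). It averages each player's marginal contribution over every join order -- which is also why exact computation blows up combinatorially on real problems:

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over every join order."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition.add(p)
            totals[p] += value(coalition) - before
    return {p: t / len(orders) for p, t in totals.items()}

# The classic "glove game": L holds a left glove, R1 and R2 each hold a right
# glove; any left+right pair is worth 1, anything else is worth 0.
def v(coalition):
    return 1.0 if "L" in coalition and ("R1" in coalition or "R2" in coalition) else 0.0

print(shapley_values(["L", "R1", "R2"], v))
# approximately {'L': 0.667, 'R1': 0.167, 'R2': 0.167} -- L gets most of the
# credit because the right-glove holders are substitutes for each other.
```

The arithmetic is the easy part; the contentious judgment calls live in writing down the value function for real collaborators.
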
Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-13T22:10:43.172Z · LW · GW

Therefore, do things you'd be in favor of having done even if the future will definitely suck. Things that are good today, next year, fifty years from now... but not like "institute theocracy to raise birth rates", which is awful today even if you think it might "save the world".

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-13T22:04:22.526Z · LW · GW

"Let's abolish slavery," when proposed, would make the world better now as well as later.

I'm not against trying to make things better!

I'm against doing things that are strongly bad for present-day people to increase the odds of long-run human species survival.

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-11T15:18:11.631Z · LW · GW

https://roamresearch.com/#/app/srcpublic/page/10-11-2024

  • https://www.mindthefuture.info/p/why-im-not-a-bayesian [[Richard Ngo]] [[philosophy]] I think I agree with this, mostly.
    • I wouldn't say "not a Bayesian" because there's nothing wrong with Bayes' Rule and I don't like the tribal connotations, but lbr, we don't literally use Bayes' rule very often and when we do it often reveals just how much our conclusions depend on problem framing and prior assumptions. A lot of complexity/ambiguity necessarily "lives" in the part of the problem that Bayes' rule doesn't touch. To be fair, I think "just turn the crank on Bayes' rule and it'll solve all problems" is a bit of a strawman -- nobody literally believes that, do they? -- but yeah, sure, happy to admit that most of the "hard part" of figuring things out is not the part where you can mechanically apply probability.
  • https://www.lesswrong.com/posts/YZvyQn2dAw4tL2xQY/rationalists-are-missing-a-core-piece-for-agent-like [[tailcalled]] this one is actually interesting and novel; i'm not sure what to make of it. maybe literal physics, with like "forces", matters and needs to be treated differently than just a particular pattern of information that you could rederive statistically from sensory data? I kind of hate it but unlike tailcalled I don't know much about physics-based computational models...[[philosophy]]
  • https://alignbio.org/ [[biology]] [[automation]] datasets generated by the Emerald Cloud Lab! [[Erika DeBenedectis]] project. Seems cool!
  • https://www.sciencedirect.com/science/article/abs/pii/S0306453015009014?via%3Dihub [[psychology]] the forced swim test is a bad measure of depression.
    • when a mouse trapped in water stops struggling, that is not "despair" or "learned helplessness." these are anthropomorphisms. the mouse is in fact helpless, by design; struggling cannot save it; immobility is adaptive.
      • in fact, mice become immobile faster when they have more experience with the test. they learn that struggling is not useful and they retain that knowledge.
    • also, a mouse in an acute stress situation is not at all like a human's clinical depression, which develops gradually and persists chronically.
    • https://www.sciencedirect.com/science/article/abs/pii/S1359644621003615?via%3Dihub the forced swim test also doesn't predict clinical efficacy of antidepressants well. (admittedly this study was funded by PETA, which thinks the FST is cruel to mice)
  • https://en.wikipedia.org/wiki/Copy_Exactly! [[semiconductors]] the Wiki doesn't mention that Copy Exactly was famously a failure. even when you try to document procedures perfectly and replicate them on the other side of the world, at unprecedented precision, it is really really hard to get the same results.
  • https://neuroscience.stanford.edu/research/funded-research/optimization-african-killifish-platform-rapid-drug-screening-aggregate [[biology]] you know what's cool? building experimentation platforms for novel model organisms. Killifish are the shortest-lived vertebrate -- which is great if you want to study aging. they live in weird oxygen-poor freshwater zones that are hard to replicate in the lab. figuring out how to raise them in captivity and standardize experiments on them is the kind of unsung, underfunded accomplishment we need to celebrate and expand WAY more.
  • https://www.nature.com/articles/513481a [[biology]] [[drug discovery]] ever heard of curcumin doing something for your health? resveratrol? EGCG? those are all natural compounds that light up a drug screen like a Christmas tree because they react with EVERYTHING. they are not going to work on your disease in real life.
  • https://en.wikipedia.org/wiki/Fetal_bovine_serum [[biotech]] this cell culture medium is just...cow juice. it is not consistent batch to batch. this is a big problem.
  • https://www.nature.com/articles/s42255-021-00372-0 [[biology]] mice housed at "room temperature" are too cold for their health; they are more disease-prone, which calls into question a lot of experimental results.
  • https://calteches.library.caltech.edu/51/2/CargoCult.htm [[science]] the famous [[Richard Feynman]] "Cargo cult science" essay is about flawed experimental methods!
    • if your rat can smell the location of the cheese in the maze all along, then your maze isn't testing learning.
    • errybody want to test rats in mazes, ain't nobody want to test this janky-ass maze!
  • https://fastgrants.org/ [[metascience]] [[COVID-19]] this was cool, we should bring it back for other stuff
  • https://erikaaldendeb.substack.com/cp/147525831 [[biotech]] engineering biomanufacturing microbes for surviving on Mars?!
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8278038/ [[prediction markets]] DARPA tried to use prediction markets to predict the success of projects. it didn't work! they couldn't get enough participants.
  • https://www.citationfuture.com/ [[prediction markets]] these guys do prediction markets on science
  • https://jamesclaims.substack.com/p/how-should-we-fund-scientific-error [[metascience]] [[James Heathers]] has a proposal for a science error detection (fraud, bad research, etc) nonprofit. We should fund him to do it!!
  • https://en.wikipedia.org/wiki/Elisabeth_Bik [[metascience]] [[Elisabeth Bik]] is the queen of research fraud detection. pay her plz.
  • https://substack.com/home/post/p-149791027 [[archaeology]] it was once thought that Gobekli Tepe was a "festival city" or religious sanctuary, where people visited but didn't live, because there wasn't a water source. Now, they've found something that looks like water cisterns, and they suspect people did live there.
    • I don't like the framing of "hunter-gatherer" = "nomadic" in this post.
      • We keep pushing the date of agriculture farther back in time. We keep discovering that "hunter-gatherers" picking plants in "wild" forests are actually doing some degree of forest management, planting seeds, or pulling undesirable weeds. Arguably there isn't a hard-and-fast distinction between "gathering" and "gardening". (Grain agriculture where you use a plow and completely clear a field for planting your crop is qualitatively different from the kind of kitchen-garden-like horticulture that can be done with hand tools and without clearing forests. My bet is that all so-called hunter-gatherers did some degree of horticulture until proven otherwise, excepting eg arctic environments)
      • what the water actually suggests is that people lived at Gobekli Tepe for at least part of the year. it doesn't say what they were eating.
Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-11T13:54:18.847Z · LW · GW

I'm not defeatist! I'm picky.

And I'm not talking specifics because i don't want to provoke argument.

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-10T22:48:53.397Z · LW · GW

wait and see if i still believe it tomorrow!

Comment by sarahconstantin on Why I’m not a Bayesian · 2024-10-10T14:57:56.970Z · LW · GW

I think I agree with this post directionally.

You cannot apply Bayes' Theorem until you have a probability space; many real-world situations, especially the ones people argue about, do not have well-defined probability spaces, including a complete set of mutually exclusive and exhaustive possible events, which are agreed upon by all participants in the argument. 

You will notice that, even on LessWrong, people almost never have Bayesian discussions where they literally apply Bayes' Rule.  It would probably be healthy to try to literally do that more often! But making a serious attempt to debate a contentious issue "Bayesianly" typically looks more like Rootclaim's lab leak debate, which took a lot of setup labor and time, and where the result of quantifying the likelihoods was to reveal just how heavily your "posterior" conclusion depends on your "prior" assumptions, which were outside the scope of debate.
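
To make the prior-sensitivity point concrete, here is a minimal sketch with made-up numbers (mine, not the Rootclaim debate's): the mechanical Bayes step is one line, and the same likelihood ratio lands observers with different priors on opposite sides of 50%.

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1.0 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Suppose the evidence, once we agree how to count it, favors hypothesis H
# over not-H by 4:1. Observers who framed the problem with different priors:
for prior in (0.05, 0.30, 0.50):
    print(f"prior {prior:.2f} -> posterior {posterior(prior, 4.0):.2f}")

# prior 0.05 -> posterior 0.17
# prior 0.30 -> posterior 0.63
# prior 0.50 -> posterior 0.80
```

All the contested work is upstream of this: choosing the prior, and arguing about what the likelihood ratio should even be.
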

I think prediction markets are good, and I think Rootclaim-style quantified debates are worth doing occasionally, but what we do in most discussions isn't Bayesian and can't easily be made Bayesian.

I am not so sure about preferring models to propositions. I think what you're getting at is that we can make much more rigorous claims about formal models than about "reality"... but most of the time what we care about is reality. And we can't be rigorous about the intuitive "mental models" that we use for most real-world questions. So if your take is "we should talk about the model we're using, not what the world is", then...I don't think that's true in general.

In the context of formal models, we absolutely should consider how well they correspond to reality. (It's a major bias of science that it's more prestigious to make claims within a model than to ask "how realistic is this model for what we care about?") 

In the context of informal "mental models", it's probably good to communicate how things work "in your head" because they might work differently in someone else's head, but ultimately what people care about is the intersubjective commonalities that can be in both your heads (and, for all practical purposes, in the world), so you do have to deal with that eventually.

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-10T14:32:16.066Z · LW · GW

  • “we” can’t steer the future.
  • it’s wrong to try to control people or stop them from doing locally self-interested & non-violent things in the interest of “humanity’s future”, in part because this is so futile.
    • if the only way we survive is if we coerce people to make a costly and painful investment in a speculative idea that might not even work, then we don't survive! you do not put people through real pain today for a "someday maybe!" This applies to climate change, AI x-risk, and socially-conservative cultural reform.
  • most cultures and societies in human history have been so bad, by my present values, that I’m not sure they’re not worse than extinction, and we should expect that most possible future states are similarly bad;
  • history clearly teaches us that civilizations and states collapse (on timescales of centuries) and the way to bet is that ours will as well, but it’s kind of insane hubris to think that this can be prevented;
  • the literal species Homo sapiens is pretty resilient and might avoid extinction for a very long time, but have you MET Homo sapiens? this is cold fucking comfort! (see e.g. C. J. Cherryh’s vision in 40,000 in Gehenna for a fictional representation not far from my true beliefs — we are excellent at adaptation and survival but when we “survive” this often involves unimaginable harshness and cruelty, and changing into something that our ancestors would not have liked at all.)
  • identifying with species-survival instead of with the stuff we value now is popular among the thoughtful but doesn’t make any sense to me;
  • in general it does not make sense, to me, to compromise on personal values in order to have more power/influence. you will be able to cause stuff to happen, but who cares if it’s not the stuff you want?
  • similarly, it does not make sense to consciously optimize for having lots of long-term descendants. I love my children; I expect they’ll love their children; but go too many generations out and it’s straight-up fantasyland. My great-grandparents would have hated me.  And that’s still a lot of shared culture and values! Do you really have that much in common with anyone from five thousand years ago?
  • Evolution is not your friend. God is not your friend. Everything worth loving will almost certainly perish. Did you expect it to last forever?
  • “I love whatever is best at surviving” or “I love whatever is strongest” means you don’t actually care what it’s like. It means you have no loyalty and no standards. It means you don’t care so much if the way things turn out is hideous, brutal, miserable, abusive… so long as it technically “is alive” or “wins”. Fuck that.
  • I despise sour grapes. If the thing I want isn’t available, I’m not going to pretend that what is available is what I want.
  • I am not going to embrace the “realistic” plan of allying with something detestable but potent. There is always an alternative, even if the only alternative is “stay true to your dreams and then get clobbered.”

Link to this on my Roam

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-09T14:45:27.807Z · LW · GW

links 10/9/24 https://roamresearch.com/#/app/srcpublic/page/yI03T5V6t

Comment by sarahconstantin on Overview of strong human intelligence amplification methods · 2024-10-08T19:30:40.544Z · LW · GW

Neuronal activity could certainly affect gene regulation! so yeah, I think it's possible (which is not a strong claim...lots of things "regulate" other things, that doesn't necessarily make them effective intervention points)

Comment by sarahconstantin on Overview of strong human intelligence amplification methods · 2024-10-08T18:44:03.304Z · LW · GW

ditto

we have really not fully explored ultrasound and afaik there is no reason to believe it's inherently weaker than administering signaling molecules. 

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-08T15:20:55.710Z · LW · GW

links 10/8/24 https://roamresearch.com/#/app/srcpublic/page/10-08-2024

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-08T03:36:45.548Z · LW · GW

no! it sounded like "typical delusion stuff" at first until i listened carefully and yep that was a description of targeted ads.

Comment by sarahconstantin on 2025 Color Trends · 2024-10-08T03:35:06.255Z · LW · GW

they're in the substack post

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-07T15:58:01.224Z · LW · GW
  • Psychotic “delusions” are more about holding certain genres of idea with a socially inappropriate amount of intensity and obsession than holding a false idea. Lots of non-psychotic people hold false beliefs (eg religious people). And, interestingly, it is absolutely possible to hold a true belief in a psychotic way.
  • I have observed people during psychotic episodes get obsessed with the idea that social media was sending them personalized messages (quite true; targeted ads are real) or the idea that the nurses on the psych ward were lying to them (they were).
  • Preoccupation with the revelation of secret knowledge, with one's own importance, with mistrust of others' motives, and with influencing others' thoughts or being influenced by others' thoughts: these are classic psychotic themes.
    • And it can be a symptom of schizophrenia when someone’s mind gets disproportionately drawn to those themes. This is called being “paranoid” or “grandiose.”
    • But sometimes (and I suspect more often with more intelligent/self-aware people) the literal content of their paranoid or grandiose beliefs is true!
      • sometimes the truth really has been hidden!
      • sometimes people really are lying to you or trying to manipulate you!
      • sometimes you really are, in some ways, important! sometimes influential people really are paying attention to you!
      • of course people influence each others' thoughts -- not through telepathy but through communication!
    • a false psychotic-flavored thought is "they put a chip in my brain that controls my thoughts." a true psychotic-flavored thought is "Hollywood moviemakers are trying to promote progressive values in the public by implanting messages in their movies."
      • These thoughts can come from the same emotional drive, they are drawn from dwelling on the same theme of "anxiety that one's own thoughts are externally influenced", they are in a deep sense mere arbitrary verbal representations of a single mental phenomenon...
      • but if you take the content literally, then clearly one claim is true and one is false.
      • and a sufficiently smart/self-aware person will feel the "anxiety-about-mental-influence" experience, will search around for a thought that fits that vibe but is also true, and will come up with something a lot more credible than "they put a mind-control chip in my brain", though one that fundamentally comes from the same motive.
  • There’s an analogous but easier to recognize thing with depression.
    • A depressed person’s mind is unusually drawn to obsessing over bad things. But this obviously doesn’t mean that no bad things are real or that no depressive’s depressing claims are true.
    • When a depressive literally believes they are already dead, we call that Cotard's Delusion, a severe form of psychotic depression. When they say "everybody hates me" we call it a mere "distorted thought". When they talk accurately about the heat death of the universe we call it "thermodynamics." But it's all coming from the same emotional place.
  • In general, mental illnesses, and mental states generally, provide a "tropism" towards thoughts that fit with certain emotional/aesthetic vibes.
    • Depression makes you dwell on thoughts of futility and despair
    • Anxiety makes you dwell on thoughts of things that can go wrong
    • Mania makes you dwell on thoughts of yourself as powerful or on the extreme importance of whatever you're currently doing
    • Paranoid psychosis makes you dwell on thoughts of mistrust, secrets, and influencing/being influenced
  • You can, to some extent, "filter" your thoughts (or the ones you publicly express) by insisting that they make sense. You still have a bias towards the emotional "vibe" you're disposed to gravitate towards; but maybe you don't let absurd claims through your filter even if they fit the vibe. Maybe you grudgingly admit the truth of things that don't fit the vibe but technically seem correct.
    • this does not mean that the underlying "tropism" or "bias" does not exist!!!
    • this does not mean that you believe things "only because they are true"!
    • in a certain sense, you are doing the exact same thing as the more overtly irrational person, just hiding it better!
      • the "bottom line" in terms of vibe has already been written, so it conveys no "updates" about the world
      • the "bottom line" in terms of details may still be informative because you're checking that part and it's flexible
  • "He's not wrong but he's still crazy" is a valid reaction to someone who seems to have a mental-illness-shaped tropism to their preoccupations.
    • eg if every post he writes, on a variety of topics, is negative and gloomy, then maybe his conclusions say more about him than about the truth concerning the topic;
      • he might still be right about some details but you shouldn't update too far in the direction of "maybe I should be gloomy about this too"
    • Conversely, "this sounds like a classic crazy-person thought, but I still separately have to check whether it's true" is also a valid and important move to make (when the issue is important enough to you that the extra effort is worth it). 
      • Just because someone has a mental illness doesn't mean every word out of their mouth is false!
      • (and of course this assumption -- that "crazy" people never tell the truth -- drives a lot of psychiatric abuse.)

link: https://roamresearch.com/#/app/srcpublic/page/71kfTFGmK

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-07T14:08:16.899Z · LW · GW

links 10/7/2024

https://roamresearch.com/#/app/srcpublic/page/yI03T5V6t

Comment by sarahconstantin on Nathan Helm-Burger's Shortform · 2024-10-04T14:47:18.103Z · LW · GW

Honestly this Pliny person seems rude. He entered a server dedicated to interacting with this modified AI; instead of playing along with the intended purpose of the group, he tried to prompt-inject the AI to do illegal stuff (that could risk getting the Discord shut down for TOS-violationy stuff?) and to generally damage the rest of the group's ability to interact with the AI.  This is troll behavior.  

Even if the Discord members really do worship a chatbot or have mental health issues, none of that is helped by a stranger coming in and breaking their toys, and then "exposing" the resulting drama online.

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-04T14:32:05.585Z · LW · GW

links 10/4/2024

https://roamresearch.com/#/app/srcpublic/page/10-04-2024

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-02T16:01:58.688Z · LW · GW

links 10/2/2024:

https://roamresearch.com/#/app/srcpublic/page/10-02-2024

Comment by sarahconstantin on The Great Data Integration Schlep · 2024-10-02T15:59:07.637Z · LW · GW

I agree that if the AI can run its own experiments (via robotic actuators) it can do R&D prototyping independently of existing private/corporate data, and that's potentially the whole game. 

My current impression is that, as of 2024, we're starting to see enough investment into AI-controlled robots that in a few years it would be possible to get an "AI experimenter", albeit in the restricted set of domains where experiments can be automated easily. (biological experiments that are basically restricted to pipetting aqueous solutions and imaging the results? definitely yes. most sorts of benchtop electronics prototyping and testing? I imagine so, though I don't know for sure. the full range of reactions/syntheses a chemist can run at a lab bench? probably not for some time; creating a "mechanical chemist" is a famously hard problem since methods are so varied, though obviously it's not in principle impossible.)

Comment by sarahconstantin on sarahconstantin's Shortform · 2024-10-01T16:24:18.442Z · LW · GW

links 10/1/24

https://roamresearch.com/#/app/srcpublic/page/10-01-2024

Comment by sarahconstantin on What's up with self-esteem? · 2019-07-18T20:37:04.379Z · LW · GW

My current theory is that self-esteem isn't about yourself at all!

Self-esteem is your estimate of how much help/support/contribution/love you can get from others.

This explains why a person needs to feel a certain amount of "confidence" before trying something that is obviously their best bet. By "confidence" we basically just mean "support from other people or the expectation of same." The kinds of things that people usually need "confidence" to do are difficult and involve the risk of public failure and blame, even if they're clearly the best option from an individual perspective.

Comment by sarahconstantin on The AI Timelines Scam · 2019-07-11T14:11:00.368Z · LW · GW

Basically, AI professionals seem to be trying to manage the hype cycle carefully.

Ignorant people tend to be more all-or-nothing than experts. By default, they'll see AI as "totally unimportant or fictional", "a panacea, perfect in every way" or "a catastrophe, terrible in every way." And they won't distinguish between different kinds of AI.

Currently, the hype cycle has gone from "professionals are aware that deep learning is useful" (c. 2013) to "deep learning is AI and it is wonderful in every way and you need some" (c. 2015?) to "maybe there are problems with AI? burn it with fire! Nationalize! Ban!" (c. 2019).

Professionals who are still working on the "deep learning is useful for certain applications" project (which is pretty much where I sit) are quite worried about the inevitable crash when public opinion shifts from "wonderful panacea" to "burn it with fire." When the public opinion crash happens, legitimate R&D is going to lose funding, and that will genuinely be unfortunate. Everyone savvy knows this will happen. Nobody knows exactly when. There are various strategies for dealing with it.

Accelerate the decline: this is what Gary Marcus is doing.

Carve out a niche as an AI Skeptic (who is still in the AI business himself!) Then, when the funding crunch comes, his companies will be seen as "AI that even the skeptic thinks is legit" and have a better chance of surviving.

Be Conservative: this is a less visible strategy but a lot of people are taking it, including me.

Use AI only in contexts that are well justified by evidence, like rapid image processing to replace manual classification. That way, when the funding crunch happens, you'll be able to say you're not just using AI as a buzzword, you're using well-established, safe methods that have a proven track record.

Pivot Into Governance: this is what a lot of AI risk orgs are doing

Benefit from the coming backlash by becoming an advisor to regulators. Make a living not by building the tech but by talking about its social risks and harms. I think this is actually a fairly weak strategy because it's parasitic on the overall market for AI. There's no funding for AI think tanks if there's no funding for AI itself. But it's an ideal strategy for the cusp period when we're shifting from blind enthusiasm to blind panic.

Preserve Credibility: this is what Yann LeCun is doing and has been doing from day 1 (he was a deep learning pioneer and promoter even before the spectacular empirical performance results came in)

Try to forestall the backlash. Frame AI as good, not bad, and try to preserve the credibility of the profession as long as you can. Argue (honestly but selectively) against anyone who says anything bad about deep learning for any reason.

Any of these strategies may say true things! In fact, assuming you really are an AI expert, the smartest thing to do in the long run is to say only true things, and use connotation and selective focus to define your rhetorical strategy. Reality has no branding; there are true things to say that comport with all four strategies. Gary Marcus is a guy in the "AI Skeptic" niche saying things that are, afaik, true; there are people in that niche who are saying false things. Yann LeCun is a guy in the "Preserve AI Credibility" niche who says true things; when Gary Marcus says true things, Yann LeCun doesn't deny them, but criticizes Marcus's tone and emphasis. Which is quite correct; it's the most intellectually rigorous way to pursue LeCun's chosen strategy.

Comment by sarahconstantin on The AI Timelines Scam · 2019-07-11T13:45:55.486Z · LW · GW

Re: 2: nonprofits and academics have even more incentives than business to claim that a new technology is extremely dangerous. Think tanks and universities are in the knowledge business; they are more valuable when people seek their advice. "This new thing has great opportunities and great risks; you need guidance to navigate and govern it" is a great advertisement for universities and think tanks. Which doesn't mean AI, narrow or strong, doesn't actually have great opportunities and risks! But nonprofits and academics aren't immune from the incentives to exaggerate.

Re: 4: I have a different perspective. The loonies who go to the press with "did you know psychiatric drugs have SIDE EFFECTS?!" are not really a threat to public information to the extent that they are telling the truth. They are a threat to the perceived legitimacy of psychiatrists. This has downsides (some people who could benefit from psychiatric treatment will fear it too much) but fundamentally the loonies are right that a psychiatrist is just a dude who went to school for a long time, not a holy man. To the extent that there is truth in psychiatry, it can withstand the public's loss of reverence, in the long run. Blind reverence for professionals is a freebie, which locally may be beneficial to the public if the professionals really are wise, but is essentially fragile. IMO it's not worth trying to cultivate or preserve. In the long run, good stuff will win out, and smart psychiatrists can just as easily frame themselves as agreeing with the anti-psych cranks in spirit, as being on Team Avoid Side Effects And Withdrawal Symptoms, Unlike All Those Dumbasses Who Don't Care (all two of them).

Comment by sarahconstantin on Rule Thinkers In, Not Out · 2019-06-08T17:16:34.988Z · LW · GW

Some examples of valuable true things I've learned from Michael:

  • Being tied to your childhood narrative of what a good upper-middle-class person does is not necessary for making intellectual progress, making money, or contributing to the world.
  • Most people (esp. affluent ones) are way too afraid of risking their social position through social disapproval. You can succeed where others fail just by being braver even if you're not any smarter.
  • Fiddly puttering with something that fascinates you is the source of most genuine productivity. (Anything from hardware tinkering, to messing about with cost spreadsheets until you find an efficiency, to writing poetry until it "comes out right".) Sometimes the best work of this kind doesn't look grandiose or prestigious at the time you're doing it.
  • The mind and the body are connected. Really. Your mind affects your body and your body affects your mind. The better kinds of yoga, meditation, massage, acupuncture, etc, actually do real things to the body and mind.
  • Science had higher efficiency in the past (late 19th-to-mid-20th centuries).
  • Examples of potentially valuable medical innovation that never see wide application are abundant.
  • A major problem in the world is a 'hope deficit' or 'trust deficit'; otherwise feasible good projects are left undone because people are so mistrustful that it doesn't occur to them that they might not be scams.
  • A good deal of human behavior is explained by evolutionary game theory; coalitional strategies, not just individual strategies.
  • Evil exists; in less freighted, more game-theoretic terms, there exist strategies which rapidly expand, wipe out other strategies, and then wipe themselves out. Not *all* conflicts are merely misunderstandings. (See the toy simulation just after this list.)
  • How intersubjectivity works; "objective" reality refers to the conserved *patterns* or *relationships* between different perspectives.
  • People who have coherent philosophies -- even opposing ones -- have more in common in the *way* they think, and are more likely to get meaningful stuff done together, than they can with "moderates" who take unprincipled but middle-of-the-road positions. Two "bullet-swallowers" can disagree on some things and agree on others; a "bullet-dodger" and a "bullet-swallower" will not even be able to disagree, they'll just not be saying commensurate things.
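
That bullet about self-destroying strategies can be illustrated with a standard toy model. Here is a minimal public-goods-game sketch; the payoffs, growth rates, and population sizes are parameters I made up for illustration, not anything from Michael:

```python
# A toy public-goods simulation. Cooperators each pay 1 into a common pot;
# the pot is tripled and split evenly among everyone, including defectors,
# who pay nothing. Reproduction depends on payoff.

coop, defect = 100.0, 1.0  # initial population sizes (made-up numbers)

for t in range(80):
    total = coop + defect
    if total < 1e-6:
        break
    share = 3.0 * coop / total      # each individual's cut of the tripled pot
    payoff_c = share - 1.0          # cooperators paid 1 in
    payoff_d = share                # defectors free-ride
    # baseline death rate plus payoff-proportional reproduction
    coop *= max(0.0, 0.7 + 0.3 * payoff_c)
    defect *= max(0.0, 0.7 + 0.3 * payoff_d)
    if t % 10 == 0:
        print(f"t={t:2d}  cooperators={coop:12.2f}  defectors={defect:12.2f}")

# Defectors always earn exactly 1 more than cooperators, so they expand and
# take over. But once the cooperators are gone, the pot is empty, payoffs
# collapse, and the defector population shrinks toward extinction: the
# strategy "wins" and then wipes itself out.
```

With these made-up numbers, defection spreads because it pays exactly 1 more than cooperation every round, and then the whole population collapses along with the common pot: a strategy that wipes out its rivals and then itself.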


Comment by sarahconstantin on Tactical vs. Strategic Cooperation · 2018-08-12T20:54:48.703Z · LW · GW

I'm not actually asking for people to do a thing for me, at this point. I think the closest to a request I have here is "please discuss the general topic and help me think about how to apply or fix these thoughts."

I don't think all communication is about requests (that's a kind of straw-NVC), only that when you are making a request it's often easier to get what you want by asking than by indirectly pressuring.

Comment by sarahconstantin on Are ethical asymmetries from property rights? · 2018-08-12T19:10:04.372Z · LW · GW

That's flattering to Rawls, but is it actually what he meant?

Or did he just assume that you don't need a mutually acceptable protocol for deciding how to allocate resources, and you can just skip right to enforcing the desirable outcome?