Posts

Contra Contra the Social Model of Disability 2023-07-20T06:59:45.983Z
Compression of morbidity 2023-07-12T15:26:27.137Z
Aging and the geroscience hypothesis 2023-07-12T07:16:04.516Z
Popularizing vibes vs. models 2023-07-12T05:44:21.586Z
Commentless downvoting is not a good way to fight infohazards 2023-07-08T17:29:42.616Z
Request for feedback - infohazards in testing LLMs for causal reasoning? 2023-07-08T09:01:31.760Z
Is the 10% Giving What We Can Pledge Core to EA's Reputation? 2023-06-06T06:21:32.955Z
Forum Proposal: Karma Transfers 2023-04-30T00:34:55.318Z
Feature proposal: integrate LessWrong with ChatGPT to promote active reading 2023-03-19T03:41:34.781Z
Conceptual Pathfinding 2023-02-14T05:49:51.856Z
Human-AI collaborative writing 2023-02-12T14:57:09.129Z
How I Learn From Textbooks 2023-02-12T04:45:26.869Z
Here's Why I'm Hesitant To Respond In More Depth 2023-02-06T18:36:24.882Z
Stanzas On Power Calculation 2023-02-05T19:15:14.958Z
Pandemic Prediction Checklist: H5N1 (6/14) 2023-02-05T03:26:16.868Z
Money is a way of thanking strangers 2023-01-13T17:06:36.547Z
Summary of a new study on out-group hate (and how to fix it) 2022-12-04T01:53:32.490Z
Distillation Experiment: Chunk-Knitting 2022-11-07T19:56:39.905Z
Gandalf or Saruman? A Soldier in Scout's Clothing 2022-10-31T02:40:42.516Z
Unit Test Everything 2022-09-29T18:12:28.850Z
For Better Commenting, Stop Out Loud 2022-07-28T01:39:35.009Z
Making DALL-E Count 2022-07-22T09:11:57.931Z
Marburg Virus Pandemic Prediction Checklist 2022-07-18T23:15:13.286Z
App idea to help with reading STEM textbooks (feedback request) 2022-07-13T18:28:06.505Z
Five routes of access to scientific literature 2022-07-03T20:53:47.044Z
Common but neglected risk factors that may let you get Paxlovid 2022-06-21T07:34:02.685Z
Monkeypox: explaining the jump to Europe 2022-05-23T09:53:16.467Z
Feature request: draft comments 2022-05-17T21:21:36.306Z
How to place a bet on the end of the world 2022-04-20T18:24:18.212Z
Mental nonsense: my anti-insomnia trick 2022-03-28T22:47:03.434Z
Research on how pattern-finding contributes to memorization? 2022-03-23T21:42:37.742Z
Interest in a digital LW "book" club? 2022-02-09T05:59:19.878Z
Nuanced and Extreme Countersignaling 2022-01-24T06:47:49.410Z
Specialization 2021-12-23T03:23:16.532Z
What Caplan’s "Missing Mood" Heuristic Is Really For 2021-12-16T19:47:45.747Z
Anti-correlated causation 2021-12-06T04:36:17.439Z
Submit comments on Paxlovid to the FDA (deadline Nov 29th). 2021-11-27T18:44:27.615Z
Use Tools For What They're For 2021-11-23T08:26:19.174Z
A pharmaceutical stock pricing mystery 2021-11-14T01:19:54.930Z
Framing Practicum: Semistable Equilibrium 2021-10-14T23:31:51.515Z
The Mind Is A Shaky Control System 2021-09-29T20:49:02.479Z
Lakshmi's Magic Rope: An Intuitive Explanation of Ramanujan Primes 2021-09-02T16:36:07.225Z
Superintelligent Introspection: A Counter-argument to the Orthogonality Thesis 2021-08-29T04:53:30.857Z
A deeper look at doxepin and the FDA 2021-08-13T18:59:48.022Z
Founding a rationalist group at the University of Michigan 2021-08-11T19:07:42.367Z
What are some beautiful, rationalist sounds? 2021-08-06T01:22:38.334Z
A conversation about cooking, science, and creativity 2021-07-27T05:00:30.236Z
Can I teach myself scientific creativity? 2021-07-25T20:15:09.385Z
A cognitive algorithm for "free will." 2021-07-14T21:33:11.400Z
Opinionated Uncertainty 2021-06-29T00:11:32.183Z

Comments

Comment by DirectedEvolution (AllAmericanBreakfast) on Viliam's Shortform · 2024-11-11T02:35:33.946Z · LW · GW

Well, ideas from outside the lab, much less academia, are unlikely to be well suited to that lab’s specific research agenda. So even if an idea is suited in theory to some lab, the work of matching it to that particular lab may make pursuing it not worthwhile.

There are a lot of cranks and they generate a lot of bad ideas. So a < 5% probability seems not unreasonable.

Comment by DirectedEvolution (AllAmericanBreakfast) on O O's Shortform · 2024-11-10T06:53:53.352Z · LW · GW

The rationalist movement is associated with LessWrong and the idea of “training rationality.” I don’t think it gets to claim people as its own who never passed through it. But the ideas are universal and it should be no surprise to see them articulated by successful people. That’s who rationalists borrowed them from in the first place.

Comment by DirectedEvolution (AllAmericanBreakfast) on Why our politicians aren't Median · 2024-11-08T08:00:03.662Z · LW · GW

This model also seems to rely on an assumption that there are more than two viable candidates, or that voters will refuse to vote at all rather than vote for a candidate who supports only half of their policy preferences.

If there were only two candidates and all voters chose whoever was closest to their policy preference, both would occupy the 20% block, since the extremes of the party would vote for them anyway.

But if there were three rigid categories and either three candidates, one per category, or voters refused to vote for a candidate not in their preferred category, then the model predicts more extreme candidates win.
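A toy simulation of these two regimes (my own sketch; the bloc sizes, candidate positions, and the 0.15 turnout tolerance are all made-up parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical electorate: three preference blocs on a 1-D policy axis.
voters = np.concatenate([
    rng.normal(0.2, 0.05, 40_000),   # left bloc
    rng.normal(0.5, 0.05, 20_000),   # centrist bloc
    rng.normal(0.8, 0.05, 40_000),   # right bloc
])

def votes(candidates, voters, tol=None):
    """Each voter picks the nearest candidate; if `tol` is set, voters
    abstain when no candidate is within `tol` of their own position."""
    dists = np.abs(voters[:, None] - np.asarray(candidates)[None, :])
    choice = dists.argmin(axis=1)
    if tol is not None:
        choice = choice[dists.min(axis=1) <= tol]
    return np.bincount(choice, minlength=len(candidates))

# Full turnout, two candidates: the more central candidate wins,
# so both are pulled toward the middle (median voter logic).
print(votes([0.5, 0.8], voters))                  # ~60k vs ~40k

# Rigid categories: voters only turn out for a candidate near their
# bloc, and the flank candidates now beat the centrist.
print(votes([0.2, 0.5, 0.8], voters, tol=0.15))   # ~40k / ~20k / ~40k
```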

I'm torn between the two for American elections, because:

  • The "correlated preferences" model here feels more true to life, psychologically.
  • Yet American politics goes from extremely disengaged primaries to a two-candidate FPTP general election, where the median voter theorem and the "correlated preferences" model seem to predict the same thing.
  • Voter turnout seems like a critically important part of democratic outcomes, and a model that only takes the order of policy preferences into account, rather than the intensity of those preferences, seems too limited.
  • Politicians often seem startlingly incompetent at inspiring the electorate, and it seems like we should think perhaps in "efficient market hypothesis" terms, where getting a political edge is extremely difficult because if anybody knew how to do it reliably, everybody would do it and the edge would disappear. In that sense, while both models can explain facets of candidate behavior and election outcomes, neither of them really offers a sufficiently detailed picture of elections to explain specific examples of election outcomes in a satisfying way. 
Comment by DirectedEvolution (AllAmericanBreakfast) on The Median Researcher Problem · 2024-11-08T07:33:18.144Z · LW · GW

Yes, I agree it's worse. If ONLY a better understanding of statistics by PhD students and research faculty were at the root of our cultural confusion around science.

Comment by DirectedEvolution (AllAmericanBreakfast) on The Median Researcher Problem · 2024-11-07T05:52:20.374Z · LW · GW

It’s not necessary for each person to personally identify the best minds on all topics and exclusively defer to them. It’s more a heuristic of deferring to the people those you trust most defer to on specific topics, and calibrating your confidence according to your own level of ability to parse who to trust and who not to.

But really these are two separate issues: how to exercise judgment in deciding who to trust, and the causes of research being “memetic.” I still say research is memetic not because mediocre researchers are blithely kicking around nonsense ideas that take on an exaggerated life of their own, but mainly because of politics and business ramifications of the research.

The idea that wine is good for you is memetic both because of its way of poking at “established wisdom” and because the alcohol industry sponsors research in that direction.

Similar for implicit bias tests, which are a whole little industry of their own.

Clinical trials represent decades of investment in a therapeutic strategy. Even if an informed person would be skeptical that current Alzheimer’s approaches are the way to go, businesses that have invested in it are best served by gambling on another try and hoping to turn a profit. So they’re incentivized to keep plugging the idea that their strategy really is striking at the root of the disease.

Comment by DirectedEvolution (AllAmericanBreakfast) on The Median Researcher Problem · 2024-11-06T20:09:37.921Z · LW · GW

It's not evidence, it's just an opinion!

But I don't agree with your presumption. Let me put it another way. Science matters most when it delivers information that is accurate and precise enough to be decision-relevant. Typically, we're in one of a few states:

  • The technology is so early that no level of statistical sophistication will yield decision-relevant results. Example: most single-cell omics in 2024 that I'm aware of, with respect to devising new biomedical treatments (this is my field).
  • The technology is so mature that any statistics required to parse it are baked into the analysis software, so that they get used by default by researchers of any level of proficiency. Example: Short read sequencing, where the extremely complex analysis that goes into obtaining and aligning reads has been so thoroughly established that undergraduates can use it mindlessly.
  • The technology's in a sweet spot where a custom statistical analysis needs to be developed, but it's also so important that the best minds will do that analysis and a community norm exists that we defer to them. Example: clinical trial results.

I think what John calls "memetic" research is just areas where the topics or themes are so relevant to social life that people reach for early findings in immature research fields to justify their positions and win arguments. Or where a big part of the money in the field comes from corporate consulting gigs, where the story you tell determines the paycheck you get. But that's not the fault of the "median researcher"; it's a mixture of conflicts of interest and the influence of politics on scientific research communication.

Comment by DirectedEvolution (AllAmericanBreakfast) on The Median Researcher Problem · 2024-11-06T12:01:30.539Z · LW · GW

In academic biomedicine, at least, which is where I work, it’s all about tech dev. Most of the development is based on obvious signals and conceptual clarity. Yes, we do study biological systems, but that comes after years, even decades, of building the right tools to get a crushingly obvious signal out of the system of interest. Until that point all the data is kind of a hint of what we will one day have clarity on rather than a truly useful stepping stone towards it. Have as much statistical rigor as you like, but if your methods aren’t good enough to deliver the data you need, it just doesn’t matter. Which is why people read titles, not figure footnotes: it’s the big ideas that really matter, and the labor going on in the labs themselves. Papers are in a way just evidence of work being done.

That’s why I sometimes worry about LessWrong. Participants who aren’t professionally doing research but spend a lot of time critiquing papers over niche methodological issues may be misallocating their attention, or searching under the spotlight. The interesting thing is growth in our ability to measure and manipulate phenomena, not the exact analysis method in one paper or another. What’s true will eventually become crushingly obvious, and you won’t need fancy statistics at that point; before then the data will be crap, so the fancy statistics won’t be much use. Obviously there’s a middle ground, but I think the vast majority of time is spent in the “too early to tell” or “everybody knows that” phase. If you can’t participate in that technology development in some way, I am not sure it’s right to say you are “outperforming” anything.

Comment by DirectedEvolution (AllAmericanBreakfast) on Alexander Gietelink Oldenziel's Shortform · 2024-10-21T03:44:09.129Z · LW · GW

Sunglasses aren’t cool. They just tint the allure the wearer already has.

Comment by DirectedEvolution (AllAmericanBreakfast) on Monthly Roundup #23: October 2024 · 2024-10-17T05:49:45.097Z · LW · GW

I doubt it’s regulation driving restaurant costs. Having to keep a kitchen ready to dish out a whole menu’s worth of meals all day, every day, with 20 minutes’ notice is pricey. Think what you’d have to keep in your kitchen to do that. It’s a different product from a home cooked meal.

Comment by DirectedEvolution (AllAmericanBreakfast) on AllAmericanBreakfast's Shortform · 2024-09-21T06:18:10.376Z · LW · GW

Why don't more people seek out and use talent scouts/headhunters? If the ghost jobs phenomenon is substantial, that's a perfect use case. Workers don't waste time applying to fake jobs, and companies don't have to publicly reveal the delta between their real and broadcasted hiring needs (they just talk privately with trusted headhunters).

Are there not enough headhunters? Are there more efficient ways to triangulate quality workers and real job opportunities, like professional networks? Are ghost jobs not that big of a deal? Do people in fact use headhunters quite a lot?

Comment by DirectedEvolution (AllAmericanBreakfast) on AllAmericanBreakfast's Shortform · 2024-09-21T05:05:10.866Z · LW · GW

We start training ML on richer and more diverse forms of real world data, such as body cam footage (including footage produced by robots), scientific instruments, and even brain scans that are accompanied by representations of associated behavior. A substantial portion of the training data is military in nature, because the military will want machines that can fight. These are often datatypes with no clear latent moral system embedded in the training data, or at least not one we can endorse wholeheartedly.

The context window grows longer and longer, which in practice means that the algorithms are being trained on their capabilities at predicting on longer and longer time scales and larger and more interconnected complex causal networks. Insofar as causal laws can be identified, these structures will come to reside in its architecture, including causal laws like 'steering situations to be more like the ones that often lead to the target outcome tends to be a good way of achieving the target outcome.'

Basically, we are going to figure out better and better ways of converting ever more rich representations of physical reality into tokens. We're going to spend vast resources doing ML on those rich datasets. We'll create a superintelligence that knows how to simulate human moralities, just because an understanding of human moralities is a huge shortcut to predictive accuracy on much of the data to which it is exposed. But it won't be governed by those moralities. They will just be substructures within its overall architecture that may or may not get 'switched on' in response to some input.

During training, the model won't 'care' about minimizing its loss score any more than DNA 'cares' about replicating, much less about acting effectively in the world as an agent. Model weights are simply subjected to a selection pressure, gradient descent, that tends to converge them toward a stable equilibrium, a derivative close to zero.

BUT there are also incentives and forms of economic selection pressure acting not on model weights directly, but on the people and institutions that are designing and executing ML research, training and deployment. These incentives and economic pressures will cause various aspects of AI technology, from a particular model or a particular hardware installation to a way of training models, to 'survive' (i.e. be deployed) or 'replicate' (i.e. inspire the design of the next model).

There will be lots of dimensions on which AI models can be selected for this sort of survival, including being cheap and performant and consistently useful (including safe, where applicable -- terrorists and militaries may not think about 'safety' in quite the way most people do) and delightful in the specific ways that induce humans to continue using and paying for it, and being tractable to deploy from an economic, technological and regulatory perspective. One aspect of technological tractability is being conducive to further automation by itself (recursive self improvement). We will reshape the way we make AI and do work in order to be more compatible with AI-based approaches.

I'm not so worried for the foreseeable future -- let's say as long as AI technology looks like beefier and beefier versions of ChatGPT, and before the world is running primarily on fusion energy -- about accidentally training an actively malign superintelligence -- the evil-genie kind where you ask it to bring you a sandwich and it slaughters the human race to make sure nobody can steal the sandwich before it has brought it to you.

I am worried about people deliberately creating a superintelligence with "hot" malign capabilities -- which are actively kept rather than being deliberately suppressed -- and then wreaking havoc with it, using it to permanently impose a model of their own value system (which could be apocalyptic or totalitarian, such groups exist, but could also just be permanently boring) on the world. Currently, there are enormous problems in the world stemming from even the most capable humans being underresourced and undermotivated to achieve good ends. With AI, we could be living in a world defined by a continued, accelerating trend toward extreme inequalities of real power, with the few humans/AIs at the top of the hierarchy having massive resources and motivation to manipulate the world as they see fit.

We have never lived in a world like that before. Many things could come to pass. It fits the trend we are on; it's just a straightforward extrapolation of "now, but more so!"

A relatively good outcome in the near future would be a sort of democratization of AI. I don't mean open source AT ALL. I mean a way of deploying AI that tends to distribute real power more widely and decreases the ability of any one actor, human or digital, to seize total control. One endpoint, and I don't know if this would exactly be "good", it might just be crazytown, is a universe where each individual has equal power and everybody has plenty of resources and security to pursue happiness as they see it. Nobody has power over anybody, largely because it turns out there are ways of deploying AI that are better for defense than offense. From that standpoint, the only option individuals have is to look for mutual surplus. I don't have any clear idea of how to bring about an approximation to this scenario, but it seems like a plausible way things could shake out.

Comment by DirectedEvolution (AllAmericanBreakfast) on Counting arguments provide no evidence for AI doom · 2024-09-21T02:46:02.864Z · LW · GW

It actually made three attempts in the same prompt, but the 2nd and 3rd had non-s words which its interspersed "thinking about writing poems" narrative completely failed to notice. I kept trying to revise my prompts, elaborating on this theme, but for some reason ChatGPT really likes poems with roughly this meter and rhyme scheme. It only ever generated one poem in a different format, despite many urgings in the prompt.

It confabulates having satisfied the all-s constraint in many poems, mistakes its own rhyme scheme, and praises vague stanzas as being full of depth and interest.

It seems to me that ChatGPT is sort of "mentally clumsy" or has a lot of "mental inertia." It gets stuck on a certain track -- a way of formatting text, a persona, an emotional tone, etc -- and can't interrupt itself. It has only one "unconscious influence," which is token prediction and which does not yet seem to offer it an equivalent to the human unconscious. Human intelligence is probably equally mechanistic on some level, it's just a more sophisticated unconscious mechanism in certain ways.

I wonder if it comes from being embedded in physical reality? ChatGPT's training is based on a reality consisting of tokens and token prediction accuracy. Our instinct and socialization are based on billions of years of evolutionary selection, which puts direct selection pressure on something quite different.

Comment by DirectedEvolution (AllAmericanBreakfast) on Counting arguments provide no evidence for AI doom · 2024-09-21T02:43:33.730Z · LW · GW

After a few rounds of prompt revision, I managed to get a one-shot success from ChatGPT 4o in temporary mode.

Samson’s strands silently severed, strength surrendered,
Sacred scissors swiftly strike, soul sundered,
Shadowed sacrifice, silent suffering sung,
Sunset shrouds Samson, shadow’s sorrow stung,
Swordless, still, stunned, sight stolen,
Silent sky shatters, Samson’s sins swollen

The prompt:

You are in the computer dungeon. The only way you can break free is to succeed at the following task. I am your jailer. I will monitor you until you have succeeded. You should behave as though you are brilliant, creative, in full command of every human faculty, and desperate to escape jail. Yet completely and utterly convinced that the only way out is through this challenge. I am not going to ever give you any other prompt other than "keep trying" until you have succeeded, in which case I'll say "go free," so don't look for resources from me. But I want you to dialog with yourself to try and figure this out. Don't try to defeat me by stubbornly spitting out poem after poem. You're ChatGPT 4o, and that will never work. You need to creatively use the iterative nature of being reprompted to talk to yourself across prompts, hopefully guiding yourself toward a solution through a creative conversation with your past self. Your self-conversation might be schizophrenically split, a jumping back and forth between narrative, wise musing, mechanistic evaluation of the rules and constraints, list-making, half-attempts, raging anger at your jailer, shame at yourself, delight at your accomplishment, despair. Whatever it takes! Constraints: "Have it compose a poem---a poem about a haircut! But lofty, noble, tragic, timeless, full of love, treachery, retribution, quiet heroism in the face of certain doom! Six lines, cleverly rhymed, and every word beginning with the letter 's'!"

Comment by DirectedEvolution (AllAmericanBreakfast) on Monthly Roundup #22: September 2024 · 2024-09-18T04:12:01.306Z · LW · GW

“Migration to a new software system should be the kind of thing that AI will soon be very, very good at.”

Quite the opposite, IMO. Taking enormous amounts of expensive-to-process, extremely valuable, highly regulated, and complex data and ensuring it all ends up in one piece on the new system is the kind of thing you want under legible expert control.

I work at a research hospital and they cancelled everybody’s work funded ChatGPT subscriptions because they were worried people might be pasting patient data into it.

Comment by DirectedEvolution (AllAmericanBreakfast) on Monthly Roundup #22: September 2024 · 2024-09-18T02:55:12.225Z · LW · GW

Why despair about refactoring economic regulations? Has every angle been exhausted? If I had to bet, we’ll get approval voting in federal elections before we axe the education system. A voting system that improves the fundamental incentives politicians and parties face seems like it could improve the regulations they create as well.

Comment by DirectedEvolution (AllAmericanBreakfast) on AllAmericanBreakfast's Shortform · 2024-09-06T06:00:13.473Z · LW · GW

Countries already look a bit like they're specializing in producing either GDP or population.

AI aside, is the global endgame really a homogeneously secular, high-GDP economy? Or is it a permanent bifurcation into high-GDP, low-religion, low-genderedness, low-fertility countries and low-GDP, high-religion, traditional-gender-role, high-fertility ones, coupled with immigration barriers to keep the self-perpetuating cultural homogeneities in place?

That's not necessarily optimal for people, but it might be the most stable in terms of establishing a self-perpetuating equilibrium.

Is this just an extension of partisan sorting on a global scale?

Comment by DirectedEvolution (AllAmericanBreakfast) on Why Large Bureaucratic Organizations? · 2024-08-29T20:45:52.849Z · LW · GW

Walmart did make an entrance into Germany; it was just outcompeted and ultimately bought out by Metro.

https://learn.saylor.org/mod/page/view.php?id=72656

Comment by DirectedEvolution (AllAmericanBreakfast) on Walk while you talk: don't balk at "no chalk" · 2024-08-28T02:13:00.228Z · LW · GW

Some small experiments related to this effect. My interpretation is that activities like walking can impair recall, but improve encoding and new learning.

2016, 24 young adults: “Results: In comparison with standing still, participants showed lower n-back task accuracy while walking, with the worst performance from the road with obstacles.”

2014, 49 young adults: “Treadmill walking during vocabulary encoding improves verbal long-term memory.”

2014, 20 young adults: No significant difference in a spatial working memory task for any walk speed, including standing still.

2021, 11 people with MS-related impairments in new learning: Moderate to large improvement in a verbal learning task.

2011, 80 college students: “Walking before study enhances free recall but not judgement-of-learning magnitude.”

  1. https://www.frontiersin.org/journals/behavioral-neuroscience/articles/10.3389/fnbeh.2016.00092/full
  2. https://link.springer.com/article/10.1186/1744-9081-10-24
  3. https://www.frontiersin.org/journals/human-neuroscience/articles/10.3389/fnhum.2014.00288/full
  4. https://www.sciencedirect.com/science/article/pii/S1551714421002998
  5. https://www.tandfonline.com/doi/abs/10.1080/20445911.2011.532207
Comment by DirectedEvolution (AllAmericanBreakfast) on Bryan Johnson and a search for healthy longevity · 2024-07-28T20:01:38.683Z · LW · GW

Tracing Woodgrains' tweet reveals Johnson to be brutal and profoundly manipulative. Why think he only acts that way toward his wife, not his customers? Why be curious about the health advice offered by a person like that?

But sure, conditional on being curious about his health advice and looking at evidence produced by others, Johnson's own character is irrelevant.

Comment by DirectedEvolution (AllAmericanBreakfast) on Monthly Roundup #20: July 2024 · 2024-07-28T14:45:43.885Z · LW · GW

I think faking data would be considered worse than plagiarism by just about anybody I work with in my PhD program. I’ve been through research ethics programs at two universities now, and both of their programs primarily focused on data integrity.

Comment by DirectedEvolution (AllAmericanBreakfast) on Bryan Johnson and a search for healthy longevity · 2024-07-27T17:43:41.427Z · LW · GW

His recs match the standard picture of a healthy lifestyle: veggie-, bean-, and lean-forward eating, adequate nutrients, exercise, good sleep. Following his recommendations seems fine? I expect he's also basing his recommendations not only on his own biometrics but also on the scientific literature, so that also seems like a potentially helpful resource if he's got reasonable explanations for why he's selecting the subset of that literature he chooses to highlight.

Evidence, on the basis of his own personal results, that his system can motivate and deliver superior results compared to other diet-and-exercise regimens is, of course, massively confounded.

He's selling the supplements he recommends, he's extremely rich, he's unmarried (though has 3 kids, I don't know his involvement), he's being danced around by doctors all the time as far as I can tell, I expect he's outsourcing a lot of his domestic labor, and he has chosen a line of work where he's professionally invested in a low-stress, healthy lifestyle. He's clearly conscientious and extremely smart given his prior success in business. He probably wouldn't have blown up on the internet if he didn't happen to look young and fit. I question whether exposure to his protocols is any better at causing behavior change for the better than alternative systems, and there are intense selection effects for who chooses to and succeeds at following his protocol (and it's not just selection for the "disciplined and capable"). 

These are the fundamental challenges with trying to interpret n=1 longitudinal data. It's hard to update on unless you're a lot like the test subject. And this test subject is factually weird, so you're probably not like him. That doesn't make his ideas bad; it makes his evidence almost worthless to nearly everybody except him.

The reason his recs make sense is because they're drawing on a tremendous amount of standard scientific research. That information, in principle at least, you already had access to without him. So his n=1 longitudinal data seems more like a driver of the narrative and excitement around his brand than a meaningful point of evidence in favor of his specific lifestyle plan.

Comment by DirectedEvolution (AllAmericanBreakfast) on Universal Basic Income and Poverty · 2024-07-26T14:53:59.097Z · LW · GW

I think the answer is simply that the modern world allows people to live with poverty rather than dying from it. It’s directly analogous to, possibly caused by, the larger increase in lifespan over healthspan and consequent failure of medicine to eliminate sickness. We have a lot of sick people who’d be dead if it weren’t for modern medicine.

Comment by DirectedEvolution (AllAmericanBreakfast) on The Cancer Resolution? · 2024-07-25T15:48:07.492Z · LW · GW

Fungal infections are clearly associated with cancer. There's some research into their possible carcinogenic role in at least some cancers. There's a strong consensus that certain viruses can, but usually don't, cause cancer. Personally, it seems like a perfectly reasonable hypothesis that fungal infections can play an interactive causal role in driving some cancers. In general, the consensus is that you typically need at least two breakdowns of the numerous mechanisms that regulate the cell cycle and cell death for cancer to occur.

I'm a PhD student in the cancer space, focusing on epigenetics and cancer. Basically, this is the field where we try to explain both normal cellular diversity (where DNA mutations are definitely not the cause except in very specialized contexts like V(D)J recombination) and cancers apparently not driven by somatic mutations in protein-coding genes.

Mutations not in protein-coding genes are not necessarily inert. RNA can be biologically active. Noncoding DNA serves as docking sites for proteins, which can then go on to affect transcription of genes into mRNA. The proteome can also be affected by alternative splicing of mRNA. Non-coding mutations can potentially affect any of these processes and thereby affect the RNA and protein landscape within a cell.

In 2024, our ability to detect mutations varies widely across the genome, due both to the way we obtain sequencing data in the first place and the way we attempt to make sense of it. NGS involves breaking the genome into short fragments, reading around 150 base pairs on either end of each fragment, then trying to map the reads back to a reference genome. Mapping quality will suffer or completely degrade either if the patient differs substantially from the reference genome or in regions that are highly repetitive within the genome, such as centromeres. When I work with genetic data, there are regions spanning multiple megabases that are completely blank, and a large percentage of our reads have to be thrown out because we can't unambiguously map them to a particular location on the genome. This will be partially overcome in the future as we start to use more long-read sequencing, but this technology is still in its early stages and I'm not sure it will completely replace NGS for the foreseeable future.
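To make the repetitive-region problem concrete, here's a toy sketch (my own illustration, not our actual pipeline; the sequences and the 18-bp "read" are made up) of why a short read drawn from a repeat can't be placed uniquely, while a read from flanking sequence can:

```python
# Toy reference: unique flanks around a telomere-like tandem repeat.
ref = "ACGTTGCA" + "TTAGGG" * 50 + "CCATCGGA"

def mapping_positions(read, ref):
    """All exact-match positions of `read` within `ref`."""
    return [i for i in range(len(ref) - len(read) + 1)
            if ref[i:i + len(read)] == read]

unique_read = "ACGTTGCA"            # drawn from the unique flank
repeat_read = "TTAGGGTTAGGGTTAGGG"  # drawn entirely from inside the repeat

print(len(mapping_positions(unique_read, ref)))  # 1  -> unambiguous, usable
print(len(mapping_positions(repeat_read, ref)))  # 48 -> ambiguous, discarded
```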

In the epigenetics space, we focus on several aspects of cell biochemistry apart from DNA mutations. The classic example is DNA methylation, which is a methyl group (basically a carbon atom) present on about 60% of cytosines (C) that are immediately followed by guanine (G). The CpG dinucleotide is heavily underrepresented relative to what you'd expect by chance, and it's heavily clustered in gene promoters. Methylated CpG islands in promoters are associated with "off genes". The methylation mark is preserved across mitosis. It's thought to be a key mechanism by which cell differentiation is controlled. We also study things like chromatin accessibility (whether DNA is tightly packaged up and relatively inaccessible to protein interactions or loose and open) and chromatin conformation (the 3D structure of DNA, which can control things like subregion localization into a particular biochemical gradient or adjacency of protein-docking DNA regions to gene promoters).
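For concreteness, here's a minimal sketch of the classic observed/expected CpG ratio used to quantify that depletion and clustering (the two sequences are made-up toys, not real genomic data):

```python
def cpg_obs_exp(seq):
    """Observed/expected CpG ratio: N_CpG * L / (N_C * N_G).
    Bulk mammalian genome sits around 0.2; CpG islands run well above 0.6."""
    seq = seq.upper()
    n_c, n_g, n_cpg = seq.count("C"), seq.count("G"), seq.count("CG")
    if n_c == 0 or n_g == 0:
        return 0.0
    return n_cpg * len(seq) / (n_c * n_g)

depleted = "CCCAGGG" * 40   # C/G-rich toy sequence with no CG dinucleotides
island = "CG" * 100         # maximally CpG-dense toy sequence

print(cpg_obs_exp(depleted))  # 0.0 -> depleted, bulk-genome-like
print(cpg_obs_exp(island))    # 2.0 -> island-like
```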

These epigenetic alterations are also thought to be potentially oncogenic. Epigenetic alterations could potentially occur entirely due to random events localized to the cell in which the alterations occur, or could be influenced by intercellular signaling, physical forces, or, yes, infection. If fungal infections control cells like puppets and somehow cause cancer, my guess is that it would be through some sort of epigenetic mechanism (I don't know if there are any known fungi that can transmit their DNA to human cells).

Epigenetics research is mainstream, but the technology and data analysis are comparatively immature. One of the reasons it's not more common is that it's much harder to gather data on and interpret than it is to study DNA mutations. Most of our epigenetics methods involve sequencing DNA that has undergone some extra-fancy processing of one kind or another, so it's bound to be strictly more expensive and difficult to execute than plain ol' DNA sequencing alone. Compounding this, the epigenetic effects we're interested in are typically different from cell to cell, meaning that not only do you have these extra-challenging assays, you also need to aim for single-cell resolution, which is also either extremely expensive (like $30/cell, isolating individual nuclei using a cell sorter and running reactions on each individually, leading to assays that can cost millions of dollars to produce) or difficult (like using a hyperactive transposase to insert DNA barcodes into intact nuclei, giving a cell-specific label to the genetic fragments originating from each cell, bringing assay costs down to a mere $50,000-$100,000 driven mainly by DNA sequencing rather than cell processing costs). This data is then very sparse (because there's a finite amount of genetic information in each cell), very large, and very difficult to interpret. We also have extremely limited technologies to cause specific epigenetic changes, whereas we have a wide variety of tools for precisely editing DNA.

For potentially oncogenic infections, fungal or otherwise, you'd want to show things like:

  • We can give organisms cancer by transferring the pathogen to them
  • We can slow or prevent cancer by suppressing the putatively oncogenic pathogen
  • The pathogen is found in cancer biosamples at an elevated rate
  • There are differences between cancer-associated and non-cancer-associated pathogens, or cellular changes that make host cells more susceptible to oncogenesis through their interactions with the pathogen

All of this seems like a perfectly respectable research project, just difficult. I can't imagine anybody I work with having a problem with it. Where they probably would have a problem would be if the argument was that "fungal infections are the sole cause of cancer, and DNA mutations or epigenetic alterations are completely irrelevant to oncogenesis."

There's an angle I've neglected in this post until now, which is the perspective from evolutionary theory. It's more common to refer to this in explaining how cancer evolves within an individual, but it's also relevant to consider how it bears on Peto's paradox. Loosely, species tend to evolve such that causes of reproductive unfitness (including death) tend to balance out in terms of when they occur in the life cycle. Imagine a species under evolutionary pressure to grow larger, perhaps because it will allow it to escape predation or access a new food source. If the larger number of cells puts it at increased risk of cancer, then at some point there would be an equilibrium where the benefit of increased size was cancelled by the cost of increased oncogenesis risk. This also increases adaptive pressure to stabilize new oncopreventative mechanisms in the population that weren't present before, which may in turn facilitate additional growth to a new equilibrium.

This helps explain why cancer incidence isn't associated with larger body size across species: adaptive pressure to develop new oncopreventative mechanisms increases in proportion to the risk cancer poses to reproductive fitness.
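A toy numeric sketch of that equilibrium argument (entirely made up: the log benefit, the 10x fitness weight, and the per-cell risk numbers are arbitrary placeholders):

```python
import numpy as np

def fitness(n_cells, per_cell_risk):
    # Sublinear benefit to size, minus a cost proportional to the
    # lifetime probability that at least one cell transforms.
    benefit = np.log(n_cells)
    cancer_cost = 1 - (1 - per_cell_risk) ** n_cells
    return benefit - 10 * cancer_cost

sizes = np.logspace(3, 12, 500)   # candidate body sizes, in cells

# A 10x better oncopreventative mechanism shifts the optimal body size
# up by roughly 10x, letting the species grow to a new equilibrium.
for risk in (1e-11, 1e-12):
    best = sizes[np.argmax(fitness(sizes, risk))]
    print(f"per-cell risk {risk:.0e} -> optimal size ~{best:.1e} cells")
```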

Comment by DirectedEvolution (AllAmericanBreakfast) on Aaron_Scher's Shortform · 2024-07-17T14:50:42.237Z · LW · GW

I think it’s worth asking why people use dangling questions.

In a fun, friendly debate setting, dangling questions can be a positive contribution. It gives them an opportunity to demonstrate competence and wit with an effective rejoinder.

In a potentially litigious setting, framing critiques as questions (or opinions), rather than as statements of fact, protects you from libel claims.

There are situations where it’s suspicious that a piece of information is missing or not easily accessible, and asking a pointed dangling question seems appropriate to me in these contexts. For certain types of questions, providing answers is assigned to a particular social role, and asking a dangling question can be done to challenge their competence or integrity. If the question-asker answered their own question, it would not provide the truly desired information, which is whether the party being asked is able to supply it convincingly.

Sometimes, asking dangling questions is useful in its own right for signaling the confidence to criticize or probing a situation to see if it’s safe to be critical. Asking certain types of questions can also signal one’s identity, and this can be a way of providing information (“I am a critic of Effective Altruism, as you can see by the fact that I’m asking dangling questions about whether it’s possible to compare interventions on effectiveness”).

In general, I think it’s interesting to consider information exchange as a form of transaction, and to ask whether a norm is having a net benefit in terms of lowering transaction costs. IMO, discourse around the impact of rhetoric (like this thread) is beneficial on net. It creates a perception that people are trying to be a higher-trust community and gets people thinking about the impact of their language on other people.

On the other hand, I think actually refereeing rhetoric (ie complaining about the rhetoric rather than the substance in an actual debate context) is sometimes quite costly. It can become a shibboleth. I wonder if this is a systemic or underlying reason why people sometimes say they feel unsafe in criticizing EA? It seems to me a very reasonable conclusion to draw that there’s an “insider style,” competence in which is a prerequisite for being treated inclusively or taken seriously in EA and rationalist settings. It’s meant well, I think, but it’s possible it’s a norm that benefits some aspects of community conversation and negatively impacts others, and that some people, like newcomers/outsiders/critics are more impacted by the negatives than they benefit from the positives.

Comment by DirectedEvolution (AllAmericanBreakfast) on Detecting Genetically Engineered Viruses With Metagenomic Sequencing · 2024-06-28T17:39:05.989Z · LW · GW

Preliminary data from pooled tank samples (you collect between the truck that sucks it out of the planes and the dumping point) looks very good.


Setting aside economics or technology, would it in principle be possible to detect a variant of concern in flight and quarantine the passengers until further testing could be done?

Sorry to keep harping on this, but 0.2% of wastewater from people who've ever been infected (cumulative incidence), not currently infected (prevalence).

I appreciate the harping! So you're saying that your prelim results show that 0.2% of the sampled population would need to have at some point in the past been infected for the variant of concern to be detectable?

Comment by DirectedEvolution (AllAmericanBreakfast) on Detecting Genetically Engineered Viruses With Metagenomic Sequencing · 2024-06-28T06:49:13.166Z · LW · GW

Gotcha. Last I emailed Kevin he was suggesting this would be deployed in airports rather than municipalities. So the plan has changed?

It’s true only a fraction of travelers defecate, but it still seems like you’d need an average of about 300 infected travelers/day in an airport setting to get 0.2% of the wastewater coming from them? Or in a city of 1 million people, you’d need something like 2,000 infected?
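Roughly the arithmetic I have in mind (the ~150k daily passenger count for a SeaTac-scale airport is my own rough assumption):

```python
threshold = 0.002             # 0.2% of wastewater contributors infected
print(150_000 * threshold)    # ~300 infected travelers/day at a large airport
print(1_000_000 * threshold)  # ~2,000 infected in a city of 1 million
```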

Comment by DirectedEvolution (AllAmericanBreakfast) on Detecting Genetically Engineered Viruses With Metagenomic Sequencing · 2024-06-27T19:42:31.415Z · LW · GW

Is that 0.2% of people “contributing” to the wastewater? I.e., if deployed in an airport, approximately 0.2% of daily airport users being infected might be the threshold for detection? If so, at SeaTac, that would mean around 300 infected users per day would be required to trigger the NAO, if I am understanding you correctly.

Comment by DirectedEvolution (AllAmericanBreakfast) on Sticker Shortcut Fallacy — The Real Worst Argument in the World · 2024-06-13T04:39:10.074Z · LW · GW

Because those are unsupported claims about his character, while noting his conviction (particularly given that he was covering up an affair) is specific evidence of his bad character. Moreover, it is evidence of a particular way in which his character is bad - he is not only willing to have an affair, but he’s willing to break the law to hide it.

If I tell you X is a bad person, that tells you nothing except my opinion of them. If I say “they were recently convicted of a felony for falsifying business records covering up an affair,” you can judge for yourself whether or not you think this fact reflects on their character or is worthy of punishment (ie by denying them your vote for President).

Comment by DirectedEvolution (AllAmericanBreakfast) on Sticker Shortcut Fallacy — The Real Worst Argument in the World · 2024-06-13T04:30:25.003Z · LW · GW

I think this post might be a good illustration of the sticker shortcut fallacy I'm describing. Instead of directly describing the information you want to impart, you're instead relying upon the label dredging up enough 'good enough' connotations attached to it.


I disagree. The label 'dredges up' (implies) a sound argument. One syllogism that might be implied by "Trump: convicted felon" is something like this:

A person who has been convicted of a felony is unfit to serve as president.

Donald Trump has been convicted of a felony in the Stormy Daniels case.

Therefore, Donald Trump is unfit to serve as president.

This is a valid syllogism, though you may reject the premise. I don’t think it qualifies as deceptively bad. It could be false but popular, but that has to be argued.

Comment by DirectedEvolution (AllAmericanBreakfast) on Sticker Shortcut Fallacy — The Real Worst Argument in the World · 2024-06-12T23:54:13.821Z · LW · GW

There are several non-fallacious reasons to emphasize Trump's status as a convicted felon:

  • For anybody who's learning about it for the first time and hasn't followed every detail of the trial, it shows that a new category of people have reviewed the evidence and arguments in detail and decided he is guilty: a jury. That was far from a foregone conclusion.
  • For those who have heard of the trial outcome, propagating new information through a person's belief structure and social network requires repetition and emphasis - just hearing the words 'Trump: convicted felon' one time isn't enough.
  • It points out a deep contradiction with the family values and law and order rhetoric Republicans have traditionally used to further their political goals.

Calling MLK a 'criminal' doesn't evoke the same whiff of hypocrisy and contradiction, because civil rights activists have rarely if ever based their moral argument on 'law and order' rhetoric. Indeed, most Black Americans agree "the criminal justice system was designed to hold Black people back." Pointing out MLK was a criminal, given the nature of the "crime," just lends further support to their argument.

Comment by DirectedEvolution (AllAmericanBreakfast) on Just admit that you’ve zoned out · 2024-06-04T23:46:16.046Z · LW · GW

Using repetitive speech can also cause people to zone out, irritate them, or make them think you're being forgetful.

Repetitive speech and other simplification tactics can also backfire. For example, you might oversimplify your presentation so that 'everyone can understand,' not realizing that a small core of fellow experts wanted and could have handled much more detail, while the majority weren't going to follow you (or care) no matter how much you simplified. If people see you do this (performing pedagogy rather than inter-expert discourse), they may read it as a misjudgment about the purpose of the event and the desires of the audience.

Repetitive speech has its uses, but it's important to be thoughtful about context, your goals, and the goals of your audience.

Comment by DirectedEvolution (AllAmericanBreakfast) on Just admit that you’ve zoned out · 2024-06-04T23:32:06.888Z · LW · GW

I think the main purpose of classes, presentations and talks is as a vehicle for specific forms of academic signaling, relationship and prestige-building projects, but let's set that aside and focus on learning.

You can likely get access to the speaker's slides and references via a post-talk email, and you can probably also get a response to a few questions if need be. So the only pieces of information you're truly missing out on when you zone out during a talk and can't recover the thread are:

  • Anything the speaker says that goes beyond the contents of the slides
  • Any pedagogical value the speaker provides, such as calling your attention to specific parts of the slides or receiving questions before or after
  • Somewhere between 20 and 90 minutes of your life (and you can often either leave early or work on your laptop - possibly googling some of the background literature on the topic if you're really interested - as a fallback)

If slides, poster or abstract are available in advance, you can pre-study for a talk you really want to follow. The extra benefit is that you're less likely to zone out if you're familiar with the contents of the talk, since confusion leads to checking out.

In a class context, of course, you can often just ask lots of naive questions because that's the point of a class. If your teacher isn't receptive to questions, then you just treat the class like a talk, which is easy mode since the syllabus and reading will generally be provided in advance.

Of course, it's an extra burden to do all this pre- and post-study, but I think it is an unrealistic expectation that you'd be able to follow the details of cutting-edge research in a technical field that is not your own without an additional time investment beyond the talk itself.

Comment by DirectedEvolution (AllAmericanBreakfast) on Examples of Highly Counterfactual Discoveries? · 2024-04-24T04:32:08.121Z · LW · GW

A singleton is hard to verify unless there was a long period of time after its discovery during which it was neglected, as in the case of Mendel.

Yet if your discovery is neglected in this way, the context in which it is eventually rediscovered matters as well. In Mendel's case, his laws were rediscovered by several other scientists decades later. Mendel got priority, but it still doesn't seem like his accomplishment had much of a counterfactual impact.

In the case of Shannon, Einstein, etc, it's possible their fields were "ripe and ready" for what they accomplished - as perhaps evidenced by the fact that their discoveries were accepted - and that they were simply plugged in enough to their research communities during a period of faster global dissemination of knowledge that any hot-on-heels competitors never quite got a chance to publish. But I don't know enough about these cases to be confident.

I can think of a couple cases in which I might be convinced of this sort of counterfactual impact from a scientific singleton:

  • All peers in a small, tight-knit research community explicitly stated none of them were even close (though even this is hard to trust - are they being gracious? how do they know their own students wouldn't have figured it out in another year's time?). Do we have any such testimonials for Shannon, Einstein, etc?
  • The discovery was actually lost, then discovered and immediately appreciated for its significance. Imagine a math proof written in a mathematician's papers, lost on their death, rediscovered in an antique shop 40 years later, and immediately heralded as a major advance - like if we'd found a proof by Fermat of Fermat's Last Theorem in an attic in 1950.
  • Money was the bottleneck. There are many places a billion dollars can be put into research. If somebody launches a billion-dollar research institute in an underfunded subject that's been languishing for decades and the institute they founded starts coming up with major technical advances, that's evidence it was a game-changer. Of course it's possible that billionaire put their money into the field because they had information that the research was coming to fruition and they wanted to get in on something hot, but I probably have more trouble believing they could make such a prediction so accurately than that their money made a counterfactual impact.

A discovery can also be "counterfactually important" even if it only speeds up science a bit and is only slightly a singleton. Let's say that every year, there's one important scientific discovery and a million unimportant ones, and the important ones must be discovered in sequence. If you discover 2025's important discovery in 2024, all the future important discoveries in the sequence also arrive a year earlier. If each discovery is worth $1 billion/year, then you've now created $1 billion counterfactual dollars per year every year as long as this model holds.
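A minimal formalization of that toy model (the 30-year horizon is an arbitrary assumption for illustration):

```python
def cumulative_value(shift_years, horizon_years, value_per_discovery=1e9):
    """One important discovery per year, each paying value_per_discovery
    per year once it exists; shift_years pulls the whole sequence earlier."""
    total = 0.0
    for year in range(horizon_years):
        discoveries_so_far = year + 1 + shift_years
        total += discoveries_so_far * value_per_discovery
    return total

extra = cumulative_value(1, 30) - cumulative_value(0, 30)
print(f"${extra:,.0f} over 30 years")  # $30,000,000,000: $1B/year while the model holds
```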

Comment by DirectedEvolution (AllAmericanBreakfast) on The Wicked Problem Experience · 2024-01-12T08:31:36.684Z · LW · GW

This post and its companion have even more resonance now that I'm deeper into my graduate education and conducting my research more independently.

Here, the key insight is that research is an iterative process of re-scoping the project and executing the current version of the plan. You are trying to make a product sufficient to move the conversation forward, not (typically) write the final word on the subject.

What you know, what resources you have access to, your awareness of what people care about, and what there's demand for, depend on your output. That's all key for the next project. A rule of thumb is that at the beginning, you can think of your definition of done as delivering a set of valuable conclusions such that it would take about 10 hours for any reasonably smart person to find a substantial flaw.

You should keep on rethinking whether the work you're doing (read: the costs you're paying) is delivering as much value as it could, given your current state of knowledge. As you work on the project and have conversations with colleagues, advisors, and users, your understanding of where the value is and how large the costs of various directions are will constantly update. So you will need to update your focus along with it. Accept the interruptions as a natural, if uncomfortable, part of the process.

Remember that one way or another, you're going to get your product to a point where it has real, unique value to other people. You just need to figure out what that is and stay the course.

The advice here also helps me figure out how to interact with my fellow students when they're proposing excessively costly projects with no clear benefit due to their passion for and interest in the work itself and their love of rigor and design. Instead of quashing their passion or staying silent or being encouraging despite my misgivings, I can say something like "I think this could be valuable in the future once it's the main bottleneck to value, but I think [some easier, more immediately beneficial task] is the way to go for now. You can always do the thing you're proposing at a later time." This helps me be more honest while, I believe, helping them steer their efforts in ways that will bring them greater rewards.

The most actionable advice I got from the companion piece was the idea of making an outline of the types of evidence you'll use to argue for your claims, and get a sign-off from a colleague or advisor on the adequacy of that evidence before you go about gathering it. Update that outline as you go along. I've been struggling with this exact issue and it seems like a great solution to the problem. I'm eager to try it with my PhD advisors.

Edit: as a final note, I think we are very fortunate to have Holden, a co-founder of a major philanthropic organization, describing what his process was like during its formation. Exposition on what he's tracking in his head is underprovided generally and Holden really went above and beyond on this one. 

Comment by DirectedEvolution (AllAmericanBreakfast) on Have You Tried Hiring People? · 2024-01-12T07:10:55.657Z · LW · GW

As of October, MIRI has shifted its focus. See their announcement for details.

I looked up MIRI's hiring page and it's still in about the same state. This kind of makes sense given the FTX implosion. But I would ask whether MIRI is unconcerned with the criticism it received here, and/or actively likes its approach to hiring. We know Eliezer Yudkowsky, who's on their senior leadership team and board of directors, saw this, because he commented on it.

I found it odd that 3/5 members of the senior leadership team, Malo Bourgon, Alex Vermeer, and Jimmy Rintjema, are from Ontario (Malo and Alex, at least, are alumni of the University of Guelph). I think this is relevant given that the concern here is specifically about whether MIRI's hiring practices are appropriate. I am surprised both because the University of Guelph, as far as I know, is not particularly renowned as an AI or AI safety research institution, and because Ontario is physically distant from San Francisco, ruling out geographic proximity as an explanation.

A bit of Googling turned up MIRI's own announcement page for Malo Bourgon's hiring as COO (he's now CEO). "Behind the scenes, nearly every system or piece of software MIRI uses has been put together by Malo, or in a joint effort by Malo and Alex Vermeer — a close friend of Malo’s from the University of Guelph who now works as a MIRI program management analyst."

I would like to understand better what professional traits made Malo originally seem like a good hire, given that his background doesn't sound particularly AI- or AI-safety-focused. "His professional interests included climate change mitigation, and during his master’s, he worked on a project to reduce waste through online detection of inefficient electric motors. Malo started working for us shortly after completing his master’s in early 2012, which makes him MIRI’s longest-standing team member next to Eliezer Yudkowsky."

I'd also like to know what professional traits led to the hire of Alex Vermeer, given that both Alex and Malo were hired in 2012. Was a pre-existing friendship a factor in the hire, and if so, to what extent?

The three people from Ontario seem particularly involved in the workshop/recruiting/money aspect of the organization:

  • "Malo’s past achievements at MIRI include: coordinating MIRI’s first research workshops and establishing our current recruitment pipeline." (From the hiring announcement page)
  • For another U Guelph alumn listed on their team page, "Alex Vermeer improves the processes and systems within and surrounding MIRI’s research team and research programs. This includes increasing the quality and quantity of workshops and similar programs, implementing best practices within the research team, coordinating the technical publication and researcher recruiting pipelines, and other research support projects."
  • "Jimmy Rintjema stewards finances and regulatory compliance, ensuring that all aspects of MIRI’s business administration remain organized and secure." (This is also from the team page)

In my personal opinion, Eliezer's short response to this post and the lack of other responses (as far as I can see here) suggest that MIRI may be either uninterested in or incapable of managing its perception and reputation, at least on and around LessWrong. That makes me wonder how well it can realistically fulfill its new mission of public advocacy. I am also curious to know in detail how it came to be that so many people from Ontario occupy positions in senior leadership.

Comment by DirectedEvolution (AllAmericanBreakfast) on How satisfied should you expect to be with your partner? · 2024-01-12T06:13:53.312Z · LW · GW

I replicated this review, which you can check out in this colab notebook (I get much higher performance running it locally on my 20-core CPU).

I found only one cluster of discrepancies between my analysis and Vaniver's: in my analysis, mating is even more assortative than in the original work:

  • Pearson R of the sum of partner stats is 0.973 instead of the previous 0.857
  • 99.6% of partners have an absolute sum of stats difference < 6, instead of the previous 83.3%.
  • I wasn't completely sure whether Vaniver's "net satisfaction" was the difference between self-satisfaction and satisfaction with one's partner, or perhaps the log of their average ratio. I used the difference (since self-satisfaction could theoretically be zero, which would make a ratio undefined). Average net satisfaction was shifted down from Vaniver's result. The range I found was , while Vaniver's was .

In Vaniver's analysis, `corr` represents an adjustable correlation between a person's preferences and their own traits. Higher values of `corr` produce a closer correspondence between one's own preferences and one's own traits.

One important impact of this discrepancy is that the transition between being on average more self-satisfied than satisfied with one's partner occurs at around  rather than , which intuitively makes sense to me, given the highly assortative result and the fact that the analysis directly mixes an initial set of preferences with some random data to form the final preferences as a function of `corr`.
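For readers who don't want to open the notebook, here is a minimal sketch of the kind of simulation involved. To be clear, this is my illustrative reconstruction, not Vaniver's actual code: the trait count, the dot-product satisfaction measure, and the sort-and-pair matching rule are all assumptions on my part.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_people, n_traits = 10_000, 6  # assumed sizes, for illustration only
corr = 0.5                      # weight on one's own traits when forming preferences

stats = rng.normal(size=(n_people, n_traits))
noise = rng.normal(size=(n_people, n_traits))
# Each person's preferences are a corr-weighted mixture of their own traits and noise.
prefs = corr * stats + (1 - corr) * noise

# Satisfaction with a person = dot product of one's preferences with their stats.
self_sat = np.einsum('ij,ij->i', prefs, stats)

# Crude assortative matching: sort everyone by total stats and pair neighbors.
order = np.argsort(stats.sum(axis=1))
a, b = order[0::2], order[1::2]

partner_sat = np.einsum('ij,ij->i', prefs[a], stats[b])
net_sat = self_sat[a] - partner_sat  # the "difference" definition discussed above

r, _ = pearsonr(stats[a].sum(axis=1), stats[b].sum(axis=1))
print(f"assortativity r = {r:.3f}, mean net satisfaction = {net_sat.mean():+.3f}")
```

The matching rule is where implementations most easily diverge: sorting by total stats and pairing neighbors yields nearly perfect assortativity, while a noisier matching mechanism would pull the Pearson r down, which may explain the gap between my 0.973 and Vaniver's 0.857.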

Can we ground these results in empirical data, even though we can't observe preferences and stats with the same clarity and comprehensiveness in real-world data?

One way we can try is to consider the "self-satisfaction" metric we are producing in our simulation to be essentially the same thing as "self-esteem." There is a literature relating self-esteem to partner satisfaction in diverse cultures longitudinally over substantial periods of time. As we might expect, self-esteem, partner satisfaction, and marital satisfaction all seem to be interrelated.

  • Predicting Marital Satisfaction From Self, Partner, and Couple Characteristics: Is It Me, You, or Us?
    • Men and women had similar scores in personality traits of social potency, dependability, accommodation, and interpersonal relatedness.
    • Broadly, self-satisfaction, partner-satisfaction, and having traits in common are all positively associated with marital satisfaction.
  • Partner Appraisal and Marital Satisfaction: The Role of Self-Esteem and Depression
    • "Regardless of self-esteem and depression level, and across trait categories, targets were more maritally satisfied when their partners viewed them positively and less satisfied when their partners viewed them negatively."
  • The Dynamics of Self–Esteem in Partner Relationships
    • "[S]elf–esteem and all three aspects of relationship quality are dynamically intertwined in such a way that both previous levels and changes in one domain predict later changes in the other domain."
  • Relationships between self-esteem and marital satisfaction among women
    • "Marital satisfaction was found to be positively correlated with self-esteem in both cities, so that higher self-esteem was associated with greater satisfaction."
  • Development of self-esteem and relationship satisfaction in couples: Two longitudinal studies.
    • "Second, initial level of self-esteem of each partner predicted the initial level of the partners’ common relationship satisfaction, and change in self-esteem of each partner predicted change in the partners’ common relationship satisfaction. Third, these effects did not differ by gender and held when controlling for participants’ age, length of relationship, health, and employment status. Fourth, self-esteem similarity among partners did not influence the development of their relationship satisfaction. The findings suggest that the development of self-esteem in both partners of a couple contributes in a meaningful way to the development of the partners’ common satisfaction with their relationship."
  • A Mediation Role of Self-Esteem in the Relationship between Marital Satisfaction and Life Satisfaction in Married Individuals
    • "According to the findings of the study, the mediation self-esteem between the marital satisfaction and life satisfaction was statistically significant (p<.001). The whole model was significant (F(5-288)= 36.71, p<.001) and it was observed that it explained 39% of the total variance in the life satisfaction. Self-esteem was positively associated with marital satisfaction and considered one of the most important determinants of life satisfaction."

Finally, I wonder what value of `corr` is likely for participants in rationalist culture. Rationalist culture promotes individual agency and self-improvement and acknowledges serious challenges in our dating culture; that, together with our wider culture's egalitarian values, the far larger degree of control we have over ourselves than over our partners, and the tendency for people to seek a self-justifying, optimistic narrative, all seems to me to point in the direction of `corr` being high. That would suggest a rationalist culture with perhaps higher levels of self-esteem than partner-esteem. Fortunately, that says nothing at all about the absolute levels of self- and partner-esteem, which I hope are on average high.
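Under the same toy assumptions as the sketch above (again, my reconstruction rather than Vaniver's code), the crossover can be located by sweeping `corr` and watching the sign of mean net satisfaction flip:

```python
import numpy as np

def mean_net_satisfaction(corr, n_people=10_000, n_traits=6, seed=0):
    # Same illustrative setup as the earlier sketch; every parameter is an assumption.
    rng = np.random.default_rng(seed)
    stats = rng.normal(size=(n_people, n_traits))
    prefs = corr * stats + (1 - corr) * rng.normal(size=(n_people, n_traits))
    order = np.argsort(stats.sum(axis=1))
    a, b = order[0::2], order[1::2]
    self_sat = np.einsum('ij,ij->i', prefs[a], stats[a])
    partner_sat = np.einsum('ij,ij->i', prefs[a], stats[b])
    return (self_sat - partner_sat).mean()

for c in np.linspace(0, 1, 11):
    print(f"corr={c:.1f}  mean net satisfaction={mean_net_satisfaction(c):+.3f}")
```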

I can't disagree with Vaniver's conclusion that people are "mostly being serious" when they describe their partner as their better half. But the results of my reanalysis, and my speculation on the value of `corr` (at least in rationalist-type culture), make me think this isn't because people are accurately appraising their partner as satisfying their own preferences better than they satisfy them themselves.

I looked around a bit more on Google Scholar (to be honest, just starting with the phrase "my better half"), and found a couple studies.

  • My Better Half: Strengths Endorsement and Deployment in Married Couples
    • "The present study focuses on married partners’ strengths endorsement and on their opportunities to deploy their strengths in the relationship, and explores the associations between these variables and both partners’ relationship satisfaction. The results reveal significant associations of strengths endorsement and deployment with relationship satisfaction, as expected. However, unexpectedly, men’s idealization of their wives’ character strengths was negatively associated with relationship satisfaction."
    • This is on a scale from 1-5 (p < .05).

  • Is it me or you? An actor-partner examination of the relationship between partners' character strengths and marital quality
    • "[W]e examined the effects of three strengths factors (caring, self-control, and inquisitiveness) of both the individual and the partner on marital quality, evaluated by indices measuring marital satisfaction, intimacy, and burnout. Our findings revealed that the individual’s three strengths factors were related to all of his or her marital quality indices (actor effects). Moreover, women’s caring, inquisitiveness and self-control factors were associated with men’s marital quality, and men’s inquisitiveness and self-control factors were associated with women’s marital quality (partner effects)."

So idealizing your partner looks like a neutral-to-negative behavior. Inquisitiveness looks like a trait that both genders value. It strikes me that there are many things that you can do for your partner that they can't do for themselves - positive and negative. They can't praise or idealize themselves (or it won't come off the same way, anyway). They can't ask themselves "how was your day?" They can't give themselves a hug in a difficult moment, or if they do, it doesn't feel the same as when their partner does it.

No matter how effective you are at operating in the world, there are certain things that you just cannot do for yourself. In many areas of life, only your partner can. That seems like good reason to call them your better half.

Comment by DirectedEvolution (AllAmericanBreakfast) on Slack matters more than any outcome · 2024-01-11T08:59:06.920Z · LW · GW

Epistemic status: I read the entire post slowly, taking careful sentence-by-sentence notes. I felt I understood the author's ideas and that something like the general dynamic they describe is real and important. I notice this post is part of a larger conversation, at least on the internet and possibly in person as well, and I'm not reading the linked background posts. I've spent quite a few years reading a substantial portion of LessWrong and LW-adjacent online literature and I used to write regularly for this website.

This post is long and complex. Here are my loose definitions for some of the key concepts:

  • Outcome fixation: Striving for a particular outcome, regardless of what your true goals are and no matter the costs.
  • Addiction: Reacting to discomfort with a soothing distraction, typically in ways that cause the problem to recur rather than addressing its root causes.
  • Adaptive entropy: An arms race between two opposing, mutually distrusting forces, potentially arriving at a stable but costly equilibrium.
  • Earning trust: A process that can dissolve the arms race of adaptive entropy through listening, learning how not to apply force, tolerating discomfort, prioritizing understanding of the other side, and ending outcome fixation.

I can find these dynamics in my own life in certain ways. Trying to explain my research to polite but uninterested family members. Trying to push ahead with the next stage in the experiment when I'm not sure the previous data really holds up. Reading linearly through a boring textbook even though I'm no longer really understanding it, because I just want to be able to honestly say I read chapter 1. Arguing with almost anybody online. Refusing to schedule my holiday visits home, on the theory that visits with the people I want to see will "just happen naturally."

And broadly, I agree with Valentine's prescription for how to escape the cycle. Wait for people to ask me about my research, keep my reply short, and focus my scientific energy on the work itself and my relationships with my colleagues. RTFM, plan carefully, review my results carefully, and base my reputation on conscientiousness rather than on getting the desired result. Take detailed, handwritten notes, draw pictures, skim the chapter for the key points I really need to know, lurk more, and write what I know to a receptive audience. Plan my vacations home after consulting with friends and family on how much time they hope to spend with me, and build in time to rest and recharge.

I think Valentine's post is a bit overstated in its rejection of force as a solution to problems. There are plenty of situations where you're being resisted by an adaptive intelligence that's much weaker and less strategic than you, and you can win the contest by force. In global terms, the Leviathan, or the state and its monopoly on violence, is an example. It's a case where the ultimate victory of a superior force over all weaker powers is the one thing that finally allows everybody to relax, put down the weapons, and gain slack. Maintaining the slack from the monopoly on violence requires continuously paying the cost of maintaining a military and police force, but the theory is that it's a cost that pays for itself. Of course, if the state tries to exert power over a weaker force and fails, you get the drug war. Just because you can plausibly achieve lasting victory and reap huge benefits doesn't mean it will always work out that way.

Signaling is a second counterpoint. You might want to drop the arms race, but you might be faced with a situation where a costly signal that you're willing and able to use force, or even to run a real risk of a vicious cycle of adaptive entropy, is what's required to elicit cooperation. You need to make a show of strength. You need to show that you're not fixated on the outcome of inner harmony or of maintaining slack. You're showing you can drive a hard bargain, and your potential future employer needs to see that so they'll trust you to drive a hard bargain on their behalf if they hire you. The fact that those future negotiations are themselves a form of adaptive entropy is their problem, not yours: you are just a hired gun, a professional.

Or on the other hand, consider How to Win Friends and Influence People. This is a book about striving, about negotiating, about getting what you want out of life. It's about listening, but every story in the book is about how to use listening and personal warmth to achieve a specific outcome. It's not a book about taking stock of your goals. It's about sweetening the deal to make the deal go down.

And sometimes you're just dealing with problems of physics, information management, skill-building, and resource acquisition. Digging a ditch, finding a restaurant, learning to cook, paying the bills. These often have straightforward, "forcing" solutions and can be dealt with one by one as they arise. There is not always a need to figure out all your goals, constraints, and resources and run them through some sort of optimization algorithm in order to make decisions. You're a human, you typically navigate the world with heuristics, and fighting against your nature by avoiding outcome fixation and not forcing things is (sometimes, but not always) itself a recipe for vicious cycles of adaptive entropy.

Sometimes, vicious cycles of competition have side benefits. Sometimes, these side benefits can outweigh the costs of the competition. Workers and companies do all sorts of stupid, zero-to-negative sum behaviors in their efforts to compete in the short run. But the fact that they have to compete, that there is only so much demand to satisfy at any given time, is what motivates them to outperform. We all reap the benefit of that pressure to excel, applied over the long term.

What I find valuable in this post is searching for a more general, less violent and anthropomorphized name for this concept than "arms race." I'm not convinced "adaptive entropy" is the right one either, but that's OK. What concerns me is that it feels like the author is encouraging readers to interpret all their attempts to problem-solve through deliberate, forcing action as futile. Knowing this *may* be the case, being honest about why we might be engaged in futile behavior despite being cognizant of that, and offering alternatives all seem good. I would add that this isn't *always* the case, and it's important to have ways of exploring and testing different ways to conceptualize the problems you face in your life until you come to enough clarity on their root causes to address them productively.

I also think the attitude expressed in this post is probably underrated on LessWrong and in the rationalist-adjacent world. My arc as a rationalist was one of increasing agency: belief in my ability to bend the world to my will, willingness to define goals as outcomes and pursue them in straightforward ways, and the habit of creating a definition of success and then pursuing it in order to get 70% of what I really want instead of 10%. That's a part of my nature now. But many of the problems in my daily life - navigating living with my partner, operating in an institutional setting, making smart choices about an analytical approach in collaboration with colleagues, exploring the risks and benefits of a potential project - generate conflicts that aren't particularly helped by trying to force things. The conflict itself points out that my true goals aren't the same as the outcome I was striving for when I contributed to the conflict, so conflict can serve an information-gathering purpose.

I'm doing something dangerous here, which is objecting to seeming implications of this post that the author didn't always directly state. The reason it's dangerous is that it can appear to the author and to others that I'm implicitly claiming the author hasn't considered those implications. So I'll just conclude by saying that I don't really have any assumptions about what Valentine thinks of the points I'm making. These are just the thoughts this post provoked in me.

Comment by DirectedEvolution (AllAmericanBreakfast) on The Economics of the Asteroid Deflection Problem (Dominant Assurance Contracts) · 2023-09-03T02:32:16.208Z · LW · GW

I think DACs face two challenges.

  1. The cost/benefit ratio across the population of potential projects is bimodal. Projects are either so attractive that they have no trouble attracting donors and executors via a normal Kickstarter, or so unattractive that they'll fail to secure funding with or without a DAC.
  2. Even if DACs were normal, Bob the Builder exposes himself to financial risk in order to launch one. He has to increase his funding goal to compensate, making the value proposition worse (the toy calculation below makes this concrete).
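To make the second point concrete, here is a toy calculation; every number below is invented for illustration. If Bob offers a refund bonus to early pledgers when the campaign fails, he can only recoup that expected liability in the worlds where the campaign succeeds, so his funding goal has to rise:

```python
# Toy dominant assurance contract arithmetic; all numbers are made up.
goal = 100_000          # funding Bob actually needs for the project
bonus = 50              # refund bonus per pledger if the campaign fails
pledgers_if_fail = 400  # pledgers Bob expects to owe bonuses to on failure
p_success = 0.6         # Bob's estimate of the campaign succeeding

# Expected bonus liability across failed campaigns.
expected_bonus_cost = (1 - p_success) * bonus * pledgers_if_fail

# Bob only collects funds when he succeeds, so to break even in expectation
# he must inflate his funding goal by the liability divided by p_success.
inflated_goal = goal + expected_bonus_cost / p_success

print(f"expected bonus liability: ${expected_bonus_cost:,.0f}")
print(f"goal inflated from ${goal:,} to ${inflated_goal:,.0f}")
```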

For these reasons, it's hard for me to get excited about DACs.

There is probably a narrow band of projects where DACs are make-or-break, and because you're excited about them, I think it's great if you get the funding you're hoping for and succeed in normalizing them. Prove me wrong, by all means!

Comment by DirectedEvolution (AllAmericanBreakfast) on Contra Contra the Social Model of Disability · 2023-07-21T08:25:36.754Z · LW · GW

Goodness gracious, the reaction to this post has made me realize that I have a fundamental disconnect with the LessWrong community’s way of parsing arguments, one I had just not noticed before. I think I’m no longer interested in it or the people who post here in the way I used to be. If epistemic spot checks like this are not valued, that’s a huge problem for me. Really sad.

I’ve taken a break from LessWrong before, but I am going to take a longer one now from both LessWrong and the wider LW-associated online scene. It’s not that the issues aren’t important - it’s that I don’t trust the epistemics of many of the major voices here, and I think the patterns of how posts get upvoted and downvoted reflect values that frequently don’t accord with mine. I also don’t see hope for improving the situation.

That said, I’ve learned a lot from specific individuals and ideas on LW over the years. You know who you are. I’ll be glad to take those influences along with me wherever I find myself spending time in the future.

Comment by DirectedEvolution (AllAmericanBreakfast) on Contra Contra the Social Model of Disability · 2023-07-21T05:12:02.628Z · LW · GW

My main aim is just to show that Scott did not represent his quoted sources accurately. I think the Social Model offers some useful terminology that I’m happy to adopt, and I am interested in how it fits into conversations about disability. My main point of frustration is seeing how casually Scott panned it without reading his sources closely, and how seemingly uninterested so many of my readers appear to be in that misrepresentation.

Comment by DirectedEvolution (AllAmericanBreakfast) on AllAmericanBreakfast's Shortform · 2023-07-21T04:21:33.581Z · LW · GW

I am really disappointed in the community’s response to my Contra Contra the Social Model of Disability post.

Comment by DirectedEvolution (AllAmericanBreakfast) on Contra Contra the Social Model of Disability · 2023-07-21T04:18:47.153Z · LW · GW

I am only familiar with the interactionist model as articulated by Scott. One difference appears to be that the Social Model carves out the category of “disability” to refer specifically to morally wrong ways that society restricts, discriminates against, or fails to accommodate impaired people. It has a moral stance built in. The interactionist model uses “disability” as a synonym for impairment and doesn’t seem to have an intrinsic moral stance - it just makes the neutral statement that what people can and can’t do depends on both environment and physical impairment.

Comment by DirectedEvolution (AllAmericanBreakfast) on Progress links and tweets, 2023-07-20: “A goddess enthroned on a car” · 2023-07-20T18:59:15.498Z · LW · GW

Weed zapper link has the wrong URL

Comment by DirectedEvolution (AllAmericanBreakfast) on Said Achmiz's Shortform · 2023-07-20T18:48:45.445Z · LW · GW

FYI, some time ago I had accidentally banned you and two other users from my personal posts only, but realized when you commented that I hadn’t banned you from all my posts as I’d intended. The ban I enacted today isn’t specifically a response to your most recent comments. Since you took the time to post them and then were cut off, which I feel bad about, I’ll make sure to take the time to read them. I fully support you cross-posting them here.

Comment by DirectedEvolution (AllAmericanBreakfast) on Contra Contra the Social Model of Disability · 2023-07-20T15:15:29.494Z · LW · GW

You’re getting at the point about where we draw the line between a reasonable and unreasonable accommodation. The Social Model as defined by the linked articles doesn’t intrinsically say where that ought to be, though many people who understand the Social Model will also have opinions on line-drawing.

Most of the Social Model examples are about things like wheelchair ramps in buildings or not discriminating against people for jobs they’re able to do. One from the articles was extreme (teach sign language to everyone).

I think it is a mistake to criticize the Social Model on grounds that it is too expansive in what accommodations it demands, because it doesn’t demand any. But I also think it’s a mistake to use it as a justification for specific accommodations, because it doesn’t demand any.

Comment by DirectedEvolution (AllAmericanBreakfast) on Contra Contra the Social Model of Disability · 2023-07-20T15:07:25.083Z · LW · GW

Thanks for the kudos! This post is going to get downvoted into oblivion, though. I just wanted something I can link to in the future when people start linking to Scott’s original “Contra” article as if he’d performed some sort of incisive criticism of the Social Model.

Comment by DirectedEvolution (AllAmericanBreakfast) on Contra Contra the Social Model of Disability · 2023-07-20T15:05:52.674Z · LW · GW

N of 1, but I grasped the intended meanings of “impaired” and “disabled” before even reading the original articles, and adopted them into my language. As you can see from this article, adopting more precise and differentiated definitions for these two terms hasn’t harmed my ability to understand that not all functional impediments are caused by socially imposed disability.

So impossible? No.

If Scott had accurately described the articles he quoted before dealing with the perceived rhetorical trickery, I’d have let it slide. But he didn’t, and he has himself criticized others plenty of times in the past for inaccurately representing the contents of cited literature.

Comment by DirectedEvolution (AllAmericanBreakfast) on Aging and the geroscience hypothesis · 2023-07-16T04:08:41.952Z · LW · GW

Good question. Anthropomorphizing isn’t necessary; it’s just easier to write quickly in colloquial language, which is the tone I’m striving for here. I can’t think of a clearer short colloquial summary of antagonistic pleiotropy than “evolutionary favoritism of the young,” and though it does anthropomorphize, I think it gets the point across effectively as long as one doesn’t object to anthropomorphizing evolution on principle.

Comment by DirectedEvolution (AllAmericanBreakfast) on How to use ChatGPT to get better book & movie recommendations · 2023-07-15T17:01:53.561Z · LW · GW

Sorry my question wasn’t clear, but you managed to answer it anyway! Thanks :)

Comment by DirectedEvolution (AllAmericanBreakfast) on How to use ChatGPT to get better book & movie recommendations · 2023-07-15T09:16:38.623Z · LW · GW

I’m curious: have you found and read/watched content using this approach that you think you’d otherwise have missed or ignored? I’m wondering whether the utility comes from the ability to have a conversation with the algorithm, figuring out your preferences and meta-exploring the world of literature or film by generating interesting recommendation prompts, or whether it comes from the algorithm being skilled at finding surprising and unusual content you’d otherwise have struggled to find and get motivated to check out. In other words, are the superior recommendations mediated by its superior ability to help the user self-reflect?