Posts

Contra Contra the Social Model of Disability 2023-07-20T06:59:45.983Z
Compression of morbidity 2023-07-12T15:26:27.137Z
Aging and the geroscience hypothesis 2023-07-12T07:16:04.516Z
Popularizing vibes vs. models 2023-07-12T05:44:21.586Z
Commentless downvoting is not a good way to fight infohazards 2023-07-08T17:29:42.616Z
Request for feedback - infohazards in testing LLMs for causal reasoning? 2023-07-08T09:01:31.760Z
Is the 10% Giving What We Can Pledge Core to EA's Reputation? 2023-06-06T06:21:32.955Z
Forum Proposal: Karma Transfers 2023-04-30T00:34:55.318Z
Feature proposal: integrate LessWrong with ChatGPT to promote active reading 2023-03-19T03:41:34.781Z
Conceptual Pathfinding 2023-02-14T05:49:51.856Z
Human-AI collaborative writing 2023-02-12T14:57:09.129Z
How I Learn From Textbooks 2023-02-12T04:45:26.869Z
Here's Why I'm Hesitant To Respond In More Depth 2023-02-06T18:36:24.882Z
Stanzas On Power Calculation 2023-02-05T19:15:14.958Z
Pandemic Prediction Checklist: H5N1 2023-02-05T03:26:16.868Z
Money is a way of thanking strangers 2023-01-13T17:06:36.547Z
Summary of a new study on out-group hate (and how to fix it) 2022-12-04T01:53:32.490Z
Distillation Experiment: Chunk-Knitting 2022-11-07T19:56:39.905Z
Gandalf or Saruman? A Soldier in Scout's Clothing 2022-10-31T02:40:42.516Z
Unit Test Everything 2022-09-29T18:12:28.850Z
For Better Commenting, Stop Out Loud 2022-07-28T01:39:35.009Z
Making DALL-E Count 2022-07-22T09:11:57.931Z
Marburg Virus Pandemic Prediction Checklist 2022-07-18T23:15:13.286Z
App idea to help with reading STEM textbooks (feedback request) 2022-07-13T18:28:06.505Z
Five routes of access to scientific literature 2022-07-03T20:53:47.044Z
Common but neglected risk factors that may let you get Paxlovid 2022-06-21T07:34:02.685Z
Monkeypox: explaining the jump to Europe 2022-05-23T09:53:16.467Z
Feature request: draft comments 2022-05-17T21:21:36.306Z
How to place a bet on the end of the world 2022-04-20T18:24:18.212Z
Mental nonsense: my anti-insomnia trick 2022-03-28T22:47:03.434Z
Research on how pattern-finding contributes to memorization? 2022-03-23T21:42:37.742Z
Interest in a digital LW "book" club? 2022-02-09T05:59:19.878Z
Nuanced and Extreme Countersignaling 2022-01-24T06:47:49.410Z
Specialization 2021-12-23T03:23:16.532Z
What Caplan’s "Missing Mood" Heuristic Is Really For 2021-12-16T19:47:45.747Z
Anti-correlated causation 2021-12-06T04:36:17.439Z
Submit comments on Paxlovid to the FDA (deadline Nov 29th). 2021-11-27T18:44:27.615Z
Use Tools For What They're For 2021-11-23T08:26:19.174Z
A pharmaceutical stock pricing mystery 2021-11-14T01:19:54.930Z
Framing Practicum: Semistable Equilibrium 2021-10-14T23:31:51.515Z
The Mind Is A Shaky Control System 2021-09-29T20:49:02.479Z
Lakshmi's Magic Rope: An Intuitive Explanation of Ramanujan Primes 2021-09-02T16:36:07.225Z
Superintelligent Introspection: A Counter-argument to the Orthogonality Thesis 2021-08-29T04:53:30.857Z
A deeper look at doxepin and the FDA 2021-08-13T18:59:48.022Z
Founding a rationalist group at the University of Michigan 2021-08-11T19:07:42.367Z
What are some beautiful, rationalist sounds? 2021-08-06T01:22:38.334Z
A conversation about cooking, science, and creativity 2021-07-27T05:00:30.236Z
Can I teach myself scientific creativity? 2021-07-25T20:15:09.385Z
A cognitive algorithm for "free will." 2021-07-14T21:33:11.400Z
Opinionated Uncertainty 2021-06-29T00:11:32.183Z

Comments

Comment by DirectedEvolution (AllAmericanBreakfast) on The Wicked Problem Experience · 2024-01-12T08:31:36.684Z · LW · GW

This post and its companion have even more resonance now that I'm deeper into my graduate education and conducting my research more independently.

Here, the key insight is that research is an iterative process of re-scoping the project and executing on the current version of the plan. You are trying to make a product sufficient to move the conversation forward, not (typically) write the final word on the subject.

What you know, what resources you have access to, and your awareness of what people care about and what there's demand for all depend on your output, and all of that is key for the next project. A rule of thumb: at the beginning, you can think of your definition of done as delivering a set of valuable conclusions solid enough that it would take a reasonably smart person about 10 hours to find a substantial flaw.

You should keep rethinking whether the work you're doing (read: the costs you're paying) is delivering as much value as it could, given your current state of knowledge. As you work on the project and have conversations with colleagues, advisors, and users, your understanding of where the value lies and how large the costs of various directions are will constantly update, so you will need to update your focus along with it. Accept the interruptions as a natural, if uncomfortable, part of the process.

Remember that one way or another, you're going to get your product to a point where it has real, unique value to other people. You just need to figure out what that is and stay the course.

The advice here also helps me figure out how to interact with my fellow students when they're proposing excessively costly projects with no clear benefit due to their passion for and interest in the work itself and their love of rigor and design. Instead of quashing their passion or staying silent or being encouraging despite my misgivings, I can say something like "I think this could be valuable in the future once it's the main bottleneck to value, but I think [some easier, more immediately beneficial task] is the way to go for now. You can always do the thing you're proposing at a later time." This helps me be more honest while, I believe, helping them steer their efforts in ways that will bring them greater rewards.

The most actionable advice I got from the companion piece was the idea of making an outline of the types of evidence you'll use to argue for your claims, getting a sign-off from a colleague or advisor on the adequacy of that evidence before you go about gathering it, and updating that outline as you go along. I've been struggling with this exact issue and it seems like a great solution to the problem. I'm eager to try it with my PhD advisors.

Edit: as a final note, I think we are very fortunate to have Holden, a co-founder of a major philanthropic organization, describing what his process was like during its formation. Exposition on what he's tracking in his head is underprovided generally and Holden really went above and beyond on this one. 

Comment by DirectedEvolution (AllAmericanBreakfast) on Have You Tried Hiring People? · 2024-01-12T07:10:55.657Z · LW · GW

As of October, MIRI has shifted its focus. See their announcement for details.

I looked up MIRI's hiring page and it's still in about the same state. That kind of makes sense given the FTX implosion. But it leaves me asking whether MIRI is unconcerned with the criticism it received here, actively likes its approach to hiring, or both. We know Eliezer Yudkowsky, who's on their senior leadership team and board of directors, saw this, because he commented on it.

I found it odd that 3 of the 5 members of the senior leadership team (Malo Bourgon, Alex Vermeer, and Jimmy Rintjema) are from Ontario; Malo and Alex, at least, are University of Guelph alumni. I think this is worth examining, given that the concern here is specifically about whether MIRI's hiring practices are appropriate. I am surprised both because the University of Guelph, as far as I know, is not particularly renowned as an AI or AI safety research institution, and because Ontario is physically distant from San Francisco, ruling out geographic proximity as an explanation.

A bit of Googling turned up MIRI's own announcement page for Malo Bourgon's hiring as COO (he's now CEO). "Behind the scenes, nearly every system or piece of software MIRI uses has been put together by Malo, or in a joint effort by Malo and Alex Vermeer — a close friend of Malo’s from the University of Guelph who now works as a MIRI program management analyst."

I would like to understand better what professional traits made Malo originally seem like a good hire, given that his background doesn't sound particularly AI- or AI safety-focused. "His professional interests included climate change mitigation, and during his master’s, he worked on a project to reduce waste through online detection of inefficient electric motors. Malo started working for us shortly after completing his master’s in early 2012, which makes him MIRI’s longest-standing team member next to Eliezer Yudkowsky."

I'd also like to know what professional traits led to the hire of Alex Vermeer, given that both Alex and Malo were hired in 2012. Was a pre-existing friendship a factor in the hire, and if so, to what extent?

The three people from Ontario seem particularly involved in the workshop/recruiting/money aspect of the organization:

  • "Malo’s past achievements at MIRI include: coordinating MIRI’s first research workshops and establishing our current recruitment pipeline." (From the hiring announcement page)
  • For another U Guelph alumnus listed on their team page, "Alex Vermeer improves the processes and systems within and surrounding MIRI’s research team and research programs. This includes increasing the quality and quantity of workshops and similar programs, implementing best practices within the research team, coordinating the technical publication and researcher recruiting pipelines, and other research support projects."
  • "Jimmy Rintjema stewards finances and regulatory compliance, ensuring that all aspects of MIRI’s business administration remain organized and secure." (This is also from the team page)

In my personal opinion, Eliezer's short response to this post and the lack of other response (as far as I can see here) suggest that MIRI may be either uninterested in or incapable of managing its perception and reputation, at least on and around LessWrong. That makes me wonder how well it can realistically fulfill its new mission of public advocacy. I am also curious to know in detail how it came to be that so many people from Ontario occupy positions in senior leadership.

Comment by DirectedEvolution (AllAmericanBreakfast) on How satisfied should you expect to be with your partner? · 2024-01-12T06:13:53.312Z · LW · GW

I replicated this review, which you can check out in this colab notebook (I get much higher performance running it locally on my 20-core CPU).

I found only one cluster of discrepancies between my analysis and Vaniver's: in mine, mating is even more assortative than in the original work:

  • Pearson R of the sum of partner stats is 0.973 instead of the previous 0.857
  • 99.6% of partners have an absolute difference in summed stats < 6, instead of the previous 83.3%.
  • I wasn't completely sure if Vaniver's "net satisfaction" was the difference of self-satisfaction and satisfaction with partner or perhaps the log average ratio. I used the difference (since theoretically self-satisfaction could be zero, which would make the ratio undefined). Average net satisfaction was downshifted from Vaniver's result. The range I found was , while Vaniver's was .

In Vaniver's analysis, `corr` represents an adjustable correlation between a person's preferences and their own traits. Higher values of `corr` result in a higher correspondence between one's own preferences and one's own traits.

One important impact of this discrepancy is that the transition between being on average more self-satisfied than satisfied with one's partner occurs at around  rather than , which intuitively makes sense to me, given the highly assortative result and the fact that the analysis directly mixes an initial set of preferences with some random data to form the final preferences as a function of `corr`.
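For anyone who wants to poke at these dynamics without the notebook, here is a minimal sketch of the kind of simulation involved, assuming Gaussian stats, preferences formed by mixing one's own stats with noise via `corr`, and crude assortative pairing by total stats. The variable names and the pairing rule are my own simplifications, not Vaniver's exact code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_stats, corr = 10_000, 10, 0.5  # illustrative values only

stats = rng.normal(size=(n_people, n_stats))   # each person's traits
noise = rng.normal(size=(n_people, n_stats))   # random component of preferences
prefs = corr * stats + (1 - corr) * noise      # preferences as a corr-weighted mix

# Crude assortative pairing: sort people by total stats and pair adjacent ranks.
order = np.argsort(stats.sum(axis=1))
a, b = order[0::2], order[1::2]

self_sat = (prefs * stats).sum(axis=1)           # how well you satisfy your own preferences
partner_sat = (prefs[a] * stats[b]).sum(axis=1)  # how well your partner satisfies them
net_sat = self_sat[a] - partner_sat              # the "difference" definition used above

print(np.corrcoef(stats[a].sum(axis=1), stats[b].sum(axis=1))[0, 1])  # assortativity
print(net_sat.mean())                            # average net satisfaction
```

Sweeping `corr` from 0 to 1 in a loop and watching where `net_sat.mean()` crosses zero is the quickest way to locate the self-satisfaction transition point described above.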

Can we ground these results in empirical data, even though we can't observe preferences and stats with the same clarity and comprehensiveness in real-world data?

One way we can try is to consider the "self-satisfaction" metric we are producing in our simulation to be essentially the same thing as "self-esteem." There is a literature relating self-esteem to partner satisfaction in diverse cultures longitudinally over substantial periods of time. As we might expect, self-esteem, partner satisfaction, and marital satisfaction all seem to be interrelated.

  • Predicting Marital Satisfaction From Self, Partner, and Couple Characteristics: Is It Me, You, or Us?
    • Men and women had similar scores in personality traits of social potency, dependability, accommodation, and interpersonal relatedness.
    • Broadly, self-satisfaction, partner-satisfaction, and having traits in common are all positively associated with marital satisfaction.
  • Partner Appraisal and Marital Satisfaction: The Role of Self-Esteem and Depression
    • "Regardless of self-esteem and depression level, and across trait categories, targets were more maritally satisfied when their partners viewed them positively and less satisfied when their partners viewed them negatively."
  • The Dynamics of Self–Esteem in Partner Relationships
    • "[S]elf–esteem and all three aspects of relationship quality are dynamically intertwined in such a way that both previous levels and changes in one domain predict later changes in the other domain."
  • Relationships between self-esteem and marital satisfaction among women
    • "Marital satisfaction was found to be positively correlated with self-esteem in both cities, so that higher self-esteem was associated with greater satisfaction."
  • Development of self-esteem and relationship satisfaction in couples: Two longitudinal studies.
    • "Second, initial level of self-esteem of each partner predicted the initial level of the partners’ common relationship satisfaction, and change in self-esteem of each partner predicted change in the partners’ common relationship satisfaction. Third, these effects did not differ by gender and held when controlling for participants’ age, length of relationship, health, and employment status. Fourth, self-esteem similarity among partners did not influence the development of their relationship satisfaction. The findings suggest that the development of self-esteem in both partners of a couple contributes in a meaningful way to the development of the partners’ common satisfaction with their relationship."
  • A Mediation Role of Self-Esteem in the Relationship between Marital Satisfaction and Life Satisfaction in Married Individuals
    • "According to the findings of the study, the mediation self-esteem between the marital satisfaction and life satisfaction was statistically significant (p<.001). The whole model was significant (F(5-288)= 36.71, p<.001) and it was observed that it explained 39% of the total variance in the life satisfaction. Self-esteem was positively associated with marital satisfaction and considered one of the most important determinants of life satisfaction."

Finally, I wonder what the value of `corr` is likely to be for participants in rationalist culture. A culture that promotes individual agency and self-improvement, the acknowledged serious challenges in our dating culture, our culture's egalitarian values, the far larger degree of control we have over ourselves than over our partners, and the tendency for people to seek a self-justifying, optimistic narrative all seem to me to point in the direction of `corr` being high. That would suggest a rationalist culture with perhaps higher levels of self-esteem than partner-esteem. Fortunately, that says nothing at all about the absolute level of self- and partner-esteem, which I hope are on average high.

I can't disagree with Vaniver's conclusion that people are "mostly being serious" when they describe their partner as their better half. But the results of my reanalysis, and my speculation on the value of `corr` (at least in rationalist-type culture), make me think this isn't because people are accurately appraising their partner as satisfying their own preferences better than they do themselves.

I looked around a bit more on Google Scholar (to be honest, just starting with the phrase "my better half"), and found a couple studies.

  • My Better Half: Strengths Endorsement and Deployment in Married Couples
    • "The present study focuses on married partners’ strengths endorsement and on their opportunities to deploy their strengths in the relationship, and explores the associations between these variables and both partners’ relationship satisfaction. The results reveal significant associations of strengths endorsement and deployment with relationship satisfaction, as expected. However, unexpectedly, men’s idealization of their wives’ character strengths was negatively associated with relationship satisfaction."
    • This is on a scale from 1-5 (p < .05).

  • Is it me or you? An actor-partner examination of the relationship between partners' character strengths and marital quality
    • "[W]e examined the effects of three strengths factors (caring, self-control, and inquisitiveness) of both the individual and the partner on marital quality, evaluated by indices measuring marital satisfaction, intimacy, and burnout. Our findings revealed that the individual’s three strengths factors were related to all of his or her marital quality indices (actor effects). Moreover, women’s caring, inquisitiveness and self-control factors were associated with men’s marital quality, and men’s inquisitiveness and self-control factors were associated with women’s marital quality (partner effects)."

So idealizing your partner looks like a neutral-to-negative behavior. Inquisitiveness looks like a trait that both genders value. It strikes me that there are many things that you can do for your partner that they can't do for themselves - positive and negative. They can't praise or idealize themselves (or it won't come off the same way, anyway). They can't ask themselves "how was your day?" They can't give themselves a hug in a difficult moment, or if they do, it doesn't feel the same as when their partner does it.

No matter how effective you are at operating in the world, there are certain things that you just cannot do for yourself. In many areas of life, only your partner can. That seems like good reason to call them your better half.

Comment by DirectedEvolution (AllAmericanBreakfast) on Slack matters more than any outcome · 2024-01-11T08:59:06.920Z · LW · GW

Epistemic status: I read the entire post slowly, taking careful sentence-by-sentence notes. I felt I understood the author's ideas and that something like the general dynamic they describe is real and important. I notice this post is part of a larger conversation, at least on the internet and possibly in person as well, and I'm not reading the linked background posts. I've spent quite a few years reading a substantial portion of LessWrong and LW-adjacent online literature and I used to write regularly for this website.

This post is long and complex. Here are my loose definitions for some of the key concepts:
 

  • Outcome fixation: Striving for a particular outcome, regardless of what your true goals are and no matter the costs.
  • Addiction: Reacting to discomfort with a soothing distraction, typically in ways that cause the problem to reoccur, rather than addressing its root causes.
  • Adaptive entropy: An arms race between two opposing, mutually distrusting forces, potentially arriving at a stable but costly equilibrium.
  • Earning trust: A process that can dissolve the arms race of adaptive entropy through listening, learning not to apply force, tolerating discomfort, prioritizing understanding the other side, and ending outcome fixation.

I can find these dynamics in my own life in certain ways. Trying to explain my research to polite but uninterested family members. Trying to push ahead with the next stage in the experiment when I'm not sure if the previous data really holds up. Reading linearly through a boring textbook even though I'm not really understanding it anymore, because I just want to be able to honestly say I read chapter 1. Arguing with almost anybody online. Refusing to schedule my holiday visits home with the idea that visits to the people I want to see will "just happen naturally."

And broadly, I agree with Valentine's prescription for how to escape the cycle. Wait for them to ask me about my research, keep my reply short, and focus my scientific energy on the work itself and my relationships with my colleagues. RTFM, plan carefully, review your results carefully, and base your reputation on conscientiousness rather than getting the desired result. Take detailed, handwritten notes, draw pictures, skim the chapter while searching for the key points you really need to know, lurk more and write what you know to a receptive audience. Plan your vacations home after consulting with friends and family on how much time they hope to spend with you, and build in time to rest and recharge.

I think Valentine's post is a bit overstated in its rejection of force as a solution to problems. There are plenty of situations where you're being resisted by an adaptive intelligence that's much weaker and less strategic than you, and you can win the contest by force. In global terms, the Leviathan, or the state and its monopoly on violence, is an example. It's a case where the ultimate victory of a superior force over all weaker powers is the one thing that finally allows everybody to relax, put down the weapons, and gain slack. Maintaining the slack from the monopoly on violence requires continuously paying the cost of maintaining a military and police force, but the theory is that it's a cost that pays for itself. Of course, if the state tries to exert power over a weaker force and fails, you get the drug war. Just because you can plausibly achieve lasting victory and reap huge benefits doesn't mean it will always work out that way.

Signaling is a second counterpoint. You might want to drop the arms race, but you might be faced with a situation where a costly signal that you're willing and able to use force, or even to run a real risk of a vicious cycle of adaptive entropy, is what's required to elicit cooperation. You need to make a show of strength. You need to show that you're not fixated on the outcome of inner harmony or of maintaining slack. You're showing you can drive a hard bargain, and your potential future employer needs to see that so they'll trust that you'll drive a hard bargain on their behalf if they hire you. The fact that those future negotiations are themselves a form of adaptive entropy is their problem, not yours: you are just a hired gun, a professional.

Or on the other hand, consider How to Win Friends and Influence People. This is a book about striving, about negotiating, about getting what you want out of life. It's about listening, but every story in the book is about how to use listening and personal warmth to achieve a specific outcome. It's not a book about taking stock of your goals. It's about sweetening the deal to make the deal go down.

And sometimes you're just dealing with problems of physics, information management, skill-building, and resource acquisition. Digging a ditch, finding a restaurant, learning to cook, paying the bills. These often have straightforward, "forcing" solutions and can be dealt with one by one as they arise. There is not always a need to figure out all your goals, constraints, and resources, and go through some sort of optimization algorithm in order to make decisions. You're a human, you typically navigate the world with heuristics, and fighting against your nature by avoiding outcome fixation and not forcing things is (sometimes, but not always), itself a recipe for vicious cycles of adaptive entropy.

Sometimes, vicious cycles of competition have side benefits. Sometimes, these side benefits can outweigh the costs of the competition. Workers and companies do all sorts of stupid, zero-to-negative sum behaviors in their efforts to compete in the short run. But the fact that they have to compete, that there is only so much demand to satisfy at any given time, is what motivates them to outperform. We all reap the benefit of that pressure to excel, applied over the long term.

What I find valuable in this post is its search for a more general, less violent and anthropomorphized name for this concept than "arms race." I'm not convinced "adaptive entropy" is the right one either, but that's OK. What concerns me is that it feels like the author is encouraging readers to interpret all their attempts to problem-solve through deliberate, forcing action as futile. Knowing this *may* be the case, being honest about why we might be engaged in futile behavior despite being cognizant of that, and offering alternatives all seem good. I would add that this isn't *always* the case, and it's important to have ways of exploring and testing different ways to conceptualize the problems you face in your life until you come to enough clarity on their root causes to address them productively.

I also think the attitude expressed in this post is probably underrated on LessWrong and in the rationalist-adjacent world. My arc as a rationalist was one of increasing levels of agency: belief in my ability to bend the world to my will, willingness to define goals as outcomes and pursue them in straightforward ways, to create a definition of success and then pursue it in order to get 70% of what I really want instead of 10%. That's a part of my nature now. Many of the problems in my daily life - navigating living with my partner, operating in an institutional setting, making smart choices on an analytical approach in collaboration with colleagues, exploring the risks and benefits associated with a potential project - generate conflicts that aren't particularly helped by trying to force things. The conflict itself points out that my true goals aren't the same thing as the outcome I was striving for when I contributed to the conflict, so conflict can serve an information-gathering purpose.

I'm doing something dangerous here, which is making objections to seeming implications of this post that the author didn't always directly state. The reason it's dangerous is that it can appear to the author and to others that you're making an implied claim that the author hasn't considered those implications. So I'll just conclude by saying that I don't really have any assumptions about what Valentine thinks about these points I'm making. These are just the thoughts that this post provoked in me.

Comment by DirectedEvolution (AllAmericanBreakfast) on The Economics of the Asteroid Deflection Problem (Dominant Assurance Contracts) · 2023-09-03T02:32:16.208Z · LW · GW

I think DACs face two challenges.

  1. The cost/benefit ratio for the population of potential projects is bimodal. They're either so attractive that they have no trouble attracting donors and executors via normal Kickstarter, or so unattractive that they'll fail to secure funding with or without a DAC.
  2. Even if DACs were normalized, Bob the Builder exposes himself to financial risk in order to launch one. He has to increase his funding goal to compensate, making the value proposition worse.

For these reasons, it's hard for me to get excited about DACs.

There is probably a narrow band of projects where DACs are make-or-break, and because you're excited about them, I think it's great if you get the funding you're hoping for and succeed in normalizing them. Prove me wrong, by all means!

Comment by DirectedEvolution (AllAmericanBreakfast) on Contra Contra the Social Model of Disability · 2023-07-21T08:25:36.754Z · LW · GW

Goodness gracious, the reaction to this post has made me realize that I have a fundamental disconnect with the LessWrong community’s way of parsing arguments in a way I had just not realized. I think I’m no longer interested in it or the people who post here in the way I used to be. If epistemic spot checks like this are not valued, that’s a huge problem for me. Really sad.

I’ve taken a break from LessWrong before, but I am going to take a longer one now from both LessWrong and the wider LW-associated online scene. It’s not that the issues aren’t important - it’s that I don’t trust the epistemics of many of the major voices here, and I think the patterns of how posts are up- and downvoted reflect values that frequently don’t accord with mine. I also don’t see hope for improving the situation.

That said, I’ve learned a lot from specific individuals and ideas on LW over the years. You know who you are. I’ll be glad to take those influences along with me wherever I find myself spending time in the future.

Comment by DirectedEvolution (AllAmericanBreakfast) on Contra Contra the Social Model of Disability · 2023-07-21T05:12:02.628Z · LW · GW

My main aim is just to show that Scott did not represent his quoted sources accurately. I think the Social Model offers some useful terminology that I’m happy to adopt, and I am interested in how it fits into conversations about disability. My main point of frustration is seeing how casually Scott panned it without reading his sources closely, and how seemingly uninterested so many of my readers appear to be in that misrepresentation.

Comment by DirectedEvolution (AllAmericanBreakfast) on AllAmericanBreakfast's Shortform · 2023-07-21T04:21:33.581Z · LW · GW

I am really disappointed in the community’s response to my Contra Contra the Social Model of Disability post.

Comment by DirectedEvolution (AllAmericanBreakfast) on Contra Contra the Social Model of Disability · 2023-07-21T04:18:47.153Z · LW · GW

I am only familiar with the interactionist model as articulated by Scott. One difference appears to be that the Social Model carves out the category of “disability” to refer specifically to morally wrong ways that society restricts, discriminates against, or fails to accommodate impaired people. It has a moral stance built in. The Interactionist model uses “disability” as a synonym for impairment and doesn’t seem to have an intrinsic moral stance - it just makes the neutral statement that what people can or can’t do has to do with both environment and physical impairment.

Comment by DirectedEvolution (AllAmericanBreakfast) on Progress links and tweets, 2023-07-20: “A goddess enthroned on a car” · 2023-07-20T18:59:15.498Z · LW · GW

Weed zapper link has the wrong URL

Comment by DirectedEvolution (AllAmericanBreakfast) on Said Achmiz's Shortform · 2023-07-20T18:48:45.445Z · LW · GW

FYI, some time ago I had accidentally banned you and two other users in my personal posts only, and realized when you commented that I hadn’t banned you in all my posts as I’d intended. The ban I enacted today isn’t specifically in response to your most recent comments. Since you took the time to post them and then were cut off, which I feel bad about, I’ll make sure to take the time to read them. I fully support you cross-posting them here.

Comment by DirectedEvolution (AllAmericanBreakfast) on Contra Contra the Social Model of Disability · 2023-07-20T15:15:29.494Z · LW · GW

You’re getting at the point about where we draw the line between a reasonable and unreasonable accommodation. The Social Model as defined by the linked articles doesn’t intrinsically say where that ought to be, though many people who understand the Social Model will also have opinions on line-drawing.

Most of the Social Model examples are about things like wheelchair ramps in buildings or not discriminating against people for jobs they’re able to do. One from the articles was extreme (teach sign language to everyone).

I think it is a mistake to criticize the Social Model on grounds that it is too expansive in what accommodations it demands, because it doesn’t demand any. But I also think it’s a mistake to use it as a justification for specific accommodations, because it doesn’t demand any.

Comment by DirectedEvolution (AllAmericanBreakfast) on Contra Contra the Social Model of Disability · 2023-07-20T15:07:25.083Z · LW · GW

Thanks for the kudos! This post is going to get downvotes into oblivion, though. I just wanted something I can link to in the future when people start linking to Scott’s original “Contra” article as if he’d performed some sort of incisive criticism of the Social Model.

Comment by DirectedEvolution (AllAmericanBreakfast) on Contra Contra the Social Model of Disability · 2023-07-20T15:05:52.674Z · LW · GW

N of 1, but I realized the intended meaning of “impaired” and “disabled” before even reading the original articles and adopted them into my language. As you can see from this article, adopting new and more precise and differentiated definitions for these two terms hasn’t harmed my ability to understand that not all functional impediments are caused by socially imposed disability.

So impossible? No.

If Scott had accurately described the articles he quoted before dealing with the perceived rhetorical trickery, I’d have let it slide. But he didn’t, and he’s criticized inaccurately representing the contents of cited literature plenty of times in the past.

Comment by DirectedEvolution (AllAmericanBreakfast) on Aging and the geroscience hypothesis · 2023-07-16T04:08:41.952Z · LW · GW

Good question. Anthropomorphizing isn’t necessary; it’s just easier for writing quickly in colloquial language, which is the tone I’m striving for here. I can’t think of a clearer short colloquial summary of antagonistic pleiotropy than “evolutionary favoritism of the young,” and though it does anthropomorphize, I think it gets the point across effectively as long as one doesn’t object to anthropomorphizing evolution on principle.

Comment by DirectedEvolution (AllAmericanBreakfast) on How to use ChatGPT to get better book & movie recommendations · 2023-07-15T17:01:53.561Z · LW · GW

Sorry my question wasn’t clear, but you managed to answer it anyway! Thanks :)

Comment by DirectedEvolution (AllAmericanBreakfast) on How to use ChatGPT to get better book & movie recommendations · 2023-07-15T09:16:38.623Z · LW · GW

I’m curious, have you found and read/watched content using this approach that you think you’d have missed or ignored otherwise? I’m wondering if the utility comes from the ability to have a conversation with the algorithm and figure out your preferences and meta-explore the world of literature or film by generating interesting recommendation prompts, or whether it comes from the algorithm being skilled at finding surprising and unusual content you’d otherwise have struggled to find and get motivated to check out. In other words, are the superior recommendations mediated by its superior ability to help the user self-reflect?

Comment by DirectedEvolution (AllAmericanBreakfast) on Aging and the geroscience hypothesis · 2023-07-12T23:42:26.526Z · LW · GW

Thanks for the feedback! I plan on anthropomorphizing evolution freely in this series of posts, because I think that for most readers, describing evolution in this way is more intuitive. I am making the assumption that any serious reader is fully aware that evolution has no actual teleology. Since this isn't focused on explaining the basics of evolutionary theory to readers, it unfortunately won't cure somebody who's so confused about evolution as to think it actually has a teleology or goal.

Comment by DirectedEvolution (AllAmericanBreakfast) on AllAmericanBreakfast's Shortform · 2023-07-12T21:25:52.017Z · LW · GW

Does anybody know of research studying whether prediction markets/forecasting averages become more accurate if you exclude non-superforecaster predictions vs. including them?

To be specific, say you run a forecasting tournament with 1,000 participants. After determining the Brier score of each participant, you compute what the Brier score would be for the average of the best 20 participants vs. the average of all 1000 participants. Which average would typically have a lower Brier score - the average of the best 20 participants' predictions, or the average of all 1000 participants' predictions?
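Absent such a study, the comparison is easy to mock up on synthetic data. Here's a sketch; the distributional assumptions (uniform event probabilities, Gaussian forecaster noise) are entirely mine and only illustrate the mechanics of the test:

```python
import numpy as np

rng = np.random.default_rng(0)
n_forecasters, n_events = 1000, 400

true_p = rng.uniform(size=n_events)                 # latent event probabilities
outcomes = rng.binomial(1, true_p)                  # realized outcomes
noise = rng.uniform(0.05, 0.4, size=n_forecasters)  # per-forecaster noise level
forecasts = np.clip(
    true_p + rng.normal(0, 1, (n_forecasters, n_events)) * noise[:, None], 0, 1
)

# Pick the top 20 on the first half of questions, then score on the held-out half
# (selecting and scoring on the same questions would overstate the top group).
fit, test = slice(0, n_events // 2), slice(n_events // 2, None)
brier_fit = ((forecasts[:, fit] - outcomes[fit]) ** 2).mean(axis=1)
top20 = np.argsort(brier_fit)[:20]

brier_top = ((forecasts[top20][:, test].mean(axis=0) - outcomes[test]) ** 2).mean()
brier_all = ((forecasts[:, test].mean(axis=0) - outcomes[test]) ** 2).mean()
print(f"top-20 average: {brier_top:.4f}  vs  all-1000 average: {brier_all:.4f}")
```

In this toy version the answer depends on how heavy-tailed the skill distribution is, which is exactly the empirical question I'd want the research to answer.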

Comment by DirectedEvolution (AllAmericanBreakfast) on Aging and the geroscience hypothesis · 2023-07-12T07:17:09.276Z · LW · GW

Apologies - I accidentally moved this to drafts while making minor formatting edits and had to republish, which put it on the front page again.

Comment by DirectedEvolution (AllAmericanBreakfast) on Elizabeth's Shortform · 2023-07-12T02:30:54.940Z · LW · GW

Here is some random NFT (?) company (?) that's doing retroactive grants to support its community builders. I am in no way endorsing this specific example as I know nothing about it, just noticing that some are trying it out.

Comment by DirectedEvolution (AllAmericanBreakfast) on AllAmericanBreakfast's Shortform · 2023-07-12T02:27:05.477Z · LW · GW

P(Fraud|Massive growth & Fast growth & Consistent growth & Credence Good)

Bernie Madoff's ponzi scheme hedge fund had almost $70 billion (?) in AUM at its peak. Not adjusting for interest, if it existed today, it would be about the 6th biggest hedge fund, roughly tied with Two Sigma Investments.

 Madoff's scheme lasted 17 years, and if it had existed today, it would be the youngest hedge fund on the list by 5 years. Most top-10 hedge funds were founded in the 70s or 80s and are therefore 30-45 years old.

Theranos was a $10 billion company at its peak, which would have made it about the 25th largest healthcare company if it existed today, not adjusting for interest. It achieved that valuation 10 years after it was founded, which, a very cursory check suggests, made it decades younger than most other companies on that list.

FTX was valued at $32 billion and was the third-largest crypto exchange by volume at its peak, and it was founded just a few years before it collapsed. If it had been a hedge fund, it would have been on the top-10 list. Its young age unfortunately doesn't help us much, since crypto is such a young technology, except in that a lot of people regard the crypto space as a whole as being rife with fraud.

Hedge funds and medical testing companies are credence goods - we have to trust that their products work.

So we have a sensible suggestion of a pattern to watch out for with the most eye-popping frauds - massive, shockingly fast growth of a company supplying a credence good. The faster and bigger a credence-good company grows, and the more consistent the results or the absence of competition, the likelier the explanation is to be fraud.
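To make the shape of that update concrete, here's the odds-form Bayes calculation with entirely made-up numbers: the prior and the likelihood ratios are placeholders, and multiplying them assumes the four signals are conditionally independent, which they surely aren't:

```python
# Odds-form Bayes with hypothetical numbers, purely to illustrate the update.
prior_fraud = 0.01                       # assumed base rate of fraud (made up)
prior_odds = prior_fraud / (1 - prior_fraud)

# Hypothetical likelihood ratios P(signal | fraud) / P(signal | honest):
lr_massive, lr_fast, lr_consistent, lr_credence = 3.0, 4.0, 5.0, 2.0

# Multiplying likelihood ratios assumes conditional independence of the signals.
posterior_odds = prior_odds * lr_massive * lr_fast * lr_consistent * lr_credence
p_fraud = posterior_odds / (1 + posterior_odds)
print(f"P(fraud | all four signals) ~ {p_fraud:.2f}")  # ~0.55 with these numbers
```

Even with a 1% base rate, four individually weak signals can push the posterior past a coin flip, which matches the intuition that the combination, not any single feature, is what should draw scrutiny.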

Comment by DirectedEvolution (AllAmericanBreakfast) on Grant applications and grand narratives · 2023-07-11T20:42:48.918Z · LW · GW

My complaint is that I think the existing applications don't make it obvious that that's an okay pitch to make. My goal is some combination of "get the forms changed to make it more obvious that this kind of pitch is okay" and "spread the knowledge that this can work even if the form seems like it wants something else".

That seems like an easy win - and if the grantmaker is specifically not interested in pure model-based justifications, saying so would also be helpful so that honest model-based applicants don't have to waste their time.

and fill out forms with the vibe that I'm definitely going to do these specific things, and if I don't, I have committed a moral fraud

That seems like a foolish grantmaking strategy - in the startup world, most VCs seem to encourage startups to pivot, kill unpromising projects, and assume that the first product idea isn't going to be the last one because it takes time to find a compelling benefit and product-market fit. To insist that the grantee stake their reputation not only on successful execution but also on sticking to the original project idea seems like a way to help projects fail while selecting for a mixture of immaturity and dishonesty. That doesn't mean I think those awarded grants are immoral - my hope is that most applicants are moral people and that such a rigid grantmaking process is just making the selection process marginally worse than it otherwise might be.


Honestly I think the best thing for funding me and people like me[3] might be to embrace impact certificates/retroactive grant making. It avoids the problems that stem from premature project legibilization without leaving grantmakers funding a bunch of random bullshit. That's probably a bigger deal than wording on a form. 

Yeah, I think this is an interesting space. Certainly much more work to make this work than changing the wording on a form though!

Sounds like we're pretty much in agreement at least in terms of general principles.

Comment by DirectedEvolution (AllAmericanBreakfast) on Aging and the geroscience hypothesis · 2023-07-11T06:00:38.859Z · LW · GW

Also, note point 5 - we expect that evolution will build us to be robust enough to survive only to the point where intrinsic and extrinsic causes of death are balanced out. So a viable alternative explanation is that humans and bowhead whales are two animals that are particularly resistant to extrinsic causes of death, which makes some sense in both cases.

Whether or not this hypothesis is correct is uncertain and may be context-dependent, and I haven't delved into the literature on this specific point enough to give any kind of authoritative opinion.

Comment by DirectedEvolution (AllAmericanBreakfast) on Aging and the geroscience hypothesis · 2023-07-11T05:54:59.514Z · LW · GW

On the other hand, my understanding of the warm-blooded naked mole rat in captivity is that it doesn't show clear signs of increasing mortality over time. We haven't kept enough naked mole rats for long enough to see how long that lasts or whether that result is robust. But we should care at least as much about increasing rates of mortality over time - which is what it conventionally means to "age" and is captured in the concept of "healthspan" - as we care about lifespan.

 

Edit: I’m wrong! Naked mole rats are basically cold blooded.

Comment by DirectedEvolution (AllAmericanBreakfast) on Aging and the geroscience hypothesis · 2023-07-09T16:21:44.518Z · LW · GW

The reason I didn't include material on the grandmother hypothesis is that it's not covered in this 8-page chapter. It's also not a key feature of most of the research on decreasing selection pressure with age, which assumes that the overwhelmingly most important contribution to the germ line is via reproduction rather than nurture.

The grandmother hypothesis is mainly attempting to explain menopause, which is indeed an evolutionary mystery.

The ability of parents to provide nurture, and the difficulty of producing survivable large offspring, drives some organisms, like humans, away from an r-selection strategy (lots of offspring; small, short-lived organisms; aiming for a few survivors) toward a K-selection strategy (a few offspring; large, long-lived organisms; aiming for a high likelihood of survival and reproduction in all offspring). But raising reproductive age, which enforces pressure for long life and thus the tendency for K-selected organisms to reach an older age, is an evolutionary cost. If evolution could engineer humans to become reproductively fertile at age 3 without any other costs, it would do so.

In K-selection, evolution is sacrificing opportunities to produce more young at a younger age in exchange for greater survivability and the ability to exploit new sources of energy. That can push reproductive age up, as we see in humans, and the effort to build us robustly enough to ensure we reach that later reproductive age both raises the likelihood that we live much longer and allows surviving grandparents to confer benefits on the young, the latter of which is a straightforward adaptive "win" from a gene's perspective.

But my take is that whatever nurture grandparents can provide is a side benefit that doesn't change the overwhelming evolutionary pressure to make ~all tradeoffs favor the young, meaning that the arguments from antagonistic pleiotropy and limited/loss of selection pressure in old age still apply with full force. That is an encouraging take, because it means that there's plenty of scope for modern medicine to improve healthspan and lifespan - not to mention that there are plenty of things we can do using technology that are just not evolutionarily accessible. For example, we can build mechanical replacements for our organs, design and manufacture molecules arbitrarily with a much greater level of purity and under more controlled conditions, and of course we can eliminate threats and artificially extend the age of first reproduction in ways that tend to drive evolution toward longer life and healthspans.

Comment by DirectedEvolution (AllAmericanBreakfast) on Aging and the geroscience hypothesis · 2023-07-09T15:48:24.028Z · LW · GW

I edited the first point to make the non-deathism more clear, since I would hate to lose my audience at paragraph 1! Thanks for giving me your reaction.

Comment by DirectedEvolution (AllAmericanBreakfast) on Aging and the geroscience hypothesis · 2023-07-09T15:20:39.784Z · LW · GW

Thank you for the encouragement! I would suggest reading the complete post once more, because the post/article specifically points out that aging, which is indeed entropic and inevitable, is counterbalanced by repair: “We can overcome the extrinsic damage and gradual aging of our houses, cars and clothing with cleaning, maintenance and repair.” The key to geroscience medicine - targeting aging - will be to slow the aging process (it can’t be stopped), but also to enhance repair (which can in theory indefinitely prolong life by replacing worn-out parts to keep the system as a whole functioning).

It also discusses the principles that explain why we can expect scope to improve longevity via modern medicine, such as antagonistic pleiotropy and the loss of selection pressure beyond child rearing age. Evolution isn’t trying very hard to optimize for longevity and health span, so we can do better.

Finally, the whole chapter is on the geroscience hypothesis, advocating a new approach where we treat disease by targeting aging itself. I didn’t include the fact that it specifically mentions the SENS foundation, cites de Grey, etc, but it does!

My aim in these posts will be to pretty faithfully track the order of ideas and main biological points as presented by the handbook, rather than layering in my own interpretation about the political or long-term research prospects and ethics if they’re not explicitly mentioned in the book. But the book is certainly not “deathist.”

One thing you, as a transhumanist-diaper baby and (perhaps) a person who doesn’t work professionally in biomedical research, may not know, is that there’s quite a lot of pressure in biomedicine to make measured, noncontroversial public statements about topics like this. Some writers like Aubrey de Grey just ignore all that (I believe de Grey is independently wealthy, which may enable him to do so), but many are, for whatever reason, going to make careful statements that fit within the conventional vibes of the medical field yet still, read intelligently, point in a radical direction. Figuring out how to make geroscience sound conventional, safe, and like everything else we study in medicine is an important normalizing step within the research community to counterbalance the hype and controversy machine.

Comment by DirectedEvolution (AllAmericanBreakfast) on Aging and the geroscience hypothesis · 2023-07-09T08:29:54.228Z · LW · GW

This is a test post to check audience reaction. If there are ways that the basic structure of this post could be improved, please let me know!

Comment by DirectedEvolution (AllAmericanBreakfast) on Why does anxiety (?) make me dumb? · 2023-07-09T05:47:50.039Z · LW · GW

Yes, I was unclear on what you were saying because of the wording, which seemed self-contradictory. Your description here is more clear and consistent.

It sounds to me a bit like you don't feel goal-oriented - as if you are following certain passive routines and distracted impulses, but don't have a feeling of striving and progress toward a meaningful end. You clearly feel dissatisfied with your current state, but don't see a specific alternative to strive for or don't see yourself as capable of putting a plan into action because of your distractibility.

I have personally never struggled with that particular issue, although I have many other challenges that I've had to work through in my life. So I may not be in a position to give advice - I have always been a pursuer of goals.

One thing you could try is working on that problem directly. For example, you say you fail to apply new ideas. I don't mean this in a rude way, but based on that, it seems like a fool's errand to give you yet more advice that you will likely fail to act on. So here is a very small thing you could do to show yourself that you're capable of acting on an idea: after you're done reading this sentence, stand up and take a (healthy and appropriate) physical action that's very unusual for you, which could be anything from singing a song to doing a pushup. Just show yourself that you're capable of reading something on your computer screen and acting on it.

If you can do that, then you can do bigger and more meaningful things. So you might want to try making a daily practice of listing some new ideas you would like to try, or goals you would like to achieve, and then trying at least one of them. They can be very small and simple, as long as they are novel breaks from the current routines in your life. As you gain more skill in goal-oriented behavior, you can start to consider larger strategies. What are bigger accomplishments you would like to achieve in your life, and what sub-goals would let you move in that direction? Then do those things.

The keywords here with respect to LessWrong are "rational agency," or just "agency" in general, and the related idea of "winning." You can probably find some interesting ideas to read by searching for those terms, although the real test is to act on them and incorporate them into your own life - which of course is the main challenge you are working on. Best of luck!

Comment by DirectedEvolution (AllAmericanBreakfast) on Why does anxiety (?) make me dumb? · 2023-07-09T03:01:46.492Z · LW · GW

I have two points/notes:

  1. Your post here made some self-contradictory-seeming claims. I'm not criticizing, just suggesting that you may benefit from additional work to clarify and articulate a coherent account of what you're struggling with. For example, the title of your post is "Why does anxiety (?) make me dumb," but that overemphasizes anxiety as a hypothesis relative to the rest of your post. You also claim both that "I’m stuck at... ‘understand the content of the statement’" and "I have no trouble understanding reasonably complex concepts, but just never will discuss them, think about them, or even apply them," and characterize yourself as a "human encyclopedia." Often, articulating the problem accurately means you're 90% of the way to solving it.
  2. I have personally experienced that a multi-year project of deliberately learning how to learn has yielded massive improvements in my ability to both learn and think. There were a huge number of mistakes along the way, but it has all culminated in a level of skill that I'm really proud of, despite my aging and the clear loss of certain mental faculties I possessed when younger. I think I am worse at automagically learning the way I used to, but better at learning overall via deliberate conscious effort. Yet my effort in this area depended a lot on the ability to use self-reflection and rational thinking to introspect, propose hypotheses, try things out, and iterate and improve, and if you're feeling bottlenecked in that area you might need to work on that first.

Comment by DirectedEvolution (AllAmericanBreakfast) on Grant applications and grand narratives · 2023-07-09T02:31:54.225Z · LW · GW

I'm glad GPT worked for you but I think it's a risky maneuver and I'm scared of the world where it is the common solution to this problem. 

Yeah, I would distinguish between using GPT to generate a grand vision vs. using it to express it in a particular style. The latter is how I used it - with the project I'm referring to, the model + vision were in place because I'd already spent almost a year doing research on the topic for my MS. However, I just don't have much experience or flair for writing that up in a way that is resonant and has that "institutional flavor," and GPT was helpful for that.

Here's a revision of the first paragraph you were asking about. I think that there are many grantmaking models that can work at least somewhat, but they all face tradeoffs. If you try to pick only projects with a model-grounded vision, you risk giving to empty grand narratives instead. If you try to pick only grantees with a good model, then you risk creating a stultified process in which grantees all feel pressure to plan everything out in advance to an unrealistic degree. If you regrant to just fund people who seem like they know what they're doing, then you make grants susceptible to privilege and corruption.

I think all these are risks, and probably at the end of the day, the ideal amount of grift, frustration, and privilege/corruption in grantmaking is not zero (an idea I take from Lying For Money, which says the same about fraud). And I believe this because I also think that grantmakers can have reasonable success in any of these approaches - vision-based, model-based, and person-based. There are also some projects that are based on a model that's legible to just about anybody, where the person carrying them out is credible, and where it clearly can operate on the world scale. I would characterize Kevin Esvelt's wastewater monitoring project that way. Projects like this are winners under any grantmaking paradigm, and that's the sort of project my first paragraph in my original comment was about.

Another way I might put it is that grantmaking is in the ideal case giving money to a project that ticks all three boxes (vision, model, person), but in many cases grants are about ticking one or two boxes and crossing one's fingers. I think it would be good to be clear about that and create a space for model-based, or model+person-based, or model+vision-based grantmaking with some clarity about what a pivot might look like if the vision, model or person didn't pan out.

I have to disagree with you at least somewhat about projects to improve epistemics. Maybe it's selection bias - I'm not plugged into the SF rationalist scene and it may be that there are a lot of sloppy ideas bruited about in that space, I don't know - but I can think of a bunch of projects to improve epistemics that I have personally benefitted from greatly: LessWrong and the online rationalist community, the suite of prediction markets and competitions, a lot of information-gathering and processing software tools, and of course a great deal of scientific research that helps me think more clearly not just about technical topics but about thinking itself. I wouldn't be at all surprised if there are a bunch of bad or insignificant projects that are things like workshops or minor software apps. I guess I just think that projects to improve epistemics don't seem obviously more difficult than others, the vision makes sense to me, and it seems tractable to separate the wheat from the chaff with some efficacy. That might be my own naivety and lack of experience, however.

I have personally benefitted from some of your projects and ideas, particularly the idea of epistemic spot-checks, which turn out to be useful even if you have, or are in the process of earning, a graduate degree in the subject. That's not only because there's a lot of bull out there, but also because the process of checking a true claim can greatly enrich your interpretation of it. When I read review articles, I frequently find myself reading the citations 2-3 layers deep, and even that doesn't seem like enough in many cases, because I gain such great benefits from understanding what exactly the top-level review summary is referring to.

It seems like your projects are somewhat on the borderline between academic research and a boutique report for individual or small-group decision making. I think both are useful. It's hard to judge utility unless you yourself need either the academic research or are making a decision about the same topic, so I can't opine about the quality of the reports you have generated.

I do think that my academic journey so far has made me see that there's tremendous utility in putting together the right collection of information to inform the right decision, but it's only possible to do that if you invest quite a bit of yourself into a particular domain and collaborate with others who are doing the same. So from the outside, it seems like it might be valuable to see if you can find a group of people doing work you really believe in, and then invest a lot in building up those relationships and figuring out how your research skills can be most informative. Maybe that's what you're already doing, I am not sure. But if I were a regranter and had money to give out at least on a model+person basis, I would happily regrant to you!

Comment by DirectedEvolution (AllAmericanBreakfast) on Commentless downvoting is not a good way to fight infohazards · 2023-07-09T01:52:04.244Z · LW · GW

Thank you for your thoughts, I think you are supplying valuable nuance. In private conversation I do see a general path by which this offers a strategy for capabilities enhancement, but I also think it's sufficiently low-hanging fruit that I'd be surprised if a complete hobbyist like myself discovered a way to contribute much of anything to AI capabilities research. Then again, I guess interfacing between GPT-4-quality LLMs and traditional software is a new enough tool to explore that maybe there is enough low-hanging fruit for even a hobbyist to pluck. I agree with you that it would be ideal if there was a closed but constructive community to interface with on these issues, and I'm such a complete hobbyist that I wouldn't know about such a group even if it existed, which is why I asked. I'll give it some more thought.

Comment by DirectedEvolution (AllAmericanBreakfast) on Commentless downvoting is not a good way to fight infohazards · 2023-07-08T22:05:06.781Z · LW · GW

I agree with you about the roughness and changeability of karma. My main issue with it - particularly with downvotes, and on this specific topic - is that it is too effective at silencing and too frustrating for too little informational gain. Even that wouldn't be too big a deal, because karma does offer benefits and is an attractively simple way of drawing on crowd wisdom.

Where the difficulty lies, I think, is when requesting advice about infohazards is met with negative vibes - frowns and sternness in real life, or downvotes online - without useful explicit feedback about the question at hand. That does a disservice to both the person asking for advice and to the people who think infohazards are worth taking seriously. I think that advocating for a change of vibes is better than putting up with inappropriate vibes, which is partly why I chose not to just put up with a few random downvotes and instead spoke up about it.

Comment by DirectedEvolution (AllAmericanBreakfast) on Weekday Evening Beach Picnics · 2023-07-08T14:54:10.705Z · LW · GW

I bought a 5mm Cressi wetsuit new on Amazon for about $200, but my partner bought a used wetsuit for $10 at an outdoor store. Wetsuits are sized by height and weight, and I at least found that the chart I used to pick out my wetsuit size was accurate. There are charts available describing the wetsuit thickness that’s best for different water temps, but it’s not absolutely rigid - my partner has used a thinner than advised wetsuit, and I’ve foregone the hood, gloves, and booties, and it’s been fine.

Wetsuits are supposed to be somewhat uncomfortably tight when you put them on dry. Once you get in the water, they loosen up slightly and become comfortable.

Let me know if you have any other questions!

Comment by DirectedEvolution (AllAmericanBreakfast) on Grant applications and grand narratives · 2023-07-08T07:22:51.744Z · LW · GW

Thanks for the reply, Elizabeth. I agree with pretty much everything you say here. I particularly like this part:

[good models + high impact vision grounded in that model] > [good models alone + modest goals] > [mediocre model + grand vision]

I think this is a foundational starting point for thinking about the process of spinning up any impactful project. It also helps me see the wisdom in what you are arguing here more clearly, for two reasons:

  1. It shows how selecting for grand vision can amount to selecting heavily for mediocre models, particularly if there is a) no equally stringent selection for good models and b) as we might expect, far more grand visions + mediocre models than good models + high-impact visions grounded in those models (the toy simulation after this list makes the selection effect concrete). Since I expect most people will agree that in the general population, (b) is true, the crux of any disagreement may depend on how successfully good models are already selected for by grantmakers, as well as how self-selecting the population of applicants is for having good models. I don't have an opinion on that subject, and I think that professional grantmakers and people who have looked at a lot of grant-winners and grant-losers would be the relevant experts. I hope they do deign to comment on your post.
  2. When people shift from pursuing a grand narrative to pursuing a good model, this can come with the dissolution of the formerly motivating grand narrative, along with envy or a sense of discouragement when comparing oneself with those who have achieved a high-impact vision grounded in a good model. When we read statements such as Hamming's, asking why you're not working on the obviously most important problem in your field, we can feel discouraged, or as if we are on the wrong track, if our instant answer is anything other than "but I am working on the most important problem!" This model offers an alternative point of view: the first step in getting to your Hamming problem is to stop pursuing a groundless grand vision and to pursue a good model, even if it's not obvious what the ultimate benefit is. Figuring out how to give that journey some structure, so that it doesn't become an exercise in self-justification or a recipe for aimless wandering, seems good, but I still think it is a step in the right direction over clinging to one's initial grand vision. I would rather see a population of people chasing good models, sometimes aimlessly, than a population of people chasing grand visions, since I expect the latter would be even more aimless.
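
To make the selection effect in item 1 concrete, here is a toy Monte Carlo under base rates I made up purely for illustration - the 5%/60%/40% numbers are my own assumptions, not estimates from any grantmaking data:

```python
# Toy simulation (illustrative assumptions only): if good models are rare
# and grand visions are common and only weakly correlated with model
# quality, then a grantmaker who filters on vision alone ends up funding
# mostly mediocre models.
import random

random.seed(0)

N = 100_000
P_GOOD_MODEL = 0.05          # assumed: good models are rare
P_VISION_GIVEN_GOOD = 0.60   # assumed: good modelers often have visions too
P_VISION_GIVEN_BAD = 0.40    # assumed: grand visions are cheap to produce

applicants = []
for _ in range(N):
    good_model = random.random() < P_GOOD_MODEL
    grand_vision = random.random() < (
        P_VISION_GIVEN_GOOD if good_model else P_VISION_GIVEN_BAD
    )
    applicants.append((good_model, grand_vision))

# The grantmaker selects purely on grand vision.
selected = [good for good, vision in applicants if vision]
print(f"Vision-selected grantees with good models: {sum(selected) / len(selected):.1%}")
# Under these assumed base rates, only ~7% of funded projects rest on good models.
```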

Some minor notes:

  • It does sound like discomfort with articulating the most apt high-impact vision was part of why you were reluctant to list it. I'm not sure if that was emotional discomfort or intellectual discomfort, but I would not be surprised if in general, lack of emotional confidence to identify and articulate the most apt high-impact vision when one has a good model already is a slowdown for some people. I have noticed that ChatGPT has been pretty good at helping me to articulate the high-impact vision in suitably ringing prose when I have a good model already - I used it to write the copy for a database website I created, because it was a lot harder for me to write "mission statement-esque" prose than to write software, clean the data, and build the website. The good model was much easier than articulating the high-impact vision even though I had the vision. I don't know if that's a "flaw" exactly - the point is just to distinguish "I have no idea what the apt high-impact vision for my extant good model is" from "I have an idea of what the high-impact vision is, but I'm not sure" from "I know what the high-impact vision is, but I'm uncomfortable being loud and proud about it or don't know how to put it into words effectively."
  • Regarding my list of putative projects, I agree with you that only the wastewater monitoring project is a project, per se. The rest of the ones I listed are more themes for projects, but I presume there are a number of concrete projects within each theme that could be listed - I am simply relatively unfamiliar with these areas, so I didn't have a bunch of specific examples off the top of my head.

Comment by DirectedEvolution (AllAmericanBreakfast) on Weekday Evening Beach Picnics · 2023-07-08T05:05:21.944Z · LW · GW

If you're interested in swimming, I'd highly recommend buying a wetsuit and snorkeling gear. I'm on the Puget Sound, which is very cold, but my 5 mm wetsuit (no hood, booties, or gloves) makes me comfortable for extended swims, and there's a huge amount of beautiful marine life in just a foot or two of water. My head is cold for the first 5-10 minutes and then adjusts, and I can swim indefinitely. All my snorkeling gear fits in a large sack-style backpack, so I can bike it to and from the beach.

Comment by DirectedEvolution (AllAmericanBreakfast) on Rational Unilateralists Aren't So Cursed · 2023-07-05T14:43:14.493Z · LW · GW

That makes sense. Thank you for the explanation!

Comment by DirectedEvolution (AllAmericanBreakfast) on AllAmericanBreakfast's Shortform · 2023-07-05T08:29:03.366Z · LW · GW

ChatGPT is a token-predictor, but it is often able to generate text that contains novel, valid causal and counterfactual reasoning. What it isn't able to do, at least not yet, is enforce an interaction with the user that guarantees that it will proceed through a desired chain of causal or counterfactual reasoning.

Many humans are inferior to ChatGPT at explicit causal and counterfactual reasoning. But not all of ChatGPT's failures to perform a desired reasoning task are due to inability - many are due to the fact that at baseline, its goal is to successfully predict the next token. Iff that goal is aligned with accurate reasoning, it will reason accurately.

Judea Pearl frames associative, causal, and counterfactual reasoning as "rungs," and at least in April 2023 thought ChatGPT was still on the purely associative rung, based on its responding to his question "Is it possible that smoking causes grade increase on the average and, simultaneously, smoking causes grade decrease in every age group?" with a discussion of Simpson's paradox. The correct interpretation is that ChatGPT assumed Pearl was himself confused and was using causal terminology imprecisely. Simply by appending the phrase "Focus on causal reasoning, not associative reasoning" to Pearl's original question, GPT-4 not only performs perfect causal reasoning but also addresses any potential confusion on the prompter's part about the difference between associative and causal reasoning.
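
A minimal sketch of that prompt tweak, assuming the current OpenAI Python client - the appended sentence is the one from the anecdote above, but the model name and client usage are illustrative rather than a tested protocol:

```python
# Sketch only - assumes the openai package (>= 1.0) is installed and an
# API key is configured in the environment.
from openai import OpenAI

client = OpenAI()

pearl_question = (
    "Is it possible that smoking causes grade increase on the average and, "
    "simultaneously, smoking causes grade decrease in every age group?"
)

# Appending one clarifying sentence steers the model away from the
# associative pattern-match (reciting Simpson's paradox) and toward
# explicit causal reasoning.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": pearl_question
        + " Focus on causal reasoning, not associative reasoning.",
    }],
)
print(response.choices[0].message.content)
```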

From now on, I will be interpreting all claims about ChatGPT being unable to perform causal or counterfactual reasoning primarily as evidence that the claimant is both bad and lazy at prompt engineering.

My dad, on the other hand, has it figured out. When I showed him at dinner tonight that ChatGPT is able to invent new versions of stories he used to tell me as a kid, he just looked at me, smiled, and said "we're fucked."

Comment by DirectedEvolution (AllAmericanBreakfast) on Grant applications and grand narratives · 2023-07-05T06:32:05.440Z · LW · GW

Furthering this line of argument, I can think of specific projects that do make it easy to supply a convincing grand narrative about their impact on the world, including technical AI safety research, wastewater monitoring for potential pandemics, institutions working on improved epistemics, and work to enhance human intelligence and decision-making. Whether a project lends itself to a grand narrative does, in fact, tell me something about whether it's likely to be able to achieve impact on the world scale. And many of these projects seem concrete enough to me that it's easy to say whether the grand narrative seems reasonable.

The activity of helping vegans get tested for nutritional deficiencies doesn't fit a grand narrative for world-scale impact. But if the idea was to work on making concierge medicine especially available to ultra-high performers in the field of x-risk in order to ensure that the Paul Christianos of the world face minimal health impediments to their research, I think that would lend itself to a grand narrative that might be compelling to grantmakers. It also suggests a wider and different range of options for how one might pivot if nutritional testing for vegans wasn't feeling like it was achieving enough impact.

I also think there's an analogy to be drawn here between startups and those applying for grants. One of the most common reasons startups fail is that they make a product people don't want to buy, and never pivot. One of the things venture capital and startup advisors can do is counsel startups on how to make a product the market wants. It seems like there's an opportunity here to help energetic, self-starting, smart people connect their professional interests with the kinds of world-impact grand narratives that grantmakers find compelling. EA and 80,000 Hours do this to some extent, but there's often a sense in which they're trying to recruit people into pre-established molds or simply headhunt people who have it all figured out already. Helping people who already have compelling but small-scale projects think bigger and adapt their projects into things that might actually have world-scale potential seems useful and perhaps under-supplied.

Comment by DirectedEvolution (AllAmericanBreakfast) on The literature on aluminum adjuvants is very suspicious. Small IQ tax is plausible - can any experts help me estimate it? · 2023-07-05T03:32:32.565Z · LW · GW

Oops, retracted

Comment by DirectedEvolution (AllAmericanBreakfast) on The literature on aluminum adjuvants is very suspicious. Small IQ tax is plausible - can any experts help me estimate it? · 2023-07-05T00:08:19.654Z · LW · GW

Here is another study that seems relevant:

https://pubmed.ncbi.nlm.nih.gov/22001122/

Aluminum is a ubiquitous element that is released naturally into the environment via volcanic activity and the breakdown of rocks on the earth's surface. Exposure of the general population to aluminum occurs primarily through the consumption of food, antacids, and buffered analgesics. Exposure to aluminum in the general population can also occur through vaccination, since vaccines often contain aluminum salts (frequently aluminum hydroxide or aluminum phosphate) as adjuvants. Because concerns have been expressed by the public that aluminum in vaccines may pose a risk to infants, we developed an up-to-date analysis of the safety of aluminum adjuvants. Keith et al. [1] previously analyzed the pharmacokinetics of aluminum for infant dietary and vaccine exposures and compared the resulting body burdens to those based on the minimal risk levels (MRLs) established by the Agency for Toxic Substances and Disease Registry. We updated the analysis of Keith et al. [1] with a current pediatric vaccination schedule [2]; baseline aluminum levels at birth; an aluminum retention function that reflects changing glomerular filtration rates in infants; an adjustment for the kinetics of aluminum efflux at the site of injection; contemporaneous MRLs; and the most recent infant body weight data for children 0-60 months of age [3]. Using these updated parameters we found that the body burden of aluminum from vaccines and diet throughout an infant's first year of life is significantly less than the corresponding safe body burden of aluminum modeled using the regulatory MRL. We conclude that episodic exposures to vaccines that contain aluminum adjuvant continue to be extremely low risk to infants and that the benefits of using vaccines containing aluminum adjuvant outweigh any theoretical concerns.

Comment by DirectedEvolution (AllAmericanBreakfast) on Rational Unilateralists Aren't So Cursed · 2023-07-04T22:50:33.404Z · LW · GW

I am confused - doesn’t Bostrom’s model of “naive unilateralists” by definition preclude updating on the behavior of other group members? And isn’t updating on the beliefs of others (as signaled by their behavior) an example of adopting a version of the “principle of conformity” that he endorses as a solution to the curse? If so, it seems like you are framing a proof of Bostrom’s point as a rebuttal to it. If not, then I’d appreciate more clarity on how your model of naivety differs from Bostrom’s.

Comment by DirectedEvolution (AllAmericanBreakfast) on The literature on aluminum adjuvants is very suspicious. Small IQ tax is plausible - can any experts help me estimate it? · 2023-07-04T22:17:07.960Z · LW · GW

I am not sure it’s accurate to say that chronically increased blood levels of aluminum are the only way to cause brain problems. The reviews I linked in my other comment suggest that aluminum can affect brain function by:

  1. Being carried to the brain by immune cell endocytosis.
  2. Disrupting immune cell cross-talk with the CNS.
  3. Rapidly crossing the incompletely developed neonate blood brain barrier before being cleared more slowly by incompletely developed neonate kidneys. Remember that the aluminum clearance data is from healthy adult males.

Comment by DirectedEvolution (AllAmericanBreakfast) on AllAmericanBreakfast's Shortform · 2023-07-04T20:17:18.525Z · LW · GW

The innumeracy and linguistic imprecision in medical papers are irritating.

In "The future of epigenetic therapy in solid tumours—lessons from the past," we have the sentence:

One exciting development is the recognition that virtually all tumours harbour mutations in genes that encode proteins that control the epigenome.9,10,11,12,13,14,15,16,17,18,19,20

12 citations! Wow!

What is the motte version of this sentence? Does it mean:

  • That nearly every tumor in every patient has a mutation in an epigenetic control gene?
  • That in nearly every type of tumor, at least a fraction of cases contain a mutation in an epigenetic control gene?
  • That epigenetic control gene mutations exist in at least a substantial minority of several tumor types?

The third, weakest claim is the one the 12 citations can support: in a minority of blood cancer patients, a mutation in an epigenetic control gene may set up downstream epigenetic dysregulation that causes the cancer phenotype. I would find this version of their claim much more helpful than the original, vague, misleading version.

Comment by DirectedEvolution (AllAmericanBreakfast) on The literature on aluminum adjuvants is very suspicious. Small IQ tax is plausible - can any experts help me estimate it? · 2023-07-04T18:42:56.229Z · LW · GW

The big question these days is about ASIA syndrome - autoimmune disorder triggered by adjuvants. Here is a review from this year, which also links to a 2019 review of Al-induced chronic fatigue syndrome. "Once the aluminum-containing vaccine is injected, instead of rapidly solubilizing in the extracellular space, it accumulates at the injection site, forming aluminum conglomerates. This delay in solubilization allows the injected aluminum particles to be quickly captured by cells of the immune system and transported to different organs, including the brain, where aluminum stimulates the inflammatory response and causes chronic neurotoxicity."

They also talk about a 2020 sheep study that showed "Animals in the vaccine and adjuvant-only groups exhibited individual and social behavioral changes. Affiliative interactions were significantly reduced and aggressive interactions and stereotypies were significantly increased. Some of these alterations observed in this experimental model are similar to those observed in the ASIA syndrome."

Some symptoms used to diagnose ASIA include "myalgia, myositis, arthralgia, arthritis, chronic fatigue, sleep disturbances, demyelination, memory loss, pyrexia, and dry mouth."

One paper points out that the immune system plays a role in mediating brain development via intercellular cross-talk. It also points out that in children, renal function and the blood-brain barrier are incomplete, and so in conjunction with the small size of children and elevated levels of Al exposure relative to adults, there's mechanistic reason to think they may be vulnerable to Al-induced ASIA-mediated neurological disorders. There is some evidence (which they cite, i.e. Gallagher and Goodman) that vaccines are linked to autism in neonates. There is also apparently substantial evidence linking ASIA to neurological disorders in adults.

Based on what I see, there are two plausible mechanistic routes by which aluminum adjuvants could directly or indirectly impair children's neurological development, and some modest animal-experimental and human epidemiological evidence to support that this might actually be going on. It's not enough for me to be convinced that this is a widespread, common issue, or that the risks posed by vaccines outweigh the risks posed by infection, but I am intrigued.

Comment by DirectedEvolution (AllAmericanBreakfast) on Consider giving money to people, not projects or organizations · 2023-07-03T03:16:39.358Z · LW · GW

So is the idea to prefer funding informal collaborations to formal associations? I remain confused about what exactly we are being advised to prefer and why. I don’t want to come across as being unconstructively negative, so let me elaborate a bit.

A proposal to fund individuals, not projects or organizations, implies that there’s alpha to be found in this class of funding targets. So first, I am trying to understand the definition of “individual” as a target for grant funding. An individual can work with others, in varying degrees of formality, and usually there will be one or more projects that they’re carrying out. So I am struggling to understand what it means to “fund an individual” in a way that’s distinct from funding organizations and projects. Does it mean “fund an individual, regardless of what they’re working on or who they’re working with”?

Second, I’d like to understand the mechanism by which we expect to get more bang for our buck by doing this. Do we think that individuals need the freedom to discard unpromising projects and collaborators, and guaranteeing them funding regardless helps them find the most promising team and project? Do we think that large collaborations weigh people down? Do we think that by the time a project is well defined and the team is large and formal, there will be other sources of funding, such that the main funding gaps are at an early, fluid, informal stage of organization-forming?

I’m open to all these ideas, I would just like the theory of change to be better pinned down.

Comment by DirectedEvolution (AllAmericanBreakfast) on Consider giving money to people, not projects or organizations · 2023-07-02T23:22:51.311Z · LW · GW

I’m not sure I understand how funding an individual, rather than a small team, solves the feedback loop problem. No one person can do it all. What is the mechanism that yields alpha when funding an individual over funding a small team, for example?

Comment by DirectedEvolution (AllAmericanBreakfast) on Another medical miracle · 2023-06-28T02:43:20.625Z · LW · GW

True! I am a broccoli fan. Just to put a number on it, to get the proposed 160g of protein per day, you’d have to eat 5.6 kg of broccoli, or well over 10 lb.
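
For reference, the arithmetic behind that figure, assuming raw broccoli carries roughly 2.8 g of protein per 100 g (nutrient databases vary a bit, which is why estimates land in the 5.5-6 kg range):

$$\frac{160 \text{ g protein}}{2.8 \text{ g protein} / 100 \text{ g broccoli}} \approx 5{,}700 \text{ g} \approx 5.7 \text{ kg} \approx 12.6 \text{ lb}$$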

Comment by DirectedEvolution (AllAmericanBreakfast) on Another medical miracle · 2023-06-27T15:57:55.591Z · LW · GW

I endorse this - I was reacting to the conventional way people use the word “vegetable,” which I don’t typically hear applied to legumes or to grains. But for the purpose of getting high protein on a low-meat diet, it’s obviously not important that it come from a vegetable per se.