lemonhope's Shortform
post by lemonhope (lcmgcd) · 2020-01-27T00:52:37.833Z · LW · GW · 101 comments
Comments sorted by top scores.
comment by lemonhope (lcmgcd) · 2024-07-31T08:45:43.298Z · LW(p) · GW(p)
A tricky thing about feedback on LW (or maybe just human nature or webforum nature):
- Post: Maybe there's a target out there let's all go look (50 points)
- Comments: so inspiring! We should all go look!
- Post: What "target" really means (100 points)
- Comments: I feel much less confused, thank you
- Post: I shot an arrow at the target (5 points)
- Comments: bro you missed
- Post: Target probably in the NW cavern in the SE canyon (1 point)
- Comments: doubt it
- Post: Targets and arrows - a fictional allegory (500 points)
- Comments: I am totally Edd in this story
- Post: I hit the target. Target is dead. I have the head. (40 points)
- Comments: thanks. cool.
Basically, if you try to actually do a thing or be particularly specific/concrete then you are held to a much higher standard.
There are some counterexamples. And LW is better than lots of sites.
Nonetheless, I feel like I have a warm welcome here to talk bullshit around the water cooler, but get angry stares when I try to mortar a few bricks.
I feel like this is almost a good site for getting your hands dirty and getting feedback and such. Just a more positive culture towards actual shots on target would be sufficient I think. Not sure how that could be achieved.
Maybe this is like publication culture vs workshop culture or something.
Replies from: rotatingpaguro, faul_sname, ryan_greenblatt, pktechgirl, lc, Jozdien, ben-lang, niplav, MinusGix, robo
↑ comment by rotatingpaguro · 2024-07-31T10:03:54.730Z · LW(p) · GW(p)
Unpolished first thoughts:
- Selection effect: people who go to a blog to read bc they like reading, not doing
- Concrete things are hard reads, math-heavy posts, doesn't feel ok to vote when you don't actually understand
- In general easier things have wider audience
- Making someone change their mind is more valuable to them than saying you did something?
- There are many small targets and few big ideas/frames, votes are distributed proportionally
↑ comment by faul_sname · 2024-07-31T09:04:18.923Z · LW(p) · GW(p)
It's not perfect, but one approach I saw on here and liked a lot was @turntrout's MATS team's approach for some of the initial shard theory work, where they made an initial post outlining the problem and soliciting predictions on a set of concrete questions [LW · GW] (which gave a nice affordance for engagement, namely "make predictions and maybe comment on your predictions"), and then they made a follow-up post with their actual results [LW · GW]. Seemed to get quite good engagement.
A confounding factor, though, was that it was also an unusually impressive bit of research.
↑ comment by ryan_greenblatt · 2024-07-31T16:03:53.056Z · LW(p) · GW(p)
At least as far as safety research goes, concrete empirical safety research is often well received.
↑ comment by Elizabeth (pktechgirl) · 2024-08-01T15:48:41.368Z · LW(p) · GW(p)
I think you're directionally correct and would like to see lesswrong reward concrete work more. But I think your analysis is suffering from survivorship bias. Lots of "look at the target" posts die on the vine so you never see their low karma, and decent arrow-shot posts tend to get more like 50 even when the comments section is empty.
↑ comment by Jozdien · 2024-07-31T11:14:30.125Z · LW(p) · GW(p)
I think a large cause might be that posts talking about the target are more accessible to a larger number of people. Posts like List of Lethalities are understandable to people who aren't alignment researchers, while something like the original Latent Adversarial Training post [LW · GW] (which used to be my candidate for the least votes:promising ratio post) is mostly relevant to and understandable by people who think about inner alignment or adversarial robustness. This is to say nothing of posts with more technical content.
This seems like an issue with the territory: there are far more people who want to read things about alignment than people who work on alignment. The LW admins already try to counter similar effects by maintaining high walls for the garden [LW · GW], and with the karma-weighted voting system. On the other hand, it's not clear that pushing along those dimensions would make this problem better; plausibly you need slightly different mechanisms to account for this. The Alignment Forum sort of seems like something that tries to address this: because of its selection effect, votes are more balanced between posts about targets and posts about attempts to reach them.
This doesn't fully address the problem, and I think you were trying to point out the effects of not accounting for some topics having epistemic standards that are easier to meet than others, even when the latter is arguably more valuable. I think it's plausible that's more important, but there are other ways to improve it as well[1].
- ^
When I finished writing, I realized that what you were pointing out is also somewhat applicable to this comment. You point out a problem, and focus on one cause that's particularly large and hard to solve. I write a comment about another cause that's plausibly smaller but easier to solve, because that meets an easier epistemic standard than failing at solving the harder problem.
↑ comment by Ben (ben-lang) · 2024-07-31T14:54:33.115Z · LW(p) · GW(p)
I certainly see where you are coming from.
One thing that might be confounding it slightly is that (depending on the target) the reward for actually taking home the target might not be LW karma but something real. So the "I have the head" post only gives 40 karma. But the "head" might well also be worth something in the real world: if it's some AI code toy example that does something cool, it might lead to a new job. Or if it's something more esoteric like "meditation technique improves performance at work", then you get the performance boost.
Replies from: pktechgirl
↑ comment by Elizabeth (pktechgirl) · 2024-07-31T21:29:53.073Z · LW(p) · GW(p)
Data point: my journeyman posts on inconclusive lit reviews get 40-70 karma (unless I make a big claim and then retract it. Those both got great numbers). But I am frequently approached to do lit reviews, and I have to assume the boring posts no one comments on contribute to the reputation that attracts those.
↑ comment by niplav · 2024-07-31T09:07:17.535Z · LW(p) · GW(p)
Strong agree. I think this is because in the rest of the world, framing is a higher status activity than filling, so independent thinkers gravitate towards the higher-status activity of framing.
Replies from: aysja
↑ comment by aysja · 2024-08-02T07:41:18.063Z · LW(p) · GW(p)
Or independent thinkers try to find new frames because the ones on offer are insufficient? I think this is roughly what people mean when they say that AI is "pre-paradigmatic," i.e., we don't have the frames for filling to be very productive yet. Given that, I'm more sympathetic to framing posts on the margin than I am to filling ones, although I hope (and expect) that filling-type work will become more useful as we gain a better understanding of AI.
Replies from: niplav
↑ comment by niplav · 2024-08-02T15:22:13.678Z · LW(p) · GW(p)
This response is specific to AI/AI alignment, right? I wasn't "sub-tweeting" the state of AI alignment, and was more thinking of other endeavours (quantified self, paradise engineering, forecasting research).
In general, the bias towards framing can be swamped by other considerations.
↑ comment by MinusGix · 2024-08-01T00:55:05.285Z · LW(p) · GW(p)
I see this as occurring with various pieces of Infrabayesianism, like Diffractor's UDT posts. They're mathematically dense (hitting the target), which makes them challenging to read... and then also challenging to discuss. There are fewer comments even from the people who read the entire post, because they don't feel competent enough to make useful commentary (with some truth behind that feeling); the silence also further makes commenting harder. At least that's what I've noticed in myself, even though I enjoy & upvote those posts.
Less attention seems natural because of specialization into cognitive niches; not everyone has read all the details of SAEs, or knows all the mathematics referenced in certain agent foundations posts. But it does still pose a problem for socially incentivizing good research.
I don't know if there are any great solutions. More up-weighting for research-level posts? I view the distillation idea from a ~year ago as helping with drawing attention towards strong (but dense) posts, but it appeared to die down. Try to revive that more?
Replies from: lcmgcd
↑ comment by lemonhope (lcmgcd) · 2024-08-05T17:30:23.405Z · LW(p) · GW(p)
What was the distillation idea from a year ago?
Replies from: MinusGix
↑ comment by MinusGix · 2024-08-07T10:44:03.486Z · LW(p) · GW(p)
https://www.lesswrong.com/posts/zo9zKcz47JxDErFzQ/call-for-distillers [LW · GW]
comment by lemonhope (lcmgcd) · 2024-11-21T13:12:13.050Z · LW(p) · GW(p)
I don't have a witty, insightful, neutral-sounding way to say this. The grantmakers should let the money flow. There are thousands of talented young safety researchers with decent ideas and exceptional minds, but they probably can't prove it to you. They only need one thing and it is money.
They will be 10x less productive in a big nonprofit and they certainly won't find the next big breakthrough there.
(Meanwhile, there are increasingly good ways to make money that don't involve any good deeds at all.)
My friends were a good deal sharper and more motivated at 18 than now at 25. None of them had any chance at getting grants back then, but they have an ok shot now. At 35, their resumes will be much better and their minds much duller. And it will be too late to shape AGI at all.
I can't find a good LW voice for this point but I feel this is incredibly important. Managers will find all the big nonprofits and eat their gooey centers and leave behind empty husks. They will do this quickly, within a couple years of each nonprofit being founded. The founders themselves will not be spared. Look how the writing of Altman or Demis changed over the years.
The funding situation needs to change very much and very quickly. If a man has an idea just give him money and don't ask questions. (No, I don't mean me.)
Replies from: jeremy-gillen, Mathieu Putz, ChristianKl, ricraz
↑ comment by Jeremy Gillen (jeremy-gillen) · 2024-11-21T14:32:32.947Z · LW(p) · GW(p)
I think I disagree. This is a bandit problem, and grantmakers have tried pulling that lever a bunch of times. There hasn't been any field-changing research (yet). They knew it had a low chance of success so it's not a big update. But it is a small update.
Probably the optimal move isn't cutting early-career support entirely, but having a higher bar seems correct. There are other levers that are worth trying, and we don't have the resources to try every lever.
Also there are more grifters now that the word is out, so the EV is also declining that way.
(I feel bad saying this as someone who benefited a lot from early-career financial support).
Replies from: TsviBT
↑ comment by TsviBT · 2024-11-21T16:36:31.156Z · LW(p) · GW(p)
grantmakers have tried pulling that lever a bunch of times
What do you mean by this? I can think of lots of things that seem in some broad class of pulling some lever that kinda looks like this, but most of the ones I'm aware of fall greatly short of being an appropriate attempt to leverage smart young creative motivated would-be AGI alignment insight-havers. So the update should be much smaller (or there's a bunch of stuff I'm not aware of).
Replies from: jeremy-gillen
↑ comment by Jeremy Gillen (jeremy-gillen) · 2024-11-21T16:59:54.069Z · LW(p) · GW(p)
The main thing I'm referring to are upskilling or career transition grants, especially from LTFF, in the last couple of years. I don't have stats; I'm assuming there were a lot given out because I met a lot of people who had received them. Probably there were a bunch given out by the FTX Future Fund also.
Also when I did MATS, many of us got grants post-MATS to continue our research. Relatively little seems to have come of these.
How are they falling short?
(I sound negative about these grants but I'm not, and I do want more stuff like that to happen. If I were grantmaking I'd probably give many more of some kinds of safety research grant. But "If a man has an idea just give him money and don't ask questions" isn't the right kind of change imo).
Replies from: TsviBT
↑ comment by TsviBT · 2024-11-21T17:41:41.049Z · LW(p) · GW(p)
upskilling or career transition grants, especially from LTFF, in the last couple of years
Interesting; I'm less aware of these.
How are they falling short?
I'll answer as though I know what's going on in various private processes, but I don't, and therefore could easily be wrong. I assume some of these are sort of done somewhere, but not enough and not together enough.
- Favor insightful critiques and orientations as much as constructive ideas. If you have a large search space and little traction, a half-plane of rejects is as or more valuable than a guessed point that you knew how to even generate.
- Explicitly allow acceptance by trajectory of thinking, assessed by at least a year of low-bandwidth mentorship; deemphasize agenda-ish-ness.
- For initial exploration periods, give longer commitments with fewer required outputs; something like at least 2 years. Explicitly allow continuation of support by trajectory.
- Give a path forward for financial support for out of paradigm things. (The Vitalik fellowship, for example, probably does not qualify, as the professors, when I glanced at the list, seem unlikely to support this sort of work; but I could be wrong.)
- Generally emphasize judgement of experienced AGI alignment researchers, and deemphasize judgement of grantmakers.
- Explicitly ask for out-of-paradigm things.
- Do a better job of connecting people. (This one is vague but important.)
(TBC, from my full perspective this is mostly a waste because AGI alignment is too hard; you want to instead put resources toward delaying AGI, trying to talk AGI-makers down, and strongly amplifying human intelligence + wisdom.)
Replies from: jeremy-gillen
↑ comment by Jeremy Gillen (jeremy-gillen) · 2024-11-22T11:31:32.829Z · LW(p) · GW(p)
I agree this would be a great program to run, but I want to call it a different lever to the one I was referring to.
The only thing I would change is that I think new researchers need to understand the purpose and value of past agent foundations research. I spent too long searching for novel ideas while I still misunderstood the main constraints of alignment. I expect you'd get a lot of wasted effort if you asked for out-of-paradigm ideas. Instead it might be better to ask people to understand and build on past agent foundations research, then gradually move away if they see other pathways after having understood the constraints. Now I see my work as mostly about trying to run into constraints for the purpose of better understanding them.
Maybe that wouldn't help though, it's really hard to make people see the constraints.
Replies from: TsviBT
↑ comment by TsviBT · 2024-11-22T17:33:28.274Z · LW(p) · GW(p)
We agree this is a crucial lever, and we agree that the bar for funding has to be in some way "high". I'm arguing for a bar that's differently shaped. The set of "people established enough in AGI alignment that they get 5 [fund a person for 2 years and maybe more depending how things go in low-bandwidth mentorship, no questions asked] tokens" would hopefully include many people who understand that understanding constraints is key and that past research understood some constraints.
build on past agent foundations research
I don't really agree with this. Why do you say this?
a lot of wasted effort if you asked for out-of-paradigm ideas.
I agree with this in isolation. I think some programs do state something about OOP ideas, and I agree that the statement itself does not come close to solving the problem.
(Also I'm confused about the discourse in this thread (which is fine), because I thought we were discussing "how / how much should grantmakers let the money flow".)
Replies from: jeremy-gillen
↑ comment by Jeremy Gillen (jeremy-gillen) · 2024-11-25T12:44:12.716Z · LW(p) · GW(p)
would hopefully include many people who understand that understanding constraints is key and that past research understood some constraints.
Good point, I'm convinced by this.
build on past agent foundations research
I don't really agree with this. Why do you say this?
That's my guess at the level of engagement required to understand something. Maybe just because when I've tried to use or modify some research that I thought I understood, I always realise I didn't understand it deeply enough. I'm probably anchoring too hard on my own experience here, other people often learn faster than me.
(Also I'm confused about the discourse in this thread (which is fine), because I thought we were discussing "how / how much should grantmakers let the money flow".)
I was thinking "should grantmakers let the money flow to unknown young people who want a chance to prove themselves."
Replies from: TsviBT
↑ comment by TsviBT · 2024-11-25T21:19:23.589Z · LW(p) · GW(p)
That's my guess at the level of engagement required to understand something. Maybe just because when I've tried to use or modify some research that I thought I understood, I always realise I didn't understand it deeply enough. I'm probably anchoring too hard on my own experience here, other people often learn faster than me.
Hm. A couple things:
- Existing AF research is rooted in core questions about alignment.
- Existing AF research, pound for pound / word for word, and even idea for idea, is much more unnecessary stuff than necessary stuff. (Which is to be expected.)
- Existing AF research is among the best sources of compute-traces of trying to figure some of this stuff out (next to perhaps some philosophy and some other math).
- Empirically, most people who set out to study existing AF fail to get many of the deep lessons.
- There's a key dimension of: how much are you always asking for the context? E.g.: Why did this feel like a mainline question to investigate? If we understood this, what could we then do / understand? If we don't understand this, are we doomed / how are we doomed? Are there ways around that? What's the argument, more clearly?
- It's more important whether people are doing that, than whether / how exactly they engage with existing AF research.
- If people are doing that, they'll usually migrate away from playing with / extending existing AF, towards the more core (more difficult) problems.
I was thinking "should grantmakers let the money flow to unknown young people who want a chance to prove themselves."
Ah ok you're right that that was the original claim. I mentally autosteelmanned.
↑ comment by Matt Putz (Mathieu Putz) · 2024-11-21T22:34:06.289Z · LW(p) · GW(p)
Just wanted to flag quickly that Open Philanthropy's GCR Capacity Building team (where I work) has a career development and transition funding program.
The program aims to provide support—in the form of funding for graduate study, unpaid internships, self-study, career transition and exploration periods, and other activities relevant to building career capital—for individuals at any career stage who want to pursue careers that could help reduce global catastrophic risks (esp. AI risks). It’s open globally and operates on a rolling basis.
I realize that this is quite different from what lemonhope is advocating for here, but nevertheless thought it would be useful context for this discussion (and potential applicants).
Replies from: habryka4
↑ comment by habryka (habryka4) · 2024-11-22T04:07:59.018Z · LW(p) · GW(p)
I would mostly advise people against making large career transitions on the basis of Open Phil funding, or if you do, I would be very conservative with it. Like, don't quit your job because of a promise of 1 year of funding, because it is quite possible your second year will only be given conditional on you aligning with the political priorities of OP funders or OP reputational management, and career transitions usually take longer than a year. To be clear, I think it often makes sense to accept funding from almost anyone, but in the case of OP it is funding with unusually hard-to-notice strings attached that might bite you when you are particularly weak-willed or vulnerable.
Also, if OP staff tells you they will give you future grants, or guarantee you some kind of "exit grant" I would largely discount that, at least at the moment. This is true for many, if not most, funders, but my sense is people tend to be particularly miscalibrated for OP (who aren't particularly more or less trustworthy in their forecasts than random foundations and philanthropists, but I do think often get perceived as much more).
Of course, different people's risk appetite might differ, and mileage might vary, but if you can, I would try to negotiate for a 2-3 year grant, or find another funder to backstop you for another year or two, even if OP has said they would keep funding you, before pursuing some kind of substantial career pivot.
Replies from: Mathieu Putz, andrei-alexandru-parfeni
↑ comment by Matt Putz (Mathieu Putz) · 2024-11-22T20:32:28.034Z · LW(p) · GW(p)
Regarding our career development and transition funding (CDTF) program:
- The default expectation for CDTF grants is that they’re one-off grants. My impression is that this is currently clear to most CDTF grantees (e.g., I think most of them don't reapply after the end of their grant period, and the program title explicitly says that it’s “transition funding”).
- (When funding independent research through this program, we sometimes explicitly clarify that we're unlikely to renew by default).
- Most of the CDTF grants we make have grant periods that are shorter than a year (with the main exception that comes to mind being PhD programs). I think that’s reasonable (esp. given that the grantees know this when they accept the funding). I’d guess most of the people we fund through this program are able to find paid positions after <1 year.
(I probably won't have time to engage further.)
Replies from: habryka4
↑ comment by habryka (habryka4) · 2024-11-22T20:43:01.872Z · LW(p) · GW(p)
Yeah, I was thinking of PhD programs as one of the most common longer-term grants.
Agree that it's reasonable for a lot of this funding to be shorter, but also think that given the shifting funding landscape where most good research by my lights can no longer get funding, I would be quite hesitant for people to substantially sacrifice career capital in the hopes of getting funding later (or more concretely, I think it's the right choice for people to choose a path where they end up with a lot of slack to think about what directions to pursue, instead of being particularly vulnerable to economic incentives while trying to orient towards the very high-stakes feeling and difficult to navigate existential risk reduction landscape, which tends to result in the best people predictably working for big capability companies).
This includes the constraints of "finding paid positions after <1 year", where the set of organizations that have funding to sponsor good work is also very small these days (though I do think that has a decent chance of changing again within a year or two, so it's not a crazy bet to make).
Given these recent shifts and the harsher economic incentives of transitioning into the space, I think it would make sense for people to negotiate with OP about getting longer grants than OP has historically granted (which I think aligns with what OP staff think makes sense as well, based on conversations I've had).
↑ comment by sunwillrise (andrei-alexandru-parfeni) · 2024-11-22T05:45:24.086Z · LW(p) · GW(p)
conditional on you aligning with the political priorities of OP funders or OP reputational management
Do you mean something more expansive than "literally don't pursue projects that are either conservative/Republican-coded or explicitly involved in expanding/enriching the Rationality community"? Which, to be clear, would be less-than-ideal if true, but should be talked about in more specific terms when giving advice to potential grant-receivers.
I get an overall vibe from many of the comments you've made recently about OP, both here and on the EA forum, that you believe in a rather broad sense they are acting to maximize their own reputation or whatever Dustin's whims are that day (and, consequently, lying/obfuscating this in their public communications to spin these decisions the opposite way), but I don't think[1] you have mentioned any specific details that go beyond their own dealings with Lightcone and with right-coded figures.
- ^
Could be a failure of my memory, ofc
↑ comment by habryka (habryka4) · 2024-11-22T06:00:50.082Z · LW(p) · GW(p)
Yes, I do not believe OP funding constraints are well-described by either limitations on grants specifically to "rationality community" or "conservative/republican-coded activities".
Just as an illustration, if you start thinking or directing your career towards making sure we don't torture AI systems despite them maybe having moral value, that is also a domain that OP has withdrawn funding from. Same if you want to work on any wild animal or invertebrate suffering. I also know of multiple other grantees that cannot receive funding despite not straightforwardly falling into any domain that OP has announced it is withdrawing funding from.[1]
I think the best description for predicting what OP is avoiding funding right now, and will continue to avoid funding into the future is broadly "things that might make Dustin or OP look weird, and are not in a very small set of domains where OP is OK with taking reputational hits or defending people who want to be open about their beliefs, or might otherwise cost them political capital with potential allies (which includes but is not exclusive to the democratic party, AI capability companies, various US government departments, and a vague conception of the left-leaning intellectual elite)".
This is not a perfect description because I do think there is a very messy principal agent problem going on with Good Ventures and Open Phil, where Open Phil staff would often like to make weirder grants, and GV wants to do less, and they are reputationally entwined, and the dynamics arising from that are something I definitely don't understand in detail, but I think at a high level the description above will make better predictions than any list of domains.
See also this other comment of mine: https://www.lesswrong.com/posts/wn5jTrtKkhspshA4c/michaeldickens-s-shortform?commentId=zoBMvdMAwpjTEY4st [LW(p) · GW(p)]
Epistemic status: Speculating about adversarial and somewhat deceptive PR optimization, which is inherently very hard and somewhat paranoia inducing. I am quite confident of the broad trends here, but it's definitely more likely that I am getting things wrong here than in other domains where evidence is more straightforward to interpret, and people are less likely to shape their behavior in ways that includes plausible deniability and defensibility.
I agree with this, but I actually think the issues with Open Phil are substantially broader. As a concrete example, as far as I can piece together from various things I have heard, Open Phil does not want to fund anything that is even slightly right of center in any policy work. I don't think this is because of any COIs, it's because Dustin is very active in the democratic party and doesn't want to be affiliated with anything that is right-coded. Of course, this has huge effects by incentivizing polarization of AI policy work with billions of dollars, since any AI Open Phil funded policy organization that wants to engage with people on the right might just lose all of their funding because of that, and so you can be confident they will steer away from that.
Open Phil is also very limited in what they can say about what they can or cannot fund, because that itself is something that they are worried will make people annoyed with Dustin, which creates a terrible fog around how OP is thinking about stuff.[1]
Honestly, I think there might no longer be a single organization that I have historically been excited about that OpenPhil wants to fund. MIRI could not get OP funding, FHI could not get OP funding, Lightcone cannot get OP funding, my best guess is Redwood could not get OP funding if they tried today (though I am quite uncertain of this), most policy work I am excited about cannot get OP funding, the LTFF cannot get OP funding, any kind of intelligence enhancement work cannot get OP funding, CFAR cannot get OP funding, SPARC cannot get OP funding, FABRIC (ESPR etc.) and Epistea (FixedPoint and other Prague-based projects) cannot get OP funding, not even ARC is being funded by OP these days (in that case because of COIs between Paul and Ajeya).[2] I would be very surprised if Wentworth's work, or Wei Dai's work, or Daniel Kokotajlo's work, or Brian Tomasik's work could get funding from them these days. I might be missing some good ones, but the funding landscape is really quite thoroughly fucked in that respect. My best guess is Scott Alexander could not get funding, but I am not totally sure.[3]
I cannot think of anyone who I would credit with the creation or shaping of the field of AI Safety or Rationality who could still get OP funding. Bostrom, Eliezer, Hanson, Gwern, Tomasik, Kokotajlo, Sandberg, Armstrong, Jessicata, Garrabrant, Demski, Critch, Carlsmith, would all be unable to get funding[4] as far as I can tell. In as much as OP is the most powerful actor in the space, the original geeks are being thoroughly ousted.[5]
In-general my sense is if you want to be an OP longtermist grantee these days, you have to be the kind of person that OP thinks is not and will not be a PR risk, and who OP thinks has "good judgement" on public comms, and who isn't the kind of person who might say weird or controversial stuff, and is not at risk of becoming politically opposed to OP. This includes not annoying any potential allies that OP might have, or associating with anything that Dustin doesn't like, or that might strain Dustin's relationships with others in any non-trivial way.
Of course OP will never ask you to fit these constraints directly, since that itself could explode reputationally (and also because OP staff themselves seem miscalibrated on this and do not seem in-sync with their leadership). Instead you will just get less and less funding, or just be defunded fully, if you aren't the kind of person who gets the hint that this is how the game is played now.
(Note that a bunch of well-informed people disagreed with at least sections of the above, like Buck from Redwood disagreeing that Redwood couldn't get funding, so it might make sense to check out the original discussion)
- ^
I am using OP's own language about "withdrawing funding". However, as I say in a recent EA Forum comment [EA(p) · GW(p)], as Open Phil is ramping up the degree to which it is making recommendations to non-GV funders, and OP's preferences come apart from the preferences of their funders, it might be a good idea to taboo the terms "OP funds X", because it starts being confusing.
↑ comment by gyfwehbdkch · 2024-11-23T16:31:47.862Z · LW(p) · GW(p)
Can't Dustin donate $100k anonymously (bitcoin or cash) to researchers in a way that decouples his reputation from the people he's funding?
↑ comment by ChristianKl · 2024-11-21T16:18:36.122Z · LW(p) · GW(p)
My friends were a good deal sharper and more motivated at 18 than now at 25.
How do you tell that they were sharper back then?
Replies from: interstice, nathan-helm-burger
↑ comment by interstice · 2024-11-22T02:08:31.952Z · LW(p) · GW(p)
It sounds pretty implausible to me; intellectual productivity is usually at its peak from the mid-20s to mid-30s (for high fluid-intelligence fields like math and physics).
Replies from: interstice
↑ comment by interstice · 2024-11-22T12:27:13.510Z · LW(p) · GW(p)
People asked for a citation so here's one: https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://www.kellogg.northwestern.edu/faculty/jones-ben/htm/age%2520and%2520scientific%2520genius.pdf&ved=2ahUKEwiJjr7b8O-JAxUVOFkFHfrHBMEQFnoECD0QAQ&sqi=2&usg=AOvVaw0HF9-Ta_IR74M8df7Av6Qe
Although my belief was more based on anecdotal knowledge of the history of science. Looking up people at random: Einstein's annus mirabilis was at 26; Cantor invented set theory at 29; Hamilton discovered Hamiltonian mechanics at 28; Newton invented calculus at 24. Hmmm I guess this makes it seem more like early 20s - 30. Either way 25 is definitely in peak range, and 18 typically too young(although people have made great discoveries by 18, like Galois. But he likely would have been more productive later had he lived past 20)
Replies from: lcmgcd
↑ comment by lemonhope (lcmgcd) · 2024-11-24T17:10:13.357Z · LW(p) · GW(p)
Einstein started doing research a few years before he actually had his miracle year. If he started at 26, he might have never found anything. He went to physics school at 17 or 18. You can't go to "AI safety school" at that age, but if you have funding then you can start learning on your own. It's harder to learn than (eg) learning to code, but not impossibly hard.
I am not opposed to funding 25 or 30 or 35 or 40 year olds, but I expect that the most successful people got started in their field (or a very similar one) as a teenager. I wouldn't expect funding an 18-year-old to pay off in less than 4 years. Sorry for being unclear on this in the original post.
Replies from: interstice
↑ comment by interstice · 2024-11-24T20:08:04.631Z · LW(p) · GW(p)
Yeah I definitely agree you should start learning as young as possible. I think I would usually advise a young person starting out to learn general math/CS stuff and do AI safety on the side, since there's way more high-quality knowledge in those fields. Although "just dive in to AI" seems to have worked out well for some people like Chris Olah, and timelines are plausibly pretty short so ¯\_(ツ)_/¯
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-11-21T20:16:41.120Z · LW(p) · GW(p)
This definitely differs for different folks. I was nowhere near my sharpest in late teens or early twenties. I think my peak was early 30s. Now in early 40s, I'm feeling somewhat less sharp, but still ahead of where I was at 18 (even setting aside crystalized knowledge).
I do generally agree though that this is a critical point in history, and we should have more people trying more research directions.
↑ comment by Richard_Ngo (ricraz) · 2024-11-22T01:23:28.686Z · LW(p) · GW(p)
In general people should feel free to DM me with pitches for this sort of thing.
Replies from: kave
↑ comment by kave · 2024-11-22T01:48:52.403Z · LW(p) · GW(p)
Perhaps say some words on why they might want to?
Replies from: ricraz
↑ comment by Richard_Ngo (ricraz) · 2024-11-22T07:27:36.618Z · LW(p) · GW(p)
Because I might fund them or forward it to someone else who will.
comment by lemonhope (lcmgcd) · 2024-09-10T17:19:30.179Z · LW(p) · GW(p)
Where has the "rights of the living vs rights of the unborn" debate already been had? In the context of longevity. (Presuming that at some point an exponentially increasing population consumes its cubically increasing resources.)
Replies from: Raemon
↑ comment by Raemon · 2024-09-10T18:17:24.331Z · LW(p) · GW(p)
I couldn't easily remember this, and then tried throwing it into our beta-testing LessWrong-contexted-LLM. (I'm interested in whether the following turned out to be helpful)
Eliezer Yudkowsky offers an interesting perspective in his post For The People Who Are Still Alive [? · GW]. He argues that in a "Big World" scenario (where the universe is vast or infinite), we should focus more on the welfare of existing people rather than creating new ones. He states:
It seems to me that in a Big World, the people who already exist in your region have a much stronger claim on your charity than babies who have not yet been born into your region in particular.
In a similar vein, Wei Dai's post The Moral Status of Independent Identical Copies [? · GW] explores related issues. While not directly about longevity, it addresses questions about how we should value additional copies of existing people versus new people. This has implications for how we might think about extending lives versus creating new ones.
The tension between extending lives and creating new ones in a resource-constrained environment is directly addressed in the post What exactly IS the overpopulation argument (in regards to immortality)? [? · GW] by Raemon. [oh hey it's me]
(it said more stuff but much of it seemed less relevant)
It pulled in these posts as potentially relevant (some of this doesn't seem like what you meant but filtering it manually didn't feel worth it).
- Being against involuntary death and being open to change are compatible [LW · GW] by Andy_McKenzie
- The Astronomical Sacrifice Dilemma [LW · GW] by Matthew McRedmond
- Please Do Fight the Hypothetical [LW · GW] by Lone Pine
- How curing aging could help progress [LW · GW] by jasoncrawford
- Debating myself on whether “extra lives lived” are as good as “deaths prevented” [LW · GW] by HoldenKarnofsky
- Second-order selection against the immortal [LW · GW] by Malmesbury
- Deminatalist Total Utilitarianism [LW · GW] by Vanessa Kosoy
- What economic gains are there in life extension treatments? [LW · GW] by Orborde
- Is death bad? [LW · GW] by Richard_Ngo
- You might be population too [LW · GW] by KatjaGrace
- Blind Spot: Malthusian Crunch [LW · GW] by bokov
- The Mere Cable Channel Addition Paradox [LW · GW] by Ghatanathoah
- One possible issue with radically increased lifespan [LW · GW] by Spectral_Dragon
- On "Friendly" Immortality [LW · GW] by daenerys
- Life Extension versus Replacement [LW · GW] by Julia_Galef
- What exactly IS the overpopulation argument (in regards to immortality)? [LW · GW] by Raemon
- Why abortion looks more okay to us than killing babies [LW · GW] by cousin_it
- The Moral Status of Independent Identical Copies [LW · GW] by Wei Dai
- The Difficulties of Potential People and Decision Making [LW · GW] by FrankAdamek
- For The People Who Are Still Alive [LW · GW] by Eliezer Yudkowsky
↑ comment by lemonhope (lcmgcd) · 2024-09-16T20:36:07.661Z · LW(p) · GW(p)
Thank you! Seems like this bot works quite well for this task
comment by lemonhope (lcmgcd) · 2024-04-11T21:44:07.205Z · LW(p) · GW(p)
I wish LW questions had an "accepted answer" thing like stackexchange
comment by lemonhope (lcmgcd) · 2024-04-17T17:59:35.865Z · LW(p) · GW(p)
I wonder how many recent trans people tried/considered doubling down on their assigned sex (eg males taking more testosterone) first instead. Maybe (for some people) either end of the gender spectrum is comfortable and being in the middle feels bad¿ Anybody know? Don't want to ask my friends because this Q will certainly anger them
Replies from: ann-brown, sasha-liskova, michael-roe
↑ comment by Ann (ann-brown) · 2024-04-17T19:01:14.999Z · LW(p) · GW(p)
If it worked, sounds potentially compatible with whatever the inverse(s) of agender is/are? Can at least say that many cisgender people get hormone therapy when they aren't getting what they would like out of their hormones (i.e., menopause, low testosterone, etc). Hormones do useful things, and having them miscalibrated relative to your preferences can be unpleasant.
It's also not uncommon to try to 'double down' on a quality you're repressing, i.e., if someone's actively trying to be their assigned sex, they may in fact try particularly hard to conform to it, consciously or otherwise. Even if not repressed, I know I've deliberately answered a few challenges in life where I discovered 'this is particularly hard for me' with 'then I will apply additional effort to achieving it', and I'm sure I've also done it subconsciously.
↑ comment by Sasha Lišková (sasha-liskova) · 2024-09-16T22:25:07.079Z · LW(p) · GW(p)
Hey, Luke. I don't know if I'm still your friend, but I'm not angered, and I'll bite --- plenty of people I know have tried this. Joining the military is common, although I have no idea if this is to effect hypermasculinity or not (most of my trans friends are dmab.) Janae Marie Kroc is probably the most extreme example I can name, but I expect if you find a forum for exmilitary trans folk somewhere you'll be able to find a lot more data on this.
I think I could argue that in the years I knew you personally (like 2015 to 2017) I was trying to do this in some kind of way. LK was one of the first people I publicly floated my name to --- we were out running around campus, I don't know if you were still dating at the time. I have absolutely no idea if either of you care. N=1.
They are, consciously or not, trying to hide in the closet. This is not the worst idea anyone's ever had, especially in a hostile environment.
I appreciate that you're still working in an environment I gave up on ever making progress in. I just...wasn't equal to it. I hope you're well.
Replies from: lcmgcd
↑ comment by lemonhope (lcmgcd) · 2024-09-17T20:56:58.498Z · LW(p) · GW(p)
Hey!!! Thanks for replying. But did you or anyone you know consider chemical cisgenderization? Or any mention of such in the forums? I would expect it to be a much stronger effect than eg joining the military. Although I hear it is common for men in the military to take steroids, so maybe there would be some samples there.... I imagine taking cis hormones is not an attractive idea, because if you dislike the result then you're worse off than when you started.
(Oh and we were still together then. LK has child now, not sure how that affects the equation.)
Replies from: sasha-liskova
↑ comment by Sasha Lišková (sasha-liskova) · 2024-09-18T17:25:56.264Z · LW(p) · GW(p)
"Chemical cisgenderization" is usually just called "detransition." To do it, you stop taking hormones. Unless you've had the appropriate surgeries (which most of us haven't because it's very expensive) your body will do it by itself.
Transfeminine HRT consists of synthetic estrogen and an anti-androgen of some sort (usually spironolactone or finasteride.) Estrogen monotherapy, in higher doses, is coming more into vogue now that more has been published that suggests it's more effective.
Anyway, I know some people who have tried. I'm told the dysphoria comes right back, worse than ever. I know at least one (AMAB nonbinary) person who actually needed to take low-dose T after their orchiectomy, although the dose was an order of magnitude less than what their body naturally produced, but that's rather an exceptional case.
Actual desistance rates are on the order of a few percent*, and >90% of those are for reasons other than "I'm not actually trans." [0]
↑ comment by Michael Roe (michael-roe) · 2024-04-18T12:41:56.248Z · LW(p) · GW(p)
Well there's this frequently observed phenomenon where someone feels insecure about their gender, and then does something hypermasculine like joining Special Forces or becoming a cage fighter or something like that. They are hoping that it will make them feel confident of their birth-certificate-sex. Then they discover that nope, this does not work and they are still trans.
People should be aware that there are copious examples of people who are like --- nope, still trans --- after hoping that going hard on their birth-certificate-gender will work.
Replies from: michael-roe, quetzal_rainbow
↑ comment by Michael Roe (michael-roe) · 2024-04-18T12:53:23.656Z · LW(p) · GW(p)
Ascertainment bias, of course, because we only see the cases where this did not work, and do not know exactly how many members of e.g. Delta Force were originally in doubt as to their gender. We can know it doesn't work sometimes.
Replies from: michael-roe
↑ comment by Michael Roe (michael-roe) · 2024-04-18T12:57:58.559Z · LW(p) · GW(p)
While I was typing this, quetzal_rainbow made the same point
↑ comment by quetzal_rainbow · 2024-04-18T12:50:31.812Z · LW(p) · GW(p)
I mean, the problem is if it works we won't hear about such people - they just live happily ever after and don't talk about uncomfortable period of their life.
comment by lemonhope (lcmgcd) · 2024-08-28T05:37:42.252Z · LW(p) · GW(p)
Is there a good like uh "intro to China" book or YouTube channel? Like something that teaches me (possibly indirectly) what things are valued, how people think and act, extremely basic history, how politics works, how factories get put up, etc etc. Could be about government, industry, the common person, or whatever. I wish I could be asking for something more specific, but I honestly do not even know the basics.
All I've read is Shenzhen: A Travelogue from China which was quite good although very obsolete. Also it is a comic book.
I'm not much of a reader so I'm looking for something extremely basic.
I am asking humans instead of a chatbot because all the mainstream talk about China seems very wrong to me and I don't want to read something wrong
Replies from: thomas-kwa
↑ comment by Thomas Kwa (thomas-kwa) · 2024-08-28T07:34:50.601Z · LW(p) · GW(p)
I'm a fan of this blog which is mainly translations and commentary on Chinese social media posts but also has some history posts.
Replies from: lcmgcd
↑ comment by lemonhope (lcmgcd) · 2024-08-28T08:31:37.088Z · LW(p) · GW(p)
Thank you!
Replies from: lcmgcd
↑ comment by lemonhope (lcmgcd) · 2024-08-28T08:37:27.498Z · LW(p) · GW(p)
This is so much better than what claude was giving me
comment by lemonhope (lcmgcd) · 2024-12-07T00:16:44.536Z · LW(p) · GW(p)
What is the current popular (or ideally wise) wisdom wrt publishing demos of scary/spooky AI capabilities? I've heard the argument that moderately scary demos drive capability development into secrecy. Maybe it's just all in the details of who you show what when and what you say. But has someone written a good post about this question?
Replies from: rhollerith_dot_com
↑ comment by RHollerith (rhollerith_dot_com) · 2024-12-07T01:16:25.026Z · LW(p) · GW(p)
The way it is now, when one lab has an insight, the insight will probably spread quickly to all the other labs. If we could somehow "drive capability development into secrecy," that would drastically slow down capability development.
comment by lemonhope (lcmgcd) · 2024-07-31T07:41:16.731Z · LW(p) · GW(p)
It's hard to grasp just how good backprop is. Normally in science you estimate the effect of 1-3 variables on 1-3 outcomes. With backprop you can estimate the effect of a trillion variables on an outcome. You don't even need more samples! Around 100 is typical for both (n vs batch_size)
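A minimal sketch of the point, using a toy linear model with made-up sizes (my illustration, not from the comment): one reverse-mode pass gives the gradient of a single scalar outcome with respect to every parameter at once, from a batch of only 100 samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n_params, batch_size = 1_000_000, 100               # ~1e6 "variables", 100 samples

w = rng.standard_normal(n_params, dtype=np.float32) * 0.01
X = rng.standard_normal((batch_size, n_params), dtype=np.float32)  # ~400 MB
y = rng.standard_normal(batch_size, dtype=np.float32)

pred = X @ w                                         # forward pass
loss = np.mean((pred - y) ** 2)                      # one scalar outcome

# Backward pass: d(loss)/d(w_i) for every one of the million parameters at once.
grad = (2.0 / batch_size) * X.T @ (pred - y)
print(loss, grad.shape)                              # grad.shape == (1000000,)
```

In a real network the gradient comes from autodiff rather than the closed form above, but the shape of the claim is the same: one backward pass per batch yields gradients for every parameter.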
comment by lemonhope (lcmgcd) · 2024-07-20T19:45:51.148Z · LW(p) · GW(p)
I wonder how a workshop that teaches participants how to love easy victory and despise hard-fought battles could work
Replies from: Viliam↑ comment by Viliam · 2024-07-20T22:29:36.595Z · LW(p) · GW(p)
Give people a long list of tasks, a short time interval, and then reward them based on the number of tasks solved. Repeat until they internalize the lesson that solving a problem quickly is good, spending lots of time on a problem is bad, so if something seems complicated they should ignore it and move on to the next task.
comment by lemonhope (lcmgcd) · 2024-05-24T08:41:26.441Z · LW(p) · GW(p)
I wonder if a chat loop like this would be effective at shortcutting years of confused effort, maybe in research and/or engineering. (The AI just asks the questions and the person answers.)
- "what are you seeking?"
- "ok how will you do it?"
- "think of five different ways to do that"
- "describe a consistent picture of the consequences of that"
- "how could you do that in a day instead of a year"
- "give me five very different alternate theories of how the underlying system works"
Questions like that can be surprisingly easy to answer. Just hard to remember to ask.
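A minimal sketch of such a loop (my own illustration, with the question list copied from above; no model is involved, since the core of the idea is just being made to answer):

```python
QUESTIONS = [
    "What are you seeking?",
    "OK, how will you do it?",
    "Think of five different ways to do that.",
    "Describe a consistent picture of the consequences of that.",
    "How could you do that in a day instead of a year?",
    "Give me five very different alternate theories of how the underlying system works.",
]

def run_session() -> dict[str, str]:
    """Ask each question in turn and collect the answers."""
    answers = {}
    for q in QUESTIONS:
        answers[q] = input(q + "\n> ")
    return answers

if __name__ == "__main__":
    for q, a in run_session().items():
        print(f"\nQ: {q}\nA: {a}")
```

An LLM version would just swap the fixed list for a model that chooses the next question based on the previous answers.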
Replies from: Seth Herd
comment by lemonhope (lcmgcd) · 2024-04-11T23:58:40.046Z · LW(p) · GW(p)
I notice I strong upvote on LW mobile a lot more than desktop because double-tap is more natural than long-click. Maybe mobile should have a min delay between the two taps?
comment by lemonhope (lcmgcd) · 2022-05-03T08:40:05.575Z · LW(p) · GW(p)
Practice speedruns for rebuilding civilization?
comment by lemonhope (lcmgcd) · 2024-04-13T22:16:39.707Z · LW(p) · GW(p)
Is it rude to make a new tag without also tagging a handful of posts for it? A few tags I kinda want:
- explanation: thing explained.
- idea: an idea for a thing someone could do (weaker version of "Research Agenda" tag)
- stating the obvious: pointing out something obviously true but maybe frequently overlooked
- experimental result
- theoretical result
- novel maybe: attempts to do something new (in the sense of novelty requirements for conference publications)
↑ comment by kave · 2024-04-13T22:21:18.618Z · LW(p) · GW(p)
Good question! From the Wiki-Tag FAQ [LW · GW]:
A good heuristic is that tag ought to have three high-quality posts, preferably written by two or more authors.
I believe all tags have to be approved. If I were going through the morning moderation queue, I wouldn't approve an empty tag.
↑ comment by Gunnar_Zarncke · 2024-04-14T11:34:56.580Z · LW(p) · GW(p)
At times, I have added tags that I felt were useful or missing, but usually I add them to at least a few important posts to illustrate. At one time, one of them was removed, but a good explanation for it was given.
comment by lemonhope (lcmgcd) · 2020-01-27T00:52:38.277Z · LW(p) · GW(p)
Zettelkasten in five seconds with no tooling
Have one big textfile with every thought you ever have. Number the thoughts and don't make each thought too long. Reference thoughts with a pound (e.g. #456) for easy search.
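A made-up example of what a few entries in that file might look like (my illustration, not from the original note):

```
#456 Meetings before noon reliably kill my deep-work block. Try batching them after 3pm.
#457 Counterpoint to #456: morning meetings end on time because people want lunch.
#458 Idea: once a week, search for a random # to resurface old thoughts.
```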
comment by lemonhope (lcmgcd) · 2024-11-16T04:13:38.295Z · LW(p) · GW(p)
I can only find capabilities jobs right now. I would be interested in starting a tiny applied research org or something. How hard is it to get funding for that? I don't have a strong relevant public record, but I did quite a lot of work at METR and elsewhere.
Replies from: ryan_greenblatt, habryka4, lcmgcd
↑ comment by ryan_greenblatt · 2024-11-16T06:26:30.347Z · LW(p) · GW(p)
It might be easier to try to establish some track record by doing a small research project first. I don't know if you have enough runway for this though.
Replies from: lcmgcd
↑ comment by lemonhope (lcmgcd) · 2024-11-17T06:30:45.708Z · LW(p) · GW(p)
Yeah I just wanted to check that nobody is giving away money before I go do the exact opposite thing I've been doing. I might try to tidy something up and post it first
↑ comment by habryka (habryka4) · 2024-11-16T04:33:32.539Z · LW(p) · GW(p)
What do you mean by "applied research org"? Like, applied alignment research?
Replies from: lcmgcd
↑ comment by lemonhope (lcmgcd) · 2024-11-16T04:44:45.063Z · LW(p) · GW(p)
Yes.
↑ comment by lemonhope (lcmgcd) · 2024-11-16T04:27:06.331Z · LW(p) · GW(p)
I do think I could put a good team together and make decent contributions quickly
comment by lemonhope (lcmgcd) · 2024-05-14T05:41:17.542Z · LW(p) · GW(p)
The acceptable tone of voice here feels like 3mm wide to me. I'm always having bad manners
comment by lemonhope (lcmgcd) · 2024-05-12T08:34:23.827Z · LW(p) · GW(p)
LW mods, please pay somebody to turn every post with 20+ karma into a diagram. Diagrams are just so vastly superior to words.
Replies from: lahwran
↑ comment by the gears to ascension (lahwran) · 2024-05-12T09:03:07.054Z · LW(p) · GW(p)
can you demonstrate this for a few posts? (I suspect it will be much harder than you think.)
Replies from: lcmgcd
↑ comment by lemonhope (lcmgcd) · 2024-05-12T09:04:28.605Z · LW(p) · GW(p)
The job would of course be done by a diagramming god, not a wordpleb like me
If i got double dog dared...
Replies from: lahwran
↑ comment by the gears to ascension (lahwran) · 2024-05-12T09:06:11.990Z · LW(p) · GW(p)
Link some posts you'd like diagrams of at least, then. If this were tractable, it might be cool. But I suspect most of the value is in even figuring out how to diagram the posts.
Replies from: lcmgcd
↑ comment by lemonhope (lcmgcd) · 2024-05-12T09:14:06.473Z · LW(p) · GW(p)
From the frontpage:
https://www.lesswrong.com/posts/zAqqeXcau9y2yiJdi/can-we-build-a-better-public-doublecrux [LW · GW]
https://www.lesswrong.com/posts/bkr9BozFuh7ytiwbK/my-hour-of-memoryless-lucidity [LW · GW]
https://www.lesswrong.com/posts/Lgq2DcuahKmLktDvC/applying-refusal-vector-ablation-to-a-llama-3-70b-agent [LW · GW]
https://www.lesswrong.com/posts/ANGmJnZL2fskHX6tj/dyslucksia [LW · GW]
https://www.lesswrong.com/posts/BRZf42vpFcHtSTraD/linkpost-towards-a-theoretical-understanding-of-the-reversal [LW · GW]
Like all of them basically.
most of the value is in even figuring out how to diagram the posts
Think of it like a TLDR. There are many ways to TLDR but any method that's not terrible is fantastic
comment by lemonhope (lcmgcd) · 2024-09-05T03:53:47.533Z · LW(p) · GW(p)
maybe you die young so you don't get your descendants sick
I've always wondered why evolution didn't select for longer lifespans more strongly. Like, surely a mouse that lives twice as long would have more kids and better knowledge of safe food sources. (And lead their descendants to the same food sources.) I have googled for an explanation a few times but not found one yet.
I thought of a potential explanation the other day. The older you get, the more pathogens you take on. (Especially if you're a mouse.) If you share a den with your grandkids then you might be killing them. Also, if several generations live together, then endemic pathogens stick with the clan much longer. This might eventually wipe out your clan if one of the viruses etc has a bad mutation.
If you die before your offspring even hatch then you might not pass them any pathogens. Especially if you swim a mile up a river that's dry 90% of the year. https://youtube.com/watch?v=63Xs3Hi-2OU This is very funny and 1 minute long.
Most birds leave the nest (yes?) so perhaps that's why there are so many long-lived birds.
Although IIRC, bats live a really long time and have a mountain of pathogens.
Anybody know if this explanation is fleshed out somewhere, or know a better explanation?
Replies from: alexander-gietelink-oldenziel, nathan-helm-burger, nc
↑ comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2024-09-10T22:55:55.890Z · LW(p) · GW(p)
I like this.
Another explanation I have heard:
a popular theory of aging is the mitochondrial theory of aging.
There are several variants of this theory, some of which are definitely false, while some are plausibly sorta in the right direction. It's a big controversy and I'm not an expert, yada yada yada. Let me assume something like the following is true: aging is a metabolic phenomenon where mitochondria degrade over time and at some point start to leak damaging byproducts, which is substantially responsible for aging. Mitochondrial DNA has fewer repair mechanisms than nuclear DNA. Over time it accrues mutations that are bad (much quicker than nuclear DNA).
Species that reproduce fast & in large numbers may select less on (mitochondrial) mutational load since it matters less. On the other hand, species that have more selection on mitochondrial mutational load, for whatever reason, are less fecund. E.g. fetuses may be spontaneously aborted if the mitochondria have too many mutations.
Some pieces of evidence: eggs contain the mitochondria and are 'kept on ice', i.e. they do not metabolize. Birds have a much stronger selection pressure for high-functioning metabolism (because of flight)[1] and plausibly 'better mitochondria'.
[there are also variant hypotheses possible that have a similar mutational meltdown story but don't go through mitochondria per se. There is some evidence and counterevidence for epigenetic and non-mitochondrial mutational meltdown theories of aging too. So not implausible]
- ^
compare bats? what about their lifespans?
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-09-05T20:14:41.238Z · LW(p) · GW(p)
My cached mental explanation from undergrad when I was learning about the details of evolution and thinking about this was something along the lines of a heuristic like:
"Many plants and animals seem to have been selected for dying after a successful reproduction event. Part of this may be about giving maximal resources to that reproduction event (maybe your only one, or just your last one). But for animals that routinely survive their last reproductive event, and survive raising the children until the children become independent, then there's probably some other explanation. I think about this with mice as my prototypical example a lot, since they seem to have this pattern. Commonly both male and female mice will survive reproduction, potentially even multiple cycles. However, mice do seem to be selected for relatively fast senescence. What might underlie this?
My guess is that senescence can cause you to get out of the way of your existing offspring. Avoiding being a drag on them. There are many compatible (potentially co-occurring) ways this could happen. Some that I can think of off the top of my head are:
- Not being a vector for disease, while in a relatively weakened state of old age
- Not feeding predators, which could then increase in population and put further stress on the population of your descendants / relatives.
- Not consuming resources which might otherwise be more available to your descendants / relatives including:
- food
- good shelter locations
- potential mating opportunities
- etc
"
↑ comment by lemonhope (lcmgcd) · 2024-09-10T17:30:24.948Z · LW(p) · GW(p)
Thanks for the cached explanation; this is similar to what I thought until a few days ago. But now I'm thinking that an older-but-still-youthful mouse would be better at avoiding predators and could be just as fertile, if mice were long-lived. So the food & shelter might be "better spent" on them, in terms of total expected descendants. That would leave only the disease explanation, yes?
↑ comment by nc · 2024-09-05T09:59:24.030Z · LW(p) · GW(p)
My understanding was that the typical explanation is antagonistic pleiotropy, but I don't know whether that's the consensus view.
This seems to have the name 'pathogen control hypothesis' in the literature - see review. I think it has all the hallmarks of a good predictive hypothesis, but I'd really want to see some simulations of which parameter scenarios induce selection this way.
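Since the review stops short of simulations, here is a minimal sketch (in Python) of the kind of toy model one could start from. Everything in it is an assumption for illustration: the replicator-dynamics framing, the parameter values, and especially the shortcut of folding clan-level pathogen effects directly into individual offspring survival.

```python
# Toy replicator-dynamics sketch of the "pathogen control" idea: a short-lived
# ("senescent") genotype dies before it can pass an endemic pathogen to its
# offspring, while a long-lived genotype keeps sharing the den and sometimes
# infects them. All parameter values are illustrative guesses.

GENERATIONS = 200
BROOD = 4                # offspring per parent (assumed)
P_TRANSMIT_LONG = 0.4    # chance a long-lived parent infects each offspring (assumed)
INFECTION_COST = 0.3     # relative fitness loss for an infected offspring (assumed)
LONGEVITY_BONUS = 0.1    # extra offspring survival from a parent sticking around (assumed)

def step(freq_short: float) -> float:
    """One generation of selection on the frequency of the short-lived genotype."""
    w_short = BROOD                                                   # dies early, transmits nothing
    w_long = BROOD * (1 + LONGEVITY_BONUS) * (1 - P_TRANSMIT_LONG * INFECTION_COST)
    mean_w = freq_short * w_short + (1 - freq_short) * w_long
    return freq_short * w_short / mean_w                              # standard replicator update

freq = 0.01  # start the short-lived genotype rare
for _ in range(GENERATIONS):
    freq = step(freq)
print(f"short-lived genotype frequency after {GENERATIONS} generations: {freq:.3f}")
```

With these particular guesses the short-lived genotype spreads; nudge LONGEVITY_BONUS up or P_TRANSMIT_LONG down and it doesn't, which is exactly the kind of parameter scan being asked for here.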
Replies from: lcmgcd↑ comment by lemonhope (lcmgcd) · 2024-09-05T15:59:54.817Z · LW(p) · GW(p)
The keywords are much appreciated. That second link is only from 2022! I wonder if anybody suggested this in like 1900. Edit: some of the citations are from very long ago.
comment by lemonhope (lcmgcd) · 2024-07-01T18:03:32.598Z · LW(p) · GW(p)
I wonder how well a water-cooled stovetop thermoelectric backup generator could work.
This one is only 30 W but air-cooled: https://www.tegmart.com/thermoelectric-generators/wood-stove-air-cooled-30w-teg
You could use a fish-tank water pump to bring water to/from the sink. Just fill up a bowl of water with the faucet and stick the tube in it. Leave the faucet running. Put a filter on the bowl. Add a float switch to detect low water, and run its wire along with the water tube.
A normal natural gas generator is like $5k-10k, and you have to be a homeowner.
I think a really wide kettle with a coily bottom would be super efficient at heat absorption. It doesn't have to be dishwasher-safe, obviously, unlike a pan.
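For a rough sense of scale, a back-of-the-envelope sketch, where every number (burner heat, fraction captured, temperatures, module efficiency, pump draw) is a guess rather than a spec of the linked unit:

```python
# Back-of-the-envelope estimate for a water-cooled stovetop TEG.
# Every number here is an assumed guess, not a spec of any product.

stove_heat_w = 2000          # heat a typical gas burner delivers, W (assumed)
fraction_through_teg = 0.5   # fraction of that heat actually crossing the module (assumed)
t_hot_k = 250 + 273          # hot side under a wide coily-bottom absorber (assumed)
t_cold_k = 30 + 273          # cold side held down by the pumped water loop (assumed)

carnot_limit = 1 - t_cold_k / t_hot_k   # ~0.42 with these temperatures
teg_efficiency = 0.05                   # BiTe modules manage a few percent (assumed)
assert teg_efficiency < carnot_limit    # sanity check on the guess

electrical_w = stove_heat_w * fraction_through_teg * teg_efficiency
pump_w = 5                              # small aquarium pump draw (assumed)

print(f"gross output: ~{electrical_w:.0f} W")          # ~50 W with these guesses
print(f"net after the pump: ~{electrical_w - pump_w:.0f} W")
```

Even with fairly generous guesses this lands in the tens of watts, i.e. phone charging and lights rather than running appliances, which is at least consistent with the 30 W air-cooled unit above.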
comment by lemonhope (lcmgcd) · 2024-06-25T20:24:32.353Z · LW(p) · GW(p)
(Quoting my recent comment)
Apparently in the US we are too ashamed to say we have "worms" or "parasites", so instead we say we have "helminths". Using this keyword makes google work. This article estimates at least 5 million people (possibly far more) in the US have one of the 6 considered parasites. Other parasites may also be around. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7847297/ (table 1)
This is way more infections than I thought!!
Note the weird symptoms. Blurry vision, headache, respiratory illness, blindness, impaired cognition, fever... Not just IBS and anemia!
The author does not appear to be a crank
comment by lemonhope (lcmgcd) · 2024-05-27T16:11:10.524Z · LW(p) · GW(p)
I was working on this cute math notation the other day. Curious if anybody knows a better way or if I am overcomplicating this.
Say you have . And you want to be some particular value.
Sometimes you can control , sometimes you can control , and you can always easily measure . So you might use these forms of the equation:
It's kind of confusing that seems proportional to both and . So here's where the notation comes in. Can write above like
Which seems a lot clearer to me.
And you could shorten it to , , and .
comment by lemonhope (lcmgcd) · 2024-04-24T07:06:47.849Z · LW(p) · GW(p)
Seems it is easier / more streamlined / more googlable now for a teenage male to get testosterone blockers than testosterone. Latter is very frowned upon — I guess because it is cheating in sports. Try googling eg "get testosterone prescription high school reddit -trans -ftm". The results are exclusively people shaming the cheaters. Whereas of course googling "get testosterone blockers high school reddit" gives tons of love & support & practical advice.
Females however retain easy access to hormones via birth control.
comment by lemonhope (lcmgcd) · 2024-04-11T23:44:56.074Z · LW(p) · GW(p)
I wonder what experiments physicists have dreamed up to find floating point errors in physics. Anybody know? Or can you run physics with large ints? Would you need like int256?
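Not an answer about physics experiments, but a quick intuition pump for the large-int question: the sketch below compares naive floating-point accumulation against exact integer "tick" bookkeeping (the step count and step size are arbitrary choices, not anything physical).

```python
# Accumulate a small timestep many times, the way a naive integrator would,
# and compare against exact integer "tick" bookkeeping. This only illustrates
# how rounding error creeps in; it has nothing to do with real physics.

steps = 1_000_000
dt = 0.1  # not exactly representable in binary floating point

float_sum = 0.0
for _ in range(steps):
    float_sum += dt

# Fixed-point alternative: pick a resolution up front (1 tick = 0.1 here) and
# keep the books in integers, which never round. Python ints are arbitrary
# precision; a real int256 would do the same but cap the dynamic range.
ticks = 0
for _ in range(steps):
    ticks += 1            # one tick per step, exact
exact = ticks / 10        # 100000.0, exactly representable as a float

print(f"float64 sum: {float_sum!r}")
print(f"exact sum:   {exact}")
print(f"drift:       {float_sum - exact:+.3e}")
```

The per-step error is tiny but systematic, so the interesting question is what it accumulates to over astronomically many steps, and how wide an integer (hence the int256 guess) you would need to cover the same dynamic range at fixed resolution.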
comment by lemonhope (lcmgcd) · 2024-04-14T18:51:13.122Z · LW(p) · GW(p)
Andor is a word now. You're welcome everybody. Celebrate with champagne andor ice cream.
Replies from: lcmgcd↑ comment by lemonhope (lcmgcd) · 2024-04-28T23:11:52.849Z · LW(p) · GW(p)
What monster downvoted this
comment by lemonhope (lcmgcd) · 2024-04-24T07:45:25.652Z · LW(p) · GW(p)
I wonder how much testosterone during puberty lowers IQ. Most of my high school math/CS friends seemed low-T and 3/4 of them transitioned since high school. They still seem smart as shit. The higher-T among us seem significantly brain damaged since high school (myself included). I wonder what the mechanism would be here...
Like 40% of my math/cs Twitter is trans women and another 30% is scrawny nerds and only like 9% big bald men.
Replies from: Gunnar_Zarncke, Viliam, quetzal_rainbow, metachirality↑ comment by Gunnar_Zarncke · 2024-04-24T12:31:27.368Z · LW(p) · GW(p)
Testosterone influences brain function, but not so much general IQ. It may influence which areas your attention, and thus most of your learning, goes to. For example, lower testosterone increases attention to happy faces while higher testosterone increases attention to angry faces.
Replies from: lcmgcd↑ comment by lemonhope (lcmgcd) · 2024-04-28T20:40:59.411Z · LW(p) · GW(p)
Hmm, I think the damaging effect would occur over many years but mainly during puberty. It looks like there are only two studies they mention that lasted over a year. One found a damaging effect and the other found no effect.
↑ comment by Viliam · 2024-04-24T14:04:11.118Z · LW(p) · GW(p)
Also, accelerate education, to learn as much as possible before the testosterone fully hits.
Or, if testosterone changes attention (as Gunnar wrote), learn as much as possible before the testosterone fully hits... and afterwards learn it again, because it could give you a new perspective.
↑ comment by quetzal_rainbow · 2024-04-24T14:08:11.934Z · LW(p) · GW(p)
It's a really weird hypothesis, because DHT is used as a nootropic.
I think the main effect of high T, if it exists, is purely behavioral.
↑ comment by metachirality · 2024-04-24T14:14:07.651Z · LW(p) · GW(p)
The hypothesis I would immediately come up with is that less traditionally masculine AMAB people are inclined towards less physical pursuits.