Posts

[April Fools' Day] Introducing Open Asteroid Impact 2024-04-01T08:14:15.800Z
Linkpost: Francesca v Harvard 2023-12-17T06:18:05.883Z
EA Infrastructure Fund's Plan to Focus on Principles-First EA 2023-12-06T03:24:55.844Z
EA Infrastructure Fund: June 2023 grant recommendations 2023-10-26T00:35:07.981Z
Linkpost: A Post Mortem on the Gino Case 2023-10-24T06:50:42.896Z
Is the Wave non-disparagement thingy okay? 2023-10-14T05:31:21.640Z
What do Marginal Grants at EAIF Look Like? Funding Priorities and Grantmaking Thresholds at the EA Infrastructure Fund 2023-10-12T21:40:17.654Z
The Long-Term Future Fund is looking for a full-time fund chair 2023-10-05T22:18:53.720Z
Linkpost: They Studied Dishonesty. Was Their Work a Lie? 2023-10-02T08:10:51.857Z
Long-Term Future Fund Ask Us Anything (September 2023) 2023-08-31T00:28:13.953Z
LTFF and EAIF are unusually funding-constrained right now 2023-08-30T01:03:30.321Z
What Does a Marginal Grant at LTFF Look Like? Funding Priorities and Grantmaking Thresholds at the Long-Term Future Fund 2023-08-11T03:59:51.757Z
Long-Term Future Fund: April 2023 grant recommendations 2023-08-02T07:54:49.083Z
Are the majority of your ancestors farmers or non-farmers? 2023-06-20T08:55:31.347Z
Some lesser-known megaproject ideas 2023-04-02T01:14:54.293Z
Announcing Impact Island: A New EA Reality TV Show 2022-04-01T17:28:23.277Z
The Motivated Reasoning Critique of Effective Altruism 2021-09-15T01:43:59.518Z
Linch's Shortform 2020-10-23T18:07:04.235Z
What are some low-information priors that you find practically useful for thinking about the world? 2020-08-07T04:37:04.127Z

Comments

Comment by Linch on Linch's Shortform · 2024-04-16T00:14:32.597Z · LW · GW

People might appreciate this short (<3 minutes) video interviewing me about my April 1 startup, Open Asteroid Impact:

 

Comment by Linch on [April Fools' Day] Introducing Open Asteroid Impact · 2024-04-10T21:38:04.764Z · LW · GW

Alas, I think doing this will be prohibitively expensive/technologically infeasible. We did some BOTECs at the launch party, and even just getting rid of leap seconds was too expensive for us.

That's one of many reasons why I'm trying to raise 7 trillion dollars.

Comment by Linch on [April Fools' Day] Introducing Open Asteroid Impact · 2024-04-10T19:14:57.814Z · LW · GW

Thanks for the pro-tip! I'm not much of a geologist, more of an ideas guy[1] myself. 

  1. ^

    "I can handle the business end"

Comment by Linch on [April Fools' Day] Introducing Open Asteroid Impact · 2024-04-01T21:59:19.339Z · LW · GW

Open Asteroid Impact strongly disagrees with this line of thinking. Our theory of change relies on many asteroids filled with precious minerals hitting earth, as mining in space (even LEO) is prohibitively expensive compared to on-ground mining.

While your claims may be true for small asteroids, we strongly believe that scale is all you need. Over time, sufficiently large, and sufficiently many, asteroids can solve the problem of specific asteroids not successfully impacting Earth.
 

Comment by Linch on [April Fools' Day] Introducing Open Asteroid Impact · 2024-04-01T19:21:29.098Z · LW · GW

rare earth metals? More like common space metals, amirite?

Comment by Linch on I played the AI box game as the Gatekeeper — and lost · 2024-02-14T23:05:39.155Z · LW · GW

In 2015 or so, when my friend and I independently came across a lot of rationalist concepts, we learned that we were each interested in this sort of LW-shaped thing. He offered for us to try the AI box game. I played the game as Gatekeeper and won with ease. So at least my anecdotes don't make me particularly worried.

That said, these days I wouldn't publicly offer to play the game against an unlimited pool of strangers. When my friend and I played against each other, there was an implicit set of norms in play, norms that explicitly don't apply to the game as stated, where "the AI has no ethical constraints."

I do not particularly relish the thought of giving a stranger with a ton of free time and something to prove the license to be (e.g.) as mean to me as possible over text for two hours straight (while having days or even weeks to prepare ahead of time). I might lose, too. I can think of at least 3 different attack vectors[1] that might get me to decide that the -EV of losing the game is not as bad as the -EV of having to stay online and attentive in such a situation for almost 2 more hours.

That said, I'm also not convinced that in the literal boxing example (a weakly superhuman AI is in a server farm somewhere, and I'm the sole gatekeeper responsible for deciding whether to let it out), I'd necessarily let it out, even after accounting for the greater cognitive capabilities and thoroughness of a superhuman AI. This is because I expect my willingness to hold out in an actual potential end-of-world scenario is much higher than my willingness to hold out for $25 and some internet points.

  1. ^

    In the spirit of the game, I will not publicly say what they are, but I can tell people over DMs if they're interested. I expect most people to agree that they a) are within the explicit rules of the game, b) would plausibly cause reasonable people to fold, and c) are not super analogous to actual end-of-world scenarios.

Comment by Linch on Conditional prediction markets are evidential, not causal · 2024-02-09T00:35:51.070Z · LW · GW

Yep. 

Comment by Linch on Conditional prediction markets are evidential, not causal · 2024-02-08T01:52:51.003Z · LW · GW

Yeah, this came up a number of times during COVID forecasting in 2020. E.g., you might expect the correlational effect of having a lockdown during times of expected high mortality load to outweigh any causal advantages of lockdowns on mortality.
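
A toy numerical illustration of this confounding point (a minimal sketch with made-up numbers, not something from the original discussion): suppose governments lock down exactly when expected severity is high, and a lockdown causally cuts mortality by 30%. A conditional market on "mortality given lockdown" would still trade higher than "mortality given no lockdown."

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model with made-up numbers: expected epidemic severity is random,
# lockdowns happen exactly when expected severity is high, and a lockdown
# causally reduces realized mortality by 30% at any given severity level.
severity = rng.uniform(0, 100, size=1_000_000)        # expected mortality load
lockdown = severity > 50                              # policy responds to severity
mortality = severity * np.where(lockdown, 0.7, 1.0)   # lockdown causally helps

# The quantities a conditional prediction market estimates are evidential:
print(mortality[lockdown].mean())    # ~52.5: mortality conditional on lockdown
print(mortality[~lockdown].mean())   # ~25.0: mortality conditional on no lockdown
# "Mortality given lockdown" trades higher than "mortality given no lockdown"
# even though the causal effect of a lockdown here is a 30% reduction.
```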

Comment by Linch on The impossible problem of due process · 2024-02-05T09:33:52.751Z · LW · GW

https://www.smbc-comics.com/comic/aaaah

Comment by Linch on Linch's Shortform · 2024-01-31T20:23:10.300Z · LW · GW

Going forwards, LTFF is likely to be a bit more stringent (~15-20%?[1] Not committing to the exact number) about approving mechanistic interpretability grants than grants in other subareas of empirical AI Safety, particularly from junior applicants. Some assorted reasons (note that not all fund managers necessarily agree with each of them):

  • Relatively speaking, a high fraction of resources and support for mechanistic interpretability comes from sources in the community other than LTFF; we view support for mech interp as less neglected within the community.
  • Outside of the existing community, mechanistic interpretability has become an increasingly "hot" field in mainstream academic ML; we think good work is fairly likely to come from non-AIS motivated people in the near future. Thus overall neglectedness is lower.
  • While we are excited about recent progress in mech interp (including some from LTFF grantees!), some of us are skeptical that even the success stories in interpretability constitute that large a fraction of the overall success story for AGI Safety.
  • Some of us are worried about field-distorting effects of mech interp being oversold to junior researchers and other newcomers as necessary or sufficient for safe AGI.
  • A high percentage of our technical AIS applications are about mechanistic interpretability, and we want to encourage a diversity of attempts and research to tackle alignment and safety problems.

We want to encourage people interested in working on technical AI safety to apply to us with proposals for projects in areas of empirical AI safety other than interpretability. To be clear, we are still excited about receiving mechanistic interpretability applications in the future, including from junior applicants. Even with a higher bar for approval, we are still excited about funding great grants.

We tentatively plan on publishing a more detailed explanation about the reasoning later, as well as suggestions or a Request for Proposals for other promising research directions. However, these things often take longer than we expect/intend (and may not end up happening), so I wanted to give potential applicants a heads-up.

  1. ^

    Operationalized as "assuming similar levels of funding in 2024 as in 2023, I expect that about 80-85% of the mech interp projects we funded in 2023 will be above the 2024 bar."

Comment by Linch on Linkpost: Francesca v Harvard · 2023-12-19T01:58:43.714Z · LW · GW

My guess is that it's because "Francesca" sounds more sympathetic as a name.

Comment by Linch on OpenAI: Altman Returns · 2023-11-30T21:10:50.101Z · LW · GW

Interesting! That does align better with the survey data than what I see on e.g. Twitter.

Comment by Linch on OpenAI: Altman Returns · 2023-11-30T19:29:41.312Z · LW · GW

Out of curiosity, is "around you" a rationalist-y crowd, or a different one?

Comment by Linch on Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense · 2023-11-25T03:29:34.070Z · LW · GW

Apologies if I'm being naive, but it doesn't seem like an oracle AI[1] is logically or practically impossible, and a good oracle should be able to perform well at long-horizon tasks[2] without "wanting things" in the behaviorist sense, or bending the world in consequentialist ways.

The most obvious exception is if the oracle's own answers are causing people to bend the world in the service of hidden behaviorist goals that the oracle has (e.g. making the world more predictable to reduce future loss), but I don't have strong reasons to believe that this is very likely. 

This is especially the case since at training time, the oracle doesn't have any ability to bend the training dataset to fit its future goals, so I don't see why gradient descent would find cognitive algorithms for "wanting things in the behaviorist sense."

[1] in the sense of being superhuman at prediction for most tasks, not in the sense of being a perfect or near-perfect predictor.

[2] e.g. "Here's the design for a fusion power plant, here's how you acquire the relevant raw materials, here's how you do project management, etc." or "I predict your polio eradication strategy to have the following effects at probability p, and the following unintended side effects that you should be aware of at probability q." 

Comment by Linch on OpenAI: The Battle of the Board · 2023-11-24T09:14:42.740Z · LW · GW

Someone privately messaged me this whistleblowing channel for people to give their firsthand accounts of board members. I can't verify the veracity/security of the channel but I'm hoping that having an anonymous place to post concerns might lower the friction or costs involved in sharing true information about powerful people:

https://openaiboard.wtf/

Comment by Linch on OpenAI: The Battle of the Board · 2023-11-23T04:38:07.490Z · LW · GW

In the last 4 days, they were probably running on no sleep (and less used to that/had less access to the relevant drugs than Altman and Brockman), and had approximately zero external advisors, while Altman seemed to be tapping into half of Silicon Valley and beyond for help/advice.

Comment by Linch on OpenAI: The Battle of the Board · 2023-11-23T02:23:57.853Z · LW · GW

Apologies, I've changed the link.

Comment by Linch on Dialogue on the Claim: "OpenAI's Firing of Sam Altman (And Shortly-Subsequent Events) On Net Reduced Existential Risk From AGI" · 2023-11-23T02:09:03.795Z · LW · GW

I think it was clear from context that Lukas' "EAs" was intentionally meant to include Ben, and is also meant as a gentle rebuke re: naivete, not a serious claim re: honesty.

Comment by Linch on Dialogue on the Claim: "OpenAI's Firing of Sam Altman (And Shortly-Subsequent Events) On Net Reduced Existential Risk From AGI" · 2023-11-23T02:07:45.985Z · LW · GW

I feel like you misread Lukas, and his words weren't particularly unclear.

Comment by Linch on OpenAI: The Battle of the Board · 2023-11-23T01:12:00.693Z · LW · GW

Yes, here is GPT-4's reasoning.

Comment by Linch on OpenAI: The Battle of the Board · 2023-11-23T00:12:49.037Z · LW · GW

Besides his most well-known controversial comments re: women, at least according to my read of his Wikipedia page, Summers has a poor track record re: being able to identify and oust sketchy people specifically.

Comment by Linch on OpenAI: Facts from a Weekend · 2023-11-22T23:51:05.074Z · LW · GW

I think it was most likely unanimous among the remaining 4, otherwise one of the dissenters would've spoken out by now.

Comment by Linch on OpenAI: Facts from a Weekend · 2023-11-22T23:49:07.865Z · LW · GW

My favorite low-probability theory is that he had blackmail material on one of the board members[1], who initially decided after much deliberation to go forwards despite the blackmail, and then, when they realized they got outplayed by Sam not using the blackmail material, backpedaled and refused to dox themselves. And the other 2-3 didn't know what to do afterwards, because their entire strategy was predicated on optics management around said blackmail + blackmail material.

  1. ^

    Like something actually really bad.

Comment by Linch on OpenAI: Facts from a Weekend · 2023-11-21T00:29:40.495Z · LW · GW

AFAICT the only formal power the board has is in firing the CEO, so if we get a situation where whenever the board wants to fire Sam, Sam comes back and fires the board instead, well, it's not exactly an inspiring story for OpenAI's governance structure.

Comment by Linch on Book Review: Going Infinite · 2023-11-04T06:43:35.237Z · LW · GW

Here are some notes on why I think Imperial Japan was unusually bad, even by the very low bar set by the Second World War.

Comment by Linch on Linch's Shortform · 2023-11-04T06:41:45.905Z · LW · GW

CW: fairly frank discussions of violence, including sexual violence, in some of the worst publicized atrocities with human victims in modern human history. Pretty dark stuff in general.

tl;dr: Imperial Japan did worse things than the Nazis. There was probably a greater scale of harm, more unambiguous and greater cruelty, and more commonplace breaking of near-universal human taboos.

I think the Imperial Japanese Army was noticeably worse during World War II than the Nazis. Obviously words like "noticeably worse" and "bad" and "crimes against humanity" are to some extent judgment calls, but my guess is that to most neutral observers looking at the evidence afresh, the difference isn't particularly close.

  • probably greater scale 
    • of civilian casualties: It is difficult to get accurate estimates of the number of civilian casualties caused by Imperial Japan, but my best guess is that the total numbers are higher (both are likely in the tens of millions).
    • of Prisoners of War (POWs): Germany's mistreatment of Soviet Union POWs is called "one of the greatest crimes in military history" and arguably Nazi Germany's second biggest crime. The numbers involved were that Germany captured 6 million Soviet POWs, and 3 million died, for a fatality rate of 50%. In contrast, of all Chinese POWs taken by Japan, 56 survived to the end.
      • Japan's attempted coverups of warcrimes often involved attempted total eradication of victims. We see this in both POWs and in Unit 731 (their biological experimental unit, which we will explore later).
  • more unambiguous and greater cruelty
    • It's instructive to compare Nazi Germany's human experiments against Japanese human experiments at Unit 731 (warning: body horror). Both were extremely bad in absolute terms. However, without getting into the details of the specific experiments, I don't think anybody could plausibly argue that the Nazis were more cruel in their human experiments, or inflicted more suffering. The widespread casualness and lack of any traces of empathy also seemed higher in Imperial Japan:
      • "Some of the experiments had nothing to do with advancing the capability of germ warfare, or of medicine. There is such a thing as professional curiosity: ‘What would happen if we did such and such?’ What medical purpose was served by performing and studying beheadings? None at all. That was just playing around. Professional people, too, like to play."
      • When (Japanese) Unit 731 officials were infected, they immediately went on the experimental chopping block as well (without anesthesia).  
  • more commonplace breaking of near-universal human taboos
    • I could think of several key taboos that were broken by Imperial Japan but not the Nazis. I can't think of any in reverse.
      • Taboo against biological warfare:
        • To a first approximation, Nazi Germany did not actually do biological warfare outside of small-scale experiments. In contrast, Imperial Japan was very willing to do biological warfare "in the field" on civilians, and estimates of civilian deaths from Japan-introduced plague are upwards of 200,000.
      • Taboo against mass institutionalized rape and sexual slavery.
        • While I'm sure rape happened and was commonplace in German-occupied territories, it was not, to my knowledge, condoned and institutionalized widely. While there are euphemisms applied like "forced prostitution" and "comfort women", the reality was that 50,000 - 200,000 women (many of them minors) were regularly raped under the direct instruction of the Imperial Japanese gov't.
      • Taboo against cannibalism outside of extreme exigencies.
        • "Nazi cannibals" is the material of B-movies and videogames, i.e., it has approximately zero basis in history. In contrast, Japanese cannibalism undoubtedly happened and was likely commonplace.
          • We have documented oral testimony from Indian POWs,  Australian POWs, American soldiers, and Japanese soldiers themselves.
          • My rationalist-y friends sometimes ask why the taboo against cannibalism is particularly important. 
            • I'm not sure why, but I think part of the answer is "dehumanization." 

I bring this topic up mostly out of morbid curiosity. I haven't spent that much time looking into war crimes, and haven't dived into the primary literature, so I'm happy to be corrected on various fronts.

Comment by Linch on [Linkpost] Mark Zuckerberg confronted about Meta's Llama 2 AI's ability to give users detailed guidance on making anthrax - Business Insider · 2023-11-01T18:43:56.770Z · LW · GW

Yeah, terrorists are often not very bright, conscientious, or creative.[1] I think rationalist-y types might systematically overestimate how much proliferation of non-novel information can still be bad, via giving scary ideas to scary people.

  1. ^

    No offense intended to any members of the terror community reading this comment

Comment by Linch on Linkpost: A Post Mortem on the Gino Case · 2023-10-28T20:40:46.487Z · LW · GW

My guess is still that this is below the LTFF bar (which imo is quite high) but I've forwarded some thoughts to some metascience funders I know. I might spend some more free time trying to push this through later. Thanks for the suggestion! 

Comment by Linch on Linkpost: A Post Mortem on the Gino Case · 2023-10-26T08:25:13.292Z · LW · GW

This is the type of thing that speaks to me aesthetically but my guess is that it wouldn't pencil, though I haven't done the math myself (nor do I have a good sense of how to model it well). Improving business psychology is just not a very leveraged way to improve the long-term future compared to the $X00M/year devoted to x-risk, so the flow-through effects have to be massive in order to be better than marginal longtermist grants. (If I was making grants in metascience this is definitely the type of thing I'd consider).

I'm very open to being wrong though; if other people have good/well-justified Fermi estimates for a pretty large effect (better than the marginal AI interpretability grant for example) I'd be very happy to reconsider.

Comment by Linch on Book Review: Going Infinite · 2023-10-26T03:26:39.621Z · LW · GW

I started writing a comment reply to elaborate after getting some disagreevotes on the parent comment, but decided that it'd be a distraction from the main conversation; I might expand on my position in an LW shortform at some point in the near future.

Comment by Linch on AI as a science, and three obstacles to alignment strategies · 2023-10-26T03:24:40.833Z · LW · GW

The canonical source for this is What Engineers Know and How They Know It, though I confess to not actually reading the book myself.

Comment by Linch on Lying to chess players for alignment · 2023-10-26T00:05:42.804Z · LW · GW

I'm happy to play any of the 4 roles. I haven't played non-blitz chess in quite a while (and never played it seriously), but I would guess I'm ~1300 on standard time controls on chess.com (interpolating between different time controls and assuming a similar decay as other games like Go).

I'm free after 9pm PDT most weekdays, and free between noon and 6pm or so on weekends.

Comment by Linch on Book Review: Going Infinite · 2023-10-25T22:38:01.912Z · LW · GW

I think Germany is an extreme outlier here, fwiw; (e.g.) Japan did far worse things and after WW2 cared more about covering up wrongdoing than about admitting fault; further, Germany's government and cultural "reformation" was very much strongarmed by the US and other allies, whereas the US actively assisted Japan in covering up war crimes.

EDIT: See shortform elaboration: https://www.lesswrong.com/posts/s58hDHX2GkFDbpGKD/linch-s-shortform?commentId=ywf8R3CobzdkbTx3d 

Comment by Linch on My checklist for publishing a blog post · 2023-10-23T19:58:39.254Z · LW · GW

Thanks, this is helpful.

Comment by Linch on Evaluating the historical value misspecification argument · 2023-10-11T08:29:51.096Z · LW · GW

Thanks! Though tbh I don't think I fully got the core point via reading the post so I should only get partial credit; for me it took Alexander's comment to make everything click together.

Comment by Linch on Examples of Low Status Fun · 2023-10-11T00:37:41.782Z · LW · GW

I think anything that roughly maps to "live vicariously through higher-status people" is usually seen as low-status, because higher-status people are presumed to be able to just do those things directly:

  • watching porn/masturbation (as opposed to having sex)
  • watching sports
  • watching esports
  • romance novels
  • reality TV
  • Guitar Hero etc (a bit of a stretch but you're playing the role of somebody who could actually play guitar)

One exception is when the people you're living vicariously through are very high status/do very difficult things (eg it's medium to high status to be watching the Olympics, or The West Wing, or reading biographies). Conversely, living vicariously is usually considered even lower status if the people you are living vicariously through aren't very high-status themselves (eg reality TV or esports). Another exception to this trend is reading classic novels and history. I'm not sure why.

Comment by Linch on Evaluating the historical value misspecification argument · 2023-10-11T00:25:51.420Z · LW · GW

I think I read this a few times but I still don't think I fully understand your point. I'm going to try to rephrase what I believe you are saying in my own words:

  • Our correct epistemic state in 2000 or 2010 should be to have a lot of uncertainty about the complexity and fragility of human values. Perhaps it is very complex, but perhaps people are just not approaching it correctly.
  • At the limit, the level of complexity can approach "simulate a number of human beings in constant conversation and moral deliberation with each other, embedded in the existing broader environment, and where a small mistake in the simulation renders the entire thing broken in the sense of losing almost all moral value in the universe if that's what you point at"
  • At the other, you can imagine a fairly simple mathematical statement that's practically robust to any OOD environments or small perturbations.
  • In worlds where human values aren't very complex, alignment isn't solved, but you should perhaps expect it to be (significantly) easier. ("Optimize for this mathematical statement" is an easier thing to point at than "optimize for the outcome of this complex deliberation, no, not the actual answers out of their mouths but the indirect more abstract thing they point at")
  • Suppose in 2000 you were told that a 100-line Python program (that doesn't abuse any of the particular complexities embedded elsewhere in Python) can provide a perfect specification of human values. Then you should rationally conclude that human values aren't actually all that complex (more complex than the clean mathematical statement, but simpler than almost everything else).
  • In such a world, if inner alignment is solved, you can "just" train a superintelligent AI to "optimize for the results of that Python program" and you'd get a superintelligent AI with human values.
    • Notably, alignment isn't solved by itself. You still need to get the superintelligent AI to actually optimize for that Python program and not some random other thing that happens to have low predictive loss in training on that program.
  • Well, in 2023 we have that Python program, with a few relaxations:
    • The answer isn't embedded in 100 lines of Python, but in a subset of the weights of GPT-4
      • Notably the human value function (as expressed by GPT-4) is necessarily significantly simpler than the weights of GPT-4, as GPT-4 knows so much more than just human values.
    • What we have now isn't a perfect specification of human values, but instead roughly the level of understanding of human values that an 85th percentile human can come up with.
  • The human value function as expressed by GPT-4 is also immune to almost all in-practice, non-adversarial perturbations
  • We should then rationally update on the complexity of human values. It's probably not much more complex than GPT-4, and possibly significantly simpler. ie, the fact that we have a pretty good description of human values well short of superintelligent AI means we should not expect a perfect description of human values to be very complex either.
  • This is a different claim from saying that Superintelligent AIs will understand human values; which everybody agrees with. Human values isn't any more mysterious from the perspective of physics than any other emergent property like fluid dynamics or the formation of cities.
  • However, if AIs needed to be superintelligent (eg at the level of approximating physics simulations of Earth) before they grasp human values, that'd be too late, as they can/will destroy the world before their human creators can task a training process (or other ways of making AGI) towards {this thing that we mean when we say human values}.
  • But instead, the world we live in is one where we can point future AGIs towards the outputs of GPT-N when asked questions about morality as the thing to optimize for.
    • Which, again, isn't to say the alignment problem is solved, we might still all die because future AGIs could just be like "lol nope" to the outputs of GPT-N, or try to hack it to produce adversarial results, or something. But at least one subset of the problem is either solved or a non-issue, depending on your POV.
  • Given all this, MIRI appeared to empirically be wrong when they previously talked about the complexity and fragility of human values. Human values now seem noticeably less complex than many possibilities, and empirically we already have a pretty good representation of human values in silica.

Is my summary reasonably correct?

Comment by Linch on Linkpost: They Studied Dishonesty. Was Their Work a Lie? · 2023-10-03T17:57:26.813Z · LW · GW

My current (maybe boring) view is that any academic field where the primary mode of inquiry is applied statistics (much of the social sciences and medicine) is suss. The fields where the primary tool is mathematics (pure mathematics, theoretical CS, game theory, theoretical physics) still seem safe, and the fields where the primary tool is computers (distributed systems, computational modeling in various fields) are reasonably safe. ML is somewhere in between computers and statistics.

Fields where the primary tool is just looking around and counting (demography, taxonomy, astronomy(?)) are probably safe too? I'm confused about how to orient towards the humanities. 

Comment by Linch on Linkpost: They Studied Dishonesty. Was Their Work a Lie? · 2023-10-02T20:30:04.899Z · LW · GW

Yes, sorry. Will edit.

Comment by Linch on Adventist Health Study-2 supports pescetarianism more than veganism · 2023-10-01T22:36:06.904Z · LW · GW

What you originally said was "say it's not at all obvious that a vegan diet has health tradeoffs ex-ante". I think what you meant here was "it's not clear a vegan diet is net negative." A vegan diet leading to lower energy levels but longer lifespan is the definition of a trade-off. 

This might be semantics, but when you said "Change my mind: Veganism entails trade-offs, and health is one of the axes" I (until now) interpreted the claim as vegans needing to trade off health (writ large) against other desirable properties (taste, cost, convenience, etc), not a tradeoff within different components of health.

I don't have a sense of how common my reading was, however, and I don't want to put words in Natalia's mouth.

Comment by Linch on There should be more AI safety orgs · 2023-09-26T04:32:12.630Z · LW · GW

crossposted from answering a question on the EA Forum.

(My own professional opinions, other LTFF fund managers etc might have other views) 

Hmm I want to split the funding landscape into the following groups:

  1. LTFF
  2. OP
  3. SFF
  4. Other EA/longtermist funders
  5. Earning-to-givers
  6. Non-EA institutional funders.
  7. Everybody else

LTFF

At LTFF our two biggest constraints are funding and strategic vision. Historically it was some combination of grantmaking capacity and good applications but I think that's much less true these days. Right now we have enough new donations to fund what we currently view as our best applications for some months, so our biggest priority is finding a new LTFF chair to help (among others) address our strategic vision bottlenecks.

Going forwards, I don't really want to speak for other fund managers (especially given that the future chair should feel extremely empowered to shepherd their own vision as they see fit). But I think we'll make a bid to try to fundraise a bunch more to help address the funding bottlenecks in x-safety. Still, even if we double our current fundraising numbers or so[1], my guess is that we're likely to prioritize funding more independent researchers etc below our current bar[2], as well as supporting our existing grantees, over funding most new organizations. 

(Note that in $ terms LTFF isn't a particularly large fraction of the longtermist or AI x-safety funding landscape, I'm only talking about it most because it's the group I'm the most familiar with).

Open Phil

I'm not sure what the biggest constraints are at Open Phil. My two biggest guesses are grantmaking capacity and strategic vision.  As evidence for the former, my impression is that they only have one person doing grantmaking in technical AI Safety (Ajeya Cotra). But it's not obvious that grantmaking capacity is their true bottleneck, as a) I'm not sure they're trying very hard to hire, and b) people at OP who presumably could do a good job at AI safety grantmaking (eg Holden) have moved on to other projects. It's possible OP would prefer conserving their AIS funds for other reasons, eg waiting on better strategic vision or to have a sudden influx of spending right before the end of history.

SFF

I know less about SFF. My impression is that their problems are a combination of a) structural difficulties preventing them from hiring great grantmakers, and b) funder uncertainty.

Other EA/Longtermist funders

My impression is that other institutional funders in longtermism either don't really have the technical capacity or don't have the gumption to fund projects that OP isn't funding, especially in technical AI safety (where the tradeoffs are arguably more subtle and technical than in eg climate change or preventing nuclear proliferation). So they do a combination of saving money, taking cues from OP, and funding "obviously safe" projects.

Exceptions include new groups like Lightspeed (which I think is more likely than not to be a one-off thing), and Manifund (which has a regranters model).

Earning-to-givers

I don't have a good sense of how much latent money there is in the hands of earning-to-givers who are at least in theory willing to give a bunch to x-safety projects if there's a sufficiently large need for funding. My current guess is that it's fairly substantial. I think there are roughly three reasonable routes for earning-to-givers who are interested in donating:

  1. pooling the money in a (semi-)centralized source
  2. choosing for themselves where to give to
  3. saving the money for better projects later.

If they go with (1), LTFF is probably one of the most obvious choices. But LTFF does have a number of dysfunctions, so I wouldn't be surprised if either Manifund or some newer group ends up being the Schelling donation source instead.

Non-EA institutional funders

I think as AI Safety becomes mainstream, getting funding from government and non-EA philanthropic foundations becomes an increasingly viable option for AI Safety organizations. Note that direct work AI Safety organizations have a comparative advantage in seeking such funds. In comparison, it's much harder for both individuals and grantmakers like LTFF to seek institutional funding.[3]

I know FAR has attempted some of this already.

Everybody else

As worries about AI risk becomes increasingly mainstream, we might see people at all levels of wealth become more excited to donate to promising AI safety organizations and individuals. It's harder to predict what either non-Moskovitz billionaires or members of the general public will want to give to in the coming years, but plausibly the plurality of future funding for AI Safety will come from individuals who aren't culturally EA or longtermist or whatever.

Comment by Linch on The Talk: a brief explanation of sexual dimorphism · 2023-09-21T23:45:25.645Z · LW · GW

In others, being a worker vs. queen is something of a choice, and if circumstances change a worker may start reproducing. There isn't a sharp transition between cooperative breeding and eusociality. 

Yep. Once the old naked mole-rat queen dies, the remaining female naked mole-rats have a dominance contest until one girl emerges victorious and becomes the new queen.

Comment by Linch on Protest against Meta's irreversible proliferation (Sept 29, San Francisco) · 2023-09-20T19:42:06.255Z · LW · GW

Thanks, that's very helpful context. In principle, I wouldn't put too much stock in the specific numbers of a single poll, since those results depend too much on specific wording etc. But the trend in this poll is consistent enough over all questions that I'd be surprised if the questions could be massaged to get the opposite results, let alone ones 10x in favor of the accelerationist side.

I believe this has been replicated consistently across many polls. For the results to change, reality (in the sense of popular opinion) likely has to change, rather than polling techniques.

On the other hand, popular opinion changing isn't that unlikely, as it's not exactly something that either voters or the elites have thought much about, and (fortunately) this has not yet hewn along partisan lines.

Comment by Linch on LTFF and EAIF are unusually funding-constrained right now · 2023-09-13T23:12:30.586Z · LW · GW

UPDATE 2023/09/13: 

Including only money that has already landed in our bank account and extremely credible donor promises of funding, LTFF has raised ~1.1M and EAIF has raised ~500K. After Open Phil matching, this means LTFF now has ~3.3M in additional funding and EAIF has ~1.5M in additional funding.

We are also aware that other large donors, including both individuals and non-OP institutional donors, are considering donating to us. In addition, while some recurring donors have likely moved up their donations to us because of our recent unusually urgent needs, it is likely that we will still accumulate some recurring donations in the coming months as well. Thus, I think at least some of the less-certain sources of funding will come through. However, I decided to conservatively not include them in the estimate above.

From my (Linch's) perspective, this means both LTFF and EAIF are no longer very funding-constrained for the time period we wanted to raise money for (the next ~6 months); however, both funds are still funding-constrained and can productively make good grants with additional funding.

To be more precise, we estimated a good target spend rate for LTFF as 1M/month, and a good target spend rate for EAIF as ~800k/month. The current funds will allow LTFF to spend ~550k/month and EAIF to spend ~250k/month, or roughly a gap of 450k/month and 550k/month, respectively. More funding is definitely helpful here, as more money will allow both funds to productively make good grants[1].

Open Phil's matching is up to 3.5M from OP (or 1.75M from you) for each fund. This means LTFF would need ~650k more before maxing out on OP matching, and EAIF would need ~1.25M more. Given my rough estimate of funding needs above (~6.2M/6 months for LTFF and ~5M/6 months for EAIF), LTFF would ideally like to receive ~1M above the OP matching.
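
For concreteness, here is a rough restatement of the arithmetic above as a back-of-the-envelope sketch (my own, not an official calculation; the 2:1 match ratio is inferred from the ~1.1M → ~3.3M and ~500K → ~1.5M figures, and all numbers are approximate):

```python
# Back-of-the-envelope restatement of the figures above, in millions of USD.
# Assumes a 2:1 OP match (inferred from ~1.1M -> ~3.3M and ~0.5M -> ~1.5M).
MONTHS = 6
MATCH_RATIO = 2.0          # OP adds $2 per $1 donated, up to 3.5M from OP
DONOR_SIDE_CAP = 1.75      # donations beyond this no longer get matched

for fund, raised, target_per_month in [("LTFF", 1.1, 1.0), ("EAIF", 0.5, 0.8)]:
    total_now = raised * (1 + MATCH_RATIO)           # e.g. ~3.3 for LTFF
    spend_rate_now = total_now / MONTHS              # e.g. ~0.55/month for LTFF
    monthly_gap = target_per_month - spend_rate_now  # e.g. ~0.45/month for LTFF
    room_under_match = DONOR_SIDE_CAP - raised       # e.g. ~0.65 for LTFF
    print(f"{fund}: total={total_now:.2f}M, spend rate={spend_rate_now:.2f}M/mo, "
          f"gap={monthly_gap:.2f}M/mo, room under OP match={room_under_match:.2f}M")
```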

I appreciate donors' generosity and commitment to improving the world. I hope the money will be used wisely and cost-effectively.

I plan to write a high-level update and reflections post[2] on the EA Forum (crossposted to LessWrong) after LTFF either a) reaches our estimated funding target or b) decides to deprioritize fundraising, whichever comes earlier.

Comment by Linch on Sharing Information About Nonlinear · 2023-09-12T17:44:29.104Z · LW · GW

possibly enough to risk a suit if Lincoln wanted to

Would be pretty tough to do given the legal dubiousness re: enforceability of non-disparagement agreements in the US (note: the judgement applies retroactively)

Comment by Linch on Sharing Information About Nonlinear · 2023-09-12T04:36:11.926Z · LW · GW

Thanks, I haven't played ONUW much; Avalon is the main game I play, along with more classic Mafia, Werewolf, Secret Hitler, and Quest.

Comment by Linch on Sharing Information About Nonlinear · 2023-09-11T19:10:18.273Z · LW · GW

Eg I think in advanced Among Us lobbies it's an important skill to subtly push an unproductive thread of conversation without making it obvious that you were the one who distracted everybody.

I'm not much of an avid Among Us player, but I suspect this only works in Among Us because of the (much) heavier-than-usual time pressures. In the other social deception games I'm aware of, the structural incentives continue to point in the other direction, so the main reason for bad guys to make spurious accusations is for anti-inductive reasons (if everybody knows that spurious accusations are a vanilla tactic, then obviously a spurious accusation becomes a good "bad guy" play to fake being good).

Comment by Linch on Sharing Information About Nonlinear · 2023-09-11T00:24:04.801Z · LW · GW

I don't understand this - it reads to me like you're saying a similar thing is true for the game and real life? But that goes against your position.

Sorry that was awkwardly worded. Here's a simplified rephrase:

In games, bad guys want to act and look not the same. In real life, if you often agree with known bad folks, most think you're not good.

Put in a different way, because of the structure of games like Avalon (it's ~impossible for all the bad guys to not be found out, minions know who each other are, all minions just want their "team" to win so having sacrificial lambs makes sense, etc), there are often equilibria where, even in slightly advanced play, minions (bad guys) want to be seen as disagreeing with other minions earlier on. So if you find someone disagreeing with minions a lot (in voting history etc), especially in non-decision-relevant ways, this is not much evidence one way or another (and in some cases might even be negative evidence on your goodness). Similarly, if Mildred constantly speaks highly of you, and we later realize that Mildred is a minion, this shouldn't be a negative update on you (and in some cases is a positive one), because minions often have structural reasons to praise/bribe good guys. At higher levels, people obviously become aware of this dynamic, so there's some anti-inductive play going on, but still. Frequently the structural incentives prevail.

In real life there's a bit of this dynamic but the level one model ("birds of a feather flock together") is more accurate, more of the time. 

Comment by Linch on Sharing Information About Nonlinear · 2023-09-09T18:38:37.961Z · LW · GW

Errol is a Logical Decision Theorist. Whenever he's playing a game of Werewolf, he's trying not just to win that game, but to maximize his probability of winning across all versions of the game, assuming he's predictable to other players. Errol firmly commits to reporting whether he's a werewolf whenever he gets handed that role, reasoning that behind the veil of ignorance, he's much more likely to land as villager than as werewolf, and that the villager team always having a known villager greatly increases his overall odds of winning. Errol follows through with his commitments. Errol is not very fun to play with and has since been banned from his gaming group.