(The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser

post by habryka (habryka4) · 2024-11-30T02:55:16.077Z · LW · GW · 153 comments

Contents

  LessWrong
    Does LessWrong influence important decisions?
    Does LessWrong make its readers/writers more sane?
    LessWrong and intellectual progress 
      Public Accessibility of the Field of AI Alignment
      LessWrong's influence on research
  Lighthaven
    The economics of Lighthaven
    How does Lighthaven improve the world?
      Nooks nooks nooks nooks nooks
      Lighthaven "permanent" residents and the "river and shore" metaphor
      Does Lighthaven improve the events we run here? 
    The relationship between Lighthaven and LessWrong
  Lightcone and the funding ecosystem
  Our work on funding infrastructure
  If it's worth doing it's worth doing with made-up statistics
    The OP GCR capacity building team survey
    Lightcone/LessWrong cannot be funded by just running ads
    Comparing LessWrong to other websites and apps
    Lighthaven event surplus
  The future of (the) Lightcone
    Lightcone culture and principles
    Things I wish I had time and funding for
      Building an LLM-based editor.
      AI prompts and tutors as a content type on LW
      Building something like an FHI of the West
      Building funding infrastructure for AI x-risk reduction
      Something something policy
      A better review system for AI Alignment research
  What do you get from donating to Lightcone?
  Goals for the fundraiser
  Logistics of donating to Lightcone
  Tying everything together

TLDR: LessWrong + Lighthaven need about $3M for the next 12 months. Donate here, or send me an email, DM [LW · GW], or Signal message (+1 510 944 3235), or leave a comment, if you want to support what we do. We are a registered 501(c)(3), have big plans for the next year, and due to a shifting funding landscape need support from a broader community more than in any previous year. [1]

I've been running LessWrong/Lightcone Infrastructure for the last 7 years. During that time we have grown into the primary infrastructure provider for the rationality and AI safety communities. "Infrastructure" is a big fuzzy word, but in our case, it concretely means: 

In general, Lightcone considers itself responsible for the end-to-end effectiveness of the extended rationality and AI safety community. If there is some kind of coordination failure, or part of the engine of impact that is missing, I aim for Lightcone to be an organization that can jump in and fix that, whatever it is. 

Doing that requires a non-trivial amount of financial capital. For the next 12 months, we expect to spend around $3M, and in subsequent years around $2M (though we have lots of opportunities to scale up if we can get more funding for it). We currently have around $200k in the bank.[3]

Lightcone is, as far as I can tell, considered cost-effective by the large majority of people who have thought seriously about how to reduce existential risk and have considered Lightcone as a donation target, including all of our historical funders. Those funders can largely no longer fund us, or expect to fund us less, for reasons mostly orthogonal to cost-effectiveness (see the section below on "Lightcone and the funding ecosystem" [LW · GW] for details on why). Additionally, many individuals benefit from our work, and I think it makes sense for those people to support the institutions that provide them value. 

This, I think, creates a uniquely strong case for people reading this to donate to us.[4] 

I personally think there exists no organization that has been more cost-effective at reducing AI existential risk in the last 5 years, and I think that's likely to continue to be the case in the coming 5 years. Our actions seem to me responsible for a substantial fraction of the positive effects of the field of AI safety, and have also substantially alleviated the negative effects of our extended social cluster (which I think are unfortunately in-expectation of comparable magnitude to the positive effects, with unclear overall sign).

Of course, claiming to be the single most cost-effective intervention out there is a big claim, and one I definitely cannot make with great confidence. But the overall balance of evidence seems to me to lean this way, and I hope in this post to show you enough data and arguments that you feel comfortable coming to your own assessment. 

This post is a marathon, so strap in and get comfortable. Feel free to skip to any section of your choice (the ToC on the left, or in the hamburger menu is your friend). Also, ask me questions in the comments (or in DMs), even if you didn't read the whole post. 

Now let's zoom out a bit and look at some of the big picture trends and data of the projects we've been working on in the last few years and see what they tell us about Lightcone's impact: 

LessWrong

Here are our site metrics from 2017 to 2024:

On almost all metrics, we've grown the activity levels of LessWrong by around 4-5x since 2017 (and ~2x since the peak of LW 1.0). In more concrete terms, this has meant something like the following:

You will also quickly notice that many metrics peaked in 2023, not 2024. This is largely downstream of the launch of ChatGPT, Eliezer's "List of Lethalities [LW · GW]" and Eliezer's TIME article [LW · GW], which caused a pretty huge spike in traffic and activity on the site. That spike is now over and we will see where things settle in terms of growth and activity. The collapse of FTX also caused a reduction in traffic and activity for practically everything Effective Altruism-adjacent, and I expect we are also experiencing some of that (though much less than more centrally EA-associated platforms like 80,000 Hours and the EA Forum, as far as I can tell).

While I think these kinds of traffic statistics are a very useful "sign of life" and sanity check that what we are doing is having any effect at all in the grand scale of things, I don't think they are remotely sufficient to establish that we are having a large positive impact.

One way to get closer to an answer to that question is to decompose it into two questions: "Do the writings and ideas from LessWrong influence important decision-makers?" and "Does LessWrong make its readers & writers more sane?"

I expect the impact of LessWrong to end up extremely heavy-tailed, with a large fraction of the impact coming from a very small number of crucial decision-makers having learned something of great importance on a highly leveraged issue (e.g. someone like Geoffrey Hinton becoming concerned about AI existential risk, or an essay on LW opening the Overton window at AI capability companies to include AI killing everyone, or someone working on an AI control strategy learning about some crucial component of how AIs think that makes things work better).

Does LessWrong influence important decisions?

It's tricky to establish whether reading LessWrong causes people to become more sane and better informed on key issues. It is however relatively easy to judge whether LessWrong is being read by some of the most important decision-makers of the 21st century, or whether it is indirectly causing content to be written that is being read by the most important decision-makers of the 21st century. 

I think the extent of our memetic reach was unclear for a few years, but there is now less uncertainty. Among the leadership of the biggest AI capability companies (OpenAI, Anthropic, Meta, Deepmind, xAI), at least 4/5 have clearly been heavily influenced by ideas from LessWrong.[6] While the effect outside of Silicon Valley tech and AI is less clear, things look promising to me there too: 

Dominic Cummings talking about the impact of a LW post on UK government COVID response

Matt Clifford, CEO of Entrepreneur First and Chair of the UK’s ARIA recently said on a podcast (emphasis mine):

Jordan Schneider: What was most surprising to you in your interactions during the build-up to the summit, as well as over the course of the week?

Matt Clifford: When we were in China, we tried to reflect in the invite list a range of voices, albeit with some obvious limitations. This included government, but also companies and academics.

But one thing I was really struck by was that the taxonomy of risks people wanted to talk about was extremely similar to the taxonomy of risks that you would see in a LessWrong post or an EA Forum post.

I don't know enough about the history of that discourse to know how much of that is causal. It's interesting that when we went to the Beijing Academy of AI and got their presentation on how they think about AI risk safety governance, they were talking about autonomous replication and augmentation. They were talking about CBRN and all the same sort of terms. It strikes me there has been quite a lot of track II dialogue on AI safety, both formal and informal, and one of the surprises was that we were actually starting with a very similar framework for talking about these things.

Patrick Collison talks on the Dwarkesh podcast about Gwern’s writing on LW and his website: 

How are you thinking about AI these days?

Everyone has to be highly perplexed, in the sense that the verdict that one might have given at the beginning of 2023, 2021, back, say, the last eight years — we're recording this pretty close to the beginning of 2024 — would have looked pretty different.

Maybe Gwern might have scored the best from 2019 or something onwards, but broadly speaking, it's been pretty difficult to forecast.

Lina Khan (head of the FTC) answering a question about her “p(doom)”, a concept that originated in LessWrong comments.

Does LessWrong make its readers/writers more sane?

I think this is a harder question to answer. Online forums and online discussion tend to have a pretty high-variance effect on people's sanity and quality of decision-making. Many people's decision-making seems to have gotten substantially worse after they became heavily involved with Twitter, and many subreddits seem to me to have similar well-documented cases of smart people becoming markedly less sane.

We have tried a lot of things to make LessWrong less prone to these sorts of effects, though it is hard to tell how much we have succeeded. We definitely have our own share of frustrating flame wars and tribal dynamics that make reasoning hard.

One proxy that seems useful to look at is something like, "did the things that LessWrong paid attention to before everyone else turn out to be important?". This isn't an amazing proxy for sanity, but it does tell you whether you are sharing valuable information. In market terms, it tells you how much alpha there is in reading LessWrong. 

I think on information alpha terms, LessWrong has been knocking it out of the park over the past few years. Its very early interest in AI, early interest in deep learning, early interest in crypto, early understanding of the replication crisis [? · GW], early interest in the COVID pandemic and early interest in prediction markets all have paid off handsomely, and indeed many LessWrong readers have gotten rich off investing in the beliefs they learned from the site (buying crypto and Nvidia early, and going long volatility before the pandemic, sure gives you high returns).[7]

On a more inside-view-y dimension, I have enormously benefitted from my engagement with LessWrong, and many of the people who seem to me to be doing the best work on reducing existential risk from AI and improving societal decision-making seem to report the same. I use many cognitive tools I learned on LessWrong on a daily level, and rarely regret reading things written on the site.

Some quotes and endorsements to this effect: 

LessWrong and intellectual progress 

While I think ultimately things on LessWrong have to bottom out in people making better decisions of some kind, I often find it useful to look at a proxy variable of something like "intellectual progress". When I think of intellectual progress, I mostly think about either discovering independently verifiable short descriptions of phenomena that previously lacked good explanations, or distilling ideas in ways that are clearer and more approachable than any previous explanation.

LessWrong hosts discussion about a very wide variety of interesting subjects (genetic engineering [LW · GW], obesity [LW · GW], US shipping law [LW · GW], Algorithmic Bayesian Epistemology [LW · GW], anti-aging [LW · GW], homemade vaccines [LW · GW], game theory [LW · GW], and of course the development of the art of rationality [? · GW]), but the single biggest topic on LessWrong is artificial intelligence and its effects on humanity's long term future. LessWrong is the central discussion and publication platform for a large ecosystem of people who discover, read, and write research about the problems facing us in the development of AI.

I think the ideas developed here push the frontier of human civilization's understanding of AI, how it will work, and how to navigate its development. 

This next section primarily consists of the latter sort of evidence, which is the only one I can really give you in a short amount of space.

Public Accessibility of the Field of AI Alignment

In 2017, trying to understand and contribute to the nascent field of AI alignment using the publicly available written materials was basically not possible (or took 200+ hours). Our goal with the AI Alignment Forum was to move the field of AI alignment from depending primarily on people's direct personal conversations with a few core researchers (at the time centered around MIRI and Paul Christiano) to being a field whose core ideas could be learned by engaging with well-written explanations and discussions online.

I think we largely achieved this basic goal. By 2020, many people had a viable route into the field via spending 20-30 hours engaging with the best LessWrong content. DeepMind's Rohin Shah agreed, writing in 2020 that “the AI Alignment Forum improved our pedagogic materials from 0.1 to 3 out of 10.”

To show this, below I've collected some key posts, along with testimonials about them from researchers and LW contributors.

Paul Christiano's Research Agenda FAQ [LW · GW] was published in 2018 by Alex Zhu (independent).

Evan Hubinger (Anthropic): “Reading Alex Zhu's Paul agenda FAQ was the first time I felt like I understood Paul's agenda in its entirety as opposed to only understanding individual bits and pieces. I think this FAQ was a major contributing factor in me eventually coming to work on Paul's agenda.”

Eli Tyre: “I think this was one of the big, public, steps in clarifying what Paul is talking about.”

An overview of 11 proposals for building safe advanced AI [LW · GW] by Evan Hubinger (Anthropic) in May 2020

Daniel Kokotajlo (AI Futures): “This post is the best overview of the field so far that I know of… Since it was written, this post has been my go-to reference both for getting other people up to speed on what the current AI alignment strategies look like (even though this post isn't exhaustive). Also, I've referred back to it myself several times. I learned a lot from it.”

Niplav: “I second Daniel's comment and review, remark that this is an exquisite example of distillation, and state that I believe this might be one of the most important texts of the last decade.”

It Looks Like You're Trying To Take Over The World [LW · GW] by Gwern (Gwern.net) in March 2022

Garrett Baker (Independent): "Clearly a very influential post on a possible path to doom from someone who knows their stuff about deep learning! There are clear criticisms, but it is also one of the best of its era. It was also useful for even just getting a handle on how to think about our path to AGI."[8]

Counterarguments to the basic AI x-risk case [LW · GW] by Katja Grace in October 2022

Vika Krakovna (DeepMind safety researcher, cofounder of the Future of Life Institute): “I think this is still one of the most comprehensive and clear resources on counterpoints to x-risk arguments. I have referred to this post and pointed people to it a number of times. The most useful parts of the post for me were the outline of the basic x-risk case and section A on counterarguments to goal-directedness (this was particularly helpful for my thinking about threat models and understanding agency).”

If you want to read more examples of this sort of thing, click to expand the collapsible section below.

10 more LW posts with testimonials

Embedded Agency [? · GW] is a mathematical cartoon series published in 2018 by MIRI researchers Scott Garrabrant and Abram Demski.

Rohin Shah (DeepMind): “I actually have some understanding of what MIRI's Agent Foundations work is about.”

John Wentworth (Independent): “This post (and the rest of the sequence) was the first time I had ever read something about AI alignment and thought that it was actually asking the right questions.”

David Manheim (FHI): “This post has significantly changed my mental model of how to understand key challenges in AI safety… the terms and concepts in this series of posts have become a key part of my basic intellectual toolkit.”

Risks from Learned Optimization [? · GW] is the canonical explanation of the concept of inner optimizers, by Hubinger et al in 2019.

Daniel Filan (Center for Human-Compatible AI): “I am relatively convinced that mesa-optimization… is a problem for AI alignment, and I think the arguments in the paper are persuasive enough to be concerning… Overall, I see the paper as sketching out a research paradigm that I hope to see fleshed out.”

Rohin Shah (DeepMind): “...it brought a lot more prominence to the inner alignment problem by making an argument for it in a lot more detail than had been done before… the conversation is happening at all is a vast improvement over the previous situation of relative (public) silence on the problem.”

Adam Shimi (Conjecture): “For me, this captures what makes this sequence and corresponding paper a classic in the AI Alignment literature: it keeps on giving, readthrough after readthrough.”

Inner Alignment: Explain like I'm 12 Edition by Rafael Harth (Independent) in August 2020

David Manheim (FHI): "This post is both a huge contribution, giving a simpler and shorter explanation of a critical topic, with a far clearer context, and has been useful to point people to as an alternative to the main sequence"

The Solomonoff Prior is Malign [LW · GW] by Mark Xu (Alignment Research Center) in October 2020

John Wentworth: “This post is an excellent distillation of a cluster of past work on maligness of Solomonoff Induction, which has become a foundational argument/model for inner agency and malign models more generally.”

Vanessa Kosoy (MIRI): “This post is a review of Paul Christiano's argument that the Solomonoff prior is malign, along with a discussion of several counterarguments and countercounterarguments. As such, I think it is a valuable resource for researchers who want to learn about the problem. I will not attempt to distill the contents: the post is already a distillation, and does a fairly good job of it.”

Fun with +12 OOMs of Compute [LW · GW] by Daniel Kokotajlo (of AI Futures) in March 2021

Zach Stein-Perlman (AI Lab Watch): “The ideas in this post greatly influence how I think about AI timelines, and I believe they comprise the current single best way to forecast timelines.”

nostalgebraist: “This post provides a valuable reframing of a common question in futurology: 'here's an effect I'm interested in -- what sorts of things could cause it?'”

Another (outer) alignment failure story [LW · GW] by Paul Christiano (US AISI) in April 2021

1a3orn: “There's a scarcity of stories about how things could go wrong with AI which are not centered on the "single advanced misaligned research project" scenario. This post (and the mentioned RAAP post by Critch) helps partially fill that gap.”

What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs) [LW · GW] by Andrew Critch (Center for Human-Compatible AI) in April 2021

Adam Shimi: “I have made every person I have ever mentored on alignment study this post. And I plan to continue doing so. Despite the fact that I'm unconvinced by most timeline and AI risk scenarios post. That's how good and important it is.”

Selection Theorems: A Program For Understanding Agents [LW · GW] by John Wentworth (Independent) in September 2021

Vika Krakovna (DeepMind safety researcher, cofounder of the Future of Life Institute): “I like this research agenda because it provides a rigorous framing for thinking about inductive biases for agency and gives detailed and actionable advice for making progress on this problem. I think this is one of the most useful research directions in alignment foundations since it is directly applicable to ML-based AI systems.”

MIRI announces new "Death With Dignity" strategy [LW · GW] by Eliezer Yudkowsky (MIRI) in April 2022

John Wentworth: "Based on occasional conversations with new people, I would not be surprised if a majority of people who got into alignment between April 2022 and April 2023 did so mainly because of this post. Most of them say something like "man, I did not realize how dire the situation looked" or "I thought the MIRI folks were on it or something"."

Let’s think about slowing down AI [LW · GW] by Katja Grace (AI Impacts) in December 2022.

Eli Tyre: “This was counter to the prevailing narrative at the time, and I think did some of the work of changing the narrative. It's of historical significance, if nothing else.”

Larks: “This post seems like it was quite influential.”

LessWrong's influence on research

I think one of the main things LessWrong gives writers and researchers is an intelligent and philosophically mature audience who want to read great posts. This pulls writing out of authors that they wouldn't produce if this audience weren't here. A majority of the high-quality alignment research on LessWrong is written solely for LessWrong, and not published elsewhere.

As an example, one of Paul Christiano’s most influential essays is What Failure Looks Like [AF · GW], and while Christiano does have his own AI alignment blog, this essay was only written on the AI Alignment Forum.

As further evidence on this point, here is a quote from Rob Bensinger (from the MIRI staff) in 2021:

“LW made me feel better about polishing and posting a bunch of useful dialogue-style writing that was previously private (e.g., the 'security mindset' dialogues) or on Arbital (e.g., the 'rocket alignment problem' dialogue).”

“LW has helped generally expand my sense of what I feel happy posting [on the internet]. LW has made a lot of discourse about AI safety more open, candid, and unpolished; and it's increased the amount of that discussion a great deal. So MIRI can more readily release stuff that's 'of a piece' with LW stuff, and not worry as much about having a big negative impact on the overall discourse.”

So I think that the vast majority of this work wouldn't have been published if not for the Forum, and would've been done to a lower quality had the Forum not existed. For example, with the 2018 FAQ above on Christiano's research: even though Alex Zhu may well have spent the same time understanding Paul Christiano’s worldview, Eliezer Yudkowsky would not have been able to get the benefit of reading Zhu’s write-up, and the broader research community would have seen neither Zhu’s understanding nor Yudkowsky’s response.

Lighthaven

Since mid-2021 the other big thread in our efforts has been building in-person infrastructure. After successfully reviving LessWrong, we noticed that in more and more of our user interviews, "finding collaborators" and "getting high-quality, high-bandwidth feedback" were highlighted as substantially more important bottlenecks to intellectual progress than the kinds of things we could help with by adding marginal features to our website. Having just been through a year of pandemic lockdown with very little of that going on, we saw an opportunity to leverage the end of the pandemic into building substantially better in-person infrastructure than had existed before for people working on the things we care about.

After a year or two of exploring by running a downtown Berkeley office space, we purchased a $16.5M hotel property, renovated it for approximately $6M and opened it up to events, fellowships, research collaborators and occasional open bookings under the name Lighthaven.

An aerial picture of Lighthaven

I am intensely proud of what we have built with Lighthaven and think of it as a great validation of Lightcone's organizational principles. A key part of Lightcone's philosophy is that I believe most cognitive skills are general in nature. IMO the key requirement for building great things is not to hire the best people for the specific job you are trying to get done, but to cultivate general cognitive skills and hire the best generalists you can find, who can then bring their general intelligence to bear on whatever problem you decide to focus on. Seeing the same people who built LessWrong, the world's best discussion platform, pivot to managing a year-long $6M construction project, and seeing it succeed in quality beyond anything else I've seen in the space, fills me with pride about the flexibility and robustness of our ability to handle whatever challenges stand between us and our goals (which I expect will be myriad and similarly varied).

Others seem to think the same: 

 

 

And a quick collage of events we've hosted here (not comprehensive):

At conferences where we managed to sneak in a question about the venue quality, we've received a median rating of 10/10, with an average of 9.4. All annual conferences organized here wanted to come back the following year, and as far as I know we've never had a client who was not hoping to run more events at Lighthaven in the future (in Lighthaven's admittedly short life so far).

Lighthaven is a very capital-intensive project, and in contrast to our ambitions with LessWrong, is a project where we expect to recoup a substantial chunk of our costs by people just paying us. So a first lens to analyze Lighthaven through is to look at how we are doing in economic terms.

The economics of Lighthaven

We started Lighthaven when funding for work on rationality community building, existential risk, and AI safety was substantially more available. While FTX never gave us money directly for Lighthaven, they encouraged us to expand aggressively, and so I never intended it to be in a position to break even on purely financial grounds.

Luckily, despite hospitality and conferencing not generally being known as an industry with amazing margins, we made it work. I originally projected an annual shortfall of $1M per year, which we would need to make up with philanthropic donations. However, demand has been substantially higher than I planned for, and correspondingly our revenue has been much higher than I was projecting.

                      Projections for 2024    Actuals for 2024
Upkeep                $800,000                $1,600,000
Interest payment      $1,000,000              $1,000,000
Revenue               ($1,000,000)            ($1,800,000)
Totals                -$800,000               -$800,000

                      Last year's projections for 2025    New projections for 2025
Upkeep                $800,000                            $1,600,000
Interest              $1,000,000                          $1,000,000
Revenue               ($1,200,000)                        ($2,600,000)
Totals                -$600,000                           $0

Last year, while fundraising, I projected that we would spend about $800k on the upkeep, utilities and property taxes associated with Lighthaven in 2024 and 2025, as well as $1M on our annual interest payment. I expected we would make about $1M in revenue, resulting in a net loss of ~$500k - $800k. 

Since demand was substantially higher, we instead spent ~$1.6M on improvements, upkeep, staffing and taxes, plus an additional $1M in interest payments, against around $1.8M in revenue, in a year in which the campus wasn't operational for a substantial fraction of the time. Overall this produced revenue much above my expectations.

My best projections for 2025 are that we will spend the same amount[9], but this time make ~$2.6M in revenue—breaking even—and if we project that growth out a bit more, we will be in a position to subsidize and fund other Lightcone activities in subsequent years. At this level of expenditure we are also making substantial ongoing capital investments into the venue, making more of our space usable and adding new features every month[10].
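For readers who like to check the arithmetic, here is a minimal sketch of the totals in the tables above (all figures are taken directly from those tables; nothing new is assumed):

```python
# A minimal sanity check of the two tables above (figures copied from them).
# Revenue appears in parentheses in the tables because it offsets costs.
def net(upkeep: float, interest: float, revenue: float) -> float:
    """Negative result = shortfall to cover with donations; zero = break-even."""
    return revenue - upkeep - interest

print(net(1_600_000, 1_000_000, 1_800_000))  # 2024 actuals: -800,000
print(net(1_600_000, 1_000_000, 2_600_000))  # 2025 projection: 0, i.e. break-even
```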

Here is a graph of our 2024 + 2025 monthly income with conservative projections:

How does Lighthaven improve the world?

The basic plan for Lighthaven to make the world better is roughly: 

  1. Improve the quality of events and fellowships that are hosted here, or cause additional high-quality events to happen (or save them time and money by being cheaper and easier to work with than equally good alternatives).
  2. From the people who attend fellowships and events here, we pick the best and grow a high-quality community of more permanent residents, researchers, and regulars at events.

I think the impact of in-person collaborative spaces on culture and effective information exchange can be very large. The exact models of how Lightcone hopes to do that are hard to communicate and are something I could write many posts about, but we can do a quick case study of how Lightcone differs from other event venues: 

Nooks nooks nooks nooks nooks

One of the central design principles of Lighthaven is that we try to facilitate small 2-6 person conversations in a relaxed environment, with relative privacy from each other, while making it as easy as possible to still find anyone you might be looking for. One of the central ways Lighthaven achieves that is by having a huge number of conversational nooks both on the inside and outside of the space. These nooks tend to max out at being comfortable for around 8 people, naturally causing conversations to break up into smaller chunks.

Conferences at Lighthaven therefore cause people to talk much more to each other than in standard conference spaces, in which the primary context for conversation might be the hallways, usually forcing people to stand, and often ballooning into large conversations of 20+ people, as the hallways provide no natural maximum for conversation size.

More broadly, my design choices for Lighthaven have been heavily influenced by Christopher Alexander's writing on architecture and the design of communal spaces. If you are interested in how Lighthaven was designed, I recommend skimming through A Pattern Language and reading the sections that spark your interest (I do not recommend trying to read the book from front to back; it will get boring quickly).

Lighthaven "permanent" residents and the "river and shore" metaphor

In the long run, I want Lightcone to become a thriving campus with occupants at many different timescales: 

The goal is for each of these to naturally feed into the following ones, creating a mixture of new people and lasting relationships across the campus. Metaphorically the flow of new people forms a fast-moving and ever-changing "river", with the "shore" being the aggregated sediment of the people who stuck around as a result of that flow.

Since we are just getting started, we have been focusing on the first and second of these, with only a small handful of permanently supported people on our campus (at present John Wentworth [LW · GW], David Lorell, Adam Scholl, Aysja Johnson, Gene Smith [LW · GW] and Ben Korpan).

On the more permanent organizational side, I hope that the campus will eventually house an organization worthy of an informal title like "FHI of the West" [LW · GW], either directly run by Lightcone, or heavily supported by us, but I expect to grow such an organization slowly and incrementally, instead of in one big push (which I initially considered, and might still do in the future, but for now decided against).

Does Lighthaven improve the events we run here? 

I've run a lot of conferences and events over the years (I was in charge of the first EA Global conference, and led the team that made EA Global into a global annual conference series with thousands of attendees). I designed Lighthaven to really leverage the lessons I learned from doing that, and I am pretty confident I succeeded, based on my own experiences of running events here, and the many conversations I've had with event organizers here.

The data also seems to back this up (see also my later section [LW · GW] on estimating the value of Lighthaven's surplus based on what people have told us they would be willing to pay to run events here): 

Feedback from the Manifest 2 feedback form

I expect a number of people who have run events at Lighthaven will be in the comments and will be happy to answer questions about what it's been like.[11]

The relationship between Lighthaven and LessWrong

The most popular LessWrong posts, SSC posts, or books like HPMoR are usually people's first exposure to core rationality ideas and concerns about AI existential risk. LessWrong is also the place where many people who have spent years thinking about these topics write and share their ideas, which then attracts more people; in some sense this forms the central growth loop of the rationalist ecosystem. Lighthaven and the in-person programs it supports are one of the many components of what happens between someone reading LessWrong for the first time and someone becoming an active intellectual contributor to the site, which, when it happens, usually takes about 3-4 years of lots of in-person engagement, orienting, talking to friends, and getting a grip on these ideas.

This means that, in some sense, the impact of Lighthaven should in substantial part be measured by its effects on producing better research and writing on LessWrong and other parts of public discourse.

Of course, the intellectual output of the extended rationality and AI safety communities is far from centralized on LessWrong, and much of the good being done does not route through writing blog posts or research papers. This makes the above a quite bad approximation of our total impact, but I would say that if I saw no positive effects of Lighthaven on what happens on LessWrong and the AI Alignment Forum, something would have gone quite wrong.

On this matter, I think it's quite early to tell whether Lighthaven is working. I currently feel optimistic that we are seeing a bunch of early signs of a rich intellectual community sprouting up around Lighthaven, but I think we won't know for another 2-3 years whether LessWrong and other places for public intellectual progress have gotten better as a result of our efforts here. 

Lightcone and the funding ecosystem

Having gone through some of our historical impact, and big projects, let's talk about funding. 

Despite what I, and basically all historical funders in the ecosystem, consider to be a quite strong track record, practically all of the mechanisms by which we have historically received funding are unable to fund us going forward, or can only give us substantially reduced funding.

Here is a breakdown of who we received funding from over the last few years:

You might notice the three big items in this graph, FTX Future Fund[12], Open Philanthropy, and the Survival and Flourishing Fund.

FTX Future Fund is no more. Indeed, we ended up returning around half of the funding we received from them[13] and spent another 15% of the amount they gave us on legal fees, and I spent most of my energy last year figuring out our legal defense and handling the difficulties of being sued by one of the most successful litigators of the 21st century, so that was not very helpful. And of course the Future Fund is even less likely to be helpful going forward.

Good Ventures will not accept future Open Philanthropy recommendations to fund us, and Open Phil generally seems to be avoiding funding anything that might have unacceptable reputational costs for Dustin Moskovitz. Importantly, Open Phil cannot make grants through Good Ventures to projects involved in almost any amount of "rationality community building", even if that work is only a fraction of the organization's efforts and even if there is still a strong case for funding on grounds unrelated to any rationality community building. The exact lines here seem somewhat confusing and unclear, and my sense is they are still being figured out, but Lightcone seems solidly out.

This means we aren't getting any Open Phil/Good Ventures money anymore, while as far as I know, most Open Phil staff working on AI safety and existential risk think LessWrong is very much worth funding, and our other efforts at least promising (and many Open Phil grantees report being substantially helped by our work).

This leaves the Survival and Flourishing Fund, who have continued to be a great funder to us. Two of our three biggest funders disappearing would already be enough to force us to seriously change how we go about funding our operations, but there are additional reasons why it's hard for us to rely on SFF funding:

  1. Historically on the order of 50% of SFF recommenders[14] are recused from recommending us money. SFF is quite strict about recusals, and we are friends with many of the people that tend to be recruited for this role. The way SFF is set up, this causes a substantial reduction in funding allocated to us (compared to the recommenders being fully drawn from the set of people who are not recused from recommending to us).
  2. Jaan and SFC[15] helped us fund the above-mentioned settlement with the FTX estate (providing $1.7M in funding). This was structured as a virtual "advance" against future potential donations, where Jaan expects to only donate 50% of future recommendations made to us via things like the SFF, until the other 50% add up to $1.29M[16] in "garnished" funding. This means for the foreseeable future, our funding from the SFF is cut in half.

Speaking extremely roughly, this means that compared to 2022, two thirds of our funders have completely dropped out of funding us, and another sixth is going to be used to pay for work that we had originally done under an FTX Future Fund grant, leaving us with one sixth of the funding, which is really not very much.

This all, importantly, is against a backdrop where none of the people or institutions that have historically funded us have updated against the cost-effectiveness of our operations. To the contrary, my sense is the people at Open Philanthropy, SFF and Future Fund have positively updated on the importance of our work, while mostly non-epistemic factors have caused the people involved to be unable to recommend funding to us.

This I think is a uniquely important argument for funding us. I think Lightcone is in the rare position of being considered funding-worthy by many of the key people that tend to try to pick up the most cost-effective interventions, while being de-facto unable to be funded by them.

I do want to express extreme gratitude to the individuals who helped us survive throughout 2023, when most of these changes in the funding landscape started happening and Lightcone transitioned from being an $8M/yr organization to a $3M/yr organization. In particular, I want to thank Vitalik Buterin and Jed McCaleb, who each contributed $1,000,000 in 2023, Scott Alexander, who graciously donated $100,000, Patrick LaVictoire, who donated $50,000, and many others who contributed substantial amounts.

Our work on funding infrastructure

Now that I've established some context on the funding ecosystem, I also want to go a bit into the work that Lightcone has done on funding around existential risk reduction, civilizational sanity and rationality development.

The third big branch of historical Lightcone efforts has been to build the S-Process, a funding allocation mechanism used by SFF, FLI and Lightspeed Grants.

Together with the SFF, we built an app and set of algorithms that allows for coordinating a large number of independent grant evaluators and funders much more efficiently than anything I've seen before, and it has successfully been used to distribute over $100M in donations over the last 5 years. Internally I feel confident that we substantially increased the cost-effectiveness of how that funding was allocated—my best guess is on the order of doubling it, but more confidently by at least 20-30%[17], which I think alone is a huge amount of good done.[18]
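To give a flavor of what marginal-value-based allocation looks like, here is a toy sketch in code. To be clear, this is not the actual S-process (which also handles multiple funders, recommender weightings, recusals, and negotiation between recommenders); it only illustrates the core idea of giving each increment of money to whichever organization has the highest judged marginal value at its current funding level.

```python
# Toy illustration of marginal-value-based allocation, NOT the actual S-process:
# the real mechanism also involves multiple funders, recommender weights,
# recusal handling, and negotiated value functions.
from typing import Callable, Dict

def allocate(budget: float,
             marginal_value: Dict[str, Callable[[float], float]],
             step: float = 10_000.0) -> Dict[str, float]:
    """Give each increment of money to the org whose next dollar is judged
    most valuable, given what it has already received."""
    allocation = {org: 0.0 for org in marginal_value}
    while budget >= step:
        org = max(allocation, key=lambda o: marginal_value[o](allocation[o]))
        if marginal_value[org](allocation[org]) <= 0:
            break  # no remaining positive-value funding opportunities
        allocation[org] += step
        budget -= step
    return allocation

# Example: two orgs with declining marginal-value curves (value per dollar).
curves = {
    "org_a": lambda x: 3.0 - x / 1_000_000,   # starts high, declines quickly
    "org_b": lambda x: 2.0 - x / 4_000_000,   # starts lower, declines slowly
}
print(allocate(3_000_000, curves))
```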

Earlier this year, we also ran our own funding round owned end-to-end under the banner of "Lightspeed Grants": 

Somewhat ironically, the biggest bottleneck to our work on funding infrastructure has been funding for ourselves. Working on infrastructure that funds ourselves seems rife with potential concerns about corruption and bad incentives, and so I have not felt comfortable applying for funding from a program like Lightspeed Grants ourselves. Our non-SFF funders historically were also less enthusiastic about us working on funding infrastructure for the broader ecosystem than about our other projects.

This means that in many ways, working on funding infrastructure reduces the amount of funding we receive, by reducing the pots of money that could potentially go to us. As another instance of this, I have been spending around 10%-20% of my time over the past 5 years working as a fund manager on the Long Term Future Fund. As a result, Lightcone has never applied to the LTFF, or the EA Infrastructure Fund, as my involvement with EA Funds would pose too tricky of a COI in evaluating our application. But I am confident that both the LTFF and the EAIF would evaluate an application by Lightcone quite favorably, if we had never been involved in it. 

(The LTFF and the EAIF are therefore two more examples of funders that usually pick up the high cost-effectiveness fruit, but for independent reasons are unable to give to Lightcone Infrastructure, leaving us underfunded relative to our perceived cost-effectiveness.)

If it's worth doing it's worth doing with made-up statistics

Thus is it written: “It’s easy to lie with statistics, but it’s easier to lie without them.”

Ok, so I've waffled about with a bunch of high-level gobbledigosh, but as spreadsheet altruists the only arguments we are legally allowed to act on must involve the multiplication of at least 3 quantities and at least two google spreadsheets.

So here is the section where I make some terrible quantitative estimates which will fail to model 95% of the complexity of the consequences of any of our actions, but which I have found useful in thinking about our impact, and which you will maybe find useful too, and which you can use to defend your innocence when the local cost-effectiveness police demands your receipts.

The OP GCR capacity building team survey

Open Philanthropy has run two surveys in the last few years in which they asked people they thought were now doing good work on OP priority areas like AI safety what interventions, organizations and individuals were particularly important for people getting involved, or helped people to be more productive and effective. 

Using that survey, and weighting respondents by how impactful Open Phil thought their work was going to be, they arrived at cost-effectiveness estimates for various organizations (to be clear, this is only one of many inputs into OP's grantmaking).

In their first 2020 survey, here is the table they produced:[19]

Org               $/net weighted impact points (approx; lower is better)
SPARC             $9
LessWrong 2.0     $46
80,000 Hours      $88
CEA + EA Forum    $223
CFAR              $273

As you can see, LessWrong 2.0's estimated cost-effectiveness was second only to SPARC (which is a mostly volunteer-driven program, and this estimate does not take into account the opportunity cost of that labor).

In their more recent 2023 survey, Lightcone's work performed similarly well. While the data they shared didn't include any specific cost-effectiveness estimates, they did include absolute estimates on the number of times that various interventions showed up in their data: 

These are the results from the section where we asked about a ton of items one by one and by name, then asked for the respondent’s top 4 out of those. I’ve included all items that were listed more than 5 times.

These are rounded to the nearest multiple of 5 to avoid false precision.

80,000 Hours               125
University groups          90
EAGs/EAGxes                70
Open Philanthropy          60
Eliezer's writing          45
LessWrong (non-Eliezer)    40
[...]
Lightcone (non-LW)         15

To get some extremely rough cost-effectiveness numbers out of this, we can divide the numbers here by the budget for the associated organizations, though to be clear, this is definitely an abuse of numbers.

Starting from the top, during the time the survey covered (2020 - early 2023) the annual budget of 80,000 Hours averaged ~$6M. Lightcone's spending (excluding Lighthaven construction, which can't have been relevant by then) averaged around $2.3M. University groups seem to have been funded at around $5M/yr[20], and my best guess is that EAG events cost around $6M a year during that time [EA · GW]. I am going to skip Open Philanthropy because that seems like an artifact of the survey, and Eliezer, because I don't know how to estimate a reasonable number for him.

This produces this table (which I will again reiterate is a weird thing to do): 

Project                 Mentions    Mentions / $M
80,000 Hours            125         6.4
University groups       90          5.
EAGs/EAGxes             70          3.6
Lightcone (incl. LW)    40 + 15     6.8

As you can see, my totally objective table says that we are the most cost-effective intervention that you can fund out there (to be clear, I think the central takeaway here is more "by this very narrow methodology, Lightcone is competitive with the best interventions"; the case for it being the very best is kind of unstable).
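To make the methodology concrete, here is a sketch of the division behind the table. The per-year spending figures are the rough estimates from the text; the length of the survey window (2020 through early 2023, which I've approximated as 3.25 years) is my own assumption, so the results only roughly reproduce the ratios above.

```python
# Illustrative reconstruction of "mentions per $M" from the figures above.
# Per-year spending estimates come from the text; the ~3.25-year survey window
# (2020 through early 2023) is an assumption, so results only roughly match
# the table.
YEARS = 3.25  # assumed length of the survey window

projects = {
    # name: (mentions, estimated spending in $M per year)
    "80,000 Hours": (125, 6.0),
    "University groups": (90, 5.0),
    "EAGs/EAGxes": (70, 6.0),
    "Lightcone (incl. LW)": (40 + 15, 2.3),
}

for name, (mentions, annual_spend) in projects.items():
    print(f"{name}: {mentions / (annual_spend * YEARS):.1f} mentions / $M")
```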

Lightcone/LessWrong cannot be funded by just running ads

An IMO reasonable question to ask is "could we fund LessWrong if we just ran ads?". It's not fully clear how that relates to our cost-effectiveness, but I still find it a useful number to look at as a kind of lower-bound on the value that LessWrong could produce, with a small change.

LessWrong gets around 20 million views a year, from around 3 million unique users, for a total of about 12 million engagement minutes. For our audience (mostly American and English-speaking), using Google AdSense you would make about $2 per 1000 views, resulting in total ad revenue of around $40,000, a far cry from the >$1,000,000 that LessWrong spends a year.

Using YouTube as another benchmark: YouTubers are paid about $15 per 1000 U.S.-based ad impressions, and my best guess of ad frequency on YouTube is about one ad every 6 minutes, which works out to 2 million ad impressions and therefore about $30,000 in ad revenue (this ignores sponsorship revenue for YouTube videos, which differs widely between channels, but my sense is that it tends to roughly double or triple the default YouTube ad revenue, so a somewhat more realistic number here is $60,000 or $90,000).

Interestingly, this does imply that if you were willing to buy advertisements that just consisted of getting people in the LessWrong demographic to read LessWrong content, that would easily cover LessWrong's budget. A common cost per click for U.S. based ads is around $2, and it costs around $0.3 to get someone to watch a 30-second ad on Youtube, resulting in estimates of around $40,000,000 to $4,000,000 to get people to read/watch LessWrong content by just advertising for it.
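Here is the same back-of-the-envelope math in one place (all inputs are the rough figures already stated above; the ad rates are assumptions, not measurements):

```python
# Back-of-the-envelope ad revenue math from the figures above.
views_per_year = 20_000_000
engagement_minutes = 12_000_000

# Google AdSense-style display ads: ~$2 per 1000 views.
adsense_revenue = views_per_year / 1000 * 2    # ~$40,000

# YouTube-style ads: ~$15 per 1000 U.S. impressions, ~1 ad per 6 minutes watched.
ad_impressions = engagement_minutes / 6        # ~2,000,000
youtube_revenue = ad_impressions / 1000 * 15   # ~$30,000

# Flipping it around: buying equivalent attention at ~$2 per click
# would cost vastly more than LessWrong's annual budget.
cost_to_buy_clicks = views_per_year * 2        # ~$40,000,000

print(adsense_revenue, youtube_revenue, cost_to_buy_clicks)
```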

Comparing LessWrong to other websites and apps

Another (bad) way of putting an extremely rough number on the value LessWrong provides to the people on it is to compare it against revenue-per-active-user numbers for other websites and social networks.

Platform     U.S. ARPU (USD)     Year    Source
Facebook     $226.93 (Annual)    2023    Statista
Twitter      $56.84 (Annual)     2022    Statista
Snapchat     $29.98 (Annual)     2020    Search Engine Land
Pinterest    $25.52 (Annual)     2023    Stock Dividend Screener
Reddit       $22.04 (Annual)     2023    Four Week MBA

I think by the standards of usual ARPU numbers, LessWrong has between 3,000 and 30,000 active users. So if we use Reddit as a benchmark this would suggest something like $75,000 - $750,000 per year in revenue, and if we use Facebook as a benchmark, this would suggest something like $600,000 - $6,000,000. 
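A quick sketch of the multiplication behind those ranges (the figures in the text are rounded, so the products below only approximately match them):

```python
# Rough revenue benchmarks: assumed active-user range times per-user revenue (ARPU).
# The ranges quoted above are loosely rounded versions of products like these.
users_low, users_high = 3_000, 30_000
reddit_arpu, facebook_arpu = 22.04, 226.93

print(users_low * reddit_arpu, users_high * reddit_arpu)      # ~$66k and ~$661k
print(users_low * facebook_arpu, users_high * facebook_arpu)  # ~$681k and ~$6.8M
```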

Again, it's not enormously clear what exactly these numbers mean, but I still find them useful as very basic sanity-checks on whether we are just burning money in highly ineffectual ways.

Lighthaven event surplus

Over the last year, we negotiated pricing with many organizations that we have pre-existing relationships with using the following algorithm: 

  1. Please estimate your maximum willingness to pay for hosting your event at Lighthaven (i.e. at what price would you be indifferent between Lighthaven and your next best option)
  2. We will estimate the marginal cost to us of hosting your event
  3. We use the difference between these as an estimate of the surplus produced by Lighthaven and we split it 50/50, i.e. you pay us halfway between our marginal cost and your maximum willingness to pay

This allows a natural estimate of the total surplus generated by Lighthaven, measured in donations to the organizations that have hosted events here.

On average, event organizers estimated total value generated at around 2x our marginal cost. 

Assuming this ratio also holds for all events organized at Lighthaven, which seems roughly right to me, we can estimate the total surplus generated by Lighthaven. Also, many organizers adjusted the value-add from Lighthaven upwards after the event, suggesting this is an underestimate of the value we created (and we expect to raise prices in future years to account for that).

This suggests that our total value generated this way is ~1.33x our revenue from Lighthaven, which is likely to be around $2.8M in the next 12 months. This suggests that as long as Lighthaven costs less than ~$3.72M, it should be worth funding if you thought it was worth funding the organizations that have hosted events and programs here (and that in some sense historical donations to Lighthaven operate at least at a ~1.33x multiplier compared to the average donation to organizations that host events here).
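As a worked version of that arithmetic (normalizing our marginal cost to 1 and using the ~2x value estimate and the 50/50 pricing rule from the list above):

```python
# Worked version of the surplus arithmetic above, using the pricing rule
# from the numbered list (split the estimated surplus 50/50).
marginal_cost = 1.0                       # normalize our marginal cost to 1
value_to_organizer = 2.0 * marginal_cost  # organizers estimate ~2x our marginal cost
price = (marginal_cost + value_to_organizer) / 2  # 50/50 split -> 1.5x cost

value_per_dollar_of_revenue = value_to_organizer / price  # = 1.33...
projected_revenue = 2_800_000
implied_total_value = value_per_dollar_of_revenue * projected_revenue

print(round(value_per_dollar_of_revenue, 2))  # 1.33
print(round(implied_total_value))             # ~3,733,000, i.e. the ~$3.72M above
```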

To help get a sense of what kind of organizations host events here, here is an annotated calendar of all the events hosted here in 2024, along with our (charitable) bookings for 2025:

The future of (the) Lightcone

Now that I have talked extensively about all the things we have done in the past (and about how you should regret not giving to us last year), it's time to describe what we might do in the future. In past fundraising documents, both to funders and to the public, I have always found this part the hardest. I value flexibility and adaptability very highly, and with charities, even more so than with investors in for-profit companies, I have a feeling that people who give to us often get anchored on the exact plans and projects that we were working on when they did.

I think to predict what we will work on in the future, it is helpful to think about Lightcone at two different levels: What are the principles behind how Lightcone operates, and what are the concrete projects that we are considering working on?

Lightcone culture and principles

Lightcone has grown consistently but extremely slowly over the last 7 years. There are some organizations I have had a glimpse into that have seen less net growth, but I can't think of an organization that has made as few total hires (including people who later left) while building a roster that still works there today. I've consistently hired ~1 person per year to our core team for the six years Lightcone has existed (resulting in a total team size of 7 core team members).

This is the result of the organization being quite deeply committed to changing strategies when we see the underlying territory shift. Having a smaller team, and having long-lasting relationships, makes it much easier for us to pivot, and allows important strategic and conceptual updates to propagate through the organization more easily.[21] 

Another result of the same commitment is that we basically don’t specialize into narrow roles, but instead are aiming to have a team of generalists where, if possible, everyone in the organization can take on almost any other role in the organization. This enables us to shift resources between different parts of Lightcone depending on which part of the organization is under the most stress, and to feel comfortable considering major pivots that would involve doing a very different kind of work, without this requiring major staff changes every time. I don't think we have achieved full universal generality among our staff, but it is something we prioritize and have succeeded at much more than basically any other organization I can think of.

Another procedural commitment is that we try to automate as much of our work as possible, aiming to use software wherever we can to keep our total staff count low, and to create processes that handle commitments and maintain systems instead of having individuals perform routine tasks on an ongoing basis (or, at the very least, to augment the individuals doing those routine tasks with software and custom tools).

There is of course lots more to our team culture. For a glimpse into one facet of it, see our booklet "Adventures of the Lightcone Team".

Things I wish I had time and funding for

AGI sure looks to me like it's coming, and it's coming uncomfortably fast. While I expect the overall choice to build machine gods beyond our comprehension and control will be quite bad for the world, the hope that remains routes in substantial chunks through leveraging the nascent AGI systems that we have access to today and will see in the coming years.

Concretely, one of the top projects I want to work on is building AI-driven tools for research and reasoning and communication, integrated into LessWrong and the AI Alignment Forum. If we build something here, it will immediately be available to and can easily be experimented with by people working on reducing AI existential risk, and I think has a much larger chance than usual of differentially accelerating good things.

We've already spent a few weeks building things in the space, but our efforts here are definitely still at a very early stage. Here is a quick list of things I am interested in exploring, though I expect most of these not to be viable, and the right solutions and products will probably end up being none of these:

Building an LLM-based editor. 

LessWrong admins currently have access to a few special features in our editor that I have found invaluable. Chief among them is a built-in UI for having "base-model Claude 3.5 Sonnet"[22] and Llama 405b-base continue whatever comment or post I am in the middle of writing, using my best LessWrong comments and posts as a style and content reference (as well as some selected posts and comments by other top LW authors). I have found this to be among the best tools against writer's block: every time I solidly get stuck, I generate 5-10 completions of what the rest of my post could look like, use them as inspiration for the different directions my post could go, then delete them and keep writing.
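As a rough sketch of what this kind of feature looks like under the hood: the snippet below is not our actual implementation; it assumes an OpenAI-compatible completions endpoint that serves a base (non-instruction-tuned) model, and the endpoint URL and model id are placeholders.

```python
# Illustrative sketch only, not the actual LessWrong editor code.
# Assumes an OpenAI-compatible /completions endpoint serving a base model
# (e.g. a hosted Llama 405b base); endpoint URL and model id are placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://example-provider.example/v1", api_key="...")

def continue_draft(draft: str, style_examples: list[str], n: int = 5) -> list[str]:
    """Generate n candidate continuations of a half-written post or comment."""
    # Base models simply continue text, so the prompt is reference writing
    # (the author's best posts/comments) followed by the current draft.
    prompt = "\n\n---\n\n".join(style_examples + [draft])
    resp = client.completions.create(
        model="example/llama-405b-base",  # placeholder model id
        prompt=prompt,
        max_tokens=400,
        temperature=0.9,
        n=n,
        stop=["\n\n---"],  # stop at the separator used between reference texts
    )
    return [choice.text for choice in resp.choices]
```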

Using base models has at least so far been essential for getting any useful writing work out of LLMs, with the instruction-tuned models reliably producing obtuse corpo-speak when asked to engage in writing tasks. 

Similarly, LLMs are now at a point where they can easily provide high-level feedback on your drafts, notice sections where your explanations are unclear, fix typos, shorten and clean up extremely long left-branching sentences, and make various other straightforward improvements to the quality of your writing.

AI prompts and tutors as a content type on LW

LLM systems are really good tutors. They are not as good as human instructors (yet), but they are (approximately) free, eternally patient, and have a breadth of knowledge vastly beyond that of any human alive. With knowledge and skill transfer being one of the key goals for LessWrong, I think we should try to leverage that. 

I would like to start by iterating on getting AI systems to teach the core ideas on LW, and then, if that goes well, experiment with opening up the ability to create such tutors to LessWrong authors who would like AI assistance in explaining and teaching the concepts they want to communicate.

Authors and the LessWrong team can read the chats people have with our AI tutors[23], giving authors the ability to correct anything wrong the AI systems said, and then use those corrections as part of the prompt to update how the tutor behaves in the future. I feel like this unlocks a huge amount of cool pedagogical content knowledge [LW · GW] that has previously been inaccessible to people writing on LessWrong, and gives you a glimpse into how people fail to understand (or successfully apply) your concepts in ways that could previously only be achieved by teaching people one on one.
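A minimal sketch of that correction loop, assuming a generic OpenAI-style chat API; the function and prompt here are hypothetical, not an actual LessWrong feature:

```python
# Hypothetical sketch of an author-correctable tutor: corrections the author has
# made after reading past chats get folded back into the system prompt.
# Assumes a generic OpenAI-style chat API; not an actual LessWrong feature.
from openai import OpenAI

client = OpenAI()

def tutor_reply(post_text: str, corrections: list[str], chat_history: list[dict]) -> str:
    correction_block = "\n".join(f"- {c}" for c in corrections) or "- (none yet)"
    system = (
        "You are a patient tutor teaching the ideas in the following post.\n\n"
        f"{post_text}\n\n"
        "The author has reviewed past tutoring chats and supplied these corrections; "
        f"treat them as authoritative:\n{correction_block}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model id
        messages=[{"role": "system", "content": system}] + chat_history,
    )
    return resp.choices[0].message.content
```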

Building something like an FHI of the West

But AI things are not the only things I want to work on. In a post a few months ago I said: 

The Future of Humanity Institute is dead:

I knew that this was going to happen in some form or another for a year or two, having heard through the grapevine and private conversations of FHI's university-imposed hiring freeze and fundraising block, and so I have been thinking about how to best fill the hole in the world that FHI left behind. 

I think FHI was one of the best intellectual institutions in history. Many of the most important concepts in my intellectual vocabulary were developed and popularized under its roof, and many crucial considerations that form the bedrock of my current life plans were discovered and explained there (including the concept of crucial considerations [? · GW] itself).

With the death of FHI (as well as MIRI moving away from research towards advocacy), there no longer exists a place for broadly-scoped research on the most crucial considerations for humanity's future. The closest place I can think of that currently houses that kind of work is the Open Philanthropy worldview investigation team, which houses e.g. Joe Carlsmith, but my sense is Open Philanthropy is really not the best vehicle for that kind of work. 

While many of the ideas that FHI was working on have found traction in other places in the world (like right here on LessWrong), I do think that with the death of FHI, there no longer exists any place where researchers who want to think about the future of humanity in an open ended way can work with other people in a high-bandwidth context, or get operational support for doing so. That seems bad. 

So I am thinking about fixing it (and have been jokingly internally referring to my plans for doing so as "creating an FHI of the West")

Since then, we have had the fun and privilege of being sued by FTX, which made the umbrella of Lightcone a particularly bad fit for making things happen in this space, but now that that is over, I am hoping to pick this project back up again. 

As I said earlier in this post, I expect that if we do this, I would want to go about it in a pretty incremental and low-key way, but I do think it continues to be one of the best things that someone could do, and with our work on LessWrong and ownership of a world-class 20,000 sq. ft. campus in the most important geographical region of the world, I think we are among the best-placed people to do this.

Building funding infrastructure for AI x-risk reduction

There currently doesn't really exist any good way for people who want to contribute to AI existential risk reduction to give money and get meaningful assistance in figuring out what is good to fund. This is particularly sad since I think there is now a huge amount of interest from funders and philanthropists who want to somehow help with AI x-risk, as progress in capabilities has made work in the space a lot more urgent, but the ecosystem is currently at a particular low point in terms of trust and ability to direct that funding towards productive ends.

I think our work on the S-Process and SFF has been among the best work in the space. Similarly, our work on Lightspeed Grants helped, and I think could grow into a systemic solution for distributing hundreds of millions of dollars a year, at substantially increased cost-effectiveness.

Something something policy

Figuring out how to sanely govern the development of powerful AI systems seems like a top candidate for the most important thing going on right now. I do think we already have quite a lot of positive effect on that, via informing people who work in the space and causing a bunch of good people to start working in it, but it is plausible that we should work on something that is substantially more directed towards it.

This seems particularly important to consider given the upcoming conservative administration, as I think we are in a much better position to help with it than the vast majority of groups associated with AI alignment. We've never associated ourselves very much with either party, have consistently been against various woke-ish forms of mob justice for many years, and have clearly been read a non-trivial amount by Elon Musk (and probably also some by JD Vance).

I really don't know what doing more direct work in the space would look like. The obvious thing to do is to produce content that is more aimed at decision-makers in government, and to just talk to various policy people directly, but it might also involve doing things like designing websites for organizations that work more directly on influencing policy makers (like our recently-started collaborations with Daniel Kokotajlo's research team AI Futures and Zach Stein-Perlman's AI Lab Watch to help them with their website designs and needs).

A better review system for AI Alignment research

I do not believe in pre-publication private anonymous peer review. I think it's dumb to gate access to articles behind submissions to journals, and I think in almost all circumstances it's not worth it for reviewers to be anonymous, both because great reviewers should be socially rewarded for their efforts and because bad reviewers should be able to be weeded out.

But I do think there is a kind of work that is often undersupplied, which consists of engaging critically with research, suggesting improvements, helping the author and the reader discover related work, and successfully replicating, or failing to replicate, key results. Right now, the AI Alignment field has very little incentive for that kind of work, which I think is sad.

I would like to work on making more of that kind of review happen. I have various schemes and ideas in mind for how to facilitate it, and think we are well-placed to do it.


Again, our operating philosophy values pivoting to whatever we end up thinking is best, and I think it's quite likely we will not make any of the above a substantial focus of the next 1-2 years, but it still seemed useful to list.

What do you get from donating to Lightcone?

I think the best reason to donate to us is that you think doing so will cause good things to happen in the world (like it becoming less likely that you and all your friends will die from a rogue AI). That said, credit allocation is important, and I think over the past few years there has been too little credit given to people donating to keep our community institutions intact, and I personally have been too blinded by my scope-sensitivity[24] and so ended up under-investing in my relationships with anyone but the very largest donors.

I think many things would be better if projects like LessWrong and Lighthaven were supported more by the people who are benefitting from them instead of large philanthropists giving through long chains of deference with only thin channels of evidence about our work. This includes people who benefitted many years ago when their financial means were much less, and now are in a position to help the institutions that allowed them to grow.

That means that if you've really had your thinking or life path changed by the ideas on LessWrong, or by events and conversations at Lighthaven, then I'd make a small request for you to chip in to keep the infrastructure alive for you and for others.

If you donate to us, I will try to ensure you get appropriate credit (if you desire). I am still thinking through the best ways to achieve that, but some things I feel comfortable committing to (and more to come): 

  1. If you donate at least $1,000 we will send you a special-edition Lightcone or LessWrong t-shirt
  2. If you donate at least $5,000, we will add you to the Lightcone donor leaderboard, under whatever name you desire (to be created at lesswrong.com/leaderboard)
  3. We will also add a plaque in your honor to our Lighthaven legacy wall, with a sigil and name of your choice (also currently being built, but I'll post comments with pictures as donations come in!)
  4. Various parts of Lighthaven are open to be named after you! You can get a bench (or similarly prominent object) with a nice plaque carrying a dedication of your choice if you donate at least $2,000 (raised from $1,000)[25], or you can get a whole hall or area of the campus named after you at higher numbers.[26]

As the first instance of this, I'd like to give enormous thanks to @drethelin [LW · GW] for opening our fundraiser with a $150,000 donation, in thanks for which we have renamed our northwest gardens "The Drethelin Gardens" for at least the next 2 years.

If you can come up with any ways that you think would be cool to celebrate others who have given to Lightcone, or have any ideas for how you want your own donation to be recognized, please reach out! I wasn't really considering naming campus sections after people until drethelin reached out, and I am glad we ended up going ahead with that.

Goals for the fundraiser

We have three fundraising milestones for this fundraiser, one for each million dollars:

  1. May. The first million dollars will probably allow us to make our first (deferred) interest payment on our loan and continue operating until May.
  2. November. The second million dollars gets us all the way to our second interest payment, in November.
  3. 2026. The third million dollars allows us to make our second interest payment and make it to the end of the year.

We'll track our progress through each goal with a fundraising thermometer on the front page[27]. Not all of Lightcone's resources will come from this fundraiser of course. Whenever we receive donations (from any source), we'll add the funds to the "Raised" total on the frontpage.

Logistics of donating to Lightcone

We are a registered 501(c)3 in the US, and if there is enough interest, we can probably set up equivalence determinations in most other countries that have a similar concept of tax-deductibility, making donations tax-deductible there as well (so far we've had interest from the UK and Switzerland). 

We can also accept donations of any appreciated asset that you might want to donate. We are set up to receive crypto, stocks, stock options, and if you want to donate your appreciated Magic the Gathering collection, we can figure out some way of giving you a good donation receipt for that. Just reach out (via email, DM [LW · GW], or text/signal at +1 510 944 3235) and I will get back to you ASAP with the logistics.

Also, please check if your employer has a donation matching program! Many big companies double the donations made by their employees to nonprofits (for example, if you work at Google and donate to us, Google will match your donation up to $10k). Here is a quick list of organizations with matching programs I found, but I am sure there are many more.

If you want to donate less than $5k in cash, I recommend our Stripe donation link. We lose about 2-3% of that in fees if you use a credit card, and 1% if you use bank transfer, so if you want to donate more and want us to lose less to fees, you can reach out and I'll send you our wire transfer details.

If you want to send us BTC, we have a wallet! The address is 37bvhXnjRz4hipURrq2EMAXN2w6xproa9T.

Tying everything together

Whew, that was a marathon of a post. I had to leave out a huge number of things that we've done, and a huge number of hopes and aims and plans I have for the future. Feel free to ask me in the comments about any details. 

I hope this all helps explain what Lightcone's deal is and gives you the evidence you need to evaluate my bold claims of cost-effectiveness.

So thank you all. I think that with help from the community and the recent reinvigorated interest in AI x-risk, we can pull together the funds to continue Lightcone's positive legacy. 

If you can and want to be a part of that, donate to us here. We need to raise $3M to survive the next 12 months, and can productively use a lot of funding beyond that. 

  1. ^

    Donations are tax-deductible in the US. Reach out for other countries, we can likely figure something out.

  2. ^

    Our technical efforts here also contribute to the EA Forum, which started using our code in 2019.

  3. ^

    Why more money this year than next year? The reason is that we have an annual interest payment of $1M on our Lighthaven mortgage that was due in early November, which we negotiated to be deferred to March. This means this twelve month period will have double our usual mortgage payments. 

    We happen to also own, in full, a ~$1M building adjacent to Lighthaven, so we have a bit of slack. We are looking into taking out a loan against that property, but we are a non-standard corporate entity from the perspective of banks, so it has not been easy. If for some reason you would rather arrange a real-estate-secured loan for us instead of donating, that would also be quite valuable.

  4. ^

    I am also hoping to create more ways of directing appreciation and recognition to people whose financial contributions allow us to have good things (see the section below on "What do you get from donating to Lightcone?" [LW · GW]).

  5. ^

    What does "additional" mean here? That's of course quite tricky, since it's really hard to establish what would have happened if we hadn't worked on LessWrong. I am not trying to answer that tricky question here, I just mean "more content was posted to LW".

  6. ^

    As a quick rundown: Shane Legg is a Deepmind cofounder and early LessWrong poster [LW · GW] directly crediting Eliezer for working on AGI. Demis has also frequently referenced LW ideas and presented at both FHI and the Singularity Summit. OpenAI's founding team and early employees were heavily influenced by LW ideas (and Ilya was at my CFAR workshop in 2015). Elon Musk has clearly read a bunch of LessWrong, and was strongly influenced by Superintelligence which itself was heavily influenced by LW. A substantial fraction of Anthropic's leadership team actively read and/or write on LessWrong.

  7. ^

    For a year or two I maintained a simulated investment portfolio at investopedia.com/simulator/ with the primary investment thesis "whenever a LessWrong comment with investment advice gets over 40 karma, act on it". I made 80% returns over the first year (half of which was buying early shorts in the company "Nikola" after a user posted a critique of them on the site). 

    After loading up half of my portfolio on some option calls with expiration dates a few months into the future, I then forgot about it, only to come back to see all my options contracts expired and worthless, despite the sell price at the expiration date being up 60%, wiping out most of my portfolio. This has taught me both that LW is an amazing source of alpha for financial investment, and that I am not competent enough to invest on it (luckily other people [LW(p) · GW(p)] have [LW(p) · GW(p)] done [LW · GW] reasonable [LW(p) · GW(p)] things [LW(p) · GW(p)] based on things said on LW and now have a lot of money, so that's nice, and maybe they could even donate some back to us!)

  8. ^

    This example is especially counterfactual on Lightcone's work. Gwern wrote the essay at a retreat hosted by Lightcone, partly in response to people at the retreat saying they had a hard time visualizing a hard AI takeoff; and Garrett Baker was a MATS fellow at office space run by Lightcone and provided (at the time freely) to MATS.

  9. ^

    It might be a bit surprising to read that I expect the upkeep costs to stay the same despite revenue increasing ~35%. The reason I expect this is that I see a large number of inefficiencies in our upkeep, and also that we had a number of fixed costs this year that I don't expect to recur next year. 

  10. ^

    Yes, I know that you for some reason aren't supposed to use the word "feature" to describe improvements to anything but software, but it's clearly the right word. 

    "We shipped the recording feature in Eigen Hall and Ground Floor Bayes, you can now record your talks by pressing the appropriate button on the wall iPad"

  11. ^

    Austin Chen from Manifold, Manifund and Manifest says: 

    I came up with the idea for Manifest while wandering around the cozy Lighthaven campus during some meetup, thinking "y'know, I would really love to run my own event here". I approached Oli with the idea for a festival for prediction markets, and he was enthusiastically on board, walking our greenhorn team through the necessary logistics: venue layout, catering, security, equipment and more. With Lightcone's help, we turned Manifest from just an idea into a runaway hit, one that's received major press coverage, built up a community, and even changed some lives. We've since run Manifest again, and organized another premier event (The Curve), each time to rave reviews. I'm very grateful to Lighthaven for putting the dream of Manifest in my head -- and to the Lightcone folks for helping me turn that dream to reality.

  12. ^

    For FTX, the graph above subtracts the amount we gave them in our settlement ($1.7M) from the total amount we received from them.

  13. ^

    Returning isn't really the right word, it's more like "ended up giving them". See below on how we settled with FTX using SFC's and Jaan's help.

  14. ^

    SFF has a funding structure where grants get evaluated by a rotating set of "recommenders", which are usually people that Jaan Tallinn, the primary funder of SFF, respects. Those recommenders make funding recommendations 1-2 times a year via some cool mechanism design process that we helped build.

  15. ^

    The parent organization of SFF

  16. ^

    This exact number is lower than the amount Jaan & SFC contributed as a result of complicated dynamics in the settlement negotiations, and the conversations we had around it, which ultimately settled on Jaan thinking this lower amount is fairer to garnish from future recommendations.

  17. ^

    I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.

  18. ^

    Going into the details of our work is I think beyond the scope of this post, but if you are interested in the things we've built, I recommend checking out Zvi's recent post about his experiences in the latest SFF round [LW · GW], and this (somewhat outdated) video by Andrew Critch talking about the S-Process.

  19. ^

    This table is not exhaustive, and OpenPhil told us they chose organisations for inclusion partly based on whose budget data happened to be easy to get. Also, we've removed one organization at their request (which also ranked worse than LessWrong 2.0).

  20. ^

    The linked grant is ~$6M over a bit more than 2 years, and there are a bunch of other grants that also seem to go to university groups, making my best guess around $5M/yr, but I might be off here.

  21. ^

    Though the last 2 years have been worse than par for that, for reasons that are now behind us, like our fun lawsuit with FTX and a lot of post-FTX soul searching.

  22. ^

    This is in quotes because we don't have access to Claude 3.5 Sonnet base model. However, you can get a model that behaves surprisingly close to it by using Anthropic's assistant completion prefix feature. H/t to Janus for pointing this out.

  23. ^

    Unless they opt out or something, maybe requiring some amount of payment, since running LLMs isn't free.

  24. ^

    Relatedly, I really benefitted from reading Scott Garrabrant's "Geometric Rationality" [? · GW] sequence, which critiques various forms of scope-sensitivity that had led me astray, and argues for something more geometric in credit and resource allocations

  25. ^

    Due to an apparently ravenous hunger among our donor base for having benches with plaques dedicated to them, and us not actually having that many benches, the threshold for this is increased to $2,000. Everyone who donated more than $1,000 but less than $2,000 before Dec 2nd will still get their plaque.

  26. ^

    I can't guarantee the benches/plaques/objects will stay around forever, so I think it makes sense to limit our promise of the plaque being visible to 2 years, though I expect the majority of them to stay for a lot longer.

  27. ^

    We'll probably display this until the New Year

153 comments

Comments sorted by top scores.

comment by DaystarEld · 2024-11-30T11:10:54.989Z · LW(p) · GW(p)

I just donated $1,000. This is not a minor amount for me, and I almost just donated $10 as suggested in Shoshannah's comment,  but I knew I could donate that much without thought or effort, and I wanted to really put at least some effort into this, after seeing how much obvious effort Oliver and others at Lesswrong have been putting in. 

My decision process was as follows:

First, I dealt with my risk aversion/loss aversion/flinch response to giving large sums of money away. This took a couple minutes, much faster than it used to be thanks to things like my Season of Wealth a couple years ago, but felt like a mildly sharp object jiggling around in my chest until I smoothed it out with reminders of how much money I make these days compared to the relatively poor upbringing I had and the not-particularly-high salary I made for the first ~decade of my adult life. 

Second, I thought of how much I value Lesswrong and Lighthaven existing in the world as a vague thing. Impersonally, not in the ways they have affected me, just like... worlds-with-these-people-doing-this-thing-in-it vs worlds-without. This got me up to a feeling of more than double what I wanted to give, somewhere around 25ish.

Third, I thought about how much value I personally have gained from Lesswrong and Lighthaven. I cannot really put a number on this. It's hard to disentangle the value from all the various sources in the rationality space, and the people who posts on LW and attended Lighthaven events. This ballooned the amount to something extremely hard to measure. Far more than $100, but probably less than 10,000? 

Fourth, I dealt with the flinch-response again. 10,000 is a lot for me. I lost more than that due to FTX's collapse even before the clawback stress started, and that took a bit of time to stop feeling internal jabs over. A few subsections needed dealing with; what if I have an emergency and need lots of money? What if my hypothetical future wife or kids do? Would I regret donating then? This bumped me way back down to the hundreds range.

Fifth, I thought about how I would feel if I woke up today and, instead of reading this post, I read a post saying that they had to shut down Lighthaven, and maybe even LessWrong, due to lack of funding. How much I would regret not having donated money, even if it didn't end up helping. I'm still quite sad that we lost Wytham, and would pay money to retroactively try to save it if I could. This brought me up to something like $300-500.

Sixth, I confronted the niggling thought of "hopefully someone out there will donate enough that my contribution will not really matter, so maybe I don't even need to really donate much at all?" This thought felt bad, and I had a brief chat with my parts, thanking my internal pragmatism for its role in ensuring we're not being wasteful before exploring together if this is the sort of person we want to be when other people might need us. After that conversation was over the number had stabilized around 500.

Seventh, I thought about the social signal if I say I donated a lot and how this might encourage others to donate more too, effectively increasing the amount Lesswrong gets, and decided this didn't really affect much. Maybe a minor effect toward increasing, but nothing noticeable.

Eighth, I thought about the impact to the world re: Alignment. I felt the black hole there, the potential infinite abyss that I could throw my savings and life into and probably not get any useful effect out of, and spent some time with that before examining it again and feeling like another few hundred may not "make sense" in one direction or the other, but felt better than not doing it.

And ninth, I finally thought about the individuals working at Lighthaven that I know. How much do I trust them? How much do I want them to feel supported and motivated and cared for by the community they're contributing so much to?

By the end of that I was around $800-900 and I thought, fuck it, I've made stupider financial decisions than an extra hundred bucks for a fancy T-shirt, and nice round numbers are nice and round.

Thank you all for all you do. I hope this helps.

Replies from: Raemon, Augustin Portier
comment by Raemon · 2024-11-30T18:45:28.203Z · LW(p) · GW(p)

So it benefits me and conflict of interest and all that, but I think this is a pretty great comment in terms of broadcasting how one might go about figuring out how much to donate. This is often a pretty messy process. There are some people out there who do more actual math here, but, I think for most people this sort of thing is more useful. (Integrating this-sort-of-thing into some back-of-envelope calculations would be cool too if someone good at that did it and could articulate what went on inside them)

To somewhat account for my conflict-of-interest, I'd add: "a thing potentially missing here is what other things might fill a similar role as Lightcone in your world?". If you have ~$1000ish you can give without hardship, you might want to reflect more on the alternatives.

It gets sort of overwhelming to think about all the alternatives, so I think my recommendation to people is to come up with ~3 things they might consider giving money to, and then use a process like the one described here to figure out which one is best, or how to split money if you want to for whatever reason.

comment by TeaTieAndHat (Augustin Portier) · 2024-12-02T10:11:44.741Z · LW(p) · GW(p)

My decision process was much dumber: 1. Try to spend less time on LW, and move to close the page after having reflexively opened it, deliberately not opening this post. 2. See Daystar’s comment on the frontpage and go "wait, that’s pretty important for me too". 3. Give ten bucks, because I don’t have $1,000 lying around.

So, basically, I’m making a mostly useless comment but thanks for reminding me to donate :-)

comment by So8res · 2024-12-02T00:35:44.251Z · LW(p) · GW(p)

I donated $25k. Thanks for doing what you do.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-12-02T19:54:27.326Z · LW(p) · GW(p)

My wife and I just donated $10k, and will probably donate substantially more once we have more funds available.

LW 1.0 was how I heard about and became interested in AGI, x-risk, effective altruism, and a bunch of other important ideas. LW 2.0 was the place where I learned to think seriously about those topics & got feedback from others on my thoughts. (I tried to discuss all this stuff with grad students and professors at UNC, where I was studying philosophy, with only limited success). Importantly, LW 2.0 was a place where I could write up my ideas in blog post or comment form, and then get fast feedback on them (by contrast with academic philosophy where I did manage to write on these topics but it took 10x longer per paper to write and then years to get published and then additional years to get replies from people I didn't already know). More generally the rationalist community that Lightcone has kept alive, and then built, is... well, it's hard to quantify how much I'd pay now to retroactively cause all that stuff to happen, but it's way more than $10k, even if we just focus on the small slice of it that benefitted me personally.

Looking forward, I expect a diminished role, due simply to AGI being a more popular topic these days so there are lots of other places to talk and think about it. In other words the effects of LW 2.0 and Lightcone more generally are now (large) drops in a bucket whereas before they were large drops in an eye-dropper. However, I still think Lightcone is one of the best bang-for-buck places to donate to from an altruistic perspective. The OP lists several examples of important people reading and being influenced by LW; I personally know of several more.

...All of the above was just about magnitude of impact, rather than direction. (Though positive direction was implied). So now I turn to the question of whether Lightcone is consistently a force for good in the world vs. e.g. a force for evil or a high-variance force for chaos.

Because of cluelessness, it's hard to say how things will shake out in the long run. For example, I wouldn't be surprised if the #1 determinant of how things go for humanity is whether the powerful people (POTUS & advisors & maybe congress and judiciary) take AGI misalignment and x-risk seriously when AGI is imminent. And I wouldn't be surprised if the #1 determinant of that is the messenger -- which voices are most prominently associated with these ideas? Esteemed professors like Hinton and Bengio, or nerdy weirdos like many of us here? On this model, perhaps all the good Lightcone has done is outweighed by this unfortunate set of facts, and it would have been better if this website never existed.

However, I can also imagine other possibilities -- for example, perhaps many of the Serious Respected People who are, and will, be speaking up about AGI and x-risk etc. were or will be influenced to do so by hearing arguments and pondering questions that originated on, or were facilitated by, LW 2.0. Or alternatively, maybe the most important thing is not the status of the messenger, but the correctness and rigor of the arguments. Or maybe the most important thing is not either of those but rather simply how much technical work on the alignment and control problems has been accomplished and published by the time of AGI. Or maybe... I could go on. The point is, I see multiple paths by which Lightcone could turn out, with the benefit of hindsight, to have literally prevented human extinction.

In situations of cluelessness like this I think it's helpful to put weight on factors that are more about the first-order effects of the project & the character of the people involved, and less about the long-term second and third-order effects etc. I think Lightcone does great on these metrics. I think LW 2.0 is a pocket of (relative) sanity in an otherwise insane internet. I think it's a way for people who don't already have lots of connections/network/colleagues to have sophisticated conversations about AGI, superintelligence, x-risk, ... and perhaps more importantly, also topics 'beyond' that like s-risk, acausal trade, the long reflection, etc. that are still considered weird and crazy now (like AGI and ASI and x-risk were twenty years ago). It's also a place for alignment research to get published and get fast, decently high-quality feedback. It's also a place for news, for explainer articles and opinion pieces, etc. All this seems good to me. I also think that Lighthaven has positively surprised me so far, it seems to be a great physical community hub and event space, and also I'm excited about some of the ideas the OP described for future work.

On the virtue side, in my experience Lightcone seems to have high standards for epistemic rationality and for integrity & honesty. Perhaps the highest, in fact, in this space. Overall I'm impressed with them and expect them to be consistently and transparently a force for good. Insofar as bad things result from their actions I expect it to be because of second-order effects like the status/association thing I mentioned above, rather than because of bad behavior on their part.

So yeah. It's not the only thing we'll be donating to, but it's in our top tier.

comment by Drake Thomas (RavenclawPrefect) · 2024-11-30T09:08:37.438Z · LW(p) · GW(p)

I've gotten enormous value out of LW and its derived communities during my life, at least some of which is attributable to the LW2.0 revival and its effects on those communities. More recently, since moving to the Bay, I've been very excited by a lot of the in-person events that Lighthaven has helped facilitate. Also, LessWrong is doing so many things right as a website and source-of-content that no one else does (karma-gated RSS feeds! separate upvote and agree-vote! built-in LaTeX support!) and even if I had no connection to the other parts of its mission I'd want to support the existence of excellently-done products. (Of course there's also the altruistic case for impact on how-well-the-future-goes, which I find compelling on its own merits.) Have donated $5k for now, but I might increase that when thinking more seriously about end-of-year donations.

(Conflict of interest notice: two of my housemates work at Lightcone Infrastructure and I would be personally sad and slightly logistically inconvenienced if they lost their jobs. I don't think this is a big contributor to my donation.)

comment by David Matolcsi (matolcsid) · 2024-11-30T05:01:21.079Z · LW(p) · GW(p)

I'm considering donating. Can you give us a little more information on the breakdown of the costs? What are typical large expenses that the 1.6 million upkeep of Lighthaven consists of? Is this a usual cost for a similar-sized event space, or is it something about the location or the specialness of the place that makes it more expensive? 

How much money does running LW cost? The post says it's >1M, which somewhat surprised me, but I have no idea what the usual cost of running such a site is. Is the cost mostly server hosting, or salaries for content moderation, or salaries for software development, or something I haven't thought of? 

Replies from: habryka4
comment by habryka (habryka4) · 2024-11-30T06:04:27.075Z · LW(p) · GW(p)

Very reasonable question! Here is a breakdown of our projected budget:

Core Staff Salaries, Payroll, etc. (6 people): $1.4M
Lighthaven (Upkeep):
    Operations & Sales: $240k
    Repairs & Maintenance Staff: $200k
    Porterage & Cleaning Staff: $320k
    Property Tax: $300k
    Utilities & Internet: $180k
    Additional Rental Property: $180k
    Supplies (Food + Maintenance): $180k
    Lighthaven Upkeep Total: $1.6M
Lighthaven Mortgage: $1M
LW Hosting + Software Subscriptions: $120k
Dedicated Software + Accounting Staff: $330k
Total Costs: $4.45M
Expected Lighthaven Income: ($2.55M)
Annual Shortfall: $1.9M

And then, as explained in the post, in the coming year, we will have an additional mortgage payment of $1M due in March.

The core staff consists of generalists who work on a very wide range of different projects. My best guess is about 65% of the generalist labor in the coming year will go into LW, but that might drastically change depending on what projects we take on.

Is this a usual cost for a similar-sized event space, or is it something about the location or the specialness of the place that makes it more expensive? 

The costs of event venues and hotels differ enormously across the Bay Area. I think we currently operate at substantially higher expense per square foot than a low-margin hotel like the Rose Garden Inn, but at substantially lower cost than a dedicated conference center like the SSS-Ranch or the Oakland Marriott. I expect we can probably drive maintenance and upkeep costs down, but I think it also makes sense from a pure economic perspective to keep investing in venue upgrades.

Replies from: magi
comment by magi · 2024-12-03T10:10:34.295Z · LW(p) · GW(p)

"porterage and cleaning staff" for one year $320k

An entire family can retire with that kind of money where I'm from.

Have you considered not living in San Francisco and making flights there instead? (If you have and decided it's not worth it, I understand)

Replies from: habryka4
comment by habryka (habryka4) · 2024-12-03T18:02:51.150Z · LW(p) · GW(p)

This is a line item for basically all the service staff of a 100-bed, 30,000 sq. ft. conference center/hotel.

I don't think I understand how not living in the Bay Area and making flights there instead would work. This is a conference center, we kind of need to be where the people are to make that work.

comment by Jacob Pfau (jacob-pfau) · 2024-11-30T16:43:56.297Z · LW(p) · GW(p)

I am slightly worried about the rate at which LW is shipping new features. I'm not convinced they are net positive. I see lesswrong as a clear success, but unclear user of the marginal dollar; I see lighthaven as a moderate success and very likely positive to expand at the margin.

The interface has been getting busier[1] whereas I think the modal reader would benefit from having as few distractions as possible while reading. I don't think an LLM-enhanced editor would be useful, nor am I excited about additional tutoring functionality.

I am glad to see that people are donating, but I would have preferred this post to carefully signpost the difference between status-quo value of LW (immense) from the marginal value of paying for more features for LW (possibly negative), and from your other enterprises. Probably not worth the trouble, but is it possible to unbundle these for the purposes of donations?

Separately, thank you to the team! My research experience over the past years has benefitted from LW on a daily basis.

EDIT: thanks to Habryka for more details. After comparing to previous site versions I'm more optimistic about the prospects for active work on LW.


  1. (edit) in some places, less busy in others ↩︎

Replies from: habryka4, Raemon
comment by habryka (habryka4) · 2024-11-30T19:28:30.296Z · LW(p) · GW(p)

Yeah, I think this concern makes a bunch of sense. 

My current model is that LW would probably die a slow death within 3-4 years if we started developing at a much slower pace than the one which we have historically been developing. One reason for that is that that is exactly what happened with LW 1.0. There were mods, and the servers were kept on and bugs were being fixed, but without substantial technical development the site fell into disrepair and abandonment surprisingly quickly. 

The feature development here is important in the eternal race against spammers and trolls, but the internet is also constantly shifting, and with new modalities of how to read and interact with ideas, it does matter to have an active development team, even just for basic traffic and readability reasons. LW 1.0 missed a bunch of the transition to mobile and this was a large component of its decline. I think AI chat systems are likely a coming transition where you really want a team to actively iterate on how to best handle that shift (my current guess is 15-20% of new users are already referred to the site because ChatGPT or Claude told them to read things here), but it might also end up something VR or AR shaped. 

I also want to push back a bit on this sentence: 

The interface has been getting busier whereas I think the modal reader would benefit from having as few distractions as possible while reading.

I actually think LessWrong is very unique in how it has not been getting busier! I really try extremely hard to keep UI complexity under control. As an illustration, here is a screenshot of a post page from a year ago: 

Here is a screenshot of that page today: 

I think the second one is substantially cleaner and less distracting. UI on LessWrong gets less busy as frequently as it gets busy! 

Overall I am very proud of the design of the site, which successfully combines a really quite large (and IMO valuable) feature set with a very clean reading experience. Reducing clutter is the kind of iteration we've been doing a lot of just this year, meaning that is where a substantial chunk of marginal development resources is going.

Of course, you might still overall disagree with the kind of feature directions we are exploring. I do think AI-driven features are really important for us to work on, and if you disagree with that, it makes sense to be less excited about donating to us. For another example of the kind of thing that I think could be quite big and useful, see Gwern's recent comment: https://www.lesswrong.com/posts/PQaZiATafCh7n5Luf/gwern-s-shortform?commentId=KGBqiXrKq8x8qnsyH [LW(p) · GW(p)] 

But overall, we sure did grow LessWrong a lot, and I expect future development will do that as well. It's from my perspective often extremely hard to tell which things cause the site to grow and get better, but as one example of a very recent change, our rework of shortform into Quick Takes and Popular Comments on the frontpage has, I think, enabled a way for new content to get written on the site that now hosts close to 40% of my favorite content, and I think that's huge. And that very much is the kind of thing that marginal feature development efforts go into.

Due to various complications in how our finances are structured in the upcoming year, our ability to marginally scale up or down is also very limited, making a discussion of the value of marginal contributions a bit hard. As I outlined in my comment to David [LW(p) · GW(p)], we actually have very little staff specialized just for software engineering, and as I said in the post, we already are an organization that grows extraordinarily slowly. And in the coming year, $2M of our $3M in expenses are mortgage payments where if we fail to meet them, we would end up with something close to bankruptcy, so that's not really a place where we can choose to spend less. 

This means that at most we could reduce our burn rate by ~20%, even if we got rid of all of our specialized software engineering staff and let go of a third of our core staff.

And getting rid of our core staff seems like a bad choice, even if I agreed with your point about the value of marginal feature development. I've invested enormous amounts of resources into developing a good culture among my staff, there are large fixed costs associated with hiring, and the morale effects of firing people are huge. And they are already selected for being the kind of people who will just work with me on scaling up and improving Lighthaven, or going into policy, or pivoting into B2B SaaS, exactly because I recognize that indeed different projects will hit diminishing returns. As such, I think the much more likely thing to do here would be to keep costs the same, be convinced by good arguments that we should do something else, and then invest more into things other than LW.

I think this overall means that from the perspective of a donor, there isn't really a way to unbundle investment into LessWrong or Lighthaven or other projects. Of course, we will take into account arguments and preferences from people who keep the show running, so making those arguments and sharing your preferences about where we should marginally allocate resources is valuable, but I don't think we could come up with a way to allow donors to directly choose which project we will invest marginal resources into.

Replies from: jacob-pfau
comment by Jacob Pfau (jacob-pfau) · 2024-11-30T19:44:33.598Z · LW(p) · GW(p)

Thanks for these details. These have updated me to be significantly more optimistic about the value of spending on LW infra.

  • The LW1.0 dying to no mobile support is an analogous datapoint in favor of having a team ready for 0-5 year future AI integration.
  • The head-to-head on the site updated me towards thinking things that I'm not sure are positive (visible footnotes in sidebar, AI glossary, to a lesser extent emoji-reacts) are not a general trend. I will correct my original comment on this.
  • While I think the current plans for AI integration (and existing glossary thingy) are not great, I do think there will be predictably much better things to do in 1-2 years and I would want there to be a team with practice ready to go for those. Raemon's reply below also speaks to this. Actively iterating on integrations while keeping them opt-in (until very clearly net positive) seems like the best course of action to me.
comment by Raemon · 2024-11-30T19:33:39.402Z · LW(p) · GW(p)

I wrote some notes on how we've been working to keep UI simpler, but habryka beat me to it. Meanwhile:

Some thoughts Re: LLM integration

I don't think we'll get to agreement within this comment margin. I think there's a lot of ways LLM integration can go wrong. I think the first-pass at the JargonBot Beta Test [LW · GW] isn't quite right yet and I hope to fix some of that soon to make it a bit more clear what it looks like when it's working well, as proof-of-concept.

But, I think LLM integration is going to be extremely important, and I want to say a bit about it.

Most of what LLMs enable is entirely different paradigms of cognition that weren't possible before. This is sort of an "inventing cars while everyone is still asking for slightly better horses, or being annoyed by the car-centric infrastructure that's starting to roll out in fits and starts. Horses worked fine, what's going on?" situation.

I think good LLM integrations make the difference between "it's exhausting and effortful to read a technical post in a domain you aren't familiar with" (and therefore, you don't bother) and "actually it's not that much harder than reading a regular post." (I think several UI challenges need to get worked out for this to work, but they are not particularly impossible UI challenges). This radically changes the game on what sort of stuff you can learn, and how quickly someone who is somewhat interested in a field can get familiar with it. You can just jump into the post that feels relevant, and have the gaps between your understanding and the cutting edge filled in automatically (instead of having to painstakingly figure out the basics of a field before you can start participating).

Once this is working reliably and you actually deeply believe in it, it opens up new atomic actions that your brain can automatically consider that would previously have been too expensive to be worth it. 

I don't think we even need advances on current LLM skill for this to work pretty well – LLMs aren't very good at figuring stuff out at the cutting edge, but they are pretty good at filling in details that get you up to speed on the basics, and I think it's pretty obvious how to improve them along the edges here.

This is in addition to the very straightforward LLM-integrations into an editor that save obvious boring bits of work (identifying all typos and slight wording confusions and predictably hard-to-understand sections) and freeing up that attention for more complicated problem solving.

I think it's important for LessWrong in particular to be at the forefront here, because there are gnarly important bottlenecking-for-humanity's-future problems, that require people to skill up rapidly to have a hope of contributing in time. (My inspiration was a colleague kind of casually deciding "I think I'm going to learn about the technical problems underlying compute governance", and spinning up into the field so they could figure out how to contribute)

comment by ryan_greenblatt · 2024-12-02T00:36:34.679Z · LW(p) · GW(p)

I donated $3,000. I've gained and will continue to gain a huge amount of value from LW and other activities of Lightcone Infrastructure, so it seemed like a cooperative and virtuous move to donate.[1]

I tried to donate at a level such that if all people using LW followed a similar policy to mine, Lightcone would likely be reasonably funded, at least for the LW component.


  1. I think marginal funding to Lightcone Infrastructure beyond the ~$3 million needed to avoid substantial downsizing is probably worse than some other funding opportunities. So, while I typically donate larger amounts to a smaller number of things, I'm not sure if I will donate a large amount to Lightcone yet. You should interpret my $3,000 donation as indicating "this is a pretty good donation opportunity and I think there are general cooperativeness reasons to donate" rather than something stronger. ↩︎

comment by Jeffrey Ladish (jeff-ladish) · 2024-12-01T00:28:44.554Z · LW(p) · GW(p)

Just donated 2k. Thanks for all you’re doing Lightcone Team!

comment by Perhaps · 2024-11-30T17:05:02.467Z · LW(p) · GW(p)

What happens to the general Lightcone portfolio if you don't meet a fundraising target, either this year or a future year?

For concreteness, say you miss the $1M target by $200K. 

comment by Cole Wyeth (Amyr) · 2024-11-30T15:23:55.814Z · LW(p) · GW(p)

This post provided far more data than I needed to donate to support a site I use constantly.

comment by DocCoase · 2024-11-30T07:16:03.938Z · LW(p) · GW(p)

Well argued. I’m in. I’ve received ample surplus value over the years from LW. Less Online was a blast this year. Thank you for all the work you and your team do!

comment by Phil Parker (whaleinfo) · 2024-11-30T18:48:04.229Z · LW(p) · GW(p)

I just made my initial donation, with the intention of donating more over time.

The last year of my life was the hardest I've ever been through. In the spring, with a new job and a week's notice - I moved across the country to Berkeley with only my suitcase. I was at my rope's end for keeping it all together, and Lighthaven was there to catch me. I rented a basement room and was able to stay for a month or so until I could figure out a permanent place to live.

It's hard to write how much it meant to me. The logistics of finding a place to sleep was great of course, but more than that, when everything had fallen apart, every friendly face and hello, every coworking session, every late night fireside discussion showed me that I wasn't by myself. 

I think this is what Lighthaven means to many people - a place where we can go and see that we're not alone.

comment by CowardlyPersonUsingPseudonym · 2024-12-01T22:19:47.000Z · LW(p) · GW(p)

I like a lot of what you are doing, and I might donate your cause, but I feel there are some questions that need to be asked. (I feel uncomfortable about the questions, that's why I use a pseudonym.)

Have you considered cutting salaries in half? According to the table you share in the comments, you spend 1.4 million on the salary for the 6 of you, which is $230k per person. If the org was in a better shape, I would consider this a reasonable salary, but I feel that if I was in the situation you guys are in, I would request my salary to be at least halved. 

Relatedly, I don't know if it's possible for you to run with fewer employees than you currently have. I can imagine that 6 people is the minimum that is necessary to run this org, but I had the impression that at least one of you is working on creating new rationality and cognitive trainings, which might be nice in the long-term (though I'm pretty skeptical of the project altogether), but I would guess you don't have the slack for this kind of thing now if you are struggling for survival.

On the other side of the coin, can you extract more money out of your customers? The negotiation strategy you describe in the post (50-50ing the surplus) is very nice and gentlemanly, and makes sense if you are both making profit. But if there is a real chance of Lightcone going bankrupt and needing to sell Lighthaven, then your regular customers would need to fall back to their second-best option, losing all their surplus. So I think in this situation it would be reasonable to try to charge your regular customers practically the maximum they are willing to pay.

Replies from: habryka4, pktechgirl
comment by habryka (habryka4) · 2024-12-01T22:52:28.321Z · LW(p) · GW(p)

Have you considered cutting salaries in half? According to the table you share in the comments, you spend 1.4 million on the salary for the 6 of you, which is $230k per person. If the org was in a better shape, I would consider this a reasonable salary, but I feel that if I was in the situation you guys are in, I would request my salary to be at least halved. 

We have! Indeed, we have considered it so hard that we did in fact do it. For roughly the last 6-8 months our salaries have on-average been halved (and I have completely forfeited my salary, and donated ~$300k to Lightcone at the end of last year myself to keep us afloat). 

I don't think this is a sustainable situation and I expect that in the long run I would end up losing staff over this, or I would actively encourage people to make 3x[1] their salary somewhere else (and maybe donating it, or not) since I don't think donating 70% of your counterfactual salary is a particularly healthy default for people working on these kinds of projects. I currently think I wouldn't feel comfortable running Lightcone at salaries that low in the long run, or would at least want to very seriously rearchitect how Lightcone operates to make that more OK.

(Also, just to clarify, the $230k is total cost associated with an employee, which includes office space, food, laptops, insurance, payroll taxes, etc. Average salaries are ~20% lower than that.)

Relatedly, I don't know if it's possible for you to run with fewer employees than you currently have. I can imagine that 6 people is the minimum that is necessary to run this org, but I had the impression that at least one of you is working on creating new rationality and cognitive trainings, which might be nice in the long-term (though I'm pretty skeptical of the project altogether), but I would guess you don't have the slack for this kind of thing now if you are struggling for survival.

We are generally relatively low on slack, and mostly put in long hours. Ray has been working on new rationality and cognitive training projects, but not actually on his work time, and when he has been spending work time on it, he basically bought himself out with revenue from programs he ran (for example, he ran some recent weekend workshops for which he took 2 days off from work, and in exchange made ~$1.5k of profit from the workshops, which went to Lightcone to pay for his salary).

I currently would like to hire 1-2 more people in the next year. I definitely think we can make good use of them, including for projects that more directly bring in revenue (though I think the projects that don't would end up a bunch more valuable for the world).

On the other side of the coin, can you extract more money out of your customers? The negotiation strategy you describe in the post (50-50ing the surplus) is very nice and gentlemanly, and makes sense if you are both making profit. But if there is a real chance of Lightcone going bankrupt and needing to sell Lighthaven, then your regular customers would need to fall back to their second-best option, losing all their surplus. So I think in this situation it would be reasonable to try to charge your regular customers practically the maximum they are willing to pay.

I think doing the negotiation strategy we did was very helpful for getting estimates of the value we provide to people, but I agree that it was quite generous, and given the tightness, we have moved towards a somewhat more standard negotiation strategy. I am not actually sure that this has resulted in us getting more of the surplus; I think people have pretty strong fairness instincts around not giving up that much of the surplus, and negotiations are hard. 

We do expect to raise prices in the coming year, mostly as demand is outstripping supply for Lighthaven event slots, which means we have more credible BATNAs in our negotiations. I do hope this will increase both the total surplus, and the fraction of the surplus we receive (in as much as getting that much will indeed be fair, which I think it currently is, but it does depend on things being overall sustainable).

  1. ^

    Our historical salary policy was roughly "we will pay you 70% of what we are pretty confident you could make in a similar-ish industry job in compensation". Cutting that 70% in half leaves you with ~1/3rd of what you would make in industry, so the 3x is a relatively robust estimate, and probably a bit of an underestimate, as we haven't increased salaries in 2-3 years despite inflation, and it doesn't take into account tail outcomes like founding a successful company (though engineering salaries have also gone down somewhat in that time, though not as much in more AI-adjacent spaces, so it's not totally obvious).

Replies from: CowardlyPersonUsingPseudonym, WilliamKiely
comment by CowardlyPersonUsingPseudonym · 2024-12-01T23:20:08.344Z · LW(p) · GW(p)

Thanks for the answers. I appreciate the team's sacrifices and will probably donate some good money to Lightcone. 

comment by WilliamKiely · 2024-12-02T05:31:54.406Z · LW(p) · GW(p)

I have completely forfeited my salary, and donated ~$300k to Lightcone at the end of last year myself to keep us afloat

If you had known you were going to do this, couldn't you have instead reduced your salary by ~60k/year for your first 5 years at Lightcone and avoided paying a large sum in income taxes to the government?

(I'm assuming that your after-tax salary from Lightcone from your first 5-6 years at Lightcone totaled more than ~$300k, and that you paid ~$50k-100k in income taxes on that marginal ~$350k-$400k of pre-tax salary from Lightcone.)

I'm curious if the answer is "roughly, yes" in which case it just seems unfortunately sad that that much money had to be unnecessarily wasted on income taxes.

Replies from: habryka4
comment by habryka (habryka4) · 2024-12-02T05:51:45.532Z · LW(p) · GW(p)

I could have saved a bit of money with better tax planning, but not as much as one might think. 

The money I was able to donate came from appreciated crypto, and was mostly unrelated to my employment at Lightcone (and also as an appreciated asset was therefore particularly tax-advantageous to donate). 

I have generally taken relatively low salaries for most of my time working at Lightcone. My rough guess is that my average salary has been around $70k/yr[1]. Lightcone only started paying more competitive salaries in 2022, when we expanded beyond some of our initial founding staff and I felt like it didn't really make cultural or institutional sense to have extremely low salaries. The only year in which I got paid closer to any competitive Bay Area salary was 2023, and in that year I also got to deduct most of that since I donated in the same year.

(My salary has always been among the lowest in the organization, mostly as a costly signal to employees and donors that I am serious about doing this for impact reasons)

  1. ^

    I don't have convenient tax records for years before 2019, but my income post-federal-tax (but before state tax) for the last 6 years was $59,800 (2019), $71,473 (2020), $83,995 (2021), $36,949 (2022), $125,175 (2023), ~$70,000 (2024). 
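(For reference, a minimal Python sketch averaging the post-federal-tax figures listed above; note these are after-tax numbers, so the average is not directly the same quantity as the ~$70k/yr salary guess.)

```python
# Average of the post-federal-tax income figures listed in the footnote (2019-2024).
incomes = [59_800, 71_473, 83_995, 36_949, 125_175, 70_000]  # the 2024 figure is approximate
average = sum(incomes) / len(incomes)
print(f"Average post-federal-tax income, 2019-2024: ~${average:,.0f}/yr")  # ~$74,565/yr
```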

Replies from: WilliamKiely, MondSemmel
comment by WilliamKiely · 2024-12-02T05:54:18.013Z · LW(p) · GW(p)

Very helpful reply, thank you!

(My salary has always been among the lowest in the organization, mostly as a costly signal to employees and donors that I am serious about doing this for impact reasons)

I appreciate that!

comment by MondSemmel · 2024-12-02T12:22:00.384Z · LW(p) · GW(p)

I can't find any off the top of my head, but I'm pretty sure the LW/Lightcone salary question has been asked and answered before, so it might help to link to past discussions?

comment by Elizabeth (pktechgirl) · 2024-12-01T22:48:21.991Z · LW(p) · GW(p)

It looks like you're taking the total amount spent per employee as the take-home salary, which is incorrect. At a minimum that amount should include payroll taxes, health insurance, CA's ETT, and state and federal unemployment insurance tax. It can also include things like education benefits, equipment, and 401k bonuses. Given the crudeness of the budget, I expect there's quite a bit being included under "etc".

comment by cata · 2024-11-30T09:55:33.131Z · LW(p) · GW(p)

I was going to email but I assume others will want to know also so I'll just ask here. What is the best way to donate an amount big enough that it's stupid to pay a Stripe fee, e.g. $10k? Do you accept donations of appreciated assets like stock or cryptocurrency?

Replies from: habryka4
comment by habryka (habryka4) · 2024-11-30T18:19:14.033Z · LW(p) · GW(p)

Yes, we have a brokerage account and a Coinbase account and can accept basically whatever crazy asset you want to give to us, including hard to value ones (and honestly, it sounds fun to go on an adventure to figure out how much a first edition MtG Black Lotus costs, how to sell it, and how to make sure you get an appropriate tax return, if that's the kind of asset you want to donate). 

We of course also accept bank transfers to avoid the Stripe fees.

Replies from: datawitch
comment by datawitch · 2024-11-30T19:12:27.399Z · LW(p) · GW(p)

What's your btc address?

Replies from: kave
comment by kave · 2024-12-03T02:37:33.571Z · LW(p) · GW(p)

37bvhXnjRz4hipURrq2EMAXN2w6xproa9T

I've updated the post with it.

comment by Akash (akash-wasil) · 2024-11-30T17:02:45.937Z · LW(p) · GW(p)

What do you think are the biggest mistakes you/Lightcone have made in the last ~2 years?

And what do you think a 90th percentile outcome looks like for you/Lightcone in 2025? What would success look like?

(Asking out of pure curiosity– I'd have these questions even if LC wasn't fundraising. I figured this was a relevant place to ask, but feel free to ignore it if it's not in the spirit of the post.)

Replies from: kave
comment by kave · 2024-12-01T16:55:34.961Z · LW(p) · GW(p)

I worry that cos this hasn't received a reply in a bit, people might think it's not in the spirit of the post. I'm even more worried people might think that critical comments aren't in the spirit of the post.

Both critical comments and high-effort-demanding questions are in the spirit of the post, IMO! But the latter might take a while to get a response.

comment by Steven Byrnes (steve2152) · 2024-11-30T16:31:12.355Z · LW(p) · GW(p)

I expect Lightcone to be my primary or maybe only x-risk-related donation this year—see my manifund comment here for my endorsement:

As a full-time AGI safety / alignment researcher (see my research output), I can say with confidence that I wouldn’t have been able to get into the field in the first place [LW(p) · GW(p)], and certainly wouldn’t have made a fraction as much progress, without lesswrong / alignment forum (LW/AF). I continue to be extremely reliant on it for my research progress. … [much more here]

Wish I had more to give, but I’ll send something in the mid four figures at the beginning of January (for tax reasons).

comment by romeostevensit · 2024-11-30T11:07:48.208Z · LW(p) · GW(p)

Recurring option at the main donation link?

Replies from: ozziegooen, habryka4
comment by ozziegooen · 2024-12-01T21:10:44.281Z · LW(p) · GW(p)

Minor point, but I'd be happy if LessWrong/Lightcone had various (popular) subscriptions for perks, like Patreon. 

Some potential perks:

  • A username with a certain color
  • A flag on your profile
  • Some experimental feature access
  • "We promise to consider your feature requests a bit more"
  • Some monthly (or less frequent) cheap event at Lighthaven
  • Get your name in the LessWrong books
  • "Supporters" part of Lighthaven. Maybe different supporters could sponsor various Lighthaven nooks - there are a bunch of these.
  • Secret Discord channel
  • Invites to some otherwise private Lighthaven things

I realize these can be a pain to set up though. 

(I'd want this if it helped Lightcone's total profit.)

Replies from: habryka4
comment by habryka (habryka4) · 2024-12-01T21:12:24.012Z · LW(p) · GW(p)

Yeah, I agree, and I've been thinking through things like this. I want to be very careful to keep the site from feeling like it's out to get you or trying to sell you anything, so I've been hesitant about things in this space that come with prominent UI implications, but I also think there are positive externalities. I expect we will do at least some things in this space.

comment by habryka (habryka4) · 2024-12-01T23:05:53.412Z · LW(p) · GW(p)

Stripe doesn't allow for variable-amount recurring donations in their payment links. We will probably build our own donation page to work around that, but it might take a bit. 

Replies from: dtch1997
comment by Daniel Tan (dtch1997) · 2024-12-02T15:14:38.582Z · LW(p) · GW(p)

Might be worth posting the fixed-amount Stripe link anyway? I'm interested in donating something like 5 pounds a month, I figure that's handled

Replies from: kave
comment by kave · 2024-12-02T15:58:16.051Z · LW(p) · GW(p)

Habryka means we would have to pick one number per Stripe link (e.g. one link for $5/month, one for $100/month, etc.)

comment by Chipmonk · 2024-11-30T17:41:31.901Z · LW(p) · GW(p)

I've run two workshops at Lighthaven and it's pretty unthinkable to run a workshop anywhere else in the Bay Area. Lightcone has really made it easy to run overnight events without setup.

comment by Stephen McAleese (stephen-mcaleese) · 2024-11-30T11:33:07.120Z · LW(p) · GW(p)

I donated $100, roughly equivalent to my yearly spending on Twitter/X Premium, because I believe LessWrong offers similar value. I would encourage most readers to do the same.

comment by elifland · 2024-11-30T21:43:35.903Z · LW(p) · GW(p)

Appreciate the post. I've previously donated $600 through the EA Manifund thing and will consider donating again late this year / early next year when thinking through donations more broadly.

I've derived lots of value with regards to thinking through AI futures from LW/AIAF content (some non-exhaustive standouts: 2021 MIRI conversations [? · GW], List of Lethalities [AF · GW] and Paul response [AF · GW], t-AGI framework [AF · GW], Without specific countermeasures... [LW · GW], Hero Licensing [LW · GW]). It's unclear to me how much of the value would have been retained if LW didn't exist, but plausibly LW is responsible for a large fraction.

In a few ways I feel not fully/spiritually aligned with the LW team and the rationalist community: my alignment difficulty/p(doom)[1] is farther from Eliezer's[2] than my perception of the median of the LW team[3] (though closer to Eliezer than most EAs), I haven't felt sucked in by most of Eliezer's writing, and I feel gut-level cynical about people's ability to deliberatively improve their rationality (edit: with large effect size) (I haven't spent a long time examining evidence to decide whether I really believe this).

But still LW has probably made a large positive difference in my life, and I'm very thankful. I've also enjoyed Lighthaven, but I have to admit I'm not very observant and opinionated on conference venues (or web design, which is why I focused on LW's content).

  1. ^

    Previously just said "AI forecasts", edited to make more specific the view that I'm talking about.

  2. ^

    Previously said MIRI. Edited MIRI -> Eliezer since MIRI has somewhat heterogeneous views.

  3. ^

    Previously just said "LW team", added "the median of" to better represent heterogeneity

Replies from: habryka4
comment by habryka (habryka4) · 2024-11-30T22:17:02.147Z · LW(p) · GW(p)

Hmm, my guess is we probably don’t disagree very much on timelines. My honest guess is that yours are shorter than mine, though mine are a bit in flux right now with inference compute scaling happening and the slope and reliability of that mattering a lot.

Replies from: elifland
comment by elifland · 2024-11-30T22:21:42.720Z · LW(p) · GW(p)

Yeah I meant more on p(doom)/alignment difficulty than timelines, I'm not sure what your guys' timelines are. I'm roughly in the 35-55% ballpark for a misaligned takeover, and my impression is that you all are closer to but not necessarily all the way at the >90% Eliezer view. If that's also wrong I'll edit to correct.

edit: oh maybe my wording of "farther" in the original comment was specifically confusing and made it sound like I was talking about timelines. I will edit to clarify.

Replies from: kave, habryka4, mattmacdermott
comment by kave · 2024-11-30T22:23:00.231Z · LW(p) · GW(p)

Lightcone is also heterogeneous, but I think it's accurate that the median view at Lightcone is >50% on misaligned takeover

Replies from: elifland
comment by elifland · 2024-11-30T22:26:28.568Z · LW(p) · GW(p)

Thanks. I edited again to be more precise. Maybe I'm closer to the median than I thought.

(edit: unimportant clarification. I just realized "you all" may have made it sound like I thought every single person on the Lightcone team was higher than my p(doom). I meant it to be more like a generic y'all to represent the group, not a claim about the minimum p(doom) of the team)

Replies from: kave
comment by kave · 2024-11-30T22:29:36.303Z · LW(p) · GW(p)

My impression matches your initial one, to be clear. Like my point estimate of the median is like 85%, my confidence only extends to >50%

comment by habryka (habryka4) · 2024-11-30T22:38:19.099Z · LW(p) · GW(p)

Ah, yep, I am definitely more doomy than that. I tend to be around 85%-90% these days. I did indeed interpret you to be talking about timelines due to the "farther".

comment by mattmacdermott · 2024-12-03T08:28:41.201Z · LW(p) · GW(p)

Do we have any data on p(doom) in the LW/rationalist community? I would guess the median is lower than 35-55%.

It's not exactly clear where to draw the line, but I would guess this to be the case for, say, the 10% most active LessWrong users.

comment by Joel Burget (joel-burget) · 2024-12-01T18:16:39.657Z · LW(p) · GW(p)

I donated $500. I get a lot of value from the website and think it's important for both the rationalist and AI safety communities. Two related things prevented me from donating more:

  1. Though it's the website which I find important, as I understand it, the majority of this money will go towards supporting Lighthaven.
    1. I could easily imagine, if I were currently in Berkeley, finding Lighthaven more important. My guess is that in general folks in Berkeley / the Bay Area will tend to value Lighthaven more highly than folks elsewhere. Whether this is because of Berkeley folks overvaluing it or the rest of us undervaluing, I'm not sure. Probably a bit of both.
    2. To me, this suggests unbundling the two rather different activities.
  2. Sustainability going forward. It's not clear to me that Lightcone is financially sustainable, in fact the numbers in this post make it look like it's not (due to the loss of funders), barring some very large donations. I worry that the future of LW will be endangered by the financial burden of Lighthaven.
    1. ETA: On reflection, I think some large donors will probably step in to prevent bankruptcy, though (a) I think there's a good chance Lightcone will then be stuck in perpetual fundraising mode, and (b) that belief of course calls into question the value of smaller donations like mine.
Replies from: habryka4, kave
comment by habryka (habryka4) · 2024-12-01T18:47:22.077Z · LW(p) · GW(p)

Though it's the website which I find important, as I understand it, the majority of this money will go towards supporting Lighthaven.

I think this is backwards! As you can see in the budget I posted here [LW(p) · GW(p)], and also look at the "Economics of Lighthaven" section, Lighthaven itself is actually surprisingly close to financially breaking even. If you ignore our deferred 2024 interest payment, my guess is we will overall either lose or gain some relatively small amount on net (like $100k). 

Most of the cost in that budget comes from LessWrong and our other generalist activities. At least right now, I think you should be more worried about the future of Lighthaven being endangered by the financial burden of LessWrong (and in the long run, I think it's reasonably likely that LessWrong will end up in part funded by revenue from Lighthaven).

Replies from: joel-burget, WilliamKiely
comment by Joel Burget (joel-burget) · 2024-12-02T15:14:29.168Z · LW(p) · GW(p)

Thanks for this! I just doubled my donation because of this answer and @kave [LW · GW]'s.


FWIW a lot of my understanding that Lighthaven was a burden comes from this section:

I initially read this as $3m for three interest payments. (Maybe change the wording so 2 and 3 don't both mention the interest payment?)

comment by WilliamKiely · 2024-12-02T05:12:28.673Z · LW(p) · GW(p)

I originally missed that the "Expected Income" of $2.55M from the budget means "Expected Income of Lighthaven" and consequently had the same misconception as Joel that donations mostly go towards subsidizing Lighthaven rather than almost entirely toward supporting the website in expectation.

Replies from: habryka4
comment by habryka (habryka4) · 2024-12-02T05:22:10.284Z · LW(p) · GW(p)

Aah, that makes sense. I will update the row to say "Expected Lighthaven Income"

comment by kave · 2024-12-01T18:42:59.464Z · LW(p) · GW(p)

as I understand it, the majority of this money will go towards supporting Lighthaven

I think if you take Habryka's numbers at face value, a hair under half of the money this year will go to Lighthaven (35% of core staff salaries @ $1.4M = $0.49M, plus $1M for a deferred interest payment, and then the claim that otherwise Lighthaven is breaking even). And in future years, well less than half.
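Spelled out (a minimal Python sketch of that back-of-envelope arithmetic, using only the figures quoted in this comment):

```python
# How much of this year's money is attributable to Lighthaven, per the figures above.
core_staff_salaries = 1.4                             # $M, from Habryka's posted budget
lighthaven_staff_share = 0.35 * core_staff_salaries   # 35% of core staff salaries -> ~$0.49M
deferred_interest = 1.0                               # $M, the deferred 2024 interest payment

lighthaven_total = lighthaven_staff_share + deferred_interest
print(f"Attributable to Lighthaven this year: ~${lighthaven_total:.2f}M")  # ~$1.49M, "a hair under half"
```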

I worry that the future of LW will be endangered by the financial burden of Lighthaven

I think this is a reasonable worry, but I again want to note that Habryka is projecting a neutral or positive cashflow from Lighthaven to the org.

That said, I can think of a couple of reasons for financial pessimism[1]. Having Lighthaven is riskier. It involves a bunch of hard-to-avoid costs. So, if Lighthaven has a bad year, that does indeed endanger the project as a whole.

Another reason to be worried: Lightcone might stop trying to make Lighthaven break even. Lightcone is currently fairly focused on using Lighthaven in revenue-producing ways. My guess is that we'll always try and structure stuff at Lighthaven such that it pays its own way (for example, when we ran LessOnline we sold tickets[2]). But maybe not! Maybe Lightcone will pivot Lighthaven to a loss-making plan, because it foresees greater altruistic benefit (and expects to be able to fundraise to cover it).

So the bundling of the two projects still leaks some risk.

Of course, you might also think Lighthaven makes LessWrong more financially robust, if on the mainline it ends up producing a modest profit that can be used to subsidise LessWrong.

  1. ^

    Other than just doubting Habryka's projections, which also might make sense.

  2. ^

    My understanding of the numbers is that we lost money once you take into account staff time, but broke even if you don't. And it seems the people most involved with running it are hopeful about cutting a bunch of costs in future.

comment by Czynski (JacobKopczynski) · 2024-11-30T18:42:15.422Z · LW(p) · GW(p)

I see much more value in Lighthaven than in the rest of the activity of Lightcone.

I wish Lightcone would split into two (or even three) organizations, as I would unequivocally endorse donating to Lighthaven and recommend it to others, vs. LessWrong where I'm not at all confident it's net positive over blogs and Substacks, and the grantmaking infrastructure and other meta which is highly uncertain and probably highly replaceable.

All of the analysis of the impact of new LessWrong is misleading at best; it is assuming that volume on LessWrong is good in itself, which I do not believe to be the case; if similar volume is being stolen from other places, e.g. dropping away from blogs on the SSC blogroll and failing to create their own Substacks, which I think is very likely to be true, this is of minimal benefit to the community and likely negative benefit to the world, as LW is less visible and influential than strong existing blogs or well-written new Substacks.

That's on top of my long-standing objections to the structure of LW, which is bad for community epistemics by encouraging groupthink, in a way that standard blogs are not. If you agree with my contention there, then even a large net increase in volume would still be, in expectation, significantly negative for the community and the world. Weighted voting delenda est; post-author moderation delenda est; in order to win the war of good group epistemics we must accept losing the battles of discouraging some marginal posts from the prospect of harsh, rude, and/or ignorant feedback.

Replies from: Seth Herd, Vaniver
comment by Seth Herd · 2024-11-30T18:59:37.464Z · LW(p) · GW(p)

Hm, I was going to say I'd like LW distinguished from lighthaven so I could give more to LW.

The things you note about encouraging groupthink are good points. They should be addressed.

But the average quality of discussion here cannot be matched anywhere else. Non-voting comment systems like X and Slate Star Codex are too disorganized to consistently find the real in-depth discussions. Subreddits do not have the quality of community to make the comment voting work well. (They either have too few experts to sustain a conversation, or too many novices voting on vibes).

So while the risk of groupthink is pretty high, I don't know where else I can go that might advance the discussion fast enough to stay ahead of AI advances.

Groupthink would be super bad, but so would just remaining confused and conflicted when there are better answers available through collaborative analysis of important issues.

I'm curious what alternatives you suggest.

In the meantime, I'm donating to support LW.

Replies from: JacobKopczynski
comment by Czynski (JacobKopczynski) · 2024-11-30T22:14:51.929Z · LW(p) · GW(p)

Currently no great alternatives exist because LW killed them. The quality of the comment section on SSC and most other rationalist blogs I was following got much worse when LW was rebooted (and killed several of them), and initially it looked like LW was an improvement, but over time the structural flaws killed it.

I still see much better comments on individual blogs - Zvi, Sarah Constantin, Elizabeth vN, etc. - than on LessWrong. Some community Discords are pretty good, though they are small walled gardens; rationalist Tumblr has, surprisingly, gotten actively better over time, even as it shrank. All of these are low volume.

It's possible in theory that the volume of good comments on LessWrong is higher than in those places. I don't know, and in practical terms don't care, because they're drowned out by junk, mostly highly-upvoted junk. I don't bother to look for good comments here at all because they're sufficiently bad that it's not worthwhile. I post here only for visibility, not for good feedback, because I know I won't get it; I only noticed this post at all because of a link from a Discord.

Groupthink is not a possible future, to be clear. It's already here in a huge way, and probably not fixable. If there was a chance of reversing the trend, it ended with Said [LW · GW] being censured and censored for being stubbornly anti-groupthink to the point of rudeness. Because he was braver or more stubborn than me and kept trying for a couple years after I gave up.

Replies from: Seth Herd, pktechgirl
comment by Seth Herd · 2024-11-30T23:14:42.706Z · LW(p) · GW(p)

So I need to finally get on Tumblr, eh?

I should've specified that I really mostly care about AI alignment and strategy discussions. The rationalism stuff is fun and sometimes useful, but a far lower priority.

I don't expect to change your mind, so I'll keep this brief and for general reference. When I say LessWrong is the best source of discussion, I mean something different than the sum of value of comments. I mean that people often engage in depth with those who disagree with them in important ways.

It's still entirely possible that we're experiencing groupthink in important ways. But there is a fair amount of engagement with opposing viewpoints when they're both relatively well-informed about the discourse and fairly polite.

I think the value of keeping discourse not just civil but actively pleasant is easy to underestimate. Discussions that turn into unpleasant debates because the participants are irritated with each other don't seem to get very far. And there are good psychological reasons to expect that.

I'm also curious where you see LW as experiencing the most groupthink. I'd like to correct for it.

Replies from: JacobKopczynski
comment by Czynski (JacobKopczynski) · 2024-12-02T06:22:54.572Z · LW(p) · GW(p)

I don't have much understanding of current AI discussions, and it's possible those are somewhat better / a less advanced case of rot.

Those same psychological reasons indicate that anything which is actual dissent will be interpreted as incivility. This has happened here and is happening as we speak. It was one of the significant causes of SBF. It's significantly responsible for the rise of woo among rationalists, though my sense is that that's started to recede (years later). It's why EA as a movement seems to be mostly useless at this point and coasting on gathered momentum (mostly in the form of people who joined early and kept their principles).

I'm aware there is a tradeoff, but being committed to truthseeking demands that we pick one side of that tradeoff, and LessWrong the website has chosen to pick the other side instead. I predicted this would go poorly years before any of the things I named above happened.

I can't claim to have predicted the specifics, I don't get many Bayes Points for any of them, but they're all within-model. Especially EA's drift (mostly seeking PR and movement breadth). The earliest specific point where I observed that this problem was happening was 'Intentional Insights', where it was uncivil to observe that the man was a huckster and faking community signals, and so it took several rounds of blatant hucksterism for him to finally be disavowed and forced out. If EA'd learned this lesson then, it would be much smaller but probably 80% could have avoided involvement in FTX. LW-central-rationalism is not as bad, yet, but it looks on the same path to me.

comment by Elizabeth (pktechgirl) · 2024-11-30T22:38:27.820Z · LW(p) · GW(p)

Comments on my own blog are almost non-existent; all the interesting discussion happens on LW and Twitter.

(Full disclosure: am technically on mod team and have deep social ties to the core team)

Replies from: Benito, JacobKopczynski
comment by Ben Pace (Benito) · 2024-11-30T23:20:49.274Z · LW(p) · GW(p)

I wanted a datapoint [LW · GW] for Czynski's hypothesis that LW 2.0 killed the comment sections, so I checked how many comments your blogposts were getting in the first 3 months of 2017 (before LW 2.0 rebooted). There were 13 posts, and the comment counts were 0, 0, 2, 6, 9, 36, 0, 5, 0, 2, 0, 0, 2. (The 36 was a political post in response to the US election, discussion of which I generally count as neutral or negative on LW, so I'd discount this.)

I'll try the same for Zvi. 13, 8, 3, 1, 3, 18, 2, 19, 2, 2, 2, 5, 3, 7, 7, 12, 4, 2, 61, 31, 79. That's more active (the end was his excellent sequence Against Facebook, and the last one was a call for people to share links to their blogs).

So that's not zero, there was something to kill. How do those numbers compare during LessWrong 2.0? My sense is that there's two Zvi eras, there's the timeless content (e.g. Mazes, Sabbaths, Simulacra) and the timeful content (e.g. Covid, AI, other news). The latter is a newer, more frequent, less deep writing style, so it's less apples to apples, so instead let's take the Moral Mazes sequence [? · GW] from 2020 (when LW 2.0 would've had a lot of time to kill Zvi's comments). I'm taking the 17 posts in this main sequence and counting the number of comments on LW and Wordpress.

#    LW   Wordpress
1    16    5
2    40   19
3    29   23
4     8   12
5     7   21
6    56   10
7     6   13
8    12    8
9    18    8
10   21   18
11   26   21
12   42   16
13    6   11
14    9   15
15   14   18
16   11   19
17   28   22
SUM 349  259

This shows the comment section on Wordpress was about as active as in the 3-month period above (259 vs 284 comments) during the two months in which the Mazes sequence was released, and comments were more evenly distributed (median of 17 vs 5). And it shows that the LessWrong comment section more than doubled the amount of discussion of the posts, without reducing the total discussion on Zvi's Wordpress blog.

These bits of data aren't consistent with LW killing other blogs. FWIW my alternative hypothesis is that these things are synergistic (e.g. I also believe that the existence of LessWrong and the EA Forum increases discussion on each), and I think that is more consistent with the Zvi commenting numbers.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2024-12-01T01:24:20.546Z · LW(p) · GW(p)

I was part of the 2.0 reboot beta: there are no posts of mine on LW before that

comment by Czynski (JacobKopczynski) · 2024-12-02T06:06:48.232Z · LW(p) · GW(p)

I still prefer the ones I see there to what I see on LW. Lower quantity higher value.

comment by Vaniver · 2024-12-03T00:45:08.140Z · LW(p) · GW(p)

I think it's hard to evaluate the counterfactual where I made a blog earlier, but I think I always found the built-in audience of LessWrong significantly motivating, and never made my own blog in part because I could just post everything here. (There's some stuff that ends up on my Tumblr or w/e instead of LW, even after ShortForm, but almost all of the nonfiction ended up here.)

comment by Quinn (quinn-dougherty) · 2024-12-02T23:14:27.125Z · LW(p) · GW(p)

Rumors are that Lighthaven is jam-packed for 2025. If this is the case, and you need money, rudimentary economics suggests only the obvious: raise prices. I know many clients are mission-aligned, and there's a reasonable ideological reason to run the joint at or below cost, but I think it's aligned with that spirit if profits from the campus fund the website.

I also want to say in print what I said in person a year ago: you can ask me to do chores on campus to save money; it'd be within my hufflepuff budget. There are good reasons not to go totally "by and for the community" DIY like, say, many community libraries or soup kitchens do, but nudging a little in that direction seems right.

EDIT: I did a mostly symbolic $200 right now, may or may not do more as I do some more calculations and find out my salary at my new job

comment by Sohaib Imran (sohaib-imran) · 2024-11-30T08:45:16.692Z · LW(p) · GW(p)

Sent a tenner, keep up the excellent work!

Replies from: sohaib-imran
comment by Sohaib Imran (sohaib-imran) · 2024-11-30T12:50:09.173Z · LW(p) · GW(p)

Realised that my donation did not reflect how much I value LessWrong, the Alignment Forum, and the wider rationalist infrastructure. Have donated $100 more, although that still only reflects my stinginess rather than the value I receive from your work.

comment by Lucius Bushnaq (Lblack) · 2024-12-01T09:54:42.337Z · LW(p) · GW(p)

The donation site said I should leave a comment here if I donate, so I'm doing that. Gave $200 for now. 

I was at Lighthaven for the ILIAD conference. It was an excellent space. The LessWrong forum feels like what some people in the 90s used to hope the internet would be.

Edit 03.12.2024: $100 more donated by me since the original message.

 

comment by Farkas · 2024-12-01T01:28:58.829Z · LW(p) · GW(p)

I'm a broke student but I donated what I could muster right now, intending to donate more in the future.

LessWrong is without a doubt worth more to me and to the world than what I can currently pay!

(Commenting because it might marginally increase probability of other people donating as well!)

comment by bilalchughtai (beelal) · 2024-11-30T22:49:18.386Z · LW(p) · GW(p)

Is there a way for UK taxpayers to tax-efficiently donate (e.g. via Gift Aid)?

Replies from: habryka4
comment by habryka (habryka4) · 2024-11-30T22:55:08.626Z · LW(p) · GW(p)

I am working on making that happen right now. I am pretty sure we can arrange something, but it depends a bit on getting a large enough volume to make it worth it for one of our UK friend-orgs to put in the work to do an equivalence determination. 

Can you let me know how much you are thinking of giving (either here or in a DM)?

Replies from: philh
comment by philh · 2024-12-01T09:07:05.567Z · LW(p) · GW(p)

If you can get it set up before March, I'll donate at least £2000.

(Though, um. I should say that at least one time I've been told "the way to donate with gift aid is to set up an account with X, tell them to send the money to Y, and Y will pass it on to us", and the first step in the chain there had very high transaction fees and I think might have involved sending an email... historical precedent suggests that if that's the process for me to donate to Lightcone, it might not happen.)

Do you know what rough volume you'd need to make it worthwhile?

Replies from: habryka4
comment by habryka (habryka4) · 2024-12-01T09:13:50.335Z · LW(p) · GW(p)

My favorite fiscal sponsorship would be through GWWC: https://www.givingwhatwecan.org/inclusion-criteria 

Their inclusion criteria suggest that they want to see at least $50k of expected donations in the next year. My guess is if we have $10k-$20k expected this month, then that is probably enough, but I am not sure (and it might also not work out for other reasons).

Replies from: philh
comment by philh · 2024-12-02T11:01:34.416Z · LW(p) · GW(p)

Okay. Make it £5k from me (currently ~$6350), that seems like it'll make it more likely to happen.

comment by Ari_Zerner (Overlord) · 2024-11-30T06:55:54.882Z · LW(p) · GW(p)

I don't have a lot to give right now but I chipped in what I can. Lighthaven is definitely a worthy cause!

Replies from: CuriousMeta
comment by Self (CuriousMeta) · 2024-12-01T14:35:47.405Z · LW(p) · GW(p)

Same. 

comment by davekasten · 2024-11-30T05:29:32.022Z · LW(p) · GW(p)

I am excited about this on the grounds of "we deserve to have nice things," though for boring financial planning reasons I am not sure whether I will donate additional funds prior to calendar year end or in calendar year 2025.

(Note that I made a similar statement in the past and then donated $100 to Lighthaven very shortly thereafter, so, like, don't attempt to reverse-engineer my financial status from this or whatever.)

Replies from: davekasten
comment by davekasten · 2024-11-30T05:31:43.181Z · LW(p) · GW(p)

Also, I would generally volunteer to help with selling Lighthaven as an event venue to boring consultant things that will give you piles of money, and IIRC Patrick Ward is interested in this as well, so please let us know how we can help. 

Replies from: habryka4
comment by habryka (habryka4) · 2024-11-30T06:07:41.061Z · LW(p) · GW(p)

That sounds great! Let's definitely chat about that. I'll reach out as soon as fundraising hustle has calmed down a bit.

comment by ozziegooen · 2024-12-01T18:12:21.519Z · LW(p) · GW(p)

Donated $300 now, intend to donate more (after more thinking).

My impression is that if you read LessWrong regularly, it could easily be worth $10-$30/month for you. If you've attended Lighthaven, there's an extra benefit there, which could be much more. So I think it's very reasonable for people in our community to donate $100 (a ~$10/month Substack subscription) to $10k (a fancy club membership) per person or so, depending on the person, just from the standpoint of thinking of it as communal/local infrastructure.

One potential point of contention is the fact that I believe some of the team is working on future, more experimental projects beyond the LessWrong/Lighthaven basics. But I feel pretty good about this in general; it's just higher-risk and more difficult to recommend.

I also think it's just good practice for community-focused projects to get donations from the community. This helps keep incentives more aligned. I think that the Lighthaven team is about as relevant as things get on this topic right now.

comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2024-12-01T13:08:50.371Z · LW(p) · GW(p)

I just donated $400. This is not a minor amount for me but after thinking about it carefully this is an amount that feels substantial while remaining in my budget. I think it's important to support things, people and institutions that bring great value to oneself and the world. LessWrong is certainly one of those. 

comment by Logan Riggs (elriggs) · 2024-11-30T19:03:10.670Z · LW(p) · GW(p)

Donated $100. 

It was mostly due to LW2 that I decided to work on AI safety, actually, so thanks!

I've had the pleasure of interacting w/ the LW team quite a bit and they definitely embody the spirit of actually trying [LW · GW]. Best of luck to y'all's endeavors!

comment by niplav · 2024-11-30T21:38:08.497Z · LW(p) · GW(p)

I remember that Lightcone was interested in working on human intelligence amplification and/or pausing AI (I can't find the LW comment, I'm afraid). Is that still part of the plan?

Replies from: habryka4
comment by habryka (habryka4) · 2024-11-30T22:11:07.778Z · LW(p) · GW(p)

Yep, both of those motivate a good chunk of my work. I think the best way to do that is mostly to work one level removed, on the infrastructure that allows ideas like that to bubble up and be considered in the first place, but I’ll also take opportunities that make more direct progress on them as they present themselves.

comment by Casey B. (Zahima) · 2024-11-30T13:48:43.570Z · LW(p) · GW(p)

Haven't finished reading this, but I just want to say how glad I am that LW 2.0 and everything related to it (lightcone, etc) happened. I came across lw at a time when it seemed "the diaspora" was just going to get more and more disperse; that "the scene" had ended. I feel disappointed/guilty with how little I did to help this resurgence, like watching on the sidelines as a good thing almost died but then saved itself. 

How I felt at the time of seemingly peak "diaspora" actually somewhat reminds me of how I feel about CFAR now (but to a much lesser extent than LW); I think there is still some activity but it seems mostly dead; a valiant attempt at a worthwhile problem; but there are many Problems and many Good Things in the world, but limited time, and am I really going to invest time figuring out if this particular Thing is truly dead? Or start up my own rationality-training-adjacent effort? Or some other high leverage Good Thing? Generic EA? A giving pledge? The result is I carry on trying to do what I thought was most valuable, perversely hoping some weird mix of "that Good Thing was actually dead or close to it; it's good you didn't jump in as you'd be swimming against the tide" vs "even if not dead; it wasn't/isn't a good lever in the end" vs "your chosen alternative project/lever is a good enough guess at doing good; you aren't responsible for the survival of all Good Things". 

And tbh I'm a little murky on the forces that led to the LW resurgence, even if we can point to single clear boosts like EY's recent posts. But I'll finish reading the post to see if my understanding changes.

comment by Jirachi 47 (jirachi-47) · 2024-11-30T22:29:09.075Z · LW(p) · GW(p)

Donated like 20 CAD - felt like it was the least I could do and didn't want to let hesitancy about it being enough stop me.

comment by rictic · 2024-11-30T16:24:36.896Z · LW(p) · GW(p)

Donated. Lighthaven is incredible.

comment by Dan H (dan-hendrycks) · 2024-12-03T04:45:25.243Z · LW(p) · GW(p)

and have clearly been read a non-trivial amount by Elon Musk

Nit: He heard this idea in conversation with an employee AFAICT.

comment by Dmitriy (dmitriy) · 2024-12-03T02:00:27.093Z · LW(p) · GW(p)

"We'll probably display this until the New Year"

I'd guess plenty are planning to donate after Jan 1st for tax reasons, so perhaps best to keep highlighting the donation drive through the first week of Jan.

Also I donated $1,000. Lightcone's works have brought me a lot of direction and personal value over the years, so I'm happy I'm able to lend some support now

Replies from: habryka4
comment by habryka (habryka4) · 2024-12-03T02:05:03.762Z · LW(p) · GW(p)

Thank you! 

I'd guess plenty are planning to donate after Jan 1st for tax reasons, so perhaps best to keep highlighting the donation drive through the first week of Jan.

Yeah, I've been noticing that when talking to donors. It's a tricky problem because I would like the fundraiser to serve as a forcing function to get people who think LW should obviously be funded, but would like to avoid paying an unfair multiple of their fair share, to go and fund it. 

But it seems like half of the donors will really want to donate before the end of this year, and the other half will want to donate after the start of next year. 

It's tricky. My current guess is I might try to add some kind of "pledged funds" section to the thermometer, but it's not ideal. I'll think about it in the coming days and weeks.

comment by Elizabeth (pktechgirl) · 2024-12-02T20:18:01.121Z · LW(p) · GW(p)

I think the expenses for the website look high in this post because so much of it goes into invisible work like mod tools. Could you say more about that invisible work?

comment by habryka (habryka4) · 2024-12-02T05:03:46.107Z · LW(p) · GW(p)

Due to an apparently ravenous hunger among you all for having benches with plaques dedicated to them, and us not actually having that many benches, I increased the threshold for getting a bench (or equivalent) with a plaque to $2,000. Everyone who donated more than $1,000 but less than $2,000 before Dec 2nd will still get their plaque.

comment by papetoast · 2024-12-01T13:30:25.924Z · LW(p) · GW(p)

Donated $25 for all the things I have learned here.

comment by ektimo · 2024-12-01T01:52:09.614Z · LW(p) · GW(p)

What is your tax ID for people wanting to donate from a Donor Advised Fund (DAF) to avoid taxes on capital gains?

Replies from: kave
comment by kave · 2024-12-01T01:54:41.291Z · LW(p) · GW(p)

The EIN is 92-0861538

comment by Lucius Bushnaq (Lblack) · 2024-11-30T18:06:47.874Z · LW(p) · GW(p)

There currently doesn't really exist any good way for people who want to contribute to AI existential risk reduction to give money in a way that meaningfully gives them assistance in figuring out what things are good to fund. This is particularly sad since I think there is now a huge amount of interest from funders and philanthropists who want to somehow help with AI x-risk stuff, as progress in capabilities has made work in the space a lot more urgent, but the ecosystem is currently at a particular low-point in terms of trust and ability to direct that funding towards productive ends.

Really? What's the holdup here exactly? How is it still hard to give funders a decent up-to-date guide to the ecosystem, or a knowledgeable contact person, at this stage? For a workable budget version today, can't people just get a link to this [LW · GW] and then contact orgs they're interested in?

Replies from: habryka4
comment by habryka (habryka4) · 2024-11-30T23:00:09.343Z · LW(p) · GW(p)

I think a lot of projects in the space are very high variance, and some of them are actively deceptive, and I think that really means you want a bunch of people with context to do due diligence and think hard about the details. This includes some projects that Zvi recommends here, though I do think Zvi's post is overall great and provides a lot of value.

Another big component is doing fair splitting. I think many paths to impact require getting 4-5 pieces in place, and substantial capital investment, and any single donor might feel that there isn't really any chance for them to fund things in a way that gets the whole engine going, and before they feel good giving they want to know that other people will actually put in the other funds necessary to make things work. That's a lot of what our work on the S-Process and Lightspeed Grants was solving.

In-general, the philanthropy space is dominated by very hard principal-agent problems. If you have a lot of money, you will have tons of people trying to get your money, most of them for bad reasons. Creating infrastructure to connect high net worth people with others who are actually trustworthy and want to put in a real effort to help them is quite hard (especially in a way that results in the high net-worth people then actually building justified trust in those people).

comment by Shoshannah Tekofsky (DarkSym) · 2024-11-30T08:21:30.971Z · LW(p) · GW(p)

As someone who isn't really in a position to donate much at all, and who feels rather silly about the small amount I could possibly give, and what a tiny drop that is compared the bucket this post is sketching...

I uh ... sat down and did some simple math. If everyone who ever votes (>12M) donates $10 then you'd have >$120 million covered. If we follow bullshit statistics of internet activity, where it's said 99% of all content is generated by 1% of all people, then this heuristic would get us $1.2M from people paying this one time "subscription" fee. Now I also feel, based on intuition and ass-numbers, that LW folk have a better ratio than that, so let's multiply by 2 and then we could get a $2.4 million subscriber fee together from small donations.

Now on the pure power of typical mind ... I personally like people knowing when I do a nice thing - even a stupidly small thing.

So I'm commenting about it.

I find this embarrassing, and I'm working through the embarrassment to make it easier for others to farm this nutrient too and just normalize it in case that helps with getting a critical mass of small donations of the $10 variety.

Basically my point to readers is: 'Everyone' paying a one-time $10 subscription fee would solve the problem.

The trick is mostly to help each other generate the activation energy to do this thing. If it helps to post, high five, or wave about it, please do! Visibility of small donations may help activation energy and get critical mass! Group action is awesome. Using your natural reward centers about it is great! <3 Hi :D Wanna join? _

Thanks, abstractapplic, for noticing the first error in my calculation: it's the number of votes, not the number of people voting. Additionally I noticed I applied the power of dyslexia to the decimal point and read it as a thousands separator. So ignore the errored-out math, give what you can, and maybe upvote each other for support on giving as much as possible?

PS: I would prefer if actually big donators would get upvoted more than my post of error math. Feel free to downvote my post just to achieve a better ordering of comments. Thanks. <3

PPS: Note to the writer - Maybe remove decimal numbers entirely throughout the graphs? This is what it looked like for me, and led to the error. And this image is way zoomed in compared to what I see naturally on my screen.

Replies from: abstractapplic, quetzal_rainbow, kave
comment by abstractapplic · 2024-11-30T16:49:55.080Z · LW(p) · GW(p)

everyone who ever votes (>12M)

I . . . don't think that's a correct reading of the stats presented? Unless I'm missing something, "votes" counts each individual [up|down]vote each individual user makes, so there are many more total votes than total people.

'Everyone' paying a one-time $10 subscription fee would solve the problem.

A better (though still imperfect) measure of 'everyone' is the number of active users. The graph says that was ~4000 this month. $40,000 would not solve the problem.

Replies from: DarkSym, kave
comment by Shoshannah Tekofsky (DarkSym) · 2024-11-30T17:17:24.877Z · LW(p) · GW(p)

Oh shit. It's worse even. I read the decimal separators as thousand separators.

I'm gonna just strike through my comment.

Thanks for noticing ... <3

comment by kave · 2024-11-30T17:00:26.392Z · LW(p) · GW(p)

Yes, I think you're right. I was confused by Shoshannah's numbers last night, but it was late and I didn't manage to summon enough sapience to realise something was wrong and offer a correction. Thanks for doing that!

comment by quetzal_rainbow · 2024-11-30T10:04:06.888Z · LW(p) · GW(p)

Read your comment, donated $10.

comment by kave · 2024-11-30T19:25:54.054Z · LW(p) · GW(p)

Maybe remove decimal numbers entirely throughout the graphs? This is what it looked like for me, and led to the error. And this image is way zoomed in compared to what I see naturally on my screen.

Good idea. Done.

comment by CronoDAS · 2024-12-01T20:58:35.928Z · LW(p) · GW(p)

I donated $100. I'm fairly income-constrained at the moment so I'd be nervous about donating more.

comment by Dennis Horte (dennis-horte) · 2024-11-30T17:04:22.821Z · LW(p) · GW(p)

Leaving a comment because it apparently helps. I've been occasionally involved with the Berkeley area rationality community since 2010, enjoyed re-reading the sequences last year, and continue to find interesting and valuable posts today. I hope to be more involved with the community again in the coming years. Thank you, Lightcone.

comment by MetaLevelUp (Jaeby2024) · 2024-12-03T02:50:44.165Z · LW(p) · GW(p)

LessWrong has been critical to my intellectual development. Just donated $1000. Thank you for all you do!

comment by David Matolcsi (matolcsid) · 2024-12-02T04:31:16.126Z · LW(p) · GW(p)

I donated $1000. Originally I was worried that this is a bottomless money-pit, but looking at the cost breakdown, it's actually very reasonable. If Oliver is right that Lighthaven funds itself apart from the labor cost, then the real costs are $500k for the hosting, software and accounting cost of LessWrong (this is probably an unavoidable cost and seems obviously worthy of being philanthropically funded), plus paying 4 people (equivalent to 65% of 6 people) to work on LW moderation and upkeep (it's an unavoidable cost to have some people working on LW, 4 seems probably reasonable, and this is also something that obviously should be funded), plus paying 2 people to keep Lighthaven running (given the surplus value Lighthaven generates, it seems reasonable to fund this), plus a one-time cost of 1 million to fund the initial cost of Lighthaven (I'm not super convinced it was a good decision to abandon the old Lightcone offices for Lighthaven, but I guess it made sense in the funding environment of the time, and once we made this decision, it would be silly not to fund the last 1 million of initial cost before Lighthaven becomes self-funded). So altogether I agree that this is a great thing to fund and it's very unfortunate that some of the large funders can't contribute anymore. 

(All of this relies on the hope that Lighthaven actually becomes self-funded next year. If it keeps producing big losses, then I think the funding case will become substantially worse. But I expect Oliver's estimates to be largely trustworthy, and we can still decide to decrease funding in later years if it turns out Lighthaven isn't financially sustainable.)

Replies from: habryka4
comment by habryka (habryka4) · 2024-12-02T04:42:40.635Z · LW(p) · GW(p)

Thank you so much!

Some quick comments: 

then the real costs are $500k for the hosting and hosting cost of LessWrong 

Raw server costs for LW are more like ~$120k (and to be clear, you could drive this lower with some engineering, though you would have to pay for that engineering cost). See the relevant line in the budget I posted.

Total labor cost for the ~4 people working on LW is closer to ~$800k, instead of the $500k you mention.

(I'm not super convinced it was a good decision to abandon the old Lightcone offices for Lighthaven, but I guess it made sense in the funding environment of the time, and once we made this decision, it would be silly not to fund the last 1 million of initial cost before Lighthaven becomes self-funded).

Lighthaven is actually cheaper (if you look at total cost) than the old Lightcone offices. Those also cost on the order of $1M per year, and were much smaller, though of course we could have recouped a bunch of that if we had started charging for more things. But cost-savings were actually a reason for Lighthaven, since according to our estimates, the mortgage and rent payments would end up quite comparable per square foot.

Again, thank you a lot.

Replies from: matolcsid
comment by David Matolcsi (matolcsid) · 2024-12-02T04:56:54.237Z · LW(p) · GW(p)

I fixed some parts that could be misunderstood. I meant that the $500k is the LW hosting + software subscriptions and the dedicated software + accounting costs together. And I didn't mean to imply that the labor cost of the 4 people is $500k; that was a separate term in the costs.

Is Lighthaven still cheaper if we take into account the initial funding spent on it in 2022 and 2023? I was under the impression that buying Lighthaven is one of the things that made a lot of sense when the community believed it would have access to FTX funding, and once we bought it, it makes sense to keep it, but we wouldn't have bought it once FTX was out of the game. But in case this was a misunderstanding and Lighthaven saves money in the long run compared to the previous option, that's great news.

Replies from: habryka4
comment by habryka (habryka4) · 2024-12-02T05:21:44.382Z · LW(p) · GW(p)

I fixed some misunderstandable parts, I meant the $500k being the LW hosting + Software subscriptions and the Dedicated software + accounting stuff together. And I didn't mean to imply that the labor cost of the 4 people is $500k, that was a separate term in the costs. 

Ah yeah, I did misunderstand you there. Makes sense now. 

Is Lighthaven still cheaper if we take into account the initial funding spent on it in 2022 and 2023?

It's tricky because a lot of that is capital investment, and it's extremely unclear what the resell price of Lighthaven would end up being if we ended up trying to sell, since we renovated it in a pretty unconventional way. 

Total renovations cost around ~$7M-$8M. About $3.5M of that was funded as part of the mortgage from Jaan Tallinn, and another $1.2M of that was used to buy a property right next to Lighthaven which we are hoping to take out an additional mortgage on (see footnote #3 [LW(p) · GW(p)]), and which we currently own in full. The remaining ~$3M largely came from SFF and Open Phil funding. We also lost a total of around ~$1.5M in net operating costs so far. Since the property is super hard to value, let's estimate the value of the property after our renovations at our current mortgage value ($20M).[1]

During the same time, the Lightcone Offices would have cost around $2M, so if you view the value we provided in the meantime as roughly equivalent, we are out around $2.5M, but also, property prices tend to increase over time at least some amount, so by default we've probably recouped some fraction of that in appreciated property values, and will continue to recoup more as we break even.
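Spelling that out as a rough sketch (Python); this treats the mortgage and the adjacent property purchase as financing/assets rather than sunk costs, which is the framing behind the figures above rather than something stated as an exact accounting:

```python
# Rough reconstruction of the "out around $2.5M" figure, treating the mortgage and
# the adjacent property purchase as financing/assets rather than sunk costs.
renovations_from_grants = 3.0    # $M, the SFF / Open Phil-funded portion of renovations
net_operating_losses = 1.5       # $M, net operating losses so far
counterfactual_offices = 2.0     # $M, what the Lightcone Offices would have cost instead

net_out = renovations_from_grants + net_operating_losses - counterfactual_offices
print(f"Net out so far: ~${net_out:.1f}M")  # ~$2.5M
```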

My honest guess is that Lighthaven would make sense even without FTX, from an ex-post perspective, but that if we hadn't had FTX there wouldn't have been remotely enough risk appetite for it to get funded ex-ante. I think in many worlds Lighthaven turned out much worse than it did (and for example, renovation costs already ended up in something like the 85th percentile of my estimates, due to much more extensive water and mold damage than I was expecting in the mainline).

  1. ^

    I think this is a potentially controversial choice, though I think it makes sense. I think most buyers would not be willing to pay remotely as much for the venue as that, since they would basically aim to return the property back to its standard hotel usage, and throw away most of our improvements, probably putting the property value at something like $15M. But I think our success of running the space as a conference venue suggests to me that someone else should also be able to tap into that, for e.g. weddings or corporate events, and I think that establishes the $20M as a more reasonable mean, but I think reasonable people could disagree with this.

comment by johnny_lw · 2024-12-01T16:47:25.383Z · LW(p) · GW(p)

Donated $100.

comment by Mckiev · 2024-12-01T04:46:13.651Z · LW(p) · GW(p)

Regarding donor recognition, I think online recognition makes a lot of sense, e.g. colored nickname on the forum, or a badge of some sorts (like the one GWWC encourages).

Thank you for clearly laying out the arguments for donating to the Lightcone. I will! 

comment by zeshen · 2024-11-30T12:24:39.276Z · LW(p) · GW(p)

Is there any difference between donating through Manifund or directly via Stripe?

Replies from: habryka4
comment by habryka (habryka4) · 2024-11-30T18:23:30.287Z · LW(p) · GW(p)

Don't think so. It is plausible that Manifund is eating the Stripe fees themselves, so we might end up getting ~1% more money, but I am not sure.

comment by Joe Fetsch (joe-fetsch) · 2024-12-03T17:32:42.025Z · LW(p) · GW(p)

Here's $1649 from me. Lighthaven is one of the most incredible places in the world, largely because of its people. I hope to see you all there next year at LessOnline and Manifest!

comment by JanGoergens (jantrooper2) · 2024-12-03T16:27:46.616Z · LW(p) · GW(p)

Although it's not much, I have donated $10. I hope you will find some generous sponsors that are able to financially support Lightcone!

comment by Mikhail Samin (mikhail-samin) · 2024-12-03T11:07:42.818Z · LW(p) · GW(p)

I've donated $1000. Thank you for your work.

comment by kjz · 2024-12-02T19:56:25.821Z · LW(p) · GW(p)

Donated $100. Thanks for everything you do!

comment by Alicorn · 2024-12-01T22:30:48.993Z · LW(p) · GW(p)

I just went to try to give you $40 (because there's an event that I expect to be hosted at Lighthaven, and I want to go to it, and would be happy to pay for a ticket at something in that ballpark of a price, but kind of expect to be offered free entry, so I might as well "pay for my ticket" now to make sure the place is there to have the event in).

But the form requires a phone number and will not accept all zeroes or all nines, and you can have forty dollars but you cannot have a real phone number.

Replies from: philh, kave
comment by philh · 2024-12-02T13:49:52.714Z · LW(p) · GW(p)

(At least in the UK, numbers starting 077009 are never assigned. So I've memorized a fake phone number that looks real, that I sometimes give out with no risk of accidentally giving a real phone number.)

comment by kave · 2024-12-01T22:51:25.448Z · LW(p) · GW(p)

Are you checking the box for “Save my info for 1-click checkout with Link”? That’s the only way I’ve figured out to get Stripe to ask for my phone number. If so, you can safely uncheck that.

(Also, I don’t know if it’s important to you, but I don’t think we would see your phone number if you gave it to Stripe.)

Replies from: habryka4
comment by habryka (habryka4) · 2024-12-01T22:56:07.586Z · LW(p) · GW(p)

(I at least have no ability to access the phone numbers of anyone who has donated so far, and am pretty sure this is unrelated to the fundamental Stripe payment functionality. Just to verify, I went through the Stripe donation flow in an incognito window with a $1 donation, and it did not require any phone number.)

Replies from: Alicorn
comment by Alicorn · 2024-12-02T00:00:54.582Z · LW(p) · GW(p)

Stripe is even less welcome to my phone number than you are! But I'll retry without the info-saving thing.

ETA: Yeah that worked.

comment by Algon · 2024-12-01T19:30:03.937Z · LW(p) · GW(p)

Donated $10. If I start earning substantially more, I think I'd be willing to donate $100. As it stands, I don't have that slack.

comment by Lucie Philippon (lucie-philippon) · 2024-12-03T16:47:59.839Z · LW(p) · GW(p)

I'd love to donate ~5K€ to Lightcone next year, but as long as it's not tax-deductible in France, I'll stick to French AI safety orgs, as the French non-profit donation tax break is stupidly good: it can basically triple the donation amount and reduce income tax to 0.

I know that Mieux Donner, a new French effective giving org, is acting as a French tax-deductible front for a number of EA organisations. I'll contact them to check whether they could forward a donation to Lightcone and give an update under this comment.

Replies from: habryka4
comment by habryka (habryka4) · 2024-12-03T17:42:10.460Z · LW(p) · GW(p)

That would be great! Let’s hope they say yes :)

comment by Falacer · 2024-12-02T10:23:19.775Z · LW(p) · GW(p)

My employer's matching program currently doesn't accept Lightcone Infrastructure as a registered cause for donation matching, even though my employer is on your list of employer matching programs. We use Benevity, and the portal says that "a registration email has been sent to the cause", and that the cause should register through https://causes.benevity.org/ .

Is Benevity registration on your list? I'd much rather donate with matching than without.

(From the UK, if that matters)

Replies from: habryka4
comment by habryka (habryka4) · 2024-12-02T10:41:07.312Z · LW(p) · GW(p)

Oh no! I just signed up for an account on Benevity, hopefully they will confirm us quickly. I haven't received any other communication from them, but I do think we should try to get on there, as it is quite helpful for matching, as you say.

comment by enolan · 2024-12-01T17:09:28.788Z · LW(p) · GW(p)

Chief among them is having built-in UI for "base-model Claude 3.5 Sonnet" and Llama 405b-base continuing whatever comment or post I am in the middle of writing

I was extremely surprised to read that Anthropic is giving out access to base models to outside parties. Especially as a single throwaway sentence in a giant post. What were the terms of your agreement with them? Do they do this with other people? Do they also give certain people access to the helpful-only (i.e. not necessarily harmless or honest) post-trained models, or just the base pretrained ones?

Replies from: kave
comment by kave · 2024-12-01T20:13:31.043Z · LW(p) · GW(p)

Habryka is slightly sloppily referring to using Janus' 'base model jailbreak' for Claude 3.5 Sonnet.

Replies from: habryka4
comment by habryka (habryka4) · 2024-12-01T21:00:41.690Z · LW(p) · GW(p)

Oops, I thought I had added a footnote for that, to clarify what I meant. I shall edit. Sorry for the oversight.

comment by Mateusz Bagiński (mateusz-baginski) · 2024-12-01T05:47:55.847Z · LW(p) · GW(p)

I'm considering donating. Any chance of setting up some tax deduction for Euros?

Replies from: habryka4
comment by habryka (habryka4) · 2024-12-01T06:24:40.149Z · LW(p) · GW(p)

I am working on it! What country would you want it for? Not all countries have charity tax-deductibility, IIRC.

Replies from: zimtkerl, Gyrodiot, mateusz-baginski
comment by zimtkerl · 2024-12-03T12:42:07.668Z · LW(p) · GW(p)

Would it be possible to set up tax deduction for Germany? I am considering donating and would be even more inclined to do so if a tax deduction were available, though I may still donate even if it isn't.

Replies from: habryka4
comment by habryka (habryka4) · 2024-12-03T17:37:15.037Z · LW(p) · GW(p)

I am working on it! Will post here in the coming week or two about how it’s going.

comment by Gyrodiot · 2024-12-01T21:01:12.467Z · LW(p) · GW(p)

Oh, glad I scrolled to find this comment. Adding a request for France, which does have charity tax deductions... but needs an appropriate receipt.

comment by Mateusz Bagiński (mateusz-baginski) · 2024-12-01T07:00:03.227Z · LW(p) · GW(p)

Estonia. (Alternatively, Poland, in which case: PLN, not EUR.)

comment by Rafael Harth (sil-ver) · 2024-12-03T11:35:28.077Z · LW(p) · GW(p)

I'm pretty poor right now so didn't donate, but I do generally believe that the Lightcone team has done a good job, overall, and is worth donating to.

comment by Elizabeth (pktechgirl) · 2024-12-01T18:44:03.112Z · LW(p) · GW(p)

Doesn't EAIF give to other EVF orgs? Seems weird that you would be a conflict of interest but that isn't.

Replies from: habryka4
comment by habryka (habryka4) · 2024-12-01T18:49:09.522Z · LW(p) · GW(p)

Caleb is heavily involved with the EAIF as well as the Long Term Future Fund, and I think me being on the LTFF with him is a stronger conflict of interest than the COI between EAIF and other EVF orgs.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2024-12-01T20:33:18.695Z · LW(p) · GW(p)

(note for readers: I effectively gave >$10k to LW last year, this isn't an argument against donating)

This seems quite modest by EA COI standards.

Replies from: habryka4
comment by habryka (habryka4) · 2024-12-01T21:10:33.093Z · LW(p) · GW(p)

My inside view is that it's about as strong a COI as I've seen. This is largely based on the exact dynamics of the LTFF, where there tends to be a lot of negotiation going on, and because there is a very clear way in which everything is about distributing money, which I think makes a scenario like "Caleb rejects me on the EAIF, therefore I recommend less LTFF funding for orgs he thinks are good" a kind of threat that seems hard to rule out.

comment by mattmacdermott · 2024-12-03T08:37:09.492Z · LW(p) · GW(p)

I gave $290. Partly because of the personal value I get out of LW, partly because I think it's a solidly cost-effective donation.

comment by Eric Dalva (eric-dalva) · 2024-12-03T07:25:47.044Z · LW(p) · GW(p)

I have donated as well. I appreciate the work y'all do.

Replies from: habryka4
comment by habryka (habryka4) · 2024-12-03T07:29:26.825Z · LW(p) · GW(p)

Thank you!

comment by Neil (neil-warren) · 2024-12-03T00:18:46.343Z · LW(p) · GW(p)

I think you could get a lot out of adding a temporary golden dollar sign with the amount donated next to our LW names! Upon proof of donation receipt or whatever.

Seems like the lowest-hanging fruit for monetizing vanity, with benches usually being somewhat of a last resort!

(The benches still seem underpriced to me, given the expected amount raised and average donation size in the foreseeable future.)

comment by girllich · 2024-12-02T23:08:08.794Z · LW(p) · GW(p)

Why aren't the LessWrong books available on Amazon anymore, even as print-on-demand? Wouldn't that be additional revenue?

Replies from: habryka4
comment by habryka (habryka4) · 2024-12-03T01:09:47.182Z · LW(p) · GW(p)

It was never much additional revenue. The reason is that Amazon got annoyed at us because of some niche compliance requirement for our Amazon France account, and has decided to block all of our sales until that's resolved. I think it's going to be resolved before the end of the year, but man has it been a pain. 

If you come by Lighthaven you can also buy the books in-person! :P

comment by Gaius Leviathan XV · 2024-12-02T18:42:40.012Z · LW(p) · GW(p)

So did you guys end up paying back the loan you stole from FTX?

https://www.sfgate.com/tech/article/sbf-berkeley-rose-garden-inn-19520351.php

Replies from: kave
comment by kave · 2024-12-02T18:53:39.590Z · LW(p) · GW(p)

FTX did successfully retrieve the $1M from the title company! We didn't have any control over those funds, so I don't think we were involved apart from pointing FTX in the right direction.