Posts
Comments
I was just thinking of adding some kind of donation tier where if you donate $20k to us we will custom-build a Gerver sofa, and dedicate it to you.
My guess is neither of you is very good at using them, and getting value out of them somewhat scales with skill.
Models can easily replace on the order of 50% of my coding work these days, and if I have any major task, my guess is I quite reliably get 20%-30% productivity improvements out of them. It does take time to figure out which things they are good at, and how to prompt them.
Could you send me a screenshot of your post list and tag filter list? What you are describing sounds really very weird to me and something must be going wrong.
It… was the fault of Jacob?
The post was misleading when it was written, and I think was called out as such by many people at the time. I think we should have some sympathy for Jacob being naive and being tricked, but surely a substantial amount of blame accrues to him for going to bat for OpenAI when that turned out to be unjustified in the end (and at least somewhat predictably so).
What is plausibly a valid definition of multi-hop reasoning that we care about and that excludes getting mathematical proofs right and answering complicated never-before-seen physics questions and doing the kind of thing that a smaller model needed to do a CoT for?
Transformers are obviously capable of doing complicated internal chains of reasoning. Just try giving them a difficult problem and force them to start their answer in the very next token. You will see no interpretable or visible traces of their reasoning, but they will still get it right for almost all questions.
Visible CoT is only necessary for the frontier of difficulty. The rest is easily internalized.
I do not understand your comment at all. Why would it be falsified? Transformers are completely capable of steganography if you apply pressure towards it, which we will (and have done).
In DeepSeek we can already see weird things happening in the chain of thought. I will happily take bets that we will see a lot more of that.
How are the triangle numbers not quadratic?
Sure looks quadratic to me.
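For reference, the n-th triangle number is T_n = n(n+1)/2, a degree-2 polynomial in n. A quick illustrative check (hypothetical snippet, not from the thread): constant second differences are the standard fingerprint of a quadratic sequence.

```python
# Triangle numbers: T_n = 1 + 2 + ... + n = n * (n + 1) / 2.
def triangle(n: int) -> int:
    return n * (n + 1) // 2

ts = [triangle(n) for n in range(10)]        # 0, 1, 3, 6, 10, 15, ...
first = [b - a for a, b in zip(ts, ts[1:])]  # 1, 2, 3, 4, ...
second = [b - a for a, b in zip(first, first[1:])]

# A sequence is quadratic iff its second differences are constant.
assert all(d == 1 for d in second)
```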
Welcome! Hope you have a good time emerging from the shadows.
I think people usually want that sentence to mean something confused. I agree it has fine interpretations, but people by default use it as a semantic stopsign that stops them from looking for ways the individual parts mechanistically interface with each other to produce something of higher utility than the individual parts naively summed would (see also https://www.lesswrong.com/posts/8QzZKw9WHRxjR4948/the-futility-of-emergence )
Also, I don't claim there's another major grant maker that's less constrained like this.)
I think the SFF appears less constrained like this
What's the context?
Alas, thank you for looking into it.
I set up an every.org donation link which supports crypto donations, stock donations and recurring donations, so this is now the case!
The post feels very salesy to me, was written by an org account, and also made statements that seemed false to me like:
Fellows will work with the world’s leading AI safety organisations to advance the safe and beneficial development of AI. Some of our placement partners are the Center for Human Compatible AI (CHAI), FAR.AI, Conjecture, UK AISI and the Mila–Quebec AI Institute.
(Of those, maybe FAR.AI would be deserving of that title, but also, I feel like there is something bad about trying to award that title in the first place.)
There is also no disambiguation of whether this program is focused on existential-risk efforts or on near-term bias/filter-bubble/censorship/etc. AI efforts, the latter of which I think is usually bad for the world, or at the very least a lot less valuable.
Our post font is pretty big, but for many reasons it IMO makes sense for the comment font to be smaller. So that plus LaTeX is a bit of a dicey combination.
In the case of forward propagation, these artifacts mean you get for ~free, and in backward propagation you get for ~free.
Presumably you meant to say something else here than to repeat twice?
Edit: Oops, I now see. There is a switched . I really did look quite carefully to spot any difference, but apparently still wasn't good enough. This all makes sense now.
This is a line item for basically all the service staff of a 100-bed, 30,000 sq. ft. conference center/hotel.
I don't think I understand how not living in the Bay Area and making flights there instead would work. This is a conference center, we kind of need to be where the people are to make that work.
That would be great! Let’s hope they say yes :)
I am working on it! Will post here in the coming week or two about how it’s going.
Thank you!
Thank you!
I'd guess plenty are planning to donate after Jan 1st for tax reasons, so perhaps best to keep highlighting the donation drive through the first week of Jan.
Yeah, I've been noticing that when talking to donors. It's a tricky problem because I would like the fundraiser to serve as a forcing function to get people who think LW should obviously be funded, but would like to avoid paying an unfair multiple of their fair share, to go and fund it.
But it seems like half of the donors will really want to donate before the end of this year, and the other half will want to donate after the start of next year.
It's tricky. My current guess is I might try to add some kind of "pledged funds" section to the thermometer, but it's not ideal. I'll think about it in the coming days and weeks.
It was never much additional revenue. The reason is that Amazon got annoyed at us because of some niche compliance requirement for our Amazon France account, and has decided to block all of our sales until that's resolved. I think it's going to be resolved before the end of the year, but man has it been a pain.
If you come by Lighthaven you can also buy the books in-person! :P
It seems to me that o1 and DeepSeek already do a bunch of the "mental simulation" kind of reasoning, and even previous LLMs did so a good amount if you prompted them to think in chains of thought, so the core point fell a bit flat for me.
This essay seems to have lost the plot of where the problems with AI come from. I was historically happy that Conjecture focused on the parts of AI development that are really obviously bad, like having a decent chance of literally killing everyone or permanently disempowering humanity, but instead this seems like it's a random rant against AI-generated art, and name-calling of obviously valuable tools like AI coding assistants.
I am not sure what happened. I hope you find the plot again.
Oh, interesting. I had not properly realized you could unbundle these. I am hesitant to add a hop to each request, but I sure do expect Cloudflare to be fast. I'll look into it, and thanks for the recommendation.
Oh no! I just signed up for an account on Benevity, hopefully they will confirm us quickly. I haven't received any other communication from them, but I do think we should try to get on there, as it is quite helpful for matching, as you say.
Yeah, we considered setting up a Cloudflare proxy for a while, but at least for logged-in users, LW is actually a really quite dynamic and personalized website, and not a great fit for it (I do think it would be nice to have a logged-out version of pages available on a Cloudflare proxy somehow).
I could have saved a bit of money with better tax planning, but not as much as one might think.
The money I was able to donate came from appreciated crypto, and was mostly unrelated to my employment at Lightcone (and also as an appreciated asset was therefore particularly tax-advantageous to donate).
I have generally taken relatively low salaries for most of my time working at Lightcone. My rough guess is that my average salary has been around $70k/yr[1]. Lightcone only started paying more competitive salaries in 2022, when we expanded beyond some of our initial founding staff and I felt like it didn't really make cultural or institutional sense to have extremely low salaries. The only year in which I got paid anything close to a competitive Bay Area salary was 2023, and in that year I also got to deduct most of it, since I donated in the same year.
(My salary has always been among the lowest in the organization, mostly as a costly signal to employees and donors that I am serious about doing this for impact reasons)
[1]
I don't have convenient tax records for years before 2019, but my income post-federal-tax (but before state tax) for the last 6 years was $59,800 (2019), $71,473 (2020), $83,995 (2021), $36,949 (2022), $125,175 (2023), ~$70,000 (2024).
Aah, that makes sense. I will update the row to say "Expected Lighthaven Income"
I fixed some parts that were easy to misunderstand. I meant that the $500k is the LW hosting + software subscriptions and the dedicated software + accounting items together. And I didn't mean to imply that the labor cost of the 4 people is $500k; that was a separate term in the costs.
Ah yeah, I did misunderstand you there. Makes sense now.
Is Lighthaven still cheaper if we take into account the initial funding spent on it in 2022 and 2023?
It's tricky because a lot of that is capital investment, and it's extremely unclear what the resell price of Lighthaven would end up being if we ended up trying to sell, since we renovated it in a pretty unconventional way.
Total renovations cost ~$7M-$8M. About $3.5M of that was funded as part of the mortgage from Jaan Tallinn, and another $1.2M of that was used to buy a property right next to Lighthaven which we are hoping to take out an additional mortgage on (see footnote #3), and which we currently own in full. The remaining ~$3M largely came from SFF and Open Phil funding. We have also lost a total of ~$1.5M in net operating costs so far. Since the property is super hard to value, let's estimate the value of the property after our renovations at our current mortgage value ($20M).[1]
During the same time, the Lightcone Offices would have cost around $2M, so if you view the value we provided in the meantime as roughly equivalent, we are out around $2.5M, but also, property prices tend to increase over time at least some amount, so by default we've probably recouped some fraction of that in appreciated property values, and will continue to recoup more as we break even.
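My reading of how the "out around $2.5M" figure above nets out, as a quick sanity check (treating the mortgage-financed portion and the adjacent property as recoverable assets is an assumption on my part):

```python
# All figures in $M, taken from the paragraphs above.
grant_funded_renovations = 3.0   # the ~$3M from SFF and Open Phil
operating_losses = 1.5           # net operating losses so far
counterfactual_offices = 2.0     # what the Lightcone Offices would have cost

# Assumption: the $3.5M mortgage-financed portion and the $1.2M
# adjacent property are recoverable (debt-financed / owned outright),
# so only the grant-funded spending and operating losses count as sunk.
net_out = grant_funded_renovations + operating_losses - counterfactual_offices
assert net_out == 2.5  # matches the "around $2.5M" figure
```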
My honest guess is that Lighthaven would make sense even without FTX, from an ex-post perspective, but that if we hadn't had FTX there wouldn't have been remotely enough risk appetite for it to get funded ex-ante. I think in many worlds Lighthaven turned out much worse than it did (and for example, renovation costs already ended up in the like 85th percentile of my estimates due to much more extensive water and mold damage than I was expecting in the mainline).
[1]
I think this is a potentially controversial choice, though one that makes sense to me. Most buyers would not be willing to pay remotely that much for the venue, since they would basically aim to return the property to its standard hotel usage and throw away most of our improvements, probably putting the property value at something like $15M. But our success running the space as a conference venue suggests to me that someone else should also be able to tap into that, e.g. for weddings or corporate events, which establishes $20M as a more reasonable mean. Reasonable people could disagree with this.
Due to an apparently ravenous hunger among you all for benches with plaques dedicated to you, and us not actually having that many benches, I increased the threshold for getting a bench (or equivalent) with a plaque to $2,000. Everyone who donated more than $1,000 but less than $2,000 before Dec 2nd will still get their plaque.
Thank you so much!
Some quick comments:
then the real costs are $500k for the hosting and hosting cost of LessWrong
Raw server costs for LW are more like ~$120k (and to be clear, you could drive this lower with some engineering, though you would have to pay for that engineering cost). See the relevant line in the budget I posted.
Total labor cost for the ~4 people working on LW is closer to ~$800k, instead of the $500k you mention.
(I'm not super convinced it was a good decision to abandon the old Lightcone offices for Lighthaven, but I guess it made sense in the funding environment of the time, and once we made this decision, it would be silly not to fund the last $1M of initial cost before Lighthaven becomes self-funded).
Lighthaven is actually cheaper (if you look at total cost) than the old Lightcone offices. Those also cost on the order of $1M per year, and were much smaller, though of course we could have recouped a bunch of that if we had started charging for more things. But cost-savings were actually a reason for Lighthaven, since according to our estimates, the mortgage and rent payments would end up quite comparable per square foot.
Again, thank you a lot.
Should now be fixed. We've blocked traffic to basically all pages and been restoring them incrementally to make sure we don't go down again immediately. I just lifted the last of those blocks.
We were down between around 7PM and 8PM PT today. Sorry about that.
It's hard to tell whether we got DDoSed or someone just wanted to crawl us extremely aggressively, but we've had at least a few hundred IP addresses and random user agents request a lot of quite absurd pages, in a way that was clearly designed to evade bot-detection and blocking methods.
I wish we were more robust to this kind of thing, and I'll be monitoring things tonight to prevent it from happening again, but it would be a whole project to make us fully robust to attacks of this kind. I hope it was a one-off occurrence.
But also, I think we can figure out how to make it so we are robust to repeated DDoS attacks, if that is the world we live in. I do think it would mean strapping in for a few days of spotty reliability while we figure out how to do that.
Sorry again, and boo for the people doing this. It's one of the reasons why running a site like LessWrong is harder than it should be.
Stripe doesn't allow for variable-amount recurring donations in their payment links. We will probably build our own donation page to work around that, but it might take a bit.
(I at least have no ability to access the phone numbers of anyone who has donated so far, and am pretty sure this is unrelated to the fundamental Stripe payment functionality. Just to verify this, I just went through the Stripe donation flow in an incognito window with a $1 donation, and it did not require any phone numbers.)
Have you considered cutting salaries in half? According to the table you share in the comments, you spend 1.4 million on the salary for the 6 of you, which is $230k per person. If the org was in a better shape, I would consider this a reasonable salary, but I feel that if I was in the situation you guys are in, I would request my salary to be at least halved.
We have! Indeed, we have considered it so hard that we did in fact do it. For roughly the last 6-8 months our salaries have on average been halved (and I have completely forfeited my salary, and donated ~$300k to Lightcone at the end of last year myself to keep us afloat).
I don't think this is a sustainable situation and I expect that in the long run I would end up losing staff over this, or I would actively encourage people to make 3x[1] their salary somewhere else (and maybe donating it, or not) since I don't think donating 70% of your counterfactual salary is a particularly healthy default for people working on these kinds of projects. I currently think I wouldn't feel comfortable running Lightcone at salaries that low in the long run, or would at least want to very seriously rearchitect how Lightcone operates to make that more OK.
(Also, just to clarify, the $230k is total cost associated with an employee, which includes office space, food, laptops, insurance, payroll taxes, etc. Average salaries are ~20% lower than that.)
Relatedly, I don't know if it's possible for you to run with fewer employees than you currently have. I can imagine that 6 people is the minimum that is necessary to run this org, but I had the impression that at least one of you is working on creating new rationality and cognitive trainings, which might be nice in the long-term (though I'm pretty skeptical of the project altogether), but I would guess you don't have the slack for this kind of thing now if you are struggling for survival.
We are generally relatively low on slack, and mostly put in long hours. Ray has been working on new rationality and cognitive-training projects, but mostly not on his work time, and when he has spent work time on it, he has basically bought himself out with revenue from programs he ran (for example, he ran some recent weekend workshops for which he took 2 days off from work, and in exchange made ~$1.5k of profit from the workshops, which went to Lightcone to pay for his salary).
I currently would like to hire 1-2 more people in the next year. I definitely think we can make good use of them, including for projects that more directly bring in revenue (though I think the projects that don't would end up a bunch more valuable for the world).
On the other side of the coin, can you extract more money out of your customers? The negotiation strategy you describe in the post (50-50ing the surplus) is very nice and gentlemanly, and makes sense if you are both making profit. But if there is a real chance of Lightcone going bankrupt and needing to sell Lighthaven, then your regular customers would need to fall back to their second-best option, losing all their surplus. So I think in this situation it would be reasonable to try to charge your regular customers practically the maximum they are willing to pay.
I think doing the negotiation strategy we did was very helpful for getting estimates of the value we provide to people, but I agree that it was quite generous, and given the tightness have moved towards a somewhat more standard negotiation strategy. I am not actually sure that this has resulted in us getting more of the surplus, I think people have pretty strong fairness instincts around not giving up that much of the surplus, and negotiations are hard.
We do expect to raise prices in the coming year, mostly as demand is outstripping supply for Lighthaven event slots, which means we have more credible BATNAs in our negotiations. I do hope this will increase both the total surplus, and the fraction of the surplus we receive (in as much as getting that much will indeed be fair, which I think it currently is, but it does depend on things being overall sustainable).
[1]
Our historical salary policy was roughly "we will pay you 70% of what we are pretty confident you could make in a similar-ish industry job." Cutting that 70% in half leaves you with ~1/3 of what you would make in industry, so the 3x is a relatively robust estimate, and probably a bit of an underestimate: we haven't increased salaries in 2-3 years despite inflation, and it doesn't take into account tail outcomes like founding a successful company (though engineering salaries have also gone down somewhat in that time, if less so in more AI-adjacent spaces, so it's not totally obvious).
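The arithmetic behind that 3x figure, sketched with normalized numbers (illustrative only; 1.0 stands in for whatever the industry comp actually is):

```python
# Policy: pay 70% of estimated industry compensation.
industry = 1.0                    # normalize industry comp to 1.0
policy_salary = 0.70 * industry   # the historical 70% policy
halved = policy_salary / 2        # salaries roughly halved recently

# How many times their current pay could someone make in industry?
multiple = industry / halved      # 1 / 0.35 ~= 2.86, i.e. roughly 3x
assert abs(halved - 1 / 3) < 0.02
assert 2.8 < multiple < 2.9
```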
Yeah, I agree, and I've been thinking through things like this. I want to be very careful in making the site not feel like it's out to get you, and so isn't trying to sell you anything, and so have been hesitant for things in the space that come with prominent UI implications, but I also think there are positive externalities. I expect we will do at least some things in this space.
My inside view is that it's about as strong of a COI as I've seen. This is largely based on the exact dynamics of the LTFF, where there tends to be a lot of negotiation going on, and because there is a very clear way in which everything is about distributing money which I think makes a scenario like "Caleb rejects me on the EAIF, therefore I recommend fewer things to orgs he thinks are good on the LTFF" a kind of threat that seems hard to rule out.
Oops, I thought I had added a footnote for that, to clarify what I meant. I shall edit. Sorry for the oversight.
Caleb is heavily involved with the EAIF as well as the Long Term Future Fund, and I think me being on the LTFF with him is a stronger conflict of interest than the COI between EAIF and other EVF orgs.
Though it's the website that I find important; as I understand it, the majority of this money will go towards supporting Lighthaven.
I think this is backwards! As you can see in the budget I posted here, and also look at the "Economics of Lighthaven" section, Lighthaven itself is actually surprisingly close to financially breaking even. If you ignore our deferred 2024 interest payment, my guess is we will overall either lose or gain some relatively small amount on net (like $100k).
Most of the cost in that budget comes from LessWrong and our other generalist activities. At least right now, I think you should be more worried about the future of Lighthaven being endangered by the financial burden of LessWrong (and in the long run, I think it's reasonably likely that LessWrong will end up in part funded by revenue from Lighthaven).
My favorite fiscal sponsorship would be through GWWC: https://www.givingwhatwecan.org/inclusion-criteria
Their inclusion criteria suggest that they want to see at least $50k of expected donations in the next year. My guess is that if we have $10k-$20k expected this month, that is probably enough, but I am not sure (and it might also not work out for other reasons).
I am working on it! What country would you want it for? Not all countries have charity tax deductibility, IIRC.
I think a lot of projects in the space are very high variance, and some of them are actively deceptive, and I think that really means you want a bunch of people with context to do due diligence and think hard about the details. This includes some projects that Zvi recommends here, though I do think Zvi's post is overall great and provides a lot of value.
Another big component is fair cost-splitting. I think many paths to impact require getting 4-5 pieces in place, plus substantial capital investment, and any single donor might feel that there isn't really any chance for them to fund things in a way that gets the whole engine going; before they feel good about giving, they want to know that other people will actually put in the other funds necessary to make things work. That's a lot of what our work on the S-Process and Lightspeed Grants was solving.
In general, the philanthropy space is dominated by very hard principal-agent problems. If you have a lot of money, you will have tons of people trying to get it, most of them for bad reasons. Creating infrastructure to connect high-net-worth people with others who are actually trustworthy and want to put in a real effort to help them is quite hard (especially in a way that results in the high-net-worth people then actually building justified trust in those people).
I am working on making that happen right now. I am pretty sure we can arrange something, but it depends a bit on getting a large enough volume to make it worth it for one of our UK friend-orgs to put in the work to do an equivalence determination.
Can you let me know how much you are thinking of giving (either here or in a DM)?
Ah, yep, I am definitely more doomy than that. I tend to be around 85%-90% these days. I did indeed interpret you to be talking about timelines due to the "farther".
Hmm, my guess is we probably don’t disagree very much on timelines. My honest guess is that yours are shorter than mine, though mine are a bit in flux right now with inference compute scaling happening and the slope and reliability of that mattering a lot.
Yep, both of those motivate a good chunk of my work. I think the best way to do that is mostly to work one level removed, on the infrastructure that allows ideas like that to bubble up and be considered in the first place, but I’ll also take opportunities that make more direct progress on them as they present themselves.