The LessWrong Team is now Lightcone Infrastructure, come work with us!
post by habryka (habryka4) · 2021-10-01T01:20:33.411Z · LW · GW · 71 comments
tl;dr: The LessWrong team is re-organizing as Lightcone Infrastructure. LessWrong is one of several projects we are working on to ensure the future of humanity goes well. We are looking to hire software engineers as well as generalist entrepreneurs in Berkeley who are excited to build infrastructure to ensure a good future.
I founded the LessWrong 2.0 team in 2017, with the goal of reviving LessWrong.com and reinvigorating the intellectual culture of the rationality community. I believed the community had great potential for affecting the long term future, but that the failing website was a key bottleneck to community health and growth.
Four years later, the website still seems very important. But when I step back and ask “what are the key bottlenecks for improving the longterm future?”, just ensuring the website is going well no longer seems sufficient.
For the past year, I’ve been re-organizing the LessWrong team into something with a larger scope. As I’ve learned from talking to over a thousand of you over the last 4 years, for most of you the rationality community is much larger than just this website, and your contributions to the future of humanity more frequently than not route through many disparate parts of our sprawling diaspora. Many more of those parts deserve attention and optimization than just LessWrong, and we seem to be the best positioned organization to make sure that happens.
I want to make sure that that whole ecosystem is successfully steering humanity towards safer and better futures, and more and more this has meant working on projects that weren't directly related to LessWrong.com:
- A bit over a year ago we started building grant-making software for Jaan Tallinn and the Survival and Flourishing Fund, helping distribute over 30 million dollars to projects that I think have the potential to have a substantial effect on ensuring a flourishing future for humanity.
- We helped run dozens of online meetups and events during the pandemic, and hundreds of in-person events for both this year's and 2019's ACX Meetups Everywhere,
- We helped build and run the EA Forum and the AI Alignment Forum,
- We recently ran a 5-day retreat for 60-70 people whose work we think is highly impactful in reducing the likelihood of humanity's extinction,
- We opened an in-person office space in the Bay Area for organizations that are working towards improving the long-term future of humanity.
As our projects outside of the LessWrong.com website multiplied, our name became more and more confusing when trying to explain to people what we were about.
This confusion reached a new peak when we started having a team, internally called the "LessWrong team", that was responsible for running the website as distinct from all of our other projects. Soon after, this caused me to utter the following sentence at one of our team meetings:
LessWrong really needs to figure out what the LessWrong team should set as a top priority for LessWrong
As one can imagine, the reaction from the rest of the team was confusion and laughter, and at that point I knew we had to change our name and clarify our organizational mission.
So, after doing many rounds of coming up with names and asking many of our colleagues and friends (including GPT-3) for suggestions, we finally decided on: Lightcone Infrastructure.
I like the light cone as a symbol, because it represents the massive scale of opportunity that humanity is presented with. If things go right, we can shape almost the full light cone of humanity to be full of flourishing life. Billions of galaxies, billions of light years across, for some 10^36 (or so) years until the heat death of the universe.
Separately, I am excited about where Lightcone Infrastructure is headed as an organization. I really enjoy working with the team, and I feel like there is a ton of low-hanging fruit in doing more end-to-end community optimization. This community of rationalists, effective altruists and longtermists has achieved an enormous amount, both in scale of impact, and in coming to a deeper understanding about the world, and I think our work in reviving LessWrong and our other infrastructure projects have already made a big difference in that success.
LessWrong will have a dedicated team within Lightcone Infrastructure. Ruby will be taking the lead on that, and he already has a number of great plans for the website that I expect he will tell you about in the near future. The current team and structure is:
- Oliver Habryka [LW · GW] (CEO)
- Campus team:
- Jacob Lagerros [LW · GW]
- Ben Pace [LW · GW]
- Raymond Arnold [LW · GW]
- Site team:
- Ruben Bloom (Ruby) [LW · GW]
Jim Babcock [LW · GW] is also paid by us as an independent Open Source contributor to the LessWrong website, and helps a lot with development. I also still fix bugs, answer support requests and write code, though I primarily spend my time on management these days.
If you want to work with us on these projects, we are hiring for three positions:
- A software engineer for LessWrong.com to assist with maintenance and expansion of the Rationalist community's online publishing hub (more info)
- A generalist to join our new campus team to build a thriving in-person rationality and longtermism community in the Bay Area (more info)
- A software engineer and product manager to be in charge of the "S-Process" application, a suite of custom software for grantmaking that we develop for Jaan Tallinn and the Survival and Flourishing Fund (more info)
We are also open to hiring people who don't fit into any of these positions, so err on the side of applying if you want to work with us. If you have thoughts on how to build a successful rationality and longtermism community, want to build a 1000-person strong campus, or have a pitch for a different infrastructure project we should run, reach out to us, and we would be excited to talk to you about working here.
Our current salary policy is to pay rates competitive with industry salary minus 30%. Given prevailing salary levels in the Bay Area for the kind of skill level we are looking at, we expect salaries to start at $150k/year plus healthcare (but we would be open to paying $315k for someone who would make $450k in industry). We also provide a generous relocation package if you aren't currently located in the Bay Area.
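For concreteness, here is a minimal sketch of the arithmetic described above. It only illustrates the stated "industry minus 30%" policy; treating the quoted $150k starting salary as a floor is my assumption, not something the post specifies.

```python
# Illustrative sketch of the stated policy: offers pegged at 70% of estimated
# industry compensation. Treating the $150k starting figure as a floor is an
# assumption for illustration, not a rule stated in the post.

def lightcone_offer(estimated_industry_comp: float, floor: float = 150_000) -> float:
    """Return 70% of the estimated industry compensation, but never below the floor."""
    return max(0.7 * estimated_industry_comp, floor)

print(lightcone_offer(450_000))  # 315000.0 -- matches the $315k example for a $450k industry engineer
print(lightcone_offer(200_000))  # 150000.0 -- 0.7 * 200k = 140k, lifted to the $150k starting salary
```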
Apply here: https://airtable.com/shrdqS6JXok99f6EX
71 comments
Comments sorted by top scores.
comment by Elizabeth (pktechgirl) · 2021-10-01T03:22:12.055Z · LW(p) · GW(p)
I really appreciate that you listed salary so explicitly.
↑ comment by lsusr · 2021-10-02T19:17:34.410Z · LW(p) · GW(p)
I want to commend Lightcone for making its salary ranges public. Public salary ranges are one of those things that make the world a fairer place (but are also often difficult, because many people have a vested interest in keeping the world unfair). This is a pro-women move.
comment by Jacob Falkovich (Jacobian) · 2021-10-01T17:34:17.571Z · LW(p) · GW(p)
The "generalist" description is basically my dream job right until
> The team is in Berkeley, California, and team members must be here full-time.
Just yesterday I was talking to a friend who wants to leave his finance job to work on AI safety and one of his main hesitations is that whichever organization he joins will require him to move to the Bay. It's one thing to leave a job, it's another to leave a city and a community (and a working partner, and a house, and a family...)
This also seems somewhat inefficient in terms of hiring. There are many qualified AI safety researchers and Lightcone-aligned generalists in the Bay, but there are surely even more outside it. So all the Bay-based orgs are competing for the same people, all complaining about being talent-constrained above anything else. At the same time, NYC, Austin, Seattle, London, etc. are full of qualified people with nowhere to apply.
I'm actually not suggesting you should open this particular job to non-Berkeley people. I want to suggest something even more ambitious. NYC and other cities are crying out for a salary-paying organization that will do mission-aligned work and would allow people to change careers into this area without uprooting their entire lives, potentially moving on to other EA organizations later. Given that a big part of Lightcone's mission is community building, having someone start a non-Bay office could be a huge contribution that will benefit the entire EA/Rationality ecosystem by channeling a lot of qualified people into it.
And if you decide to go that route you'll probably need a generalist who knows people...
↑ comment by Ben Pace (Benito) · 2021-10-01T17:48:36.659Z · LW(p) · GW(p)
I'd love to build campuses in other cities around the world. There are lots of incredible people with strong reasons to be in other places. When we talk in the team about what success looks like in the next 5-10 years, part of it is a major hub (e.g. 500 people) in the Bay, and growing hubs (200 people, 100 people, 50 people, etc.) in multiple other places like the ones you mention.
You say "NYC and other cities are crying out for a salary-paying organization that will do mission-aligned work". I will point out there's a little chicken-and-egg problem here, in that the Bay already has several rationalist and longtermist orgs such that there's been a good way for us to get a foothold in starting an office. In some ways there's lots of low-hanging fruits of things to do, but in other ways it's a real challenge to find founding teams and help them execute on a project such that they can employ people.
But it certainly isn't a defeater, I do see paths to helping build the research and engineering projects for people to work on in these places.
And if we're successful, and are plotting to build in NYC, I look forward to talking with you (and many of the other excellent people in NYC) about it :)
comment by habryka (habryka4) · 2021-10-01T01:42:23.915Z · LW(p) · GW(p)
Note: I decided to frontpage this post, despite it being more of an organizational announcement, because it does feel pretty relevant to everyone on the site, and I would feel bad if someone was a regular user of LessWrong and didn't know about this relatively large change in our operating structure.
comment by AI_WAIFU · 2021-10-01T17:23:23.075Z · LW(p) · GW(p)
Our current salary policy is to pay rates competitive with industry salary minus 30%.
What was the reasoning behind this? To me this would make sense if there was a funding constraint, but I was under the impression that EA is flush with cash [EA · GW].
If the following are the stated stakes:
If things go right, we can shape almost the full light cone of humanity to be full of flourishing life. Billions of galaxies, billions of light years across, for some 10^36 (or so) years until the heat death of the universe.
Then I would strongly advise against lowballing or cheaping out when it comes to talent acquisition and retention.
↑ comment by Ben Pace (Benito) · 2021-10-01T17:55:28.296Z · LW(p) · GW(p)
One part of the reasoning goes something like this. Suppose you and your neighbor would like a nice hedge placed between your properties. This is something that both of you want. Also, your neighbor is a landscaper, with an hourly rate of $100. You propose to pay your neighbor to do the work.
What price is fair? One answer is $100/hour, given that's their standard rate. But I think this is wrong, because this isn't just a job for them, it is also something they personally care about.
To actually figure out the price we'd need some estimate of the value of the good to each of them, and I'm not going to follow it through here. But this is one of the reasons why "less than market rate" seems like a fair price for this sort of work.
↑ comment by AI_WAIFU · 2021-10-01T21:15:08.989Z · LW(p) · GW(p)
I'm not convinced, especially if this sort of underpay is a common policy across multiple orgs in the rationalist and EA communities. In a closed system with two people, a "fair" price will balance the opportunity cost to the person doing the work and the value both parties assign to the hedge.
But this isn't a closed system. I expect that lowballing pay has a whole host of higher-order negative effects. Off the top of my head:
- This strategy is not scalable. There's a limited pool of talent willing to take a pay cut because they value the output of their own work. There are probably better places to put that talent than on something like generic software engineering, which is essentially a commodity.
- Pay is closely associated with social status, and status influences the appeal of ideas and value systems. If working in a sector pays less than industry, then it will lose support on the margin.
- Future pay is a function of current pay; individuals deciding to take a pay cut from industry rates are not only temporarily losing money, but are forgoing potentially very large sums over their careers.
- Orgs like Lightcone Infrastructure compete for talent not just with other EA orgs, but with earning to give, which pays industry rates, comes with big wads of status, and offers the option to pocket the money if for whatever reason an individual decides to leave EA. I would expect this to create an over-allocation of manpower to earning to give and an under-allocation to actual EA work.
- This line of reasoning creates perverse incentives. Essentially you end up paying people less the more they share your values, which, given that people have malleable value systems, means that you're incentivizing them to not share your values or to lie about sharing your values.
I can also see some benefits of the policy, such as filtering and extra runway, but there are other arguably better ways of doing the former, and the latter isn't all that important if you can maintain a 30% yoy growth rate.
↑ comment by Elizabeth (pktechgirl) · 2021-10-02T06:07:42.579Z · LW(p) · GW(p)
Can I encourage you to write up a top-level post detailing what you think the ideal salary algorithm for non-profits is?
I think you raise some valid points, and also can viscerally feel the punishment being inflicted on habryka for his forthrightness here (which is useful to everyone regardless of the specific salary), which is going to reduce similar efforts in the future in ways we will all be poorer for. My general solution for problems like this is to suggest people write up a top level post making their general case (which may link to the motivating example but has a scope beyond it). The advantages here are:
- avoids punishing people who are taking steps in the right direction, although they may not have arrived at an optimum yet
- Lets you, the OP, and everyone else focus on identifying the right general algorithm, instead of arbitrating one particular instance, where one participant has way more information than the others.
- Your argument is seen by everyone who cares about the topic, rather than only people who click through an org announcement.
(Should I write up a top-level post arguing this rather than leave a comment? Probably eventually).
↑ comment by StellaAthena · 2021-10-02T19:16:52.525Z · LW(p) · GW(p)
This response confuses me.
- Who is being punished here? I see people leaving feedback and discussing ideas, and have no idea who you are worried about.
- I strongly agree with AI_WAIFU, but don’t have a useful general strategy for non-profit funding. My opposition is based on a simple heuristic: wealthy orgs should not systematically underpay their employees. Making a thread saying that seems extremely not useful.
Speaking to the general point, as AI_WAIFU points out, there is an extremely large amount of money apparently sitting around. The thread he links to implies that EA has about 5 million dollars per active member of the community and that cash is growing faster than membership. That’s an obscene amount of cash, and being stingy about pay doesn’t really make sense to me.
Others in this thread have brought up the fact that many non-profits underpay, but that’s not because there’s some kind of virtue in underpaying (quite the opposite: it’s exploitative), it’s because they’re poor. EA is apparently swimming in cash, so that comparison doesn’t make much sense here. Additionally, many non-profits compensate for underpaying with extremely generous benefits, which this post makes no mention of.
“We pay less than you’re worth because we only want people who really care about the mission” is typically a lie HR tells people, not an actual thing people believe. Reading that it’s a thing that Lightcone believes worries me, as it makes me feel like you’re drinking your own Kool-Aid too hard.
This also signals that you don’t care about your employees. Pay is the number one way orgs indicate that they care about their employees.
↑ comment by ChristianKl · 2021-10-02T20:59:24.016Z · LW(p) · GW(p)
“We pay less than you’re worth because we only want people who really care about the mission” is typically a lie HR tells people, not an actual thing people believe. Reading that it’s a thing that Lightcone believes worries me, as it makes me feel like you’re drinking your own Kool-Aid too hard.
Lightcone seems to be the kind of organization that wants members who would donate to it even if they didn't work there. Startup XYZ usually isn't a place where its employees would donate if they didn't work there, so in those cases it is an HR lie.
Why do you think well-paid people take jobs as ministers or other influential political roles that pay significantly less than their previous jobs?
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2021-10-02T18:00:17.117Z · LW(p) · GW(p)
Personally, I don't pick up any vibe of "punishing" of habryka from reading these comments. And his post is highly upvoted, along with yours praising the transparency of the hiring decision. But this is very hard to tell from blog comments, and I agree that it's something to watch out for.
↑ comment by philh · 2021-10-08T21:53:00.750Z · LW(p) · GW(p)
(Should I write up a top-level post arguing this rather than leave a comment? Probably eventually).
If it saves you some effort, I feel like my "now here's why I'm punching you" [LW · GW] points at the same thing?
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2021-10-02T18:34:34.873Z · LW(p) · GW(p)
Lightcone Infrastructure is competing for talent not only with industry, but also with the rest of the nonprofit sector. You could also frame this pay rate as a way to attract people from other forms of lower-paying nonprofit work, where they may be getting, say, industry - 40% rather than Lightcone's pay rate of industry - 30%.
From that point of view, I think LI's approach makes more sense. First, attract talent from even lower-paying nonprofits, along with a few enthusiasts from industry. Once that resource is tapped out, then you start increasing your pay closer to that of industry. This pulls in people who are increasingly more driven by money than by mission.
There's a way to cast this as unfair. But I don't think that's your objection. You're more worried about whether there will be pernicious effects on the effectiveness of LC and the broader EA ecosystem by offering lower pay than you could get in industry. With this hypothesis that LC will first be primarily attracting talent from the even-lower-paying nonprofit sector in mind, here are my responses to your higher-order negative effects:
- The strategy is scalable. They can attract the talent they can get at industry -30%, then increase from there as necessary.
- The pay -> status -> values hypothesis seems shaky to me. US Senators and House representatives make a salary of $174,000/year. Doctors make $200,000/year. Who has higher status and a greater ability to shape the values of the nation, an average doctor or an average congressperson/senator? Even if pay does cause status and value-spreading ability to increase, it's not clear that this is a primary factor relative to, say, the ability to put more people on the job of promoting ideas and value systems.
- The future pay issue is equally a virtue when we consider attracting people from lower-paying nonprofit jobs. Beyond that, I think your objection is that "industry -30%" doesn't illustrate the amount of money at stake clearly enough, and that decisions that seem reasonable when we underestimate the magnitude in this way would seem unreasonable when we considered the lifetime earnings amount. That's reasonable, and easy to test by displaying the expected lifetime earnings cut along with the percentage figure. Would this change our gut reaction?
- It sounds like you're worried that people can purchase an equal or larger amount of status by giving 10% of their income to charity as they can by giving up 30% of their income working for Lightcone. If this leads to an "overallocation" of EtG, then this comes with the tacit assumption that people are using altruism as a way to chase status, and that status isn't allocated by utilitarian efficiency. Quite plausible, and I have to admit I don't know what to do about that.
- I'm not convinced by the "perverse incentives" worry. Introducing jobs that did not previously exist is creating an incentive where none existed before. What if LI had been offering unpaid internships before, and then decided to start paying those interns industry -30% salaries? Would that be a perverse incentive?
These are just off the top of my head, and I'm glad you raised these issues. Definitely worth further thought as the rationality/EA ecosystem grows!
↑ comment by SarahNibs (GuySrinivasan) · 2021-10-04T17:22:17.308Z · LW(p) · GW(p)
US Senators and House representatives make a salary of $174,000/year. Doctors make $200,000/year
Do we think that the marginal increases in a Senator-making-174k's wealth over time compared to a Doctor-making-200k look much like the marginal increases in a Doctor-making-174k's wealth compared to a Doctor-making-200k?
↑ comment by Ben Pace (Benito) · 2021-10-01T21:27:01.838Z · LW(p) · GW(p)
Man, all good points, I alas do not have the time to compose a thoughtful reply on this today. (This is not a commitment to get back later, nor a commitment not to.)
↑ comment by Chris_Leong · 2021-10-01T23:05:18.694Z · LW(p) · GW(p)
Yeah, but $150K...
(Maybe I don't understand just how expensive the Bay Area is, but $150K sounds like an awful lot.)
↑ comment by RobertM (T3t) · 2021-10-02T04:40:48.454Z · LW(p) · GW(p)
To be clear, it starts at 150k, presumably because 150k is about 30% less than the total liquid compensation offered to strong new grads by tech companies that take hiring relatively seriously. I'm a little curious how that ends up working for senior candidates who could be getting 450k (which is basically standard comp at those tech companies for senior engineers) - do you just assume that they'd be capable of passing an interview at one of those places if they clear your bar, assuming they don't work somewhere like that already? I think if you start asking people to, say, provide offer letters demonstrating their "market value", you run the risk of someone looking at their options and then changing their mind. Or worse, the thought that they might need to undergo interviews with a bunch of companies they don't even want to work at just to avoid leaving money on the table (beyond the 30% they're willingly giving up) might dissuade someone from applying at all.
Note: I actually think offering 30% below industry rates is an interesting and not-obviously-wrong idea. For one thing, I do think that it serves as a useful filter for the type of person you want to be working with on mission-aligned projects. For another, it's still substantially more than you'd make in pretty much any other non-profit context. I just think the way the number is decided should be made more legible to potential candidates, to avoid preemptively scaring anyone off. If it's as simple as "we'll take 30% off top-of-market from what a candidate with your skillset & experience could get, i.e. at FAANG/similar", then it's probably best just to say that and save everyone the headache.
↑ comment by habryka (habryka4) · 2021-10-02T05:59:16.204Z · LW(p) · GW(p)
I'm a little curious how that ends up working for senior candidates who could be getting 450k (which is basically standard comp at those tech companies for senior engineers) - do you just assume that they'd be capable of passing an interview at one of those places if they clear your bar, assuming they don't work somewhere like that already?
I am not fully sure yet what the right algorithm here will be, since we haven't run into that problem yet. My guess is I would try to call in a third party to give me a guess of how much they could make in industry, or we just negotiate a bit back-and-forth and they just tell me the evidence they have for how much they could make in industry if they tried. I can also imagine this turning out to be harder, and I would have to think more about how to best get a fair assessment here.
I think if you start asking people to, say, provide offer letters demonstrating their "market value", you run the risk of someone looking at their options and then changing their mind.
This seems like a fine outcome to me. Indeed, in the past I have told past LW/Lightcone employees to really try to look for other options and take them seriously, even after I made them an offer, so that if they do decide to take the offer we both felt confident that working at LW/Lightcone is the best choice for them.
↑ comment by RobertM (T3t) · 2021-10-02T06:24:49.656Z · LW(p) · GW(p)
Thanks, appreciate the response! My worries are mostly modeled on a hypothetical version of myself in that situation, so I don't know how they generalize.
For what it's worth I'd happily take a 30% paycut to work at an aligned org; it's moving to the Bay that's not currently in the cards. I agree that colocation is desirable for people & teams that are "actually trying" so I understand why remote work isn't on the table, though I think Jacob's idea to have offices in other major metros is interesting, assuming you get to a scale where that makes sense.
↑ comment by Ian David Moss · 2021-10-03T01:03:52.545Z · LW(p) · GW(p)
For what it's worth I actually don't buy at all that "colocation is desirable for people & teams that are 'actually trying'." I've worked with dozens of organizations as a strategy consultant over the past decade, during which time I've gotten to see a number of different office configurations ranging from 100% place-based to fully virtual and many gradations in between. While this is anecdata, I personally haven't noticed any correlation whatsoever between the office setup and the effectiveness of the team. I think there are plenty of people who don't need to be in an office to do their best work and if you have a team of people like that, then you don't need an office, period.
(Edited to add: I recognize that organizations can have all sorts of reasons for preferring an in-person presence; I was just objecting to the "actually trying" frame. I've seen too many 100% virtual teams accomplish incredible things, especially over the past year, to believe that colocation is more than a minor auxiliary factor in facilitating achievement.)
↑ comment by cousin_it · 2021-10-02T09:19:35.343Z · LW(p) · GW(p)
If the hedge benefits a lot more people than just you and the neighbor, it seems unfair to make the neighbor bear a high percentage of the cost. Maybe it makes more sense to imagine a two-step process: everyone who cares about the hedge puts in some money, then someone is hired to do the work at market rate. If the person hired wants to also donate, that's up to them.
↑ comment by Elizabeth (pktechgirl) · 2021-10-03T22:14:41.428Z · LW(p) · GW(p)
I feel like a lot of this depends on what Oliver meant by "competitive", and people are making different assumptions in the comments. I indeed think 70% of average local programmer wage would be too low, because I expect the people LC hires to be better than that average. OTOH, if it means "30% off literally the highest offer you can get", which this comment [LW(p) · GW(p)] implies, that seems pretty reasonable to me (contingent on market rate coming into it at all). The highest offer you can get probably comes with a bunch of unpleasantness they have to pay people to tolerate. People who could work at FAANGs choose to accept lower pay elsewhere for lots of reasons all the time, so I don't think there's a moral imperative to match them.
You can make separate arguments about whether market value should enter into LC compensation at all, but if it does, I don't think "70% of the highest amount you could possibly earn, for a job you will find more enjoyable on a variety of levels" is unreasonable.
[Full disclosure: I occasionally contract for LW/LC and benefit from them being freer with worker compensation]
↑ comment by habryka (habryka4) · 2021-10-03T23:19:59.747Z · LW(p) · GW(p)
30% of literally the highest offer you can get
This is roughly the sense in which I meant "competitive" (I think there are some edge cases here, where for example I don't expect we will be able to fully cover the right tail of outcomes. Like, if Sam Bankman-Fried had decided to work with us instead of founding FTX, we of course couldn't have paid him 10 billion dollars, or similar situations).
↑ comment by Elizabeth (pktechgirl) · 2021-10-04T00:44:06.994Z · LW(p) · GW(p)
FWIW that wasn't my interpretation of it when I read the draft and might be worth spelling out.
↑ comment by RyanCarey · 2021-10-04T10:36:24.638Z · LW(p) · GW(p)
Can you clarify whether you're talking about "30% of X" i.e. 0.3*X, or "30% off X", i.e. 0.7*X?
↑ comment by habryka (habryka4) · 2021-10-04T18:48:12.219Z · LW(p) · GW(p)
0.7*X
↑ comment by Chris_Leong · 2021-10-03T07:50:15.782Z · LW(p) · GW(p)
One point that hasn't been discussed here is that in all communities there are a lot of people doing valuable work for them who aren't being compensated. The higher the salaries are, the fewer people who can either be hired or offered incentives via programs such as this one [LW · GW].
↑ comment by lsusr · 2021-10-03T17:26:44.204Z · LW(p) · GW(p)
I have no interest in moving to the Bay area to develop software but if I could get paid $500 per post for a publicly-accessible article I get my name on I'd write once per week for a year on whatever Lightcone wants me to.
↑ comment by StellaAthena · 2021-10-03T21:16:27.996Z · LW(p) · GW(p)
While this is true in the abstract, AI_WAIFU links to a post that describes a situation where they are struggling to find people to give all their money to. Specifically, it describes:
- Having tens of millions of dollars per “core community member”
- Having funding grow faster than “core community membership.”
If that’s an accurate description of EA as a whole and LW is finance-bound, that indicates that LW needs to secure more funding. The funding is very clearly there.
↑ comment by ChristianKl · 2021-10-01T18:22:44.559Z · LW(p) · GW(p)
It's a way to filter out people who don't believe in the mission but just want to join because of the money.
↑ comment by mingyuan · 2021-10-01T21:50:10.123Z · LW(p) · GW(p)
I share this impression. I also just... am confused about why anyone would consider a starting salary of $150k/year + healthcare insufficient. I guess maybe if you're buying a house? Or sending a kid to college? I mean, I live in the Bay and have never made anywhere close to $150k/year, and I am far from financially insecure.
Programmer salaries are insane, and most people (e.g. me) are not programmers, and manage to survive. I just feel like, if your objection is, "Well I'm worth more than that on the free market," then just... go work somewhere else, if that's what you care about? Nobody needs a salary of $450k/year!!!
Another EA/rationalist org I've worked at had a policy of "We don't want salary to be a major reason for people to want to work here, and we don't want it to be a reason for them to not want to work here." That makes a lot of sense to me, and I think it's probably what Lightcone is going for?
I don't know, like, I can sort of see where the other side is coming from. But it also still seems crazy to me.
↑ comment by Elizabeth (pktechgirl) · 2021-10-03T21:51:13.995Z · LW(p) · GW(p)
I guess maybe if you're buying a house? Or sending a kid to college?
Without taking a side on the overall policy: buying a house and raising children are extremely normal things to do and want to do, and it would be bad if people had to choose between working for Lightcone and doing them, especially if Lightcone could pay them more without affecting other programs. I feel like we in the Bay have been frogboiled to the point of not noticing a bunch of sacrifices we make to be here.
I haven't done the math on what the listed salaries actually produce in terms of lifestyle, I'm not saying these particular salaries actually preclude what I consider reasonable, I'm only claiming that "it's only low if you want a house and children" is not a good argument that a salary is sufficient.
[Full disclosure: I occasionally contract for LW/LC and benefit from them being freer with worker compensation]
↑ comment by StellaAthena · 2021-10-03T21:19:30.301Z · LW(p) · GW(p)
Another EA/rationalist org I've worked at had a policy of "We don't want salary to be a major reason for people to want to work here, and we don't want it to be a reason for them to not want to work here." That makes a lot of sense to me, and I think it's probably what Lightcone is going for?
I think that having a blanket policy of “we aim to underpay you by 30% compared to what you would get on the open market” is making pay a reason to not work there. I don’t disagree that the salaries under discussion are massive, but I would never work for a place that openly brags about underpaying me by 30% as if that’s a moral high ground.
I don’t live on the west coast and can’t speak to how far different salaries go, but the rhetoric and strategy being employed here is a major red flag to me.
↑ comment by Brendan Long (korin43) · 2021-10-01T19:51:59.926Z · LW(p) · GW(p)
At least to me, it sounds like a way to filter out people who believe in the mission but don't want to be intentionally underpaid.
↑ comment by Raemon · 2021-10-01T21:44:06.199Z · LW(p) · GW(p)
Note that what's being paid here is, like, way more than nonprofits normally pay.
↑ comment by Brendan Long (korin43) · 2021-10-03T20:22:26.228Z · LW(p) · GW(p)
Yes, but nonprofits usually underpay people because of their funding constraints, not as a hazing ritual. There's a big difference between "We believe that your work is worth x but we can't pay you that much because of funding constraints" and "We believe your work is worth x and we're not going to pay you that because we want you to prove your loyalty".
↑ comment by Linch · 2021-10-07T11:01:29.716Z · LW(p) · GW(p)
"Funding constraints" are almost always fake. Givedirectly can double their pay and just give less to recipients if they wanted to, for example.
Institutions also usually have the option to just hire less people or fire more people.
I feel like treating fake constraints as a clear decision boundary is silly; what happened here is that Lightcone+ surrounding ecosystems chose to make the fake constraints less of a constraint and more of a visible choice.
↑ comment by StellaAthena · 2021-10-03T21:21:08.329Z · LW(p) · GW(p)
I strongly upvoted this comment and am sad that it has net negative votes. I was going to say the exact same thing.
↑ comment by StellaAthena · 2021-10-02T18:33:51.140Z · LW(p) · GW(p)
That would make a lot more sense to me as a justification for not paying more than market rate than as a justification for paying significantly below market rate.
Also, if someone is good at the job why does it matter if they don’t believe in the mission? If they’re a grifter looking for more money you can just fire them right?
↑ comment by ChristianKl · 2021-10-02T19:20:53.892Z · LW(p) · GW(p)
Also, if someone is good at the job why does it matter if they don’t believe in the mission? If they’re a grifter looking for more money you can just fire them right?
People who aren't interested in the mission will optimize their actions not in favor of the mission but in favor of what advances their own power. Most institutions are dysfunctional because of infighting, and it's important that this one doesn't go that route.
↑ comment by StellaAthena · 2021-10-03T21:22:16.271Z · LW(p) · GW(p)
Why is this problem better solved by systematically underpaying everyone as opposed to firing people who act “in favor of what advances their own power” or who promote infighting?
↑ comment by Joe Collman (Joe_Collman) · 2021-10-03T21:47:27.829Z · LW(p) · GW(p)
I think the essential point is that you're actually not underpaying them - in terms of their own utility gain (if they believe in the mission). You're only 'underpaying' them in terms of money.
It's still not obviously the correct approach (externalities are an issue too), but [money != utility].
comment by Kaj_Sotala · 2021-10-01T13:07:54.348Z · LW(p) · GW(p)
Thank you for all your work so far! It's been great seeing LW come back to life again.
Here's to hoping we'll have many local LessWrongs all over the future lightcone.
comment by XFrequentist · 2021-10-01T15:41:13.946Z · LW(p) · GW(p)
The lightcone is such a great symbol. It also kind of looks like an hourglass, evoking (to me) the image of time (and galaxies) slipping away. Kudos!
comment by Chris_Leong · 2021-10-03T09:52:25.641Z · LW(p) · GW(p)
So can you tell us more about this whole campus project?
comment by TurnTrout · 2021-10-01T15:23:58.112Z · LW(p) · GW(p)
many of our colleagues and friends (including GPT-3)
So, which is it—is GPT-3 a colleague, or a friend? (I know my answer)
↑ comment by Ben Pace (Benito) · 2021-10-01T18:24:51.491Z · LW(p) · GW(p)
The truth is that it's both.
I have re-written the paragraph in question, to remove any ambiguity.
↑ comment by Ben Pace (Benito) · 2021-10-01T18:25:25.648Z · LW(p) · GW(p)
(Note: This answer was generated by GPT-3. I have not re-written the paragraph.)
comment by Arthur Milchior (Arthur-Milchior) · 2021-10-01T02:10:00.399Z · LW(p) · GW(p)
May I suggest indicating in this post itself that it's in Berkeley? Lots of jobs are remote these days, and I'd expect other people to want to find this information quickly, as it's an easy decision factor.
Also, if I may ask about "no longer seems sufficient": did you think it was? The sentence seems really strange, to be honest; if you did think so, I'd be curious whether you have a text where you explained why, as it seems quite surprising.
↑ comment by habryka (habryka4) · 2021-10-01T03:38:52.573Z · LW(p) · GW(p)
Also, if I may ask about "no longer seems sufficient": did you think it was? The sentence seems really strange, to be honest; if you did think so, I'd be curious whether you have a text where you explained why, as it seems quite surprising.
I do think something like this is kind of correct. It's not that I thought that nothing else had to happen between now and then for humanity to successfully reach the stars, but I did meaningfully think that there were a good number of universes where my work on LessWrong would make the difference (with everyone else of course also doing things), and that I was really moving probability mass.
I still think I moved some probability mass, but I further updated that in order to realize a bunch of that probability mass that I was hoping for, I need to get some other pieces in place. Which is something I didn't think was as necessary, and I used to think more that the online component of things would itself be sufficient to realize a lot of that probability mass.
I definitely didn't believe that if I were to just make LessWrong great, existential risk would be solved in most worlds.
↑ comment by Chris_Leong · 2021-10-01T03:52:16.907Z · LW(p) · GW(p)
What do you think these components are?
↑ comment by habryka (habryka4) · 2021-10-01T05:22:39.749Z · LW(p) · GW(p)
Having an in-person campus that allows people to have really good high-bandwidth communication is a big component that I now think is a really useful thing to have in many worlds.
On a higher level of abstraction, I have an internal model that suggests something like the following three components are things that are quite important for AGI (and some other x-risks) to go right:
- The ability to do really good research and be really good at truth-seeking (necessary to solve various parts of the AI Alignment problem, and also in general just seems really important for a community to have for lots of reasons)
- The ability to take advantage of crises and navigate really quickly changing situations (as a concrete intuition pump, I currently believe that before something like AGI we will probably have something like 10 more years at least as crazy as 2020, and I have a sense that some of the worlds where things go well, are worlds where a bunch of people concerned about AI Alignment are well set-up to take advantage of them, and make sure to not get wiped out by them)
- The ability to have high-stakes negotiations with large piles of resources and people (like, I think it's pretty plausible that in order to actually get the right AI Alignment solution deployed, and to avoid us getting killed some other way before then, some people who have some of the relevant components of solutions will need to negotiate in some pretty high-stakes situations to actually make them happen. And in a much more coherent way than people are currently capable of.)
These are all pretty abstract and high-level; I have a lot of more concrete thoughts, though it would take me a while to write them up.
comment by Alex Flint (alexflint) · 2021-10-06T04:29:05.586Z · LW(p) · GW(p)
Thank you for the work you are doing!
comment by George3d6 · 2021-10-10T18:56:23.668Z · LW(p) · GW(p)
Looking forward to seeing the projects that come out of this. The LW UX is certainly the most impressive thing about the site, and one of the few examples of "modern web design" done well that I've seen.
Not being remote seems really weird given the economics of it, but to each their own, I guess.
↑ comment by ChristianKl · 2021-10-11T13:38:32.439Z · LW(p) · GW(p)
Not being remote seems really weird given the economics of it, but to each their own, I guess.
One of their projects seems to be to build an in-person campus. For that it's helpful if the whole team is in one place.
comment by Raj Thimmiah (raj-thimmiah) · 2021-10-04T17:02:16.157Z · LW(p) · GW(p)
Asking for a friend of mine: would you be willing to hire/manage visa stuff for people who are interested in working at Lightcone but live abroad?
↑ comment by Ben Pace (Benito) · 2021-10-04T17:35:47.129Z · LW(p) · GW(p)
Yep! We have done so many times already, this is something we know how to do. The team is 4/5ths not-American.
comment by lsusr · 2021-10-01T04:14:30.551Z · LW(p) · GW(p)
I like the light cone as a symbol, because it represents the massive scale of opportunity that humanity is presented with. If things go right, we can shape almost the full light cone of humanity to be full of flourishing life.
This is good to know. When I first heard "lightcone" I thought it referred to a siloed organizational structure i.e. one subsidiary tree cannot affect the others.
↑ comment by Ben Pace (Benito) · 2021-10-01T04:26:08.498Z · LW(p) · GW(p)
The teams would have to be really far away from each other to have separate future lightcones :)
comment by Zach Stein-Perlman · 2021-10-01T23:00:02.119Z · LW(p) · GW(p)
You want "to build a thriving in-person rationality and longtermism community in the Bay Area." That sounds great. How do you plan to do it, at any level of generality? 'Thriving community' can mean a lot of different things.
comment by MondSemmel · 2021-11-03T18:13:53.941Z · LW(p) · GW(p)
There's currently a donation matching drive via the EA forums [EA · GW], which prompted some questions I didn't see answered in this announcement.
According to this post from 2019 [LW · GW]:
The LessWrong team operates legally as part of the Center for Applied Rationality while retaining full autonomy over both internal decision-making and decisions concerning the LessWrong website.
Is Lightcone Infrastructure still part of CFAR, or is it now an independent legal entity?
If the latter, is it a for-profit or a nonprofit? If nonprofit, is there a way to donate to it?
If it's still part of CFAR: if I donate to CFAR, how are donations allocated between CFAR-itself and Less Wrong / Lightcone?
↑ comment by Ben Pace (Benito) · 2021-11-03T18:24:01.358Z · LW(p) · GW(p)
We're still the same legal entity (and still essentially different orgs with no overlap in management / leadership except that the same person files taxes / expenses for both orgs).
It's a non-profit, and the easiest way to donate is using the PayPal link on the donate page [? · GW] in the left sidebar :)
comment by makeswell · 2021-10-18T01:08:59.028Z · LW(p) · GW(p)
I would love to work on this. I applied through your website. Commenting here in case you get a huge flood of random resumes, then maybe my comment will help me stand out. Here's my LinkedIn: https://www.linkedin.com/in/max-pietsch-1ba12ba7/
↑ comment by Ben Pace (Benito) · 2021-10-18T05:26:31.659Z · LW(p) · GW(p)
We’re reading them all. Please don’t also leave a comment just to stand out, that’s not a good race to the bottom. (Thanks for your application!)