Posts

5 homegrown EA projects, seeking small donors 2024-10-28T23:24:25.745Z
If far-UV is so great, why isn't it everywhere? 2024-10-19T18:56:58.910Z
Announcing the $200k EA Community Choice 2024-08-14T00:39:37.350Z
What are you getting paid in? 2024-07-17T19:23:04.219Z
Podcast: Elizabeth & Austin on "What Manifold was allowed to do" 2024-06-28T22:10:41.607Z
Episode: Austin vs Linch on OpenAI 2024-05-25T16:15:38.790Z
Manifund Q1 Retro: Learnings from impact certs 2024-05-01T16:48:33.140Z
Manifund: 2023 in Review 2024-01-18T23:50:13.557Z
Manifold Halloween Hackathon 2023-10-23T22:47:18.462Z
Prediction markets covered in the NYT podcast “Hard Fork” 2023-10-13T18:43:29.644Z
NYT on the Manifest forecasting conference 2023-10-09T21:40:16.732Z
Manifest 2023 2023-09-06T11:24:31.274Z
Last Chance: Get tickets to Manifest 2023! (Sep 22-24 in Berkeley) 2023-09-06T10:35:37.510Z
Announcing Manifest 2023 (Sep 22-24 in Berkeley) 2023-08-14T05:13:03.186Z
Manifund: What we're funding (weeks 2-4) 2023-08-04T16:00:33.227Z
A $10k retroactive grant for VaccinateCA 2023-07-27T18:14:44.305Z
Announcing Manifund Regrants 2023-07-05T19:42:08.978Z
Manifund x AI Worldviews 2023-03-31T15:32:05.853Z
Postmortem: Trying out for Manifold Markets 2022-09-08T17:54:09.890Z
Prediction markets meetup/coworking (hosted by Manifold Markets) 2022-07-26T00:14:53.704Z
What We Owe the Past 2022-05-05T11:46:38.015Z
Predicting for charity 2022-05-02T22:59:49.741Z
Austin Chen's Shortform 2022-04-02T02:54:43.792Z
Manafold Markets is out of mana 🤭 2022-04-01T22:07:34.081Z
Create a prediction market in two minutes on Manifold Markets 2022-02-09T17:36:56.320Z

Comments

Comment by Austin Chen (austin-chen) on Eli's shortform feed · 2024-11-11T19:04:09.765Z · LW · GW

I mean, it's obviously very dependent on your personal finance situation, but I'm using $100k as an order-of-magnitude proxy for "about a year's salary". I think it's very coherent to give up a year of marginal salary in exchange for finding the love of your life, rather than like $10k or ~1 month of salary.

Of course, the world is full of mispricings, and currently you can save a life for something like $5k. I think these are both good trades to make, and most people should have a portfolio that consists of both "life partners" and "impact from lives saved" and crucially not put all their investment into just one or the other.

Comment by Austin Chen (austin-chen) on Eli's shortform feed · 2024-11-10T06:13:40.028Z · LW · GW

Mm I think it's hard to get optimal credit allocation, but easy to get half-baked allocation, or just see that it's directionally way too low? Like sure, maybe it's unclear whether Hinge deserves 1% or 10% or ~100% of the credit but like, at a $100k valuation of a marriage, one should be excited to pay $1k to a dating app.

Like, I think matchmaking is very similarly shaped to the problem of recruiting employees, but there, corporations are more locally rational about spending money than individuals are, and can do things like pay $10k referral bonuses or offer external recruiters 20% of the placed candidate's first-year salary.

Comment by Austin Chen (austin-chen) on Eli's shortform feed · 2024-11-09T05:43:27.172Z · LW · GW

Basically: I don't blame founders or companies for following their incentive gradients; I blame individuals/society for being unwilling to assign reasonable prices to important goods.

I think the bad-ness of dating apps is downstream of poor norms around impact attribution for matches made. Even though relationships and marriages are extremely valuable, individual people are not in the habit of paying anything close to that value to anyone.

Like, $100k or a year's salary seems like a very cheap value to assign to your life partner. If dating apps could rely on that size of payment when they succeed, then I think there could be enough funding for at least a good small business. But I've never heard of anyone actually paying anywhere near that (myself included, though I did make a retroactive $1k payment to the person who organized the conference where I met my wife).

I think keeper.ai tries to solve this with large bounties on dating/marriages; it's one of the things I wish we'd pushed for more on Manifold Love. It seems possible to build one for the niche of "the EA/rat community"; Manifold Love, the checkboxes thing, and dating docs all got pretty good adoption for not that much execution.

(Also: be the change! I think building out OKC is one of the easiest "hello world" software projects one could imagine; Claude could definitely make a passable version in a day. Then you'll discover a bunch of hard stuff around getting users, but it sure could be a good exercise.)

Comment by Austin Chen (austin-chen) on 5 homegrown EA projects, seeking small donors · 2024-11-08T06:56:20.053Z · LW · GW

Thanks for forwarding my thoughts!

I'm glad your team is equipped to do small, quick grants - from where I am on the outside, it's easy to accidentally think of OpenPhil as a single funding monolith, so I'm always grateful for directional updates that help the community understand how to better orient to y'all.

I agree that 3 months seems reasonable when $500k+ is at stake! (I think, just skimming the application, I mentally rounded off "3 months or less" to "about 3 months", as kind of a learned heuristic on how orgs relate to timelines they publish.)

As another data point, from the Survival and Flourishing Fund: turnaround (from our application to decision) was about 5 months this year, for an ultimately $90k grant (we were applying for up to $1.2m). I think this year they were unusually slow due to changing over their processes; in past years it's been closer to 2-3 months.

Our own philosophy at Manifund does emphasize "moving money quickly", to almost a sacred level. This comes from watching programs like Fast Grants and Future Fund, and also our own lived experience as grantees. For grantees, knowing 1 month sooner that money is coming often means that one can start hiring and executing 1 month sooner - and the impact of executing even 1 day sooner can sometimes be immense (see: https://www.1daysooner.org/about/ )

Comment by Austin Chen (austin-chen) on 5 homegrown EA projects, seeking small donors · 2024-11-07T00:08:19.608Z · LW · GW

@Matt Putz thanks for supporting Gavin's work and letting us know; I'm very happy to hear that my post helped you find this!

I also encourage others to check out OP's RFPs. I don't know about Gavin, but I was peripherally aware of this RFP, and it wasn't obvious to me that Gavin should have considered applying, for these reasons:

  1. Gavin's work seems aimed internally towards existing EA folks, while this RFP's media/comms examples (at a glance) seem to be aimed externally, for public-facing outreach
  2. I'm not sure what typical grant size the OP RFP is targeting, but my cached heuristic is that OP tends to fund projects looking for $100k+ and that smaller projects should look elsewhere (eg through EAIF or LTFF), due to grantmaker capacity constraints on OP's side
  3. Relatedly, the idea of filling out an OP RFP seems somewhat time-consuming and burdensome (eg somewhere between 3 hours and 2 days), so I think many grantees might not consider doing so unless asking for large amounts
  4. Also, the RFP form seems to indicate a turnaround time of 3 months, which might have seemed too slow for a project like Gavin's

I'm evidently wrong on all these points given that OP is going to fund Gavin's project, which is great! So I'm listing these in the spirit of feedback. Some easy wins to encourage smaller projects to apply might be to update the RFP page to 1. list some example grants and grant sizes that were sourced through this, and 2. describe how much time you expect an applicant to take to fill out the form (something EA Funds does, which I appreciate, even if I invariably take much more time than they state).

Comment by Austin Chen (austin-chen) on Why I quit effective altruism, and why Timothy Telleen-Lawton is staying (for now) · 2024-10-28T23:39:16.775Z · LW · GW

Do you not know who the living Pope is, while still believing he's the successor to Saint Peter and has authority delegated from Jesus to rule over the entire Church?

I understand that the current pope is Pope Francis, but I know much much more about the worldviews of folks like Joe Carlsmith or Holden Karnofsky, compared to the pope. I don't feel this makes me not Catholic; I continue to go to church every Sunday, live my life (mostly) in accordance with Catholic teaching, etc. Similarly, I can't name my senator or representative and barely know what Biden stands for, but I think I'm reasonably American.

All the people I know who worked on those trips (either as an organiser or as a volunteer) don't think it helped their epistemics at all, compared to e.g. reading the literature on development economics.

I went on one of those trips as a middle schooler (to Mexico, not Africa). I don't know that it helped my epistemics much, but I did get, like, a visceral experience of what life is like for someone in a third-world country, one that I wouldn't have gotten otherwise and that no amount of reading research literature would replicate.

I don't literally think that every EA should book plane tickets to Africa, or break into a factory farm, or whatnot. (though: I would love to see some folks try this!) I do think there's an overreliance on consuming research and data, and an underreliance on just doing things and having reality give you feedback.

Comment by Austin Chen (austin-chen) on Why I quit effective altruism, and why Timothy Telleen-Lawton is staying (for now) · 2024-10-24T02:05:45.626Z · LW · GW

Insofar as you're thinking I said bad people, please don't let yourself make that mistake, I said bad values. 

I appreciate you drawing the distinction! The bit about "bad people" was more directed at Tsvi, or possibly the voters who agreevoted with Tsvi.

There's a lot of massively impactful difference in culture and values

Mm, I think if the question is "what accounts for the differences between the EA and rationalist movements today, wrt number of adherents, reputation, amount of influence, achievements" I would assign credit in the ratio of ~1:3 to differences in (values held by individuals):systems. Where systems are roughly: how the organizations are set up, how funding and information flows through the ecosystem.

(As I write this, I realize that maybe even caring about adherents/reputation/influence/achievement in the first place is an impact-based, EA-frame, and the thing that Ben cares about is more like "what accounts for the differences in their philosophies or gestalt of what it feels like to be in the movement"; I feel like I'm lowkey failing an ITT here...)

Comment by Austin Chen (austin-chen) on Why I quit effective altruism, and why Timothy Telleen-Lawton is staying (for now) · 2024-10-24T00:27:06.107Z · LW · GW

Mm I basically agree that:

  • there are real value differences between EA folks and rationalists
  • good intentions do not substitute for good outcomes

However:

  • I don't think differences in values explain much of the differences in results - sure, truthseeking vs impact can hypothetically lead one in different directions, but in practice I think most EAs and rationalists are extremely value aligned
  • I'm pushing back against Tsvi's claims that "some people don't care" or "EA recruiters would consciously choose 2 zombies over 1 agent" - I think ascribing bad intentions to individuals ends up pretty mindkilly

Basically, insofar as EA is screwed up, it's mostly caused by bad systems, not bad people, as far as I can tell.

Comment by Austin Chen (austin-chen) on Why I quit effective altruism, and why Timothy Telleen-Lawton is staying (for now) · 2024-10-23T17:42:17.347Z · LW · GW

Mm, I'm extremely skeptical that the inner experience of an EA college organizer or the CEA Groups team is usefully modeled as "I want recruits at all costs". I predict that if you talked to one and asked them about it, you'd find the same.

I do think that it's easy to accidentally goodhart or be unreflective about the outcomes of pursuing a particular policy -- but I'd encourage y'all to extend somewhat more charity to these folks, who I generally find to be very kind and well-intentioned.

Comment by Austin Chen (austin-chen) on Why I quit effective altruism, and why Timothy Telleen-Lawton is staying (for now) · 2024-10-22T21:54:05.823Z · LW · GW

Some notes from the transcript:

I believe there are ways to recruit college students responsibly. I don't believe the way EA is doing it really has a chance to be responsible. I would say, the way EA is doing it can't filter and inform the way healthy recruiting needs to.  And they're funneling people, into something that naivete hurts you in. I think aggressive recruiting is bad for both the students and for EA itself.

Enjoyed this point -- I would guess that the feedback loop from EA college recruiting is super long and is weakly aligned.  Those in charge of setting recruiting strategy (eg CEA Groups team, and then university organizers) don't see the downstream impacts of their choices, unlike in a startup where you work directly with your hires, and quickly see whether your choices were good or bad.

Might be worth examining how other recruiting-driven companies (like Google) or movements (...early Christianity?) maintain their values, or degrade over time.

Seattle EA watched a couple of the animal farming suffering documentaries. And everyone was of course horrified. But not everyone was ready to just jump on, let's give this up entirely forever. So we started doing more research, and I posted about, a farm a couple hours away that did live tours, and that seemed like a reasonable thing to learn, like, a limited but useful thing.

Definitely think that on the margin, more "directly verifying base reality with your own eyes" would be good in EA circles. Eg at one point, I was very critical of those mission trips to Africa where high schoolers spend a week digging a well; "obviously you should just send cash!" But now I'm much more sympathetic.

This also stings a bit for Manifund; like 80% of what we fund is AI safety but I don't really have much ability to personally verify that the stuff we funded is any good.

The natural life cycle of movements and institutions is to get captured and be pretty undifferentiated from other movements in their larger cultural context. They just get normal because normal is there for a reason and normal is easiest.  And if you want to do better than that, if you want to keep high epistemics, because normal does not prioritize epistemics, you need to be actively fighting for it, and bringing a high amount of skill to it. I can't tell you if EA is degrading at like 5 percent a year or 25 percent a year, I can tell you that it is not self correcting enough to escape this trap.

I think not enforcing an "in or out" boundary is a big contributor to this degradation -- like, majorly successful religions required all kinds of sacrifice and

What I think is more likely than EA pivoting is a handful of people launch a lifeboat and recreate a high integrity version of EA. 

It feels like AI safety is the best current candidate for this, though that is also much less cohesive and not a direct successor in a bunch of ways. I too have lately been wondering what "post-EA" looks like.

I hear that as every true wizard must test the integrity of their teacher or of their school, Hogwarts, whatever the thing is. The reason you don't get to graduate until you actually test the integrity of the school is because if you're just taking it on its own word, then you could become a villain.

You have to respect your own moral compass to be able to be trusted.

Really liked this analogy!

Which EA leaders do you most resonate with?

I would suggest that if you don't care about the movement leaders who have any steering power, you're not in that movement.

I like this as a useful question to keep in mind, though I don't think it's totally explanatory. I think I'm reasonably Catholic, even though I don't know anything about the living Catholic leaders.

Timothy: Give me a vision of a different world where EA would be better served by having leadership that actually was willing to own their power more

Elizabeth: which you'll notice even Holden won't do

Timothy: yeah, he literally doesn't want the power.

Elizabeth: Yeah, none of them do. CEA doesn't want it. 

I've been thinking that EA should try to elect a president: someone who is empowered but also accountable to the general people in the movement, a Schelling person to be the face of EA. (Plus of course, we'd get to debate stuff like optimal voting systems and enfranchisement -- my kind of catnip.)

Comment by Austin Chen (austin-chen) on Start an Upper-Room UV Installation Company? · 2024-10-20T08:21:53.556Z · LW · GW

Hm, I expect the advantage of far UV is that many places where people want to spend time indoors are not already well-ventilated, or that it'd be much more expensive to modify existing HVAC setups vs just sticking a lamp on a wall.

I'm not at all familiar with the literature on safety; my understanding (based on this) is that no, we're not sure and more studies would be great, but there's a vicious cycle/chicken-and-egg problem where the lamps are expensive, so studies are expensive, so there aren't enough studies, so nobody buys lamps, so lamp companies don't stay in business, so lamps are expensive.

Comment by Austin Chen (austin-chen) on Start an Upper-Room UV Installation Company? · 2024-10-19T19:14:29.288Z · LW · GW

Another similar company I want someone to start is one that produces inexpensive, self-installable far UV lamps. My understanding is that far UV is safe to shine directly on humans (as opposed to standard UV), meaning that you don't need high ceilings or special technicians to install the lamp. However, it's a much newer technology with not very much adoption or testing, I think because of a combination of principal/agent problems and price; see this post on blockers to Far UV adoption.

Beacon does produce these $800 lamps, which are consumer friendly-ish. I bought one for the Manifold office, but due to a variety of trivial inconveniences (figuring out where to mount it; the mobile app not syncing with my phone) it's still not active. I think a competent operator in this space could make a device that's somewhat cheaper & easier to use, and hit a tipping point for widespread/viral adoption.

(If you or someone you know is interested in doing this and is looking for funding, reach out to me at austin@manifund.org!)

Comment by Austin Chen (austin-chen) on An Interactive Shapley Value Explainer · 2024-09-29T09:21:00.363Z · LW · GW

(maybe the part that seems unrealistic is the difficulty of eliciting values for the power set of possible coalitions, as generating a value for any one coalition feels like an expensive process, and the size of a power set grows exponentially with the number of players)
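
To make that blow-up concrete, here's a minimal Python sketch of the exact Shapley computation on a made-up three-player game (the game and payoff function are purely illustrative): the value function has to be defined on every one of the 2^n coalitions, which is exactly the elicitation cost that gets painful as players are added.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values by enumerating every coalition.

    `value` maps a frozenset of players to that coalition's payoff, so it
    must be defined on all 2^n subsets -- the elicitation cost noted above.
    """
    n = len(players)
    shapley = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                s = frozenset(coalition)
                # Weight = |S|! * (n - |S| - 1)! / n!
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                shapley[p] += weight * (value(s | {p}) - value(s))
    return shapley

# Toy example: any coalition of 2+ players earns 100, smaller ones earn 0.
players = ["A", "B", "C"]
payoff = lambda s: 100 if len(s) >= 2 else 0
print(shapley_values(players, payoff))  # each player gets ~33.3
```

Even at 20 players that's over a million coalition values to elicit, which is why practical applications tend to fall back on sampling or on extra structure in the game.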

Comment by Austin Chen (austin-chen) on An Interactive Shapley Value Explainer · 2024-09-29T09:17:40.322Z · LW · GW

This is extremely well produced; I think it's the best introduction to Shapley values I've ever seen. Kudos for the simple explanation and approachable designs!

(Not an indictment of this site, but with this as with other explainers, I still struggle to see how to apply Shapley values to any real-world problems, haha - unlike something like quadratic funding, which also sports fancy mechanism math but where it's much more obvious how to use it.)

Comment by Austin Chen (austin-chen) on Announcing the $200k EA Community Choice · 2024-08-14T21:23:33.945Z · LW · GW

Thanks for the correction! My own interaction with Lighthaven is as an event space foremost, then housing, then coworking; for the purposes of EA Community Choice we're not super fussed about drawing clean categories, and we'd be happy to support a space like Lighthaven for any (or all) of those categories.

Comment by Austin Chen (austin-chen) on Announcing the $200k EA Community Choice · 2024-08-14T21:20:27.597Z · LW · GW

For now I've just added your existing project into EA Community Choice; if you'd prefer to create a subproject with a different ask, that's fine too, and I can remove the old one. I think adding the existing one is a bit less work for everyone involved -- especially since your initial proposal has a lot more room for funding. (We'll figure out how to do the quadratic match correctly on our side.)

Comment by Austin Chen (austin-chen) on Announcing the $200k EA Community Choice · 2024-08-14T21:17:03.478Z · LW · GW

I recommend adding existing applications to "EA Community Choice". I've done so for you now, so the project will be visible to people browsing projects in this round, and future donations will count towards the quadratic funding match. Thanks for participating!

Comment by Austin Chen (austin-chen) on keltan's Shortform · 2024-05-30T03:57:57.389Z · LW · GW

Welcome to the US; excited for your time at LessOnline (and maybe Manifest too?)

And re: 19., we're working on it![1]

  1. ^

    (Sorry, that was a lie too.)

Comment by Austin Chen (austin-chen) on Elizabeth's Shortform · 2024-05-28T21:01:29.789Z · LW · GW

One person got some extra anxiety because their paragraph was full of TODOs (because it was positive and I hadn’t worked as hard fleshing out the positive mentions ahead of time).


I think you're talking about me? I may have miscommunicated; I was ~zero anxious, instead trying to signal that I'd looked over the doc as requested, and poking some fun at the TODOs.

FWIW I appreciated your process for running criticism ahead of time (and especially enjoyed the back-and-forth comments on the doc; I'm noticing that those kinds of conversations on a private GDoc seem somehow more vibrant/nicer than the ones on LW or on a blog's comments.)

Comment by Austin Chen (austin-chen) on Episode: Austin vs Linch on OpenAI · 2024-05-26T04:37:15.286Z · LW · GW

most catastrophes through both recent and long-ago history have been caused by governments

 

Interesting lens! Though I'm not sure if this is fair -- the largest things that are done tend to get done through governments, whether those things are good or bad. If you blame catastrophes like Mao's famine or Hitler's genocide on governments, you should also credit things like slavery abolition and vaccination and general decline of violence in civilized society to governments too.

I'd be interested to hear how Austin has updated regarding Sam's trustworthiness over the past few days.

Hm, I feel like a bunch of people have updated majorly negatively, but I haven't -- only by small amounts. I think he eg gets credit for the ScarJo thing. I am mostly withholding judgement, though; now that the NDAs have been dropped, I'm curious to see what comes to light. (If nothing does, that would be more positive credit towards Sam, and some validation of my point that the NDAs were not really concealing much.)

Comment by Austin Chen (austin-chen) on Episode: Austin vs Linch on OpenAI · 2024-05-26T03:51:02.391Z · LW · GW

Ah interesting, thanks for the tips.

I use filler words a lot, so I thought the um/ah removal was helpful (it actually cut the recording down by something like 10 minutes overall). It's especially good for making the transcript readable, though perhaps I could just edit the transcript without changing the audio/video.

Comment by Austin Chen (austin-chen) on Episode: Austin vs Linch on OpenAI · 2024-05-26T03:49:13.019Z · LW · GW

Thanks for the feedback! I wasn't sure how much effort to put into producing this transcript (this entire podcast thing is pretty experimental); good to know you were trying to read along.

It was machine-transcribed via Descript, but then I put in another ~90 min cleaning it up a bit, removing filler words and correcting egregious mistranscriptions. I could have spent another hour or so to really clean it up, and perhaps will do so next time (or find some scalable way to handle it, eg outsourcing or an LLM). I think that put it in an uncanny valley of "almost readable, but quite a bad experience".

Comment by Austin Chen (austin-chen) on Episode: Austin vs Linch on OpenAI · 2024-05-26T03:46:02.686Z · LW · GW

Yeah I meant her second post, the one that showed off the emails around the NDAs.

Comment by Austin Chen (austin-chen) on Talent Needs of Technical AI Safety Teams · 2024-05-24T22:21:11.857Z · LW · GW

Hm, I disagree and would love to operationalize a bet/market on this somehow; one approach is something like "Will we endorse Jacob's comment as 'correct' 2 years from now?", resolved by a majority of Jacob + Austin + <neutral 3rd party>, after deliberating for ~30m.

Comment by Austin Chen (austin-chen) on MATS Winter 2023-24 Retrospective · 2024-05-12T21:55:46.821Z · LW · GW

Starting new technical AI safety orgs/projects seems quite difficult in the current funding ecosystem. I know of many alumni who have founded or are trying to found projects who express substantial difficulties with securing sufficient funding.

Interesting - what's, like, the minimum funding ask to get a new org off the ground? I think something like $300k would be enough to cover ~9 months of salary and compute for a team of ~3, and that seems quite reasonable to raise in the current ecosystem for pre-seeding an org.

Comment by Austin Chen (austin-chen) on What's with all the bans recently? · 2024-04-05T23:53:37.588Z · LW · GW

I very much appreciate @habryka taking the time to lay out your thoughts; posting like this is also a great example of modeling out your principles. I've spent copious amounts of time shaping the Manifold community's discourse and norms, and this comment has a mix of patterns I've found true in my own experience (eg the bits about case law and avoiding echo chambers) and good learnings for me (eg that young/non-English speakers improve more easily).

Comment by Austin Chen (austin-chen) on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-26T20:03:03.982Z · LW · GW

So, I love Scott, consider CM's original article poorly written, and also think doxxing is quite rude, but with all the disclaimers out of the way: on the specific issue of revealing Scott's last name, Cade Metz seems more right than Scott here? Scott was worried about a bunch of knock-on effects of having his last name published, but none of that bad stuff happened.[1]

I feel like at this point in the era of the internet, doxxing (at least, in the form of involuntary identity association) is much more of an imagined threat than a real harm. Beff Jezos's more recent doxxing also comes to mind as something that was more controversial for the controversy itself than for any factual harm done to Jezos as a result.

  1. ^

    Scott did take a bunch of ameliorating steps, such as leaving his past job -- but my best guess is that none of that would have actually been necessary. AFAICT he's actually in a much better financial position thanks to his subsequent transition to Substack -- though crediting Cade Metz for this is a bit like crediting Judas for starting Christianity.

Comment by Austin Chen (austin-chen) on Increase the tax value of donations with high-variance investments? · 2024-03-03T04:19:03.767Z · LW · GW

My friend Eric once proposed something similar, except where two charitable individuals just create the security directly. Say Alice and Bob both want to donate $7,500 to GiveWell; instead of doing so directly, they could create a security which is "flip a coin, winner gets $15,000". They do so; Alice wins, waits a year, and donates the security as $15,000 of appreciated long-term gains, getting the full tax deduction, while Bob deducts the $7,500 loss.

This seems to me like it ought to work, but I've never actually tried this myself...
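
For what it's worth, here's the arithmetic spelled out as a rough sketch; the 37% marginal rate is just an assumed number for illustration, and this only restates the comment's logic rather than offering tax advice.

```python
# Back-of-the-envelope arithmetic for the coin-flip security described above.
# Assumes a 37% marginal rate and that the deduction/loss treatment works as
# described -- purely illustrative.
marginal_rate = 0.37
stake = 7_500  # each person's contribution

# Baseline: both donate directly.
direct_deductions = 2 * stake  # $15,000 of deductions total

# Coin-flip security: the winner donates a $15,000 appreciated asset,
# the loser claims a $7,500 loss.
coinflip_deductions = 2 * stake + stake  # $22,500 of deductions total

extra_tax_saved = (coinflip_deductions - direct_deductions) * marginal_rate
print(f"direct:    ${direct_deductions:,} in deductions")
print(f"coin-flip: ${coinflip_deductions:,} in deductions")
print(f"extra tax saved at {marginal_rate:.0%}: ${extra_tax_saved:,.0f}")  # ~$2,775
```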

Comment by Austin Chen (austin-chen) on Announcing Dialogues · 2023-10-18T20:49:03.494Z · LW · GW

Warning: Dialogues seem like such a cool idea that we might steal them for Manifold (I wrote a quick draft proposal).

On that note, I'd love to have a dialogue on "How do the Manifold and Lightcone teams think about their respective lanes?"

Comment by Austin Chen (austin-chen) on Prediction markets covered in the NYT podcast “Hard Fork” · 2023-10-13T20:10:50.927Z · LW · GW

Haha, this actually seems normal and fine. We who work on prediction markets understand the nuances and implementation of these markets (what it means in mathematical terms when a market says 25%). And Kevin and Casey haven't quite gotten it yet, based on a couple of days of talking to prediction market enthusiasts.

But that's okay! Ideas are actually super hard to understand by explanation, and much easier to understand by experience (aka trial and error). My sense is that if Kevin follows up and bets on a few other markets, he'd start to wonder "hm, why did I get M100 for winning this market but only M50 on that one?" and then learn that the odds at which you place the bet actually matter.  This principle underpins the idea of Manifold -- you can argue all day about whether prediction markets are good for X or Y, or... you can try using them with play money and find out.
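
(For readers unfamiliar with the mechanics, the rough intuition behind "the odds matter": a winning YES bet pays out roughly in inverse proportion to the probability you bought at. Manifold's real payouts come from an automated market maker, so treat this tiny sketch as an approximation rather than the actual formula.)

```python
# Rough fixed-odds approximation of a prediction market payout.
# Manifold uses an automated market maker, so actual numbers differ a bit.
def approx_payout(bet: float, prob: float) -> float:
    """Approximate payout if YES resolves, for a YES bet placed at probability `prob`."""
    return bet / prob

print(approx_payout(50, 0.50))  # M$50 bought at 50% -> ~M$100 back
print(approx_payout(50, 0.80))  # M$50 bought at 80% -> ~M$62.50 back
```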

It's reasonable for their reporting to be vibes-based for now - so long as they are reasonably accurate in characterizing the vibes, it sets the stage for other people to explore Manifold or other prediction markets.

Comment by Austin Chen (austin-chen) on Sharing Information About Nonlinear · 2023-09-13T20:58:21.261Z · LW · GW

Yeah, I guess that's fair -- you have much more insight into the number of and viewpoints of Wave's departing employees than I do. Maybe "would be a bit surprised" would have cashed out to "<40% Lincoln ever spent 5+ min thinking about this, before this week", which I'd update a bit upwards to 50/50 based on your comment.

For context, I don't think I pushed back on (or even substantively noticed) the NDA in my own severance agreement, whereas I did push back quite heavily on the standard "assignment of inventions" thing they asked me to sign when I joined. That said, I was pretty happy with my time and trusted my boss enough not to expect the NDA terms to matter.

Comment by Austin Chen (austin-chen) on Sharing Information About Nonlinear · 2023-09-12T14:56:14.746Z · LW · GW

I definitely feel like "intentionally lying" is still a much, much stronger norm violation than what happened here. There are, like, a million decisions that you have to make as a CEO, and you don't typically want to spend your decisionmaking time/innovation budget on random minutiae like "what terms are included inside our severance agreements?" I would be a bit surprised if "should we include an NDA & non-disclosure" had even risen to the level of a conscious decision of Lincoln's at any point throughout Wave's history, as opposed to eg getting boilerplate legal contracts from their lawyers/an online form and then copying that for each severance agreement thereafter.

Comment by Austin Chen (austin-chen) on Sharing Information About Nonlinear · 2023-09-12T04:12:14.903Z · LW · GW

Yeah fwiw I wanted to echo that Oli's statement seems like an overreaction? My sense is that such NDAs are standard issue in tech (I've signed one before myself), and that having one at Wave is not evidence of a lapse in integrity; it's the kind of thing that's very easy to just defer to legal counsel on. Though the opposite (dropping the NDA) would be evidence of high integrity, imo!

Comment by Austin Chen (austin-chen) on A plea for more funding shortfall transparency · 2023-08-08T01:53:16.990Z · LW · GW

On the Manifund regranting program: we've received 60 requests for funding in the last month, and have committed $670k to date (or about a third of our initial budget of $1.9m). My rough guess is we could productively distribute another $1m immediately, or $10m total by the end of the year.

I'm not sure if the other tallies are as useful for us -- in contrast to an open call, a regranting program scales up pretty easily; we have a backlog of both new regrantors to onboard and existing regrantors whose budgets we could increase, and regrantors tend to generate opportunities based on the size of their budgets.

(With a few million in unrestricted funding, we'd also branch out beyond regranting and start experimenting with other programs such as impact certificates, retroactive funding, and peer bonuses in EA)

Comment by Austin Chen (austin-chen) on Manifund: What we're funding (weeks 2-4) · 2023-08-05T23:31:24.504Z · LW · GW

Thanks for the feedback! We're still trying to figure out what time period for our newsletter makes the most sense, haha.

Comment by Austin Chen (austin-chen) on Announcing Manifund Regrants · 2023-07-11T22:42:44.878Z · LW · GW

The $400k regrantors were chosen by the donor; the $50k ones were chosen by the Manifund team.

Comment by Austin Chen (austin-chen) on Announcing Manifund Regrants · 2023-07-11T22:42:23.626Z · LW · GW

I can't speak for other regrantors, but I'm personally very sympathetic to retroactive grants for impactful work that got less funding than was warranted; we have one example for Vipul Naik's Donations List Website and hope to publish more examples soon!

Comment by Austin Chen (austin-chen) on Announcing Manifund Regrants · 2023-07-05T23:20:34.712Z · LW · GW

I'm generally interested in having a diverse range of regrantors; if you'd like to suggest names/make intros (either here, or privately) please let me know!

Comment by Austin Chen (austin-chen) on Announcing Manifund Regrants · 2023-07-05T21:03:18.524Z · LW · GW

Thanks! We're likewise excited by Lightspeed Grants, and by ways we can work together (or compete!) to make the funding landscape good.

Comment by Austin Chen (austin-chen) on Outrangeous (Calibration Game) · 2023-03-07T17:28:17.437Z · LW · GW

A similar calibration game I like to play with my girlfriend: one of us gives our 80% confidence interval for some quantity (eg "how long will it take us to get to the front of this line?") and the other offers to bet on the inside or the outside, at 4:1 odds.

I've learned that my 80% intervals are right only like 50% of the time, almost always erring on the side of being too optimistic...
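
The 4:1 number is what makes this work: if your 80% intervals really do contain the truth 80% of the time, either side of the bet is fair, and any miscalibration is free money for your partner. A quick expected-value check, as a minimal sketch:

```python
# Expected value for the player betting the true answer falls *outside* the
# stated 80% interval, risking `stake` to win `stake * odds` at 4:1 odds.
def ev_outside_bet(p_inside: float, stake: float = 1.0, odds: float = 4.0) -> float:
    return (1 - p_inside) * stake * odds - p_inside * stake

print(ev_outside_bet(0.80))  # 0.0  -> fair against truly calibrated intervals
print(ev_outside_bet(0.50))  # 1.5  -> profitable against 50%-accurate intervals
```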

Comment by Austin Chen (austin-chen) on Conversational canyons · 2023-01-04T20:12:55.223Z · LW · GW

With my wife, I do it a little differently. Once a week or so, when the kids have fallen asleep, we’ll lie in separate beds—Johanna next to the baby, and me next to the 5-year-old. We’ll both be staring at our screens. Unlike the notes I keep with Torbjörn, these notes are shared. They are a bunch of Google docs.

 

This reminds me of the note-taking culture we have at Manifold, on Notion (which I would highly recommend as an alternative to Google docs -- much more structured, easier to navigate and link between things, prettier!)

For example, while we do our daily standup meetings, we're all jotting thoughts into our meeting notes, and often move between linked documents. To track who has been having which thought, we'll prefix a particular bullet point with that person's initials, e.g. "[A] Should we consider moving to transactions?"

Comment by Austin Chen (austin-chen) on December 2022 updates and fundraising · 2022-12-23T01:17:06.156Z · LW · GW

Thanks for writing this up! I've just added AI Impacts to Manifold's charity list, so you can now donate your mana there too :)

I find the move from "website" to "wiki" very interesting. We've been exploring something similar for Manifold's Help & About pages. Right now, they're backed by an internal Notion wiki and proxied via super.so, but our pages are kind of clunky; plus we'd like to open it up to allow our power users to contribute. We've been exploring existing wiki solutions (looks like AI Impacts is on DokuWiki?) but it feels like most public wiki software was designed 10+ years ago, whereas modern software like Notion is generally targeted for the internal use case. I would also note that LessWrong seems to have moved away from having an internal wiki, too. There's some chance Manifold ends up building an in-house solution for this, on top of our existing editor...

Comment by Austin Chen (austin-chen) on How To Make Prediction Markets Useful For Alignment Work · 2022-10-19T06:40:54.769Z · LW · GW

Definitely agreed that the bottleneck is mostly having good questions! One way I often think about this: a prediction market question conveys many bits of information about the world, while the answer tends to convey very few.

Part of the goal with Manifold is to encourage as many questions as possible, lowering the barrier to question creation by making it fast and easy and (basically) free. But sometimes this does lead to people asking questions that have wide appeal but are less useful (like the ones you identified above), whereas generating really good questions often requires deep subject-matter expertise. If you have eg a list of operationalized questions, we're always more than happy to promote them to our forecasters!

Comment by Austin Chen (austin-chen) on Consider your appetite for disagreements · 2022-10-09T02:06:45.596Z · LW · GW

Re your second point (scoring rather than ranking basketball players), Neel Nanda has the same advice, which I've found fairly helpful for all kinds of assessment tasks: https://www.neelnanda.io/blog/48-rating

It makes me much more excited for, eg, 5-star voting over approval voting or especially ranked-choice voting.

Comment by Austin Chen (austin-chen) on Calibrate - New Chrome Extension for hiding numbers so you can guess · 2022-10-08T02:08:48.231Z · LW · GW

Big fan of the concept! Unfortunately, Manifold seems too dynamic for this extension (using it seems to break our site very quickly), but I really like the idea of temporarily hiding our market % so you can form an opinion before placing a bet.

Comment by Austin Chen (austin-chen) on How my team at Lightcone sometimes gets stuff done · 2022-09-20T06:09:38.776Z · LW · GW

Really appreciate this list!

Things I very much agree with:

4. Have a single day, e.g. Tuesday, that’s the “meeting day”, where people are expected to schedule any miscellaneous, external meetings (e.g. giving someone career advice, or grabbing coffee with a contact).

12. Have a “team_transparency@companyname” email address, which is such that when someone CC’s it on an email, the email gets forwarded to a designated slack channel

17. Have regular 1-1s with the people you work with. Some considerations only get verbalised via meandering, verbal conversation. Don’t kill it with process or time-bounds.

Things I'm very unsure about:

8. Use a real-time chat platform like Slack to communicate (except for in-person communication). For god’s sake, never use email within the team.

I actually often wonder whether Slack (or in our case, Discord) optimizes for writability at the cost of readability. Meaning, something more asynchronous like Notion, or maybe the LessWrong forum/Manifold site, would be a better system for documenting decisions and conversations -- chat is really easy to reach for and addictive, but does a terrible job of exposing history for people who aren't immediately reading along. In contrast, Manifold's standup and meeting calendar helps organize and spread info across the team in a way that's much more manageable than Discord channels.

14. Everyone on your team should be full-time

Definitely agree that 40h is much more than 2x 20h, but also sometimes we just don't have that much of certain kinds of work, slash really good people have other things to do with their lives?

Things we don't do at all

5. No remote work.

Not sure how a hypothetical Manifold that was fully in-person would perform -- it's very unclear if our company could even have existed, given that the cofounders are split across two cities haha. Being remote forces us to add processes (like a daily hour-long sync) that an in-person team can squeak by without, but also I think has led to a much better online community of Manifold users because we dogfood the remote nature of work so heavily.

 

Finally: could you describe some impressive things that Lightcone has accomplished using this methodology? I wonder if this is suited to particular kinds of work (eg ops, events, facilities) and less so to others (eg software engineering -- LessWrong doesn't seem to do this as much?)

Comment by Austin Chen (austin-chen) on Austin Chen's Shortform · 2022-08-10T18:12:52.186Z · LW · GW

Rob Wiblin from 80k asks:

Comment by Austin Chen (austin-chen) on Limerence Messes Up Your Rationality Real Bad, Yo · 2022-07-02T20:55:13.204Z · LW · GW

Inositol, I believe: https://www.facebook.com/100000020495165/posts/4855425464468089/?app=fbl

Comment by Austin Chen (austin-chen) on It’s Probably Not Lithium · 2022-06-29T22:42:35.673Z · LW · GW

I've been following the SMTM hypothesis with great interest; don't have much to add on a technical level, but I'm happy to pay a $200 bounty in M$ to Natália in recognition of her excellent writeup here.  Also - happy to match (in M$) any of the bounties that she outlined!

Comment by Austin Chen (austin-chen) on "Science Cathedrals" · 2022-06-24T06:51:13.517Z · LW · GW

San Jose has The Tech Interactive (formerly The Tech Museum of Innovation) located downtown. I remember going often as a kid and being enthralled by the interactions and exhibits. One of the best is located outside, for free: a two-story-tall Rube Goldberg machine that shuffles billiard balls through various contraptions. Absolutely mesmerizing.