Posts

Manifund: 2023 in Review 2024-01-18T23:50:13.557Z
Manifold Halloween Hackathon 2023-10-23T22:47:18.462Z
Prediction markets covered in the NYT podcast “Hard Fork” 2023-10-13T18:43:29.644Z
NYT on the Manifest forecasting conference 2023-10-09T21:40:16.732Z
Manifest 2023 2023-09-06T11:24:31.274Z
Last Chance: Get tickets to Manifest 2023! (Sep 22-24 in Berkeley) 2023-09-06T10:35:37.510Z
Announcing Manifest 2023 (Sep 22-24 in Berkeley) 2023-08-14T05:13:03.186Z
Manifund: What we're funding (weeks 2-4) 2023-08-04T16:00:33.227Z
A $10k retroactive grant for VaccinateCA 2023-07-27T18:14:44.305Z
Announcing Manifund Regrants 2023-07-05T19:42:08.978Z
Manifund x AI Worldviews 2023-03-31T15:32:05.853Z
Postmortem: Trying out for Manifold Markets 2022-09-08T17:54:09.890Z
Prediction markets meetup/coworking (hosted by Manifold Markets) 2022-07-26T00:14:53.704Z
What We Owe the Past 2022-05-05T11:46:38.015Z
Predicting for charity 2022-05-02T22:59:49.741Z
Austin Chen's Shortform 2022-04-02T02:54:43.792Z
Manafold Markets is out of mana 🤭 2022-04-01T22:07:34.081Z
Create a prediction market in two minutes on Manifold Markets 2022-02-09T17:36:56.320Z

Comments

Comment by Austin Chen (austin-chen) on What's with all the bans recently? · 2024-04-05T23:53:37.588Z · LW · GW

I very much appreciate @habryka taking the time to lay out your thoughts; posting like this is also a great example of modeling out your principles. I've spent copious amounts of time shaping the Manifold community's discourse and norms, and this comment contains a mix of patterns that ring true from my own experience (eg the bits about case law and avoiding echo chambers) and new lessons for me (eg that young and non-native-English speakers improve more easily).

Comment by Austin Chen (austin-chen) on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-26T20:03:03.982Z · LW · GW

So, I love Scott, consider CM's original article poorly written, and also think doxxing is quite rude, but with all the disclaimers out of the way: on the specific issue of revealing Scott's last name, Cade Metz seems more right than Scott here? Scott was worried about a bunch of knock-on effects of having his last name published, but none of that bad stuff happened.[1]

I feel like at this point in the era of the internet, doxxing (at least, in the form of involuntary identity association) is much more of an imagined threat than a real harm. Beff Jezos's more recent doxxing also comes to mind as something that was more controversial for the controversy than for any factual harm done to Jezos as a result.

  1. ^

    Scott did take a bunch of ameliorating steps, such as leaving his past job -- but my best guess is that none of that would have actually been necessary. AFAICT he's actually in a much better financial position thanks to his subsequent transition to Substack -- though crediting Cade Metz for this is a bit like crediting Judas for starting Christianity.

Comment by Austin Chen (austin-chen) on Increase the tax value of donations with high-variance investments? · 2024-03-03T04:19:03.767Z · LW · GW

My friend Eric once proposed something similar, except where two charitable individuals just create the security directly. Say Alice and Bob both want to donate $7500 to GiveWell; instead of doing so directly, they could create a security which is "flip a coin, winner gets $15000". They do so; say Alice wins. She waits a year, donates the security as $15000 of appreciated long-term gains and takes the full deduction, while Bob deducts his $7500 loss.

This seems to me like it ought to work, but I've never actually tried this myself...
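Sketching the arithmetic: under a hypothetical 25% marginal tax rate (my assumption, not from the original proposal), and assuming every deduction is fully usable -- the real tax treatment of such a security is far from certain:

```python
# Compare total deductions from direct donation vs. the coin-flip security.
# The 25% marginal rate is an assumed illustration, not tax advice.

STAKE = 7_500        # each person's contribution
POT = 2 * STAKE      # the winner's security is worth $15,000
RATE = 0.25          # assumed marginal tax rate

# Baseline: Alice and Bob each donate $7,500 directly.
direct_deduction = 2 * STAKE               # $15,000 in total deductions
direct_tax_saved = RATE * direct_deduction

# Coin-flip scheme: the winner donates the appreciated security at fair
# market value ($15,000); the loser deducts a $7,500 capital loss.
flip_deduction = POT + STAKE               # $22,500 in total deductions
flip_tax_saved = RATE * flip_deduction

print(direct_tax_saved)  # 3750.0
print(flip_tax_saved)    # 5625.0
```

Either way the same $15,000 reaches the charity; the scheme's whole effect is to manufacture an extra $7,500 of deductions.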

Comment by Austin Chen (austin-chen) on Announcing Dialogues · 2023-10-18T20:49:03.494Z · LW · GW

Warning: Dialogues seem like such a cool idea that we might steal them for Manifold (I wrote a quick draft proposal).

On that note, I'd love to have a dialogue on "How do the Manifold and Lightcone teams think about their respective lanes?"

Comment by Austin Chen (austin-chen) on Prediction markets covered in the NYT podcast “Hard Fork” · 2023-10-13T20:10:50.927Z · LW · GW

Haha, this actually seems normal and fine. Those of us who work on prediction markets understand their nuances and implementation (what it means in mathematical terms when a market says 25%), while Kevin and Casey haven't quite gotten it yet after a couple of days of talking to prediction market enthusiasts.

But that's okay! Ideas are actually super hard to understand by explanation, and much easier to understand by experience (aka trial and error). My sense is that if Kevin follows up and bets on a few other markets, he'd start to wonder "hm, why did I get M100 for winning this market but only M50 on that one?" and then learn that the odds at which you place the bet actually matter.  This principle underpins the idea of Manifold -- you can argue all day about whether prediction markets are good for X or Y, or... you can try using them with play money and find out.
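To illustrate why the entry odds matter, here's a simplified fixed-odds payout model (Manifold's real engine uses an automated market maker, so actual payouts differ, but the intuition carries over):

```python
# Simplified fixed-odds model: a bet of `amount` mana on YES at market
# probability `prob` pays out amount / prob if the market resolves YES.
# This ignores Manifold's actual AMM mechanics; it's illustration only.

def payout_if_yes(amount: float, prob: float) -> float:
    return amount / prob

print(payout_if_yes(50, 0.50))  # 100.0 -> M50 at 50% doubles your mana
print(payout_if_yes(50, 0.99))  # ~50.5 -> a near-certain market barely pays
```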

It's reasonable for their reporting to be vibes-based for now - so long as they are reasonably accurate in characterizing the vibes, it sets the stage for other people to explore Manifold or other prediction markets.

Comment by Austin Chen (austin-chen) on Sharing Information About Nonlinear · 2023-09-13T20:58:21.261Z · LW · GW

Yeah, I guess that's fair -- you have much more insight into the number of and viewpoints of Wave's departing employees than I do. Maybe "would be a bit surprised" would have cashed out to "<40% Lincoln ever spent 5+ min thinking about this, before this week", which I'd update a bit upwards to 50/50 based on your comment.

For context, I don't think I pushed back on (or even substantively noticed) the NDA in my own severance agreement, whereas I did push back quite heavily on the standard "assignment of inventions" thing they asked me to sign when I joined. That said, I was pretty happy with my time and trusted my boss enough to not expect for the NDA terms to matter.

Comment by Austin Chen (austin-chen) on Sharing Information About Nonlinear · 2023-09-12T14:56:14.746Z · LW · GW

I definitely feel like "intentionally lying" is still a much, much stronger norm violation than what happened here. There's like a million decisions that you have to make as a CEO, and you don't typically want to spend your decision-making time/innovation budget on random minutiae like "what terms are included in our severance agreements?" I would be a bit surprised if "should we include an NDA?" had even risen to the level of a conscious decision of Lincoln's at any point in Wave's history, as opposed to eg getting boilerplate legal contracts from their lawyers or an online form and copying those for each severance agreement thereafter.

Comment by Austin Chen (austin-chen) on Sharing Information About Nonlinear · 2023-09-12T04:12:14.903Z · LW · GW

Yeah fwiw I wanted to echo that Oli's statement seems like an overreaction? My sense is that such NDAs are standard issue in tech (I've signed one before myself), and that having one at Wave is not evidence of a lapse in integrity; it's the kind of thing that's very easy to just defer to legal counsel on. Though the opposite (dropping the NDA) would be evidence of high integrity, imo!

Comment by Austin Chen (austin-chen) on A plea for more funding shortfall transparency · 2023-08-08T01:53:16.990Z · LW · GW

On the Manifund regranting program: we've received 60 requests for funding in the last month, and have committed $670k to date (about a third of our initial budget of $1.9m). My rough guess is we could productively distribute another $1m immediately, or $10m total by the end of the year.

I'm not sure if the other tallies are as useful for us -- in contrast to an open call, a regranting program scales up pretty easily; we have a backlog of both new regrantors to onboard and existing regrantors to increase budgets, and regrantors tend to generate opportunities based on the size of their budgets.

(With a few million in unrestricted funding, we'd also branch out beyond regranting and start experimenting with other programs such as impact certificates, retroactive funding, and peer bonuses in EA)

Comment by Austin Chen (austin-chen) on Manifund: What we're funding (weeks 2-4) · 2023-08-05T23:31:24.504Z · LW · GW

Thanks for the feedback! We're still trying to figure out what time period for our newsletter makes the most sense, haha.

Comment by Austin Chen (austin-chen) on Announcing Manifund Regrants · 2023-07-11T22:42:44.878Z · LW · GW

The $400k regrantors were chosen by the donor; the $50k ones were chosen by the Manifund team.

Comment by Austin Chen (austin-chen) on Announcing Manifund Regrants · 2023-07-11T22:42:23.626Z · LW · GW

I can't speak for other regrantors, but I'm personally very sympathetic to retroactive grants for impactful work that got less funding than was warranted; we have one example for Vipul Naik's Donations List Website and hope to publish more examples soon!

Comment by Austin Chen (austin-chen) on Announcing Manifund Regrants · 2023-07-05T23:20:34.712Z · LW · GW

I'm generally interested in having a diverse range of regrantors; if you'd like to suggest names/make intros (either here, or privately) please let me know!

Comment by Austin Chen (austin-chen) on Announcing Manifund Regrants · 2023-07-05T21:03:18.524Z · LW · GW

Thanks! We're likewise excited by Lightspeed Grants, and by ways we can work together (or compete!) to make the funding landscape good.

Comment by Austin Chen (austin-chen) on Outrangeous (Calibration Game) · 2023-03-07T17:28:17.437Z · LW · GW

A similar calibration game I like to play with my girlfriend: one of us gives our 80% confidence interval for some quantity (eg "how long will it take us to get to the front of this line?") and the other offers to bet on the inside or the outside, at 4:1 odds.

I've learned that my 80% intervals are right like 50% of the time, almost always in favor of being too optimistic...
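For intuition: 4:1 odds are exactly fair when the stated 80% interval really contains the truth 80% of the time (the stake sizes below are my reading of the game, so treat them as an assumption):

```python
# Expected value per round for the player betting "inside the interval",
# assuming the inside-bettor risks 4 units to win 1 -- fair at a true
# 80% hit rate. `hit_rate` is how often the stated interval actually
# contains the true value.

def inside_ev(hit_rate: float) -> float:
    return hit_rate * 1 - (1 - hit_rate) * 4

print(round(inside_ev(0.80), 10))  # 0.0  -> fair for true 80% coverage
print(inside_ev(0.50))             # -1.5 -> overconfident intervals lose fast
```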

Comment by Austin Chen (austin-chen) on Conversational canyons · 2023-01-04T20:12:55.223Z · LW · GW

With my wife, I do it a little differently. Once a week or so, when the kids have fallen asleep, we’ll lie in separate beds—Johanna next to the baby, and me next to the 5-year-old. We’ll both be staring at our screens. Unlike the notes I keep with Torbjörn, these notes are shared. They are a bunch of Google docs.

 

This reminds me of the note-taking culture we have at Manifold, on Notion (which I would highly recommend as an alternative to Google docs -- much more structured, easier to navigate and link between things, prettier!)

For example, while we do our daily standup meetings, we're all jotting thoughts into our meeting notes, and often move between linked documents. To track who has been having which thought, we'll prefix a particular bullet point with our initials, e.g. "[A] Should we consider moving to transactions?"

Comment by Austin Chen (austin-chen) on December 2022 updates and fundraising · 2022-12-23T01:17:06.156Z · LW · GW

Thanks for writing this up! I've just added AI Impacts to Manifold's charity list, so you can now donate your mana there too :)

I find the move from "website" to "wiki" very interesting. We've been exploring something similar for Manifold's Help & About pages. Right now, they're backed by an internal Notion wiki and proxied via super.so, but our pages are kind of clunky; plus we'd like to open them up to allow our power users to contribute. We've been exploring existing wiki solutions (looks like AI Impacts is on DokuWiki?) but it feels like most public wiki software was designed 10+ years ago, whereas modern software like Notion is generally targeted at the internal use case. I would also note that LessWrong seems to have moved away from having an internal wiki. There's some chance Manifold ends up building an in-house solution for this, on top of our existing editor...

Comment by Austin Chen (austin-chen) on How To Make Prediction Markets Useful For Alignment Work · 2022-10-19T06:40:54.769Z · LW · GW

Definitely agreed that the bottleneck is mostly having good questions! One way I often think about this is, a prediction market question conveys many bits of information about the world, while the answer tends to convey very few.

Part of the goal with Manifold is to encourage as many questions as possible, lowering the barrier to question creation by making it fast and easy and (basically) free. But sometimes this does lead to people asking questions that have wide appeal but are less useful (like the ones you identified above), whereas generating really good questions often requires deep subject-matter expertise. If you have eg a list of operationalized questions, we're always more than happy to promote them to our forecasters!

Comment by Austin Chen (austin-chen) on Consider your appetite for disagreements · 2022-10-09T02:06:45.596Z · LW · GW

Re your second point (score rather than rank basketball players), Neel Nanda gives the same advice, which I've found fairly helpful for all kinds of assessment tasks: https://www.neelnanda.io/blog/48-rating

It makes me much more excited for eg 5-star voting instead of approval or especially ranked choice voting.

Comment by Austin Chen (austin-chen) on Calibrate - New Chrome Extension for hiding numbers so you can guess · 2022-10-08T02:08:48.231Z · LW · GW

Big fan of the concept! Unfortunately, Manifold seems too dynamic for this extension (using the extension seems to break our site very quickly), but I really like the idea of temporarily hiding our market % so you can form an opinion before placing a bet.

Comment by Austin Chen (austin-chen) on How my team at Lightcone sometimes gets stuff done · 2022-09-20T06:09:38.776Z · LW · GW

Really appreciate this list!

Things I very much agree with:

4. Have a single day, e.g. Tuesday, that’s the “meeting day”, where people are expected to schedule any miscellaneous, external meetings (e.g. giving someone career advice, or grabbing coffee with a contact).

12. Have a “team_transparency@companyname” email address, which is such that when someone CC’s it on an email, the email gets forwarded to a designated slack channel

17. Have regular 1-1s with the people you work with. Some considerations only get verbalised via meandering, verbal conversation. Don’t kill it with process or time-bounds.

Things I'm very unsure about:

8. Use a real-time chat platform like Slack to communicate (except for in-person communication). For god’s sake, never use email within the team.

I actually often wonder whether Slack (or in our case, Discord) optimizes for writeability at the cost of readability. Meaning, something more asynchronous like Notion, or maybe the LessWrong forum/Manifold site, would be a better system for documenting decisions and conversations -- chat is really easy to reach for and addictive, but does a terrible job of exposing history for people who aren't immediately reading along. In contrast, Manifold's standup and meeting calendar helps organize and spread info across the team in a way that's much more manageable than Discord channels.

14. Everyone on your team should be full-time

Definitely agree that 40h is much more than 2x 20h, but also sometimes we just don't have that much of certain kinds of work, slash really good people have other things to do with their lives?

Things we don't do at all

5. No remote work.

Not sure how a hypothetical Manifold that was fully in-person would perform -- it's very unclear if our company could even have existed, given that the cofounders are split across two cities haha. Being remote forces us to add processes (like a daily hour-long sync) that an in-person team can squeak by without, but I also think it has led to a much better online community of Manifold users, because we dogfood the remote nature of work so heavily.

 

Finally: could you describe some impressive things that Lightcone has accomplished using this methodology? I wonder if this is suited to particular kinds of work (eg ops, events, facilities) and less so to others (eg software engineering; LessWrong doesn't seem to do this as much?)

Comment by Austin Chen (austin-chen) on Austin Chen's Shortform · 2022-08-10T18:12:52.186Z · LW · GW

Rob Wiblin from 80k asks:

Comment by Austin Chen (austin-chen) on Limerence Messes Up Your Rationality Real Bad, Yo · 2022-07-02T20:55:13.204Z · LW · GW

Inositol, I believe: https://www.facebook.com/100000020495165/posts/4855425464468089/?app=fbl

Comment by Austin Chen (austin-chen) on It’s Probably Not Lithium · 2022-06-29T22:42:35.673Z · LW · GW

I've been following the SMTM hypothesis with great interest; don't have much to add on a technical level, but I'm happy to pay a $200 bounty in M$ to Natália in recognition of her excellent writeup here.  Also - happy to match (in M$) any of the bounties that she outlined!

Comment by Austin Chen (austin-chen) on "Science Cathedrals" · 2022-06-24T06:51:13.517Z · LW · GW

San Jose has The Tech Interactive (formerly The Tech Museum of Innovation) located in the downtown. I remember going often as a kid, and being enthralled by the interactions and exhibits. One of the best is located outside, for free: a 2-story tall Rube Goldberg machine that shuffles billiards balls through various contraptions. Absolutely mesmerizing.

Comment by Austin Chen (austin-chen) on AGI Ruin: A List of Lethalities · 2022-06-06T04:49:28.362Z · LW · GW

I'd have more hope - not significant hope, but more hope - in separating the concerns of (a) credibly promising to pay big money retrospectively for good work to anyone who produces it, and (b) venturing prospective payments to somebody who is predicted to maybe produce good work later.

 

I desperately want to make this ecosystem exist, either as part of Manifold Markets, or separately. Some people call it "impact certificates" or "retroactive public goods funding"; I call it "equity for public goods", or "Manifund" in the specific case.

If anyone is interested in:

a) Being a retroactive funder for good work (aka bounties, prizes)

b) Getting funding through this kind of mechanism (aka income share agreements, angel investment)

c) Working on this project full time (full-stack web dev, ops, community management)

Please get in touch! Reply here, or message austin@manifold.markets~

Comment by Austin Chen (austin-chen) on New Water Quality x Obesity Dataset Available · 2022-05-28T05:49:24.652Z · LW · GW

Thanks again Elizabeth for pushing forward this initiative; Slime Mold Time Mold's obesity hypothesis has been one of the most interesting things I've come across in the last couple years, and I'm glad to see citizen research efforts springing up to pursue it~

The credit for combining the data set really goes to Oliver S and Josh C; I mostly just posted the bounty haha.

Comment by Austin Chen (austin-chen) on Here's a List of Some of My Ideas for Blog Posts · 2022-05-26T14:29:22.435Z · LW · GW

I'm biased towards all the prediction market ones, naturally haha. In case you wanted to get a head start on manipulating markets for fun & profit.

Comment by Austin Chen (austin-chen) on The AI Countdown Clock · 2022-05-15T18:58:48.143Z · LW · GW

I like this a lot! I am also the kind of person to use a new tab death clock, though your post inspired me to update it to my own AI timeline (~10 years).

I briefly experimented with using New Tab Redirect to set your site as my new tab page, but I think it takes a smidgen longer to load haha (it needs to fetch the Metaculus API or something?)

Comment by Austin Chen (austin-chen) on ProjectLawful.com: Eliezer's latest story, past 1M words · 2022-05-13T23:18:51.274Z · LW · GW

Sorry about that - had some configuration issues. It should work now!

Comment by Austin Chen (austin-chen) on ProjectLawful.com: Eliezer's latest story, past 1M words · 2022-05-13T22:17:13.802Z · LW · GW

Thanks! I tried splitting into smaller sections (half the size) so that we don't have this issue as much; not sure what other solutions look like.

Comment by Austin Chen (austin-chen) on ProjectLawful.com: Eliezer's latest story, past 1M words · 2022-05-13T21:57:59.444Z · LW · GW

Yeah, probably a stale caching layer; what fic were you reading? Glowflow doesn't read from an epub, it reads HTML from the site itself.

Lemme try rebooting to see if that refreshes. That's obviously not sustainable... I didn't expect people to actually use it for a live, updating fic lol.

Edit: added a "Clear cache" button, hope that solves it!

Comment by Austin Chen (austin-chen) on ProjectLawful.com: Eliezer's latest story, past 1M words · 2022-05-13T12:26:16.738Z · LW · GW

Text centering should now be live!

Comment by Austin Chen (austin-chen) on ProjectLawful.com: Eliezer's latest story, past 1M words · 2022-05-13T11:24:14.234Z · LW · GW

dark mode seems to have stopped working

 

Hm, do you have "dark mode" toggled on the sidebar? (There are two settings, unfortunately, due to how Streamlit is set up.)

 

The outer box doesn't widen together with the text and background, and the text doesn't stay centered

Yeah, unfortunately this is mostly working-as-implemented. The box size isn't a thing I can change; "Wide Mode" lets the box be big, otherwise it's small.

Text centering might be possible if you're in "Wide Mode" -- I'll look into that.

Comment by Austin Chen (austin-chen) on ProjectLawful.com: Eliezer's latest story, past 1M words · 2022-05-13T00:56:38.357Z · LW · GW

Done! Thanks for the feedback. Hoping 2000px is plenty but it's easy to increase lol.

(Having too many options is sometimes a symptom of bad UX design, but it seems reasonable for a web reader to support all of these.)

Comment by Austin Chen (austin-chen) on ProjectLawful.com: Eliezer's latest story, past 1M words · 2022-05-12T16:59:02.849Z · LW · GW

Hrm, I think I could code in a way to specify the height of the box... lemme look into it.

Thanks for all your suggestions, btw!

Comment by Austin Chen (austin-chen) on ProjectLawful.com: Eliezer's latest story, past 1M words · 2022-05-12T12:24:45.031Z · LW · GW

I hope so! I myself bounced off of Mad Investor Chaos twice before because the formatting was too hard for me to read... but after implementing this reader, spent 2 hours last night reading through it.

Thanks so much for writing this Glowfic!

Comment by Austin Chen (austin-chen) on ProjectLawful.com: Eliezer's latest story, past 1M words · 2022-05-12T12:23:27.556Z · LW · GW

Your wish is my command - dark mode added!

Comment by Austin Chen (austin-chen) on ProjectLawful.com: Eliezer's latest story, past 1M words · 2022-05-12T03:24:37.904Z · LW · GW

I liked how the epub strips out unnecessary UI from the glowfic site, but downloading and moving epubs around is a pain...

So I built a web reader on top of this code! Check it out here: https://share.streamlit.io/akrolsmir/glowflow/main

It'll work for any Glowfic post actually, eg https://share.streamlit.io/akrolsmir/glowflow/main?post=5111. It would probably be simple to add a download button to get the epub file; source code here.

Comment by Austin Chen (austin-chen) on Open & Welcome Thread - May 2022 · 2022-05-11T22:06:44.290Z · LW · GW

I've been thinking for a while that maybe forecasting should have its own LessWrong instance, as a place to discuss and post essays (the way EA Forum and AI Alignment have their own instances); curious to get your thoughts on whether this would improve the forecasting scene by having a shared place to meet, or detract by making it harder for newcomers to hear about forecasting?

I really, really wish crossposting and crosslinking was easier between different ForumMagnum instances...

Comment by Austin Chen (austin-chen) on What We Owe the Past · 2022-05-09T18:20:26.153Z · LW · GW

I'm not sure it's as simple as that - I don't know that just because it's your past self, you get to make decisions on their behalf.

Toy example: last week I promised myself I would go hit the gym. Today I woke up and am feeling lazy about it. My lazy current self thinks breaking the promise is a good idea, but does that mean he's justified in thinking that the past version of Austin would agree?

Comment by Austin Chen (austin-chen) on What We Owe the Past · 2022-05-06T11:57:32.385Z · LW · GW

I don't even think I owe very much to many stated preferences of contemporary living humans

This feels like something of a crux? Definitely, before we get into respecting the preferences of the past, if we don't agree on respecting the preferences of the present/near-future humans we may not find much to agree on.

I'm not even sure where to begin on this philosophical point -- maybe something like universalizability, like "wouldn't it be good if other contemporary living humans, who I might add outnumber you 7 billion to 1, tried to obey your own stated preferences?"

Comment by Austin Chen (austin-chen) on What We Owe the Past · 2022-05-06T11:54:14.999Z · LW · GW

just like I wouldn't attend masses just because my friend from 10 years ago who is also dead now wanted me to.

I'm not so sure about this analogy -- intuitively, aren't your obligations to yourself much stronger than to a friend? E.g. if a friend randomly asked for $5000 to pay for a vacation I wouldn't just randomly give it to her; but if my twin or past self spent that much I'd be something like 10-100x more likely to oblige.

Comment by Austin Chen (austin-chen) on What We Owe the Past · 2022-05-06T11:43:59.525Z · LW · GW

Your finger is on the scales with the example of the conservationist. That person's desires are an applause light, while those of their descendants are a boo light. Switch the two sets of desires and the example is no longer persuasive, if it ever was.

 

First: I picked this example partly because "cuteness optimization" does seem weird and contrary and unsympathetic. I imagine that to people in the past, our present lack of concern for our literal neighbors, or views on gay marriage, seem just as unsympathetic.

Second: "cuteness" might not be the exact correct framing, but "species extinction to maximize utilons" has a surprising amount of backing to it. In some sense, the story of industrial progress has been one of inadvertent species extinction, and I'm partial to the idea that this was in fact the right path because of the massive number of humans it has made happier (rather than slowing down industrial growth in service to sustainability). Or another example: see this piece arguing that we should desire the extinction of all carnivorous species, due to the massive amount of wild animal suffering imposed by predation. 

Comment by Austin Chen (austin-chen) on Austin Chen's Shortform · 2022-04-21T03:06:34.864Z · LW · GW

Okay, now I've used the live-collab/commenting feature on a LessWrong draft. It's pretty good! If you haven't seen it yet, I'd recommend writing a new LW post and requesting feedback; Justis Mills's feedback was super fast, highly detailed, and all-around incredibly valuable!

Can I turn on inline comments for a published LessWrong post too? Even after "publishing", it'd be super useful to get the comments inline. In my view, a great post should be a timeless, living, breathing, collaborative document, rather than a bunch of words dumped out once and never revisited.

(There's value in the latter for eg news posts, but LW's focus is less on that.)

 

Comment by Austin Chen (austin-chen) on Austin Chen's Shortform · 2022-04-15T05:34:13.559Z · LW · GW

Suggestion: Inline comments for LessWrong posts, ala Google Docs

It's been commented on before that much intellectual work in the EA/Rat community languishes behind private Google Docs. I think one reason is just that the inline-commenting mechanism on a GDoc is so much better than excerpting the comment below. Has the Lightcone team considered this/what is the status?

(I vaguely recall them working on a live-collab feature, not sure if commenting would have been part of this)

Comment by Austin Chen (austin-chen) on My Superpower: OODA Loops · 2022-04-04T02:04:48.829Z · LW · GW

I think feedback loops and OODA are really great; thanks for drawing attention to this concept! One thing that would have made this post more compelling: do you have any concrete examples of applying OODA in real life?

Comment by Austin Chen (austin-chen) on General Thoughts on Less Wrong · 2022-04-03T22:03:49.633Z · LW · GW

A bit hard to describe; kind of like ratfic, kind of like roleplay, kind of like a forum.

https://luminousalicorn.tumblr.com/post/145319779970/what-is-a-glowfic

Comment by Austin Chen (austin-chen) on General Thoughts on Less Wrong · 2022-04-03T22:01:47.133Z · LW · GW

One more: Progress Studies!

Comment by Austin Chen (austin-chen) on General Thoughts on Less Wrong · 2022-04-03T15:32:33.345Z · LW · GW

I do think it's a shame that LW, Alignment Forum, and EA Forum are three separate sites rather than a single one. Maybe there are weird political reasons for this, but as a user I don't really care; I just want to be able to navigate between all of them, discover content, and crosspost with ease. Some other possible subcommunities:

  • Forecasting and prediction (especially if we could integrate prediction markets from Manifold!)
  • Tools for Thought slash  https://futureofcoding.org/. Feels like it should have a decent amount of audience overlap. I'm a bit external to this group, but I'd love to see what kinds of discussions they have!
  • Georgism/Model Cities?
  • Ratfic/Glowfic??

(I may just be listing all my weird geeky interests haha)