Posts
Comments
Starting new technical AI safety orgs/projects seems quite difficult in the current funding ecosystem. I know of many alumni who have founded, or are trying to found, projects and who report substantial difficulty securing sufficient funding.
Interesting - what's like the minimum funding ask to get a new org off the ground? I think something like $300k would be enough to cover ~9 mo of salary and compute for a team of ~3, and that seems quite reasonable to raise in this current ecosystem for pre-seeding an org.
I very much appreciate @habryka taking the time to lay out your thoughts; posting like this is also a great example of modeling out your principles. I've spent copious amounts of time shaping the Manifold community's discourse and norms, and this comment has a mix of patterns that ring true from my own experience (eg the bits about case law and avoiding echo chambers), and good learnings for me (eg young/non-English speakers improve more easily).
So, I love Scott, consider CM's original article poorly written, and also think doxxing is quite rude, but with all the disclaimers out of the way: on the specific issue of revealing Scott's last name, Cade Metz seems more right than Scott here? Scott was worried about a bunch of knock-on effects of having his last name published, but none of that bad stuff happened.[1]
I feel like at this point in the era of the internet, doxxing (at least, in the form of involuntary identity association) is much more of an imagined threat than a real harm. Beff Jezos's more recent doxxing also comes to mind as something that was more controversial for the controversy than for any actual harm done to Jezos as a result.
[1] Scott did take a bunch of ameliorating steps, such as leaving his past job -- but my best guess is that none of that would have actually been necessary. AFAICT he's actually in a much better financial position thanks to his subsequent transition to Substack -- though crediting Cade Metz for this is a bit like crediting Judas for starting Christianity.
My friend Eric once proposed something similar, except where two charitable individuals just create the security directly. Say Alice and Bob both want to donate $7500 to GiveWell; instead of doing so directly, they could create a security which is "flip a coin, winner gets $15000". They do so, Alice wins, waits a year, and donates the $15000 as appreciated long-term gains for a $15000 tax deduction, while Bob deducts his $7500 loss.
This seems to me like it ought to work, but I've never actually tried this myself...
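For concreteness, the hoped-for arithmetic can be sketched like this (a toy calculation, not tax advice; whether Bob's $7500 loss is actually deductible is exactly the untested part):

```python
# Toy arithmetic for the coin-flip donation scheme.
# Assumes (hypothetically) that the winner's donation is deductible at
# full value and the loser's loss is deductible -- the scheme's crux.
stake = 7500          # each donor's contribution to the security
pot = 2 * stake       # winner-take-all payout: $15000

# Direct route: Alice and Bob each donate $7500 and each deduct $7500.
direct_deductions = stake + stake       # 15000

# Coin-flip route: Alice wins, waits a year, donates $15000 of
# long-term gains (deducting $15000); Bob deducts his $7500 loss.
coinflip_deductions = pot + stake       # 22500

print(direct_deductions, coinflip_deductions)
```

The charity receives $15000 either way; the scheme's entire upside is the extra $7500 of claimed deductions.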
Warning: Dialogues seem like such a cool idea that we might steal them for Manifold (I wrote a quick draft proposal).
On that note, I'd love to have a dialogue on "How do the Manifold and Lightcone teams think about their respective lanes?"
Haha, this actually seems normal and fine. We who work on prediction markets understand the nuances and implementation of these markets (what it means, mathematically, when a market says 25%). And Kevin and Casey haven't quite gotten it yet, based on a couple of days of talking to prediction market enthusiasts.
But that's okay! Ideas are actually super hard to understand by explanation, and much easier to understand by experience (aka trial and error). My sense is that if Kevin follows up and bets on a few other markets, he'd start to wonder "hm, why did I get M100 for winning this market but only M50 on that one?" and then learn that the odds at which you place the bet actually matter. This principle underpins the idea of Manifold -- you can argue all day about whether prediction markets are good for X or Y, or... you can try using them with play money and find out.
It's reasonable for their reporting to be vibes-based for now - so long as they are reasonably accurate in characterizing the vibes, it sets the stage for other people to explore Manifold or other prediction markets.
Yeah, I guess that's fair -- you have much more insight into the number of and viewpoints of Wave's departing employees than I do. Maybe "would be a bit surprised" would have cashed out to "<40% Lincoln ever spent 5+ min thinking about this, before this week", which I'd update a bit upwards to 50/50 based on your comment.
For context, I don't think I pushed back on (or even substantively noticed) the NDA in my own severance agreement, whereas I did push back quite heavily on the standard "assignment of inventions" thing they asked me to sign when I joined. That said, I was pretty happy with my time and trusted my boss enough not to expect the NDA terms to matter.
I definitely feel like "intentionally lying" is still a much much stronger norm violation than what happened here. There's like a million decisions that you have to make as a CEO and you don't typically want to spend your decisionmaking time/innovation budget on random minutiae like "what terms are included inside our severance agreements?" I would be a bit surprised if "should we include an NDA" had even risen to the level of a conscious decision of Lincoln's at any point throughout Wave's history, as opposed to eg getting boilerplate legal contracts from their lawyers/an online form and then copying that for each severance agreement thereafter.
Yeah fwiw I wanted to echo that Oli's statement seems like an overreaction? My sense is that such NDAs are standard issue in tech (I've signed one before myself), and that having one at Wave is not evidence of a lapse in integrity; it's the kind of thing that's very easy to just defer to legal counsel on. Though the opposite (dropping the NDA) would be evidence of high integrity, imo!
On the Manifund regranting program: we've received 60 requests for funding in the last month, and have committed $670k to date (or about 1/3rd of our initial budget of $1.9m). My rough guess is we could productively distribute another $1m immediately, or $10m total by the end of the year.
I'm not sure if the other tallies are as useful for us -- in contrast to an open call, a regranting program scales up pretty easily; we have a backlog of both new regrantors to onboard and existing regrantors to increase budgets, and regrantors tend to generate opportunities based on the size of their budgets.
(With a few million in unrestricted funding, we'd also branch out beyond regranting and start experimenting with other programs such as impact certificates, retroactive funding, and peer bonuses in EA)
Thanks for the feedback! We're still trying to figure out what time period for our newsletter makes the most sense, haha.
The $400k regrantors were chosen by the donor; the $50k ones were chosen by the Manifund team.
I can't speak for other regrantors, but I'm personally very sympathetic to retroactive grants for impactful work that got less funding than was warranted; we have one example for Vipul Naik's Donations List Website and hope to publish more examples soon!
I'm generally interested in having a diverse range of regrantors; if you'd like to suggest names/make intros (either here, or privately) please let me know!
Thanks! We're likewise excited by Lightspeed Grants, and by ways we can work together (or compete!) to make the funding landscape good.
A similar calibration game I like to play with my girlfriend: one of us gives our 80% confidence interval for some quantity (eg "how long will it take us to get to the front of this line?") and the other offers to bet on the inside or the outside, at 4:1 odds.
I've learned that my 80% intervals are right like 50% of the time, almost always in favor of being too optimistic...
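A quick simulation of why 4:1 is the fair price for a genuine 80% interval, and how overconfidence turns the game into a money pump (the numbers are illustrative):

```python
import random

def outside_bettor_ev(hit_rate, trials=100_000, seed=0):
    """Average per-round profit for the player betting 'outside' at
    4:1 odds: lose 1 unit when the interval contains the true value,
    win 4 units when it doesn't."""
    rng = random.Random(seed)
    profit = 0
    for _ in range(trials):
        inside = rng.random() < hit_rate
        profit += -1 if inside else 4
    return profit / trials

print(outside_bettor_ev(0.80))  # ~0.0: fair against a calibrated forecaster
print(outside_bettor_ev(0.50))  # ~1.5: free money against my 50% hit rate
```

At a true 80% hit rate the outside bet breaks even (-1 × 0.8 + 4 × 0.2 = 0), which is what makes 4:1 the calibration-revealing price.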
With my wife, I do it a little differently. Once a week or so, when the kids have fallen asleep, we’ll lie in separate beds—Johanna next to the baby, and me next to the 5-year-old. We’ll both be staring at our screens. Unlike the notes I keep with Torbjörn, these notes are shared. They are a bunch of Google docs.
This reminds me of the note-taking culture we have at Manifold, on Notion (which I would highly recommend as an alternative to Google docs -- much more structured, easier to navigate and link between things, prettier!)
For example, while we do our daily standup meetings, we're all jotting thoughts into our meeting notes, and often move between linked documents. To track who has been having which thought, we'll prefix a particular bullet point with your initials e.g. "[A] Should we consider moving to transactions?"
Thanks for writing this up! I've just added AI Impacts to Manifold's charity list, so you can now donate your mana there too :)
I find the move from "website" to "wiki" very interesting. We've been exploring something similar for Manifold's Help & About pages. Right now, they're backed by an internal Notion wiki and proxied via super.so, but our pages are kind of clunky; plus we'd like to open it up to allow our power users to contribute. We've been exploring existing wiki solutions (looks like AI Impacts is on DokuWiki?) but it feels like most public wiki software was designed 10+ years ago, whereas modern software like Notion is generally targeted for the internal use case. I would also note that LessWrong seems to have moved away from having an internal wiki, too. There's some chance Manifold ends up building an in-house solution for this, on top of our existing editor...
Definitely agreed that the bottleneck is mostly having good questions! One way I often think about this is, a prediction market question conveys many bits of information about the world, while the answer tends to convey very few.
Part of the goal with Manifold is to encourage as many questions as possible, lowering the barrier to question creation by making it fast and easy and (basically) free. But sometimes this does lead to people asking questions that have wide appeal but are less useful (like the ones you identified above), whereas generating really good questions often requires deep subject-matter expertise. If you have eg a list of operationalized questions, we're always more than happy to promote them to our forecasters!
Re your second point (score rather than ranking basketball players), Neel Nanda has the same advice which I've found fairly helpful for all kinds of assessment tasks: https://www.neelnanda.io/blog/48-rating
It makes me much more excited for eg 5-star voting instead of approval or especially ranked choice voting.
Big fan of the concept! Unfortunately, Manifold seems too dynamic for this extension (using the extension seems to break our site very quickly) but I really like the idea of temporarily hiding our market % so you can form an opinion before placing a bet:
Really appreciate this list!
Things I very much agree with:
4. Have a single day, e.g. Tuesday, that’s the “meeting day”, where people are expected to schedule any miscellaneous, external meetings (e.g. giving someone career advice, or grabbing coffee with a contact).
12. Have a “team_transparency@companyname” email address, which is such that when someone CC’s it on an email, the email gets forwarded to a designated slack channel
17. Have regular 1-1s with the people you work with. Some considerations only get verbalised via meandering, verbal conversation. Don’t kill it with process or time-bounds.
Things I'm very unsure about:
8. Use a real-time chat platform like Slack to communicate (except for in-person communication). For god’s sake, never use email within the team.
I actually often wonder whether Slack (or in our case, Discord) optimizes for writeability at the cost of readability. Meaning, something more asynchronous like Notion, or maybe the LessWrong forum/Manifold site, would be a better system of documenting decisions and conversations -- chat is really easy to reach for and addictive, but does a terrible job of exposing history for people who aren't immediately reading along. In contrast, Manifold's standup and meeting calendar helps organize and spread info across the team in a way that's much more manageable than Discord channels.
14. Everyone on your team should be full-time
Definitely agree that 40h is much more than 2x 20h, but also sometimes we just don't have that much of certain kinds of work, slash really good people have other things to do with their lives?
Things we don't do at all
5. No remote work.
Not sure how a hypothetical Manifold that was fully in-person would perform -- it's very unclear if our company could even have existed, given that the cofounders are split across two cities haha. Being remote forces us to add processes (like a daily hour-long sync) that an in-person team can squeak by without, but also I think has led to a much better online community of Manifold users because we dogfood the remote nature of work so heavily.
Finally: could you describe some impressive things that Lightcone has accomplished using this methodology? I wonder if this is suited to particular kinds of work (eg ops, events, facilities) and less so others (software engineering, eg LessWrong doesn't seem to do this as much?)
Rob Wiblin from 80k asks:
Inositol, I believe: https://www.facebook.com/100000020495165/posts/4855425464468089/?app=fbl
I've been following the SMTM hypothesis with great interest; don't have much to add on a technical level, but I'm happy to pay a $200 bounty in M$ to Natália in recognition of her excellent writeup here. Also - happy to match (in M$) any of the bounties that she outlined!
San Jose has The Tech Interactive (formerly The Tech Museum of Innovation) located in the downtown. I remember going often as a kid, and being enthralled by the interactions and exhibits. One of the best is located outside, for free: a 2-story tall Rube Goldberg machine that shuffles billiard balls through various contraptions. Absolutely mesmerizing.
I'd have more hope - not significant hope, but more hope - in separating the concerns of (a) credibly promising to pay big money retrospectively for good work to anyone who produces it, and (b) venturing prospective payments to somebody who is predicted to maybe produce good work later.
I desperately want to make this ecosystem exist, either as part of Manifold Markets, or separately. Some people call it "impact certificates" or "retroactive public goods funding"; I call it "equity for public goods", or "Manifund" in the specific case.
If anyone is interested in:
a) Being a retroactive funder for good work (aka bounties, prizes)
b) Getting funding through this kind of mechanism (aka income share agreements, angel investment)
c) Working on this project full time (full-stack web dev, ops, community management)
Please get in touch! Reply here, or message austin@manifold.markets~
Thanks again Elizabeth for pushing forward this initiative; Slime Mold Time Mold's obesity hypothesis has been one of the most interesting things I've come across in the last couple years, and I'm glad to see citizen research efforts springing up to pursue it~
The credit for combining the data set really goes to Oliver S and Josh C; I mostly just posted the bounty haha:
I'm biased towards all the prediction market ones, naturally haha. In case you wanted to get a head start on manipulating markets for fun & profit:
I like this a lot! I am also the kind of person to use a new tab death clock, though your post inspired me to update it to my own AI timeline (~10 years).
I briefly experimented with using New Tab Redirect to set your site as my new tab page, but I think it takes a smidgen longer to load haha (it needs to fetch the Metaculus API or something?)
Sorry about that - had some configuration issues. It should work now!
Thanks! I tried splitting into smaller sections (half the size) so that we don't have this issue as much; not sure what other solutions look like.
Yeah probably a stale caching layer, what fic were you reading? Glowflow doesn't read from an epub, it's reading html from the site itself.
Lemme try rebooting to see if that refreshes. That's obviously not sustainable... I didn't expect people to actually use it for a live, updating fic lol.
Edit: added a "Clear cache" button, hope that solves it!
Text centering should now be live!
dark mode seems to have stopped working
Hm, do you have "dark mode" toggled on the sidebar? (There's two settings unfortunately due to how Streamlit is set up):
The outer box doesn't widen together with the text and background, and the text doesn't stay centered
Yeah unfortunately this is mostly working-as-implemented. The box size isn't a thing I can change; "Wide Mode" lets the box be big, otherwise it's small.
Text centering might be possible if you're in "Wide Mode" -- I'll look into that.
Done! Thanks for the feedback. Hoping 2000px is plenty but it's easy to increase lol.
(Having too many options is sometimes a symptom of bad UX design, but it seems reasonable for a web reader to support all of these.)
Hrm, I think I could code in a way to specify the height of the box... lemme look into it.
Thanks for all your suggestions, btw!
I hope so! I myself bounced off of Mad Investor Chaos twice before because the formatting was too hard for me to read... but after implementing this reader, spent 2 hours last night reading through it.
Thanks so much for writing this Glowfic!
Your wish is my command - dark mode added!
I liked how the epub strips out unnecessary UI from the glowfic site, but downloading and moving epubs around is a pain...
So I built a web reader on top of this code! Check it out here: https://share.streamlit.io/akrolsmir/glowflow/main
It'll work for any Glowfic post actually, eg https://share.streamlit.io/akrolsmir/glowflow/main?post=5111
Would probably be simple to add a download button to get the epub file; source code here.
I've been thinking for a while that maybe forecasting should have its own LessWrong instance, as a place to discuss and post essays (the way EA Forum and AI Alignment have their own instances); curious to get your thoughts on whether this would improve the forecasting scene by having a shared place to meet, or detract by making it harder for newcomers to hear about forecasting?
I really, really wish crossposting and crosslinking was easier between different ForumMagnum instances...
I'm not sure it's as simple as that - I don't know that just because it's your past self, you get to make decisions on their behalf.
Toy example: last week I promised myself I would go hit the gym. Today I woke up and am feeling lazy about it. My lazy current self thinks breaking the promise is a good idea, but does that mean he's justified in thinking that the past version of Austin would agree?
I don't even think I owe very much to many stated preferences of contemporary living humans
This feels like something of a crux? Definitely, before we get into respecting the preferences of the past, if we don't agree on respecting the preferences of the present/near-future humans we may not find much to agree on.
I'm not even sure where to begin on this philosophical point -- maybe something like universalizability: "wouldn't it be good if other contemporary living humans, who I might add outnumber you 7 billion to 1, tried to obey your own stated preferences?"
just like I wouldn't attend masses just because my friend from 10 years ago who is also dead now wanted me to.
I'm not so sure about this analogy -- intuitively, aren't your obligations to yourself much stronger than to a friend? E.g. if a friend randomly asked for $5000 to pay for a vacation I wouldn't just randomly give it to her; but if my twin or past self spent that much I'd be something like 10-100x more likely to oblige.
Your finger is on the scales with the example of the conservationist. That person's desires are an applause light, while those of their descendants are a boo light. Switch the two sets of desires and the example is no longer persuasive, if it ever was.
First: I picked this example partly because "cuteness optimization" does seem weird and contrary and unsympathetic. I imagine that to people in the past, our present lack of concern for our literal neighbors, or views on gay marriage, seem just as unsympathetic.
Second: "cuteness" might not be the exact correct framing, but "species extinction to maximize utilons" has a surprising amount of backing to it. In some sense, the story of industrial progress has been one of inadvertent species extinction, and I'm partial to the idea that this was in fact the right path because of the massive number of humans it has made happier (rather than slowing down industrial growth in service to sustainability). Or another example: see this piece arguing that we should desire the extinction of all carnivorous species, due to the massive amount of wild animal suffering imposed by predation.
Okay, now I've used the live-collab/commenting feature on a LessWrong draft. It's pretty good! If you haven't seen it yet, I'd recommend writing a new LW post and requesting feedback; Justis Mills's feedback was super fast, highly detailed, and all-around incredibly valuable!
Can I turn on inline comments for a published LessWrong post too? Even after "publishing" it'd be super useful to get the comments inline. In my view, a great post should be a timeless, living, breathing, collaborative document, rather than a bunch of words dumped out once and never revisited.
(There's value in the latter for eg news posts; but LW's focus is less on that.)
Suggestion: Inline comments for LessWrong posts, ala Google Docs
It's been commented on before that much intellectual work in the EA/Rat community languishes behind private Google Docs. I think one reason is just that the inline-commenting mechanism on a GDoc is so much better than excerpting the comment below. Has the Lightcone team considered this/what is the status?
(I vaguely recall them working on a live-collab feature, not sure if commenting would have been part of this)
I think feedback loops and OODA are really great; thanks for drawing attention to this concept! One thing that would have made this post more compelling: do you have any concrete examples of applying OODA in real life?
A bit hard to describe; kind of like ratfic, kind of like roleplay, kind of like a forum.
https://luminousalicorn.tumblr.com/post/145319779970/what-is-a-glowfic
One more: Progress Studies!