Posts

The Inner Ring by C. S. Lewis 2024-04-24T22:48:09.228Z
Come to Manifest 2024 (June 7-9 in Berkeley) 2024-03-27T21:30:17.306Z
Retro funder profile & Manifund team recs (ACX Grants 2024: Impact Market) 2024-03-26T03:29:43.482Z
Invest in ACX Grants projects! 2024-03-06T20:27:04.616Z
Things You’re Allowed to Do: University Edition 2024-02-06T00:36:11.690Z
Explaining Impact Markets 2024-01-31T09:51:27.587Z
Link Collection: Impact Markets 2023-12-26T09:01:48.815Z
Solving Two-Sided Adverse Selection with Prediction Market Matchmaking 2023-11-26T20:10:21.622Z
OPTIC: Announcing Intercollegiate Forecasting Tournaments in SF, DC, Boston 2023-10-13T01:36:48.331Z
Manifest 2023 2023-09-06T11:24:31.274Z
Last Chance: Get tickets to Manifest 2023! (Sep 22-24 in Berkeley) 2023-09-06T10:35:37.510Z
Announcing Manifest 2023 (Sep 22-24 in Berkeley) 2023-08-14T05:13:03.186Z

Comments

Comment by Saul Munn (saul-munn) on This is Water by David Foster Wallace · 2024-04-25T00:51:17.505Z · LW · GW

thanks oli, and thanks for editing mine! appreciate the modding <3

Comment by Saul Munn (saul-munn) on Quick evidence review of bulking & cutting · 2024-04-24T22:01:28.924Z · LW · GW

thanks for writing this — also, some broad social encouragement for the practice of doing quick/informal lit reviews + posting them publicly! well done :)

Comment by Saul Munn (saul-munn) on Things You’re Allowed to Do: University Edition · 2024-04-06T00:06:43.413Z · LW · GW

that is... wild. thanks for sharing!

Comment by Saul Munn (saul-munn) on My simple AGI investment & insurance strategy · 2024-04-01T02:20:32.179Z · LW · GW

I think institutional market makers are basically not pricing [slow takeoff, or the expectation of one] in

why do you think they're not pricing this in?

Comment by Saul Munn (saul-munn) on Failures in Kindness · 2024-03-31T04:57:18.870Z · LW · GW

Really love this post. Thanks for writing it!

Comment by Saul Munn (saul-munn) on LessOnline (May 31—June 2, Berkeley, CA) · 2024-03-28T01:12:15.073Z · LW · GW

thanks for the feedback on the website. here's the explanation we gave on the manifest announcement post:

In the week between LessOnline and Manifest, come hang out at Lighthaven with other attendees! Cowork and share meals during the day, attend casual workshops and talks in the evening, and enjoy conversations by the (again, literal) fire late into the night.

Summer Camp will be pretty lightweight: we’ll provide the space and the tools, and you & your fellow attendees will bring the discussions, workshops, tournaments, games, and whatever else you’re excited about organizing.

Here are the types of events you’ll see at Summer Camp:

  • Hackathons (or “Forecastathons”)
  • Organized discussions and workshops
  • Jam sessions and dance parties
  • Games of all kinds: social deception games, poker, MTG, jackbox, etc.
  • Campy activities: sardines, s’mores, singalongs
  • Multi-day intensive workshops, e.g. a CFAR-style workshop or a Quantitative Trading Bootcamp (Note: these may come at some extra cost, TBD by the organizers)

let me know if you have other questions.

Comment by Saul Munn (saul-munn) on Toward a Broader Conception of Adverse Selection · 2024-03-17T16:11:39.288Z · LW · GW

I really enjoyed this — thank you for writing. I also think the updated version is a lot better than the previous version, and I appreciate the work you put in to update it. I'm really, really looking forward to the other posts in this sequence.

I'd also really enjoy a post that's on this exact topic, but one that I'd feel comfortable sending to my mom or something, cf "Broad adverse selection (for poets)."

Comment by Saul Munn (saul-munn) on Things You’re Allowed to Do: University Edition · 2024-02-20T00:51:26.839Z · LW · GW

cf "there's no speed limit"

Comment by Saul Munn (saul-munn) on More Hyphenation · 2024-02-12T00:34:14.848Z · LW · GW

relevant: story-based decision-making

Comment by saul-munn on [deleted post] 2024-02-11T23:24:56.611Z
  1. the name of this post was really confusing for me. i thought it would be about "how to stop defeating akrasia," not "how to defeat akrasia by stopping." consider renaming it to be a bit more clear?
  2. the part at the end really reminded me of this piece by dr maciver: https://notebook.drmaciver.com/posts/2022-12-20-17:21.html
Comment by Saul Munn (saul-munn) on Concrete examples of doing agentic things? · 2024-01-12T22:46:25.616Z · LW · GW

+1 on Things You're Allowed To Do, it's really really great

Comment by Saul Munn (saul-munn) on Concrete examples of doing agentic things? · 2024-01-12T22:45:34.625Z · LW · GW

here are some specific, random, generally small things that i do quite often:

  • sit on the floor. i notice myself wanting to sit, and i notice myself lacking a chair. fortunately, the floor is always there.
  • explicitly babble! i babble about thoughts that are bouncing around in my head, no matter the topic! open a new doc — docs.new works well, or whatever you use — set a 5 minute timer, and just babble. write whatever comes to mind.
  • message effective/competent people to cowork with them. i'm probably not the most effective/competent person you know, but feel free to practice this with me!
  • board airplanes close to last, intentionally. i do this to avoid pushing through the lines, and to give myself an extra ~15 minutes to work in the airport terminal.
  • write out informal/babbled decision docs for big/important decisions. i've done this ~5 times over the last 6 months, and it both helps me during the decision (forces me to make my thoughts/worries/hopes concrete, lets me get quick thoughts/advice from friends, etc) and after the decision (i can remind myself why i originally made the decision, regardless of what ends up happening).
  • actually doing a lot of the things on this list
  • after or while doing something with someone else (a long conversation, a group project, a friendship, a ski trip, etc) asking them "what was the worst thing i did?"
  • waking myself up by intentionally getting water up my nose, then blowing it
  • pick a few things — about 3-8 — that look pretty good on a menu. ask siri to pick a random number, 1 through [number of things that looked good]. that's now my default order; if i want, i can order something else, but i should have a pretty strong reason.
  • when sending someone a list of questions, send an accompanying list of the answers that are likely correct, so that they can just say “yes” or "actually no, it's _____." it cuts down significantly on the amount of time they have to spend answering my questions.

here are some more general mental TAPs that i've accidentally burned into my brain:

  • "gahhh, i wish [x] thing existed!" → "could i make [x]?"
  • "boy, [x] is annoying/bad/disruptive/generally a problem." → "could i solve [x]?"
  • "now that i think about it, [person] is actually a really rare & awesome person." → "could i text/call them right now?"
  • "i don't like [x] about myself/my environment/the people i hang out with/my workspace/etc." → "could i change [x]?"
  • "i hate that i always have to do [x]." → "what would happen if i didn't do [x]? would i take damage? if so, would i take more damage than the damage i'm currently taking while forcing myself to do [x]?"
  • "ooh, i have to remember to [x]." → "should i set an alarm/reminder/calendar event about [x]?" (almost always: yes, you should)
  • "that's a great idea." → "should i quickly write a sentence about this down in my notes app? even just a sazen for my future self?"
  • "i should do [x]." → "do i have a concrete plan for doing [x] that's worked in relevantly similar situations for me in the past? if not, how am i expecting to get [x] done?"
  • [in a meeting] "great, let's make sure to get [x] done." → "is there one clear person who's owning this? who's responsible if it doesn't get done, and in charge of making sure it does get done? do i trust this person to actually get [x] done?"
  • "hmm, i'm realizing that i've built up a really big ugh field around [x]." → "[insert generic strategies for dealing with ugh fields; i think mine are mediocre, so would love to hear others']"
  • notes:
    • most of the above was stolen from this thread
    • the actions above are questions that i ask myself, not concrete actions i force myself to do. they're sorta like saying "hey, here's a concrete action you could take — do you want to?" most of the time, my internal response is "nah, i'm good." but sometimes, my internal response is "actually, yeah! let's do this!" importantly, the questions are meant to reduce friction toward acting on your available options, not imply an obligation to those options.
Comment by Saul Munn (saul-munn) on Calibration Trivia · 2023-12-23T05:15:31.140Z · LW · GW

If you and your audience have smartphones, we suggest making use of a copy of this spreadsheet and google form.

are "spreadsheet" and "google form" meant to be linked to something?

Comment by Saul Munn (saul-munn) on Meetup Tip: Heartbeat Messages · 2023-12-16T09:24:44.692Z · LW · GW

I think a lot of what I write for rationalist meetups would apply straightforwardly to EA meetups.

agreed. this sort of thing feels completely missing from the EA Groups Resources Centre, and i'd guess it would be a big/important contribution.

This may be a silly question, but- how does cross posting usually work?

iirc, when you're publishing a post on {LessWrong, the EA forum}, one of the many settings at the bottom is "Cross-Post to {the EA forum, LessWrong}," or something along those lines. there's some karma requirement for both the EA forum and for LW — if you don't meet the karma requirement for one, you might need to manually cross-post until you have enough karma.

are there norms on EA forum around say, pseudonyms and real names, or being a certain amount aligned with EA?

re: pseudonyms: though there's a general, mild preference for post authors to use their real names, using a pseudonym is perfectly fine — and many do (example).

re: alignment: you don't need to be fully on-board with EA to post on the forum (and many aren't), but the content of your post should at least relate to EA.

for other questions, here's a guide to the norms on the EA forum. (on that guide, they have sections on "rules for pseudonymous and multiple accounts" and "privacy and pseudonymity.")

*i'll edit & delete this part later, but: i'll get back to you over email in a bit! caught up with other stuff, and getting to things one at a time :)

Comment by Saul Munn (saul-munn) on Meetup Tip: Heartbeat Messages · 2023-12-10T00:29:20.369Z · LW · GW

This is great! Have you cross-posted this to the EA Forum? If not, may I?

Comment by Saul Munn (saul-munn) on Solving Two-Sided Adverse Selection with Prediction Market Matchmaking · 2023-11-28T05:20:00.506Z · LW · GW

Thanks for the response!

Re: concerns about bad incentives, I agree that you can depict the losses associated with manipulating conditional prediction markets as paying a "cost" — even though you'll probably lose a boatload of money, the manipulation might still be worth that loss. In the words of Scott Alexander, though:

If you’re wondering why people aren’t going to get an advantage in the economy by committing horrible crimes, the answer is probably the same combination of laws, ethics, and reputational concerns that works everywhere else.

I'm concerned about this, but it feels like a solvable problem.

Re: personal stuff & the negative externalities of publicly revealing probabilities, thanks for pointing these out. I hadn't thought of them. Added them to the post!

Comment by Saul Munn (saul-munn) on Solving Two-Sided Adverse Selection with Prediction Market Matchmaking · 2023-11-28T04:23:03.775Z · LW · GW

Thanks for the response!

This applies to roughly the entire post, but I see an awful lot of magical thinking in this space.

Could you point to some specific areas of magical thinking in the post? and/or in the space?[1] (I'm not claiming that there aren't any; I definitely think there are. I'm interested to know where I & the space are being overconfident/thinking magically, so that I/it can do less magical thinking.)

What is the actual mechanism by which you think prediction markets will solve these problems?

The mechanism that Manifold Love uses. In section 2, I put it as "run a bunch of conditional prediction markets on a bunch of key benchmarks for potential pairs between two sides that are normally caught in adverse selection." I wrote this post to explain the actual mechanism by which I think (conditional) prediction markets might solve these problems, but I also want to note that I definitely do not think that (conditional) prediction markets will definitely for sure 100% totally completely solve these problems. I just think they have potential, and I'm excited to see people giving it a shot.
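
To make that mechanism a bit more concrete, here's a minimal illustrative sketch (not Manifold Love's actual code; the names, the benchmark question, and the prices are all made up). The idea is just: each potential pair gets a conditional market on some benchmark, e.g. "if matched, will they go on a third date?", and you propose the pairs the market is most optimistic about.

```python
# Illustrative sketch only: hypothetical market-implied probabilities for
# "if this pair is matched, will they go on a third date?"
# (Made-up names and prices; not Manifold Love's actual implementation.
#  Markets for pairs that never get matched would just resolve N/A.)

candidate_pairs = {
    ("alice", "bob"): 0.62,    # market price of YES, conditional on a match
    ("alice", "carol"): 0.35,
    ("dana", "bob"): 0.48,
}

def rank_pairs(conditional_prices, threshold=0.5):
    """Sort potential pairs by the market's conditional probability of the
    benchmark outcome, and only propose matches above a cutoff."""
    ranked = sorted(conditional_prices.items(), key=lambda kv: kv[1], reverse=True)
    return [(pair, p) for pair, p in ranked if p >= threshold]

for pair, p in rank_pairs(candidate_pairs):
    print(f"propose match {pair}: market gives {p:.0%} on the benchmark")
```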

In order to get a good prediction from a market you need traders to put prices in the right places. This means you need to subsidise[2] the markets.

I agree! In order to get a good prediction from a market, you (probably, see the footnote) need participation to be positive-sum.[3] I think there are a few ways to get this:

  • Direct subsidies
    • Since prediction markets create valuable price information, it might make sense to have those who benefit from the price information directly pay. I could imagine this pretty clearly, actually: Manifold Love could charge users for (e.g.) more than 5 matches, and some part of the fee that the user pays goes toward market subsidies. As you pointed out, paying a 3rd party is currently the case for most of my examples — matchmakers, headhunters, real estate agents, etc — so it seems like this sort of thing aligns with the norms & users' expectations.
  • Hedging
    • Some participants bet to hedge other off-market risks. These participants are likely to lose money on prediction markets, and know that ahead of time. That's because they're not betting their beliefs; they're paying the equivalent of an insurance premium.
    • For prediction markets generally, this seems like the most viable path to getting money flowing into the market. I'm not sure how well it'd work for this sort of setup, though — mainly because the markets are so personal.
    • This requires finding markets on which participants would want to hedge, which seems like a difficult problem. I give an example below, but I'm pretty unsure what something like this would look like in a lot of the examples I listed in the original essay.
    • Continuing the example of the labor market from section 2: I could imagine (e.g.) a Google employee buying an ETF-type-thing that bets NO on whether all potential Google employees will remain at Google a year from their hiring date. This protects that Google employee against the risk of some huge round of layoffs — they've bought "insurance" against that outcome. In doing so, it gives the markets a way to become positive-sum for those participants who're betting their beliefs. (There's a toy payoff sketch just after this list.)
  • New traders
    • This provides an inflow of money, but is (obviously) tied to new traders joining the market. I don't like this at all, because it's totally unsustainable and leads to community dynamics like those in crypto. Also, it's a scheme that's pyramid-shaped, a "pyramid scheme" if you will.
    • I'm mainly including this for completeness; I think relying on this is a terrible idea.
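
To make the "insurance premium" point concrete, here's a toy payoff calculation for that hypothetical hedger (all numbers are made up, and each NO share is assumed to pay out $1 if the market resolves NO):

```python
# Toy numbers only. The hedger knowingly pays an "insurance premium" in
# expectation, and that expected loss is what makes the market positive-sum
# for the participants who are betting their beliefs.

no_price = 0.10    # market price of NO on "will all new hires still be here in a year?"
shares = 50_000    # NO shares the employee buys; each pays $1 if NO resolves
cost = no_price * shares               # upfront cost of the hedge: $5,000

p_layoffs = 0.05   # the employee's own probability that NO resolves (layoffs happen)

payout_if_layoffs = shares - cost      # market resolves NO:  +$45,000
payout_if_no_layoffs = -cost           # market resolves YES: -$5,000

expected_value = p_layoffs * payout_if_layoffs + (1 - p_layoffs) * payout_if_no_layoffs
print(f"hedger's expected value: ${expected_value:,.0f}")   # about -$2,500, by design
```

The hedger expects to lose money here, and that's the point: they're trading a small expected loss for protection against a big downside, and that expected loss is the money that lets the belief-bettors on the other side be positive-sum.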

Whether or not a subsidised prediction market is going to be cheaper for the equivalent level of forecast than paying another 3rd party (as is currently the case in most of your examples) is very unclear to me

Agreed! It's unclear to me too. This sort of question is answerable by trying the thing and seeing if it works — that's why I'm excited for people & companies to try it out.

 

  1. ^

    I'm assuming you mean the "prediction market/forecasting space," so please let me know if that's not the space to which you're referring.

  2. ^

    I'll interpret "subsidize" more broadly as "money flowing into the market to make it positive-sum after fees, inflation, etc."

  3. ^

    I'm comfortable working under this assumption for now, but I do want to be clear that I'm not fully convinced this is the case. The stock market is clearly negative-sum for the majority of traders, and yet... traders still join. It seems at least plausible that, as long as the market is positive-sum for some key market participants, the markets can still provide valuable price information.