Posts

Atlantis: Berkeley event venue available for rent 2023-11-22T01:47:12.026Z
LessWrong readers are invited to apply to the Lurkshop 2022-11-22T09:19:05.412Z
You can now apply to EA Funds anytime! (LTFF & EAIF only) 2021-06-18T13:40:51.100Z
Apply to Effective Altruism Funds now 2021-02-13T13:36:39.977Z

Comments

Comment by Jonas V (Jonas Vollmer) on My simple AGI investment & insurance strategy · 2024-04-05T19:50:00.460Z · LW · GW

Implied volatility of long-dated, far-OTM calls is similar between AI-exposed indices (e.g. SMH) and individual stocks like TSM or MSFT (though not NVDA). 

The more concentrated exposure you get from AI companies or AI-exposed indices, compared to VTI, is likely worth it, unless you expect a short-timelines, slow-takeoff scenario to involve significant acceleration of the broader economy (not just the tech giants), which I don't think is highly plausible.

Comment by Jonas V (Jonas Vollmer) on My simple AGI investment & insurance strategy · 2024-04-05T19:43:46.978Z · LW · GW

There are SPX options that expire in 2027, 2028, and 2029; those seem more attractive to me than 2-3-year-dated VTI options, especially given that they have strike prices that are much further out of the money.

Would you mind posting the specific contracts you bought?

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2024-03-12T23:33:53.293Z · LW · GW

I have some of those in my portfolio. It's worth slowly walking GTC orders up the bid-ask spread; you'll get better pricing that way.
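
For illustration, here's a minimal sketch of the idea with a hypothetical broker interface (get_quote, place_limit_order, is_filled, and cancel are stand-in names, not a real API):

```python
# Sketch of walking a GTC buy order up the bid-ask spread.
# The `broker` object and its methods are hypothetical stand-ins.
import time

def walk_gtc_buy(broker, symbol, qty, step=0.05, wait_s=600):
    quote = broker.get_quote(symbol)
    price = quote.bid  # start at the bid, the most favorable side for a buyer
    while True:
        order = broker.place_limit_order(symbol, qty, price, tif="GTC")
        time.sleep(wait_s)                # give the resting order time to fill
        if broker.is_filled(order):
            return order
        broker.cancel(order)
        quote = broker.get_quote(symbol)  # refresh before stepping up
        if price >= quote.ask:            # already at the ask; stop walking
            return None
        price = min(price + step, quote.ask)
```

The point is just to rest passively near the bid first and only concede toward the ask if you don't get filled, instead of crossing the whole spread immediately.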

Comment by Jonas V (Jonas Vollmer) on FUTARCHY NOW BABY · 2024-02-29T20:51:02.093Z · LW · GW

Doing a post-mortem on sapphire's other posts, their track record is pretty great:

  • BTC/crypto liftoff prediction: +22%
  • Meta DAO: +1600%
  • SAVE: -60%
  • BSC Launcher: -100%?
  • OLY2021: +32%
  • Perpetual futures: +20%
  • Perpetual futures, DeFi edition: +15%
  • Bet on Biden: +40%
  • AI portfolio: approx. -5% compared to index over same time period
  • AI portfolio, second post: approx. +30% compared to index over same time period
  • OpenAI/MSFT: ~0%
  • Buy SOL: +1000%
  • There are many more that I didn't look into.

All of these were over a couple of weeks or months, so if you had just blindly put 10% of your portfolio into each of the above, you would have gotten very impressive returns. (Overall, roughly ~5x relative to the broad market; see the sketch below.)
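
As a rough sanity check on that figure, here's a minimal sketch of the compounding arithmetic (my assumptions: the bets happen sequentially, each gets 10% of the then-current portfolio, the other 90% sits flat, and the uncertain BSC Launcher figure is taken at -100%; the comparison against the market over the same period is omitted):

```python
# Compounding arithmetic for the list above: each bet gets 10% of the
# portfolio, and the remaining 90% is assumed to sit flat in the meantime.
returns = [0.22, 16.0, -0.60, -1.00, 0.32, 0.20, 0.15, 0.40,
           -0.05, 0.30, 0.00, 10.0]  # per-bet returns from the list

portfolio = 1.0
for r in returns:
    portfolio *= 1 + 0.10 * r  # 10% of the portfolio earns r

print(f"{portfolio:.1f}x")  # ~5.1x
```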

Comment by Jonas V (Jonas Vollmer) on FUTARCHY NOW BABY · 2024-02-29T20:22:58.556Z · LW · GW

Worth pointing out that this is up ~17x since it was posted. Sometimes, good ideas will be communicated poorly, and you'll pay a big price for not investigating yourself. (At least I did.)

Comment by Jonas V (Jonas Vollmer) on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2024-02-18T19:36:15.402Z · LW · GW

Just stumbled across this post, and copying a comment I once wrote:

  • Intuitively and anecdotally (and based on some likely-crappy papers), it seems harder to see animals as sentient beings, or to think clearly about the badness of factory farming, while eating meat. This form of motivated reasoning plausibly distorts most people's epistemics about a pretty important part of the world, and recognizing the badness of factory farming has minor implications for s-risks and other AI issues.

With some further clarifications:

  • Nobody actively wants factory farming to happen, but it's the cheapest way to get something we want (i.e. meat), and we've built a system where it's really hard for altruists to stop it from happening. If a pattern like this extended into the long-term future, we might want to do something about it.
  • In the context of AI, suffering subroutines might be an example of that.
  • Regarding futures without strong AGI: Factory farming is arguably the most important example of a present-day ongoing atrocity. If you fully internalize just how bad this is, that there's something like a genocide (in terms of moral badness, not evilness) going on right here, right now, under our eyes, in wealthy Western democracies that are often understood to be the most morally advanced places on earth, and it's really hard for us to stop it, that might affect your general outlook on the long-term future. I still think the long-term future will be great in expectation, but it also makes me think that utopian visions that don't consider these downside risks seem pretty naïve.


I used to eat a lot of meat, and once I stopped doing that, I started seeing animals in a different light (treating them as morally relevant, and internalizing that a lot more). The reason why I don't eat meat now is not that I think it would cause value drift, but that it would make me deeply sad and upset – eating meat would feel similar to owning slaves that I treat poorly, or watching a gladiator fight for my own amusement. It just feels deeply morally wrong and isn't enjoyable anymore. The fact that the consequences are only mildly negative in the grand scheme of things doesn't change that. So [I actually don't think that] my argument supports me remaining a vegan now, but I think it's a strong argument for me to go vegan in the first place at some point.

My guess is that a lot of people don't actually see animals as sentient beings whose emotions and feelings matter a great deal, but more like cute things to have fun with. And anecdotally, how someone perceives animals seems to be determined by whether they eat them, not the other way around. (Insert plausible explanation – cognitive dissonance, rationalizations, etc.) I think squashing dust mites, drinking milk, eating eggs, etc. have a much weaker effect than eating meat, presumably because they're less visceral, more indirect/accidental ways of hurting animals.


Yeah, as I tried to explain above (perhaps it was too implicit), I think it probably matters much more whether you went vegan at some point in your life than whether you're vegan right now.

I don't feel confident in this; I wanted to mainly offer it as a hypothesis that could be tested further. I also mentioned the existence of crappy papers that support my perspective (you can probably find them in 5 minutes on Google Scholar). If people thought this was important, they could investigate this more.

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2024-02-18T19:24:20.714Z · LW · GW

You can also just do the unlevered versions of this, like SMH / SOXX / SOXQ, plus tech companies with AI exposure (MSFT, GOOGL, META, AMZN—or a tech ETF like QQQ).

A leverage + put options combo means you'll end up paying lots of money to market makers.

Comment by Jonas V (Jonas Vollmer) on FUTARCHY NOW BABY · 2024-02-18T19:16:26.234Z · LW · GW

I think it's more that you don't argue for why you believe what you believe and instead just assert that it's cool, and the whole thing looks a bit sloppy (spelling mistakes, all-caps, etc.).

Comment by Jonas V (Jonas Vollmer) on AI Impacts 2023 Expert Survey on Progress in AI · 2024-01-07T12:35:17.225Z · LW · GW

Does anyone know if the 2022 survey responses were collected before or after ChatGPT came out?

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2024-01-07T12:19:02.158Z · LW · GW

I think having some personal retirement savings is still useful in a broad range of possible AGI outcome scenarios, so I personally still do some retirement saving.

Regarding 401(k)s/IRAs, anything that preserves your ability to make speculative investments based on an information advantage (as outlined in this post) seems especially good; anything that limits you to a narrow selection of index funds seems potentially suboptimal to me.

Comment by Jonas V (Jonas Vollmer) on Spirit Airlines Merger Play · 2024-01-04T14:20:31.149Z · LW · GW

What's the story for why we would have an advantage here? Surely quants who specialize in this area are on top of this, and aren't constrained by capital? Unlike previous trades where rationalists made lots of money (Covid short, ETH presale, etc.), this doesn't look like a really weird thing that the pros would be unable to do with sufficient volume.

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-12-22T22:15:58.078Z · LW · GW

You mean an AI ETF? My answer is no; I think making your own portfolio (based on advice in this post and elsewhere) will be a lot better.

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-12-08T22:15:28.302Z · LW · GW

Very helpful, thanks. And interesting about the Eris SOFR Swap Futures. 

Interest rate swaptions might also be worth looking into, though they may only be available to large institutional investors.

Why not just short-sell treasuries (e.g. TLT)?

Comment by Jonas V (Jonas Vollmer) on The Lighthaven Campus is open for bookings · 2023-11-22T01:50:32.723Z · LW · GW

If you're running an event and Lighthaven isn't an option for some reason, you may be interested in Atlantis: https://www.lesswrong.com/posts/pvz53LTgFEPtnaWbP/atlantis-berkeley-event-venue-available-for-rent 

Comment by Jonas V (Jonas Vollmer) on Dialogue on the Claim: "OpenAI's Firing of Sam Altman (And Shortly-Subsequent Events) On Net Reduced Existential Risk From AGI" · 2023-11-21T23:05:11.672Z · LW · GW

Market on the primary claim discussed here: 

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-11-21T06:54:00.196Z · LW · GW

I personally would not put Meta on the list.

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-11-18T00:26:29.522Z · LW · GW

Yes, but if they're far out of the money, they are a more capital-efficient way to make a very concentrated bet on outlier growth scenarios.

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-11-14T06:37:21.870Z · LW · GW

No, but I also didn't reach out (mostly because I'm lazy/busy).

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-11-08T23:44:24.067Z · LW · GW

https://www.investopedia.com/articles/investing/082714/how-invest-samsung.asp 

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-11-08T00:24:12.373Z · LW · GW

There's some evidence from 2013 suggesting that long-dated, out-of-the-money call options have strongly negative EV; a common explanation is that some buyers like gambling and drive up prices. See this article. I also heard that over the last decade, some hedge funds therefore adopted the strategy of writing OTM calls on stocks they hold to boost their returns, and that some of these hedge funds disappeared a couple of years ago.

Has anyone looked into 1) whether this has replicated more recently, and 2) how much worse it makes some of the suggested strategies (if at all)?
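
For intuition on the mechanism, here's a toy Monte Carlo sketch under made-up assumptions (the spot/strike/drift/volatility numbers are mine for illustration, not from the 2013 evidence): if buyers bid implied volatility above what the stock later realizes, a far-OTM call has negative EV even with a favorable drift.

```python
# Toy sketch: price a far-OTM call at an assumed implied vol, then
# simulate outcomes at a lower assumed realized vol. All parameters
# are illustrative assumptions.
from math import erf, exp, log, sqrt
import numpy as np

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, t, r, vol):
    """Black-Scholes price of a European call."""
    d1 = (log(spot / strike) + (r + vol**2 / 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-r * t) * norm_cdf(d2)

spot, strike, t, r, mu = 100.0, 200.0, 2.0, 0.04, 0.06  # 2y call, 100% OTM
implied_vol, realized_vol = 0.50, 0.40  # assumed "gambling" premium in IV

premium = bs_call(spot, strike, t, r, implied_vol)

# Simulate terminal prices under an assumed real-world drift mu and the
# lower realized vol, then compare the discounted payoff to the premium.
rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)
terminal = spot * np.exp((mu - realized_vol**2 / 2) * t
                         + realized_vol * sqrt(t) * z)
payoff = np.maximum(terminal - strike, 0.0)
ev = exp(-r * t) * payoff.mean() - premium

print(f"premium {premium:.2f}, discounted E[payoff] "
      f"{exp(-r * t) * payoff.mean():.2f}, EV {ev:.2f}")  # EV < 0 here
```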

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-11-08T00:17:07.613Z · LW · GW

I meant something like the Fed intervening to buy lots of bonds (including long-dated ones), without particularly thinking of YCC, though perhaps that's the main regime under which they might do it?

Are there strong reasons to believe that the Fed wouldn't buy lots of (long-dated) bonds if interest rates increased a lot?

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-11-08T00:10:37.917Z · LW · GW

How did you get SMSN exposure?

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-11-08T00:09:22.217Z · LW · GW

This looked really reasonable until I saw that there was no NVDA in there; why's that? (You might say high P/E, but note that the forward P/E is much lower.)

Comment by Jonas V (Jonas Vollmer) on Vote on Interesting Disagreements · 2023-11-08T00:06:26.479Z · LW · GW

Having another $1 billion to prevent AGI x-risk would be pretty useful.

Comment by Jonas V (Jonas Vollmer) on Vote on Interesting Disagreements · 2023-11-07T23:58:33.417Z · LW · GW

Just like the last 12 months were the time of chatbots, the next 12 months will be the time of agent-like AI product releases.

Comment by Jonas V (Jonas Vollmer) on Vote on Interesting Disagreements · 2023-11-07T23:57:01.001Z · LW · GW

The current AI x-risk grantmaking ecosystem is bad and could be improved substantially.

Comment by Jonas V (Jonas Vollmer) on Vote on Interesting Disagreements · 2023-11-07T23:52:56.855Z · LW · GW

Having another $1 billion to prevent AGI x-risk would be useful because we could spend it on large-scale lobbying efforts in DC.

Comment by Jonas V (Jonas Vollmer) on Vote on Interesting Disagreements · 2023-11-07T23:52:41.758Z · LW · GW

Having another $1 billion to prevent AGI x-risk would be useful because we could spend it on large compute budgets for safety research teams.

Comment by Jonas V (Jonas Vollmer) on Vote on Interesting Disagreements · 2023-11-07T23:51:50.387Z · LW · GW

Investing in early-stage AGI companies helps with reducing x-risk (via mission hedging, having board seats, shareholder activism).

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-11-07T04:36:40.418Z · LW · GW

Yeah, that does also feel right to me. I have been thinking about setting up some fund that maybe buys up a bunch of the equity that's held by safety researchers, so that the safety researchers don't have to also blow up their financial portfolio when they press the stop button or do some whistleblowing or whatever, and that does seem pretty good incentive-wise.

I'm interested in helping with making this happen.

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-11-07T04:25:27.103Z · LW · GW

Very interesting conversation!

I'm surprised by the strong emphasis on shorting long-dated bonds. Surely there's a big risk of nominal interest rates coming apart from real interest rates, i.e. lots of money getting printed? I feel like it's going to be very hard to predict what the Fed will do in light of 50% real interest rates, and Fed interventions could plausibly hurt your profits a lot here.

(You might suggest shorting long-dated TIPS, but those markets have less volume and higher borrow fees.)

Comment by Jonas V (Jonas Vollmer) on UFO Betting: Put Up or Shut Up · 2023-08-23T21:36:55.545Z · LW · GW

Some additional people reached out to me—just reiterating that I'm happy to do more at 20:1 odds!

Comment by Jonas V (Jonas Vollmer) on UFO Betting: Put Up or Shut Up · 2023-06-24T03:59:27.907Z · LW · GW

Happy to do another $40k at 55:1 odds if you like (another $727), and another $20k at 20:1 odds after that.

Comment by Jonas V (Jonas Vollmer) on UFO Betting: Put Up or Shut Up · 2023-06-23T22:58:39.852Z · LW · GW

Confirm.

Comment by Jonas V (Jonas Vollmer) on UFO Betting: Put Up or Shut Up · 2023-06-23T22:36:54.280Z · LW · GW

Happy to bet $40k at ~~110:1~~ 20:1 odds (~~$364~~ $2k). (Edited Sep 2023; previous bets confirmed at previous odds.)

USDC ERC-20 (Ethereum): (address removed for privacy, please DM if you want to trade more)

USDC Polygon: (address removed for privacy, please DM if you want to trade more)

(Edit 23 June 3:45 PT: I'm only willing to bet assuming that AGI-created tech doesn't count for the purposes of this bet—it has to be something more supernatural than that.)
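
(The stake arithmetic, for anyone checking the numbers: the counterparty's side is the bet size divided by the odds, i.e. $40,000 / 110 ≈ $364, $40,000 / 55 ≈ $727, and $40,000 / 20 = $2,000.)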

Comment by Jonas V (Jonas Vollmer) on My tentative best guess on how EAs and Rationalists sometimes turn crazy · 2023-06-22T17:36:18.791Z · LW · GW

I think SBF rarely if ever fired anyone, so "kicked out" seems wrong, but I heard that people who weren't behaving the way SBF liked (e.g., taking risks recklessly) got sidelined and often left on their own because their jobs became unpleasant or they had ethical qualms, which would be consistent with evaporative cooling.

Comment by Jonas V (Jonas Vollmer) on LessWrong readers are invited to apply to the Lurkshop · 2022-12-16T19:04:26.369Z · LW · GW

Yeah, I fully agree with this, and am aware of the cost. Apologies once more for not jumping in sooner when I wasn't sure whether applicants had been emailed by my colleague or not.

Comment by Jonas V (Jonas Vollmer) on LessWrong readers are invited to apply to the Lurkshop · 2022-12-15T20:21:17.840Z · LW · GW

This workshop will be postponed to January, likely 6–9 Jan. GradientDissenter was planning to give an update to all applicants; I hope they will do so soon. I understand that some of you may have made your plans hoping to be able to participate in the workshop, or were otherwise hoping for a fast response, and I apologize for completely missing the deadline, for the lack of communication, and for the change of plans.

(Why did this happen? Evaluating applications ended up being harder than anticipated, and I failed to jump in and fix things when the workshop planning wasn't progressing as planned, partly because I was on vacation.)

Comment by Jonas V (Jonas Vollmer) on LessWrong readers are invited to apply to the Lurkshop · 2022-11-24T17:05:10.294Z · LW · GW

We might if it goes well. If you want to be pinged if we run one, please submit a quick application through our form!

Comment by Jonas V (Jonas Vollmer) on LessWrong readers are invited to apply to the Lurkshop · 2022-11-22T21:00:34.183Z · LW · GW

Thanks for the feedback! I’ve edited the post to clarify where the funding is coming from and who is running this.

Regarding the content, one of my co-organizers may leave another comment later. The short version is that we'll be re-running some of the most popular content from previous workshops, but will primarily focus on informal conversations, as participants usually rate those as much more useful than the actual content of the workshop.

Comment by Jonas V (Jonas Vollmer) on LessWrong readers are invited to apply to the Lurkshop · 2022-11-22T19:59:14.633Z · LW · GW

It's funded by the Atlas Fellowship, which is funded by Open Philanthropy. It's something of a side-hustle of Atlas (outside the scope and brand of the organization, and run by a subset of our team plus some external collaborators). We have a fair amount of experience running different kinds of workshops and are experimenting with what programs targeted at other demographics and niches might look like.

Thanks for the feedback! Added to the OP.

Comment by Jonas V (Jonas Vollmer) on LessWrong readers are invited to apply to the Lurkshop · 2022-11-22T13:34:39.871Z · LW · GW

"Career networking" feels like it encompasses some useful stuff, like hearing about new opportunities, meeting potential co-founders, etc.

It also sounds like it encompasses some bad stuff, like a race to get the most connections and impressing people.

We're going to try and have some of the useful kind of career networking, and to push hard against the strong pressures towards the bad kind of career networking.

There also aren't that many careers out there just waiting for an employee to slot into a well-defined role that actually makes progress on preventing x-risk or similar, so we're much more excited about helping people carve out their own path than about connecting them to employers running hiring rounds.

Is there going to be a digital option?

Unfortunately we're not going to be able to accommodate that. It's fully in-person, since a large part of the point is in-person interactions.

Comment by Jonas V (Jonas Vollmer) on LessWrong readers are invited to apply to the Lurkshop · 2022-11-22T13:09:31.344Z · LW · GW

Yes. (Within reason.)

Comment by Jonas V (Jonas Vollmer) on Forecasting Newsletter: December 2021 · 2022-01-25T10:09:17.184Z · LW · GW

Would be very excited to get applications to the EA Infrastructure Fund (EAIF)! Apply here, it's fast and easy.

(I run EA Funds, which includes EAIF.)

Comment by Jonas V (Jonas Vollmer) on Apply to Effective Altruism Funds now · 2021-02-14T16:10:36.785Z · LW · GW

I largely agree with Habryka's perspective. I personally (not speaking on behalf of the EA Infrastructure Fund) would be particularly interested in such a grant if you had a track record of successful writing, as this would make it more likely that you'd actually reach a large audience. E.g., Eliezer did not just write HPMoR; he was a successful blogger on Overcoming Bias and wrote the Sequences.

Comment by Jonas V (Jonas Vollmer) on The rationalist community's location problem · 2020-10-12T18:33:21.451Z · LW · GW

Yeah, that seems very plausible for frugal people who don't pay much rent, don't eat out that often, etc., and updates me against Berlin.

Comment by Jonas V (Jonas Vollmer) on The rationalist community's location problem · 2020-10-12T08:24:36.521Z · LW · GW

I know of some EAs who lived in Berlin and found it very difficult to make friends due to the language barrier, and some EAs who had an experience more similar to yours.

Comment by Jonas V (Jonas Vollmer) on The rationalist community's location problem · 2020-10-12T08:23:02.722Z · LW · GW

Yeah. Adjusting for cost of living and purchasing power, it would be (much?) less, but still a good reason against moving.

Comment by Jonas V (Jonas Vollmer) on The rationalist community's location problem · 2020-10-10T08:57:51.965Z · LW · GW

Upvoted, I would like to see Berlin considered more strongly. Having lived there for two years, I think it's hard to overestimate how high the quality of life in Berlin is, not just in the easily verifiable ways listed above but also in more subtle ways. E.g., in addition to being much cheaper, restaurants/cuisine just generally seem higher quality than in many other places. German housing is much better than UK/US housing in ways that are hard to appreciate for people who haven't lived in both locations, etc.

Edit: To clarify, I don't want to suggest Berlin as the one single best rationalist hub, but as one of the global top 5.

To add some downsides:

  • The language barrier is still a bit of an issue if you care about making friends outside the rationalist community
  • ~~The airports are among the worst in the world~~ Not true anymore (finally)

Comment by Jonas V (Jonas Vollmer) on Draft report on AI timelines · 2020-09-19T10:28:56.067Z · LW · GW

Super exciting that this is being shared. Thanks!