Posts

Atlantis: Berkeley event venue available for rent 2023-11-22T01:47:12.026Z
LessWrong readers are invited to apply to the Lurkshop 2022-11-22T09:19:05.412Z
You can now apply to EA Funds anytime! (LTFF & EAIF only) 2021-06-18T13:40:51.100Z
Apply to Effective Altruism Funds now 2021-02-13T13:36:39.977Z

Comments

Comment by Jonas V (Jonas Vollmer) on o3 · 2024-12-21T00:56:48.274Z · LW · GW

OpenAI didn't say what the light blue bar is

Presumably light blue is o3 high, and dark blue is o3 low?

Comment by Jonas V (Jonas Vollmer) on There Should Be More Alignment-Driven Startups · 2024-09-07T20:32:34.996Z · LW · GW

If you're launching an AI safety startup, reach out to me; Polaris Ventures (which I'm on the board of) may be interested, and I can potentially introduce you to other VCs and investors.

Comment by Jonas V (Jonas Vollmer) on The Potential Impossibility of Subjective Death · 2024-09-03T00:43:52.882Z · LW · GW

Makes sense! And yeah, IDK, I think the concept of 'measure' is pretty confusing itself and not super convincing to me, but if you think through the alternatives, they seem even less satisfying.

Comment by Jonas V (Jonas Vollmer) on The Potential Impossibility of Subjective Death · 2024-09-03T00:43:27.090Z · LW · GW

That leads you to always risk your life if there's a slight chance it'll make you feel better … Right? E.g. 2% death, 98% not just totally fine but actually happy, etc., all the way to 99.99% death, 0.01% super duper happy.

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2024-08-26T22:53:08.151Z · LW · GW

I did some research and the top ones I found were SMH and QQQ.

Comment by Jonas V (Jonas Vollmer) on The Potential Impossibility of Subjective Death · 2024-08-26T22:50:36.636Z · LW · GW

I'm not sure what (if any) action-relevant point you might be making. If this is supposed to make you less concerned about death, I'd point out that the thing to care about is your measure (as in quantum immortality), which is still greatly reduced. 

Comment by Jonas V (Jonas Vollmer) on jacobjacob's Shortform Feed · 2024-07-23T18:18:25.725Z · LW · GW

Someone else added these quotes from a 1968 article about how the Vietnam War could go so wrong:

Despite the banishment of the experts, internal doubters and dissenters did indeed appear and persist. Yet as I watched the process, such men were effectively neutralized by a subtle dynamic: the domestication of dissenters. Such "domestication" arose out of a twofold clubbish need: on the one hand, the dissenter's desire to stay aboard; and on the other hand, the nondissenter's conscience. Simply stated, dissent, when recognized, was made to feel at home. On the lowest possible scale of importance, I must confess my own considerable sense of dignity and acceptance (both vital) when my senior White House employer would refer to me as his "favorite dove." Far more significant was the case of the former Undersecretary of State, George Ball. Once Mr. Ball began to express doubts, he was warmly institutionalized: he was encouraged to become the inhouse devil's advocate on Vietnam. The upshot was inevitable: the process of escalation allowed for periodic requests to Mr. Ball to speak his piece; Ball felt good, I assume (he had fought for righteousness); the others felt good (they had given a full hearing to the dovish option); and there was minimal unpleasantness. The club remained intact; and it is of course possible that matters would have gotten worse faster if Mr. Ball had kept silent, or left before his final departure in the fall of 1966. There was also, of course, the case of the last institutionalized doubter, Bill Moyers. The President is said to have greeted his arrival at meetings with an affectionate, "Well, here comes Mr. Stop-the-Bombing...." Here again the dynamics of domesticated dissent sustained the relationship for a while.

A related point—and crucial, I suppose, to government at all times—was the "effectiveness" trap, the trap that keeps men from speaking out, as clearly or often as they might, within the government. And it is the trap that keeps men from resigning in protest and airing their dissent outside the government. The most important asset that a man brings to bureaucratic life is his "effectiveness," a mysterious combination of training, style, and connections. The most ominous complaint that can be whispered of a bureaucrat is: "I'm afraid Charlie's beginning to lose his effectiveness." To preserve your effectiveness, you must decide where and when to fight the mainstream of policy; the opportunities range from pillow talk with your wife, to private drinks with your friends, to meetings with the Secretary of State or the President. The inclination to remain silent or to acquiesce in the presence of the great men—to live to fight another day, to give on this issue so that you can be "effective" on later issues—is overwhelming. Nor is it the tendency of youth alone; some of our most senior officials, men of wealth and fame, whose place in history is secure, have remained silent lest their connection with power be terminated. As for the disinclination to resign in protest: while not necessarily a Washington or even American specialty, it seems more true of a government in which ministers have no parliamentary backbench to which to retreat. In the absence of such a refuge, it is easy to rationalize the decision to stay aboard. By doing so, one may be able to prevent a few bad things from happening and perhaps even make a few good things happen. To exit is to lose even those marginal chances for "effectiveness."

Comment by Jonas V (Jonas Vollmer) on My simple AGI investment & insurance strategy · 2024-04-05T19:50:00.460Z · LW · GW

Implied volatility of long-dated, far-OTM calls is similar between AI-exposed indices (e.g. SMH) and individual stocks like TSM or MSFT (though not NVDA). 

The more concentrated exposure you get from AI companies or AI-exposed indices (compared to VTI) is likely worth it, unless you expect that a short-timelines, slow AI takeoff will involve significant acceleration of the broader economy (not just the tech giants), which I think is not highly plausible.
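If anyone wants to check this themselves, here's a minimal sketch of how one might compare IVs of long-dated, far-OTM calls using the yfinance package (my own illustration, not from the original comment; expiry availability, the 1.5× moneyness cutoff, and data quality for far-dated strikes are all assumptions that vary by ticker):

```python
# Rough sketch: median implied volatility of long-dated, far-out-of-the-money
# calls for a few AI-exposed tickers. Yahoo Finance data for far-dated options
# is patchy, so treat the output as a sanity check rather than a precise answer.
import yfinance as yf

TICKERS = ["SMH", "TSM", "MSFT", "NVDA"]
MONEYNESS = 1.5  # "far OTM" here means strike >= 150% of spot (arbitrary cutoff)

for symbol in TICKERS:
    t = yf.Ticker(symbol)
    expiries = t.options                      # available expiry dates
    if not expiries:
        continue
    expiry = expiries[-1]                     # furthest-dated expiry listed
    spot = t.history(period="1d")["Close"].iloc[-1]
    calls = t.option_chain(expiry).calls
    far_otm = calls[calls["strike"] >= MONEYNESS * spot]
    if far_otm.empty:
        continue
    iv = far_otm["impliedVolatility"].median()
    print(f"{symbol}: expiry {expiry}, median far-OTM call IV ≈ {iv:.0%}")
```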

Comment by Jonas V (Jonas Vollmer) on My simple AGI investment & insurance strategy · 2024-04-05T19:43:46.978Z · LW · GW

There are SPX options that expire in 2027, 2028, and 2029; those seem more attractive to me than 2–3-year-dated VTI options, especially given that they have strike prices that are much further out of the money.

Would you mind posting the specific contracts you bought?

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2024-03-12T23:33:53.293Z · LW · GW

I have some of those in my portfolio. It's worth slowly walking GTC orders up the bid-ask spread; you'll get better pricing that way.
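To make the order-walking idea concrete, here's a minimal sketch of the logic (the submit_limit_order, cancel_order, and is_filled calls are hypothetical placeholders for whatever your broker's API provides, not a real library):

```python
# Hypothetical sketch: walk a good-til-canceled buy limit order up the bid-ask
# spread in small steps instead of crossing the spread immediately, giving the
# order time to fill at each price level.
# submit_limit_order / cancel_order / is_filled are stand-ins for a broker API.
import time

def walk_gtc_buy(symbol, quantity, bid, ask, steps=5, wait_seconds=600):
    """Nudge a GTC buy limit from the bid toward the ask, pausing between steps."""
    step_size = (ask - bid) / steps
    order_id = None
    for i in range(steps + 1):
        limit_price = round(bid + i * step_size, 2)
        if order_id is not None:
            cancel_order(order_id)                 # hypothetical broker call
        order_id = submit_limit_order(             # hypothetical broker call
            symbol, quantity, limit_price, tif="GTC"
        )
        time.sleep(wait_seconds)                   # give the order time to fill
        if is_filled(order_id):                    # hypothetical broker call
            return limit_price
    return None  # not filled even at the ask; reassess before chasing further
```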

Comment by Jonas V (Jonas Vollmer) on FUTARCHY NOW BABY · 2024-02-29T20:51:02.093Z · LW · GW

Doing a post-mortem on sapphire's other posts, their track record is pretty great:

  • BTC/crypto liftoff prediction: +22%
  • Meta DAO: +1600%
  • SAVE: -60%
  • BSC Launcher: -100%?
  • OLY2021: +32%
  • Perpetual futures: +20%
  • Perpetual futures, DeFi edition: +15%
  • Bet on Biden: +40%
  • AI portfolio: approx. -5% compared to index over same time period
  • AI portfolio, second post: approx. +30% compared to index over same time period
  • OpenAI/MSFT: ~0%
  • Buy SOL: +1000%
  • There are many more that I didn't look into.

All of these were over a couple of weeks/months, so if you had just blindly put 10% of your portfolio into each of the above, you'd have gotten very impressive returns. (Overall, roughly ~5x relative to the broad market.)
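As a rough sanity check on that figure, here's a back-of-the-envelope sketch that compounds a 10% allocation to each of the bets above in sequence (my own assumption about how to combine them; it ignores timing, overlap between positions, and what the rest of the portfolio was doing):

```python
# Back-of-the-envelope: allocate 10% of the portfolio to each bet in sequence
# and let the results compound. Returns are the rough figures listed above.
returns = [0.22, 16.0, -0.60, -1.00, 0.32, 0.20, 0.15, 0.40, -0.05, 0.30, 0.00, 10.0]
allocation = 0.10

portfolio = 1.0
for r in returns:
    portfolio *= 1 + allocation * r

print(f"Portfolio multiple: {portfolio:.1f}x")  # ≈ 5.1x with these inputs
```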

Comment by Jonas V (Jonas Vollmer) on FUTARCHY NOW BABY · 2024-02-29T20:22:58.556Z · LW · GW

Worth pointing out that this is up ~17x since it was posted. Sometimes, good ideas will be communicated poorly, and you'll pay a big price for not investigating yourself. (At least I did.)

Comment by Jonas V (Jonas Vollmer) on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2024-02-18T19:36:15.402Z · LW · GW

Just stumbled across this post, and copying a comment I once wrote:

  • Intuitively and anecdotally (and based on some likely-crappy papers), it seems harder to see animals as sentient beings, or to think correctly about the badness of factory farming, while eating meat; this form of motivated reasoning plausibly distorts most people's epistemics about a pretty important part of the world, and recognizing the badness of factory farming has minor implications for s-risks and other AI stuff.

With some further clarifications:

  • Nobody actively wants factory farming to happen, but it's the cheapest way to get something we want (i.e. meat), and we've built a system where it's really hard for altruists to stop it from happening. If a pattern like this extended into the long-term future, we might want to do something about it.
  • In the context of AI, suffering subroutines might be an example of that.
  • Regarding futures without strong AGI: Factory farming is arguably the most important example of a present-day ongoing atrocity. If you fully internalize just how bad this is, that there's something like a genocide (in terms of moral badness, not evilness) going on right here, right now, under our eyes, in wealthy Western democracies that are often understood to be the most morally advanced places on earth, and it's really hard for us to stop it, that might affect your general outlook on the long-term future. I still think the long-term future will be great in expectation, but it also makes me think that utopian visions that don't consider these downside risks seem pretty naïve.

 

I used to eat a lot of meat, and once I stopped doing that, I started seeing animals with different eyes (treating them as morally relevant, and internalizing that a lot more). The reason why I don't eat meat now is not that I think it would cause value drift, but that it would make me deeply sad and upset – eating meat would feel similar to owning slaves that I treat poorly, or watching a gladiator fight for my own amusement. It just feels deeply morally wrong and isn't enjoyable anymore. The fact that the consequences are only mildly negative in the grand scheme of things doesn't change that. So [I actually don't think that] my argument supports me remaining a vegan now, but I think it's a strong argument for me to go vegan in the first place at some point.

My guess is that a lot of people don't actually see animals as sentient beings whose emotions and feelings matter a great deal, but more like cute things to have fun with. And anecdotally, how someone perceives animals seems to be determined by whether they eat them, not the other way around. (Insert plausible explanation – cognitive dissonance, rationalizations, etc.) I think squashing dust mites, drinking milk, eating eggs etc. seems to have a much less strong effect in comparison to eating meat, presumably because they're less visceral, more indirect/accidental ways of hurting animals.

 

Yeah, as I tried to explain above (perhaps it was too implicit), I think it probably matters much more whether you went vegan at some point in your life than whether you're vegan right now.

I don't feel confident in this; I wanted to mainly offer it as a hypothesis that could be tested further. I also mentioned the existence of crappy papers that support my perspective (you can probably find them in 5 minutes on Google Scholar). If people thought this was important, they could investigate this more.

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2024-02-18T19:24:20.714Z · LW · GW

You can also just do the unlevered versions of this, like SMH / SOXX / SOXQ, plus tech companies with AI exposure (MSFT, GOOGL, META, AMZN—or a tech ETF like QQQ).

A leverage + put options combo means you'll end up paying lots of money to market makers.

Comment by Jonas V (Jonas Vollmer) on FUTARCHY NOW BABY · 2024-02-18T19:16:26.234Z · LW · GW

I think it's more that you don't argue for why you believe what you believe and instead just assert that it's cool, and the whole thing looks a bit sloppy (spelling mistakes, all-caps, etc.).

Comment by Jonas V (Jonas Vollmer) on AI Impacts 2023 Expert Survey on Progress in AI · 2024-01-07T12:35:17.225Z · LW · GW

Does anyone know if the 2022 survey responses were collected before or after ChatGPT came out?

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2024-01-07T12:19:02.158Z · LW · GW

I think having some personal retirement savings is still useful in a broad range of possible AGI outcome scenarios, so I personally still do some retirement saving.

Regarding 401k/IRA, anything that preserves your ability to make speculative investments based on an information advantage (as outlined in this post) seems especially good; anything that limits you to a narrow selection of index funds seems potentially suboptimal to me.

Comment by Jonas V (Jonas Vollmer) on Spirit Airlines Merger Play · 2024-01-04T14:20:31.149Z · LW · GW

What's the story for why we would have an advantage here? Surely quants who specialize in this area are on top of this, and aren't constrained by capital? Unlike previous trades where rationalists made lots of money (Covid short, ETH presale, etc.), this doesn't look like a really weird thing that the pros would be unable to do with sufficient volume.

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-12-22T22:15:58.078Z · LW · GW

You mean an AI ETF? My answer is no; I think making your own portfolio (based on advice in this post and elsewhere) will be a lot better.

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-12-08T22:15:28.302Z · LW · GW

Very helpful, thanks. And interesting about the Eris SOFR Swap Futures. 

Interest rate swaptions might also be worth looking into, though they may only be available to large institutional investors.

Why not just short-sell treasuries (e.g. TLT)?
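For intuition on why long-dated treasuries are the natural instrument for this kind of rates bet, here's a quick first-order duration sketch (the ~17-year duration figure for a TLT-like 20y+ treasury ETF is my own ballpark assumption, and the linear approximation ignores convexity):

```python
# First-order bond math: price change ≈ -modified duration × yield change.
# Long-duration treasury ETFs fall hard when long rates rise, which is exactly
# what a short position is betting on.
modified_duration = 17.0  # ballpark assumption for a 20y+ treasury ETF like TLT
for yield_change in [0.01, 0.02, 0.03]:  # +1, +2, +3 percentage points
    price_change = -modified_duration * yield_change
    print(f"Long yields +{yield_change:.0%}: ETF price ≈ {price_change:+.0%}")
```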

Comment by Jonas V (Jonas Vollmer) on The Lighthaven Campus is open for bookings · 2023-11-22T01:50:32.723Z · LW · GW

If you're running an event and Lighthaven isn't an option for some reason, you may be interested in Atlantis: https://www.lesswrong.com/posts/pvz53LTgFEPtnaWbP/atlantis-berkeley-event-venue-available-for-rent 

Comment by Jonas V (Jonas Vollmer) on Dialogue on the Claim: "OpenAI's Firing of Sam Altman (And Shortly-Subsequent Events) On Net Reduced Existential Risk From AGI" · 2023-11-21T23:05:11.672Z · LW · GW

Market on the primary claim discussed here: 

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-11-21T06:54:00.196Z · LW · GW

I personally would not put Meta on the list.

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-11-18T00:26:29.522Z · LW · GW

Yes, but if they're far out of the money, they are a more capital-efficient way to make a very concentrated bet on outlier growth scenarios.

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-11-14T06:37:21.870Z · LW · GW

No, but I also didn't reach out (mostly because I'm lazy/busy).

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-11-08T23:44:24.067Z · LW · GW

https://www.investopedia.com/articles/investing/082714/how-invest-samsung.asp 

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-11-08T00:24:12.373Z · LW · GW

There's some evidence from 2013 suggesting that long-dated, out-of-the-money call options have strongly negative EV; common explanations are that some buyers like gambling and drive up prices. See this article. I also heard that over the last decade, some hedge funds therefore adopted the strategy of writing OTM calls on stocks they hold to boost their returns, and also that some of these hedge funds disappeared a couple of years ago.

Has anyone looked into 1) whether this has replicated more recently, and 2) how much worse (if at all) it makes some of the suggested strategies?

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-11-08T00:17:07.613Z · LW · GW

I meant something like the Fed intervening to buy lots of bonds (including long-dated ones), without particularly thinking of YCC, though perhaps that's the main regime under which they might do it?

Are there strong reasons to believe that the Fed wouldn't buy lots of (long-dated) bonds if interest rates increased a lot?

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-11-08T00:10:37.917Z · LW · GW

How did you get SMSN exposure?

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-11-08T00:09:22.217Z · LW · GW

This looked really reasonable until I saw that there was no NVDA in there; why's that? (You might say its P/E is high, but note that its forward P/E is much lower.)

Comment by Jonas V (Jonas Vollmer) on Vote on Interesting Disagreements · 2023-11-08T00:06:26.479Z · LW · GW

Having another $1 billion to prevent AGI x-risk would be pretty useful.

Comment by Jonas V (Jonas Vollmer) on Vote on Interesting Disagreements · 2023-11-07T23:58:33.417Z · LW · GW

Just like the last 12 months were the time of the chatbots, the next 12 months will be the time of agent-like AI product releases.

Comment by Jonas V (Jonas Vollmer) on Vote on Interesting Disagreements · 2023-11-07T23:57:01.001Z · LW · GW

The current AI x-risk grantmaking ecosystem is bad and could be improved substantially.

Comment by Jonas V (Jonas Vollmer) on Vote on Interesting Disagreements · 2023-11-07T23:52:56.855Z · LW · GW

Having another $1 billion to prevent AGI x-risk would be useful because we could spend it on large-scale lobbying efforts in DC.

Comment by Jonas V (Jonas Vollmer) on Vote on Interesting Disagreements · 2023-11-07T23:52:41.758Z · LW · GW

Having another $1 billion to prevent AGI x-risk would be useful because we could spend it on large compute budgets for safety research teams.

Comment by Jonas V (Jonas Vollmer) on Vote on Interesting Disagreements · 2023-11-07T23:51:50.387Z · LW · GW

Investing in early-stage AGI companies helps with reducing x-risk (via mission hedging, having board seats, shareholder activism).

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-11-07T04:36:40.418Z · LW · GW

Yeah, that does also feel right to me. I have been thinking about setting up some fund that maybe buys up a bunch of the equity that's held by safety researchers, so that the safety researchers don't have to also blow up their financial portfolio when they press the stop button or do some whistleblowing or whatever, and that does seem pretty good incentive-wise.

I'm interested in helping with making this happen.

Comment by Jonas V (Jonas Vollmer) on How to (hopefully ethically) make money off of AGI · 2023-11-07T04:25:27.103Z · LW · GW

Very interesting conversation!

I'm surprised by the strong emphasis on shorting long-dated bonds. Surely there's a big risk of nominal interest rates coming apart from real interest rates, i.e. lots of money getting printed? I feel like it's going to be very hard to predict what the Fed will do in light of 50% real interest rates, and Fed interventions could plausibly hurt your profits a lot here.

(You might suggest shorting long-dated TIPS, but those markets have less volume and higher borrow fees.)

Comment by Jonas V (Jonas Vollmer) on UFO Betting: Put Up or Shut Up · 2023-08-23T21:36:55.545Z · LW · GW

Some additional people reached out to me—just reiterating that I'm happy to do more at 20:1 odds!

Comment by Jonas V (Jonas Vollmer) on UFO Betting: Put Up or Shut Up · 2023-06-24T03:59:27.907Z · LW · GW

Happy to do another $40k at 55:1 odds if you like (another $727), and another $20k at 20:1 odds after that.

Comment by Jonas V (Jonas Vollmer) on UFO Betting: Put Up or Shut Up · 2023-06-23T22:58:39.852Z · LW · GW

Confirm.

Comment by Jonas V (Jonas Vollmer) on UFO Betting: Put Up or Shut Up · 2023-06-23T22:36:54.280Z · LW · GW

Happy to bet $40k at 20:1 odds ($2k); originally offered at 110:1 odds ($364). (Edited Sep 2023; previous bets confirmed at previous odds.)

USDC ERC-20 (Ethereum): (address removed for privacy, please DM if you want to trade more)

USDC Polygon: (address removed for privacy, please DM if you want to trade more)

(Edit 23 June 3:45 PT: I'm only willing to bet assuming that AGI-created tech doesn't count for the purposes of this bet—it has to be something more supernatural than that.)

Comment by Jonas V (Jonas Vollmer) on My tentative best guess on how EAs and Rationalists sometimes turn crazy · 2023-06-22T17:36:18.791Z · LW · GW

I think SBF rarely, if ever, fired anyone, so "kicked out" seems wrong, but I heard that people who weren't behaving in the way SBF liked (e.g., recklessly risk-taking) got sidelined and often left on their own because their jobs became unpleasant or they had ethical qualms, which would be consistent with evaporative cooling.

Comment by Jonas V (Jonas Vollmer) on LessWrong readers are invited to apply to the Lurkshop · 2022-12-16T19:04:26.369Z · LW · GW

Yeah, I fully agree with this, and am aware of the cost. Apologies once more for not jumping in sooner when I wasn't sure whether applicants had been emailed by my colleague or not.

Comment by Jonas V (Jonas Vollmer) on LessWrong readers are invited to apply to the Lurkshop · 2022-12-15T20:21:17.840Z · LW · GW

This workshop will be postponed to January, likely to 6–9 Jan. GradientDissenter was planning to give an update to all applicants; I hope they will do so soon. I understand that some of you may have made your plans hoping to be able to participate in the workshop or were otherwise hoping for a fast response, and I apologize for completely missing the deadline, the lack of communication, and the change of plans.

(Why did this happen? Evaluating applications ended up being harder than anticipated, and I failed to jump in and fix things when the workshop planning wasn't progressing as planned, partly because I was on vacation.)

Comment by Jonas V (Jonas Vollmer) on LessWrong readers are invited to apply to the Lurkshop · 2022-11-24T17:05:10.294Z · LW · GW

We might if it goes well. If you want to be pinged if we run one, please submit a quick application through our form!

Comment by Jonas V (Jonas Vollmer) on LessWrong readers are invited to apply to the Lurkshop · 2022-11-22T21:00:34.183Z · LW · GW

Thanks for the feedback! I’ve edited the post to clarify where the funding is coming from and who is running this.

Regarding the content, one of my co-organizers may leave another comment later. The short version is that we'll be re-running some of the most popular content from previous workshops, but will primarily focus on informal conversations, as participants usually rate those as much more useful than the actual content of the workshop.

Comment by Jonas V (Jonas Vollmer) on LessWrong readers are invited to apply to the Lurkshop · 2022-11-22T19:59:14.633Z · LW · GW

It's funded by the Atlas Fellowship, which is funded by Open Philanthropy. It's something of a side-hustle of Atlas (outside of the scope and brand of the organization, and run by a subset of our team and some external collaborators). We have a fair amount of experience running different kinds of workshops, and are experimenting with what programs targeted at other demographics and niches might look like.

Thanks for the feedback! Added to the OP.

Comment by Jonas V (Jonas Vollmer) on LessWrong readers are invited to apply to the Lurkshop · 2022-11-22T13:34:39.871Z · LW · GW

"Career networking" feels like it encompasses some useful stuff, like hearing about new opportunities, meeting potential co-founders, etc.

It also sounds like it encompasses some bad stuff, like a race to get the most connections and impressing people.

We're going to try and have some of the useful kind of career networking, and to push hard against the strong pressures towards the bad kind of career networking.

There also aren't that many careers out there just waiting for an employee to come slot into a well-defined role that actually makes progress on preventing x-risk or similar, so we're much more excited about helping people carve out their own path than about connecting them to employers running hiring rounds.

Is there going to be a digital option?

Unfortunately we're not going to be able to accommodate that. It's fully in-person, since a large part of the point is in-person interactions.

Comment by Jonas V (Jonas Vollmer) on LessWrong readers are invited to apply to the Lurkshop · 2022-11-22T13:09:31.344Z · LW · GW

Yes. (Within reason.)