Implied volatility of long-dated, far-OTM calls is similar between AI-exposed indices (e.g. SMH) and individual stocks like TSM or MSFT (though not NVDA).
The more concentrated exposure you get from AI companies or AI-exposed indices compared to VTI is likely worth it, unless you expect that short-timelines slow AI takeoff will involve significant acceleration of the broader economy (not just tech giants), which I think is not highly plausible.
There are SPX options that expire in 2027, 2028, and 2029; those seem more attractive to me than 2-3-year-dated VTI options, especially given that they have strike prices much further out of the money.
Would you mind posting the specific contracts you bought?
I have some of those in my portfolio. It's worth slowly walking GTC orders up the bid-ask spread; you'll get better pricing that way.
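A minimal sketch of the walking idea; the fill logic here is a hypothetical stand-in for a real broker, and all numbers are invented:

```python
# Toy sketch of "walking" a GTC limit buy up the bid-ask spread: start near
# the bid and raise the limit in small steps until filled. The fill check
# below is a stand-in for waiting on a real broker; everything is hypothetical.
def walk_gtc_buy(bid, ask, fill_price, step=0.05):
    """Return the price at which the order fills, or None if it never does."""
    price = bid + step                      # start just above the bid
    while price < ask:
        if price >= fill_price:             # stand-in for an actual fill
            return round(price, 2)
        price += step                       # inch the limit toward the ask
    return None

# With a $1.00/$1.50 spread and a counterparty willing to sell at $1.20,
# you'd fill around $1.20 instead of paying the full $1.50 ask.
print(walk_gtc_buy(1.00, 1.50, fill_price=1.20))
```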
Doing a post-mortem on sapphire's other posts, their track record is pretty great:
- BTC/crypto liftoff prediction: +22%
- Meta DAO: +1600%
- SAVE: -60%
- BSC Launcher: -100%?
- OLY2021: +32%
- Perpetual futures: +20%
- Perpetual futures, DeFi edition: +15%
- Bet on Biden: +40%
- AI portfolio: approx. -5% compared to index over same time period
- AI portfolio, second post: approx. +30% compared to index over same time period
- OpenAI/MSFT: ~0%
- Buy SOL: +1000%
- There are many more that I didn't look into.
Each of these played out over a few weeks to months, so if you had just blindly put 10% of your portfolio into each of the above, you'd have gotten very impressive returns. (Overall, roughly ~5x relative to the broad market.)
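A rough sketch of that arithmetic, assuming you size each bet sequentially at 10% of the portfolio and compound (the return figures are the approximate ones listed above):

```python
# Approximate returns from the track record above, in order.
returns = [0.22, 16.0, -0.60, -1.00, 0.32, 0.20,
           0.15, 0.40, -0.05, 0.30, 0.00, 10.00]

portfolio = 1.0
for r in returns:
    # 10% of the portfolio goes into each bet; the other 90% sits in cash.
    portfolio *= 0.9 + 0.1 * (1 + r)

print(f"Final multiple: {portfolio:.1f}x")  # roughly 5x
```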
Worth pointing out that this is up ~17x since it was posted. Sometimes, good ideas will be communicated poorly, and you'll pay a big price for not investigating yourself. (At least I did.)
Just stumbled across this post, and copying a comment I once wrote:
- Intuitively and anecdotally (and based on some likely-crappy papers), it seems harder to see animals as sentient beings or think correctly about the badness of factory farming while eating meat; this form of motivated reasoning plausibly distorts most people's epistemics, and this is about a pretty important part of the world, and recognizing the badness of factory farming has minor implications for s-risks and other AI stuff
With some further clarifications:
- Nobody actively wants factory farming to happen, but it's the cheapest way to get something we want (i.e. meat), and we've built a system where it's really hard for altruists to stop it from happening. If a pattern like this extended into the long-term future, we might want to do something about it.
- In the context of AI, suffering subroutines might be an example of that.
- Regarding futures without strong AGI: Factory farming is arguably the most important example of a present-day ongoing atrocity. If you fully internalize just how bad this is, that there's something like a genocide (in terms of moral badness, not evilness) going on right here, right now, under our eyes, in wealthy Western democracies that are often understood to be the most morally advanced places on earth, and it's really hard for us to stop it, that might affect your general outlook on the long-term future. I still think the long-term future will be great in expectation, but it also makes me think that utopian visions that don't consider these downside risks seem pretty naïve.
I used to eat a lot of meat, and once I stopped doing that, I started seeing animals with different eyes (treating them as morally relevant, and internalizing that a lot more). The reason why I don't eat meat now is not that I think it would cause value drift, but that it would make me deeply sad and upset – eating meat would feel similar to owning slaves that I treat poorly, or watching a gladiator fight for my own amusement. It just feels deeply morally wrong and isn't enjoyable anymore. The fact that the consequences are only mildly negative in the grand scheme of things doesn't change that. So [I actually don't think that] my argument supports me remaining a vegan now, but I think it's a strong argument for me to go vegan in the first place at some point.
My guess is that a lot of people don't actually see animals as sentient beings whose emotions and feelings matter a great deal, but more like cute things to have fun with. And anecdotally, how someone perceives animals seems to be determined by whether they eat them, not the other way around. (Insert plausible explanation – cognitive dissonance, rationalizations, etc.) I think squashing dust mites, drinking milk, eating eggs etc. seems to have a much less strong effect in comparison to eating meat, presumably because they're less visceral, more indirect/accidental ways of hurting animals.
Yeah, as I tried to explain above (perhaps it was too implicit), I think it probably matters much more whether you went vegan at some point in your life than whether you're vegan right now.
I don't feel confident in this; I wanted to mainly offer it as a hypothesis that could be tested further. I also mentioned the existence of crappy papers that support my perspective (you can probably find them in 5 minutes on Google Scholar). If people thought this was important, they could investigate this more.
You can also just do the unlevered versions of this, like SMH / SOXX / SOXQ, plus tech companies with AI exposure (MSFT, GOOGL, META, AMZN—or a tech ETF like QQQ).
A leverage + put options combo means you'll end up paying lots of money to market makers.
I think it's more that you don't argue for why you believe what you believe and instead just assert that it's cool, and the whole thing looks a bit sloppy (spelling mistakes, all-caps, etc.)
Does anyone know if the 2022 survey responses were collected before or after ChatGPT came out?
I think having some personal retirement savings is still useful in a broad range of possible AGI outcome scenarios, so I personally still do some retirement saving.
Regarding 401k/IRA, anything that preserves your ability to make speculative investments based on an information advantage (as outlined in this post) seems especially good; anything that limits you to a narrow selection of index funds seems potentially suboptimal to me.
What's the story for why we would have an advantage here? Surely quants who specialize in this area are on top of this, and aren't constrained by capital? Unlike previous trades where rationalists made lots of money (Covid short, ETH presale, etc.), this doesn't look like a really weird thing that the pros would be unable to do with sufficient volume.
You mean an AI ETF? My answer is no; I think making your own portfolio (based on advice in this post and elsewhere) will be a lot better.
Very helpful, thanks. And interesting about the Eris SOFR Swap Futures.
Interest rate swaptions might also be worth looking into, though they may only be available to large institutional investors.
Why not just short-sell treasuries (e.g. TLT)?
If you're running an event and Lighthaven isn't an option for some reason, you may be interested in Atlantis: https://www.lesswrong.com/posts/pvz53LTgFEPtnaWbP/atlantis-berkeley-event-venue-available-for-rent
Market on the primary claim discussed here:
I personally would not put Meta on the list
Yes, but if they're far out of the money, they are a more capital-efficient way to make a very concentrated bet on outlier growth scenarios.
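As a toy illustration with invented numbers (a hypothetical $100 stock and a $200-strike call priced at $5 per share of exposure), far-OTM calls buy much more exposure to the outlier scenario per dollar of capital:

```python
# Toy comparison of capital efficiency; all prices are made up.
capital = 10_000
spot, strike, premium = 100.0, 200.0, 5.0

def stock_payoff(final_price):
    # Buy shares outright with the full capital.
    return capital / spot * final_price

def call_payoff(final_price):
    # Spend the same capital on far-OTM calls instead.
    shares_of_exposure = capital / premium
    return shares_of_exposure * max(final_price - strike, 0.0)

# In an outlier scenario where the stock 5x's to $500:
print(stock_payoff(500.0))  # 50000.0  (5x the capital)
print(call_payoff(500.0))   # 600000.0 (60x the capital)
# If the stock merely doubles to $200, the calls expire worthless.
print(call_payoff(200.0))   # 0.0
```

The flip side, of course, is that the calls pay nothing in all but the outlier scenarios.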
No, but I also didn't reach out (mostly because I'm lazy/busy)
https://www.investopedia.com/articles/investing/082714/how-invest-samsung.asp
There's some evidence from 2013 suggesting that long-dated, out-of-the-money call options have strongly negative EV; a common explanation is that some buyers like gambling and drive up prices. See this article. I've also heard that over the last decade some hedge funds therefore adopted the strategy of writing OTM calls on stocks they hold to boost their returns, and that some of these funds disappeared a couple of years ago.
Has anyone looked into 1) whether this has replicated more recently, and 2) how much worse it makes some of the suggested strategies (if at all)?
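As a toy illustration of the proposed mechanism (all numbers invented): if gambling-motivated buyers bid the premium above the probability-weighted payoff, the buyer's EV is negative even though the payoff in the good scenario is large:

```python
# Toy EV calculation for a lottery-ticket-like far-OTM call; numbers made up.
p_moon = 0.03            # chance the underlying "moons" before expiry
payoff_if_moon = 20.0    # payoff per $1 of premium in that scenario
premium = 1.0            # price paid

ev = p_moon * payoff_if_moon - premium
print(round(ev, 2))  # -0.4, i.e. you lose ~40% of premium in expectation
```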
I meant something like the Fed intervening to buy lots of bonds (including long-dated ones), without particularly thinking of YCC, though perhaps that's the main regime under which they might do it?
Are there strong reasons to believe that the Fed wouldn't buy lots of (long-dated) bonds if interest rates increased a lot?
How did you get SMSN exposure?
This looked really reasonable until I saw that there was no NVDA in there; why's that? (You might say high PE, but note that Forward PE is much lower.)
Having another $1 billion to prevent AGI x-risk would be pretty useful.
Just like the last 12 months was the time of the chatbots, the next 12 months will be the time of agent-like AI product releases.
The current AI x-risk grantmaking ecosystem is bad and could be improved substantially.
Having another $1 billion to prevent AGI x-risk would be useful because we could spend it on large-scale lobbying efforts in DC.
Having another $1 billion to prevent AGI x-risk would be useful because we could spend it on large compute budgets for safety research teams.
Investing in early-stage AGI companies helps with reducing x-risk (via mission hedging, having board seats, shareholder activism)
Yeah, that also feels right to me. I've been thinking about setting up some fund that buys up a bunch of the equity held by safety researchers, so that they don't also have to blow up their financial portfolio when they press the stop button or do some whistleblowing or whatever, and that does seem pretty good incentive-wise.
I'm interested in helping with making this happen.
Very interesting conversation!
I'm surprised by the strong emphasis of shorting long-dated bonds. Surely there's a big risk of nominal interest rates coming apart from real interest rates, i.e. lots of money getting printed? I feel like it's going to be very hard to predict what the Fed will do in light of 50% real interest rates, and Fed interventions could plausibly hurt your profits a lot here.
(You might suggest shorting long-dated TIPS, but those markets have less volume and higher borrow fees.)
Some additional people reached out to me; just reiterating that I'm happy to do more at 20:1 odds!
Happy to do another $40k at 55:1 odds if you like (another $727), and another $20k at 20:1 odds after that.
Confirm.
Happy to bet $40k at ~~110:1~~ 20:1 odds (~~$364~~ $2k). (Edited Sep 2023; previous bets were confirmed at the previous odds.)
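For reference, the stake implied by a given payout and odds is just the payout divided by the odds; a quick sanity check of the figures above:

```python
# Stake required for a $40k payout at the quoted odds.
def stake(payout, odds):
    return payout / odds

print(round(stake(40_000, 110)))  # 364
print(round(stake(40_000, 20)))   # 2000
```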
USDC ERC-20 (Ethereum): (address removed for privacy, please DM if you want to trade more)
USDC Polygon: (address removed for privacy, please DM if you want to trade more)
(Edit 23 June 3:45 PT: I'm only willing to bet assuming that AGI-created tech doesn't count for the purposes of this bet; it has to be something more supernatural than that.)
I think SBF rarely ever fired anyone, so "kicked out" seems wrong, but I heard that people who weren't behaving in the way SBF liked (e.g., weren't taking reckless risks) got sidelined and often left on their own because their jobs became unpleasant or they had ethical qualms, which would be consistent with evaporative cooling.
Yeah, I fully agree with this, and am aware of the cost. Apologies once more for not jumping in sooner when I wasn't sure whether applicants had been emailed by my colleague or not.
This workshop will be postponed to January, likely to 6–9 Jan. GradientDissenter was planning to give an update to all applicants; I hope they will do so soon. I understand that some of you may have made your plans hoping to be able to participate in the workshop or were otherwise hoping for a fast response, and I apologize for completely missing the deadline, the lack of communication, and the change of plans.
(Why did this happen? Evaluating applications ended up being harder than anticipated, and I failed to jump in and fix things when the workshop planning wasn't progressing as planned, partly because I was on vacation.)
We might if it goes well. If you want to be pinged if we run one, please submit a quick application through our form!
Thanks for the feedback! I’ve edited the post to clarify where the funding is coming from and who is running this.
Regarding the content, one of my co-organizers may leave another comment later. The short version is that we’ll be re-running some of the most popular content from previous workshops, but primarily focus on informal conversations, as participants usually rate that as much more useful than the actual content of the workshop.
It's funded by the Atlas Fellowship, which is funded by Open Philanthropy. It's something of a side-hustle of Atlas (outside of the scope and brand of the organization, and run by a subset of our team and some external collaborators). We have a fair amount of experience running different kinds of workshops, and are experimenting with what programs targeted at other demographics and niches might look like.
Thanks for the feedback! Added to the OP.
"Career networking" feels like it encompasses some useful stuff, like hearing about new opportunities, meeting potential co-founders, etc.
It also sounds like it encompasses some bad stuff, like a race to get the most connections and impressing people.
We're going to try and have some of the useful kind of career networking, and to push hard against the strong pressures towards the bad kind of career networking.
There also aren't that many careers out there just waiting for an employee to slot into a well-defined role that actually makes progress on preventing x-risk or similar, so we're much more excited about helping people carve out their own paths than about connecting them to employers running hiring rounds.
Is there going to be a digital option?
Unfortunately we're not going to be able to accommodate that. It's fully in-person, since a large part of the point is in-person interactions.
Yes. (Within reason.)
Would be very excited to get applications to the EA Infrastructure Fund (EAIF)! Apply here, it's fast and easy.
(I run EA Funds, which includes EAIF.)
I largely agree with Habryka's perspective. I personally (not speaking on behalf of the EA Infrastructure Fund) would be particularly interested in such a grant if you had a track record of successful writing, as this would make it more likely you'd actually reach a large audience. E.g., Eliezer did not just write HPMoR but was a successful blogger on Overcoming Bias and wrote the sequences.
Yeah, that seems very plausible for frugal people who don't pay much rent, don't eat out that often, etc. and updates me against Berlin
I know of some EAs who lived in Berlin and found it very difficult to make friends due to the language barriers, and some EAs who had an experience more similar to yours.
Yeah. Adjusting for cost of living and purchasing power, it would be (much?) less, but still a good reason against moving.
Upvoted, I would like to see Berlin considered more strongly. Having lived there for two years, I think it's hard to overestimate how high the quality of life in Berlin is, not just in the easily verifiable ways listed above, but also in more subtle ways. E.g., in addition to being much cheaper, restaurants/cuisine just generally seem higher quality compared to many other places; German housing is much better than UK/US housing in ways that seem hard to appreciate for people who haven't lived in both locations; etc.
Edit: To clarify, I don't want to suggest Berlin as the one single best rationalist hub, but as one of the global top 5.
To add some downsides:
- The language barrier is still a bit of an issue if you care about making friends outside the rationalist community
- ~~The airports are among the worst in the world~~ Not true anymore (finally)
Super exciting that this is being shared. Thanks!