Comments
OpenAI didn't say what the light blue bar is
Presumably light blue is o3 high, and dark blue is o3 low?
If you're launching an AI safety startup, reach out to me; Polaris Ventures (which I'm on the board of) may be interested, and I can potentially introduce you to other VCs and investors.
Makes sense! And yeah, IDK, I think the concept of 'measure' is pretty confusing itself and not super convincing to me, but if you think through the alternatives, they seem even less satisfying.
That leads you to always risk your life if there's a slight chance it'll make you feel better, right? E.g., 2% chance of death and 98% chance of being not just totally fine but actually happy, and so on, all the way to 99.99% chance of death and 0.01% chance of being super duper happy.
I did some research and the top ones I found were SMH and QQQ.
I'm not sure what (if any) action-relevant point you might be making. If this is supposed to make you less concerned about death, I'd point out that the thing to care about is your measure (as in quantum immortality), which is still greatly reduced.
Someone else added these quotes from a 1968 article about how the Vietnam War could go so wrong:
Despite the banishment of the experts, internal doubters and dissenters did indeed appear and persist. Yet as I watched the process, such men were effectively neutralized by a subtle dynamic: the domestication of dissenters. Such "domestication" arose out of a twofold clubbish need: on the one hand, the dissenter's desire to stay aboard; and on the other hand, the nondissenter's conscience. Simply stated, dissent, when recognized, was made to feel at home. On the lowest possible scale of importance, I must confess my own considerable sense of dignity and acceptance (both vital) when my senior White House employer would refer to me as his "favorite dove." Far more significant was the case of the former Undersecretary of State, George Ball. Once Mr. Ball began to express doubts, he was warmly institutionalized: he was encouraged to become the inhouse devil's advocate on Vietnam. The upshot was inevitable: the process of escalation allowed for periodic requests to Mr. Ball to speak his piece; Ball felt good, I assume (he had fought for righteousness); the others felt good (they had given a full hearing to the dovish option); and there was minimal unpleasantness. The club remained intact; and it is of course possible that matters would have gotten worse faster if Mr. Ball had kept silent, or left before his final departure in the fall of 1966. There was also, of course, the case of the last institutionalized doubter, Bill Moyers. The President is said to have greeted his arrival at meetings with an affectionate, "Well, here comes Mr. Stop-the-Bombing...." Here again the dynamics of domesticated dissent sustained the relationship for a while.
A related point—and crucial, I suppose, to government at all times—was the "effectiveness" trap, the trap that keeps men from speaking out, as clearly or often as they might, within the government. And it is the trap that keeps men from resigning in protest and airing their dissent outside the government. The most important asset that a man brings to bureaucratic life is his "effectiveness," a mysterious combination of training, style, and connections. The most ominous complaint that can be whispered of a bureaucrat is: "I'm afraid Charlie's beginning to lose his effectiveness." To preserve your effectiveness, you must decide where and when to fight the mainstream of policy; the opportunities range from pillow talk with your wife, to private drinks with your friends, to meetings with the Secretary of State or the President. The inclination to remain silent or to acquiesce in the presence of the great men—to live to fight another day, to give on this issue so that you can be "effective" on later issues—is overwhelming. Nor is it the tendency of youth alone; some of our most senior officials, men of wealth and fame, whose place in history is secure, have remained silent lest their connection with power be terminated. As for the disinclination to resign in protest: while not necessarily a Washington or even American specialty, it seems more true of a government in which ministers have no parliamentary backbench to which to retreat. In the absence of such a refuge, it is easy to rationalize the decision to stay aboard. By doing so, one may be able to prevent a few bad things from happening and perhaps even make a few good things happen. To exit is to lose even those marginal chances for "effectiveness."
Implied volatility of long-dated, far-OTM calls is similar between AI-exposed indices (e.g. SMH) and individual stocks like TSM or MSFT (though not NVDA).
The more concentrated exposure you get from AI companies or AI-exposed indices compared to VTI is likely worth it, unless you expect that a short-timelines, slow AI takeoff will involve significant acceleration of the broader economy (not just the tech giants), which I don't think is very likely.
There are SPX options that expire in 2027, 2028, and 2029; those seem more attractive to me than 2-3-year-dated VTI options, especially given that they have strike prices that are much further out of the money.
Would you mind posting the specific contracts you bought?
I have some of those in my portfolio. It's worth slowly walking GTC orders up the bid-ask spread; you'll get better pricing that way.
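To spell out what I mean by "walking" the order, here's a rough sketch; `place_order`, `order_filled`, and `cancel_order` are stand-ins for whatever your broker's API actually provides, not real calls:

```python
import time

def walk_gtc_buy(bid, ask, place_order, order_filled, cancel_order,
                 step=0.05, wait_seconds=600):
    """Rest a GTC limit buy near the bid and nudge it toward the ask until filled.

    place_order / order_filled / cancel_order are hypothetical callbacks for
    your broker's API; this only sketches the idea, it isn't a real integration.
    """
    price = bid
    while price < ask:
        order_id = place_order(price)    # rest a GTC limit at the current price
        time.sleep(wait_seconds)         # give the market time to come to you
        if order_filled(order_id):
            return price                 # filled inside the spread: better than paying the ask
        cancel_order(order_id)
        price = round(price + step, 2)   # walk the order up a notch and try again
    place_order(ask)                     # give up on price improvement and pay the ask
    return ask
```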
Doing a post-mortem on sapphire's other posts, their track record is pretty great:
- BTC/crypto liftoff prediction: +22%
- Meta DAO: +1600%
- SAVE: -60%
- BSC Launcher: -100%?
- OLY2021: +32%
- Perpetual futures: +20%
- Perpetual futures, DeFi edition: +15%
- Bet on Biden: +40%
- AI portfolio: approx. -5% compared to index over same time period
- AI portfolio, second post: approx. +30% compared to index over same time period
- OpenAI/MSFT: ~0%
- Buy SOL: +1000%
- There are many more that I didn't look into.
All of these were over a couple of weeks/months, so if you had just blindly put 10% of your portfolio into each of the above, you'd have gotten very impressive returns. (Overall, roughly ~5x relative to the broad market; see the rough calculation below.)
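For transparency, here's the rough back-of-the-envelope behind that ~5x figure (a sketch under simplifying assumptions: it treats the bets as sequential, compounds them, and ignores what the rest of the portfolio and the market did in the meantime):

```python
# Returns of the bets listed above, in order, as fractions (e.g. +1600% = 16.0).
returns = [0.22, 16.0, -0.60, -1.00, 0.32, 0.20, 0.15, 0.40, -0.05, 0.30, 0.00, 10.0]

portfolio = 1.0
for r in returns:
    portfolio *= 1 + 0.10 * r   # 10% of the portfolio gains/loses r on each bet

print(round(portfolio, 2))  # ≈ 5.12, i.e. roughly ~5x
```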
Worth pointing out that this is up ~17x since it was posted. Sometimes, good ideas will be communicated poorly, and you'll pay a big price for not investigating yourself. (At least I did.)
Just stumbled across this post, so I'm copying a comment I once wrote:
- Intuitively and anecdotally (and based on some likely-crappy papers), it seems harder to see animals as sentient beings, or to think correctly about the badness of factory farming, while eating meat; this form of motivated reasoning plausibly distorts most people's epistemics about a pretty important part of the world, and recognizing the badness of factory farming has minor implications for s-risks and other AI stuff.
With some further clarifications:
- Nobody actively wants factory farming to happen, but it's the cheapest way to get something we want (i.e. meat), and we've built a system where it's really hard for altruists to stop it from happening. If a pattern like this extended into the long-term future, we might want to do something about it.
- In the context of AI, suffering subroutines might be an example of that.
- Regarding futures without strong AGI: Factory farming is arguably the most important example of a present-day ongoing atrocity. If you fully internalize just how bad this is, that there's something like a genocide (in terms of moral badness, not evilness) going on right here, right now, under our eyes, in wealthy Western democracies that are often understood to be the most morally advanced places on earth, and that it's really hard for us to stop it, that might affect your general outlook on the long-term future. I still think the long-term future will be great in expectation, but it also makes me think that utopian visions that don't consider these downside risks seem pretty naïve.
I used to eat a lot of meat, and once I stopped doing that, I started seeing animals with different eyes (treating them as morally relevant, and internalizing that a lot more). The reason why I don't eat meat now is not that I think it would cause value drift, but that it would make me deeply sad and upset – eating meat would feel similar to owning slaves that I treat poorly, or watching a gladiator fight for my own amusement. It just feels deeply morally wrong and isn't enjoyable anymore. The fact that the consequences are only mildly negative in the grand scheme of things doesn't change that. So [I actually don't think that] my argument supports me remaining a vegan now, but I think it's a strong argument for me to go vegan in the first place at some point.
My guess is that a lot of people don't actually see animals as sentient beings whose emotions and feelings matter a great deal, but more like cute things to have fun with. And anecdotally, how someone perceives animals seems to be determined by whether they eat them, not the other way around. (Insert plausible explanation – cognitive dissonance, rationalizations, etc.) I think squashing dust mites, drinking milk, eating eggs, etc. seems to have a much weaker effect in comparison to eating meat, presumably because they're less visceral, more indirect/accidental ways of hurting animals.
Yeah, as I tried to explain above (perhaps it was too implicit), I think it probably matters much more whether you went vegan at some point in your life than whether you're vegan right now.
I don't feel confident in this; I wanted to mainly offer it as a hypothesis that could be tested further. I also mentioned the existence of crappy papers that support my perspective (you can probably find them in 5 minutes on Google Scholar). If people thought this was important, they could investigate this more.
You can also just do the unlevered versions of this, like SMH / SOXX / SOXQ, plus tech companies with AI exposure (MSFT, GOOGL, META, AMZN—or a tech ETF like QQQ).
A leverage + put options combo means you'll end up paying lots of money to market makers.
I think it's more that you don't argue for why you believe what you believe and instead just assert that it's cool, and the whole thing looks a bit sloppy (spelling mistakes, all-caps, etc.)
Does anyone know if the 2022 survey responses were collected before or after ChatGPT came out?
I think having some personal retirement savings is still useful in a broad range of possible AGI outcome scenarios, so I personally still do some retirement saving.
Regarding 401k/IRA, anything that preserves your ability to make speculative investments based on an information advantage (as outlined in this post) seems especially good; anything that limits you to a narrow selection of index funds seems potentially suboptimal to me.
What's the story for why we would have an advantage here? Surely quants who specialize in this area are on top of this, and aren't constrained by capital? Unlike previous trades where rationalists made lots of money (Covid short, ETH presale, etc.), this doesn't look like a really weird thing that the pros would be unable to do with sufficient volume.
You mean an AI ETF? My answer is no; I think making your own portfolio (based on advice in this post and elsewhere) will be a lot better.
Very helpful, thanks. And interesting about the Eris SOFR Swap Futures.
Interest rate swaptions might also be worth looking into, though they may only be available to large institutional investors.
Why not just short-sell treasuries (e.g. TLT)?
If you're running an event and Lighthaven isn't an option for some reason, you may be interested in Atlantis: https://www.lesswrong.com/posts/pvz53LTgFEPtnaWbP/atlantis-berkeley-event-venue-available-for-rent
Market on the primary claim discussed here:
I personally would not put Meta on the list
Yes, but if they're far out of the money, they are a more capital-efficient way to make a very concentrated bet on outlier growth scenarios.
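To illustrate the capital-efficiency point with purely made-up numbers (the premiums below are hypothetical, not real quotes):

```python
# Hypothetical long-dated calls on the same stock (spot = 100), same expiry.
atm_premium = 20.0   # made-up premium for a call struck at 100 (at the money)
otm_premium = 4.0    # made-up premium for a call struck at 250 (far out of the money)

# Outlier growth scenario: the stock quadruples to 400 by expiry.
final = 400.0
atm_multiple = max(final - 100.0, 0) / atm_premium   # (400 - 100) / 20 = 15x on premium
otm_multiple = max(final - 250.0, 0) / otm_premium   # (400 - 250) / 4  = 37.5x on premium

print(atm_multiple, otm_multiple)
# The far-OTM call returns more per dollar of premium in the outlier scenario,
# at the cost of expiring worthless across a wider range of ordinary outcomes.
```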
No, but I also didn't reach out (mostly because I'm lazy/busy)
https://www.investopedia.com/articles/investing/082714/how-invest-samsung.asp
There's some evidence from 2013 suggesting that long-dated, out-of-the-money call options have strongly negative EV; common explanations are that some buyers like gambling and drive up prices. See this article. I also heard that over the last decade, some hedge funds therefore adopted the strategy of writing OTM calls on stocks they hold to boost their returns, and also heard that some of these hedge funds disappeared a couple years ago.
Has anyone looked into 1) whether this has replicated more recently, and 2) how much worse (if at all) it makes some of the suggested strategies?
I meant something like the Fed intervening to buy lots of bonds (including long-dated ones), without particularly thinking of YCC, though perhaps that's the main regime under which they might do it?
Are there strong reasons to believe that the Fed wouldn't buy lots of (long-dated) bonds if interest rates increased a lot?
How did you get SMSN exposure?
This looked really reasonable until I saw that there was no NVDA in there; why's that? (You might say high PE, but note that Forward PE is much lower.)
Having another $1 billion to prevent AGI x-risk would be pretty useful.
Just like the last 12 months were the time of chatbots, the next 12 months will be the time of agent-like AI product releases.
The current AI x-risk grantmaking ecosystem is bad and could be improved substantially.
Having another $1 billion to prevent AGI x-risk would be useful because we could spend it on large-scale lobbying efforts in DC.
Having another $1 billion to prevent AGI x-risk would be useful because we could spend it on large compute budgets for safety research teams.
Investing in early-stage AGI companies helps with reducing x-risk (via mission hedging, having board seats, shareholder activism)
Yeah, that does also feel right to me. I have been thinking about setting up some fund that maybe buys up a bunch of the equity that's held by safety researchers, so that the safety researchers don't have to also blow up their financial portfolio when they press the stop button or do some whistleblowing or whatever, and that does seem pretty good incentive-wise.
I'm interested in helping with making this happen.
Very interesting conversation!
I'm surprised by the strong emphasis on shorting long-dated bonds. Surely there's a big risk of nominal interest rates coming apart from real interest rates, i.e. lots of money getting printed? I feel like it's going to be very hard to predict what the Fed will do in light of 50% real interest rates, and Fed interventions could plausibly hurt your profits a lot here.
(You might suggest shorting long-dated TIPS, but those markets have less volume and higher borrow fees.)
Some additional people reached out to me—just reiterating that I'm happy to do more at 20:1 odds!
Happy to do another $40k at 55:1 odds if you like (another $727), and another $20k at 20:1 odds after that.
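(For anyone following along, the dollar figures in parentheses are just the potential payout divided by the odds; a quick sanity check:)

```python
# Stake = payout / odds, for the bet sizes mentioned in this thread.
for payout, odds in [(40_000, 110), (40_000, 55), (40_000, 20), (20_000, 20)]:
    print(f"${payout:,} at {odds}:1 -> stake ≈ ${payout / odds:,.0f}")
# $40,000 at 110:1 -> stake ≈ $364
# $40,000 at 55:1 -> stake ≈ $727
# $40,000 at 20:1 -> stake ≈ $2,000
# $20,000 at 20:1 -> stake ≈ $1,000
```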
Confirm.
Happy to bet $40k at 20:1 odds ($2k); originally offered at 110:1 odds ($364). (Edited Sep 2023; previous bets confirmed at previous odds.)
USDC ERC-20 (Ethereum): (address removed for privacy, please DM if you want to trade more)
USDC Polygon: (address removed for privacy, please DM if you want to trade more)
(Edit 23 June 3:45 PT: I'm only willing to bet assuming that AGI-created tech doesn't count for the purposes of this bet—it has to be something more supernatural than that.)
I think SBF rarely ever fired anyone, so "kicked out" seems wrong, but I heard that people who weren't behaving in the way SBF liked (e.g., weren't taking reckless risks) got sidelined and often left on their own because their jobs became unpleasant or they had ethical qualms, which would be consistent with evaporative cooling.
Yeah, I fully agree with this, and am aware of the cost. Apologies once more for not jumping in sooner when I wasn't sure whether applicants had been emailed by my colleague or not.
This workshop will be postponed to January, likely to 6–9 Jan. GradientDissenter was planning to give an update to all applicants; I hope they will do so soon. I understand that some of you may have made your plans hoping to be able to participate in the workshop or were otherwise hoping for a fast response, and I apologize for completely missing the deadline, the lack of communication, and the change of plans.
(Why did this happen? Evaluating applications ended up being harder than anticipated, and I failed to jump in and fix things when the workshop planning wasn't progressing as planned, partly because I was on vacation.)
We might if it goes well. If you want to be pinged if we run one, please submit a quick application through our form!
Thanks for the feedback! I’ve edited the post to clarify where the funding is coming from and who is running this.
Regarding the content, one of my co-organizers may leave another comment later. The short version is that we'll be re-running some of the most popular content from previous workshops, but will primarily focus on informal conversations, as participants usually rate those as much more useful than the actual content of the workshop.
It's funded by the Atlas Fellowship, which is funded by Open Philanthropy. It's something of a side-hustle of Atlas (outside of the scope and brand of the organization, and run by a subset of our team and some external collaborators). We have a fair amount of experience running different kinds of workshops, and are experimenting with what programs targeted at other demographics and niches might look like.
Thanks for the feedback! Added to the OP.
"Career networking" feels like it encompasses some useful stuff, like hearing about new opportunities, meeting potential co-founders, etc.
It also sounds like it encompasses some bad stuff, like a race to get the most connections and impressing people.
We're going to try to have some of the useful kind of career networking, and to push hard against the strong pressures towards the bad kind of career networking.
There also aren't that many careers out there just waiting for an employee to come slot into a well-defined role that actually makes progress on preventing x-risk or similar, so we're much more excited about helping people carve out their own path than about connecting them to employers running hiring rounds.
Is there going to be a digital option?
Unfortunately we're not going to be able to accommodate that. It's fully in-person, since a large part of the point is in-person interactions.
Yes. (Within reason.)