How should TurnTrout handle his DeepMind equity situation?
post by habryka (habryka4), TurnTrout · 2023-10-16T18:25:38.895Z · LW · GW · 35 comments
Comments sorted by top scores.
comment by Dave Orr (dave-orr) · 2023-10-16T19:05:46.426Z · LW(p) · GW(p)
How much do you think that your decisions affect Google's stock price? Yes maybe more AI means a higher price, but on the margin how much will you be pushing that relative to a replacement AI person? And mostly the stock price fluctuates on stuff like how well the ads business is doing, macro factors, and I guess occasionally whether we gave a bad demo.
It feels to me like the incentive is just so diffuse that I wouldn't worry about it much.
Your idea of just donating extra gains also seems fine.
↑ comment by habryka (habryka4) · 2023-10-16T20:27:00.289Z · LW(p) · GW(p)
As I said in the dialogue, I think as a safety engineer, especially as someone who might end up close to the literal or metaphorical "stop button", the effect here seems to me to be potentially quite large, especially in aggregate.
↑ comment by Dave Orr (dave-orr) · 2023-10-18T04:35:47.373Z · LW(p) · GW(p)
FWIW as an executive working on safety at Google, I basically never consider my normal working activities in light of what they would do to Google's stock price.
The exception is around public communication. There I'm very careful because it's asymmetrical -- I could potentially cause a PR disaster that would affect the stock, but I don't see how I could give a talk so good that it helps it.
Maybe a plug-pulling situation would be different, but I also think it's basically impossible for that to be a unilateral situation, and if we're in such a moment, I hardly think any damage would be contained to Google's stock price, versus, say, the market as a whole.
↑ comment by habryka (habryka4) · 2023-10-18T06:31:12.872Z · LW(p) · GW(p)
Hmm, I do think that's something that seems pretty likely to change?
I expect safety researchers to be consulted quite a bit on regulations that will affect Google pretty heavily; e.g., any given high-level safety researcher currently has a decent chance of testifying before Congress. I would want them to feel comfortable taking actions that definitely would have a large effect on the Google stock price (like saying that Google's AGI program should be shut down completely, or nationalized, or that Google should be held liable for some damages caused by its AI systems).
↑ comment by kave · 2024-05-20T23:47:04.928Z · LW(p) · GW(p)
OpenAI employees currently seem like they can't/won't make critical public statements about OpenAI because of equity considerations. This seems like a situation where it's important not to have your public communication affected by thinking about stock prices.
Does this change your thinking any?
↑ comment by Dave Orr (dave-orr) · 2024-05-21T00:28:52.500Z · LW(p) · GW(p)
I would be very unhappy if a non-disparagement agreement were sprung on me when I left the company. And I would be very reluctant to sign one when entering any company.
Luckily we don't have those at Google Deepmind.
↑ comment by kave · 2024-05-21T00:37:04.952Z · LW(p) · GW(p)
Fair enough! But perhaps saying sufficiently disparaging things could affect the value of the equity, though probably by less than refusing to sign a non-disparagement agreement and not getting your vested PPUs.
Does that make you reconsider whether having the equity might give you action-altering (and, particularly, speech-altering) incentives?
comment by Dagon · 2023-10-16T20:13:43.391Z · LW(p) · GW(p)
I think you're massively over-complicating things. You're over-estimating by a whole lot the impact that you COULD have on Google's stock price, and over-estimating even more the impact this would have on your motivation and activities, especially compared to the motivation/activities that come from having the job in the first place and wanting to be aligned with your coworkers and the stated mission of the team.
I generally recommend (and follow myself, at retrospective great cost) that people diversify stock away from their employer as soon as feasible, which translates to "sell grants as soon as they vest". That still leaves exposure to value changes between grant and vest; you'd have to pay to hedge that, and take on the risk that you won't stay through full vesting. I'd recommend NOT paying for the hedge or taking on that risk.
If you're worried enough about it, you could design a donation-intent agreement to give a charity some "excess" vested value, defined as the difference between (grant value + some historically-justified growth) and the actual vest value. This generally isn't fully enforceable, but the charity CAN often borrow against it, and it's pretty strong social motivation to follow through.
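To make that "excess" definition concrete, here's a minimal sketch of the arithmetic (all numbers and the growth rate are hypothetical):

```python
# Hypothetical numbers: donate whatever the vest is worth beyond the
# grant value compounded at some historically-justified growth rate.
grant_value = 100_000.0   # dollar value of the grant when made
annual_growth = 0.07      # assumed "historically-justified" rate
years_to_vest = 2.0
vest_value = 160_000.0    # hypothetical actual value at vest

baseline = grant_value * (1 + annual_growth) ** years_to_vest
excess = max(0.0, vest_value - baseline)
print(f"baseline ${baseline:,.0f}; donate the excess ${excess:,.0f}")
```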
But really, if you don't want to benefit your employer and try to increase its value, you should probably consider just not working there.
↑ comment by habryka (habryka4) · 2023-10-16T20:25:13.826Z · LW(p) · GW(p)
But really, if you don't want to benefit your employer and try to increase its value, you should probably consider just not working there.
I mean, sure, I think most (or at least a quite substantial fraction of) people working in safety roles would prefer for their employer to not exist, or to make substantially less money, but I still think there are valid arguments for them wanting to work at the big capability labs.
I think the motivational and distortionary effects of being in the social environment of an organization are also huge, but they are much harder to hedge, and I think the impact of the financial entanglement is still quite substantial (though definitely less). I think if someone could spend 200 hours getting rid of the distortionary social effects they should definitely do it, and correspondingly I think if someone can spend 20 hours getting rid of the distortionary financial effects they should also do it, since it seems like an easy win.
comment by ryan_b · 2024-12-05T23:09:12.473Z · LW(p) · GW(p)
I think this post is quite important because it is about Skin in the Game. Normally we love it, but here is the doubly-interesting case of wanting to reduce the financial version in order to allow the space for better thinking.
The content of the question is good by itself as a moment-in-time snapshot of thinking about the problem. The answers are good both for what they contain and for what they do not contain, by which I mean the things we would want to see come up in questions of this kind in order to answer them better.
As a follow-up, I would like to see a more concrete exploration of options for how to deal with this kind of situation. Sort of like an inverse of the question "how does one bet on AI profitably?" In this case it might be the question "how does one neutralize implicit bets on AI?" By options I mean specific financial vehicles and legal maneuvers.
comment by Nicholas / Heather Kross (NicholasKross) · 2023-10-17T22:29:24.116Z · LW(p) · GW(p)
This is a kind of post I appreciate and would like more of: Detailed and clearheaded explorations (as dialogues or otherwise) of weirdly-specific issues that are faced by individuals, yet have unusually-high bearing on larger issues.
- It helps others in similar situations.
- It demonstrates ways for people to think in broadly-similar situations.
- It encourages people in weirdly-specific-yet-high-abs(EV) situations to seek out help from trusted people.
comment by jacobjacob · 2023-10-17T17:57:56.979Z · LW(p) · GW(p)
Hm, I was a bit confused reading this. My impression was "seems like there are multiple viable solutions", but then they were discarded for reasons that seemed kind of tangential, or not dealbreakers to me, where some extra fiddling might've done the trick?
If I get the time later will write up more concretely why some of them still seemed promising.
comment by Matt Goldenberg (mr-hire) · 2023-10-17T16:56:20.587Z · LW(p) · GW(p)
But also, whether you end up in such a position might depend on having already committed to that (like, I would feel more comfortable electing you to the stop button position if I could somehow be confident that you won't have GOOG exposure, which would be easiest if you had already signed a contract that made you GOOG neutral)
It seems marginally more likely to me that Google would put people with "skin in the game" in relation to Google's stock price in positions of power.
comment by nim · 2023-10-17T16:55:35.628Z · LW(p) · GW(p)
Contracts and stuff are great and all, but have you ruled out doing it the dumb way? (dumb in the opposite-of-clever sense)
IMO, the dumb way of decoupling equity compensation from decision-making is to pre-make your decisions about it, and then think about it as little as possible going forward.
My "do it the dumb way" rule around ESPP comp is to max my contribution, then sell as soon as legally possible and take the tax hit, and not try to meta-game whether or not I'd be better off holding it then selling later. I budget the ESPP as I would any other likely-but-not-guaranteed bonus of unknown size.
My "do it the dumb way" rule around equity packages is to never sell during the capital gains period. Then once it's out of capital gains and I form a long-term financial goal for it, I set a ~multi-month "sell at this price" order for what feels like a good but realistic price, and then it sells itself automatically when it gets to that price and I put it toward the long-term goal.
Your "do it the dumb way" around equity might be pre-committing to sell it all as soon as it gets out of the capital gains period, then maybe keep a fixed dollar amount, and donate the rest to a charity of your choice (as you might budget any other windfall). There's a certain kind of emotionally giving up on solving the problem perfectly that you might be able to do -- acknowledge that investing extreme mental effort in figuring out whether you're doing it optimally would actually be so costly that it drags you further from optimal in other, more important areas of your work.
Basically, you might be too smart/knowledgeable/whatever for this to work, but maybe you can do it the dumb way? Take a comp package that would be satisfactory without the equity, precommit to what you'll do with your equity at specific times regardless of the stock's performance, and then just execute the pre-selected algorithm and taboo further optimization attempts around it.
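For what it's worth, the "pre-selected algorithm" can literally be written down once and then executed mechanically. A minimal sketch with hypothetical thresholds (not financial or tax advice):

```python
# Pre-committed rule: hold through the short-term capital gains period,
# then sell everything; keep a fixed dollar amount, donate the rest.
KEEP_DOLLARS = 20_000.0  # hypothetical fixed amount to keep

def handle_vest(shares: int, price: float, past_long_term_window: bool,
                kept_so_far: float) -> str:
    if not past_long_term_window:
        return "hold (still inside the short-term capital gains period)"
    proceeds = shares * price
    keep = min(proceeds, max(0.0, KEEP_DOLLARS - kept_so_far))
    return f"sell all; keep ${keep:,.0f}, donate ${proceeds - keep:,.0f}"

print(handle_vest(100, 140.0, past_long_term_window=True, kept_so_far=0.0))
```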
comment by Max H (Maxc) · 2023-10-16T23:11:56.512Z · LW(p) · GW(p)
Do impact markets or impact certificates help with this, even in theory? Say you press a (real or metaphorical) stop button in a situation where lots of other people would have chosen differently, due to financial incentives or other reasons. There would plausibly be people willing to buy the impact of your decision at a high price.
If it's not immediately obvious that you made a correct / net-positive decision, the initial impact buyers might be investors rather than philanthropists, gambling that they will later be able to resell the purchased impact to philanthropists in the (perhaps distant) future.
I don't think impact markets are currently mature / liquid enough that you should actually count on them for anything right now, but this consideration probably still has some effect on the expected value calculation which is at least directionally aligned with incentives the way you want.
comment by Logan Zoellner (logan-zoellner) · 2024-12-02T15:45:03.765Z · LW(p) · GW(p)
Isn't there just literally a financial product for this? TurnTrout could buy puts on GOOG (or set up a collar) exactly matching his vesting amounts/times.
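For intuition, here's a minimal payoff sketch (hypothetical strike and prices, not financial advice) of how a collar (buy a put and sell a call at the same strike K) locks in the value of a vested share regardless of where the stock ends up:

```python
def collar_value(price: float, strike: float) -> float:
    """Value at expiry of: one share + long put(K) + short call(K)."""
    share = price
    long_put = max(strike - price, 0.0)     # pays off if the stock falls
    short_call = -max(price - strike, 0.0)  # gives up upside above K
    return share + long_put + short_call

for p in [50.0, 100.0, 150.0, 200.0]:
    print(p, collar_value(p, strike=100.0))  # always 100.0
```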
↑ comment by lewis smith (lsgos) · 2024-12-02T16:13:12.527Z · LW(p) · GW(p)
Like at many public companies, Google has anti-insider-trading policies that prohibit employees from trading in options and other derivatives on the company stock, or shorting it.
↑ comment by Logan Zoellner (logan-zoellner) · 2024-12-03T13:02:35.475Z · LW(p) · GW(p)
Seems like he could just fake this by writing a note to his best friend that says "during the next approved stock trading window I will sell X shares of GOOG to you for Y dollars".
Admittedly:
1. technically this is a derivative (maybe illegal?)
2. principal agent risk (he might not follow through on the note)
3. his best friend might encourage him to work harder for GOOG to succeed
But I have a hard time believing any of those would be a problem in the real world, assuming TurnTrout and his friend are reasonably virtuous about actually not wanting TurnTrout to make a profit off of GOOG.
You could come up with more complicated versions of the same thing. For example, instead of his best friend, TurnTrout could gift the profit to a for-charity LLC that had AI Alignment as its mandate. This would (assuming it was set up correctly) eliminate 1 and 3.
↑ comment by lewis smith (lsgos) · 2024-12-03T14:22:46.963Z · LW(p) · GW(p)
Your example agreement with a friend is obviously a derivative, which is just a contract whose value depends on the value of an underlying asset (Google stock in this case). If it's not a formal derivative contract you might be less likely to get in trouble for it than if you did it on Robinhood or whatever (not legal advice!), but it doesn't seem like a very good idea.
comment by Karolis Jucys (karolis-ramanauskas) · 2023-10-29T22:23:25.037Z · LW(p) · GW(p)
Would "delta hedging" be useful here? It helps hedge long option exposure by shorting some amount of a stock.
For example, at the money calls generally have a delta of 0.5, so holding 100 at the money calls and shorting 50 shares makes you roughly neutral for small moves in the underlying asset.
It would probably require monthly rebalancing based on how many options you effectively hold and on market moves. It also wouldn't work well if AGI happens at GDM and Google stock goes exponential (delta rises toward 1 on large moves, so a static hedge breaks down).
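For concreteness, a minimal sketch of the hedge-ratio arithmetic under standard Black-Scholes assumptions (all inputs hypothetical, not financial advice):

```python
from math import log, sqrt, erf

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_delta(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes delta of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return norm_cdf(d1)

# At the money, delta comes out a bit above 0.5 with positive rates.
delta = call_delta(S=100.0, K=100.0, T=1.0, r=0.04, sigma=0.3)
n_calls = 100
print(f"delta ~ {delta:.2f}; short ~{n_calls * delta:.0f} shares "
      f"to be locally neutral on {n_calls} calls")
```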
comment by Oliver Sourbut · 2023-10-17T10:47:00.609Z · LW(p) · GW(p)
Generally I'd hope for conscience to be enough, and with a few exoconsciences to help, this seems quite doable to me. If it helps, I can offer to precommit to considering you a massive jerk if you do anything selfish on the basis of GOOG :)
The serious version of this comment is: are there cheap social incentives you can build in? e.g. promise to donate excess gains (perhaps to some not-too-appealing cause), and get a few trusted people to check in with you once in a while? And those people are people you trust to actually call you out in some painful way(s) if you err? Ideally the people would be somewhat diversified over your communities e.g. a handful from: AIS researchers, an old friend or two, family, maybe even current/former colleagues?
comment by nimit · 2023-10-17T02:33:03.736Z · LW(p) · GW(p)
Assuming your investment portfolio consists of some broad index of stocks, you might modify it to contain every sector except tech, since your google equity compensation makes you over-allocated in tech anyway. So you would be "short" QQQ versus the counterfactual world where you don't have equity compensation. If necessary you could get even more short by buying some SQQQ.
In theory, you could be perfectly indifferent to GOOG by owning SQQQ and all the other stocks in QQQ except GOOG in the correct proportion. Though this probably runs against the spirit if not the letter of any employee trading policy.
Owning some GOOG while being short QQQ probably works pretty well in making you indifferent to GOOG's price in most cases, even over fairly idiosyncratic events like quarterly earnings (though you may have to be more short QQQ in that case). It would fall apart if you were deciding whether to press a button that halved/doubled the value of Google (unless you were unreasonably short QQQ). For that case, precommitting to donate seems like the most reasonable scheme? It feels like any CoI from that should be dominated by whether pressing the button is good for humanity.
Another option: find one or more persons at other companies who also receive stock compensation. Commit to shorting each other's stock so that as a group you have 0 exposure, and then donating everything to charity. This leaves even charity contributions perfectly hedged.
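For concreteness, a minimal sketch of the proportions (the GOOG index weight here is hypothetical, and SQQQ's 3x daily reset means this only holds approximately and needs rebalancing):

```python
GOOG_EXPOSURE = 100_000    # dollars of GOOG compensation to neutralize
GOOG_WEIGHT_IN_QQQ = 0.05  # hypothetical index weight; look up the real one

short_qqq = GOOG_EXPOSURE / GOOG_WEIGHT_IN_QQQ        # QQQ dollars to short
sqqq_instead = short_qqq / 3                          # or hold 3x-inverse SQQQ
ex_goog_rebuy = short_qqq * (1 - GOOG_WEIGHT_IN_QQQ)  # rebuy the non-GOOG names

print(f"short ${short_qqq:,.0f} of QQQ (or hold ${sqqq_instead:,.0f} of SQQQ),")
print(f"and buy ${ex_goog_rebuy:,.0f} of QQQ-ex-GOOG so only the -GOOG leg remains")
```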
↑ comment by ChristianKl · 2023-10-17T13:14:18.649Z · LW(p) · GW(p)
That seems like it would do the opposite of what's intended. If major value gets created via AI in tech, holding other companies with AI exposure would make it less important that Google wins at AI.
comment by johnswentworth · 2023-10-16T18:58:26.449Z · LW(p) · GW(p)
I think you guys should double-check the mechanics of the stock grants. IIRC the big tech companies' compensation packages include "stock" which doesn't actually behave like stock in the obvious intuitive sense; they do some thing where they give you X dollars worth of stock at the market price at vesting time, so it's basically equivalent to giving you X dollars, except the liquidity is funky.
(Low-confidence, I haven't actually worked at a big tech company but have a lot of friends who have.)
↑ comment by Dave Orr (dave-orr) · 2023-10-16T19:01:47.999Z · LW(p) · GW(p)
That's not correct, or at least not how my Google stock grants work. The price is locked in at grant time, not vest time. In practice what that means is that you get x shares every month, which counts as income when multiplied by the current stock price.
And then you can sell them or whatever, including having a policy that automatically sells them as soon as they vest.
↑ comment by Dagon · 2023-10-16T20:22:55.108Z · LW(p) · GW(p)
Most big tech companies use RSUs, or Restricted Stock Units. These are neither options nor dollar equivalents; they are shares. The grant is denominated in shares, for a specific number of them, with a vesting schedule of how many shares become yours on which dates (often a 4-year term, with 25% vesting after 1 year of employment, then ~2% per month or 12.5% every 6 months). Additional grants may be made in future years, with a similar or different vesting schedule.
The stock price when granted is mostly irrelevant (it matters to the company, and sometimes to your loan officer if you're going for a mortgage or something, but it's not part of your compensation or taxable income when granted). The stock price when each block VESTS is treated as normal compensation on your taxes. That price (at vest, not grant) is also the capital-gains basis if you hold the shares for a while and then sell them.
Behind the scenes, the companies calculate the grants via a "total comp target", figuring out how many shares to grant based on a dollar amount. But by the time it's actually written down in an offer or made as a grant, it's just shares. The only differences between a granted-but-unvested RSU and a normal share are that it's conditional on continued employment and that there are no voting rights. Once vested, it's literally a normal share.
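For concreteness, a minimal sketch of that schedule and the tax treatment described above (all numbers hypothetical, not tax advice):

```python
GRANT_SHARES = 480          # hypothetical 4-year grant, denominated in shares
CLIFF = 0.25                # 25% vests after one year of employment
MONTHLY = (1 - CLIFF) / 36  # then ~2% per month for three more years

def vested_fraction(months: int) -> float:
    if months < 12:
        return 0.0
    return min(1.0, CLIFF + MONTHLY * (months - 12))

vest_price = 140.0                       # hypothetical price on a vest date
monthly_shares = GRANT_SHARES * MONTHLY  # 10 shares per month here
income = monthly_shares * vest_price     # taxed as ordinary compensation
print(f"one monthly vest: {monthly_shares:.0f} shares, ${income:,.0f} income, "
      f"capital-gains basis ${vest_price:.2f}/share")
```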
↑ comment by johnswentworth · 2023-10-16T20:26:36.727Z · LW(p) · GW(p)
Ah, I was probably thinking of the RSU/option distinction and misremembered. Thanks!
↑ comment by habryka (habryka4) · 2023-10-16T20:26:12.118Z · LW(p) · GW(p)
This is at least not true for Google, as far as I can tell. Also seems kind of weird since I feel like the primary purpose of stock compensation is to create incentive alignment between employee and company.
↑ comment by Dagon · 2023-10-16T23:14:38.064Z · LW(p) · GW(p)
It's always tricky to impute "primary purpose", but from what I saw, the incentive effect was mentioned often but nobody really believed it. There's just no way for any individual to tell if they had an impact on that dimension. I'd say the finance and reporting aspects (it's not part of your salary, and is separate in many corporate reports) are probably more important in keeping it universal.
comment by momom2 (amaury-lorin) · 2023-10-17T08:08:40.970Z · LW(p) · GW(p)
To avoid being negatively influenced by perverse incentives to make societally risky plays, couldn't TurnTrout just leave the handling of his finances to someone else and be unaware of whether or not he has Google stock?
Doesn't matter if he does, as long as he doesn't think he does; and if he's uncertain about it, I think psychologically it'll already greatly reduce caring about Google stock.
↑ comment by habryka (habryka4) · 2023-10-17T16:55:15.277Z · LW(p) · GW(p)
Everyone at Google gets unvested Google stock that can't be sold. It's going to be very hard to start believing that he doesn't own any Google stock.
↑ comment by the gears to ascension (lahwran) · 2023-10-19T03:36:24.368Z · LW(p) · GW(p)
Are you sure it can't be sold? That doesn't sound right to me. I was able to sell mine.
↑ comment by habryka (habryka4) · 2023-10-19T05:05:31.834Z · LW(p) · GW(p)
I mean, you can sell as soon as it vests, but you can't sell the unvested stock (the shares are non-transferable until vested).
comment by the gears to ascension (lahwran) · 2023-10-18T01:44:05.800Z · LW(p) · GW(p)
Just set up autosale. They should tell you about it during onboarding - you can automatically sell your stock grants as soon as they're given to you.
↑ comment by habryka (habryka4) · 2023-10-18T01:50:39.575Z · LW(p) · GW(p)
Not sure what this is responding to. I agree that this helps, but of course this means you are still long Google for the whole period that your stock is vesting.