Should we buy Google stock?
post by Random Trader (Sergio Manuel Justo Maceda) · 2022-05-15T18:38:24.009Z · LW · GW · 25 comments
This is a question post.
It seems DeepMind is winning the AGI race. In the past, they negotiated with Google to adopt a nonprofit legal structure, but the proposal was flatly rejected.
They have an agreement that, if AGI is achieved, control of it will rest with DeepMind's Ethics Board, about which we know almost nothing. Either way, one would think it very likely that Google will end up making a profit.
So... what do you think? Is Google stock the best investment a rationalist can make at this very moment, given the information that we have?
Answers
I have sometimes considered this but worry that doing so will lower the cost of capital for AGI-constructing companies and accelerate AGI development.
I'm not sure this is a realistic concern for Google/Alphabet - I think they have not bothered to raise capital since the Google IPO and aren't about to start.
25 comments
Comments sorted by top scores.
comment by starship006 (cody-rushing) · 2022-05-15T23:02:08.028Z · LW(p) · GW(p)
Meta comment: Would someone mind explaining to me why this question is being received poorly (negative karma right now)? It seemed like a very honest question, and while the answer may be obvious to some, I doubt it was to Sergio. lc's response was definitely unnecessarily aggressive/rude, and it appears that most people would agree with me there. But many people also downvoted the question itself, and that doesn't make sense to me; shouldn't questions like these be encouraged?
Replies from: Dagon, lc
↑ comment by Dagon · 2022-05-15T23:30:11.886Z · LW(p) · GW(p)
I didn't downvote, because it was already negative and I didn't feel the need to pile on. But if it had been positive, I would have.
It's probably an honest question, but it doesn't contain any analysis or hooks into a direction of inquiry. It doesn't explain why investing in Google at the retail level is likely to have any impact on the speed or alignment of AGI, nor why the stock will do better than what is already priced in on any given timeframe because of this deal.
↑ comment by lc · 2022-05-15T23:10:06.720Z · LW(p) · GW(p)
My guess is that the question is being received poorly because either:
- People agree with me that supporting investment in Google stock on the grounds that they're going to "profitably" build world-ending AGI is immoral, and downvoted me because of my aggressive and rude posture.
- OP is disregarding the efficient market hypothesis on a company with a trillion-dollar market cap for no good reason.
↑ comment by Random Trader (Sergio Manuel Justo Maceda) · 2022-05-15T23:31:42.006Z · LW(p) · GW(p)
The efficient market hypothesis? Are you serious? So... you're saying the world is going to end and nobody is doing anything to avoid it, but I can't say that a stock is going to appreciate and that nobody is buying it?
Replies from: lc
↑ comment by lc · 2022-05-15T23:36:22.178Z · LW(p) · GW(p)
Yeah, pretty much. Welcome to Earth.
Replies from: Sergio Manuel Justo Maceda
↑ comment by Random Trader (Sergio Manuel Justo Maceda) · 2022-05-16T00:02:02.236Z · LW(p) · GW(p)
You've deleted the first part of your comment, probably because you realized it didn't make much sense, but I'm going to respond to it anyway. You made a comparison between solving the alignment problem and predicting the price of a stock, and that's just not right. Google execs don't have to solve the alignment problem themselves; they just have to recognize its existence and its magnitude, in the same way that retail investors don't have to build the AGI themselves but just have to notice that it's going to happen soon.
Replies from: lc
↑ comment by lc · 2022-05-16T00:05:51.906Z · LW(p) · GW(p)
You've deleted the first part of your comment, probably because you realized it didn't make much sense
I deleted it because my comment sounds cooler in my head if I leave the explanation out, and also because I was tired of arguing.
You made a comparison between solving the alignment problem and predicting the price of a stock, and that's just not right. Google execs don't have to solve the alignment problem themselves; they just have to recognize its existence and its magnitude, in the same way that retail investors don't have to build the AGI themselves but just have to notice that it's going to happen soon.
The point I was (maybe poorly) trying to make was that Google execs are not individually incentivized to lobby their company to prevent an AGI catastrophe in the same way that hedge fund managers are incentivized to predict Google's stock price. Those executives are not getting paid to delay AGI timelines, and many are getting paid not to delay AGI timelines.
AGI prevention is a coordination problem. Securities pricing is a technical problem. In the same way that society is really bad at tax law or at preventing global warming, and really good at video game development, society is really bad at AGI alignment and really good at pricing securities. And so oil barons continue producing oil and Google continues producing AGI research.
Replies from: Sergio Manuel Justo Maceda
↑ comment by Random Trader (Sergio Manuel Justo Maceda) · 2022-05-16T00:31:16.631Z · LW(p) · GW(p)
Yeah, but let's be honest: oil barons don't think climate change is going to kill them. Capitalism may produce all sorts of coordination problems, but personal survival is still the strongest incentive. I think Google execs wouldn't hesitate to stop the research if they were expecting a paperclip maximizer.
Replies from: lc
↑ comment by lc · 2022-05-17T00:31:26.620Z · LW(p) · GW(p)
I think you're being naive. But it doesn't really matter. Oil barons, in practice, also tend to convince themselves climate change is a hoax, or rationalize their participation away with "if we don't do it somebody else will". That's what the vast majority of Google executives would do if it got to the point where they started worrying a bit, and unfortunately the social pressure isn't even sufficient to drive them there yet.
comment by lc · 2022-05-15T19:42:57.059Z · LW(p) · GW(p)
Go fuck yourself.
Replies from: Sergio Manuel Justo Maceda
↑ comment by Random Trader (Sergio Manuel Justo Maceda) · 2022-05-15T20:00:50.248Z · LW(p) · GW(p)
May I know why?
Replies from: lc
↑ comment by lc · 2022-05-15T20:23:31.654Z · LW(p) · GW(p)
There are deep theoretical reasons why even a conscientiously designed AGI would be antagonistic towards human values. No solution, or roadmap to a solution, for those problems is known. Deliberately investing in the companies that succeed in pushing the AI capabilities frontier, before it's clear that their research won't eventually kill everyone, just so we can make a little bit of money in the 10-15 year interim, is probably counterproductive.
This is by no means the only example, but if you'd like a good intuitive understanding of the type of thing that can go wrong, Rob Miles did a Computerphile episode you can find here.
Replies from: Sergio Manuel Justo Maceda
↑ comment by Random Trader (Sergio Manuel Justo Maceda) · 2022-05-15T21:12:32.089Z · LW(p) · GW(p)
Well, Google already has enough cash to build an AGI 20 times over. I don't think you can blame human extinction on average Joes who buy shares right before the end of the world.
Replies from: lc
↑ comment by lc · 2022-05-15T21:52:43.387Z · LW(p) · GW(p)
I can and do blame everyone who invests in Google, especially those who do it because of its end-of-the-world research department. My circle of blame actually extends as far as the crypto miners driving up the price of Nvidia GPUs.
Replies from: Sergio Manuel Justo Maceda
↑ comment by Random Trader (Sergio Manuel Justo Maceda) · 2022-05-15T22:22:43.547Z · LW(p) · GW(p)
Fair enough. Have you already told Rohin Shah to go fuck himself?
Replies from: lc
↑ comment by lc · 2022-05-15T22:35:45.761Z · LW(p) · GW(p)
Don't split hairs. He's an alignment researcher.
Replies from: Sergio Manuel Justo Maceda
↑ comment by Random Trader (Sergio Manuel Justo Maceda) · 2022-05-15T22:58:33.523Z · LW(p) · GW(p)
But he's not a doomer like you. Aren't you pissed at everyone who's not a doomer?
Replies from: lc
↑ comment by lc · 2022-05-15T23:08:55.724Z · LW(p) · GW(p)
I'm not pissed at the Indian rice farmer who doesn't understand alignment and will be as much of a victim as me when DeepMind researchers accidentally kill me and my relatives.
I'm very much not pissed at Rohin Shah, who, whatever his beliefs, is making a highly respectable attempt to solve the problem and not contributing to it.
I am appropriately angry at the DeepMind researchers who push the capabilities frontier and for some reason err in their anticipation of the consequences.
I am utterly infuriated at the people who agree with me about the consequences and decide to help push that capabilities frontier anyways, either out of greed or some "science fiction protagonist" syndrome.
Replies from: Sergio Manuel Justo Maceda
↑ comment by Random Trader (Sergio Manuel Justo Maceda) · 2022-05-15T23:20:09.219Z · LW(p) · GW(p)
Who are those latter people? Do you have any examples?
Replies from: lc
↑ comment by lc · 2022-05-15T23:48:47.474Z · LW(p) · GW(p)
It's subtle [LW · GW] because few people explicitly believe that's what they're doing; they just agree on doomerism and then perform greed- or prestige-induced rationalizations that what they're doing isn't really contributing. For example, Shane Legg: he'll admit that the chance of human extinction from AGI is somewhere "between 5-50%" but then go and found DeepMind anyway. Many people at OpenAI also fit the bill, for varying reasons.
Replies from: niplav
↑ comment by niplav · 2022-05-16T12:03:48.941Z · LW(p) · GW(p)
It's relevant to note that Legg is also doing a bunch of safety research, much of it listed here. I don't see why it should be obvious that he's making a less respectable attempt to solve the problem than other alignment researchers. (He's working on the causal incentives framework, and on stuff related to avoiding wireheading.)
Also, wasn't DeepMind an early attempt at gathering researchers to be able to coordinate against arms races?
Replies from: lc
↑ comment by lc · 2022-05-16T23:44:36.038Z · LW(p) · GW(p)
It's relevant to note that Legg is also doing a bunch of safety research, much of it listed here. I don't see why it should be obvious that he's making a less respectable attempt to solve the problem than other alignment researchers. (He's working on the causal incentives framework, and on stuff related to avoiding wireheading.)
I'm glad, but if Hermann Göring had retired from public leadership in 1936 and then spent the rest of his life making world-peace posters, I still wouldn't consider him a good person.
Also, wasn't DeepMind an early attempt at gathering researchers to be able to coordinate against arms races?
Sounds like a great rationalization for AI researchers who are intellectually concerned about their actions, but really want to make a boatload of money doing exactly what they were going to do in the first place. I don't understand at all how that would work, and it sure doesn't seem like it did.
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2022-05-20T03:22:45.285Z · LW(p) · GW(p)
Without necessarily disagreeing, I'm curious exactly how far back you want to push this. The natural outcome of technological development has been clear to sufficiently penetrating thinkers since the nineteenth century. Samuel Butler saw it. George Eliot saw it [LW · GW]. Following Butler, should "every machine of every sort [...] be destroyed by the well-wisher of his species," that we should "at once go back to the primeval condition of the race"?
In 1951, Turing wrote that "it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers [...] At some stage therefore we should have to expect the machines to take control".
Turing knew. He knew, and he went and founded the field of computer science anyway. What a terrible person, right?
Replies from: lc
↑ comment by lc · 2022-05-20T22:00:09.337Z · LW(p) · GW(p)
I don't know. At least to Shane Legg.
Replies from: Sergio Manuel Justo Maceda
↑ comment by Random Trader (Sergio Manuel Justo Maceda) · 2022-05-23T18:17:46.130Z · LW(p) · GW(p)
According to Eliezer, free will is an illusion, so Shane doesn't really have a choice.