Engaging Seriously with Short Timelines

post by sapphire (deluks917) · 2020-07-29T19:21:31.641Z · LW · GW · 21 comments

It seems like transformative AI might be coming fairly soon. By transformative AI I just mean AI that will rapidly accelerate economic and technological progress. Of course, I am not ruling out a true singularity either. I am assuming such technology can be created using variants of current deep learning techniques.

Paul Christiano has written up arguments for a 'slow takeoff' in which "There will be a complete 4-year interval in which world output doubles, before the first 1-year interval in which world output doubles." It is unclear to me whether that is more or less likely than a rapid and surprising singularity, but it certainly seems much easier to prepare for. I don't think we have a good model of what exactly will happen, but we should prepare for as many winnable scenarios as we can.
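To make the quantitative content of that definition concrete, here is a quick back-of-the-envelope sketch (my own arithmetic, not Christiano's):

```python
# Annualized growth rate implied by doubling world output over n years:
# (1 + g)^n = 2, so g = 2**(1/n) - 1.

def growth_for_doubling(years: float) -> float:
    """Annualized growth rate that doubles output in `years` years."""
    return 2 ** (1 / years) - 1

print(f"4-year doubling: {growth_for_doubling(4):.1%}/yr")  # ~18.9%, vs roughly 3% historically
print(f"1-year doubling: {growth_for_doubling(1):.1%}/yr")  # 100.0%
```

Even the 'slow' milestone implies several times the world's historical growth rate, which is why a slow takeoff would still be obvious and dramatic while it happened.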

What should we do now if we think big changes are coming soon? Here are some ideas:

Work on quickly usable AI safety theory: Iterated Amplification and Distillation - If timelines are short, we might not have time for provably safe AI. We need AI safety theory that can be applied quickly to neural nets. Any technique that can quickly be used to align GPT-style AI is very high value. If you have the ability, work on them now.

In my opinion, IDA is a good framework to bet on; OpenAI seems to be betting on it as well. Here is an explanation. Here is a LessWrong discussion [LW · GW]. If you are mathematically inclined and understand the basics of deep learning, now might be a great time to read the IDA papers and see if you can contribute.

Get capital while you can - Money is broadly useful and can be quickly converted into other resources in a critical moment. At the very least money can be converted into time. Be frugal, you might need your resources soon.

Besides, the value of human capital might fall. If you have a lucrative position (e.g., in finance or tech), now is a good time to focus on making money. Since human capital may be worth much less soon, investing in it by going back to school is a bad idea.

Invest Capital in companies that will benefit from AI technology - Tech stocks are already expensive, so great deals will be hard to find. But if things get crazy, you want your capital to grow rapidly. I would especially recommend hedging against 'transformative AI' if you will get rich anyway when nothing crazy happens.

I am doing something like the following portfolio:

ARKQ - 27%
BOTZ - 9%
Microsoft - 9%
Amazon - 9%
Alphabet - 8% (ARKQ is ~4% Alphabet)
Facebook - 7%
Tencent - 6%
Baidu - 6%
Apple - 5%
IBM - 4%
Intel - 3%
Nvidia - 2% (both BOTZ and ARKQ also hold Nvidia)
Salesforce - 2%
Twilio - 1.5%
Alteryx - 1.5%
Tesla - 0% (ARKQ is ~10% Tesla)

BOTZ and ARKQ are ETFs with pretty high expense ratios. You can replicate them yourself if you want to save the 68-75 basis points they charge. BOTZ is fairly easy to replicate with only ~$10K.
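As a rough sanity check on an allocation like this, the arithmetic can be scripted. The sketch below uses the figures quoted above; the Nvidia weights inside each ETF are placeholders I made up, since the post only says both funds hold it:

```python
# Sanity-check the portfolio above: weights should sum to 100%, and
# look-through exposure should count stock held indirectly via ETFs.

direct = {  # direct holdings, as fractions of the portfolio
    "ARKQ": 0.27, "BOTZ": 0.09, "MSFT": 0.09, "AMZN": 0.09, "GOOGL": 0.08,
    "FB": 0.07, "TCEHY": 0.06, "BIDU": 0.06, "AAPL": 0.05, "IBM": 0.04,
    "INTC": 0.03, "NVDA": 0.02, "CRM": 0.02, "TWLO": 0.015, "AYX": 0.015,
}
assert abs(sum(direct.values()) - 1.0) < 1e-9  # weights sum to 100%

# Approximate weights inside each ETF. The Tesla and Alphabet numbers come
# from the post's parentheticals; the Nvidia numbers are made-up placeholders.
etf_holdings = {
    "ARKQ": {"TSLA": 0.10, "GOOGL": 0.04, "NVDA": 0.03},
    "BOTZ": {"NVDA": 0.08},
}

def look_through(ticker: str) -> float:
    """Direct weight plus weight held indirectly through the ETFs."""
    indirect = sum(direct[etf] * inside.get(ticker, 0.0)
                   for etf, inside in etf_holdings.items())
    return direct.get(ticker, 0.0) + indirect

for t in ("TSLA", "GOOGL", "NVDA"):
    print(f"{t}: {look_through(t):.2%} effective exposure")

# Replicating an ETF yourself saves its expense ratio:
for bps in (68, 75):
    print(f"{bps} bps on $10,000 = ${10_000 * bps / 10_000:.0f}/yr")
```

Under these assumed ETF weights, the headline 8% Alphabet and 2% Nvidia understate true exposure once the ETFs are counted, and replication saves roughly $70 per year per $10K invested.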

Several people think that land will remain valuable in many scenarios, but I don't see a good way to operationalize a bet on land. Some people have suggested buying options, since it is easier to get leverage and the upside is higher. Getting the timing right seems tricky to me, but if you think you can time things, buy options.

Physical and Emotional Preparation - You don't want your body or mind to fail you during the critical period. Invest in keeping yourself as healthy as possible. If you have issues with RSI, work on fixing them now so you can give future developments your full attention.

You can also invest in mental preparation. Meditation is high value for many people. A systematic study of rationality techniques could be useful. But keep in mind that it is easy to waste time if you approach training casually. Track your results and have a system!

In general, you want to make these investments now while you still have time. Keep in mind these investments may conflict with attempts to increase your monetary capital. I would prioritize keeping yourself healthy. Make sure you are getting good returns on more speculative investments (and remember many self-improvement plans fail).

Political Organizing and Influence - Technological progress does not intrinsically help people. Current technologies can be used for good ends, but they can also be used to control people on a huge scale. One can interpret the rise of humanity as singularity 1.0: by the standards of previous eras, change accelerated a huge amount. 'Singularity 1.0' did not go so well for the animals in factory farms. Even if we align AI, we need to make the right choices, or singularity 2.0 might not go so well for most inhabitants of the Earth.

In a slow takeoff, human governments are likely to be huge players. As Milton Friedman said, "Only a crisis - actual or perceived - produces real change". If a crisis is coming, there may be large political changes coming soon, and influencing those changes might be of high value. Politics can be influenced from both the outside and the inside. Given the political situation, I find it unlikely an AI arms race can be averted for long. But various sorts of intergovernmental cooperation might be possible, and increasing the odds of such deals could be of high value.

Capabilities Research - This is a sketchy and rather pessimistic idea. But imagine that GPT-3 has already triggered an arms race or at least that GPT-4 will. In this case, it might make sense to help a relatively values-aligned organization win (such as OpenAI as opposed to the CCP). If you are, or could be, very talented at deep learning you might have to grapple with this option.

What ideas do other people have for dealing with short timelines?

Cross-posted from my blog: Short Timelines

21 comments

Comments sorted by top scores.

comment by ogrok · 2020-07-29T21:27:27.813Z · LW(p) · GW(p)

You have detailed perhaps the more productive side of the advice—what to do—and I have consequently thought of some quick bullets on what not to do.

  1. Getting into a career likely to be made obsolete is bad. That may describe most careers, in which case it is prudent to distill productive activities to a very small list not unlike the one you write above, or it may be a more workable subset. Either way, much of copywriting, rudimentary coding for basic websites, and the like is likely to disappear. One should either become an expert in a very specific niche of these things or avoid them; there won't be much economic advantage for someone whose body of work is the corpus for GPT-4, etc.!

  2. Paralyzing pessimism or indifference is also bad. Even if we are racing toward a singularity, inaction is worse than action, and staying active is very important for mental health. Not everyone should take this as "go get a job in AI research," but it might be a good time to read over 80,000 Hours to make sure your direction in life is deliberate and robust enough to fulfill you through an uncertain future.

comment by PeterMcCluskey · 2020-08-05T17:15:27.123Z · LW(p) · GW(p)

AI-related investment thoughts:

I have stock in Google and Intel, mainly because of their Waymo and Mobileye subsidiaries, and to a lesser extent due to DeepMind.

NVIDIA was my favorite AI bet in 2017-18, but it currently looks too expensive for me.

One Stop Systems Inc is an obscure company in which I've invested, because its business involves AI. It's hardly a leader in AI, but should benefit by enabling ordinary businesses to use AI.

I doubt that Apple, Microsoft, or Salesforce will be good ways to benefit from AI, and their prices are looking somewhat bubble-like (as do most of the big tech companies).

I expect that semiconductor equipment companies will be somewhat more appropriate, and they currently seem less likely to be overpriced. I've currently got investments in these small semiconductor equipment companies:

  • Amtech Systems
  • Trio-Tech International
  • SCI Engineered Materials, Inc.

The biggest semiconductor equipment companies (symbols BRKS, LRCX, KLAC, and AMAT) look like decent investments, but not quite cheap enough that I'm willing to buy them.

There are some datacenter-focused companies that are potentially good investments, but none of the ones I've looked at appear reasonably priced.

Transformative AI might speed up the creation of an Age of Em scenario, so it might be good to invest in regions which that book suggests might have em cities (places with stable governance, and enough cold water for cooling purposes). Countries that best fit this description include: Norway, New Zealand, Sweden, Denmark, Finland, Canada, and Japan.

I currently have investments in New Zealand and Sweden ETFs (symbols ENZL, EWD). I'm avoiding Norway for now, and not investing much in Canada, due to their dependence on oil (I expect oil prices to decline this decade).

Here are some ideas for investing in companies that own land (listed by symbol; not AI-focused, but recommended for general diversification of bets): WPS (etf, non-US real estate), CTT (timber lands), TPL, and TRC. I'm currently invested in WPS and a bunch of real-estate-related companies in the US and Hong Kong, but I haven't yet focused on land (I've been betting mainly on buildings).

Replies from: habryka4, ioannes_shade
comment by habryka (habryka4) · 2020-08-05T21:41:59.027Z · LW(p) · GW(p)

Microsoft

Doesn't Microsoft own a substantial fraction of the cloud-computing industry, which seems to have benefited a good amount from AI? E.g., they had that big deal with OpenAI about giving them access to a billion dollars of compute.

Replies from: PeterMcCluskey
comment by PeterMcCluskey · 2020-08-09T04:09:32.936Z · LW(p) · GW(p)

Yes, now that I look more carefully, I see that Microsoft's cloud revenues are large enough to be a somewhat plausible reason to bet on Microsoft.

comment by ioannes (ioannes_shade) · 2020-08-11T20:07:50.912Z · LW(p) · GW(p)
The biggest semiconductor equipment companies (symbols BRKS, LRCX, KLAC, and AMAT) look like decent investments, but not quite cheap enough that I'm willing to buy them.

What do you think of companies like Broadcom, NXP, Marvell, and MediaTek?

(I don't quite know where these sit in the value chain in relation to the companies you quoted; I believe they're focused more on chip design and mostly don't do fabrication)

Replies from: PeterMcCluskey
comment by PeterMcCluskey · 2020-08-15T02:42:47.047Z · LW(p) · GW(p)

They're harder to evaluate. Broadcom looks somewhat promising. I know very little about the others.

Micron Technology is another one that's worth looking at.

comment by dominicq · 2020-07-31T09:10:07.088Z · LW(p) · GW(p)

Some career ideas for non-math and non-finance people:

Pursue a more primitive lifestyle: live off the land and farm. You can make it escapist (trying to ignore what's going on in the world) or a strategic fortress (if everything crumbles, I will not starve in the city). Everyone will always need food, so for as long as there are humans, there will be a need for those who grow it. It is also a good option because you can dial the primitive part up or down: you can either be a secluded monk or a farmer feeding the region.

Pursue a trade or a human-contact job: no GPT will replace a nurse, a physical therapist, a plumber, or an electrician. For as long as people need things, they will need someone to do those things for them.

comment by Donald Hobson (donald-hobson) · 2020-07-29T23:34:39.564Z · LW(p) · GW(p)
Get capital while you can - Money is broadly useful and can be quickly converted into other resources in a critical moment. At the very least money can be converted into time. Be frugal, you might need your resources soon.
Besides, the value of human capital might fall. If you have a lucrative position (e.g., in finance or tech), now is a good time to focus on making money. Since human capital may be worth much less soon, investing in it by going back to school is a bad idea.

I find this advice puzzling.

Firstly, I suspect that when things get sufficiently crazy, money will become increasingly worthless.

Secondly, the general paradigm of automation often involves replacing many low-skill workers with a few high-skill engineers. By the point at which an AI can do everything a highly skilled human AI expert can do, most human jobs will be automated and recursive self-improvement will be picking up.

Suppose that an AI system can do work similar to a human AI expert at a compute cost of $10,000 (i.e., less than an AI expert's salary, but not orders of magnitude less).

This is the least extreme tech you need to put an AI expert out of a good job, and even then, I suspect some people will want a human. At this stage, the improvement work on AI is being done by AI. If we haven't already developed some friendliness theory, we will have unfriendly AIs producing more powerful unfriendly AIs. Most of the alignment work that you can do on an AI that's as smart as you and twice as fast can be done on an AI far smarter than you, like post-FOOM ones.

In this scenario, the AIs will probably be smart enough to consider the end they are working towards. If that end is profit, we are pretty much doomed. If it is being nice, we won't really need money.

Replies from: yosarian-t, deluks917
comment by Yosarian T (yosarian-t) · 2020-07-30T00:00:37.632Z · LW(p) · GW(p)

I think there's a wide range of scenarios where narrow AI makes certain companies more profitable, replaces a lot of jobs, and maybe changes society as much as the industrial revolution did, without tipping over into recursive self-improvement of that type. Or at least not right away.

comment by sapphire (deluks917) · 2020-07-29T23:47:55.640Z · LW(p) · GW(p)

Most people, including most lesswrong readers, are not top AI experts. Nor will they be able to become one quickly.

Replies from: donald-hobson
comment by Donald Hobson (donald-hobson) · 2020-07-30T10:50:00.986Z · LW(p) · GW(p)

I would be surprised if we got a future where everything except top AI research is done by AI. What future are you thinking of where a few hundred AI researchers can earn a living, and no one else can?

comment by Nicholas / Heather Kross (NicholasKross) · 2020-08-05T13:17:11.793Z · LW(p) · GW(p)

Any ideas for accruing money quickly outside of a job? I don't have much capital to invest currently.

comment by Natália (Natália Mendonça) · 2020-08-01T00:16:36.901Z · LW(p) · GW(p)
if things get crazy you want your capital to grow rapidly.

Why (if by "crazy" you mean "world output increasing rapidly")? Isn't investing to try to have much more money in case world output is very high somewhat like buying insurance to pay the cost of a taxi to the lottery office in case you win the lottery? Your net worth is positively correlated with world GDP, so worlds in which world GDP is higher are worlds in which you have more money, and thus worlds in which money has a lower marginal utility to you. People do tend to value being richer than others in addition to merely being rich, but perhaps not enough to make those investments the obviously best choice.

(h/t to Avraham Eisenberg for this point)
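A toy calculation (my illustration with made-up wealth numbers, not Natália's) makes the marginal-utility point concrete:

```python
# With log utility, the marginal utility of money is 1/wealth, so the same
# dollar gain is worth less in worlds where you are already richer.
import math

worlds = {"normal world": 100_000, "AI-boom world": 1_000_000}  # assumed wealth levels
extra = 10_000  # an extra $10K from a successful AI bet

for world, wealth in worlds.items():
    gain = math.log(wealth + extra) - math.log(wealth)
    print(f"{world}: utility gain from +${extra:,} = {gain:.4f}")
# ~0.0953 vs ~0.0100: the bet pays off precisely in the world
# where each extra dollar is worth roughly 10x less to you.
```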

Replies from: gwern
comment by gwern · 2020-08-01T00:27:51.668Z · LW(p) · GW(p)

In an automation scenario, is your net worth correlated with world GDP? (Was the net worth of horses correlated with world GDP growth during the Industrial Revolution? Or chimpanzees during all human history? In an em scenario, who do the gains flow to - is it humans who own no capital and who earn income through labor?)

Replies from: Natália Mendonça
comment by Natália (Natália Mendonça) · 2020-08-01T06:01:07.183Z · LW(p) · GW(p)

Thanks for pointing this out; you’re right that your net worth wouldn’t necessarily be correlated with world GDP in many plausible scenarios of how takeoff could happen. I suppose the viability of things like taxation and redistribution of wealth by governments as well as trade involving humans during and after a takeoff could be the main determinants of whether the correlation between the two would be as strong as it is today or closer to zero. I wonder what I should expect the correlation to be.

ETA: After all, governments don’t redistribute human wealth to either horses or chimpanzees, and humans don’t engage in trade with them.

comment by Mitchell_Porter · 2020-07-30T02:37:45.008Z · LW(p) · GW(p)

Support metaethical.ai

Replies from: stoat
comment by stoat · 2020-08-01T02:59:29.910Z · LW(p) · GW(p)

I happened upon this website once previously and couldn't quickly come to an assessment of the project before moving on. I assume from your comment that you feel this is a worthwhile project? I'd be interested to hear your take on it.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2020-09-21T12:11:30.813Z · LW(p) · GW(p)

Hi, for some reason I didn't see this reply until recently.

metaethical.ai is the most sophisticated sketch I've seen, of how to make human-friendly AI. In my personal historiography of "friendliness theory", the three milestones so far are Yudkowsky 2004 (Coherent Extrapolated Volition), Christiano 2016 (alignment via capability amplification), and June Ku 2019 ("AIXI for Friendliness").

To me, it's conceivable that the metaethical.ai schema is sufficient to solve the problem. It is an idealization ("we suppose that unlimited computation and a complete low-level causal model of the world and the adult human brains in it are available"), but surely a bounded version that uses heuristic models can be realized.

Replies from: stoat
comment by stoat · 2020-09-22T19:29:05.703Z · LW(p) · GW(p)

Thanks! FWIW your high opinion of the project counts for a lot with me; I will allocate more attention to it and seriously consider donating.

comment by Alexei · 2020-07-30T02:58:22.702Z · LW(p) · GW(p)

Getting the timing [on buying options] right seems tricky to me.

Decide how much to spend per month and just buy the longest-dated options available. (I think the payout for this gets progressively better the closer you think we are to the singularity.)
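A minimal sketch of that strategy's bookkeeping, with entirely made-up option prices (illustration only, not market data or advice):

```python
# Dollar-cost averaging a fixed monthly budget into long-dated call options.
# All prices are invented for illustration; real option pricing is harder.

monthly_budget = 500          # dollars committed per month
premium = 250                 # assumed cost of one long-dated call contract
strike = 120                  # assumed strike price of the underlying
shares_per_contract = 100

contracts = sum(monthly_budget // premium for _ in range(12))  # one year of buying

def net_payoff(price_at_expiry: float) -> float:
    """Intrinsic value of the accumulated calls at expiry, minus premiums paid."""
    intrinsic = max(price_at_expiry - strike, 0) * shares_per_contract * contracts
    return intrinsic - contracts * premium

for price in (100, 150, 300):  # flat, moderate, and "crazy" scenarios
    print(f"underlying at {price}: net payoff ${net_payoff(price):,.0f}")
```

The convexity is the point: in the flat scenario you lose only the premiums, while in the "crazy" scenario the position pays off many times over, which is why the expected payout improves the shorter you think timelines are.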

comment by Mati_Roy (MathieuRoy) · 2020-10-04T20:12:51.157Z · LW(p) · GW(p)

Moving to the US, as it is the country most likely to win the AI race.