Engaging Seriously with Short Timelines

post by deluks917 · 2020-07-29T19:21:31.641Z · score: 35 (26 votes) · LW · GW · 13 comments

It seems like transformative AI might be coming fairly soon. By transformative AI I just mean AI that will rapidly accelerate economic and technological progress. Of course, I am not ruling out a true singularity either. I am assuming such technology can be created using variants of current deep learning techniques.

Paul Christiano has written up arguments for a 'slow takeoff', in which "there will be a complete 4-year interval in which world output doubles, before the first 1-year interval in which world output doubles." It is unclear to me whether that is more or less likely than a rapid and surprising singularity, but it certainly seems much easier to prepare for. I don't think we have a good model of exactly what will happen, but we should prepare for as many winnable scenarios as we can.

What should we do now if we think big changes are coming soon? Here are some ideas:

Work on quickly usable AI safety theory: Iterated Amplification and Distillation - Assuming timelines are short, we might not have time for provably safe AI. We need AI safety theory that can be applied quickly to neural nets. Any techniques that can quickly be used to align GPT-style AI are very high value. If you have the ability, work on them now.

IDA is a good framework to bet on, in my opinion, and OpenAI seems to be betting on it. Here is an explanation. Here is a lesswrong discussion [LW · GW]. If you are mathematically inclined and understand the basics of deep learning, now might be a great time to read the IDA papers and see if you can contribute.

Get capital while you can - Money is broadly useful and can be quickly converted into other resources in a critical moment. At the very least money can be converted into time. Be frugal, you might need your resources soon.

Besides, the value of human capital might fall. If you have a lucrative position (ex: finance or tech), now is a good time to focus on making money. Investing in your human capital, for example by going back to school, is a bad idea.

Invest capital in companies that will benefit from AI technology - Tech stocks are already expensive, so great deals will be hard to find. But if things get crazy you want your capital to grow rapidly. I would especially recommend hedging 'transformative AI' if you will get rich anyway should nothing crazy happen.

I am doing something like the following portfolio:

ARKQ - 27%
BOTZ - 9%
Microsoft - 9%
Amazon - 9%
Alphabet - 8% (ARKQ is ~4% alphabet)

Facebook - 7%
Tencent - 6%
Baidu - 6%
Apple - 5%
IBM - 4%

Tesla - 0% (ARKQ is ~10% Tesla)
Nvidia - 2% (both Botz and ARKQ hold Nvidia)
Intel - 3%
Salesforce - 2%
Twilio - 1.5%
Alteryx - 1.5%

BOTZ and ARKQ are ETFs with pretty high expense ratios. You can replicate them yourself if you want to save 68-75 basis points. BOTZ is pretty easy to replicate with only ~$10K.
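As a quick sanity check on an allocation like the one above, it helps to verify the weights sum to 100% and to compute your look-through exposure to names held both directly and inside the ETFs. A minimal sketch, using the portfolio above and the rough ETF weights mentioned in the post (ARKQ ~10% Tesla, ~4% Alphabet); the ticker symbols and helper function are my own illustration, not from the post:

```python
# Target portfolio weights in percent, as listed above.
portfolio = {
    "ARKQ": 27.0, "BOTZ": 9.0, "MSFT": 9.0, "AMZN": 9.0, "GOOGL": 8.0,
    "FB": 7.0, "TCEHY": 6.0, "BIDU": 6.0, "AAPL": 5.0, "IBM": 4.0,
    "TSLA": 0.0, "NVDA": 2.0, "INTC": 3.0, "CRM": 2.0, "TWLO": 1.5, "AYX": 1.5,
}

# Weights should account for the whole portfolio.
assert abs(sum(portfolio.values()) - 100.0) < 1e-9

# Approximate ARKQ internal weights from the post; other holdings omitted.
arkq_holdings = {"TSLA": 0.10, "GOOGL": 0.04}

def effective_weight(ticker):
    """Direct weight plus the slice held indirectly through ARKQ."""
    return portfolio.get(ticker, 0.0) + portfolio["ARKQ"] * arkq_holdings.get(ticker, 0.0)

print(f"Effective Tesla exposure: {effective_weight('TSLA'):.1f}%")     # ~2.7%
print(f"Effective Alphabet exposure: {effective_weight('GOOGL'):.1f}%")  # ~9.1%
```

So even with a 0% direct Tesla position, the ARKQ stake leaves you with a few percent of effective Tesla exposure, which is the kind of double counting this check catches.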

Several people think that land will remain valuable in many scenarios, but I don't see a good way to operationalize a bet on land. Some people have suggested buying options, since it is easier to get leverage and the upside is higher. Getting the timing right seems tricky to me, but if you think you can time things, buy options.
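To make the leverage point concrete: a call option held to expiry pays max(S - K, 0) - p per share, where S is the spot price at expiry, K the strike, and p the premium paid. A toy sketch with entirely made-up numbers (ignoring fees, taxes, and early exercise):

```python
def long_call_pnl(spot_at_expiry, strike, premium, contracts=1, multiplier=100):
    """P&L of buying call options and holding to expiry.

    Payoff per share is max(spot - strike, 0) minus the premium paid;
    each US equity contract typically covers 100 shares (the multiplier).
    """
    intrinsic = max(spot_at_expiry - strike, 0.0)
    return (intrinsic - premium) * multiplier * contracts

# Hypothetical numbers: pay a $10/share premium for a $100-strike call.
print(long_call_pnl(150, 100, 10))  # 4000.0 -> 4x gain on $1,000 at risk
print(long_call_pnl(90, 100, 10))   # -1000.0 -> the whole premium is lost
```

This is the asymmetry people are pointing at: the downside is capped at the premium, while the upside scales with how crazy things get, but an option that expires before the move happens is worth nothing, which is why timing matters.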

Physical and Emotional Preparation - You don't want your body or mind to fail you during the critical period. Invest in keeping yourself as healthy as possible. If you have issues with repetitive strain injury (RSI), work on fixing them now so you can give future developments your full attention.

You can also invest in mental preparation. Meditation is high value for many people. A systematic study of rationality techniques could be useful. But keep in mind that it is easy to waste time if you approach training casually. Track your results and have a system!

In general, you want to make these investments now while you still have time. Keep in mind these investments may conflict with attempts to increase your monetary capital. I would prioritize keeping yourself healthy. Make sure you are getting good returns on more speculative investments (and remember many self-improvement plans fail).

Political Organizing and Influence - Technological progress does not intrinsically help people. Current technologies can be used for good ends, but they can also be used to control people on a huge scale. One can interpret the rise of humanity as singularity 1.0: by the standards of previous eras, change accelerated a huge amount. 'Singularity 1.0' did not go so well for the animals in factory farms. Even if we align AI, we need to make the right choices, or singularity 2.0 might not go so well for most inhabitants of the Earth.

In a slow takeoff, human governments are likely to be huge players. As Milton Friedman said, "Only a crisis - actual or perceived - produces real change". If there is a crisis coming there may be large political changes coming soon. Influencing these changes might be of high value. Politics can be influenced from both the outside and the inside. Given the political situation, I find it unlikely an AI arms race can be averted for too long. But various sorts of intergovernmental cooperation might be possible and increasing the odds of these deals could be high value.

Capabilities Research - This is a sketchy and rather pessimistic idea. But imagine that GPT-3 has already triggered an arms race or at least that GPT-4 will. In this case, it might make sense to help a relatively values-aligned organization win (such as OpenAI as opposed to the CCP). If you are, or could be, very talented at deep learning you might have to grapple with this option.

What ideas do other people have for dealing with short timelines?

Cross-posted from my blog: Short Timelines

13 comments

Comments sorted by top scores.

comment by ogrok · 2020-07-29T21:27:27.813Z · score: 11 (6 votes) · LW(p) · GW(p)

You have detailed perhaps the more productive side of the advice—what to do—and I have consequently thought of some quick bullets on what not to do.

  1. Getting into a career likely to be made obsolete is bad. This will either be most of them—in which case it is prudent to distill productive activities to a very small list not unlike the one you write above—or it will be a more workable subset. Either way, much of copywriting, rudimentary coding activities for basic websites, etc. are likely to disappear, and one should either become an expert in a very specific type of these things, or avoid them; there won't be much economic advantage for someone whose body of work is the corpus for GPT-4, etc.!

  2. Paralyzing pessimism or indifference. Even if we are racing toward a singularity, inaction is worse than action, and for mental health it is very important to stay active. Not everyone should take this as "go get a job in AI research" but it might be a good time to read over 80,000 Hours to make sure your direction in life is deliberate and robust enough to fulfill you into an uncertain future.

comment by dominicq · 2020-07-31T09:10:07.088Z · score: 7 (5 votes) · LW(p) · GW(p)

Some career ideas for non-math and non-finance people:

Pursue a more primitive lifestyle: live off the land and farm. You can make it escapist (trying to ignore what's going on in the world) or a strategic fortress (if everything crumbles, I will not starve in the city). Everyone will always need food, so for as long as there are humans, there will be need for those who grow it. Also a good option because you can dial the primitive part up or down: you can either be a secluded monk or a farmer feeding the region.

Pursue a trade or human contact job: no GPT will replace a nurse, a physical therapist, a plumber, an electrician. For as long as people need things, they will need someone to do these things for them.

comment by Donald Hobson (donald-hobson) · 2020-07-29T23:34:39.564Z · score: 7 (5 votes) · LW(p) · GW(p)
Get capital while you can - Money is broadly useful and can be quickly converted into other resources in a critical moment. At the very least money can be converted into time. Be frugal, you might need your resources soon.
Besides, the value of human capital might fall. If you have a lucrative position (ex: finance or tech) now is a good time to focus on making money. Investing in your human capital by going back to school is a bad idea.

I find this advice puzzling.

Firstly, I suspect that when things get sufficiently crazy, money will become increasingly worthless.

Secondly, the general paradigm of automation often involves replacing many low-skill workers with a few high-skill engineers. By the time an AI can do everything a highly skilled human AI expert can do, most human jobs are automated, and recursive self-improvement is picking up.

Suppose that an AI system can do work similar to a human AI expert's at a compute cost of $10,000 (i.e. less than an AI expert's salary, but not orders of magnitude less).

This is the least extreme tech you need to put an AI expert out of a good job, and even then, I suspect some people will want a human. At this stage, the improvement work on AI is being done by AI. If we haven't already developed some friendliness theory, we will have unfriendly AIs producing more powerful unfriendly AIs. Most of the alignment work that you can do on an AI that's as smart as you and twice as fast can be done on an AI far smarter than you, like post-Foom ones.

In this scenario, the AIs will probably be smart enough to consider the end they are working towards. If that is profit, we are pretty much doomed. If that is being nice, we won't really need money.

comment by Yosarian T (yosarian-t) · 2020-07-30T00:00:37.632Z · score: 5 (5 votes) · LW(p) · GW(p)

I think there's a wide range of scenarios where narrow AI makes certain companies more profitable, replaces a lot of jobs, and maybe changes society as much as the industrial revolution did, without tipping over into recursive self-improvement of that type. Or at least not right away.

comment by deluks917 · 2020-07-29T23:47:55.640Z · score: 2 (2 votes) · LW(p) · GW(p)

Most people, including most lesswrong readers, are not top AI experts. Nor will they be able to become one quickly.

comment by Donald Hobson (donald-hobson) · 2020-07-30T10:50:00.986Z · score: 4 (3 votes) · LW(p) · GW(p)

I would be surprised if we got a future where everything except top AI research is done by AI. What future are you thinking of where a few hundred AI researchers can earn a living and no one else can?

comment by Natália Mendonça · 2020-08-01T00:16:36.901Z · score: 2 (2 votes) · LW(p) · GW(p)
if things get crazy you want your capital to grow rapidly.

Why (if by "crazy" you mean "world output increasing rapidly")? Isn't investing to try to have much more money in case world output is very high somewhat like buying insurance to pay the cost of a taxi to the lottery office in case you win the lottery? Your net worth is positively correlated with world GDP, so worlds in which world GDP is higher are worlds in which you have more money, and thus worlds in which money has a lower marginal utility to you. People do tend to value being richer than others in addition to merely being rich, but perhaps not enough to generate the numbers you need to make those investments be the obviously best choice.

(h/t to Avraham Eisenberg for this point)

comment by gwern · 2020-08-01T00:27:51.668Z · score: 9 (6 votes) · LW(p) · GW(p)

In an automation scenario, is your net worth correlated with world GDP? (Was the net worth of horses correlated with world GDP growth during the Industrial Revolution? Or chimpanzees during all human history? In an em scenario, who do the gains flow to - is it humans who own no capital and who earn income through labor?)

comment by Natália Mendonça · 2020-08-01T06:01:07.183Z · score: 1 (1 votes) · LW(p) · GW(p)

Thanks for pointing this out; you’re right that your net worth wouldn’t necessarily be correlated with world GDP in many plausible scenarios of how takeoff could happen. I suppose the viability of things like taxation and redistribution of wealth by governments as well as trade involving humans during and after a takeoff could be the main determinants of whether the correlation between the two would be as strong as it is today or closer to zero. I wonder what I should expect the correlation to be.

ETA: After all, governments don’t redistribute human wealth to either horses or chimpanzees, and humans don’t engage in trade with them.

comment by Natália Mendonça · 2020-08-01T18:43:25.575Z · score: 1 (1 votes) · LW(p) · GW(p)

I would agree more with the original post if it said something like "you should take these precautions if you expect your wealth not to grow at least as quickly as a constant multiple of world economic output" rather than "you should take these precautions if you expect a fast AI takeoff."

comment by Alexei · 2020-07-30T02:58:22.702Z · score: 2 (1 votes) · LW(p) · GW(p)

But getting the timing [on buying options] right seems tricky to me.

Decide on how much to spend per month and just buy longest available options. (I think the payout for this gets progressively better the closer you think we are to singularity.)

comment by Mitchell_Porter · 2020-07-30T02:37:45.008Z · score: 1 (4 votes) · LW(p) · GW(p)

Support metaethical.ai

comment by stoat · 2020-08-01T02:59:29.910Z · score: 4 (4 votes) · LW(p) · GW(p)

I happened upon this website once previously and couldn't quickly come to an assessment of the project, before moving on. I assume from your comment you feel this is a worthwhile project? I'd be interested to hear your take on it.