How do AI timelines affect how you live your life?
post by Quadratic Reciprocity · 2022-07-11T13:54:12.961Z · LW · GW · 9 comments
This is a question post.
Contents
Answers: 44 Aditya · 41 Tomás B. · 30 Shos Tekofsky · 21 Quintin Pope · 19 Lucius Bushnaq · 17 sapphire · 17 johnlawrenceaspden · 14 Dagon · 12 adamzerner · 7 MSRayne · 7 Sable · 5 Randomized, Controlled · 5 Alexander Gietelink Oldenziel · 4 Jonas Hallgren · 3 Netcentrica · 3 danlucraft · 2 PipFoweraker · 1 Flaglandbase
9 comments
This question is about personal decision-making rather than, for example, deciding to work on AI safety for altruistic reasons. From a purely selfish perspective, it seems pretty likely that if I expect transformative AI to arrive in 20 years, I should live my life a bit differently than people who don't expect TAI in their lifetimes.
I'm curious whether your own beliefs about AI timelines have affected anything you do in your personal life - perhaps decisions related to saving money, personal relationships, health, etc.
Answers
I asked out my crushes. Worked out well for me.
I used to be really inhibited; now I have tried weed and alcohol, and am really enjoying the moment.
I used to be an early-retirement fanatic, which I half-jokingly called "effective egotism". I took enormous quality-of-life hits to maximize my savings rate, which was extremely high; I now spend my money more freely. I also took some time off during COVID and found that not working doesn't suit me (this is probably something I should have done before devoting my twenties to early retirement), so I'll probably remain a professional programmer until I'm obsolete or a paperclip.
I try to help with alignment where I can, purchasing ads for AI-risk podcasts and occasionally getting a charismatic, alignment-pilled person I know onto mildly popular podcasts to raise AI-risk awareness, but given my timelines I do think these efforts are slightly more pathetic than I did last year. I used to organize meetups with Altman every year, but my timelines dropping to 5-10 years has made me less enthusiastic about OpenAI's behavior, which is one of the reasons I stopped.
In terms of things that will happen before full AGI, I started writing short stories occasionally (the models starting to understand humor was quite an update) as I think human writers will be obsolete very soon, so now's the time to produce old-fashioned art while it still has a chance of being worth reading to some people.
I regret that I did not become an expert in AI and cannot contribute directly to alignment, but then again I'm not very high-g, so I doubt I would have succeeded even if I had tried my very hardest.
As an aside, if anyone reading this is a talented developer, https://www.conjecture.dev/ are hiring (edit: it seems they are no longer hiring). I've interacted enough with the people involved to know they are sincerely trying to tackle the problem while avoiding safety-washing and navel-gazing.
Even if you think the odds are low, "going out fighting" doesn't look like a miserable trek. It looks like joining a cool startup in London (getting a visa is pretty seamless), with very bright people, who share much of your worldview and love programming. It looks like being among "your people" working on very important problems for a historically astounding salary. If you're lucky enough to be able to join such a fight, consider it may be an enjoyable one!
If I had the brains/chops to contribute that's where I would want to be working right now.
↑ comment by Ruby · 2022-07-12T01:18:53.227Z · LW(p) · GW(p)
LessWrong is also still hiring.
↑ comment by AlphaAndOmega · 2022-07-11T20:30:10.374Z · LW(p) · GW(p)
I had hoped to be a writer too, someday, even if, given the odds, it would likely have been more for my own self-aggrandisement than actual financial gain. But right now, I think it would be a rather large waste of time to embark on writing a novel of any length, because I have more immediately satisfying ways of both passing the time, and certainly of making money.
When I feel mildly sad about that, I remind myself that I consume a great deal more media than I could ever produce, and since my livelihood isn't at stake, it's a net win for me to live in a world where GPT-N can produce great works of literature, especially given the potential to just ask it for bespoke works suited to my peculiar tastes.
Maybe in another life my trajectory could have resembled Scott Alexander's, although if I'm being realistic he's probably a better doctor and writer than I am or could be, haha. I still wish I had the chance to try without thinking it even less fruitful than it already was.
I've recently started looking at AIS and I'm trying to figure out how I would like to contribute to the field. My sole motivation is that all timelines see either my kids or grandkids dying from AGI. I want them to die of old age after having lived a decent life.
I get the sense that this motivation ranks as quite selfish, but it's a powerful one for me. If working on AIS is the single best thing I can do for their current and future wellbeing, then I'll do that.
↑ comment by Tomás B. (Bjartur Tómas) · 2022-07-11T16:35:23.239Z · LW(p) · GW(p)
>My sole motivation is that all timelines see either my kids or grandkids dying from AGI.
Would that all people were so selfish!
I don't think about it too much. It's not a very useful thing to spend mental energy on. Alignment either gets solved or we die, and there's little benefit to personally preparing for either possibility, beyond having a broad index fund to hedge against the unlikely outcome where (1) alignment is solved, (2) there's little to no UBI, (3) stock investments still provide returns, and (4) money is actually useful for me.
I'm an optimist myself (~94% odds of an okay outcome, IMO), but whether you're an optimist or a pessimist, short timelines have little immediate consequence beyond reducing the expected value of long-term preparations, other than those oriented towards solving alignment.
I worry less about retirement savings (I'm in my twenties), and long term financial investment in general. I worry somewhat less about getting my parents signed up for cryonics (they're in their late fifties).
I'm transferring from particle physics to alignment. This is motivated by altruistic reasons, but also by not wanting me and my social circle to die.
As of 2022 the fire alarm is ringing. I think there is a decent chance I will be dead in ten years. Whatever I want to do I need to do it soon. It has been very motivational:
-- Restored my relationship of ~10 years (we broke up in 2021 and got back together in 2022)
-- Actually learned to work on crypto apps. Wrote some cool code. Previously I just did crypto finance. I've always thought crypto was sweet and wanted to contribute but didn't push myself to start writing dapp code.
-- Started ramping up animal activism (animals matter in the future. If I only have so many choices left I want to choose love)
-- Been more generous in my donations
More prosaically, I've been investing in 'AI companies, broadly construed'.
Well, I think we're all dead soon, so no point in cryonics, retirement planning, etc. Live for today, sod around in the sunshine while you still can.
Not caring about long-term political issues is quite relaxing!
↑ comment by Tomás B. (Bjartur Tómas) · 2022-07-11T17:32:15.491Z · LW(p) · GW(p)
>Not caring about long-term political issues is quite relaxing!
Yeah. It just becomes grimly amusing.
↑ comment by AlphaAndOmega · 2022-07-11T20:36:35.345Z · LW(p) · GW(p)
Consider me hopelessly optimistic, but I do think that were we to actually align a superhuman AGI, your current financial condition probably wouldn't correlate much with what came after.
At any rate, were it an AGI under the control of a cabal of its creators, and not designed more prosocially, you'd likely need to be a billionaire or close to it to actually leverage that into a better deal than the typical peasant gets.
I'd hope they'd at least give us a UBI and great VR as panem et circenses while they're lording over the light cone, and to be fair, I would probably go for an existence in VR even if I were one of the lucky few.
In contrast, if it goes badly, we're all going to be dead, and if it goes slowly, you'll likely face a period of automation-induced unemployment, and I'd rather have enough money to invest and live off dividends.
In both the best- and worst-case scenarios, and even the median one, it doesn't matter, but I still think that on balance I'm better off making the bets that hinge on my needing the money than not, because I'd likely be doing the same kind of job either way. I can't sit on my ass and not work; my Indian parents would disown me if nothing else, haha.
Replies from: jmh, Vladimir_Nesov
↑ comment by jmh · 2022-07-12T03:02:35.069Z · LW(p) · GW(p)
I wonder if the investment view will fully hold. It seems to assume that "labor" incomes will be eliminated by AI robotics, but one might expect that the AI(s) will also have, or create, their own ability to replicate. In other words, capital markets could just as easily be at risk from the various paths TAI could take.
↑ comment by Vladimir_Nesov · 2022-07-11T22:48:06.004Z · LW(p) · GW(p)
>live off dividends
Why do people keep talking about dividends? Dividends either don't matter or are bad/inconvenient (in the way of weather). The price adjusts in arbitrage after the ex-dividend date (the cutoff that determines how much you get in dividends for the stock you hold), so you could just as easily have sold the equivalent amount of stock if there were no dividends, or re-invested the payment into the same stock to nullify its consequences. But the amount you get is forced outside your control, and you have to pay dividend tax.
There is no reason why you would want to convert stock to cash in a way related to how (or how much) dividends get paid, so it's purely an inconvenience. And the FIRE safe withdrawal rate is similarly in general unrelated to the dividend rate. Dividends are not relevant to anything.
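To make the arbitrage point concrete, here is a minimal numeric sketch (hypothetical prices and share counts; it ignores taxes, transaction costs, and fractional-share constraints):
```python
# Toy illustration of "a dividend is just a forced partial sale".
# Hypothetical numbers; ignores taxes, trading costs, and fractional-share issues.

shares = 100
price_before = 100.00        # price just before the ex-dividend date
dividend_per_share = 2.00

# Scenario A: the company pays a dividend; in arbitrage the price drops by roughly that amount.
price_after = price_before - dividend_per_share
cash_a = shares * dividend_per_share
stock_a = shares * price_after

# Scenario B: no dividend is paid; the holder sells just enough stock to raise the same cash.
shares_sold = cash_a / price_before
cash_b = shares_sold * price_before
stock_b = (shares - shares_sold) * price_before

print(f"With dividend:   cash={cash_a:.2f}, stock={stock_a:.2f}, total={cash_a + stock_a:.2f}")
print(f"Selling instead: cash={cash_b:.2f}, stock={stock_b:.2f}, total={cash_b + stock_b:.2f}")
# Both totals come out to 10000.00: the investor ends up with the same cash and wealth,
# except that the dividend's size and timing are outside their control (and may be taxed).
```
Under these assumptions the two routes are identical; the difference the comment points to is only that the dividend's size and timing are chosen by the company, not by you.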
Replies from: AlphaAndOmega, artifex
↑ comment by AlphaAndOmega · 2022-07-12T01:43:50.427Z · LW(p) · GW(p)
I wasn't aware of that, although I was using the word "dividends" in the sense of all potential returns from the initial capital I had invested, not the strict sense of stock dividends alone, and was implicitly trying to hint at the idea of the safe withdrawal rate.
I'm not astute enough to be more specific, but I'm using it in the sense that one can buy a house and then retire on the rental income, and while the rent and the price you bought it for are strongly correlated, that doesn't matter as long as you get the income you expect.
↑ comment by artifex · 2022-07-12T06:23:05.159Z · LW(p) · GW(p)
>There is no reason why you would want to convert stock to cash in a way related to how (or how much) dividends get paid, so it's purely an inconvenience. And the FIRE safe withdrawal rate is similarly in general unrelated to the dividend rate. Dividends are not relevant to anything.
No, because stock prices are more dependent than dividends on state variables that you don’t care about as a diversified long-term investor. See how smooth dividends are compared to stock prices: the dividends are approximately a straight line on log scale while the price is very volatile. Price declines often come with better expected returns going forward, so they’re not a valid reason to reduce your spending if the dividends you’re receiving aren’t changing.
If you’re just going to hold stocks to eat the dividends (and other cash payments) without ever selling them, how much do you care what happens to the price? The main risk you care about is economic risk causing real dividends to fall. Like if you buy bonds and eat the coupons: you don’t care what happens to the price, if it doesn’t indicate increased risk of default. Sure, interest rates go up and your bond prices go down. You don’t care. The coupons are the same—you receive the same money. Make it inflation-indexed and you receive the same purchasing power. The prices are volatile—it seems like these bonds are risky, right? But you receive the same fixed purchasing power no matter what happens—so, no, they aren’t risky, not in the way you care about.
There are many reasons you probably don’t want to just eat the dividends. By using appropriate rules of thumb and retirement planning you can create streams of cash payments that are better suited to your goals since choosing how much to withdraw gives you so much more flexibility and you have more information (your life expectancy, for example) than the companies deciding how to smooth their streams of dividends. But there also are good reasons why many people took dividends from large companies in the past and today use funds designed for high dividend yield, retirement income, and so on.
↑ comment by Josh Jacobson (joshjacobson) · 2022-07-13T14:28:45.582Z · LW(p) · GW(p)
There’s a decent argument that Cryonics takes on greater importance now.
Replies from: johnlawrenceaspden
↑ comment by johnlawrenceaspden · 2022-07-17T21:32:19.240Z · LW(p) · GW(p)
do tell?
↑ comment by AlphaAndOmega · 2022-07-11T20:35:50.512Z · LW(p) · GW(p)
A lot depends on how much uncertainty you have over that timeline, and how you expect the uncertainty to change as time passes. If it's 10-200 years, with a peak at 20, you probably shouldn't do anything particularly weird in response, except keeping a close eye on 5-10 year strong indicators, at which point you flip your savings/investment mechanisms from long-term optimization to short/medium-term utility maximization.
↑ comment by Shiroe · 2022-07-12T04:09:41.547Z · LW(p) · GW(p)
Do you know of any recent arguments from individuals with timelines like you mention, which are long/broad and emphasize uncertainty? At least the pool of people making a convincing case for the >100 year timelines has been drying up, from what I can see. Even Paul Christiano expects "40% on singularity by 2040 [LW(p) · GW(p)]".
↑ comment by Yitz (yitz) · 2022-07-12T10:24:48.283Z · LW(p) · GW(p)
What in your opinion would a 5-10 year strong indicator look like?
Replies from: Dagon
↑ comment by Dagon · 2022-07-12T15:49:30.587Z · LW(p) · GW(p)
I don't have great indicators. We're still pretty far from human-level performance in any major complex industrial task (from cooking to warehouse work, automation exists to assist humans, but humans are executing the difficult/interesting part). Even driving is forever just a few years away.
I also don't know of anyone arguing for >40-year timelines - most of the skeptics aren't arguing at all; they're just getting on with their lives. If you take everyone who's NOT obsessing over AI safety as arguing for very long timelines, that's a massive weight of opinion to overcome.
Replies from: green_leaf
↑ comment by green_leaf · 2022-07-14T13:03:27.004Z · LW(p) · GW(p)
It might be 6 years.
Probably not as much as it should.
People remain surprisingly passive when faced with the prospect of death. Fear of public ridicule or losing one's livelihood is more likely to drive men to extremes and the breaking of their customary habits.
It does make me quite anxious. I had already been anxious about it and now I'm maybe 30% more anxious than I had been.
My overall plan is to work my 9-5, make money, retire early, and then try to do something useful in AIS. I thought about forgetting the "make money and retire early" part due to the shorter timelines, but for whatever reason that sort of thing would make me anxious. I'd feel better knowing that I have the nest egg, so that seems like a good enough reason to pursue it. Plus I finally have a job that I enjoy, and I've been thinking that there might be some wisdom in what people were talking about in this thread [LW · GW] regarding the sort of grounding that a normal job provides.
I've been a little bit more liberal with spending money on fun things. I'm planning a trip to Thailand right now that I've always wanted to take and never got around to before. I think the shorter timelines have influenced that somewhat, but not too significantly.
I don't think much about long-term health stuff, a la Peter Attia. I've always gone back and forth about being a health nut, but now I feel confident that being a health nut is not worth it. I've even pushed things in the direction of not caring about eating unhealthy food, but I'm finding that eating too much unhealthy stuff isn't worth it due to the short-term impact alone: how it makes me feel, physically and cognitively.
This one is probably unique to me, but I stopped caring about the risk of death due to things like Covid and cars. Previously I cared about that risk due to my having assigned [LW · GW] a very high expected value to life, but that expected value has since gone down. Although given how things have developed with new variants being more mild and such, I probably wouldn't be caring much about Covid anyway right now. Similar with the possibility of nuclear war.
Updates
I went to the dentist today. I have problems with teeth grinding that seem like they'll cause TMJ and enamel erosion down the road (and have already started to). I could get a mouth guard. But mouth guards are uncomfortable for me to wear. So I have to weigh that discomfort against the down the road problems. The short timelines probably move me from "get the mouth guard" to "don't get the mouth guard" here.
I had been spending time learning functional programming with Haskell and Clojure. I'd like to continue that -- it's fun and will be helpful to me in the long run -- but now I feel like I don't really have time for it and need to spend my time on higher priority things. Whereas previously I guess I figured that there's time to get to the higher priority things down the road. Maybe this push towards focusing on higher priority things isn't a bad thing. Maybe it is.
↑ comment by bfinn · 2022-07-14T11:27:21.092Z · LW(p) · GW(p)
On a detail (!) there are mouth guards (‘sleep clench inhibitors’) that you wear in your sleep to train you not to clench/grind your teeth both at night & in the daytime. I’ve used one; my dentist got one custom made to fit my teeth. You wear them nightly for a week initially, then just once every week or two. Unpleasant the first couple of nights, but you soon get used to them. Worked for me!
Replies from: adamzerner
↑ comment by Adam Zerner (adamzerner) · 2022-07-14T16:35:49.390Z · LW(p) · GW(p)
Interesting, thanks for pointing that out! I'll look into it.
I've always been very afraid of intimacy and extremely reclusive, even quasi-agoraphobic. For those who know the terms "NEET" and "hikikomori", they describe me pretty well. But the possibility that I will not live to be forty years old has been weighing on me more and more - in fact I've had the weird, unfounded fear that I would die young literally my entire life, which I mostly shoved in the back of my mind - and I am considering trying to find some way to escape the rut I've been in since my teenage years, leave my parents' house, and... have real life experiences, before the world changes beyond my ability to predict.
Have I actually done so yet?
...no.
↑ comment by Tomás B. (Bjartur Tómas) · 2023-05-07T23:55:10.606Z · LW(p) · GW(p)
This is a reminder that time has passed and you should consider actually trying.
Replies from: MSRayne
I decreased my contribution to my 401k, as that's money I can't touch for almost four decades, and increased my near-term investments.
A number of answers have already alluded to deprioritizing long-term savings, but you could go farther and borrow from the post-singularity future. Get a mortgage or other loan. This may work out well even in some worlds with friendly superintelligence, because maybe the AGI gives us luxury automated communism and your debt obligation is somehow dissolved.
Umm... this is not financial advice.
Byyyye!
It did wonders for my mental health. This might be a psychological quirk of mine but I find it so much more satisfying that we all go together with a bang rather than by slow decay and mediocrity. It satisfies my need for drama and heroism. I suppose that's part of why doomsday cults have been so popular through the ages.
I attended my high school reunion recently. So many people worrying about things that didn't matter, living without purpose, living in the past etc.
I used to worry about death by old age, slow decay, meaninglessness, etc. I do a lot less of that now. I don't think AI timelines were the primary catalyst of that - age & wisdom surely count too - but all in all it has made me more motivated, more sure of myself and my path, and less preoccupied with little things that don't matter.
I will post my favourite poem to describe how I feel:
Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.
Though wise men at their end know dark is right,
Because their words had forked no lightning they
Do not go gentle into that good night.
Good men, the last wave by, crying how bright
Their frail deeds might have danced in a green bay,
Rage, rage against the dying of the light.
Wild men who caught and sang the sun in flight,
And learn, too late, they grieved it on its way,
Do not go gentle into that good night.
Grave men, near death, who see with blinding sight
Blind eyes could blaze like meteors and be gay,
Rage, rage against the dying of the light.
And you, my father, there on the sad height,
Curse, bless, me now with your fierce tears, I pray.
Do not go gentle into that good night.
Rage, rage against the dying of the light.
I will not go gentle into that cold night.
I’ll be seventy later this year, so I don’t worry much about “the future” for myself or how I should live my life differently. I’ve got some grandkids though, and as far as my advice to them goes, I tell their mom that the trades will be safer than clerical or white-collar jobs because robotics will lag behind AI. Sure, you can teach an AI to do manual labor, like say brain surgery, but it’s not going to be making house calls. Creating a robotic plumber would be a massive investment and so not likely to happen. In my humble opinion.
Of course this assumes the world will still need plumbers in the near future. Personally I expect the world to still need plumbers and other tradespeople for the next twenty-plus years. Even if construction were cut back due to its 40% contribution to greenhouse gases, there would still be homes that need maintenance.
My son is a tradesperson as is my son-in-law so I have some knowledge of that lifestyle.
I also know a bit about AI despite my age, as I retired after a thirty-year IT career that included being a developer, a software development team manager, and the VP of Operations at a fifty-person print and online textbook publishing company. Since retiring three years ago I’ve been writing hard science fiction novellas and short stories about AI and Social Robots in the near future: about two thousand pages consisting of seven novellas and forty short stories so far. Hard science fiction requires a ton of research and thinking, and I try to write every day.
I finished my first novella in August of 2020, a few months before Brian Christian published his book, “The Alignment Problem” and the term and issue became popularized. My own belief about AI timelines is that the adoption rate is going to be the fastest ever. See https://hbr.org/2013/11/the-pace-of-technology-adoption-is-speeding-up for technology adoption rates. AI will teach itself how to get smarter and AGI will arrive in just a few years.
Will we solve the “The Alignment Problem” before then? No – because the science of human values will turn out to be perhaps the most challenging technical work AI researchers ever face. What are values? What are they made of? How do they “work” technically? Did you mean genetically inherited values or learned values? Are they genes? If so how would we account for the effect of epigenetics on values as suggested by twin studies? Or are they neurological structures like some form of memory? How do they communicate with each other and change? Is each value made up of sub-values? How would you construct a Bayesian Network to emulate a human values system? Is there a better way? And so on. The science of human values is in its infancy so we are not going to solve the alignment problem any time soon. Unless of course… AI solves it for us. And wouldn't that be an interesting scenario. Does this mean we’re all gonna be killed by AI? As any Master of Futures Studies program faculty member will tell you, it is impossible to predict the future.
Do I think these issues will affect my grandkids? Absolutely. Can I imagine their world? Not a chance. When I was twenty, the personal computer, the internet and cell phones didn’t exist. My future career didn’t exist. So I don’t have much more advice for my grandkids other than the robotics/trades angle.
What would I do differently if I were twenty-something now? Well, if I didn’t go into the trades, I’d plan on working for any kind of company involved in the environment in any way. Unlike in my novella series, where the World Governments Federation mandates population control, in the real world people will continue to have babies, and for the next thirty or forty years the global population will continue to grow. Then there are things like climate change, refugees, war, etc. The environment will struggle to deal with all that and need a lot of hands on deck.
Now you might be thinking there’s more to life than a career. I agree. I write not to get published but as an act of self-expression, something I consider the highest calling of life. If you know what you get the greatest personal gratification from, I recommend you find a way to make time for it.
I think about my young daughters' lives a lot. One says she wants to be an artist. Another a teacher.
Do those careers make any sense on a timeframe of the next 20 years?
What interests and careers do I encourage in them that will become useless at the slowest rate?
I think about this a lot - and then I mainly do nothing about it, and just encourage them to pursue whatever they like anyway.
↑ comment by Vladimir_Nesov · 2022-07-14T01:58:12.905Z · LW(p) · GW(p)
I think it's valuable to study rationality and AI alignment (with a touch of programming) for the purpose of preparing to take advantage of post-AGI personal growth opportunities, without destroying your own extrapolated volition. This is relevant in case we survive, which I think is not unlikely [LW(p) · GW(p)] (while the unlikely good outcome [LW(p) · GW(p)] is that we keep the cosmic endowment; the more likely alternative is being allowed to live on relatively tiny welfare, while the rest is taken away).
Dropping my plans of earning to give, which only really made sense before the recent flood of funding and the compression of timelines.
Increasing the amount of study I'm doing in Alignment and adjacent safety spaces. I have low confidence I'll be able to help in any meaningful fashion given my native abilities and timelines, but not trying seems both foolish and psychologically damaging.
Reconsidering my plans to have children - it's more likely I'll spend time and resources on children already existing (or planned) inside my circle of caring.
Guess I'm the only one with the exact opposite fear, expecting society to collapse back into barbarism.
As IQ rates continue to decline, the most invincible force in the universe is human stupidity. It has a kind of implacable brutality that conquers everything.
I expect a grim future as the civilized countries decline to Third World status, with global mass starvation.
↑ comment by PipFoweraker · 2022-07-14T04:59:53.588Z · LW(p) · GW(p)
This implies your timelines for any large impact from AI would span multiple future generations, is that correct?
Replies from: Flaglandbase
↑ comment by Flaglandbase · 2022-07-15T08:22:25.057Z · LW(p) · GW(p)
If you extrapolate the trends it implies no impact at all, as humanity continues to decline in every way like it currently is doing.
Replies from: mohammed-choudhary
↑ comment by Mohammed Choudhary (mohammed-choudhary) · 2022-11-08T00:41:09.564Z · LW(p) · GW(p)
And yet GDP per capita is 10 times higher than it was two centuries ago, on average across the world.
And IQ is over 30 points higher on average than a century ago.
Could it be that you are allowing personal feelings about how bad things are to muddle your reasoning about how bad the world actually is?
9 comments
Comments sorted by top scores.
comment by AlphaAndOmega · 2022-07-11T20:20:36.212Z · LW(p) · GW(p)
I'm a doctor, relatively freshly graduated and a citizen of India.
Back when I was entering med school, I was already intimately aware of AI X-risk from following LW and Scott, but at the time the timelines didn't appear so distressingly short - not like Metaculus predicting a mean time to human-level AGI of 2035, as it was the last time I checked.
I expected that to become a concern in the 2040s and 50s, and as such I was more concerned with automation induced unemployment, which I did (and still do) expect to be a serious concern for even highly skilled professionals by the 30s.
As such, I was happy at the time to have picked a profession that would be towards the end of the list for being automated away, or at least the last such one I had aptitude for (I don't think I'd make a good ML researcher, for example, likely the final field to be eaten alive by its own creations). A concrete example even within medicine would be avoiding imaging-based fields like radiology, and also practical ones like surgery, as ML vision and soft-body robotics leap ahead. In contrast, places where human contact is craved and held in high esteem (perhaps irrationally), like psychiatry, are safer bets, or at least the least bad choice. Regulatory inertia is my best, and likely only, friend, because assuming institutions similar to those of today (justified by the short horizon), it might be several years before an autonomous surgical robot is demonstrably superior to the median surgeon, it's legal for a hospital to use one, and the public cottons on to the fact that it's a superior product.
I had expected to have enough time to establish myself as a consultant, and to have saved enough money to insulate myself from the concerns of a world where UBI isn't actually rolled out, while emigrating to a First World country that could actually afford UBI, to become a citizen within the window of time where the host country is willing to naturalize me and thus accept a degree of obligation to keep me alive and fed. The latter is a serious concern in India, volatile as it already is, and while I might be well-off by local standards, unless you're a multimillionaire in USD you can't use investor backdoors to flee to countries like Australia and Singapore, and unless you're a billionaire you can't insulate yourself in the middle of a nation that is rapidly melting down as its only real advantage, cheap and cheerful labor, is completely devalued.
You either have the money (like the West) to buy the fruits of automation and then build the factories for it, or you have the factories (like China) which will be automated first and then can be taxed as needed. India, and much of South Asia and Africa, have neither.
Right now, it looks to me like the period of severe unemployment will be both soon and short, unlikely to last more than a few years before capable near-human AGIs reach parity and then superhuman status. I don't expect an outright FOOM of days or weeks, but a relatively rapid change on the order of years nonetheless.
That makes my existing savings likely sufficient for weathering the storm, and I seek to emigrate very soon. Ideally, I'll be a citizen of the country of my choice within 7 years, which is already pushing it, but then it'll be significantly easier for me to evacuate my family should it become necessary by giving them a place to move to, if they're willing and able to liquidate their assets in time.
But at the end of the day, my approach is aimed at the timeline (which I still consider less likely than not) of a delayed AGI rollout with a protracted period of widespread Humans Need Not Apply in place.
Why?
Because in the case of a rapid takeoff, I have no expectation of contributing meaningfully to Alignment, as I don't have the maths skills for it, and even my initial plans of donating have been obviated by the billions now pouring into EA and adjacent Alignment research, be it in the labs of the giants or more grassroots concerns like Eleuther AI, etc. I'm mostly helpless in that regard, but I still try to spread the word in rat-adjacent circles when I can, because I think convincing arguments are worth far more than my measly Third World salary. My competitive advantage is in spreading awareness and dispelling misconceptions among the people who have the money and talent to do something about it, and while that would be akin to teaching my grandma to suck eggs on LessWrong, there are still plenty of forums where I can call myself better informed than 99% of the otherwise smart and capable denizens, even if that's a low bar to best.
However, at the end of the day, I'm hedging against a world where it doesn't happen, because the arrival of AGI is either going to fix everything or kill us all, as far as I'm concerned. You can't hide, and if you run, you'll just die tired, as Martian colonies have an asteroid dropped on them, and whatever pathetic escape craft we make in the next 20 years get swatted before they reach the orbit of Saturn.
If things surprisingly go slower than expected, I hope to make enough money to FIRE and live off dividends, while also aggressively seeking every comparative advantage I can get, such as being an early-ish adopter of BCI tech (i.e. not going for the first Neuralink rollout but the one after, when the major bugs have been dealt with), so that I can at least survive the heightened competition with other humans.
I do wish I had more time, as I genuinely expect to more likely be dead by my 40s than not, but that's balanced out by the wonders that await should things go according to plan, and I don't think that, given the choice, I would have chosen to be alive at any other time in history. I fully intend to marry and have kids, even if I must come to terms with the likelihood that they won't make it past childhood. After all, if I had been killed by a falling turtle at the ripe old age of 5, I'd still rather have lived than not, and unless living standards are visibly deteriorating with no hope in sight, I think my child will have a life worth living, however short.
Also, I expect the end to be quick and largely painless. An unaligned AGI is unlikely to derive any value from torturing us, and would most likely dispatch us dispassionately and efficiently, probably before we can process what's actually happening. And even if that's not the case, and I have to witness the biosphere being rapidly dismantled for parts, or if things really go to hell and the other prospect is starving to death, then I trust that I have the skills and conviction to manufacture a cleaner end for myself and the ones I've failed.
Even if it was originally intended as a curse, "may you live in interesting times" is still a boon as far as I'm concerned.
TL;DR: Shortened planning windows, conservative financial decisions, reduction in personal volatility by leaving the regions of the planet that will be first to go FUBAR, not aiming for the kinds of specialization programs that will take greater than 10 years to complete, and overall conserving my energy for scenarios in which we don't all horribly die regardless of my best contributions.
Replies from: aditya-prasad, Evan R. Murphy
↑ comment by Aditya (aditya-prasad) · 2022-07-12T19:28:36.837Z · LW(p) · GW(p)
You should come for the Bangalore meet-up this Sunday. If you are near this part of India.
Replies from: AlphaAndOmega
↑ comment by AlphaAndOmega · 2022-07-13T04:09:24.942Z · LW(p) · GW(p)
I wasn't aware of the meet-up, but sadly it'll be rather far for me this time. Appreciate the heads up though! Hopefully I can make it another time.
↑ comment by Evan R. Murphy · 2022-07-14T08:51:10.629Z · LW(p) · GW(p)
Fascinating comment.
Minor question on this:
>I'm a doctor, relatively freshly graduated and a citizen of India [...] I had expected to have enough time to establish myself as a consultant
Are doctors often consultants where you live?
Replies from: mohammed-choudhary, AlphaAndOmega
↑ comment by Mohammed Choudhary (mohammed-choudhary) · 2022-11-08T00:36:55.651Z · LW(p) · GW(p)
Evan, are you confusing consultant as in "gives advice" with consultant (the non-American word for attending physician)?
Because if not, it would be strange for you to ask if doctors are often consultants in India.
Replies from: Evan R. Murphy
↑ comment by Evan R. Murphy · 2022-11-08T03:46:49.467Z · LW(p) · GW(p)
Yes I think I was. Thanks for the context :)
↑ comment by AlphaAndOmega · 2022-07-14T19:06:48.284Z · LW(p) · GW(p)
Becoming a consultant is definitely the end goal for most doctors who have any ambition, and is seen as the logical culmination of your career, unless for a lack of either interest or aptitude you're not able to complete a postgraduate degree after your MBBS.
To not do one is a sign of failure, and at least today not having an MD or MS is tantamount to having your employment opportunities heavily curtailed.
While I can't give actual figures, I expect that the majority (~70%) of doctors do become consultants eventually here, but I might be biased given the fact that my family is composed of established consultants, and thus the others I'm exposed to are either at my level or close enough, or senior ones I've encountered through my social circles.
comment by Mitchell_Porter · 2022-07-12T19:01:47.910Z · LW(p) · GW(p)
If the post-GPT-3 acceleration in AI had occurred a year or two earlier, at a time when I had no other responsibilities, I might have completely reoriented my life around the task of making the best possible contribution to AI safety. However, it came at a time when I had already devoted myself to another task, one that requires everything that I have. Yet paradoxically, pursuing that other task has yielded innumerable insights and challenges that are potentially relevant for AI safety, and which might never have come my way in the more straightforward scenario. So I keep fighting that other fight, and in spare moments I catch up on AI issues, and sometimes I tell myself that maybe this zigzag path is ultimately the best one after all.
Replies from: alexander-gietelink-oldenziel
↑ comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2022-07-14T12:37:28.771Z · LW(p) · GW(p)
If you are willing to share - what are you doing now?