AI risk hub in Singapore?
post by Daniel Kokotajlo (daniel-kokotajlo) · 2020-10-29T11:45:16.096Z
I tentatively guess that if Singapore were to become a thriving hub for AI risk reduction, this would reduce AI risk by 16%. Moreover I think making this happen is fairly tractable and extremely neglected. In this post I sketch my reasons. I'm interested to hear what the community thinks.
My experience (and what I've been told) is that everyone generally agrees that it would be good for AI risk awareness to be raised in Asia, but conventional wisdom is that it's the job of people like Brian Tse to do that and most other people would only make things worse by trying to help. I think this is mostly right; my only disagreement is that I think the rest of us should look harder for ways to help, and be willing to sacrifice more if need be. For example, I suggested to MIRI that they move to Singapore, not because they could or should try to influence the government or anything like that, but because their presence in Singapore would make it a more attractive place for AI risk reducers (e.g. Singaporean EAs), thereby helping to create an AI risk hub there (instead of the current situation, which is brain drain from Singapore to the Bay and London).
I put my calculation of expected value at the end; for now, here are some basic facts about Singapore and the major pathways by which I expect good things from an AI risk hub there.
Thanks to Jia Yuan Loke, Vaidehi Agarwalla, and others for conversations that led to this post.
Some basic background facts about Singapore:
1. Smart, educated, English-speaking population; a tech, trade, and financial hub for Asia.
2. Cost of living lower than London but higher than Toronto. Haven’t looked into this much, just googled and found this.
3. Is already an EA hub compared to most of Asia, but has very little EA presence compared to many places in the West.
4. The Singaporean government is unusually rational, in both the epistemic and instrumental senses. It is a one-party state run by the very smart son of Lee Kuan Yew, the man who said: “I am not following any prescription given to me by any theoretician on democracy or whatever. I work from first principles: what will get me there?”
5. The government of Singapore is widely respected throughout the world, including by China and the USA. Thousands of Chinese officials visit Singapore to learn how they do things. Singapore has served as a bridge between East and West on multiple occasions, laying the groundwork for the ping-pong diplomacy between China and the USA and hosting the talks between North Korea and the USA.
6. Drugs and male-male sex are illegal in Singapore, though restrictions on the latter are poorly enforced and may be loosening.
Path to impact #1: Picks low-hanging fruit for raising awareness and recruitment
For all we know, there are loads of people in Asia who would contribute to AI risk reduction if only they found out about AI risk or had an easier way to contribute.
There may be lots of people who MIRI or some other AI risk org would want to hire, who they have trouble hiring because of immigration restrictions in the US and/or UK. (And remember, these restrictions could increase in the future!) Having a hub in Singapore--or even just one or two organizations--would provide these people with a place to work.
Path to impact #2: Singapore govt takes AI risk seriously sooner
Importance: Singapore govt is unusually rational. Were they to become convinced that AI risk is real, they probably would do something useful to reduce it. Here are some things Singapore could do to reduce AI risk:
1. Merely publicly announcing that AI risk is real would make it much easier to convince other governments around the world to take it seriously, since Singapore’s govt is so universally respected.
2. Relatedly, there’s a history of other countries copying Singapore’s policies. It will be a lot easier for AI policy folks to convince the US govt (or the CCP) to implement policy X if something similar has already been implemented by Singapore.
3. Singapore could spearhead and organize international treaties and collaborations to reduce AI risk, given their general competence and position as a respected bridge between East and West.
4. Singapore could throw lots of money and talent at the problem.
5. Singapore could build AGI themselves; they already spend billions on AI annually. They could assimilate OpenAI in the process, potentially making for a better strategic situation (the Singapore govt would be better than Microsoft + the US govt, conditional on the Singapore govt being convinced of AI risk).
6. Singapore could probably think of things to do that I haven’t yet thought of.
Tractability and neglectedness: From talking to Jia and Vaidehi I tentatively conclude that this is both extremely tractable and extremely neglected. There are Singaporeans concerned about AI risk, but they are mostly outside Singapore at the moment. The government has already demonstrated openness to input from some of them. If Singapore became an AI risk hub, these Singaporeans would return home, become influential, and likely (IMO) make the Singaporean government take AI risk seriously several months (and maybe several years) earlier than it otherwise would.
The calculation: (My estimate of expected value)
Here are three possible futures:
AsianTAI: Transformative AI is built first / primarily in Asia. I think there's a 20% chance of this, and I have short timelines. If I had longer timelines the probability would go up.
AsianAwarenessNeeded: AsianTAI is false, but nevertheless Asian awareness of AI risk turns out to be necessary for making the future go well, perhaps because there is a distributed slow takeoff and there needs to be worldwide coordination on AI safety standards. I say there's a 40% chance of this.
None: None of the above; TAI is created probably in the USA and what Asia thinks isn't directly relevant. I say there's a 40% chance of this.
Note that both paths to impact are beneficial in all three scenarios. However, they are obviously more beneficial in AsianAwarenessNeeded and most beneficial in AsianTAI.
My shaky unconfident guess is that having a big AI risk hub in Asia would reduce AI risk by 30% conditional on AsianTAI, 20% conditional on AsianAwarenessNeeded, and 5% conditional on None. This works out to an unconditional 16%.
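For transparency, here is that arithmetic as a minimal sketch, using the scenario numbers above (the variable names are mine):

```python
# Expected risk reduction, using the scenario probabilities and
# the conditional risk reductions guessed above.
p_scenario = {"AsianTAI": 0.20, "AsianAwarenessNeeded": 0.40, "None": 0.40}
risk_reduction = {"AsianTAI": 0.30, "AsianAwarenessNeeded": 0.20, "None": 0.05}

expected = sum(p_scenario[s] * risk_reduction[s] for s in p_scenario)
print(f"{expected:.0%}")  # 16%
```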
(Did I mention I'm extremely unconfident in these guesses? I am. Were I to think more about this, I'd model it over time, with the intervention being "hub sooner" vs. "hub later or never" rather than hub vs. no hub. I'd also do it on a relative rather than absolute scale, e.g. "How much diminishing returns are there to reducing AI risk in the West, where loads of people are already doing it, vs. Asia?")
OK, so where in Asia should there be a hub? Not sure, but Singapore seems like a good option. It's English-speaking, which makes it easier for people from the West to contribute, and that may be important for getting the hub started. This is a major source of uncertainty for me; maybe there should be a hub in Asia but maybe Singapore isn't the best place for it. Maybe e.g. Hong Kong would be better.
Tractability: Making a hub is hard insofar as you have to compete with other hubs, and easy otherwise. (Network effects!) Currently the Bay is the biggest hub but London/Oxford/Cambridge is reasonably big too. This makes it hard. However, there are no other hubs, and in particular no hubs in Asia. This makes it easier; presumably there are quite a few people who would go to Singapore but wouldn't go to London or SF.
Neglectedness: Currently Singaporean EAs and AI risk people tend to leave Singapore and go to the West. The fact that I mentioned Brian Tse above instead of, say, an entire Asian think tank dedicated to reducing AI risk (there is none, as far as I know?) also says something about the neglectedness of this cause...
18 comments
comment by Scott Garrabrant · 2020-10-29T21:03:39.460Z
"I tentatively guess that if Singapore were to become a thriving hub for AI risk reduction, this would reduce AI risk by 16%."
The units on this claim seem bad. There is a big difference between 50% → 34% and 99% → 83%. I'm not sure if I would endorse this if I thought about it more, but maybe good units for a claim like this would be the number of bits of evidence the update is equivalent to. Going from 50% to 33% would be the same as going from 99% to 98% (1 bit).
↑ comment by DanielFilan · 2020-10-30T07:31:15.093Z
But for EU maximization you really do care about the raw difference in probabilities!
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-10-30T09:18:34.953Z
Hmmm. I'm confused about this. My tentative thoughts are as follows:
--For EUmax we care about the raw difference in probabilities.
--However, I don't have a good sense of that difference. I don't know whether base AI risk is 99% or 1%. All I know (or all I guess, even) is that a hub in Singapore reduces AI risk by 16%. So that would be ~16% in the former case and ~0.16% in the latter.
--However, this seems actually fine, because we are choosing between options like "Go to Singapore" and "Stay in the Bay" and the common currency we use to measure both options is how much AI risk reduction we can do.
↑ comment by Scott Garrabrant · 2020-10-30T07:40:00.662Z
Yeah, I had this in mind when I said I'm not sure if I would endorse this if I thought about it more. I am still uncertain.
↑ comment by DanielFilan · 2020-10-30T16:18:20.365Z
I guess the bit reduction estimate is probably more portable across different people's models, which is nice?
↑ comment by axioman (flodorner) · 2020-10-30T09:23:21.377Z
I interpreted this as a relative reduction of the probability (P_new=0.84*P_old) rather than an absolute decrease of the probability by 0.16. However, this indicates that the claim might be ambiguous which is problematic in another way.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-10-30T10:24:57.436Z
You interpreted it correctly.
↑ comment by DanielFilan · 2020-10-30T16:18:58.397Z
I apparently did not!
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-10-30T06:51:18.773Z
Yep, that seems right. How would one do the calculation in bits? (EDIT: Not sure anymore; DanielFilan reminded me why I chose this metric in the first place...)
↑ comment by Scott Garrabrant · 2020-10-30T07:49:16.092Z
The number of bits of evidence is $\log_2\left(\frac{p_{\text{new}}}{1-p_{\text{new}}}\right) - \log_2\left(\frac{p_{\text{old}}}{1-p_{\text{old}}}\right)$.
Basically, if you view the probability $p$ as the odds $\frac{p}{1-p}$, then a positive bit of evidence doubles $\frac{p}{1-p}$, while a negative bit of evidence doubles $\frac{1-p}{p}$.
You can imagine the ruler: 3% 6% 11% 20% 33% 50% 67% 80% 89% 94% 97%. Each is one more bit than the last. For probabilities near 0, it is roughly counting doublings, for probabilities near 1, it is counting doublings of the complement.
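[Editor's note: for concreteness, here is that formula as a short Python sketch; the function name is mine, and the examples are the two moves mentioned above.]

```python
import math

def bits_of_evidence(p_old: float, p_new: float) -> float:
    """Log-2 change in odds when a probability moves from p_old to p_new."""
    odds = lambda p: p / (1.0 - p)
    return math.log2(odds(p_new) / odds(p_old))

# Both moves are roughly -1 bit:
print(bits_of_evidence(0.50, 0.33))  # ~ -1.02
print(bits_of_evidence(0.99, 0.98))  # ~ -1.01
```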
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-10-30T09:20:39.608Z
Thanks! I hope to use this for raising my daughter, when teaching forecasting and Bayesian reasoning.
comment by DanielFilan · 2020-10-30T07:33:24.037Z
Interestingly, my impression is that it would be nice to move to Singapore just for the quality-of-life increase caused by the technocratic governance. But maybe I'm underweighting how valuable it is to be able to take psychedelics and have sex with your male lover as a man.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-10-30T09:14:54.377Z
I would be surprised if you really were underweighting that; probably that stuff is really important for people who are into it. Rather, I think it all comes down to how much difference there is between laws and enforcement. I would imagine that if it's actually dangerous to have sex with your male lover in Singapore in the privacy of your own home, that would be a deal-breaker for many people. But if it's only a problem if you do it in public, then probably it's not a big hardship.
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2020-10-30T11:01:36.136Z
Sounds very optimistic. I expect that in a country where male-male sex is illegal, gay and bisexual men are likely to suffer substantially from homophobia (whether institutional or cultural) even if the law is not enforced. There are also implications for transgender women, especially those who haven't had SRS (apparently you can change your legal gender in Singapore iff you have had SRS).
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-10-30T13:11:41.482Z
Yeah, fair enough. I'd ask people who live in Singapore (and ideally, who are also gay or transgender) what it's like.
↑ comment by artemium · 2020-10-30T15:38:58.748Z
Not sure about the anti-gay laws in Singapore, but from what I've gathered from recent trends, the LGBT situation is starting to improve there and in East Asia in general.
OTOH the anti-drug attitudes are still super strong (for example, you can still get the death penalty for dealing harder drugs), so I presume that's an even bigger deal-breaker, given the number of people who experiment with drugs in the broader rationalist community.
↑ comment by DanielFilan · 2020-10-30T16:20:39.594Z
TBC when I say I might be underrating it, I mean that I might be underrating how much it matters to all the cool people who I would also want to move there.
comment by artemium · 2020-10-30T15:45:09.836Z
"None: None of the above; TAI is created probably in the USA and what Asia thinks isn't directly relevant. I say there's a 40% chance of this."
I would say it might still be relevant in this case. For example, under some game-theoretic interpretations, China might conclude that a nuclear first strike is a rational move if the US creates the first TAI and China suspects it will give the US an unbeatable advantage. An Asian AI risk hub might successfully convince the Chinese leadership not to do that, if it has information that the US TAI is built in a way that would prevent its use solely in the interest of its country of origin.