Comments

Comment by UHMWPE-UwU (abukeki) on [Linkpost] Introducing Superalignment · 2023-07-06T00:44:24.600Z · LW · GW

Blow up in their faces?

Comment by UHMWPE-UwU (abukeki) on AI Safety in China: Part 2 · 2023-05-22T18:05:49.520Z · LW · GW

Why wouldn't their leadership be capable of personally evaluating arguments that this community has repeatedly demonstrated can be compressed into sub-10-minute nontechnical talks? And why assume whichever experts they're taking advice from would uniformly interpret it as "craziness", especially when surveys show most AI researchers in the West are now taking existential risk seriously? It's really not such a difficult or unintuitive concept to grasp that building a more intelligent species could go badly.

My take is that the lack of AI safety activity in China is due almost entirely to the language barrier; I don't see much reason they wouldn't be about as receptive to the fundamental arguments as a Western audience, once presented with them competently.

Honestly, I would probably be more concerned about convincing Western leaders, whose "being on board" this debate seems to take as an axiom.

Comment by UHMWPE-UwU (abukeki) on AI Safety in China: Part 2 · 2023-05-22T16:29:28.783Z · LW · GW

This post is quite strange and at odds with your first one. Your own point 5 contradicts your point 6. If they're so good at taking ideas seriously, why wouldn't they respond to coherent reasoning presented by a US president? Points 7 and 8 just read like hysterical Orientalist Twitter China Watcher nonsense, to be quite frank. There is absolutely nothing substantiating that China would recklessly pursue nothing but "superiority" in AI at all costs (up to and including national suicide) beyond simplistic narratives of the CCP being a cartoon evil force seeking world domination and such.

Instead of invoking tired tropes like the Century of Humiliation, I would point to the tech/economic restrictions recently levied by the US (which are, not inaccurately, broadly seen in China as an attempt to suppress its national development, with comments by Xi to that effect). Any negotiated slowdown in AI would have to be demonstrated to China not to be a component of that, which shouldn't be hard to do if the US is also verifiably halting its own progress and the AGI x-risk arguments are clearly communicated.

Comment by abukeki on [deleted post] 2023-05-05T00:43:27.818Z

There's a new forum for this that seeks to increase discussion & coordination, reddit.com/r/sufferingrisk.

Comment by UHMWPE-UwU (abukeki) on Geoff Hinton Quits Google · 2023-05-04T15:05:04.009Z · LW · GW

Not sure if he took him up on that (or even saw the tweet reply). Am just hoping we have someone proactively reaching out to him to coordinate, is all. He commands a lot of respect in this industry, as I'm sure most know.

Comment by UHMWPE-UwU (abukeki) on Geoff Hinton Quits Google · 2023-05-03T21:34:30.495Z · LW · GW

I think people in the LW/alignment community should really reach out to Hinton to coordinate messaging, now that he's suddenly become the highest-profile and most credible public voice on AI risk. Not sure who should be doing this specifically, but I hope someone's on it.

Comment by UHMWPE-UwU (abukeki) on Catching the Eye of Sauron · 2023-04-07T17:02:36.658Z · LW · GW

Yup. I commented here on how outreach pieces are generally too short on their own and should always lead into something else.

Comment by UHMWPE-UwU (abukeki) on [RETRACTED] It's time for EA leadership to pull the short-timelines fire alarm. · 2022-04-08T23:08:06.656Z · LW · GW

I'm pretty opposed to public outreach to build support for alignment, but the alternative goal of whipping up enough hysteria to destroy the field of AI/the AGI development groups killing us seems much more doable. The reason, from my lifelong experience observing public discourse on topics I have expert knowledge of (e.g. nuclear weapons, China), is that it seems completely impossible to implant the exact right ideas into the public mind, especially on a complex subject. Once you attract attention to a topic, no matter how much effort you put into presenting the proper arguments, the conversation and people's beliefs inevitably trend toward simple, meme-y, emotionally riveting ideas instead of accurate ones. (The popular discourse on climate change is another good illustration of this.)

But in this case, maybe even if people latch onto misguided fears about Terminator or whatever, as long as they have some sort of intense fear of AI, it can still produce the intended actions. To be clear, I'm still very unsure whether such a campaign is a good idea at this point; just a thought.

I think reaching out to governments is a more direct lever: civilians don't have the power to shut down AI themselves (unless mobs literally burn down all the AGI offices), so the goal with public messaging would be to convince them to pressure the leadership to ban it, right? Why not cut out the middleman and make the leaders see the dire danger directly?

Comment by UHMWPE-UwU (abukeki) on MIRI announces new "Death With Dignity" strategy · 2022-04-02T03:24:08.228Z · LW · GW

The downvotes on my comment reflect a threat we all need to be extremely mindful of: people who are so terrified of death that they'd rather flip the coin on condemning us all to hell than die. They'll only grow ever more desperate and willing to resort to more hideously reckless Hail Marys as we draw closer.

Comment by UHMWPE-UwU (abukeki) on MIRI announces new "Death With Dignity" strategy · 2022-04-02T01:50:36.169Z · LW · GW

Never even THINK ABOUT trying a Hail Mary if it also comes with an increased chance of s-risk. I'd much rather just die.

Comment by UHMWPE-UwU (abukeki) on Nuclear Preparedness Guide · 2022-03-13T02:17:53.200Z · LW · GW

Just reposting this good resource for people on which places in the US might potentially be hit. The one I linked is his version for a full countervalue attack with 2,000 warheads, but he has scenarios for counterforce/mixed etc. too.

I don't think anything similar exists for China yet, but in the meantime a good assumption is just cities ordered by descending population. So, possibly similar to the linked one but with fewer of the smaller cities hit for now, until China reaches a warhead count similar to Russia's later this decade.

ETA: An interesting thing I found on US target lists, from a NYT article. Relevant for people who've claimed the US doesn't target civilians in its nuclear policy.

Comment by UHMWPE-UwU (abukeki) on When should you relocate to mitigate the risk of dying in a nuclear war? · 2022-03-13T02:15:01.934Z · LW · GW

The Open Source RISOP by David Teter is a good resource for a non-exhaustive but still fairly comprehensive list of possible Russian targets in the US, btw.

Comment by UHMWPE-UwU (abukeki) on Nuclear Preparedness Guide · 2022-03-10T01:24:57.201Z · LW · GW

I don't know that that's true everywhere. Airbursts (the detonation mode used against cities) generally don't produce much fallout. It's probably good advice if you're downwind of hardened targets like the three clusters of Minuteman silos in the Midwest, though, which will produce a fuckton of fallout since they'd all be hit with surface detonations. But the Russians/Chinese may not hit them at all if they know all those silos have already been fired.

Comment by UHMWPE-UwU (abukeki) on Nuclear Preparedness Guide · 2022-03-09T23:23:55.024Z · LW · GW

One thing I realized is that it'll likely be near impossible to travel long distances by car in the post-attack aftermath, as everyone with a gun who runs out of gas would be setting up roadblocks to rob travellers of the gas in their cars plus other supplies. Interstates would thus probably quickly become unusable. So you probably shouldn't expect to reach some cross-country rendezvous after the fact if you didn't get there beforehand.

Also cross-posting my lengthier comment on this post from the EA Forum.

Comment by UHMWPE-UwU (abukeki) on Preserving and continuing alignment research through a severe global catastrophe · 2022-03-07T01:58:42.061Z · LW · GW

I wrote about this on the EA Forum a few days ago. I'm glad others are starting to think about this. I do think archiving all existing alignment work is very important, perhaps equally important as efforts to keep alive the people who represent the field's existing expertise and talent. It would be much better for them to be able to continue their work than for new people to attempt to pick up where they left off, especially since many things, like intuitions honed over time, may not be readily learnable.

I'm increasingly inclined to think that a massive "shock" in the near future (like a nuclear war or a severe pandemic) which effectively halts economic progress, perhaps for a few decades or more, then restarts it at a lower baseline, may be one of the few remaining scenarios in which we can reasonably expect to survive AGI, taking into account the grim present strategic situation as Eliezer outlined in the recent sequence. Such a world might especially favour alignment, since AI work (prosaic AI especially) seems to be much more capital-intensive than alignment work, so in a post-shock world with less capital available it would be at a disadvantage, or impossible to carry out at all. There are a few other reasons such a catastrophic shock may actually increase our collective odds of success re: AI risk, such as a greatly reduced population implying fewer AGI projects and race pressures, etc., morbid as it is.

Given this, the OP's project is doubly important.

Comment by UHMWPE-UwU (abukeki) on Higher Risk of Nuclear War · 2022-03-06T05:17:27.285Z · LW · GW

"but it's still the case that I don't expect to survive a full-scale nuclear exchange."

There's no reason whatsoever to expect you can't easily survive a full exchange with a few simple preparations, as long as you were outside the immediate urban blast radii. Nuclear winter is effectively a myth. I'm both astounded and dismayed by the amount of misinformation and misconception surrounding nuclear issues within the "rationalist" community.

Nukes aren't remotely inescapable Armageddon in the same way unaligned AGI is, and people really need to stop the silly resignation to death when talking about them. People in this community can easily all survive a nuclear war if they simply understand that they can and do what needs to be done.

Comment by UHMWPE-UwU (abukeki) on When should you relocate to mitigate the risk of dying in a nuclear war? · 2022-03-05T14:59:06.872Z · LW · GW

I said that about New Zealand (and probably about countries outside of NATO, Russia, and China in general). Canada may well keep law and order intact as well, if we aren't hit, or are hit by only a few warheads. I think commercial food availability might be restored in under a decade, especially since we have more agricultural production capacity than we need, but the stockpile is just to be on the safer side, especially since stockpiling non-perishable food really doesn't cost much. Being so close to the US and sharing a massive border, we may be more destabilized than other non-attacked countries by things like refugee flows etc.

Bottom line: if you're in the US, you need all those things I listed prepared in advance, for sure. If you're in some non-targeted country you probably don't, but it may still be nice to have them as a hedge against unexpected supply disruptions or upheaval.

Comment by UHMWPE-UwU (abukeki) on When should you relocate to mitigate the risk of dying in a nuclear war? · 2022-03-05T03:57:48.978Z · LW · GW

Remember all the things that have to be true for a "nuclear winter" to happen at all. I won't say it's a completely debunked myth, but to me the probability is clearly low enough that I mostly ignore it in my planning. Governments have moved on from it too, after the initial politically motivated Soviet hysteria surrounding it during the 80s.

Surviving a full-scale countervalue exchange even within the US or Canada isn't hard. The most crucial thing is to preemptively relocate so you aren't caught and killed in the initial detonations. Anywhere outside an immediate urban boundary is far enough for this. Just make sure you're not adjacent to any other nuclear targets; besides countervalue ones (cities), these include military and important economic/infrastructural targets, e.g. an ICBM silo/base, a major power plant, an important military-industrial factory, or a nuclear waste storage location (ISFSIs may be targeted with groundbursts to generate fallout/denied areas). As long as you can avoid dying in a blast, the other most important elements are:

  1. A good remote location, ideally with easily defendable characteristics, isolated from large population centres and the chaos and violence that being around other people can bring if society collapses.

  2. A large supply of non-perishable food that can be stored and will last you at least 10 years.

  3. A large and varied stock of medications, not just for personal use to treat the countless diseases, injuries, and issues you could develop in the years to come; medications will also be invaluable as a bartering chip in a post-attack world.

  4. Firearms and ammunition for self-defence; pretty obvious.

Moving somewhere like New Zealand may still be nice for continuity of life, because society and infrastructure there probably wouldn't collapse at all. I mean, why would you expect a simple cessation of international trade to cause a country to collapse internally? There would doubtless be a major economic downturn caused by the loss of the large countries overseas, but my guess is basic law and order would remain intact.

Comment by UHMWPE-UwU (abukeki) on Russia has Invaded Ukraine · 2022-02-28T01:43:50.196Z · LW · GW

Oh, and also: there's potential for this to lead to a coup/domestic upheaval/regime change in Russia, which would be an exceptionally volatile situation, kind of like having 6,000 loose nukes until whoever takes power consolidates control, including over the strategic forces, again. Factoring that in, the total should perhaps be over 5%. But again, there should be advance warning for those developments inside Russia.

Comment by UHMWPE-UwU (abukeki) on Russia has Invaded Ukraine · 2022-02-27T22:30:36.170Z · LW · GW

The 5% would be by the end of all this. Most of that probability comes from things developing in an unfortunate direction, as I said, which would mean events going against the current indications that neither the US nor NATO will intervene militarily. This could be either of them changing their minds, perhaps due to unexpectedly brutal Russian conduct during the war leading to a decision to impose a no-fly zone or something like that, or a cycle of retaliatory escalation due to unintended spillover of the war like I illustrated. Neither is too likely imo, and luckily both would come with advance warning if you're paying any attention. The risk of a sudden nuclear exchange that doesn't even give Americans enough warning to leave their cities is definitely lower, maybe 2% at most. But it's definitely present as well, due to the misjudgment risks etc. I mentioned.

Also, see the comments I just wrote on EA Forum.

Comment by UHMWPE-UwU (abukeki) on Russia has Invaded Ukraine · 2022-02-27T19:42:36.617Z · LW · GW

I'm not overly concerned by the news from this morning. In fact, I had expected them to raise nuclear force readiness prior to, or simultaneously with, commencing the invasion, not now; such a move up from normal peacetime readiness is expected going into a time of conflict/high tension. Going in, I put about a 5% chance on this escalating to a nuclear war, and it's not much different now, certainly not above 10%. (For context, my odds of escalation to a full countervalue exchange in a US intervention in a Taiwan reunification campaign would be about 75%.) Virtually all of that probability is split between unfavorable developments dragging in NATO and accident/miscalculation risk, which is elevated during tense times like this (something like the Russians misinterpreting the attack submarine that entered their territorial waters last week as a ballistic missile submarine sneaking up close to launch a first strike, or an early-warning radar fluke/misidentification being taken seriously when it would've been dismissed during peacetime, either of which could've caused them to launch on warning).

An unintentional nuclear exchange will have no preceding signs, but unfavorable developments will: for example, a NATO shootdown of a Russian plane, or Russian fire straying over the border and killing NATO troops, beginning an escalation spiral. If we start seeing such incidents being reported, I would tell all my LW friends to get the fuck out of whichever NATO cities they're living in immediately.

Comment by UHMWPE-UwU (abukeki) on Russia has Invaded Ukraine · 2022-02-27T04:47:54.472Z · LW · GW

The most interesting thing out of this is Russia's threat to pull out of New START in retaliation for US sanctions, as well as Biden's decision to cut off arms control talks. Removing all restraints on the US-Russia nuclear competition is dangerous enough already, but this will most likely kick off a renewed all-out three-way nuclear arms race, which is of course less strategically stable than the bilateral nuclear dynamic of the Cold War. China is already expanding its nuclear arsenal to parity, which, if New START were still in effect, would have meant 1,550 deployed warheads (incidentally, the first silo field seems to have finished construction today, ahead of schedule). The US had hoped to rope China into its bilateral arms control agreements with Russia; well, now there'd be nothing left to rope it into.

Comment by UHMWPE-UwU (abukeki) on AI Safety Needs Great Engineers · 2022-02-08T03:33:11.206Z · LW · GW

Redwood Research

Comment by UHMWPE-UwU (abukeki) on Competitive programming with AlphaCode · 2022-02-03T17:02:08.678Z · LW · GW

In what way does this news "favour the Paul-verse"?

Comment by UHMWPE-UwU (abukeki) on Some abstract, non-technical reasons to be non-maximally-pessimistic about AI alignment · 2021-12-13T16:56:27.915Z · LW · GW

MIRI gave a strategic explanation in their 2017 fundraiser post which I found very insightful; they called this the "acute risk period".

Comment by UHMWPE-UwU (abukeki) on I currently translate AGI-related texts to Russian. Is that useful? · 2021-11-27T22:21:59.595Z · LW · GW

Yes, but I think it might be much more useful for someone to do this for Chinese.

Comment by UHMWPE-UwU (abukeki) on First Strike and Second Strike · 2021-11-25T22:28:35.167Z · LW · GW

Those three new silo fields are the most visible, but I'd guess China is expanding the mobile arm of its land-based DF-41 force (TELs) by a similar amount; you just don't see that on satellite images. The infrastructure enabling launch on warning is also being implemented, which will make those silos much more survivable, though it also of course greatly increases the risk of accidental nuclear war. I'd argue that the silo fields are destabilizing, especially if China decides to deploy the majority of its land-based force that way, because even with a launch-on-warning posture there will be at least some use-it-or-lose-it pressure during a conflict, while the mobile and sea-based deterrents are stabilizing because they mostly lack that issue. Similarly, hypersonic weapons, including the much-discussed recent tests, are stabilizing because they shatter US delusions of any protection offered by its BMD system, now and in the future. They have few practical differences from regular ICBM warheads besides the ability to better penetrate defences; they're in fact slower.

The issue with China's current SSBN (the Type 094) is twofold: it's noisier, and the SLBM it carries has relatively short range, so the boats have to venture further into the Pacific to hit much of the US mainland; both factors make it more vulnerable to detection. The upcoming Type 096 solves this, being quieter and able to fire from a protected "bastion" in Chinese coastal waters.

I'm willing to bet the Pentagon's projection that China will have 700 warheads by 2027 and 1,000 by 2030 will be revised upward again next year, and some in the US military seem to agree with me. In light of this, I'd strongly suggest that those in the community working on nuclear risks (e.g. Rethink) shift their main focus from the US-Russia scenario to China, especially with how hard everyone in the West is dying to go to war with China these days, haha.

Comment by UHMWPE-UwU (abukeki) on Postmodern Warfare · 2021-10-25T23:46:01.060Z · LW · GW

Can you give some examples of who in the "rationalist-adjacent spheres" are discussing it?

Comment by abukeki on [deleted post] 2021-10-11T20:24:21.238Z

A bunch of links here and here.

Comment by UHMWPE-UwU (abukeki) on How to think about and deal with OpenAI · 2021-10-10T19:58:26.721Z · LW · GW

I'm aware. I'm just saying a new effort is still needed because, judging from all his recent public comments on the topic and what he's trying to do with Neuralink etc., his thoughts on alignment/AI risk are still clearly very misguided, so someone really needs to reach out and set him straight.

Comment by UHMWPE-UwU (abukeki) on How to think about and deal with OpenAI · 2021-10-09T14:21:33.910Z · LW · GW

Agreed that we should reach out to him, and that the community is connected enough to do so. If he's concerned about AI risk but either misguided or doing harm (see e.g. here/here and here), then someone should just... talk to him about it? The richest man in the world can do a lot either way. (Especially someone as addicted to launching things as he is; who knows what detrimental thing he might do next if we're not more proactive.)

I get the impression the folks at FLI are closest to him, so maybe they're the best ones to do that.