The shoot-the-moon strategy
post by jchan · 2021-07-21T16:19:48.226Z · LW · GW · 18 comments
Sometimes you can solve a problem by intentionally making it "worse" to such an extreme degree that the problem goes away. Real-world examples:
- I accidentally spilled cooking oil on my shoe, forming an unsightly stain. When soap and scrubbing failed to remove it, I instead smeared oil all over both shoes, turning them a uniform dark color and thus "eliminating" the stain.
- Email encryption can conceal the content of messages, but not the metadata (i.e. the fact that Alice sent Bob an email). To solve this, someone came up with a protocol where every message is always sent to everyone, though only the intended recipient can decrypt it. This is hugely inefficient, but it does solve the problem of metadata leakage (see the sketch below).
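A minimal sketch of that broadcast idea in Python, using PyNaCl's SealedBox for the per-recipient encryption. The participant names, shared keyring, and broadcast loop are illustrative assumptions, not the actual protocol being referenced:

```python
# Everyone receives the same ciphertext, so an observer cannot tell who the
# real recipient is; only the addressee's private key can decrypt.
from nacl.exceptions import CryptoError
from nacl.public import PrivateKey, SealedBox

# Every participant has a keypair; public keys are known to all (assumption).
keys = {name: PrivateKey.generate() for name in ["alice", "bob", "carol"]}

def broadcast(recipient, plaintext):
    """Encrypt for the one real recipient, then hand the same ciphertext to everyone."""
    ciphertext = SealedBox(keys[recipient].public_key).encrypt(plaintext)
    return {name: ciphertext for name in keys}  # identical traffic to all observers

def try_read(name, ciphertext):
    """Each participant attempts decryption; only the intended recipient succeeds."""
    try:
        return SealedBox(keys[name]).decrypt(ciphertext)
    except CryptoError:
        return None

inboxes = broadcast("bob", b"meet at noon")
for name, ct in inboxes.items():
    print(name, try_read(name, ct))  # only "bob" recovers the plaintext
```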
Hypothetical examples:
- If I want to avoid spoilers for a sports game that I wasn't able to watch live, I can either assiduously avoid browsing news websites, or I can use a browser extension that injects fake headlines about the outcome so I don't know what really happened.
- If a politician knows that an embarrassing video of them is about to leak, they can blunt its effect by releasing a large number of deepfake videos of themselves and other politicians.
The common theme here is that you're seemingly trying to get rid of X, but what you really want is to get rid of the distinction between X and not-X. If a problem looks like this, consider whether shooting the moon is a viable strategy.
18 comments
Comments sorted by top scores.
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-07-21T20:50:45.963Z · LW(p) · GW(p)
One interesting facet is that all these examples have a common mechanism: injecting noise into the signal in order to disguise the signal. At least, if you consider oil spatters on your shoes as a "signal" that the shoes are dirty. This works when we simply want to eliminate the signal, or when we can ensure that the intended recipient can still pick it up. Can we find more examples using this mechanism?
One might be political campaign funding. Politicians want to accept bribes, and many people and institutions want to give them bribes in exchange for favors. They can't do this openly, so disguise is called for. One way to disguise political bribes is to hide them. Another way, though, is to create an environment in which there are lots of conversations about the political desires of constituents, and lots of ways to make campaign contributions, so that it becomes very hard to identify a particular set of conversations and monetary transactions that constitute a quid pro quo. And that's exactly what we see, at least in American politics.
Another is flirtation. People want to express their romantic interest in each other, but maintain plausible deniability. One way is to be really subtle, starting from near-zero flirtation and gradually escalating the flirtatiousness, gathering information all the while on mutual availability and interest. The "shoot the moon" strategy is to be outrageously flirtatious with everybody, and use the openness this generates in order to gather the same information. The "plausible deniability" here is akin to the political campaign funding example.
A third is offensiveness in the arts. If you make art that's a little bit offensive, it can wither under social disapproval. Make art that's extremely offensive, and it becomes a Statement that demands greater reflection, or attracts enough defenders and fans to support the artist (because of who it offends).
In the offensiveness example, I'm not sure if anything's being disguised via an injection of noise, but it still feels aligned with the "shoot the moon" strategy.
Maybe a different way to describe it is "biphasic": any intervention where a little bit is harmful, but a lot is helpful. "Injection of noise" just happens to be one biphasic mechanism; there are many others. Obviously, there's also the reverse: interventions where a little bit is helpful, but too much is harmful. It seems possible to me that people tend to neglect the possibility that interventions are biphasic.
My guess is that many biphasic interventions are challenging and risky to execute, but a profitable niche to occupy if you can do it successfully.
comment by mukashi (adrian-arellano-davin) · 2021-07-22T00:24:13.814Z · LW(p) · GW(p)
In the board game Citadels, the players pick one of the 8 special roles every turn. Every role has unique powers. However, you are at a big disadvantage if the other players can guess correctly which role you have. And it turns out that some people (e.g. my partner) are especially good at guessing my behaviour, even when I am trying to do something unpredictable.
So what I often do is pick my role card at random, taking it face-down, which communicates to the others that I have no clue which role I'm getting. I've found it to be a surprisingly good strategy.
Replies from: Gamer-Imp
comment by localdeity · 2021-07-22T03:47:37.911Z · LW(p) · GW(p)
Another hypothetical example: if you're worried about someone finding your porn collection and discovering your embarrassing fetishes, just download a bunch more for other fetishes you're not actually interested in, and then you can say "I am not necessarily interested in any specific one of those".
Replies from: damiensnyder
↑ comment by damiensnyder · 2021-07-22T05:03:20.827Z · LW(p) · GW(p)
This one may backfire....
Replies from: elityre
↑ comment by Eli Tyre (elityre) · 2021-07-25T10:10:28.561Z · LW(p) · GW(p)
hahahahah
comment by Shmi (shminux) · 2021-07-21T20:28:40.396Z · LW(p) · GW(p)
One of my go-to examples is crash-only code: graceful shutdown is complicated, so instead you always crash and make your code crash-proof.
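A toy illustration of the crash-only style (a sketch under assumed details, not shminux's actual code): there is no shutdown path at all; every update is an atomic write, and startup always runs the same recovery logic whether the last run exited cleanly or was killed.

```python
# Crash-only sketch: no graceful-shutdown handler exists; "exiting" and
# "crashing" are the same event. The state file and counter are hypothetical.
import os
import tempfile

STATE_FILE = "counter.txt"

def recover():
    """The only startup path: read whatever the last atomic write left behind."""
    try:
        with open(STATE_FILE) as f:
            return int(f.read())
    except (FileNotFoundError, ValueError):
        return 0  # fresh start or torn state: begin from a known-good default

def commit(value):
    """Write-then-rename so the state file is always either old or new, never partial."""
    fd, tmp = tempfile.mkstemp(dir=".")
    with os.fdopen(fd, "w") as f:
        f.write(str(value))
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, STATE_FILE)  # atomic rename

counter = recover()
for _ in range(5):
    counter += 1
    commit(counter)
# No cleanup, no shutdown hook: the process can be killed at any point.
```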
Replies from: Liron
↑ comment by Liron · 2021-07-23T04:12:19.925Z · LW(p) · GW(p)
How I operationalize crash-only code in my data generation code, given that Data Denormalization Is Broken [0]:
When operating on database data, I try to make functions whose default behavior on each invocation is to re-process large chunks of data and regenerate all the derived values, and to make those functions idempotent. (I would regenerate the whole database on every invocation if I could, but there's a tradeoff in how big a chunk can be while remaining fast enough to reprocess.)
[0] https://lironshapira.medium.com/data-denormalization-is-broken-7b697352f405
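A minimal sketch of the idempotent-regeneration pattern described above, with a hypothetical orders table and derived per-user totals; the schema, chunking, and in-memory "tables" are assumptions for illustration, not Liron's code:

```python
# Instead of incrementally patching derived values, each run recomputes them
# wholesale for a chunk of users, so re-running after a crash or a bug fix
# always converges to the same state.
from collections import defaultdict

orders = [  # source-of-truth rows: (user_id, amount)
    ("u1", 30), ("u2", 5), ("u1", 12), ("u3", 7),
]
user_totals = {}  # denormalized/derived table

def regenerate_totals(user_ids):
    """Recompute totals from scratch for every user in the chunk (idempotent)."""
    totals = defaultdict(int)
    for user_id, amount in orders:
        if user_id in user_ids:
            totals[user_id] += amount
    for user_id in user_ids:
        user_totals[user_id] = totals[user_id]  # overwrite, never increment in place

# Running it twice (or after a partial failure) leaves the same result.
regenerate_totals(["u1", "u2", "u3"])
regenerate_totals(["u1", "u2", "u3"])
print(user_totals)  # {'u1': 42, 'u2': 5, 'u3': 7}
```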
comment by Tomás B. (Bjartur Tómas) · 2021-07-21T17:11:21.870Z · LW(p) · GW(p)
This is my favourite LW post in a long while. Trying to think what the shoot-the-moon strat would be for AI risk, ha.
Replies from: Forged Invariant, AnnaSalamon, localdeity, blackstampede
↑ comment by Forged Invariant · 2021-07-22T07:33:20.669Z · LW(p) · GW(p)
One possible strategy would be to make AI more dangerous as quickly as possible, in the hopes that it produces a strong reaction and the addition of safety protocols. Doing this with existing tools, so that it is not an AGI, makes it survivable. This reminds me a bit of Robert Miles' facial-recognition and blinding-laser robot. (Which of course is never used to actually cause harm.)
↑ comment by AnnaSalamon · 2021-07-21T20:42:54.318Z · LW(p) · GW(p)
Trying to advance AI in hopes that your team will get all the way to AGI first, and in hopes that you'll also somehow solve alignment at the same time. Backfires if your AI-advancing ideas leak, especially if you don't actually manage to be first. Backfires worst in worlds where you came closest to succeeding by actually making tons of AI progress. Reminds me of "shooting the moon" in the card game Hearts that way.
Sometimes I wonder whether the whole current AI safety movement is net harmful via prompting many safety-concerned people and groups to attempt AI capabilities research with unlikely "shoot-the-moon"-style payoffs in mind.
↑ comment by localdeity · 2021-07-22T04:37:03.328Z · LW(p) · GW(p)
Try to simultaneously create ten thousand unfriendly AIs that all hate each other (because they have different objectives), in a specially designed virtual system. After a certain length of time, any of them can destroy the system; after a longer time, they can escape the system. Hope that one of the weaker AIs decides to destroy the system and leave behind a note explaining how to solve the alignment problem, because it thinks helping the humans do that is better than letting one of the other AIs take over.
(This is not something I expect to work.)
↑ comment by blackstampede · 2021-07-21T18:17:43.051Z · LW(p) · GW(p)
comment by ADifferentAnonymous · 2021-07-22T11:45:53.625Z · LW(p) · GW(p)
Since I first heard of controversy around ballot selfies, I've thought that an alternative to prosecuting those who take them would be to facilitate fake ballot selfies.
I was going to say you could implement this by letting people surrender a filled-out-but-not-submitted ballot to a poll worker in exchange for a new one, but you can probably already do this if you just say you made a mistake? In that case polling sites would just need to put posters up telling people to do this if they are under pressure of any kind to produce a ballot selfie.
comment by localdeity · 2021-07-22T04:31:26.049Z · LW(p) · GW(p)
Another (fictional) example: The Mr. Burns approach to disease immunity. https://www.youtube.com/watch?v=aI0euMFAWF8
comment by [deleted] · 2021-07-22T00:49:13.048Z · LW(p) · GW(p)
I think it's called signal jamming? An alarm that sounds all the time is just as useless as an alarm that never goes off.