The importance of open-source cryptography for Singleton prevention

post by snarles · 2011-05-04T02:25:26.788Z · LW · GW · Legacy · 39 comments

I'm sure most of the readers of lesswrong and overcomingbias would consider a (edit: non-FAI) singleton scenario undesirable.  (In a singleton scenario, a single political power or individual rules over most of humanity.)

A singleton could occur if a group of people developed Artificial General Intelligence with a significant lead over their competitors.  The economic advantage from sole possession of AGI technology would give the controllers of the technology the opportunity to gain an economic or even a political monopoly on a relatively short timescale.

This particular risk, as Robin Hanson pointed out, is less plausible if the "race for AGI" involves many competitors, and no competitor can gain too large of a lead over others.  This "close race" scenario is more likely if there is an "open-source" attitude in the AGI community.  Even if private organizations attempt to maintain exclusive control of their own innovations, one might hope that hackers or internal leaks would release essential breakthroughs before the innovators could gain too much of a lead.

Then, supposing AGI is rapidly acquired by many different powers soon after its development, one can further hope that the existence of multiple AGI-equipped organizations with differing goals would prevent any one power from gaining a monopoly using AGI.

This post is concerned with what happens afterwards, when AGI technology is more or less publicly available.  In this situation, the long-term freedom of humanity is still not guaranteed, because disparities in access to computational power could still allow one power to gain a technological lead over the rest of humanity.  Technological leads in conventional warfare technologies are less likely, and perhaps less threatening, than leads stemming from breakthroughs in cryptography.

In this information-dependent post-utopia, any power which manages to take control of the computational structures of a society would gain incredible leverage.  Any military power which could augment its conventional forces with the ability to intercept all of its enemies' communications whilst protecting its own would enjoy an incredible tactical advantage.  In the post-AGI world, the key risk for a singleton is exclusive access to key-cracking technology.

Therefore, a long-term plan for avoiding a singleton includes measures to promote "open-source" sharing not only of AGI-relevant technologies, but also of cryptographic innovations.

Since any revolutions in cryptography are likely to come from mathematical breakthroughs, a true "open-source" policy for cryptography would include measures to make mathematical knowledge available on an unprecedented scale.  A first step to carrying out such a plan might include encoding of core mathematical results in an open-source database of formal proofs.
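
As a rough sketch of what an entry in such a database might look like (hypothetical Lean 4 code, not taken from any existing project), one could imagine statements and machine-checked proofs stored side by side:

```lean
-- Hypothetical entries in an open database of machine-checked mathematics.

-- A fact the proof checker accepts by direct computation:
theorem two_plus_two : 2 + 2 = 4 := rfl

-- A fact that needs a short proof by induction on n:
theorem zero_add_nat (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

Once results are stored in this checkable form, anyone can verify them and build on them without having to trust the original author.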

39 comments

Comments sorted by top scores.

comment by Manfred · 2011-05-04T03:03:54.824Z · LW(p) · GW(p)

Too many conjunctions in this prediction. For strategies, try things that have many possible avenues to work - humans typically underestimate how effective such strategies are.

comment by Paul Crowley (ciphergoth) · 2011-05-04T07:12:21.628Z · LW(p) · GW(p)

What sort of innovations in cryptography do you think are needed? I think we need better standards and better software, but that we can basically get along with the very straightforward collection of primitives that are used to build things like GPG today, can't we?
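
For concreteness, that "straightforward collection of primitives" composes roughly as in the sketch below (hypothetical Python, assuming the third-party pyca/cryptography package; real GPG adds signatures, compression, and its own packet format, and Fernet here merely stands in for the symmetric layer):

```python
# GPG-style hybrid encryption, illustrative only: a fresh symmetric key
# encrypts the message body, and the recipient's RSA public key wraps that
# symmetric key.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Sender: symmetric encryption of the body, asymmetric wrapping of the session key.
session_key = Fernet.generate_key()
body = Fernet(session_key).encrypt(b"meet at the usual place")
wrapped_key = recipient_key.public_key().encrypt(session_key, oaep)

# Recipient: unwrap the session key with the private key, then decrypt the body.
recovered_key = recipient_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(body) == b"meet at the usual place"
```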

Replies from: snarles
comment by snarles · 2011-05-04T23:32:08.751Z · LW(p) · GW(p)

I am not arguing that more innovations in cryptography are needed. I am stressing the importance of keeping cryptography research as open as possible for the sake of long-term political stability.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2011-05-05T09:16:44.593Z · LW(p) · GW(p)

Can you be any clearer? You're not arguing that we need the fruits of cryptography research, but that it's important for some other reason than its fruits? What is that other reason?

Replies from: snarles
comment by snarles · 2011-05-05T10:19:16.015Z · LW(p) · GW(p)

No, you're missing the point entirely. I'm arguing that cryptography is dangerous, and the one way to make it less dangerous is to make sure that it's not secret.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2011-05-05T10:39:29.039Z · LW(p) · GW(p)

So you're saying that it would be OK if there were no further fruits of crypto research, but that the danger is that there will be further fruits and they won't be public, and that poses a great danger? What sort of danger do you have in mind?

Just so you know, crypto is my specialization; AFAIK I'm the person with the most expertise in the subject here on Less Wrong. I think we're having an "illusion of transparency" problem here, because you obviously feel you're being clear, and my efforts to understand you feel to me like pulling teeth. In your response, please could you err on the side of over-explaining, of saying as much as possible?

Thanks!

Replies from: timtyler, snarles
comment by timtyler · 2011-05-05T21:00:03.790Z · LW(p) · GW(p)

What sort of danger do you have in mind?

The post says:

the key risk for a singleton is exclusive access to key-cracking technology.

So - apparently - better cryptanalysis.

Better cryptanalysis is probably not a terribly significant issue - machine intelligence would probably mostly just use robots and nanotechnology to compromise the end-points - thereby recovering the key material.

comment by snarles · 2011-05-06T14:59:26.572Z · LW(p) · GW(p)

I suppose I could have been more coherent in the original post.

My claims:

1) In the future, cryptography will be a military technology as significant as e.g. nuclear weapons.

2) However, cryptography differs from conventional weapons technology in that giving everyone access to it makes it less of a threat (whereas you would not want to give nukes to everyone).

3) To make cryptography less of a threat to political stability, there should be a concerted effort on the part of cryptography researchers and politicians to ensure that all cryptography research is open.

Replies from: jimrandomh, ciphergoth
comment by jimrandomh · 2011-05-06T17:16:05.811Z · LW(p) · GW(p)

This might make more sense if you zoomed in, and talked about specific capabilities rather than "cryptography" in general. The current state of crypto is that there are off-the-shelf cyphers that are believed to be unbreakable, and they can also be combined and given larger key sizes to make extra-unbreakable versions in case of mathematical advance. There are also several publicly accessible onion networks. What new capability do you imagine being developed, which would be bad for the world iff it wasn't open?
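
To make the "combined" option concrete, here is a minimal sketch (hypothetical Python, assuming the third-party pyca/cryptography package) of cascading two unrelated ciphers under independent keys, so that a mathematical break of either layer alone does not expose the plaintext; it illustrates the idea, not a vetted construction (no authentication, no key management):

```python
# Cascade encryption sketch: AES-256-CTR followed by ChaCha20, with
# independent random keys. Recovering the plaintext requires breaking
# (or stealing the keys for) both layers.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def cascade_encrypt(plaintext: bytes, keys: dict) -> bytes:
    aes = Cipher(algorithms.AES(keys["aes_key"]),
                 modes.CTR(keys["aes_nonce"])).encryptor()
    inner = aes.update(plaintext) + aes.finalize()
    chacha = Cipher(algorithms.ChaCha20(keys["chacha_key"], keys["chacha_nonce"]),
                    mode=None).encryptor()
    return chacha.update(inner) + chacha.finalize()

keys = {"aes_key": os.urandom(32), "aes_nonce": os.urandom(16),
        "chacha_key": os.urandom(32), "chacha_nonce": os.urandom(16)}
ciphertext = cascade_encrypt(b"top secret", keys)
```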

Replies from: JoshuaZ
comment by JoshuaZ · 2011-05-06T18:53:39.993Z · LW(p) · GW(p)

I don't know what Snarles is thinking of, but there are examples that one can provide. For example, if someone finds an efficient way of factoring integers, and keeps this secret, they can probably do a lot more damage than they can if their ability is publicly known. However, I don't see how in any substantial way that sort of worry connects with snarles' apparent concern about singletons.
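
To make the factoring example concrete (toy numbers, textbook RSA only), the private key falls out of the factorization with a few lines of arithmetic:

```python
# Textbook RSA with toy numbers: factoring n = p*q immediately yields the
# private exponent d, and with it the ability to decrypt (or forge signatures).
p, q, e = 61, 53, 17            # secret primes and a public exponent
n = p * q                       # public modulus
c = pow(65, e, n)               # anyone can encrypt the "message" 65 with (n, e)

phi = (p - 1) * (q - 1)         # computable only by whoever has the factors
d = pow(e, -1, phi)             # modular inverse (Python 3.8+)
assert pow(c, d, n) == 65       # ...and they can now read the message
```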

comment by Paul Crowley (ciphergoth) · 2011-05-06T15:42:07.821Z · LW(p) · GW(p)

Sorry, I still don't get it. At this point I'm going to leave this discussion here unless another Less Wrong participant thinks it's worth my spending more time on.

comment by jimrandomh · 2011-05-04T02:49:54.059Z · LW(p) · GW(p)

I'm sure most of the readers of lesswrong and overcomingbias would consider a singleton scenario undesirable. (In a singleton scenario, a single political power or individual rules over most of humanity.)

I very strongly disagree with this. I believe that most non-singleton scenarios are disastrous, and that a friendly AI singleton is the only outcome which is likely to be worthwhile.

Replies from: nazgulnarsil
comment by nazgulnarsil · 2011-05-05T20:55:57.873Z · LW(p) · GW(p)

The idea that multipolar power arrangements are inherently unstable equilibria might be an artifact of human fear/irrationality about the costs/benefits of violent contests for dominance.

comment by Perplexed · 2011-05-04T03:47:03.287Z · LW(p) · GW(p)

To be honest, I thought you were closer to the mark with your post last year promoting network security. And I thought that Eugine_Nier was right-on in responding to your critics.

If you want to point out an area in which high-quality open-source software is desirable, I would suggest software verification (i.e. proof-of-correctness) tools. These would be valuable both to create secure networks and systems (thus helping to eliminate the hardware overhang that might contribute to an unplanned uFAI) and also to help to build trustable FAIs.

As something of a quibble, why did you choose to call a world in which AGI technology is in widespread use an "information-dependent post-utopia"? What earlier stage constituted the utopia?

Replies from: snarles
comment by snarles · 2011-05-04T23:36:35.117Z · LW(p) · GW(p)

"Post-utopia" here is used in the same sense as "post-industrial age": the utopia begins as humanity reaps the benefits of AGI-accelerated technological development. However, this is a utopia only in the sense of being a unimaginably better economic state than the present; it can be far from ideal in terms of political realities.

comment by JoshuaZ · 2011-05-04T02:37:51.947Z · LW(p) · GW(p)

A singleton could occur if a group of people developed Artificial General Intelligence with a significant lead over their competitors. The economic advantage from sole possession of AGI technology would give the controllers of the technology the opportunity to gain an economic or even a political monopoly on a relatively short timescale.

I'd feel better about this if the "would" in the second sentence became a "could". It might turn out that AGIs don't do much initially, if the hardware requirements are very large, or if boosting their intelligence is very difficult, or if getting them to cooperate is not easy. Alternatively, they might go foom and then destroy everyone, leveling the economic playing field in the sense that all humans will be dead or transformed so much that they might as well be.

This particular risk, as Robin Hanson pointed out, is less plausible if the "race for AGI" involves many competitors, and no competitor can gain too large of a lead over others. This "close race" scenario is more likely if there is an "open-source" attitude in the AGI community.

If serious fooming is a risk then this makes things much worse. This will drastically increase the chances that any one group will activate their AGIs without adequate safety precautions (or any at all).

I don't follow your logic that crypto is somehow more important than advances in direct weapons technologies. Sure, crypto is important. But it isn't clear why it would be more important. There are historical examples where it has mattered a lot. There's no question that the Allies' cryptographic advantages in World War II mattered, but that's one of the most extreme historical examples, and in that case the consensus seems to be that they would likely have won in both theaters even without it. Similar remarks apply to other wars where one side had a cryptographic advantage (such as, say, the North in the US Civil War).

I'm also not sure what exactly you are calling for that is different from what we do now. There are open-source implementations of a lot of cryptographic protocols. The protocols that don't have open-source implementations are things like fully homomorphic encryption, where current protocols aren't efficient enough to be usable with computers of current capability.

A first step to carrying out such a plan might include encoding of core mathematical results in an open-source database of formal proofs.

Note that most proofs of protocols' correctness can be found easily online. There have been some attempts to make open-source databases of formal proofs in general, not just for encryption (Cameron Freer has done work in this regard). This is a good thing, but for the purposes of crypto, having such a database really won't change much.

Replies from: Mitchell_Porter, snarles
comment by Mitchell_Porter · 2011-05-04T11:43:53.493Z · LW(p) · GW(p)

holomorphic encryption

Followed the link and was disappointed to discover that there isn't some new encryption scheme based on complex analyticity. :-) Reminds me of the "fractal quantum Hall effect". In fact, maybe we could use that to realize "fully holomorphic encryption"...

Replies from: JoshuaZ
comment by JoshuaZ · 2011-05-04T13:39:28.540Z · LW(p) · GW(p)

Thanks. Typo fixed.

comment by snarles · 2011-05-04T23:51:17.930Z · LW(p) · GW(p)

I don't think a 'fast FOOM' is plausible; the existence of multiple competing AGI-equipped powers would serve to deter a 'slow FOOM'.

Even if cryptography is not as threatening as advances in direct weapons (e.g. you could make a case for weaponized nanobots), it is certainly a large source of potentially decisive military advances. Cyber attacks are faster than direct attacks and would be more difficult to defend against. Cyber attack technology (including cryptography) is harder to reverse-engineer, and its research and deployment involve no physical manufacturing, making its illicit development under a global weapons ban more difficult to detect.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-05-05T00:10:10.851Z · LW(p) · GW(p)

I don't think a 'fast FOOM' is plausible; the existence of multiple competing AGI-equipped powers would serve to deter a 'slow FOOM'.

This leads to a variety of questions:

First, regarding the fast fooming issue:

  1. How fast is "fast FOOM" in your framework?
  2. How unlikely does something have to be before you will label it as implausible?
  3. How likely do you think it is that P=NP?
  4. How likely do you think it is that BQP contains NP?
  5. How plausible is it to you that a strong, not foomed AI could make practical quantum computers?
  6. How likely do you consider fast fooming given P=NP or NP contained in BQP?

Note that for 1 and 6 to be consistent, the probability of 1 should be higher than whatever you gave for the probability in 6 times the probability in 3 (ETA: fixed), since 3-4-5 is but one pair of pathways for an AI to plausibly go foom.

the existence of multiple competing AGI-equipped powers would serve to deter a 'slow FOOM'.

This is not obvious. Moreover, what is to prevent the AGIs from working together in a way that makes humans irrelevant? If there's a paperclip maximizer and a stamp maximizer, but they can agree to cooperate (after all, there's very little overlap between the elements in stamps and the elements in metal paperclips), then humans are just as badly off as if only one of them were around. Multiple strong AIs that don't share human values mean we have even more intelligent competitors for resources in our approximate light cone. Increasing the number of competing AIs might make it less likely for humans to survive in any way that we'd recognize as something we want.

Even if cryptography is not as threatening as advances in direct weapons (e.g. you could make a case for weapons nanobots), it is certainly a large source of potentially decisive military advances.

Not really. Military organizations rarely need to use cutting-edge cryptography. Most interesting cryptographic protocols are things like public-key crypto, which are useful when one has a large number of distinct economic actors who can't be trusted and don't have secure communication channels. Armies have things like centralized command structures, which allow one to do things like distribute one-time pads or rely on signals agreed upon in advance, making most of these issues irrelevant. The situations where armies need cryptographic protocols are situations like World War 2, where one has many small groups that one needs to communicate securely with and doesn't have easy physical access to. In that sort of context, modern crypto can help. But large-scale ground wars and similar situations seem like an unlikely form of warfare.
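
For concreteness, the centralized alternative can be as simple as the following sketch (hypothetical Python; a real system still has to generate, distribute, and destroy its pads securely):

```python
# One-time pad sketch: with a pre-shared random pad at least as long as the
# message, encryption and decryption are the same XOR operation, and no
# public-key machinery is involved.
import os

def otp_xor(data: bytes, pad: bytes) -> bytes:
    assert len(pad) >= len(data), "pad must be at least as long as the message"
    return bytes(a ^ b for a, b in zip(data, pad))

pad = os.urandom(64)                      # distributed in advance, used once
ciphertext = otp_xor(b"attack at dawn", pad)
assert otp_xor(ciphertext, pad) == b"attack at dawn"
```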

Cyber attacks are faster than direct attacks and would be more difficult to defend against.

Hang on. Are we now talking about security in general? That's a much broader set of questions than just cryptography. I don't know if it is in general more difficult to defend against such attacks. Most of those attacks have an easy answer: keep systems offline. Attacks through the internet can cause economic damage, but it is difficult for them to cause military damage unless high-priority systems are connected to the internet, which is just stupid.

Cyber attack technology (including cryptography) is harder to reverse-engineer

Can you expand on this claim?

making its illicit development under a global weapons ban more difficult to detect.

Has anyone ever suggested a global ban on cryptography or anything similar? Why does that seem like a scenario worth worrying about?

Replies from: CuSithBell, snarles
comment by CuSithBell · 2011-05-05T01:15:52.939Z · LW(p) · GW(p)

1.How fast is "fast FOOM" in your framework?

[...]

6.How likely do you consider fast fooming given P=NP or NP contained in BQP?

Note that for 1 and 6 to be consistent, the probability of 1 should be higher than whatever you gave for 6, since 3-4-5 is but one pair of pathways for an AI to plausibly go foom.

(Emphasis added.) I think you've got that backwards? 1 is P(fast FOOM), 6 is P(fast FOOM | P=NP OR NP in BQP), and you're arguing that P=NP or NP in BQP would make fast FOOM more likely, so 6 should be higher. That, or 6 should be changed to ( (fast FOOM) AND (P=NP OR NP in BQP) ). Yeah?

Replies from: JoshuaZ
comment by JoshuaZ · 2011-05-05T01:49:29.559Z · LW(p) · GW(p)

The thought was coherent. The typing was wrong. The intended probability estimate was given by 3 and 6 together. That is, P(fast FOOM) >= P(fast FOOM | P=NP) * P(P=NP).
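
For concreteness, the constraint is just that a marginal probability can be no smaller than any one of its joint terms:

$$P(\text{fast FOOM}) \;\ge\; P(\text{fast FOOM} \wedge P{=}NP) \;=\; P(\text{fast FOOM} \mid P{=}NP)\, P(P{=}NP)$$

So if, purely for illustration, one assigned P(P=NP) = 0.1 and P(fast FOOM | P=NP) = 0.5, consistency would require P(fast FOOM) >= 0.05.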

Replies from: CuSithBell
comment by CuSithBell · 2011-05-05T01:51:31.042Z · LW(p) · GW(p)

Ah, cool. Thanks for the clarification.

comment by snarles · 2011-05-05T00:37:44.898Z · LW(p) · GW(p)

Fast FOOM is as plausible as P=NP, agreed.

comment by wedrifid · 2011-05-04T02:55:07.731Z · LW(p) · GW(p)

I'm sure most of the readers of lesswrong and overcomingbias would consider a singleton scenario undesirable. (In a singleton scenario, a single political power or individual rules over most of humanity.)

You are overwhelmingly mistaken. The vast majority prefer a singleton and think all the likely alternatives to be horrific. Robin Hanson is an exception - but his preferences are bizarre to the point of insanity.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-05-04T03:02:51.593Z · LW(p) · GW(p)

You are overwhelmingly mistaken. The vast majority prefer a singleton and think all the likely alternatives to be horrific.

There may be some mind-projection going on here. I don't for example have any strong preferences in this regard, although I think that a Friendly singleton is a better alternative than say paperclipping. But a singleton isn't obviously going to be very nice to humans. A singleton that is almost but not quite Friendly could make a world that is very bad from a human perspective.

Robin Hanson is an exception - but his preferences are bizarre to the point of insanity.

Can you expand on this? There are people here who strongly disagree with what Robin thinks is likely to occur, but I've seen little indication that people strongly disagree with Robin's stated preferences.

Replies from: wedrifid
comment by wedrifid · 2011-05-04T03:20:26.403Z · LW(p) · GW(p)

There may be some mind-projection going on here.

Perhaps. Also an implied "in proportion to degree of familiarity with lesswrong style thinking and premises".

But a singleton isn't obviously going to be very nice to humans. A singleton that is almost but not quite Friendly could make a world that is very bad from a human perspective.

All non-singleton possibilities being horrific is not to say that the majority of possible singletons do not constitute "humanity: FAIL" as well. Just that the only significant opportunities for a desirable future happen to involve a singleton.

I've seen little indication that people strongly disagree with Robin's stated preferences.

The assertion 'bizarre to the point of insanity' was not attributed to a majority of others. That was my personal declaration. That said I am far from the only one to suggest that the scenarios Robin embraces are instead credible threats to be avoided. I am not alone in preferring not to die and have humanity and all that it values obliterated by the selection pressures of a Malthusian catastrophe as a precursor to the cosmic commons being burned.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-05-04T03:26:09.958Z · LW(p) · GW(p)

I am not alone in preferring not to die and have humanity and all that it values obliterated by the selection pressures of a Malthusian catastrophe as a precursor to the cosmic commons being burned.

I don't think Robin particularly wants to die. Note that he's signed up for cryonics for example. Regarding the burning of the cosmic commons, it isn't clear to me that he is in favor of that, just that he considers it to be a likely result. Given his training as an economist, that shouldn't be that surprising.

All non-singleton possibilities being horrific is not to say that the majority of possible singletons do not constitute "humanity: FAIL" as well. Just that the only significant opportunities for a desirable future happen to involve a singleton.

Can you expand on this? I don't know if this is something that is a) agreed upon or b) sufficiently well-defined. If, for example, AGI turns out not to be able to go foom, and we all get functioning cryonics and clinical immortality, that seems like a pretty good outcome. I don't see how you get that non-singleton results must be horrific or even that most of them must be horrific. There may be definitional issues here in what constitutes horrific.

Replies from: wedrifid, wedrifid
comment by wedrifid · 2011-05-04T03:39:47.480Z · LW(p) · GW(p)

There may be definitional issues here in what constitutes horrific.

No, just the basic 'everybody dies and that which constitutes the human value system is obliterated without being met'.

But there are certainly issues regarding different premises which would prohibit useful discussion here without multiple-post level groundwork preparation.

comment by wedrifid · 2011-05-04T03:28:46.382Z · LW(p) · GW(p)

Regarding the burning of the cosmic commons, it isn't clear to me that he is in favor of that,

No, rather, it is the default mainline probable outcome of other scenarios that he does (explicitly) embrace. (Some of that embracing was, I will grant, made for the purpose of being deliberately provocative.)

comment by timtyler · 2011-05-04T19:53:42.065Z · LW(p) · GW(p)

A lot of serious cryptography is open-source anyway. Cryptographers and cryptanalysts have long understood the benefits of lots of eyes on the code. A lot of the closed-source efforts are snake-oil.

Of course there is a prominent exception: the NSA.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2011-05-05T09:15:22.472Z · LW(p) · GW(p)

The NSA do make some work public.

Replies from: timtyler
comment by timtyler · 2011-05-05T09:26:19.355Z · LW(p) · GW(p)

Sure. The benefits of public sharing are somewhat reduced if your secret community is sufficiently large and well-funded, though.

comment by wedrifid · 2011-05-04T03:03:22.511Z · LW(p) · GW(p)

In the post-AGI world, the key risk for a singleton is exclusive access to key-cracking technology.

Not an overwhelming horde of ridiculously high-tech robotic killing machines capable of enforcing any will, and a network of laser defenses ready to intercept the primitive nuclear devices that are the only thing their competition could hope to damage them with?

Or a swarm of nanobots? Maybe an engineered super virus?

If you have a friendly (to you) AGI and they don't then you win. That's how it works.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-05-04T03:13:42.154Z · LW(p) · GW(p)

If you have a friendly (to you) AGI and they don't then you win. That's how it works.

Implicit premise here that AGI will be as powerful as is often suggested here. Not everyone thinks that is likely. And there are some forms of AGI that are much less likely to be that useful without some time. The most obvious example would be if the first AGIs are uploads. Modifying uploads could be very difficult given the non-modular, very tangled nature of the human brain. In time, uploads would still likely be very helpful (especially if Moore's Law continues to hold so that uploads become cheap and fast).

Another possible problem is that the technology may just not be easily improvable. We do see diminishing marginal gains in many technologies today even when people are working very hard on them. It isn't clear that nanobots or narrowly tailored viruses are even possible (although I agree that they are certainly plausible). As to laser defenses - we've spent thirty years researching that, and it seems mainly to have given us a good understanding of why that's really, really tough. The upshot is that AGI under your control is not an automatic win button.

Replies from: timtyler, JohnH, wedrifid
comment by timtyler · 2011-05-05T21:05:01.626Z · LW(p) · GW(p)

there are some forms of AGI that are much less likely to be that useful without some time. The most obvious example would be if the first AGIs are uploads. Modifying uploads could be very difficult given the non-modular, very tangled nature of the human brain.

Uploads are terribly unlikely to beat engineered machine intelligence. Just think of the technology involved. I know some people argue this point, but essentially, their arguments look pretty baseless to me. Uploads coming first is a silly idea, IMO.

comment by JohnH · 2011-05-04T15:07:06.112Z · LW(p) · GW(p)

The US Navy is starting to use lasers as defensive weapons on some of their ships.

Lasers are also capable of shooting down missiles. A major problem with doing that was agreements not to weaponize space. However, now that there is no Cold War threatening nuclear war, there isn't the political will to implement those strategies.

comment by wedrifid · 2011-05-04T03:22:32.967Z · LW(p) · GW(p)

Implicit premise here that AGI will be as powerful as is often suggested here.

Yes.

The upshot is that AGI under your control is not an automatic win button.

No.

comment by endoself · 2011-05-05T03:20:33.989Z · LW(p) · GW(p)

A non-friendly singleton AI kills us anyway.

Also, it would be really easy to defect in your proposal by not sharing information. If just one group doesn't open-source everything, they could create a singleton.