Crypto-fed Computation
post by aaguirre · 2022-06-13T21:20:58.988Z · LW · GW · 7 comments
Overview
The idea of this post is to describe, discuss, and (if warranted) work out how to create a model of crypto-fed computation.[1] The basic idea is that high-powered GPU (or other ML-specialized) hardware could be equipped with on-chip hardware locks such that the computational cores require a steady stream of cryptographic keys in order to keep operating. In the absence of such continually supplied keys, the hardware would downgrade to a small or zero fraction of its nominal capability.
Deploying such hardware would allow three key things:
- Monitoring of how computation is being used: who/whatever is supplying the keys will know how many are being sent and to where.
- Control of computation: in a framework in which large levels of computation are regulated, this mechanism allows a setup in which the continued consent of an outside regulating agent is required in order to continue large computational projects.
- Off-switch: the default mode of this hardware system is "idle" (or if desired, "off"), so that in the unlikely but possible case that standard control of a computational system is lost or the system goes rogue, it could not continue without also co-opting the source of the cryptographic keys.
I'll argue that such a capability appears to be quite technically feasible, and that there may be plausible pathways to adopting it. However, there are plenty of open questions, and many details to be nailed down before attempting such an effort.
How would it work
Let's lay out some design criteria for such a system, aiming to provide at least the three services above even in situations where agents attempting to subvert the system are very capable.
- Strong cryptography: The cryptographic protocol used should be as strong and well-tested as possible, even against quantum computation.
- Hardware encoded: It must be impossible to subvert the need for the keys without very substantial modification of the chip hardware itself (tantamount to remanufacturing it.)
- Bi-directional: the computational cores must be able, and required, to "ask" for more keys, and this must be cryptographically secure (so that it cannot be spoofed to provide keys to a third party.)
- Efficient: there should be relatively low overhead, computational or otherwise, to the system.
Example schemas
Here are a few examples of ways this might work. As this is not my area of expertise, it is extremely likely that superior schemes could be invented or already exist. I'll denote by Controller the agent that seeks to monitor/limit the activity done by the computational cores (CCs).
Coin-fed
A very simple idea is to have a crypto wallet "belonging" to a set of CCs, i.e. the CCs have the private key to the wallet and can initiate transactions on it. The wallet contains "COMPUTE" tokens. In order to do some number N of computations, the CC must send M tokens (where M is connected to N at the hardware level) to some other (perhaps null) address, which is considered "burning" them. Once the wallet is empty, this can no longer be done, and computation stops. The external agent can then "feed" the computation simply by depositing coins in this wallet.
This is a pleasing scheme in that the interface layer is very simple, and well-established by cryptocurrencies; it could run on many existing blockchains. And COMPUTE could be an actual coin that is traded etc.
What's unclear to me is how reliable we can make the hardware's verification that the "burn" transaction has taken place, i.e. how to eliminate the possibility of intercepting the "send to null" message and spoofing the verification, to get computation for free. Perhaps good solutions to this exist.
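To make the accounting concrete, here is a minimal Python sketch of the coin-fed logic. Everything here is a hypothetical stand-in: the token/operation exchange rate, the class names, and the in-memory wallet, which in a real system would be an on-chain wallet whose burns the hardware verifies against the blockchain itself.

```python
# A minimal sketch of coin-fed accounting (all names and the exchange
# rate are hypothetical). A real implementation would live in chip
# hardware and verify burns against an actual blockchain, not an
# in-memory balance.

TOKENS_PER_OP = 1e-9  # assumed hardware-level exchange rate between tokens and ops


class ComputeWallet:
    """Stand-in for an on-chain wallet holding COMPUTE tokens."""

    def __init__(self, balance: float):
        self.balance = balance

    def burn(self, amount: float) -> bool:
        """Send tokens to a null address; True if the burn went through."""
        if amount > self.balance:
            return False
        self.balance -= amount
        return True


class ComputationalCore:
    def __init__(self, wallet: ComputeWallet):
        self.wallet = wallet
        self.op_budget = 0  # operations unlocked by previous burns

    def request_ops(self, n_ops: int) -> bool:
        """Burn M tokens (tied to n_ops at the 'hardware' level) to unlock n_ops."""
        m_tokens = n_ops * TOKENS_PER_OP
        if not self.wallet.burn(m_tokens):
            return False  # wallet empty: the core idles until fed again
        self.op_budget += n_ops
        return True


# The Controller "feeds" the core simply by depositing tokens in the wallet.
wallet = ComputeWallet(balance=10.0)
core = ComputationalCore(wallet)
assert core.request_ops(1_000_000)  # burns 0.001 COMPUTE
```

The hard part, as noted above, is making something like `request_ops` trustworthy when it runs in hardware against a real chain rather than a local balance.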
Continuous signed message exchange
A second method would have a hardware controller (HC) for each CC. Both the CC and the hardware controller would have a signature (private) key that is known only to them, is unreadable, and that never exists anywhere else. At initialization, the HC and CC exchange verification (public) keys.
Thereafter, the CC can send a signed message to the HC requesting permission to do N additional computations, and the HC can send permission or not. The HC side would have some software control panel that would provide an interface. It's likely that one HC could be paired with many CCs, which would be good for efficiency.
This scheme also relies on well-known and widely adopted technology, which is a big plus. One potential concern is that the communication technology on the CC feels like a nontrivial piece of software that would have to be provably secure against meddling (without extensive hardware changes) on the CC side.
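As a sketch of what the exchange could look like in software, here is a minimal Ed25519 version using the Python `cryptography` package. The message formats and the grant policy are invented for illustration; a real scheme would also need replay protection (e.g. counters or nonces in the messages), and the CC's private key would be unreadable hardware state.

```python
# A minimal sketch of the CC <-> HC exchange using Ed25519 signatures
# (via the `cryptography` package). Message formats and the grant policy
# are invented for illustration.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At initialization: each side generates a keypair and they exchange
# verification (public) keys.
cc_private = Ed25519PrivateKey.generate()
hc_private = Ed25519PrivateKey.generate()
cc_public = cc_private.public_key()  # held by the HC
hc_public = hc_private.public_key()  # held by the CC


def cc_request(n_ops: int) -> tuple[bytes, bytes]:
    """CC signs a request for permission to do n_ops more operations."""
    message = f"REQUEST:{n_ops}".encode()
    return message, cc_private.sign(message)


def hc_grant(message: bytes, signature: bytes) -> tuple[bytes, bytes] | None:
    """HC verifies the request came from its paired CC, then signs a grant."""
    try:
        cc_public.verify(signature, message)  # raises if spoofed
    except InvalidSignature:
        return None
    grant = b"GRANT:" + message
    return grant, hc_private.sign(grant)


msg, sig = cc_request(1_000_000)
granted = hc_grant(msg, sig)
assert granted is not None  # the CC would verify the grant with hc_public
```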
Many keys
Another scheme, which is rather "low level" (i.e. feels embeddable in pretty raw form in hardware), is as follows.
- Pick "ciphers" and keys:
- Pick a random long text string.
- Choose a random set of nonces/keys n_1, ..., n_k (random one-time-use numbers).
- Find the SHA-256 (or SHA3-512 or whatever) encodings h_1, ..., h_k of the string using (combined with) each of those nonces.
- The cipher is: provide a nonce n_i that, when used with SHA-256 encoding on the string, produces one of the h_i.
- Now hardware-encode the string and the set of h_i into each CC.
- Each h_i could be associated at the hardware level with a number of computational operations.
- In order to run O operations, the CC must be provided with a key (nonce) that matches an h_i corresponding to > O operations.
- Once a key is used it cannot be re-used. (This would probably need to be baked in as hardware erasure or something.)
This system is a bit less flexible than others (for example each CC would have a hard limit to how many computations it could ever do), but such simplicity could also be a strength. It also requires an additional "communication" layer for the CC to request additional keys from the Controller. This does not seem like a huge security issue, however, unless the keys are stolen from the Controller, since the keys are finite and single-use.
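Here is a minimal sketch of the many-keys scheme in Python, using SHA-256 from `hashlib`. The table sizes and the operations-per-key constant are arbitrary illustrations; in the real scheme the string, the hash set, and the erasure of used entries would be baked into hardware rather than held in a Python set.

```python
# A minimal sketch of the many-keys scheme using SHA-256 (hashlib).
# Table sizes and the ops-per-key constant are arbitrary illustrations.

import hashlib
import secrets

string = secrets.token_bytes(64)  # the random long text string

# Controller side: generate nonces n_i and the corresponding hashes h_i.
OPS_PER_KEY = 10**12  # operations unlocked by each h_i
nonces = [secrets.token_bytes(32) for _ in range(1000)]
hashes = {hashlib.sha256(string + n).digest() for n in nonces}

# CC side: `string` and `hashes` are hardware-encoded; nonces arrive one
# at a time from the Controller.
def redeem(nonce: bytes, requested_ops: int) -> bool:
    """Accept a nonce if it hashes into the stored set, then erase it."""
    h = hashlib.sha256(string + nonce).digest()
    if h not in hashes or requested_ops > OPS_PER_KEY:
        return False
    hashes.discard(h)  # single-use: the "hardware erasure" step
    return True

assert redeem(nonces[0], 10**9)      # the Controller feeds one key
assert not redeem(nonces[0], 10**9)  # a used key cannot be replayed
```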
Combinations
The above schemes can be combined. For example, the following system seems pretty strong and redundant, albeit somewhat complex.
- The CC initiates a burn transaction, sending COMPUTE to a null wallet.
- The CC then also initiates a request for the corresponding amount of COMPUTE keys to an HC linked to it, along with a hash of the burn transaction.
- The HC verifies the burn transaction on the blockchain, and checks the public hash against the one sent by the CC.
- The HC then sends a signed message, which includes the keys per the "many keys" scheme that unlock the requisite amount of computation.
- The HC may also be used to send additional COMPUTE to the CC's wallet.
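For illustration, here is a compressed end-to-end sketch of that combined flow. The blockchain and the key table are in-memory stand-ins, the signature checks (shown in the earlier sketch) are elided, and all names are hypothetical.

```python
# A compressed sketch of the combined scheme: burn on-chain, request keys
# with the burn hash, verify, release a many-keys nonce. All mocked.

import hashlib
import secrets

chain = set()  # stand-in for the public ledger of burn-transaction hashes
key_table = [secrets.token_bytes(32) for _ in range(100)]  # many-keys nonces


def cc_side(n_ops: int) -> bytes:
    """Steps 1-2: burn COMPUTE on-chain, then request keys with the burn hash."""
    burn_tx = f"burn {n_ops} COMPUTE {secrets.token_hex(8)}".encode()
    tx_hash = hashlib.sha256(burn_tx).hexdigest()
    chain.add(tx_hash)  # the burn transaction lands on the blockchain
    return f"KEYS:{n_ops}:{tx_hash}".encode()  # would be signed in the full scheme


def hc_side(request: bytes):
    """Steps 3-4: verify the burn on-chain, then release a many-keys nonce."""
    _, n_ops, tx_hash = request.decode().split(":")
    if tx_hash not in chain:
        return None  # no verified burn, no keys
    return key_table.pop()  # would be sent inside a signed HC message


key = hc_side(cc_side(10**9))
assert key is not None
```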
Applications
Here I describe in a bit more detail the envisaged applications of this hardware.
Monitoring/control/off-switch
A key question in terms of applications is whether the monitoring agent ("Monitor") is the same as the Controller. This is natural in the message-exchange and many-keys schemes. In these cases, the requests for keys translate directly into computation usage that can be monitored. (A possible weak point here is that the CC could "over-request" keys for a time, thus saving up "credits" to be used later. Mitigating this seems quite doable, however.)
If the Monitor and Controller are separate, something like the coin-fed scheme may be important. Monitoring in that scheme is simple, as the Monitor can simply watch the movements in and out of blockchain wallets.
In all cases, what is naturally monitored is computation usage. But of course other things could be required to be put into the "request" messages, as long as these are non-spoofable.
Control and the off-switch are also fairly obvious: once the tokens, or signed messages, or keys, stop being sent and run out, the hardware turns off or downcycles to a low rate. (The latter may be preferable so that processes can continue without disruption, just rather ineffectively – but this would depend upon the use case and risk assessment.)
The scheme lends itself equally well to the Monitor/Controller being part of the same organization (e.g. a security team monitoring a research team) or different organizations (e.g. an agency or hardware provider monitoring a company or lab.)
Other applications
Here are a few off-the-cuff ideas for other applications that may or may not have some market utility:
- The "reverse cloud": a CC manufacturer leases to a user the hardware at nominal cost, then the user pays for a particular amount of computation. Relative to a standard cloud computation modality, this would give the user more control over how everything is set up, but less overall control than in simply owning hardware outright. (It's unclear how appealing this would be to users as it's a bit of the worst of both worlds, but who knows.)
- Compute tokens: In a "coin-fed" scheme, if the exchange rate between FLOPs and COMPUTE tokens is fixed, this could create a market for computation, in which whoever can supply computation most cheaply can sell it for COMPUTE tokens. (Unclear to me whether this provides much utility, but again, who knows.)
- Crypto computational futures: Given that computation gets steadily cheaper, COMPUTE would be an inflationary token, worth less over time. Suppose, though, that tokens were "time-locked" so that they could not be used before some set time. Then the value of COMPUTE tokens would form a futures market predicting the price of computation (in FLOPs/$) at that time.
Obstacles and paths to adoption
Even if crypto-locked computational hardware could be engineered and built, there is no guarantee that it would actually be useful, i.e. either widely adopted or adopted in some critical areas.
Obstacles to adoption
- All else equal, having a piece of hardware that is more complex, likely at least a bit more expensive, and that (in principle) allows others to monitor or control it seems unappealing.
- It could take significant work and investment to develop and test this sort of hardware, and companies would need incentives to do so.
- There aren't many companies making state-of-the-art hardware, and persuading even one of them to incorporate such a scheme into one of their products could be a very big lift.
- Even if such hardware exists, other computational hardware without such crypto-locks will seemingly be available (modulo strong law/regulation forbidding it) and any "bad actors" who don't want external control or monitoring can simply use that.
Paths to adoption
Nonetheless there may be avenues to incentivize the development of such hardware, and at least scale it to the level where it is relatively commoditized and could be rolled out much more widely if needed or desired. (How to drive very widespread or universal adoption is, I think, a topic suitable for a separate study.) As some examples:
- It is possible that some products a chip maker would like to provide could make use of the technology, so it would just be a matter of persuading them to develop a system with this dual use. (For example, I've heard that some GPUs are sold in different versions where the hardware is identical but one type is unlocked to run at a higher rate.)
- A government, or a philanthropic consortium, could provide ML computation at a highly subsidized cost, but require that it be crypto-locked, and monitored by the sponsoring agency.
- Government could require monitoring of this sort by some oversight body as part of regulations on how chip subsidies are spent, or as a part of export controls deemed to be in the national interest, or as regulations on large enough computational systems.
- Smaller and safety-aligned hardware developers could build this capability into their hardware as a public good, providing a prototype and proof-of-concept that could help in the above three possibilities.
Some open questions
- What is an actually good scheme, vetted by people who have real expertise in this sort of system design?
- Exactly how hard/slow/expensive would it really be to add a scheme of this type to high-powered ML hardware?
- Is there some sort of technical showstopper that undermines the technical desiderata or the intended use cases?
- Is there a way to set this up so that its use is incentivized without a huge subsidy or government regulation?
- Is something like this (but perhaps less relevant to questions of AI safety) going to be developed anyway? (If so, this is a strong reason to do it well.)
- Are there other approaches to "compute safety" likely to pay higher dividends?
Help wanted
If this plan continues to look viable it is possible that FLI could invest non-negligible fiscal or other resources into getting it off the ground. But it's still embryonic. I'd love help on any of the following:
- Suggestions for better schemes or showstoppers for the technical idea.
- Pointers to prior art in closely similar ideas (especially if they have more technical depth or detail than this post.)
- People with expertise (in crypto, and in high-end hardware design) interested in doing a deeper dive on a much better and more in-depth V2 of this document.
- Thoughts on additional use cases or paths to adoption.
Acknowledgements
I have a vague recollection that someone, I think Connor Flexman, suggested to me a version of the "many keys" scheme.
[1] I will not apologize for the unconventional and archaic use of the noun "computation" rather than the verb "compute." But as a peace offering I've called the coins "COMPUTE" coins.
7 comments
Comments sorted by top scores.
comment by gwern · 2022-06-13T21:43:55.327Z · LW(p) · GW(p)
Pointers to prior art in closely similar ideas (especially if they have more technical depth or detail than this post.)
As it happens, this is exactly one of the proposed use-cases for "k-time programs" in cryptography: https://eprint.iacr.org/2022/658.pdf
Alternatively, we could release a sensitive and proprietary program (such as a well-trained ML model) and be guaranteed that the program can be used only a limited number of times, thus potentially preventing over-use, mission-creep, or reverse engineering. Such programs can also be viewed as a commitment to a potentially exponential number of values, with a guarantee that only few of these values are ever opened.
(I don't buy their polymer idea though.)
Replies from: aaguirre
comment by mako yass (MakoYass) · 2022-06-14T06:16:51.216Z · LW(p) · GW(p)
Given this as a foundation, I wonder if it'd be possible to make systems that report potentially dangerously high concentrations of compute, places where an abnormally large amount of hardware is running abnormally hot, in an abnormally densely connected network (where members are communicating with very low latency, suggesting that they're all in the same datacenter).
Could it be argued that potentially dangerous ML projects will usually have that characteristic, and that ordinary distributed computations (EG, multiplayer gaming) will not? If so, a system like this could expose unregistered ML projects without imposing any loss of privacy on ordinary users.
Replies from: aaguirre
↑ comment by aaguirre · 2022-06-15T15:45:10.880Z · LW(p) · GW(p)
I think this depends a lot on the use case. I envision for the most part this would be used in/on large known clusters of computation, as an independent check on computation usage and a failsafe. In that case it will be pretty easy to distinguish from other uses like gaming or cryptocurrency mining. If we're in the regime where we're worried about sneaky efforts to assemble lots of GPUs under the radar and do ML with them, then I'd expect there would be pattern analysis methods that could be used as you suggest, or the system could be set up to feed back more information than just computation usage.
comment by Jacky April (jacky-april) · 2022-06-24T17:51:47.126Z · LW(p) · GW(p)
nice
comment by trevor (TrevorWiesinger) · 2022-06-14T00:31:43.483Z · LW(p) · GW(p)
If someone tries to mint an AI safety coin, I will go explain to as many people as I can some of the details of how cryptocurrency is an obvious, obvious scam involving rich people minting worthless tokens and selling them to poor people, who are much more likely to fall for this sort of thing.
For example, anecdotal evidence of poor people from 2017 getting rich, even though the vast majority of the trading volume was money moving from poor people to rich people, since rich people knew exactly when to buy and sell. Poor people didn't know when to buy and sell, because they randomly oscillated between believing false arguments that cryptocurrency could replace fiat currency without overwhelming retaliation by the government; then realizing that if it looks like a scam then it probably is; then encountering carefully-selected anecdotal evidence of poor people from 2017 becoming rich even though they themselves didn't; and then oscillating back and forth from there. If you were spending cryptocurrency in 2017 and weren't a perpetrator yourself, then this almost certainly happened to you, almost exactly as I described.
All that needs to happen is zero AI safety coins get minted from this point on, and I will return to avoiding the topic unless someone else brings it up. If that's not what happens (i.e. if someone starts to mint an AI safety coin), I'll try to protect as many people as I can.
Cryptography and blockchain are fine, of course, and may be helpful for AI safety. Generating and selling "coins" to people who care about AI safety is not. There is a well-developed immune system for that here.
Replies from: aaguirre
↑ comment by aaguirre · 2022-06-14T03:38:44.850Z · LW(p) · GW(p)
The purpose of the COMPUTE token and blockchain here would be to provide a publicly verifiable ledger of the computation done by the computational cores. It would not be integral to the scheme but would be useful for separating the monitoring and control, as detailed in the post. I hope it is clear that a token as a tradeable asset is not at all important to the core idea.