Why didn't Agoric Computing become popular?

post by Wei Dai (Wei_Dai) · 2019-02-16T06:19:56.121Z · LW · GW · 13 comments

This is a question post.

Contents

  Answers
    12 mr-hire
    11 PeterMcCluskey
    8 Dagon
    2 ryan_b
    1 paul
None
13 comments

I remember being quite excited when I first read about Agoric Computing. From the authors' website:

Like all systems involving goals, resources, and actions, computation can be viewed in economic terms. This paper examines markets as a model for computation and proposes a framework--agoric systems--for applying the power of market mechanisms to the software domain. It then explores the consequences of this model and outlines initial market strategies.

Until today, when Robin Hanson's blog post reminded me, I had forgotten that one of the authors of Agoric Computing is Eric Drexler, who also authored Comprehensive AI Services as General Intelligence, which has stirred a lot of recent discussion in the AI safety community. (One reason for my excitement was that I was going through a market-maximalist phase, due to influences from Vernor Vinge's anarcho-capitalism, Tim May's crypto-anarchy, and a teacher who was a libertarian and a big fan of the Austrian school of economics.)

Here's a concrete way that Agoric Computing might work:

For concreteness, let us briefly consider one possible form of market-based system. In this system, machine resources (storage space, processor time, and so forth) have owners, and the owners charge other objects for use of these resources. Objects, in turn, pass these costs on to the objects they serve, or to an object representing the external user; they may add royalty charges, and thus earn a profit. The ultimate user thus pays for all the costs directly or indirectly incurred. If the ultimate user also owns the machine resources (and any objects charging royalties), then currency simply circulates inside the system, incurring computational overhead and (one hopes) providing information that helps coordinate computational activities.

When it later appeared that Agoric Computing wasn't going to take over the world, I tried to figure out why, and eventually settled upon the answer that markets often don't align incentives correctly for maximum computing efficiency. For example, consider an object whose purpose is to hold onto some valuable data in the form of a lookup table and perform lookup services. For efficiency you might have only one copy of this object in a system, but that makes it a monopolist, so if the object is profit-maximizing (e.g., running some algorithm that automatically adjusts prices so as to maximize profits) then it would end up charging an inefficiently high price. Objects that might use its services are incentivized to try to do without the data, or to maintain an internal cache of past lookups, even if that's bad for overall efficiency.
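A toy sketch of that incentive problem (the demand curve, marginal cost, and price grid are all invented for illustration):

```python
# Toy model: a single lookup object that maximizes profit charges more than its
# marginal cost, so some callers who value the lookup more than it costs to
# serve still go without it (or build redundant caches). All numbers invented.

MARGINAL_COST = 0.1                       # cost of serving one lookup
values = [0.2 * i for i in range(1, 51)]  # 50 callers valuing a lookup 0.2 .. 10.0

def profit(price):
    buyers = sum(1 for v in values if v >= price)
    return (price - MARGINAL_COST) * buyers

candidate_prices = [0.1 * i for i in range(1, 101)]
p_star = max(candidate_prices, key=profit)

served = sum(1 for v in values if v >= p_star)
priced_out = sum(1 for v in values if MARGINAL_COST < v < p_star)

print(f"profit-maximizing price: {p_star:.1f} (marginal cost {MARGINAL_COST})")
print(f"callers served: {served}, callers inefficiently priced out: {priced_out}")
```

In this toy run the monopolist settles on a price dozens of times its marginal cost, and half the callers who would be worth serving end up doing without the data or caching it themselves.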

Suppose this system somehow came into existence anyway. A programmer would likely notice that it would be better if the lookup table and its callers were merged into one economic agent, which would eliminate the inefficiencies described above. But then that merged agent would itself still be a monopolist (unless you inefficiently maintained multiple copies of it), so they'd want to merge it with its callers in turn, and so on.

My curiosity stopped at that point and I went on to other interests, but now I wonder if that is actually a correct understanding of why Agoric Computing didn't become popular. Does anyone have any insights to offer on this topic?

Answers

answer by Matt Goldenberg (mr-hire) · 2019-02-16T14:19:52.486Z · LW(p) · GW(p)

The limiting factor on something being sold as a utility is that it has to be evolved enough, and understood well enough, that the underlying architecture won't change (and thus leave all the consumers of that utility with broken products). We've now basically gotten there with storage, and computing time is next on the chopping block, as the next wave of competitive advantage comes from moving to serverless architectures.

Once serverless becomes the de facto standard, the next step will be to commoditize particular common functions (starting with obvious ones like user login, permission systems, etc.). Once these functions begin to be commoditized, you essentially have an agoric computing architecture for webapps. The limiting factor is simply the technological breakthroughs, evolution of practice, and understanding of customer needs that allowed first storage, then compute, and eventually common functions to become commoditized. Understanding S-curves and Wardley mapping is key here to understanding the trajectory.

answer by PeterMcCluskey · 2019-02-16T17:50:50.268Z · LW(p) · GW(p)

One obstacle has been security. To develop any software that exchanges services for money, you need to put substantially more thought into the security risks of that software, and you probably can't trust a large fraction of the existing base of standard software. Coauthor Mark S. Miller has devoted lots of effort to replacing existing operating systems and programming languages with secure alternatives, with very limited success.

One other explanation that I've wondered about involves conflicts of interest. Market interactions are valuable mainly when they generate cooperation among agents who have divergent goals. Most software development happens in environments where there's enough cooperation that adding market forces wouldn't provide much value via improved cooperation. I think that's true even within large companies. I'll guess that the benefits of the agoric approach only become interesting when large numbers of companies switch to using it, and there's little reward to being the first such company.

comment by Kaj_Sotala · 2019-02-17T10:40:10.530Z · LW(p) · GW(p)

It seems like market forces could even actively damage existing cooperation. While I'm not terribly familiar with the details, I've heard complaints of this happening at one university that I know of. There's an internal market where departments need to pay for using spaces within the university building. As a result, rooms that would otherwise be used will sit empty, because using them isn't judged to be worth the rent.

Possibly this is still worth it overall: the extra spare capacity means that more spaces are available when a department really does need one. But people do seem to complain about it anyway.

Replies from: Orborde, PeterMcCluskey
comment by Orborde · 2019-05-03T08:51:28.245Z · LW(p) · GW(p)
While I'm not terribly familiar with the details, I've heard complaints of this happening at one university that I know of. There's an internal market where departments need to pay for using spaces within the university building. As a result, rooms that would otherwise be used will sit empty, because using them isn't judged to be worth the rent.

This is confusing. Why doesn't the rent on the empty rooms fall until there are either no empty rooms or no buyers looking to use rooms? Any kind of auction mechanism (which is what I'd expect to see from something described as a "market") should exhibit the behavior I've described.
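A rough sketch of the adjustment Orborde expects, with an invented demand curve, starting rent, and price step standing in for the real details:

```python
# Sketch of the price adjustment Orborde describes: rent keeps falling while
# rooms sit empty and lower prices would still attract more bookings.
# The demand function, starting rent, and step size are all hypothetical.

ROOMS = 20                                 # rooms available in the building

def rooms_wanted(rent):
    """Hypothetical demand: departments book fewer rooms as rent rises."""
    return max(0, 30 - int(rent))

rent = 25.0
while rooms_wanted(rent) < ROOMS and rent > 0:
    rent -= 1.0                            # empty rooms: cut the price

print(f"market-clearing rent: {rent}, rooms booked: {min(ROOMS, rooms_wanted(rent))}")
# If the internal price is instead fixed above this level, rooms sit empty,
# which is the complaint about the university's internal market.
```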

comment by PeterMcCluskey · 2019-02-17T17:40:26.430Z · LW(p) · GW(p)

Those concerns would have slowed adoption of agoric computing, but they seem to apply to markets in general, so they don't seem useful in explaining why agoric computing is less popular than markets in other goods/services.

answer by Dagon · 2019-02-16T17:41:34.600Z · LW(p) · GW(p)

Note that market economies aren't pure in any other realm either. They work well only for some scales and processes, and only when there are functioning command or obligation frameworks adjoining the markets (government and cultural norms "above" the market, family and cultural norms "below" it, and non-market competition and cooperation at the subpersonal level). We actually have well-functioning markets for compute resources, just at a somewhat coarser granularity than Agoric Computing envisions, though it's getting finer: AWS sells compute for $0.00001667 per GB-second, and makes it easy to write functions that use that compute to calculate whether to use more or less of it in the future.
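For instance, a function can price its own execution at that posted rate and decide whether more compute is worth buying. A minimal sketch, where the "value of the result" is a made-up placeholder:

```python
# Sketch of the point above: at a posted price per GB-second (the Lambda-style
# rate quoted in the answer), a function can compute what its own execution
# costs and decide whether buying more compute is worthwhile.

PRICE_PER_GB_SECOND = 0.00001667

def invocation_cost(memory_gb, duration_s):
    return memory_gb * duration_s * PRICE_PER_GB_SECOND

def worth_scaling_up(memory_gb, duration_s, value_of_result):
    """Only buy more compute if the expected value exceeds its market price."""
    return value_of_result > invocation_cost(memory_gb, duration_s)

cost = invocation_cost(memory_gb=1.5, duration_s=30)
print(f"one 1.5 GB x 30 s invocation costs ${cost:.6f}")
print("scale up?", worth_scaling_up(1.5, 30, value_of_result=0.001))
```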

I suspect the root cause is that many of the decisions are outside the modeling of the price/purchase system, and the inefficiency of actually having the market infrastructure (ability to offer, counteroffer, accept, perform, and pay, across time and with negotiation of penalties for failure) outweighs the inefficiency of a command economy.

I also suspect that the knowledge problem (what do participants want, and how do you measure the level of those preferences) is much reduced when the software doesn't actually have any preferences of its own, only what the programmers/masters have specified.

Alternately, perhaps this is more integrated into current thinking than we realize, and we just didn't notice because "the market" is bigger than we thought, and automatically incorporated (and was overwhelmed by) the larger sphere of human market interactions. Finding and tuning cost functions for algorithms to minimize is a big deal. However, there's so much impact from reducing cost on a macro scale that reducing cost by making software calculations more efficient is lost in the noise.

answer by ryan_b · 2019-02-26T19:59:53.817Z · LW(p) · GW(p)

I don't think the features of the theoretical system were particularly relevant. I can see several reasons why this wouldn't take off, and no reasons why it would. For example:

Objects are assumed to communicate through message passing and to interact according to the rules of actor semantics [3,4], which can be enforced at either the language or operating system level. These rules formalize the notion of distinct, asynchronous, interacting entities, and hence are appropriate for describing participants in computational markets.

This part raises a few red flags. We were pretty terrible at asynchronous anything in 2001; the only successful example I know of is communication systems running Erlang, and Erlang is inefficient at math, so cost functions would have carried a lot of overhead. Further, in the meantime a lot of development effort went into exploring alternatives to the model he proposes, which we see now in things like practical Haskell and newer languages like Rust and Julia.

Further, we've gotten quite good at getting the value such a system proposes: we just write programs that manage the resources. For example, I do tech support for Hadoop, whose central concept is trading off storage against compute, and Google used DeepMind to manage energy usage in its datacenters. Cloud computing is basically agoric computing at the application level.

In order for Agoric computing to be popular, there would need to be clear benefits to a lot of stakeholders that exceed the costs of doing a complete re-write of everything. In a nutshell, it looks to me like Drexler was suggesting we re-write all software under a market paradigm, when we can get most of the same value by writing software under the current (or additional) paradigms and just adding a few programs which optimize efficiency and provide direct trade-offs.

comment by ryan_b · 2019-02-26T20:09:07.452Z · LW(p) · GW(p)

An alternative, more abstract way of thinking about the problem: it is hard to create a market where there aren't currently any transactions. Until quite recently, transactions were only located where the software was sold and where it was written.

I think the effort would have been more successful if the question had been not "how do we make software use market transactions?" but rather "how do we extend market transactions into how software works?", because then it would be clear that we need to approach it from one end or the other: to get software to use transactions, we would need to make either software-production transactions or software-consumption transactions more granular. The current trend is firmly on the latter side.

answer by paul · 2019-02-16T22:54:02.437Z · LW(p) · GW(p)

Agoric Computing seems like a new name for a very common mechanism that programs in the software industry have employed for decades. It is quite common to want to balance the use of resources such as time, memory, disk space, etc. Accurately estimating these things ahead of their use may consume substantial resources by itself. Instead, a much simpler formula is associated with each resource-usage type, and that formula stands as a proxy for the actual cost. Some kind of control program uses these cost functions to decide how best to allocate tasks and use actual resources. The algorithms to compute costs and manipulate the market can be as simple or as complex as the designer desires.

This control program can be thought of as an operating system but it might also be done in the context of tasks within a single process. This might result in markets within markets.
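A minimal sketch of the kind of control program described above, with invented cost formulas standing in for the per-resource proxies:

```python
# Sketch of the mechanism described above: each task advertises a cheap proxy
# formula for what it would cost to run each way, and a control program picks
# the cheaper allocation. Task names and coefficients are invented placeholders.

TASKS = [
    {"name": "sort",   "bytes": 10_000_000, "cpu_weight": 2.0},
    {"name": "lookup", "bytes":     50_000, "cpu_weight": 0.1},
]

def cost_in_memory(task):
    """Proxy formula: fast, but charged for RAM held."""
    return task["bytes"] * 1e-6 + task["cpu_weight"]

def cost_on_disk(task):
    """Proxy formula: cheap storage, but extra CPU/IO time."""
    return task["bytes"] * 1e-8 + task["cpu_weight"] * 5

def allocate(tasks):
    # The "control program": compare proxy costs; no attempt at true accounting.
    return {t["name"]: ("memory" if cost_in_memory(t) <= cost_on_disk(t) else "disk")
            for t in tasks}

print(allocate(TASKS))   # e.g. {'sort': 'disk', 'lookup': 'memory'}
```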

I doubt many software engineers would think of these things in terms of the market analogy. For one thing, they would gain little by constraining their thinking to a market-based system. I suspect many software engineers might be fascinated to think of such things in terms of markets, but only for curiosity's sake. I don't see how this point of view really solves any problems for which they don't already have a solution.

13 comments

Comments sorted by top scores.

comment by RobinHanson · 2019-02-17T16:01:27.096Z · LW(p) · GW(p)

My guess is that the reason is close to why security is so bad: it's hard to add security to an architecture that didn't consider it up front, and most projects are in too much of a rush to take the time to do that. Similarly, it takes time to think about which parts of a system should own what and be trusted to judge what. It's easier/faster to just make a system that does things, without attending to this, even if that is very costly in the long run. When the long run arrives, the earlier players are usually gone.

comment by Brendan Long (korin43) · 2019-02-16T17:06:03.112Z · LW(p) · GW(p)

I think the problem with this is that markets are a complicated and highly inefficient tool for coordinating resource consumption among competing individuals without needing an all-knowing resource allocator. This is extremely useful when you genuinely need to coordinate resource consumption among competing individuals, but in the case of programming, the functions in your program aren't really competing in the same way (there's a limited pool of resources, but for the most part each function needs a precise amount of memory, disk space, CPU time, etc., no more and no less).

There also is a close-enough-to-all-knowing resource allocator (the programmer or system administrator). The market model actually sounds like a plausibly workable way to do profiling, but it would be less overhead to just instrument every function to report what resources it uses and then central-plan your resource economy.
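A minimal sketch of that instrument-and-centrally-plan alternative (the decorator, report format, and example function are hypothetical, and it only handles non-nested calls):

```python
# Sketch of the alternative described above: instead of having functions bid
# for resources, instrument them and let the "central planner" (a human with a
# report) allocate. Simplistic: assumes instrumented calls are not nested.

import time
import tracemalloc
from collections import defaultdict
from functools import wraps

usage_report = defaultdict(lambda: {"calls": 0, "seconds": 0.0, "peak_bytes": 0})

def instrumented(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        tracemalloc.start()
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            _, peak = tracemalloc.get_traced_memory()
            tracemalloc.stop()
            rec = usage_report[fn.__name__]
            rec["calls"] += 1
            rec["seconds"] += elapsed
            rec["peak_bytes"] = max(rec["peak_bytes"], peak)
    return wrapper

@instrumented
def build_table(n):
    return {i: i * i for i in range(n)}

build_table(100_000)
print(dict(usage_report))   # the central planner reads this and reallocates
```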

In short, if everyone is a mindless automaton who takes only what they need and performs exactly what others require of them, and if the central planner can easily know exactly what resources exist and who wants them, then central planning works fine and markets are overkill (at least in the sense of being a useful tool; capitalism-as-a-moral-system is out-of-scope when talking about computer programs).

Note that even in cases like Amazon Web Services, the resource tracking and currency is just there to charge the end-user. Very few programs take these costs into account while they're executing (the exception is EC2 instance spot-pricing, but I think it's a stretch to even call that agoric computing).

Also, one other thing to consider is that agoric computing trades off something really, really cheap (computing resources) for something really, really expensive (programmer time). Most people don't even bother profiling because programmer time is dramatically more valuable than computer parts.

Replies from: PeterMcCluskey
comment by PeterMcCluskey · 2019-02-17T17:32:30.099Z · LW(p) · GW(p)

The central planner may know exactly what resources exist on the system they own, but they don't know all the algorithms and data that are available somewhere on the internet. Agoric computing would enable more options for getting programmers and database creators to work for you.

Replies from: korin43
comment by Brendan Long (korin43) · 2019-02-19T15:52:33.602Z · LW(p) · GW(p)

When dealing with resources on the internet, you're running into the "trading off something cheap for something expensive" issue again. I could *right now* spend several days or weeks writing a program that dynamically looks up how expensive it is to run some algorithm on arbitrary cloud providers and runs it on the cheapest one (or waits if the price is too high), but it would be much faster for me to just do a quick Google search and hard-code the cheapest provider right now. They might not always be the cheapest, but it's probably not worth thousands of dollars of my time to optimize this more than that.

Regarding writing a program to dynamically look up more complicated resources like algorithms and data: I don't know how you would do this without a general-purpose, programmer-equivalent AI. I think maybe your view of programming seriously underestimates how hard this is. Probably 95% of data science is finding good sources of data, getting them into a somewhat machine-readable form, cleaning them up, and doing various validations that the data makes any sense. If it were trivial for programs to use arbitrary data on the internet, there would be much bigger advancements than agoric computing.

comment by mako yass (MakoYass) · 2019-02-17T03:28:13.641Z · LW(p) · GW(p)

I think it would have happened decades ago if we'd had micropayments. There were a lot of internet denizens who didn't like the ad model. Part of the motivation for PayPal was to provide an alternative (so said David Brin in The Transparent Society). If things had gone differently, many subsets of the internet would have had users pay a tiny fraction of the server's costs when they requested a page. Creators would no longer have to scrape to find a way to monetise their stuff just to keep it online. It would have been pretty nice.

As far as I can tell, there hasn't been a micropayment platform for a long time. PayPal failed; IIRC it mirrors credit cards' 30-cent charge per transaction. Bank transfers are slow. Most payment platforms charge very similar fees, which leads me to wonder if there's some underlying legal overhead per transaction that prevents anyone from offering the required service.

I can't see a reason it should be civically impossible to reduce transaction costs to negligibility, though. It's conceivable that money proportional to the transacted amount must always be spent policing against money laundering, but I can't see why the cost should be proportional to the number of transactions rather than to the quantity transacted. (Obviously some cost must be proportional to the number of transactions: ISP fees, bandwidth congestion, CDNs, CPU time. But that should be much lower than 30 cents.)

Replies from: MakoYass, jay-molstad
comment by mako yass (MakoYass) · 2019-02-17T03:44:08.778Z · LW(p) · GW(p)

It seems PayPal has a micropayments product where the fee per transaction is 7c: https://www.paypal.com/uk/webapps/mpp/micropayments. Still garbage.

comment by Jay Molstad (jay-molstad) · 2019-02-17T13:41:16.375Z · LW(p) · GW(p)

For most people, the negative utility of deciding whether or not to do a transaction is on the order of a buck or two (based on some barely-remembered research from the '90s). Pricing transactions below this amount is inefficient; you don't get enough extra transaction volume to compensate for the lower price (regardless of the value of whatever you're selling).

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-02-17T16:21:32.695Z · LW(p) · GW(p)

This argument doesn't apply to the agoric computing case, though, in which the microtransactions are being decided by programs rather than by humans.

Replies from: MakoYass, jay-molstad
comment by mako yass (MakoYass) · 2019-02-18T00:36:51.325Z · LW(p) · GW(p)

And for common kinds of online activity, it should be cheap enough that users can ignore it.

comment by Jay Molstad (jay-molstad) · 2019-02-19T00:22:23.168Z · LW(p) · GW(p)

That feels a lot like allowing your computer to write blank checks, which is a tough sell for users. If it were me, I'd want to cap the payments at some affordable maximum level. The service would likely find ways to ensure that users almost always hit the cap, after which point the cap is basically a subscription fee.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-02-19T01:30:55.404Z · LW(p) · GW(p)

Most people do this for other utilities (like power) all the time, though.

comment by Jay Molstad (jay-molstad) · 2019-02-16T11:58:12.182Z · LW(p) · GW(p)

Something very similar to this happens whenever Google serves you an ad; there's a whole underlying auction architecture comparing your past habits to advertisers' bids and optimizing projected ad revenue (the estimated likelihood of you clicking on the ad times the advertiser's bid per click for that ad).

The difference is that human attention is the limiting resource and machine resources are cheap enough to spend prodigiously. In the early days of the Web that wasn't as true, so people saw huge walls of poorly targeted ads. Those ads wasted attention (i.e. annoyed people more than necessary), so they were abandoned.
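A stripped-down sketch of the ranking described above, with invented click probabilities and bids:

```python
# Stripped-down version of the ad auction described above: rank candidate ads
# by expected revenue = P(user clicks) * advertiser's bid per click.
# The advertisers, click probabilities, and bids below are invented.

ads = [
    {"advertiser": "A", "bid_per_click": 2.00, "p_click": 0.01},
    {"advertiser": "B", "bid_per_click": 0.50, "p_click": 0.08},
    {"advertiser": "C", "bid_per_click": 1.20, "p_click": 0.03},
]

def expected_revenue(ad):
    return ad["bid_per_click"] * ad["p_click"]

winner = max(ads, key=expected_revenue)
print(f"show ad from {winner['advertiser']} "
      f"(expected revenue ${expected_revenue(winner):.3f} per impression)")
```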

comment by Invar · 2019-02-16T07:46:44.465Z · LW(p) · GW(p)

In general, complicated pricing models are business-unfriendly. For example, Tarsnap (an encrypted backup service) advertises prices as "250 picodollars / byte-month", which is hard for business customers to actually think about.

Inside a large enough company, a market economy for allocating resources can make sense. Google has a resource economy for teams to bid on resources. However, this doesn't fulfill the general requirement of per-request costs. Resource economies let each team's services find a balance, and enable reporting up the chain of what costs are: things like how much YouTube's storage costs, which would be hard to estimate from raw data usage, and which would be even more expensive if it required dedicated storage servers instead of shared resources bought at auction.

Calculating the precise cost of a single request is difficult. True resource costs are nonlinear in the presence of caching, and the interference effects of one request on another are difficult to ascertain. Coarser approximations in terms of various quotas, like requests per second for different endpoints, are easy for humans to reason about.
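A minimal sketch of that coarse quota approach (the endpoint names and limits are made-up examples):

```python
# Sketch of per-endpoint quotas: a fixed requests-per-second budget per
# endpoint, rather than per-request pricing. Endpoints and limits are invented.

import time
from collections import defaultdict

QUOTAS = {"/search": 100, "/upload": 10}   # allowed requests per second

window_start = defaultdict(float)
counts = defaultdict(int)

def allow(endpoint, now=None):
    now = time.monotonic() if now is None else now
    if now - window_start[endpoint] >= 1.0:        # start a new one-second window
        window_start[endpoint] = now
        counts[endpoint] = 0
    if counts[endpoint] < QUOTAS.get(endpoint, 0):
        counts[endpoint] += 1
        return True
    return False                                   # over quota: reject or queue

print([allow("/upload") for _ in range(12)])       # 10 True, then False once over quota
```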