Response to Glen Weyl on Technocracy and the Rationalist Community

post by John_Maxwell (John_Maxwell_IV) · 2019-08-22T23:14:58.690Z · LW · GW · 9 comments

Contents

  Glen's Strongest Points
  Possible Points of Disagreement
  My Take on Institution Design
  Probable Points of Disagreement
  Conclusion

Economist Glen Weyl has written a long essay, "Why I Am Not A Technocrat", a major focus of which is his differences with the rationalist community.

I feel like I've read a decent number of outsider critiques of the rationalist community at this point, and Glen's critique is pretty good. It has the typical outsider critique weakness of not being fully familiar with the subject of its criticism, balanced by the strength of seeing the rationalist community from a perspective we're less familiar with.

As I was reading Glen's essay, I took some quick notes. Afterwards I turned them into this post.

Glen's Strongest Points

The fundamental problem with technocracy on which I will focus (as it is most easily understood within the technocratic worldview) is that formal systems of knowledge creation always have their limits and biases. They always leave out important considerations that are only discovered later and that often turn out to have a systematic relationship to the limited cultural and social experience of the groups developing them. They are thus subject to a wide range of failure modes that can be interpreted as reflecting on a mixture of corruption and incompetence of the technocratic elite. Only systems that leave a wide range of latitude for broader social input can avoid these failure modes.

So far, this sounds a lot like discussions I've seen previously of the book Seeing Like a State. But here's where Glen goes further:

Yet allowing such social input requires simplification, distillation, collaboration and a relative reduction in the social status and monetary rewards allocated to technocrats compared to the rest of the population, thereby running directly against the technocratic ideology. While technical knowledge, appropriately communicated and distilled, has potentially great benefits in opening social imagination, it can only achieve this potential if it understands itself as part of a broader democratic conversation.

...

Technical insights and designs are best able to avoid this problem when, whatever their analytic provenance, they can be conveyed in a simple and clear way to the public, allowing them to be critiqued, recombined, and deployed by a variety of members of the public outside the technical class.

Technical experts therefore have a critical role precisely if they can make their technical insights part of a social and democratic conversation that stretches well beyond the role for democratic participation imagined by technocrats. Ensuring this role cannot be separated from the work of design.

...

[When] insulation is severe, even a deeply “well-intentioned” technocratic class is likely to have severe failures along the corruption dimension. Such a class is likely to develop a strong culture of defending its distinctive class expertise and status and will be insulated from external concerns about the justification for this status.

...

Market designers have, over the last 30 years designed auctions, school choice mechanisms, medical matching procedures, and other social institutions using tools like auction and matching theory, adapted to a variety of specific institutional settings by economic consultants. While the principles they use have an appearance of objectivity and fairness, they play out against the contexts of societies wildly different than those described in the models. Matching theory uses principles of justice intended to apply to an entire society as a template for designing the operation of a particular matching mechanism within, for example, a given school district, thereby in practice primarily shutting down crucial debates about desegregation, busing, taxes, and other actions needed to achieve educational fairness with a semblance of formal truth. Auction theory, based on static models without product market competition and with absolute private property rights and assuming no coordination of behavior across bidders, is used to design auctions to govern the incredibly dynamic world of spectrum allocation, creating holdout problems, reducing competition, and creating huge payouts for those able to coordinate to game the auctions, often themselves market design experts friendly with the designers. The complexities that arise in the process serve to make such mass-scale privatizations, often primarily to the benefit of these connected players and at the expense of the taxpayer, appear the “objectively” correct and politically unimpeachable solution.

...

[Mechanism] designers must explicitly recognize and design for the fact that there is critical information necessary to make their designs succeed that a) lies in the minds of citizens outside the technocratic/designer class, b) will not be translated into the language of this class soon enough to avoid disastrous outcomes and c) does not fit into the thin formalism that designers allow for societal input.

...

In order to allow these failures to be corrected, it will be necessary for the designed system to be comprehensible by those outside the formal community, so they can incorporate the unformalized information through critique, reuse, recombination and broader conversation in informal language. Let us call this goal “legibility”.

...

There will in general be a trade-off between fidelity and legibility, just as both will have to be traded off against optimality. Systems that are true to the world will tend to become complicated and thus illegible.

...

Democratic designers thus must constantly attend, on equal footing, in teams or individually, to both the technical and communicative aspects of their work.

(Please let me know if you think I left out something critical)

A famous quote about open source software development states that "given enough eyeballs, all bugs are shallow". Nowadays, with critical security bugs in open-source software like Heartbleed, the spirit of this claim isn't taken for granted anymore. One Hacker News user writes: "[De facto eyeball shortage] becomes even more dire when you look at code no one wants to touch. Like TLS. There were the Heartbleed and goto fail bugs which existed for, IIRC, a few years before they were discovered. Not surprising, because TLS code is generally some of the worst code on the planet to stare at all day."

In other words, if you want critical feedback on your open source project, it's not enough just to put it out there and have lots of users. You also want to make the source code as accessible as possible--and this may mean compromising on other aspects of the design.
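To make the "goto fail" example above concrete, here is a minimal, self-contained C sketch of the pattern behind that bug. The function and helper names are invented for illustration (the real code lived in Apple's SSLVerifySignedServerKeyExchange); only the duplicated goto mirrors the actual mistake, which sat in production TLS code for years before anyone noticed:

```c
#include <stdio.h>

/* Hypothetical stand-ins for the chain of hash-update calls in the real
 * TLS verification routine: return 0 on success, nonzero on failure. */
static int hash_update_1(void) { return 0; }
static int hash_update_2(void) { return 0; }
static int verify_final(void)  { return 1; }  /* pretend the signature is actually bad */

static int check_signature(void)
{
    int err;
    if ((err = hash_update_1()) != 0)
        goto fail;
    if ((err = hash_update_2()) != 0)
        goto fail;
        goto fail;   /* duplicated line: not governed by the if above, so it always
                        runs and skips the final signature check below */
    if ((err = verify_final()) != 0)
        goto fail;
fail:
    return err;      /* err is still 0 here, so a bad signature is reported as valid */
}

int main(void)
{
    printf("signature check: %d (0 means accepted)\n", check_signature());
    return 0;
}
```

The flaw is invisible unless you read the indentation skeptically--exactly the kind of bug that more eyeballs, and more legible code, are supposed to make shallow.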

Academic or other in-group status games may encourage the use of big words. But we'd be better off rewarding simple explanations--not only are simple explanations more accessible, they also demonstrate deeper understanding.

At the very least, I think, Glen wants our institutions to be like highly usable software: the internals require expertise to create and understand, but from the user's perspective they "just work" and do what you expect.

Another point Glen makes well is that being in the institution design business does not make you immune to incentives. The importance of self-skepticism regarding one's own incentives has been discussed [LW · GW] before around here, but this recent post [LW · GW] probably comes closest to Glen's position: that you really can't be trusted to monitor yourself.

Finally, Glen talks about the insularity of the rationalist community itself. I think this critique was true in the past. I haven't been interacting with the community in person as much over the past few years, so I hesitate to talk about the present, but I think he's plausibly right. I also think there may be an interesting counterargument that the rationalist community does a better job of integrating perspectives across multiple disciplines than your average academic department.

Possible Points of Disagreement

Although I think Glen would find some common ground with the recent post [LW · GW] I linked, it's possible he would also find points of disagreement. In particular, habryka writes:

Highlighting accountability as a variable also highlights one of the biggest error modes of accountability and integrity – choosing too broad of an audience to hold yourself accountable to.

There is a tradeoff between the size of the group that you are being held accountable by, and the complexity of the ethical principles you can act under. Too large of an audience, and you will be held accountable by the lowest common denominator of your values, which will rarely align well with what you actually think is moral (if you've done any kind of real reflection on moral principles).

Too small or too memetically close of an audience, and you risk not enough people paying attention to what you do, to actually help you notice inconsistencies in your stated beliefs and actions. And, the smaller the group that is holding you accountable is, the smaller your inner circle of trust, which reduces the amount of total resources that can be coordinated under your shared principles.

I think a major mistake that even many well-intentioned organizations make is to try to be held accountable by some vague conception of "the public". As they make public statements, someone in the public will misunderstand them, causing a spiral of less communication, resulting in more misunderstandings, resulting in even less communication, culminating into an organization that is completely opaque about any of its actions and intentions, with the only communication being filtered by a PR department that has little interest in the observers acquiring any beliefs that resemble reality.

I think a generally better setup is to choose a much smaller group of people that you trust to evaluate your actions very closely, and ideally do so in a way that is itself transparent to a broader audience. Common versions of this are auditors, as well as nonprofit boards that try to ensure the integrity of an organization.

Common wisdom is that it's impossible to please everyone. And specialization of labor is a foundational principle of modern society. If I took my role as a member of "the public" seriously and tried to provide meaningful and fair accountability to everyone, I wouldn't have time to do anything else.

It's interesting that Glen talks up the value of "legibility", because from what I understand, Seeing Like a State emphasizes its disadvantages. Seeing Like a State discusses legibility in the eyes of state administrators, but Glen doesn't explain why we shouldn't expect similar failure modes when "the general public" is substituted for "state administration".

(It's possible that Glen doesn't mean "legibility" in the same sense the book does, and a different term like "institutional legibility" would pinpoint what he's getting at. But there's still the question of whether we should expect optimizing for "institutional legibility" to be risk-free, after having observed that "societal legibility" has downsides. Glen seems to interpret recent political events as a result of excess technocracy, but they could also be seen as a result of excess populism--a leader's charisma could be more "legible" to the public than their competence.)

Anyway, I assume Glen is aware of these issues and working to solve them. I'm no expert, but from what I've heard of RadicalxChange, it seems like a really cool project. I'll offer my own uninformed outsider's perspective on institution design, in the hope that the conceptual raw material will prove useful to him or others.

My Take on Institution Design

I think there's another model which does a decent job of explaining the data Glen provides:

From the perspective of this model, Glen's emphasis on legibility could be seen as yet another purported silver bullet. However, I don't see a compelling reason for it to succeed where previous bullets failed. How, concretely, are random folks like me supposed to help address the corruption Glen identifies in the wireless spectrum allocation process? There seems to be a disconnect between Glen's description of the problem and his description of the solution. (Later, Glen mentions the value of "humanities, continental philosophy, or humanistic social sciences"--I'd be interested to hear specific, not-commonly-known ideas from these areas that he thinks are important and relevant for institution design.)

As a recent & related example, a decade or two ago many people were talking about how the Internet would revitalize & strengthen democracy; nowadays I'd guess most would agree that the Internet has failed as a silver bullet in this regard. (In fact, sometimes I get the impression this is the only thing we can all agree on!)

Anyway... What do I think we should do?

Under this framework, it's not enough merely to have the approval of a large number of people. If these people have similar perspectives, their inability to identify flaws offers limited evidence about the overall robustness of the design.

Legibility is useful for flaw discovery in this framework, just as cleaner code could've been useful for surfacing flaws like Heartbleed. But there are other strategies available too, like offering bug bounties [EA(p) · GW(p)] for the best available critiques.

Experiments and field trials are a bit more expensive, but it's critical to actually try things out, and resolve disagreements among bug bounty participants. Then there's the "resume-building" stage of trialing one's institution on an increasingly large scale in the real world. I'd argue one should aim to have all the kinks worked out before "resume-building" starts, but of course, it's important to monitor the roll-out for problems which might emerge--and ideally, the institution should itself have means with which it can be patched "in production" (which should get tested during experimentation & field trials).

The process I just described could itself be seen as an untested institution which is probably flawed and needs critiques, experiments, and field testing. (For example, bug bounties don't do anything on their own for legibility--how can we incentivize the production of clear explanations of the institution design in need of critiques?) Taking everything meta, and designing an institutional framework for introducing new institutions, is the real silver bullet if you ask me :-)

Probable Points of Disagreement

Given Glen's belief in the difficulty of knowledge creation, the importance of local knowledge, and the limitations of outside perspectives, I hope he won't be upset to learn that I think he got a few things wrong about the rationalist community. (I also think he got some things wrong about the EA community, but I believe he's working to fix those issues, so I won't address them.)

Glen writes:

if we want to have AIs that can play a productive role in society, our goal should not be exclusively or even primarily to align them with the goals of their creators or the narrow rationalist community interested in the AIAP.

This doesn't appear to be a difference of opinion with the rationalist community. In Eliezer's CEV paper, he writes about the "coherent extrapolated volition of humankind", not the "coherent extrapolated volition of the rationalist community".

However, now that MIRI's research is non-disclosed by default, I wonder if it would be wise for them to publicly state that their research is for the benefit of all, in a charter like OpenAI has, rather than in a paper published in 2004.

Glen writes:

The institutions likely to achieve [constraints on an AI's power] are precisely the same sorts of institutions necessary to constrain extreme capitalist or state power.

An unaligned superintelligent AI which can build advanced nanotechnology has no need to follow human laws. On the flip side, an aligned superintelligent AI can design better institutions for aggregating our knowledge & preferences than any human could.

Glen writes:

A primary goal of AI design should be not just alignment, but legibility, to ensure that the humans interacting with the AI know its goals and failure modes, allowing critique, reuse, constraint etc. Such a focus, while largely alien to research on AI and on AIAP

This actually appears to me to be one of the primary goals of AI alignment research. See section 2.3 of this paper, or this parable. It's not alien to mainstream AI research either: see work on explainability and interpretability (pro tip: interpretability is better).

In any case, if the alignment problem is actually solved, legibility isn't needed, because we know exactly what the system's goals are: The goals we gave it.

Conclusion

As I said previously, I have not investigated RadicalxChange in very much depth, but my superficial impression is that it is really cool. I think it could be an extremely high-leverage project in a world where AGI doesn't come for a while, or gets invented slowly over time. My personal focus is on scenarios where AGI is invented relatively rapidly and relatively soon, but sometimes I wonder whether I should focus on the kind of work Glen does. In any case, I am rooting for him, and I hope his movement does an astonishing job of inventing and popularizing nearly flawless institution designs.

9 comments


comment by ChristianKl · 2019-08-23T15:29:40.295Z · LW(p) · GW(p)

I think it's wrong to call this a criticism of the rationality community. The people who designed systems like market auctions for spectrum aren't members of the rationalist community.

When I think about institutions in our movement that advocate knowledge gathering through formal methods without attempts at openness, none comes to mind.

You could call GiveWell an organisation that uses formal methods for knowledge gathering, but they are also an institution that releases recordings of their board meetings to the public. Actions like that are a costly signal of valuing legibility.

CFAR doesn't teach people to reason with formal systems. That's not what they teach.

comment by cousin_it · 2019-08-23T00:33:42.509Z · LW(p) · GW(p)

Yeah. The econ part wasn't so bad - I lived through the shock therapy of 90s Russia, and Glen is spot on when he blames it on unaccountable technocratic governance. But when it comes to AI alignment, it seems like he hasn't heard of corrigibility and interpretability work by MIRI, FHI and OpenAI.

comment by Raemon · 2019-08-23T01:08:43.746Z · LW(p) · GW(p)

For some reason the link doesn't work.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2019-08-23T02:09:17.705Z · LW(p) · GW(p)

radicalxchange.org is not working in Google Chrome for me. I assumed it was because of one of my many Chrome extensions, but maybe it's an issue with the site itself? Works in Firefox/Opera.

Replies from: habryka4, capybaralet
comment by habryka (habryka4) · 2019-08-23T02:18:41.173Z · LW(p) · GW(p)

My adblocker completely blocks the site. I had to turn it off to get any access to it.

Replies from: ldsrrs, Pattern
comment by ldsrrs · 2019-08-25T13:58:38.529Z · LW(p) · GW(p)

I tried disabling uBlock, but I was still unable to access it in either Chromium or Firefox.

comment by Pattern · 2019-08-23T21:58:04.969Z · LW(p) · GW(p)

I use a tracker blocker, and the site works fine.

Replies from: habryka4
comment by habryka (habryka4) · 2019-08-23T23:17:54.503Z · LW(p) · GW(p)

Yeah, uBlock Origin tends to block a bunch more stuff than just trackers.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2019-08-26T20:29:23.989Z · LW(p) · GW(p)

I've tweeted at them twice about this problem. Not sure how else to contact them to get it fixed :/