The Transparent Society: A radical transformation that we should probably undergo

post by MakoYass · 2019-09-03T02:27:21.498Z · LW · GW · 24 comments


Edit (May 2020): The purpose of this document is to examine the end state and ask whether it would be good and stable; the purpose is not to examine the means, to lay out a path, or to tally the risks of trying. I don't know how clear that was. I am sorry to disappoint anyone who was looking for that; it's a sensible thing to be concerned about. I hope I'll study it later.


In 1998, David Brin published a vision of a potentially inevitable societal shift towards all-encompassing democratised surveillance. It could not be called a panopticon, because those who patrol the observation house would be as visible as anyone else.

We would be able to watch our watchers.

Privacy would disappear, but many kinds of evil would disappear along with it.

The book was okay. The first few chapters are definitely worth reading, and some of Brin's reflections on the development of the internet are really interesting, but I found the rest very meandering and a little bit unsatisfying. I'm going to summarise my own understanding of radical societal transparency and why I'm convinced that it would be extremely good, actually, so that I don't have to keep recommending the entire book, which didn't capture my stance very well.

I'm also going to go over some of the potential problems a transparent society would have, and explain why I'm not yet deterred by them. Some of them could turn out to be lethal to the idea, but that seems unlikely to me so far. I'm eager to explore those doubts until we're sure.


Advantages

It should not surprise anyone that a radically open society would have many advantages. Information is useful. If we know more about each other, we can arrange more trades, and we can trust each other more easily.

In order of importance:



Potential Disadvantages


Advice

So here's what we should try to do in light of all of that:

If the answer to those questions is "It'll be fine, go ahead",

If the answer turns out to be "no, this would be bad actually"...

You must still try to deploy the constrained forms of global surveillance and policing proposed in Bostrom's black ball paper. It is well documented that we failed to handle nukes, and only an idiot would bet that nukes are the blackest ball that's gonna come out of the urn.

Disarmament still hasn't happened.

As long as the bomb can be hidden, there will remain an indomitable incentive to have the bomb.

Is there a good reason to think we're going to be able to defuse it with anything short of a total abolition of secrecy?

24 comments

Comments sorted by top scores.

comment by Viliam · 2019-09-04T23:23:23.640Z · LW(p) · GW(p)

If someone tried to implement this in real life, I would expect it to get implemented exactly halfway. I would expect to find out that my life became perfectly transparent for anyone who cares, but there would be some nice-sounding reason why the people at the top of the food chain would retain their privacy. (National security. Or there are a few private islands in the ocean where the surveillance is allegedly economically/technically impossible to install, and by sheer coincidence, the truly important people live there.) I would also expect this asymmetry to be abused against people who try to organize to remove it.

You know, just like those cops wearing body cams that mysteriously stop functioning exactly at the moment the recording could be used against them. That, but on a planetary scale.

From the opposite perspective, many people would immediately think about counter-measures. Secret languages, so that you can listen to me talking to my friends but still have no idea what the topic was. This wouldn't scale well, but some powerful and well-organized groups would use it.

People would learn to be more indirect in their speech, to allow everyone to pretend that anything was a coincidence or misunderstanding. There would be a lot of guessing, and people on the autism spectrum would be at a serious disadvantage.

How would the observed data be evaluated? People are hypocrites; just because you are doing the same thing many other people are doing, and everyone can see it, that doesn't necessarily prevent the outcome where you get punished and those other people don't. People are really good at being dumb when you provide them with evidence they don't want to see. Not understanding things you can clearly see would become an even more important social skill. There would still be taboos, and you would not be able to talk about them; not even in private, because privacy wouldn't exist anymore.

But for the people who believe this would be great... I would recommend trying the experiment on a smaller scale: create a community of volunteers who install surveillance throughout their commune, accessible to all members of the commune. What would happen next?

comment by Douglas_Knight · 2019-09-05T14:52:55.175Z · LW(p) · GW(p)

The whole point of the book is that the failure mode you envision is going to happen by default. It is not a risk introduced by inverse surveillance, because it is already happening without it.

There is a problem that surveillance increases continuously, not in an abrupt step. At some point we must establish a norm that police turning off their cameras is a crime. The public had no trouble condemning Nixon for his 18-minute gap, but at the moment many police camera systems require positive steps of activation and downloading, which afford plausible deniability of having just forgotten.

comment by NaiveTortoise (An1lam) · 2019-09-05T01:04:12.161Z · LW(p) · GW(p)

Strong upvoted and would add that we currently live in a world where surveillance is much more common than inverse surveillance, so proponents of a transparent society should, AFAICT, be much more focused on increasing inverse surveillance than surveillance at the moment.

comment by MakoYass · 2019-09-05T10:47:32.310Z · LW(p) · GW(p)
I would expect it to get implemented exactly halfway

Not stopping halfway is a crucial part of the proposal. If they stop halfway, that is not the thing I have proposed. If an attempt somehow starts in earnest then fails partway through, policy should be that the whole thing is rolled back and undone completely.

Regarding the difficulty of sincerely justifying opening up national security... That's going to depend on the outcome of the wargames. I can definitely imagine an outcome that gets us the claim "Not having secret services is just infeasible", in which case I'm not sure what I'd do. I might end up dropping the idea entirely. It would be painful.

allegedly economically/technically impossible to install

Not plausible if said people are rich and the hardware is cheap enough for the scheme to be implementable at all. There isn't an excuse like that. Maybe they could say something about being an "offline community" and not having much of a network connection, but the data could just be stored in a local buffer somewhere. They'd be able to arrange a temporary disconnection and get away with some things, one time, I suppose, but they'd have to be quick about it.

From the opposite perspective, many people would immediately think about counter-measures. Secret languages

Obvious secret languages would be illegal. It's exactly the same crime as brazenly covering the cameras or walking out of their sight (without your personal drones). I am very curious about the possibilities of undetectable secrecy, but there are reasons to think it would be limited.

I would recommend trying the experiment on a smaller scale. To create a community of volunteers, who would install surveillance throughout their commune, accessible to all members of the commune. What would happen next?

(Hmm... I can think of someone in particular who really would have liked to live in that sort of situation, she would have felt a lot safer... ]:)

One of my intimates has made an attempt at this. It was inconclusive. We'd do it again.

But it wouldn't be totally informative. We probably couldn't justify making the data public, so we wouldn't have to deal much with the omniscient-antagonists thing, and the really difficult questions wouldn't end up getting answered.

One relevant small-scale experiment would be Ray Dalio's hedge fund Bridgewater; I believe they practice a form of (internal) radical openness, cameras and all. His book is on my reading list.

I would one day like to create an alternative to secure multiparty computation schemes like Ethereum by just running a devoutly radically transparent (panopticon accessible to external parties) webhosting service on open hardware. It would seem a lot simpler. Auditing, culture and surveillance as an alternative to these very heavy, quite constraining crypto technologies. The integrity of the computations wouldn't be mathematically provable, but it would be about as indisputable as the moon landing.

It's conceivable that this would always be strictly more useful than any blockchain world-computer, since, as far as I'm aware, we need a different specific secure multiparty computation technique every time we want to find a way to compute on hidden information. For a radically transparent webhost, the incredible feat of arbitrary computation on hidden data at near commodity-hardware efficiency (fully open, secure hardware is unlikely to be as fast as whatever Intel's putting out, but it would be in the same order of magnitude) would require only a little bit of additional auditing.
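To make the auditing idea concrete, here's a toy sketch of what "integrity by replay" could look like (this is my own illustration, not anything any real service implements; the function names are made up): the host appends each computation to a hash-chained public log, and any outside party can replay the log to check both that the results are correct and that no record has been quietly rewritten.

```python
import hashlib
import json

def run_and_log(fn, args, log):
    """Run a computation and append an auditable record of it to a public log.

    Instead of proving integrity cryptographically (as a blockchain
    world-computer would), the host publishes enough information for any
    external auditor to re-run the computation and check the result.
    """
    result = fn(*args)
    record = {"fn": fn.__name__, "args": args, "result": result}
    # Chain each record to the previous one, so tampering with any entry
    # invalidates every later hash and can't go unnoticed.
    prev = log[-1]["hash"] if log else ""
    payload = prev + json.dumps(record, sort_keys=True)
    record["hash"] = hashlib.sha256(payload.encode()).hexdigest()
    log.append(record)
    return result

def audit(fn_table, log):
    """Replay every logged computation and verify results and the hash chain."""
    prev = ""
    for record in log:
        body = {k: record[k] for k in ("fn", "args", "result")}
        payload = prev + json.dumps(body, sort_keys=True)
        if record["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False  # log was rewritten after the fact
        if fn_table[record["fn"]](*record["args"]) != record["result"]:
            return False  # host reported a wrong result
        prev = record["hash"]
    return True
```

This only gives the "indisputable as the moon landing" kind of assurance the comment describes, not a mathematical proof: it assumes auditors actually replay the log and that the log is widely mirrored so the host can't serve different versions to different people.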

comment by MakoYass · 2019-09-08T04:19:55.535Z · LW(p) · GW(p)

In Vitalik Buterin's interview on 80KHours (https://80000hours.org/podcast/episodes/vitalik-buterin-new-ways-to-fund-public-goods/ I recommend it) he brought something up that evoked a pretty stern criticism of radical transparency.

Most incentive designs rely on privacy, because by keeping a person's actions off the record, you keep the meaning of those actions limited, confined, discrete, knowable. If, on the other hand, a person's vote, say, is put onto a permanent public record, then you can no longer know what it means to them to vote. Once they can prove how they voted to external parties, they can be paid to vote a certain way. They can worry about retribution for voting the wrong way. Things that might not even exist yet, that the incentive designer couldn't account for, now interfere with their behaviour. It becomes so much harder to reason about systems of agents when every act affects every other act; what hope have we of designing a robust society under those conditions? (Still quite a lot of hope, IMO, but it's a noteworthy point.)

comment by Douglas_Knight · 2019-09-05T02:50:37.448Z · LW(p) · GW(p)
every law would be consistently enforced.

Why?

It is incredibly common today for massive arguments to break out over a video, with half the world saying that it obviously yields one conclusion and the other half saying it refutes it.

How about the police just ignore the law? It happens all the time today, completely publicly. Total transparency would make it difficult for two officers to get together and conspire. But they probably rarely conspire today. A video of one of them saying "this isn't a violation" and the other replying "nope" would shed no more light than today.


comment by MakoYass · 2019-09-05T10:59:22.625Z · LW(p) · GW(p)
It is incredibly common today for massive arguments to break out over a video, with half the world saying that it obviously yields one conclusion and the other half saying it refutes it.

Give examples. Often there is a lot of context missing from those videos and that is the problem. People who intentionally ignore readily available context will have no more power in a transparent society than they have today.

My concern there wasn't that some laws might not get consistently enforced; consistent enforcement is the thing I am afraid of. I'm not sure about this, but I've often gotten the impression that our laws were not designed to work without the mercy of discretionary enforcement. The whole idea of freedom from unwarranted search suggests to me that laws were designed under the expectation that they would generally not be enforced within the home. Generally, when a core expectation is broken, the results are bad.

comment by gilch · 2019-09-03T06:20:00.843Z · LW(p) · GW(p)

Privacy is a great protection against many other abuses, but I'm not sure it's a categorical good. Maybe there are good places in the moral landscape with transparent societies. But getting to there from here means either finding other ways to mitigate those abuses first, or crossing deep valleys where bad things happen a lot.

comment by MakoYass · 2019-09-03T06:44:46.790Z · LW(p) · GW(p)

Which abuses, and why would those be hard to police once they've been dragged out into the open?

comment by gilch · 2019-09-04T01:20:51.921Z · LW(p) · GW(p)

Information is not the only kind of power and information asymmetry is not the only kind of power asymmetry. How much does it help that you can watch what the police are doing when they still have all the guns? Maybe not such an issue in America, but what about Hong Kong?

Even if you have equal access to raw information, you wouldn't necessarily have equal ability to process it. Minorities can still be unfairly oppressed by majorities, even when everyone knows they're doing it. There's an ugly outrage/political correctness culture on Twitter and increasingly in academia that mobs anyone they notice who steps out of line. These people often use their real names. How do they get away with this abuse when everyone can see them doing it? I can speculate that they're more coordinated as a group than the individuals they target. If we give both sides more information, how far does it go to correct the power imbalance? Or does it just make things worse because the mob has more resources to utilize it already? Anonymity is a great defense against this abuse. Privacy helps a lot even without full anonymity. That's why the mob doxxes their victims when they can.

The general sanity waterline is currently really ridiculously low. More transparency might help to some degree, but if your epistemology is broken, more information doesn't help. It just gives you more ammunition to shoot your own foot with.

comment by ChristianKl · 2019-09-04T07:11:17.868Z · LW(p) · GW(p)
How much does it help that you can watch what the police are doing when they still have all the guns? Maybe not such an issue in America, but what about Hong Kong?

Public officials in the US very seldom get punished, while in China it's much easier to throw a public official into prison.

China does a lot of public opinion management because public opinion matters to powerful people.

comment by Pattern · 2019-09-04T06:09:38.140Z · LW(p) · GW(p)

End note: this was really well argued. While practical details*, and how to ensure that pursuing such a course wouldn't lead to a Panopticon state instead of a Transparent state, remain open, this was a great piece, and really persuasive. (The rest of this comment was written as I read it.)

There have been some posts on votes as "Yays/Boos". I upvoted this because I appreciated this discussion/the way these arguments were made, although I am against the idea being argued for.

preventing crime

This would be amazing. I'm worried about it preventing thoughtcrime.

It would be possible, for the first time, to enforce a sentence/treatment "do not speak with or listen to bad influences through any channel"

Our world still has dictatorships.

that it would allow us to get closer to figuring out what our real values are, so that we can develop truly humane systems of accountability to pursue those instead.

Sounds good. I'm worried about it enabling truly inhumane systems.

Forcing people to accept and contend with the weirdness of other people, and their own weirdness.

Good tools for aggregating information would be key.*

Enabling trade.

My first complaint here was that it would enable stealing. But if everything was observed, then maybe people wouldn't get away with it. Now I'm wondering how this degree of transparency could be achieved.

Transparency makes it possible to enforce against even the tiniest transgression against a dominant power, which may make the dominant power incontestable.

I'm not sure how this trades off against the reverse being available - the question is whether people are able to coordinate against powers (which are probably already internally coordinated) that transgress. How would things play out when both sides can see everything the other is doing?

Crossing the gap between "all is seen" and "all is known" seems to be key overall.*

It's tempting to me to propose doing a thing where black has to deal with some uncertainty about the position of its own pieces, to reflect the awkward realities of not being a transparent society, but I don't think that would be charitable.

Chess isn't a perfect analogy because it's about two players versus each other. Also keep in mind surveillance is still possible - imagine the transparent side versus the panopticon side (those in power see all).

A transparent state could be far more able to prove nonaggression.

But it's harder to bluff. ("Our nukes are set to automatically fire on you if they fire on us." "No, they're not.")

I'm not sure whether "not being able to keep technological secrets" counts as a significant weakness. The scarce asset is generally not theory (theory is hard to protect anyway); the scarce asset is usually practitioners.

An interesting viewpoint.

The problems a transparent society has with protecting registered intellectual property are no different from the problems of a closed society.

Didn't follow this.

Figure out what a good legal system would look like in a transparent society. It is likely to be harder, considering that every law would be consistently enforced.

Not sure about harder. This seems like a benefit.

that votes in politicians on the basis of who they really are rather than how good they are at acting.

Not actually the biggest fan of this - I want a better world, but I have to ask 'why don't we have a better one already?' (First past the post is a terrible voting system.)

Figure out whether

A piece which proposes a radical change, and sees whether it would go bad, before trying it. Phenomenal.


*These are tied together.

comment by gilch · 2019-09-03T06:13:37.853Z · LW(p) · GW(p)

I'm not convinced that radical transparency would save us from a black marble. Even with transparency, we might destroy ourselves before we see it coming, due to "failure of imagination". And even in scenarios where we could have seen it coming, that doesn't mean we will. Internet advertising is competing so hard for our attention right now that people are burning out from it. More information might just make that worse. Given the choice, people will look at what's most interesting, even if it's not the most important. Maybe we'd stop a few bad actors, but it only takes one that escapes notice for a while.

comment by MakoYass · 2019-09-03T06:42:15.486Z · LW(p) · GW(p)

Regarding the overabundance of information, we should note that a lot of monitoring will be aided by a lot of automated processes.

The internet's tendency to overconsume attention... I think that might be a temporary phase, don't you? We are all gorging ourselves on candy. We all know how stupid and hollow it is and soon we will all be sick, and maybe we'll be conditioned well enough by that sick feeling to stop doing it.

Personally, I've been thinking a lot lately about how LessWrong is the only place where people try to write content that will be read thoroughly by a lot of people over a long period of time. I don't think we're doing well at that, but I think the value of a place like this is obvious to a lot of people. We will learn to focus on developing the structures of information that last for a long time, or at least, the people who matter will learn.

comment by ChristianKl · 2019-09-03T09:34:27.091Z · LW(p) · GW(p)

I don't think LessWrong is unique in that regard. Wikipedia is strongly focused on it. The StackExchange network also has a lot of content that's intended to be available in the future.

comment by gilch · 2019-09-04T01:34:23.605Z · LW(p) · GW(p)

Also podcasts and video. Most of the population is not as literate as we are, but they can still digest audio. There are lots of video lectures on YouTube.

comment by gilch · 2019-09-04T01:47:11.094Z · LW(p) · GW(p)

My point wasn't that internet advertising in particular would be the cause of our inattention, but that humans have real limitations when it comes to processing information, with that being one salient example. We evolved in small bands of maybe fifty individuals. Our instincts cannot handle interactions in larger groups correctly. We have compensated to a remarkable degree via learned culture, but with some obvious shortcomings. More information would only amplify these problems.

I agree automation has a role to play in information processing, but that can amplify distortions on its own. Personalized search or divisive filter bubbles? Racist algorithms. Etc.

comment by MakoYass · 2019-09-03T06:19:27.930Z · LW(p) · GW(p)

Did I say that? If so, I didn't mean to. The only vulnerabilities I'd expect it to protect us from fairly reliably are the "easy nukes" class. You mention the surprising-strangelets class, which it would do very little for.

comment by gilch · 2019-09-03T06:42:19.459Z · LW(p) · GW(p)

A black marble is any invention that would, by default but perhaps not inevitably, kill the civilization that invents it. Maybe you intended gradations of the concept beyond that? Maybe how much time it takes to build a weapon that kills how many? But I really doubt even the "easy nuke"-grade black marbles can be reliably stopped this way.

comment by MakoYass · 2019-09-03T06:49:44.176Z · LW(p) · GW(p)

That's why I said "fairly reliable". Which is not reliable enough for situations like this, of course, but we don't seem to have better alternatives.

comment by Gurkenglas · 2019-09-03T10:46:40.970Z · LW(p) · GW(p)

Your irregularly scheduled reminder that FAI solves these problems just fine.

comment by Richard_Kennaway · 2019-09-03T13:48:04.610Z · LW(p) · GW(p)
Your irregularly scheduled reminder that FAI solves these problems just fine.

So does magic. One might adapt one of Arthur C. Clarke's laws: Every sufficiently speculative technology is indistinguishable from magic. Even more so than ACC's "sufficiently advanced technology": the latter is distinguished from magic by actually existing. But nobody knows how to make FAI.

comment by Gurkenglas · 2019-09-03T16:11:41.690Z · LW(p) · GW(p)

FAI is more plausible than magic to the point that we don't have to desperately try to make society transparent.

comment by MakoYass · 2019-09-06T00:16:22.297Z · LW(p) · GW(p)

While I took your point well, FAI is not a more plausible/easier technology than democratised surveillance. It may be implemented sooner, since it needs pretty much no democratic support whatsoever to deploy, but it might just as well take a very long time to create.