The Transparent Society: A radical transformation that we should probably undergo
post by mako yass (MakoYass) · 2019-09-03T02:27:21.498Z · LW · GW · 25 comments
Edit (May 2020): The purpose of this document is to examine the end state and ask whether it would be good and stable; it is not to examine the means, lay out a path, or tally the risks of trying. I don't know how clear that was. I am sorry to disappoint anyone who was looking for that. It's a sensible thing to be concerned about, and I hope I'll study it later.
In 1998, David Brin published a vision of a potentially inevitable societal shift towards all-encompassing democratised surveillance. It could not be called a panopticon, because those who patrol the observation house would be as visible as anyone else.
We would be able to watch our watchers.
Privacy would disappear, but many kinds of evil would disappear along with it.
The book was okay. The first few chapters are definitely worth reading, and some of Brin's reflections on the development of the internet are really interesting, but I found the rest meandering and a little unsatisfying. I'm going to summarise my own understanding of radical societal transparency, and why I'm convinced that it would be extremely good, actually, so that I don't have to keep recommending the entire book, which doesn't capture my stance very well.
I'm also going to go over some of the potential problems a transparent society would have, and explain why I'm not yet deterred by them. Some of them could turn out to be lethal to the idea, but that seems unlikely to me so far. I'm eager to explore those doubts until we're sure.
Advantages
It should not surprise anyone that a radically open society would have many advantages. Information is useful. If we know more about each other, we can arrange more trades, and we can trust each other more easily.
In order of importance:
- It would likely prevent most "easy nuke" technological x-risks discussed in Bostrom's black ball paper, The Vulnerable World Hypothesis
- That is to say, if a very harmful technology rapidly emerges, for instance a method for manufacturing a species-ending virus requiring only very common lab equipment, the transparent society would be able to police against it without any special laws. Every government could fail utterly to recognise the threat, and we might still be able to survive it; in this example, biotech workers would simply be able to watch each other's behaviour, and take note if anyone reads about this apocalyptic method, keeps copies of it around, seems especially moody, or has, you know, physically gone to the lab at 3am and started to actuate the method. The probability of the species one day being wiped out by a few unbalanced individuals wielding humanity's inevitably growing powers decreases dramatically.
- The easy nukes concern is most often dismissed with the argument that as technology gets stronger, we will also find new technologies to police its misuse. My answer to that is: yes, that's what the transparent society is. What were you expecting? Some kind of anti-biotech that would cancel the dangerous biotech out? (I do not envy whoever has to think about securing the human body against its many attack vectors.) This is the technology that will police against the misuse of other technologies; now help us to deploy it.
- Bostrom discusses surveillance, but does not discuss the position that a surveillance state might be less inclined to tyranny, and more liveable, if we stopped torturing ourselves with this impossible project of privacy and just let data be free, as data seems to want.
- Preventing crime
- Remember, crime includes rape and trafficking and murder. People sometimes do really, really, unambiguously bad things, and the world would be dramatically improved if they couldn't get away with them any more. If a person does not find that possibility exciting, they might have an affective disorder.
- No doubt, a lot of laws would become too powerful under transparency and would need to be thrown out, but as long as we don't make it a boiled-frog thing, there's plenty of energy around right now to get those laws thrown out, if legislators can be made to realise that we've hit an inflection point and need to react to it. Still, worthy of more discussion.
- Watching the watchers
- Politicians, police, and anyone else in a sensitive public position would be under a lot of pressure to be as open as possible about their dealings. This opens the way to having genuinely trustworthy politicians and police, which is a big deal.
- Positive changes to the legal system as a result of detection becoming easier?
- Parole could be a lot more effective. It would be possible, for the first time, to enforce a sentence/treatment "do not speak with or listen to bad influences through any channel"
- Promoting social pressures to donate a lot more
- Humans have always gone to great expense to signal strength and moral purity. We should hope that this energy could be harnessed for useful ends, as in Raikoth's symbolic beads
- If both individuals and organisations had to be more open about their income and expenses, it would be a lot easier to imagine these pressures coming to bear. If information about people's personal donations were exposed unconditionally, our taboo against discussing them might not be able to hold together. We would not be able to hide our friends' shame about not buying enough bednets.
- That said, I'm confused as to why there is so little social pressure to donate to things as it is. I wonder how much of it is value-dysphoria: knowing that the values we espouse don't quite align with our hearts, everyone knowing it, softening when our friends confide in us that they aren't living up to those values, "It's okay, I understand that it's just not what you sincerely wanted to do." I hope that radical openness will allow us, first, to admit that what is agreed to be good is not always what we as individuals desire to see (to admit that we are not, at heart, altruists, as is plain from the records of our choices). Second, that it would allow us to get closer to figuring out what our real values are, so that we can develop truly humane systems of accountability to pursue those instead.
- Free, complete, and accurate statistics about every facet of human life. Please think of all the science we could do.
- Forcing people to accept and contend with the weirdness of other people, and their own weirdness.
- We would no longer be able to hide from it. This is one thing that makes me hopeful that transparency wouldn't result in the emergence of some new totalitarian normative orthodoxy. There would be heretics everywhere and we'd all be able to hear them (when we choose to) and any crusade short of complete totalitarianism would never be able to completely silence them.
- Enabling trade.
- Automated systems could maintain information about who owns what and how much they seem to use it, and then use that to arrange mutually beneficial trades (or, if you're of the position that transparency might obviate the need for money, think of it this way: it would be easier for us to notice opportunities to make people's lives better by sharing things with them).
- Worrying less about surveillance capitalism/states.
- With less of an imbalance in hoarded data, The People would have just as much surveillance capacity as Google. Though, if the megacorps can analyse the data better than The People can, maybe they still have to worry. More has been written about this, which others could probably recall more easily than I could. Homo Deus anticipated large, transformative effects of Big Data analysis, but I don't remember being moved by any specific claims; maybe Harari cites someone else in those sections? I don't have a copy on hand to check.
Potential Disadvantages
- Even in the most open tribes, humans seem to have an instinct for shyness. I'm not sure we know what happens to humans when they're deprived of the opportunity to do things in private. Maybe it's mostly about sex. I dunno. What are the evolutionary teloses underlying humans' coyness about sex?
- There's infidelity, of course. But it seems to be mostly agreed that it's not good that we have the ability to commit infidelity.
- I could imagine there being a thing about obscuring paternity leading to greater tribal cohesion... but I don't think anything like that exists in any developed country to be protected. Also doesn't seem terribly hard to accommodate under standard transparency technologies.
- I guess I'm not very worried about this. In most tribes, people do most things in the company of others. It is strange to us to share so much, but there's nothing unnatural about it, no reason to think humans would thrive less under it.
- The emergence of new, hyper-strong universal orthodoxies.
- Transparency makes it possible to enforce against even the tiniest transgression against a dominant power, which may make the dominant power incontestable.
- (counterargument: see 'forcing people to accept and contend with the weirdness of other people')
- A weakness that a transparent state's non-transparent enemies could easily exploit to destroy it (assuming the future contains any major wars; since the creation of the atomic bomb it's not clear that we can have major wars any more, but it's still worth considering).
- Information imbalance in war
- Imagine that you're playing a game of chess, and the enemy can see all of your pieces, but you can't see any of theirs. You'd be fucked. But that isn't a legitimate analogy; it would be more like a game of chess where the enemy can see all of your pieces and you can only see a subset of theirs.
The question for me is whether the internal cohesion of a transparent society would make it strong enough to win such a war uphill. To extend the analogy: you can't see all of their pieces all of the time, but if your high-trust society, with its perfect economy of complete information, is able to build more pieces than them, maybe you win anyway. You're at a disadvantage, but you also have this other advantage ready to go. Which one weighs more, the advantage or the disadvantage? I don't know. We'll need to experiment. I'm very eager to do that.
- So, we could come up with some kind of asymmetric-information wargame (a toy simulation sketch appears after this list). Unfortunately, despite being a game designer, I don't personally have much strategic acumen for the logistics of war, and I've never played a realistic wargame. Lots of people have, though! I imagine they'd tend to be interested in a question like this!
- We must wonder why anyone would attack a transparent society when it could demonstrate thoroughly that it is institutionally incapable of being the first one to break the treaty. A transparent state could be far more able to prove nonaggression. When they say they aren't plotting anything, an intelligence agency can see directly that they aren't plotting anything.
- I guess, on reflection, the problems a transparent society has with protecting registered intellectual property are no different from the problems of a closed society. It wouldn't be IP if it weren't circulating in the open. The whole idea of IP is much more closely aligned with radical openness than with closedness: a surreal releasing-yet-protecting of private information that enables conversations, inspiration, and trade that would otherwise be impossible.
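To make the wargame idea concrete, here's a minimal simulation sketch, referenced from the information-imbalance bullet above. Every rule and number in it is an assumption I've invented for illustration: each side commits half its force to a strike each round, an intercepted strike does half damage, the closed side intercepts everything while the open side intercepts only with probability `intel`, and the open side out-produces the closed side by `production_ratio`. A real wargame would need people with actual strategic acumen to design it.

```python
import random

ROUNDS = 100
TRIALS = 5000

def open_side_wins(production_ratio: float, intel: float) -> bool:
    """Simulate one war; return True if the transparent ('open') side
    ends up with more surviving force than the closed side."""
    open_force, closed_force = 10.0, 10.0
    for _ in range(ROUNDS):
        # Production: the open society's perfect economy builds more pieces.
        open_force += production_ratio
        closed_force += 1.0
        # Each side commits half its current force to a strike.
        open_strike, closed_strike = open_force / 2, closed_force / 2
        # The closed side always sees the open strike coming and blunts it.
        closed_force -= 0.5 * open_strike
        # The open side only sometimes sees the closed strike coming.
        open_force -= (0.5 if random.random() < intel else 1.0) * closed_strike
        if open_force <= 0 or closed_force <= 0:
            break
    return open_force > closed_force

for ratio in (1.0, 1.5, 2.0, 3.0):
    wins = sum(open_side_wins(ratio, intel=0.3) for _ in range(TRIALS)) / TRIALS
    print(f"production ratio {ratio:.1f}: open side wins {wins:.0%}")
```

The interesting output is the threshold: how large a production advantage the open side needs before it wins more often than not, as a function of how leaky the closed side's secrecy is.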
Advice
So here's what we should try to do in light of all of that:
- Investigate the problem areas described above and try to resolve the difficult remaining questions in the ways suggested. In summary,
- Figure out whether there is potential for a lastingly destructive social-consensus monoculture.
- Figure out whether a radically open society with a wealth advantage of about one order of magnitude can survive aggression (war or sabotage) from a closed one.
- Figure out what a good legal system would look like in a transparent society. It is likely to be harder, considering that every law would be consistently enforced.
If the answer to those questions is "It'll be fine, go ahead",
- Develop the relevant technologies. Transparent computing (trusted computing, smart contracts, that kind of thing), cheap recording devices, better wireless networks.
- Promote the culture of radical openness. Pursue the dream of a society where honesty is rewarded, that votes in politicians on the basis of who they really are rather than how good they are at acting. Promote socially positive radically open celebrities. Ensure that the support exists.
If the answer turns out to be "no, this would be bad actually"...
You must still try to deploy the constrained forms of global surveillance and policing proposed in Bostrom's black ball paper. It is well documented that we failed to handle nukes, and only an idiot would bet that nukes are the blackest ball that's gonna come out of the urn.
Disarmament still hasn't happened.
As long as the bomb can be hidden, there will remain an indomitable incentive to have the bomb.
Is there a good reason to think we're going to be able to defuse it with anything short of a total abolition of secrecy?
25 comments
comment by Viliam · 2019-09-04T23:23:23.640Z · LW(p) · GW(p)
If someone tried to implement this in real life, I would expect it to get implemented exactly halfway. I would expect to find out that my life became perfectly transparent for anyone who cares, but there would be some nice-sounding reason why the people at the top of the food chain would retain their privacy. (National security. Or there are a few private islands in the ocean where the surveillance is allegedly economically/technically impossible to install, and by sheer coincidence, the truly important people live there.) I would also expect this asymmetry to be abused against people who try to organize to remove it.
You know, just like those cops wearing body cams that mysteriously stop functioning exactly at the moment the recording could be used against them. That, but on a planetary scale.
From the opposite perspective, many people would immediately think about counter-measures. Secret languages, so that you can listen to me talking to my friends, but still have no idea what the topic was. This wouldn't scale well, but some powerful and well-organized groups would use it.
People would learn to be more indirect in their speech, to allow everyone to pretend that anything was a coincidence or misunderstanding. There would be a lot of guessing, and people on the autism spectrum would be at a serious disadvantage.
How would the observed data be evaluated? People are hypocrites; just because you are doing the same thing many other people are doing, and everyone can see it, doesn't necessarily prevent the outcome where you get punished and those other people don't. People are really good at being dumb when you provide them with evidence they don't want to see. Not understanding things you can clearly see would become an even more important social skill. There would still be taboos, and you would not be able to talk about them; not even in privacy, because that wouldn't exist any more.
But for the people who believe this would be great... I would recommend trying the experiment on a smaller scale. To create a community of volunteers, who would install surveillance throughout their commune, accessible to all members of the commune. What would happen next?
↑ comment by Douglas_Knight · 2019-09-05T14:52:55.175Z · LW(p) · GW(p)
The whole point of the book is that the failure mode you envision is going to happen by default. It is not a risk of inverse surveillance because it is already happening.
There is a problem that surveillance increases continuously, not in an abrupt step. At some point we must establish a norm that police turning off their cameras is a crime. The public had no trouble condemning Nixon for his 18-minute gap. But at the moment many police camera systems require positive steps of activation and downloading, which leaves plausible deniability of having just forgotten.
↑ comment by [deleted] · 2019-09-05T01:04:12.161Z · LW(p) · GW(p)
Strong upvoted and would add that we currently live in a world where surveillance is much more common than inverse surveillance, so proponents of a transparent society should, AFAICT, be much more focused on increasing inverse surveillance than surveillance at the moment.
↑ comment by mako yass (MakoYass) · 2019-09-05T10:47:32.310Z · LW(p) · GW(p)
I would expect it to get implemented exactly halfway
Not stopping halfway is a crucial part of the proposal. If they stop halfway, that is not the thing I have proposed. If an attempt somehow starts in earnest and then fails partway through, policy should be that the whole thing is rolled back and undone completely.
Regarding the difficulty of sincerely justifying opening National Security... That's going to depend on the outcome of the wargames. I can definitely imagine an outcome that gets us the claim "Not having secret services is just infeasible", in which case I'm not sure what I'd do. I might end up dropping the idea entirely. It would be painful.
allegedly economically/technically impossible to install
Not plausible if said people are rich and the hardware is cheap enough for the scheme to be implementable at all. There isn't an excuse like that. Maybe they could say something about being an "offline community" and not having much of a network connection, but the data could just be stored in a local buffer somewhere. They'd be able to arrange a temporary disconnection and get away with some things, one time, I suppose, but they'd have to be quick about it.
From the opposite perspective, many people would immediately think about counter-measures. Secret languages
Obvious secret languages would be illegal. It's exactly the same crime as brazenly covering the cameras or walking out of their sight (without your personal drones). I am very curious about the possibilities of undetectable secrecy, but there are reasons to think it would be limited.
I would recommend trying the experiment on a smaller scale. To create a community of volunteers, who would install surveillance throughout their commune, accessible to all members of the commune. What would happen next?
(Hmm... I can think of someone in particular who really would have liked to live in that sort of situation; she would have felt a lot safer... ]:)
One of my intimates has made an attempt at this. It was inconclusive. We'd do it again.
But it wouldn't be totally informative. We probably couldn't justify making the data public, so we wouldn't have to deal much with the omniscient-antagonists thing, and the really difficult questions wouldn't end up getting answered.
One relevant small-scale experiment would be Ray Dalio's hedge fund, Bridgewater; I believe they practice a form of (internal) radical openness, cameras and all. His book is on my reading list.
I would one day like to create an alternative to secure multiparty computation schemes like Ethereum by just running a devoutly radically transparent (a panopticon accessible to external parties) webhosting service on open hardware. It would be a lot simpler: auditing, culture, and surveillance as an alternative to these very heavy, quite constraining crypto technologies. The integrity of the computations wouldn't be mathematically provable, but it would be about as indisputable as the moon landing.
It's conceivable that this would always be strictly more useful than any blockchain world-computer; as far as I'm aware, we need a different specific secure multiparty computation technique every time we want to find a way to compute on hidden information. For a radically transparent webhost, the incredible feat of arbitrary computation on hidden data at near-commodity-hardware efficiency (fully open, secure hardware is unlikely to be as fast as whatever Intel's putting out, but it would be in the same order of magnitude) would require only a little bit of additional auditing.
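As a very rough sketch of what the auditing side of this could look like (an illustration of the tamper-evidence idea, not a design; every name here is invented): the host appends every request, input, and output to a hash-chained public log, and any external party holding a copy can replay the chain and detect tampering.

```python
import hashlib
import json

class TransparentLog:
    """An append-only, hash-chained record of everything the host does."""

    def __init__(self):
        self.entries = []      # the full public record
        self.head = "genesis"  # hash of the latest entry

    def append(self, record: dict) -> str:
        entry = {"prev": self.head, "record": record}
        serialized = json.dumps(entry, sort_keys=True)
        self.head = hashlib.sha256(serialized.encode()).hexdigest()
        self.entries.append(entry)
        return self.head

    def verify(self) -> bool:
        """Any outside auditor can recompute the whole chain from scratch;
        altering or deleting any past entry changes every later hash."""
        head = "genesis"
        for entry in self.entries:
            if entry["prev"] != head:
                return False
            head = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
        return head == self.head

log = TransparentLog()
log.append({"request": "GET /balance/alice", "response": "42"})
log.append({"request": "POST /transfer", "response": "ok"})
assert log.verify()  # every watcher with a copy of the log can run this
```

This gives tamper-evidence rather than mathematical proof of integrity: it rests on enough independent parties holding copies of the log, which is exactly the social, moon-landing sort of indisputability described above.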
comment by mako yass (MakoYass) · 2019-09-08T04:19:55.535Z · LW(p) · GW(p)
In Vitalik Buterin's interview on the 80,000 Hours podcast (https://80000hours.org/podcast/episodes/vitalik-buterin-new-ways-to-fund-public-goods/; I recommend it), he brought up something that evoked a pretty stern criticism of radical transparency.
Most incentive designs rely on privacy, because by keeping a person's actions off the record, you keep the meaning of those actions limited, confined, discrete, knowable. If, on the other hand, a person's vote, say, is put onto a permanent public record, then you can no longer know what it means to them to vote. Once they can prove how they voted to external parties, they can be paid to vote a certain way. They can worry about retribution for voting the wrong way. Things that might not even exist yet, that the incentive designer couldn't account for, now interfere with their behaviour. It becomes so much harder to reason about systems of agents; every act affects every other act; what hope have we of designing a robust society under those conditions? (Still quite a lot of hope, IMO, but it's a noteworthy point.)
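A toy illustration of the vote-buying half of that point, with every name invented: once votes sit on a permanent public record keyed by voter, a briber doesn't need the voter's cooperation or trust, because the record itself is the receipt.

```python
public_ledger: dict[str, str] = {}  # voter_id -> vote, readable by anyone, forever

def cast_vote(voter_id: str, vote: str) -> None:
    public_ledger[voter_id] = vote

def briber_payout(voter_id: str, desired_vote: str) -> int:
    # The briber simply checks the public record before paying.
    return 100 if public_ledger.get(voter_id) == desired_vote else 0

cast_vote("alice", "candidate_a")
print(briber_payout("alice", "candidate_a"))  # 100: the bribe is enforceable

# Under a secret ballot there is no ledger to query, so the briber cannot
# verify compliance and the incentive collapses; the same asymmetry powers
# retribution for voting the "wrong" way.
```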
comment by Douglas_Knight · 2019-09-05T02:50:37.448Z · LW(p) · GW(p)
every law would be consistently enforced.
Why?
It is incredibly common today for massive arguments to break out over video, half the world saying that it obviously yields one conclusion and the other half saying it refutes it.
How about the police just ignore the law? It happens all the time today, completely publicly. Total transparency would make it difficult for two officers to get together and conspire. But they probably rarely conspire today. A video of one of them saying "this isn't a violation" and the other replying "nope" would shed no more light than today.
↑ comment by mako yass (MakoYass) · 2019-09-05T10:59:22.625Z · LW(p) · GW(p)
It is incredibly common today for massive arguments to break out over video, half the world saying that it obviously yields one conclusion and the other half saying it refutes it.
Give examples. Often there is a lot of context missing from those videos and that is the problem. People who intentionally ignore readily available context will have no more power in a transparent society than they have today.
My concern there wasn't that some laws might not get consistently enforced; consistent enforcement is the thing I am afraid of. I'm not sure about this, but I've often gotten the impression that our laws were not designed to work without the mercy of discretionary enforcement. The whole idea of freedom from unwarranted search suggests to me that laws were designed under the expectation that they would generally not be enforced within the home. Generally, when a core expectation is broken, the results are bad.
comment by gilch · 2019-09-03T06:20:00.843Z · LW(p) · GW(p)
Privacy is a great protection against many other abuses, but I'm not sure it's a categorical good. Maybe there are good places in the moral landscape with transparent societies. But getting to there from here means either finding other ways to mitigate those abuses first, or crossing deep valleys where bad things happen a lot.
↑ comment by mako yass (MakoYass) · 2019-09-03T06:44:46.790Z · LW(p) · GW(p)
Which abuses, and why would those be hard to police once they've been dragged out into the open?
↑ comment by gilch · 2019-09-04T01:20:51.921Z · LW(p) · GW(p)
Information is not the only kind of power and information asymmetry is not the only kind of power asymmetry. How much does it help that you can watch what the police are doing when they still have all the guns? Maybe not such an issue in America, but what about Hong Kong?
Even if you have equal access to raw information, you wouldn't necessarily have equal ability to process it. Minorities can still be unfairly oppressed by majorities, even when everyone knows they're doing it. There's an ugly outrage/political correctness culture on Twitter and increasingly in academia that mobs anyone they notice who steps out of line. These people often use their real names. How do they get away with this abuse when everyone can see them doing it? I can speculate that they're more coordinated as a group than the individuals they target. If we give both sides more information, how far does it go to correct the power imbalance? Or does it just make things worse because the mob has more resources to utilize it already? Anonymity is a great defense against this abuse. Privacy helps a lot even without full anonymity. That's why the mob doxxes their victims when they can.
The general sanity waterline is currently really ridiculously low. More transparency might help to some degree, but if your epistemology is broken, more information doesn't help. It just gives you more ammunition to shoot your own foot with.
↑ comment by ChristianKl · 2019-09-04T07:11:17.868Z · LW(p) · GW(p)
How much does it help that you can watch what the police are doing when they still have all the guns? Maybe not such an issue in America, but what about Hong Kong?
Public officials in the US very seldom get punished, while in China it's much easier to throw a public official into prison.
China does a lot of public opinion management because public opinion matters to powerful people.
comment by mako yass (MakoYass) · 2022-06-11T23:27:03.195Z · LW(p) · GW(p)
Wondering if radical transparency about (approximate) wealth, plus legalizing discriminatory pricing, would sort of steadily, organically reduce inequality, to an extent that would satisfy anyone.
Price discrimination is already all over the place; people just end up doing it in crappy ways, often by artificially crippling the cheaper versions of their products. If they were allowed to just see and use estimates of each customer's wealth or interests, the incentives to cripple cheap versions would become negative, and perhaps more people would get the complete feature set.
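As a hedged sketch of what wealth-indexed pricing could look like; the formula, the parameters, and the sublinear exponent are entirely my own assumptions, chosen for illustration:

```python
def personalized_price(base_price: float, wealth_estimate: float,
                       median_wealth: float, exponent: float = 0.5) -> float:
    """Scale the price sublinearly with estimated wealth, with a floor so
    the cheap version is still a complete, viable product to sell."""
    ratio = max(wealth_estimate, 0.01 * median_wealth) / median_wealth
    return max(0.25 * base_price, base_price * ratio ** exponent)

# A student and a professional engineer buying the same software seat:
print(personalized_price(100.0, wealth_estimate=20_000, median_wealth=60_000))   # ~57.7
print(personalized_price(100.0, wealth_estimate=240_000, median_wealth=60_000))  # 200.0
```

Under something like this, the seller's incentive is to ship everyone the full feature set and let the price carry the discrimination, rather than crippling the cheap version.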
Although many forms of discriminatory pricing promote fairness (for instance, charging professional engineers more for CAD software than students), others might promote unfairness (generally: charging more to people who have access to fewer alternatives?). So I guess a lot of this rests on what the structures of conscious capitalism do. Conscious capitalism is whatever thing you get when information about the externalities of production is made clearer. It has not been tried. I imagine it would lead to things like effective altruism: philanthropic clubs that, through natural community processes, create beneficial social accountability pressures for people to live up to their own values, where they can.
I'm not completely sure how strong or lucid they will be.
comment by Pattern · 2019-09-04T06:09:38.140Z · LW(p) · GW(p)
End note: this was really well argued. While practical details*, and how to ensure that pursuing such a course wouldn't lead to a Panopticon state instead of a Transparent state, remain open, this was a great piece, and really persuasive. (The rest of this comment was written as I read it.)
There have been some posts on votes as "Yays/Boos". I upvoted this because I appreciated this discussion/the way these arguments were made, although I am against the idea being argued for.
preventing crime
This would be amazing. I'm worried about it preventing thoughtcrime.
It would be possible, for the first time, to enforce a sentence/treatment "do not speak with or listen to bad influences through any channel"
Our world still has dictatorships.
that it would allow us to get closer to figuring out what our real values are, so that we can develop truly humane systems of accountability to pursue those instead.
Sounds good. I'm worried about it enabling truly inhumane systems.
Forcing people to accept and contend with the weirdness of other people, and their own weirdness.
Good tools for aggregating information would be key.*
Enabling trade.
My first complaint here was that it would enable stealing. But if everything was observed, then maybe people wouldn't get away with it. Now I'm wondering how this degree of transparency could be achieved.
Transparency makes it possible to enforce against even the tiniest transgression against a dominant power, which may make the dominant power incontestable.
I'm not sure how this trades off against the reverse being available - the question is whether people are able to coordinate against powers (which are probably already internally coordinated) which transgress. How would things play out when both sides can see everything the other is doing?
Crossing the gap between "all is seen" and "all is known" seems to be key overall.*
It's tempting to me to propose doing a thing where black has to deal with some uncertainty about the position of its own pieces, to reflect the awkward realities of not being a transparent society, but I don't think that would be charitable.
Chess isn't a perfect analogy because it's about two players versus each other. Also keep in mind surveillance is still possible - imagine the transparent side versus the panopticon side (those in power see all).
A transparent state could be far more able to prove nonaggression.
But it's harder to bluff. ("Our nukes are set to automatically fire on you if they fire on us." "No, they're not.")
I'm not sure whether "not being able to keep technological secrets" counts as a significant weakness. The scarce asset is generally not theory; theory is hard to protect; the scarce asset is usually practitioners.
An interesting viewpoint.
The problems a transparent society has with protecting registered intellectual property are no different from the problems of a closed society.
Didn't follow this.
Figure out what a good legal system would look like in a transparent society. It is likely to be harder, considering that every law would be consistently enforced.
Not sure about harder. This seems like a benefit.
that votes in politicians on the basis of who they really are rather than how good they are at acting.
Not actually the biggest fan of this - I want a better world, but I have to ask 'why don't we have a better one already?' (First past the post is a terrible voting system.)
Figure out whether
A piece which proposes a radical change, and checks whether it would go bad before trying it. Phenomenal.
*These are tied together.
comment by gilch · 2019-09-03T06:13:37.853Z · LW(p) · GW(p)
I'm not convinced that radical transparency would save us from a black marble. Even with transparency, we might destroy ourselves before we see it coming, due to "failure of imagination". And even in scenarios where we could have seen it coming, that doesn't mean we will. Internet advertising is competing so hard for our attention right now that people are burning out from it. More information might just make that worse. Given the choice, people will look at what's most interesting, even if it's not the most important. Maybe we'd stop a few bad actors, but it only takes one that escapes notice for a while.
↑ comment by mako yass (MakoYass) · 2019-09-03T06:42:15.486Z · LW(p) · GW(p)
Regarding the overabundance of information, we should note that a lot of monitoring will be aided by a lot of automated processes.
The internet's tendency to overconsume attention... I think that might be a temporary phase, don't you? We are all gorging ourselves on candy. We all know how stupid and hollow it is and soon we will all be sick, and maybe we'll be conditioned well enough by that sick feeling to stop doing it.
Personally, I've been thinking a lot lately about how lesswrong is the only place where people try to write content that will be read thoroughly by a lot of people over a long period of time. I don't think we're doing well at that, but I think the value of a place like this is obvious to a lot of people. We will learn to focus on developing the structures of information that last for a long time; or at least, the people who matter will learn.
↑ comment by ChristianKl · 2019-09-03T09:34:27.091Z · LW(p) · GW(p)
I don't think LessWrong is unique in that regard. Wikipedia is strongly focused on it. The StackExchange network also has a lot of content that's intended to be available in the future.
↑ comment by gilch · 2019-09-04T01:47:11.094Z · LW(p) · GW(p)
My point wasn't that internet advertising in particular would be the cause of our inattention, but that humans have real limitations when it comes to processing information, with that being one salient example. We evolved in small bands of maybe fifty individuals. Our instincts cannot handle interactions in larger groups correctly. We have compensated to a remarkable degree via learned culture, but with some obvious shortcomings. More information would only amplify these problems.
I agree automation has a role to play in information processing, but that can amplify distortions on its own. Personalized search or divisive filter bubbles? Racist algorithms. Etc.
↑ comment by mako yass (MakoYass) · 2019-09-03T06:19:27.930Z · LW(p) · GW(p)
Did I say that? If so, I didn't mean to. The only vulnerabilities I'd expect it to protect us from fairly reliably are the "easy nukes" class. You mention the surprising-strangelets class, which it would do very little for.
↑ comment by gilch · 2019-09-03T06:42:19.459Z · LW(p) · GW(p)
A black marble is any invention that would kill the civilization that invents it by default, but perhaps not inevitably. Maybe you intended gradations of the concept beyond that? Maybe how much time it takes to build a weapon that kills how many people? But I really doubt even the "easy nuke"-grade black marbles can be reliably stopped this way.
↑ comment by mako yass (MakoYass) · 2019-09-03T06:49:44.176Z · LW(p) · GW(p)
That's why I said "fairly reliably". Which is not reliable enough for situations like this, of course, but we don't seem to have better alternatives.
↑ comment by Gurkenglas · 2019-09-03T10:46:40.970Z · LW(p) · GW(p)
Your irregularly scheduled reminder that FAI solves these problems just fine.
↑ comment by Richard_Kennaway · 2019-09-03T13:48:04.610Z · LW(p) · GW(p)
Your irregularly scheduled reminder that FAI solves these problems just fine.
So does magic. One might adapt one of Arthur C. Clarke's laws: Every sufficiently speculative technology is indistinguishable from magic. Even more so than ACC's "sufficiently advanced technology": the latter is distinguished from magic by actually existing. But nobody knows how to make FAI.
↑ comment by Gurkenglas · 2019-09-03T16:11:41.690Z · LW(p) · GW(p)
FAI is more plausible than magic to the point that we don't have to desperately try to make society transparent.
↑ comment by mako yass (MakoYass) · 2019-09-06T00:16:22.297Z · LW(p) · GW(p)
While I took your point well, FAI is not a more plausible/easier technology than democratised surveillance. It may be implemented sooner due to needing pretty much no democratic support whatsoever to deploy, but it might just as well take a very long time to create.