Do you consider perfect surveillance inevitable?
post by samuelshadrach (xpostah) · 2025-01-24T04:57:48.266Z · LW · GW · 4 comments
This is a question post.
Contents
Answers: Noosphere89 (6), Dagon (5), StartAtTheEnd (4), Anon User (4), whestler (3), Benjy_Forstadt (2)
4 comments
A lot of my recent research work focuses on:
1. building the case for why perfect surveillance is becoming increasingly hard to avoid in the future
2. thinking through the implications of this, if it happened
When I say perfect surveillance, imagine that everything your eyes see and your ears hear is broadcast 24x7x365 to YouTube (and its equivalents in countries where YouTube is banned), and that this is true for all 8 billion people.
I'm unsure whether I should devote more of my research time to 1 or 2. If lots of people buy the hypothesis in 1, I'd rather devote time to 2. If people don't buy the hypothesis in 1, I might want to spend time on 1 instead.
Hence I wish to solicit opinions. Do you consider perfect surveillance inevitable? Why or why not?
Answers
Answer by Noosphere89
Yes, by default.
I'd drop the word "inevitable", but barring massive change, like a technological regression or the passage of very strong laws (ones that survive the AI era), near-perfect surveillance is very likely to happen, and the safe assumption is that nothing you do will be hidden by default.
I'm not addressing whether surveillance is good or bad.
↑ comment by samuelshadrach (xpostah) · 2025-01-25T02:36:28.801Z · LW(p) · GW(p)
Thanks for letting me know
Answer by Dagon
I avoid terms (and concepts) like "inevitable". There are LOTS of unknowns, and many future paths that go through this, or not. Scenario-based conditionality (what would lead to this, and what would lead elsewhere) seems more fruitful.
Perfect surveillance is the default for electronic intelligence - logging is pretty universal. I think this is likely to continue in future universes where most people are electronic.
I think the answer is "Mu" for universes with no people in them.
I think the likely path is "never perfect, but mostly increasing over time" for universes with no singularity or collapse.
I'd love to hear more about implications of the CURRENT level of observation. Things that are just now feasible, and the things that are promoting or holding them back. For instance, individual facial recognition got a wave of reporting a few years ago, and I honestly don't know if it quietly became universal or if the handwringing and protests actually worked to keep it only in very controlled and visible places (like border crossings).
↑ comment by Milan W (weibac) · 2025-01-24T22:19:31.245Z · LW(p) · GW(p)
I'd love to hear more about implications of the CURRENT level of observation
I have a feeling that the current bottleneck is data integration [LW · GW] rather than data collection.
Replies from: Dagon
↑ comment by Dagon · 2025-01-24T22:38:35.455Z · LW(p) · GW(p)
I think both, by a long shot. I estimate I spend over half my time outside of easy video surveillance (in a room without a webcam or phone pointed in a useful direction, or outdoors out of easy LOS of a traffic or business cam), and a slightly different half for audio. For neither of these is high-fidelity POV data available at all, as described in the post.
For those times when I AM under surveillance, the quality is low and the integration is minimal. There are legal and technical challenges for anyone to use it against me. And it's VERY easy to find times/places where I'm not being recorded when I choose to.
↑ comment by samuelshadrach (xpostah) · 2025-01-25T03:02:54.241Z · LW(p) · GW(p)
Yes, I agree with this. As of 2025, data collection and integration both remain unsolved. "Solving" them often just means grunt work (like the NSA improving their codebase to analyse stuff); it's not always a big research challenge.
↑ comment by samuelshadrach (xpostah) · 2025-01-25T02:44:02.876Z · LW(p) · GW(p)
Hi Dagon
Thanks for this reply.
Predicting and planning multiple scenarios is a good idea; maybe I should do this.
It's difficult to pinpoint what "current level" implies. As long as there are missing pieces, it's arguably still in the future, not the present. For example, at a technical level there may be no difficulty in hiring 10,000 drone pilots to surveil a city, using off-the-shelf drones and the existing workforce of drone pilots. But the reality is that this has not happened yet in any city, and there might be reasons why.
If "current level" means only the things that have already been deployed, then reading the current news already gives you a fair description of the consequences. Otherwise there is again guesswork about how data will be collected and used in hypothetical futures, not how it is being used at present.
If you could elaborate on what you mean by "current level", that would be good. For instance, do you mean cases where the data is already being collected but not used, and you want to forecast the consequences of it being used?
Replies from: Dagon
↑ comment by Dagon · 2025-01-25T21:55:38.745Z · LW(p) · GW(p)
Yes, I mean the current deployed level. News hasn't really covered anything major in the last few years on the topic, and I don't know if it's stagnated or the reporting has just given up.
Replies from: xpostah
↑ comment by samuelshadrach (xpostah) · 2025-01-26T03:10:15.625Z · LW(p) · GW(p)
Oh okay
I think there are news publications and independent people covering these issues; they're just not the most popular ones. If you follow the right people you'll get the latest news.
I understand there’s value in someone (maybe me, maybe AI) collecting and summarising all the news in one place. Thanks for the suggestion.
Stated without endorsement:
Websites, interviews, books, Twitter handles of:
Freedom of the Press Foundation
Signal
Wikileaks
Freedom House
DEF CON conferences
Edward Snowden, Julian Assange, Meredith Whittaker, Ross Ulbricht, Moxie Marlinspike, Matthew Green
There are also publications on foreign affairs and the health of democracy in various countries, from which you can extract the relevant articles.
Lastly, Hacker News is also worth following.
Answer by StartAtTheEnd
I have considered automated mass surveillance likely to occur, and have tried to prevent it, for about 20 years now. It bothers me that so many people don't have enough self-respect to feel insulted by the infringement of their privacy, and that many people are so naive that they think surveillance is for the sake of their safety.
Privacy has already been harmed greatly, and surveillance is already excessive. And let me remind you that the safety we were promised in return didn't arrive.
The last good argument against mass surveillance was "They cannot keep an eye on all of us", but I think modern automation and data processing have defeated that argument (people have just forgotten to update their cached stance on the issue).
Enough ranting. The Unabomber argued for why increases in technology would necessarily lead to reduced freedom, and I think his argument is sound from a game-theory perspective. Looking at the world, it's also trivial to observe this effect, while it's difficult to find instances in which the number of laws has decreased, or in which privacy has been won back (the same applies to regulations and taxes; many things have a worrying one-way tendency). The end-game can be predicted with simple extrapolation, but if you need an argument, it's that technology is a power-modifier, and that there's an asymmetry between attack and defense (the ability to attack grows faster, which I believe is what caused the MAD stalemate).
I don't think it's difficult to make a case for "1", but I personally wouldn't bother much with "2" - I don't want to prepare myself for something when I can help slow it down. Hopefully web 3.0 will make smaller communities possible, resisting the pathological urge to connect absolutely everything together. By then, we can get separation back, so that I can spend my time around like-minded people rather than being moderated to the extent that no group in existence is unhappy with my behaviour. This would work out well unless encryption gets banned.
The maximization [? · GW] of functions leads to the death of humanity (literally or figuratively), but so does minimization (I'm arguing that pro-surveillance arguments are moral in origin and that they make a virtue out of death).
↑ comment by samuelshadrach (xpostah) · 2025-01-25T02:51:03.008Z · LW(p) · GW(p)
Thanks for this reply. You do seem to be thinking along lines similar to mine, focusing on where the incentives lead in the long term, not just the short term.
Can you identify the specific arguments from ISAIF that you find persuasive on why future humans will have reduced freedom?
I agree that since the Enlightenment it has become easier and easier to destroy a region of matter (using some sort of weapon or explosive) and harder to defend it against being destroyed. (Except in a game-theory way, where you commit to destroying something else in return.)
I'm curious how long you think you will be able to slow it down and what your ideas for doing so are. I feel most people in the web3 space are not taking internet anonymity as seriously as it needs to be taken in order to succeed at their self-stated goals. (I used to work in the cryptocurrency space.) My linked post has some thoughts on what work I'd consider actually moving the needle in this space. For example, I'd prefer machines inside Faraday cages with locally manufactured (non-backdoorable) radios that can audit every single internet packet entering and exiting a machine. I don't see many people in the web3 space thinking along these lines.
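For concreteness, here is a minimal sketch of what owner-side packet auditing could look like. This is an illustration only: scapy and the interface name "eth0" are my assumptions, not part of the proposal.

```python
# Minimal owner-side packet audit sketch (illustrative assumptions:
# scapy, interface name "eth0"; requires root privileges).
from scapy.all import IP, sniff

def log_packet(pkt):
    # Log source, destination, and size of every IP packet seen.
    if IP in pkt:
        print(f"{pkt[IP].src} -> {pkt[IP].dst} len={len(pkt)}")

# store=False avoids buffering every packet in memory.
sniff(iface="eth0", prn=log_packet, store=False)
```

Note that a sniffer running on the machine itself still trusts that machine's hardware and OS; the point of the external, locally manufactured radio is to remove exactly that trust assumption.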
↑ comment by StartAtTheEnd · 2025-01-25T19:24:02.233Z · LW(p) · GW(p)
Sorry in advance for the wordy reply.
Can you identify the specific arguments from ISAIF that you find persuasive
Here's my version (which might be the same; I take responsibility for any errors, but claim no credit for any overlap with Ted's argument):
1: New technologies seem good at first/on the surface.
2: Now that something good is available, you need to adopt it (or else you're putting yourself or others at a disadvantage, which social forces will punish you for).
3: Now that the new technology is starting to become common, people find a way to exploit/abuse it. This is because technology is neutral; it can always be used for both good and bad things, you cannot separate the two.
4: In order to stop abuse of said technology, you need to monitor its use, restrict access with proof of identity, regulate it, or create new and even stronger technology.
5: Now that you're able to regulate the new technology, you must do so. If you can read people's private emails, and you choose not to, you will be accused of aiding pedophiles and terrorists (since you could arguably have caught them if you did not respect their privacy).
This dynamic has a lot of really bad consequences, which Ted also writes about. For instance, once gene editing is possible, why would we not remove genes which result in "bad traits"? If you do not take actions which make society safer, you will be accused of making society worse. So we might be forced to sanitize even human nature, making everyone into inoffensive and lukewarm drones (as the traits which can result in great people and terrible people are the same; the good and the bad cannot be separated. This is why new games and movies are barely making any money, and it's why Reddit is dying. They removed the good together with the bad).
I’m curious how long you think you will be able to slow it down and what your ideas for doing so are
I can slow it down for myself by not engaging with these new technologies (IoT, subscription-based technology, modern social media, etc.), by using fringe privacy-based technologies, or simply by not making noise. (If nothing you say escapes the environment in which you said it, you're likely safe. If what you said is not stored for longer periods of time, you're likely safe. If the environment you're in is sufficiently illegible, information is lost and you cannot be held accountable.)
I'm also doing what I can to teach people that:
1: Good and Bad cannot be separated. You can only have both of them or none of them. I think this is axiomatically true, which suggests that the Waluigi Effect occurs naturally (just like intrusive thoughts).
2: You cannot have your cake and eat it too. You can have privacy OR safety, you cannot have both. You cannot have a backdoor that only "the good guys" can access. You cannot have a space where vulnerable groups can speak out, without also having a space where terrorists can discuss their plans. You cannot have freedom of speech and an environment in which nothing offensive is said.
Most people in the web3 space are not taking internet anonymity as seriously as it needs to be
This is possibly true, but the very design of web3 (decentralization, encryption) makes it so that privacy is possible. If your design makes it so that large corporations cannot control your community, it also makes it so that the government is powerless against it, as these are equal on a higher level of abstraction.
That can audit every single internet packet entering and exiting a machine
This sounds like more surveillance rather than less. I don't think this is an optimal solution. We need to create something in which no person is really in charge, if we want actual privacy. The result will look like the Tor network, and it will have the same consequences (like illegal drug trade). If a platform is not a safe place to sell drugs, it's also not a safe platform to speak out against totalitarianism or corruption, and it's also not a safe place to be a minority, and it's also not a safe place to be a criminal. I think these are equivalent; you cannot separate good and bad.
I like talking in real life, as no records are kept. What did I say, what did I do? Nobody knows, and nobody will ever know. I don't have to rely on trust or probability here. Like with encryption, I have mathematical certainty, and that's the only kind of certainty which means anything in my eyes.
Self-destructing messages are safe as well, as is talking on online forums which will cease to exist in the future, taking all the information with them (what did I say on the voice chat of Counter Strike 1.4? Nobody knows)
Communities like LW have cognitive preferences for legibility, explicitness, and systematizing, but I think the reason why Moloch [? · GW] did not bother humanity before the 1800s is that it couldn't exist. It seems like game-theoretic problems are less likely to occur when players don't have access to enough information to be able to optimize. This all suggests one thing: that information (and openness of information) is not purely good. It's sometimes a destructive force. The solution is simple to me: minimize the storage and distribution of information.
edit: Fixed a few typos
Replies from: xpostah
↑ comment by samuelshadrach (xpostah) · 2025-01-26T03:23:20.651Z · LW(p) · GW(p)
Thanks for the reply.
That is an impressive argument, tbh. Let me try rephrasing it a different way: technology gives people more ability to predict and control systems, often by removing variance in outcomes. When the system is basically the entire environment humans interact with, this also reduces variance in the outcomes of what humans do and what humanity does.
I guess my hope lies in being able to do things that reduce the variance of humanity's future as a side effect (don't go extinct or run experiments that risk extinction, for example) but don't necessarily reduce the variance of individual human beings' outcomes as strongly. For instance, democracy in some ways reduces the variance of outcomes for the society (no dictator can come to power, stable law and order, possibly stable economic growth) but increases the variance of outcomes for the individual (social mobility is possible, more freedom to pick careers, marry, travel, read, write, etc.).
Sorry, maybe my example on web3 wasn't clear. I mean that you, as the owner of your machine, can audit what packets are entering or exiting it, even if you don't trust the microprocessor not to have a hardware backdoor.
I agree that as of today it's easier to protect an in-person conversation than a digital one from ever being recorded. (Although I will say even ideas created in another person's mind are an information leak; you need to really think through who else that person is going to interact with, what their computer opsec is, etc.) Even one mistake blows up your entire secrecy in the presence of a smart and motivated adversary. I have this idea of a community that completely isolates itself geographically for decades, ensuring nobody ever comes into contact with people from the outside. I'm curious about your thoughts on that.
Minimize the storage and distribution of information.
I get where you’re coming from but again, how do we actually do this? I don’t mean at a technical level, I mean politically.
Replies from: StartAtTheEnd
↑ comment by StartAtTheEnd · 2025-01-26T17:50:34.302Z · LW(p) · GW(p)
Predict and control... I'm not sure about that, actually. The world seems to be a complex system, which means that naive attempts at manipulating it often fail. I don't think we're using technology to control others in the manner that we can choose their actions for them, but we are decreasing the diversity of actions that one can take (for instance, anything which can be misunderstood seems to be a no-go now, as strangers will jump in to make sure that nothing bad is going on, as if it were their business to get involved in other people's affairs). So our range of motion is reduced, but it's not locked to a specific direction which results in virtue or something.
I don't think that the world can be controlled, but I also think that attempts at controlling it by force are mistaken, as there are more upstream factors which influence most of society. For instance, if your population is Buddhist, they will believe that treating others well is the best thing to do, which I think is a superior solution to placing CCTVs everywhere. The best solutions don't need force, and the ones which use force never seem optimal (consider the war on drugs, the taboo on sexuality, attempts at stopping piracy, etc.). I think the correct set of values is enough (but again, the receiver needs to agree that they're correct voluntarily). If everyone can agree on what's good, they will do what's good, even if you don't pressure them into doing so.
I'm also keeping extinction events in mind and trying to combat them; I just do so from a value perspective instead. I'm opposed to creating AGIs, and we wouldn't have them if everyone else were opposed as well. Some people naively believe that AGIs will solve all their problems, and many don't place any special value on humanity (meaning that they don't resist being replaced by robots). But there are also many people like me who enjoy humanity itself, even in its imperfection.
I mean that you, as the owner of your machine, can audit what packets are entering or exiting it
This is likely possible, yeah. But you can design things in such a way that they're simply secure - such that it's impossible for them not to be. How do you prevent a lock from being hacked? You keep it mechanical rather than digital. I don't trust websites which promise to keep my password safe, but I trust websites which don't store my password in the first place (they could run it through a one-way hash). Great design makes failure impossible (e.g. atomic operations in banking transfers).
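As an illustration of the one-way-hash point, a minimal sketch using Python's standard library (the parameter choices here are just reasonable defaults, not a recommendation):

```python
import hashlib, hmac, os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 is a deliberately slow one-way hash, so a leaked
    # digest is expensive to brute-force and cannot be inverted.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(hash_password(password, salt), stored)

salt = os.urandom(16)  # per-user random salt, stored next to the hash
stored = hash_password("hunter2", salt)
assert verify("hunter2", salt, stored)
assert not verify("wrong", salt, stored)
```

The site never needs to keep the password itself after signup, which is the design property being pointed at: there is nothing to leak.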
I’m curious about your thoughts on that.
This would likely result in security, but it comes at a huge cost as well. I feel like there are better solutions, and not just for a specific organization, but for everyone. You could speak freely on the internet just 20 years ago (freely enough that you could tell the nuclear launch codes to strangers if you wanted to), so such a state is still near in a sense. Not only was it harder to spy on people back then, fewer people even wanted to do such a thing, and this change in mentality is important as well. I'm not trying to solve the problem in our current environment; I want to manipulate our environment into one in which the problem doesn't exist in the first place.
We just have to resist the urge to collect and record everything (this collection is mainly done by malicious actors anyway, and mainly because they want to advertise to you so that you buy their products). You could go on vacation in a country which considers it bad taste to pry into others' affairs and be more or less immune thanks to that alone, so you don't even need to learn opsec; you just need to be around people who don't know what that word means. You could also use VPNs which keep no logs (if they're not lying, of course), as nothing can be leaked if nothing is recorded.
Sadly, the same forces which destroyed privacy are trying to destroy these methods. It's the common belief that we need to be safe, and that in order to be safe we need certainty and control. I don't even think this is purely ideology; I think it's a psychological consequence of anxiety (consider 'control freaks' in relationships as well). Society is dealing with a lot of problems right now which didn't exist in the past, not because they didn't happen, but because they weren't considered problems. And if we don't consider things to be problems, then we don't suffer from them, so the people who are responsible for creating the most suffering in life are those who point at imperfections (like discrimination and strict beauty standards) and convince everyone that life is not worth living until they're fixed.
Finally, people can leak information, but human memory is not perfect, and people tend to paraphrase each other, so "he said she said" situations are inherently difficult to judge. You have plausible deniability since nobody can prove what was actually said. I think all ambiguity translates into deniability, which is also why you can sometimes get away with threatening people - "It would be terrible if something bad happened to your family" is a threat, but you haven't actually shown any intent to break the law. Ambiguity is actually what makes flirting fun (and perhaps even possible), but systematizers and people in the autism cluster tend to dislike ambiguity; it never occurs to them that both ambiguity and certainty have pros and cons.
I mean politically
Politics is a terrible game. If possible, I'd like to return society to the state it had before everyone cared too much about political issues. Since this is not an area where reasonable ideas work, I suggest just telling people that dictators love surveillance (depending on the ideology of the person you're talking to, make up an argument for how surveillance is harmful). The consensus on things like censorship and surveillance seems to depend on the ideology one perceives it to support. Some people will say "We need to get rid of anonymity so that we can shame all these nazis!", but that same sort of person was strongly against censorship 13 years ago, because back then censorship was thought to be what the evil elite used to oppress the common man. So the desire to protect the weak resulted in both "censorship is bad" and "censorship is good" being common beliefs, and it's quite easy for the media to force a new interpretation since people are easily manipulated.
By the way, I think "culture war" topics are against the rules, so I can only talk about them in a superficial and detached manner. Vigilantes in the UK are destroying cameras meant to automate fining people, and as long as mentalities/attitudes like this dominate (rather than the belief that total surveillance somehow benefits us and makes us safe), I think we'll be alright. But thanks to technological development, I expect us to lose our privacy in the long run, for the simple reason that people will beg the government to take away their rights.
Replies from: xpostah
↑ comment by samuelshadrach (xpostah) · 2025-02-16T10:32:58.436Z · LW(p) · GW(p)
Sorry for delay in reply.
I think it’ll help this discussion if you sketch an alternate trajectory for the future that you actually believe is possible to achieve with non-trivial likelihood.
Can you convert all 8 billion people to Buddhism?
Can you convince all 8 billion people to stop using information technology?
Can you convince all people who have surveillance powers to not use them, and find some alternate method of reducing their anxiety?
(This is assuming anxiety is what is driving them to use the power in the first place, which is already a shaky assumption IMO)
My research does contain implicit assumptions such as “I don’t know how to convert all 8 billion people to any single value system, and I’m not seriously trying to do this because I don’t know how to do it.”
↑ comment by StartAtTheEnd · 2025-02-17T00:41:10.127Z · LW(p) · GW(p)
All good! I wrote a long response after all.
But what future do you value? Personally, I don't want to decrease the variance of life, but I do want to increase its stability.
In either case, I think my answer is "Invest in the growth and maturation of the individual, not in the external structures that we crudely use to keep people in check"
Can you convince all people who have surveillance powers to not use them
No, but we can create systems in which surveillance is impossible from an information-theoretic perspective. Web 3.0 will likely do this unless somebody stops it, and there are ways to stop it too (you could for instance argue that whoever creates these systems is aiding criminals and terrorists).
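(To illustrate what "impossible from an information-theoretic perspective" means: the textbook case is the one-time pad, where the ciphertext carries literally zero information about the message without the key. A minimal sketch, purely illustrative; deployed web3 systems actually rely on computationally secure encryption, since one-time-pad key distribution doesn't scale:

```python
import os

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # The key is truly random, as long as the message, and never
    # reused: these conditions buy information-theoretic secrecy.
    key = os.urandom(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # XOR with the same key undoes the encryption.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = otp_encrypt(b"meet at dawn")
assert otp_decrypt(key, ct) == b"meet at dawn"
```

Without the key, every plaintext of the same length is equally consistent with the ciphertext, so no amount of computing power helps an eavesdropper.)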
Anxiety seems to be why individual people prefer transparency of information, but it's not why the system prefers it. The system merely exploits the weakness of the population to legitimize its own growth and to further its control of society.
Converting everyone to a single value system is not easy. But we can improve the average person and thus improve society that way, or we can start teaching people various important things so that they don't have to learn them the hard way. One thing I'd like to see improved in society is parenting; it seems to have gotten worse lately, and it's leading to deterioration of the average person and thus a general worsening of society.
A society of weak people leads to fear, and fear leads to mistrust, which leads to low-trust societies. By weak, I mean people who run away from trauma rather than overcoming it. You simply need to process uncomfortable information successfully to grow; it's not even that difficult, it just requires a bunch of courage. We're all going to die sometime, but not all of us suffer from this idea and seek to run away by drinking or distracting ourselves with entertainment. Sometimes it's even possible to turn unpleasant realities into optimism and hope, and this is basically what maturity and development are.
Replies from: xpostah
↑ comment by samuelshadrach (xpostah) · 2025-02-20T16:50:55.455Z · LW(p) · GW(p)
Thanks for answering.
I think my answer is "Invest in the growth and maturation of the individual, not in the external structures that we crudely use to keep people in check"
This sounds good as an ideal; it is one way to go about it. If you know of any high-leverage ways for a few people to push society in this direction, I'd be interested in hearing about them.
One thing I'd like to see improved in society is parenting
Agreed! And there are likely high-leverage ways to work on this (such as making YouTube videos about it for future parents).
I liked your write-up on how emotional growth of individuals helps society.
No, but we can create systems in which surveillance is impossible from an information-theoretic perspective. Web 3.0 will likely do this unless somebody stops it
As someone who used to work in the cryptocurrency space, I'm quite pessimistic on most people from that space solving this. The incentives are not in favour of it (privacy projects don't make as much revenue as other projects, and making meme coins can be even more profitable than aiming for revenue). And the culture is only mildly in favour of it (most people in cryptocurrency space don't seem to deeply understand software or cybersecurity, or why our current internet lacks privacy in the first place).
A few rare individuals from the space could still make advances in privacy (my blog has some ideas how); I'd be happy to connect with anyone making that happen.
Replies from: StartAtTheEnd
↑ comment by StartAtTheEnd · 2025-02-21T13:49:39.715Z · LW(p) · GW(p)
If you know of any high-leverage ways
This seems like a problem of infinite regress.
"Solving it is easy, just do X"
"The problems is that people don't do X, how do we make them?"
"Just do Y"
"The problem is that people don't do Y, how do we make them?"
"Just do Z"
...
To name some powerful upstream factors, I'd say "Increase the social value of growth and maturity". I guess this is what we did in the past, actually. Then people started complaining that our standards were harsh because they made losers low-value, and then they gave power and benefits to the status of victim, and then people started competing in playing the victim rather than in improving their character into something worthy of respect.
By the way, another powerful influence in the worsening of society seems to be large companies which play on social norms, personal needs, and social perception in order to make money. "Real men do ___", "___ is pretentious", "Doing ___ is cringe". Statements like this influence how people behave and what they strive for, since the vast majority of people want to appear in a way that others approve of. We must have fallen a long way as a society, for the only positive pressure I can think of is neo-nazis who encourage others to improve themselves (to read old books and lift weights).
Let's see... People are doing away with core family values, claiming that they get in the way of their freedom (but I think it's an immature dislike of responsibility and obligation, with a dash of narcissism which makes people avoid actions that do not benefit them personally). Family bonds also seem to be weakening because of politics; some families split apart because of disagreements about whom to vote for, and this is a new problem to me, I don't recall hearing of such things before 2016.
Another factor making things worse is that the media reports on the absolutely stupidest people they can find, in order to make the "political enemy" look as bad as possible. But this has the side effect of people overestimating themselves. If somebody felt they were a math genius for knowing basic trig functions, they'd walk around feeling smug, never pushing themselves into university-level maths.
Here's a quote from a book from 2005 (it's a book on dating by the way):
"TO GIVE you an impression of how much things have been dumbed down, consider the Lord of the Rings. Today, people treat it as an epic adult story that is a bit 'too long'. When it was published, it was a simple children's story. A simple children's story is now an adult epic! And is Alice in Wonderland now considered 'literature'? Perish the thought."
YouTube videos are not a bad idea, by the way!
The incentives are not in favour of it
That's a shame. When I search "web 3.0", the results seem to hint that people understand the problem they're trying to fix, and fixing the problem leads to structures which are resistant to giant companies, and this must improve privacy (if it doesn't, then the design will be the same as what it's replacing, just with somebody else in charge. So over time, corruption will kick in, and we'll be back where we started. The structure itself must be corruption-resistant).
There are people in the world who enjoy privacy and freedom and such, and it's not just criminals. But their products are not as mainstream as they used to be; the only privacy-oriented one I frequently hear about is Protonmail. Mega.io also claims to be pro-privacy... but somehow piracy is against its rules? If it can detect whether I upload copyrighted content to my private storage, then it's not private storage. I'm not sure how that works. Many services which claim to be secure and pro-privacy seem to be lying, or at least using these words loosely, or in a relative rather than absolute sense.
Replies from: xpostah
↑ comment by samuelshadrach (xpostah) · 2025-02-21T16:09:04.634Z · LW(p) · GW(p)
To name some powerful upstream factors, I'd say "Increase the social value of growth and maturity"
How to actually do this?
It's easy to say "I wish XYZ were high status in society". I'm interested in concrete steps a few individuals like you or me can take. Ultimately all this world-building has to translate into decisions and actions taken by you and me and other people listening to us, not a hypothetical member of society.
I agree you are mostly pointing at real problems.
When I search "web 3.0" the results seem to hint that people understand the problem they're trying to fix, and fixing the problem leads to structures which are resistant against giant companies, and this must improve privacy (if it doesn't, then the design will be the same as what it's replacing, just with somebody else in charge. So over time, corruption will kick in, and we'll be back where we started. The structure itself must be corruption-resistant)
I agree we need to think systemically about incentives.
People in web3 often understand that deteriorating user privacy means more money than protecting it. They tend not to ask deeper questions like:
Why does cybersecurity favour offence over defence? (If it favoured defence instead, offence might be more profitable and yet lose.) Software complexity is one reason cybersecurity is hard.
Why does violating user privacy make so much money? Why does Google's ad model make more money than any of the other business models? Why has Apple escaped this trap so far?
Why does Tor not scale? In general, most people in web3 don't talk about privacy at the IP-address and packet level; they often talk about just ensuring blockchain transactions aren't doxxable.
Replies from: StartAtTheEnd
↑ comment by StartAtTheEnd · 2025-02-23T11:51:34.206Z · LW(p) · GW(p)
Well, we somehow changed smoking from being cool to being a stupid, expensive and unhealthy addiction. I think the method is about the same here. But the steps an individual can take are very limited. In politics, you have millions of people trying to convert other people to their own ideology, so if it were easy for an individual to change the values of society, we'd have extremists all over.
Anyway, you'd probably need to start a Youtube channel or something. Combining competence and simplicity, you could make content that most people could understand, and become popular doing that. "Hoe math" comes to mind as an example. Jordan Peterson and other such people are a little more intellectual, but there's also a large amount of people who do not understand them. Plus, if you don't run the account anonymously, you'd take some risks to your reputation proportional to how controversial your message is.
People in web3 often understand that deteriorating user privacy means more money than protecting it
That's a shame. Why are they in web3 in the first place, then? The only difference is the design, and from what I've seen, the designs give power to the users rather than to some centralized mega-corporation.
Why does cybersecurity favour offence over defence?
I think this is due to attack-defense asymmetry. Attackers have to find just one vulnerability; defenders have to stop all attacks. I do however agree that very few people ask these questions.
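A toy model of that asymmetry (my numbers, purely illustrative): if each of $n$ independent components is flaw-free with probability $p$, the defender needs every component to hold, while the attacker needs only one to fail:

$$P(\text{breach}) = 1 - p^n$$

With $p = 0.99$ and $n = 100$ components, $p^n \approx 0.37$, so even 99%-reliable parts leave the attacker succeeding about 63% of the time, and real systems have far more than 100 moving parts.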
I think Tor would scale no problem if more people used it, but it has the same problem as 8chan and the privacy-focused products and websites: all the bad people (and those who were banned on most other sites) flock there first, and they create a scary environment or reputation, and that makes normal people not want to go there/use the service. Many privacy-oriented apps have the reputation of being used by criminals and pedophiles.
This problem would go away if there were more places where privacy was valued, since the "bad people" density would go down as the thing in question became more popular.
But I've noticed that everything gets worse over time. In order to have good products, we need new ones to be made. Skype sucked, then people jumped to Discord. Now Discord sucks, so people might soon jump to something new. It's both "enshittification" and incentives.
Taxes go up over time. We get more laws, more rules, more regulations, more advertisement. The more power a structure has, the worse it seems to treat those inside of it, and the less fair it becomes. Check out this 1999 ad for Google; it's a process similar to corruption, and the only solution seems to be revolutions or collective agreements to seek out alternatives when things get bad enough. Replacing things is less costly than fixing them, which is probably why deaths and births exist. Nature just starts over in cycles, with the length of each cycle being inversely proportional to the size of the structure (the average life span of companies in America seems to be 15 years, the average life span of nations seems to be about 150 years, and the average life span of a civilization seems to be 336 years).
So, in my mental model of the world, corruption and DNA damage are the same thing, enshittification is similar to cancer, and nothing lives forever because bloat/complexity/damage accumulates until the structure dies. But I can only explain how things are; coming up with solutions is much more difficult.
Replies from: xpostah
↑ comment by samuelshadrach (xpostah) · 2025-02-24T18:44:46.756Z · LW(p) · GW(p)
Thanks!
Your write-up was useful to me.
I don't think Tor scales in its current form because it relies on altruistic donors to provide bandwidth. I agree there may be a way to scale it that doesn't rely on altruism.
I agree you’re pointing at an important problem. Namely when there’s a large structure aimed at achieving some task for users, and it deliberately does it poorly, some of our best solutions are to ensure low cost-of-exit for users and allow for competing alternatives.
This can be slow and wasteful, as millions of people need to be fired, billions of dollars of equipment lost, etc. every time a large company dies and is outcompeted. In the worst case this is entire countries and continents dying a slow death while their citizens are poached by other countries or left with an inferior quality of life.
If there were incentives to fix large structures from the inside, or alternatively a way to solve large tasks without requiring large top-down structures, that might improve the status quo.
Answer by Anon User
Hm, not sure about it being broadcast vs consumed by a powerful AI that somebody else has at least partial control over.
↑ comment by samuelshadrach (xpostah) · 2025-01-24T06:44:10.134Z · LW(p) · GW(p)
To be clear, when you say powerful you still mean less powerful than ASI, right?
What are your thoughts on whether this organisation will be able to secure the data they collect? My post has some of my thoughts on why securing data may be difficult even if you're politically powerful.
Replies from: anon-user
↑ comment by Anon User (anon-user) · 2025-01-31T00:06:16.425Z · LW(p) · GW(p)
Yes, potentially less than ASI, and security is definitely an issue. But people breaching the security would hoard their access - there will be periodic high-profile spills (e.g. celebrities engaged in sexual activities, or politicians engaged in something inappropriate, would be obvious targets), but I'd expect most of the time people would have at least an illusion of privacy.
Replies from: xpostah
↑ comment by samuelshadrach (xpostah) · 2025-02-16T10:27:18.750Z · LW(p) · GW(p)
Sorry for delay.
The incentives pushing for the first actor to get broken into also push for the second actor to get broken into. On a longer timescale, more and more actors get the same data, until eventually it could be public. Nobody has a strong incentive to destroy their copy of the data, so the total number of copies of data in the world is more-or-less a non-decreasing function.
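To state that slightly more formally (my framing, not a precise claim): let $c(t)$ be the number of independent actors holding a copy of a given dataset at time $t$. Breaches add holders and deletion is rarely incentivized, so

$$c(t+1) \ge c(t)$$

and once $c(t)$ grows past some threshold, the data is effectively public.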
Answer by whestler
I'd like to hear the arguments for why you think perfect surveillance is more likely in the future. I definitely think we will reach a state where surveillance is very high, high enough to massively increase policing of crimes, as well as empower authoritarian governments and the like, but I'm not sure why it would be perfect.
It seems to me that the implications of "perfect" surveillance are similar enough to the implications of very high levels of surveillance that number 2 is still the more interesting area of research.
↑ comment by samuelshadrach (xpostah) · 2025-01-25T02:37:11.520Z · LW(p) · GW(p)
Thanks for the reply.
You can read my linked post for more on how surveillance will increase.
But yes, good to know you'd rather I write more about 2.
Answer by Benjy_Forstadt
I don't think perfect surveillance is inevitable.
I would prefer it, though. I don’t know any other way to prevent people from doing horrible things to minds running on their computers. It wouldn’t need to be publicly broadcast though, just overseen by law enforcement. I think this is much more likely than a scenario where everything you see is shared with everyone else.
Unfortunately, my mainline prediction is that people will actually be given very strong privacy rights, and will be allowed to inflict as much torture on digital minds under their control as they want. I’m not too confident in this though.
↑ comment by samuelshadrach (xpostah) · 2025-01-24T06:33:04.665Z · LW(p) · GW(p)
One of our cruxes is probably the likelihood of law enforcement actually securing the data they collect, versus it being leaked.
↑ comment by samuelshadrach (xpostah) · 2025-01-24T06:29:48.913Z · LW(p) · GW(p)
Thanks for the reply.
Sorry, I think I'm going to avoid discussing your point about digital minds in this post; it's best for a separate post. There are a number of considerations there (ASI timelines, unipolar versus multipolar post-ASI world) that would take time to discuss.
Assuming a pre-ASI world, do you have guesses for what our crux might be? I'm not convinced perfect surveillance is inevitable either, but I probably assign higher odds to it than you.
4 comments
Comments sorted by top scores.
comment by Dom Polsinelli (dom-polsinelli) · 2025-01-24T20:18:45.289Z · LW(p) · GW(p)
I don't know about inevitable, but I imagine it is such an attractive option to governments that if the technology gets there, it will be enacted before laws are passed preventing it, if any ever are. I would include a version of this where it is practically mandatory through incentives: greatly increased cost of insurance, near-inability to defend yourself in court or cross borders if you lack it, or it just becomes the social norm to give up as much data about yourself as possible.
That said, I also think that if things go well we will have good space technology allowing relatively small communities to live in self-sustaining habitats/colony ships, which would kind of break any meaningful surveillance.
This is a very off-the-cuff remark; I haven't given this topic a great deal of thought before reading this post, so make of that what you will.
Replies from: xpostah
↑ comment by samuelshadrach (xpostah) · 2025-01-25T02:57:09.986Z · LW(p) · GW(p)
Hey. Thanks for the reply.
"Self-sustaining" seems like the key phrase here. The colony would need an independent supply of food, water and energy, and it would need an independent military and government.
What time scale are you thinking of?
And do you expect space colonies to obtain this level of political freedom from existing nuclear powers? If yes, why?
Replies from: dom-polsinelli
↑ comment by Dom Polsinelli (dom-polsinelli) · 2025-01-25T15:53:39.735Z · LW(p) · GW(p)
Honestly, I'm not sure. I read about the Biosphere 2 experiments a while ago, and they pretty much failed to make a self-sustaining colony with only a few people and way more mass than we could practically get into space. I really want us as a species to keep working on that, so we can solve any potential problems in parallel with our development of rockets or other launch systems. I could see a space-race-esque push getting us there in under a decade, but there currently isn't any political or commercial motivation to do that.
I don't know if it would necessarily need a military. I could easily be very wrong, but there's so much space in space and so much stuff on Earth that trying to conquer a habitat with a few thousand people on it seems a little unnecessary. Italy won't take over Vatican City, not because they can't, but because there really isn't a good reason to.
As for political freedom, that's the most speculative of all, as I understand it less than the technology. My intuition is that they could get it, simply because a self-sustaining colony doesn't need to produce any surplus a government would be interested in taxing. If you set up an asteroid mining operation, I can see all the governments wanting to take a cut of the profits, but if all you wanted was to get away from an implicit surveillance state, it would have to be truly dystopian to keep you from leaving. As long as you don't launch anything dangerous toward Earth, you aren't growing exponentially to the point where you might rival the power of a country, and you aren't engaging in incredibly lucrative trade, the only motivation left to govern you would be control for control's sake, and I guess I'm just optimistic enough to think that there will always be at least one place on Earth with high tech that isn't that dystopian.
Replies from: xpostah
↑ comment by samuelshadrach (xpostah) · 2025-01-26T03:00:01.848Z · LW(p) · GW(p)
Got it.
I'm not sure, but I think building a colony (or hiding in an existing colony) in a remote rainforest or mountainous region may be easier to achieve if the goal is just security through obscurity. It would also be easier to make self-sustaining, at least with today's tech. There are many such groups of people today that are mostly self-sustaining yet don't produce enough surplus that anyone else cares to find out what they're doing.
My guess is that it'll be one of the nuclear powers that builds the first space colony to begin with, so it'll be theirs by default, no conquering needed. Also, the US defence establishment in particular has a history of wanting ownership and soft power over emerging technologies long before it's obvious what their commercial value will be, and I don't see that as irrational from their point of view.