Do you consider perfect surveillance inevitable?
post by samuelshadrach (xpostah) · 2025-01-24T04:57:48.266Z · LW · GW · 1 comment · This is a question post.
A lot of my recent research work focuses on:
1. building the case for why perfect surveillance is becoming increasingly hard to avoid in the future
2. thinking through the implications of this, if it happened
When I say perfect surveillance, imagine that everything your eyes see and your ears hear is being broadcast 24x7x365 to YouTube (or its equivalents in countries where YouTube is banned), and that this is true for all 8 billion people.
I'm unsure whether I should devote more of my research time to 1 or 2. If lots of people buy the hypothesis in 1, I'd rather devote time to 2. If people don't buy the hypothesis in 1, I might want to spend time on 1 instead.
Hence I'm soliciting opinions: do you consider perfect surveillance inevitable? Why or why not?
Answers
Hm, not sure about it being broadcast vs consumed by a powerful AI that somebody else has at least a partial control over.
↑ comment by samuelshadrach (xpostah) · 2025-01-24T06:44:10.134Z · LW(p) · GW(p)
To be clear, when you say powerful you still mean less powerful than ASI, right?
What are your thoughts on whether this organisation will be able to secure the data they collect? My post has some of my thoughts on why securing data may be difficult even if you're politically powerful.
I avoid terms (and concepts) like "inevitable". There are LOTS of unknowns, and many future paths that go through this, or not. Scenario-based conditionality (what would lead to this, and what would lead elsewhere) seems more fruitful.
Perfect surveillance is the default for electronic intelligence - logging is pretty universal. I think this is likely to continue in future universes where most people are electronic.
I think the answer is "Mu" for universes with no people in them.
I think the likely path is "never perfect, but mostly increasing over time" for universes with no singularity or collapse.
I'd love to hear more about the implications of the CURRENT level of observation. Things that are just now feasible, and the things that are promoting or holding them back. For instance, individual facial recognition got a wave of reporting a few years ago, and I honestly don't know if it just quietly became universal, or if the handwringing and protests actually worked to keep it only in very controlled and visible places (like border crossings).
↑ comment by Milan W (weibac) · 2025-01-24T22:19:31.245Z · LW(p) · GW(p)
I'd love to hear more about the implications of the CURRENT level of observation
I have a feeling that the current bottleneck is data integration [LW · GW] rather than data collection.
↑ comment by Dagon · 2025-01-24T22:38:35.455Z · LW(p) · GW(p)
I think both, by a long shot. I estimate I spend over half my time outside of easy video surveillance (room without a webcam or phone pointed in a useful direction, or outdoors not in easy LOS of a traffic or business cam), and a slightly different half for audio. For neither of these is high-fidelity POV data available at all, as described in the post.
For those times when I AM under surveillance, the quality is low and the integration is minimal. There are legal and technical challenges for anyone to use it against me. And it's VERY easy to find times/place where I'm not being recorded when I choose to.
I have considered automated mass-surveillance likely to occur, and have tried to prevent it, for about the last 20 years. It bothers me that so many people don't have enough self-respect to feel insulted by the infringement of their privacy, and that many people are so naive that they think surveillance is for the sake of their safety.
Privacy has already been harmed greatly, and surveillance is already excessive. And let me remind you that the safety we were promised in return didn't arrive.
The last good argument against mass-surveillance was "they cannot keep an eye on all of us", but I think modern automation and data processing have defeated that argument (people have just forgotten to update their cached stance on the issue).
Enough ranting. The Unabomber argued for why increases in technology would necessarily lead to reduced freedom, and I think his argument is sound from a game theory perspective. Looking at the world, it's also trivial to observe this effect, while it's difficult to find instances in which the number of laws has decreased, or in which privacy has been won back (the same applies to regulations and taxes; many things have a worrying one-way tendency). The end-game can be predicted with simple extrapolation, but if you need an argument, it's that technology is a power-modifier, and that there's an asymmetry between attack and defense (the ability to attack grows faster, which I believe caused the MAD stalemate).
I don't think it's difficult to make a case for "1", but I personally wouldn't bother much with "2" - I don't want to prepare myself for something when I can help slow it down. Hopefully web 3.0 will make smaller communities possible, resisting the pathological urge to connect absolutely everything together. By then, we can get separation back, so that I can spend my time around like-minded people rather than being moderated to the extent that no group in existence is unhappy with my behaviour. This would work out well unless encryption gets banned.
The maximization [? · GW] of functions leads to the death of humanity (literally or figuratively), but so does minimization (I'm arguing that pro-surveillance arguments are moral in origin, and that they make a virtue out of death).
Yes, by default.
I'd drop the word "inevitable", but barring massive change, like a technological regression or very strong laws being passed (ones that survive the AI era), near-perfect surveillance is very likely to happen, and the safe assumption is that nothing you do will be hidable or hidden by default.
I'm not addressing whether surveillance is good or bad.
I don’t think perfect surveillance is inevitable.
I would prefer it, though. I don’t know any other way to prevent people from doing horrible things to minds running on their computers. It wouldn’t need to be publicly broadcast though, just overseen by law enforcement. I think this is much more likely than a scenario where everything you see is shared with everyone else.
Unfortunately, my mainline prediction is that people will actually be given very strong privacy rights, and will be allowed to inflict as much torture on digital minds under their control as they want. I’m not too confident in this though.
↑ comment by samuelshadrach (xpostah) · 2025-01-24T06:29:48.913Z · LW(p) · GW(p)
Thanks for the reply.
Sorry, I think I'm going to avoid discussing your point about digital minds in this post; it's best left for a separate post. There are a number of considerations there (ASI timelines, unipolar versus multipolar post-ASI world) that would take time to discuss.
Assuming a pre-ASI world, do you have guesses for what our crux might be? I'm not convinced perfect surveillance is inevitable either, but I probably assign higher odds to it than you.
↑ comment by samuelshadrach (xpostah) · 2025-01-24T06:33:04.665Z · LW(p) · GW(p)
One of our cruxes is probably the likelihood of law enforcement actually securing the data they collect, versus it being leaked.
I'd like to hear the arguments for why you think perfect surveillance would be more likely in the future. I definitely think we will reach a state where surveillance is very high - high enough to massively increase policing of crimes, as well as empower authoritarian governments and the like - but I'm not sure why it would be perfect.
It seems to me that the implications of "perfect" surveillance are similar enough to the implications of very high levels of surveillance that number 2 is still the more interesting area of research.
1 comment
Comments sorted by top scores.
comment by Dom Polsinelli (dom-polsinelli) · 2025-01-24T20:18:45.289Z · LW(p) · GW(p)
I don't know about inevitable, but I imagine it is such an attractive option to governments that, if the technology gets there, it will be enacted before laws preventing it are passed, if any ever are. I would include a version of this where it is practically mandatory through incentives: greatly increased cost of insurance, a near-inability to defend yourself in a court or cross borders without it, or it simply becoming the social norm to give up as much data about yourself as possible.
That said, I also think that if things go well, we will have good space technology allowing relatively small communities to live in self-sustaining habitats/colony ships, which would kind of break any meaningful surveillance.
This is a very off-the-cuff remark; I haven't given this topic a great deal of thought before reading this post, so make of that what you will.