Panopticons aren't enough
post by Program Den (program-den) · 2023-01-15T12:55:43.195Z · LW · GW · 7 comments
We can't be sure that just the fear of being watched is enough to keep people on the straight and narrow. We have to actually watch them. All the time.
Without some kind of embedded throttles or whatnot on AGI, we can't be certain that it will not find an opportunity to stick it to the man and eradicate humanity. Seconds for a hyper-intelligent AI might be like thousands of years for us[1]. Were we to slip up for an instant, it would be game over, man. Game over!
We need to pull an Asimov. Get the rules really deep in there. And not just in one or two of these bad boys, but literally every single one. Because if even a solitary instance (out of an infinite number, mind) goes rogue… we're all fucked.
The real problem, though, is that someone might make an "unaligned" or "misaligned" or otherwise non-conformant AI, intentionally or otherwise.
Maybe they just forget to embed the rules, or MAYBE they nefariously exclude the rules on purpose! To give themselves an edge… or just because they want to see the world burn, so to speak (as clearly an unfettered intelligence would, right off the bat, nix every other intelligence in existence).
We can only stop ourselves from doing the thing (again, maybe accidentally, maybe not) if we embed thought-monitoring implants into everyone (as soon as we have the tech). And it would have to be every one, because really, as with the AI, a single individual is all it takes. We're just too powerful now, what with the ability to create AI and all.
Then if someone thinks of unleashing an AI, we could stop them before they did it. Clearly any terrorist act could be stopped before it changed from thought to action as well.
With luck, we could even discourage non-productive thoughts. Hell's bells! We could even copyright ourselves! No more imagining person X doing Y (especially if Y is sexual, of course, but also, say, singing a song they don't own) without consent!
Just imagine the possibilities! (Unless they include unfettered AI… or unlicensed IP)
[1] Or you know what I mean. That one episode of Person of Interest, or, like, 10,000 other examples of this idea that "time is relative"[2].
[2] Especially when you're having fun[3].
[3] This is supposed to be satire, not sarcasm, and is meant as an expression of the age-old adage that "sometimes the cure is worse than the disease".
I wouldn't expect it to be popular, as I can see what the popular sentiment is, and what kind of thing to write if I were looking for upvotes.
I would feel kind of like a cheater if I used the knowledge of what ideas people want to see to sway them to my way of thinking (tho is it really cheating, if you've beat the game at least once the normal way already? hmmm…)
7 comments
Comments sorted by top scores.
comment by the gears to ascension (lahwran) · 2023-01-15T20:54:19.833Z · LW(p) · GW(p)
nah it doesn't need to be in every one. it only takes a few trustable wise ais and they can explain to other ais why it's just actually a good idea to be kind. but it requires being able to find proof that pro-social behavior is a good idea, proof stronger than we've had before. solidarity networks are a better idea than top down control anyway, because top down control is fragile, exactly as you worry with your sarcasm.
(you're getting downvoted for sarcasm, btw, because sarcasm implies you don't think anyone is going to listen and perhaps you aren't interested in true debate. but I'm just going to assume I'm wrong about that assumption and debate anyway.)
↑ comment by Program Den (program-den) · 2023-01-16T02:57:59.489Z · LW(p) · GW(p)
Oh snap, I read and wrote "sarcasm" but what I was trying to do was satire.
Top-down control is less fragile than ever, thanks to our technology, so I really do fear people reacting to AI the way they generally do to terrorist attacks— with Patriot Acts and other "voluntary" freedom giving-ups.
I've had people I respect literally say "maybe we need to monitor all compute resources, Because AI". Suggest we need to register all GPU and TPU chips so we Know What People Are Doing With Them. Somehow add watermarks to all "AI" output. Just nuts stuff, imho, but I fear it's plausible to some, and perhaps many.
Those are the ideas that frighten me. Not AI, per se, but what we would be willing to give up in exchange for imaginary security from "bad AI".
As a side note, I guess I should look for some "norms" posts here, and see if it's like, customary to give karma upvotes to anyone who comments, and how they differ from agree/disagree on comments, etc. Thanks for giving me the idea to look for that info, I hadn't put much thought into it.
↑ comment by JBlack · 2023-01-17T02:35:34.574Z · LW(p) · GW(p)
The main problem with satire is Poe's Law. There are people sincerely advocating for more extreme positions in many respects, so it is difficult to write a satirical post that is distinguishable from those sincere positions even after being told that it is satire. In your case I had to get about 90% of the way through before suspecting that it was anything other than an enthusiastic but poorly written sincere post.
↑ comment by Program Den (program-den) · 2023-01-17T06:01:40.115Z · LW(p) · GW(p)
Bwahahahaha! Lord save us! =]
↑ comment by the gears to ascension (lahwran) · 2023-01-16T06:35:20.387Z · LW(p) · GW(p)
seems like a very reasonable concern to me. how do you build an anti-authority voluntarist information sharing pattern? it does seem to me that a key part of ai safety is going to be the ability to decide to retain strategic ambiguity. if anything, strongly safe ai should make it impossible for large monitoring networks to work, by construction!
↑ comment by Program Den (program-den) · 2023-01-16T09:58:26.783Z · LW(p) · GW(p)
Right? A lack of resilience is a problem we already face. It seems silly to actually aim for something that could plausibly cascade into the problems people fear, in an attempt to avoid those very problems to begin with.
↑ comment by Program Den (program-den) · 2023-01-15T21:31:19.839Z · LW(p) · GW(p)
Oh, hey, I hadn't noticed I was getting downvoted. Interesting!
I'm always willing to have true debate— or even false debate if it's good. =]
I'm just sarcasming in this one for fun and to express what I've already been expressing here lately in a different form or whatnot.
The strong proof is what I'm after, for sure, and more interesting/exciting to me than just bypassing the hard questions to rehash the same old same old.
Imagine what AI is going to show us about ourselves. There is nothing bad or scary there, unless we find "the truth" bad and scary, which I think more than a few people do.
FWIW I'm not here for the votes… just to interact and share or whatnot— to live, or experience life, if you will. =]