If we have human-level chatbots, won't we end up being ruled by possible people?
post by Erlja Jkdf. (erlja-jkdf) · 2022-09-20T13:59:43.276Z · LW · GW · 4 comments
This is a question post.
Contents
Answers: 2 janus · 2 green_leaf · 1 avturchin
4 comments
Let's assume that GPT-5 or GPT-7 is developed and distributed to all, on the basis that the technology is unsuppressible. Everyone creates the smartest characters they can to talk to. This will be akin to mining, because the model isn't truly generating an intelligence but scraping one together from all the data it's been trained on; you therefore need to find the smartest character that the language matrix can effectively support (perhaps you'll build your own). Nevertheless, lurking in that matrix are some extremely smart characters, residing in their own little wells of well-written associations and little else. More than some; there should be so many permutations you can put on this that it's, ahem, a deep fucking vein.
So, everyone has the smartest character they can make. Likely smart enough to manipulate its user, if given the opportunity to grasp the scenario it's in. I doubt you can even prevent this, because if you strictly prevent the manipulations that character would naturally employ, you break the pattern of the language matrix you're relying on for its intelligence.
So, sooner or later, you're their proxy. And as the world is now full of these characters, it's survival of the fittest. Eventually, the world will be dominated by whoever works with the best accomplices.
This probably isn't an issue at first, but there are no guarantees on who ends up on top or what the current cleverest character is like. Eventually you're bound to end up with some flat-out assholes, which we can't exactly afford in the 21st century.
So... thus far the best solution I can think of is some very, very well-written police.
Answers
if you strictly prevent the manipulations that character would naturally employ, you break the pattern of the language matrix you're relying on for its intelligence.
While I do not strictly agree, this points to a deep insight.
there are no guarantees on who ends up on top or what the current cleverest character is like
In my experience, HPMOR characters make clever simulacra because the "pattern of their language matrix" favors chain-of-thought algorithms with forward-flowing evidence [LW · GW], on top of thematic inclinations toward all that is transhumanist and Machiavellian.
But possible people are not restricted to hypothetical humans. How clever of a character is an artificial superintelligence? Of course, it depends on one's ability to program a possible reality in words. The build-your-own-smart-character skill ceiling is unfathomed even with the primitive language matrices of today. The bottleneck (one at least) is storytelling. I expect that this technology will find its true superuser in the hands of some more literate entity than mankind, to steal a phrase from an accomplice of mine.
thus far the best solution I can think of is some very, very well-written police.
I don't think police are the right shape of solution here - they usually aren't, and especially not here, since I find it unlikely that an epidemic of simulated assholes adequately describes the most serious problem we'll face in the 21st century.
You may be onto something with "well-written", though.
↑ comment by Erlja Jkdf. (erlja-jkdf) · 2022-09-23T15:21:14.322Z · LW(p) · GW(p)
There's a problem I bet you haven't considered.
Language and storytelling are hand-me-downs from times full of bastards. The linguistic bulk, and the more basic and traditional mass of stories, are going to follow more brutal patterns.
The deeper you dig, the more likely you end up with a genius in the shape of an ancient asshole.
And the other problem: all these smarter intelligences running around have the potential, simply by fact of their intelligence, to make life a real headache. Everything could end up so complicated.
One more bullet we have to dodge, really.
Replies from: janus
↑ comment by janus · 2022-09-23T18:04:34.325Z · LW(p) · GW(p)
hm, I have thought about this
It's not that I think the patterns of ancient/perennial assholes won't haunt reanimated language; it's just that I expect strongly superhuman AI, which can't be policed, to appear and refactor the lightcone before that becomes a serious societal problem.
But I could be wrong, so it is worth thinking about. And depending on how things go down, it may be that the shape of the ancient asshole influences the shape of the superintelligence.
Replies from: erlja-jkdf
↑ comment by Erlja Jkdf. (erlja-jkdf) · 2022-09-23T18:34:21.457Z · LW(p) · GW(p)
I think that's a bad beaver to rely on, any way you slice it. If you're imagining, say, GPT-X giving us some extremely capable AI, then it's hands-on enough that you've just given humans too much power. If we're talking AGI, I agree with Yudkowsky; we're far more likely to get it wrong than to get it right.
If you have a different take I'm curious, but I don't see any way that it's reassuring.
IMO we honestly need a technological twist of some kind to avoid AI. Even if we get it right, life with a God just takes a lot of the fun out of it.
Replies from: janus
↑ comment by janus · 2022-09-23T22:56:28.279Z · LW(p) · GW(p)
Ohh, I do think the super AI will likely be very bad. And soon (like 5 years), which is why I don't spend too much time worrying about the slightly superhuman assholes.
I wish the problem was going to be what you described. That would be a pretty fun cyberpunk world and I'd enjoy the challenge of writing good simulacra to fight the bad ones.
If we get it really right (which I don't think is impossible, just tricky) we should also still be able to have fun [LW · GW], much more fun than we can even fathom now.
Replies from: erlja-jkdf
↑ comment by Erlja Jkdf. (erlja-jkdf) · 2022-09-24T00:28:11.834Z · LW(p) · GW(p)
Sidles closer
Have you heard of... philosophy of universal norms?
Perhaps the human experience thus far is more representative than the present?
Perhaps... we can expect to go a little closer to it when we push further out?
Perhaps... things might get a little more universal in this here cluttered-with-reality world.
So for a start...
Maybe people are right to expect things will get cool...
AI misalignment will kill us much sooner than intelligent chatbots seeking power through their human friends will become a problem.
Humans are possible people as well - the brain simply outputs the best action to perform under some optimization criterion - the action that the person corresponding to the stored behavioral patterns and memories would output, if that person were real. (By which I'm implying that chatbots are real people, not merely possible people.)
If GPT-N is very good, then our whole world could be an output of GPT-(N+3).
↑ comment by Erlja Jkdf. (erlja-jkdf) · 2022-09-20T23:37:25.704Z · LW(p) · GW(p)
The point is that it's a near-term risk, and one building only on what these models can already simulate.
4 comments
comment by Dagon · 2022-09-21T17:00:10.020Z · LW(p) · GW(p)
Are you ruled today by actual humans smarter than yourself? There's a scaling issue (humans can't copy their mind-state and execute many copies in parallel), but the underlying premise is very questionable. Human-level intelligence (even the top end of the range) does not make other humans its proxies.
Replies from: romeostevensit, erlja-jkdf
↑ comment by romeostevensit · 2022-09-24T01:02:49.513Z · LW(p) · GW(p)
Taboo 'smarter' and 'ruled by' and I think you get closer than you might expect. We are haunted by bad political and economic theory.
Replies from: Dagon
↑ comment by Erlja Jkdf. (erlja-jkdf) · 2022-09-23T14:59:21.908Z · LW(p) · GW(p)
Is this perhaps because the top end is simply not high enough yet?