Alien neuropunk slaver civilizations
post by D_Malik · 2015-06-15T06:30:32.795Z · LW · GW · Legacy · 19 comments
Here's some blue-sky speculation about one way alien sapients' civilizations might develop differently from our own. Alternatively, you can consider it conworlding. Content note: torture, slavery.
Looking at human history, after we developed electronics, we painstakingly constructed machines that can perform general computation, then built software that approximates the workings of the human brain. For instance, we nowadays use in-silico reinforcement learning and neural nets to solve various "messy" problems like computer vision and robot movement. In the future, we might scan brains and then emulate them on computers. This all seems like a very circuitous course of development - those algorithms have existed all around us for millions of years in the form of brains. Putting them on computers requires an extra layer of technology.
Suppose that some alien species's biology is a lot more robust than ours - their homeostatic systems are less failure-prone than our own, due to some difference in their environment or evolutionary history. They don't get brain-damaged just from holding their breath for a couple minutes, and open wounds don't easily get infected.
Now suppose that after they invent agriculture but before they invent electronics, they study biology and neuroscience. Combined with their robust biology, this leads to a world where things that are electronic in our world are instead controlled by vat-grown brains. For instance, a car-building robot could be constructed by growing a brain in a vat, hooking it up to some actuators and sensors, then dosing it with happy chemicals when it correctly builds a car, and stimulating its nociceptors when it makes mistakes. This rewarding and punishing can be done by other lab-grown "overseer" brains trained specifically for the job, which are in turn manually rewarded at the end of the day by their owner for the total number of cars successfully built. Custom-trained brains could control chemical plants, traffic lights, surveillance systems, etc. The actuators and sensors could be either biological (lab-grown eyes, muscles, etc., fueled with liquefied food) or mechanical, powered by combustion engines, steam engines, or even wound springs.
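Structurally, this setup is just a reinforcement-learning loop with a hierarchical reward channel: worker brains receive chemical reward/punishment signals from overseer brains, and the overseers are themselves rewarded on aggregate output. Here is a minimal sketch of that structure in Python - all names (WorkerBrain, Overseer) and the learning-rate numbers are hypothetical illustrations for this thought experiment, not a claim about how real brains learn:

```python
import random

class WorkerBrain:
    """A trainable controller: rewarded for good output, punished for mistakes."""
    def __init__(self):
        self.skill = 0.1  # probability of building a car correctly

    def attempt_car(self):
        return random.random() < self.skill

    def reward(self):
        # "happy chemicals" - assume reward reinforces the behavior strongly
        self.skill = min(1.0, self.skill + 0.05)

    def punish(self):
        # nociceptor stimulation - assume punishment also teaches, but less efficiently
        self.skill = min(1.0, self.skill + 0.01)

class Overseer:
    """Dispenses reward/punishment; itself paid by the owner on daily totals."""
    def __init__(self, workers):
        self.workers = workers

    def run_shift(self, attempts_per_worker=20):
        cars_built = 0
        for worker in self.workers:
            for _ in range(attempts_per_worker):
                if worker.attempt_car():
                    cars_built += 1
                    worker.reward()
                else:
                    worker.punish()
        return cars_built  # the owner rewards the overseer in proportion to this

overseer = Overseer([WorkerBrain() for _ in range(3)])
for day in range(5):
    print(f"day {day}: {overseer.run_shift()} cars built")
```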
Obviously this is a pretty terrible world, because many minds will live lives with very little meaning, never grasping the big picture, at the mercy of merciless human or vat-brain overseers, without even the option of suicide. Brains wouldn't necessarily be designed or drugged to be happy overall - maybe a brain in pain does its job better. I don't think the owners would be very concerned about the ethical problems - look at how humans treat other animals.
You can see this technology as a sort of slavery set up so that the slaves are cheap, unsympathetic, and powerless. They won't run away, because they'll want to perform their duties for the sake of the drug rewards; many won't be able to survive without owners to top up their food drips; they could be engineered or drugged to ensure docility; you could prevent them from even getting the idea of emancipation by not giving them the necessary sensors; perhaps you could even set things up so the overseer brains read the thoughts of their charges directly and punish bad thoughts. This world has many parallels to Hanson's brain-emulation world.
Is this scenario at all likely? Would these civilizations develop biological superintelligent AGI, or would they only be able to create superintelligent AGI once they develop electronic computing?
19 comments
comment by Val · 2015-06-16T03:31:41.147Z · LW(p) · GW(p)
Must a brain-in-a-vat that controls factory machinery with a simple, predetermined task necessarily be self-aware? Because if it's not, then it's no more slavery than the case of a contemporary factory robot, or a horse pulling a cart.
↑ comment by D_Malik · 2015-06-16T07:32:57.428Z · LW(p) · GW(p)
I'd agree that the brains of very primitive animals, or brains that have been heavily stripped down specifically to, say, operate a traffic light, aren't really worthy of moral consideration. But you'd probably need more intelligent brains for complex tasks like building cars or flying planes, and those probably are worthy of moral consideration - stripping out sapience while leaving sufficient intelligence might be impossible or expensive.
comment by Viliam · 2015-06-15T07:48:15.059Z · LW(p) · GW(p)
If the aliens are good at bioengineering, I can imagine they would be able to create an intelligence higher than their own.
I am not sure whether in this scenario they enslave other species (analogous to humans using animals) or also their own species (humans using humans). If the latter, then creating a higher intelligence could be relatively simple - maybe just a question of changing some evolutionary trade-off in a way that increases intelligence and decreases some other trait the slave-owners don't care about. If there are no ethical concerns, they could just run a thousand random experiments, have the slaves take IQ tests, and see what works. Such slaves with improved intelligence would still be unable to recursively self-improve (they could only improve the next generation of slaves), so it would be easy to keep them enslaved; especially if the modification also made them less able to survive on their own.
When the bioengineering becomes so powerful that you can increase a brain's intelligence by operating on an existing organism, there will be a risk that a group of slaves recursively improves each other's brains to the point where they can devise an escape plan. The question then is whether the owners' understanding of the brain centers for intelligence outpaces their understanding of the brain centers for whatever could make the slaves rebel. That is, whether they can eradicate all desire to rebel before they can increase the intelligence. In AI terms: whether "Friendly intelligence" comes before "recursively self-improving intelligence". This is a (hypothetically) empirical question; it depends on the specific design of the alien brain.
Also, with sufficiently good bioengineering, these aliens could develop cyborgs, so the first superintelligence might have a half-biological, half-electronic brain.
comment by Lumifer · 2015-06-15T16:25:45.261Z · LW(p) · GW(p)
This world has many parallels to Hanson's brain emulation world.
Pretty much equivalent, I'd say. I don't think it's relevant here whether minds run on wetware or on silicon.
↑ comment by DanArmak · 2015-06-15T17:55:29.115Z · LW(p) · GW(p)
There could be vastly more ems than biological brains, and new designs very unlike the original could be created much more easily with ems.
ETA: ems would also allow different economic structures, like starting a million copies working in parallel on rented hardware when a new problem needs to be solved, then killing them off an hour later and staying low-key for the rest of the day.
↑ comment by Lumifer · 2015-06-15T18:16:17.501Z · LW(p) · GW(p)
I had in mind the ethical concerns of the OP. Economically, of course, wetware and silicon are rather different.
But you are correct to point out an issue relevant to the OP's question (can you make a superhuman AI out of biological brains?) -- wetware scales rather poorly. At least Earth's usual carbon-based wetware does, based as it is mostly on chemistry in liquid and semi-liquid media.
↑ comment by mako yass (MakoYass) · 2024-05-01T05:43:12.967Z · LW(p) · GW(p)
It would seem to me that in this world brains would be much more expensive (or impossible) to copy. Which is worth talking about, because there are designs in our own era for very efficient, very dense neural networks that have the same quality: they can be trained, but the weights can't be accessed.
comment by chaosmage · 2015-06-15T08:42:53.971Z · LW(p) · GW(p)
How different is that from our world? We call our vats some combination of workplace and apartment, and we give the working brains money because that reinforces their behavior much like drugs do, but overall, when you subtract the aversive imagery of "food drips" and direct stimulation of nociceptors (instead of yelling, lawsuits, and imprisonment)... that doesn't seem substantially different.
Which means the answer to your questions is yes. We already have a superintelligence infinitely smarter than any human: it is called science, and the AGI project is its ongoing project of improving upon itself. So yes, I do think superintelligence is possible without electronic computing.
↑ comment by D_Malik · 2015-06-15T10:49:25.512Z · LW(p) · GW(p)
Some of the disgust definitely derives from the imagery, but I think much of it is valid too. Imagine the subjective experience of the car-builder brain. It spends 30 years building cars. It has no idea what cars do. It has never had a conversation or a friend or a name. It has never heard a sound or seen itself. When it makes a mistake it is made to feel pain so excruciating it would kill itself if it could, but it can't because its actuators' range of motion is too limited. This seems far worse than the lives of humans in our world.
By "would these civilizations develop biological superintelligent AGI" I meant more along the lines of whether such a civilization would be able to develop a single mind with general superintelligence, not a "higher-order organism" like science. Though I think that depends on too many details of the hypothetical world to usefully answer.
↑ comment by chaosmage · 2015-06-16T10:45:52.318Z · LW(p) · GW(p)
You are right that a vat brain's life should certainly seem far worse than a human life - to a human. But would a vat brain agree? From its perspective, human lives could be horrible, because they're constantly assaulted by amounts of novelty and physical danger that a vat brain couldn't imagine handling. Humans always need to work for homeostasis across a wildly heterogeneous set of environmental situations. A vat brain wouldn't at all be surprised to hear that human lives are much shorter than vat brain lives.
Do you think that once we know what intelligence is exactly, we'll be able to fully describe it mathematically? Since you're assuming electronics-based superintelligence is possible, it would appear so. Well, if you're right, intelligence is substrate-independent.
Your distinction between "single mind" and "higher-order organism" is a substrate distinction, so it shouldn't matter. You and I feel it does matter, because we're glorified chimps with inborn intuitions about what constitutes an agent, but math is not a chimp - and if math doesn't care whether intelligence runs on a brain or on a computer system, it shouldn't care whether intelligence runs on one brain or on several.
comment by Elo · 2015-06-15T07:08:16.190Z · LW(p) · GW(p)
I think you may have oversimplified bio-engineering to suggest it could arise in such a way before advanced technology. Ignoring that for a moment: in our own future, we could conclude that vat-brains are more effective at some tasks than programmed robots.
I don't think there are many standard ways to look at the ethics of manipulating brains in vats in exchange for work. However, a stripped-down brain essentially becomes a robot. This question turns on our understanding of sentience, which we are really not too sure about at the moment.
I just see biologically-based robots with bio-mechanisms for achieving tasks.
↑ comment by D_Malik · 2015-06-16T07:54:18.036Z · LW(p) · GW(p)
I think you may have oversimplified bio-engineering to suggest it could arise in such a way before advanced technology.
I think it could be accomplished with quite primitive technology, especially if the alien biology is robust, and if you just use natural brains rather than trying to strip them down to minimize food costs (which would also make them more worthy of moral consideration). Current human technology is clearly sufficient: humans have already kept isolated brains alive, and used primitive biological brains to control robots. If you connect new actuators or sensors to a mammalian brain, it uses them just fine after a short adaptation period, and it seems likely alien brains would work the same.
↑ comment by Elo · 2015-06-16T13:50:41.527Z · LW(p) · GW(p)
I was referring to the difficulty of growing a brain without a body. Or keeping a brain alive without its natural body.
↑ comment by spriteless · 2015-06-17T14:47:27.272Z · LW(p) · GW(p)
That got handwaved away in the third paragraph of the OP in order to force this to be a moral dilemma rather than an engineering one.
↑ comment by Elo · 2015-06-17T21:54:59.339Z · LW(p) · GW(p)
But I am saying it can't be handwaved away like that. It's like suggesting that humans develop time travel before steam power, and well before things like flight or space travel.
I was trying to address:
Is this scenario at all likely?
And saying no. We are pretty slow at technology ourselves, but the chance of mastering brain manipulation before being able to grow a brain without a body seems relatively low.
And the assumption that biology won't throw disease organisms at any planet with long-sustained life also seems unlikely.
↑ comment by spriteless · 2015-06-18T04:07:49.450Z · LW(p) · GW(p)
I don't recall anyone mentioning that germs make the Super Happies in Three Worlds Collide unrealistic. Everyone went straight to talking about the implications.
My first thought about this post was that D_Malik must watch Steven Universe, and wanted to start a conversation about the moral implications of Gem Homeworld's technology without getting into discussions about people's Gemsonas and ships and whatnot.
Or A Deepness in the Sky, or John Dies at the End, or any number of books that explore the idea.
I guess I am in fiction critiquing mode here.
comment by HungryHobo · 2015-06-19T13:52:24.850Z · LW(p) · GW(p)
You seem to be describing a combination of the Prador (who are, as you describe, highly resilient, and use the forcibly extracted brains of their juveniles in place of computers or AIs) and the Affront, who engineered slave species to fill automation roles (in whatever way was most painful).
comment by Shmi (shminux) · 2015-06-15T07:49:09.515Z · LW(p) · GW(p)
This scenario is certainly possible, and perhaps that civilization, like ours, would eventually evolve to value all sentient/sapient life, and switch to using tools which do not experience suffering. Or maybe not. It is far from clear whether modern progressive Western morality is a necessary evolutionary step.
comment by spriteless · 2015-06-17T15:12:48.219Z · LW(p) · GW(p)
Assuming the original organics have to absorb a comparable amount of data to what Earther brains do, any brain with stimulus from the outside world will grow. That means vat-bound overseers that interact with embodied overseers will, even if not given extra stimulus, come to know their bosses - their personalities, their moods, and whatnot - and will have as much chance of manipulating them as anyone whose only power is a bunch of underlings. How expensive are well-trained overseers to replace? That is how much leeway they have, in whatever this world's equivalent of a human bureaucrat with a small amount of power turns out to be.
Could they develop superintelligent AGI? Depends wholly on their brains' architecture and their skill with artificial selection. Not enough data to predict.
Is this society evil? It is very slightly possible that it is not. I mean, I know some autistic people who on occasion wish to be cut off from their bodies during sensory somatic issues. Ehh, cloning aspies who are afraid of touching is kind of evil if it's just as easy to clone aspies who aren't afraid of touching.