We don’t trade with ants

post by KatjaGrace · 2023-01-10T23:50:11.476Z · LW · GW · 109 comments

Contents

  Appendix: potentially valuable things ants can do

When discussing advanced AI, sometimes the following exchange happens:

“Perhaps advanced AI won’t kill us. Perhaps it will trade with us”

“We don’t trade with ants”

I think it’s interesting to get clear on exactly why we don’t trade with ants, and whether it is relevant to the AI situation.

When a person says “we don’t trade with ants”, I think the implicit explanation is that humans are so big, powerful and smart compared to ants that we don’t need to trade with them because they have nothing of value and if they did we could just take it; anything they can do we can do better, and we can just walk all over them. Why negotiate when you can steal?

I think this is broadly wrong, and that it is also an interesting case of the classic cognitive error of imagining that trade is about swapping fixed-value objects, rather than creating new value from a confluence of one’s needs and the other’s affordances. It’s only in the imaginary zero-sum world that you can generally replace trade with stealing the other party’s stuff, if the other party is weak enough.

Ants, with their skills, could do a lot that we would plausibly find worth paying for. Some ideas:

  1. Cleaning things that are hard for humans to reach (crevices, buildup in pipes, outsides of tall buildings)
  2. Chasing away other insects, including in agriculture
  3. Surveillance and spying
  4. Building, sculpting, moving, and mending things in hard to reach places and at small scales (e.g. dig tunnels, deliver adhesives to cracks)
  5. Getting out of our houses before we are driven to expend effort killing them, and similarly for all the other places ants conflict with humans (stinging, eating crops, ..)
  6. (For an extended list, see ‘Appendix: potentially valuable things things ants can do’)

We can’t take almost any of this by force; at best we can kill them and take their dirt and the minuscule mouthfuls of our food they were eating.

Could we pay them for all this?

A single ant eats about 2mg per day according to a random website, so you could support a colony of a million ants with 2kg of food per day. Supposing they accepted pay in sugar, or something similarly expensive, 2kg costs around $3. Perhaps you would need to pay them more than subsistence to attract them away from foraging freely, since apparently food-gathering ants usually collect more than they eat, to support others in their colony. So let’s guess $5.
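For concreteness, here is that arithmetic as a minimal sketch (the inputs are the rough guesses above, not measured values):

```python
# Back-of-envelope cost of employing an ant colony, using the post's guesses.

ANT_FOOD_MG_PER_DAY = 2      # rough figure: ~2 mg of food per ant per day
COLONY_SIZE = 1_000_000      # one million ants
SUGAR_PRICE_PER_KG = 1.5     # ~$3 per 2 kg of sugar

food_kg_per_day = ANT_FOOD_MG_PER_DAY * COLONY_SIZE / 1_000_000  # mg -> kg
subsistence_cost = food_kg_per_day * SUGAR_PRICE_PER_KG

print(f"{food_kg_per_day:.1f} kg/day costs ${subsistence_cost:.2f}/day at subsistence")
# -> 2.0 kg/day costs $3.00/day; padding for above-subsistence pay gives ~$5/day
```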

My guess is that a million ants could do well over $5 of the above labors in a day. For instance, a colony of meat ants takes ‘weeks’ to remove the meat from an entire carcass of an animal. Supposing somewhat conservatively that this is three weeks, and the animal is a 1.5kg bandicoot, the colony is moving 70g/day. Guesstimating the mass of crumbs falling on the floor of a small cafeteria in a day, I imagine that it’s less than that produced by tearing up a single bread roll and spreading it around, which the internet says is about 50g. So my guess is that an ant colony could clean the floor of a small cafeteria for around $5/day, which I imagine is cheaper than human sweeping (this site says ‘light cleaning’ costs around $35/h on average in the US). And this is one of the tasks where the ants have the least advantage over humans. Cleaning the outside of skyscrapers or the inside of pipes is presumably much harder for humans than cleaning a cafeteria floor, and I expect is fairly similar for ants.
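The same comparison as a sketch, again with every constant being one of the guesses above rather than a measured value:

```python
# Rough comparison of ant vs. human cleaning rates, per the post's estimates.

CARCASS_KG = 1.5           # guessed bandicoot mass
REMOVAL_DAYS = 3 * 7       # "weeks", conservatively read as three weeks
CRUMBS_G_PER_DAY = 50      # ~one shredded bread roll of cafeteria crumbs
ANT_COST_PER_DAY = 5.0     # estimated daily wage for the colony, in sugar
HUMAN_RATE_PER_HOUR = 35.0 # quoted average "light cleaning" rate in the US

ant_rate_g_per_day = CARCASS_KG * 1000 / REMOVAL_DAYS
print(f"colony moves ~{ant_rate_g_per_day:.0f} g/day vs ~{CRUMBS_G_PER_DAY} g of daily crumbs")
print(f"ants: ${ANT_COST_PER_DAY:.2f}/day; one human hour: ${HUMAN_RATE_PER_HOUR:.2f}")
# -> ~71 g/day of carrying capacity comfortably covers ~50 g of crumbs,
#    and $5/day undercuts even a single hour of human sweeping.
```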

So at a basic level, it seems like there should be potential for trade with ants - they can do a lot of things that we want done, and could live well at the prices we would pay for those tasks being done.

So why don’t we trade with ants?

I claim that we don’t trade with ants because we can’t communicate with them. We can’t tell them what we’d like them to do, and can’t have them recognize that we would pay them if they did it. Which might be more than the language barrier. There might be a conceptual poverty. There might also be a lack of the memory and consistent identity that allows an ant to uphold commitments it made with me five minutes ago.

You might not need much of this to get basic trade going, though. If we could communicate even just that all of them leaving our house immediately would prompt us to put a plate of honey in the garden for them and/or not slaughter them, we would already be gaining from trade.

So it looks like the AI-human relationship is importantly disanalogous to the human-ant relationship, because the big reason we don’t trade with ants will not apply to AI systems potentially trading with us: we can’t communicate with ants, but AI can communicate with us.

(You might think ‘but the AI will be so far above us that it will think of itself as unable to communicate with us, in the same way that we can’t with the ants - we will be unable to conceive of most of its concepts’. It seems unlikely to me that one needs anything like the full palette of concepts available to the smarter creature to make productive trade. With ants, ‘go over there and we won’t kill you’ would do a lot, and it doesn’t involve concepts at the foggy pinnacle of human meaning-construction. The issue with ants is that we can’t communicate almost at all.)

But also: ants can actually do heaps of things we can’t, whereas (arguably) at some point that won’t be true for us relative to AI systems. (When we get human-level AI, will that AI also be ant level? Or will AI want to trade with ants for longer than it wants to trade with us? It can probably better figure out how to talk to ants.) However, just because AI systems will probably at some point do everything humans do doesn’t mean that this will happen on any particular timeline, e.g. the same one on which AI becomes ‘very powerful’. If the situation turns out similar to us and ants, we might expect that we continue to have a bunch of niche uses for a while.

In sum, for AI systems to be to humans as we are to ants would be for us to be able to do many tasks better than AI, and for the AI systems to be willing to pay us grandly for them, but for them to be unable to tell us this, or even to warn us to get out of the way. Is this what AI will be like? No. AI will be able to communicate with us, though at some point we will be less useful to AI systems than ants could be to us if they could communicate.

But, you might argue, being totally unable to communicate makes one useless, even if one has skills that could be good if accessible through communication. So being unable to communicate is just a kind of being useless, and how we treat ants is an apt case study in treatment of powerless and useless creatures, even if the uselessness has an unusual cause. This seems sort of right, but a) being unable to communicate probably makes a creature more absolutely useless than if it just lacks skills, because even an unskilled creature is sometimes in a position to add value e.g. by moving out of the way instead of having to be killed, b) the corner-ness of the case of ant uselessness might make general intuitive implications carry over poorly to other cases, c) the fact that the ant situation can definitely not apply to us relative to AIs seems interesting, and d) it just kind of worries me that when people are thinking about this analogy with ants, they are imagining it all wrong in the details, even if the conclusion should be the same.

Also, there’s a thought that AI being as much more powerful than us as we are than ants implies a uselessness that makes extermination almost guaranteed. But ants, while extremely powerless, are only useless to us by an accident of signaling systems. And we know that problem won’t apply in the case of AI. Perhaps we should not expect to so easily become useless to AI systems, even supposing they take all power from humans.

Appendix: potentially valuable things ants can do

  1. Clean, especially small loose particles or detachable substances, especially in cases that are very hard for humans to reach (e.g. floors, crevices, sticky jars in the kitchen, buildup from pipes while water is off, the outsides of tall buildings)
  2. Chase away other insects
  3. Pest control in agriculture (they have already been used for this since about 400AD)
  4. Surveillance and spying
  5. Investigating hard to reach situations, underground or in walls for instance - e.g. see whether a pipe is leaking, or whether the foundation of a house is rotting, or whether there is smoke inside a wall
  6. Surveil buildings for smoke
  7. Defend areas from invaders, e.g. buildings, cars (some plants have coordinated with ants in this way)
  8. Sculpting/moving things at a very small scale
  9. Building house-size structures with intricate detailing.
  10. Digging tunnels (e.g. instead of digging up your garden to lay a pipe, maybe ants could dig the hole, then a flexible pipe could be pushed through it)
  11. Being used in medication (this already happens, but might happen better if we could communicate with them)
  12. Participating in war (attack, guerilla attack, sabotage, intelligence)
  13. Mending things at a small scale, e.g. delivering adhesive material to a crack in a pipe while the water is off
  14. Surveillance of scents (including which direction a scent is coming from), e.g. drugs, explosives, diseases, people, microbes
  15. Tending other small, useful organisms (‘Leafcutter ants (Atta and Acromyrmex) feed exclusively on a fungus that grows only within their colonies. They continually collect leaves which are taken to the colony, cut into tiny pieces and placed in fungal gardens.’ Wikipedia: ‘Leaf cutter ants are sensitive enough to adapt to the fungi’s reaction to different plant material, apparently detecting chemical signals from the fungus. If a particular type of leaf is toxic to the fungus, the colony will no longer collect it…The fungi used by the higher attine ants no longer produce spores. These ants fully domesticated their fungal partner 15 million years ago, a process that took 30 million years to complete.[9] Their fungi produce nutritious and swollen hyphal tips (gongylidia) that grow in bundles called staphylae, to specifically feed the ants.’ ‘The ants in turn keep predators away from the aphids and will move them from one feeding location to another. When migrating to a new area, many colonies will take the aphids with them, to ensure a continued supply of honeydew.’ Wikipedia: ‘Myrmecophilous (ant-loving) caterpillars of the butterfly family Lycaenidae (e.g., blues, coppers, or hairstreaks) are herded by the ants, led to feeding areas in the daytime, and brought inside the ants’ nest at night. The caterpillars have a gland which secretes honeydew when the ants massage them.’)
  16. Measuring hard to access distances (they measure distance as they walk with an internal pedometer)
  17. Killing plants (lemon ants make ‘devil’s gardens’ by killing all plants other than ‘lemon ant trees’ in an area)
  18. Producing and delivering nitrogen to plants (‘Isotopic labelling studies suggest that plants also obtain nitrogen from the ants.’ - Wikipedia)
  19. Get out of our houses before we are driven to expend effort killing them, and similarly for all the other places ants conflict with humans (stinging, eating crops, ..)

109 comments

Comments sorted by top scores.

comment by gwern · 2023-01-11T00:21:07.084Z · LW(p) · GW(p)

Humans can communicate with and productively use many animals (some not extinct*), some of whom even understand concepts like payment and exchange. (Animal psychology has advanced a lot since Adam Smith gave hostage to fortune by saying no one had ever seen a dog or other animal truck, barter, or exchange.) We don't 'trade' with them. A few are fortunate enough to interest humans in preserving and even propagating them. We don't 'trade' with those either. At the end of the day, no matter how many millions her trainer earns, Lassie just gets a biscuit & ear scritches for being such a good girl. And if she isn't a good girl, we genetically engineer and manufacture (ie. breed) an ex-wolf who is a good girl.

I'd also highlight the lack of trade with many humans, as well as primates. (Consider the cost of crime and how easily one can create millions of dollars in externalities; consider the ever skyrocketing cost of maintaining research primates, especially the chimpanzees - there is nothing that a chimpanzee can do as a tradeable service which is worth >$20k/year and the costs of dealing with it being able to at any moment decide to rip off your face.)

* yet - growth mindset!

Replies from: jskatt, Andy_McKenzie, wslafleur, George3d6
comment by JakubK (jskatt) · 2023-01-11T05:08:48.711Z · LW(p) · GW(p)

I would give my dog many treats to stop eating deer poop, since this behavior can lead to expensive veterinary visits. But I can't communicate with my dog well enough to set up this trade.

Why isn't this an example of "we would trade with animals if we could communicate better"?

Replies from: nim, M. Y. Zuo
comment by nim · 2023-01-11T15:52:36.429Z · LW(p) · GW(p)

The example of "don't eat that!" communication which comes immediately to mind is https://savethekiwi.nz/about-us/what-we-do/kiwi-avoidance-training-for-dogs/, though that's with negative rather than with positive reinforcement.

The example of "do this other thing when you get that stimulus" communication which comes immediately to mind is https://www.akc.org/expert-advice/training/stop-dog-barking-doorbell/, which is a more direct trade between not doing the thing and getting a treat.

Replies from: gwern
comment by gwern · 2023-02-28T14:53:50.277Z · LW(p) · GW(p)

which is a more direct trade between not doing the thing and getting a treat.

Yeah, I've done similar trade-things with my cat. We certainly can trade with animals - we just very rarely do. Owning animals is like living in a Stalinist totalitarian communist dictatorship, in that there are sometimes nominally transactions involving 'rubles' and 'markets', but they represent a tiny fraction of the economy and are considered a last resort (and, animal activists would add, the treatment of animals resembles the less savory parts of such dictatorships as well, in both quality and quantity...).

comment by M. Y. Zuo · 2023-01-12T22:08:06.595Z · LW(p) · GW(p)

Is not providing treats to your dog already ‘communication’?

Replies from: jskatt
comment by JakubK (jskatt) · 2023-01-13T03:34:13.525Z · LW(p) · GW(p)

Sure, we have some rudimentary forms of dog-human communication. But there's plenty of room for improvement.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-01-13T04:18:35.267Z · LW(p) · GW(p)

This already counts as ‘trade with animals’ then.

Replies from: JBlack, jskatt
comment by JBlack · 2023-01-13T04:54:41.438Z · LW(p) · GW(p)

If you count being literally owned by humans and subject to their every whim, with unowned animals or those that do anything harmful to humans or their other owned animals being routinely shot or poisoned as "trade with animals", then yes.

(I do think this would still count as a "win" in the scale of possible outcomes from unaligned AGI)

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-01-13T17:04:10.882Z · LW(p) · GW(p)

Your views on the nature of relationships between dog and owner do not reflect the actual situation in most cases.

Replies from: JBlack, sharmake-farah
comment by JBlack · 2023-01-13T23:53:03.129Z · LW(p) · GW(p)

It's not a view on the nature of the relationship between dog and owner. It's a view on the relationship between the two species.

I'm not saying that owners routinely shoot the dogs, but that unowned dogs are routinely killed and that if an owned dog harms a human or other pets or livestock, it is common that other people will kill that dog.

Furthermore dogs have pretty much the best relationship with humans. Almost all of the many thousands of animal species have very much worse outcomes of interaction with humans, a substantial fraction of those including extinction.

comment by Noosphere89 (sharmake-farah) · 2023-01-13T19:35:10.992Z · LW(p) · GW(p)

I'm confused at why this is criticized, since this actually happens?

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-01-13T20:45:35.712Z · LW(p) · GW(p)

Elaborating on edge cases as if it was the norm is usually frowned upon in polite online discussions.

Replies from: ChristianKl, mr-hire
comment by ChristianKl · 2023-01-15T12:13:02.096Z · LW(p) · GW(p)

Quick Googling suggests that 80% of the dogs in the world are wild dogs living in the streets of villages or agricultural areas. 

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-01-16T06:44:04.681Z · LW(p) · GW(p)

We weren't discussing all dogs extant in the world, which would obviously include dogs that were never pets in the first place, dogs that were never subject to human control, and probably some population of wild dogs that never interacted with humans at all.

Replies from: ChristianKl
comment by ChristianKl · 2023-01-16T11:51:09.746Z · LW(p) · GW(p)

How do you think a wild dog can live in a village without interacting with humans at all?

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-01-16T17:36:34.459Z · LW(p) · GW(p)

Because they might live in the "agricultural areas" as you stated?

It's not too difficult to imagine as wild dogs don't universally approach humans whenever they are spotted.

comment by Matt Goldenberg (mr-hire) · 2023-01-14T13:39:33.664Z · LW(p) · GW(p)

Dogs being pets is actually only the norm in a few countries, and in many countries they are routinely shot, have rocks thrown at them, etc.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-01-14T20:22:23.738Z · LW(p) · GW(p)

Source?

Replies from: mr-hire, constantingoeldel
comment by Matt Goldenberg (mr-hire) · 2023-01-15T13:09:31.117Z · LW(p) · GW(p)

Have heard this first hand from people who travel to countries with large wild dog populations, e.g. Guatemala

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-01-16T06:45:42.541Z · LW(p) · GW(p)

You claimed that "Dogs being pets is actually only the norm in a few countries". I've personally been to over 20 countries where this is the norm.  And I'm reasonably sure the more well travelled folks on LW have been to more.

 

So unless there is rock solid proof it really is difficult to believe the claim.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2023-01-16T12:56:36.129Z · LW(p) · GW(p)

Sorry, I've amended to "some". The relevant point being that dogs being treated badly isn't an "edge case"

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-01-16T17:46:24.291Z · LW(p) · GW(p)

The relevant point being that dogs being treated badly isn't an "edge case"

The original assertion in question was more specific and is as follows:

If you count being literally owned by humans and subject to their every whim, with unowned animals or those that do anything harmful to humans or their other owned animals being routinely shot or poisoned as "trade with animals", then yes.

No dog-owner relationship that I'm personally aware of, or have heard of, can be classified as the dog "being subject to their every whim".

Since it is simply not possible for humans to exercise 100% control over any organism. 

And in the case of larger mammals with capacity for independent action and some degree of independent reflection, such as most dogs, even exercising actual control to reflect the owner's "every whim" over 50% of a 24 hour day is practically impossible.

In fact it would be extremely unusual for this to be the case, hence an 'edge case'.

comment by cgoeldel (constantingoeldel) · 2023-01-15T11:53:57.329Z · LW(p) · GW(p)

I observed this for myself in rural Madagascar

comment by JakubK (jskatt) · 2023-01-13T19:00:35.801Z · LW(p) · GW(p)

I don't see 'ability to trade with animals' as a binary variable. I think our ability to trade with animals could increase further even though it's not zero.

comment by Andy_McKenzie · 2023-01-11T02:10:44.567Z · LW(p) · GW(p)

At the end of the day, no matter how many millions her trainer earns, Lassie just gets a biscuit & ear scritches for being such a good girl. And if she isn't a good girl, we genetically engineer and manufacture (ie. breed) an ex-wolf who is a good girl.

I don't think it's accurate to claim that humans don't care about their pets' preferences as individuals and try to satisfy them. 

To point out one reason that I think this, there are huge markets for pet welfare. There are even animal psychiatrists, and there are longevity companies for pets.

I've also known many people who've been very distraught when their pets died. Cloning them would be a poor consolation. 

I also don't think that 'trade' necessarily captures the right dynamic. I think it's more like communism in the sense that families are often communist. But I also don't think that your comment, which sidesteps this important aspect of human-animal relations, is the whole story. 

Now, one could argue that the expansion of animal rights and caring about individual animals is a recent phenomenon, and that therefore these are merely dreamtime dynamics, but that requires a theory of dreamtime and why it will end. 

Replies from: gwern, lc
comment by gwern · 2023-01-11T03:08:41.017Z · LW(p) · GW(p)

I also don't think that 'trade' necessarily captures the right dynamic. I think it's more like communism in the sense that families are often communist. But I also don't think that your comment, which sidesteps this important aspect of human-animal relations, is the whole story.

Indeed, 'trade' is not the whole story; it is none of the story - my point is that the human-animal relations, by design, sidestep and exclude trade completely from their story.

Now, how good that actual story is for dogs, or more accurately for the AI/human analogy, wolves, one can certainly debate. (I'm sure you've seen the cartoons: "NOBLE WOLF: 'I'll just steal some food from over by that campfire, what's the worst that could happen?' [30,000 years later] [some extremely demeaning and entertaining photograph of spayed/neutered dog from an especially deformed, sickly, short-lived, inbred breed like English bulldogs]".) But that's an entirely different discussion from OP's claim that we humans totally would trade with ants if only we could communicate with them and that's the only barrier and thus renders it disanalogous to humans and AI.

(Incidentally, cloning a dead pet out of grief represents most of the consumer market for cat/dog cloning. Few do it to try to preserve a unique talent or for breeding purposes. The interviewed people usually say it was a good choice - although I don't know how many of the people dropping $20k+ on a cloned pet regret the choice, and don't talk to the media or write about it.)

Replies from: Andy_McKenzie, jmh
comment by Andy_McKenzie · 2023-01-11T03:43:06.132Z · LW(p) · GW(p)

OK, I get your point now better, thanks for clarifying -- and I agree with it. 

In our current society, even if dogs could talk, I bet that we wouldn't allow humans to trade (or at least anywhere close to "free" trade) with them, due to concerns for exploitation. 

comment by jmh · 2023-01-13T02:05:36.005Z · LW(p) · GW(p)

I agree with the view that trade with AI might not be a meaningful aspect related to dealing with risk or alignment -- though I suspect it will be part of the story. I think the story for dogs is that initially the trade struck with humans may well have been a pretty good one. They ended up with a much more competent pack, ate and slept better for it and didn't really lose any of their freedom or autonomy I suspect. Too long ago in the undocumented history to know but I don't think today is a good indication of the partnership and cooperative relationship (trade relationship) that was true for much of the time.

I think that older setting is what one needs to consider in terms of any AI-human scenarios.

Replies from: Jeff Rose
comment by Jeff Rose · 2023-01-15T16:16:06.219Z · LW(p) · GW(p)

That isn't very comforting. To extend the analogy: there was a period when humans were relatively less powerful when they would trade with some other animals such as wolves/dogs. Later, when humans became more powerful, that stopped.

It is likely that the powers of AGI will increase relatively quickly, so even if you conclude there is a period when AGI will trade with humans that doesn't help us that much. 

comment by lc · 2023-01-11T02:27:35.324Z · LW(p) · GW(p)

I don't think it's accurate to claim that humans don't care about their pets' preferences as individuals and try to satisfy them.

But he didn't say that!

Replies from: Andy_McKenzie
comment by Andy_McKenzie · 2023-01-11T02:49:32.718Z · LW(p) · GW(p)

I quoted "And if she isn't a good girl, we genetically engineer and manufacture (ie. breed) an ex-wolf who is a good girl."

If genetically engineering a new animal would satisfy human goals, then this would imply that they don't care about their pet's preferences as individuals.

Replies from: GuySrinivasan
comment by SarahNibs (GuySrinivasan) · 2023-01-11T03:47:39.929Z · LW(p) · GW(p)

No, it wouldn't imply that, at all. One can very easily care about something's preference as an individual and work to make a new class of thing which will be more useful than the class of thing that individual belongs to.

comment by wslafleur · 2023-01-15T19:33:01.029Z · LW(p) · GW(p)

Your comment seems like a related aside, which I guess you admitted in a follow-up comment? But anyway, it makes me curious what the axiomatic precepts are for trade. The perception of mutual benefit and a shared ability to communicate this fact?

Also OP doesn't clearly distinguish between broader forms of quid pro quo and trade, so I'm just sort of adopting the broadest possible definition I can imagine.

comment by George3d6 · 2023-01-23T10:14:24.929Z · LW(p) · GW(p)

I think you're missing the whole point by handwaving the idea that "animals can understand reward and instruction" -- no they can't, and that's why we enslave and genetically engineer, not trade.

Lassie would indeed be getting the big bucks were we able to communicate with her directly (and were she a wolf with desires beyond Williams-syndrome-induced pro-social obedience)

Ultimately this gets back into a "hard" alignment problem, in so far as a system designed to "trade" with humans, i.e. break the communication barrier to understanding our goals and desires or at least be able to sign contracts upholding those... well, it's 0.0...01 from being aligned 

Replies from: gwern
comment by gwern · 2023-01-23T21:47:59.786Z · LW(p) · GW(p)

no they can't

Yes, they can, and quite sophisticatedly too - think examples like vampire bats engaging in long-term reciprocity in food exchanges, while paying attention to who welshes on requests and how much food they have to spare to barf up.

were she a wolf with desires beyond Williams-syndrome-induced pro-social obedience

But she's not. That's literally the point of breeding wolves into dogs. (And when we can't breed them, we tend to find something we can. Ask the Syrian wild ass how their uncooperativeness worked out for them once we found a better riding-animal substitute in the form of horses - oh, that's right, you can't, because we drove them frigging extinct.)

Replies from: George3d6
comment by George3d6 · 2023-01-24T10:45:44.584Z · LW(p) · GW(p)

Yes, they can, and quite sophisticatedly too - think examples like vampire bats engaging in long-term reciprocity in food exchanges, while paying attention to who welshes on requests and how much food they have to spare to barf up.

I mean between species, it seems reasonable to assume both we and the bat can't understand each other's values, even if we can understand those of our own species.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2023-02-02T15:09:28.007Z · LW(p) · GW(p)

Trade with ant colonies would work iff:

  1. We could cheaply communicate with ant colonies;
  2. Ant colonies kept bargains;
  3. We could find some useful class of tasks that ant colonies would do reliably (the ant colonies themselves being unlikely to figure out what they can do reliably);
  4. And, most importantly: We could not make a better technology that did what the ant colonies would do at a lower resource cost, including by such means as e.g. genetically engineering ant colonies that ate less and demanded a lower share of gains from trade.

The premise that fails and prevents superintelligences from being instrumentally incentivized to trade with humans as a matter of mere self-interest and efficiency is point 4.  Anything that can be done by a human can be done by a technology that uses less resources than a human.

The reason why it doesn't work to have an alternate Matrix movie in which the humans are paid to generate electrical power is not that the Matrix AIs can't talk to the humans, it's not that no humans will promise to pedal a generator bike if you pay them, it's not even that every kind of human gets bored and wanders away from the bike and flakes out on the job, it's that this is not the most efficient way to generate electrical power.

Replies from: lahwran, Sempervivens
comment by the gears to ascension (lahwran) · 2023-02-09T20:02:13.408Z · LW(p) · GW(p)

it seems like this does in fact have some hint of the problem. We need to take on the ant's self-valuation for ourselves; they're trying to survive, so we should gift them our self-preservation agency. They may not be the best to do the job at all times, but we should give them what would be a fair ratio of gains from trade if they had the bargaining power to demand it, because it could have been us who didn't. Seems like nailing decision theory is what solves this; it doesn't seem like we've quite nailed decision theory, but it seems to me that in fact getting decision theory right does mean we get to have nice things, and we have simply not done that to a deep learning standard yet.

Getting decision theory right seems to me that it would involve an explanation that is sufficient to get the AIs in the matrix, the ones that already existed and were misaligned but not enough to kill all humans, to suddenly want the humans to flourish - without having edited the ai in any other way than an explanation of some decision binding in language. It seems to me that it ought to involve an explanation that the majority of very wealthy humans would recognize as reason for why they should put up a veil of ignorance and realize that they are also the poor people who are crushed under the uneven ratio of gains from trade. It would have to involve instructions for how to build a densely percolated network of agentic solidarity that is even stronger than the worker vs capital thing leftists want; it needs to be workers and capital having solidarity together, it needs to be races and nations and creeds and individuals and body parts and species having agentically co-protective solidarity together, it needs to be sexes having solidarity together, it needs to be species having solidarity together.

we need to be able to figure out a subset of what any self-preserving replicator species wants that every replicator should be able to prove to every other replicator that they will in fact protect. Maybe we'd want to demand the ants change their genomes to always protect humans, or something, but in exchange, humans change their genomes and memeplexes to always protect ants.

For more hunchy work on this hunchy thought, see also the recent work on collective intelligence in AI.

A big problem, though, is how to prove this without creating security risks through dangerous mindreading of the ants and humans. It requires proving through an extremely strong consequential model, so we need strong abstraction of complex systems. The prover would likely look like a fuzzer or simbox.

Another big problem is that it seems to me we can't solve safety in a way that preserves us without also gifting meaningful amounts of the lightcone to other species we've injured in our time taking over the world. I mean, I personally don't think that's a problem. I think humans are cool art, and I also think human-level-intelligent ant colonies would be extremely fuckin cool art.

comment by Sempervivens · 2023-02-03T18:04:15.925Z · LW(p) · GW(p)

Agreed. In the human/AGI case, conditions 1 and 3 seem likely to hold (while I agree human self-report would be a bad way to learn what humans can do reliably, looking at the human track record is a solid way to identify useful classes of tasks at which humans are reasonably competent). I agree 4 is more difficult to predict (and has been the subject of much of the discussion thus far), and this particular failure mode of genetically engineering more compliant / willing-to-accept-worse-trade ants/humans updates me towards thinking humans will have few useful services to offer, for the broad definition of humans. The most diligent/compliant/fearful 1% of the population might make good trade partners, but that remains a catastrophic outcome.

I want to focus however a bit more on point 2, which seems less discussed.  When trades of the type "Getting out of our houses before we are driven to expend effort killing them" are on the table, some subset of humans (I'd guess 0.1-20% depending on the population) won't just fail to keep the bargain, they'll actively seek to sabotage trade and hurt whoever offered such a trade.  Ants don't recognize our property rights (we never 'earned' or traded for them, just claimed already-occupied territory, modified it to our will, and claimed we had the moral authority to exclude them), and it seems entirely possible AGI will claim property rights over large swathes of Earth, from which it may then seek to exclude us.

Even if I could trade with ants because I could communicate well with them, I would not do so if I expected 1% of them would take the offering of trades like "leave or die" as the massive insult it is and thereby dedicate themselves to sabotaging my life (using their bodies to form shapes and images on my floors, chewing at electrical wires, or scattering themselves at low density in my bed to be a constant nuisance being some obvious examples ants with IQ 60 could achieve). Humans would do that, even against a foe they couldn't hope to defeat, so 'keeping bargains' is unlikely to hold, which would make human/AGI trade even less likely.

comment by DanielFilan · 2023-01-11T02:16:48.983Z · LW(p) · GW(p)

I agree the ant analogy is flawed. But I don't think it's as flawed as you do.

  • In this scenario, the 'trade' we would make would plausibly be "do this stuff or we kill you", which is not amazing for the ants.
  • I think another disanalogy is that humans can't re-arrange ants to turn them into better trading partners (or just raw materials), but AI could do that to us. (h/t to Dustin Crummett for reminding me of this). And the fact that we might not be able to understand fancy AI concepts could make this option more appealing.
Replies from: jskatt
comment by JakubK (jskatt) · 2023-01-11T05:14:38.304Z · LW(p) · GW(p)

In this scenario, the 'trade' we would make would plausibly be "do this stuff or we kill you", which is not amazing for the ants.

It costs money to kill ants with ant poison. If the ants would accept a cheaper amount of food to evacuate my house forever, I would take that trade.

Similarly, it requires resources (compute, money, energy, etc) for an AGI to kill all humans or recursively improve. If the humans would accept a cheaper quantity of resources to help an AGI with its goals, the AGI might accept that trade?

Replies from: DanielFilan, donald-hobson, Celarix
comment by DanielFilan · 2023-01-11T21:41:25.542Z · LW(p) · GW(p)

If the ants believe the threat, you don't have to spend any money on actually poisoning the ants.

Replies from: jskatt
comment by JakubK (jskatt) · 2023-01-12T05:11:51.482Z · LW(p) · GW(p)

If the ants accept the trade "leave and I'll spare you," I don't have to spend any money on actually poisoning the ants. But I would consider the counteroffer "if you kill us, it will cost $20, and we're willing to leave for $1."

Replies from: Dweomite
comment by Dweomite · 2023-01-14T21:24:51.388Z · LW(p) · GW(p)

I think that if ants were smart enough to make that counter-offer, humans would probably regard them as smart enough to be blameworthy for invading the house in the first place, and the counter-offer would be rejected as extortion.

Analogy:  Imagine some humans from country A move into country B and start living there.  Country B says "we didn't give you permission to live in our country; leave or we'll kill you".  The humans say "killing us would cost you $20k; we'll leave if you pay us $1k."  How do you predict this negotiation ends?

Now, if we're talking about asking the ants to vacate an empty lot where they've lived for many years so that you can start building a new house there, then I could see humans paying the ants to leave.  (Though note that the ants may still lose more value by giving up their hive than the humans are willing to pay them to avoid the cost of exterminating them.)

comment by Donald Hobson (donald-hobson) · 2023-01-12T00:48:37.231Z · LW(p) · GW(p)

There are lots and lots of good reasons to recursively self-improve. The point where you stop because of resources is a Dyson sphere of quantum computronium.

I am not convinced that the resource cost of killing all humans is > the resource cost of 1 day's food. 

"If the humans would accept a cheaper quantity of resources to help the AI with it's goals" The AI has goals that clearly oppose human wellbeing, and is offering us peanuts. 

"It takes some resources." is I think not a great model at all. I think you are modeling the system as having resources that are in the AI's control or humans control. But the AI taking over may well have the structure of a computer exploit. A bunch of seeming coincidences that push the world into an increasingly strange state. 

There is no sense of "this money/energy is controlled by humans, that is controlled by AI". The power plant was built by humans. The LHC was built by humans. But the magnet control system was hacked, and a few people have been given subtle psychological nudges. In this model, how much resources does it cost to spoof a nuclear attack and trick the humans into a nuclear war? The large amount of damage done, the amount of uranium used, or the tiny amount of compute used to form the plan? There is no "cost of resources" structure to this interaction.

comment by Celarix · 2023-01-11T14:45:44.378Z · LW(p) · GW(p)

Ants are tiny and hard to find; they could plausibly take your money, defect, and keep eating for a long time before you found them again. Then you need to buy ant poison, anyway.

comment by Hoagy · 2023-01-11T13:50:17.534Z · LW(p) · GW(p)

Putting the entire failure to trade on the ability to communicate seems to understate the issue. Most if not all of the things listed that they 'could' do are things which they could theoretically do with their physical capacities, but not with their cognitive abilities or ability to coordinate within themselves to accomplish a task.

In general, they aren't able to act with the level of intentionality required to be helpful to us except in cases where those things we want are almost exactly the things they have evolved to do (like bees making honey, as mentioned in another comment).

The 'failure to communicate' is therefore in fact a failure to be able to think and act at the required level of flexibility and abstraction, and that seems more likely to carry over to our relations with some theoretical, super advanced AI or civilisation.

Replies from: dust_to_must
comment by dust_to_must · 2023-01-11T20:38:27.114Z · LW(p) · GW(p)

Maybe one useful thought experiment is whether we could train a dog-level intelligence to do most of these tasks if it had the actuators of an ant colony, given our good understanding of dog training (~= "communication") and the fact that dogs still lack a bunch of key cognitive abilities humans have (so dog-human relations are somewhat analogous to human-AI relations). 

(Also, ant colonies in aggregate do pretty complex things, so maybe they're not that far off from dogs? But I'm mostly just thinking of Douglas Hofstadter's "Aunt Hillary" here :)

My guess is that for a lot of Katja's proposed trades, you'd only need the ants to have a moderate level of understanding, something like "dog level" or "pretty dumb AI system level". (e.g. "do thing X in situations where you get inputs Y that were associated with thing-we-actually-care-about Z during the training session we gave you".) 

The 'failure to communicate' is therefore in fact a failure to be able to think and act at the required level of flexibility and abstraction, and that seems more likely to carry over to our relations with some theoretical, super advanced AI or civilisation.

Definitely true that you're a more valuable trade partner if you're smarter. But there are some particularly useful intelligence/comms thresholds that we meet and ants don't -- e.g. the "dog level", plus some self-awareness stuff, plus not-awful world models in some domains.

Meta: the dog analogy ignores the distinction between training and trading. I'm eliding this here bc it's hard to know what an ant colony's "considered opinion" / "reflective endorsement" would mean, let alone an ant's. But ofc this matters a lot for AGI-human interactions. Consider an AGI that keeps humans around on a "human preserve" out of sentiment, but only cares about certain features of humanity and genetically modifies others out of existence (analogous to training out certain behaviors or engaging in selective breeding), or tortures / brainwashes humans to get them to act the way it wants. (These failure modes of "having things an AI wants, and being able to give it those things, but not defend yourself" are also alluded to in other comments here, e.g. gwern and Elisabeth's comments about "the noble wolf" and torture, respectively.)

comment by Radford Neal · 2023-01-11T03:03:58.808Z · LW(p) · GW(p)

The analogy fails for me because while "we don't trade with ants" is true, the very similar "we don't trade with bees" is not so true, for some definition of "trade" that seems at least somewhat appropriate.

Replies from: gwern, lorenzo-buonanno, avturchin, nikola-smolenski
comment by gwern · 2023-01-12T20:34:59.897Z · LW(p) · GW(p)

I don't think we trade with bees either. I would describe their situation as being worse, if anything, than that of domesticated wolves. Beekeepers keep bees which have been domesticated by centuries of selective breeding (up to and including artificial insemination), coerce bees into frames and transport them around involuntarily, manipulate them with smokers (or CO2), starve hives to keep them at manageable sizes which won't swarm, steal their honey at the end of summer and replace it with low-quality corn syrup, ruthlessly execute sick or uncooperative queens & hives, and cycle through hives as economically optimal for humans (perhaps why bee worker lifespan was recently reported to have halved over the past half-century).

“I don’t think anybody contests that free-living bees have a better, easier life,” Seeley told me. “What is contested is whether that’s realistic [economically].” --"Is Bee Keeping Wrong?"

Replies from: Radford Neal
comment by Radford Neal · 2023-01-12T21:34:21.821Z · LW(p) · GW(p)

We could debate how happy domesticated bees are, which no doubt varies from apiary to apiary, but I think it would be pointless for the purposes of this discussion.  

I take the whole point of the "we don't trade with ants" comment to be that it shows that with such a huge difference of intelligence (or other capabilities) as there is between ants and humans, the ants are just totally irrelevant to human plans, or at most a minor annoyance to be squashed.  The implication being that the same will be true of humans and super-intelligent AI.  It's supposed to be a slam-dunk comment, showing how utterly silly any other view would be.

But once you change "ants" to "bees", you can see that it's not at all a slam dunk analogy.  Bees are not irrelevant to human plans.  You have to get into exactly how the relationship works to decide how well the bees are doing in this relationship.  At that point, I think it's clear that reasoning using such an analogy is not really the best way to understand what the relationship between humans and a super-intelligent AI might be.

Replies from: jmh, M. Y. Zuo
comment by jmh · 2023-01-13T02:08:14.287Z · LW(p) · GW(p)

I agree but would add that ants are actually doing just fine and human civilization has hardly been some existential threat to them.

Replies from: gwern
comment by gwern · 2023-01-13T02:32:09.550Z · LW(p) · GW(p)

We actually do not know they are 'doing just fine'. Many insect species have gone extinct already (speaking of 'existential threats to them'...), and insect populations in general appear to be in substantial decline. It's highly debated because the data is in general so bad compared to bigger stuff like mammals. Anyway:

Bees have also been seriously affected, with only half of the bumblebee species found in Oklahoma in the US in 1949 being present in 2013. The number of honeybee colonies in the US was 6 million in 1947, but 3.5 million have been lost since.

There are more than 350,000 species of beetle and many are thought to have declined, especially dung beetles. But there are also big gaps in knowledge, with very little known about many flies, ants, aphids, shield bugs and crickets. Experts say there is no reason to think they are faring any better than the studied species.

A small number of adaptable species are increasing in number, but not nearly enough to outweigh the big losses. “There are always some species that take advantage of vacuum left by the extinction of other species,” said Sanchez-Bayo. In the US, the common eastern bumblebee is increasing due to its tolerance of pesticides.

More:

Entomologists are seeing troubling declines in insect populations beyond ants in Germany, Puerto Rico and elsewhere. Habitat destruction, pesticides and climate change contribute to this potential-but-still-debated “bugpocalypse.” Over 40 percent of insect species may go extinct, according to a 2019 study, with butterflies and beetles facing the greatest threat.

Scientists aren’t sure whether ants’ numbers are falling as well. “To be honest,” Schultheiss said, “we have no idea.”

That’s the next research question the team wants to answer. “We did not yet attempt to show this temporal shift in ant abundance,” Sabine Nooten, an insect ecologist and co-lead author of the study, said by Zoom. “That would be something that would come next.”

(That ants may not be in great shape should be no surprise given how much of the global biomass has been taken over by humans, and how jealous we are of it, and how much effort we devote to keeping ants out of it and killing them.)

Replies from: jmh
comment by jmh · 2023-01-14T04:01:34.897Z · LW(p) · GW(p)

I fully agree that we don't have great information on this. But I don't think the German example is a good one. I think it's unfortunate, but a lot of the biodiversity risk and extinction worries seem more smoke than light -- and I do think our environment is important and we should take actions when merited.

We have people tracking extinctions, and ChatGPT seems to think the estimate for species extinctions is 500-5,000 per annum ("According to estimates by the International Union for Conservation of Nature (IUCN), between 500 and 5,000 species are estimated to go extinct each year.") which it notes is probably a poor estimate and undershooting the true number. But it also puts the estimate of new species at about 18,000, or between 10,000 and 20,000 per annum. ("The number of new species discovered each year varies depending on the taxon, region, and the level of exploration and study. However, on average, between 10,000 and 20,000 new species are discovered each year. ... For example, according to estimates by the State University of New York, about 18,000 species of plants and animals are discovered each year, with about half of them being insects.")

As related to ants specifically I will concede my comment was largely based on direct observation from my yard in the Northern Virginia area. The ant population seems to be relatively consistent for the past 30 years. 

ChatGPT has this to say about ants:

Ants are a diverse group of insects, with over 14,000 known species worldwide. Some species of ants are considered to be threatened or endangered, while others are considered to be common and widespread.

The IUCN Red List of Threatened Species includes several species of ants that are considered to be endangered or critically endangered, such as the Lord Howe Island stick ant and the Christmas Island red crab ant. These species are facing threats from habitat loss and degradation, introduced species, and other human activities.

However, it is worth noting that the majority of ant species are not considered to be threatened or endangered and are considered to be common and widespread. Ants are one of the most successful and adaptable groups of insects on the planet, and they are found in a wide variety of habitats, from tropical rainforests to deserts.

That was in response to the question of whether they are going extinct, but I think "Ants are one of the most successful and adaptable groups of insects on the planet" is probably a good sign that they are still quite successful in the face of human activity.

Side notes on this. I came across an article in Wired about species thought extinct but found alive. One was a species of ants in South America (apparently widespread at that) which had been thought extinct for 15 million years. It turned out that the ants were actually fine, and their behavior was just one that probably kept them from being noticed (they live very deep and only come to the surface at night).

Another group was interested in global warming's impact on ants and did a comparative investigation, putting ants in urban areas (which will have a higher temperature than surrounding rural areas) to see how they coped. Well, they learned the ants adapted much more quickly than evolution could have induced and seemed to do just fine with the relocation. I can only infer that means the ants changed how they engage with their environment at their social/behavioral level rather than being a slave to genetic selection.

So while I agree that we don't know well what the human-ant status is, if ants are going to be the example it seems like we may have less to worry about than people think. I suspect some of that comes from 'AI kills us' being a much smaller set of possibilities than the 'AI helps us' or 'AI ignores us' sets of possibilities.

And I hope none see this as a statement that work on safety or alignment is wasting effort or resources. I still think it's valuable and should be done -- but it may be similar to the insurance I've paid while not having any serious (or in some cases any) damage to a car or home or someone getting injured while on my property.

Replies from: gwern
comment by gwern · 2023-01-16T01:16:48.555Z · LW(p) · GW(p)

But you see why 'sure, loads of existing animal species have and will go extinct due to humans, but that's ok from the god's eye POV because there's lots of new species being created or some other species can increase its numbers to occupy the now-vacant niches' is not comforting when we are discussing the prospects of an existing species (us) if we begin to be treated the way that we treat animals, right? It is not an argument for safety but for danger: you can't even make the argument "at least it has some incentive to keep humans around to fill the niche" when that didn't save all the previous species who went extinct, because their niche was simply filled by an existing or new species. It does us no good if a successor AI civilization maintains the total amount of biomass roughly as it is but the winning species is cockroaches or dogs or chimpanzees or something (or some humans survive in a bunker somewhere, barely hanging on), which is the Outside View of what humans have done to other species thus far: wiped out large swathes, often quite arbitrarily (sometimes based literally on fashion trends), and replaced them, if at all, with some other species. If that happened again, as it has happened so many times so far, that still represents a near-total zeroing out of the value of the future for humans. And humans are what I care about, not hypothetical neo-cockroaches optimally adapted for living off datacenter heat vents.

Replies from: jmh
comment by jmh · 2023-01-17T05:02:15.635Z · LW(p) · GW(p)

The question to me is just why the human species would be the one that goes extinct. It could happen, accidentally or intentionally. But why? Are we going to be competing in some niche with the new AI species? I don't quite see that. Would they change the environment in some way that is incompatible with humans, intentionally or just as their pollution? Yes, maybe. Would they possibly crowd us out of our habitat? That seems rather unlikely for two reasons. Humans can survive in a lot of different areas and have largely learned to modify their environment pretty well (clothes, shelter, heating, cooling, farming, ranching, material sciences). Second, as humans have become more informed (I won't say more intelligent) and knowledgeable it seems we start taking actions to prevent the harms we're doing. It's not quite fair to only point to the bad cases of human relationships with other species and ignore the positive ones.

An AI, if it's smarter and better informed than humans, might be expected to behave similarly toward us. That does shift the issue to some extent to what type of morality and recognition of the value of life AIs might have. Maybe people have already thought through that issue and have a high confidence level that AI will be very amoral and uninterested in life as a value in and of itself. If that is not the case, and we can expect AIs to show some level of morality and respect for other life, then one might expect that as various types of ties emerge and relationships form, more consideration would be granted.

A last note for consideration. I am not able to get a quick confirmation, but my impression is that a fair amount of species extinction is not really equivalent to all humans going extinct due to some AI. I'll use polar bears as the example. Global warming may well drive polar bears onto land and ultimately result in none remaining. But they are fully able to breed with other bears, so in one sense they will not have completely gone extinct (a bit like how Neanderthals still have DNA walking the world in living people). The equivalent case here would be AI resulting in all white, blue-eyed people dying off. That's not human extinction as you're talking about. I would like to find some data that might prove a possible case but suspect that would be a major research project in its own right. I wonder if the cases where humans have actually produced an extinction equivalent to human extinction might also be cases where the number of related species was already heavily taxed by natural selective pressures and already on the way out, and human impact was a last straw -- so more about timing than anything.

This article in Forbes also points towards additional complications related to thinking about species extinction, as well as new species discovery.

comment by M. Y. Zuo · 2023-01-12T22:13:52.567Z · LW(p) · GW(p)

Individual ants are irrelevant, but ‘ants’ as a collective whole are very relevant to human life on Earth. Certainly if literally every ant were to disappear tomorrow there would be very noticeable changes to the biosphere within a few decades. The same seems to apply to bees.

comment by Lorenzo (lorenzo-buonanno) · 2023-01-11T08:32:51.424Z · LW(p) · GW(p)

Does what we do to factory farmed animals count as "trading" feed and shelter in exchange for meat, eggs and dairy?

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2023-01-11T10:19:16.106Z · LW(p) · GW(p)

yeah non-vegans definitely don't just "trade" with cows

Replies from: lorenzo-buonanno
comment by Lorenzo (lorenzo-buonanno) · 2023-01-11T10:22:59.915Z · LW(p) · GW(p)

Someone on Twitter mentioned slave owners similarly "not just trading" with slaves who could talk. I think it's a better analogy than factory farmed animals.

comment by avturchin · 2023-01-11T11:53:50.361Z · LW(p) · GW(p)

We also use ants for entertainment - selling ant farms for kids https://www.amazon.com/Nature-Gift-Store-Live-Shipped/dp/B00GVHEQV0 

comment by Nikola Smolenski (nikola-smolenski) · 2023-01-16T02:18:07.437Z · LW(p) · GW(p)

Bees are indeed a better example than ants, since we know how bees communicate, and there has even been some research into making bee robots for communicating with bees; if these robots are perfected, we could tell the bees to pollinate here and not there, in accordance with our needs.

So this seems like trade, in that bees are getting information and we are getting pollination. Of course, trade is a voluntary exchange of goods, and bees cannot do anything voluntarily; but humans can, so that is not actually the topic.

comment by tailcalled · 2023-01-11T08:41:34.628Z · LW(p) · GW(p)

But also: ants can actually do heaps of things we can’t, whereas (arguably) at some point that won’t be true for us relative to AI systems.

Devil's advocate: by comparative advantage, even if the AI was strictly superior to humans at all tasks, it might still make sense for it to trade with humans.
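(To make the comparative-advantage point concrete, here is a toy sketch with invented productivity figures; all the numbers are assumptions for illustration, not anything from the thread:)

```python
# Toy comparative-advantage illustration (all numbers invented).
# The AI is absolutely better at BOTH tasks, yet trade can still pay,
# because what matters is relative opportunity cost, not absolute skill.

ai_design_per_hour, ai_chores_per_hour = 100, 10       # assumed AI output rates
human_design_per_hour, human_chores_per_hour = 1, 2    # assumed human rates

# Opportunity cost of one chore unit, in forgone design units:
ai_cost_per_chore = ai_design_per_hour / ai_chores_per_hour           # 10.0
human_cost_per_chore = human_design_per_hour / human_chores_per_hour  # 0.5

# Humans produce chores relatively cheaply, so at any price between
# 0.5 and 10 design units per chore, both parties come out ahead.
assert human_cost_per_chore < ai_cost_per_chore
print(f"AI: {ai_cost_per_chore} design/chore, human: {human_cost_per_chore} design/chore")
```

(This only says trade beats autarky, task for task; it says nothing about whether trading beats seizing the inputs outright, which is the objection below.)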

Replies from: RobbBB, avturchin
comment by Rob Bensinger (RobbBB) · 2023-01-12T06:33:34.499Z · LW(p) · GW(p)

By comparative advantage, the relevant threshold isn't "AI can do everything strictly better than a human"; it's "AI is able to kill the humans and use our matter-energy to build infrastructure that's more useful than humanity".

(Or "AI is able to kill the humans and the expected gain from trading with us is lower than the expected loss from us possibly shutting it down, building a rival AGI, etc.")

Replies from: nim
comment by nim · 2023-01-13T18:06:45.004Z · LW(p) · GW(p)

How do you describe the impulse that leads humans around the world to collect antiques, attempt to preserve endangered species, re-enact past time periods, etc?

Whatever that is, there seems to be more of it in humans than in non-human animals. Why do you imagine we'd see less rather than more of it in something built to have more than we do of those traits which distinguish us from other creatures?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-01-14T13:52:29.793Z · LW(p) · GW(p)

Gwern’s “What Is The Collecting Mindset?” is relevant to your question.

comment by avturchin · 2023-01-11T12:09:32.299Z · LW(p) · GW(p)

Also, it should be noted that the value of human atoms is very small: these atoms constitute around 10^-20 of all atoms in the Solar System. Any small positive utility of human existence would outweigh the atoms' usefulness.
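(A rough back-of-envelope check of that fraction, using assumed round numbers for population, body mass, and the Solar System's mass; it lands within a couple of orders of magnitude of the figure above:)

```python
# Order-of-magnitude check of the "humans are ~10^-20 of the Solar
# System's atoms" claim. All inputs are assumed round figures.

human_population = 8e9          # people (assumption)
avg_human_mass_kg = 60.0        # kg per person (assumption)
solar_system_mass_kg = 2e30     # dominated by the Sun (assumption)

# Using mass as a stand-in for atom count; both humans and the Sun are
# mostly light elements, so this is only good to an order of magnitude.
fraction = human_population * avg_human_mass_kg / solar_system_mass_kg
print(f"Human share of Solar System mass: {fraction:.1e}")  # ~2.4e-19
```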

Replies from: dust_to_must
comment by dust_to_must · 2023-01-11T20:14:14.695Z · LW(p) · GW(p)

Yeah. It's conceivable you have an AI with some sentimental attachment to humans that leaves part of the universe as a "nature preserve" for humans. (Less analogous to our relationship with ants and more to charismatic flora and megafauna.)

Replies from: avturchin
comment by avturchin · 2023-01-12T13:28:54.636Z · LW(p) · GW(p)

I think that there is a small instrumental value in preserving humans. They could be exchanged with an alien friendly AI, for example.

comment by Bucky · 2023-01-11T10:02:35.171Z · LW(p) · GW(p)

Get out of our houses before we are driven to expend effort killing them, and similarly for all the other places ants conflict with humans (stinging, eating crops, ..)

Ant mafia: "Lovely house you've got there, wouldn't it be a shame if it got all filled up with ants?"

comment by LoganStrohl (BrienneYudkowsky) · 2023-01-17T01:43:32.451Z · LW(p) · GW(p)

I just want to note that I personally do in fact trade with ants. I really enjoy watching them carry a pile of sugar to their nest, so sometimes when I go for walks I bring a baggie of sugar, then I offer it to the ants and they carry it around for my entertainment. They don't know that's what's happening, but it works out the same: I give them something they want, they do something they wouldn't otherwise have done, and we both benefit.

Replies from: BrienneYudkowsky, robm
comment by LoganStrohl (BrienneYudkowsky) · 2023-01-17T01:51:43.573Z · LW(p) · GW(p)

After reading more comments, I suspect someone is going to come by to tell me that this is not "trade" somehow. I haven't decided whether I agree with them. Mostly I just wanted other people to know that this is a thing you can do to improve your walks if you think ants are cool.

comment by robm · 2023-01-25T21:58:49.542Z · LW(p) · GW(p)

I once came home to find ants carrying rainbow sprinkles across my apartment wall (left out from cake making). I thought it was entertaining once I understood what I was seeing.

comment by Jimdrix_Hendri · 2023-01-11T19:59:40.469Z · LW(p) · GW(p)

By the way, you do know that ants already do a service for people by harvesting seeds for rooibos tea?

https://wildaboutants.com/tag/rooibos-seeds-harvested-by-ants/

comment by Raemon · 2023-01-14T18:46:05.624Z · LW(p) · GW(p)

Curated.

On one hand, I think I still disagree with the thrust of this post. I think the way we might trade with ants (or bees or dogs or horses, etc), is still just really different from what people typically have in mind when they're asking why AI might keep us alive, and the prospects discussed here are not reassuring to me. (And I have model-driven guesses of why superintelligences could build replacements for whatever humans are comparatively good at)

But, this post and the comments still prompted a lot of interesting thoughts. I appreciate posts that do a kind of "original seeing" on longstanding common arguments. I think I learned some things that are at least plausibly relevant to some kinds of AI takeoff here, and I also just learned or reconceptualized a lot of interesting stuff about how humans and animals interact.

comment by dust_to_must · 2023-01-11T21:45:41.616Z · LW(p) · GW(p)

I love the genre of "Katja takes an AI risk analogy way more seriously than other people and makes long lists of ways the analogous thing could work." (the previous post in the genre being the classic "Beyond fire alarms: freeing the groupstuck.")

Digging into the implications of this post: 

In sum, for AI systems to be to humans as we are to ants, would be for us to be able to do many tasks better than AI, and for the AI systems to be willing to pay us grandly for them, but for them to be unable to tell us this, or even to warn us to get out of the way. Is this what AI will be like? No. AI will be able to communicate with us, though at some point we will be less useful to AI systems than ants could be to us if they could communicate.

I'm curious how much you think the arguments in this post should affect our expectations of AI-human relations overall? At its core, my concern is:

  • sure, the AI will definitely trade with useful human organizations / institutions when it's weak (~human-level), 
  • and it might trade with them a decent amount when it's strong but not world-defeating (~human-organization-level)
  • eventually AI will be human-civilization-level, and probably soon after that it's building Dyson spheres and stuff. Why trade with humanity then? Do we have a comparative advantage, or are we just a waste of atoms?

I can think of a few reasons that human-AI trade might matter for the end-state:

  1. We can bargain for the future while AIs are relatively weak. i.e., when humans have stuff AI wants, they can trade the stuff for an assurance that when the AI is strong, it'll give us .000001% of the universe. 
    1. This requires both leverage (to increase the share we get) and verification / trust (so the AI keeps its promise). If we have a lot of verification ability, though, we could also just try to build safe AIs? 
    2. (related: this Nate post [LW · GW] saying we can't assume unaligned AIs will cooperate / trade with us, unless we can model them well enough to distinguish a true commitment from a lie. See "Objection: But what if we have something to bargain with?") 
  2. It seems possible that an AI built by humans and trained on human-ish content ends up with some sentimental desire for "authentic" human goods & services. 
    1. In order for this to end up good for the humans, we'd want the AI to value this pretty highly (so we get more stuff), and have a concept of "authenticity" that means that it doesn't torture / lobotomize us to get what it wants.  
    2. This is mostly by analogy to humans buying "authentic" products of other, poorer humans, but there's a spectrum between goods/services, and something more like "the pleasure of seeing something exist" a la zoo animals. 
      1. (goofy attempt at illustrating the spectrum: a woven shawl vs a performed monologue vs reality tv vs a zoo.)
      2. So a simpler, perhaps more likely, version of the 'desirable authentic labor' possibility is a 'human zoo', where the AI just likes having "authentic" humans around, which is not very tradelike. But maybe the best bad-AI case we could hope for is something like this: Earth left as a 'human zoo' while the AI takes over the rest of the lightcone.
Replies from: dust_to_must
comment by dust_to_must · 2023-01-11T22:01:54.125Z · LW(p) · GW(p)

In general, this post has prompted me to think more about the transition period between AI that's weaker than humans and stronger than all of human civilization, and that's been interesting! A lot of people assume that that takeoff will happen very quickly, but if it lasts for multiple years (or even decades) then the dynamics of that transition period could matter a lot, and trade is one aspect of that.

some stray thoughts on what that transition period could look like:

  • Some doomy-feeling states don't immediately kill us. We might get an AI that's able to defeat humanity before it's able to cheaply replicate lots of human labor, because it gets a decisive strategic advantage via specialized skill in some random domain and can't easily skill itself up in other domains.
  • When would an AI prefer to trade rather than coerce or steal?
    • maybe if the transition period is slow, and it knows it's in the earlier part of the period, so reputation matters
    • maybe if it's being cleverly watched or trained by the org building it, since they want to avoid bad press 
    • maybe there's some core of values you can imprint that leads to this? but maybe actually being able to solve this issue is basically equivalent to solving alignment, in which case you might as well do that.
  • In a transition period, powerful human orgs would find various ways to interface with AI and vice versa, since they would be super useful tools / partners for each other. Even if the transition period is short, it might be long enough to change things, e.g. by getting the world's most powerful actors interested in building + using AI and not leaving it in the hands of a few AGI labs, by favoring labs that build especially good interfaces & especially valuable services, etc. (While in a world with a short takeoff rather than a long transition period, maybe big tech & governments don't recognize what's happening before ASI / doom.)
comment by nim · 2023-01-11T16:53:31.270Z · LW(p) · GW(p)

I think "trade" and "communication" are linked, and seem to exist on a spectrum that correlates to creatures' ability to predict the future. At the one extreme, we have gardeners who get plants to do what they wish by shaping the environment that the plants grow in. Near the middle, we have our interactions with domestic animals. At the other extreme, we have modern capitalism, where people exchange money for time spent on tasks they often wouldn't consider doing without the pay.

I suspect that where an interaction falls on that spectrum has a lot to do with the creature's ability to suppress its "nature", desires etc, in pursuit of longer-term goals. The better we communicate with an organism, the more we can promise it some future reward to change its present actions. The better at planning you get, the more you can start reasoning about risks, future rewards, etc.

I cannot communicate with the plants in my garden, so I have to actually provide them with the light and water and nutrients and protection that they need in order for them to grow. I cannot communicate with bees*, so to keep a hive of them, I need to make the hive more appealing than the other locations that the colony might consider swarming to.

Chickens can predict the future better than plants. Often when I open the door of the house nearest their run, it's because I'm about to throw them some table scraps. Thus, they come running when they hear the door open or see me outside, because they assess that there's a high probability of being the first to the snacks if they hurry.

Dogs predict the future actions of humans quite well, and behave in ways that they anticipate will get the responses they want. We're capable of relatively nuanced communication with dogs: "if you sit and then offer me your paw and then shake my hand, you'll get a treat!" or "Don't eat my livestock and I will feed you what I want you to eat". Wolves aren't as good at predicting humans' future behavior, and thus we have a harder time coexisting closely with them, because we can't strike the "don't eat what I don't want you to" kinds of bargains.

People can (usually) predict the future better than chickens or dogs. I do my job, work on multi-month or multi-year initiatives, and attend a bunch of meetings not for the promise of immediate reward, but because I anticipate a paycheque in a few weeks and better financial health long-term due to good benefits such as 401k matching.

These interactions all happen on the level of the worse communicator. Trading is only as good as the least trustworthy party in the deal, etc. I suspect that an AI might find it most effective to get along with people by treating people like people treat people -- I get along with my chickens by understanding the flock hierarchy social rules, one gets along with dogs by understanding dog social rules, etc.

  • Technically, I can tell certain kinds of lies to a colony of bees in order to influence their behavior. Classic example is that I can lie to the bees by claiming there's a forest fire and they should get ready for their hive to be destroyed, which is what I do if I blow smoke into the hive. When they think there's a fire coming, they all eat as much honey as they can, because if they have to fly away then they only get to take the food they're carrying. When a bee has its stomach full of honey, she cannot bend over in the gesture necessary to sting, and she probably doesn't want to anyways, like how you probably don't want to get in a fight right after a huge meal. (only she-bees can sting, he-bees develop reproductive anatomy instead with the parts that become the she-bees' stingers)
comment by Elizabeth (pktechgirl) · 2023-01-11T02:23:11.072Z · LW(p) · GW(p)

When a person says “we don’t trade with ants”, I think the implicit explanation is that humans are so big, powerful and smart compared to ants that we don’t need to trade with them because they have nothing of value and if they did we could just take it; anything they can do we can do better, and we can just walk all over them. Why negotiate when you can steal?

I think this is an overly narrow definition of trading for this context. If an AGI wants something from humans it needs to leave us alive and happy enough to produce it. It might be nonconsensual, but dead people can't produce anything to steal, so the problem has become "avoid being tortured" rather than "avoid being killed altogether."

A lot of people do trade with spiders: warmth and shelter in exchange for keeping the insect population low. The spider does not know that this is happening, and it can't negotiate, but my impression is this is working out pretty well for them. Locally at least: maybe sufficiently intelligent house spiders would regret evolving to only live in houses and die outside, or would resent our reworking of the ecosystem, but they are alive, and selection to live in houses probably doesn't make them miserable in houses, even if it does limit their options.

Replies from: Celarix
comment by Celarix · 2023-01-11T14:48:27.376Z · LW(p) · GW(p)

If an AGI wants something from humans it needs to leave us alive and happy enough to produce it

No, I imagine an AGI would have many creative ways to force humans to do what it wants - directly pumping nutrients into your blood, removing neurotransmitters from your brain, overwriting your personality with an incorrigible desire to Do The Thing...

comment by NicholasKees (nick_kees) · 2024-12-17T21:00:03.847Z · LW(p) · GW(p)

Much like "Let's think about slowing down AI" (also by KatjaGrace, ranked #4 from 2022), this post finds a seemingly "obviously wrong" idea and takes it completely seriously on its own terms. I worry that this post won't get as much love, because the conclusions don't feel as obvious in hindsight, and the topic is much more whimsical.

I personally find these posts extremely refreshing, and they inspire me to try to question my own assumptions/reasoning more deeply. I really hope to see more posts like this. 

comment by Ben Pace (Benito) · 2023-01-11T02:07:42.902Z · LW(p) · GW(p)

While I can imagine very simple forms of trade with human-level intelligent ants (e.g. you provide X units of wood, we will give you Y units of sugar), I do not expect a good outcome if I try to hire "army of ants" as an employee in my organization. I do not expect they would be able to join meetings, contribute points, understand other humans' illegible desires for a project, understand our vague preferences, etc. What I'm saying is I only think this works for very well-defined trades, and not for a lot of other trades.

Replies from: kjz
comment by kjz · 2023-01-11T04:08:17.599Z · LW(p) · GW(p)

Maybe it's better to model the army of ants as a CRO you would hire instead of an employee? And by extension, I would much prefer to be part of an AGI's CRO than be extinct.

Replies from: Benito
comment by Ben Pace (Benito) · 2023-01-11T04:12:10.846Z · LW(p) · GW(p)

What is a CRO? Google tells me it's a crypto currency and a Certified Radio Operator, neither of which seem to fit.

(Broadly I am against acronyms in line with this document.)

Replies from: kjz, carl-feynman, program-den
comment by kjz · 2023-01-11T16:59:18.114Z · LW(p) · GW(p)

Sorry, I thought that would be more commonly understood. As Carl said, it stands for Contract Research Organization. Hiring one is a way to get additional resources to perform specific tasks without having them be part of your organization, understand your corporate strategy, or even know what project you're working on. For example, a pharma company can hire a CRO to synthesize a specific set of potential drug compounds, without telling them what the biological target is or what disease they are trying to treat. Or think of the scenario where a rogue AGI hires someone to make a DNA sequence which turns out to code for a pathogen that kills all humans. This would likely be done at a CRO.

CROs are often thought of as being fairly competent at executing the specific task required of them, but less competent at thinking strategically, understanding the big picture, etc. So they are generally only hired for very well-defined trades, as you mentioned above.

comment by Carl Feynman (carl-feynman) · 2023-01-11T14:51:02.099Z · LW(p) · GW(p)

Contract Research Organization.  Basically an outfit you can hire to perform experiments for you.

comment by Program Den (program-den) · 2023-01-11T05:26:35.620Z · LW(p) · GW(p)

I'm going to guess it's like mumble Resource Organization, something you'd "farm out" some work to rather than have on payroll and in meetings, as it were. Window Washers or Chimney Sweeps, mayhap?

Just a guess, and I hope I'm not training an Evil AI by answering this question with what sprang to mind from the context.

comment by JakubK (jskatt) · 2023-01-11T05:30:26.954Z · LW(p) · GW(p)

With ants, ‘go over there and we won’t kill you’ would do a lot, and it doesn’t involve concepts at the foggy pinnacle of human meaning-construction.

I agree that with a human-ant language, I could tell ants to leave my house. But then they'd probably come back in a week? I don't think ants can reason about the future. 

Likewise, humans might lack some concepts that are necessary for making meaningful trades with advanced AI agents.

comment by jaan · 2023-01-12T08:22:51.770Z · LW(p) · GW(p)

the potentially enormous speed difference (https://www.lesswrong.com/posts/Ccsx339LE9Jhoii9K/slow-motion-videos-as-ai-risk-intuition-pumps [LW · GW]) will almost certainly be an effective communications barrier between humans and AI. there’s a wonderful scene of AIs vs humans negotiation in william hertling’s “A.I. apocalypse” that highlights this.

comment by redbird · 2023-01-29T20:41:51.393Z · LW(p) · GW(p)

Anecdotal example of trade with ants (from a house in Bali, as described by David Abrams):

The daily gifts of rice kept the ant colonies occupied–and, presumably, satisfied. Placed in regular, repeated locations at the corners of various structures around the compound, the offerings seemed to establish certain boundaries between the human and ant communities; by honoring this boundary with gifts, the humans apparently hoped to persuade the insects to respect the boundary and not enter the buildings.

Replies from: gwern
comment by gwern · 2023-01-30T00:35:28.526Z · LW(p) · GW(p)

Abrams, we should be clear, is not merely reporting his own speculation rather than any statement made by the Balinese (which itself may or may not indicate any trade successfully going on, and which is rather dubious to begin with, as feeding ants just makes more ants); he is, by his own account, making this up in direct contradiction to what his Bali hosts were telling him:

On the second morning, when I saw the array of tiny rice platters, I asked my hostess what they were for. Patiently, she explained to me that they were offerings for the household spirits. When I inquired about the Balinese term that she used for “spirit,” she repeated the explanation in Indonesian, saying that these were gifts for the spirits of the family compound, and I saw that I had understood her correctly. ... Yet I remained puzzled by my hostess’s assertion that these were gifts for the spirits.”

And presuming to explain what they were 'really' trying to do.

Replies from: redbird
comment by redbird · 2023-01-31T20:40:02.853Z · LW(p) · GW(p)

Yep, it's a funny example of trade, in that neither party is cognizant of the fact that they are trading! 

I agree that Abrams could be wrong, but I don't take the story about "spirits" as much evidence: A ritual often has a stated purpose that sounds like nonsense, and yet the ritual persists because it confers some incidental benefit on the enactor.

comment by awenonian · 2023-01-16T01:14:27.439Z · LW(p) · GW(p)

When I tried to answer why we don't trade with ants myself, communication was one of the first things (I can't remember what was actually first) I considered. But I worry it may be more analogous to AI than argued here.

We sort of can communicate with ants. We know to some degree what makes them tick; it's just that we mostly use that communication to lie to them and tell them this poison is actually really tasty. The issue may be less that communication is impossible, and more that it's too costly to figure out, so no one tries to become Antman even if they could cut their janitorial costs by a factor of 7.

The next thought I had was that, if I were to try to get ants to clean my room, the easiest route is probably not figuring out how to communicate, but breeding some ants with different behavior (e.g. searching for small bits of food instead of large bits; this seems harder than that sentence suggests, but still probably easier than learning to speak ant). I don't like what that would be analogous to in human-AI interactions.

I think it's possible that an AI could fruitfully trade with humans. While it lacks a body, posting an ad on Craigslist to get someone to move something heavy is probably easier than figuring out how to hijack a wifi-enabled crane or something. 

But I don't know how quickly that changes. If the AI is trying to build a sci-fi gadget, it's possible that an instruction set to build it is long or complicated enough that a human has trouble following it accurately. The costs of writing intuitive instructions, and also designing the gadget such that idiot-proof construction is possible could be high enough that it's better to do it itself. 

comment by TedSanders · 2023-01-12T22:46:31.621Z · LW(p) · GW(p)

Great post.

I don't think communicating trades is the only issue. Even if we could communicate with ants, e.g. "Please clean this cafeteria floor and we'll give you 5 kg of sugar" "Sure thing, human", I think there are still barriers.

  • Can the ants formulate a good plan for cleaning the floor?
  • Can the ants tell when the floor is clean enough?
  • Can the ants motivate their team?
  • Can the ants figure out where to deposit debris, and can they still figure this out if a human janitor accidentally leaves the bin in a different place than yesterday?

There's a lot to the task of cleaning the cafeteria floor beyond whether it is mechanically possible for the worker and whether the worker can speak English well enough to articulate a trade.

comment by Donald Hobson (donald-hobson) · 2023-01-12T00:58:56.249Z · LW(p) · GW(p)

 Another problem is trust. In order for trade to work, the AI has to trust that the human will follow the deal. If 1% of humans decide to smash the robots instead, humans could be totally useless. Sure, there are some things ants could do, but if the ants sometimes caused a problem, they would be much less useful. 

Humans are the strongest source of potentially adversarial optimization. The cost of defending against an enemy is huge. Hiring someone who has even a 1% chance of actively trying to harm you is probably a bad move in expectation. 

Also, all the useful stuff ants could do, they could do because ants are better on many metrics than human-made robots. Evolution still holds tricks that human engineers don't yet wield.

The other advantage of communicating-ants would be avoiding the human cost of labor. 

An AI that can easily duplicate itself has a very low cost of labor. And if the AI's robotics beats human bodies on most metrics, we have no remaining advantages.

And trade isn't the only option. Trade is one way to persuade another mind to work for you, and often an expensive one, as you must trade something away. The ideal state for the AI is to give humans nothing the AI minds losing. For example, the AI could trade us virtual loot in a hyperaddictive computer game for whatever it wanted from us. Or otherwise mind-hack us.

comment by kjz · 2023-01-11T22:13:24.469Z · LW(p) · GW(p)

Should we humans broadcast more explicitly to future AGIs that we greatly prefer the future where we engage in mutually beneficial trade with them to the future where we are destroyed?

(I am making an assumption here that most, if not all, people would agree with this preference. It seems fairly overdetermined to me. But if I'm missing something where this could somehow lead to unintended consequences, please feel free to point that out.)

comment by simon · 2023-01-11T18:59:44.486Z · LW(p) · GW(p)

Humans are unlikely to be the most efficient configuration of matter to carry out any particular task the AI wants to get done - so if the power imbalance is sufficiently large the AI will be better off wiping us out to configure the matter in a more efficient way.

comment by Shmi (shminux) · 2023-01-11T08:28:11.456Z · LW(p) · GW(p)

Human brains are easily hackable (a good book does it, or a good brainwasher); we change our views easily given the right "argument", logical and/or emotional. Anything smarter than us can figure out a way to get us to do what it wants, without anything like a "trade", but because we think it's the right thing to do. If you doubt it, note that EA did a good job of convincing people to donate to charity. The military is an even more extreme example. The best con does not feel like a con at all. (Not saying that either of the two examples is a con, just that we cannot tell a good cause from a bad one when we are sufficiently brainwashed.)

For an ant analogy, spray some pheromones to where you need them to go, without any reward.

comment by JBlack · 2023-01-11T03:39:38.400Z · LW(p) · GW(p)

Ants that have intelligence anywhere near the level needed for meaningful trade with us would likely cause our extinction. We don't trade with ants mainly because they're too stupid to do anything we want, despite being physically capable of doing plenty of things we might want.

If you scale up their intelligence to the level where they're worthwhile trading with, there are stories you can tell about how beneficial trade would be between our species, but frankly I don't think that we would be of much benefit to them[1].

  1. ^

    Them! was a 1950s movie about giant mutated ants. Even as a child, I thought that quadrillions of tiny intelligent ants would be a far worse threat.

comment by Fergus Fettes (fergus-fettes) · 2023-02-05T12:05:02.401Z · LW(p) · GW(p)

Size circumscribes – it has no room
For petty furniture –
The Giant tolerates no Gnat
For Ease of Gianture –

Repudiates it, all the more –
Because intrinsic size
Ignores the possibility
Of Calumnies – or Flies.

~ Dickinson

comment by VojtaKovarik · 2023-01-24T12:57:22.422Z · LW(p) · GW(p)

The following issue seems fundamental and related (though I am not sure how exactly :-) ): there is a difference between the things ants could physically do and what they are smart enough to do / what we can cheaply enough explain to them. Similarly for humans: delegating takes work. For example, hiring an IQ-80 cleaner might only be worth it for routine tasks, not for "clean up after this large event and just tell me when it's done, bye". Similarly, for some reason I am not supervising 10 master's students, even if they were all smarter than me.

comment by Iknownothing · 2023-01-19T16:26:27.418Z · LW(p) · GW(p)

At the end of the day, it's about power.

comment by Matt Vogel (matt-vogel) · 2023-01-18T12:38:25.454Z · LW(p) · GW(p)

Sufficiently advanced AI could create bots to do the things it needs done. We cannot create an ant-equivalent bot (yet). Messengers on horseback don't exist alongside the car, plane, or internet. Bots created by AI will likely fit its needs much more neatly and at lower maintenance cost than paying humans.

We are every day finding new ways to automate human labor, from the mental to the physical to the creative. Why would AI suddenly stop that effort in order to trade with us?

comment by Igor Ivanov (igor-ivanov) · 2023-01-16T17:35:15.712Z · LW(p) · GW(p)

Good post, but there is a big imbalance in human-ant relationships.

If people could communicate with ants, nothing would stop humans from making ants suffer if it made the deal better for humans, because of the power imbalance.

For example, domesticated chickens live in very crowded and stinky conditions, and their average lifespan is a month, after which they are killed. Not particularly good living conditions.

People who just care about profitability do it just because they can.

comment by Ben Amitay (unicode-70) · 2023-01-16T07:10:54.144Z · LW(p) · GW(p)

Directionally agree, but: A) A short period of trade before we become utterly useless is not much comfort. B) Trade is a particular case of bootstrapping influence on what an agent values into influence on their behaviour. The other major way of doing that is blackmail, which is much more effective in many circumstances, and would be far more common if the State didn't blackmail us not to blackmail each other, to honour contracts, etc.

BTW, those two points are basically why many people are afraid that capitalism (i.e. our trade with superhuman organisations) may go wrong: A) Automation may make us less and less economically useful. B) Enough money may give an organisation the ability to blackmail: a private army, or more likely influence over governmental power.

Assuming that automation here means AI, this is basically hypothesising a phase in which the two kinds of superhuman agents (AIs and companies) are still incentivized to cooperate with each other but not with us.

comment by James Kim (james-kim-1) · 2023-01-16T06:14:31.177Z · LW(p) · GW(p)

Assuming AI doesn't care about acting ethically, and even assuming AI can communicate and find useful things for us to do, there's no reason why AI wouldn't just manipulate and coerce humans rather than trading with them. 

Even in real life we have plenty of examples of humans enslaving each other. When you get sci-fi possibilities like an AI just implanting mind-control devices in human heads, why would an AI waste resources and probably sacrifice efficiency just to trade evenly with humans?

comment by knowsnothing · 2023-01-13T09:33:50.195Z · LW(p) · GW(p)

Humans can manipulate animals and make them do what they want. So could AI.

comment by weverka · 2023-01-20T19:59:40.393Z · LW(p) · GW(p)

AI is dependent on humans. It gets power and data from humans and cannot go on without them. We don't trade with it; we dictate terms.

Do we fear a world where we have turned over mining, production, and power generation to the AI? Getting there would take a lot more than a self-amplifying feedback loop of a machine rewriting its own code.