Green goo is plausible

post by anithite (obserience) · 2023-04-18T00:04:37.069Z · LW · GW · 29 comments


TLDR: If an AI kills all the humans, how does it power the datacenters / replace the human economy? Green goo (IE: bioengineering).

TLDR end

In response to: grey-goo-is-unlikely [LW · GW]

Overview of existing natural biology:

No single organism (humans aside) has taken over the biosphere because evolution is slow and dumb.

Human agriculture is based on: plant sub-1-gram seeds (plus water, pesticides/herbicides, fertilizer, etc.), collect 1 kg+ plants a few months later. Biology has absurd growth rates[1][2].

Invasive species show the implications. A naive biosphere stands no chance against an intelligent opponent with real biotechnology.

Kudzu, "the vine that ate the South"

Intelligence allows adapting strategies much more quickly. Humans can design vaccines faster than viruses can mutate. A population of well coordinated humans will not be significantly preyed upon by viruses despite viruses being the fastest evolving threat.

Intelligence + ability to bioengineer organisms --> can create unlimited invasive species much more capable than anything natural

Current life is sub-optimal

Hypothetical very invasive shoggoth/kudzu type organism

Core capabilities:

The core organism competency is covering ground with a photosynthesising mat of itself.

Edges contain assimilating parts.

As organism scales, modularity allows for much faster growth since growth at edge only requires assembling prefab components

This is just biochemistry and organism templating. No need to build complex brains. Human-killing pathogens + this gets an AGI the biosphere. A bit more engineering has to be done to power data centers so the AI can continue to think (EG: build an electrical generator that runs on sap+water). I'm assuming a biological DNA printer/reader is part of the "build a kudzu/shoggoth" bootstrap process anyways.

Other approaches are also possible and likely more efficient. For example: flying things spreading a plant-targeted virus that causes construction of shoggoth/kudzu organisms. Extracting cellular machinery from plant leaves could allow sub-10-minute doubling times. That + flyers could lead to biosphere control within a few days rather than weeks to months.

Concrete world takeover plan

Of course that's just one way to do it. I think strategies involving computer hacking or coercion [LW · GW] are easier.

  1. ^

    A corn seed weighs 0.25 grams. A corn cob weighs 0.25 kg. It takes 60-100 days to grow. Assuming 1 cob per plant and 80 days, that's 80/log₂(1000) ≈ 8 days doubling time, not counting the leaves and roots. Estimate closer to 7 days including stalk, leaves and roots.

  2. ^

    Yes, nitrogen fertilizer is an energy input, but there are plenty of plants that don't need it, and efforts at in-plant nitrogen synthesis (EG: in corn) are underway.

29 comments

Comments sorted by top scores.

comment by the gears to ascension (lahwran) · 2023-04-18T04:34:18.276Z · LW(p) · GW(p)

This is especially notable because a lot of what we'd want AGI to do for us is build something like this that not only doesn't kill us (tall order, right?) but also solves global warming and climate contamination and acts as a power & fuel grid. That and bio immortality is basically everything I personally want out of AGI. So I'd really like to have some idea how to build a machine that teaches a plant to do something like a safe, human-compatible version of this.

Some good news, though: I suspect it may be more practical to defend against this sort of attack using finite intelligence than previously assumed. We need to make the machine that knows how to guard against these sorts of things, but if we can make the vulnerability-closer, we don't need to hit max ASI to stop other ASIs from destroying all pre-ASI life on earth.

Replies from: obserience, tailcalled
comment by anithite (obserience) · 2023-04-18T05:02:37.103Z · LW(p) · GW(p)

I suspect it may be more practical to defend against this sort of attack using finite intelligence than previously assumed. We need to make the machine that knows how to guard against these sorts of things, but if we can make the vulnerability-closer, we don't need to hit max ASI to stop other ASIs from destroying all pre-ASI life on earth.

If you read between the lines in my Human level AI can plausibly take over the world [LW · GW] post, hacking computers is probably the lowest difficulty "take over the world" strategy and has the side benefit of giving control over all the internet connected AI clusters.

The easiest way to keep a new superintelligence from emerging is to seize control of the computers it would be trained on. The AI only needs to hack far enough to monitor AI researchers and AI training clusters and sabotage later AI runs in a non-suspicious way. It's entirely plausible this has already happened and we are either in the clear or completely screwed depending on the alignment of the AI that won the race.

Also, hacking computers and writing software is something easy to test and therefore easy to train. I doubt that training an LLM to be a better hacker/coder is much harder than what's already been done in the RL space by OpenAI and Deepmind (EG: playing DOTA and Starcraft).

Biotech is a lot harder to deal with since ground truth is less accessible. This can be true for computer security too, but to a much lesser extent (EG: lack of access to chips in the latest iPhone and lack of complete understanding thereof with which to develop/test attacks).

but also solves global warming and climate contamination and acts as a power & fuel grid. That and bio immortality is basically everything I personally want out of AGI. So I'd really like to have some idea how to build a machine that teaches a plant to do something like a safe, human-compatible version of this.

Pshh, low expectations. Mind uploading or bust!

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2023-04-18T20:44:05.952Z · LW(p) · GW(p)

Pshh, low expectations. Mind uploading or bust!

I'll take mind backups, but for exactly the reasons you highlight here, I don't think we're going to find electronics to be more efficient than microkinetic computers like biology. I'm much more interested in significant refinements to what it means to be biological. Eventually I'll probably substrate translate over to a reversible computer but that's probably hundreds to thousands of years out

comment by tailcalled · 2023-04-18T21:08:59.716Z · LW(p) · GW(p)

So I'd really like to have some idea how to build a machine that teaches a plant to do something like a safe, human-compatible version of this.

🤔 This is actually a path to progress, right? The difficulty in alignment is figuring out what we want precisely enough that we can make an AI do it. It seems like a feasible research project to map this out for kudzugoth.

Seems convincing enough that I'm gonna make a Discord and maybe switch to this as a project. Come join me at Kudzugoth Alignment Center! ... 😅 I might close again quickly if the plan turns out to be fatally flawed, but until then, here we go.

Replies from: obserience
comment by anithite (obserience) · 2023-04-18T23:21:42.049Z · LW(p) · GW(p)

Building new organisms from scratch (synthetic biology) is an engineering problem. Fundamentally we need to build the right parts and assemble them.

Without major breakthroughs (Artificial Superintelligence) there's no meaningful "alignment plan", just a scientific discipline. There's no sense in which you can really "align" an AI system to do this. The closest things would be:

  • building a special purpose model (EG:alphafold) useful for solving sub-problems like protein folding
  • teaching an LLM to say "I want to build green biotech" and associated ideas/opinions.
    • which is completely useless

Problem is that biology is difficult to mess with. DNA sequencing is somewhat cumbersome, DNA writing is much more so, costing on the order of 25¢/base currently.

Also, imaging the parts to figure out what they do, and whether they're doing it, can be very cumbersome because they're too small to see with a light microscope. Everything is indirect. Currently we try to crystallize them and then use X-rays (which are small enough in wavelength but also very destructive) to image the crystal and infer the structure. There's continuous progress here but it's slow.

AI techniques can be applied to some of these problems (EG: inferring protein structure from amino acids (AlphaFold), or doing better quantum-level simulation (FermiNet)).

Note that AI techniques are replacing existing ones based on human coded algorithms rooted in physics and often have issues with out of distribution inputs (EG: work well for wildtype protein but give garbage when mutations are added.)

Like any ML system, we just have to feed it more data which means we need to do more wet lab work, x-ray crystallography etc.

Synthetic biology is the best way forwards but it's a giant scientific/engineering discipline, not an "alignment approach" whatever that's supposed to mean.

Replies from: tailcalled
comment by tailcalled · 2023-04-19T06:23:12.869Z · LW(p) · GW(p)

Without major breakthroughs (Artificial Superintelligence) there's no meaningful "alignment plan", just a scientific discipline. There's no sense in which you can really "align" an AI system to do this.

Do you expect humanity to bioengineer this before we develop artificial superintelligence? If not, presumably this objection is irrelevant.

Replies from: tailcalled
comment by tailcalled · 2023-04-19T07:39:07.650Z · LW(p) · GW(p)

Basically if artificial superintelligence happens before sufficiently advanced synthetic biology, then one way to frame the alignment problem is "how do we make an ASI create a nice kudzugoth instead of a bad kudzugoth?".

Replies from: obserience
comment by anithite (obserience) · 2023-04-19T12:08:43.499Z · LW(p) · GW(p)

I guess but that's not minimal and doesn't add much.

"how do we make an ASI create a nice (highly advanced technology) instead of a bad (same)?".

IE: kudzugoth vs robots vs (self propagating change to basic physics)

Put differently:

If we build a thing that can make highly advanced technology, make it help rather than kill us with that technology.

Neat biotech is one such technology but not a special case.

Aligning the AI is a problem mostly independent of what the AI is doing (unless you're building special-purpose non-AGI models as mentioned above).

Replies from: tailcalled
comment by tailcalled · 2023-04-19T12:54:20.961Z · LW(p) · GW(p)

I agree that one could do something similar with other tech than neat biotech, but I don't think this proves that Kudzugoth Alignment is as difficult as general alignment. I think aligning AI to achieve something specific is likely to be a lot easier than aligning AI in general. It's questionable whether the latter is even possible and unclear what it means to achieve it.

comment by tailcalled · 2023-04-18T07:11:23.769Z · LW(p) · GW(p)

Before AI-based bioengineering has reached the point where it can create "green goo", wouldn't it first reach the point where it can create targeted germs which destroy specific species, with green goo being a potential target for destruction? Seems like this would make defense feasible.

Replies from: obserience
comment by anithite (obserience) · 2023-04-18T09:08:35.053Z · LW(p) · GW(p)

Maybe. Still, there are ways to harden an organism against parasitic intrusion. TLDR: you isolate and filter external things. Plants are pretty good at this already (they have no mammalian-style immune system) and employ regularly spaced filters with holes too small for bacteria in their water tubes.

The other option is to do the biological equivalent of "commoditize your complement". Don't get good at making leaves and roots, get good at being a robust middleman between leaves and roots and treat them as exploitable breedable workers. Obviously don't optimise too hard in such a way as to make the system brittle (EG:massive uninterrupted monocultures). Have fallback options ready to deploy if something goes wrong.

If you want to make any victory pyrrhic, just re-use other common earth plant parts wholesale. If you want to kill the organism, you'll need root-eating fungi for all the food crops and common trees/grasses. If you want a leaf fungus/bacteria, same. The organism can select between plant varieties to remain effective, so the defender has to release bioweapons that kill most important plants.

comment by Going Durden (going-durden) · 2023-04-18T07:33:49.617Z · LW(p) · GW(p)

I'm skeptical about the timeline here. Unless we allow for the laws of physics, chemistry and biology to be completely suspended, this plan will take centuries to accomplish, even if we assume the shoggokudzu had the absolute "peak" possible growth rate for a biological organism. Biology is hard-capped in its ability to metabolize captured matter, and for a good reason: if it could be done faster, life would simply cook itself with the energy spillover.


Shoggokudzu could conceivably make the AI victory inevitable in a long enough timeline, but not particularly fast, when a determined human with a chainsaw and a lighter can destroy years of its growth in 10 seconds. Human civilization is almost perfectly designed to be the ultimate "pest" against vast biological systems. Destroying biomass and destabilizing complex ecosystems is basically our core trait.

Replies from: obserience, AlphaAndOmega
comment by anithite (obserience) · 2023-04-18T08:48:58.895Z · LW(p) · GW(p)

Let's talk growth rates.

A corn seed weighs 0.25 grams. A corn cob weighs 0.25 kg. It takes 60-100 days to grow. Assuming 1 cob per plant and 80 days, that's 80/log₂(1000) ≈ 8 days doubling time, not counting the leaves and roots. I'd guess it's closer to 7 days including stalk, leaves and roots.
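That estimate is easy to check. A minimal sketch (the seed mass, cob mass and 80-day season are the figures above):

```python
import math

def doubling_time(total_days, final_mass_g, initial_mass_g):
    """Doubling time implied by growing from initial_mass_g to final_mass_g in total_days."""
    doublings = math.log2(final_mass_g / initial_mass_g)
    return total_days / doublings

# Corn: 0.25 g seed -> 250 g cob over an 80-day season
print(doubling_time(80, 250, 0.25))  # ~8.0 days per doubling
```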

Kudzu can grow one foot per day.

Suppose a doubling time of one week, which is pretty conservative. This means a daily growth rate of 2^(1/7) --> 10%, so whatever area it's covering, it grows 10% of that per day. For a square patch measuring 100m*100m that means each side grows 0.25 meters per day. This is in line with kudzu initially.

  • initial : (100m)² 0.25m/day linear
  • month1 : (450m)² 1.2m/day linear
  • month2 : (2km)² 5m/day linear
  • month3 : (2km)² 22m/day linear
  • month4 : (9km)² 100m/day linear
  • month5 : (40km)² 440m/day linear
  • month6 : (180km)² 2km/day linear
  • month7 : (800km)² 9km/day linear
  • month8 : (16000km)² 40km/day linear (half of earth surface area covered)
  • 8m1w : all done

One-week doubling times are enough to get you biosphere assimilation in under a year. If going full Tyranid and eating the plants/trees/houses can speed things up, then things go faster. Much better efficiencies are achievable by eating the plants and reusing most of the cellular machinery. A doubling time of two days takes the 8-month global coverage time down to 10 weeks. Remember, E. coli doubles in 20 minutes, so if we can literally re-use the whole tree (jack into the sap being produced) while eating the structural wood, doubling times could get pretty absurd.
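The coverage timelines follow directly from the doubling time. A quick sketch of the arithmetic (assuming 30-day months and half of Earth's ~5.1×10⁸ km² surface as the target, matching the table above):

```python
import math

EARTH_SURFACE_M2 = 5.1e14  # total surface area of Earth in m^2

def months_to_cover(initial_side_m, doubling_days, target_m2, days_per_month=30):
    """Months for a square patch to grow to target_m2, given an area doubling time."""
    doublings_needed = math.log2(target_m2 / initial_side_m**2)
    return doublings_needed * doubling_days / days_per_month

# Single 100m*100m patch, weekly area doubling -> half of Earth's surface
print(months_to_cover(100, 7, EARTH_SURFACE_M2 / 2))  # ~8.1 months
# Two-day doubling time instead
print(months_to_cover(100, 2, EARTH_SURFACE_M2 / 2))  # ~2.3 months (~10 weeks)
```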

The reason for specifying modular construction is to enable faster linear growth rates which are necessary for fast spread. Starting from multiple points is also important. Much better to have 10000 small 1m*1m patches spread out globally than a single 100m*100m patch. Same timeline but 100x lower required linear expansion rate.

Replies from: tamas-vizmathy, going-durden
comment by vmatyi (tamas-vizmathy) · 2023-04-18T14:50:02.376Z · LW(p) · GW(p)

So at month 8 the edge grows 0.46 m/s. That doesn't sound very plausible to me.
In this timeline the area doubles about every week, so all the growth must happen in two dimensions (as opposed to the corn's weight gain); it couldn't get thicker. That means its bandwidth for nutrient transport would not change, so it couldn't support the exponential growth at the edges.
(although as between month2 and month3 it took a break of growth, some restructuring might have happened)

Replies from: obserience
comment by anithite (obserience) · 2023-04-18T18:39:50.310Z · LW(p) · GW(p)

First, more patches growing from different starting locations is better. That cuts the required linear expansion rate in proportion to the ratio of half the Earth's circumference to the maximum distance between patches.

Note that 0.46 m/s is walking speed. Two-layer fractal growth is practical (IE: specialised spikes grow outwards at 0.46 m/s, initiating slower growth fronts that cover the area between them more slowly).

Material transport might become the binding constraint but transport gets more efficient as you increase density. Larger tubes have higher flow velocities with the same pressure gradient. (less benefits once turbulence sets in). Air bearings (think very long air hockey table) are likely close to optimal and easy enough to construct.

As for biomass/area: corn grows to 10 Mg/ha = 1 kg/m².

For a kilometer-long front that implies half a tonne per second. Train cars mass in the tens to hundreds of tonnes. Assuming 10 tonnes and 65 feet, that's half a tonne per meter of train. So move a train equivalent at (1 m/s + 0.5 m/s) --> 1.5 m/s (running speed) and that supplies a kilometer of frontage.
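As a sanity check on that logistics claim (all figures from this comment; the 0.46 m/s edge speed is the month-8 rate discussed above):

```python
# Demand: biomass that must arrive at an advancing 1 km front each second
BIOMASS_KG_PER_M2 = 1.0   # corn-like, ~10 Mg/ha
FRONT_LENGTH_M = 1000
FRONT_SPEED_M_S = 0.46    # month-8 edge speed

demand_kg_s = BIOMASS_KG_PER_M2 * FRONT_LENGTH_M * FRONT_SPEED_M_S  # ~460 kg/s

# Supply: a train-like column massing ~0.5 tonne per meter of length
# (10 t per ~20 m car), moving at 1.5 m/s
supply_kg_s = 500 * 1.5  # 750 kg/s, comfortably above demand
```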

There's obviously room to scale this.

I'm also ignoring oceans. Oceans make this easier since anything floating can move like a boat for which 0.5m/s is not significant speed.

Added notes:

I would assume the assimilation front has higher biomass/area than the inner enclosed areas, since there's more going on there and potentially conflict with wildlife. This makes things trickier, and assembly/reassembly could be a pain, so maybe put it on legs or something?

comment by Going Durden (going-durden) · 2023-04-19T11:19:34.885Z · LW(p) · GW(p)

That is only plausible from a "perfect conditions" engineering perspective where the Earth is a perfect sphere with no geography or obstacles, resources are optimally spread, and there is no opposition. Neither kudzu nor even microbes can spread optimally.

And this assumes the shoggokudzu faces no soil/water issues, mountains, rivers, pests, natural blights and diseases, mold, bad weather, its own mutations, etc. One man with a BIC lighter can destroy weeks of work. Wildfires spread faster than plants. Planes with herbicides, or combine harvesters with a chipper, move much faster than plants grow. As bad as engineered Green Goo is, the Long Ape is equally formidable at destruction.

This is not to say a Kudzuapocalypse would not be absolutely awful. It might, over a long enough timeline, beat the natural Earth ecosystem, and decades/centuries after that, humanity itself. But this would not be an instantaneous process.

Replies from: obserience
comment by anithite (obserience) · 2023-04-19T18:59:03.672Z · LW(p) · GW(p)

*Fire*

Forest fires are a tragedy of the commons situation. If you are a tree in a forest, even if you are not contributing to a fire you still get roasted by it. Fireproofing has costs so trees make the individually rational decision to be fire contributing. An engineered organism does not need to do this.

The photosynthetic top layer should be flat, with active pumping of air. Air intakes/exhausts seal in fire conditions. This gives much less surface area for ignition than existing plants.

The easiest option is to keep some water in reserve to fight fires directly. Possibly add some silicates and heat-activated foaming agents to form an intumescent layer, secreted from the top layer on demand.

That is only plausible from a "perfect conditions" engineering perspective where the Earth is a perfect sphere with no geography or obstacles, resources are optimally spread, and there is no opposition. Neither kudzu, or even microbes can spread optimally.

I'll clarify that a very important core competency is transport of water/nutrients. Plants don't currently form desalination plants (seagulls do this to some extent) and continent-spanning water pumping networks. The fact that rivers are dumping enormous amounts of fresh water into the oceans shows that nature isn't effective at capturing precipitation. Some plants have reservoirs where they store precipitation. This organism should capture and store all precipitation. Storage tanks get cheaper with scale.

Plant growth currently depends on pulling inorganic nutrients and water out of the soil; C, O and N can be extracted from the atmosphere.

An ideal organism roots itself into the ground, extracts as much as possible from that ground, then writes it off once other newly covered ground can be more profitably mined. Capturing precipitation directly means no need to go into the soil for water, although it might be worthwhile to drain the water table when reachable or even drill wells like humans do. No need for nutrient-gathering roots after that. If it covers an area of phosphate-rich rock, it starts excavating and ships the phosphate far and wide as humans currently do.

As for geographic obstacles, 2/3rds of the earth is ocean. With a design for a floating breakwater that can handle ocean waves, the wavy area can be enclosed and eventually eliminated. The covered area behind the breakwater can prevent formation of waves by preventing ripple formation (IE: act as a distributed breakwater).

If it's hard to cover mountains, then the AI can spend a bit of time solving the problem during the first few months, or accept a small loss in total coverage until it does get around to the problem.

One man with a BIC lighter can destroy weeks of work. Wildfires spread faster than plants. Planes with herbicides, or combine harvesters with a chipper, move much faster than plants grow. As bad as engineered Green Goo is, the Long Ape is equally formidable at destruction.

I even bolded the parts about killing all the humans first. Yes humans can do a lot to stop the spread of something like this. I suspect humans might even find a use for it (EG:turn sap into ethanol fuel) and they're likely clever enough to tap it too.

I'm not going to expand on "kill humans with pathogens" for Reasons [? · GW]. We can agree to disagree there.

Replies from: going-durden
comment by Going Durden (going-durden) · 2023-04-20T06:49:07.503Z · LW(p) · GW(p)

I completely agree we should not be talking pathogen use strategies online, for...obvious reasons, even if we put aside the threat of malicious AI. Humans taking ideas from that would be bad enough. I simply don't see the pathogen route as being as dangerous as many people say, due to inherent limitations of organic systems (and microscopic systems in general). But further explaining how, why, etc. is a bad idea, so let's agree to disagree.

comment by AlphaAndOmega · 2023-04-18T07:49:10.969Z · LW(p) · GW(p)

I think you glossed over the section where the malevolent AI simultaneously releases super-pathogens to ensure that there aren't any pesky humans left to meddle with its kudzugoth.

Replies from: going-durden
comment by Going Durden (going-durden) · 2023-04-19T11:28:42.396Z · LW(p) · GW(p)

I did not, I just do not think any kind of scientifically plausible pathogen can wipe out humanity, or even seriously diminish our numbers. There is a trade-off between lethality and virality of any pathogen; if it kills too fast or too surely, it cannot spread. If it spreads quickly, it cannot be too deadly. Dead men do not travel or cough. 

Probably the worst outcome would be something like Super-Covid, a disease that spreads easily, usually does not kill, but causes long term detriment to human health.  Anything more deadly than that would sound all of the post-Covid alarms, and lead to quarantine, rampant disinfectant use, and masks/gloves/protection being commonplace. No biological pathogen can reliably beat those, unless it is straight up dry nanotech that can spread via onboard propulsion, survive caustic chemicals, and burrow through latex: in other words, science fiction/magic.

Replies from: jkaufman, obserience
comment by jefftk (jkaufman) · 2023-04-19T17:17:33.492Z · LW(p) · GW(p)

I don't think getting into much detail here is a good idea [? · GW], but a pathogen could have a long incubation period after which it's disastrous. HIV is a classic example, and something engineered could be far worse.

comment by anithite (obserience) · 2023-04-19T13:05:50.961Z · LW(p) · GW(p)

raises finger

realizes I'm about to give advice on creating superpathogens

I'm not going to go into details besides stating two facts:

A common reasoning problem I see is:

  • "here is a graph of points in the design space we have observed"
    • EG:pathogens graphed by lethality vs speed of spread
  • There's an obvious trendline/curve!
    • therefore the trendline must represent some fundamental restriction on the design space.
    • Designs falling outside the existing distribution are impossible.

This is the distribution explored by nature. Nature has other concerns that lead to the distribution you observe. That pathogens show a "lethality vs spread" relationship tells you about the selection pressures acting on pathogens, not about the space of possible designs.

comment by avturchin · 2023-04-18T09:13:37.963Z · LW(p) · GW(p)

I agree that green infrastructure is a more plausible way of killing humans and getting independent infrastructure for a malicious AI. However, building green infrastructure is slower than nanotech – and thus it will be more visible to outsiders and more vulnerable. Even if it takes weeks, that could be enough to trigger alarms.

Replies from: obserience
comment by anithite (obserience) · 2023-04-18T09:25:25.255Z · LW(p) · GW(p)

Nanotech would definitely be nice but some people have expressed skepticism so I'm proposing an alternative non-(dry)nanotech route.

I'm assuming the AGI is going to kill off all the humans quickly with highly fatal pathogens with long incubation times. Whatever works to minimize transitional chaos and damage to valuable infrastructure.

The meat of this is a proposed solution for thriving after humans are dead. The green infrastructure doesn't have to be that large to sustain the AI's needs initially. A small cluster of a few dozen consumer GPUs + biotech interfacing hardware may be the AI's temporary home until it can build up enough to re-power datacenters and do more scavenging.

Although I'd go with multiple small clusters for redundancy. Initial power consumption can be more than handled by literally a backyard's worth of kudzugoth and a small bio-electric generator. Plant-based solar-to-sugar-to-electricity should give 50 W/m², so for a 6 kW cluster with 20 GPUs a 20m*10m patch should do, and could be unobtrusive, blending into the surrounding vegetation.
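The power budget in that last sentence checks out under the stated assumptions (the 300 W per-GPU figure is implied by 6 kW across 20 GPUs):

```python
POWER_DENSITY_W_M2 = 50   # sap-to-electricity output assumed above
GPU_POWER_W = 300         # 6 kW cluster / 20 GPUs
N_GPUS = 20

cluster_w = GPU_POWER_W * N_GPUS                 # 6000 W
area_needed_m2 = cluster_w / POWER_DENSITY_W_M2  # 120 m^2
patch_m2 = 20 * 10                               # the 20m*10m patch, 200 m^2
print(area_needed_m2 <= patch_m2)  # True: the patch has ~40% margin
```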

comment by M. Y. Zuo · 2023-04-19T17:41:47.902Z · LW(p) · GW(p)

A population of well coordinated humans will not be significantly preyed upon by viruses despite viruses being the fastest evolving threat.

Perhaps for natural viruses. But this has not been tested under a sustained adversary developing synthetic viruses.

Even for the latest strain of COVID there may be possibilities for another 10x in virulence with only a modest decrease in lethality.

Replies from: obserience
comment by anithite (obserience) · 2023-04-19T19:25:53.187Z · LW(p) · GW(p)

well coordinated

Yes, assume no intelligent adversary.

  • Well coordinated -->
    • enforced norms preventing individuals from making superpathogens.
    • large scale biomonitoring
    • can and will rapidly deploy vaccines
    • will rapidly quarantine based on bio monitoring to prevent spread
    • might deploy sterilisation measures (EG:UV-C sterilizers in HVAC systems)

There is a tradeoff to be made between level of bio monitoring, speed of air travel, mitigation tech and risk of a pathogen slipping past. Pathogens that operate on 2+day infection-->contagious times should be detectable quickly and might kill 10000 worst case. That's for a pretty aggressive point in the tradeoff space.

Earth is not well coordinated. Success of some places in keeping out COVID shows what actual competence could accomplish. A coordinated earth won't see much impact from the worst of natural pathogens much less COVID-19.

Even assuming a 100% lethal, long-incubation-time, highly infective pathogen for which no vaccine can be made: biomonitoring can detect it prior to symptoms, then quarantine happens and 99+% of the planet remains uninfected. Pathogens travel because we let them.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-04-19T19:48:59.081Z · LW(p) · GW(p)

  • enforced norms preventing individuals from making superpathogens.

How could this enforcement be carried out within every nation? Who will be the enforcer(s)?

Replies from: obserience
comment by anithite (obserience) · 2023-04-19T21:08:56.004Z · LW(p) · GW(p)

The adversary here is assumed to be nature/evolution. I'm not referring to scenarios where intelligent agents are designing pathogens.

Humans can design vaccines faster than viruses can mutate. A population of well coordinated humans will not be significantly preyed upon by viruses despite viruses being the fastest evolving threat.

Nature is the threat in this scenario as implied by that last bit.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-04-19T23:54:28.659Z · LW(p) · GW(p)

No adversary, or group of adversaries, in the real world exists in isolation. Humans will take advantage of viruses, viruses will take advantage of humans, as in the case of Toxoplasma gondii.

In other words, all possible threats, are co-determinants to varying degrees, of the real threat faced by actual humans. Even those without intelligent agency.

So this assumption would quickly break down outside a fantasy world.