Does biology reliably find the global maximum, or at least get close?

post by Noosphere89 (sharmake-farah) · 2022-10-10T20:55:35.175Z · LW · GW · 3 comments

This is a question post.


Jacob Cannell has claimed that biological systems get within 1 OOM not just of a local maximum, but of the global maximum in their abilities.

His comment about biology nearing various limits is reproduced here:

The paper you linked seems quite old and out of date. The modern view is that the inverted retina, if anything, is a superior design vs the everted retina, but the tradeoffs are complex.

This is all unfortunately caught up in some silly historical "evolution vs creationism" debate, where the inverted retina was key evidence for imperfect design and thus inefficiency of evolution. But we now know that evolution reliably finds pareto optimal designs:

biological cells operate close to the critical Landauer Limit, and thus are pareto-optimal practical nanobots.

eyes operate at optical and quantum limits, down to single photon detection.

the brain operates near various physical limits, and is probably also near pareto-optimal in its design space.

Link to comment here:

https://www.lesswrong.com/posts/GCRm9ysNYuGF9mhKd/?commentId=aGq36saoWgwposRHy [LW · GW]

I am confused about how the Landauer limit applies to biological cells other than nerve cells, since it only applies to computation. But I want to ask: is this notion actually true?

And if this view were true, what would the implications be for technology?

Answers

answer by Steven Byrnes · 2022-10-11T00:30:38.530Z · LW(p) · GW(p)
  • The existence of invasive species proves that, at any given time, there are probably loads of possible biological niches that no animal is exploiting.

  • I believe that plants are ≳ 1 OOM below the best human solution for turning solar energy into chemical energy, as measured in power conversion efficiency.[1] (Update: Note that Jacob is disputing this claim and I haven’t had a chance to get to the bottom of it. See thread below.) (Then I guess someone will say “evolution wasn't optimizing for just efficiency; it also has to get built using biologically-feasible materials and processes, and be compatible with other things happening in the cell, etc.” And then I'll reply, “Yeah, that's the point. Human engineers are trying to do a different thing with different constraints.”)


  1. The best human solution would be a 39%-efficient quadruple-junction solar cell, wired to a dedicated electrolysis setup. The electrolysis efficiency seems to be as high as 95%? Multiply those together and we get ≈10× the “peak” plant efficiency number mentioned here, with most plants doing significantly worse than that. ↩︎
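As a quick sanity check of the arithmetic in this footnote, here is a minimal sketch in Python. The 39% and 95% figures are the ones in the footnote, and the 4.3% "C4 plants, peak" figure is the Wikipedia number that comes up later in this thread; treat the result as order-of-magnitude only.

```python
# Rough solar-to-chemical-energy comparison using the figures quoted in this thread
solar_cell_eff = 0.39      # quadruple-junction solar cell, fraction of incident sunlight
electrolysis_eff = 0.95    # claimed electrolysis efficiency
human_chain_eff = solar_cell_eff * electrolysis_eff   # ~0.37

plant_peak_eff = 0.043     # Wikipedia's "C4 plants, peak" figure, fraction of total sunlight

print(f"human chain: {human_chain_eff:.1%}")                                 # ~37%
print(f"ratio vs. peak C4 plants: {human_chain_eff / plant_peak_eff:.0f}x")  # ~9x, i.e. roughly 1 OOM
```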

comment by jacob_cannell · 2022-10-11T16:50:50.309Z · LW(p) · GW(p)

Do you have a source in mind for photosynthesis efficiency?

According to this source some algae have photosynthetic efficiency above 20%:

On the other hand, studies have shown the photosynthetic efficiency of microalgae could well be in the range of 10–20% or higher (Huntley and Redalje 2007). Simple structure of algae allows them to achieve substantially higher PE values compared to terrestrial plants. PE of different microalgal species has been given in Table 2.

Pirt et al. (1980) have suggested that even higher levels of PE can be attained by microalgae. They questioned the current model of photosynthesis, which predicts that the maximum PE with absorbed light (efficiency of conversion to chemical energy) of mean wavelength 575 nm should not exceed 29% and the minimum quantum demand, n = 8. He reported a conversion efficiency of QPAR to cell mass up to 47%. Such high efficiency is impossible if the minimal quantum demand is n = 8, so therefore alternate routes must exist at low incident light (Pirt 1983).

More from that source, which claims measured efficiency of 47% (for algal culture):

In Chlorella strain 211/8k the maximum PE was 34.7% which corresponds to a quantum demand (n) of 6.6 per O2 molecule evolved. In the mixed culture MA003 the maximum PE was 46.8% with 95% confidence limits, 42.7–51.5. This PE value corresponds to a quantum demand (n) of 4.8 per O2 molecule evolved. These results call in question the current model of photosynthesis which predicts that the maximum PE with absorbed light of mean wavelength 575 nm should not exceed 29% and the minimum quantum demand, n = 8. From our results with culture MA003 it is deduced that the maximum practicable storage of total solar energy by algal biomass growth in vitro is 18%.

Other sources I can find quickly have net efficiency of conversion to ATP around 28%, and then further conversion to glucose is around 9% net efficient, but that seems to be for terrestrial plants [1]:

The thermodynamics of solar energy conversion has been discussed for many years and the real energy conversion figure is 9% (for the conversion into sugar) or even 28% (for the conversion into the natural fuel for the plant cells—ATP and NADPH). The photosynthetic efficiency is dependent on the wavelength of the light absorbed. Photosynthetically active radiation (400–700 nm) constitutes only 45% of the actual daylight. Therefore the maximum theoretical efficiency of the photosynthesis process is approximately 11%.

Solar cells haven't hit 50% efficiency last I checked.

But then this article says:

Where plants outpace PV cells, however, is in the amount of light they absorb. Both photosynthesis and photovoltaic systems absorb very high-energy light, but plants are nearly 100% efficient at absorbing light from the visible spectrum — the range of colors from red to blue. PV cells absorb light over a large range of the spectrum, too, but not as well as collard greens, kale, or goosegrass. And depending on the conditions under which they evolved, some plants are better at this than others. ”Dark green, leafy plants have very active chloroplasts and they’re very good at converting light to energy,” says Boghossian. (Popeye was clearly on the right track.) As researchers learn more about photosynthesis and understand the mechanisms that affect its efficiency, they’re able to combine botany and technology in the creation of more effective PV systems. Currently under study is the design of molecules that replicate those in chloroplasts and leaves. Once installed in PV panels, the “artificial leaves” may absorb light as efficiently as plants.

Which doesn't really make sense - why all this interest in reverse engineering photosynthesis?

Regardless, efficiency around 10% is still within an OOM of absolute optimality, and still compatible with pareto-optimality depending on other tradeoffs such as storage density (and the best solar cells + battery tech is far less power dense).


  1. Artificial Leaves: Towards Bio-Inspired Solar Energy Converters ↩︎

Replies from: steve2152, AllAmericanBreakfast
comment by Steven Byrnes (steve2152) · 2022-10-11T18:33:22.515Z · LW(p) · GW(p)

Thanks. As it happened, I had edited my original comment to add a source, shortly before you replied (so you probably missed it). See footnote. Sorry that I didn’t do that when I first posted.

Your first source continues:

In fact, in any case, plants don’t use all incoming sunlight (due to respiration, reflection, light inhibition and light saturation) and do not convert all harvested energy into biomass, which brings about a general photosynthetic proficiency of 3%–6% based on total solar radiation. (See Table 1.)

When we say solar cells are 39% efficient, that’s as a fraction of all incoming sunlight, so the 3-6% is the correct comparison point, not the 11%, right?

Within the 3-6% range, I think (low confidence) the 6% would be lower-intensity light and the 3% would be direct sunlight—I recall that plants start deliberately dumping light when intensity gets too high, because if downstream chemical reactions can’t keep up with upstream ones then you wind up with energetic intermediate products (free radicals) floating around the cell and destroying stuff.

(Update: Confirmed! Actually this paper says it’s even worse than that: “In leaves in full sun, up to 80% of the absorbed energy must be dissipated or risk causing serious damage to the system (41).”)

There are likewise solar cells that also can’t keep up with the flux of direct sunlight (namely dye-sensitized solar cells), but the most commonly-used solar cells are perfectly happy with direct sunlight—indeed, it can make their efficiency slightly higher. The 39%-efficiency figure I mentioned was tested under direct sunlight equivalent. So the best point of comparison would probably be more like 3% than 6%, or maybe even less than 3%? Though it would depend a lot on local climate (e.g. how close to the equator? How often is it cloudy?)

Wikipedia says “C4 plants, peak” is 4.3%. But the citation goes here which says:

A theoretical limit of ~ 12% for the efficiency of photosynthetic glucose production from CO2 and water (based on free energy) can be calculated by considering the chlorophyll band-edge absorption and the two-photosystem structure of oxygenic photosynthesis (6, 13). Taking into account the known losses in light harvesting, overpotentials, and respiration, the maximum limit to photosynthetic efficiency is reduced to 4.6 and 6.0% for C3 and C4 plants, respectively (7). Short-term (rapid-growth phase) conversion efficiencies come within 70 to 75% of meeting these limits.

(Not sure why wikipedia says 4.3% not 4.5%.) Again, we probably need to go down from there because lots of sunlight is the intense direct kind where the plant starts deliberately throwing some of it out.

Anyway, I stand by “≳ 1 OOM below the best human solution” based on what I know right now.

still compatible with pareto-optimality depending on other tradeoffs such as storage density (and the best solar cells + battery tech is far less power dense).

I would say: plants are solving plant problems using plant technology, and humans are solving human problems using human technology. I’m generally kinda negative on how useful it is to frame things like this as a horse-race between the two. ¯\_(ツ)_/¯ (I’m engaging here because it’s fun, not because I think it’s particularly important.)

I agree that your second excerpt is kinda poorly-explained, maybe involving a bit of PR hype. I do think that if people are going to do basic scientific research that is relevant to renewable energy, studying the nuts and bolts of photosynthesis seems like a perfectly reasonable thing to do. But the path-to-applications would probably be pretty indirect, if any.

Replies from: jacob_cannell
comment by jacob_cannell · 2022-10-11T18:59:53.066Z · LW(p) · GW(p)

When we say solar cells are 39% efficient, that’s as a fraction of all incoming sunlight, so the 3-6% is the correct comparison point, not the 11%, right?

No, if you look at Table 1 in that source, the 3-6% is useful biomass conversion from crops, which is many steps removed.

The maximum efficiency is:

  • 28%: (for the conversion into the natural fuel for the plant cells—ATP and NADPH).
  • 9.2%: conversion to sugar after 32% efficient conversion of ATP and NADPH to glucose
  • 3-6%: harvestable energy, as plants are not pure sugar storage systems and have various metabolic needs

So it depends what one is comparing ... but it looks like individual photosynthetic cells can convert solar energy to ATP (a form of chemical energy) at up to 28% efficiency (53% of spectrum × 70% leaf efficiency (reflection/absorption etc.) × 76% chlorophyll efficiency). That alone seems to defeat the > 1 OOM claim, and some algae may achieve solar-cell-level efficiency.
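A minimal sketch of the efficiency chain in this comment (Python; the 53%/70%/76% and 32% factors are the ones given above, so this only checks the multiplication):

```python
# Solar -> ATP efficiency chain for a photosynthetic cell, per the figures above
spectrum_usable = 0.53   # fraction of the solar spectrum that is photosynthetically usable
leaf_capture    = 0.70   # leaf efficiency (reflection/absorption losses)
chlorophyll_eff = 0.76   # chlorophyll conversion efficiency

atp_eff = spectrum_usable * leaf_capture * chlorophyll_eff
print(f"solar -> ATP/NADPH: {atp_eff:.0%}")      # ~28%

glucose_eff = atp_eff * 0.32                     # further conversion of ATP/NADPH to glucose
print(f"solar -> glucose:   {glucose_eff:.1%}")  # ~9%
```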

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2022-10-11T19:15:27.552Z · LW(p) · GW(p)

Overall, this debate would benefit from clarity on the specific metrics of comparison, along with an explanation for why we should care about that specific metric.

Photosynthesis converts light into a form of chemical energy that is easy for plants to use for growth, but impractical for humans to use to power their machines.

Solar cell output is an efficient conversion of light energy into grid-friendly electrical energy, but we can’t exploit that to power plant growth without then re-converting that electrical energy back into light energy.

I don’t understand why we are comparing the efficiency of plants in generating ATP with the efficiency of solar cells generating grid power. It just doesn’t seem that meaningful to me.

Replies from: jacob_cannell
comment by jacob_cannell · 2022-10-11T19:28:53.115Z · LW(p) · GW(p)

I'm simply evaluating and responding to the claim:

I believe that plants are ≳ 1 OOM below the best human solution for turning solar energy into chemical energy, as measured in power conversion efficiency

It's part of a larger debate on pareto-optimality of evolution in general, probably based on my earlier statement:

But we now know that evolution reliably finds pareto optimal designs:

(then I gave 3 examples: cellular computation, the eye/retina, and the brain)

So the efficiency of photovoltaic cells vs photosynthesis is relevant as a particular counterexample (and based on 30 minutes of googling it looks like biology did find solutions roughly on par - at least for conversion to ATP).

comment by DirectedEvolution (AllAmericanBreakfast) · 2022-10-11T17:41:33.158Z · LW(p) · GW(p)

One source of interest is the prospect of improving food production efficiency by re-engineering photosynthesis.

answer by jacob_cannell · 2022-10-11T09:33:26.009Z · LW(p) · GW(p)

I am confused about how the Landauer limit applies to biological cells other than nerve cells, since it only applies to computation. But I want to ask: is this notion actually true?

Biological cells are robots that must perform myriad physical computations, all of which are tightly constrained by the thermodynamic Landauer Limit. This applies to all the critical operations of cells including DNA/cellular replication, methylation, translation, etc.

The lower Landauer bound [LW(p) · GW(p)] is 0.02 eV, which translates into a minimal noise voltage of 20mV. Ion flows in neural signals operate on voltage swings around 100mV, close to the practical limits at low reliability levels.

The basic currency of chemical energy in biology is ATP, which is equivalent to about 1e-19J or roughly 1 eV, the practical limit for reliable computation. Proteins can perform various reliable computations from single or few ATP transactions, including transcription.
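A quick numerical check of the two figures above. This is only a sketch: it assumes T ≈ 310 K (body temperature) and takes the ~1e-19 J ATP figure at face value; the exact ratio depends on which ATP free-energy value you use.

```python
import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
eV  = 1.602176634e-19    # 1 eV in joules
T   = 310.0              # ~body temperature, K (assumption)

landauer = k_B * T * math.log(2)   # minimum energy to erase one bit
# ~3e-21 J, ~0.019 eV, i.e. roughly the 0.02 eV / 20 mV figure quoted above
print(f"Landauer bound: {landauer:.2e} J = {landauer / eV:.3f} eV")

atp = 1e-19                        # ATP energy figure used above (order of magnitude)
# ~0.6 eV, i.e. a few tens of times the lower Landauer bound
print(f"ATP: {atp / eV:.2f} eV, ~{atp / landauer:.0f}x the Landauer bound")
```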

A cell has shared read only storage through the genome, and then a larger writable storage system via the epigenome, which naturally is also near thermodynamically optimal, typically using 1 ATP to read or write a bit or two reliably.

From "Information and the Single Cell":

Thus, the epigenome provides a very appreciable store of cellular information, on the order of 10 gigabytes per cell. It also operates over a vast range of time scales, with some processes changing on the order of minutes (e.g. receptor transcription) and others over the lifetime of the cell (irreversible cell fate decisions made during development). Finally, the processing costs are low: reading a 2-bit base-pair costs only 1 ATP.

Computation by wetware is vastly less expensive than cell signaling [11]; a 1-bit methylation event costs 1 ATP (though maintaining methylation also incurs some expense [63]).

According to estimates in "Science and Engineering Beyond Moore's Law", an E. coli cell has a power dissipation rate of 1.4e-13 W and takes 2400 s to replicate, which implies a thermodynamic limit of at most ~1e11 bits - close to their estimates of the cell's total information content:

This result is remarkably close to the experimental estimates of the informational content of bacterial cells based on microcalorimetric measurements which range from 1e11 to 1e13 bits per cell. In the following, it is assumed that 1 cell = 1e11 bit, i.e., the conservative estimate is used
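A minimal sketch of the arithmetic behind that ~1e11-bit bound (assuming T ≈ 310 K and the power and replication-time figures quoted above):

```python
import math

k_B = 1.380649e-23                 # J/K
T   = 310.0                        # K (assumption)
landauer = k_B * T * math.log(2)   # ~3e-21 J per bit erased

power_W       = 1.4e-13            # E. coli power dissipation, per the estimate above
replication_s = 2400.0             # replication time, per the estimate above

max_bits = power_W * replication_s / landauer
print(f"upper bound on bits erased during replication: {max_bits:.1e}")  # ~1e11 bits
```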

comment by DirectedEvolution (AllAmericanBreakfast) · 2022-10-11T22:26:14.410Z · LW(p) · GW(p)

A concrete setting in which to think about this would be the energy cost of an exonuclease severing a single base pair from a DNA molecule that was randomly synthesized and inserted into a test tube in a mixture of nucleotide and nucleoside monomers. The energy cost of severing the base pair, dissociating their hydrogen bond, and separating them irretrievably into the random mixture of identical monomers using thermal energy, would be the cost in energy of deleting 2 bits of information.

Unfortunately, I haven't been able to find the amount of ATP consumed by a particular exonuclease per base pair severed, so I don't have a good way to compute how close this energy cost would be to the Landauer limit.

Let's note that in a real biological context, there is massive redundancy of genetic information, as genetic information is also stored in mRNA, potentially in the other chromosome copy, and, with some potential loss, in proteins. Completely eradicating 2 bits of genetic information from a single cell would take vastly more energy than merely severing a couple of bonds on the DNA backbone. This amount would be enormously higher than the Landauer limit.

Clearly, then, cells use vastly more materials to store bits of genetic information than the theoretical minimum, and as a consequence, it would require vastly more energy to eradicate that information from the cell. Since the cell, and not the individual randomly synthesized DNA molecule, is the system you have in mind for comparison with nanotech, this seems the more apt comparison. I suspect a well-designed nanobot could store, retrieve, manipulate, and delete bits of information using less redundancy, and thus less energy, than a cell requires - particularly if it was designed specifically to achieve minimal energy usage to accomplish these operations.

Replies from: jacob_cannell
comment by jacob_cannell · 2022-10-12T00:23:58.320Z · LW(p) · GW(p)

Let's note that in a real biological context, there is massive redundancy of genetic information, as genetic information is also stored in mRNA, potentially in the other chromosome copy, and, with some potential loss, in proteins.

This is like saying there is massive redundancy in a GPU chip because the same bits are stored on wires in transit, in various caches, in the register file, and in correlated intermediate circuit states - and just as ridiculous.

The comparison here is the energy required for important actions such as complete replication of the entire nanobot, which cells accomplish using efficiency close to the minimal thermodynamic limit.

Any practical nanobot that does anything useful - such as replicate itself - will also need to read from its long term storage (with error correction), which will induce correlations into the various mechanical motor and sensor systems that combine the info from long-term storage with that sensed from the environment and perform the necessary intermediate computations chaining into motor outputs.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2022-10-12T00:42:22.835Z · LW(p) · GW(p)

This is like saying there is massive redundancy in a GPU chip because the same bits are stored on wires in transit, in various caches, in the register file, and in correlated intermediate circuit states - and just as ridiculous.

Far from ridiculous, I think this is a key point. As you point out, both cells and nanobots require information redundancy to replicate. We can consider the theoretical efficiency of information deletion in terms of two components:

  1. The energy required to delete one bit from an individual information-storing structure, such as a DNA molecule.
  2. The average amount of redundancy per bit in the cell or nanobot.

These are two separate factors and we have to consider them both to understand whether or not nanobots can operate with greater energy efficiency than cells.

Replies from: jacob_cannell
comment by jacob_cannell · 2022-10-12T01:08:40.441Z · LW(p) · GW(p)

Replication involves copying and thus erasing bits from the environment, not from storage.

The optimal non-redundant storage nanobot already exists: a virus. But it's hardly interesting, and regardless, the claim I originally made is about Pareto optimality.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2022-10-12T02:25:58.240Z · LW(p) · GW(p)

Popping out to a meta-level, I am not sure if your aim in these comments is to communicate an idea clearly and defend your claims in a way that's legible and persuasive to other people?

For me personally, if that is your aim, there are two or three things that would be helpful.

  1. Use widely accepted jargon in ways that clearly (from other people's perspective) fit the standard definition of those terms. Otherwise, supply a definition, or an unambiguous example.
  2. Make an effort to show how your arguments and claims tie into the larger point you're trying to make. If the argument is getting away from your original point, explain why, and suggest ways to reorient.
  3. If your conversational partner offers you examples to illustrate their thinking, and you disagree with the examples or interpretation, then try using those examples to make your point. For example, you clearly disagree with some aspect of my previous comment about redundancy, but based on your response, I can't really discern what you're disagreeing with or why.

I'm ready to let go of this conversation, but if you're motivated to make your claims and arguments more legible to me, then I am happy to hear more on the subject. No worries either way.

Replies from: jacob_cannell
comment by jacob_cannell · 2022-10-12T14:05:18.425Z · LW(p) · GW(p)

Upstream this subthread started when the OP said:

I am confused about how the Landauer limit applies to biological cells other than nerve cells, since it only applies to computation. But I want to ask: is this notion actually true?

To which I replied

Biological cells are robots that must perform myriad physical computations, all of which are tightly constrained by the thermodynamic Landauer Limit. This applies to all the critical operations of cells including DNA/cellular replication, methylation, translation, etc.

You then replied with a tangential thread (from my perspective) about 'erasing genetic information', which is not a subgoal of a biological cell (if anything the goal of a biological cell is the exact opposite - to replicate genetic information!)

So let me expand my claim/argument:

A robot is a physical computer built out of atomic widgets: sensors, actuators, connectors, logic gates, ROM, RAM, interconnect/wires, etc. Each of these components is also a physical computer bound by the Landauer limit.

A nanobot/cell in particular is a robot with the unique ability to replicate - to construct a new copy of itself. This requires a large number of bit erasures and thus energy expenditure proportional to the information content of the cell.

Thermodynamic/energy efficiency is mostly a measure of the fundamental widgets themselves. For example, in a modern digital computer, the thermodynamic efficiency is a property of the process node, which determines the size, voltage, and electron flow of transistors and interconnect. CMOS chips have increased in thermodynamic efficiency over time, à la Moore's Law.

So then we can look at a biological cell, as a nanobot, and analyze the thermodynamic efficiency of its various elemental computational widgets, which include DNA-to-RNA transcription (reading from DNA ROM to RNA RAM cache), computations (various RNA operations, methylation, protein interactions, etc.), and translation (RNA to proteins). I provided links to sources establishing that these operations are all efficient down to the Landauer Limit.

Then there is only one other notion of efficiency we may concern ourselves with - system-level circuit efficiency. I mostly avoided discussing this because it's more complex to analyze and also largely orthogonal to low-level thermodynamic/energy efficiency. For example, you could have 2 different circuits that both add 32-bit numbers, where one uses 100k logic gates and the other uses 100M logic gates - obviously the second circuit uses more energy (assuming the same process node), but that's really a question of circuit efficiency/inefficiency, not thermodynamic efficiency (which is a process node property).
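A toy illustration of the gate-count point in this paragraph. The per-gate energy number is purely hypothetical; only the ratio matters, since both circuits sit on the same (assumed) process node.

```python
# Two circuits computing the same 32-bit add on the same (hypothetical) process node
energy_per_gate_J = 1e-15      # hypothetical switching energy per gate for this node

gates_circuit_a = 1e5          # lean implementation (figure from the comment above)
gates_circuit_b = 1e8          # wasteful implementation of the same function

print("circuit A:", gates_circuit_a * energy_per_gate_J, "J per add")
print("circuit B:", gates_circuit_b * energy_per_gate_J, "J per add")
# Identical thermodynamic (per-gate) efficiency, ~1000x difference in circuit-level efficiency
```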

But that being said, I gave one example of a whole-system operation - the energy required for a cell to self-replicate during mitosis - and sources indicating this uses close to the minimal energy, given estimates of the bit entropy of the cell in question (E. coli).

So either:

  1. You disagree with my sources that biological cells are near thermodynamically optimal/efficient in their elementary atomic subcomputations (which has nothing to do with your tangential test-tube example). If you do disagree here, specify exactly which important atomic subcomputation(s) you believe are not close to optimal.
  2. Grant 1.) but disagree that cells are efficient at the circuit level for replication/mitosis, which then necessarily implies that for each biological cell (ie e coli), there is some way to specify a functionally equivalent cell which does all the same things and is just as effective at replicating e coli's DNA, but also has much lower (ie OOM lower) bit entropy (and thus is probably much smaller, or much less coherent/structured)
  3. You now agree on these key points and we can conclude a successful conversation and move on

My standard for efficiency differences is an OOM.

Replies from: Algon, AllAmericanBreakfast
comment by Algon · 2022-10-12T15:47:01.949Z · LW(p) · GW(p)

The e-coli calculations make no sense to me. They posit huge orders-of-magnitude differences between an "optimal" silicon-based machine and a carbon one (an e-coli cell). I attribute this to bogus calculations.

The one part I scrutinized: they use equation 7 to estimate the information content of an E-coli bacterium is ~1/2 TB. Now that just sounds absurd to me. That sounds like the amount you'd need to specify the full state of an E-coli at a given point in time (and indeed, that is what equation seven seems to be doing). They then say that E-coli performs the task of forming an atomically precise machine out of a max entropy state, instead of the actual task of "make a functioning e-coli cell, nevermind the exact atomic conditions", and see how long it would take some kind of gimped silicon computer because "surely silicon machines can't function in kilo kelvin temperatures?" to do that task. Then they say "oh look, silicon machines are 3 OOM slower than biological cells". 

Replies from: jacob_cannell
comment by jacob_cannell · 2022-10-13T18:05:19.510Z · LW(p) · GW(p)

They then say that E-coli performs the task of forming an atomically precise machine out of a max entropy state, instead of the actual task of "make a functioning e-coli cell, nevermind the exact atomic conditions", and see how long it would take some kind of gimped silicon computer because "surely silicon machines can't function in kilo kelvin temperatures?" to do that task. Then they say "oh look, silicon machines are 3 OOM slower than biological cells".

The methodology they are using to estimate the bit info content of the bio cell is sound, but the values they plug in result in a conservative overestimate. A functioning e-coli cell does require atomically precise assembly of at least some components (notably DNA) - but naturally there is some leeway in the exact positioning and dynamic deformation of other components (like the cell wall), etc. But a bio cell is an atomically precise machine, more or less.

They assume 32 bits of xyz spatial position for each component, assume atoms as the building blocks, and don't consider alternate configurations, but that seems to be a difference of one or a few OOM, not many.

And indeed, from my calculation their estimate is 1 OOM from the maximum info content implied by the cell's energy dissipation and time for replication (which worked out to 1e11 bits, I think). There was another paper linked earlier which used a more detailed methodology and got an estimate of a net energy use of only 6x the lower unreliable Landauer bound, which also constrains the true bit content to be in the range of 1e10 to 1e11 bits.

Then they say "oh look, silicon machines are 3 OOM slower than biological cells".

Not quite; they say a minimalist serial von Neumann silicon machine is 2 OOM slower:

For this, the total time needed to emulate the bio-cell task (i.e., equivalent of 1e11 output bits) will be 510 000 s, which is more than 200× larger than the time needed for the bio-cell.

Their silicon cell is OOM inefficient because: 1.) it is serial rather than parallel, and 2.) it uses digital circuits rather than analog computations

comment by DirectedEvolution (AllAmericanBreakfast) · 2022-10-12T16:41:59.511Z · LW(p) · GW(p)

Thanks for taking the time to write this out, it's a big upgrade in terms of legibility! To be clear, I don't have a strong opinion on whether or not biological cells are or are not close to being maximum thermodynamic efficiency. Instead, I am claiming that aspects of this discussion need to be better-defined and supported to facilitate productive discussion here.

I'll just do a shallow dive into a couple aspects.

Here's a quote from one of your sources:

Finally, the processing costs [of transcription] are low: reading a 2-bit base-pair costs only 1 ATP.

I agree with this source that, if we ignore the energy costs to maintain the cellular architecture that permits transcription, it takes 1 ATP to add 1 rNTP to the growing mRNA chain.

In connecting this to the broader debate about thermodynamic efficiency, however, we have a few different terms and definitions for which I don't yet see an unambiguous connection.

  • The Landauer limit, which is defined as the minimum energy cost of deleting 1 bit.
  • The energy cost of adding 1 rNTP to a growing mRNA chain and thereby (temporarily) copying 1 bit.
  • The power per rNTP required to maintain a copy of a particular mRNA in the cell, given empirical rates of mRNA decay.

I don't see a well-grounded way to connect these energy and power requirements for building and maintaining an mRNA molecule to the Landauer limit. So at least as far as mRNA goes, I am not sold on (1).

disagree that cells are efficient at the circuit level for replication/mitosis, which then necessarily implies that for each biological cell (ie e coli), there is some way to specify a functionally equivalent cell which does all the same things and is just as effective at replicating e coli's DNA, but also has much lower (ie OOM lower) bit entropy (and thus is probably much smaller, or much less coherent/structured)

I'm sure you understand this, but to be clear, "doing all the same things" as a cell would require being a cell. It's not at all obvious to me why being effective at replicating E. coli's DNA would be a design requirement for a nanobot. The whole point of building nanobots is to use different mechanisms to accomplish engineering requirements that humans care about. So for example, "can we build a self-replicating nanobot that produces biodiesel in a scalable manner more efficiently than a genetically engineered E. coli cell?" is a natural, if still underdefined, way to think about the relative energy efficiency of E. coli vs nanobots.

Instead, it seems like you are asking whether it's possible to, say, copy physical DNA into physical mRNA using at least 10x less energy than is needed during synthesis by RNA polymerase. To that, I say "probably not." However, I also don't think we can learn anything much from that conclusion about the potential to improve medicine or commercial biosynthesis, in terms of energy costs or any other commercially or medically relevant metric, by using nanobots instead of cells. If you are exclusively concerning yourself with questions like "is there a substantially energetically cheaper way to copy DNA into mRNA," I will, with let's say 75% confidence, agree with you that the answer is no.

Replies from: jacob_cannell, Gunnar_Zarncke, sharmake-farah
comment by jacob_cannell · 2022-10-12T18:15:50.042Z · LW(p) · GW(p)

Entropy is conserved. Copying a bit of DNA/RNA/etc necessarily erases a bit from the environment. The Landauer limit applies.

I'm sure you understand this, but to be clear, "doing all the same things" as a cell would require being a cell. It's not at all obvious to me why being effective at replicating E. coli's DNA would be a design requirement for a nanobot.

This is why I used the term Pareto optimal and the foundry process analogy. A 32nm node tech is not Pareto optimal - a later node could do pretty much everything it does, only better.

If biology is far from Pareto optimal, then it should be possible to create strong nanotechnology - artificial cells that do everything bio cells do, but OOM better. Most importantly, strong nanotech could replicate much faster and use much less energy.

Strong nanotech has been proposed as one of the main methods that unfriendly AI could near instantly kill humanity.

If biology is Pareto optimal at what it does then only weak nanotech is possible which is just bioengineering by another (unnecessary) name.

This relates to the debate about evolution: my prior is that evolution is mysterious, subtle, and superhuman. If you think you found a design flaw, you are probably wrong. This has borne out well so far - the inverted retina is actually optimal, some photosynthesis is about as efficient as efficient solar cells, etc.

None of this has anything to do with goals other than biological goals. Considerations of human uses of biology are irrelevant

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2022-10-12T18:38:14.699Z · LW(p) · GW(p)

Entropy is conserved. Copying a bit of DNA/RNA/etc necessarily erases a bit from the environment. The Landauer limit applies.

This is not a legible argument to me. To make it legible, you would need a person who does not have all the interconnected knowledge that is in your head to be able to examine these sentences and (quickly) understand how these arguments prove the conclusion. N of 1, but I am a biomedical engineering graduate student and I cannot parse this argument. What is "the environment?" What do you mean mechanistically when you say "copying a bit?" What exactly is physically happening when this "bit" is "erased" in the case of, say, adding an rNTP to a growing mRNA chain?

If biology is far from Pareto optimal, then it should be possible to create strong nanotechnology - artificial cells that do everything bio cells do, but OOM better. Most importantly, strong nanotech could replicate much faster and use much less energy.

Strong nanotech has been proposed as one of the main methods that unfriendly AI could near instantly kill humanity.

If biology is Pareto optimal at what it does then only weak nanotech is possible which is just bioengineering by another (unnecessary) name.

Here's another thing you could do to flesh things out:

Describe a specific form of "strong nanotech" that you believe some would view as a main method an AI could use to kill humanity nearly instantly, but that is ruled out based on your belief that biology is Pareto optimal. Obviously, I'm not asking for blueprints. Just a very rough general description, like "nanobots that self-replicate, infect everybody's bodies, and poison them all simultaneously at a signal from the AI."

Replies from: jacob_cannell
comment by jacob_cannell · 2022-10-12T20:17:43.483Z · LW(p) · GW(p)

I may be assuming familiarity with the physics of computation and reversible computing.

Copying information necessarily overwrites and thus erases information (whatever was stored prior to the copy write). Consider a simple memory with 2 storage cells. Copying the value of cell 0 to cell 1 involves reading from cell 0 and then writing said value to cell 1, overwriting whatever cell 1 was previously storing.

The only way to write to a memory without erasing information is to swap, which naturally is fully reversible. So a reversible circuit could swap the contents of the storage cells, but swap is fundamentally different than copy. Reversible circuits basically replace all copys/erasures with swaps, which dramatically blows up the circuit (they always have the same number of outputs as inputs, so simple circuits like AND produce an extra garbage output which must propagate indefinitely).

An assembler which takes some mix of atoms/parts from the environment and then assembles them into some specific structure is writing information and thus also erasing information. The assembly process removes/erases entropy from the original configuration of the environment (atoms/parts) memory, which necessarily implies an increase of entropy somewhere else - so you could consider the Landauer limit as an implication of the second law of thermodynamics. Every physical system is a memory, and physical transitions are computations. To avoid this erasure (i.e., to be reversible), the assembler would have to permanently store garbage bits equivalent to what it writes, which isn't viable.

As a specific example, consider a physical system constrained to a simple lattice grid of atoms each of which can be in one of two states, and thus stores a single bit. An assembler which writes a specific bitmap (say an image of the mona lisa) to this memory must then necessarily store all the garbage bits previously in the memory, or erase them (which just moves them to the environment). Information/entropy is conserved.
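A toy illustration of the copy-vs-swap distinction above, using a hypothetical two-cell bit memory: an irreversible copy maps distinct initial states onto the same final state, so one bit's worth of information has to be pushed into the environment, whereas a swap is a bijection and erases nothing.

```python
from itertools import product

def copy_cell0_to_cell1(state):
    c0, c1 = state
    return (c0, c0)            # overwrites cell 1; its previous value is lost

def swap_cells(state):
    c0, c1 = state
    return (c1, c0)            # reversible: a bijection on the state space

states = list(product((0, 1), repeat=2))
print("copy:", sorted({copy_cell0_to_cell1(s) for s in states}))  # 2 outputs from 4 inputs -> 1 bit erased
print("swap:", sorted({swap_cells(s) for s in states}))           # all 4 states preserved
```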

Replies from: AllAmericanBreakfast, philh
comment by DirectedEvolution (AllAmericanBreakfast) · 2022-10-12T21:51:26.179Z · LW(p) · GW(p)

This is very helpful. I am definitely unfamiliar with the physics of computation and reversible computing, but your description was quite clear.

If I'm following you, "delete" in the case of mRNA assembly would mean that we have "erased" one rNTP from the solution, then "written" it into the growing mRNA molecule. The Landauer limit gives the theoretical minimal energy required for the "delete" part of this operation.

You are saying that since 1 high energy P bond (~1 ATP) is all that's required to do not only the "delete," but also the "write," and since the energy contained in this bond is pretty close to the Landauer limit, that we can say there's relatively little room to improve the energy efficiency of an individual read/write operation by using some alternative mechanism.

As such, mRNA assembly approaches not only Pareto optimality, but a true minimum of energy use for this particular operation. It may be that it's possible to improve other aspects of the read/write operation, such as its reliability (mRNA transcription is error-prone) or speed. However, if the cell is Pareto optimal, then this would come at a tradeoff with some other trait, such as energy efficiency.

If I am interpreting you correctly so far, then I think there are several points to be made. 

  1. There may be a file drawer problem operating here. Is a paper finding that some biological mechanism is far from Pareto optimal or maximally thermodynamically efficient going to be published? I am not convinced about how confidently we can extrapolate beyond specific examples. This makes me quite hesitant to embrace the idea that individual computational operations, not to mention whole cell-scale architectures, are maximally energy efficient.
  2. The energy of ATP hydrolysis is still almost 30x the Landauer limit, even ignoring the energy-consuming cellular context in which its energy can be used to do useful delete/copy operations. So there seems to be theoretical room to improve copy operations by 1 OOM even in a cellular context, not to mention gains by reorganizing the large-scale architecture.
  3. Cells are certainly not Pareto optimal for achieving useful outcomes from the perspective of intelligent agents, such as a biosynthesis company or a malevolent AI. Even if I completely accepted your argument that wild-type cells are both Pareto optimal self-replicators and, for practical purposes, approaching the limit of energy efficiency in all their operations, this would have little bearing on the ability of agents to design cells/nanobots to accomplish specific practical tasks more efficiently than wild-type cells on any given metric of performance you care to name, by many OOMs.

In the vein of considering our appetite for disagreement [LW · GW], now that I understand the claims you are making more clearly, I think that, with the exception of the tractability of engineering grey goo, any differences of opinion between you and me are over levels of confidence. My guess is that there's not much room to converge, because I don't have the time to devote to this specific research topic.

All in all, though, I appreciate the effort you put into making these arguments, and I learned something valuable about the physics of computation. So thank you for that.

Replies from: jacob_cannell
comment by jacob_cannell · 2022-10-12T22:42:25.619Z · LW(p) · GW(p)
  1. If anything, I'd say the opposite is true - inefficiency in key biochemical processes that are under high selection pressure is surprising and more notable. For example, I encountered some papers about the apparent inefficiency of a key photosynthesis enzyme the other day.

  2. I don't know quite what you are referring to here, but I'm guessing you are confusing the reliable vs unreliable limits, which I discussed in my brain efficiency post and linked somewhere else in this thread.

That paper Gunnar found analyzes replication efficiency in more depth:

More significantly, these calculations also establish that the E. coli bacterium produces an amount of heat less than six times (220 n_pep/42 n_pep) as large as the absolute physical lower bound dictated by its growth rate, internal entropy production, and durability. In light of the fact that the bacterium is a complex sensor of its environment that can very effectively adapt itself to growth in a broad range of different environments, we should not be surprised that it is not perfectly optimized for any given one of them. Rather, it is remarkable that in a single environment, the organism can convert chemical energy into a new copy of itself so efficiently that if it were to produce even a quarter as much heat it would be pushing the limits of what is thermodynamically possible! This is especially the case since we deliberately underestimated the reverse reaction rate with our calculation of phyd, which does not account for the unlikelihood of spontaneously converting carbon dioxide back into oxygen. Thus, a more accurate estimate of the lower bound on β⟨Q⟩ in future may reveal E. coli to be an even more exceptionally well-adapted self-replicator than it currently seems.

I haven't read the paper in detail enough to know whether that 6x accounts for reliability/errors or not.

https://aip.scitation.org/doi/10.1063/1.4818538

comment by philh · 2022-10-19T11:53:08.588Z · LW(p) · GW(p)

or erase them (which just moves them to the environment)

I don't follow this. In what sense is a bit getting moved to the environment?

I previously read deconfusing Landauer's principle [LW · GW] here and... well, I don't remember it in any depth. But if I consider the model shown in figures 2-4, I get something like: "we can consider three possibilities for each bit of the grid. Either the potential barrier is up, and if we perform some measurement we'll reliably get a result we interpret as 1. Or it's up, and 0. Or the potential barrier is down (I'm not sure if this would be a stable state for it), and if we perform that measurement we could get either result."

But then if we lower the barrier, tilt, and raise the barrier again, we've put a bit into the grid but it doesn't seem to me that we've moved the previous bit into the environment.

I think the answer might be "we've moved a bit into the environment, in the sense that the entropy of the environment must have increased"? But that needs Landauer's principle to see it, and I take the example as being "here's an intuitive illustration of Landauer's principle", in which case it doesn't seem to work for that. But perhaps I'm misunderstanding something?

(Aside, I said in the comments of the other thread something along the lines of, it seems clearer to me to think of Landauer's principle as about the energy cost of setting bits than the energy cost of erasing them. Does that seem right to you?)

Replies from: jacob_cannell
comment by jacob_cannell · 2022-10-19T17:11:42.550Z · LW(p) · GW(p)

I think the answer might be "we've moved a bit into the environment, in the sense that the entropy of the environment must have increased"?

Yes, entropy/information is conserved, so you can't truly erase bits. Erasure just moves them across the boundary separating the computer and the environment. This typically manifests as heat.

Landauer's principle is actually about the minimum amount of energy required to represent or maintain a bit reliably in the presence of thermal noise. Erasure/copying then results in equivalent heat energy release.

comment by Gunnar_Zarncke · 2022-10-12T20:48:47.552Z · LW(p) · GW(p)

I want to jump in and provide another reference that supports jacob_cannell's claim that cells (and RNA replication) operate close to the thermodynamic limit.

>deriving a lower bound for the amount of heat that is produced during a process of self-replication in a system coupled to a thermal bath. We find that the minimum value for the physically allowed rate of heat production is determined by the growth rate, internal entropy, and durability of the replicator, and we discuss the implications of this finding for bacterial cell division, as well as for the pre-biotic emergence of self-replicating nucleic acids.
Statistical physics of self-replication - Jeremy England
https://aip.scitation.org/doi/10.1063/1.4818538

There are some caveats that apply if we compare this to different nanobot implementations:

  • a substrate needing fewer atoms/bonds might be used - then we'd have to assemble fewer atoms and thus need less energy. DNA is already very compact, so there is no OOM left to spare there, but maybe the rest of the cell content could be improved. As mentioned, for viruses there is really no OOM left.
  • A heat bath and a solution of needed atoms are assumed. But no reuse of more complicated molecules. Maybe there are sweet spots in engineering space between macroscopic source materials (refined silicon, iron, pure oxygen, etc., as in industrial processes) and a nutrient soup.
comment by Noosphere89 (sharmake-farah) · 2022-10-12T16:53:40.576Z · LW(p) · GW(p)

This part about function is important, since I don't think the things we want out of nanotech perfectly overlap with biology itself, and that can cause energy efficiency to increase or decrease.

Replies from: jacob_cannell
comment by jacob_cannell · 2022-10-12T18:22:05.221Z · LW(p) · GW(p)

My comment above addresses this

answer by Charlie Steiner · 2022-10-10T22:52:45.565Z · LW(p) · GW(p)

No animals do nuclear fusion to extract energy from their food, meaning that they're about 11 orders of magnitude off from the optimal use of matter.

The inverted vs. everted retina thing is interesting, and it makes sense that there are space-and-mass-saving advantages to putting neurons inside the eye, especially if your retinas are a noticeable fraction of your weight (hence the focus on "small, highly-visual species"). But it seems like for humans in particular, having an everted retina would likely be better: "The results from modelling nevertheless indicate clearly that the inverted retina offers a space-saving advantage that is large in small eyes and substantial even in relatively large eyes. The advantage also increases with increasingly complex retinal processing and thus increasing retinal thickness. [...] Only in large-eyed species, the scattering effect of the inverted retina may indeed pose a disadvantage and the everted retina of cephalopods may be superior, although it also has its problems." (Kröger and Biehlmaier 2009)

But anyhow, which way around my vs. octopuses' retinas are isn't that big a mistake either way - certainly not an order of magnitude.

To get that big of an obvious failure you might have to go to more extreme stuff like the laryngeal nerve of the giraffe. Or maybe scurvy in humans.

Overall, [shrug]. Evolution's really good at finding solutions but it's really path-dependent. I expect it to be better than human engineering in plenty of ways, but there are plenty of ways the actual global optimum is way too weird to be found by evolution.

comment by jacob_cannell · 2022-10-11T09:37:53.960Z · LW(p) · GW(p)

No animals do nuclear fusion to extract energy from their food, meaning that they're about 11 orders of magnitude off from the optimal use of matter.

That isn't directly related to any of the claims I made, which specifically concerned the thermodynamic efficiency of cellular computations, the eye, and the brain.

Nuclear fusion may simply be impossible to realistically harness by a cell sized machine self assembled out of common elements.

Replies from: Charlie Steiner, lahwran
comment by Charlie Steiner · 2022-10-11T13:11:47.517Z · LW(p) · GW(p)

That isn't directly related to any of the claims I made, which specifically concerned the thermodynamic efficiency of cellular computations, the eye, and the brain.

Hence why it's an answer to a question called "Does biology reliably find the global maximum, or at least get close?" :P

By analogy, I think it is in fact correct for brains as well. Brains don't use quantum computing or reversible computing, so they're very far from the global optimum use of matter for computation. Those are also hard if not impossible to realistically harness with something made out of living cells.

Replies from: M. Y. Zuo, sharmake-farah
comment by M. Y. Zuo · 2022-10-11T15:33:05.440Z · LW(p) · GW(p)

Brains don't use quantum computing or reversible computing, so they're very far from the global optimum use of matter for computation.

Neither of the alternatives has been proven to work at scale, though?

In fact there are still theoretical hurdles for a human brain-size implementation in either case that have not been fully addressed in the literature.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2022-10-11T16:22:55.781Z · LW(p) · GW(p)

Go on, what are some of the theoretical hurdles for a brain-scale quantum computer?

Replies from: M. Y. Zuo, sharmake-farah
comment by M. Y. Zuo · 2022-10-11T20:41:17.317Z · LW(p) · GW(p)

Interconnections between an enormous number of qubits?

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2022-10-11T20:49:20.754Z · LW(p) · GW(p)

If you're talking about decoherence issues, that's solvable with error correcting codes, and we now have a proof that it's possible to completely solve the decoherence problem via quantum error correcting codes.

Link to article here:

https://www.quantamagazine.org/qubits-can-be-as-safe-as-bits-researchers-show-20220106/

Link to study:

https://arxiv.org/abs/2111.03654

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2022-10-12T18:00:48.092Z · LW(p) · GW(p)

I'm referring to the real world engineering problem that interconnection requirements scale exponentially with the number of qubits. There simply isn't enough volume to make it work beyond an upper threshold limit of qubits, since they also have to be quite close to each other.

It's not at all been proven what this upper limit is or that it allows for capabilities matching or exceeding the average human brain.

If the size is scaled down to reduce the distances, another problem arises: there's a maximum limit to the amount of power that can be supplied to any unit volume, especially when cryogenic cooling is required, as cooling and refrigeration systems cannot be perfectly efficient.

Something with 1/100th the efficiency of the human brain and the same size might work, i.e. 2kW instead of 20 watts.

But something with 1/1,000,000 the efficiency of the human brain and the same size would never work, since it's impossible for 20 MW of power to be supplied to such a concentrated volume while cooling away the excess heat sufficiently. That is a hard thermodynamic limit.
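The arithmetic behind the 2 kW and 20 MW figures above, as a minimal sketch: it just scales the brain's ~20 W by the efficiency ratio, holding the brain-sized (~1.4 L) volume fixed.

```python
brain_power_W = 20.0   # rough human brain power draw

for label, efficiency in (("brain", 1.0),
                          ("1/100 as efficient", 1e-2),
                          ("1/1,000,000 as efficient", 1e-6)):
    print(f"{label}: {brain_power_W / efficiency:,.0f} W in the same ~1.4 L volume")
# 20 W, 2,000 W (2 kW), 20,000,000 W (20 MW)
```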

There is the possibility of the qubits being spread around quite a bit farther from each other, i.e. in a room-size space, but that goes back to the first issue as it brings exponentially increasing losses, from such things as signalling issues. Which may be partially mitigated by improvements from such things as error correcting codes. But there cannot exist a 'complete' solution.

As perfectly lossless information transmission is only an ideal and not achievable in practice.

comment by Noosphere89 (sharmake-farah) · 2022-10-11T17:41:26.285Z · LW(p) · GW(p)

One of the bigger problems that was solved recently is error correction. Without actively cooling things down, quantum computers need error correction, and it used to be a real issue.

However, this was solved a year ago, at least in theory.

It also solves the decoherence problem, which in theory allows room-temperature quantum computers. It's at least a possibility proof.

The article's link is here:

https://www.quantamagazine.org/qubits-can-be-as-safe-as-bits-researchers-show-20220106/

And the actual paper is here:

https://arxiv.org/abs/2111.03654

Other than that, the problems are all practical.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2022-10-11T18:16:28.797Z · LW(p) · GW(p)

Oh, cool! I'm not totally clear on what this means - did things like the toric code provide error correction in a linear number of extra steps, while this new result paves the way for error correction in a logarithmic number of extra steps?

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2022-10-11T18:45:11.909Z · LW(p) · GW(p)

Basically, the following properties hold for this code (I'm trusting Quanta Magazine to report the study correctly):

  1. It is efficient like classical codes.

  2. It can correct many more errors than previous codes.

  3. It has constant ability to suppress errors, no matter how large the sequence of bits you've started with.

  4. Each parity check sums up only a very low number of bits/qubits, which is the LDPC property described in the Quanta article.

  5. It has local testability - that is, errors can't hide themselves, and any check can reveal a large proportion of errors, evading Goodhart's Law.

comment by Noosphere89 (sharmake-farah) · 2022-10-11T13:20:42.353Z · LW(p) · GW(p)

Yeah, that's the big one for brains. I might answer using a similar example soon, but that might be a big one, as provisionally the latter has 35 more orders of magnitude worth of computing power.

comment by the gears to ascension (lahwran) · 2022-10-11T09:40:42.353Z · LW(p) · GW(p)

one might say you're talking about costreg foom, not kardashev foom

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2022-10-11T15:03:54.162Z · LW(p) · GW(p)

Even here, that doesn't apply to quantum/reversible computers, or superconducting wires.

comment by jacob_cannell · 2022-10-11T17:06:31.179Z · LW(p) · GW(p)

The article I linked argues that the inverted retina is near optimal, if you continue reading...

The scattering effects are easily compensated for:

Looking out through a layer of neural tissue may seem to be a serious drawback for vertebrate vision. Yet, vertebrates include birds of prey with the most acute vision of any animal, and even in general, vertebrate visual acuity is typically limited by the physics of light, and not by retinal imperfections.

So, in general, the apparent challenges with an inverted retina seem to have been practically abolished by persistent evolutionary tweaking. In addition, opportunities that come with the inverted retina have been efficiently seized. In terms of performance, vertebrate eyes come close to perfect.

The everted retina has an issue:

A challenge that comes with the everted retina is to find suitable space for early neural processing. The solution seems to have been to make an absolute minimum of early processing in the retina: photoreceptor axons project straight to the optic lobes, which lie directly behind the eyes.

The inverted retina, with internal space for extensive retinal circuitry, performs high-efficiency video compression (roughly equivalent to H.264 in compression rate). This enormously reduces the space- and energy-expensive wiring required to send video output to the brain through a compact optic nerve cable. The invertebrate everted retina, by contrast, has a massive set of axon bundles connecting an optic lobe directly to the back of the eye, impeding free rotation. This advantage scales with eye/retina size.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2022-10-11T18:05:26.877Z · LW(p) · GW(p)

The benefit of the inverted retina doesn't scale with size. It decreases with size.

Amount of retina scales like r^2, while amount of eyeball to put neurons in scales like r^3. This means that the smaller you are, the harder it is to find space to put neurons, while the bigger you are, the easier it is. This is why humans have eyeballs full of not-so-functional vitreous humor, while the compound eyes of insects are packed full of optical neurons.
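A toy version of that scaling argument, as a sketch (the radii below are made up purely for illustration, not measured eye sizes):

```python
# Retinal area scales like r^2 while the eyeball volume available for
# neurons scales like r^3, so the volume available per unit of retina
# grows linearly with eye radius. Radii are illustrative only.
import math

for r_mm in [0.5, 2.0, 12.0]:                 # roughly insect-ish to human-ish scales
    retina_area = 4 * math.pi * r_mm**2 / 2   # treat the retina as half the sphere's surface
    eye_volume = 4 / 3 * math.pi * r_mm**3
    print(f"r = {r_mm:5.1f} mm, eyeball volume per unit retina area = {eye_volume / retina_area:.2f} mm")
```

The ratio grows linearly with r, which is the sense in which larger eyes have proportionally more room to spare for neural tissue.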

Yes, cephalopods also have eye problems. In fact, this places you in a bit of a bind - if evolution is so good at making humans near-optimal, why did evolution make octopus eyes so suboptimal?

The obvious thing to do is to just put the neurons that are currently in front of the retina in humans behind the retina instead. Or if you're an octopus, the obvious thing to do is to put some pre-processing neurons behind the retina. But these changes are tricky to evolve as a series of small mutations (the octopus eye changes less so - maybe they have hidden advantages to their architecture). And they're only metabolically cheap for large-eyed, large-bodied creatures - early vertebrates didn't have all this free space that we do.

Replies from: jacob_cannell
comment by jacob_cannell · 2022-10-11T18:21:51.949Z · LW(p) · GW(p)

The benefit of the inverted retina doesn't scale with size. It decreases with size

It's the wiring reduction from compression that generally scales with size/resolution, due to the frequency power spectrum of natural images.

The obvious thing to do is to just put the neurons that are currently in front of the retina in humans behind the retina instead.

Obvious perhaps, but also wrong; it has no ultimate advantage.

Yes, cephalopods also have eye problems. In fact, this places you in a bit of a bind - if evolution is so good at making humans near-optimal, why did evolution make octopus eyes so suboptimal?

Evidence for near-optimality of inverted retina is not directly evidence for sub-optimality of everted retina: it could just be that either design can overcome tradeoffs around the inversion/eversion design choice.

comment by M. Y. Zuo · 2022-10-11T02:54:43.727Z · LW(p) · GW(p)

How do you view the claim that human cells are near a critical upper limit?

Replies from: Charlie Steiner, None
comment by Charlie Steiner · 2022-10-11T14:04:43.673Z · LW(p) · GW(p)

Here's what I'd agree with: Specific cell functions are near a local optimum of usefulness, in terms of small changes to DNA that could have been supported against mutation with the fraction of the selection budget that was allocated to those functions in the ancestral environment.

This formulation explains why human scurvy is allowed - producing vitamin C was unimportant in our ancestral environment, so the gene for it was allowed to degrade. And it doesn't fault us for not using fusion to extract energy from food - there's no small perturbation to our current digestive tract that starts a thermonuclear reaction.

comment by [deleted] · 2022-10-11T04:01:38.591Z · LW(p) · GW(p)

It's probably just wrong. For a trivial disproof: I will assume, as stated, that human neurons are at the Landauer limit.

Well, we know from measurements and other studies that nerve cells are unreliable. Their failures to fire, their habit of exhausting their internal fuel supply so they stop pulsing when they should, all the numerous ways the brain makes system-level errors, and the slow speed of signaling mean that, as a system, the brain is nowhere close to optimal. (I can provide sources for all claims.) The Landauer limit is for error-free computation. When you inject random errors you lose information and system precision, so a much smaller error-free system would be equal in effectiveness to the brain.

This is likely why we are hitting humanlike performance in many domains with a small fraction of the estimated compute and memory of a brain.

Also, when you talk about artificial systems: the human brain has no expansion ports, no upload or download interfaces, no way to use a gigawatt of power to solve more difficult problems, etc.

So even if we could never do better for the 20 watts the brain uses, in practice that doesn't matter.
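For a sense of scale on the Landauer limit being invoked here, a minimal sketch (the 310 K body temperature and the 20 W brain power budget are the only inputs, and the 20 W figure is just the rough number quoted above):

```python
# Landauer limit: erasing one bit costs at least k*T*ln(2).
# At body temperature, this bounds how many irreversible bit erasures
# a 20 W budget could pay for per second if everything ran error-free
# and exactly at the limit.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # rough body temperature, K
power = 20.0         # rough brain power budget, W

e_bit = k_B * T * math.log(2)   # ~3e-21 J per erased bit
print(f"Landauer energy per bit at 310 K: {e_bit:.2e} J")
print(f"Maximum irreversible bit erasures at 20 W: {power / e_bit:.2e} per second")
```

This is only the theoretical floor for irreversible operations; it says nothing by itself about how close actual neurons come to it, which is exactly the point under dispute.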

answer by DirectedEvolution (AllAmericanBreakfast) · 2022-10-10T23:09:38.921Z · LW(p) · GW(p)

I'm assuming you're using "global maximum" as a synonym for "pareto optimal," though I haven't heard it used in that sense before. There are plenty of papers arguing that one biological trait or another is pareto optimal. One such (very cool) paper, "Motile curved bacteria are Pareto-optimal," aggregates empirical data on bacterial shapes, simulates them, and uses the results of those simulations to show that the range of shapes represent tradeoffs for "efficient swimming, chemotaxis, and low cell construction cost."

It finds that most shapes are pretty efficient swimmers, but slightly elongated round shapes and curved rods are fastest, and long straight rods are notably slower. However, these long straight rod-shaped bacteria have the highest chemotactic signal-to-noise ratio, because they can better resist being jostled around by random liquid motion. Spherical shapes are probably easiest to construct, since you need special mechanical structures to hold rod and helical shapes. Finally, they show that all but two bacterial species they examined have body shapes that are on the pareto frontier.

If true, what would this "pareto optimality" principle mean generally?

Conservatively, it would indicate that we won't often find bad biological designs. If a design appears suboptimal, it suggests we need to look harder to identify the advantage it offers. Along this theme, we should be wary of side effects when we try to manipulate biological systems. These rules of thumb seem wise to me.

It's more of a stretch to go beyond caution about side effects and claim that we're likely to hit inescapable tradeoffs when we try to engineer living systems. Human goals diverge from maximizing reproductive fitness, we can set up artificial environments to encourage traits not adaptive in the wild, and we can apply interventions to biological systems that are extremely difficult, if not impossible, for evolution to construct.

Take the bacteria as an example. If this paper's conclusions are true, then elongated rods have the highest chemotactic SNR, but are difficult to construct. In the wild, that might matter a lot. But if we want to grow a f*ckload of elongated rod bacteria, we can build some huge bioreactors and do so. In general, we can deal with a pareto frontier by eliminating the bottleneck that locks us into a given position on the frontier.

Likewise, the human body faces a tradeoff between being too vigilant for cancer (and provoking harmful autoimmune responses) and being too lax (and being prone to cancer). But we humans can engineer ever-more-sophisticated systems to detect and control cancer, using technologies that simply are not available to the body (perhaps in part for other pareto frontier reasons). We still face serious side effects when we administer chemo to a patient, but we can adjust not only the patient's position on the pareto frontier, but also the location of that frontier itself.

comment by jacob_cannell · 2022-10-11T18:09:19.623Z · LW(p) · GW(p)

The most relevant pareto-optimality frontiers are computational: biological cells being computationally near optimal in both storage density and thermodynamic efficiency seriously constrains or outright dashes the hopes of nanotech improving much on biotech. This also indirectly relates to brain efficiency.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2022-10-11T18:33:13.231Z · LW(p) · GW(p)

The most relevant pareto-optimality frontiers are computational: biological cells being computationally near optimal in both storage density and thermodynamic efficiency seriously constrains or outright dashes the hopes of nanotech improving much on biotech. This also indirectly relates to brain efficiency.

Not really, not without further assumptions. The two largest assumptions are:

  1. That we are strictly limited to classical computing for the future, and that there are no superconducting materials to help us. Now, I have a fairly low probability for superconducting/reversible/quantum computers this century, on the order of 2-3%. But my view is conditional on no x-risk: assuming 1,000-10,000 years are allowed, I put 1-epsilon probability on quantum computers and superconductors being developed, and more like 10-20% on reversible computing.

  2. That we can't use more energy. Charlie Steiner gives an extreme case, but if we can increase the energy, we can get much better results.

And note that this is disjunctive, that is, if either assumption is wrong, your case collapses.

Replies from: jacob_cannell, porby
comment by jacob_cannell · 2022-10-11T18:44:38.518Z · LW(p) · GW(p)

Neither 1 nor 2 is related to the thermodynamic efficiency of biological cells or hypothetical future nanotech machines. Very unlikely that exotic superconducting/reversible/quantum computing is possible for cell-sized computers in a room-temperature heat bath environment. Too much entropy to deal with.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2022-10-11T19:46:46.481Z · LW(p) · GW(p)

My point is that your implications only hold if the other assumptions hold too, not just the efficiency assumption.

Also, error correction codes exist for quantum computers, which deal with the room-temperature decoherence issue you're talking about, which is why I'm so confident about quantum computers working. We also now know how high-temperature superconductors work; link to the article here:

https://www.quantamagazine.org/high-temperature-superconductivity-understood-at-last-20220921/

And the error correction article is here:

https://www.quantamagazine.org/qubits-can-be-as-safe-as-bits-researchers-show-20220106/

And the actual study:

https://arxiv.org/abs/2111.03654

Replies from: jacob_cannell
comment by jacob_cannell · 2022-10-11T21:13:00.162Z · LW(p) · GW(p)

How does reversible/quantum computing help with protein transcription or DNA replication? Neither of those exotic computing techniques reduces the fundamental thermodynamic cost of physical bit erasures/copies, from what I understand.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2022-10-11T21:23:51.028Z · LW(p) · GW(p)

Because we get to use the Margolus-Levitin limit, which states:

A quantum system of energy E needs at least a time of h/(4E) to go from one state to an orthogonal state, where h is the Planck constant (6.626×10^−34 J⋅Hz^−1) and E is the average energy.

This means we get roughly a 15-order-of-magnitude decrease from your estimate of 1e-19 joules for one bit, which is much better news for nanotech.

I have even better news for total computation limits: 5.4x10^50 operations per second for a kilogram of matter.

The limit for speed is this:

The Margolus–Levitin theorem, named for Norman Margolus and Lev B. Levitin, gives a fundamental limit on quantum computation (strictly speaking, on all forms of computation). The processing rate cannot be higher than 6 × 10^33 operations per second per joule of energy.

And since you claimed that computational limits matter for biology, the relevance is obvious.

A link to the Margolus-Levitin theorem:

https://en.m.wikipedia.org/wiki/Margolus–Levitin_theorem

In the fully reversible case, zero energy is expended.
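As a sanity check on those figures, here is a minimal sketch using only standard constants; applying the per-joule rate to the full mass-energy of 1 kg is my addition, but it reproduces the number quoted above:

```python
# Margolus-Levitin bound: at most 2E/(pi*hbar) orthogonal-state
# transitions per second for average energy E, i.e. about 6e33 ops/s
# per joule. Applied to the mass-energy of 1 kg, this gives the
# ~5.4e50 ops/s figure quoted above.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

ops_per_joule = 2 / (math.pi * hbar)
print(f"ops/s per joule: {ops_per_joule:.1e}")   # ~6.0e33

E_one_kg = 1.0 * c**2                            # mass-energy of 1 kg, ~9e16 J
print(f"ops/s for 1 kg of mass-energy: {ops_per_joule * E_one_kg:.1e}")   # ~5.4e50
```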

Replies from: jacob_cannell
comment by jacob_cannell · 2022-10-11T22:23:22.451Z · LW(p) · GW(p)

That doesn't help with bit erasures and is thus irrelevant to what I'm discussing - the physical computations cells must perform.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2022-10-12T12:13:26.735Z · LW(p) · GW(p)

The nice thing about quantum computers is that they're mostly reversible, i.e., swaps can always be done with zero energy, until you make a measurement. Once you do, you have to pay the energy cost, which I showed in the last comment. We don't need anything else here.

Thanks to porby for mentioning this.

Replies from: jacob_cannell
comment by jacob_cannell · 2022-10-13T17:04:22.383Z · LW(p) · GW(p)

The nice thing about quantum computers is that they're mostly reversible, ie bit erasures can always be done with zero energy,

You seem confused here - reversible computations do not, and cannot, erase/copy bits; all they can do is swap/transfer bits, moving them around within the computational system. Bit erasure is the actual transference of the bit's entropy into the external environment, outside the bounds of the computational system (which also breaks internal quantum coherence, from what I recall, but that's a side point).

Replication/assembly requires copying bits into (and thus erasing bits from) the external environment. This is fundamentally an irreversible computation.
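To put a rough number on the irreversible floor being pointed at here, a minimal sketch (the E. coli genome length and the ~5e-20 J per ATP hydrolysis figure are standard ballpark values assumed for scale, not claims about measured replication costs):

```python
# Landauer floor for copying a genome: each base copied writes ~2 bits
# into the environment, and each bit costs at least k*T*ln(2).
# Numbers are for scale only; real replication machinery necessarily
# spends more than this floor.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # K
bases = 4.6e6        # approximate E. coli genome length in base pairs (assumption)
bits = 2 * bases     # ~2 bits of information per base

landauer_floor = bits * k_B * T * math.log(2)
atp_joules = 5e-20   # rough free energy released per ATP hydrolysis (assumption)

print(f"Landauer floor for one genome copy: {landauer_floor:.2e} J")
print(f"...equivalent to roughly {landauer_floor / atp_joules:,.0f} ATP hydrolyses")
```

Whether real cells sit within an order of magnitude of that floor, as the original claim asserts, is exactly what is contested in this thread; the sketch only shows what the floor itself looks like.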

comment by porby · 2022-10-11T19:54:52.814Z · LW(p) · GW(p)

Now I have a fairly low probability for superconduction/reversible/quantum computers this century, like on the order of 2-3%.

Could you elaborate on this? I'm pretty surprised by an estimate that low conditioned on ~normalcy/survival, but I'm no expert.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2022-10-11T20:19:21.302Z · LW(p) · GW(p)

Admittedly, this is me thinking of the worst-case scenario, where no technology can reliably speed up getting to those technologies.

If I had to compute an average case, I'd operationalize the following predictions:

Will a quantum computer be sold to 10,000+ customers with a qubit count of at least 1,000 by 2100? Probability: (15-25%).

Will superconductors be used in at least 1 grid in Europe, China or the US by 2100? Probability: (10-20%).

Will reversible computers be created by a company with at least $100 million in market cap by 2100? Probability: (1-5%).

Now I'm somewhat pessimistic about reversible computers, as they may never materialize, but I think there's a fair chance of superconductors and quantum computers by 2100.

Replies from: porby
comment by porby · 2022-10-11T20:44:41.968Z · LW(p) · GW(p)

Thanks!

My understanding is that a true quantum computer would be a (mostly) reversible computer as well, by virtue of quantum circuits being reversible. Measurements aren't (apparently) reversible, but they are deferrable. Do you mean something like... in practice, quantum computers will be narrowly reversible, but closer to classical computers as a system because they're forced into many irreversible intermediate steps?

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2022-10-11T20:53:12.020Z · LW(p) · GW(p)

Not really. I'm focused on fully reversible systems here, as they theoretically allow you to reverse errors without dissipating any energy, so the energy stored in the system can keep being reused.

It's a great advance, and it's stronger than you might think, since we don't need the intermediate steps anymore. I'll link to the article here:

https://www.quantamagazine.org/computer-scientists-eliminate-pesky-quantum-computations-20220119/

But I'm focused on full reversibility, i.e., the measurement step can't be irreversible.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2024-08-23T17:51:05.828Z · LW(p) · GW(p)

In retrospect, I agree with you @porby [LW · GW] that my estimate was way lower than it should have been, and I now think that the chances of reversible computing by 2100 are more like 50-90% than 2-3%.

answer by Noosphere89 · 2022-10-11T20:31:28.051Z · LW(p) · GW(p)

Basically, as far as I can tell, the answer is no, except with a bunch of qualifiers. Jacob Cannell has at least given some evidence that biology reliably finds roughly pareto-optimal designs, but not global maxima.

In particular, his claims about biology never being improved by nanotech are subject to Extremal Goodhart.

For example, quantum computing/reversible computing or superconductors would entirely break his statement about optimal nanobots.

The ultimate limits from reversible computing/quantum computers are given here:

https://arxiv.org/abs/quant-ph/9908043

From Gwern:

No, it's not. As I said, a skyscraper of assumptions each more dubious than the last. The entire line of reasoning from fundamental physics is useless because all you get is vacuous bounds like 'if a kg of mass can do 5.4e50 quantum operations per second and the earth is 6e24 kg then that bounds available operations at 3e65 operations per second' - which is completely useless because why would you constrain it to just the earth? (Not even going to bother trying to find a classical number to use as an example - they are all, to put it technically, 'very big'.) Why are the numbers spat out by appeal to fundamental limits of reversible computation, such as but far from limited to, 3e75 ops/s, not enough to do pretty much anything compared to the status quo of systems topping out at ~1.1 exaflops or 1.1e18, 57 orders of magnitude below that one random guess? Why shouldn't we say "there's plenty of room at the top"? Even if there wasn't and you could 'only' go another 20 orders of magnitude, so what? what, exactly, would it be unable to do that it would if you subtracted or added 10 orders of magnitude* and how do you know that? why would this not decisively change economics, technology, politics, recursive AI scaling research, and everything else? if you argue that this means it can't do something in seconds and would instead take hours, how is that not an 'intelligence explosion' in the Vingean sense of being an asymptote and happening far faster than prior human transitions taking millennia or centuries, and being a singularity past which humans cannot see nor plan? Is it not an intelligence explosion but an 'intelligence gust of warm wind' if it takes a week instead of a day? Should we talk about the intelligence sirocco instead? This is why I say the most reliable part of your 'proof' are also the least important, which is the opposite of what you need, and serves only to dazzle and 'Eulerize' the innumerate.

  • btw I lied; that multiplies to 3e75, not 3e65. Did you notice?

Landauer's limit only 'proves' that when you stack it on a pile of assumptions a mile high about how everything works, all of which are more questionable than it. It is about as reliable a proof as saying 'random task X is NP-hard, therefore, no x-risk from AI'; to paraphrase Russell, arguments from complexity or Landauer have all the advantages of theft over honest toil...
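For what it's worth, the arithmetic in that footnote is easy to confirm (a trivial sketch using the two figures quoted in the passage):

```python
# Quick check of the footnote: 5.4e50 ops/s per kg times the mass of
# the Earth (~6e24 kg) is ~3e75 ops/s, not 3e65.
ops_per_kg = 5.4e50
earth_mass_kg = 6e24
print(f"{ops_per_kg * earth_mass_kg:.1e} ops/s")   # 3.2e+75 ops/s
```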

Links to comments here:

https://www.lesswrong.com/posts/yenr6Zp83PHd6Beab/?commentId=PacDMbztz5spAk57d [LW · GW]

https://www.lesswrong.com/posts/yenr6Zp83PHd6Beab/?commentId=HH4xETDtJ7ZwvShtg [LW · GW]

One important implication is that in practice, it doesn't matter whether biology has found a pareto optimal solution, since we can usually remove at least one constraint that applies to biology and evolution, even if it's as simple as editing many, many genes at once to completely redesign the body.

This also regulates my Foom probabilities. I put a 1-3% chance on the first AI fooming by 2100. Contra Jacob Cannell, Foom is possible, if improbable. Inside the model, everything checks out; outside the model is where he goes wrong.

comment by jacob_cannell · 2022-10-13T16:56:55.656Z · LW(p) · GW(p)

For example, quantum computing/reversible computing or superconductors would entirely break his statement about optimal nanobots.

Reversible/quantum computing is not as general as irreversible computing. Those paradigms only accelerate specific types of computations, and they don't help at all with bit erasing/copying. The core function of a biological cell is to replicate, which requires copying/erasing bits; reversible/quantum computing simply doesn't help with that, and in fact just adds enormous extra complexity.

answer by ChristianKl · 2022-10-11T15:18:59.622Z · LW(p) · GW(p)

If biology reliably found the maximum, we would expect different species to converge on the same photosynthesis process, and we would expect that we can't improve the photosynthesis of one species by swapping it out with that of another.

https://www.science.org/content/article/fight-climate-change-biotech-firm-has-genetically-engineered-very-peppy-poplar suggests that you can make trees grow faster by adding pumpkin and green algae genes. 

comment by lc · 2022-10-11T16:45:42.877Z · LW(p) · GW(p)

Without reading the link, that sounds like the exact opposite of the conclusion you should reach. Are they implanting specific genes, or many genes?

Replies from: ChristianKl
comment by ChristianKl · 2022-10-11T17:33:11.922Z · LW(p) · GW(p)

Green algae have had more generation cycles to optimize their photosynthesis than trees have, and achieved a better solution as a result.

That clearly suggests that organisms with long generation cycles, like trees or humans, don't reliably find global maxima.

Replies from: lc
comment by lc · 2022-10-12T17:58:16.321Z · LW(p) · GW(p)

Or green algae have reached a different local maximum? Right?

3 comments

Comments sorted by top scores.

comment by Shmi (shminux) · 2022-10-10T23:43:58.423Z · LW(p) · GW(p)

Unless by "gobal" you mean "local", I don't see why this statement would hold? Land animals never invented the wheel even where wheeling is more efficient than walking (like steppes and non-sandy deserts). Same with catalytic coal burning for energy (or other easily accessible energy-dense fossil fuel consumption). Both would give extreme advantages in speed and endurance. There are probably tons of other examples, like direct brain-to-brain communication by linking nerve cells instead of miming and vocalizing. 

comment by DirectedEvolution (AllAmericanBreakfast) · 2022-10-10T21:44:01.907Z · LW(p) · GW(p)

Is this all there is to Jacob’s comment? Does he cite sources? It’s hard to interrogate without context.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2022-10-10T21:59:01.631Z · LW(p) · GW(p)

His full comment is this:

The paper you linked seems quite old and out of date. The modern view is that the inverted retina, if anything, is a superior design vs the everted retina, but the tradeoffs are complex.

This is all unfortunately caught up in some silly historical "evolution vs creationism" debate, where the inverted retina was key evidence for imperfect design and thus inefficiency of evolution. But we now know that evolution reliably finds pareto optimal designs:

biological cells operate close to the critical Landauer Limit, and thus are pareto-optimal practical nanobots.

eyes operate at optical and quantum limits, down to single photon detection.

the brain operates near various physical limits, and is probably also near pareto-optimal in its design space.

He cites one source on the inverted eye's superiority over the everted eye, link here:

https://scholar.google.com/scholar?cluster=3322114030491949344&hl=en&as_sdt=2005&sciodt=0,5

His full comment is linked here:

https://www.lesswrong.com/posts/GCRm9ysNYuGF9mhKd/?commentId=aGq36saoWgwposRHy [LW · GW]

That's the context.