How to Make Superbabies

post by GeneSmith, kman · 2025-02-19

Contents

  How to make (slightly) superbabies
  How to do better than embryo selection
  Maximum human life expectancy
  Is everything a tradeoff?
  How to make an edited embryo
  Embryos from stem cells
  Iterated CRISPR
  Sergiy Velychko and the story of Super-SOX
    What is going on?
    Super-SOX
    Mice from stem cells
  Why does super-SOX matter?
  How do we do this in humans?
  What if super-SOX doesn’t work?
    Eggs from Stem Cells
    Fluorescence-guided sperm selection
    Embryo cloning
  What if none of that works?
  What about legal issues?
  How we make this happen
  Ahh yes, but what about AI?
    There is currently no backup plan if we can’t solve alignment
  Team Human
  Appendix
    iPSCs were named after the iPod
    On autoimmune risk variants and plagues
    Two simple strategies for minimizing autoimmune risk and pandemic vulnerability
    “I don’t want someone else’s genes in my child”
    Could I use this technology to make a genetically enhanced clone of myself?
    Why does super-SOX work?
    How was the IQ gain graph generated?

Working in the field of genetics is a bizarre experience. No one seems to be interested in the most interesting applications of their research.

We’ve spent the better part of the last two decades unravelling exactly how the human genome works and which specific letter changes in our DNA affect things like diabetes risk or college graduation rates. Our knowledge has advanced to the point where, if we had a safe and reliable means of modifying genes in embryos, we could literally create superbabies: children who would live multiple decades longer than their non-engineered peers, have the raw intellectual horsepower to do Nobel-prize-worthy scientific research, and very rarely suffer from depression or other mental health disorders.

The scientific establishment, however, seems to not have gotten the memo. If you suggest we engineer the genes of future generations to make their lives better, they will often make some frightened noises, mention “ethical issues” without ever clarifying what they mean, or abruptly change the subject. It’s as if humanity invented electricity and decided the only interesting thing to do with it was make washing machines.

I didn’t understand just how dysfunctional things were until I attended a conference on polygenic embryo screening in late 2023. I remember sitting through three days of talks at a hotel in Boston, watching prominent tenured professors in the field of genetics take turns misrepresenting their own data and denouncing attempts to make children healthier through genetic screening. It is difficult to convey the actual level of insanity if you haven’t seen it yourself.

As a direct consequence, there is low-hanging fruit absolutely everywhere. You can literally do novel groundbreaking research on germline engineering as an internet weirdo with an obsession and sufficient time on your hands. The scientific establishment is too busy with their washing machines to think about light bulbs or computers.

This blog post is the culmination of a few months of research by myself and my cofounder into the light bulbs and computers of genetics: how to do large scale, heritable editing of the human genome to improve everything from diabetes risk to intelligence. I will summarize the current state of our knowledge and lay out a technical roadmap examining how the remaining barriers might be overcome.

We’ll begin with the topic of the insane conference in Boston: embryo selection.

How to make (slightly) superbabies

Two years ago, a stealth mode startup called Heliospect began quietly offering parents the ability to have genetically optimized children.

The proposition was fairly simple; if you and your spouse went through IVF and produced a bunch of embryos, Heliospect could perform a kind of genetic fortune-telling.

They could show you each embryo’s risk of diabetes. They could tell you how likely each one was to drop out of high school. They could even tell you how smart each of them was likely to be.

Each embryo’s genome would be read, the embryos ranked on a weighted combination of these traits, and the best one implanted in the mother. If all went well, 9 months later a baby would pop out with a slight genetic advantage relative to its counterfactual siblings.

The service wasn’t perfect; Heliospect’s tests could give you a rough idea of each embryo’s genetic predispositions, but nothing more.

Still, this was enough to increase your future child’s IQ by around 3-7 points or increase their quality adjusted life expectancy by about 1-4 years. And though Heliospect wasn’t the first company to offer embryo selection to reduce disease risk, they were the first to offer selection specifically for enhancement.

The curious among you might wonder why the expected gain from this service is “3-7 IQ points”. Why not more? And why the range?

There are a few variables impacting the expected benefit, but the biggest is the number of embryos available to choose from. 

Each embryo has a different genome, and thus different genetic predispositions. Sometimes during the process of sperm and egg formation, one of the embryos will get lucky and a lot of the genes that increase IQ will end up in the same embryo.

The more embryos, the better the best one will be in expectation. There is a “scaling law” describing how good you can expect the best embryo to be based on how many embryos you’ve produced.

With two embryos, the best one would have an expected IQ about 2 points above parental average. With 10, the best would be about 6 points better.

But the gains plateau quickly after that. 100 embryos would give a gain of 10.5 points, and 200 just 11.5.

If you graph IQ gain as a function of the number of embryos available, it pretty quickly becomes clear that we simply aren’t going to make any superbabies by increasing the number of embryos we choose from.

The line goes nearly flat after ~40 or so. If we really want to unlock the potential of the human genome, we need a better technique.
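If you want to check this scaling law yourself, here’s a minimal simulation. The one assumption that matters is the effective SD of the predictor across sibling embryos; the 4-point value below is an illustrative guess chosen so the outputs roughly match the numbers above, not a published estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed SD of predicted IQ across sibling embryos (illustrative guess,
# picked so the simulated gains roughly reproduce the figures quoted above).
PREDICTOR_SD = 4.0

def expected_best_embryo_gain(n_embryos: int, trials: int = 20_000) -> float:
    """Monte Carlo estimate of the expected IQ gain over the parental
    average when you implant the best of n_embryos independent draws."""
    draws = rng.normal(0.0, PREDICTOR_SD, size=(trials, n_embryos))
    return draws.max(axis=1).mean()

for n in (2, 10, 40, 100, 200):
    print(f"{n:>3} embryos -> best is ~+{expected_best_embryo_gain(n):.1f} IQ")
```

The expected maximum of n normal draws grows slower than logarithmically, which is exactly why the curve flattens: past a few dozen embryos, doubling the count buys you less than a single additional IQ point.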

How to do better than embryo selection

When we select embryos, it’s a bit like flipping coins and hoping most of them land on heads. Even if you do this a few dozen times, your best run won’t have that many more heads than tails.

If we could somehow directly intervene to make some coins land on heads, we could get far, far better results.

The situation with genes is highly analogous; if we could swap out a bunch of the variants that increase cancer risk for ones that decrease cancer risk, we could do much better than embryo selection.

Gene editing is the perfect tool to make this happen. It lets us make highly specific changes at known locations in the genome, where a simple change like swapping one base pair for another is known to have some positive effect.

Let’s look again at the IQ gain graph from embryo selection and compare it with what could be achieved by editing using currently available data.

See the appendix for a full description of how this graph was generated and the assumptions we make.

If we had 500 embryos, the best one would have an IQ about 12 points above that of the parents. If we could make 500 gene edits, an embryo would have an IQ about 50 points higher than that of the parents. 

Gene editing scales much, much better than embryo selection.
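Here’s a toy sketch of why. Selection gains grow like the expected maximum of n normal draws, while editing gains are a sum over the top-n per-variant effects. The per-edit effect sizes below are invented (a geometric decay), chosen only so the totals land near the figures quoted above:

```python
import numpy as np
from scipy.stats import norm

SIB_SD = 4.0  # assumed SD of predicted IQ across sibling embryos (as above)

def selection_gain(n: int) -> float:
    # Blom's approximation to the expected maximum of n standard normals
    return SIB_SD * norm.ppf((n - 0.375) / (n + 0.25))

# Hypothetical ranked per-edit effects, decaying geometrically (toy values)
edit_effects = 0.25 * 0.995 ** np.arange(1000)

for n in (10, 100, 500):
    print(f"n={n:>3}: selection ~+{selection_gain(n):4.1f} IQ, "
          f"editing ~+{edit_effects[:n].sum():4.1f} IQ")
```

Selection is stuck harvesting the luck of recombination, so its gain grows roughly like the square root of log(n); editing adds effects one on top of another, so its gain keeps climbing nearly linearly until the large-effect variants run out.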

Some of you might be looking at the data above and wondering “well what baseline are we talking about? Are we talking about a 60 IQ point gain for someone with a starting IQ of 70?”

The answer is that the expected gain is almost unaffected by the starting IQ. The human gene pool has so much untapped genetic potential that even the genome of a very, very smart person still has thousands of IQ-decreasing variants that could potentially be altered.

What’s even crazier is that this is just a lower bound on what we could achieve. We haven’t even used all the data we could for fine-mapping, and if any of the dozen or so biobanks out there decides to make an effort to collect more IQ phenotypes, the expected gain would more than double.

Like machine learning, gene editing has scaling laws. With more data, you can get a larger improvement out of the same number of edits. And with a sufficiently large amount of data, the benefit of gene editing is unbelievably powerful.

Already with just 300 edits and a million genomes with matching IQ scores, we could make someone with a higher predisposition towards genius than anyone who has ever lived.

This won’t guarantee such an individual would be a genius; there are in fact many people with exceptionally high IQs who don’t end up making Nobel-prize-worthy discoveries.

But it will significantly increase the chances; Nobel prize winners (especially those in the sciences) tend to have IQs significantly above the population average.

It will make sense to be cautious about pushing beyond the limit of naturally occurring genomes since data about the side-effects of editing at such extremes is quite limited. We know from the last few millennia of selective breeding in agriculture and husbandry that it’s possible to push tens of standard deviations beyond any naturally occurring genome (more on this later), but animal breeders have the advantage of many generations of validation over which to refine their selection techniques. For the first generation of enhanced humans, we’ll want to be at least somewhat conservative, meaning we probably don’t want to push much outside the bounds of natural human variation.

Maximum human life expectancy

Perhaps even more than intelligence, health is a near universal human good. An obvious question when discussing the potential of gene editing is how large of an impact we could have on disease risk or longevity if we edited to improve them.

The size of reduction we could get from editing varies substantially by disease. Some conditions, particularly autoimmune diseases like inflammatory bowel disease and type 1 diabetes, can be virtually eliminated with just a handful of edits. 

Others, like prostate cancer and osteoarthritis, take far more edits and can’t be targeted quite as effectively.

You might wonder why there’s such a large difference between conditions. Perhaps this is a function of how heritable these diseases are. But that turns out to be only part of the story.

The other part is the effect size of common variants. Some diseases have several variants that are both common among the population and have huge effect sizes.

And the effect we can have on them with editing is incredible. Type 1 diabetes, inflammatory bowel disease, psoriasis, Alzheimer’s, Parkinson’s, and multiple sclerosis can all be virtually eliminated with fewer than a dozen changes to the genome.

Interestingly, a large proportion of conditions with this property of being highly editable are autoimmune diseases. Anyone who knows a bit about human evolution over the last ten thousand years should not be too surprised by this; there has been incredibly strong selection pressure on the human immune system during that time. Millennia of plagues have made the genetic regions encoding portions of the human immune system the most genetically diverse and heavily selected regions in the human genome.

As a result, the genome is enriched for “wartime variants”: those that might save your life if the bubonic plague reemerges, but will mess you up in “peacetime” by giving you a horrible autoimmune condition.

This is, not coincidentally, one reason to not go completely crazy selecting against risk of autoimmune diseases: we don't want to make ourselves that much more vulnerable to once-per-century plagues. We know for a fact that some of the variants that increase their risk were protective against ancient plagues like the black death (see the appendix for a fuller discussion of this).

With most trait-affecting genetic variants, we can make any trade-offs explicit; if some of the genetic variants that reduce the risk of hypertension increase the risk of gallstones, you can explicitly quantify the tradeoff.

Not so with immune variants that protect against once-per-century plagues. I dig more into how to deal with this tradeoff in the appendix, but the TL;DR is that you don’t want to “minimize” risk of autoimmune conditions. You just want to reduce their risk to a reasonable level while maintaining as much genetic diversity as possible.

Is everything a tradeoff?

A skeptical reader might finish the above section and conclude that any gene editing, no matter how benign, will carry serious tradeoffs. 

I do not believe this to be the case. Though there is of course some risk of unintended side-effects (and we have particular reason to be cautious about this for autoimmune conditions), this is not a fully general counterargument to genetic engineering.

To start with, one can simply look at humans and ask “is genetic privilege a real thing?”

And the answer to anyone with eyes is obviously “yes”. Some people are born with the potential to be brilliant. Some people are very attractive. Some people can live well into their 90s while smoking cigarettes and eating junk food. Some people can sleep 4 hours a night for decades with no ill effects.

And this isn’t just some environmentally induced superpower either. If a parent has one of these advantages, their children are significantly more likely than a stranger to share it. So it is obvious that we could improve many things just by giving people genes closer to those of the most genetically privileged.

But there is evidence from animal breeding that we can go substantially farther than the upper end of the human range when it comes to genetic engineering.

Take chickens. While literally no one would enjoy living the life of a modern broiler chicken, it is undeniable that we have been extremely successful in modifying them for human needs.

We’ve increased the weight of chickens by about 40 standard deviations relative to their wild ancestors, the red junglefowl. That’s the equivalent of making a human being that is 14 feet tall: an absurd amount of change. And these changes in chickens are mostly NOT the result of new mutations, but rather the result of getting all the big chicken genes into a single chicken.

Some of you might point out that modern chickens are not especially healthy. And that’s true! But it’s the result of a conscious choice on the part of breeders, who only care about health to the extent that it matters for productivity. The health/productivity tradeoff would be weighted much, much differently for humans.

So unless the genetic architecture of human traits is fundamentally different from those of cows, chickens, and all other domesticated animals (and we have strong evidence this is not the case), we should in fact be able to substantially impact human traits in desirable ways and to (eventually) push human health and abilities to far beyond their naturally occurring levels.

But we can do even better than these vague arguments. Suppose you’re worried that if we edit genes to decrease the risk of one disease, we might inadvertently increase the risk of another. To see how big of an issue this might be, let’s look at a correlation matrix of the genetic variants involved in determining complex disease risks like diabetes and breast cancer:

With a few notable exceptions, there is not very much correlation between different diseases. Most diseases have a genetic correlation of between 0 and 5%.

And the correlations that DO exist are mostly positive. That's why most of the boxes are yellowish instead of bluish. Editing embryos to reduce the risk of one disease usually results in a tiny reduction of others.

To the extent it doesn’t, you can always edit variants to target BOTH diseases. Even if they are negatively correlated, you can still have a positive effect on both.
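A toy simulation makes this concrete. Even when two diseases are genetically anticorrelated in aggregate, a decent fraction of individual variants still move both risks in the right direction, and those are the ones you’d edit. All effect sizes below are simulated, not real GWAS values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated per-variant effects on two diseases with an overall genetic
# correlation of -0.3 (i.e., the unfavorable, anticorrelated case).
cov = [[1.0, -0.3],
       [-0.3, 1.0]]
effects = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)

# Negative effect = reduces risk. Count variants that help BOTH diseases.
helps_both = (effects[:, 0] < 0) & (effects[:, 1] < 0)
print(f"{helps_both.mean():.0%} of variants reduce the risk of both diseases")
```

Even at a correlation of -0.3, roughly a fifth of variants reduce both risks, so an editor restricted to that subset can improve both diseases at once.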

This kind of pre-emptive editing targeting multiple diseases is where I think this field is ultimately headed. Those of you in the longevity field have long understood that even if we cure one or two of the deadliest conditions like heart disease or cancers, it would only add a couple of years to human life expectancy. Too many other bodily systems are breaking down at around the same time.

But what if we could go after 5 diseases at once? Or ten? What if we stopped thinking about diseases as distinct categories and instead asked ourselves how to directly engineer a long, healthy life?

In that case we could completely rethink how we analyze the genetics of health. We could directly measure life expectancy and edit variants that increase it the most.

How large of an impact could we have with editing? My cofounder pulled data from the largest genome-wide association study of life expectancy we could find and graphed the results.

15 years. That’s how much of an impact we could have on life expectancy with editing. This level of impact would be on par with that of vaccines or general sanitation improvements, each of which added 10-20 years to life expectancy over the past hundred or so years.

Interestingly, we are much more data limited for life expectancy than we are for IQ or diseases. If we could just get more data on how long people live and what genes they have, we could do substantially better than the graph above.

 

Life expectancy gain from editing as a function of the amount of data used in the training set. If you want to figure out life expectancy after editing, just add ~77 years to whatever is shown on the graph.

At the upper limit, a life expectancy increase of 75 years is perhaps not too implausible.

I think in practice we would probably not push much beyond 50 additional years of life expectancy, simply because that would already take us to the ragged edge of how long naturally occurring humans stick around. But in a few generations we could probably start pushing lifespans into the high 100s or low 200s.

Some might worry that by targeting life expectancy directly rather than diseases, the result might be people who stay very sick for a very long time. I think that’s extremely unlikely to be the case. 

For one thing, even very expensive modern medicine can’t keep very sick people alive for more than a few extra years in most cases. But for another, we can actually zoom in to the graph shown above and LOOK at exactly which variants are being targeted. 

When we do, we find that many of the variants that have the largest impact on life expectancy are in fact risk variants for various diseases. In other words, editing for life expectancy directly targets many of the diseases that bring life to a premature end.

This chart also shows why treating aging is so hard; the genetics of breakdown aren’t just related to one organ system or one source of damage like epigenetic aging. They’re ALL OVER the place: in dozens of different organ systems.

Results like these are one of the reasons why I think virtually all current anti-aging treatments are destined to fail. The one exception is tissue replacement, of the kind being pursued by Jean Hebert and a few others.

Gene editing is another exception, though one with more limited power than replacement.  We really can edit genes that affect dozens if not hundreds of bodily systems.

So to summarize: we have the data. We have at least a reasonably good probabilistic idea of which genes do what. And we know we can keep side-effects of editing relatively minimal.

So how do we actually do the editing?

How to make an edited embryo

The easiest way to make a gene-edited embryo is simple: you fertilize an egg with a sperm, then you squirt some editors (specifically CRISPR) onto the embryo. These editors get inside the embryo and edit the genes.

This method has actually been used in human embryos before! In 2018, Chinese scientist He Jiankui used this technique to create the first ever gene-edited babies. All three of the children born from these embryos are healthy six years later (despite widespread outrage and condemnation at the time).

Today we could probably do somewhat more interesting editing with this technique by going after multiple genes at once; Alzheimer’s risk, for example, can be almost eliminated with maybe 5-10 edits. The same goes for several other brain diseases like Parkinson’s and multiple sclerosis, and some of the diseases shown in the first graph like diabetes and inflammatory bowel disease.

But there are issues. Editors mess up every now and then, and ideally one would hope to VERIFY that there were no unintentional edits made during the editing process.

This CAN be done! If all the cells in the embryo have the same genome, you can take a little sample of the embryo and sequence the genome to figure out whether any unintentional edits were made. And we already do this kind of embryo sampling as a routine part of fertility treatments.
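Conceptually, the verification step is just a diff between the embryo’s sequenced genome and what you expected it to be. Here’s a toy sketch; the sequences and positions are made up, and a real pipeline would work with aligned sequencing reads rather than strings:

```python
# Compare the embryo's sequenced genome against the pre-edit reference and
# check that every difference is on the intended edit list. Toy data only.
reference = "ACGTACGTACGT"
embryo    = "ACGTTCGTACGA"
intended  = {4: "T"}  # position -> expected new base

for pos, (ref, obs) in enumerate(zip(reference, embryo)):
    if ref != obs:
        status = "intended" if intended.get(pos) == obs else "OFF-TARGET"
        print(f"position {pos}: {ref}->{obs} ({status})")
```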

But cells are only guaranteed to have the same genome if all the edits are made before the first cell division. If an edit is made afterwards, then some cells will have it and some cells won’t.

You can mostly guarantee no editing after the first cell division by injecting anti-CRISPR proteins into the embryo before the first cell division. This disables any remaining editors, ensuring all the embryo’s tissues have the same genome and allowing you to check whether you’ve made the edits you want.

The other option is you can just shrug and say “Well if our testing shows that this process produces few enough off-targets, it probably doesn’t matter if some of the cells get edited and some don’t. As long as we don’t mess up anything important it will be fine”. After all, there are already substantial genetic differences between different cells in the same person, so the current evidence suggests it’s not that big of a deal.

But either way, there are fundamental limits to the number of edits you can make this way. You can only cram so many editors inside the cell at once without killing the cell. The cellular repair processes crucial for editing can only work so fast (though there are ways to upregulate some of them). And after a few cell divisions the embryo’s size increases, making delivery of editors to the inner tissues very difficult. 

So while you can make perhaps up to 5 edits in that 1 day window (possibly more if my company succeeds with our research), that isn’t nearly enough to have a significant effect on highly polygenic traits like depression risk or intelligence or life expectancy.

Fortunately, there is another way: make the edits in a stem cell, then turn that stem cell into an embryo. And there has been a significant breakthrough made in this area recently.

Embryos from stem cells

On December 22nd 2023, an interesting paper was published in the pages of Cell. Its authors claimed to have discovered what had long been considered a holy grail in stem cell biology: a method of creating naive embryonic stem cells.

I first learned of the paper two weeks later when the paper’s principal investigator, Sergiy Velychko, left a comment about the work on my LessWrong blog post.

It’s not often that I have a physical response to a research paper, but this was one of the few exceptions. Goosebumps; they did what?? Is this actually real?

Velychko and his collaborators had discovered that by modifying a single amino acid in one of the proteins used to create stem cells, they could create a more powerful type of stem cell capable of giving rise to an entirely new organism. And unlike previous techniques, Velychko’s wasn’t just limited to mice and rats; it seemed to work in all mammals including humans.

If Velychko’s technique works as well in primates as the early data suggests, it could enable gene editing on a previously impossible scale. We could make dozens or even hundreds of edits in stem cells, then turn those stem cells into embryos. Once we can do this, germline gene editing will go from being a niche tool useful for treating a handful of diseases, to perhaps the most important technology ever developed.

Iterated CRISPR

To explain why, we need to return to the method used by He Jiankui to make his embryos. The main limitation of Jiankui’s technique is the narrow time window during which edits can be made, and the inability to go back and fix mistakes or “weed out” cells that are damaged by the editors.

With Jiankui’s technique, all edits have to be made in the first 24 hours, or at most the first few days. Mistakes can’t be corrected, and any serious ones do irreparable damage to the embryo.

If we could somehow grow the cells separately, we could take our time making edits and VERIFY them all before implanting an embryo. We could edit, make copies of the cells, verify the edits, then make more edits, all in a loop. A protocol might look something like the following:
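In code-sketch form, the loop’s logic looks something like this. Every number here (edit success rate, off-target rate, clones screened per round) is an illustrative guess, and each simulated round stands in for weeks of real culture and sequencing work:

```python
import random

random.seed(0)

P_EDIT_OK = 0.6     # assumed chance a given intended edit lands in a clone
P_OFF_TARGET = 0.1  # assumed chance a clone acquires a detectable bad edit
N_CLONES = 24       # clones expanded and screened per round

def rounds_to_finish(n_target_edits: int, batch_size: int = 5) -> int:
    """Simulate edit -> expand -> sequence -> select, looping until every
    intended edit is verified. Failed rounds retry from the last good clone."""
    done, rounds = 0, 0
    while done < n_target_edits:
        rounds += 1
        batch = min(batch_size, n_target_edits - done)
        # A clone passes if the whole batch landed and sequencing finds
        # no off-target damage; we need at least one passing clone.
        passed = any(
            all(random.random() < P_EDIT_OK for _ in range(batch))
            and random.random() > P_OFF_TARGET
            for _ in range(N_CLONES)
        )
        if passed:
            done += batch  # next round starts from the verified clone
    return rounds

print(f"~{rounds_to_finish(50)} rounds to verify 50 edits")
```

The key property the loop buys you is that mistakes become cheap: a failed round costs one cycle of culture time rather than the embryo.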

However, there’s one issue with the above protocol; the moment you remove stem cells from an embryo and begin to grow them, they lose their developmental potential. The stem cells become “primed”, meaning they can no longer form a complete organism. 

So even if you are able to edit a bunch of genes in your stem cells to decrease heart attack risk, increase intelligence, and decrease depression risk, it doesn’t matter. Your super cells are just cells; they can’t make a baby.

And until late 2023, this is where the field was stuck.

Then along came Sergiy Velychko.

Sergiy Velychko and the story of Super-SOX

The story of his discovery, what it is, how it works, and how it was made, is one of the most interesting I’ve ever stumbled across.

In the early 2020s, Sergiy was a postdoc at the Max Planck Institute in Germany, where he was working on research related to stem cells.

Stem cells are normally created by “turning on” four proteins inside a skin cell. These proteins are called “Yamanaka factors” after the Japanese scientist who discovered them. When they are turned on, the gene expression of the skin cell is radically altered, changing it into a stem cell.

Sergiy had been experimenting with modifications to one particular Yamanaka factor named Oct4. He was trying to increase the efficiency with which he could convert skin cells into stem cells. Normally it is abysmally low: less than 1%.

Unfortunately, very few of his experiments had yielded anything interesting. Most of his modifications to Oct4 just made it worse. Many broke it completely.

After a few years of running into one dead end after another, Sergiy gave up and moved on to another project converting skin cells into blood cells. 

He ran a series of experiments involving a protein called Sox17 E57K, a mutant of Sox17 known for its surprising ability to make stem cells (normal Sox17 can’t do this). Sergiy wanted to see if he could combine the mutant Sox17 with one of his old broken Oct4 mutants to directly make blood cells from skin.

To prove that the combination worked, Sergiy needed to set up a control group. Specifically he needed to show that the combination did NOT produce stem cells. Without this control group there would be no way to prove that he was making skin cells DIRECTLY into blood cells instead of making them into stem cells which became blood cells afterwards.

This should have been easy. His previous work had shown the Oct4 mutant wasn’t capable of making stem cells, even when combined with all the other normal reprogramming factors.

But something very surprising happened; the control group failed. The broken Oct4, which he had previously shown to be incapable of making stem cells, was doing just that.

What is going on?

Most scientists would have chalked up this outcome to contamination or bad experimental setup, or perhaps some weird quirk of nature that wasn’t particularly interesting. Indeed many of Sergiy’s colleagues who he informed of the result found it uninteresting.

So what if you could make stem cells with a weird combination of proteins? It was still less efficient than normal Yamanaka factors and there didn’t seem to be any very compelling reasons to believe it was worth looking into.

But Sergiy felt differently. He had spent enough time studying the structure and functionality of Yamanaka factors to realize this result indicated something much deeper and stranger was going on; somehow the mutant Sox17 was “rescuing” the Oct4 mutant. 

But how?

Determined to understand this better, Sergiy began a series of experiments. Piece by piece he began swapping parts of the mutant Sox17 protein into Sox2, the normal reprogramming factor, trying to better understand what exactly made the mutant Sox17 so special. 

Sox17 differs in many places from Sox2, so it was reasonable to assume that whatever made it special involved multiple changes.

But that was not what Sergiy found. Instead, he found that he could replicate the behavior by changing a single amino acid. 

Just one.

By swapping out an Alanine for a Valine at the 61st position in Sox2, it too could rescue the Oct4 mutants in the same way that the Sox17 mutant could.

What was going on? Sergiy pulled up some modeling software to try to better understand how such a simple change was making such a large difference. 

When he saw the 3D structure, it all clicked. The amino acid swap occurred at the exact spot where Sox2 and the Oct4 mutants stuck to each other. It must be changing something about the bond formed between the two.

Further experiments confirmed this to be the case; the alanine-to-valine swap was increasing the strength of the bond between Sox and Oct by a factor of a thousand.
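For intuition on what a thousandfold change means energetically, here’s the standard conversion, assuming the figure refers to binding affinity (the dissociation constant K_d):

```python
import math

R = 1.987e-3   # gas constant in kcal/(mol*K)
T = 310.15     # body temperature in kelvin

# Free-energy difference corresponding to a 1000x change in K_d:
delta_delta_G = R * T * math.log(1000)
print(f"ddG ~= {delta_delta_G:.1f} kcal/mol")
# ~4.3 kcal/mol, on the order of a couple of hydrogen bonds: a large
# shift for a single amino acid substitution, but physically plausible.
```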

To those familiar with Yamanaka factors, it might seem exceptionally strange that modifying the bond BETWEEN Yamanaka factors could enable reprogramming. 

Yamanaka factors are normally thought to work their magic by sticking to DNA and forming a sort of landing pad for a larger protein complex involved in gene expression. 

But somehow the bond BETWEEN these Yamanaka factors was turning out to be just as important. (I dig more into why this is the case in the appendix)

After understanding the mechanism, Sergiy had a new idea. If one amino acid swap could enable decent reprogramming efficiency with a broken Oct4, what would it enable if it was combined with a working Oct4? Sergiy quickly ran the experiment, which produced another breakthrough. When combined with normal Oct4, super-SOX was making stem cells with at least ten times the efficiency of standard techniques.

After years of failed attempts, he had finally found his super reprogramming factor.

Super-SOX

Sergiy began testing other changes, such as substituting Sox17’s C-terminal domain (the part that sticks onto other proteins) into Sox2. By the time he was done with all of his improvements, his modified stem cell recipe was able to create stem cells at a rate 50 times that of normal reprogramming. The best starting cells were creating stem cells with 200x the efficiency. Sergiy dubbed the best version “super-SOX”.

By itself this would have been a huge discovery and a boon to stem cell scientists all over the world. But there was something even more interesting happening. 

When Sergiy began looking at the gene expression of stem cells created with super-SOX, he noticed something incredible; these cells did not look like normal stem cells used in experiments for decades. Instead they looked like stem cells taken directly from a very early embryo.

These kinds of stem cells had been created in mice long ago, and later in rats, but never in any other species. Yet one after another, the signs (most notably activation of the Oct4 distal enhancer) indicated that Sergiy’s super-SOX was doing just that. It was making naive embryonic stem cells.

Mice from stem cells

To test whether what he thought he was seeing was real, Sergiy began experimenting on mice, trying to test the developmental potential of these new stem cells.

The logic of the experiment was simple; if he could grow a whole new mouse from super-SOX-derived stem cells, it would mean those cells had the same developmental potential as ones harvested directly from an early embryo.

This had been done before, but the efficiency was very low. Embryos made from stem cells often failed to develop after transfer, and those that did often died shortly after birth. 

The experiment worked. And not only did it work, but the results were incredible. Super-SOX led to an 800% increase in the adult survival rate of stem cell derived mice relative to normal Sox2.

The red bar shows the adult survival rate of mice made from super-SOX derived iPSCs, when compared with the standard Yamanaka cocktail.

One particular cell line stood out; embryos derived from a line with an integrated SKM transgene were resulting in live births 90% of the time. That is ABOVE the live birth rate from normal conception, meaning super-SOX was able to beat nature.

Not everything is perfect; the experiments showed some loss of imprinting during super-SOX culturing. But there are ways to address this issue, and Sergiy believes there are still further optimizations to be made.

Why does super-SOX matter?

Remember how we previously had no way to turn the edited stem cells into an embryo? If super-SOX works as well in humans as the early data seems to indicate, that will no longer be the case. We’ll plausibly be able to take edited stem cells, apply super-SOX, stick them in a donor embryo, and MAKE a new person from them.

Readers might wonder why we need a donor embryo. If super-SOX cells can form an entirely new organism, what’s the point of the donor? 

The answer is that although super-SOX cells form the fetus, they can’t form the placenta or the yolk sac (the membrane that nourishes the early embryo). Making cells that can form those tissues would require a further step beyond what Sergiy has done: a technique to create TOTIPOTENT stem cells.

So until someone figures out a way to do that, we’ll still need a donor embryo to form those tissues.

Let’s put it all together. If super-SOX works as well in humans as it does in mice, this is how you would make superbabies: edit stem cells in an iterated edit-and-verify loop, apply super-SOX to restore their naive developmental potential, introduce them into a donor embryo whose own cells have been prevented from contributing to the fetus, and implant that embryo.

“Knock out fetal development genes” is one option to prevent the donor embryo’s cells from contributing to the fetus. There are other plausible methods to achieve this goal, such as overexpressing certain genes to solidify cell fate of the donor embryo’s cells before introducing super-SOX-derived cells.

How do we do this in humans?

The early data on super-SOX in humans looks promising. Many of the key markers for naive pluripotency are activated by super-SOX, including the Oct4 distal enhancer, the most reliable cross-species indicator of naive developmental potential (note this discovery was made after publication, so you won’t actually find it in the paper). Sergiy also showed that super-SOX-induced HUMAN stem cells could contribute to the inner cell mass of mouse embryos, which is the first time this has ever been demonstrated.

But we don’t have enough evidence yet to start testing this in humans. 

Before we can do so, we need to test super-SOX in primates. We need to show that you can make a monkey from super-SOX derived cells, and that those monkeys are healthy and survive to adulthood no less often than those that are conceived naturally.

If that can be demonstrated, ESPECIALLY if it can be demonstrated in conjunction with an editing protocol, we will have the evidence that we need to begin human trials.

Frustratingly, Sergiy has not yet been able to find anyone to fund these experiments. The paper on super-SOX came out a year ago, but to date I’ve only been able to raise about $100k for his research.

Unfortunately monkeys (specifically marmosets) are not cheap. To demonstrate germline transmission (the first step towards demonstrating safety in humans), Sergiy needs $4 million.

If any rich people out there think making superhumans is a worthwhile cause, this is a shovel ready project that is literally just waiting for funding; the primatology lab is ready to go, Sergiy is ready to go, they just need the money. 

What if super-SOX doesn’t work?

So what if super-SOX doesn’t work? What if the primate trials conclude and we find that despite super-SOX showing promising early data in human stem cells and very promising mouse data, it is insufficient to make healthy primates? Is the superbabies project dead?

No. There are multiple alternatives to making an embryo from edited stem cells. Any of them would be sufficient to make the superbabies plan work (though some are more practical than others). 

Eggs from Stem Cells

The first and most obvious alternative is creating eggs from stem cells. There are a half dozen startups, such as Conception and Ovelle, working on solving this problem right now, and if any of them succeeded we would have a working method to turn edited stem cells into a super egg.

These superbabies wouldn’t be quite as superlative as super-SOX derived embryos since only half of their DNA would be genetically engineered. But that would still be sufficient for multiple standard deviations of gain across traits such as intelligence, life expectancy, and others.

Another option is to make edited sperm.

Fluorescence-guided sperm selection

Edited sperm are potentially easier to make because you don’t need to recapitulate the whole sperm maturation process. The testicles can do it for you.

To explain I need to give a little bit of background on how sperm is formed naturally.

The inside of the testicles contains a bunch of little tubes called “seminiferous tubules”. These are home to a very important bunch of cells called spermatogonial stem cells. As you might be able to guess from the name, spermatogonial stem cells are in charge of making sperm. They sit inside the tubes and every now and then (through a process I won’t get into) they divide, and one of the two resulting cells turns into sperm.

There’s a complicated process taking place inside these tubules that allows for maturation of the sperm. You NEED this maturation process (or at least most of it) for the sperm to be capable of making a baby.

But we can’t recreate it in the lab yet. 

So here’s an idea: how about instead of editing stem cells and trying to turn them into sperm, we edit spermatogonial stem cells and stick them back in the testicles? The testicles could do the maturation for us.

You would need some way to distinguish the edited from the unedited sperm though, since the edited and the unedited sperm would get mixed together. You can solve this by adding in a protein to the edited stem cells that makes the sperm formed from them glow green. You can then use standard off-the-shelf flow cytometry to pick out the edited from the unedited sperm. 

You also probably don’t want the baby to glow green, so it’s best to put the green glowing protein under the control of a tissue specific promoter. That way only the baby’s sperm would glow green rather than its whole body.
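The sorting step itself is conceptually just thresholding on fluorescence intensity. Here’s a toy simulation; the intensity distributions are invented, and a real cytometer’s gate would be set against calibration controls rather than a hand-picked number:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated fluorescence intensities for GFP-tagged (edited) sperm versus
# the background autofluorescence of unedited sperm. Arbitrary units.
edited = rng.lognormal(mean=4.0, sigma=0.5, size=50_000)
unedited = rng.lognormal(mean=1.0, sigma=0.5, size=50_000)

threshold = 20.0  # gate chosen to exclude nearly all unedited cells
recall = (edited > threshold).mean()
contamination = (unedited > threshold).mean()
print(f"kept {recall:.1%} of edited sperm; {contamination:.3%} false positives")
```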

From a technical perspective, we’re probably not that far away from getting this working. Human spermatogonial stem cells were successfully extracted and cultured back in 2020. And we’ve managed to put them back in the testicles in non-human primates (this made functional sperm too!). So this is probably possible to do in humans.

The monkey experiments used alkylating chemotherapy to get rid of most of the existing stem cells before reinjecting the spermatogonial stem cells. Most people are not going to want to undergo chemotherapy to have superbabies, so there probably needs to be additional research done here to improve the transplantation success rates.

Still, most of the pieces are already in place for this to be tested.

Embryo cloning

Lastly, there is somatic cell nuclear transfer, or SCNT. SCNT is how Dolly the sheep was cloned, though in this context we’d be using it to clone an embryo rather than an adult organism. SCNT is not currently safe enough for use in humans (many of the animals born using the technique have birth defects), but should advancements be made in this area, it may become viable for human use.

What if none of that works?

If none of the above works, and the only technology my company can get working is the multiplex editing, we can always create super cows in the meantime. We have pretty good genetic predictors for a lot of traits in cows, and you can already use SCNT to make an edited cell into an embryo. The success rates aren’t as high as they are with natural conception, but no one really cares too much about that; if you can make one super bull, it can create many, many offspring with desirable traits such as improved disease resistance, better milk production, or a better metabolism that more efficiently converts feed into beef.

Unlike past GMO technologies, this one could work without inserting any non-cow genes into the cow genome; we could literally just put a bunch of the “extra beefy cow” genes into the same cow. You’d get a similar result from using traditional breeding or embryo selection; the editing would just massively speed up the process.

We could also likely use this tech to make farmed animals suffer less. A big reason factory farming sucks so much is because we’ve bred animals to get better at producing milk and meat but we’ve left all their instincts and wild drives intact. This creates a huge mismatch between what the animals want (and what they feel) and their current environment.

We could probably reduce suffering a decent bit just by decreasing these natural drives and directly reducing pain experienced by animals as the result of these practices.

Many animal advocates hate this idea because they believe we just shouldn’t use animals for making meat (and frankly they have a good point). But in the interim period where we’re still making meat from animals, this could make the scale of the moral disaster less bad, even if it still sucks.

What about legal issues?

At the moment, there is a rider attached to the annual appropriations bill in the United States that bans the FDA from considering any applications to perform germline gene editing.

Whether or not the FDA has the authority to enforce this ban is a question which has not yet been tested in court. The FDA does not have regulatory authority over IVF, so there is some reason to doubt its jurisdiction over this area.

Still, the legal situation in the United States at the moment isn’t exactly the most friendly to commercialization.

Fortunately, the United States is not the only country in the world. There are over a dozen countries which currently have no laws banning germline gene editing, and 5 where there are legal carve outs for editing in certain circumstances, such as editing to prevent a disease. Belgium, Colombia, Italy, Panama, and the UAE all have exceptions that allow heritable gene editing in certain circumstances (mostly related to health issues).

The very first application of gene editing will almost certainly be to prevent a disease that the parents have that they don’t want to pass on to their children. This is a serious enough issue that it is worth taking a small risk to prevent the child from going on to live an unhealthy, unhappy life.

From a technical standpoint, we are ready to do single gene editing in embryos RIGHT NOW. There are labs that have done this kind of editing in human embryos with undetectably low levels of off-target edits. My understanding is they are still working on improving their techniques to ensure no mosaicism of the resulting embryos, but it seems like they are pretty close to having that problem solved. 

I am trying to convince one of them to launch a commercial entity outside the United States and get the ball rolling on this. This technology won’t make superbabies, but it COULD prevent monogenic diseases and reduce the risk of things like heart disease by editing PCSK9. If anyone is interested in funding this please reach out.

How we make this happen

Here’s an oversimplified diagram of various protocols that could be used to make superbabies:

In this blog post I’ve mostly focused on the path that goes multiplex editing → embryos from stem cells → superbabies.

But there are other approaches to make an optimized genome. Chromosome selection is one I’ve only briefly mentioned in other posts, but one which completely bypasses the need for the kind of epigenetic correction necessary for other techniques. And there has been virtually no research on it despite the project appearing at least superficially feasible (use micromanipulators to move chromosomes from different sperm into the same egg).

Iterated meiotic selection is another approach to creating an optimized genome which I haven’t even touched on because it deserves its own post. 

In my view the most sensible thing to do here is to place multiple bets; work on chromosome selection AND eggs from stem cells AND embryos from stem cells (a la super-SOX) AND sperm from stem cells (a la hulk sperm) AND multiplex editing all at the same time.

In the grand scheme, none of these projects are that expensive; my company will be trying to raise $8 million for our seed round in a few weeks which we believe will be enough for us to make significant progress on showing the ability to make dozens to hundreds of edits in stem cells (current state of the art is about 10 cumulative edits). Sergiy needs $4 million for super-SOX validation in primates, and probably additional funding beyond that to show the ability to create adult monkeys entirely from stem cells in conjunction with an editing protocol.

I don’t have precise estimates of the amounts needed for chromosome selection, but my guess is we could at the very least reduce uncertainty (and possibly make significant progress) with a year and $1-3 million.

Creating eggs from stem cells is somewhat more expensive. Conception has raised $38 million so far, and other companies have raised a smaller amount.

The approach I like the most here is the one taken by Ovelle, which is planning to use growth and transcription factors to replicate key parts of the environment in which eggs are produced, rather than growing actual feeder cells to secrete those factors. If it works, this approach has the advantage of speed: it takes a long time to mature primordial germ cells into eggs by replicating the natural process, so recreating it with transcription factors would save a lot of time. Based on some conversations I’ve had with one of the founders, I think $50 million could probably accelerate progress by about a year (though they are not looking to raise that much at the moment).

Making eggs from stem cells also has a very viable business model even if none of the other technologies work; there are hundreds of thousands of couples desperate to have children who simply can’t produce enough eggs to have the kids they want.

This is the case for most of these technologies; multiplex editing will have a market to make super cows, gene therapy, basic research, and to do basic gene editing in embryos even if none of the other stuff works. Creating sperm from stem cells will have a market even without editing or iterated meiotic selection because you’ll be able to solve a certain kind of male infertility where the guy can’t create sperm. Embryo selection ALREADY has a rapidly growing market with an incredibly well-resourced customer base (you wouldn’t believe the number of billionaires and centimillionaires who used our services while I was working on embryo selection at Genomic Prediction). Chromosome selection might be able to just make superbabies in like a couple of years if we’re lucky and the research is straightforward.

So I think even pretty brain dead investors who somehow aren’t interested in fundamentally upgrading the human race will still see value in this.

Ahh yes, but what about AI?

Now we come to perhaps the biggest question of all.

Suppose this works. Suppose we can make genetically engineered superbabies. Will there even be time for them to grow up, or will AI take over first?

Given the current rate of improvement of AI, I would give a greater than 50% chance of AI having taken over the world before the first generation of superbabies grows up.

One might then reasonably ask what the point of all this is. Why work on making superbabies if it probably won’t matter?

There is currently no backup plan if we can’t solve alignment

If it turns out we can’t safely create digital gods and command them to carry out our will, then what? What do we do at that point?

No one has a backup plan. There is no solution like “oh, well we could just wait for X and then we could solve it.”

Superbabies is a backup plan: focus the energy of humanity’s collective genetic endowment into a single generation, and have THAT generation solve problems like “figure out how to control digital superintelligence”.

It’s actually kind of nuts this isn’t the PRIMARY plan. Humanity has collectively decided to roll the dice on creating digital gods we don’t understand and may not be able to control instead of waiting a few decades for the super geniuses to grow up.

If we face some huge AI disaster, or if there’s a war between the US and China and no one can keep their chip fabs from getting blown up, what does that world look like? Almost no one is thinking about this kind of future.

But we should be. The current trajectory we’re on is utterly insane. Our CURRENT PLAN is to gamble 8 billion lives on the ability of a few profit driven entities to control digital minds we barely understand in the hopes it will give them and a handful of government officials permanent control over the world forever. 

I really can’t emphasize just how fucking insane this is. People who think this is a sensible way for this technology to be rolled out are deluding themselves. The default outcome of the trajectory we’re on is death or disempowerment.

Maybe by some miracle that works and turns out well. Maybe Sam Altman will seize control of the US government and implement a global universal basic income and we’ll laugh about the days when we thought AGI might be a bad thing. I will just note that I am skeptical things will work out that way. Altman in particular seems to be currently trying to dismantle the non-profit he previously put into place to ensure the hypothetical benefits of AGI would be broadly distributed to everyone.

If the general public actually understood what these companies were doing and believed they were going to achieve it, we would be seeing the biggest protests in world history. You can’t just threaten the life and livelihood of 8 billion people and not expect pushback. 

We are living in a twilight period where clued in people understand what’s coming but the general public hasn’t yet woken up. It is not a sustainable situation. Very few people understand that even if it goes “well”, their very survival will completely depend on the generosity of a few strangers who have no self-interested reason to care about them.

But people are going to figure this out sooner or later. And when they do, it would be pretty embarrassing if the only people with an alternative vision of the future are neo-luddite degrowthers who want people to unplug their refrigerators.

We need to start working on this NOW. Someone with initiative could have started on this project five years ago when prime editors came out and we finally had a means of editing most genetic variants in the human genome.

But no one has made it their job to make this happen. The academic institutions in charge of exploring these ideas are deeply compromised by insane ideologies. And the big commercial entities are too timid to do anything truly novel; once they discovered they had a technology that could potentially make a few tens of billions treating single gene genetic disorders, no one wanted to take any risks; better to take the easy, guaranteed money and spend your life on a lucrative endeavor improving the lives of 0.5% of the population than go for a hail mary project that will result in journalists writing lots of articles calling you a eugenicist.

I think in most worlds, gene editing won’t play a significant role in the larger strategic picture. But in perhaps 10-20%, where AGI just takes a long time or we have some kind of delay of superintelligence due to an AI disaster or war, this will become the most important technology in the world.

Given the expected value here and the relatively tiny amount of money needed to make significant progress (tens to hundreds of millions rather than tens to hundreds of billions), it would be kind of insane if we as a civilization didn’t make a serious effort to develop this tech.

Team Human

There are fundamentally two different kinds of futures that lie before us. In the first, we continue to let technology develop in every area as fast as it can with no long term planning or consideration of what kind of future we actually want to build.

We follow local incentive gradients, heedless of the consequences. No coordination is needed because nothing can possibly go wrong.

This is the world we are building right now. One in which humans are simply biological bootloaders for our digital replacements.

In the second world we take our time before summoning the digital gods. We make ourselves smarter, healthier, and wiser. We take our time and make sure we ACTUALLY UNDERSTAND WHAT WE’RE DOING before opening Pandora’s box.

This latter world is much more human than the first. It involves people making decisions and learning things. You and your children will actually have power and autonomy over your own lives in this world. 

There will still be death and suffering and human fallibility in the second world (though less of all of these). We don’t get to magically paper over all problems in this world by saying “AI will fix it” and just crossing our fingers that it will be true. Instead we ourselves, or at least our children, will have to deal with the problems of the world.

But this world will be YOURS. It will belong to you and your children and your friends and your family. All of them will still matter, and if at some point we DO decide to continue along the road to build digital superintelligence, it will be because humanity (or at least its selected representatives) thought long and hard and decided that was a step worth taking.

If we want this kind of world, we need to start building it now. We need to start funding companies to work on the technical roadblocks to bringing superbabies to life. We need to break this stupid taboo around talking about creating genetically engineered people and make sure policymakers are actually informed of just how much this technology could improve human life. It is ludicrous beyond belief that we have gene therapies designed to edit genes for millions of dollars, yet editing those exact same genes in an embryo for a fraction of the money in a more controlled, more effective, more verifiable way is considered unethical.

If you’re interested in making this happen, be it as a biologist working in a wet lab, a funder, or a policymaker, please reach out to me. You can reach me at genesmithlesswrong@gmail.com or simply through a LessWrong private message. My company will be raising our seed round to work on multiplex editing in the next couple of weeks, so if you’re in venture capital and you want to make this happen, please get a hold of me.

Appendix

iPSCs were named after the iPod

You might wonder why the i in iPSCs is lowercase. It’s literally because Yamanaka was a fan of the iPod and was inspired to use the same lowercase naming convention.

On autoimmune risk variants and plagues

The studies on genes that have been protective against past plagues are kind of insane. There’s a genetic variant in the ERAP2 gene that jumped from 40% prevalence to 70% prevalence in ~100 years in Denmark thanks to its ability to reduce mortality from the bubonic plague.

For anyone not familiar with evolution, this is an INCREDIBLY fast spread of a genetic variant. It’s so fast that you can’t even really explain it by people with this gene out-reproducing those without it. You can only explain it if a large portion of the people without the genetic variant are dying in a short time period.
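You can back out roughly how strong that selection must have been. Assuming ~25-year generations (so four generations in that century) and a simple model where the variant’s odds multiply by (1+s) each generation:

```python
# Selection coefficient implied by a 40% -> 70% frequency shift in ~4
# generations, under the simple model odds(p_t) = odds(p_0) * (1+s)^t.
p0, pt, generations = 0.40, 0.70, 4

odds_ratio = (pt / (1 - pt)) / (p0 / (1 - p0))
s = odds_ratio ** (1 / generations) - 1
print(f"implied selection coefficient s = {s:.2f} per generation")
# s ~= 0.37: carriers out-survive non-carriers by roughly a third per
# generation, which is mass-mortality territory, not a fertility effect.
```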

Today this same genetic variant is known to increase the risk of Crohn's disease and a variety of other autoimmune conditions.

ERAP variants aren’t the ONLY ones that protect against plague risk. There are half a dozen others mentioned in the study. So we aren’t going to make the entire population fragile to plague just by editing this variant.

Two simple strategies for minimizing autoimmune risk and pandemic vulnerability

There are two fairly straightforward ways to decrease the risk of autoimmune disease while minimizing population level vulnerability to future pandemics.

First of all, we can dig up plague victims from mass graves and look at their DNA. Variants that are overrepresented among people in mass burial sites compared with the modern population probably weren’t very helpful for surviving past pandemics. So we should be more cautious than usual about swapping people’s genetic variants to those of plague victims, even if it decreases the risk of autoimmune diseases.

Second, we should have an explicit policy of preserving genetic diversity in the human immune system. There’s a temptation to just edit the variants in a genome that have the largest positive impact on a measured trait. But in the case of the immune system, it’s much better from a population level perspective to decrease different people’s autoimmune risk in different ways.

“I don’t want someone else’s genes in my child”

One thing we didn’t mention in the post is what could be done with gene editing if we JUST restrict ourselves to editing in variants that are present in one of the two parents.

We ran some simulations and came to a somewhat surprising conclusion: there probably won’t be that big of a reduction in benefit if you do this!
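
Our actual simulations are more involved, but a toy version of the intuition looks like this (the allele frequencies and the “editable only if a parent carries it” rule are invented for illustration):

```python
# Toy model: an edit is allowed only if at least one parent carries the variant.
# For a variant at population frequency f, the chance that none of the four
# parental allele copies carries it is (1 - f)^4, which is small for common variants.

import random

random.seed(0)
N_VARIANTS = 500

def carried_by_a_parent(freq: float) -> bool:
    return any(random.random() < freq for _ in range(4))  # 2 parents x 2 alleles each

# Suppose target variants have frequencies spread over (0.05, 0.5)
# and roughly similar effect sizes.
freqs = [random.uniform(0.05, 0.5) for _ in range(N_VARIANTS)]
editable = sum(carried_by_a_parent(f) for f in freqs)
print(editable / N_VARIANTS)  # ~0.67: most target variants are still editable
```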

So even if you’re an inheritance maximalist, you can still get major benefits from gene editing.

Could I use this technology to make a genetically enhanced clone of myself?

For some versions of this technology (in particular super-SOX or SCNT-derived solutions), you could indeed make a genetically enhanced clone of yourself.

Genetically enhanced clones are too weird to be appealing to most people, so I don’t think we’re ever going to see this kind of thing being done at scale. But maybe someday someone will start the “Me 2” movement.

Why does super-SOX work?

Super-SOX is a modification of the Sox2 protein designed to increase the strength of the bond between it and Oct4. You might wonder why increasing the strength of this bond increases reprogramming efficiency and makes more developmentally potent stem cells.

There are two pieces to the answer. But to explain them I need to give you a bit of background first.

Sox2 (and all the other Yamanaka factors, for that matter) is a transcription factor. This means it sticks to DNA. It also sticks to other proteins. You can think of it like a person holding hands with the DNA on one side and with another protein on the other.

It’s beyond the scope of even this appendix to talk about all the other proteins they can bind to, but among the most important are a bunch of proteins that form the RNA polymerase complex. RNA polymerase is the giant enzyme that transcribes DNA into messenger RNA (which is then translated into protein). RNA polymerase can’t find the right spots on the DNA by itself, so in order for it to do its thing, it needs a bunch of helper proteins which get it into position.

Sox2 and Oct4 are two such “helper proteins”. They’re crucial because they bind directly to DNA, which means the RNA polymerase complex can’t even START forming unless Sox2 or Oct4 (or both) start the process off by sticking to DNA in the right spots.

This DNA binding ability is apparent in their very structure.

 

See Figure 6a of this Nature paper.

See how the protein itself is folded in a way such that it kind of “wraps around” the DNA? That’s by design; it allows the protein to bind to the DNA if and only if the sequence matches Sox2’s binding domain. So Sox2 doesn’t just bind to any sequence of DNA. It needs a very specific one: 5′-(A/T)(A/T)CAAAG-3′, if you want to get technical about it.

And not every gene has such a sequence nearby. Only some genes do, which means Sox2 won’t activate transcription of every gene.
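
If you want to see what “a very specific one” means in practice, here’s a minimal sketch that scans a made-up DNA fragment for the Sox2 motif:

```python
# Scan a DNA string for the canonical Sox2 motif 5'-(A/T)(A/T)CAAAG-3'.
# The example sequence is invented.

import re

SOX2_MOTIF = re.compile(r"[AT][AT]CAAAG")

sequence = "GGGTTCAAAGACCTTAACAAAGGG"  # hypothetical promoter fragment

for match in SOX2_MOTIF.finditer(sequence):
    print(match.start(), match.group())  # prints: 3 TTCAAAG, then 15 AACAAAG
```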

So Sox2 can bind to DNA and initiate transcription of certain genes, but the bonds Sox2 forms with DNA are… kind of weak. It often “comes off” the DNA when something else bumps into it or when the DNA just wiggles a little too much.

So Sox2, like all the other Yamanaka factors, is constantly coming and going from these binding sites on the DNA.

The fraction of time that Sox2 and the other Yamanaka factors spend stuck to the DNA is a huge determining factor in whether or not the RNA polymerase complex actually forms; the higher the percentage of time they are bound to the DNA, the more often that complex forms and the more of the target gene’s protein gets produced.

If we could somehow increase the strength of that bond, it would significantly increase the amount of protein produced.

One way to do this would be to directly modify the protein so that it can stick to the DNA better. But another way would be to modify the strength of the bond it has with something ELSE that sticks to DNA in the same region.

And that’s exactly what super-SOX does. It increases the strength of the bond with Oct4, which helps Sox2 “hold on” to the DNA. Even if the DNA wiggles too much and Sox2 gets knocked off, it will still be held in place by Oct4, which is bound right next to it.
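
A toy equilibrium-binding model makes this concrete. Site occupancy follows the standard form [TF] / ([TF] + Kd), and a partner bound next door effectively lowers Kd by the interaction energy. All the parameter values here are invented for illustration:

```python
# How much does a cooperative partner help? A hypothetical-numbers sketch.

import math

R = 8.314e-3  # gas constant, kJ/(mol*K)
T = 310.0     # body temperature, K

def occupancy(tf_nM: float, kd_nM: float) -> float:
    """Fraction of time the binding site is occupied at equilibrium."""
    return tf_nM / (tf_nM + kd_nM)

def kd_with_partner(kd_nM: float, delta_g_kJ: float) -> float:
    """Effective Kd when a partner contributes extra binding energy (negative = tighter)."""
    return kd_nM * math.exp(delta_g_kJ / (R * T))

kd_alone = 100.0  # hypothetical Kd of Sox2 on its own, in nM
sox2 = 10.0       # hypothetical nuclear Sox2 concentration, in nM

print(occupancy(sox2, kd_alone))                          # ~0.09 bound on its own
print(occupancy(sox2, kd_with_partner(kd_alone, -10.0)))  # ~0.83 with ~10 kJ/mol of help
```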

It should be noted that not all genes have a binding motif for Sox2 and Oct4 right next to each other. But (incredibly), it seems that most of the key pluripotency genes, the ones that are really important for making naive embryonic stem cells, DO have such a binding motif.

That’s why super-SOX works. It increases transcription of genes with a sox-oct motif nearby. And it just so happens that the key genes for making naive cells have a sox-oct motif next to them.

This is almost certainly not an accident. Sox and Oct are naturally occurring proteins that play key roles in embryonic development. So the fact that sox/oct motifs play a key role in creating and maintaining naive embryonic stem cells is not all that surprising.

How was the IQ gain graph generated?

The graph for IQ was generated by ranking genetic variants by their effect sizes and showing how they add up if you put them all in the same genome. We take into account uncertainty about which of a cluster of nearby genetic variants is actually CAUSING the observed effect. We also adjust for the fact that the IQ test issued by UK Biobank isn’t particularly good. I believe it has a test/retest correlation of 0.61, which is significantly below the 0.9 of a gold standard IQ test. 
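
One concrete piece of that adjustment: under classical test theory, effect sizes estimated against a noisy test understate effects on the underlying trait, so the betas get rescaled. Here’s a sketch of that single correction (the standard disattenuation formula; not necessarily the exact form we used):

```python
import math

def rescale_beta(beta_observed: float,
                 noisy_reliability: float = 0.61,
                 target_reliability: float = 0.9) -> float:
    """Rescale a per-variant effect from the noisy UK Biobank test
    to the scale of a gold-standard IQ test."""
    return beta_observed * math.sqrt(target_reliability / noisy_reliability)

print(round(rescale_beta(0.2), 3))  # a hypothetical 0.2-point effect becomes ~0.243 points
```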

We also account for assortative mating on IQ (our assumption is a correlation of 0.4, which we think is reasonable based on the research literature). 

A huge amount of work went into dealing with missing SNPs in the datasets used to train these predictors. There’s too much math-y detail to get into in this post, but some of the genetic variants we would hope to measure are missing from the datasets, and kman had to write custom fine-mapping software to deal with this.

We couldn’t find anyone else who had done this before, so we’ll probably publish a paper at some point explaining our technique and making the software kman wrote available to other genetics researchers.

If you’ve made it this far, please send me a DM! Most people don’t read 30 page blog posts and I always enjoy hearing from people that do. Let me know why you read all the way through this, what you enjoyed most, and what you think about the superbabies project.

129 comments

Comments sorted by top scores.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2025-02-20T00:45:46.116Z · LW(p) · GW(p)

One of the most important projects in the world.  Somebody should fund it.

Replies from: valley9, niplav, kave, sharmake-farah
comment by Ebenezer Dukakis (valley9) · 2025-02-20T22:38:26.873Z · LW(p) · GW(p)

I think this project should receive more red-teaming before it gets funded.

Naively, it would seem that the "second species argument" matches much more strongly to the creation of a hypothetical Homo supersapiens than it does to AGI.

We've observed many warning shots regarding catastrophic human misalignment. The human alignment problem isn't easy. And "intelligence" seems to be a key part of the human alignment picture. Humans often lack respect or compassion for other animals that they deem intellectually inferior -- e.g. arguing that because those other animals lack cognitive capabilities we have, they shouldn't be considered morally relevant. There's a decent chance that Homo supersapiens would think along similar lines, and reiterate our species' grim history of mistreating those we consider our intellectual inferiors.

It feels like people are deferring to Eliezer a lot here, which seems unjustified given how much strategic influence Eliezer had before AI became a big thing, and how poorly things have gone (by Eliezer's own lights!) since then. There's been very little reasoning transparency in Eliezer's push for genetic enhancement. I just don't see why we're deferring to Eliezer so much as a strategist, when I struggle to name a single major strategic success of his.

Replies from: TsviBT, Purplehermann
comment by TsviBT · 2025-02-20T22:48:55.613Z · LW(p) · GW(p)

You shouldn't and won't be satisfied with this alone, as it doesn't deal with or even emphasize any particular peril; but to be clear, I have definitely thought about the perils: https://berkeleygenomics.org/articles/Potential_perils_of_germline_genomic_engineering.html

comment by Purplehermann · 2025-02-21T09:58:38.206Z · LW(p) · GW(p)

Right now only low-E tier human intelligences are being discussed; they'll be able to procreate with humans and be a minority.

Considering current human distributions, and the lack of 160+ IQ people having written off sub-100 IQ populations as morally useless, I doubt a new sub-population at 200+ is going to suddenly turn on humanity.

If you go straight to 1000 IQ or something, sure, we might be like animals compared to them.

comment by niplav · 2025-02-21T12:43:28.948Z · LW(p) · GW(p)

At the end of 2023, MIRI had ~$19.8 million in assets. I don't know much about the legal restrictions on how that money could be used, or what the state of its financial assets is now, but if it's similar then MIRI could comfortably fund Velychko's primate experiments, and potentially some additional smaller projects.

(Potentially relevant: I entered the last GWWC donor lottery with the hopes of donating the resulting money to intelligence enhancement, but wasn't selected.)

comment by kave · 2025-02-20T22:33:04.449Z · LW(p) · GW(p)

Copying over Eliezer's top 3 most important projects from a tweet:

1.  Avert all creation of superintelligence in the near and medium term.

2.  Augment adult human intelligence.

3.  Build superbabies.

comment by Noosphere89 (sharmake-farah) · 2025-02-20T18:44:34.281Z · LW(p) · GW(p)

TBH, I don't particularly think it's one of the most important projects right now, due to several issues:

  1. There's no reason to assume that we could motivate them any better than what we already do, unless we are in the business of changing personality, which carries its own problems, or we are willing to use it on a massive scale, which simply cannot be done currently.

  2. We are running out of time. The likely upper bound for AI that will automate basically everything is 15-20 years, per Rafael Harth and Cole Wyeth, and unfortunately there's a real possibility that powerful AI comes in 5-10 years if we make plausible assumptions about scaling continuing to work. And given that there's no real way to transfer any breakthroughs to the somatic side of gene editing, germline engineering will be irrelevant by the time AI comes.

Thus, human intelligence augmentation is quite poor from a reducing X-risk perspective.

Replies from: jessica.liu.taylor, teradimich
comment by jessicata (jessica.liu.taylor) · 2025-02-20T20:57:46.005Z · LW(p) · GW(p)

On EV grounds, "2/3 chance it's irrelevant because of AGI in the next 20 years" is not a huge contributor to the EV of this. Because, ok, maybe it reduces the EV by 3x compared to what it would otherwise have been. But there are much bigger than 3x factors that are relevant. Such as, probability of success, magnitude of success, cost effectiveness.

Then you can take the overall cost effectiveness estimate (by combining various factors including probability it's irrelevant due to AGI being too soon) and compare it to other interventions. Here, you're not offering a specific alternative that is expected to pay off in worlds with AGI in the next 20 years. So it's unclear how "it might be irrelevant if AGI is in the next 20 years" is all that relevant as a consideration.
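
To make the arithmetic explicit (every number here is an invented placeholder):

```python
# Toy EV comparison: the timeline discount is just one factor among several.
p_relevant = 1 / 3   # chance AGI does NOT arrive within ~20 years
p_success = 0.2      # chance the project technically succeeds
magnitude = 100.0    # payoff if it works, in arbitrary value units

print(p_relevant * p_success * magnitude)  # ~6.7; a 3x timeline discount is smaller
                                           # than plausible swings in the other factors
```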

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2025-02-20T21:30:12.807Z · LW(p) · GW(p)

Usually, the other interventions I compare it to are preparing for AI automation of AI safety by doing preliminary work to control/align those AIs, or AI governance interventions that are hopefully stable for a very long time. At least for the automation of AI safety, I assign much higher magnitudes of success conditional on success (like multiple OOMs), combined with moderately better cost effectiveness and quite a bit larger chances of success than the genetic engineering approach.

To be clear, the key variable is that, conditional on success, the magnitude of that success is very, very high in a way that no other proposal really has, such that even with quite a lot lower probabilities of success than mine, I'd still consider preparing for AI automation of AI safety, and doing preliminary work such that we can trust/control these AIs, to be the highest value alignment target by a mile.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2025-02-21T03:18:56.458Z · LW(p) · GW(p)

Oh, to be clear I do think that AI safety automation is a well-targeted x-risk effort conditioned on the AI timelines you are presenting. (Related to Paul Christiano's alignment ideas, which are important conditional on prosaic AI.)

comment by teradimich · 2025-02-20T20:22:00.672Z · LW(p) · GW(p)

EY is known for considering humanity almost doomed. 
He may think that the idea of human intelligence augmentation is likely to fail. But it's the only hope. Of course, many will disagree with this.

He writes more about it here or here.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2025-02-20T20:31:20.100Z · LW(p) · GW(p)

The problem is that from a relative perspective, human augmentation is probably more doomed than AI safety automation, which in turn is more doomed than AI governance interventions (though I may have gotten the relative ordering of those two wrong). I think the crux is that I do not believe in the timeline for human genetic augmentation in adults being only 5 years, even given a well-funded effort. I'd expect it to take 15-20 years minimum for large increases in adult intelligence, which basically rules out the approach given the very likely timelines to advanced AI either killing us all or being aligned to someone.

Replies from: teradimich, Purplehermann
comment by teradimich · 2025-02-20T21:19:08.844Z · LW(p) · GW(p)

Yudkowsky may think that the plan 'Avert all creation of superintelligence in the near and medium term — augment human intelligence' has <5% chance of success, but your plan has <<1% chance. Obviously, you and he disagree not only on conclusions, but also on models.

comment by Purplehermann · 2025-02-20T20:42:00.933Z · LW(p) · GW(p)

He already addressed this. 

If somehow international cooperation gives us a pause on going full AGI or at least no ASI - what then?

Just hope it never happens, like nuke wars? 

The answer now is to set later generations up to be more able. 

This could mean doing fundamental research (whether in AI alignment or international game theory or something else), it could mean building institutions to enable it, and it could mean making them actually smarter.

Genes might be the cheapest/easiest way to affect marginal chances given the talent already involved in alignment and the amount of resources required to get involved politically or in building institutions.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2025-02-20T20:57:01.230Z · LW(p) · GW(p)

If somehow international cooperation gives us a pause on going full AGI or at least no ASI - what then?

Just hope it never happens, like nuke wars?

The answer is no, but this might have to happen under certain circumstances.

The usual case (assuming that the government bans or restricts compute resources, and/or limits algorithmic research) is to use this time to either let the government fund AI alignment research, or go for a direct project to make AIs that are safe to automate AI safety research. And given that we don't have to race against other countries, we could afford far more safety taxes than usual to make AI safe.

I think the key crux is that I don't particularly think genetic editing is the cheapest/easiest way to affect marginal chances of doom, because of the time lag plus needing to reorient the entire political system, which is not cheap. The cheapest/easiest strategy to me is to do preparatory AI alignment/control schemes such that we can safely hand off the bulk of the alignment work to the AIs, which then solve the alignment problem fully.

Replies from: Purplehermann
comment by Purplehermann · 2025-02-21T09:39:06.148Z · LW(p) · GW(p)

Your direction sounds great - but how well can $4M move the needle there? How well can genesmith move the needle with his time and energy?

I think you're correct about the cheapest/easiest strategy in general, but completely off in regards to marginal advantages.

Major labs will already be pouring massive amounts of money and human capital into direct AI alignment and using AIs to align AGI if we get to a freeze,  and the further along in capabilities we get the more impactful such research would be. 

Genesmith's strategy benefits much more from starting now and has way less human talent and capital involved, hence higher marginal value

comment by David Gross (David_Gross) · 2025-02-19T22:53:06.177Z · LW(p) · GW(p)

How robust are these calculations against the possibility that individual gene effects aren't simply additive, but might even not play well together? i.e. gene variant #1 raises your IQ by 2 points, variant #2 raises your IQ by 1 point, but variants #1+2 together make you able to multiply twelve-digit numbers in your head but unable to tie your shoes; or variant #3 lifts your life expectancy by making you less prone to autoimmune disease A, variant #4 makes you less prone to autoimmune disease B, but variants #3+4 together make you succumb to the common cold because your immune system is not up to the task.

It's hard for me to tell from the level of detail in your explanation here, but at times it seems like you're just naively stacking the ostensible effects of particular gene variants one on top of the other and then measuring the stack.

Replies from: GeneSmith
comment by GeneSmith · 2025-02-19T23:24:58.624Z · LW(p) · GW(p)

It's a good question. The remarkable thing about human genetics is that most of the variants ARE additive.

This sounds overly simplistic, like it couldn't possibly work, but it's one of the most widely replicated results in the field.

There ARE some exceptions. Personality traits seem to be mostly the result of gene-gene interactions, which is one reason why SNP heritability (additive variance explained by common variants) is so low.

But for nearly all diseases and for many other traits like height and intelligence, ~80% of variance is additive.

This might seem like a weird coincidence. After all, we know there is a lot of non-linearity in actual gene regulatory networks. So how could it be that all the common variants simply add together?

There's a pretty clear reason from an evolutionary point of view: evolution is able to operate on genes with additive effects much more easily than on those with non-additive effects.

The set of genetic variants inherited is scrambled every generation during the sperm and egg formation process. Variants that need other common variants present to exert their effects just have a much harder time spreading through the population because their benefits are inconsistent across generations.

So over time the genome ends up being enriched for additivity.

There IS lots of non-additivity happening in genes which are universal among the human population. If you were to modify two highly conserved regions, the effects of both edits could end up being much greater or much less than the sum of the effects of the two individual variants. But that's also not that surprising; evolution has had a lot of time to build dependencies on these regions, so we should expect modifying them to have effects that are hard to predict.

You also had a second question embedded within your first, which is about second order effects from editing, like increased IQ resulting in more mental instability or something.

You can just look at people who naturally have high IQ to see whether this is a concern. What we see is that, with the exception of Asperger's, higher IQ actually tends to be associated with LOWER rates of mental illness.

Also you can see from my chart looking at genetic correlations between diseases that, with a few exceptions, there just isn't that much correlation between diseases. The sets of variants that affect two different diseases are mostly disjoint.

Replies from: jimrandomh, TsviBT, p.b., lumire
comment by jimrandomh · 2025-02-21T01:04:40.830Z · LW(p) · GW(p)

The remarkable thing about human genetics is that most of the variants ARE additive.

I think this is likely incorrect, at least where intelligence-affecting SNPs stacked in large numbers are concerned.

To make an analogy to ML, the effect of a brain-affecting gene will be to push a hyperparameter in one direction or the other. If that hyperparameter is (on average) not perfectly tuned, then one of the variants will be an enhancement, since it leads to a hyperparameter-value that is (on average) closer to optimal.

If each hyperparameter is affected by many genes (or, almost-equivalently, if the number of genes greatly exceeds the number of hyperparameters), then intelligence-affecting variants will look additive so long as you only look at pairs, because most pairs you look at will not affect the same hyperparameter, and when they do affect the same hyperparameter the combined effect still won't be large enough to overshoot the optimum. However, if you stack many gene edits, and this model of genes mapping to hyperparameters is correct, then the most likely outcome is that you move each hyperparameter in the correct direction but overshoot the optimum. Phrased slightly differently: intelligence-affecting genes may be additive on current margins, but not remain additive when you stack edits in this way.

To make another analogy: SNPs affecting height may be fully additive, but if the thing you actually care about is basketball-playing ability, there is an optimum amount of editing after which you should stop, because while people who are 2m tall are much better at basketball than people who are 1.7m tall, people who are 2.6m tall are cripples.

For this reason, even if all the gene-editing biology works out, you will not produce people in the upper end of the range you forecast.

You can probably somewhat improve this situation by varying the number of edits you do. Ie, you have some babies in which you edit a randomly selected 10% of known intelligence-affecting SNPs, some in which you've edited 20%, some 30%, and so on. But finding the real optimum will probably require understanding what the SNPs actually do, in terms of a model of brain biology, and understanding brain biology well enough to make judgment calls about that.
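
Here's a toy simulation of that genes-to-hyperparameters model (all numbers invented): small fractions of edits help, while stacking everything overshoots.

```python
import random

random.seed(0)

N_PARAMS = 50          # brain "hyperparameters"
GENES_PER_PARAM = 40   # variants nudging each one
STEP = 0.05            # per-variant effect, arbitrary units

# Each hyperparameter starts slightly off its optimum (optimum = 0).
offsets = [random.gauss(0, 0.3) for _ in range(N_PARAMS)]

def fitness(xs):
    return -sum(x * x for x in xs)  # peak when every parameter sits at its optimum

def edit(xs, frac):
    """Apply the 'good' direction of a fraction of each parameter's variants."""
    move = STEP * int(frac * GENES_PER_PARAM)
    return [x - move if x > 0 else x + move for x in xs]  # every edit points toward 0

for frac in (0.0, 0.1, 0.2, 0.5, 1.0):
    print(frac, round(fitness(edit(offsets, frac)), 1))
# Editing ~10% of the variants improves fitness; editing 100% overshoots badly.
```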

Replies from: kman
comment by kman · 2025-02-21T02:42:15.696Z · LW(p) · GW(p)

I definitely don't expect additivity holds out to like +20 SDs. We'd be aiming for more like +7 SDs.

comment by TsviBT · 2025-02-20T18:41:27.872Z · LW(p) · GW(p)

~80% of variance is additive.

Is this what you meant to say? Citation?

Replies from: kman
comment by kman · 2025-02-20T19:27:34.194Z · LW(p) · GW(p)

This paper found that the heritability of most traits is ~entirely additive, supposedly including IQ according to whatever reference I followed to the paper, though I couldn't actually find where in the paper it said/implied that.

Replies from: TsviBT
comment by TsviBT · 2025-02-20T19:33:15.978Z · LW(p) · GW(p)

And then suddenly it's different for personality? Kinda weird.

Replies from: Archimedes
comment by Archimedes · 2025-02-21T03:56:58.280Z · LW(p) · GW(p)

I don’t find that surprising at all. IMO, personality is more of an emergent balancing of multidimensional characteristics than something like height or IQ (though this is mostly vibes-based speculation).

comment by p.b. · 2025-02-21T07:00:21.891Z · LW(p) · GW(p)

Do you have a reference for the personality trait gene-gene interaction thing? Or maybe an explanation how that was determined? 

comment by lumire · 2025-02-21T01:43:44.697Z · LW(p) · GW(p)

If the variance for intelligence is primarily additive, then why are IQ GWAS heritability estimates significantly under the heritability estimates you see from twin studies (or even GWAS heritability for height)? 

comment by Metacelsus · 2025-02-20T03:13:32.342Z · LW(p) · GW(p)

Ovelle, who is planning to use growth and transcription factors to replicate key parts of the environment in which eggs are produced rather than grow actual feeder cells to excrete those factors. If it works, this approach has the advantage of speed; it takes a long time to grow the feeder cells, so if you can bypass some of that you can make eggs more quickly. Based on some conversations I’ve had with one of the founders I think $50 million could probably accelerate progress by about a year.

 

A few comments on this:
1. The "feeder cells" you're discussing here are from the method in this paper from the Saitou lab, who used feeder cells to promote development of human PGC-like cells to oogonia. But "takes a long time to grow the feeder cells" is not the issue. In fact, the feeder cells are quite easy to grow. The issue is that it takes a long time for PGC-like cells to develop to eggs, if you're strictly following the natural developmental trajectory.

2. The $50 million number is for us to set up our own nonhuman primate research facility, which would accelerate our current trajectory by *approximately* a year. On our current trajectory, we are going to need to raise about $5-10 million in the near future to scale up our research. We have already raised $2.15 million and we will start fundraising again this summer. But it's not like we need $50 million to make progress (although it would certainly help!)

Replies from: GeneSmith, lcmgcd
comment by GeneSmith · 2025-02-20T03:29:24.208Z · LW(p) · GW(p)

The issue is that it takes a long time for PGC-like cells to develop to eggs, if you're strictly following the natural developmental trajectory.

Thanks for the clarification. I'll amend the original post.

comment by lemonhope (lcmgcd) · 2025-02-21T09:00:59.387Z · LW(p) · GW(p)

Could you do all the research on a boat in the ocean? Excuse the naive question.

comment by Metacelsus · 2025-02-20T03:01:49.909Z · LW(p) · GW(p)

On the topic of SuperSOX and how it relates to making eggs from stem cells: 

The requirement for an existing embryo (to transfer the edited stem cells into) means that having an abundant source of eggs is important for this method, both for optimizing the method by screening many conditions, and for eventual use in the clinic.

So, in vitro oogenesis could play a key role here.

For both technologies, I think the main bottleneck right now is nonhuman primate facilities for testing.

Finally: we need to be sure not to cause another He Jiankui event (where an irresponsible study resulted in a crackdown on the field). Epigenetic issues could cause birth defects, and if this happens, it will set back the field by quite a lot. So safety is important! Nobody cares if their baby has the genes for 200 IQ, if the baby also has Prader-Willi syndrome.

comment by LWLW (louis-wenger) · 2025-02-20T01:55:47.539Z · LW(p) · GW(p)

How much do people know about the genetic components of personality traits like empathy? Editing personality traits might be almost as or even more controversial than modifying “vanity” traits. But in the sane world you sketched out this could essentially be a very trivial and simple first step of alignment. “We are about to introduce agents more capable than any humans except for extreme outliers: let’s make them nice.” Also, curing personality disorders like NPD and BPD would do a lot of good for subjective wellbeing. 

I guess I’m just thinking of a failure mode where we create superbabies who solve task-alignment and then control the world. The people running the world might be smarter than the current candidates for god-emperor, but we’re still in a god-emperor world. This also seems like the part of the plan most likely to fail. The people who would pursue making their children superbabies might be disinclined towards making their children more caring.

Replies from: GeneSmith, David_Gross
comment by GeneSmith · 2025-02-20T02:05:16.566Z · LW(p) · GW(p)

Very little at the moment. Unlike intelligence and health, a lot of the variance in personality traits seems to be the result of combinations of genes rather than purely additive effects.

This is one of the few areas where AI could potentially make a big difference. You need more complex models to figure out the relationship between genes and personality.

But the actual limiting factor right now is not model complexity, but rather data. Even if you have more complex models, I don't think you're going to be able to actually train them until you have a lot more data. Probably a minimum of a few million samples.

We'd like to look into this problem at some point and make scaling law graphs like the ones we made for intelligence and disease risk but haven't had the time yet.

Replies from: valley9, louis-wenger
comment by Ebenezer Dukakis (valley9) · 2025-02-20T21:10:17.415Z · LW(p) · GW(p)

This is starting to sound a lot like AI actually. There's a "capabilities problem" which is easy, an "alignment problem" which is hard, and people are charging ahead to work on capabilities while saying "gee, we'd really like to look into alignment at some point".

Replies from: TsviBT
comment by TsviBT · 2025-02-20T21:19:42.712Z · LW(p) · GW(p)

It's utterly different.

  • Humans are very far from fooming.
    • Fixed skull size; no in silico simulator.
    • Highly dependent on childhood care.
    • Highly dependent on culturally transmitted info, including in-person.
  • Humans, genomically engineered or not, come with all the stuff that makes humans human. Fear, love, care, empathy, guilt, language, etc. (It should be banned, though, to remove any human universals; defining that seems tricky.) So new humans are close to us in values-space, and come with the sort of corrigibility that humans have, which is, you know, not a guarantee of safety, but still some degree of (okay I'm going to say something that will trigger your buzzword detector but I think it's a fairly precise description of something clearly real) radical openness to co-creating shared values.
Replies from: mateusz-baginski, valley9
comment by Mateusz Bagiński (mateusz-baginski) · 2025-02-21T06:59:21.393Z · LW(p) · GW(p)

Fixed skull size

Artificial wombs may remove this bottleneck.

Replies from: TsviBT
comment by TsviBT · 2025-02-21T07:06:02.669Z · LW(p) · GW(p)

No I mean like a person can't 10x their compute.

comment by Ebenezer Dukakis (valley9) · 2025-02-20T21:38:38.483Z · LW(p) · GW(p)

Humans are very far from fooming.

Tell that to all the other species that went extinct as a result of our activity on this planet?

I think it's possible that the first superbaby will be aligned, same way it's possible that the first AGI will be aligned. But it's far from a sure thing. It's true that the alignment problem is considerably different in character for humans vs AIs. Yet even in this particular community, it's far from solved -- consider Brent Dill, Ziz, Sam Bankman-Fried, etc.

Not to mention all of history's great villains, many of whom believed themselves to be superior to the people they afflicted. If we use genetic engineering to create humans which are actually, massively, undeniably superior to everyone else, surely that particular problem is only gonna get worse. If this enhancement technology is going to be widespread, we should be using the history of human activity on this planet as a prior. Especially the history of human behavior towards genetically distinct populations with overwhelming technological inferiority. And it's not pretty.

So yeah, there are many concrete details which differ between these two situations. But in terms of high-level strategic implications, I think there are important similarities. Given the benefit of hindsight, what should MIRI have done about AI back in 2005? Perhaps that's what we should be doing about superbabies now.

Replies from: TsviBT, sharmake-farah
comment by TsviBT · 2025-02-20T21:45:10.703Z · LW(p) · GW(p)

Tell that to all the other species that went extinct as a result of our activity on this planet?

Individual humans.

Brent Dill, Ziz, Sam Bankman-Fried, etc.

  1. These are incredibly small peanuts compared to AGI omnicide.
  2. You're somehow leaving out all the people who are smarter than those people, and who were great for the people around them and humanity? You've got like 99% actual alignment or something, and you're like "But there's some chance it'll go somewhat bad!"... Which, yes, we should think about this, and prepare and plan and prevent, but it's just a totally totally different calculus from AGI.
Replies from: sharmake-farah, valley9
comment by Noosphere89 (sharmake-farah) · 2025-02-20T22:18:53.783Z · LW(p) · GW(p)

I'd flag here that the 99% number seems very easy to falsify, solely based on the 20th century experience of the two world wars, as well as the genocides/civil wars of the 20th century, and it's quite often the case that one human group is vastly unaligned to another human group, causing mass strife and chaos.

Replies from: TsviBT, kman
comment by TsviBT · 2025-02-20T22:37:44.396Z · LW(p) · GW(p)

I'm saying that (waves hands vigorously) 99% of people are beneficent or "neutral" (like, maybe not helpful / generous / proactively kind, but not actively harmful, even given the choice) in both intention and in action. That type of neutral already counts as in a totally different league of being aligned compared to AGI.

one human group is vastly unaligned to another human group

Ok, yes, conflict between large groups is something to be worried about, though I don't much see the connection with germline engineering. I thought we were talking about, like, some liberal/techie/weirdo people have some really really smart kids, and then those kids are somehow a threat to the future of humanity that's comparable to a fast unbounded recursive self-improvement AGI foom.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2025-02-20T23:00:23.256Z · LW(p) · GW(p)

I'm saying that (waves hands vigorously) 99% of people are beneficent or "neutral" (like, maybe not helpful / generous / proactively kind, but not actively harmful, even given the choice) in both intention and in action. That type of neutral already counts as in a totally different league of being aligned compared to AGI.

I think this is ultimately the crux, at least relative to my values, I'd expect at least 20% in America to support active efforts to harm me or my allies/people I'm altruistic to, and do so fairly gleefully (an underrated example here is voting for people that will bring mass harm to groups they hate, and hope that certain groups go extinct).

Ok, yes, conflict between large groups is something to be worried about, though I don't much see the connection with germline engineering. I thought we were talking about, like, some liberal/techie/weirdo people have some really really smart kids, and then those kids are somehow a threat to the future of humanity that's comparable to a fast unbounded recursive self-improvement AGI foom.

Okay, the connection was to point out that lots of humans are not in fact aligned with each other, and I don't particularly think superbabies are a threat to the future of humanity that is comparable to AGI, so my point was more so that the alignment problem is not naturally solved in human-to human interactions.

Replies from: TsviBT
comment by TsviBT · 2025-02-20T23:13:32.956Z · LW(p) · GW(p)

lots of humans are not in fact aligned with each other,

Ok... so I think I understand and agree with you here. (Though plausibly we'd still have significant disagreement; e.g. I think it would be feasible to bring even Hitler back and firmly away from the death fever if he spent, IDK, a few years or something with a very skilled listener / psychic helper.)

The issue in this discourse, to me, is comparing this with AGI misalignment. It's conceptually related in some interesting ways, but in practical terms they're just extremely quantitatively different. And, naturally, I care about this specific non-comparability being clear because it says whether to do human intelligence enhancement; and in fact many people cite this as a reason to not do human IE.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2025-02-20T23:28:30.539Z · LW(p) · GW(p)

The issue in this discourse, to me, is comparing this with AGI misalignment. It's conceptually related in some interesting ways, but in practical terms they're just extremely quantitatively different. And, naturally, I care about this specific non-comparability being clear because it says whether to do human intelligence enhancement; and in fact many people cite this as a reason to not do human IE.

Re human vs AGI misalignment, I'd say this is true, in that human misalignments don't threaten the human species, or even billions of people, whereas AI does, so in that regard I admit human misalignment is less impactful than AGI misalignment.

Of course, if we succeed at creating aligned AI, then human misalignments matter much, much more.

(Rest of the comment is a fun tangentially connected scenario, but ultimately is a hypothetical that doesn't matter that much for AI alignment.)

Ok... so I think I understand and agree with you here. (Though plausibly we'd still have significant disagreement; e.g. I think it would be feasible to bring even Hitler back and firmly away from the death fever if he spent, IDK, a few years or something with a very skilled listener / psychic helper.)

At the very least, that would require him to not be in control of Germany by that point, and IMO most value change histories rely on changing their values in the child-teen years, because that's when their sensitivity to data is maximal. After that, the plasticity/sensitivity of values goes way down when you are an adult, and changing values is much, much harder.

Replies from: TsviBT
comment by TsviBT · 2025-02-20T23:53:47.026Z · LW(p) · GW(p)

I'd say this is true, in that human misalignments don't threaten the human species, or even billions of people, whereas AI does, so in that regard I admit human misalignment is less impactful than AGI misalignment.

Right, ok, agreed.

the plasticity/sensitivity of values goes way down when you are an adult, and changing values is much, much harder.

I agree qualitatively, but I do mean to say he's in charge of Germany, but somehow has hours of free time every day to spend with the whisperer. If it's in childhood I would guess you could do it with a lot less contact, though not sure. TBC, the whisperer here would be considered a world-class, like, therapist or coach or something, so I'm not saying it's easy. My point is that I have a fair amount of trust in "human decision theory" working out pretty well in most cases in the long run with enough wisdom [LW · GW].

I even think something like this is worth trying with present-day AGI researchers (what I call "confrontation-worthy empathy [LW · GW]"), though that is hard mode because you have so much less access.

comment by kman · 2025-02-20T22:44:42.809Z · LW(p) · GW(p)

My interpretation is that you're 99% of the way there in terms of work required if you start out with humans rather than creating a de novo mind, even if many/most humans currently or historically are not "aligned". Like, you don't need very many bits of information to end up with a nice "aligned" human. E.g. maybe you lightly select their genome for prosociality + niceness/altruism + wisdom, and treat them nicely while they're growing up, and that suffices for the majority of them.

Replies from: sharmake-farah, TsviBT
comment by Noosphere89 (sharmake-farah) · 2025-02-20T23:01:35.583Z · LW(p) · GW(p)

I'd actually maybe agree with this, though with the caveat that there's a real possibility you will need a lot more selection/firepower as a human gets smarter, because you lack the ability to technically control humans in the way you can control AIs.

comment by TsviBT · 2025-02-20T22:46:53.274Z · LW(p) · GW(p)

Also true, though maybe only for O(99%) of people.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2025-02-20T23:03:33.410Z · LW(p) · GW(p)

I'd probably bump that down to O(90%) at max, and this could get worse (I'm downranking based on the number of psychopaths/sociopaths and narcissists that exist).

comment by Ebenezer Dukakis (valley9) · 2025-02-20T22:15:46.930Z · LW(p) · GW(p)

These are incredibly small peanuts compared to AGI omnicide.

The jailbreakability and other alignment failures of current AI systems are also incredibly small peanuts compared to AGI omnicide. Yet they're still informative. Small-scale failures give us data about possible large-scale failures.

You're somehow leaving out all the people who are smarter than those people, and who were great for the people around them and humanity? You've got like 99% actual alignment or something

Are you thinking of people such as Sam Altman, Demis Hassabis, Elon Musk, and Dario Amodei? If humans are 99% aligned, how is it that we ended up in a situation where major lab leaders look so unaligned? MIRI and friends had a fair amount of influence to shape this situation and align lab leaders, yet they appear to have failed by their own lights. Why?

When it comes to AI alignment, everyone on this site understands that if a "boxed" AI acts nice, that's not a strong signal of actual friendliness. The true test of an AI's alignment is what it does when it has lots of power and little accountability.

Maybe something similar is going on for humans. We're nice when we're powerless, because we have to be. But giving humans lots of power with little accountability doesn't tend to go well.

Looking around you, you mostly see nice humans. That could be because humans are inherently nice. It could also be because most of the people around you haven't been given lots of power with little accountability.

Dramatic genetic enhancement could give enhanced humans lots of power with little accountability, relative to the rest of us.

[Note also, the humans you see while looking around are strongly selected for, which becomes quite relevant if the enhancement technology is widespread. How do you think you'd feel about humanity if you lived in Ukraine right now?]

Which, yes, we should think about this, and prepare and plan and prevent, but it's just a totally totally different calculus from AGI.

I want to see actual, detailed calculations of p(doom) from supersmart humans vs supersmart AI, conditional on each technology being developed. Before charging ahead on this, I want a superforecaster-type person to sit down, spend a few hours, generate some probability estimates, publish a post, and request that others red-team their work. I don't feel like that is a lot to ask.

Replies from: TsviBT
comment by TsviBT · 2025-02-20T22:43:02.564Z · LW(p) · GW(p)

Small-scale failures give us data about possible large-scale failures.

But you don't go from a 160 IQ person with a lot of disagreeability and ambition, who ends up being a big commercial player or whatnot, to 195 IQ and suddenly get someone who just sits in their room for a decade and then speaks gibberish into a youtube livestream and everyone dies, or whatever. The large-scale failures aren't feasible for humans acting alone. For humans acting very much not alone, like big AGI research companies, yeah that's clearly a big problem. But I don't think the problem is about any of the people you listed having too much brainpower.

(I feel we're somewhat talking past each other, but I appreciate the conversation and still want to get where you're coming from.)

Replies from: valley9
comment by Ebenezer Dukakis (valley9) · 2025-02-20T22:51:02.147Z · LW(p) · GW(p)

For humans acting very much not alone, like big AGI research companies, yeah that's clearly a big problem.

How about a group of superbabies that find and befriend each other? Then they're no longer acting alone.

I don't think the problem is about any of the people you listed having too much brainpower.

I don't think problems caused by superbabies would look distinctively like "having too much brainpower". They would look more like the ordinary problems humans have with each other. Brainpower would be a force multiplier.

(I feel we're somewhat talking past each other, but I appreciate the conversation and still want to get where you're coming from.)

Thanks. I mostly just want people to pay attention to this problem. I don't feel like I have unique insight. I'll probably stop commenting soon, since I think I'm hitting the point of diminishing returns.

Replies from: TsviBT
comment by TsviBT · 2025-02-20T22:58:48.868Z · LW(p) · GW(p)

I mostly just want people to pay attention to this problem.

Ok. To be clear, I strongly agree with this. I think I've been responding to a claim (maybe explicit, or maybe implicit / imagined by me) from you like: "There's this risk, and therefore we should not do this.". Where I want to disagree with the implication, not the antecedent. (I hope to more gracefully agree with things like this. Also someone should make a LW post with a really catchy term for this implication / antecedent discourse thing, or link me the one that's already been written.)

But I do strongly disagree with the conclusion "...we should not do this", to the point where I say "We should basically do this as fast as possible, within the bounds of safety and sanity.". The benefits are large, the risks look not that bad and largely ameliorable, and in particular the need regarding existential risk is great and urgent.

That said, more analysis is definitely needed. Though in defense of the pro-germline engineering position, there's few resources, and everyone has a different objection.

comment by Noosphere89 (sharmake-farah) · 2025-02-20T21:49:40.357Z · LW(p) · GW(p)

I will go further, and say the human universals are nowhere near strong enough to assume that alignment of much more powerful people will automatically/likely happen, or that not aligning them produces benevolent results. The reason for this is that humans are already misaligned, in many cases very severely, to each other, so allowing human augmentation without institutional reform makes things a lot worse by default.

It is better to solve the AI alignment problem first, then have a legal structure created by AIs that can make human genetic editing safe, rather than try to solve the human alignment problem:

https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-make-superbabies#jgDtAPXwSucQhPBwf [LW(p) · GW(p)]

comment by LWLW (louis-wenger) · 2025-02-20T02:40:23.493Z · LW(p) · GW(p)

I’m sure you’ve already thought about this, but it seems like the people who would be willing and able to jump through all of the hoops necessary would likely have a higher propensity towards power-seeking and dominance. So if you don’t edit the personality as well, what was it all for besides creating a smarter god-emperor? I think that in the sane world you’ve outlined where people deliberately avoid developing AGI, an additional level of sanity would be holding off on modifying intelligence until we have the capacity to perform the personality edits to make it safe.


I can just imagine this turning into a world where the rich who are able to make their children superbabies compete with the rest of the elite over whose child will end up ruling the world. 

I’m sorry but I’d rather be turned into paper-clips than live in a world where a god-emperor can decide to torture me with their AGI-slave for the hell of it. How is that a better world for anyone but the god-emperor? But people are so blind and selfish, they just assume that they or their offspring would be god-emperor. At least with AI people are scared enough that they’re putting focused effort into trying to make it nice. People won’t put that much effort into their children.


I mean hell, figuring out personality editing would probably just make things backfire. People would choose to make their kids more ruthless, not less. 

Replies from: GeneSmith, kman
comment by GeneSmith · 2025-02-20T03:28:20.079Z · LW(p) · GW(p)

It's a fair concern. But the problem of predicting personality can be solved! We just need more data.

I also worry somewhat about brilliant psychopaths. But making your child a psychopath is not necessarily going to give them an advantage.

Also can you imagine how unpleasant raising a psychopath would be? I don't think many parents would willingly sign up for that.

Replies from: louis-wenger
comment by LWLW (louis-wenger) · 2025-02-20T03:35:04.167Z · LW(p) · GW(p)

I certainly wouldn’t sign up to do that, but the type of individual I’m concerned about likely wouldn’t mind sacrificing nannies if their lineage could “win” in some abstract sense. I think it’s great that you’re proposing a plan beyond “pray the sand gods/Sam Altman are benevolent.” But alignment is going to be an issue for superhuman agents, regardless of if they’re human or not.

Replies from: GeneSmith
comment by GeneSmith · 2025-02-20T03:42:26.750Z · LW(p) · GW(p)

Agreed. I've actually had a post in draft for a couple of years that discusses some of the parallels between alignment of AI agents and alignment of genetically engineered humans.

I think we have a huge advantage with humans simply because there isn't the same potential for runaway self-improvement. But in the long term (multiple generations), it would be a concern.

Replies from: louis-wenger, sharmake-farah, valley9
comment by LWLW (louis-wenger) · 2025-02-20T03:58:15.168Z · LW(p) · GW(p)

That sounds very interesting! I always look forward to reading your posts. I don’t know if you know any policy people, but in this world, it would need to be punishable by jail-time to genetically modify intelligence without selecting for pro-sociality. Any world where that is not the case seems much, much worse than just getting turned into paper-clips.

comment by Noosphere89 (sharmake-farah) · 2025-02-20T04:07:33.150Z · LW(p) · GW(p)

I think the runaway self-improvement problem is vastly outweighed by other problems with aligning humans, like the fact that any control technique on AI would be illegal because of it being essentially equivalent to brainwashing, such that I consider AIs much more alignable than humans, and I think the human intelligence augmentation path is way more risky and fraught than people think for alignment purposes.

Replies from: louis-wenger, kman
comment by LWLW (louis-wenger) · 2025-02-20T04:11:22.747Z · LW(p) · GW(p)

I agree. At least I can laugh if the AGI just decides it wants me as paperclips. There will be nothing to laugh about with ruthless power-seeking humans with godlike power.

comment by kman · 2025-02-20T17:49:34.472Z · LW(p) · GW(p)

like the fact that any control technique on AI would be illegal because of it being essentially equivalent to brainwashing, such that I consider AIs much more alignable than humans

 

A lot of (most?) humans end up nice without needing to be controlled / "aligned", and I don't particularly expect this to break if they grow up smarter. Trying to control / "align" them wouldn't work anyway, which is also what I predict will happen with sufficiently smart AI.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2025-02-20T18:02:49.246Z · LW(p) · GW(p)

I think this is my disagreement, in that I don't think most humans are in fact nice/aligned to each other by default. The reason this doesn't lead to catastrophe, broadly speaking, is a combo of institutions/mechanism design that can leave people well off even when they're misaligned (capitalism and the rule of law being one such example), and the fact that inequalities aren't so great that individual humans can found their own societies, except in special cases.

Even here, I'd argue that human autocracies are very often misaligned to their citizens values very severely.

To be clear about what I'm not claiming, I'm not saying that alignment is worthless, or alignment always or very often fails, because it's consistent with a world where >50-60% of alignment attempts are successful.

This means I'm generally much more scared of very outlier-smart humans, for example a +7-12 SD human in power over a large group of citizens (assuming no other crippling disabilities), unless they are very pro-social/aligned to their citizenry.

I'm not claiming that alignment will not work, or even that will very often not work, but rather that the chance of failure is real and the stakes are quite high long-term.

(And that's not even addressing how you could get super-smart people to work on the alignment problem).

Replies from: louis-wenger, kman
comment by LWLW (louis-wenger) · 2025-02-20T18:22:45.855Z · LW(p) · GW(p)

This is just a definition for the sake of definition, but I think you could define a human as aligned if they could be given an ASI slave and not be an S-risk. I really think that under this definition, the absolute upper bound of “aligned” humans is 5%, and I think it’s probably a lot lower.

Replies from: sharmake-farah, kman
comment by Noosphere89 (sharmake-farah) · 2025-02-20T18:26:08.668Z · LW(p) · GW(p)

I'm more optimistic, in that the upper bound could be as high as 50-60%, but yeah the people in power are unfortunately not part of this, and I'd only trust 25-30% of the population in practice if they had an ASI slave.

Replies from: kman
comment by kman · 2025-02-20T18:29:33.393Z · LW(p) · GW(p)

What would it mean for them to have an "ASI slave"? Like having an AI that implements their personal CEV?

Replies from: louis-wenger
comment by LWLW (louis-wenger) · 2025-02-20T18:31:05.200Z · LW(p) · GW(p)

Yeah something like that, the ASI is an extension of their will.

Replies from: kman
comment by kman · 2025-02-20T23:23:30.127Z · LW(p) · GW(p)

So you think that, for >95% of currently living humans, the implementation of their CEV would constitute an S-risk in the sense of being worse than extinction in expectation? This is not at all obvious to me; in what way do you expect their CEVs to prefer net suffering?

comment by kman · 2025-02-20T18:18:51.403Z · LW(p) · GW(p)

(And that's not even addressing how you could get super-smart people to work on the alignment problem).

I mean if we actually succeeded at making people who are +7 SD in a meaningful way, I'd expect that at least a good chunk of them would figure out for themselves that it makes sense to work on it.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2025-02-20T18:29:04.778Z · LW(p) · GW(p)

That requires either massive personality changes to make them more persuadable, or massive willingness of people to put genetic changes in their germline, and I don't expect either of these to happen before AI automates everything and either takes over (leaving us extinct) or humans/other AIs successfully control/align AIs.

(A key reason for this is that Genesmith admitted that the breakthroughs in germline engineering can't transfer to the somatic side, and that means we'd have to wait 25-30 years in order for it to grow, minimum given that society won't maximally favor the genetically lucky, and that's way beyond most plausible AI timelines at this point)

Replies from: kman
comment by kman · 2025-02-20T18:35:50.038Z · LW(p) · GW(p)

If they're that smart, why will they need to be persuaded?

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2025-02-20T18:41:15.876Z · LW(p) · GW(p)

Because they might consider other problems more worth their time, since smartness changes would change their values little.

And maybe they believe that AI alignment isn't impactful for technical/epistemic reasons.

I'm confused/surprised I need to make this point, because I don't automatically think they will be persuaded that AI alignment is a big problem they need to work on; some persuasion effort will likely still be required.

Replies from: kman
comment by kman · 2025-02-20T23:11:20.166Z · LW(p) · GW(p)

Because they might consider other problems more worth their time, since smartness changes would change their values little.

I mean if they care about solving problems at all, and we are in fact correct about AGI ruin, then they should predictably come to view it as the most important problem and start to work on it?

Are you imagining they're super myopic or lazy and just want to think about math puzzles or something? If so, my reply is that even if some of them ended up like that, I'd be surprised if they all did, and if they did, that would be a failure of the enhancement. The aim isn't to create people who we will then carefully persuade to work on the problem; the aim is for some of them to be smart + caring + wise enough to see the situation we're in and decide for themselves to take it on.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2025-02-20T23:30:35.349Z · LW(p) · GW(p)

More so that I'm imagining they might not even have heard of the argument. It's helpful to note that people like Terence Tao and Timothy Gowers are excellent in their chosen fields, but most people who have a big impact on the world don't go into AI alignment.

Remember, superintelligence is not omniscience.

So I don't expect them to be self-motivated to work on this specific problem without at least a little persuasion.

I'd expect a few superintelligent adults to join alignment efforts, but nowhere near thousands or tens of thousands, and I'd upper bound it at 300-500 new researchers at most in 15-25 years.

Much less impactful than automating AI safety.

Replies from: kman
comment by kman · 2025-02-20T23:50:11.318Z · LW(p) · GW(p)

Much less impactful than automating AI safety.

I don't think this will work.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2025-02-20T23:57:21.810Z · LW(p) · GW(p)

How much probability do you assign to automating AI safety not working in time? I believe that preparing to automate AI safety is probably the highest-value work in pure ability to reduce X-risk probability, assuming it does work, so I assign much higher EV to automating AI safety relative to other approaches.

Replies from: kman
comment by kman · 2025-02-21T00:13:29.949Z · LW(p) · GW(p)

I think I'm at <10% that non-enhanced humans will be able to align ASI in time, and if I condition on them succeeding somehow I don't think it's because they got AIs to do it for them. Like maybe you can automate some lower level things that might be useful (e.g. specific interpretability experiments), but at the end of the day someone has to understand in detail how the outcome is being steered or they're NGMI. Not sure exactly what you mean by "automating AI safety", but I think stronger forms of the idea are incoherent (e.g. "we'll just get AI X to figure it all out for us" has the problem of requiring X to be aligned in the first place).

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2025-02-21T00:25:57.362Z · LW(p) · GW(p)

As for how a plan to automate AI safety would work out in practice, a relatively strong version of the concept is laid out in the post below; another post by the same author, talking more about the big risks raised in that post's comments, is forthcoming:

https://www.lesswrong.com/posts/TTFsKxQThrqgWeXYJ/how-might-we-safely-pass-the-buck-to-ai [LW · GW]

In general, I think the crux is this: in most timelines (at a lower bound, 65-70%) where AGI is developed relatively soon (2030-2045, roughly) and the alignment problem isn't solvable by default, or is at least non-trivially tricky to solve, conditioning on alignment success looks more like "we've successfully figured out how to prepare for AI automation of everything, and we managed to use alignment and control techniques well enough that we can safely pass most of the effort to AI", rather than other end states like "humans are deeply enhanced" or "lawmakers actually coordinated to pause AI, and are actually giving funding to alignment organizations such that we can make AI safe."

comment by Ebenezer Dukakis (valley9) · 2025-02-20T21:14:22.883Z · LW(p) · GW(p)

If you look at the grim history of how humans have treated each other on this planet, I don't think it's justified to have a prior that this is gonna go well.

I think we have a huge advantage with humans simply because there isn't the same potential for runaway self-improvement.

Humans didn't have the potential for runaway self-improvement relative to apes. That was little comfort for the apes.

comment by kman · 2025-02-20T16:41:25.245Z · LW(p) · GW(p)

I mean hell, figuring out personality editing would probably just make things backfire. People would choose to make their kids more ruthless, not less. 

Not at all obvious to me this is true. Do you mean to say a lot of people would, or just some small fraction, and you think a small fraction is enough to worry?

Replies from: louis-wenger
comment by LWLW (louis-wenger) · 2025-02-20T17:28:13.176Z · LW(p) · GW(p)

I should have clarified: I meant a small fraction, and that that's enough to worry about.

Replies from: TsviBT, kman
comment by TsviBT · 2025-02-20T18:08:31.570Z · LW(p) · GW(p)

After I finish my methods article, I want to lay out a basic picture of genomic emancipation. Genomic emancipation means making genomic liberty a right and a practical option. In my vision, genomic liberty is quite broad: it would include for example that parents should be permitted and enabled to choose:

  • to enhance their children (e.g. supra-normal health; IQ at the outer edges of the human envelope); and/or
  • to propagate their own state even if others would object (e.g. blind people can choose to have blind children); and/or
  • to make their children more normal even if there's no clear justification through beneficence (I would go so far as to say that, for example, parents can choose to make their kid have a lower IQ than a random embryo from the parents would have in expectation, if that brings the kid closer to what's normal).

These principles are narrower than general genomic liberty ("parents can do whatever they please"), and I think they have stronger justifications. I want to make these narrower "tentpole" principles inside of the genomic liberty tent, because the wider principle isn't really tenable, in part for the reasons you bring up. There are genomic choices that should be restricted--perhaps by law, or by professional ethics for clinicians, or by avoiding making it technically feasible, or by social stigma. (The implementation seems quite tricky; any compromise of full genomic liberty does come with costs as well as preventing costs. And at least to some small extent, it erodes the force of genomic liberty's opposition to eugenics, which seeks to impose population-wide forces on individuals' procreative choices.)

Examples:

  • As you say, if there's a very high risk of truly egregious behavior, that should be pushed against somehow.
    • Example: People should not make someone who is 170 Disagreeable Quotient and 140 Unconscientiousness Quotient, because that is most of the way to being a violent psychopath.
    • Counterexample: People should, given good information, be able to choose to have a kid who is 130 Disagreeable Quotient and 115 Unconscientiousness Quotient, because, although there might be associated difficulties, that's IIUC a personality profile enriched with creative genius.
  • People should not be allowed to create children with traits specifically designed to make the children suffer. (Imagine for instance a parent who thinks that suffering, in itself, builds character or makes you productive or something.)
  • Case I'm unsure about, needs more investigation: Autism plus IQ might be associated with increased suicidal ideation (https://www.sciencedirect.com/science/article/abs/pii/S1074742722001228). Not sure what the implication should be.

Another thing to point out is that to a significant degree, in the longer term, many of these things should self-correct, through the voice of the children (e.g. if a deaf kid grows up and starts saying "hey, listen, I love my parents and I know they wanted what was best for me, but I really don't like that I didn't get to hear music and my love's voice until I got my brain implant, please don't do the same for your kid"), and through seeing the results in general. If someone is destructively ruthless, it's society's job to punish them, and it's parents' job to say "ah, that is actually not good".

comment by kman · 2025-02-20T18:07:22.428Z · LW(p) · GW(p)

In that case I'd repeat GeneSmith's point from another comment: "I think we have a huge advantage with humans simply because there isn't the same potential for runaway self-improvement." If we have a whole bunch of super smart humans of roughly the same level who are aware of the problem, I don't expect the ruthless ones to get a big advantage.

I mean I guess there is some sort of general concern here about how defense-offense imbalance changes as the population gets smarter. Like if there's some easy way to destroy the world that becomes accessible with IQ > X, and we make a bunch of people with IQ > X, and a small fraction of them want to destroy the world for some reason, are the rest able to prevent it? This is sort of already the situation we're in with AI: we look to be above the threshold of "ability to summon ASI", but not above the threshold of "ability to steer the outcome". In the case of AI, I expect making people smarter differentially speeds up alignment over capabilities: alignment is hard and we don't know how to do it, while hill-climbing on capabilities is relatively easy and we already know how to do it.

I should also note that we have the option of concentrating early adoption among nice, sane, x-risk aware people (though I also find this kind of cringe in a way and predict this would be an unpopular move). I expect this to happen by default to some extent.

comment by David Gross (David_Gross) · 2025-02-20T03:23:24.593Z · LW(p) · GW(p)

There are some promising but under-utilized interventions for improving personality traits / virtues in already-developed humans,* [? · GW] and a dearth of research about possible interventions for others. If we want more of that sort of thing, we might be better advised to fill in some of those gaps rather than waiting for a new technology and a new generation of megalopsychebabies.

comment by Kris Moore · 2025-02-20T17:16:02.746Z · LW(p) · GW(p)

The "Black Death selection" finding you mention was subject to a very strong rebuttal preprinted in March 2023 and published yesterday in Nature. The original paper committed some pretty basic methodological errors[1] and, in my opinion, it's disappointing that Nature did not decide to retract it. None of their claims of selection – neither the headline ERAP2 variant or the "half a dozen others" you refer to – survive the rebuttal's more rigorous reanalysis. I do some work in ancient DNA and am aware of analyses on other datasets (published and unpublished) that fail to replicate the original paper's findings.

  1. ^

    Some of the most glaring (but not necessarily most consequential): a failure to correctly estimate the allele frequencies underlying the selection analysis; and use of a genotyping pipeline poorly suited to ancient DNA, which meant that 80% of the genetic variants they "analysed" were likely completely artefactual and did not exist.

Replies from: lcmgcd
comment by lemonhope (lcmgcd) · 2025-02-21T08:51:28.815Z · LW(p) · GW(p)

use of a genotyping pipeline poorly suited to ancient DNA, which meant that 80% of the genetic variants they "analysed" were likely completely artefactual and did not exist.

Brutal!! I didn't know this gotcha existed. I hope there aren't too many papers silently gotch'd by it. Sounds like the type of error that could easily be widespread and unnoticed, if the statistical trace it leaves isn't always obvious.

comment by Purplehermann · 2025-02-20T08:27:00.107Z · LW(p) · GW(p)

Have you thought about how to get the data yourself?

Perhaps offering payment to people willing to get IQ tested and give a genetic sample, and paying more for higher scores on the test?

I understand that money is an issue, but as long as you're raising money, this seems like an area you could plug infinite money into and get returns.

comment by PipFoweraker · 2025-02-19T21:40:31.130Z · LW(p) · GW(p)

Thanks for the write-up. I recall a conversation introducing me to all these ideas in Berkeley last year, and it's going to be very handy having a resource to point people at (and so I don't misremember details about things like the Yamanaka factors!).

Am I reading the current plan correctly such that the path is something like:
Get funding -> Continue R+D through primate trials -> Create an entity in a science-friendly, non-US state for human trials -> first rounds of Superbabies? That scenario seems like it would require a bunch of medical tourism, which I imagine is probably not off the table for people with the resources and mindset willing to participate in this.

Replies from: GeneSmith
comment by GeneSmith · 2025-02-19T21:55:17.818Z · LW(p) · GW(p)

Yes, that's more or less the plan. I think it's pretty much inevitable that the United States will fully legalize germline gene editing at some point. It's going to be pretty embarrassing if rich American parents are flying abroad to have healthier children.

You can already see the tide starting to turn on this. Last month, Nature actually published an article about germline gene editing. That would NEVER have happened even just a few years ago.

When you go to CRISPR conferences on gene editing, many of the scientists will tell you in private that germline gene editing makes sense. But if you ask them to go on the record as supporting it publicly, they will of course refuse.

At some point there's going to be a preference cascade. People are going to start wondering why the US government is blocking technology that could make its future citizens healthier, happier, and smarter.

comment by RichardJActon · 2025-02-20T20:45:01.876Z · LW(p) · GW(p)

There are a couple of major problems with naively intervening to edit sites associated with some phenotype in a GWAS or polygenic risk score.

  1. The SNP itself is (usually) not causal. Genotyping arrays select SNPs whose genotype is correlated with a region around the SNP; these SNPs are said to be in linkage with that region, as the two tend to be inherited together when recombination happens in meiosis. This is a matter of degree, and linkage scores allow thresholds to be set for how indicative a SNP is about the genotype of a given region.
    If it is not the SNP itself but rather something in linkage with the SNP that is causing the effect, editing the SNP has no reason to affect the trait in question.
    It is not trivial to figure out what in linkage with a SNP might be causing an effect.
    Mendelian randomisation (explainer: https://www.bmj.com/content/362/bmj.k601) is a method that permits the identification of causal relationships from observational genetic studies, which can help to overcome this issue.

  2. In practice epistatic interactions between QTLs matter for effect sizes, and you cannot naively add up the effect sizes of all the QTLs for a trait and expect the result to reflect the real effect size, even if >50% of effects are additive.

    Terminology:
    epistasis - when the effect of a genetic variant depends on the genotype of another gene or genes.
    QTL - quantitative trait locus, a location in the genome where the genotype is correlated with a quantitative phenotype e.g. height

    A hypothetical example of how epistasis can lead to non-additivity in QTLs (a code sketch follows at the end of this comment):
    SNPs linked with genes A, B and C are associated with some trait.
    Variant A is a more active kinase than regular A, which phosphorylates and activates C. So is variant B.
    Phosphorylation of C is effectively binary; it does not matter whether A or B does it, so editing either has the same effect.
    Variant C is active even when not phosphorylated, so editing A and/or B has no effect beyond that of editing C - except maybe side effects from now phosphorylating something else.

    In agronomy, where this has been studied best with the goal of engineering crops with specific complex traits, epistatic effects show up once you start trying this.

    For example:
    https://doi.org/10.1038/s41598-018-20690-w
    https://doi.org/10.1007/s00122-010-1517-0

The (much) bigger problem is not editing a bunch of bases in the embryo; it's knowing which ones to edit (safely).
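
To make the kinase example above concrete, here is a minimal sketch in Python; the gene names, effect sizes, and the `trait_value` function are purely illustrative assumptions, not data from any real system:

```python
# Toy model of the epistasis example above (hypothetical effect sizes).
# Variants A and B are redundant: either one suffices to activate C.
# Variant C makes C active unconditionally, masking edits to A and B.

def trait_value(has_a: bool, has_b: bool, has_c: bool) -> float:
    baseline = 10.0
    effect_of_active_c = 2.0                # assumed effect once C is active
    c_is_active = has_c or has_a or has_b   # any one variant suffices
    return baseline + (effect_of_active_c if c_is_active else 0.0)

# Marginal effects measured one variant at a time (what a GWAS would see):
for name, genotype in [("A", (True, False, False)),
                       ("B", (False, True, False)),
                       ("C", (False, False, True))]:
    print(name, trait_value(*genotype) - trait_value(False, False, False))
# Each prints 2.0, so a naive additive model predicts +6.0 for all three edits.

combined = trait_value(True, True, True) - trait_value(False, False, False)
print("actual combined effect:", combined)  # 2.0, not 6.0
```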

Replies from: kman, lcmgcd
comment by kman · 2025-02-20T22:33:04.858Z · LW(p) · GW(p)

The SNP itself is (usually) not causal. Genotyping arrays select SNPs whose genotype is correlated with a region around the SNP; these SNPs are said to be in linkage with that region, as the two tend to be inherited together when recombination happens in meiosis. This is a matter of degree, and linkage scores allow thresholds to be set for how indicative a SNP is about the genotype of a given region.

This is taken into account by our models, and is why we see such large gains in editing power from increasing data set sizes: we're better able to find the causal SNPs. Our editing strategy assumes that we're largely hitting non-causal SNPs.

In practice epistatic interactions between QTLs matter for effect sizes, and you cannot naively add up the effect sizes of all the QTLs for a trait and expect the result to reflect the real effect size, even if >50% of effects are additive.

I'm not aware of any evidence for substantial effects of this sort on quantitative traits such as height. We're also adding up expected effects, and as long as those estimates are unbiased the errors should cancel out as you do enough edits.

One thing we're worried about is cases where haplotypes, rather than individual SNPs, have the small additive effects, and you get an unpredictable (potentially deleterious) effect if you edit to a rare haplotype even if all SNPs involved are common. Are you aware of any evidence suggesting this would be a problem?

Replies from: RichardJActon
comment by RichardJActon · 2025-02-21T00:51:11.722Z · LW(p) · GW(p)

Could you expand on in what sense you have 'taken this into account' in your models? What are you expecting to achieve by editing non-causal SNPs?

The first paper I linked is about epistatic effects on the additivity of QTLs for a quantitative trait, specifically heading date in rice, so it is evidence for this sort of effect on such a trait.

The general problem is that without a robust causal understanding of what an edit does, it is very hard to predict what sorts of problems might arise from novel combinations of variants in a haplotype. That's just the nature of complex systems: a single incorrect base in the wrong place may have no effect or cause a critical cascading failure. You don't know until you test it or have characterized the system so well you can graph out exactly what is going to happen. Just testing it in humans and seeing what happens is eventually going to hit something detrimental. When you are trying to do enhancement, you tend to need a positive expectation that it will be safe, not just no reason to think it won't be. Many healthy people would be averse to risking good health for their kid, even at a low probability of a bad outcome.

Replies from: kman
comment by kman · 2025-02-21T02:45:08.114Z · LW(p) · GW(p)

Could you expand on in what sense you have 'taken this into account' in your models? What are you expecting to achieve by editing non-causal SNPs?

If we have a SNP that we're 30% sure is causal, we expect to get 30% of its effect conditional on it being causal. Modulo any weird interaction stuff from rare haplotypes, which is a potential concern with this approach.

The first paper I linked is about epistatic effects on the additivity of QTLs for a quantitative trait, specifically heading date in rice, so it is evidence for this sort of effect on such a trait.

I didn't read your first comment carefully enough; I'll take a look at this.

Replies from: TsviBT, RichardJActon
comment by TsviBT · 2025-02-21T02:49:52.710Z · LW(p) · GW(p)

Can you comment your current thoughts on rare haplotypes?

comment by RichardJActon · 2025-02-21T07:50:12.429Z · LW(p) · GW(p)

I'm curious about the basis on which you are assigning a probability of causality without a method like mendelian randomisation, or something that tries to assign a probability of effect by interpreting the biology, like coding the output of something like SnpEff into an approximate probability of effect.

The logic of expecting 30% of an effect based on a 30% chance of causality seems like it will be pretty high variance and will only work out over a pretty large number of edits. It also assumes no unexpected effects from edits to SNPs that are non-causal for whatever trait you are targeting but might do something else when edited.
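
As a toy illustration of both halves of this point, here is a quick Monte Carlo with made-up numbers (the 30% causal probability and the uniform per-SNP effect are assumptions for illustration): the expected gain per edit is 30% of the causal effect, but the realized gain is high-variance for small edit counts and only concentrates around the expectation as the number of edits grows.

```python
import random

random.seed(0)
P_CAUSAL = 0.3    # assumed probability that a targeted SNP is causal
EFFECT = 1.0      # assumed effect size conditional on being causal
N_TRIALS = 10_000

for n_edits in (10, 100, 1000):
    gains = []
    for _ in range(N_TRIALS):
        # Each edit pays off only if the SNP happened to be causal.
        gains.append(sum(EFFECT for _ in range(n_edits)
                         if random.random() < P_CAUSAL))
    mean = sum(gains) / N_TRIALS
    sd = (sum((g - mean) ** 2 for g in gains) / N_TRIALS) ** 0.5
    print(f"{n_edits:5d} edits: mean gain per edit = {mean / n_edits:.3f}, "
          f"relative spread = {sd / mean:.2f}")
# Mean gain per edit stays ~0.3, while the relative spread
# falls off roughly as 1/sqrt(n_edits).
```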

comment by lemonhope (lcmgcd) · 2025-02-21T08:40:02.953Z · LW(p) · GW(p)

To dumb it down a bit, here's my made up example: you get +1 IQ if your brain has surplus oxygen in the blood flowing through it. There's 1000 ways to get a bit more oxygen in there, but with +1000 oxygen, you still only get +1 IQ.

Is that the idea?

Replies from: RichardJActon
comment by RichardJActon · 2025-02-21T10:39:52.380Z · LW(p) · GW(p)

Kind of. There are many changes that, in isolation, get you a bit more oxygen, but many of them act through the same mechanism, so you change 1000 things that each get you +1 oxygen on their own but in combination only get you +500.

To use a software analogy, imagine an object with two methods, where calling either of them sets a property of the object to true. It doesn't matter if you call both methods, or if you have a bunch of functions that call those methods; you still just get true. Calling either method, or any function that calls them, is going to be slightly correlated with an increased probability that the property of the object will be true, but the effects do not add. There are many ways to make it true, but making it true more times does not make it 'more true'.

If we change this from a boolean to an integer then some methods might only increment it if it is not already greater than some value specific to the method.
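
Rendered as a minimal Python sketch (the class and method names are just illustrative):

```python
class Pathway:
    """Boolean version: many redundant ways to flip one shared switch."""
    def __init__(self):
        self.active = False
    def method_a(self):
        self.active = True
    def method_b(self):          # redundant with method_a
        self.active = True

p = Pathway()
p.method_a()
p.method_b()                     # calling both is no different from one
print(p.active)                  # True -- there is no "more true"

class SaturatingPathway:
    """Integer version: each method only increments below its own cap."""
    def __init__(self):
        self.level = 0
    def boost(self, cap: int):
        if self.level <= cap:    # no effect once level exceeds this cap
            self.level += 1

q = SaturatingPathway()
for cap in (1, 1, 1):
    q.boost(cap)
print(q.level)                   # 2, not 3: the third boost is saturated
```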

comment by Metacelsus · 2025-02-20T03:32:56.161Z · LW(p) · GW(p)

Unfortunately monkeys (specifically marmosets) are not cheap. To demonstrate germline transmission (the first step towards demonstrating safety in humans), Sergiy needs $4 million.

And marmosets are actually the cheapest monkey. (Also, as New World monkeys, marmosets are more distantly related to humans than rhesus or cynomolgus monkeys are.)

Replies from: GeneSmith
comment by GeneSmith · 2025-02-20T03:40:30.401Z · LW(p) · GW(p)

Do you have any estimate of how much more expensive testing in cynomolgus macaques or rhesus monkeys would be?

Replies from: Metacelsus
comment by Metacelsus · 2025-02-20T03:56:28.364Z · LW(p) · GW(p)

As a rough estimate, I think 3x to 5x more expensive. Marmosets are smaller (smaller than squirrels) whereas macaques (rhesus/cyno) are about 10x bigger (6 kg). And macaques take longer to develop (3 years vs. 18 months until adulthood). Finally, macaques are in high demand and low supply for pharma research.

But the benefit is that methods developed in macaques are more likely to translate to humans, due to the closer evolutionary relationship. Marmosets are a bit unusual in their embryonic development (two twin embryos share a common, fused placenta!)

comment by Mars_Will_Be_Ours · 2025-02-20T07:51:58.216Z · LW(p) · GW(p)

Thank you for writing this article! It was extremely informative and I am very pleased to learn about super-SOX. I have been looking for a process which can turn somatic cells into embryonic stem cells due to unusual personal reasons, so by uncovering this technology you have done me a great service. Additionally, I agree that pursuing biological superintelligence is a better strategy than pursuing artificial superintelligence. People inherit some of their moral values from their parents, so a superintelligent human has a reasonable probability of being a good person as long as their parents are. Unfortunately, due to selection effects this is not a given.

Have you read the research of Suzana Herculano-Houzel, particularly her paper The Human Brain In Numbers: A Linearly Scaled-Up Primate Brain? This research argues that humans are intelligent for two reasons. 

  1. The primate brain is the only mammalian brain which maintains a nearly constant neuron density as the brain's size increases. For comparison, the neuron density of rodents decreases as brain size increases.
    1. A 10-fold increase in the number of primate neurons requires an 11-fold increase in brain volume.
    2. A 10-fold increase in the number of rodent neurons requires a 35-fold increase in brain volume.
  2. Humans have the largest primate brain.

I bring this up because I agree with the conclusions presented in Suzana's work, with brain size being directly related to intelligence. I think it is better to select for a trait directly related to intelligence, brain size, than for a trait indirectly related to intelligence, such as IQ. 
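
For concreteness, the two scaling rules quoted above can be turned into approximate power-law exponents (taking the 10x/11x and 10x/35x figures in the comment at face value):

```python
from math import log

# If brain volume ~ N^k for N neurons, then
# k = log(volume ratio) / log(neuron ratio).
primate_k = log(11) / log(10)   # 10x neurons -> 11x volume
rodent_k = log(35) / log(10)    # 10x neurons -> 35x volume
print(f"primate exponent ~ {primate_k:.2f}")  # ~1.04: near-linear scaling
print(f"rodent exponent  ~ {rodent_k:.2f}")   # ~1.54: strongly superlinear
```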

Replies from: GeneSmith
comment by GeneSmith · 2025-02-20T18:16:01.153Z · LW(p) · GW(p)

I'm glad you liked the article!

Brain size is correlated with intelligence at maybe 0.3-0.4. If you were to just max out brain size, I think it would probably not yield the outcomes you actually want. It's better to optimize as directly as you can for the outcome you want.

Replies from: Mars_Will_Be_Ours
comment by Mars_Will_Be_Ours · 2025-02-21T03:16:03.437Z · LW(p) · GW(p)

Good point. I am inherently drawn to the idea of increasing brain size because I favor extremely simple solutions whenever possible. However, a more focused push towards increasing intelligence will produce better results as long as the metric used for measuring intelligence is reliable. 

I still think that increasing brain size will take a long time to reach diminishing returns due to its simplicity. Keeping all other properties of a brain equal, a larger brain should be more intelligent. 

There is also one other wildly illegal approach which may be viable if you focus on increasing brain size. You might be able to turn a person, perhaps even yourself, into a biological superintelligence. By removing much of a person's skull and immersing the exposed brain in synthetic cerebrospinal fluid, it would be possible to restart brain growth in an adult. You could theoretically increase a person's brain size up to the point where it becomes difficult to sustain via biological or artificial means. With their physical abilities crippled, the victim must be connected to robot bodies and sense organs to interact with the world. I don't recommend this approach and would only subject myself to it if humanity is in a dire situation and I have no other way of gaining the power necessary to extract humanity from it. 

Replies from: lcmgcd
comment by lemonhope (lcmgcd) · 2025-02-21T08:56:15.782Z · LW(p) · GW(p)

Women/girls with big heads tend to hit their heads but you can solve that with bigger arms.

comment by Pretentious Penguin (dylan-mahoney) · 2025-02-20T04:10:09.271Z · LW(p) · GW(p)

Nobel prize winners (especially those in math and sciences) tend to have IQs significantly above the population average.

There is no Nobel prize in math. And the word "especially" would imply that there exists data on the IQs of Nobel laureates in literature and peace which shows a weaker trend than the trend for sciences laureates; has anybody ever managed to convince a bunch of literature Nobel laureates to take IQ tests? I can't find anything by Googling, and I'm skeptical.

To be clear, the general claim that people who win prestigious STEM awards have above-average IQs is obviously true.

Replies from: Mo Nastri
comment by Mo Putera (Mo Nastri) · 2025-02-20T11:24:10.338Z · LW(p) · GW(p)

(To be clear: I agree with the rest of the OP, and with your last remark.)

has anybody ever managed to convince a bunch of literature Nobel laureates to take IQ tests? I can't find anything by Googling, and I'm skeptical.

I just read this piece by Erik Hoel which has this passage relevant to that one particular sentence you quoted from the OP:

Consider a book from the 1950s, The Making of a Scientist by psychologist and Harvard professor Anne Roe, in which she supposedly measured the IQ of Nobel Prize winners. The book is occasionally dug up and used as evidence that Nobel Prize winners have an extremely high IQ, like 160 plus. But it’s really an example of how many studies of genius are methodologically deeply flawed. ...

Roe never used official IQ tests on her subjects, the Nobel Prize winners. Rather, she made up her test, simply a timed test that used SAT questions of the day. Why? Because most IQ tests have ceilings (you can only score like a 130 or 140 on them) and Roe thought—without any evidence or testing—that would be too low for the Nobel Prize winners. And while she got some help with this from the organization that created the SATs, she admits:

The test I used is not one that has been used before, at least in this form.

And furthermore:

I was not particularly concerned at the outset over the fact that I had no norms for this test. That is, I had no idea what any other population would do on the same test.

In other words, she had an untested set of SAT questions that she gave to Nobel prize winners not knowing how anyone else would do on them. This is pretty problematic. Normally IQ tests try to achieve some form of group-level neutrality; e.g., many of the major modern IQ tests are constructed from the outset so as not show any average difference between male and female takers, to be as culturally-invariant as possible, etc. And while Roe didn’t publish without any comparison group to her chosen geniuses whatsoever, the comparison that she did use was only a graduating class of PhD students (sample size unknown, as far as I can tell) who also took some other more standard IQ tests of the day, and she basically just converted from their scores on the other tests to scores on her make-shift test of SAT questions. Yet, here are the raw numbers of how the Nobel-prize winners do on the test she created:

[Table of the laureates' raw test scores by field, from Gwern.net]

Notice anything? The Nobel Prize winners all scored rather average. In fact, pretty low, in some cases. But Roe then goes on to claim that their IQ is extremely high, based on her statistical transformations:

I must caution that these equivalents have been arrived at by a series of statistical transformations based on assumptions which are generally valid for this type of material but which have not been specifically checked for these data. Nevertheless I believe that they are meaningful and a fair guide to what the situation is. The median score of this group on this verbal test is approximately equivalent to an IQ of 166.

Wait a minute. How did this conversion to a median IQ of 166 take place? After all, the scientists are scoring in the middle of the range on the test. They are getting a lot of questions wrong. E.g., Biologists who won the Nobel Prize got a 56.6 on the Verbal but we know that was far from the maximum score, Experimental Physicists got an even lower 46.6, etc. How then did she arrive at the group altogether having an astoundingly-high median verbal IQ of 166? Assuming that those at the upper range of scoring got close to most of the questions right (she mentions this is true, some only missed 4-10 questions at the maximum range), then how can getting only roughly two-thirds of the questions right translate to an IQ in the 160s?

Perhaps these SAT questions were just impossibly hard? Judge for yourself. Here’s one of the two examples she gives of the type of questions the Nobel Prize winners answered:

In each item in the first section, four words were given, and the subject had to pick the two which were most nearly opposite in meaning and underline them.

Here is one of the items: 1. Predictable 2. Precarious 3. Stable 4. Laborious.

This. . . isn’t very hard (spoiler: 2 & 3). So the conclusion of a median verbal IQ of 166 is deeply questionable, and totally reliant on this mysterious conversion she performed.

This sort of experimental setup would never fly today (my guess is the statistical conversion had all sorts of problems, e.g., Roe mentions extraordinarily high IQ numbers for PhD students at the time that don’t make sense, like an avg. IQ of 140). A far more natural reading of her results is to remove the mysterious conversion and look at the raw data, which is that the Nobel-prize-winning scientists scored well but not amazingly on SAT questions, indicating that Nobel Prize winners would get test scores above average but would not ace the SATs, since the average was far below the top of the possible range.

(I don't think Erik's arguments here have any relevance whatsoever to the OP's project though.)

Replies from: dylan-mahoney
comment by Pretentious Penguin (dylan-mahoney) · 2025-02-20T15:42:46.324Z · LW(p) · GW(p)

It should be noted that the psychologists and anthropologists in the above tables were not selected based on winning a Nobel prize, nor any prize. On pages 51-52 of The Making of a Scientist Roe writes

For the psychologists the preliminary list was made up by me in consultation, separately, with Dr. E. G. Boring and Dr. David Shakow. We simply went over the membership list of the American Psychological Association and put down everyone we knew to be actively engaged in research and otherwise qualified. This preliminary list was then rated, in the usual fashion, by Dr. Boring, of Harvard University, [...]

and then lists a bunch of other professors involved in rating the list, and "the men who ranked at the top were selected, with some adjustment so as to include representatives of different sorts of psychology."

(Incidentally, I wonder whether Professor Boring's lectures lived up to his name.)

comment by lemonhope (lcmgcd) · 2025-02-20T20:07:00.239Z · LW(p) · GW(p)

Someone please tell Altman and Musk they can spend their fortunes on millions of uber-genius children if they please; they don't have to spend it all on their contest to replace ourselves with steel & copper successors.

Replies from: valley9
comment by Ebenezer Dukakis (valley9) · 2025-02-20T21:20:58.972Z · LW(p) · GW(p)

Altman and Musk are arguably already misaligned relative to humanity's best interests. Why would you expect smarter versions of them to be more aligned? That only makes sense if we're in an "alignment by default" world for superbabies, which is far from obvious.

Replies from: lcmgcd
comment by lemonhope (lcmgcd) · 2025-02-20T21:47:43.832Z · LW(p) · GW(p)

I would vote to be ruled by their carbon children instead of their silicon children for certain

Replies from: valley9
comment by Ebenezer Dukakis (valley9) · 2025-02-20T22:16:49.228Z · LW(p) · GW(p)

There's a good chance their carbon children would have about the same attitude towards AI development as they do. So I suspect you'd end up ruled by their silicon grandchildren.

Replies from: lcmgcd
comment by lemonhope (lcmgcd) · 2025-02-21T08:26:35.275Z · LW(p) · GW(p)

Good point! I didn't think that far ahead

comment by Kris Moore · 2025-02-20T18:33:25.011Z · LW(p) · GW(p)

It is becoming increasingly clear that for many traits, the genetic effect sizes estimated by genetic association studies are substantially inflated for a few reasons. These include confounding due to uncontrolled population stratification, such as dynastic effects, and perhaps also genetic nurture[1]. It is also clear that traits strongly mediated through society and behaviour, such as cognitive ability, are especially strongly affected by these mechanisms. 

You can avoid much of this confounding by performing GWAS on only the differences between siblings ("within-sibship GWAS") or between other pairs of family members ("family GWAS"). When you do this for cognitive ability, you find substantial deflation: heritability estimates decrease by around 45%, and the effect sizes estimated in population-level GWAS only correlate with these more direct effect estimates by about 0.55.

Does your analysis take this into account, for instance by using effect sizes estimated by within-sibship/family GWAS? If not, it would follow that genome editing would yield substantially lower increases in IQ than you estimate.

  1. ^

    Genetic nurture is a little complicated. The classic example in the GWAS context is parental genetic nurture. Here, you find an effect of a genetic variant on a trait in people, but the actual "direct" effect manifested only in those people's parents – one of whom must carry the variant, otherwise it would not be observed in the people in your study – and affected how they nurtured their kid, which then affected the trait you were measuring in their offspring. Genetically editing such a variant into an embryo would thus have no effect on that embryo when they are born and develop.

Replies from: kman
comment by kman · 2025-02-20T18:41:26.434Z · LW(p) · GW(p)

We accounted for inflation of effect sizes due to assortative mating, assuming a mate IQ correlation of 0.4 and total additive heritability of 0.7 for IQ.

IIUC that R = 0.55 number was just the raw correlation between the beta values of the sibling and population GWASes, which is going to be very noisy given the small sample sizes and given that effects are sparse. You can see that the LDSC based estimate is nearly 1, suggesting ~null indirect effects.
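
As a toy illustration of the attenuation concern (all numbers here are made up): if two GWASes estimate the same true per-SNP effects with independent sampling noise, the raw correlation between their beta estimates falls well below 1 even when the underlying effects agree perfectly.

```python
import random

random.seed(0)
N_SNPS = 5_000
true_betas = [random.gauss(0, 1) for _ in range(N_SNPS)]

def noisy(betas, se):
    # Add independent sampling noise with standard error `se`.
    return [b + random.gauss(0, se) for b in betas]

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

pop_betas = noisy(true_betas, se=0.3)   # large population GWAS: low noise
sib_betas = noisy(true_betas, se=1.5)   # small sibling GWAS: high noise
print(round(corr(pop_betas, sib_betas), 2))  # ~0.53 despite identical truth
```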

Replies from: kman, kman
comment by kman · 2025-02-20T18:51:35.942Z · LW(p) · GW(p)

Subtle population stratification not accounted for by the original GWAS could still be an issue, though I don't expect this would inflate the effects very much. If we had access to raw data we could take into account small correlations between distant variants during finemapping, which would automatically handle assortative mating and stratification.

comment by kman · 2025-02-20T18:58:38.716Z · LW(p) · GW(p)

IIUC that R = 0.55 number was just the raw correlation between the beta values of the sibling and population GWASes

Actually I don't think this is correct, it accounted for sampling error somehow. I'll need to look into this deeper.

comment by lemonhope (lcmgcd) · 2025-02-20T19:58:59.492Z · LW(p) · GW(p)

You could also make people grow up a bit faster. Some kids are more mature, bigger, etc than others at the same wall-clock age. If this doesn't conflict with lifespan then it would allow the superbabies to be productive sooner. Wouldn't want to rob someone of their childhood entirely, but 12 years of adolescence is long enough for lots of chase tag and wrestling.

Replies from: TsviBT
comment by TsviBT · 2025-02-20T20:05:05.748Z · LW(p) · GW(p)

This is a big ethical issue. Also, I haven't checked, but I'd guess that generally to have much of a noticeable effect, you're stepping somewhat to the edge of / outside of the natural range, which carries risks. Separately, this might not even be good on purely instrumental grounds; altriciality is quite plausibly really important for intelligence!

Replies from: lcmgcd
comment by lemonhope (lcmgcd) · 2025-02-20T20:11:01.688Z · LW(p) · GW(p)

The Wikipedia page picture has some evidence that you are right.

comment by AnthonyC · 2025-02-20T12:58:39.824Z · LW(p) · GW(p)

I definitely want to see more work in this direction, and agree that improving humans is a high-value goal.

But to play devil's advocate for a second on what I see as my big ethical concern: There's a step in the non-human selective breeding or genetic modification comparison where the experimenter watches several generations grow to maturity, evaluates whether their interventions worked in practice, and decides which experimental subjects if any get to survive or reproduce further. What's the plan for this step in humans, since "make the right prediction every time at the embryo stage" isn't a real option?

Concrete version of that question: Suppose we implement this as a scalable commercial product and find out that e.g. it causes a horrible new disease, or induces sociopathic or psychopathic criminal tendencies that manifest at age 30, after millions of parents have used it. What happens next?

Replies from: GeneSmith, TsviBT, lcmgcd
comment by GeneSmith · 2025-02-20T18:13:10.499Z · LW(p) · GW(p)

I think almost everyone misunderstands the level of knowledge we have about what genetic variants will do.

Nature has literally run a randomized control trial for genes already. Every time two siblings are created, the set of genes they inherit from each parent is scrambled and (more or less) randomly assigned to each. That's INCREDIBLY powerful for assessing the effects of genes on life outcomes. Nature has run a literal multi-generational randomized control trial for the effect of genes on everything. We just need to collect the data.
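
A minimal sketch of why this randomization is so powerful (the effect size and the family confound below are made-up numbers): regressing sibling trait differences on sibling genotype differences cancels out everything the siblings share, leaving an unbiased estimate of the causal allele effect.

```python
import random

random.seed(0)
TRUE_EFFECT = 0.5     # assumed causal effect of one allele copy
N_FAMILIES = 20_000

num = den = 0.0
for _ in range(N_FAMILIES):
    family_env = random.gauss(0, 5)  # shared confound (wealth, schooling...)
    sibs = []
    for _ in range(2):
        # Each heterozygous parent passes 0 or 1 copies at random (meiosis).
        copies = random.randint(0, 1) + random.randint(0, 1)
        trait = TRUE_EFFECT * copies + family_env + random.gauss(0, 1)
        sibs.append((copies, trait))
    d_geno = sibs[0][0] - sibs[1][0]   # genotype difference between sibs
    d_trait = sibs[0][1] - sibs[1][1]  # trait difference; family_env cancels
    num += d_geno * d_trait
    den += d_geno ** 2

print(round(num / den, 2))  # ~0.5: recovers TRUE_EFFECT despite the confound
```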

This gives you a huge advantage over "shot-in-the-dark" type interventions where you're testing something without any knowledge about how it performs over the long run.

Also, nature is ALREADY running a giant parallelized experiment on us every time a new child is born. Again, the genes they get from their parents are randomized. If reshuffling genetic variants around were extremely dangerous we'd see a huge death rate in the newborn population. But that is not in fact what we see. You can in fact change around some common genetic variants without very much risk.

And if you have a better idea about what those genes do (which we increasingly do), then you can do even better.

There are still going to be risks, but the biggest ones I actually worry about are about getting the epigenetics right.

But there we can just copy what nature has done. We don't need to modify anything.

Replies from: AnthonyC
comment by AnthonyC · 2025-02-20T19:25:20.758Z · LW(p) · GW(p)

True, and this does indicate that children produced from genes found in 2 parents will not be outside the range which a hypothetical natural child of theirs could occupy. I am also hopeful that this is what matters, here. 

However, there are absolutely, definitely viable combinations of genes found in a random pair of parents which, if combined in a single individual, result in high-IQ offspring predisposed to any number of physical or mental problems, some of which may not manifest until long after the child is born. In practice, any intervention of the type proposed here seems likely to create many children with specific combinations of genes which we know are individually helpful for specific metrics, but which may not often (or ever) have all co-occurred. This is true even in the cautious, conservative early generations where we stay within the scope of natural human variation. Thereafter, how do we ensure we're not running a trial on an entire generation at once? I don't want us to end up in a situation where a single mistake ends up causing population-wide problems because we applied it to hundreds of millions of people before the problem manifested.

comment by TsviBT · 2025-02-20T18:17:09.445Z · LW(p) · GW(p)

This is a good argument for not going outside the human envelope in one shot. But if you're firmly within the realm where natural human genomes are, we have 8 billion natural experiments running around, some of which are sibling RCTs.

comment by lemonhope (lcmgcd) · 2025-02-20T21:03:55.473Z · LW(p) · GW(p)

GeneSmith forgot to explicitly say that you can and should weight against sociopathy. Parents will be motivated to do this because if your kid is a jerk then your life will be miserable. (I do think if you select for success without selecting against sociopathy then you'll get lots of sociopaths.)

I would bet against some weird disease manifesting, especially if you are weighting for general health.

Replies from: AnthonyC
comment by AnthonyC · 2025-02-21T00:42:25.593Z · LW(p) · GW(p)

And that makes perfect sense. I guess I'm just not sure I trust any particular service provider or research team to properly list the full set of things it's important to weight against. Kind of feels like a lighter version of not trusting a list of explicit rules someone claims will make an AI safe.

comment by Sodium · 2025-02-20T05:22:04.091Z · LW(p) · GW(p)

One thing not mentioned here (and I think should be talked about more) is that the naturally occurring genetic distribution is very unequal in a moral sense. A more egalitarian society would put a stop to Eugenics Performed by a Blind, Idiot God.  [LW · GW]

Has your doctor ever asked if you have a family history of [illness]? For so many diseases, if your parents have it, you're more likely to have it, and your kids are more likely to have it. These illnesses plague families for generations.

I have a higher than average chance of getting hypertension. Without technology, so will my future kids. With gene editing, we can just stop that, once and for all. A just world is a world where no child is born predetermined to endure avoidable illness simply because of ancestral bad luck.

comment by Anders Lindström (anders-lindstroem) · 2025-02-21T10:44:11.833Z · LW(p) · GW(p)

I know many of you dream of having an IQ of 300 to become the star researcher and avoid being replaced by AI next year. But have you ever considered whether nature has actually optimized humans for staring at equations on a screen? If most people don’t excel at this, does that really indicate a flaw that needs fixing?

Moreover, how do you know that a higher IQ would lead to a better life—for the individual or for society as a whole? Some of the highest-IQ individuals today are developing technologies that even they acknowledge carry Russian-roulette odds of wiping out humanity—yet they keep working on them. Should we really be striving for more high-IQ people, or is there something else we should prioritize?

comment by Amy Johnson (Amy Minge) · 2025-02-20T04:31:04.448Z · LW(p) · GW(p)

Another way of looking at this is that the human body and genetic codebase is, frankly, shit. (We literally have shit inside of us.) If we digitize human consciousness (for life in virtual worlds or as downloads onto robotic bodies) then we would be able to edit our code, including intelligence, much more quickly. Besides, rather than adding a few years to our lives, we could live indefinitely, and we could have backups. We would get our energy directly from electricity, rather than having to rely on the inefficient sun>plants>animals chain, and all the adjacent, well... shit.

Replies from: kman
comment by kman · 2025-02-20T22:36:33.606Z · LW(p) · GW(p)

Sure, sounds hard though.