
comment by GeneSmith · 2022-04-13T01:26:42.461Z · LW(p) · GW(p)

I'm glad someone else is thinking about this. This is pretty much my current life plan for solving the alignment problem. I would probably give it a 20-30% chance of actually having a major impact. There are a number of conditions, all of which must be satisfied for this to make a major positive impact:

  1. The technologies necessary to meaningfully modify human traits must be developed.
  2. Those technologies must be adopted by a sufficiently large group of parents.
  3. The enhanced traits parents choose must asymmetrically favor existential safety over existential risk.
  4. Children whose genomes are selected or created in this manner must reach the age at which they can make a substantial impact in the field of AI research before AGI is developed.

It's a long list. 20-30% might even be too optimistic. But I think it is nonetheless worth pursuing because it allows us to do something no other approach to the alignment problem can: fundamentally enhance human problem-solving capabilities.

The biggest problem is time. If progress in AI keeps accelerating as it has recently, there will likely not be enough time for children born via whatever enhancement techniques we use to grow up and have an impact on the field of AI. Even the brightest geniuses don't really start having much of an impact on a field until about age 20, and since it takes further time for ideas to spread, they'd more realistically need to reach 30. If it takes 10 years to develop powerful techniques and get them to scale, AGI would have to arrive no earlier than about 2062 (2022 + 10 years of development + ~30 years of maturation), which assumes pretty slow progress.

Current and Near-Future Prospects

In the latest study of educational attainment, researchers were able to explain 12.3% of the variance in GPA within the general population. If we assume GPA is as heavily g-loaded as an IQ test (probably not quite true, but likely reasonable), that 82% of that variance is caused by direct effects (a figure from another paper on educational attainment), and that a young woman interested in doing this goes through 3 rounds of egg retrieval, I estimate you could raise IQ by about 6 points right now, given ideal circumstances and current predictors.

I've read estimates that with a sample of a million SNP-genotyped people, all of whom have scores from a reasonably high-quality IQ test (SAT scores, actual IQ tests, etc.), we could probably build a predictor explaining about 35% of the variance. That would allow us to raise IQ by about 12 points.
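For concreteness, here is a back-of-the-envelope sketch (in Python) of the selection-gain arithmetic behind both estimates above. The embryo count per retrieval and the within-family variance factor are my own assumptions, not figures from the studies, so treat the outputs as rough:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_max_std_normal(n, trials=200_000):
    """Monte Carlo estimate of E[max of n standard normals]."""
    return rng.normal(size=(trials, n)).max(axis=1).mean()

def selection_gain_iq(r2, direct_frac, n_embryos, sd_iq=15):
    """Expected IQ gain from implanting the top-scoring of n embryos.

    Within-family predictor SD = sd_iq * sqrt(r2 * direct_frac * 0.5);
    the 0.5 reflects that siblings share half the additive variance.
    """
    sd_within = sd_iq * np.sqrt(r2 * direct_frac * 0.5)
    return sd_within * expected_max_std_normal(n_embryos)

# Current predictor: 12.3% of variance, 82% direct effects, and an
# assumed ~15 viable embryos from 3 egg retrievals (5 per retrieval).
print(selection_gain_iq(0.123, 0.82, 15))  # ~6 IQ points
# Hypothetical predictor explaining 35% of variance (treated here as all
# direct effect, since it would be trained on IQ scores themselves).
print(selection_gain_iq(0.35, 1.0, 15))    # ~11 IQ points
```

Note that the expected-maximum multiplier grows only logarithmically with the number of embryos, which is why better predictors matter more than more embryos.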

Another short-term technology not even mentioned on your list is gamete sequencing. Sperm and eggs are produced in groups of four, with two complementary pairs per stem cell. If we could figure out how to get a good enough read from three of those cells, we could infer the genome of the fourth and pair up the best sperm and egg. That would naively allow us to double the gain, so 24 points.

There are other technologies like in-vitro oogenesis that could raise the gain by perhaps 50% (assuming we could produce a couple of thousand embryos). And there are groups that are working on that right now.

I don't think genome synthesis is likely to be possible in time. CRISPR or some other editing technique might work in the next 10 years, but the public seems to be much less comfortable with editing as opposed to selection, so that might be more politically difficult.

So right now we can raise IQ by about 0.4 standard deviations, and within ten years we might be able to raise it by about 1.5 standard deviations. I think that would likely have a major impact on the field of AI (not to mention the rest of society).

Will raising intelligence actually help with the alignment problem?

This is probably the second biggest open question after AI timelines. There are a huge number of very smart people working extremely hard to create unaligned AGI as quickly as possible. So clearly intelligence alone is not sufficient.

So what to do? The most brain-dead solution I can think of would be to create a GWAS for "working on x-risk-reduction" vs. "working on x-risk-enhancement". But so far as I know, no one has ever done any research on this, so we don't know how heritable it is (which affects the efficacy of selection). Also, given the sample sizes needed to make a good GWAS predictor (half a million or more for most traits), it's all but certain we'd need to rely on some proxy measure.

Lastly, even if we could create such a predictor, what weirdo parents would select for "likely to work on x-risk-reduction"? The parents themselves would have to be convinced that x-risk is a problem, so it's a somewhat circular solution.

Perhaps another trait such as "conscientiousness" or "pro-sociality" would be fairly well correlated with the likelihood of working on x-risk-reduction. But again, there's essentially no research on this, and parents would still have to choose to select for such traits.

I am very nervous about any solutions which require the government to enforce selection for certain traits. Naively it sounds like a solution to the problem of "how to get parents to select for things that are good for children and society as a whole", but it doesn't take much imagination to think of ways such a scheme could go terribly wrong. It also seems like a gross overreach of government into the most personal aspects of life.

The only strategy that seems viable to me is enhanced intelligence + changing the memetic environment. I don't think genetics is going to provide a substitute for the work that has to be done by us stone-brainers to convince more people that misaligned AI is a serious threat.

I am also optimistic that more intelligent people would better grasp the arguments about AI safety and other sources of x-risk. There's also some research, which I wrote about in my first post on genetic engineering [LW · GW], on intelligent people's disproportionate tendency to support enforcing rules that encourage positive-sum cooperation, so I can see this potentially helping with the coordination aspects of AI and other fields.

Replies from: kman
comment by kman · 2022-04-13T03:26:57.927Z · LW(p) · GW(p)

> Even the brightest geniuses don't really start having much of an impact on a field until about age 20, and since it takes further time for ideas to spread, they'd more realistically need to reach 30.

With full genome synthesis we could probably create humans vastly smarter than any who have ever lived, and they could have a huge impact at a much younger age. But otherwise I agree.

> Another short-term technology not even mentioned on your list is gamete sequencing. Sperm and eggs are produced in groups of four, with two complementary pairs per stem cell. If we could figure out how to get a good enough read from three of those cells, we could infer the genome of the fourth and pair up the best sperm and egg. That would naively allow us to double the gain, so 24 points.

Wouldn't it be a factor of sqrt(2), not double?

> There are other technologies like in-vitro oogenesis that could raise the gain by perhaps 50% (assuming we could produce a couple of thousand embryos). And there are groups that are working on that right now.

That sounds fairly promising and worth looking into.

> I don't think genome synthesis is likely to be possible in time. CRISPR or some other editing technique might work in the next 10 years, but the public seems to be much less comfortable with editing as opposed to selection, so that might be more politically difficult.

Agreed, which makes my previous point somewhat moot. I'm tempted to say we should at least keep synthesis in the back of our minds in case the problems on the critical path end up being easier than expected.

> Lastly, even if we could create such a predictor, what weirdo parents would select for "likely to work on x-risk-reduction"? The parents themselves would have to be convinced that x-risk is a problem, so it's a somewhat circular solution.

Alignment-problem-aware people could be early adopters of embryo-selection-for-G. There are lots of smart alignment-problem-aware people who read this forum and may be open to this idea, so it's not necessarily circular.

> I am very nervous about any solutions which require the government to enforce selection for certain traits.

I think it's super unlikely we'd be able to get this sort of large-scale coordination anyway.

> The only strategy that seems viable to me is enhanced intelligence + changing the memetic environment. I don't think genetics is going to provide a substitute for the work that has to be done by us stone-brainers to convince more people that misaligned AI is a serious threat.

I don't think large-scale awareness is necessary (see my point above). Even if you could achieve it, pushing for large-scale awareness could backfire by drawing the wrong sort of attention (e.g. public outrage about selection-for-G prompting politicians to ban it). Though I admittedly don't place much confidence in my current ability to gauge the likelihood of this sort of thing. More awareness of the alignment problem itself is probably good.

> I am also optimistic that more intelligent people would better grasp the arguments about AI safety and other sources of x-risk. There's also some research, which I wrote about in my first post on genetic engineering [LW · GW], on intelligent people's disproportionate tendency to support enforcing rules that encourage positive-sum cooperation, so I can see this potentially helping with the coordination aspects of AI and other fields.

Agreed; society-wide gains in G would likely have the general effect of raising the sanity waterline [LW · GW].


Anyhow, I've updated slightly towards focusing more on thinking about near-term embryo selection strategies as a result of reading and responding to this.

Replies from: GeneSmith
comment by GeneSmith · 2022-04-13T04:45:15.197Z · LW(p) · GW(p)

> With full genome synthesis we could probably create humans vastly smarter than any who have ever lived, and they could have a huge impact at a much younger age. But otherwise I agree.

This is true, but the farther out into the tails of the distribution we get, the more likely we are to see negative effects from traits that aren't part of the index we're selecting on. For example, I would be pretty surprised if we could increase IQ by 10 standard deviations in one generation without some kind of serious deleterious effects.

> Wouldn't it be a factor of sqrt(2), not double?

I have to admit, I haven't actually done the math here, but Gwern seems to think it would roughly double the effect.
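A quick Monte Carlo can settle this under toy assumptions (equal candidate pools for sperm, eggs, and embryos, and a purely additive score; neither assumption comes from Gwern's analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
trials, n = 100_000, 10  # n candidates per pool (assumed equal pools)

# Each gamete contributes an independent score with variance 0.5,
# so an embryo (sperm + egg) has variance 1.
sperm = rng.normal(0, np.sqrt(0.5), size=(trials, n))
eggs = rng.normal(0, np.sqrt(0.5), size=(trials, n))

# Embryo selection: pick the best of n random sperm-egg pairings.
gain_embryo = (sperm + eggs).max(axis=1).mean()
# Gamete selection: pick the best sperm and the best egg separately.
gain_gamete = (sperm.max(axis=1) + eggs.max(axis=1)).mean()

print(gain_gamete / gain_embryo)  # ~1.41, i.e. sqrt(2)
```

So with equal pool sizes the advantage comes out at sqrt(2); a full doubling would presumably require something extra, such as the sperm pool being much larger than the egg pool.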

> Alignment-problem-aware people could be early adopters of embryo-selection-for-G. There are lots of smart alignment-problem-aware people who read this forum and may be open to this idea, so it's not necessarily circular.

Yeah, this is one of my hopes. I will probably write something about this in the future.

> I don't think large-scale awareness is necessary (see my point above). Even if you could achieve it, pushing for large-scale awareness could backfire by drawing the wrong sort of attention (e.g. public outrage about selection-for-G prompting politicians to ban it). Though I admittedly don't place much confidence in my current ability to gauge the likelihood of this sort of thing. More awareness of the alignment problem itself is probably good.

I mostly think the value would be in more actual understanding of alignment difficulties among people working on AI capabilities.

Thanks for the response.

Replies from: kman
comment by kman · 2022-04-13T05:45:22.778Z · LW(p) · GW(p)

> This is true, but the farther out into the tails of the distribution we get, the more likely we are to see negative effects from traits that aren't part of the index we're selecting on.

True, but we wouldn't need to strictly select for G by association with IQ via GWASes. I suspect G variation is largely driven by mutation load, in which case simply replacing each rare variant with one of its more common counterparts should give you a huge boost while essentially ruling out negative pleiotropy. To hedge your bets you'd probably want to do a combined approach.
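Here's a toy simulation of that intuition; every number in it is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_common, n_rare = 2_000, 500, 200

# Common variants: small effects in both directions (ordinary polygenic
# variation, which would also carry traits like "takes x-risk seriously").
common = rng.binomial(2, 0.4, size=(n_people, n_common))
common_eff = rng.normal(0, 0.05, size=n_common)

# Rare variants: low frequency and skewed deleterious (the mutation-load
# assumption; effect sizes here are arbitrary).
rare = rng.binomial(2, 0.01, size=(n_people, n_rare))
rare_eff = -np.abs(rng.normal(0, 0.5, size=n_rare))

g_before = common @ common_eff + rare @ rare_eff
g_after = common @ common_eff  # every rare variant swapped for its common counterpart

# Gain from removing the load, in SDs of the original trait distribution.
print((g_after - g_before).mean() / g_before.std())
```

The point is just that if rare variants are overwhelmingly trait-lowering, replacing them shifts the mean up without touching the common-variant component, so selection on the latter (for things like x-risk concern) is left intact.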

I guess there's some risk that rare variants are involved in people who, e.g., tend to take x-risk very seriously, but I doubt this. I suspect that, to whatever extent this is heritable, it's controlled by polygenic variation over relatively common variants at many loci. So if you started out with the genomes of people who care lots about x-risk and then threw out all the rare variants, I predict you'd end up with hugely G boosted people who are predisposed to care about x-risk.

As you pointed out, this is moot if genome synthesis is out of reach.

> I mostly think the value would be in more actual understanding of alignment difficulties among people working on AI capabilities.

Seems sensible.

comment by tailcalled · 2022-04-12T21:13:03.471Z · LW(p) · GW(p)

It seems to me like this would accelerate capabilities research to about the same degree as alignment research, making it a wash.

Replies from: kman
comment by kman · 2022-04-12T21:41:13.374Z · LW(p) · GW(p)

(Edited because I don't think my original terse reply made my thoughts on this very clear)

If we're in a (very) long timeline world, I suspect the default outcome is that embryo selection is gradually adopted and G slowly rises population-wide. The reason timelines are long in such a world is that AGI turned out to be much harder than it currently looks, so gradually rising G levels would indeed increase the probability that unaligned AGI is created, unless this somewhat-higher-G world also manages to get large-scale coordination right (don't hold your breath). Alignment research would directly benefit from more capable researchers, and would probably benefit from far greater public awareness than it currently receives (due to generally higher sanity and also just more time for the ideas to percolate into the mainstream), which in turn means far more researchers working on it. People in alignment-aware communities would likely be early adopters of embryo selection, which could give alignment a head start (this is one strategy we might wish to consider: the point of my post was to get us thinking about these sorts of strategies).

If we're only in a medium-to-longish timeline world (AGI in the latter half of this century, say), then there won't be enough time for that sort of large-scale adoption: a quick G-boosting intervention would be used by a small group of early adopters long before it catches on more broadly. So, strategically, we'd want to make sure that this small group of early adopters is alignment-aware.

comment by Rossin · 2022-04-12T20:35:12.795Z · LW(p) · GW(p)

Current bioethics norms strongly condemn this sort of research, which may make it challenging to pursue in the near term. The consensus is strongly against, which will make acquiring funding difficult, and any human CRISPR editing is completely off the table for now. For example, He Jiankui CRISPR-edited some babies in China to make them less susceptible to HIV and went to prison for it.

Replies from: hold_my_fish, kman
comment by hold_my_fish · 2022-04-13T08:02:01.971Z · LW(p) · GW(p)

He Jiankui had issues beyond just doing something bioethically controversial. He didn't make the intended edits cleanly in any embryo (instead there were issues with off-target edits and mosaicism). If I remember correctly, he also misled the parents about the nature of the intervention.

All in all, if you look into the details of what he did, he doesn't come out looking good from any perspective.

Replies from: Rossin
comment by Rossin · 2022-04-13T14:05:34.406Z · LW(p) · GW(p)

That’s true, though there was a huge amount of outrage even before those details came out.

comment by kman · 2022-04-12T21:32:54.457Z · LW(p) · GW(p)

Good point; I didn't address this at all in the post. Germline editing is indeed outside the current Overton window. One thing I'm curious about is whether there's any hope of accelerating the relevant technical research. That implies, among other things, not specifically focusing on the enhancement use case, to avoid attracting condemnation (which would risk slowing existing research through, e.g., new regulations).

For some techniques this seems harder than for others: iterated embryo selection is pretty clearly meant for enhancement (though that could also mean animal enhancement, i.e. more efficient livestock breeding). The Cas9 work has lots of potential uses, so it's currently being heavily pursued despite the norms. There's also lots of ongoing work on synthesizing simple genomes (e.g. for bacteria), with many companies offering synthesis services. Of course, the problems I identified as likely being on the critical path to creating modal human genomes are pretty enhancement-specific (again, the only other application that comes to mind is making better livestock), which is unfortunate given the massive (and quick!) upside of this approach if you can get it to work.