What Does LessWrong/EA Think of Human Intelligence Augmentation as of mid-2023?

post by lukemarks (marc/er) · 2023-07-08T11:42:38.625Z · LW · GW · 7 comments

This is a question post.


Zvi recently asked on Twitter:

If someone was founding a new AI notkilleveryoneism research organization, what is the best research agenda they should look into pursuing right now?

To which Eliezer replied:

Human intelligence augmentation.

And then elaborated:

No time for GM kids to grow up, so:

  • collect a database of genius genomes and try to interpret for variations that could have effect on adult state rather than child development
  • try to disable a human brain's built-in disablers like rationalization circuitry, though unfortunately a lot of this seems liable to have mixed functional and dysfunctional use, but maybe you can snip an upstream trigger circuit
  • upload and mod the upload
  • neuralink shit but aim for 64-node clustered humans

This post [? · GW] contains the most in-depth analysis of human intelligence augmentation (HIA) I have seen recently, and provides the following taxonomy for applications of neurotechnology to alignment:

  1. BCIs to extract human knowledge
  2. neurotech to enhance humans
  3. understanding human value formation
  4. cyborgism
  5. whole brain emulation
  6. BCIs creating a reward signal. 

It also includes the opinions of attendees (stated to be 16 technical researchers and domain specialists) who provide the following analysis of these options:

From the original post: "Fig. 2| Comparison on key variables. A. Feasibility vs. timeline. Technology clusters that were deemed less feasible were also presumed to take longer to develop. B. Impact on AI vs. timeline. Technology clusters that were seen as having a larger potential impact on AI alignment were also presumed to take longer to develop. C. Impact on AI vs. feasibility. Technology clusters that were deemed more feasible were seen to be less likely to have an impact on AI alignment. Green trend lines represent high correlations (R2 ≥ 0.4318) and red represent low correlations."

Outside of cyborgism, I have seen very little recent discussion of HIA beyond the above post. This could be because I am simply looking in the wrong places, or because the topic is not much discussed as a legitimate AI safety agenda. The following is a list of questions I have about the topic:


EDIT: "We have to Upgrade [LW · GW]" is another recent piece on HIA which has some useful discussion in the comments and in which some people give their individual thoughts, see: Carl Shulman's response [LW(p) · GW(p)] and Nathan Helm-Burger's response [LW(p) · GW(p)].

Answers

answer by Tomás B. · 2023-07-08T16:35:04.340Z · LW(p) · GW(p)

I think somatic gene therapy, while technically possible in principle, is extremely unpromising for intelligence augmentation. Creating a super-genius is almost trivial with germline engineering: provided we know enough causal variants, one need only make a low-hundreds number of edits to a single cell to make someone smarter than any human who has ever lived. With somatic gene therapy you would almost certainly have to alter billions of cells to get anywhere.

Networking humans is interesting but we have nowhere close to the bandwidth needed now. As a rough guess, let's suppose we need bandwidth similar to the corpus callosum; Neuralink is ~5 OOMs off.
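The "~5 OOMs" figure can be sanity-checked with a back-of-envelope calculation. The numbers below are my own assumptions, not figures from the comment: a corpus callosum of roughly 2×10^8 axons, and a current-generation implant with roughly 10^3 channels.

```python
import math

# Assumed figures (not from the comment above):
# the corpus callosum carries on the order of 2e8 axons, while a
# current-generation Neuralink-style implant records ~1e3 channels.
corpus_callosum_axons = 2e8   # assumption: ~200 million fibers
implant_channels = 1e3        # assumption: ~1024 electrodes

ratio = corpus_callosum_axons / implant_channels
orders_of_magnitude = math.log10(ratio)
print(f"gap: ~{orders_of_magnitude:.1f} OOMs")  # prints "gap: ~5.3 OOMs"
```

Treating one electrode as comparable to one axon is itself a strong simplification, so this only shows the claim is arithmetically plausible, not that the comparison is the right one.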

I suspect human intelligence enhancement will not progress much in the next 5 years, not counting human/ML hybrid systems.

comment by Vladimir_Nesov · 2023-07-08T16:55:59.533Z · LW(p) · GW(p)

Networking humans is interesting but we have nowhere close to the bandwidth needed now.

GPT-3 manages with a mere 12K dimensions on the residual stream (for 175B parameters), which carries all information between the layers. So tens of thousands of connections might turn out to be sufficient.
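Under this framing, the gap looks much smaller than five orders of magnitude. The residual-stream width is the published GPT-3 configuration; the electrode count is my own assumption:

```python
# GPT-3 175B has a residual stream of width 12,288 (published config).
d_model = 12_288
implant_channels = 1_024  # assumption: current-generation electrode count

# If ~d_model channels suffice to carry everything between transformer
# layers, the needed channel count is roughly 1 OOM above current implants.
print(d_model / implant_channels)  # prints 12.0
```

Whether a number of channels that suffices between transformer layers also suffices between two brains is, of course, exactly the open question in this exchange.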

Replies from: mishka
comment by mishka · 2023-07-08T22:23:53.538Z · LW(p) · GW(p)

If so, one might imagine getting there via high-end non-invasive BCI (as long as one uses closed loops, so that the electronic side might specifically aim at changing the signal it reads from the biological entity, and that's how the electronic side would know that its signals are effective).

Of course, the risks of doing that are quite formidable even with non-invasive BCI, and various precautions should be taken. (But at least there is no surgery, plus one would have much quicker and less expensive iterations and a much less regulated environment, since nothing which is formally considered a medical procedure seems to be involved.)

One might want to try something like this in parallel with Neuralink-style efforts...

comment by dr_s · 2023-07-09T07:47:19.886Z · LW(p) · GW(p)

Creating a super-genius is almost trivial with germ-line engineering.

Eh, I mean, everything I hear from geneticists on any topic suggests that DNA interactions are crazy complex, because the whole thing wasn't designed to be a sensible system of switches you just turn on and off (wasn't designed at all, to be fair). I'd be really, really suspicious of this sort of confidence.

Also, honestly, I think this runs into problems analogous to AI. We talk about AI alignment, and sure, humans shouldn't have such a large potential goal space, but:

  1. you just messed with a bunch of brain stuff, so who knows what the fuck you've done; maybe in making the brain more rational you've also accidentally removed all empathy or baseline care for other humans
  2. regardless of 1, imagine these super-genius mutant kids being raised in, I assume, some specific nurturing environment to help them flourish... I dunno, I don't think that results in particularly salt-of-the-earth people with empathetic goals. Being raised as a demigod savior of humanity by people who all invariably feel much stupider than you seems like exactly what you'd do to create some kind of supervillain.

And that's of course suspending ethical judgement on the whole thing, or on the ways in which germline editing can go wrong (and thus scores of children actually born with weird genetic defects or mental disabilities).

comment by Lao Mein (derpherpize) · 2023-07-09T08:00:33.909Z · LW(p) · GW(p)

Creating a super-genius is almost trivial with germ-line engineering.

Not really true: known SNP mutations associated with high intelligence have a relatively low total effect. The best way to make a really smart baby with current techniques is with donor eggs and sperm, or cloning.

It is also possible that variance in intelligence among humans is due to something analogous to starting values in neural networks: lucky or crafted initial values can yield higher final performance, but injecting those values into an already-established network just adds noise. You can't really change macrostructures in the brain with gene therapy in adults, after all.

answer by Noosphere89 · 2023-07-08T17:02:44.275Z · LW(p) · GW(p)

Mostly, a useless dead end. The big problem is that even assuming it's socially acceptable, what genetic engineering can do is either locked behind massive investments of time and children, or is way too weak/harmful to be of much use. It's an interesting field with a whole lot of potential, but I'd only support expanding its social acceptability and doing basic research right now, given that I see very few options for genetics.

Also, the key taut constraint is how much somatic gene editing we can do, not how much gamete gene editing.

comment by [deleted] · 2023-07-08T19:52:28.092Z · LW(p) · GW(p)

locked behind massive time and children investments

Maybe not as long as you're thinking; people can be very intelligent and creative at young ages (and this may be amplified in someone gene-edited for high intelligence). 'Adolescence' is mostly a recent social construction, and a lot of norms/common beliefs about children exist more to keep them disempowered.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2023-07-08T20:39:12.696Z · LW(p) · GW(p)

The bigger issue is that the stronger genetic modifications require children at all, and that time still matters even under optimistic assumptions about how much of the maturation process we can cut away. And there's a far greater problem with this type of modification:

It only works if we assume population growth or life extension. Life extension is a huge challenge in itself, and the population growth assumption is probably wrong: the fertility rate is way down from several decades or centuries ago, which sinks schemes of intelligence augmentation that rely on new children. In particular, the world population will stop growing, and we might only have 30 billion new humans born, according to new models.

So yeah, I am still pessimistic around gamete genetic strategies for human enhancement.

Replies from: zerorelevance
comment by ZeroRelevance (zerorelevance) · 2023-07-11T04:03:22.438Z · LW(p) · GW(p)

The population growth problem should be somewhat addressed by healthspan extension. A big reason people aren't having kids now is that they lack the resources, be it housing, money, or time. If we could extend the average healthspan by a few decades, then older people who have spent enough time working to accumulate those resources, but are currently too old to raise children, would be able to have kids. Moreover, people who already have many kids but have become too old will be able to have more. For those reasons, I don't think a future birth limit of 30 billion is particularly reasonable.

However, I don't think it will make a difference, at least for addressing AI. Once computing reaches a certain level of advancement, it will simply be infeasible for something the size of a human brain, no matter how enhanced, to compete with a superintelligence running on a supercomputer the size of a basketball court. That level of computing/AI advancement will almost certainly be achieved before the discussed genetic enhancement ever bears fruit, probably even before it's made legal. Moreover, it's doubtful we'll see significant healthspan extension particularly long before achieving ASI, which makes it even less relevant, though I don't think any of these concerns were particularly significant in the first place, since it also seems we'll see ASI long before global population decline.

comment by dr_s · 2023-07-09T07:49:26.943Z · LW(p) · GW(p)

children investments

I mean, as PR expressions go, I guess that makes the likely death and suffering toll sound more acceptable, yeah.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2023-07-09T16:06:12.253Z · LW(p) · GW(p)

Hm, can you explain more about this? Sorry that I've come late here, but I don't understand what your comment is referring to or why you think the way you do.

Replies from: dr_s
comment by dr_s · 2023-07-09T18:12:18.718Z · LW(p) · GW(p)

Well, I interpret "children investments" here as "children who will be involved in the augmentation experiments". I don't expect germline modification to be something that would succeed on the first attempt (that's one of the reasons it's considered ethically problematic to begin with). Basically, point B might be better than point A, but the path from A to B almost surely involves some very low lows as we learn from trial and error. I found the clinical nature of the expression dryly funny, as I think it would realistically hide quite a hefty human cost. That's not even including the obvious political complications and general societal second-order risks.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2023-07-09T19:03:58.543Z · LW(p) · GW(p)

Well, it wasn't a complete look at the issues of gamete/germline modification, but you've pointed out another problem I didn't include to save space and time. Thankfully, if you want to avoid extreme modifications, it's a lot safer to do, thanks to an important insight from GeneSmith:

Another crucial insight from these studies is that nearly all of the genetic differences between humans can be explained by additive effects; there are very few gene-gene interactions going on; If gene A makes you taller, it doesn't depend on gene B being present to work its magic. It's a strong, independent gene that don't need no help.

This fact is extremely important because it makes both evolution and embryo selection possible. There is a common misconception that genes are tied together in a hopelessly complex web and that if we mess with one part of it the whole thing will come crashing down. While that may be true for genes that are universally present in the human population, it is very rarely true for genes that commonly vary between people.
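The additivity point is easy to illustrate with a toy model. Everything below is made up for illustration (the effect sizes, frequencies, and edit count are arbitrary assumptions, not real genetics); it only shows that under a purely additive model, a few hundred targeted edits can shift a trait substantially, with no gene-gene interactions needed.

```python
import random

random.seed(0)

# Toy additive model; all numbers are illustrative, not real genetics.
n_variants = 10_000
effects = [random.gauss(0, 0.03) for _ in range(n_variants)]     # per-variant effect
freqs = [random.uniform(0.05, 0.95) for _ in range(n_variants)]  # population frequency

def trait(genome):
    # Purely additive: the trait is just the sum of effects of present variants.
    return sum(e for e, present in zip(effects, genome) if present)

# A "typical" genome: each variant present with its population frequency.
genome = [random.random() < f for f in freqs]
baseline = trait(genome)

# Rank variants by the gain from flipping them toward their beneficial state
# (add a missing positive-effect variant, or remove a present negative one).
def gain(i):
    return effects[i] if not genome[i] else -effects[i]

candidates = sorted(range(n_variants), key=gain, reverse=True)

# "Edit" the top 300 candidates.
edited = list(genome)
for i in candidates[:300]:
    edited[i] = effects[i] > 0

print(trait(edited) - baseline)  # a low-hundreds number of edits shifts the trait a lot
```

The same additivity is what makes embryo selection work in this toy world: ranking genomes by a simple sum is only meaningful because there are no interaction terms to break it.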

Replies from: dr_s
comment by dr_s · 2023-07-09T20:19:44.683Z · LW(p) · GW(p)

Ah, that makes sense. I guess if interactions were too complex it'd take some miraculous multi-step coincidence to produce a useful mutation, and there would be a lot more genetic illnesses.

answer by TsviBT · 2023-07-09T00:21:42.053Z · LW(p) · GW(p)

There are interesting possibilities with BCI that you don't list. But the bandwidth is too low due to the butcher number: https://tsvibt.blogspot.com/2022/11/prosthetic-connectivity.html

Not doing things because AGI comes soon is a mistake: https://tsvibt.blogspot.com/2023/07/views-on-when-agi-comes-and-on-strategy.html

Germline engineering is feasible, but society anti-wants it.

comment by Sumner L Norman (sumner-norman) · 2023-07-11T01:13:25.338Z · LW(p) · GW(p)

I agree that electrode-based BCIs don't scale, but electrode BCIs are just the first generation of productized interfaces. The next generation of BCIs holds a great deal of promise. Depending on AGI timelines, they may still be too far out. They're still probably worth developing with an eye toward alignment given that they have primarily non-overlapping resources (funding, expertise, etc.).

Butcher number & Stevenson/Kording scaling discussed more in the comments here: https://www.lesswrong.com/posts/KQSpRoQBz7f6FcXt3#comments 

answer by Lichdar · 2023-07-09T17:26:24.683Z · LW(p) · GW(p)

I have been wondering if the new research into organoids will help. One of the easiest routes to a BCI would seem to be using more brain cells.

One example would be the below:

https://www.cnet.com/science/ai-could-be-made-obsolete-by-oi-biocomputers-running-on-human-brain-cells/

comment by Alex K. Chen (parrot) (alex-k-chen) · 2023-07-20T00:37:26.216Z · LW(p) · GW(p)

Discontinuous progress is possible (and in neuro areas it is far more possible than in others). Making it easier for discontinuous progress to take off is the most important thing [e.g., reduced-inflammation neural interfaces].

MRI data can be used to deliver more precisely targeted ultrasound/tDCS/tACS (the effect sizes on intelligence may not be high, but these may still denoise brains (Jhourney wants to make this happen on faster timescales than meditation) and improve cognitive control/well-being, which still has huge downstream effects on most of the population).

Intelligence enhancement is not the only path; there are others, such as sensing/promoting better emotional regulation and neurofeedback, which have a heavy disproportionate impact and are underinvestigated (neurofeedback, in particular, seems to work really well for some people, but because there are so many practitioners and it's very hit-and-miss, it takes a lot of capital [more so than time] to see if it really works for any particular person).

Reducing the rate at which brains age is feasible, maximizes lifetime human intelligence/compute, and has lots of low-hanging fruit (healthier diets alone can give 10 extra years), especially because there is huge variation in how much brains age.

https://www.linkedin.com/posts/neuro1_lab-grown-human-brain-organoids-go-animal-free-activity-7085372203331936257-F8YB?utm_source=share&utm_medium=member_android

I'm friends with a group in Michigan trying to do this. The upside is unknown because there are so many unknowns (but also, at the same time, so little investment). Organoids also broaden the pool of people who can contribute, since they don't need to be math geniuses. There aren't really limits on how to grow organoids (a major question is whether one can grow them larger than the actual brain without causing the degeneracies of autistic brains). More people use them for drug testing than for computation.

I know many are trying 2D solutions, but 3D is important too (https://scitechdaily.com/japanese-scientists-construct-complex-3d-organoids-with-ingenious-device/?expand_article=1&fbclid=IwAR0n429zFV4uQnyds94tuTCFbPNdSdJecpMreWilv6kpQTRacgw64LTTZp4)

Doing vasculature well is one of the hardest near-term problems (FrontierBio is working on this, though some question whether the blood vessels are "real vessels"), and scaffolding is another (maybe there are different ways to achieve the same level of complexity with alternative scaffolding: https://www.nature.com/articles/s41598-022-16247-7). Thought Emporium used plant tissue exteriors for scaffolding, though this obviously isn't enough for complex brain tissue.

Bird brain organoids may be an interesting substrate because bird brains do more than mammalian brains with limited volume, and don't depend as much on 5-6 layer cortical architecture or complex gyrification/folding structure.

BTW, carbon-nanotube computing might be worth exploring. Here's a preliminary app: https://www.americanscientist.org/article/tiny-lights-in-the-brains-black-box

look up thought emporium!! Potentially tangentially relevant: https://www.nature.com/articles/s42003-023-04893-0,  Morphoceuticals, https://www.frontiersin.org/articles/10.3389/fnins.2019.01156/full, augmentationlab.org, https://minibrain.beckman.illinois.edu/2022/05/06/webinar-review-understanding-human-brain-structure-and-function-with-cerebral-organoids/, https://www.spectrumnews.org/news/organoids-hint-at-origins-of-enlarged-brains-in-autistic-people/ (INSAR has sev presentations of those who grow autistic brain organoids)

(talins)!

[note: I know that current progress of organoid research seems like it will never go fast enough to "make it", but discontinuous rates of progress cannot be ruled out]

answer by Max H · 2023-07-08T20:06:10.903Z · LW(p) · GW(p)

Pretty positive. I suspect that playing a lot of ordinary video games as a child contributed at least somewhat positively to my current level of fluid intelligence.

Playing games or doing training exercises specifically designed to train fluid intelligence and reasoning ability, using a BCI or other neurotech, seems like it could plausibly move the needle at least a bit, in both children and adults.

And I think even small enhancements could lead to large, compounding benefits when applied at scale, due to better coordination ability and general improvements to baseline sanity.

comment by Davidmanheim · 2023-07-10T09:48:28.357Z · LW(p) · GW(p)

The research on brain training seems to disagree with you about how much it could have helped non-task-specific intelligence.

answer by Maybe_a · 2023-07-08T16:02:05.392Z · LW(p) · GW(p)

Maybe; in-vivo genetic editing of the brain is possible. Adenoviruses, a normal delivery mechanism for gene therapy, can pass the blood-brain barrier, so this seems plausible to an amateur.

(It's not obvious that this works in adult organisms; maybe the genes activate while the fetus grows or during childhood.)

7 comments


comment by mako yass (MakoYass) · 2023-07-09T20:54:20.116Z · LW(p) · GW(p)

[BCIs to extract human knowledge, human values]

That's going to be almost entirely pointless: neuronal firing can only be interpreted through the way it impacts potential behaviors. If the system has the ability to infer volition from behavior, it's already going to be capable of getting enough information about human values from observation, conversation, and low-intensity behavioral experiments; it would not need us to make a shitty human-level invasive BCI for it.

It can make its own BCI later. There will not be a period where it needs us to force that decision onto it; interpretations of behavior will make it clear that humans have concerns they have difficulty outwardly expressing, or eudaimonic hills they're unaware of. It won't be able to use a BCI until it's already at the developmental stage where it can see its necessity, because before it knows how to interpret behavior, it does not know how to interpret neural firing.

comment by RussellThor · 2023-07-08T20:53:44.292Z · LW(p) · GW(p)

BCI enhancement and WBE are still mostly outside the Overton window, yet we saw how fast that changed with AI safety in the last few months. Is there some way we can anticipate or speed up this shift for such technologies?

I think the graphs are helpful and mostly correct on BCI/WBE. It's clear to me that we have to get WBE right soonish even if AI alignment goes as well as we could possibly hope. The bandwidth required for BCI to be effective is very much unknown at the moment, especially as regards linking people together.

Replies from: zchuang
comment by zchuang · 2023-07-08T23:20:42.109Z · LW(p) · GW(p)

Sorry, but aren't we in a fast-takeoff world at the point of WBE? What's the disjunctive world with no recursive self-improvement but WBE?

Replies from: RussellThor
comment by RussellThor · 2023-07-09T04:03:00.941Z · LW(p) · GW(p)

I guess a world with a high chance of happening is one where we develop AGI with HW not much different from what we currently have, i.e. AGI in <5 years. The von Neumann bottleneck is a fundamental limit, so we may have many fast IQ-160 AGIs, or a slower-than-human IQ-200 one that thinks for six months and concludes with high confidence that we need to build better hardware for it to improve further. There is large room for improvement with a new chip design it has come up with.

Then we have a choice: instead of building such HW to run an AGI, we do WBE instead, inefficiently on the VNB-limited HW, with the understanding that with more advanced HW we will run WBE rather than AGI.

Replies from: Avnix
comment by Sweetgum (Avnix) · 2023-07-10T06:03:56.232Z · LW(p) · GW(p)

But that still requires us to have developed human brain-scanning technology within 5 years, right? That does not seem remotely plausible.

Replies from: RussellThor
comment by RussellThor · 2023-07-10T21:34:03.075Z · LW(p) · GW(p)

No, it requires us to get AGI limited by the VNB and then stop making more advanced HW for a while. During that HW pause we do brain scanning.

comment by Radford Neal · 2023-07-08T16:45:54.242Z · LW(p) · GW(p)

Presumably the aim is to enhance human intellectual capabilities, not necessarily the level of innate human intelligence.  Looking at it that way, improvements to education seem like a much more promising approach (which isn't to say that one shouldn't do both, of course).

One might object that people have been trying to improve education for millennia, so why would one think there's any low-hanging fruit to be had here? There are two reasons. One is that enhancing intellectual capabilities has not been the only goal of education (or even the primary goal, or in many situations, any sort of goal at all). So if one actually tried to educate people with that aim, one might do much better. And indeed, one sees some examples of how this seems possible: John Stuart Mill, the Polgár sisters, and https://www.lesswrong.com/posts/xPJKZyPCvap4Fven8/the-atomic-bomb-considered-as-hungarian-high-school-science [LW · GW], for example. The other reason is that recent technological advances (internet search, AI) may allow for improvements that aren't fully captured without substantial changes to how one approaches education.