Overview of strong human intelligence amplification methods

post by TsviBT · 2024-10-08T08:37:18.896Z · LW · GW · 103 comments

Contents

  Summary (table of made-up numbers)
  Call to action
  Context
    The goal
    Constraint: Algernon's law
    How to know what makes a smart brain
      Figure it out ourselves
      Copy nature's work
  Brain emulation
    The approach
    Problems
  Genomic approaches
    Adult brain gene editing
      The approach
      Problems
    Germline engineering
      The approach
      Problems
  Signaling molecules for creative brains
    The approach
    Problems
  Brain-brain electrical interface approaches
    Problems with all electrical brain interface approaches
    Massive cerebral prosthetic connectivity
    Human / human interface
    Interface with brain tissue in a vat
  Massive neural transplantation
    The approach
    Problems
  Support for thinking
    The approaches
    Problems
  FAQ
    What about weak amplification
    What about ...
    The real intelligence enhancement is ...
    Is this good to do?

How can we make many humans who are very good at solving difficult problems?

Summary (table of made-up numbers)

I made up the made-up numbers in this table of made-up numbers; therefore, the numbers in this table of made-up numbers are made-up numbers.

Call to action

If you have a shitload of money, there are some projects you can give money to that would make supergenius humans on demand happen faster. If you have a fuckton of money, there are projects whose creation you could fund that would greatly accelerate this technology.

If you're young and smart, or are already an expert in stem cell / reproductive biology, biotech, or anything related to brain-computer interfaces, there are some projects you could work on.

If neither, think hard, maybe I missed something.

You can DM me or gmail me at tsvibtcontact.

Context

The goal

What empowers humanity is the ability of humans to notice, recognize, remember, correlate, ideate, tinker, explain, test, judge, communicate, interrogate, and design. To increase human empowerment, improve those abilities by improving their source: human brains.

AGI is going to destroy the future's promise of massive humane value. To prevent that, create humans who can navigate the creation of AGI. Humans alive now can't figure out how to make AGI that leads to a humane universe.

These are desirable virtues: philosophical problem-solving ability, creativity, wisdom, taste, memory, speed, cleverness, understanding, judgement. These virtues depend on mental and social software, but can also be enhanced by enhancing human brains.

How much? Navigating the creation of AGI will likely require solving philosophical problems that are beyond the capabilities of the current population of humans, given the available time (some decades). Six standard deviations above the mean is about 1 in 10^9; seven standard deviations is about 1 in 10^12, i.e. rarer than anyone in the current population of ~10^10. So the goal is to create many people who are 7 SDs above the mean in cognitive capabilities. That's "strong human intelligence amplification". (Why not more SDs? There are many downside risks to changing the process that creates humans, so going further is an unnecessary risk.)
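
As a quick check of those rarity figures, here is a minimal sketch, assuming a Gaussian distribution of cognitive ability (an idealization, but it's the distribution the SD figures refer to):

```python
# Upper-tail probability of a standard normal: how rare is +6 SD? +7 SD?
from scipy.stats import norm

for sd in (6, 7):
    p = norm.sf(sd)  # P(Z > sd), the fraction of people above +sd SD
    print(f"+{sd} SD: about 1 in {1/p:.0e}")

# +6 SD: about 1 in 1e+09
# +7 SD: about 1 in 8e+11  (~10^12)
```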

It is my conviction that this is the only way forward for humanity.

Constraint: Algernon's law

Algernon's law: If there's a change to human brains that human-evolution could have made, but didn't, then it is net-neutral or net-negative for inclusive relative genetic fitness. If intelligence is ceteris paribus a fitness advantage, then a change to human brains that increases intelligence must either come with other disadvantages or else be inaccessible to evolution.

Ways around Algernon's law, increasing intelligence anyway:

How to know what makes a smart brain

Figure it out ourselves

Copy nature's work

Brain emulation

The approach

Method: figure out how neurons work, scan human brains, make a simulation of a scanned brain, and then use software improvements to make the brain think better.

The idea is to have a human brain, but with the advantages of being in a computer: faster processing, more scalable hardware, more introspectable (e.g. read access to all internals, even if they are obscured; computation traces), reproducible computations, A/B testing components or other tweaks, low-level optimizable, process forking. This is a "figure it out ourselves" method——we'd have to figure out what makes the emulated brain smarter.

Problems

Fundamentally, brain emulations are a 0-to-1 move, whereas the other approaches take a normal human brain as the basic engine and then modify it in some way. The 0-to-1 approach is more difficult, more speculative, and riskier.

Genomic approaches

These approaches look at the 7 billion natural experiments and see which genetic variants correlate with intelligence. IQ is a very imperfect but measurable and sufficient proxy for problem-solving ability. Since >7 of every 10 IQ points are explained by genetic variation, we can extract a lot of what nature knows about what makes brains have many capabilities. We can't get that knowledge about capable brains in a form usable as engineering (to build a brain from scratch), but we can at least get it in a form usable as scores (which genomes make brains with fewer or more capabilities). These are "copy nature's work" approaches.
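
To illustrate the "scores" framing, here is a minimal sketch of a polygenic score: a weighted sum over a genome's variants, with per-variant weights estimated from those natural experiments. The genotypes and effect sizes below are made up for the example, not real GWAS output:

```python
# Hypothetical polygenic score: dot product of variant counts with effect sizes.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_variants = 5, 1000

# 0/1/2 copies of the trait-associated allele at each site (made-up data)
genotypes = rng.integers(0, 3, size=(n_people, n_variants))
# per-variant effect sizes in IQ points, as a GWAS might estimate them (made-up)
effect_sizes = rng.normal(0.0, 0.05, size=n_variants)

scores = genotypes @ effect_sizes  # one score per genome; higher = more IQ-positive variants
print(scores)
```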

Adult brain gene editing

The approach

Method: edit IQ-positive variants into the brain cells of adult humans.

See "Significantly Enhancing ... [LW · GW]".

Problems

Germline engineering

This is the way that will work. (Note that there are many downside risks to germline engineering, though AFAICT they can be alleviated to such an extent that the tradeoff is worth it by far.)

The approach

Method: make a baby from a cell that has a genome that has many IQ-positive genetic variants.

Subtasks:

These tasks don't necessarily completely factor out. For example, some approaches might try to "piggyback" off the natural epigenomic reset by using chromosomes from natural gametes or zygotes, which will have the correct epigenomic state already.

See also Branwen, "Embryo Selection ...".

More information on request. Some of the important research is happening, but there's always room for more funding and talent.

Problems

Signaling molecules for creative brains

The approach

Method: identify master signaling molecules that control brain areas or brain developmental stages that are associated with problem-solving ability; treat adult brains with those signaling molecules.

Due to evolved modularity, organic systems are governed by genomic regulatory networks (GRNs). Maybe we can isolate and artificially activate GRNs that generate physiological states that produce cognitive capabilities not otherwise available in a default adult's brain. The hope is that there's a very small set of master regulators that can turn on larger circuits with strong orchestrated effects, as is the case with hormones, so that treatments are relatively simple, high-leverage, and discoverable. For example, maybe we could replicate the signaling context that activates childish learning capabilities, or maybe we could replicate the signaling context that activates parietal problem-solving in more brain tissue.

I haven't looked into this enough to know whether or not it makes sense. This is a "copy nature's work" approach: nature knows more about how to make brains that are good at thinking, than what is expressed in a normal adult human.

Problems

Brain-brain electrical interface approaches

Brain-computer interfaces don't obviously give an opportunity for large increases in creative philosophical problem-solving ability. See the discussion in "Prosthetic connectivity". The fundamental problem is that we, programming the computer part, don't know how to write code that does transformations that will be useful for neural minds.

But brain-brain interfaces——adding connections between brain tissues that normally aren't connected——might increase those abilities. These approaches use electrodes to read electrical signals from neurons, then transmit those signals (perhaps compressed/filtered/transformed) through wires / fiber optic cables / EM waves, then write them to other neurons through other electrodes. These are "copy nature's work" approaches, in the sense that we think nature made neurons that know how to arrange themselves usefully when connected with other neurons.

Problems with all electrical brain interface approaches

Massive cerebral prosthetic connectivity

Source: https://www.neuromedia.ca/white-matter/

Half of the human brain is white matter, i.e. neuronal axons with fatty sheaths around them to make them transmit signals faster. White matter is ~1/10 the volume of rodent brains, but ~1/2 the volume of human brains. Wiring is expensive and gets minimized; see "Principles of Neural Design" by Sterling and Laughlin. All these long-range axons are a huge metabolic expense. That means fast, long-range, high bandwidth (so to speak——there are many different points involved) communication is important to cognitive capabilities. See here.

A better-researched comparison would be helpful. But vaguely, my guess is that if we compare long-range neuronal axons to metal wires, fiber optic cables, or EM transmissions, we'd see (amortized over millions of connections): axons are in the same ballpark in terms of energy efficiency, but slower, lower bandwidth, and more voluminous. This leads to:

Method: add many millions of read-write electrodes to several brain areas, and then connect them to each other.

See "Prosthetic connectivity" for discussion of variants and problems. The main problem is that current brain implants furnish <10^4 connections, but >10^6 would probably be needed to have a major effect on problem-solving ability, and electrodes tend to kill neurons at the insertion site. I don't know how to accelerate this, assuming that Neuralink is already on the ball well enough.

Human / human interface

Method: add many thousands of read-write electrodes to several brain areas in two different brains, and then connect them to each other.

If one person could think with two brains, they'd be much smarter. Two people connected is not the same thing, but could get some of the benefits. The advantages of an electric interface over spoken language are higher bandwidth, lower latency, less cost (producing and decoding spoken words), and potentially more extrospective access (direct neural access to inexplicit neural events). But it's not clear that there should be much qualitative increase in philosophical problem-solving ability.

A key advantage over prosthetic connectivity is that the benefits might require a couple orders of magnitude (OOMs) fewer connections. That alone makes this method worth trying, as it will probably be feasible soon.

Interface with brain tissue in a vat

Method: grow neurons in vitro, and then connect them to a human brain.

The advantage of this approach is that it would in principle be scalable. The main additional obstacle, beyond any neural-neural interface approaches, is growing cognitively useful tissue in vitro. This is not completely out of the question——see "DishBrain"——but who knows if it would be feasible.

Massive neural transplantation

The approach

Method: grow >10^8 neurons (or appropriate stem cells) in vitro, and then put them into a human brain.

There have been some experiments along these lines, at a smaller scale, aimed at treating brain damage.

The idea is simply to scale up the brain's computing wetware.

Problems

Support for thinking

Generally, these approaches try to improve human thinking by modifying the algorithm-like elements involved in thinking. They are "figure it out ourselves" approaches.

The approaches

There is external support:

Method: create artifacts that offload some elements of thinking to a computer or other external device.

E.g. the printing press, the text editor, the search engine, the typechecker.

There is mental software:

Method: create methods of thinking that improve thinking.

E.g. the practice of mathematical proof, the practice of noticing rationalization, the practice of investigating boundaries.

There is social software:

Method: create methods of social organization that support and motivate thinking.

E.g. a shared narrative in which such-and-such cognitive tasks are worth doing, the culture of a productive research group.

Method: create methods of social organization that constitute multi-person thinking systems.

E.g. git.

Problems

FAQ

What about weak amplification

Getting rid of lead poisoning should absolutely be a priority. It won't greatly increase humanity's maximum intelligence level though.

What about ...

The real intelligence enhancement is ...

Look, I'm all for healing society, healing trauma, increasing collective consciousness, creating a shared vision of the future, ridding ourselves of malign egregores, blah blah. I'm all for it. But it's a difficult, thinky problem. ...So difficult that you might need some good thinking help with that thinky problem...

Is this good to do?

Yeah, probably. There are many downside risks, but the upside is large and the downsides can be greatly alleviated.

103 comments

Comments sorted by top scores.

comment by Raemon · 2024-10-11T19:55:19.334Z · LW(p) · GW(p)

Curated. Augmenting human intelligence seems like one of the most important things-to-think-about this century. I appreciated this post's taxonomy.

I appreciate the made up graph of made up numbers that Tsvi made up being clearly labeled as such.

I have a feeling that this post could be somewhat more thorough, maybe with more links to the places where someone could followup on the technical bits of each thread.

Replies from: None
comment by [deleted] · 2024-10-13T03:10:12.536Z · LW(p) · GW(p)

.

Replies from: Raemon, TsviBT
comment by Raemon · 2024-10-13T03:14:47.968Z · LW(p) · GW(p)

The point of made up numbers is that they are a helpful tool for teasing out some implicit information from your intuitions, which is often better than not doing that at all. But they are useful in a pretty different way from numbers-you-empirically-got-from-somewhere, and thus it's important that they be clearly labeled as made up numbers that Tsvi made up.

See: If it's worth doing, it's worth doing with Made Up Statistics

During this particular tutorial, Julia tried to explain Bayes’ Theorem to some, er, rationality virgins. I record a heavily-edited-to-avoid-recognizable-details memory of the conversation below:

Julia: So let’s try an example. Suppose there’s a five percent chance per month your computer breaks down. In that case…
Student: Whoa. Hold on here. That’s not the chance my computer will break down.
Julia: No? Well, what do you think the chance is?
Student: Who knows? It might happen, or it might not.
Julia: Right, but can you turn that into a number?
Student: No. I have no idea whether my computer will break. I’d be making the number up.
Julia: Well, in a sense, yes. But you’d be communicating some information. A 1% chance your computer will break down is very different from a 99% chance.
Student: I don’t know the future. Why do you want me to pretend I do?
Julia: (who is heroically nice and patient) Okay, let’s back up. Suppose you buy a sandwich. Is the sandwich probably poisoned, or probably not poisoned?
Student: Exactly which sandwich are we talking about here?

In the context of a lesson on probability, this is a problem I think most people would be able to avoid. But the student’s attitude, the one that rejects hokey quantification of things we don’t actually know how to quantify, is a pretty common one. And it informs a lot of the objections to utilitarianism – the problem of quantifying exactly how bad North Korea is shares some of the pitfalls of quantifying exactly how likely your computer is to break (for example, “we are kind of making this number up” is a pitfall).

The explanation that Julia and I tried to give the other student was that imperfect information still beats zero information. Even if the number “five percent” was made up (suppose that this is a new kind of computer being used in a new way that cannot be easily compared to longevity data for previous computers) it encodes our knowledge that computers are unlikely to break in any given month. Even if we are wrong by a very large amount (let’s say we’re off by a factor of four and the real number is 20%), if the insight we encoded into the number is sane we’re still doing better than giving no information at all (maybe model this as a random number generator which chooses anything from 0 – 100?)

Replies from: None
comment by [deleted] · 2024-10-13T11:36:03.132Z · LW(p) · GW(p)

.

Replies from: Raemon
comment by Raemon · 2024-10-13T17:38:37.392Z · LW(p) · GW(p)

Why do you think the table is the most important thing in the article?

A different thing Tsvi could have done was say “here's my best guess of which of these are most important, and my reasoning why”, but this would have been essentially the same thing as the table + surrounding essay, just with somewhat less fidelity about what his guesses were for the ranking.

Meanwhile I think the most important thing was laying out all the different potential areas of investigation, which I can now reason about on my own.

Replies from: None
comment by [deleted] · 2024-10-13T18:05:37.186Z · LW(p) · GW(p)

.

Replies from: Raemon
comment by Raemon · 2024-10-13T18:44:40.665Z · LW(p) · GW(p)

First, reiterating, the most important bit here is the schema, and drawing attention to this as an important area of further work.

Second, I think calling it "baseless speculation" is just wrong. Given that you're jumping to a kinda pejorative framing, it looks like your mind is kinda made up and I don't feel like arguing with you more. I don't think you actually read the scott article in a way that was really listening to it and considering the implications.

But, since I think the underlying question of "what is LessWrong curated for" is nuanced and not clearly spelled out, I'll go spell that out for the benefit of everyone just tuning in.

Model 1: LessWrong as "full intellectual pipeline, from 'research office watercooler' to 'published'"

The purpose of LW curated is not to be a peer reviewed journal, and the purpose of LW is not to have quite the same standards as published academic work. Instead, I think of LW as tackling "the problem that academia is solving" through a somewhat different lens, which includes many of the same pieces but organizes them differently.

What you see in a finished, published journal article is the very end of a process, and it's not where most of the generativity happens. Most progress is happening in conversations around watercoolers at work, slack channels, conference chit-chat, etc.

LW curation is not "published peer review." The LessWrong Review [? · GW] is more aspiring to be that (I also think The Review fails at achieving all my goals with "the good parts of peer review," although it achieves other goals, and I have thoughts on how to improve it on that axis)

But the bar for curated is something like "we've been talking about this around the watercooler for weeks, the people involved in the overall conversation have found this a useful concept and they are probably going to continue further research that builds on this and eventually you will see some more concrete output."

In this case, the conversation has already been ongoing awhile, with posts like Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible [LW · GW] (another curated post, which I think is more "rigorous" in the classical sense). 

I don't know if there's a good existing reference post for "here in detail is the motivation for why we want to do human intelligence enhancement and make it a major priority." Tsvi sort of briefly discusses that here but mostly focuses on "where might we want to focus, given this goal."

Model 2. "Review" is an ongoing process.

One way you can do science is to do all of the important work in private, and then publish at the end. That is basically just not how LW is arranged. The whole idea here is to move the watercooler to the public area, and handle the part where "ideas we talk about at the watercooler are imprecise and maybe wrong" with continuous comment-driven review and improving our conversational epistemics.

I do think the bar for curated is "it's been at least a few days, the arguments in the post make sense to me (the curator), and nobody has raised major red flags about the overall thrust of the post." (I think this post meets that bar)

I want people to argue about both the fiddly details of the post, or the overall frame of the post. The way you argue that is by making specific claims about why the post's details are wrong, or incomplete, or why the posts's framing is pointed in the wrong direction. 

The fact that this post's framing seems important is more reason to curate it, if we haven't found first-pass major flaws and I want more opportunity for people to discuss major flaws.

Saying "this post is vague and it's made up numbers aren't very precise" isn't adding anything to the conversation (except for providing some scaffold for a meta-discussion on LW site philosophy, which is maybe useful to do periodically since it's not obvious at a glance)

Revisiting the guesswork / "baseless speculation" bit

If a group of researchers have a vein they have been discussing at the watercooler, and it has survived a few rounds of discussion and internal criticism, and it'll be awhile before a major legible rigorous output is published:

I absolutely want those researchers' intuitions and best guesses about which bits are important. Those researchers have some expertise and worldmodels. They could spend another 10-100 hours articulating those intuitions with more precision and backing them up with more evidence. Sometimes it's correct to do that. But if I want other researchers to be able to pick up the work and run with it, I don't want them bottlenecked on the first researchers privately iterating another 10-100 hours before sharing it.

I don't want us to overanchor on those initial intuitions and best guesses. And if you don't trust those researchers' intuitions, I want you to have an easy time throwing them out and thinking about them from scratch.

comment by TsviBT · 2024-10-13T09:30:19.661Z · LW(p) · GW(p)

Basically what Raemon said. I wanted to summarize my opinions, give people something to disagree with (both the numbers and the rubric), highlight what considerations seem important to me (colored fields); but the numbers are made up (because they are predictions, which are difficult; and they are far from fully operationalized; and they are about a huge variety of complex things, so would be difficult to evaluate; and I've thought hard about some of the numbers, but not about most of them). It's better than giving no numbers, no?

Replies from: None
comment by [deleted] · 2024-10-13T11:31:01.921Z · LW(p) · GW(p)

.

Replies from: Raemon
comment by Raemon · 2024-10-13T22:51:13.980Z · LW(p) · GW(p)

FYI I do think the downside of "people may anchor off the numbers" is reasonable to weigh in the calculus of epistemic-community-norm-setting.

I would frame the question: "is the downside of people anchoring off potentially-very-off-base numbers worse than the upside of having intuitions somewhat more quantified, with more gears exposed?". I can imagine that question resolving in the "actually yeah it's net negative", but, if you're treating the upside as "zero" I think you're missing some important stuff.

comment by PeterMcCluskey · 2024-10-08T22:32:30.681Z · LW(p) · GW(p)

Brain emulation looks closer than your summary table indicates.

Manifold estimates a 48% chance by 2039.

Eon Systems is hiring for work on brain emulation.

Replies from: papetoast, TsviBT, Max Lee
comment by papetoast · 2024-10-10T07:43:35.640Z · LW(p) · GW(p)

Manifold is pretty weak evidence for anything >=1 year away because there are strong incentives to bet on short term markets.

comment by TsviBT · 2024-10-08T22:43:24.036Z · LW(p) · GW(p)

I'm not sure how to integrate such long-term markets from Manifold. But anyway, that market seems to have a very vague notion of emulation. For example, it doesn't mention anything about the emulation doing any useful cognitive work!

comment by Max Lee · 2024-10-12T23:33:51.078Z · LW(p) · GW(p)

Once we get superintelligence, we might get every other technology that the laws of physics allow, even if we aren't that "close" to these other technologies.

Maybe they believe in a  chance of superintelligence by 2039.

PS: Your comment may have caused it to drop to 38%. :)

Replies from: PeterMcCluskey
comment by PeterMcCluskey · 2024-10-14T02:14:54.756Z · LW(p) · GW(p)

Manifold estimates an 81% chance of ASI by 2036, using a definition that looks fairly weak and subjective to me.

I've bid the brain emulation market back up a bit.

comment by khafra · 2024-10-09T06:13:29.911Z · LW(p) · GW(p)

This is great! Everybody loves human intelligence augmentation, but I've never seen a taxonomy of it before, offering handholds for getting started. 

I'd say "software exobrain" is less "weaksauce," and more "80% of the peak benefits are already tapped out, for conscientious people who have heard of OneNote or Obsidian." I also am still holding out for bird neurons with portia spider architectural efficiency and human cranial volume; but I recognize that may not be as practical as it is cool.

comment by RogerDearnaley (roger-d-1) · 2024-10-10T01:13:31.035Z · LW(p) · GW(p)

If there's a change to human brains that human-evolution could have made, but didn't, then it is net-neutral or net-negative for inclusive relative genetic fitness. If intelligence is ceteris paribus a fitness advantage, then a change to human brains that increases intelligence must either come with other disadvantages or else be inaccessible to evolution.

You're assuming a steady state. Firstly, evolution takes time. Secondly, if humans were, for example, in an intelligence arms-race with other humans (for example, if smarter people can reliably con dumber people out of resources often enough to get a selective advantage out of it), then the relative genetic fitness of a specific intelligence level can vary over time, depending on how it compares to the rest of the population. Similarly, if much of the advantage of an IQ of 150 requires being able to find enough IQ 150 coworkers to collaborate with, then the relative genetic fitness of IQ 150 depends on the IQ profile of the rest of the population. 

Replies from: nathan-helm-burger, nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-10-16T05:39:42.519Z · LW(p) · GW(p)

An example I love of a helpful brain adaptation with few downsides that I know of, which hasn't spread far throughout mammals, is one in seal brains. Seals, unlike whales and dolphins, had an evolutionary niche which caused them to not get as good at holding their breath as would be optimal for them. They had many years of occasionally diving too deep and dying from brain damage related to oxygen deprivation (ROS in neurons). So, some ancient seal had a lucky mutation that gave them a cool trick. The glial cells which support neurons can easily grow back even if their population gets mostly wiped out. Seals have extra mitochondria in their glial cells and none in their neurons, and export the ATP made in the glial cells to the neurons. This means that the reactive oxygen species from oxygen deprivation of the mitochondria all occur in the glia. So, when a seal stays under too long, their glial cells die instead of their neurons. The result is that they suffer some mental deficiencies while the glia grow back over a few days or a couple weeks (depending on the severity), but then they have no lasting damage. Unlike in other mammals, where we lose neurons that can't grow back.

Given enough time, would humans evolve the same adaptation (if it does turn out to have no downsides)? Maybe, but probably not. There just isn't enough reproductive loss due to stroke/oxygen-deprivation to give a huge advantage to the rare mutant who lucked into it.

But since we have genetic engineering now... we could just give the ability to someone. People die occasionally competing in deep freediving competitions, and definitely get brain damage. I bet they'd love to have this mod if it were offered.

comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-10-16T05:30:05.854Z · LW(p) · GW(p)

Also, sometimes there are 'valleys of failure' which block off otherwise fruitful directions in evolution. If there's a later state that would be much better, but to get there would require too many negative mutations before the positive stuff showed up, the species may simply never get lucky enough to make it through the valley of failure.

This means that evolution is heavily limited to things which have mostly clear paths to them. That's a pretty significant limitation!

comment by sarahconstantin · 2024-10-08T18:44:03.304Z · LW(p) · GW(p)

ditto

we have really not fully explored ultrasound and afaik there is no reason to believe it's inherently weaker than administering signaling molecules. 

Replies from: TsviBT
comment by TsviBT · 2024-10-08T18:50:28.199Z · LW(p) · GW(p)

Signaling molecules can potentially take advantage of nature's GRNs. Are you saying that ultrasound might too?

Replies from: sarahconstantin
comment by sarahconstantin · 2024-10-08T19:30:40.544Z · LW(p) · GW(p)

Neuronal activity could certainly affect gene regulation! so yeah, I think it's possible (which is not a strong claim...lots of things "regulate" other things, that doesn't necessarily make them effective intervention points)

Replies from: TsviBT
comment by TsviBT · 2024-10-08T19:36:01.957Z · LW(p) · GW(p)

Yeah, of course it affects gene regulation. I'm saying that -- maayybe -- nature has specific broad patterns of gene expression associated with powerful cognition (mainly, creativity and learning in childhood); and since these are implemented as GRNs, they'll have small, discoverable on-off switches. You're copying nature's work about how to tune a brain to think/learn/create. With ultrasound, my impression is that you're kind of like "ok, I want to activate GABA neurons in this vague area of the temporal cortex" or "just turn off the amygdala for a day lol". You're trying to figure out yourself what blobs being on and off is good for thinking; and more importantly you have a smaller action space compared to signaling molecules -- you can only activate / deactivate whatever patterns of gene expression happen to be bundled together in "whatever is downstream of nuking the amygdala for a day".

comment by StartAtTheEnd · 2024-10-14T22:09:03.770Z · LW(p) · GW(p)

Short note: We don't need 7SDs to get 7SDs.

If we could increase the average IQ by 2SDs, then we'd have lots of intelligent people looking into intelligence enhancement. In short, intelligence feeds into itself, it might be possible to start the AGI explosion in humans.

Replies from: TsviBT
comment by TsviBT · 2024-10-15T00:47:02.366Z · LW(p) · GW(p)

(Just acknowledging that my response is kinda disorganized. Take it or leave it, feel free to ask followups.)

Most easy interventions work on a generational scale. There's pretty easy big wins like eliminating lead poisoning (and, IDK, feeding everyone, basic medicine, internet access, less cannibalistic schooling) which we should absolutely do, regardless of any X-risk concerns. But for X-risk concerns, generational is pretty slow.

This is both in terms of increasing general intelligence, and also in terms of specific capabilities. Even if you bop an adult on the head and make zer +2SDs smarter, ze still would have to spend a bunch of time and effort to train up on some new field that's needed for the next approach to further increasing intelligence. That's not a generational scale exactly, maybe more like 10 years, but still.

We're leaking survival probability mass to an AI intelligence explosion year by year. I think we have something like 0-2 or 0-3 generations before dying to AGI.

To be clear, I'm assuming that when you say "we don't need 7SDs", you mean "we don't need to find an approach that could give 7SDs". (Though to be clear, I agree with that in a literal sense, because you can start with someone who's already +3SDs or whatever.) A problem with this is that approaches don't necessarily stack or scale, just because they can give a +2SD boost to a random person. If you take a starving person and feed zer well, ze'll be able to think better, for sure. Maybe even +2SDs? I really don't know, sounds plausible. But you can't then feed them much more and get them another +2SDs -- maybe you can get like +.5SD with some clever fine-tuning or something. And you can't then also get a big boost from good sleep, because you probably already increased their sleep quality by a lot; you're double counting. Most (though not all!) people in, say, the US, probably can't get very meaningfully less lead poisoned.

Further, these interventions I think would generally tend to bring people up to some fixed "healthy Gaussian distribution", rather than shift the whole healthy distribution's mean upward. In other words, the easy interventions that move the global average are more like "make things look like the developed world". Again, that's obviously good to do morally and practically, but in terms of X-risk specifically, it doesn't help that much. Getting 3x samples from the same distribution (the healthy distribution) barely increases the max intelligence. Much more important (for X-risk) is to shift the distribution you're drawing from. Stronger interventions that aren't generational, such as prosthetic connectivity or adult brain gene editing, would tend to come with much more personal risks, so it's not so scalable--and I don't think in terms of trying to get vast numbers of people to do something, but rather just in terms of making it possible for people to do something if they really want to.

So what this implies is that either

  1. your approach can scale up (maybe with more clever technology, but still riding on the same basic principle), or
  2. you're so capable that you can keep coming up with different effective, feasible approaches that stack.

So I think it matters to look for approaches that can scale up to large intelligence gains.

To put things in perspective, there's lots of people who say that {nootropics, note taking, meditation, TCMS, blood flow optimization, ...} give them +1SD boost or something on thinking ability. And yet, if they're so smart, why ain't they exploding?

All that said, I take your point. It does make it seem slightly more appealing to work on e.g. prosthetic connectivity, because that's a non-generational intervention and could plausibly be scaled up by putting more effort into it, given an initial boost.

I think brain editing is maybe somewhat less scalable, though I'm not confident (plausibly it's more scalable; it might depend for example on the floor of necessary damage from editing rounds, and might depend on a ceiling of how much you can get given that you've passed developmental windows). Support for thinking (i.e. mental / social / computer tech) seems like it ought to be scalable, but again, why ain't you exploding? (Or in other words, we already see the sort of explosion that gets you; improving on it would take some major uncommon insights, or an alternative approach.) Massive neural transplantation might be scalable, but is very icky. If master regulator signaling molecules worked shockingly well, my wild guess is that they would be a little scalable (by finding more of them), but not much (there probably isn't that much room to overclock neurons? IDK); they'd be somewhat all-or-nothing, I guess?

Replies from: StartAtTheEnd
comment by StartAtTheEnd · 2024-10-15T02:05:58.130Z · LW(p) · GW(p)

You're correct that the average IQ could be increased in various ways, and that increasing the minimum IQ of the population wouldn't help us here. I was imagining shifting the entire normal distribution two SDs to the right, so that those who are already +4-5SDs would become +5-7SDs.

As far as I'm concerned, the progress of humanity stands on the shoulders of giants, and the bottom 99.999% aren't doing much of a difference.

The threshold for recursive self-improvement in humans, if one exists, is quite high. Perhaps if somebody like Neumann lived today it would be possible. By the way, most of the people who look into nootropics, meditations and other such things do so because they're not functional, so in a way it's a bit like asking "Why are there so many sick people in hospitals if it's a place for recovery?" though you could make the argument that geniuses would be doing these things if they worked.

My score on IQ tests has increased about 15 points since I was 18, but it's hard to say if I succeeded in increasing my intelligence or if it's just a result of improving my mental health and actually putting a bit of effort into my life. I still think that very high levels of concentration and effort can force the brain to reconstruct itself, but that this process is so unpleasant that people stop doing it once they're good enough (for instance, most people can't read all that fast, despite reading texts for 1000s of hours. But if they spend just a few weeks practicing, they can improve their reading speed by a lot, so this kind of shows how improvement stops once you stop applying pressure)

By the way, I don't know much about neurons. It could be that 4-5SD people are much harder to improve since the ratio of better states to worse states is much lower

Replies from: TsviBT
comment by TsviBT · 2024-10-15T02:15:41.580Z · LW(p) · GW(p)

I was imagining shifting the entire normal distribution two SDs to the right,

Right, but those interventions are harder (shifting the right tail further right is especially hard).

Also, shifting the distribution is just way different numerically from being able to make anyone who wants be +7SD. If you shift +1SD, you go from 0 people at +7SD to ~8 people.
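
(A rough sketch of that arithmetic, assuming a Gaussian and a population of ~8 billion:)

```python
# How many people clear the old +7 SD line after shifting the whole distribution?
from scipy.stats import norm

population = 8e9
for shift in (0, 1, 2):
    p = norm.sf(7 - shift)  # still need +7 SD relative to the original mean
    print(f"shift +{shift} SD: ~{population * p:.2g} people")

# shift +0 SD: ~0.01 people
# shift +1 SD: ~7.9 people
# shift +2 SD: ~2.3e+03 people
```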

(And note that the shift is, in some ways, more unequal compared to "anyone who wants, for the price of a new car, can reach the effective ceiling".)

Replies from: StartAtTheEnd
comment by StartAtTheEnd · 2024-10-15T10:40:44.254Z · LW(p) · GW(p)

Right, I agree with that.

A right shift by 2SDs would make people like Hawking, Einstein, Tesla, etc. about 100 times more common, and make it so that a few people who are 1-2SDs above these people are likely to appear soon. I think this is sufficient, but I don't know enough about human intelligence to guarantee it.

I think it depends on how the SD is increased. If you "merely" create a 150-IQ person with a 20-item working memory, or with an 8 SD processing speed, this may not be enough to understand the problem and to solve it. Of course, you can substitute with verbal intelligence, which I think a lot of mathematicians do. I can't rotate 5D objects in my head, but I can write equations on paper which can rotate 5D objects and get the right answer. I think this is how mathematics is progressing past what we can intuitively understand. Of course, if your non-verbal intelligence can keep up, you're much better off, since you can combine any insights from any area of life and get something new out of it.

comment by Chris_Leong · 2024-10-08T23:28:16.608Z · LW(p) · GW(p)

I think you're underestimating meditation.

Since I've started meditating I've realised that I've been much more sensitive to vibes.

There's a lot of folk who would be scarily capable if they were strong in system 1, in addition to being strong in system 2.

Then there's all the other benefits that meditation can provide if done properly: additional motivation, being better able to break out of narratives/notice patterns.

Then again, this is dependent on there being viable social interventions, rather than just aiming for 6 or 7 standard deviations of increase in intelligence.

Replies from: rhollerith_dot_com, TsviBT, alex-k-chen
comment by RHollerith (rhollerith_dot_com) · 2024-10-08T23:34:29.583Z · LW(p) · GW(p)

Meditation has been practiced for many centuries and millions practice it currently.

Please list 3 people who got deeply into meditation, then went on to change the world in some way, not counting people like Alan Watts who changed the world by promoting or teaching meditation.

Replies from: Jackson Wagner
comment by Jackson Wagner · 2024-10-09T02:45:04.793Z · LW(p) · GW(p)

I think there are many cases of reasonably successful people who often cite either some variety of meditation, or other self-improvement regimes / habits, as having a big impact on their success. This random article I googled cites the billionaires Ray Dalio, Marc Benioff, and Bill Gates, among others. (https://trytwello.com/ceos-that-meditate/)

Similarly you could find people (like Arnold Schwarzenegger, if I recall?) citing that adopting a more mature, stoic mindset about life was helpful to them -- Ray Dalio has this whole series of videos on "life principles" that he likes. And you could find others endorsing the importance of exercise and good sleep, or of using note-taking apps to stay organized.

I think the problem is not that meditation is ineffective, but that it's not usually a multiple-standard-deviations gamechanger (and when it is, it's probably usually a case of "counting up to zero from negative", as TsviBT calls it), and it's already a known technique. If nobody else in the world meditated or took notes or got enough sleep, you could probably stack those techniques and have a big advantage. But alas, a lot of CEOs and other top performers already know to do this stuff.

(Separately from the mundane life-improvement aspects, some meditators claim that the right kind of deep meditation can give you insight into deep philosophical problems, or the fundamental nature of conscious experience, and that this is so valuable that achieving this goal is basically the most important thing you could do in life. This might possibly even be true! But that's different from saying that meditation will give you +50 IQ points, which it won't. Kinda like how having an experience of sublime beauty while contemplating a work of art, might be life-changing, but won't give you +50 IQ points.)

Replies from: Viliam, MondSemmel
comment by Viliam · 2024-10-09T14:53:48.302Z · LW(p) · GW(p)

To compare to the obvious alternative, is the evidence for meditation stronger than the evidence for prayer? I assume there are also some religious billionaires and other successful people who would attribute their success to praying every day or something like that.

Replies from: Jackson Wagner
comment by Jackson Wagner · 2024-10-09T19:21:39.783Z · LW(p) · GW(p)

Maybe other people have a very different image of meditation than I do, such that they imagine it as something much more delusional and hyperreligious? Eg, some religious people do stuff like chanting mantras, or visualizing specific images of Buddhist deities, which indeed seems pretty crazy to me.

But the kind of meditation taught by popular secular sources like Sam Harris's Waking Up app, (or that I talk about in my "Examining The Witness" youtube series about the videogame The Witness), seems to me obviously much closer to basic psychology or rationality techniques than to religious practices. Compare Sam Harris's instructions about paying attention to the contents of one's experiences, to Gendlin's idea of "Circling", or Yudkowsky's concept of "sit down and actually try to think of solutions for five minutes", or the art of "noticing confusion", or the original Feynman essay where he describes holding off on proposing solutions. So it's weird to me when people seem really skeptical of meditation and set a very high burden of proof that they wouldn't apply for other mental habits like, say, CFAR techniques.

I'm not like a meditation fanatic -- personally I don't even meditate these days, although I feel bad about not doing it since it does make my life better. (Just like how I don't exercise much anymore despite exercise making my day go better, and I feel bad about that too...) But once upon a time I just tried it for a few weeks, learned a lot of interesting stuff, etc. I would say I got some mundane life benefits out of it -- some, like exercise or good sleep, that only lasted as long as I kept up the habit, and other benefits were more like mental skills that I've retained to today. I also got some very worthwhile philosophical insights, which I talk about, albeit in a rambly way mixed in with lots of other stuff, in my aforementioned video series. I certainly wouldn't say the philosophical insights were the most important thing in my whole life, or anything like that! But maybe more skilled deeper meditation = bigger insights, hence my agnosticism on whether the more bombastic meditation-related claims are true.

So I think people should just download the Waking Up app and try meditating for like 10 mins a day for 2-3 weeks or whatever-- way less of a time commitment than watching a TV show or playing most videogames -- and see for themselves if it's useful or not, instead of debating.

Anyways. For what it's worth, I googled "billionares who pray". I found this article (https://www.beliefnet.com/entertainment/5-christian-billionaires-you-didnt-know-about.aspx), which ironically also cites Bill Gates, plus the Walton Family and some other conservative CEOs. But IMO, if you read the article you'll notice that only one of them actually mentions a daily practice of prayer. The one that does, Do Won Chang, doesn't credit it for their business success... seems like they're successful and then they just also pray a lot. For the rest, it's all vaguer stuff about how their religion gives them a general moral foundation of knowing what's right and wrong, or how God inspires them to give back to their local community, or whatever.

So, personally I'd consider this duel of first-page-google-results to be a win for meditation versus prayer, since the meditators are describing a more direct relationship between scheduling time to regularly meditate and the assorted benefits they say it brings, while the prayer people are more describing how they think it's valuable to be christian in an overall cultural sense. Although I'm sure with more effort you could find lots of assorted conservatives claiming that prayer specifically helps them with their business in some concrete way. (I'm sure there are many people who "pray" in ways that resemble meditation, or resemble Yudkowsky's sitting-down-and-trying-to-think-of-solutions-for-five-minutes-by-the-clock, and find these techniques helpful!)

IMO, probably more convincing than dueling dubious claims of business titans, is testimony from rationalist-community members who write in detail about their experiences and reasoning. Alexey Guzey's post here is interesting, as he's swung from being vocally anti-meditation, to being way more into it than I ever was. He seems to still generally have his head on straight (ie hasn't become a religious fanatic or something), and says that meditation seems to have been helpful for him in terms of getting more things done: https://guzey.com/2022-lessons/

Replies from: Viliam
comment by Viliam · 2024-10-10T11:31:47.025Z · LW(p) · GW(p)

Thanks for answering my question directly in the second half.

I find the testimonies of rationalists who experimented with meditation less convincing than perhaps I should, simply because of selection bias. People who have pre-existing affinity towards "woo" will presumably be more likely to try meditation. And they will be more likely to report that it works, whether it does or not. I am not sure how much I should discount for this; perhaps I overdo it. I don't know.

A proper experiment would require a control group -- some people who were originally skeptical about meditation and Buddhism in general, and only agreed to do some exactly defined exercises, and preferably the reported differences should be measurable somehow. Otherwise, we have another selection bias, that if there are people for whom meditation does nothing, or is even harmful, they will stop trying. So at the end, 100% of people who tried will report success (whether real or imaginary), because those who didn't see any success have selected themselves out.

I approve of making the "secular version of Buddhism", but in a similar way, we could make a "secular version of Christianity". (For example, how is gratitude journaling significantly different from thanking God for all his blessings before you go to sleep?) And yet, I assume that the objection against "secular Christianity" on Less Wrong would be much greater than against "secular Buddhism". Maybe I am wrong, but the fact that no one is currently promoting "secular Christianity" on LW sounds like weak evidence. I suspect the relevant difference is that for an American atheist, Christianity is outgroup, and Buddhism is fargroup. Meditation is culturally acceptable among contrarians, because our neighbors don't do it. But that is unrelated to whether it works or not.

Also, I am not sure how secular the "secular Buddhism" actually is, given that people still go to retreats organized by religious people, etc. It feels too much for me to trust that someone is getting lots of important information from religious people, without unknowingly also getting some of their biases.

comment by MondSemmel · 2024-10-11T10:44:32.687Z · LW(p) · GW(p)

Re: successful people who meditate, IIRC in Tim Ferriss' book Tools of Titans, meditation was one of the most commonly mentioned habits of the interviewees.

Replies from: TsviBT
comment by TsviBT · 2024-10-11T10:54:50.282Z · LW(p) · GW(p)

Are these generally CEO-ish-types? Obviously "sustainably coping with very high pressure contexts" is an important and useful skill, and plausibly meditation can help a lot with that. But it seems pretty different from and not that related to increasing philosophical problem solving ability.

Replies from: MondSemmel
comment by MondSemmel · 2024-10-11T11:19:36.442Z · LW(p) · GW(p)

This random article I found repeats the Tim Ferriss claim re: successful people who meditate, but I haven't checked where it appears in the book Tools of Titans:

In his best-selling book Tools of Titans: The Tactics, Routines, and Habits of Billionaires, Icons, and World-Class Performers, Tim Ferriss interviews more than 200 executives, leaders, and world-class performers. He found that more than 80 percent practiced some form of mindfulness or meditation. Among some of the most successful people in the world, Ferriss uncovered the Most Consistent Pattern Of All, connecting world-class athletes with billionaire investors: meditation.

Other than that, I don't see why you'd relate meditation just to high-pressure contexts, rather than also conscientiousness, goal-directedness, etc. To me, it does also seem directly related to increasing philosophical problem-solving ability. Particularly when it comes to reasoning about consciousness and other stuff where an improved introspection helps most. Sam Harris would be kind of a posterchild for this, right?

What I can't see meditation doing is to provide the kind of multiple SD intelligence amplification you're interested in, plus it has other issues like taking a lot of time (though a "meditation pill" would resolve that) and potential value drift.

comment by TsviBT · 2024-10-08T23:32:12.366Z · LW(p) · GW(p)

Got any evidence?

Replies from: Chris_Leong
comment by Chris_Leong · 2024-10-09T04:13:44.887Z · LW(p) · GW(p)

Not really.

comment by Alex K. Chen (parrot) (alex-k-chen) · 2024-10-15T02:11:35.205Z · LW(p) · GW(p)

How about TMS/tFUS/tACS => "meditation"/reducing neural noise?

Drastic improvements in mental health/reducing neural noise & rumination are way more feasible than increasing human intelligence (and still have huge potential for very high impact when applied on a population-wide scale [1]), and are possible to do on mass-scale (and there are some experimental TMS protocols like SAINT/accelerated TMS which aim to capture the benefits of TMS on a 1-2 week timeline) [there's also wave neuroscience, which uses mERT and works in conjunction with qEEG, but I'm not sure if it's "ready enough" yet - it seems to involve some sort of guesswork and there are a few negative reviews on reddit]. There are a few accelerated TMS centers and they're not FDA-approved for much more than depression, but if we have fast AGI timelines, the money matters less.

[speeding up feedback loops are also important for mass-adoption - which both accelerated TMS/SAINT and the "intense tACS program" that people like neurofield [Nicholas Dogris/Tiffany Thompson] and James Croall people try to do]. Ideally, the TMS/SAINT or tACS should be done in conjunction with regular monitoring of brainwaves with qEEG or fMRI throughout.

Effect sizes of tFUS are said to be small relative to certain medications/drugs [this is true for neurofeedback/TMS/tACS in general], but part of this may be that people tend to be conservative with tFUS. Leo Zaroff has created an approachable tFUS community in the bay area. Still worth trying b/c the opportunity cost of trying them (with the right people) is very low (and very few people in our communities have heard of them).

There are some like Jeff Tarrant and the Neurofield people (I got to meet many of them at ISNR2024 => many are coming to the Suisun Summit now) who explore these montages.

Making EEG (or EEG+fNIRS) much easier to get can be high impact relative to amount of effort invested [with minimal opportunity cost]). I was pretty impressed with the convenience of Zeto's portable EEG headset at #Sfn24, as well as the convenience of the imedisync at #ISNR2024 [both EEG headsets cost $20,000, which is high but not insurmountable - eg if given some sort of guarantee on quality and useability I might be willing to procure one] but still haven't evaluated the signal quality of each when comparing them to other high-quality EEG montages like the deymed). It also makes it easier to create the true dataset of EEGs (also look into what Jonathan Xu is doing, though his paper is more about visual processing than mental health). We also don't even have proper high-quality EEG+HEG+fMRI+fNIRS datasets of "high intelligence" people relative to others [especially when measuring these potentials in response to cognitive load - I know Thomas Feiner has helped create a freecap support group and has done a lot of thought on ERPs and their response to cognitive load - he helped take my EEG during a brainmaster session at #ISNR2024]

I've found that smart people in general are extremely underexposed to psychometrics/psychonomics (there are not easy ways to enter those fields even if you're a psychology or neuroscience major), and there is a lot of potential for synergy in this area.

[1] esp given the prevalence of anxiety and other mental health issues of people within our communities

comment by Towards_Keeperhood (Simon Skade) · 2024-10-08T11:54:06.130Z · LW(p) · GW(p)

Questions I have:

  1. Why do you think the potential capability improvement of human-human interface is that high? Can you say more on how you imagine that working?
  2. For WBE my current not amazingly informed model thinks the bottleneck is finding a way to run it that wouldn't result in huge value drift. Are the 2% your guess that we could run it successfully without value drift, or that we can run it at all in a way that fooms even if it breaks alignment and potentially causes s-risk? For the latter case I'd have higher probability on that we could get that within 20years, and if you think that's only 2% I'd be curious why. (Though obviously I'd not want current earth to attempt it (except maybe in unlikely scenarios like "Eliezer is completely in charge of the project").)
Replies from: TsviBT
comment by TsviBT · 2024-10-08T14:38:02.696Z · LW(p) · GW(p)

These are both addressed in the post.

Replies from: Simon Skade
comment by Towards_Keeperhood (Simon Skade) · 2024-10-08T18:36:37.908Z · LW(p) · GW(p)

Well, for (1) I don't see how what's written in the post matches your 2-20 SD estimate. You said yourself: "But it's not clear that there should be much qualitative increase in philosophical problem-solving ability."

Like, higher communication bandwidth would be nice, but it's not as if more than 30 people can do significantly useful alignment research, and even among those who can there's a huge heavy tail IMO.

It would help if you could just write more - e.g. do you imagine a smart person effectively getting something like a bigger brain by recruiting areas from some other person (though then it'd presumably require a decent number of artificial connections again)? Or do you imagine many people turning into something like a hivemind (and how, more precisely, might the hivemind operate, and why would they be able to be much smarter together than individually)? Such details would be helpful.

For (2) I just want to ask for clarification: does your 2% estimate in the table include mitigating the value drift problems you mentioned? (Which would then seem reasonable to me. But one might also read the table as "2% that it works at all, and even then there would probably be significant value drift".) Like, with a few billion dollars we could manufacture enough electron microscopes to get a human connectome, and I'd unfortunately expect that it's not too hard to guess some of the important learning rules, simulate a bunch until the connectome seems like a plausible equilibrium given the firing and learning rules, and then it can sorta run and bootstrap even if there's significant divergence from the original human.

Replies from: TsviBT, TsviBT
comment by TsviBT · 2024-10-08T19:03:55.858Z · LW(p) · GW(p)

why would they be able to be much smarter together than individually

Ok some examples:

  • Multiple attention heads.

    • One person solves a problem that induces genuine creative thinking; the other person watches this, and learns how genuine creative thinking works. Not very feasible with current setup, maybe feasible with low-cost hardware access.
    • One person works on a difficult, high-context question; the other person remembers the stack trace, notices and remembers paths [noticed, but not taken, and then forgotten], debugs including subtle shifts, etc. Not very feasible currently without a bunch of distracting exposition. See TAP.
  • More direct (hence faster, deeper) implicit knowledge/skill sharing.

But a lot of the point is that there are thoughtforms I'm not aware of, which would be created by networked people. The general idea is as I stated: you've genuinely moved somewhat away from several siloed human minds, toward something more integrated.

comment by TsviBT · 2024-10-08T18:49:47.293Z · LW(p) · GW(p)

(1):

If one person could think with two brains, they'd be much smarter. Two people connected is not the same thing, but could get some of the benefits. The advantages of an electric interface over spoken language are higher bandwidth, lower latency, less cost (producing and decoding spoken words), and potentially more extrospective access (direct neural access to inexplicit neural events).

Do you think that one person with 2 or more brains would be 2-20 SDs?

Such details would be helpful.

I have no idea, that's why the range is so high.

(2):

The .02 is, as the table says, "as described"; so it should be plausibly a realistic emulation of the human brain. That would include getting slower dynamics right-ish, but wouldn't exclude getting value drift anyway.

it's not too hard to guess some of the important learning rules

Maybe. Why do you think this?

Replies from: Simon Skade
comment by Towards_Keeperhood (Simon Skade) · 2024-10-08T19:21:56.890Z · LW(p) · GW(p)

Do you think that one person with 2 or more brains would be 2-20 SDs?

If I had another copy of my brain I'd guess that might give me like +1std or possibly +2std but very hard to predict.

If a +6std person got another brain from a +5std person, the effect would be much lower I'd guess, maybe yielding +6.4std or possibly +6.8std overall.

But idk the counterfactual seems hard to predict because I cannot imagine it that concretely. Could be totally wrong.

it's not too hard to guess some of the important learning rules

Maybe. Why do you think this?

This was maybe not that well expressed. I mostly don't know, but it doesn't seem all that unlikely it could work. (I might read your timelines post within a week or so, and maybe then I'll have a better model of your model to better locate cruxes, idk.)

Replies from: TsviBT
comment by TsviBT · 2024-10-08T19:31:38.140Z · LW(p) · GW(p)

I mostly don't know but it doesn't seem all that unlikely it could work.

My main evidence is

  1. It's much easier to see the coarse electrical activity, compared to 5-second / 5-minute / 5-hour processes. The former, you just measure voltage or whatever. The latter you have to do some complicated bio stuff (transcriptomics or other *omics).
  2. I've asked something like 8ish people associated with brain emulation stuff about slow processes, and they never have an answer (either they hadn't thought about it, or they're confused and think it won't matter, which I just think they're wrong about, or they're like "yeah totally, but we've already got plenty of problems just understanding the fast electrical stuff").
  3. We have very little understanding of how the algorithms actually do their magic, so we're relying on just copying all the details well enough that we get the whole thing to work.
Replies from: Simon Skade
comment by Towards_Keeperhood (Simon Skade) · 2024-10-08T20:26:46.446Z · LW(p) · GW(p)

I mean, you can look at neurons in vitro and see how they adapt to different stimuli.

Idk, I'd weakly guess that the neuron-level learning rules are relatively simple, and that they construct more complex learning rules for e.g. cortical minicolumns and eventually cortical columns or something, and that we might be able to infer from the connectome what kind of function cortical columns perhaps implement, and that this can give us a strong hint for what kind of cortical-column-level learning rules might select for the kind of algorithms implemented there abstractly, and that we can trace rules back to lower levels given the connectome. Tbc I don't think it will look exactly like that; I'm just saying something roughly like that, where maybe it's actually some common circuit loops instead of cortical columns which are interesting, or whatever.

comment by Towards_Keeperhood (Simon Skade) · 2024-10-08T11:42:25.346Z · LW(p) · GW(p)

Thanks for writing this amazing overview!

Some comments:

  • I think different people might imagine quite different intelligence levels when under +7std thinkoompf.
    • E.g. I think that from like +6.3std the heavytail becomes even a lot stronger because those people can bootstrap themselves extremely good mental software. (My rough guess is that Tsvi::+7std = me::+6.5std, though I'd guess many readers would need to correct in the other direction (aka they might imagine +7std as less impressive than Tsvi).)
  • I think one me::Tsvi::+7std person would probably be enough to put humanity on a path to success (given Tsvi timelines), so the "repeatedly" criterion seems a bit off to me. (Though maybe my model of Tsvi is bad and me::Tsvi::+7std > Tsvi::+7std.) (Also I would not expect them to want to build aligned superintelligence directly but rather find some way to transition civilization to a path towards dath ilan.)
  • By how many standard deviations you can increase intelligence seems to me to extremely heavily depend on what level you're starting from.
    • E.g. for adult augmentation, 2-3std for the average person seems plausible, but for the few +6std people on earth it might just give +0.2std or +0.3std, which tbc I think is incredibly worthwhile. (Again, I think small-std improvements starting from +6std make a much bigger difference than most people think.)
    • However I think for mental software and computer software it's sorta vice versa that extremely smart individuals might find ways to significantly leverage their capability through that even though the methods won't work for non-supergeniuses. (And it seems possible to me that for those there would then be AI capability externalities, aka high dual use.)
Replies from: TsviBT, Morpheus
comment by TsviBT · 2024-10-08T14:49:25.217Z · LW(p) · GW(p)

I think that from like +6.3std the heavytail becomes even a lot stronger because those people can bootstrap themselves extremely good mental software.

I agree something like this happens, I just don't think it's that strong of an effect.

I think one me::Tsvi::+7std person would probably be enough to put humanity on a path to success (given Tsvi timelines), so the "repeatedly" criterion seems a bit off to me.

  • A single human still has pretty strong limitations. E.g. fixed skull size (without further intervention); other non-scalable hardware (~one thread of attention, one pair of eyes and hands); self-reprogramming is just hard; benefits of self-reprogramming don't scale (hard to share with other people).
  • Coercion is bad; without coercion, a supergenius might just not want to work on whatever is strategically important for humanity.
  • It doesn't look to me like we're even close to being able to figure out AGI alignment, or other gnarly problems for that matter (such as decoding egregores). So we need a lot more brainpower, lots of lottery tickets.
  • There's a kind of power that comes from having many geniuses--think Manhattan project.

for the few +6std people on earth it might just give +0.2std or +0.3std,

Not sure what you're referring to here. Different methods have different curves. Adult brain editing would have diminishing returns, but nowhere near that diminishing.

it's sorta vice versa that extremely smart individuals might find ways to significantly leverage their capability

Plausibly, though I don't know of strong evidence for this. For example, my impression is that modern proof assistants still aren't in a state where a genius youngster with a proof assistant can unlock what feels like the possibility of learning a seemingly superhuman amount of math via direct dialogue with the truth--but I could imagine this being created soon. Do you have other evidence in mind?

Replies from: Simon Skade
comment by Towards_Keeperhood (Simon Skade) · 2024-10-08T19:03:56.701Z · LW(p) · GW(p)

There's a kind of power that comes from having many geniuses--think Manhattan project.

Basically agree, but I think alignment is the kind of problem where one supergenius might matter more. E.g. for general relativity, Einstein basically managed to find it something like 3 times faster than the rest of physics would have. I don't think a Manhattan project would've helped there, because even after Einstein published GR only relatively few people understood it (if I am informed correctly), and I don't think they could've made progress in the same way Einstein did but would've needed more experimental evidence.

Plausible to me that there are other potentially pivotal problems that have something of this character, but idk.

Do you have other evidence in mind?

Well not very legible evidence, and I could be wrong, but some of my thoughts on mental software:

It seems plausible to me that someone with +6.3std would be able to do some bootstrapping loop very roughly like:

  • find better ontology for modelling what is happening in my mind.
  • train to relatively-effortlessly model my thoughts in the new better ontology that compresses observations more and thus lets me notice a bit more of what's happening in my mind (and notice pieces where the ontology doesn't seem to fit well).
  • repeat.

The "relatively-effortlessly model well what is happening in my mind" part might help significantly for getting much faster and richer feedback loops for learning thinking skills.

When you have a good model of what happened in your mind to produce some output, you can better see the parts that were useless and the parts that were important, see how you want your cognitive algorithms to look, and plan how to train yourself to shape them that way.

When you master this kind of review-and-improving really well, you might be able to apply the skill to itself and bootstrap your review process.

It's generally hard to predict what someone smarter might figure out so I wouldn't be confident it's not possible.

Replies from: TsviBT, TsviBT
comment by TsviBT · 2024-10-08T19:07:48.666Z · LW(p) · GW(p)

I agree that peak problem-solving ability is very important, which is why I think strong amplification is such a priority. I just... so far I'm either not understanding, or else you're completely making up some big transition between 6 and 6.5?

Replies from: Simon Skade
comment by Towards_Keeperhood (Simon Skade) · 2024-10-08T19:16:18.541Z · LW(p) · GW(p)

Yeah, I sorta am. I feel like that's what I see from eyeballing the largest supergeniuses (in particular Einstein and Eliezer), but idk, it's very few data points and maybe I'm wrong.

Replies from: TsviBT, Avnix
comment by TsviBT · 2024-10-08T19:26:33.100Z · LW(p) · GW(p)

My guess would be that you're seeing a genuine difference, but that flavor/magnitude of difference is not very special to the 6 -> 6.5 transition. See my other comment.

comment by Sweetgum (Avnix) · 2024-10-11T03:01:23.704Z · LW(p) · GW(p)

I think you're massively overestimating Eliezer Yudkowsky's intelligence. I would guess it's somewhere between +2 and +3 SD.

Replies from: Mo Nastri
comment by Mo Putera (Mo Nastri) · 2024-10-11T03:28:49.939Z · LW(p) · GW(p)

Seems way underestimated. While I don't think he's at "the largest supergeniuses" level either, even +3 SD implies just top 1 in ~700 i.e. millions of Eliezer-level people worldwide. I've been part of more quantitatively-selected groups talent-wise (e.g. for national scholarships awarded on academic merit) and I've never met anyone like him.
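
(For reference, a rough sketch of the arithmetic behind that rarity figure, just using the standard normal tail and taking world population as roughly 8 billion:)

```python
import math

def normal_tail(z):
    # P(Z > z) for a standard normal, via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2))

tail = normal_tail(3)     # ~0.00135
print(1 / tail)           # ~741, i.e. roughly "top 1 in ~700"
print(8e9 * tail)         # ~10.8 million people worldwide at +3 SD or above
```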

Replies from: Avnix
comment by Sweetgum (Avnix) · 2024-10-11T04:03:13.755Z · LW(p) · GW(p)

But are you sure the way in which he is unique among people you've met is mostly about intelligence rather than intelligence along with other traits?

comment by TsviBT · 2024-10-08T19:06:26.000Z · LW(p) · GW(p)

not very legible evidence

Wait are you saying it's illegible, or just bad? I mean are you saying that you've done something impressive and attribute that to doing this--or that you believe someone else has done so--but you can't share why you think so?

Replies from: Simon Skade
comment by Towards_Keeperhood (Simon Skade) · 2024-10-08T19:14:15.336Z · LW(p) · GW(p)

Maybe "bad" would be a better word. Idk, I feel like I have a different way of thinking about such intelligence-explosion-dynamics stuff that most people don't have (though Eliezer does), and I cannot really describe it all that well, and I think it makes sensible predictions, but yeah, idk, I'd stay sceptical given that I'm not that great at saying why I believe what I believe there.

No I don't know of anyone who did that.

It's sorta what I've been aiming for since very recently, and I don't expect a particularly high chance of success, but I'm also not quite +6.3std I think (though I'm only 21, and the worlds where it might succeed are the ones where I continue getting smarter for some time). Maybe I'm wrong, but I'd be pretty surprised if something like that wouldn't work for someone with +7std.

Replies from: TsviBT
comment by TsviBT · 2024-10-08T19:25:11.228Z · LW(p) · GW(p)

I mean, I agree that intelligence explosion is a thing, and the thing you described is part of it, and humans can kinda do it, and it helps quite a lot to have more raw cognitive horsepower...

I guess I'm not sure we're disagreeing about much here, except that

  1. I don't know why you're putting some important transition around 6 SDs. I expect that many capabilities will have shitty precursors in people with less native horsepower; I also expect some capabilities will basically not have such precursors, and so will be "transitions"; I just expect there to be enough such things that you wouldn't see some major transition at one point. I do think there's an important difference between 5.5 SD and 7.5 SD, which is that now you've created a human who's probably smarter than any human who's ever lived, so you've gone from 0 to 1 on some difficult thoughts; but I don't think that's special about this range, it would happen at any range.
  2. I think that adding more 6 SD or 7 SD people is really important, but you maybe don't as much? Not sure what you think.
Replies from: Simon Skade
comment by Towards_Keeperhood (Simon Skade) · 2024-10-08T20:38:43.072Z · LW(p) · GW(p)

First tbc, I'm always talking about thinkoompf, not just what's measured by IQ tests but also sanity and even drive.

Idk, I'm not at all sure about that, but it seems to me like Nate and Eliezer might be a decent chunk more competent than all the other people I'm aware of. So maybe for the current era (by which I mostly mean "after the sequences were published") it's like 1 person (Nate) per decade-or-a-bit-more who becomes really competent, which is very roughly +6std. (EDIT: Retracted because the evidence is too shaky. It still seems to me like the heavy tail of intelligence gets very far very quickly though.)

Like, I'd guess that before the sequences, and without having the strong motivator of needing to save humanity, the transition might rather have been +6.4std -- +6.8std. Idk. Though tbc I don't really expect it to be like "yeah maybe from 6.3std it enters a faster improvement curve which is then not changing that much" but more like the curve just getting steeper and steeper very fast without there being a visible kink.

I feel like if we now created someone with +6.3std the person would already become smarter than any person who ever lived because there are certain advantages of being born now which would help a lot for getting up to speed (e.g. the sequences, the Internet).

comment by Morpheus · 2024-10-08T14:45:21.538Z · LW(p) · GW(p)

adult augmentation 2-3std for the average person seems plausible, but for the few +6std people on earth it might just give +0.2std or +0.3std, which tbc I think is incredibly worthwhile.

Such high diminishing returns in g based on genes seem quite implausible to me, but I would be happy if you can point to evidence to the contrary. If it works well for people with average intelligence, I'd expect it to work at most half as well at +6sd.

Replies from: Simon Skade
comment by Towards_Keeperhood (Simon Skade) · 2024-10-08T18:16:20.384Z · LW(p) · GW(p)

Idk, I'd be intuitively surprised if adult augmentation would get someone from +6 to +7. I'm like: from +0 to +3 is a big difference, and from +6 to +6.3 is an almost as big difference too. But idk, maybe not. Maybe it's also partially that I think intelligence augmentation interventions get harder once you get into higher intelligence levels. Where there were previously easy improvement possibilities, there might later need to be more entangled groups of genes that are good, and it's harder to tune those. And it's hard to get very good data on what genes working together actually result in very high intelligence, because we don't have that many very smart people.

comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) · 2024-10-12T09:42:46.823Z · LW(p) · GW(p)

I don't understand. The hard problem of alignment/CEV/etc. is that it's not obvious how to scale intelligence while "maintaining" utility function/preferences, and this still applies for human intelligence amplification.

I suppose this is fine if the only improvement you can expect beyond human-level intelligence is "processing speed", but I would expect superhuman AI to be more intelligent in a variety of ways.

Replies from: TsviBT
comment by TsviBT · 2024-10-12T11:41:20.145Z · LW(p) · GW(p)

Yeah, there's a value-drift column in the table of made-up numbers. Values matter and are always at stake, and are relatively more at stake here; and we should think about how to do these things in a way that avoids core value drift.

You have major advantages when creating humans but tweaked somehow, compared to creating de novo AGI.

  • The main thing is that you're starting with a human. You start with all the stuff that determines human values--a childhood, basal ganglia giving their opinions about stuff, a stomach, a human body with human sensations, hardware empathy, etc. Then you're tweaking things--but not that much. (Except for in brain emulation, which is why it gets the highest value drift rating.)
  • Another thing is that there's a strong built-in limit on the strength of one human: skullsize. (Also other hardware limits: one pair of eyes and hands, one voicebox, probably 1 or 1.5 threads of attention, etc.) One human just can't do that much--at least not without interfacing with many other humans. (This doesn't apply for brain emulation, and potentially applies less for some brain-brain connectivity enhancements.)
  • Another key hardware limit is that there's a limit on how much you can reprogram your thinking, just by introspection and thinking. You can definitely reprogram the high-level protocols you follow, e.g. heuristics like "investigate border cases"; you can maybe influence lower-level processes such as concept-formation by, e.g., getting really good at making new words, but you maybe can't, IDK, tell your brain to allocate microcolumns to analyzing commonalities between the top 1000 best current candidate microcolumns for doing some task; and you definitely can't reprogram neuronal behavior (except through the extremely blunt-force method of drugs).
  • A third thing is that there's a more plausible way to actually throttle the rate of intelligence increase, compared to AI. With AI, there's a huge compute overhang, and you have no idea what dial you can turn that will make the AI become a genuinely creative thinker, like a human, but not go FOOM. With humans, for the above reasons, you can guess pretty reasonably that creeping up the number of prosthetic connections, or the number of transplanted neurons, or the number of IQ-positive alleles, will have a more continuous effect.
  • A fourth major advantage is that you can actually see what happens. In the genomic engineering case, you can see which alleles lead to people being sociopaths or not. You get end-to-end data. And then you can just select against those (but not too strongly). (This is icky, and should be done with extreme care and caution and forethought, but consider status quo bias--are the current selection pressures on new humans' values and behaviors really that great?)
comment by Logan Zoellner (logan-zoellner) · 2024-10-11T05:10:17.472Z · LW(p) · GW(p)

Is "give the human a calculator and a scratchpad" not allowed in this list?  i.e. if you give a human brain the ability to instantly recall any fact and solve any math problem (by connecting the human brain to a computer via neuralink) seems like this would make us smarter.

We already see this effect in part. For example, having access to chatGPT allows me to program more complicated projects because I can offload sub-problems to the AI (thereby freeing up working-memory to focus on the remaining complexity).  Even just having a piece of paper I can write things down on increases my intelligence from "I can barely do 2 digit multiplication" to a much higher level.

I suppose the complaint here is "what if the AI is misaligned", but if we restrict the AI to:

  1. recalling facts stored in its database
  2. giving mathematically verifiable answers to well-defined questions

it seems like the alignment-risk of such a system is basically 0.

I think this is how Terry Tao imagines the future of AI in math: basically the human will be responsible for all of the "leaps of logic" and the AI will be in charge of filling in the details.

Replies from: TsviBT
comment by TsviBT · 2024-10-11T10:51:41.793Z · LW(p) · GW(p)

instantly recall any fact and solve any math problem

You mean, recall any fact that's been put into text-searchable form in the past and by you, and solve any calculation problem that's in a reasonably common form.

I'm saying that the effect on philosophical problem-solving is just not very large. Yeah, if you've been spending 80% of your time on manually calculating things and 20% on "leaps of logic", and you could just as well spend 90% on the leaps, then calculators help a lot. But it's not by making you be able to do significantly better leaps. Maybe you can become better by getting more practice or something? But generally skills tend to plateau pretty sharply--there's always new bottlenecks, like a clicker game. If an improvement only addresses some smallish subset of the difficulty involved in some overall challenge, the overall challenge isn't addressed that much.

Like, if you could do calculations with 10x less effort, what calculations would you do to solve alignment, or get AGI banned, or make sure everyone gets food, or fix the housing crisis, or ....?

To put it a different way, I don't think Gödel's lack of a big fast calculator mattered too much?

Replies from: logan-zoellner
comment by Logan Zoellner (logan-zoellner) · 2024-10-11T21:37:40.343Z · LW(p) · GW(p)

You mean, recall any fact that's been put into text-searchable form in the past and by you, and solve any calculation problem that's in a reasonably common form.

 

No, I do not mean that at all.

An ideal system would store every piece of information its user has ever seen or heard, in addition to every book/article/program ever written or recorded, and be able to translate problems given in "common English" into objective mathematical proofs, then give an explanation of the answer in English again.

But generally skills tend to plateau pretty sharply--there's always new bottlenecks, like a clicker game.

This is an empirical question, but based on my own experience I would speculate the gain is quite significant.  Again, merely giving me access to a calculator and a piece of paper makes me better at math than 99.99% of people who do not have access to such tools.

Like, if you could do calculations with 10x less effort, what calculations would you do to solve alignment, or get AGI banned, or make sure everyone gets food, or fix the housing crisis, or ....?

would I

"solve alignment"?

Yes.

"get AGI banned"

No, because I solved alignment.

"make sure everyone gets food, or fix the housing crisis"

Both of these are political problems that have nothing to do with "intelligence".  If everyone was 10x smarter, maybe they would stop voting for retarded self-destructive policies. Idk, though.

Replies from: TsviBT
comment by TsviBT · 2024-10-11T21:47:36.608Z · LW(p) · GW(p)

An ideal system would store every piece of information its user has ever seen or heard in addition to every book/article/program ever written or recorded and be able to translate problems given in "common english" into objective mathematical proofs then giving an explanation of the answer in English again.

That's what I said. It excludes, for example, a fact that the human thinks of, unless ze speaks or writes it.

Again, merely giving me access to a calculator and a piece of paper makes me better at math than 99.99% of people who do not have access to such tools.

It makes you better at calculation, which is relevant for some kinds of math. It doesn't make you better at math in general though, no. If you're not familiar with higher math (the sort of things that grad students and professors do), you might not be aware: Most of the stuff that most of them do involves not very much that one would plug in to a calculator.

would I "solve alignment"? Yes.

What calculations would you plug into your fast-easy-calculator that result in you solving alignment?

Replies from: logan-zoellner
comment by Logan Zoellner (logan-zoellner) · 2024-10-12T01:50:34.687Z · LW(p) · GW(p)

What calculations would you plug into your fast-easy-calculator that result in you solving alignment?

 

Already wrote [LW · GW] an essay about this.

Replies from: TsviBT
comment by TsviBT · 2024-10-12T02:34:11.881Z · LW(p) · GW(p)

I don't think most of those proposals make sense, but anyway, the ones that do make sense only make sense with a pretty extreme math oracle--not something that leaves the human to fill in the "leaps of logic". It's just talking about AGI, basically. Which defeats the purpose.

Replies from: logan-zoellner
comment by Logan Zoellner (logan-zoellner) · 2024-10-12T11:19:06.208Z · LW(p) · GW(p)

It's just talking about AGI, basically. Which defeats the purpose.

 

A "math proof only" AGI avoids most alignment problems.  There's no need to worry about paperclip maximizing or instrumental convergence.

Replies from: TsviBT
comment by TsviBT · 2024-10-12T11:44:56.921Z · LW(p) · GW(p)

Not true. This isn't the place for this debate, but if you want to know:

  1. To get an AGI that can solve problems that require lots of genuinely novel thinking, you're probably pulling an agent out of a hat, and then you have an agent with unknown values and general optimization channels.
  2. Even if you only want to solve problems, you still need compute, and therefore wish to conquer the universe (for science!).
Replies from: logan-zoellner
comment by Logan Zoellner (logan-zoellner) · 2024-10-12T11:49:25.656Z · LW(p) · GW(p)

you're probably pulling an agent out of a hat

 

An agent that only thinks about math problems isn't going to take over the real world (it doesn't even have to know the real world exists, as this isn't a thing you can deduce from first principles).

Even if you only want to solve problems, you still need compute

We're going to get compute anyway.  Mundane uses of deep learning already use a lot of compute.

comment by Foyle (robert-lynn) · 2024-10-09T00:23:49.679Z · LW(p) · GW(p)

I read some years ago that the average IQ of kids is approximately 0.25*(Mom IQ + Dad IQ + 2x population mean IQ).  So the simplest and cheapest means to lift population average IQ by 1 standard deviation is just to use +4 sd sperm (around 1 in 30000), and high-IQ ova if you can convince enough genius women to donate (or clone, given the recent demonstration of male and female gamete production from stem cells).  +4sd mom+dad = +2sd kids on average.  This is the reality that allows ultra-wealthy dynasties to maintain a ~1.3sd IQ average advantage over the general population by selecting (attractive/exciting) +4sd mates.
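
(A quick sketch of that arithmetic, taking the quoted formula at face value as a rough heuristic rather than a precise genetic model:)

```python
def expected_child_iq(mom_iq, dad_iq, pop_mean=100):
    # Midparent regression-to-the-mean heuristic quoted above (not a precise genetic model)
    return 0.25 * (mom_iq + dad_iq + 2 * pop_mean)

print(expected_child_iq(160, 160))  # both parents +4sd -> 130, i.e. +2sd kids on average
print(expected_child_iq(100, 160))  # average mom, +4sd sperm donor -> 115, i.e. +1sd kids
```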

Probably the simplest and cheapest thing you can do to lift population IQ over the long term is to explain this IQ-heritability reality to every female under the age of 40 and make it common knowledge; a lot of them will choose genius sperm for themselves.

Beyond that intervention, which can happen immediately, there is little point in trying to do anything.  In 20 years, when ASIs straddle the earth like colossuses and we are all their slaves or pets, they will (likely even in best-case scenarios) be dictating our breeding and culling - or casually ignoring/exterminating us.  In the optimistic event of a Banksian Culture-like post-singularity utopia, magic ASI tech will be developed to near-universally optimize human neural function via nootropics or genetic editing to reach a peak baseline (or domesticate us into compliant meat-bots).  I think even a well-aligned ASI is likely to push this on us.

comment by Cole Wyeth (Amyr) · 2024-10-08T16:11:26.450Z · LW(p) · GW(p)

I think I'm more optimistic about starting with relatively weak intelligence augmentation. For now, I test my fluid intelligence at various times throughout the day (I'm working on better tests justified by algorithmic information theory in the style of Prof Hernandez-Orallo, like this one, which sucks to take: https://github.com/mathemajician/AIQ - but for now I use my own here: https://github.com/ColeWyeth/Brain-Training-Game), and I correlate the results with everything else I track about my lifestyle using Reflect: https://apps.apple.com/ca/app/reflect-track-anything/id6463800032 - which I endorse, though I should note it's owned/invented by a couple of my friends/former coworkers. I'll post some intermediate results soon. Obviously this kind of approach alone will probably only provide a low single-digit IQ boost at most, but I think it makes sense to pick the low-hanging fruit first (then attempt incrementally harder stuff with the benefit of being slightly smarter). Also, accurate metrics and data collection should be established as early as possible. Ultimately I want to strap some AR goggles on and measure my fluid intelligence in real time, ideally from eye movements in response to some subconscious stimulation (haven't vetted the plausibility of this idea at all).

Replies from: TsviBT
comment by TsviBT · 2024-10-08T16:21:31.838Z · LW(p) · GW(p)

I think it makes sense to pick the low-hanging fruit first (then attempt incrementally harder stuff with the benefit of being slightly smarter)

No, this doesn't make sense.

I think the stuff you're doing is probably fun / cool / interesting / helpful / something you like. That's great! You don't need to make an excuse for doing it, in terms of something about something else.

But no, that's not the right way to make really smart humans. The right way is to directly create the science and tech. You're saying something like "it stands to reason that if we can get a 5% boost on general intelligence, we should do that first, and then apply that to the tech". But

  • It's not a 5% boost to the cognitive capabilities that are the actual bottlenecks to creating the more powerful tech. It's less than that.
  • What you're actually doing is doing the 5% boost, and never doing the other stuff. Doing the other stuff is better for the purposes of making a bunch of supergeniuses. (Which, again, doesn't have to be your goal!)
Replies from: Amyr
comment by Cole Wyeth (Amyr) · 2024-10-08T16:36:21.052Z · LW(p) · GW(p)

I think there's a reasonable chance everything you said is true, except:

What you're actually doing is doing the 5% boost, and never doing the other stuff.

I intend to do the other stuff after finishing my PhD - though it's not guaranteed I'll follow through.

The next paragraph is low confidence because it is outside of my area of expertise (I work on agent foundations, not neuroscience):

The problem with Neuralink etc. is that they're trying to solve the bandwidth problem, which is not currently the bottleneck and will take too long to yield any benefits. A full neural lace is maybe similar to a technical solution to alignment in the sense that we won't get either within 20 years at our current intelligence levels. Also, I am not in a position where I have enough confidence in my sanity and intelligence metrics to tamper with my brain by injecting neurons into it and stuff. On the other hand, even a minor non-invasive general fluid intelligence increase at the top of the intelligence distribution would be incredibly valuable, and profits could be reinvested in more hardcore augmentation down the line. I'd be interested to hear where you disagree with this.

It almost goes without saying that if you can make substantial progress on the hardcore approaches that would be much, much more valuable than what I am suggesting, and I encourage you to try.

Replies from: TsviBT
comment by TsviBT · 2024-10-08T16:44:46.088Z · LW(p) · GW(p)

which is not currently the bottleneck and will take too long to yield any benefits

My guess is that it would be very hard to get to millions of connections, so maybe we agree, but I'm curious if you have more specific info. Why is it not the bottleneck though?

confidence in my sanity and intelligence metrics to tamper with my brain by injecting neurons into it and stuff.

That's fair. Germline engineering is the best approach and mostly doesn't have this problem--you're piggybacking off of human-evolution's knowledge about how to grow a healthy human.

minor non-invasive general fluid intelligence increase at the top of the intelligence distribution would be incredibly valuable and profits could be reinvested in more hardcore augmentation down the line

You're talking about a handful of people, so the benefit can't be that large. A repeatable method to make new supergeniuses is vastly more valuable.

Replies from: Amyr
comment by Cole Wyeth (Amyr) · 2024-10-08T17:04:46.212Z · LW(p) · GW(p)

My guess is that it would be very hard to get to millions of connections, so maybe we agree, but I'm curious if you have more specific info. Why is it not the bottleneck though?

I'm not a neuroscientist / cognitive scientist, but my impression is that rapid eye movements are already much faster than my conscious deliberation. Intuitively, this means there's already a lot of potential communication / control / measurement bandwidth left on the table. There is definitely a point beyond which you can't increase human intelligence without effectively adding more densely connected neurons or uploading and increasing clock speed. Honestly I don't think I'm equipped to go deeper into the details here. 

You're talking about a handful of people, so the benefit can't be that large.

I'm not sure I agree with either part of this sentence. If we had some really excellent intelligence augmentation software built into AR glasses we might boost on the order of thousands of people. Also I think the top 0.1% of people contribute a large chunk of economic productivity - say on the order of >5%.  

Replies from: TsviBT
comment by TsviBT · 2024-10-08T17:07:48.634Z · LW(p) · GW(p)

this means there's already a lot of potential communication / control / measurement bandwidth left on the table.

I'm talking about neuron-neuron bandwidth. https://tsvibt.blogspot.com/2022/11/prosthetic-connectivity.html

I agree that neuron-computer bandwidth has easier ways to improve it--but I don't think that bandwidth matters very much.

Replies from: Amyr
comment by Cole Wyeth (Amyr) · 2024-10-08T17:29:55.232Z · LW(p) · GW(p)

Personally I'm unlikely to increase my neuron-neuron bandwidth anytime soon, sounds like a very risky intervention even if possible.

comment by [deleted] · 2024-10-13T01:18:33.025Z · LW(p) · GW(p)

Replies from: TsviBT
comment by TsviBT · 2024-10-14T19:34:48.090Z · LW(p) · GW(p)

Is the situation so dire with AI intelligence explosion that a human one must exist to counter balance it?

I wouldn't exactly say "counter balance". It's more like we, as humans, want to get ahead of the AI intelligence explosion. Also, I wouldn't advocate for a human intelligence explosion that looks like what an AI explosion would probably look like. An explosion sounds like gaining capability as fast as possible, seizing any new mental technology that's invented and immediately overclocking it to invent the next and the next mental technology. That sort of thing would shred values in the process.

We would want to go about increasing the strength of humanity slowly, taking our time to not fuck it up. (But we wouldn't want to drag our feet either--there are, after all, still people starving and suffering, decaying and dying, by the thousands and tens of thousands every day.)

But yes, the situation with AI is very dire.

augmented humans stand a decent chance of wanting to go full throttle on all things Artificial

I'm not following. Why would augmented humans have worse judgement about what is good for what we care about? Or why would they care about different things?

comment by tup99 · 2024-10-12T14:36:26.127Z · LW(p) · GW(p)

I feel like it would be beneficial to add another sentence or two to the “goal” section, because I’m not at all convinced that we want this. As someone new to this topic, my emotional reaction to reading this list is terror.

Any of these techniques would surely be available to only a small fraction of the world’s population. And I feel like that would almost certainly result in a much worse world than today, for many of the same reasons as AGI. It will greatly increase the distance between the haves and the have-nots. (I get the same feeling reading this as I get from reading e/acc stuff — it feels like the author is only thinking about the good outcomes.)

Your answer would be that (1) AGI will be far more catastrophic, and (2) this is the only way to avoid an AGI catastrophe. Personally I’m not convinced. And even if I was, it would be really emotionally difficult to devote my resources to making the world much worse (even to save it from something even worse than that). So overall, I’d much rather bet on something else that will not *itself* make the world a much worse place.

Relatedly: Does your “value drift” column include the potential value drift of simply being much, much smarter than the rest of humanity (the have-nots)? Anecdotally, I think there’s somewhat of an inverse correlation between intelligence and empathy in humans. I’m not as worried about it as I am with AGI, but I’m much more worried than your column suggests. Imagine a super-intelligent Sam Altman.

And tangentially related: We actually have no idea if we can make this superintelligent baby “sane”. What you mean is that we can protect it from known genetic mental health problems, sure, but that’s not the whole picture. Superintelligence will probably affect a person’s personality/values in ways we can’t predict. It could cause depression, psychopathic behavior, who knows.

Replies from: TsviBT, Raemon
comment by TsviBT · 2024-10-13T00:45:28.306Z · LW(p) · GW(p)

Ok, I added some links to "Downside risks of genomic selection".

Any of these techniques would surely be available to only a small fraction of the world’s population.

Not true! This consideration is the main reason I included a "unit price" column. Germline engineering should be roughly comparable to IVF, i.e. available to middle class and up; and maybe cheaper given more scale; and it certainly ought to be subsidized, given the decreased lifetime healthcare costs alone.

greatly increase the distance between the haves and the have-nots

Eh, unless you can explain this more, I think you've been brainwashed by Gattaca or something. Gattaca conflates class with genetic endowment, which is fine because it's a movie about class via a genetics metaphor, but don't be confused that it's about genetics. Did the invention of smart phones increase or decrease the distance? In general, some technologies scale with money, and other technologies scale by bodycount. Each person only gets one brain to receive implants and stuff. Elon Musk, famously extremely rich and baby-obsessed, has what... 12 kids? A peasant could have 12 kids if they wanted to! Germline engineering would therefore be extremely democratic, at least for middle class and up. The solution, of course, is to make the tech even cheaper and more widely available, not to inflict preventable disease and disempowerment on everyone's kids.

Anecdotally, I think there’s somewhat of an inverse correlation between intelligence and empathy in humans.

Stats or GTFO.

Superintelligence will probably affect a person’s personality/values in ways we can’t predict. It could cause depression, psychopathic behavior, who knows.

First, the two specific things you listed are quite genetically heritable. Second, 7 SDs -- which is the most extreme form that I advocate for -- is only a little bit outside the Gaussian human distribution. It's just not that extreme of a change. It seems quite strange to postulate that a highly polygenic trait, if pushed to 5350 out of 10000 trait-positive variants, would suddenly cause major psychological problems, whereas natural-born people with 5250 or 5300 out of 10000 trait-positive variants are fine.
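
(Roughly where those numbers come from, under the toy assumption of 10,000 independent trait-positive variants, each at 50% frequency:)

```python
import math

n, p = 10_000, 0.5                   # toy model: independent variants at 50% frequency
mean = n * p                         # 5000 trait-positive variants on average
sd = math.sqrt(n * p * (1 - p))      # binomial SD = 50

for count in (5250, 5300, 5350):
    print(count, (count - mean) / sd)  # ~ +5, +6, +7 SD respectively
```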

comment by Raemon · 2024-10-12T23:41:22.522Z · LW(p) · GW(p)

I think the terror reaction is honestly pretty reasonable. ([edit: Not, like, necessarily meaning one shouldn't pursue this sort of direction on balance. I think the risks of doing this badly are real, and I think the risks of not doing anything are also quite real and probably great for a variety of reasons.])

One reason I nonetheless think this is very important to pursue is that we're probably going to end up with superintelligent AI this century, and it's going to be dramatically more alien and scary than the tail-risk outcomes here.

I do think the piece would be improved if it acknowledged and grappled with that more.

Replies from: TsviBT
comment by TsviBT · 2024-10-13T00:25:32.463Z · LW(p) · GW(p)

The essay is just about the methods. But I added a line or two linking to https://tsvibt.blogspot.com/2022/08/downside-risks-of-genomic-selection.html

comment by Rosoe · 2024-10-12T14:09:33.325Z · LW(p) · GW(p)

The genetic portions of this seem like a manifesto for creating highly intelligent, highly depressed, and thus highly unproductive people.

Replies from: TsviBT
comment by TsviBT · 2024-10-12T20:24:27.223Z · LW(p) · GW(p)

What do you mean? Why would they be depressed? Do you mean because they'd be pressured into working on AGI alignment, or something? Yeah, don't do that. Same as with any other kids, you teach them to be curious and good and kind and strong and free and responsible and so on.

comment by wassname · 2024-10-12T05:19:11.359Z · LW(p) · GW(p)

I made up the made-up numbers in this table of made-up numbers; therefore, the numbers in this table of made-up numbers are made-up numbers

These hallucinated outputs are really getting out of hand

comment by Chris_Leong · 2024-10-08T23:25:08.677Z · LW(p) · GW(p)

I think you're underestimating meditation.

Since I started meditating I've realised that I've become much more sensitive to vibes.

There are a lot of folks who would be scarily capable if they were strong in system 1, in addition to being strong in system 2.

Then there are all the other benefits that meditation can provide if done properly: additional motivation, being better able to break out of narratives/notice patterns.

comment by notfnofn · 2024-10-08T19:43:02.903Z · LW(p) · GW(p)

Thanks for the detailed writeup. I would personally be against basically all of the suggested methods that could create a significant improvement because the hard problem of consciousness remains hard and it seems very possible that an unconscious human race could result. I was a bit surprised to see no mention of this in the essay.

Replies from: TsviBT
comment by TsviBT · 2024-10-08T19:48:52.729Z · LW(p) · GW(p)

I guess that falls under "value drift" in the table. But yeah, I think that's extremely unlikely to happen without warning, except in the case of brain emulations. I do think any of these methods would be world-changing, and therefore extremely dangerous, and would demand lots of care and caution.

Replies from: notfnofn
comment by notfnofn · 2024-10-08T23:37:54.817Z · LW(p) · GW(p)

But yeah I think that's extremely unlikely to happen without warning, except in the case of brain emulations.

Could you explain what sort of warnings we'd get with, for instance, the interfaces approach? I don't see how that's possible.

Also this is semantics I guess, but I wouldn't classify this under "value drift". If there is such a thing as the hard problem of consciousness and these post-modified humans don't have whatever that is, I wouldn't care whether or not their behaviors and value functions resemble those of today's humans.

Replies from: TsviBT
comment by TsviBT · 2024-10-08T23:40:57.591Z · LW(p) · GW(p)

Someone gets some kind of interface, and then they stop being conscious. So they act weird, and people are like "hey they're acting super weird, they seem not conscious anymore, this seems bad". https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies [LW · GW]

Replies from: notfnofn
comment by notfnofn · 2024-10-09T01:36:00.426Z · LW(p) · GW(p)

If there is such a thing as the hard problem of consciousness

Yudkowsky's essay is explaining why he believes there is no hard problem of consciousness.

comment by Alvaro Chaveste (alvaro-chaveste) · 2024-10-14T08:52:08.227Z · LW(p) · GW(p)

I think you're making this more complicated than it has to be. Why try to move a river to you when you can move to the river? Social engineering is the way, I think. The same way that flat-surfaced guardrails on stairs encourage people to leave trash/drinks/whatever there, so does everything else in our public life (the life that we have when interacting with others - going to the store; filling up with gas; waiting in lines; shopping; etc.). Combining microhabits with de-atrophication of our brains is the easiest and most widely applicable solution. Maybe create a program where, if you can tell the cashier the change you should be getting, you get a discount or some points or some sort of reward. Or in banks or the DPS, provide pen and paper or multiple-colored pencils/pens and have prompts on the screen or drawing challenges, and completing them gets you to the front of the line. If going from grade-school to graduation material in 2 months gets your speeding ticket expunged, I bet more people would have that knowledge fresh in mind. Or even while in jail, encouraging people to take (advanced) classes will not only better prepare them for when they finish serving their time, but a secret benefit would be that, as the adult population with the most time on their hands, they will be able to digest and advance whatever subject it is they were learning.

This would clear the consent/ethical 'issues' most of your suggestions posed. It's also more empowering (which in turn will encourage more learning).

I think a more 'complete' species-improvement can be had if more people got less stupid, rather than if there were a handful of super not-stupid people.

I also think you fall into a trap by discrediting(?), discouraging(?), disparaging(?) just how much emotions/feelings/nonthoughts would help us evolve as a species. That we learned to lego-tize our thoughts so that others could understand and use them was the first step in our fetishization of thoughts over emotions. It made us forget that before thoughts there were emotions, and that those emotions are the ground on which we pave our thoughts.

comment by Cameron Berg (cameron-berg) · 2024-10-11T21:24:11.633Z · LW(p) · GW(p)

Somewhat surprised that this list doesn't include something along the lines of "punt this problem to a sufficiently advanced AI of the near future." This could potentially dramatically decrease the amount of time required to implement some of these proposals, or otherwise yield (and proceed to implement) new promising proposals. 

It seems to me in general that human intelligence augmentation is often framed in a vaguely-zero-sum way with getting AGI ("we have to all get a lot smarter before AGI, or else..."), but it seems quite possible that AGI or near-AGI could itself help with the problem of human intelligence augmentation.

Replies from: TsviBT
comment by TsviBT · 2024-10-11T21:42:43.049Z · LW(p) · GW(p)

So your suggestion for accelerating strong human intelligence amplification is ...checks notes... "don't do anything"?

Or are you suggesting accelerating AI research in order to use the improved AI faster? I guess technically that would accelerate amplification but seems bad to do.

Maybe AI could help with some parts of the research. But

  1. we probably don't need AI to do it, so we should do it now, and
  2. if we're not all dead, there will still be a bunch of research that has to be done by humans.

On a psychologizing note, your comment seems like part of a pattern of trying to wriggle out of doing things the way that is hard that will work. Looking for such cheat codes is good but not if you don't aggressively prune the ones that don't actually work -- hard+works is better than easy+not-works.

Replies from: cameron-berg
comment by Cameron Berg (cameron-berg) · 2024-10-12T19:17:14.095Z · LW(p) · GW(p)

I am not suggesting either of those things. You enumerated a bunch of ways we might use cutting-edge technologies to facilitate intelligence amplification, and I am simply noting that frontier AI seems like it will inevitably become one such technology in the near future.

On a psychologizing note, your comment seems like part of a pattern of trying to wriggle out of doing things the way that is hard that will work.

Completely unsure what you are referring to or the other datapoints in this supposed pattern. Strikes me as somewhat ad-hominem-y unless I am misunderstanding what you are saying.

AI helping to do good science wouldn't make the work any less hard—it just would cause the same hard work to happen faster. 

hard+works is better than easy+not-works

seems trivially true. I think the full picture is something like:

efficient+effective > inefficient+effective > efficient+ineffective > inefficient+ineffective

Of course agree that if AI-assisted science is not effective, it would be worse to do than something that is slower but effective. Seems like whether or not this sort of system could be effective is an empirical question that will be largely settled in the next few years.

comment by localdeity · 2024-10-08T11:58:20.641Z · LW(p) · GW(p)

I don't think you mentioned "nootropic drugs" (unless "signaling molecules" is meant to cover that, though it seems more specific).  I don't think there's anything known to give a significant enhancement beyond alertness, but in a list of speculative technologies I think it belongs.

Replies from: mateusz-baginski
comment by Mateusz Bagiński (mateusz-baginski) · 2024-10-08T12:10:33.443Z · LW(p) · GW(p)

mentioned in the FAQ