Wacky, risky, anti-inductive intelligence-enhancement methods?

post by Nicholas / Heather Kross (NicholasKross) · 2022-07-14T01:40:28.137Z · LW · GW · 2 comments

This is a question post.


Especially targeting working-memory, long-term memory, and conceptual understanding.

This thread is exclusively for things like highly risky self-gene-therapy, or getting a brain-computer interface surgically implanted. No "get more sleep" or "try melatonin" here.

(If the idea is really good/anti-inductive, you might DM or email it to me instead.)

Answers

answer by Quintin Pope · 2022-07-14T05:33:54.024Z · LW(p) · GW(p)

Human brains are most likely undertrained on text data. 

Judging by the scaling laws from DeepMind's Chinchilla work, a system with as many parameters and as much compute as the brain should be trained on vastly more text than any human can read in a lifetime. Thus, it seems plausible that "lack of training data" is a significant bottleneck on human cognitive capabilities. 
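
For a rough sense of the scale of the gap, here's a back-of-envelope sketch. The synapse-count parameter proxy, the Chinchilla ~20-tokens-per-parameter rule of thumb, and the lifetime reading budget are all loose assumptions of mine, not figures from this post:

```python
# Back-of-envelope: Chinchilla-optimal training data vs. a human lifetime of reading.
# Assumptions (illustrative only):
#   - "parameters" ~ number of synapses in the human brain, ~1e14
#   - Chinchilla-optimal token count ~ 20 x parameters
#   - a dedicated reader manages ~300 words/minute, 4 hours/day, for 60 years
brain_params = 1e14
chinchilla_optimal_tokens = 20 * brain_params

words_per_minute = 300
minutes_per_day = 4 * 60
days = 60 * 365
lifetime_words = words_per_minute * minutes_per_day * days  # ~1.6e9

print(f"Chinchilla-optimal tokens: {chinchilla_optimal_tokens:.1e}")
print(f"Lifetime reading budget:   {lifetime_words:.1e}")
print(f"Shortfall factor:          {chinchilla_optimal_tokens / lifetime_words:.0e}")
```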

Hence the question: how do we best increase the efficiency of our own text training process? There are two dimensions to this:

  • What text should we include? 
    • High-quality training text related to the downstream domain of interest is best. Finding such text is often the bottleneck to consuming it, so ideally we'd compile a large corpus of excellent-quality human pretraining text.
    • One option might be to train a "relevant text classifier", which identifies well-written text about domains that are useful for alignment, such as alignment research, ML, math, neuroscience, biology, game theory, etc. Then, use that classifier to scour the internet and all journals / books / etc. for useful text and compile the results (a rough sketch of such a classifier appears after this list).
  • How should we "train" on as much text as possible?
    • One simple option is to just read more, but reading is slow. There's only so much time in the day, and reading takes time away from other activities. 
    • Another option is to convert the text into audio form using text-to-speech. This makes it vastly more convenient to listen to large quantities of text, but has other issues, such as:
      • Images are unavailable.
      • Pausing or replaying past text is often inconvenient.
      • Math and LaTeX are almost never captured well.
    • The third option, and the one I believe would be most powerful / scalable, is to use a multi-modal pretrained model to convert the text + images + math into latent representations, then to feed those latents (or dimension-reduced versions of them) to the human via one or more of their sensory channels (a rough sketch of this appears after this list as well). 
      • What I mean by that is to have some way of translating the latent representation of the text into sensory input for the human, e.g., into an audio signal played into the human's ears, or into vibrations which a system such as the Eyeronman vest delivers as tactile sensations.
        • Doing so would require training the human to decode these sensory representations, but that should be manageable. We'd show the human side-by-side instances of the original text / images / math along with the compressed sensory representations.
      • This approach allows for a greater throughput of information into the human, while avoiding the expense, risk and technical difficulties associated with brain computer interfaces. 
      • It also allows the human to take advantage of the preprocessing provided by the pretrained transformer, meaning that the effective compute available to the human increases as well. 
      • It may also lead to a form of "knowledge distillation" from the pretrained transformer to the human. 
        • Knowledge distillation is an approach in machine learning where the latent knowledge contained in a larger, more powerful ML model is "distilled" into a smaller model. The typical approach is for the smaller model to be trained to imitate the latent representations of the larger model on some reduced corpus of training data.
        • In this case, the human learns to process the latents generated by the model. If those latents contain representations of super-humanly performant abstractions, the human may pick up on such abstractions.
        • This could potentially aid in interpretability as well, if the human in question develops a sense for the model's internal representations.
      • We could also reduce the dimensionality of the model's latent representations or strip away irrelevant information so as to further increase the richness / density of the human's input data.
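
Below is a minimal sketch of the "relevant text classifier" idea from the first bullet above. It assumes nothing more than a tiny hand-labelled seed set and a bag-of-words model; the seed texts, labels, and threshold are hypothetical placeholders, and a real version would presumably use a pretrained language model and a far larger labelled corpus:

```python
# Minimal sketch of a "relevant text classifier" (hypothetical seed data and threshold).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hand-labelled seed set: 1 = useful for alignment-adjacent study, 0 = not.
seed_texts = [
    "We analyze reward hacking in reinforcement learning agents.",
    "Gradient descent dynamics in overparameterized neural networks.",
    "Ten celebrity diets you need to try this summer.",
    "Local sports team wins the regional championship.",
]
seed_labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(seed_texts)
classifier = LogisticRegression().fit(X, seed_labels)

def is_relevant(document: str, threshold: float = 0.5) -> bool:
    """Score an arbitrary document and keep it if it looks domain-relevant."""
    prob = classifier.predict_proba(vectorizer.transform([document]))[0, 1]
    return bool(prob >= threshold)

print(is_relevant("A new sample-complexity bound for offline reinforcement learning."))
```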
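
And a minimal sketch of the third option (model latents rendered as sensory input), here as audio. The encoder choice, the PCA dimensionality, and the tone mapping are illustrative assumptions, not details from the answer:

```python
# Sketch: encode text into a model latent, compress it, and render it as audio.
# The model name, PCA size, and tone mapping are illustrative assumptions only.
import numpy as np
from sklearn.decomposition import PCA
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any pretrained text encoder

# Stand-in corpus; in practice this would be the compiled high-quality corpus.
corpus = [f"Placeholder passage number {i} about alignment, ML, or math." for i in range(64)]
latents = encoder.encode(corpus)          # shape: (64, 384)
pca = PCA(n_components=16).fit(latents)   # strip low-variance dimensions

def latent_to_waveform(text: str, sample_rate: int = 8000) -> np.ndarray:
    """Map each compressed latent dimension to a short sine tone and concatenate them."""
    z = pca.transform(encoder.encode([text]))[0]
    z_scaled = (z - z.min()) / (z.max() - z.min() + 1e-9)  # normalize to [0, 1]
    tones = []
    for value in z_scaled:
        freq = 200 + 400 * value                            # map to a 200-600 Hz band
        t = np.linspace(0, 0.1, int(sample_rate * 0.1), endpoint=False)
        tones.append(np.sin(2 * np.pi * freq * t))
    return np.concatenate(tones)  # ~1.6 s of sound encoding one passage

waveform = latent_to_waveform("A passage the listener is learning to decode.")
```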

Overall, the approach that I think would be most effective is:

  • Collect a corpus of high quality text, images and math from books and articles that might be alignment relevant (maybe ~20 GB of text).
  • Take a multi-modal transformer pretrained on much more data than the corpus (basically, you'd use the best multi-modal model available).
  • Find some scalable method of translating the model's latents into information-dense, human-learnable sensory input. 
    • These would initially appear like random noise / images / vibrations to a human, but with exposure, start to make sense as the brain adapts to the new encoding. 
    • Analogously, the Eyeronman vests I mentioned translate 3-D scene representations into vibrations. After enough time with one, people can pick up a sense of what the environment around them is like through the vibrations from the vest.
  • Translate the corpus into those sensory representations.
  • Feed them to the human.

This is a pretty basic setup. Information only flows in one direction, from the model to the human. Most likely, there are ways of improving things by having the model learn to produce latent representations that are more useful for the cognitive tasks the human intends to perform. E.g., the methodology in "Training Language Models with Language Feedback" can be adapted so that the human can provide feedback on what sorts of things the model should focus more / less on. 

Note that the approach of translating external information into sensory inputs handles the "getting lots of information into the human" problem. The "getting lots of information out of the human" problem isn't quite so easy to handle. Humans receive more information from their senses than they transmit via their actions, so just watching human actions probably doesn't have as high a throughput. Potentially, we can use non-invasive brain imaging tech, which seems to be progressing faster than "read + write" brain computer interfaces. Having high-throughput input + output channels for the brain would let us properly do the whole "merging with technology" thing and keep up with mildly superhuman AIs[1], for a while at least.

  1. ^

    I expect horse-versus-automobile comparisons in response to this point. I think this analogy isn't actually illuminating here because learning systems can be combined together much more easily than physical systems. E.g., DeepMind's multi-modal Flamingo model took frozen layers from the text-only Chinchilla model and integrated them with a smaller number of trainable parameters for handling the image and image-to-text side of things. Provided you let two learning systems adapt to each other (or even just let one learning system adapt to the other), it's relatively straightforward to combine learning systems together.

comment by gilch · 2022-07-14T21:42:35.853Z · LW(p) · GW(p)

The idea of generating and directly transferring a pre-digested latent representation is super interesting, but my prior is that this couldn't work. How a neural network trained from initially randomized weights represents concepts is likely to be highly idiosyncratic to that particular network. Perhaps this could be accomplished between AIs if we can somehow make that process and initial state less random, but how could that ever work for humans?

The highest-bandwidth sensory input for humans is their eyes. Doesn't this idea just amount to diagrams of high-dimensional data?

Replies from: quintin-pope
comment by Quintin Pope (quintin-pope) · 2022-07-14T23:00:44.591Z · LW(p) · GW(p)

It works for AIs very easily. Just feed the patents from AI 1 into AI 2. No need for special engineering of the two AIs.

It also works for humans, at least somewhat. E.g., the Eyeronman vests I mentioned translate 3-D scene representations into vibrations. After enough time with one, people can pick up a sense of what the environment around them is like through the vibrations from the vest.

Translating LLM patents into visual input wouldn’t look like normal diagrams. It would look like a random-seeming mishmash of colors and shapes which encode the LLM’s latents. A person would then be shown many pairs of text and the encoded latents the model generated for the text. In time, I expect the person would gain a “text sense” where they can infer the meaning of the text from just the visual encoding of the model’s latents.
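
For concreteness, a toy rendering along these lines. The latent here is just random numbers standing in for a model's output, and the 16x16 RGB layout is an arbitrary choice of mine:

```python
# Sketch: render a latent vector as a color grid (arbitrary layout, for illustration).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
latent = rng.standard_normal(768)  # stand-in for a model's latent vector

grid = latent[: 16 * 16 * 3].reshape(16, 16, 3)          # 16x16 RGB tiles
grid = (grid - grid.min()) / (grid.max() - grid.min())   # normalize to [0, 1]

plt.imshow(grid, interpolation="nearest")
plt.axis("off")
plt.title("Latent rendered as a visual mishmash")
plt.show()
```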

Replies from: gilch
comment by gilch · 2022-07-16T02:25:06.665Z · LW(p) · GW(p)

I think I'm lacking some jargon here. What's a latent/patent in the context of a large language model? "patent" is ungoogleable if you're not talking about intellectual property law.

The Eyeronman link didn't seem very informative. No explanation of how it works. I already knew sensory substitution was a thing, but is this different somehow? Is there some neural net pre-digesting its outputs? Is it similarly a random-seeming mishmash? Are there any other examples of this kind of thing working for humans? Visually?

Would the mishmash from a smaller text model be any easier/faster for the human to learn?

Replies from: ESRogs
comment by ESRogs · 2022-08-04T00:35:53.968Z · LW(p) · GW(p)

What's a latent/patent in the context of a large language model? "patent" is ungoogleable if you're not talking about intellectual property law.

My money's on: typo.

comment by Nicholas / Heather Kross (NicholasKross) · 2022-07-14T20:00:34.609Z · LW(p) · GW(p)

Oooh, this is very promising! I had a semi-similar idea for images instead of text, basically like this but in reverse.

answer by Tomás B. · 2022-07-14T19:52:55.011Z · LW(p) · GW(p)

More of a joke, but this post has some ideas: Mad-science IQ Enhancement:

Slatestarcodex has a great post on the intelligence of birds.1

The big take-away: due to the weight restriction of flight, birds have been under huge evolutionary pressure to miniaturize their neurons. The result?

Driven by the need to stay light enough to fly, birds have scaled down their neurons to a level unmatched by any other group. Elephants have about 7,000 neurons per mg of brain tissue. Humans have about 25,000. Birds have up to 200,000. That means a small crow can have the same number of neurons as a pretty big monkey.

Bird brains are 10x more computationally dense than our own? This is a big deal. To put this in perspective, if you replaced just 10% of your brain volume with crow neurons you could almost double your computational capacity.
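
A quick arithmetic check of that claim, using the densities from the quoted excerpt and assuming (purely for illustration) that "computational capacity" scales linearly with neuron count:

```python
# Arithmetic check of the "replace 10% of brain volume with crow neurons" claim.
human_density = 25_000    # neurons per mg (from the quoted excerpt)
bird_density = 200_000    # neurons per mg

fraction_replaced = 0.10
blended_density = (1 - fraction_replaced) * human_density + fraction_replaced * bird_density
print(blended_density / human_density)   # -> 1.7, i.e. "almost double"
```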

I know this sounds like complete insanity, but given how every intervention to raise IQ has been ineffective, this bird-brain scheme is probably much more promising than any pharmaceutical based approach.

This raises all sorts of interesting questions. How does miniaturization affect heat dissipation? Oxygenation? Energy consumption? Could one build a human-sized brain with bird-dense neurons?

Though the brain is immunologically privileged, there is still the neuroimmune system. Can we genetically modify crow cortical neuron progenitor cells to not trigger the neuroimmune system? Is there any chance of human neurons and bird-neurons integrating usefully?

Brain grafts have been unpromising for intelligence enhancement because you would have to replace much of the brain to make a difference. With bird tissue, this problem is 10x less relevant.

Or more ominously, what if we breed crows for brain size? Hyper-intelligent flightless crows pecking keyboards in hedge fund basements? To paraphrase Douglas Adams, there is another theory which states that this is already happening.

comment by gilch · 2022-07-14T21:12:10.967Z · LW(p) · GW(p)

Could one build a whale-sized brain with fairy wasp–sized neurons?

comment by gilch · 2022-07-14T21:04:54.625Z · LW(p) · GW(p)

Fairy wasp neurons are even smaller [citation needed].

comment by gilch · 2022-07-14T21:35:34.135Z · LW(p) · GW(p)

Neanderthals had a bigger braincase than Homo sapiens. It's not likely that they were any smarter than us, but we can't totally rule that out. They're so closely related to us that their genes for bigger heads ought to be pretty compatible if we splice them in where they belong. Would a literally bigger brain volume with otherwise-sapiens neurons make us any smarter?

answer by Nathan Helm-Burger · 2022-07-14T18:29:16.821Z · LW(p) · GW(p)

I actually spent several years studying this question intensely while in grad school for neuroscience. I have a lot of thoughts about promising avenues, but my ultimate conclusion was that there just wasn't time to get a working product before the world developed superintelligent AGI, and thus that it wasn't worth pursuing.

comment by TekhneMakre · 2022-07-14T19:42:31.706Z · LW(p) · GW(p)

For the record, this seems bonkers to me. (Hope this isn't rude, I just want to be frank; not saying you're bonkers or anything, just that I super super don't see the logic here.) Like, are you saying that you're 90% sure we'll get AGI in the next 5 years? Or are you saying that you're 90% sure we'll get AGI in the next 15 years, and that we wouldn't get a 30 IQ point boosting drug within the next 10 years if we tried? IMO it would be valuable to the world for you to write up, even in very rough form, your ideas for intelligence enhancement, and your back of envelope wild guesses as to their costs and benefits and what it would take to investigate/test them.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2022-08-01T20:34:09.427Z · LW(p) · GW(p)

Yeah, I'm around 95% on AGI in the next fifteen years and less than 1% on a 30 IQ point boosting drug in that time, even with lots of funding and smart people on the problem. What seems bonkers to me is that anyone smart enough to do novel neuroscience work of that caliber isn't already working full time on AGI alignment. I am of the opinion that you just need to be smart, competent/agentic, and somewhat scientifically/mathematically educated to have a chance of meaningfully contributing to alignment research. I want more such people focusing on that ASAP.

Replies from: TekhneMakre
comment by TekhneMakre · 2022-08-01T20:59:33.497Z · LW(p) · GW(p)

I'm around 95% on AGI in the next fifteen years

[Maybe hard to explore this question in this context, but this seems likely mistaken (in particular, poorly calibrated). Curious why you think this if you're willing to share.]

and less than 1% on a 30 IQ point boosting drug in that time, even with lots of funding and smart people on the problem.

Could you say more? What about 15 IQ points? What are the obstacles? What are methods you considered and why did they seem infeasible or ineffective?

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2022-08-02T03:20:57.020Z · LW(p) · GW(p)

I'm working on a post about my neuroscience-informed timelines, and where my understanding agrees and disagrees with Ajeya's Bio Anchors report.

I'll separately keep in mind your request for a summary of my neuroscience research and why I think it's not tractable in the timeframe I think we have.

Replies from: TekhneMakre
comment by TekhneMakre · 2022-08-02T03:48:05.506Z · LW(p) · GW(p)

Thanks!

answer by gilch · 2024-02-21T20:07:48.340Z · LW(p) · GW(p)

I recently saw What's up with psychonetics? [LW · GW]. It seems like a kind of meditation practice, but one focused on gaining access to and control of mental/perceptual resources. Not sure how risky this is, but the linked text had some warnings about misuse. It might be applicable to working or long-term memory, and specifically talks about conceptual understanding ("pure meanings") as a major component of the practice.

answer by gilch · 2022-07-14T22:06:11.444Z · LW(p) · GW(p)

There's a tulpamancy-adjacent construct called a "servitor", which sounds like a kind of persistent hallucination that might be able to perform various automatic functions. I can't imagine such a thing being any more useful than a smartphone (probably much less), but perhaps it would have a faster direct mental interface.

For example, I wonder if some kind of "notepad" servitor could expand one's working memory, which seems like a major bottleneck in humans. I.e. quickly offload writing/images to the persistent hallucination, and then just look at it to reload it. This would be easy enough to test with something like digit-span or dual-n-back.
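
A minimal digit-span self-test along those lines; the starting length, per-digit study time, and stopping rule are arbitrary choices of mine, but something like this would let you compare spans with and without the "notepad":

```python
# Minimal digit-span self-test: span grows until you miss twice in a row.
import random
import time

def digit_span_test(start_len: int = 4, max_len: int = 15) -> int:
    length, misses, best = start_len, 0, 0
    while length <= max_len and misses < 2:
        digits = "".join(random.choice("0123456789") for _ in range(length))
        print(f"\nMemorize: {digits}")
        time.sleep(length)        # roughly one second per digit to study
        print("\n" * 50)          # crude screen clear
        answer = input("Type the digits back: ").strip()
        if answer == digits:
            best, length, misses = length, length + 1, 0
        else:
            misses += 1
    return best

if __name__ == "__main__":
    print("Best span:", digit_span_test())
```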

Given the way reading fails to work in dreams (it gets completely confabulated based on expectations, and then erased/regenerated as soon as you look away/back), I think there's a significant chance a servitor couldn't persist writing with any reliability, but it might be worth a try.

answer by gilch · 2022-07-14T21:50:24.295Z · LW(p) · GW(p)

Trepanning. I've heard rumors that this reduces CSF pressure on the brain, allowing it to expand a bit and increase intelligence. It was once common practice among many ancient cultures, but that doesn't prove they had good reasons. I find the supposed effect and proposed mechanism of action highly dubious, and, of course, brain damage/infection may well kill you if done improperly, but you asked for "risky" and "wacky".

answer by Tomás B. · 2022-07-14T19:04:23.282Z · LW(p) · GW(p)
comment by TekhneMakre · 2022-07-14T19:36:59.440Z · LW(p) · GW(p)

 if I were a billionaire with some means of getting around the FDA

Do you have to get around the FDA if you're doing non-commercialized, non-academically-published research? Like, basically, friends making legal (because novel) chemicals for friends to take voluntarily?

comment by Nicholas / Heather Kross (NicholasKross) · 2022-07-14T19:57:58.661Z · LW(p) · GW(p)

IIRC, Gwern's Nootropics and "Algernon Argument" pages basically came to the same conclusion: bearish on most nootropics, except stimulants and modafinil, due to evolutionary/biological tradeoffs.

Replies from: TekhneMakre
comment by TekhneMakre · 2022-07-14T20:28:08.775Z · LW(p) · GW(p)

Algernon's Law doesn't apply if you can eliminate the tradeoff. E.g. if the tradeoff was intelligence vs. energy costs, and it's no longer difficult to feed your intelligence-enhanced human 5000 calories a day reliably.

Replies from: NicholasKross
comment by Nicholas / Heather Kross (NicholasKross) · 2022-07-14T20:37:43.535Z · LW(p) · GW(p)

Exactly, hence the support for stimulants and/or modafinil, which trade off calories.

Replies from: TekhneMakre
comment by TekhneMakre · 2022-07-14T20:41:05.953Z · LW(p) · GW(p)

It would be interesting to see analyses of other tradeoffs, and use that to hypothesize other classes of nootropics. E.g. if there's some neural operation that's bottlenecked on some chemical that takes a long time to synthesize or requires rare materials or causes damage or something, we could supply that chemical, or figure out how to prevent that damage, or something. 

2 comments


comment by gilch · 2022-07-14T22:21:07.478Z · LW(p) · GW(p)

Perhaps not wacky enough, so I'll just comment, but language is something of a tool of thought. Proper math notations can be the difference between something being unthinkable and obvious. Some notations, like APL, are extremely terse, allowing one to fit entire algorithms in one's head at once that would otherwise only be understandable in smaller chunks. Perhaps learning and expanding upon such notations could be valuable.
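
As a loose illustration of the terseness point in a more familiar notation (NumPy's array style descends in part from APL): the same computation written as one terse expression versus spelled out step by step. A hypothetical example of mine, not something from the comment:

```python
# The same computation in terse array notation vs. an explicit loop.
import numpy as np

prices = np.array([3.0, 4.5, 2.0, 8.0])
weights = np.array([2, 1, 4, 1])

# Terse: one expression, readable at a glance once the notation is internalized.
weighted_avg = (prices * weights).sum() / weights.sum()

# Verbose: the same idea, spread over enough lines that it no longer fits in one's head at once.
total = 0.0
weight_total = 0.0
for p, w in zip(prices, weights):
    total += p * w
    weight_total += w
weighted_avg_loop = total / weight_total

assert np.isclose(weighted_avg, weighted_avg_loop)
```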

Similarly, there are conlangs with extreme terseness, such as Ithkuil. This language is extremely difficult, and even its creator cannot speak it with any fluency (and even if someone could, they probably couldn't express themselves any more quickly than they could in a natural language like English). However, once generated, one could probably fit more information in one's auditory loop at once, effectively expanding one's (linguistic) working memory a bit.

Replies from: Mo Nastri
comment by Mo Putera (Mo Nastri) · 2023-12-13T04:08:14.893Z · LW(p) · GW(p)

fit entire algorithms in one's head at once that would otherwise only be understandable in smaller chunks. Perhaps learning and expanding upon such notations could be valuable.

My first reaction was to wonder how this is any different from what already happens in pure math, theoretical physics, TCS, etc. Reflecting on this led to my second reaction: jargon brevity correlates with (utility x frequency), which is domain-specific (cf. Terry Tao's remarks on useful notation), and cross-domain work requires a lot of overhead (managing stuff like avoiding namespace collisions, but the more general version of this). That overhead plausibly increases superlinearly with the number of domains, which would be reflected in the language as the sort of thing the late Fields medalist Bill Thurston mentioned re: formalizing math:

Mathematics as we practice it is much more formally complete and precise than other sciences, but it is much less formally complete and precise for its content than computer programs. The difference has to do not just with the amount of effort: the kind of effort is qualitatively different. In large computer programs, a tremendous proportion of effort must be spent on myriad compatibility issues: making sure that all definitions are consistent, developing “good” data structures that have useful but not cumbersome generality, deciding on the “right” generality for functions, etc. The proportion of energy spent on the working part of a large program, as distinguished from the bookkeeping part, is surprisingly small. Because of compatibility issues that almost inevitably escalate out of hand because the “right” definitions change as generality and functionality are added, computer programs usually need to be rewritten frequently, often from scratch.

In practice the folks who I'd trust most to have good opinions on how useful such notations-for-thought would be are breadth + detail folks (e.g. Gwern), people who've thought a lot about adjacent topics (e.g. Michael Nielsen and Bret Victor), and generalists who frequently correspond with experts (e.g. Drexler). I'd be curious to know what they think.