I don't really see any reason why you couldn't just do a setwise comparison and check which of the extraversion increasing variants (or combinations of variants if epistatic effects dominate) increase the trait without increasing conformity to social desirability.
In fact, if you just select for disagreeableness as well, that might fix the problem.
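A minimal sketch of the setwise filter described above, assuming you already had per-variant effect estimates for both traits (the file name and column names below are hypothetical):

```python
import pandas as pd

# Hypothetical summary-stats table: one row per variant, with per-allele
# effect estimates on extraversion and on socially desirable responding.
df = pd.read_csv("variant_effects.csv")

# Keep variants that raise extraversion without also raising conformity
# to social desirability. A genuinely setwise (epistatic) screen would
# need interaction terms on top of this marginal filter.
candidates = df[(df["beta_extraversion"] > 0) & (df["beta_social_desirability"] <= 0)]
print(candidates.sort_values("beta_extraversion", ascending=False).head())
```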
The key distinction is that IQ demonstrates a robust common pathway structure - different cognitive tests correlate with each other because they're all tapping into a genuine underlying cognitive ability. In contrast, personality measures often fail common pathway tests, suggesting that the correlations between different personality indicators might arise from multiple distinct sources rather than a single underlying trait. This makes genetic selection for personality traits fundamentally different from selecting for IQ - not just in terms of optimal selection strength, but in terms of whether we can meaningfully select for the intended trait at all.
There is such a thing as a "general factor of personality". I'm not sure how you can say that the thing IQ is measuring is real while the general factor of personality isn't.
Sure big 5 aren't the end-all be-all of personality but they're decent and there's no reason you couldn't invent a more robust measure for the purpose of selection.
I think we would probably want to select much less hard on personality than on IQ. For virtually any one of the big five personality traits there is obviously a downside to becoming too extreme. For IQ that's not obviously the case.
Paper 1 is pretty interesting and is related to one of the methods of brain delivery I've looked at before.
I'm not sure we really want to have a lot of T cells floating around inside the central nervous system, but a friend of mine who spent a few days looking into this earlier this year thought we might be able to repurpose the same tech with synthetic circuits to use microglia cells in the brain to do delivery.
Microglia (and to a lesser extent oligodendrocytes and astrocytes) naturally migrate around the brain. If you could get them to deliver RNA, DNA or RNP payloads to cells when they encounter a tissue-specific cell surface receptor, that might actually solve the brain delivery issue better than any other method.
This would take a lot of work to develop, but if you managed to get it working you could potentially continuously dose a patient's brain with an arbitrary protein for an indefinite time period. That could be pretty damn valuable.
Alternatively you might be able to use a conditionally activated system like Tet-On to temporarily turn on expression of gene editors or gene editor RNA for some time period to do editing.
Yes. Once this tech works you can use it for basically anything so long as you can make enough edits and the genes involved are known.
So we really don’t need to make any special considerations for memory.
We will be hiring fairly soon. Reach out to me at genesmithlesswrong@gmail.com
Good question! What’s the name of the skin condition you have?
Yeah I pretty much agree with this assessment. I think you could probably get to 80% with 100 million and ten years and maybe 50% with 30 million and 7 years. Perhaps I'm optimistic, but right now the entire field is bottlenecked by the need for $4 million to do primate testing.
Aspen Dental is a franchise-based, venture-capital-funded organization that already does this.
This is interesting, thanks for sharing.
I asked my friend about your other concerns regarding enshittification of the dental industry. If you're interested, this was their response:
Patients tend to do better with DSOs. There’s a number of reasons:
- The first is doctor supervision—when doctors have their own offices, no one checks their work or holds them accountable. In a corporate setting, there is typically a clinical director that the doctors will report to.
- The second issue is that the worst case is actually the doctor being the owner—that’s when they have the strongest financial incentive to do bullshit. The more the doctor is removed from being the owner, the less I think their judgement is influenced.
Now in nearly all DSOs and private practices, revenue is the chief KPI for doctors. So the pressure is there still to a degree.
With us, revenue is not a KPI—we don’t ever tell the doctors how much they produce. And so we remove the financial biasing of their diagnosis and treatment.
But we are unique in this—definitely an outlier in how much we are trying to have the doctors be unbiased by the finances.
What’s also ironic about some of the replies is that our lobbying goals are to actually get real regulation put into place so patients are protected from doctors doing whatever maximizes revenue. But these are somewhat personal ideals and goals being acted out—ones that do run counter to pure capitalist logic.
@towards_keeperhood yes this is correct. Most research seems to show ~80% of effects are additive.
Genes are actually simpler than most people tend to think
You’re ignoring several facts:
- A significant fraction of cells turn over frequently in adults, so the number of divisions for those cell types is far, far higher than 45. Those cell divisions CAN cause cancer, but a single extra cell division is going to have negligible impact on risk.
- There’s an enzyme called telomerase which can extend telomeres. It’s active in embryos. So this isn’t really a concern.
I'm hosting laser tag again at 8:30 PM after the reading group. All are welcome!
I’m hosting laser tag tomorrow at 8:30 after the reading group. Everyone is welcome!
I appreciate the offer. It’s going to be a long time before we start human trials. At least five years, probably longer.
Billionaires read LessWrong. I have personally had two reach out to me after a viral blog post I made back in December of last year.
The way this works is almost always that someone the billionaire knows will send them an interesting post and they will read it.
Several of the people I've mentioned this to seemed surprised by it, so I thought it might be valuable information for others.
Can you link to a source about the increased cancer risk? Every source I've seen on this subject indicates retinoids DECREASE cancer risk, though I agree with you that this doesn't make much intuitive sense.
According to Claude they were first studied for cancer but the first actual FDA approval was for acne.
I’ve read the article and notice that all the negative side-effects listed are due to issues with oral retinoids rather than topical retinoids.
Several of the articles I’ve read indicate that topical retinoids actually DECREASE the risk of cancer, though I agree this is confusing since they supposedly increase cell turnover rates, which should theoretically cause more mitosis-induced mutations to occur. But retinoids are actually frequently used as anti-cancer drugs.
Google’s AI summarizer says the mechanism of action is promotion of cell differentiation and inhibition of the progression of pre-malignant cells to malignant cells.
It also says they "reduce keratinocyte differentiation and decrease keratin deposition", which makes skin more fragile, and that they increase sensitivity to UV light.
This is correct of course and why I specifically recommend in the post that people wear sunscreen when using retinoids.
Hard to say. Retinols are recommended as a preventative. Though incidentally I have noticed acne fading much faster after using them. I suspect this is because they speed up the healing process of the skin.
I've started a gene therapy company, raised money, opened a lab, hired the inventor of one of the best multiplex gene editing techniques to be our chief scientific officer, and am currently working on cell culture experiments with the help of a small team.
I may write a post about what's happened at some point. But things are moving.
Can you explain in more detail what the problems are?
You can definitely extrapolate out of distribution on tests where the baseline is human performance. We do this with chess Elo ratings all the time.
I don't think this is the case. You can make a corn plant with more protein than any other corn plant, and using standard deviations to describe it will still be useful.
Granted, you may need a new IQ test to capture just how much smarter these new people are, but that's different than saying they're all the same.
Apart from this coming across as quite repulsive to most people, I wonder about the cost and success rate of maturing immature oocytes.
This is already an issue for child cancer patients who want to preserve future fertility but haven't hit puberty yet. As of 2021, there were only ~150 births worldwide using this technique.
The costs are also going to be a major issue here. Gain scales with sqrt(ln(number of embryos)). But cost per embryo produced and tested scales almost linearly. So the cost per IQ point is going to scale at like n/sqrt(ln(n)), which is hilariously, absurdly bad.
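Spelling that out, with $n$ the number of embryos and constants dropped:

$$\text{gain} \propto \sqrt{\ln n}, \qquad \text{cost} \propto n \quad\Longrightarrow\quad \frac{\text{cost}}{\text{IQ point}} \propto \frac{n}{\sqrt{\ln n}}.$$

Doubling the gain means quadrupling $\ln n$, i.e. raising $n$ to its fourth power.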
I'd also want to see data on the number of immature oocytes we could actually extract with this technique, the rate at which those immature oocytes could be converted into mature oocytes, and the cost per oocyte.
So a human with IQ 300 is probably about the same as IQ 250 or IQ 1000 or IQ 10,000, i.e. at the upper limit of that range.
I would be quite surprised if this were true. We should expect scaling laws for brain volume alone to continue well beyond the current human range, and brain volume only explains about 10% of the variance in intelligence.
Without any double-strand breaks, base editors are less toxic to cells and less prone to off-target effects.
It's worth noting that most base editors actually DO involve nicking of one strand, which is done after the chemical base alteration to bias the cell towards repairing the non-edited strand.
The editing efficiency of non-nicking base editors is significantly lower than that of nicking versions (though the precise ratio varies depending on the specific edit site).
Finally the cell's enzymes also notice a mismatch between the strand with the new template DNA and the old strand without it, and decide that the longer, newer strand is “correct” and connect it back to the main DNA sequence.
It's worth noting that this only happens some of the time. Often the cell will either fail to ligate the edited strand back together or it will remove the edited bases, undoing the first half of the edit.
In regards to bridge RNAs, I do not yet believe they will work for any human applications. The work in Hsu's paper was all done in prokaryotes. If this tool worked in plants or animals, they would have shown it.
In fact, despite what human geneticists often say about epistasis being minor and rare, plant genetics people seem to find that interactions between genes are a big deal, explaining most (!) of the variance in crop yields.[5] So, if I’m not mistaken, “of all these genetic variants statistically associated with the polygenic trait, what’s the best subset of edits to make, if I want the largest expected impact” is a nontrivial question.[6]
I was curious about the finding of epistatic effects explaining more of the variance than traditionally assumed, so I took a look at the study you referenced and found something worth mentioning.
The study is ludicrously underpowered to detect anything like what they're trying to show. They only have 413 genetically distinct rice plants in the study, compared to 36,901 SNPs.
This study is underpowered to detect even SNPs that have an effect on the trait in question, let alone epistatic effects. So I don't give their results that much weight.
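For a sense of scale, here's the arithmetic on a full pairwise-epistasis model (SNP and sample counts taken from the study as cited above):

```python
from math import comb

snps, plants = 36_901, 413

# One interaction term per SNP pair, to be estimated from 413 samples.
print(f"pairwise interaction terms: {comb(snps, 2):,}")  # 680,823,450
print(f"samples available:          {plants:,}")         # 413
```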
I agree with your overall conclusion though; I think we'll likely see the first applications of polygenic embryo selection in animals (perhaps cows?) before we see it in humans.
I have to say, I don’t find these bullet point lists developed by AIs to be very insightful. They are too long with too few insights.
It’s also worth pointing out that the AI almost certainly doesn’t know about recent developments in iterated meiosis or super-SOX
If we actually even tried to launch a global scale effort to genetically engineer "superhumans" it might take at least 10 years to develop the technology
This is definitely wrong. A global effort to develop this tech could easily bring it to fruition in a couple of years. As it is, I think there's maybe a 50% chance we get something working within 3-5 years (though we would still have to wait >15 years for the children born with its benefits to grow up)
From the current advances in AI it does not seem plausible that we have 30 or more years before foom.
I agree this is unlikely, though there remains a non-trivial possibility of some major AI disaster before AGI that results in a global moratorium on development.
This forum is quite concerned about AI alignment. Aligning superhumans might be much much more difficult than with AI. At least with AI there is known programming -- with humans the programming is anything but digital (and often not that logical).
I agree that aligning superhumans could potentially be a concern, but consider the benefits we would have over AGI:
- Literally hundreds of years of knowledge about the best ways to raise and train humans (not to mention a lot of biologically primed psychology that helps that training process along)
- The ability to edit and target other traits besides intelligence such as kindness or altruism
- We have a guaranteed slow takeoff, because they will take over a decade to reach cognitive capabilities at the level of our smartest people.
Consider also the limitations of even genetically engineered superhumans:
- They have no ability to rapidly and recursively self-improve
- They cannot copy themselves to other bodies
While these don't guarantee success, they do give us significant advantages over an AGI-based path to superintelligence. You get more than one shot with superhumans. Failure is not catastrophic.
I switched to Claude when Opus came out. If the delta between whatever OpenAI has and the next best model from a more ethical company is small enough, then it seems worth it to me to switch.
When Opus came out and started scoring about as well on most benchmarks I decided the penalty was small enough.
Send me an email: genesmithlesswrong@gmail.com
Give me a summary of your background, what stage of your career you're at, and your motivations for working on this tech.
Send me an email: genesmithlesswrong@gmail.com with some info about what your skills are. There are some companies you might be able to apply to if you're of working age. If you're in university you can upskill in certain areas.
Yeah it will be a pretty huge thing. If not for AGI I'd say it would be the most important technology ever.
I can only cross my fingers that we have enough time before AGI for this to become an actual concern.
There’s so much variance in the human gene pool that people on the left tail won’t gain much more than those on the right tail.
I haven’t done the simulations to figure out how large the reduction would be, but there’s like 1700 IQ points worth of additive variance in the human gene pool. It would take a truly stupid amount of selection to make a dent in that.
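For anyone who wants to check that intuition, a toy simulation might look like the following (all parameters assumed purely for illustration; it ignores linkage and assumes random mating among the selected):

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely additive toy model: 2,000 loci, effect sizes scaled so the base
# population has ~15 points of additive SD. Each generation, the top 10%
# by polygenic score become the parents of the next generation.
n_loci, pop = 2_000, 5_000
p = rng.uniform(0.1, 0.9, n_loci)                        # allele frequencies
beta = rng.normal(0, 1, n_loci)
beta *= 15 / np.sqrt(np.sum(2 * p * (1 - p) * beta**2))  # scale additive SD to ~15

for gen in range(1, 11):
    geno = rng.binomial(2, p, size=(pop, n_loci))        # genotypes under HWE
    score = geno @ beta
    parents = geno[score >= np.quantile(score, 0.9)]     # truncation selection
    p = parents.mean(axis=0) / 2                         # next generation's frequencies
    sd = np.sqrt(np.sum(2 * p * (1 - p) * beta**2))
    print(f"gen {gen}: remaining additive SD ≈ {sd:.2f}")
```

Even with selection that extreme, the additive variance depletes slowly relative to the gains in the mean.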
Interesting. I took Lumina around December and have noticed no change, though I don't think I've actually been hungover during that time period. I definitely didn't notice any difference in the AMOUNT needed to produce a hangover; I drank a substantial amount and didn't get hangovers any more easily.
To the extent that transposon sequences affect traits and risks but can't be measured, we should expect that to be reflected in "missing heritability".
You may very well be right that highly repetitive sequences like transposons do affect traits, but that's accounted for.
Thanks a lot for the comment. I'll amend the post with some of this information in the next week. If your numbers are correct (and I have no current reason to doubt them), that substantially increases my estimate of the effectiveness of whole genome embryo sequencing.
I've been meaning to write a whole post about the different screening companies, but a combination of little time due to starting a new company and a lack of clear data has prevented me from doing so thus far. With this information I might reconsider.
One more thing I'd like to ask at some point is whether you're going to publish the AUCs of all the predictors in your panel within some reference population. That would be extremely helpful for patients trying to compare Orchid vs Genomic Prediction or any other company.
I am just now learning the origin of the quokka meme. The first and only time I ever saw the reference was when someone posted this meme on Twitter with no explanation.
Who do you recommend asking to be a reader?
That's a difficult question. I always tell readers that the number one thing I'm interested in is where they got bored and stopped reading. I ask them to be brutally honest and not feel like they need to keep reading to flatter my ego or because they are afraid of being harsh on me.
If they aren't interested in the topic in the first place it's harder. You need to be able to at least find an audience that is interested in sitting down to read it. Can you like join a hobbyist club for this stuff, or find a subreddit for it?
Here's a kind of galaxy-brained idea that might just work for finding your crowd:
- Go onto reddit and find the subreddit community closest to the thing you're interested in/writing about
- Go to https://subredditstats.com and enter the name of that subreddit to see which communities it has the most overlap with.
- Go to meetup.com and see if you can find a local group dedicated to one of those related topics (or better yet, the topic itself)
- Go to the meetup, pitch your thing, and see if people are into it. Maybe just TALK about what you've written first and, if people seem interested, offer to send it to them.
If you decide to actually give the above a shot, tell me how it goes. I'd be very interested to hear whether my idea works.
https://www.nature.com/articles/s41598-020-69927-7
This is one of the better papers I know of examining sibling validation. To quote from the article:
Sibling comparisons are a powerful method with which to validate genomic prediction in humans. Siblings (i.e., children who share the same mother and father) have typically experienced similar environments while growing up: family social status, exposure to toxins, diet, climate, etc. all tend to be similar3,4. Furthermore, siblings are concordant for ancestry and display negligible differences in population structure.
- If environmental conditions in a specific region, such as, e.g., Northern England, affect disease risk, the predictor trained on UK data might assign nonzero effect sizes to SNPs associated with ancestries found in that region—i.e., the predictor learns to use population structure correlated to environmental conditions. These specific SNPs are correlated to disease risk for environmental reasons, but might not have any connection to genetic mechanisms related to the disease. They likely have little power to differentiate between siblings, who experienced similar family conditions and have identical ancestry.
- It is also possible that some SNP variants affect nurture (the way that parents raise their children). These SNPs could affect the child phenotype via an environmental mechanism under parental control, not a biochemical pathway within the child. This is sometimes referred to as a genetic nurture effect9,10,11,12,13. Note, siblings raised together would both be affected by parental genetic nurture variants, so these effects are weakened in family designs.
Sibling comparisons reduce the impact of factors such as those described above. We expect some reduction in power when predictors trained in a population of non-sibling individuals are tested among sibs. Sibling validation likely yields a better estimate of truly causal genetic effects. A more complicated measure of familial relatedness might lead to even better results14, but we restrict our analyses here to siblings.
There's more in the paper if you care to take a look.
A relevant tweet from Nate Silver on the methodology used to conduct the survey:
This is not a scientific way to do a survey. The biggest issue is that it involved personalized outreach based on a totally arbitrary set of criteria. That's a huge no-no. It also, by design, had very few biosafety or biosecurity experts.
The tweet has some screenshots of relevant parts of the paper
I've landed three jobs thanks to my writing, which is a bit strange to say because I still feel like I have substantial room for improvement. But that's still a pretty good track record, so I'll tell you what has worked for me:
- Ask people to read your stuff and tell you at what point they get bored or want to stop. Tell them to be brutally honest. The most important part of writing is keeping your audience's attention, and many writers make no effort to do this.
- Write about something that's actually important, and that interests you. I've done a large amount of high-quality writing about interest rates, banking and crypto. So far as I can tell it was a complete waste because no one cared. 80% of the battle is just picking the right topic.
- Put the most important ideas at the start of whatever you're writing. The drop-off among readers (even on a site like LessWrong) is shockingly high. I received almost 400 upvotes on my post about adult intelligence enhancement, and only four people sent me a DM in response to the request I placed at the end of the appendix.
- Write the narrative of a story in the evening and then correct language and facts in the morning. My best, most productive narrative writing often occurs late at night, but when I re-read it in the morning it sounds sloppy and full of mistakes. However, I nearly always need to start with the sloppy, emotional version because good narrative writing is the one thing I can't do well when my brain is functioning at full capacity.
- Don't be afraid to start a new draft of a post if you feel like you haven't gotten the narrative right.
This reminds me a bit of my own hiring process. I wanted to work for a company doing polygenic embryo screening, but I didn't fit any of the positions they were hiring for on their websites, and when I did apply my applications were ignored.
One day Scott Alexander posted "Welcome Polygenically Screened Babies", profiling the first child to be born using those screening methods. I left a comment doing a long cost-effectiveness analysis of the technology, and it just so happened that the CEO of one of the companies read it and asked me if I'd like to collaborate with them.
The collaboration went well and they offered me a full-time position a month later.
All because of a comment I left on a blog.
Gene editing can fix major mutations, to nudge IQ back up to normal levels, but we don't know of any single genes that can boost IQ above the normal range
This is not true. We know of enough IQ variants TODAY to raise it by about 30 points in embryos (and probably much less in adults). But we could fix that by simply collecting more data from people who have already been genotyped.
None of them individually have a huge effect, but that doesn’t matter much. It just means you need to perform more edits.
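As a toy illustration (the per-edit effect here is a round number assumed purely for illustration):

$$300 \text{ edits} \times 0.1 \, \tfrac{\text{points}}{\text{edit}} = 30 \text{ points}.$$

Small per-variant effects raise the number of edits required, not the achievable total.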
If we want safe AI, we have to slow AI development.
I agree this would help a lot.
EDIT: added a graph
None of the stuff that you suggested has worked for any animal.
Has anyone done 2500 edits in the brain cells of an animal? No. The graphs are meant to illustrate the potential of editing to affect IQ given a certain set of assumptions. I think there are still significant barriers that must be overcome. But like... the trend here is pretty obvious. Look at how much editors have improved in just the last 5 years. Look at how much better our predictors have gotten. It's fairly clear where we are headed.
Also, to say that none of this stuff has been done in animals seems a bit misleading. Here's a paper where the authors were able to make a desired edit in 60% of mouse brain cells. Granted, they were using AAVs, but for some oligogenic conditions that may be sufficient; you can pack a single AAV with a plasmid holding DNA sufficient to make sgRNA for 31 loci using base editors. There are several conditions for which 30 edits would be sufficient to result in a >50% reduction in disease risk even after taking into account uncertainties about which allele is causal.
Granted, if we can't improve editing efficiency in neurons to above 5% then the effect will be significantly reduced. I guess I am fairly optimistic on this front: if an allele is having an effect in brains, it seems reasonable to assume that some portion of the time it will not be methylated or wrapped around a histone, and thus be amenable to editing.
Regarding lipid nanoparticles as a delivery vehicle for editors: Verve-101 is a clinical trial underway right now evaluating safety and efficacy of lipid nanoparticles with a base editor to target PCSK9 mutations causing familial hypercholesterolemia.
There are other links in the post such as one showing transcytosis of BBB endothelial cells using angiopep conjugated LNPs. And here's a study showing about 50% transfection efficiency of LNPs to brain cells following intracranial injection in mice.
it's technically challenging if not impossible
Technically challenging? Yes.
Impossible?
Obviously not. You can get payloads into the brain. You can make edits in cells. And though there are issues with editing efficiency and delivery, both continue to improve every year. Eventually we will be able to do this.
if we want to achieve a true revolution in cognition, we need to target brain development not already developed brain!
If your contention is that it is easier to get a large effect by editing embryos vs the adult brain, I would of course agree! But consider all the conditions that are modulated by the timing and level of protein expression. It would be quite surprising to me if intelligence were not modulated in a similar manner.
Furthermore, given what is happening in AI right now, we probably don't have 25 years left for the technology for embryo editing to mature and for the children born with its benefits to grow up.
Imagine a monkey thinking of enhancing its abilities by injecting virus in its brain - will it ever reach a human level cognition? Sounds laughable. Who cares about +5 points to IQ
I have doubts we can enhance chimpanzee intelligence. We don't have enough chimpanzees or enough intelligence phenotypes to create a GWAS for chimp intelligence (or any other mental trait, for that matter).
We could try porting human predictors but well... we already see substantial dropoff in variance explained when predictors are ported from one genetic ancestry group to another. Imagine how large the dropoff would be between species.
Granted, a lot of the dropoff seems to be due to differences in allele frequencies and LD structure. So maybe there's some chance that a decent percentage of the variants would cause similar effects across species. But my current guess is few of the variants will have effects in both species.
Also, if I expected +5 IQ points to be the ceiling of in-vivo editing I wouldn't care about this either. I do not expect that to be the ceiling, which is reflected in some of the later graphs in the post.
For >40 years, way before the discovery of CRISPRs and base editors, we've been successfully genetically engineering mice, but not other species. Why only mice? Because we can culture mouse embryonic stem cells that can give rise to complete animals. We did not understand why mouse cells were so developmentally potent, and why this didn't work for other species. Now we do (I'm the last author): Highly cooperative chimeric super-SOX induces naive pluripotency across species - ScienceDirect
I've spent the better part of the afternoon reading and trying to understand this paper.
First, it's worth saying just how impressive this work is, particularly the improvement in success rates over existing embryogenesis techniques like SCNT. I have a few questions I wasn't able to find answers to in the paper:
- Do the full-term and adult survival rates in iPSC mice match what could be achieved by normal IVF, or do they indicate that there is still some suboptimality in the culturing of tetraploid aggregated iPSC embryos? I'm not familiar with the normal survival rates for mice, so I wasn't able to tell from the graph whether there is still room for improvement.
- How epigenetically different are embryos produced with Sox2-17 compared to those produced through the normal IVF process?
- If this process or an improved one in the future were capable of inducing embryo-viable iPSC's, would you be able to tell this was the case in humans with the current data available? If not, what data are you missing? I'm particularly wondering about whether you feel that there is sufficient data available regarding the epigenetic state of normal embryonic cells at the blastocyst stage.
When you engineer stem cells rather than adult animals, all of those concerns you listed are gone: low efficiency, off-target mutations, delivery, etc. Pluripotent stem cells are immortal and clonogenic, which means that even if you get 1 in 1000 cells with correct edits and no off-target mutations, you can expand it indefinitely, verify by sequencing, introduce more edits, and create as many animals as you want. The pluripotent stem cells can either be derived from the embryos or induced artificially from skin or blood cells. The engineered pluripotent stem cells can either be used directly to create embryos or can be used to derive sperm and eggs; both ways work well for mice.
You are of course correct about everything here. And if we had unlimited time I think the germline editing approach would be better. But AGI appears to be getting quite near. If we haven't solved alignment by the point that AI can recursively self-improve, then I think this technology becomes pretty much irrelevant. Meat-based brains, even genetically enhanced ones, are going to be irrelevant in a post-AGI world.
One would need to start with animals. I propose starting with rats, which are a great model of cognitive studies
How exactly do you propose to do this given we don't have cognitive ability GWASes for rats, don't have a feasible technique for getting them without hundreds of thousands of phenotypes, and given the poor track record of candidate gene studies in establishing causal variants?
I keep finding examples of decision making that can be explained by shard theory. In particular, here's an example I read in ACX today about how depressed people make decisions that keep them depressed, which sounds an awful lot like "shards guiding behavior in ways that tend to lead to their own future self-reinforcement":
Millgram et al (2015) find that depressed people prefer to listen to sad rather than happy music. This matches personal experience; when I'm feeling down, I also prefer sad music. But why? Try setting aside all your internal human knowledge: wouldn’t it make more sense for sad people to listen to happy music, to cheer themselves up?
A later study asks depressed people why they do this. They say that sad music makes them feel better, because it’s more "relaxing" than happy music. They’re wrong. Other studies have shown that listening to sad music makes depressed people feel worse, just like you’d expect. And listening to happy music makes them feel better; they just won’t do it.
Scott's preferred explanation is one of a kind of "mood setpoint" which the depressed individual's actions are trying to reach:
Depression is often precipitated by some psychosocial event (like loss of a job, or the death of a loved one). It’s natural to feel sad for a little while after this. But instead of correctly activating regulatory processes to get mood back to normal, the body accepts the new level as its new set point, and tries to defend it.
By “defend it”, I mean that healthy people have a variety of mechanisms to stop being sad and get their mood back to a normal level. In depression, the patient appears to fight very hard to prevent mood getting back to a normal level. They stay in a dark room and avoid their friends. They even deliberately listen to sad music!
Self-reinforcing "depression shards" are obviously a mechanism through which depressive states can be maintained. But then the question becomes: why are some people more vulnerable to this kind of depression homeostasis than others?
There's certainly a genetic component, but given how polygenic depression risk is (>30k variants involved) the mechanisms are likely multi-causal.
I think it’s plausible we could go higher but I’m fairly certain the linear model will break down at some point. I don’t know exactly where, but somewhere above the current human range is a good guess.
You’ll likely need a “validation generation” to go beyond that, meaning a generation of very high IQ people who can study themselves and each other to better understand how real intelligence deviates from the linear model at high IQ ranges.
The reason they have lower IQs than humans can be explained entirely by neuron count.
Not true. Humans have an inbuilt propensity for language in a way that gorillas and other non-human primates don’t. There are other examples like this.
In Lex Fridman’s interview with Eliezer Yudkowsky, Eliezer presents no compelling path forward — and paints the future as almost non-existent.
It's worth pointing out that Eliezer's views on the relative hopelessness of the situation do not reflect those of the rest of the field. Nearly everyone else outside of MIRI is more optimistic than he is (though that is of course no guarantee he is wrong).
As an interested observer who has followed the field from a distance for about 6 years at this point, I don't think there has ever been a more interesting time with more things going on than now. When I talk to some of my friends that work in the field, many of their agendas sound kind of obvious to me, which is IMO an indication that there's a lot of low-hanging fruit in the field. I don't think you have to be a supergenius to make progress (unless perhaps you're working on agent foundations).
• The probability of doom given the development of AGI, + the probability of solving aging given AGI, nearly equals 1.
I'm not sure I understand what this means. Do you mean "and" instead of "+"? Otherwise this statement is a little vague.
If you consider solving aging a high priority and are concerned that delaying AI might delay such a solution, here are a few things to consider:
- Probably over a hundred billion people have died building the civilization we live in today. It would be pretty disrespectful to their legacy if we threw all that away at the last minute just because we couldn't wait 20 more years to build a machine god we could actually control. Not to mention all the people who will live in the future if we get this thing right. In the grand scheme of the cosmos, one or two generations is nothing.
- If you care deeply about this, you might consider working on cryonics both to make it cheaper for everyone and to increase the odds of personality and memory recovery following the revival process.
I live in Scandinavia and see no major (except for maybe EA dk?) political movements addressing these issues. I’m eager to make an impact but feel unsure about how to do so effectively without dedicating my entire life to AI risk.
One potential answer here is "earn to give". If you have a chance to enter a lucrative career you can use your earnings from that career to help fund work done by others.
If that's not an option or doesn't sound like something you'd enjoy, perhaps you could move? There are programs like SERI MATS you could attempt to enroll in if you're a newcomer to the field of AI safety but have a relevant background in math or computer science (or are willing to teach yourself before the program begins).
A bad enough error in a regulatory region could cause a protein to stop being expressed. But the insertion or deletion of a single base pair is not nearly as devastating.
Let me explain by talking through how a promoter works.
Promoters sit upstream of a region of the genome that codes for a protein. They generally serve as a binding site for a very important enzyme called RNA polymerase, whose job it is to transcribe DNA into mRNA, which can then be exported from the nucleus and turned into proteins.
You can delete a letter from a promoter and RNA polymerase will still be able to bind. The "binding affinity" (meaning the strength of the bond) will be affected by this deletion, but except in rare circumstances it will still work.
You can see this reflected in the distribution of insertion and deletions throughout the genome; there's not that many in coding regions, but there are tons in non-coding regions (on the order of 3-5 million).
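To make the contrast concrete, here's a toy illustration with a made-up sequence: a one-base deletion in a coding region shifts the reading frame and scrambles every downstream codon, while the same deletion in a promoter just perturbs a single position of a binding site.

```python
coding = "ATGGCTGAATGGCTT"  # made-up coding sequence

def codons(seq):
    # Split a sequence into the triplets the ribosome would read.
    return [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]

print(codons(coding))                   # ['ATG', 'GCT', 'GAA', 'TGG', 'CTT']
print(codons(coding[:4] + coding[5:]))  # one deletion: every later codon changes
```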
You know, I actually looked into this at one point.
At the time I didn't find any obvious reason why it wouldn't work. But I didn't spend that much time digging into the details, so my prior is it will be hard for some reason I haven't discovered yet.
If you could actually find a way to present arbitrary "self-antigens" to T and B cells during the development phase within the thymus, that would be an incredibly powerful technology. It seems plausible to me that we could potentially cure a large percentage of autoimmune conditions with that technology, provided we knew which epitopes were triggering a particular immune response. But I know much less about this area than about gene editing, so it's entirely plausible I'm wrong.
There's already a few therapies that basically take this approach; allergy shots are probably the most basic, though I don't believe they actually do anything with the thymus. The general term for this approach seems to be "Immune Tolerance".
With a short search, I don't see anything about reprogramming the thymus.