Comment by Alex K. Chen (alex-k-chen) on Effects of Castration on the Life Expectancy of Contemporary Men · 2021-06-01T22:03:05.970Z · LW · GW


Among invertebrates, birds and mammals, experimental paradigms that limit reproductive investment also cause lifespan extension [232]. Hypothetically, the need for repairing and preventing damage to the germline dominates resource allocation strategies, while the somatic tissues age and deteriorate [112]. In support of such theories, modulations of reproduction that eliminate germ cells in C. elegans and D. melanogaster provide effective mechanisms for extending lifespan [232-234], phenotypes that may be caused by heightened resource availability and proteome stability within the post-mitotic soma [17, 235]. Inhibiting germline proliferation delays the onset of PolyQ-dependent aggregation and toxicity [235]. Proteasome activity and RPN-6 protein levels are increased in germline-lacking worms [17]. In these long-lived animals, increased proteasome activity, rpn-6 expression and longevity are modulated by DAF-16 [17]. Similar to these long-lived worms, FOXO4 is necessary for increased proteasome activity and PSMD11/Rpn6 levels in immortal hESCs [28, 236]. Interestingly, it has been recently reported that DNA damage in germ cells of C. elegans induces a systemic response that protects somatic tissues by increasing their proteasome activity [237].

Possibly slightly relevant

Comment by Alex K. Chen (alex-k-chen) on Core Pathways of Aging · 2021-03-31T09:06:46.181Z · LW · GW

Laura Deming recently ran a cool Twitch stream on methylation - MAKE SURE TO SAVE THE VIDEOS BEFORE THEY GET DELETED BY TWITCH IN 14 DAYS

Comment by Alex K. Chen (alex-k-chen) on Core Pathways of Aging · 2021-03-31T09:05:40.371Z · LW · GW

Have you thought of ways to reduce labor costs?

Comment by Alex K. Chen (alex-k-chen) on How often do you check this forum? · 2021-01-26T07:05:11.739Z · LW · GW

You know, I barely checked LW from 2015 to 2020, and now I check it like, every time I feel like I need some novelty refresh, almost as much as I do Twitter... It has definitely improved since a few years ago

Comment by Alex K. Chen (alex-k-chen) on Are index funds still a good investment? · 2020-12-06T10:21:30.326Z · LW · GW

(and this was pre-covid)

Comment by Alex K. Chen (alex-k-chen) on Are index funds still a good investment? · 2020-12-03T20:02:50.666Z · LW · GW

The dot-com crash was also preceded by an extremely obvious and unique bubble that has not been seen since - diversifying/rebalancing during a massive/obvious bubble doesn't take that much special skill or awareness, and we're more aware of bubble dynamics now than we were in 2000.

Comment by Alex K. Chen (alex-k-chen) on Are index funds still a good investment? · 2020-12-02T22:37:36.624Z · LW · GW

Throughout the last decade (or last 15 years, really), FAANG stocks (and QQQ) have consistently outperformed the market/index funds, with roughly comparable maximum drawdowns relative to even the S&P. It was clear to many of us technophilic early adopters even in the late 2000s that Amazon/Google were going to take over the world (though I'd replace Netflix with NVIDIA as NVIDIA is just more innovative), and their returns have massively outperformed the market, with much smaller drawdowns. COVID only accelerated the returns from FAANG - however, with their monopolization (and penetration into all markets, reducing what upside risk there is left), I'm not sure if FAANG has as much market capture, going forward, as it had 5-10 years ago. I know some have said that it is safe to invest in "singularity stocks" like the cloud - ones that have a non-zero chance of precipitating the singularity (or feeding into the data-heavy thesis that accelerationism happens when you have more data/compute power/better algorithms, and only tech-heavy companies have really embraced this trend - some even liken Tesla's valuation to one that you can only understand if it were a "tech stock").
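
For concreteness, "maximum drawdown" here just means the worst peak-to-trough decline of a price path. A minimal sketch with made-up price paths (not real FAANG/S&P data):

```python
def max_drawdown(prices):
    """Largest peak-to-trough decline, as a fraction of the peak."""
    peak = prices[0]
    worst = 0.0
    for p in prices:
        peak = max(peak, p)
        worst = max(worst, (peak - p) / peak)
    return worst

# Illustrative (made-up) price paths, not real market data.
index_path = [100, 105, 95, 110, 120, 100, 130]
growth_path = [100, 120, 90, 140, 180, 130, 220]

print(max_drawdown(index_path))   # worst fall from a prior peak
print(max_drawdown(growth_path))  # higher-return path, deeper dip
```

Comparing drawdowns this way is a crude but common proxy for the pain of holding a position through its worst stretch.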

This year has pretty much accelerated the growth of all "new technology" stocks too (eg everything and anything to do with "new tech" exploded in value => to be fair ALL the "meme stocks" performed well), but many are now worried that they're overvalued and that the upside is not as high as it used to be. Ark Invest is the closest thing there is to a "hedge fund" that tries to understand "new technology" (even traditional hedge fund people like Bill Ackman and Ray Dalio aren't technophiles or well-versed in "technology" => it's known that hedge fund people tend not to outperform the market on long timescales, and a surprisingly small percent of them are, like, technophilic), and it has had amazing market-beating returns over the last few years (where you don't have to spend that much time paying attention to it). Also, despite the technophilia, the ARKK funds didn't really beat index funds pre-COVID (similarly to solar ETFs, which somehow exploded post-COVID for who-knows-what reason)

You have to look at companies with managers who constantly keep up with new technology/trends (rather than dig into what has always worked for them) and who can be expected to never stagnate.

I'm a little concerned about post-COVID overvaluations across the "tech sector" (especially given the "stagnation hypothesis" that many, particularly the Thielosphere, are concerned about), but I would still put some into QQQ, as QQQ has vastly outperformed index funds (though QQQ may be in a bubble itself). John Hussman has sounded the alarm for years, but if you had been paying attention to him, you would have lost out on the returns of the last 5 years. I've observed that most of the high-profile companies that have recently IPO'd (especially in the tech sector) and which have rapidly-growing userbases have had much higher returns than most other companies - just look at how far Twilio and Slack and Spotify and Cloudflare have gone up. Many recent biotech companies that have targeted CRISPR have also gone way up, but they're more at risk of a sudden catastrophic drop if a clinical trial doesn't pan out.

Many I know are bullish on cryptocurrency [particularly BTC/Ethereum] again, esp given the prevalence of money printing/devaluation as a response to the COVID crisis (and perhaps as an easier/more politically feasible way to "get money into the economy" than higher taxation), and since BTC is near its ATH and, in their view, still nowhere near being in a bubble.

A heuristic I might use: What products/technologies are the smartest/most innovative people (eg those at DeepMind) using? [eg note how viciously smart people in AI have massive salaries and actually, like, use their salaries on something] Their resourcefulness + financial resources will only improve with time, and their ability to have frictionless workflows (minimizing the amount of time they spend on unnecessary logistical things such as upgrading their PC/changing homes/buying a new car/backing up data/preventative medicine/etc) + collect data + store energy for data centers + make use of these massive datasets depends on them having access to certain resources (be it energy, speed, technology, SSDs, advanced materials). Think of what they will be like 10 years in the future, and of the materials they will use to maximize their ability to make money-time (or money-time-energy) tradeoffs. Sufficiently resourceful companies will never saturate - they will figure out how to create demand in areas where demand was not previously thought to exist (kind of like how if you create enough products and advertise them to people, you may convince them that they have a "need" they might not have thought of themselves). If you use this valuation heuristic, you wouldn't be surprised at the massive increases in NVIDIA/AMD/Google/LRCX/GLW/cloud stocks/whatever.

[given current tech valuations, I feel that materials science is the sector that has the most potential to improve/advance, and I am somewhat invested in LRCX/MU/AMAT, but I feel like there still isn't, like, an equivalent to "big tech" for the materials science sector - there isn't a huge market [yet] for photonic or neuromorphic computing, for instance. This is overdue, and it's possible that "AI" can catalyze a massive shift in materials science innovation that could lead to fast AI takeoff]. Also, with quantitative easing and other novel financial instruments (such as, potentially, cryptocurrency fintech), we may be able to more quickly "manufacture ourselves" out of recessions/crises than before, without being overly dependent on politics or on which party wins the White House and dictates tax policy => this flexibility is also why a great depression a la 1929 is unlikely to ever happen again.

Comment by Alex K. Chen (alex-k-chen) on [Linkpost] AlphaFold: a solution to a 50-year-old grand challenge in biology · 2020-12-01T04:21:41.195Z · LW · GW

Does knowing the structure of a protein help with simulating how it responds to any arbitrary/unknown protein/molecule/agonist/antagonist/superagonist? [it seems that even with all the protein structures we do know well, finding appropriate agonists of the protein with the desired action is still a huge unsolved problem]. Is simulation a much more difficult problem than "folding"?

This allows us to design "efficient" proteins (proteins designed "intelligently" often do tend to be smaller and less "messy" and "bulky" than naturally-evolved proteins [which also cross over at the most pedagogically unhelpful sites ever]), and with protein folding solved, it may be easier for us to design proteins that are less complicated/more amenable to simulation than the natural set of proteins that exist => not to mention that it may be possible to find a specific transferase protein that is able to precisely add a methyl or carboxyl group to any molecule at any location, or a ligase that is able to split a molecule at any arbitrary location. We may also be able to design them based on properties like how easy it is to introduce them into the cell via mRNA (the genes for many natural proteins are not easy to introduce into the cell via CRISPR or AAV, but as protein design-space is so large, you can probably design another protein that carries out the same function and that can be delivered into cells via mRNA or CMV-based vectors, without needing to force the corresponding gene into the right location in the cell's nucleus).

Anyhow, designing proteins for industrial chemistry (eg properly degrade polyethylene plastics in the ocean) [and also those with a specific physical property rather than those that perform a very specific function] is a much easier problem than, say, figuring out how to make an extremely particular histone acetyltransferase or DNA methyltransferase or chaperone enzyme [often those at the center of hub networks and whose evolved messiness naturally evolves due to the necessity of needing to have other extremely precise interactions with other proteins that have also evolved to become messy bloated behemoths] localize/diffuse at the locations where it can precisely do the right things at {X} sites and not do the wrong things at the {Y} other sites. 

Also, this helps us develop a "periodic table of protein function" where you can design proteins that carry out X function if you change certain motifs, and it will turn out much cleaner/more organizable/more predictable than the natural super-messy [and hard to organize] set of protein motifs we find in the wild. I think this is especially relevant for manufacturing and industrial chemistry - proteins that broadly carry out functions sort of similar to zymogens.

The whole field of structural biology was 95% useless anyway.

As long as it produces machine-interpretable output, it's useful for training new algorithms, even if the vast majority of humans are unable to properly interpret protein structure.

^Anyhow, this post was replying to the idealized version. Protein folding is still far from solved, as explains. It's an exciting advance to be sure. I think this allows us to better figure out what a stable system of ultrastructural scaffolds is first before figuring out what precise things can be built USING those ultrastructural scaffolds.

Comment by Alex K. Chen (alex-k-chen) on How do you assess the quality / reliability of a scientific study? · 2020-11-11T04:39:26.232Z · LW · GW

Is there an online way to better tag which studies are suspect and which ones aren't - for the sake of everyone else who reads after?

Comment by Alex K. Chen (alex-k-chen) on When Money Is Abundant, Knowledge Is The Real Wealth · 2020-11-11T04:30:33.688Z · LW · GW

So, two years ago I quit my monetarily-lucrative job as a data scientist and have mostly focused on acquiring knowledge since then. I can worry about money if and when I know what to do with it.

Also this knowledge only matters if you do something useful with it, which I'm convinced you do, for instance. Many other people are not able to create useful knowledge and thus may be better suited for earning-to-give.

Comment by Alex K. Chen (alex-k-chen) on Gears-Level Models are Capital Investments · 2020-11-11T04:11:01.018Z · LW · GW

Do you think that applying black box models can result in "progress"? Say, molecular modeling/docking or climate modeling or whole-cell modeling or certain finite-element models? [climate models kind of work with finite element analysis but most people who run them don't understand all the precise elements used in the finite element analysis or COMSOL]? It always seems that there are many many more people who run the models than there are people who develop the models, and the many people who run the models (some of whom are students) are often not as knowledgeable about the internals as those who develop them - yet they still can produce unexpected leads/insights  [or stories - which CAN be deceiving, but which in an optimal world helps others understand the system better even if they aren't super-familiar with the GFD equations of motions that run inside climate models or COMSOL] that might be better than chance.

Comment by Alex K. Chen (alex-k-chen) on When Money Is Abundant, Knowledge Is The Real Wealth · 2020-11-11T03:53:35.432Z · LW · GW

>In line with John’s argument here, we should develop a robust gears-level understanding of scientific funding and organization before assuming that more power or more money can’t help.

How about a Metaculus/prediction market for scientific advances given an investment in X person or project (where people put stake into the success of a person or project)? Is this susceptible to bad incentives?

Comment by Alex K. Chen (alex-k-chen) on When Money Is Abundant, Knowledge Is The Real Wealth · 2020-11-11T03:20:08.749Z · LW · GW

in the space of aging (or models in bioscience research in general), you should contact Alexey Guzey and Jose Ricon and Michael Nielsen and Adam Marblestone and Laura Deming. You'd particularly click with some of these people, and many of them recognize the low number of independent thinkers in the area.

I think you have a kind of thinking that almost everyone else in aging I know seems to lack (if I showed your writing to most aging researchers, they'd most likely glaze over what you wrote), so writing up, say, a physical-principles framework for aging could result in a lot of people wanting to fund you (a la Pascal's wager - there are LOTS of people who are willing to throw money into the field even if it doesn't have a huge chance of producing results - and a good physical framework can make others want you to make the most out of your time, especially as many richer/older people lack the neuroplasticity to change how aging research is done). Many many many papers have already been written in the field (many by people guessing at what matters most) - a lot of them being very messy and not very first-principles (even JP de Magalhaes's work, while important, is kind of "messy" guessing at the factors that matter).

Are you time-limited? Do you have all the money needed to maximize your output on the world? (note for making the most out of your limited time, I generally recommend being like mati roy and trying to create a simulation of yourself that future you/others can search, which generally requires a lot of HD/streaming - though even that is not that expensive). 

It seems that you can understand a broad range of extremely technical fields that few other people do (esp optimization theory and category theory), and that you get a lot out of what you read (other people may not get as much out of reading a technical textbook as you do) - thus you may be more suited for theoretical/scalable work than for work that's less generalizable/scalable (one issue with bioscience research is that most people in it spend a lot of time on busywork that may be automated later, so most biologists aren't as broad or generalizable as you are; you can put together broad frameworks that improve the efficiency/rigor of the future people who read you, so you should optimize for things that are highly generalizable).

[you also put them all in a clear/explainable fashion that makes me WANT to return back to reading your posts, which is not something I can say for most textbooks].

There are tradeoffs between spending more time on ONE area vs spending time on ANOTHER area of academic knowledge - though there are areas where good thinking in one area can transfer to another (eg optimization theory => whole cell modeling/systems biology in biology/aging). Building general purpose models (if described well) could be an area you might have unique comparative advantage over others in, where you could guide someone else's thinking on the details even if you did not have the time to look at the individual implementations of your model on the system at hand. 

If you become someone who everyone else in the area wants to follow (eg Laura Deming), you can ask questions and get pretty much every expert swarming over you, wanting to answer your questions.

You seem good at theory (which is low-cost), but how much would you want to ideally budget for sample lab space and experiments? [the more details you put in your framework - along with how you will measure the deliverables, the easier it would be to get some sort of starter funding for your ideas]. Doing some small cheap study (and putting all the output in an open online format that transcends academic publishing) can help net you attention and funding for more studies (it certainly seems that with every nascent field, it takes a certain something to get noticed, but once you do get noticed, things can get much easier over time, particularly if you're the independent kind of person). Wrt biology, I do get the impression that you don't interact much with other biologists, which might make the communication problems more difficult for now [like, if I sent your aging posts as is to most biologists I know, I don't think they would be particularly responsive or excited].

BTW - regarding wealth - fightaging has a great definition at

>Wealth is a measure of your ability to do what you would like to do, when you would like to do it - a measure of your breadth of immediately available choice. Therefore your wealth is determined by the resources you presently own, as everything requires resources.

Generally speaking, due to aging [and the loss of potential that comes with it] most people's wealth decreases with age (it's said that the wealthiest people are really those just born) - however, your ability to imagine what you can do with wealth (within an affordance space - or what you can imagine doing over the next year if given all the resources you can handle - framework) can increase over time. Mental models are only wealth inasmuch as they actively work to improve people's decision-making on the margin relative to an alternative model (they are necessary for innovation, but there are now so many mental models that taking time to understand one reduces the amount of time one has to understand another mental model) - I do believe that compressible mental models (or network models) that explain a principle elegantly can offload the time investment it takes to use a model to act on a decision (eg superforecasters use elegant models that others believe and can act on - thus knowing when to use the expertise of superforecasters can help decision-making). Not many people can create an elegant mental model, and fewer can create one that is useful on top of all the other models that have been developed (useful in the sense that it makes it more useful for others to read your model than all the confusing model renditions used by others) - obviously there is vast space for improvement on this front (as you can see if you read Quantum Country) as most people forget the vast majority of what they read from textbooks or from conversations with others. Presentism is an ongoing issue as more papers/online content is published than there are total eyeballs to read them (+ all the material published in the past).

The best kind of wealth you can create, in this sense, is a model/framework/tool that everyone uses. Think of how wealth was created with the invention of a new programming language, for example, or with Stack Exchange/Hacker News, or a game engine, or the wealth that could be created with automating tedious steps in biology, or the kind that makes it far easier for other people to make or write almost anything. The more people cite you, the more wealth and influence (of a certain kind) you get. This generalizes better than putting your entire life into studying a single protein or model organism, especially if you find a model/technique that is easily-adoptable and makes it easy to do/automate high-throughput "-omics" of all organisms and interventions at once (making it possible for others to speed up and generalize biology research where it used to be super-slow). Bonus points if you make it machine-readable and put in a database that can be queried so that it is useful even if no one reads it at first [as amount of data generated is higher/faster than the total mental bandwidth/capacity of all humans who can read it]. 

[btw, attention also correlates with wealth, and money/attention/wealth is competitive in a way that knowledge is not (wisdom may be which knowledge to read in which order - wisdom is how you can use knowledge to maximize the wealth that you can use with that knowledge)]

[Shaping people's framework by causing them to constantly refer to your list of causes, btw, is another way to create influence/wealth - but this may get in the way of maximizing social wealth over a lifetime if your frameworks end up preventing people from modeling or envisioning how they can discover new anomalies in the data that do not fit within those frameworks - this is also why we just need a better concrete framework with physical observables for measuring aging rate, where our ability to characterize epigenetic aging is a local improvement. ]

In the area of aging there is already too much "knowledge" (though not all of it particularly insightful), but does the sum of all aging papers published constitute knowledge? Laura Deming mentions on her Twitter that she thinks about what not to read, rather than what to read, and recommends students study math/CS/physics rather than biochemistry. There may be a way to compress all this knowledge into a more organized physical-principles format that better helps other people map what counts as knowledge and what doesn't - but at this moment the sum of all aging research is still a disorganized mess, and it may be that the details of much of what we know now will be superseded by new high-throughput work that publishes data/metadata rather than papers (along with a publicly accessible annotation service that better guides people as to which aging papers represent true progress and which will simply become obsolete quickly). Guiding people to the physical insight of a cell is more important for this kind of true understanding of aging, even though we can still get things done through rudimentary insight-free guesses like more work on rapamycin and calorie restriction.

Comment by Alex K. Chen (alex-k-chen) on Three Open Problems in Aging · 2020-11-11T02:50:37.774Z · LW · GW

More on lipid oxidation: it also depends on the composition (polyunsaturated to saturated fat ratio) of the lipids that compose the cell's membranes. You also need to account for all the other highly reactive cell oxidation byproducts (eg 4-HNE, Michael adducts, methylglyoxal, CML) as well as the overall redox potential of the cell (eg glutathione seems to be an abundant antioxidant that reacts with many highly reactive byproducts...)

Additionally, in the case of arteriosclerosis, it also depends on the ratio/supply of oxidized cholesterol (particularly keto-cholesterol). What you described for arteriosclerosis seems generalized to any cell, but it's clear that there's something happening on the artery walls at a faster rate than what's happening to other cells/tissues, and you haven't fully accounted for the difference yet.

Fenton chemistry has feedback loops with ascorbic acid (higher levels of metals can turn ascorbic acid into a pro-oxidant). 

[quantification of AGEs - ]

Comment by Alex K. Chen (alex-k-chen) on Bet On Biden · 2020-11-09T10:16:35.229Z · LW · GW

538 totally outperformed Intrade in 2012 - it seems like there were whales pushing up the Romney price on Intrade.

Comment by Alex K. Chen (alex-k-chen) on (How) should we pursue human longevity? · 2020-11-05T05:35:15.753Z · LW · GW

Comment by Alex K. Chen (alex-k-chen) on (How) should we pursue human longevity? · 2020-10-26T02:28:46.028Z · LW · GW

Is it even possible to map out "root causes" in a complex system (eg maybe Granger causality in neural networks) when the "cause" could be multiple factors that are jointly necessary - none of them sufficient enough to cause the irreversible feedback loop in itself?
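
To make the Granger-causality idea concrete, here's a toy sketch on synthetic data: a crude variance-reduction check rather than a proper F-test, with all coefficients made up for illustration.

```python
import numpy as np

def granger_improvement(x, y, lag=1):
    """Fractional reduction in residual variance of y when lagged x is
    added to a lag-1 autoregression of y. A crude Granger-style check,
    not a substitute for a proper statistical test."""
    Y = y[lag:]
    A_restricted = np.column_stack([np.ones(len(Y)), y[:-lag]])            # y's own past only
    A_full = np.column_stack([np.ones(len(Y)), y[:-lag], x[:-lag]])        # add x's past
    rss = lambda A: np.sum((Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0]) ** 2)
    return 1 - rss(A_full) / rss(A_restricted)

# Synthetic system in which x drives y (coefficients are arbitrary).
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

print(granger_improvement(x, y))  # large: x's past helps predict y
print(granger_improvement(y, x))  # near zero: y's past doesn't help predict x
```

Even here the asymmetry only identifies predictive direction, not a "root cause" - which is part of the worry in the comment above when multiple factors are jointly necessary.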

Comment by Alex K. Chen (alex-k-chen) on What Decision Theory is Implied By Predictive Processing? · 2020-10-25T22:16:42.515Z · LW · GW

"A prototypical example here would be an abstraction-based decision theory. There, the notion of "success" would not be "system achieves the maximum amount of utility", but rather "system abstracts into a utility-maximizing agent". The system's "choices" will be used both to maximize utility and to make sure the abstraction holds. The "supporting infrastructure" part - i.e. making sure the abstraction holds - is what would handle things like e.g. acting as though the agent is deciding for simulations of itself (see the link for more explanation of that)."


Isn't this kind of like virtue ethics as opposed to utilitarianism?

Comment by Alex K. Chen (alex-k-chen) on Public Static: What is Abstraction? · 2020-10-25T20:47:13.165Z · LW · GW

This points toward a more general class of questions: when, and to what extent, does it all add up to normality? We learned the high-level ideal gas laws long before we learned the low-level molecular theory, but we knew the low-level had to at least be consistent with that high-level structure. What low-level structures did that constraint exclude? More generally: to what extent does our knowledge of the high-level model structure constrain the possible low-level structures?

One good class of structure for these sorts of questions is causal structure: to what extent does high-level causal structure constrain the possible low-level causal structures? I'll probably have a post on that soon-ish.

Doesn't high-level structure entail statistical averages and not necessarily Boltzmann brains in the low-level structure? Like - what of the nonequilibrium statistical mechanics?

Comment by Alex K. Chen (alex-k-chen) on Category Theory Without The Baggage · 2020-10-25T19:28:52.948Z · LW · GW

So like, can you use morphisms to map paths described in one graph to paths described in another graph even if the nodes are different or loosely defined? (eg a functor from one graph to another that maps all the nodes that are tagged as "high probability", or all the nodes whose connectivity exceeds X, to a second graph that is very different from the first graph but which has nodes that can still be ordered by connectivity and have connectivity values that may exceed X?) Where X may be a fixed number or a number that scales with the dimension of the graph?

Like, I can try to plot out my "weird learning strategy" whenever I enter new environments and I can maybe construct a path that maps out this "weird learning strategy" (focus on highly connected clusters that already have lots of information outputted, focus on nodes that are not individually overwhelming, focus on nodes that already have some connectivity with my own original graph [the possible relationships/morphisms between my graph and that of environment1 and environment2 are different - however - they're still enough to impose some kind of structure that can be used to establish morphisms between me+environment1 and me+environment2])



Also, aren't real-world categories so murky that any morphism between two categories is loosely rather than absolutely defined? [eg in paths you execute comparing one to the other, you would expect your morphisms to be wrong some X percent of the time]. I would expect that maybe your own graph/category corresponds to your own "world model", and you might be trying to create a new behavioral graph/path for a new person by mapping your world model to that of another person [where some nodes are missing] and specifying actions for them that are contrary to the actions you take?
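
A toy version of the "loose morphism between dissimilar graphs" idea - pairing hub nodes by degree rank rather than by identity. The node labels, edges, and threshold below are all made up for illustration:

```python
def degree(graph, node):
    """Number of edges touching a node (edges are frozensets of labels)."""
    return sum(node in edge for edge in graph)

def high_degree_nodes(graph, threshold):
    """Nodes whose degree meets the 'connectivity exceeding X' cutoff."""
    nodes = {n for edge in graph for n in edge}
    return {n for n in nodes if degree(graph, n) >= threshold}

# Two structurally different toy graphs.
g1 = [frozenset(e) for e in [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c")]]
g2 = [frozenset(e) for e in [("x", "y"), ("x", "z"), ("y", "z"), ("y", "w"), ("w", "z")]]

# A "loose functor": pair hubs by degree rank (ties broken alphabetically),
# ignoring node identity entirely.
hubs1 = sorted(high_degree_nodes(g1, 2), key=lambda n: (-degree(g1, n), n))
hubs2 = sorted(high_degree_nodes(g2, 2), key=lambda n: (-degree(g2, n), n))
mapping = dict(zip(hubs1, hubs2))
print(mapping)
```

This is exactly the "loosely rather than absolutely defined" situation: the pairing preserves a coarse structural property (hub-ness), not the edges themselves, so it will be "wrong" about fine structure some percent of the time.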

Comment by Alex K. Chen (alex-k-chen) on Comparative Advantage is Not About Trade · 2020-10-25T19:10:21.011Z · LW · GW

Doesn't Pareto-optimality imply a lack of convexity/concavity?
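
For reference, Pareto-optimality by itself only requires undominatedness - the frontier can trace out a convex, concave, or irregular shape. A toy filter over made-up two-objective points:

```python
def pareto_frontier(points):
    """Points not dominated by any other point (maximizing both coordinates)."""
    return [p for p in points
            if not any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points)]

# Made-up (objective1, objective2) pairs; only (2, 2) is dominated.
pts = [(1, 5), (2, 4), (3, 3), (2, 2), (4, 1), (0, 6)]
print(sorted(pareto_frontier(pts)))  # [(0, 6), (1, 5), (2, 4), (3, 3), (4, 1)]
```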

Comment by Alex K. Chen (alex-k-chen) on (How) should we pursue human longevity? · 2020-10-25T19:04:47.709Z · LW · GW

Structural proteins like the extremely long-lived ones in nuclear pore complexes don't turn over (similarly, damage to nuclear histone proteins is very difficult to repair). Even small changes in these proteins can affect the ability of mRNA and all of the spliceosome proteins to be properly assembled where they're most needed => this gradually sums up to a corrosion of cellular information

Comment by Alex K. Chen (alex-k-chen) on Book Review: Working With Contracts · 2020-10-25T19:00:31.583Z · LW · GW

Shouldn't smart contracts with staking also allow you to more readily enter contracts where payoffs are unknown? (eg you're not sure if investing in a person or decision will result in the payoffs you want - there's rather a distribution/ambiguity of outcomes). You mention rebalancing - this is where formalized smart contracts allow you to rebalance contracts based on another element of risk if you notice that you've staked too much on options that are volatile in response to investments that have too much time-correlated X1 in them?

You might even be unsure as to what your value function is (many people are!) but still have some aesthetic discernment/taste that allows you to make contracts in those areas where you are discerning

Comment by Alex K. Chen (alex-k-chen) on (How) should we pursue human longevity? · 2020-10-25T18:44:40.729Z · LW · GW

Damage/dysregulation to the control sites is more central to the network - repair genes/proteins like OGG1/ERCC1, or the upstream control factors of everything, or kinases. For whatever reason, expression of most repair genes (and heat shock proteins) goes down with time.

Spliceosomes are especially important too, as are the upstream genes behind lysosome synthesis and proteasome synthesis.

Damage to structural components (like extremely long-lived proteins) is harder to repair and simultaneously makes it harder for repair proteins to properly localize to where they're needed.

It's not a matter of simple down-expression or up-expression - though if I were to bet, I wouldn't say that damage to the repair proteins or proteasomes is totally causal - it's the simultaneously distributed damage to everything that ultimately builds up, and I don't think it can be summed into any neat causes other than a changed damage-to-repair ratio.

If I were to bet on one mechanism, it would be repair genes that get jammed/make errors during repair. Statistically speaking, some percent of DNA repair enzymes will screw up the process of repair (or introduce further damage), and lysosomes/proteasomes will get traffic jams that are difficult to remove/clear.

Comment by Alex K. Chen (alex-k-chen) on (How) should we pursue human longevity? · 2020-10-25T18:13:01.686Z · LW · GW

Well, the root cause is ultimately the accumulation of small kinds of damage and dislocation (like oxidative/carbonylated damage on proteins/DNA, an increase in clogged proteasomes/lysosomes, or inappropriate DNA adducts) that ultimately do not get corrected. An oxidative damage event in itself is nothing, but all of the events integrated over a lifetime amount to something.

Comment by Alex K. Chen (alex-k-chen) on (How) should we pursue human longevity? · 2020-10-25T18:07:44.889Z · LW · GW

> But it's pretty suspicious if two different causes both increase the risk of every single one of those things - if we had a complete graph with random weights on all the connections, then some factors should push some diseases up, while others should push down. Instead, we see a variety of mutations/interventions (like progerias, calorie restriction, etc) which all push most of these things in the same direction, which pretty strongly suggests that they're operating through the same pathway.

It's all about the damage to repair/replacement balance. Birds have higher repair rates, as do NMRs, as do bowhead whales. CR decreases synthesis of proteins that could be damaged while simultaneously (and strangely) upregulating repair. It might be instrumental to look at all the TFs activated by CR.
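The complete-graph argument above can be illustrated with a toy simulation (everything here is made up for illustration - no biology in the numbers): with independent random-signed cause→disease edges, one intervention should push some "diseases" up and others down, whereas routing everything through a shared pathway makes all the effects move the same way.

```python
# Toy version of the "fully-connected graph with random weights" argument.
# Random independent edge signs -> mixed-sign effects on downstream nodes;
# one shared upstream pathway -> uniformly-signed effects.
import random

random.seed(0)
n_diseases = 20

def intervention_effects(shared_pathway):
    """Effect of one intervention on each 'disease' node (illustrative)."""
    if shared_pathway:
        # every disease sits downstream of one common node with positive gain
        gain = random.uniform(0.1, 1.0)
        return [gain * random.uniform(0.1, 1.0) for _ in range(n_diseases)]
    # independent edges with random signs
    return [random.uniform(-1.0, 1.0) for _ in range(n_diseases)]

random_model = intervention_effects(shared_pathway=False)
shared_model = intervention_effects(shared_pathway=True)

print(sum(e > 0 for e in random_model))  # mixed signs expected
print(sum(e > 0 for e in shared_model))  # all one sign
```

Seeing many different interventions all move the diseases in the same direction is much more likely under the shared-pathway model, which is the point of the quoted argument.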

Comment by Alex K. Chen (alex-k-chen) on The Lens, Progerias and Polycausality · 2020-10-25T18:04:27.025Z · LW · GW

>Usually it will copy into non-coding DNA, and then be suppressed, so there's no noticeable effect. But over time, the transposon count increases, the suppressor count doesn't increase, and eventually the transposons get out of control.  

Wouldn't it expand the size of the genome and potentially affect the distance between promoters/enhancers and target genes, causing a loss in a cell's ability to appropriately regulate transcription in response to perturbation?

I know some people (like genesis lung) who actively take lysine or antiretrovirals to suppress transposon activity - antiretrovirals may be associated with longevity.

Comment by Alex K. Chen (alex-k-chen) on (How) should we pursue human longevity? · 2020-10-25T17:23:43.692Z · LW · GW

I should add that much of the most encouraging progress is in the area of xenotransplantation/stem cell transplantation, artificial organs, and neuronal replacement therapy (it's already being done for Parkinson's, though regeneration of the basal ganglia may be easier than regeneration of the entire brain - the lab is particularly good to follow on this). You don't need an entire mimic of the brain's memory or identity to maintain the continuity of consciousness.

You don't need a full understanding of aging to safely transmit stem cells or tissue from one organism to another, and there are scientists who are working on the immunorejection problem wrt non-allogenic stem cells (eg see ). 

Hell, some people [like Dave Asprey] already boldly inject themselves with stem cells ( ) and while he doesn't know what the hell he's doing (and it seems like the FDA isn't even holding him back from this), the experimentation provides great evidence for the rest of us.

Gradual "chimerism" is an encouraging direction that does not require us to understand all kinds of age-related damage (eg ). "chimerism", broadly speaking, is inclusive of cells "transferring mitochondria" to other cells, cells "transferring telomere to other cells", AAV'ing or CMV'ing genes [like enhanced SIRT6] from centenarians or bowhead whales into humans.

Even glial cells and astrocytes can be "reprogrammed" into neurons (though whether they will retain the information is another issue), and there are ways for cells to "expel" their aggregates/debris to be cleared up or removed by macrophages/astrocytes/cerebrospinal fluid (hell, even the Li-Huei Tsai lab has shown that playing 40Hz frequencies can help remove amyloid plaque from neurons). One of the real questions I have is whether there is a way for a cell to expel all kinds of intracellular junk, including the notoriously intractable lipofuscin and ceroid aggregates (there's a kind of lipid/protein structure that is amenable to being engulfed by exosomes, which can clear out the damage).

It's important to note that bowhead whales can live to 200+ years even without obvious age-associated pathology, and we may be prodded to build more "robust" mitochondria/cell membranes just by studying them (eg degree of membrane unsaturation seems anticorrelated with longevity in organisms like birds). We can also simply try to make our cell membranes less prone to ROS (one way is to incorporate deuterated PUFAs/omega-3's), which somehow still does not get much research [some degree of enhanced deuteration may also be helpful for longevity].

Overall you want to maintain the cell's ability to sense damage and to properly reduce this damage before it reaches high levels [genomic damage/mosaicism is the hardest to sense, but this is ultimately a more distal problem than more immediate problems that come from loss of proteostasis]

Comment by Alex K. Chen (alex-k-chen) on (How) should we pursue human longevity? · 2020-10-25T17:17:45.161Z · LW · GW

[on the stem cells - ]

Comment by Alex K. Chen (alex-k-chen) on (How) should we pursue human longevity? · 2020-10-25T17:05:21.313Z · LW · GW

Uh, I think loss of proteostasis and increased damage to proteins/lipids can be implicated in all types of age-related disease (you could theoretically have perfect genome integrity, and with loss of proteostasis aging would still occur, though at some point the loss of proteostasis would hit the genome). Similarly, you can have an organism age without inflammation (think of single-celled organisms), telomere damage, oxidative stress (though oxidative damage is one of the most common forms of damage), or senescence (all of these are just accelerants). More complex organisms just have more ways to get damaged (they also have more sophisticated methods of damage control, especially birds/naked mole rats/bowhead whales).

But reduced ability to maintain the specificity, stoichiometry, and precise control offered by the genome/proteome - due to changes in the cell's ability to synthesize the proteins needed to properly sense perturbations from equilibrium [and to properly translate and distribute the proteins that act on such perturbations] - is fundamentally a root cause of aging in all organisms. "Damage" to a proteome (or lipidome) - some of which is sensed throughout the organism - ultimately leads to the other "accelerants" like telomere attrition, stem cell loss, or senescence that further compromise a cell's ability to do proper repair.

>fully-connected graph of many causes

This is probably the best way to "explain" a "cause", even though it isn't great for linguistically compressing causality (or even compressing causality in Pearl's notation).

Comment by Alex K. Chen (alex-k-chen) on If a "Kickstarter for Inadequate Equlibria" was built, do you have a concrete inadequate equilibrium to fix? · 2020-10-22T17:43:30.388Z · LW · GW

The chances of climate change making Phoenix uninhabitable >>> the chances of being cryonically revived. Keep in mind that the energy required for AC increases as the square of the temperature difference between inside and outside, and very few people really know how to deal with temperatures that regularly go above 120F, which could very well happen in Phoenix in 60 years.

Comment by Alex K. Chen (alex-k-chen) on Is Stupidity Expanding? Some Hypotheses. · 2020-10-18T17:52:44.531Z · LW · GW

Have you looked into the reverse Flynn effect? eg see (shows a reverse Flynn effect for Norwegian cohorts)

Some speculate it happens b/c more educated/smarter people have fewer children. But this may not apply when you control for sibling effects.

Blood lead levels peaked for both Boomers and GenX, but drastically decreased across the GenX/Millennial transition. The above study covers cohorts born from 1961 to 1990, and shows that the Millennials (presumably with the lowest lead levels!) also have the lowest raw IQ. [The study above is for Norway - I don't know how much lead was present in Norwegians mid-century, but it appears that Norway had a lead problem just as the US did.]

> In the new study, the researchers observed IQ drops occurring within actual families, between brothers and sons – meaning the effect likely isn't due to shifting demographic factors as some have suggested, such as the dysgenic accumulation of disadvantageous genes across areas of society.
>
> Instead, it suggests changes in lifestyle could be what's behind these lower IQs, perhaps due to the way children are educated, the way they're brought up, and the things they spend time doing more and less (the types of play they engage in, whether they read books, and so on).
>
> Another possibility is that IQ tests haven't adapted to accurately quantify an estimate of modern people's intelligence – favouring forms of formally taught reasoning that may be less emphasised in contemporary education and young people's lifestyles.

It is worth noting that air and water pollution levels are significantly lower now than several decades ago, and organochlorine pesticides have been phased out (in favor of organophosphate pesticides - organochlorines seem to cause greater hits to IQ and epigenetic age), so environmental pollution probably isn't as important here as other factors. (at the same time, it's possible that people have been exposed to increased levels of possibly-IQ-decreasing pollutants such as microplastics or flame retardants)

Perception of reduced intelligence/creativity could also simply be caused by longer life courses (the social capital gerontological glut - which causes many young people to define their life paths around this glut, and to be careful about what they say for fear of alienating it), making people take longer to grow up before they can get into positions where they can produce widely-read important work (which is related but not identical to aging of the population). People are often not at their most organic selves when trying to "reach a social bar" where the average age of the people who make it (eg R01 investigators, university faculty positions, leadership/management positions) only continues to increase. I'm not sure if this applies to much of the valid intelligence-showing work that is produced online and then doesn't get deleted, but it certainly seems like people have a tendency to fail to archive everything they've produced online during their years of peak intelligence.

Overall, we know that real intelligence, g, is slowly declining in Western nations and China (possibly in other locations as well). For a good, easily understandable explanation of the Flynn Effect and the decline in g, read At Our Wits' End: Why We're Becoming Less Intelligent and What It Means for the Future, by E. A. Dutton & M. A. Woodley of Menie (Exeter, UK: Imprint Academic). If you want a reasonably long list of papers that have addressed the decline in intelligence, ask and I will post a list.

Comment by Alex K. Chen (alex-k-chen) on The Achilles Heel Hypothesis for AI · 2020-10-16T18:28:14.156Z · LW · GW

Has anyone else noticed that this paper is much clearer on definitions and much more readable than the vast majority of the AI safety literature it draws on? It has a lot of definitions that could be put in an "encyclopedia for friendly AI", so to speak.

Some extra questions:

  • How much time/effort did it take for you to write this all? What was the hardest part of this?
  • Do most systems today unintentionally have corrigibility simply b/c they are not complex enough to represent "being turned off" as a strong negative in their reward functions?
  • Are Newcombian problems rarely found in the real world, but much more likely to be found in the AI world (esp b/c the AI has a modeler that can model what it would do)?

Comment by Alex K. Chen (alex-k-chen) on If a "Kickstarter for Inadequate Equlibria" was built, do you have a concrete inadequate equilibrium to fix? · 2020-10-15T17:37:49.544Z · LW · GW

But aren't cooling costs to room temperatures higher in Phoenix than other places? (esp given the longer duration of heat?)

Comment by Alex K. Chen (alex-k-chen) on If a "Kickstarter for Inadequate Equlibria" was built, do you have a concrete inadequate equilibrium to fix? · 2020-10-14T12:49:08.091Z · LW · GW

Climate change is causing the American southwest (and Phoenix) to warm up even faster than other places - plus the Colorado River's flow is drying up at its sources - so is Phoenix even a sustainable choice 30-40 years down the line? Especially for cryopreservations? In Phoenix's favor, it is surrounded by deserts well-suited to solar power, so the cost of electricity to cool the city down may not be as much of an issue as before, but I'd still worry about the constant electricity needed to keep the cryopreserved bodies cold throughout the year.

Keep in mind that the energy required for AC increases as the square of the temperature difference between inside and outside, and very few people really know how to deal with temperatures that regularly go above 120F, which could very well happen in Phoenix in 60 years. It seems that the scaling laws of "removing waste heat" are such that the feedback loops are all positive as temperature increases (eg more AC, more venting of heated air outside, more people staying inside b/c they can't go outside, water becoming far scarcer and more expensive to import, etc).
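The ΔT² claim can be sanity-checked with a toy model (all numbers here are illustrative, not measured): conductive heat influx into a building grows roughly linearly with the indoor/outdoor temperature difference, and even an ideal (Carnot) air conditioner's efficiency falls off as 1/ΔT, so the electrical power needed to hold the indoor temperature grows roughly as ΔT².

```python
# Toy model behind "AC energy scales as the square of the temperature
# difference": heat influx ~ ΔT (conduction through the envelope), ideal
# Carnot COP = T_cold/ΔT, so electrical power ~ ΔT^2.
# The UA value (W per degree C of envelope leakage) is a made-up placeholder.

def ac_power(t_out_c, t_in_c=22.0, ua=400.0):
    """Electrical watts to hold t_in_c with an ideal (Carnot) AC."""
    dt = t_out_c - t_in_c
    if dt <= 0:
        return 0.0                      # no cooling needed
    heat_influx = ua * dt               # W leaking in, ~linear in dT
    cop = (t_in_c + 273.15) / dt        # Carnot COP = T_cold / dT (kelvin)
    return heat_influx / cop            # electrical W, ~ dT^2

for t in (32, 42, 52):                  # 10, 20, 30 degrees C hotter outside
    print(t, round(ac_power(t), 1))
```

Doubling the temperature difference (32°C → 42°C outside, with 22°C inside) quadruples the required power in this model, which is why each extra degree in Phoenix costs more than the last.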

> It still seemed plenty hot. Over the past century, the hot season in Phoenix has extended almost three weeks in each direction, while overnight temperatures, which used to provide a respite from the daytime heat, have increased by as much as 12°. Researchers expect these trends to continue as Phoenix grows, because adding more heat-retaining pavement and structures—plus more heat-producing people and machines—contributes to the "urban heat island effect," which makes the city hotter than the surrounding desert.
>
> "I don't think we can rule out 130° in Phoenix in the future," says David Hondula, an Arizona State University professor who studies the health effects of extreme heat. A 2016 report by Climate Central predicts that by 2050, Phoenix will be among 25 U.S. cities in which heat poses a danger to human health for more than half the year.

Comment by Alex K. Chen (alex-k-chen) on Why Boston? · 2020-10-14T12:37:46.448Z · LW · GW

This really depends on many factors such as social connectedness (where your connectedness may be higher where most of your friends are, or where it's easiest to make new friends). The highest longevities in the US are in the "ski resort" counties in Colorado [high altitude may play a role in this], but they're too expensive for most.

Boston is significantly more disaster-proof than the Bay Area - one of the most disaster-proof of the major hubs outside of Europe.

Comment by Alex K. Chen (alex-k-chen) on Why Boston? · 2020-10-12T16:37:05.184Z · LW · GW

I'm rarely a typical example of anything, but I never noticed anything in the dimension of prudishness or rudeness (I grew up in the Seattle area, now live in Boston). Also there definitely are some communities of "weird people" in "Camberville" (as they call it) too, though they perhaps don't define the predominant culture [I think it's easier for people to feel like they're out of place if they're too weird]

Comment by Alex K. Chen (alex-k-chen) on Why Boston? · 2020-10-12T16:33:01.287Z · LW · GW

It used to be much more active and frictionless (the Citadel), but the Citadel got evicted sometime late 2016/2017.

Comment by Alex K. Chen (alex-k-chen) on Why Boston? · 2020-10-11T04:44:06.201Z · LW · GW

Why isn't Boston more popular (even among the VC crowd)? It just self-evidently seems to be the second best place to be. I mean, many Harvard/MIT students I know seem to all want to go to the Bay Area after Boston simply b/c much more happens in the Bay Area (and their friend groups and grouphouses are all there) - and I guess NYC takes second place for "amount of things that happen", and it tends to have more communities that are radically open/weird.

Also there used to be the Citadel grouphouse there, but people tend to forget it now.

For lower housing costs, you can also possibly try the outskirts around Boston. I feel Providence is also underappreciated amongst many.

BTW I also appreciate how clean Boston's air is for a major city (there certainly seems to be less car volume here than in NYC or the Bay Area) - it shows that car traffic contributes less to pollution here than in other cities.

Comment by Alex K. Chen (alex-k-chen) on How will internet forums like LW be able to defend against GPT-style spam? · 2020-07-30T12:50:46.963Z · LW · GW

How about integrating with the Underlay? FYI I personally connected some of the team members in the project with each other.

Comment by Alex K. Chen (alex-k-chen) on The case for C19 being widespread · 2020-07-08T16:12:33.883Z · LW · GW

Will all the black swan ETFs (like the Taleb-associated Universa) make it more efficient in that direction?

Comment by Alex K. Chen (alex-k-chen) on What information, apart from the connectome, is necessary to simulate a brain? · 2020-07-06T18:03:48.923Z · LW · GW

Epigenetic information might be important too (especially since epigenetic information may determine a neuron's sensitivity to "updates"). In metamorphosis, a butterfly seems to retain some of its caterpillar memories even though all of its neuronal arbors are completely reorganized or lost, and that information may be stored in the epigenome.

Also, there's a hypothesis that memory may even be stored in RNA (see Gaurav)

Comment by Alex K. Chen (alex-k-chen) on Project Proposal: Gears of Aging · 2020-05-16T08:14:09.434Z · LW · GW

Comment by Alex K. Chen (alex-k-chen) on Project Proposal: Gears of Aging · 2020-05-16T07:20:10.993Z · LW · GW

Comment by Alex K. Chen (alex-k-chen) on Project Proposal: Gears of Aging · 2020-05-16T02:17:07.241Z · LW · GW

If you had the perfect bioinformatics database + genomically-obsessed autist, it would be easier to deal with larger quantities of genes. Like, the human genome has 20k genes, and let's say 10% are super-relevant for aging or brain preservation - that would be 2k genes, and that would be super-easy for an autistically-obsessed person to manage

Comment by Alex K. Chen (alex-k-chen) on Project Proposal: Gears of Aging · 2020-05-14T22:38:05.594Z · LW · GW

IDEALLY, such a model would allow people to create putative links and to hand-annotate (with a dropdown menu) all the papers in support of and against the model. exists but it isn't great for mechanism, as it's just a long list of genes that seems to have been scraped insight-free. A lot of the aging-related genes people have studied in-depth that have shown the strongest associations with healthy aging (eg FOXO3A/IGF1) sure *help* - and then there are IGF mutants [oftentimes they don't directly increase repair] - but I don't feel that they're as *fundamental* as, say, variations in proteasome function or catalase or spliceosome/cell cycle checkpoint/DNA repair genes.

Comment by Alex K. Chen (alex-k-chen) on Project Proposal: Gears of Aging · 2020-05-14T22:24:05.874Z · LW · GW

Each protein also has to be analyzed in and of itself, b/c each protein can have numerous alternative splicing variants, and proteins with more splicing variants should presumably be more susceptible to mis-translation than proteins with fewer splicing variants [spliceosome function also decreases with age - see William Mair on this - so we need a whole discussion on spliceosomes, especially as to how they're important for the important protein complexes of the ETC].

Proteins also have different variants between species (eg bowhead whales and kakapos have hypofunctioning p53). They have different half-lives in the cell - some have rapid turnover, and some (especially neuronal proteins) are extremely long-lived. The extremely long-lived proteins (like nuclear pore complexes or the others at ) do not go through "degradation/recycling" as frequently as short-lived proteins, so it may be that their rate of damage is not reduced AS MUCH by increases in autophagy [THIS HAS TO BE MAPPED - there is a lot of waste that continues to accumulate in the cell when it can't be dumped out into the bloodstream/kidneys, and glomerular filtration rate declines with age].

We have to map out which proteins are **CONTROLLERS** of the aging rate, such as protein-repair enzymes [ ], DNA damage sensing/repair enzymes, Nrf2/antioxidant response elements, and stabilizing proteins like histones [loss of histone subunits often accelerates aging of the genome by exposing more DNA as unstable euchromatin, where it is in more positions to be damaged]. [Note I don't include the mTOR complex here, b/c mTOR reduction is easy but also b/c mTOR doesn't inherently *damage* the cell.]

Comment by Alex K. Chen (alex-k-chen) on Project Proposal: Gears of Aging · 2020-05-14T22:17:48.387Z · LW · GW

Ok so here's a model I'm thinking of. Let's focus on the proteasome alone for instance, which basically recycles proteins. It pulls a protein through the 19S subunit into the 20S barrel, which has the active sites that cleave the protein's amino acid chain, bond by bond.

We know that reduction in proteasome function is one of the factors associated with aging, esp b/c damage to proteasome function accelerates the damage of *all other proteins* [INCLUDING transcription factors/control factors for ALL the other genes of the organism], so it acts as an important CONTROL POINT in our complex system (we also know that proteasome function declines with age). We also know that increases in certain beta3 subunits of the 20S proteasome help increase lifespan/healthspan (ASK THE QUESTION THEN: why beta3 uniquely, more so than the other elements of the proteasome?). Proteins only work in complexes, and this often requires a precise stoichiometry of proteins in order to fit - otherwise you may have too much of one protein in a complex, which [may or may not] cause issues. Perhaps the existence of some subunits helps recruit *other* subunits that are complementary, while negatively interfering with their own synthesis [there's often an upstream factor telling the cell not to synthesize more of a protein].

I know one prof studies Rpn13 in particular.

The proteasome has a 20S core and two 19S regulatory subunits. The 19S subunit consists of 19 individual proteins. Disruptions in synthesizing *any* of the subunits or any of the proteins could make assembly of the whole complex go wrong.

We need to know:

  • is reduction in proteasome function primarily due to reduced proteasome synthesis [either through reduced transcription, spliceosome errors, reduced translation, improper stoichiometry, or mislocalization] or to damaged proteasomes that continue to stay in the cell and wreak havoc?
  • Can proteasomes recognize and degrade proteins with amino acids that have common sites of damage (many of them known as non-enzymatic modifications)?
  • the PDB parameters of proteasomes (as well as the rough turnover rates of each of their subunits)
  • what are the active sites of proteasomes, and what amino acids do they primarily consist of? (in particular, do they consist of easily damaged amino acids like cysteines or lysines?)
  • What are the precise mechanisms by which the active sites of proteasomes get damaged?
  • How does a cell "clear out" damaged proteasomes? What happens to damaged proteasomes during mitosis?
  • If a cell accumulates damaged proteasomes, how much do these damaged proteasomes reduce the synthesis and function of the properly functioning proteasomes in the cell? Will the ubiquitin system improperly target some proteins to proteasomes that have ceased to function?

Comment by Alex K. Chen (alex-k-chen) on Project Proposal: Gears of Aging · 2020-05-14T04:00:55.327Z · LW · GW

> Like an airplane blueprint, the goal is to show how all the components connect - a system-level point of view. Much research has already been published on individual components and their local connections - anything from the elastin -> wrinkles connection to the thymic involution -> T-cell ratio connection to the stress -> sirtuins -> heterochromatin -> genomic instability pathway. A blueprint should summarize the key parameters of each local component and its connections to other components, in a manner suitable for tracing whole chains of cause-and-effect from one end to the other.

Aren't there better methods of characterizing these connecting components than a textbook? Textbooks are super-linear and ill-suited for complex demands where you want to do things like "cite/search for all examples of H3K27 trimethylation affecting aging in each and every studied model organism". They're not great for characterizing all the numerous rare gene variants and SNPs that may help (such as, say, the SNP substitution in bowhead whale and kakapo p53, and how this single SNP *mechanistically* affects interactions between p53 and all of p53's downstream effects - such as whether it increases or decreases them). There are many databases of aging already (esp those compiled by JP de Magalhaes, and the ones recently outputted by the Glen-Corey lab and Nathan Batisty's senescent cell database), but the giant databases return giant lists of genes/associations and effect sizes while containing no insight in them.

The aging field moves fast and there are already zillions of previous textbooks that people don't read anymore simply b/c they expect a lot of redundancy on top of what they already know.

In particular I'd like a database that lists prominent anomalies/observations (eg naked mole rat enhanced proteasome function or naked mole rat enhanced translational fidelity or "naked mole rat extreme cancer resistance which is removed if you ablate [certain gene]") which then could be made searchable in a format that allows people to search for their intuitions
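A minimal sketch of what such a searchable anomaly database could look like (the entries, field names, and tag vocabulary here are hypothetical placeholders for illustration, not a real schema or real curated data):

```python
# Illustrative sketch: prominent anomalies/observations tagged by organism
# and mechanism, so you can query intuitions like "everything involving
# proteasomes" or "everything known about naked mole rats".
anomalies = [
    {"organism": "naked mole rat", "tags": {"proteasome", "proteostasis"},
     "observation": "enhanced proteasome function"},
    {"organism": "naked mole rat", "tags": {"translation", "fidelity"},
     "observation": "enhanced translational fidelity"},
    {"organism": "bowhead whale", "tags": {"p53", "cancer"},
     "observation": "p53 variant possibly linked to cancer resistance"},
]

def search(tag=None, organism=None):
    """Return observations matching a mechanism tag and/or an organism."""
    return [a["observation"] for a in anomalies
            if (tag is None or tag in a["tags"])
            and (organism is None or a["organism"] == organism)]

print(search(tag="proteasome"))
print(search(organism="naked mole rat"))
```

The point is the query pattern, not the storage: unlike a textbook or a flat scraped gene list, each anomaly carries structured tags so cross-species patterns can be pulled out with a single search.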

Anyone who cares about this should friend/follow

Comment by Alex K. Chen (alex-k-chen) on Competitive safety via gradated curricula · 2020-05-14T03:53:46.557Z · LW · GW


> The key hypothesis is that it’s not uniformly harder to train AGIs in the safer regimes - rather, it’s primarily harder to get started in those regimes. Once an AI reaches a given level of intelligence, then transitioning to a safer regime might not slow down the rate at which it gains intelligence very much - but might still decrease the optimisation pressure in favour of that AI being highly agentic and pursuing large-scale goals.

Can't the choice of programming language (or coding platform) affect the optimization pressures? [If everyone ends up learning poorly-designed languages, it can cause a lot of weird behaviors long-run, so a safer regime would include, like, a decent programming language.] Likewise, it's harder to get started on blockchains that aren't as bloated as Bitcoin or Ethereum.