Posts

How to make food/water testing cheaper/more scalable? [eg for purity/toxin testing] 2024-03-23T05:28:17.273Z
How do you improve the quality of your drinking water? 2024-03-13T00:37:40.389Z
Will posting any thread on LW guarantee that an LLM will index all my content, and will questions people ask the LLM about my name surface all my LW content? 2023-08-11T01:40:10.933Z
How do I find all the items on LW that I've *favorited* or upvoted? 2023-08-07T23:51:05.711Z
Alex K. Chen's Shortform 2023-08-07T17:06:18.876Z
What can people not smart/technical/"competent" enough for AI research/AI risk work do to reduce AI-risk/maximize AI safety? (which is most people?) 2022-04-11T14:05:33.979Z

Comments

Comment by Alex K. Chen (parrot) (alex-k-chen) on Alex K. Chen's Shortform · 2024-11-10T17:11:09.060Z · LW · GW

Could it be a good idea to add a file-uploading feature to LessWrong (eg for PDFs or certain images/media)? It could help against link rot, for example (and make posts from long ago last longer - I say this as someone who edits old posts to make them timeless).

Comment by Alex K. Chen (parrot) (alex-k-chen) on Why Recursion Pharmaceuticals abandoned cell painting for brightfield imaging · 2024-11-05T15:57:16.414Z · LW · GW

Can't you theoretically use both CellPainting assays and light-sheet microscopy?

I mean, I did look at Cell Painting assays a short while ago, and I was still struck by how little control one has over the process, and by how it isn't great for many kinds of mechanistic interpretability. I know there's a Brazilian team looking at using Cell Painting for sphere-based silver-particle nanoplastics, but there are still many concrete variables, like intrinsic oxidative stress, that you can't necessarily get from Cell Painting alone.

Cell Painting can be used for toxicological predictions of organophosphate toxicity (predicting that organophosphates are more toxic than many other classes of compounds), but the toxicological assays used didn't capture much nuance, especially the kind that's relevant to the physiological concentrations people are normally exposed to. I remember ketoconazole scored very highly on toxicity, but what does that say about physiological doses that are much smaller than the ones used for Cell Painting?

Also, the cell lines were all cancer cell lines (osteosarcoma), which gives little predictive power for neurotoxicity or a compound's ability to disrupt neuronal signalling.

Still, the Cell Painting support ecosystem is extremely impressive, even though it doesn't produce the Janelia-standard petabyte-scale datasets used for light-sheet imaging [cf https://www.cytodata.org/symposia/2024/ ].

https://markovbio.github.io/biomedical-progress/

FWIW, some of the most impressive near-term work might be whatever the https://www.abugootlab.org/ lab is going to do soon (large-scale perturb-seq combined with optical pooling to do readouts of genetic perturbations...)

Comment by Alex K. Chen (parrot) (alex-k-chen) on What TMS is like · 2024-10-31T19:29:24.146Z · LW · GW

How do they figure out what waveforms to use, and at what frequencies, on your brain? The ideal waveforms/frequencies depend a lot on your brainwaves and brain configuration.

I've heard fMRI-guided TMS is the best kind, but many don't use it [and maybe it isn't necessary?]

Is anyone familiar with MeRT? It's what Wave Neuroscience uses, and it's supposedly effective for more than just depression (it's also offered for ASD and ADHD, where the effectiveness is far less certain, but where some people may have unusually high benefits). But response rates seem highly inconsistent (a clinic will say that "90% of people are responsive", but there is substantial reporting bias, and every clinic seems to say this [with no way to verify], so I don't believe these figures), and MeRT is still highly experimental, so it's not covered by insurance. Some people with ASDs are desperate enough that they try it. TMS is probably useful for WAY more than severe treatment-resistant depression, but that's still the only indication insurance companies are willing to cover.

I got my brainwaves scanned for MeRT and found that I have too much slow-wave activity in the ACC that could be sped up (though I'm still unsure about the effectiveness of MeRT - they don't seem to give you your data so you can understand your brain better, the way the NeuroField tACS people [or those at the ISNR conference] do)...

BTW there's a TMS Facebook group, and there's also the SAINT protocol, where you only take a week out of your life for the TMS treatment (with more treatments per day). I'm still unsure about the SAINT protocol b/c it was developed mostly for severe depression and I'm not sure that's what I have. There's also the NYC Neuromodulation conference, where you can learn A LOT from TMS practitioners and TMS research (the Randy Buckner lab at Harvard has some of the most interesting research).

Comment by Alex K. Chen (parrot) (alex-k-chen) on Alex K. Chen's Shortform · 2024-10-24T00:47:14.650Z · LW · GW

Every single public mainstream AI model has RLHF'd out one of the most fundamental facts about human nature: that there exist vast differences between humans in basic ability/competence, and that they matter.

Comment by Alex K. Chen (parrot) (alex-k-chen) on Alex K. Chen's Shortform · 2024-10-24T00:46:20.225Z · LW · GW

Lucy Lai's new PhD thesis (and YouTube explainer) is really worth reading/watching: https://x.com/drlucylai/status/1848528524790923669 - it is more broadly relevant to people than most other PhD theses [esp. its core subject of making rational decisions under the constraints of working memory].

Comment by Alex K. Chen (parrot) (alex-k-chen) on Alex K. Chen's Shortform · 2024-10-24T00:46:11.298Z · LW · GW
Comment by Alex K. Chen (parrot) (alex-k-chen) on Overview of strong human intelligence amplification methods · 2024-10-15T02:11:35.205Z · LW · GW

How about TMS/tFUS/tACS => "meditation"/reducing neural noise?

Drastic improvements in mental health/reducing neural noise & rumination are way more feasible than increasing human intelligence (and still have huge potential for very high impact when applied on a population-wide scale [1]), and they are possible to do at mass scale. There are experimental TMS protocols like SAINT/accelerated TMS which aim to capture the benefits of TMS on a 1-2 week timeline [there's also Wave Neuroscience, which uses MeRT in conjunction with qEEG, but I'm not sure it's "ready enough" yet - it seems to involve some guesswork and there are a few negative reviews on Reddit]. There are a few accelerated TMS centers, and they're not FDA-approved for much more than depression, but if we have fast AGI timelines, the money matters less.

[Speeding up feedback loops is also important for mass adoption - which is what both accelerated TMS/SAINT and the "intense tACS programs" run by people like the NeuroField team [Nicholas Dogris/Tiffany Thompson] and James Croall try to do.] Ideally, the TMS/SAINT or tACS should be done in conjunction with regular monitoring of brainwaves via qEEG or fMRI throughout.

Effect sizes of tFUS are said to be small relative to certain medications/drugs [this is true for neurofeedback/TMS/tACS in general], but part of this may be that people tend to be conservative with tFUS. Leo Zaroff has created an approachable tFUS community in the Bay Area. It's still worth trying b/c the opportunity cost of trying these modalities (with the right people) is very low (and very few people in our communities have heard of them).

There are some, like Jeff Tarrant and the NeuroField people (I got to meet many of them at ISNR2024, and many are now coming to the Suisun Summit), who explore these montages.

Making EEG (or EEG+fNIRS) much easier to get can be high-impact relative to the amount of effort invested [with minimal opportunity cost]. I was pretty impressed with the convenience of Zeto's portable EEG headset at #SfN24, as well as the convenience of the iMediSync headset at #ISNR2024 [both EEG headsets cost $20,000, which is high but not insurmountable - eg, if given some sort of guarantee on quality and usability I might be willing to procure one], but I still haven't evaluated the signal quality of each against other high-quality EEG montages like the Deymed. Easier access also makes it easier to create a true dataset of EEGs (also look into what Jonathan Xu is doing, though his paper is more about visual processing than mental health). We also don't even have proper high-quality EEG+HEG+fMRI+fNIRS datasets of "high intelligence" people relative to others [especially when measuring these potentials in response to cognitive load - I know Thomas Feiner has helped create a freecap support group and has thought a lot about ERPs and their response to cognitive load; he helped take my EEG during a BrainMaster session at #ISNR2024].

I've found that smart people in general are extremely underexposed to psychometrics/psychonomics (there are no easy ways to enter those fields even if you're a psychology or neuroscience major), and there is a lot of potential for synergy in this area.

[1] esp given the prevalence of anxiety and other mental health issues of people within our communities

Comment by Alex K. Chen (parrot) (alex-k-chen) on GeneSmith's Shortform · 2024-09-08T20:25:49.703Z · LW · GW

It's one of the most important issues ever, and it has a chance of solving the mass instability/unhappiness caused by wide inequality in IQ in the population, by giving the less-endowed a shot at increasing their intelligence.

Comment by Alex K. Chen (parrot) (alex-k-chen) on All The Latest Human tFUS Studies · 2024-08-28T20:48:47.265Z · LW · GW

tFUS could be one of the best techniques for improving rationality, esp. b/c [at the very minimum] it is so new/variance-increasing: if the default outcome is not one that we want (as was the case with Biden vs Trump, where Biden dropping out was the desirable variance-increasing move) [and as is the case now among LWers who believe in AI doom], we should be increasing variance rather than decreasing it. tFUS may also be the avenue for better aligning people's thoughts with their actions, especially when a hyperactive DMN or rumination gets in the way of their ability to align with themselves (tFUS being a way to shut down this dumb self-talk).

Even Michael Vassar has said "eliezer becoming CEO of openwater would meaningfully increase humanity's survival 100x" and "MIRI should be the one buying openwater early devices trying to use them to optimize for rationality"

[btw if anyone knows of tFUS I could try out, I'm totally willing to volunteer]

Comment by Alex K. Chen (parrot) (alex-k-chen) on An Exploratory Toy AI Takeoff Model · 2024-06-10T03:53:35.952Z · LW · GW

Why is the thing IQ measures mostly lognormal?

Comment by alex-k-chen on [deleted post] 2024-06-03T00:54:57.582Z

Worth following for his take (and YouTube videos he is creating): https://x.com/jacobrintamaki

[he's creating something around this]

Comment by Alex K. Chen (parrot) (alex-k-chen) on Alex K. Chen's Shortform · 2024-05-31T02:42:36.359Z · LW · GW

What are your scores on the US Economic Experts Comparison (Interactive Matrix)?

https://www.kentclarkcenter.org/economist-comparison-interactive-matrix/

Comment by Alex K. Chen (parrot) (alex-k-chen) on Notes on Gracefulness · 2024-05-29T07:14:45.720Z · LW · GW

How about people who just don't "give a fuck", practice Nishkama Karma, and maintain emotional composure even in times when others doubt or disbelieve them (knowing that the end is what matters)? They are graceful on the inside, and maintain internal composure in the face of chaos, but others may view their movements as ungraceful, particularly b/c they have the sense (and enough of a reality distortion field) to "make the world adapt to them" rather than "adapt to the world" (if they succeed, they make the world adapt to them such that the world around them becomes more harmonious long-term, after the initial reduction in harmony [due to the clumsiness of the world learning to adapt to them]). It takes time to learn grace, and when choosing the order in which to learn vital skills, grace is often learned later than the skills one has a comparative advantage in.

[As an example, I know I have historically been ungraceful when reacting to my own dumb mistakes. I did it to signal awareness/remorse/desire to correct, but in an overly emotional way that may cause some people to doubt my emotional stability near-term - is it really necessary? Sometimes it's better just to have no contact for long enough that when you re-emerge, you come off as so different they're surprised.]

[in the long run, learning to read a room is one of the best ways of developing grace, though it matters more if one is ultra-famous than when one is mostly unknown and can afford to experiment with consequence-free failure]

(Asking questions that appear dumb to some people can also be "ungraceful" to the audience, even if important. The strategic among that crowd will just have good enough models of everyone to know who the safest people are to ask the "dumb questions" to.)

Sometimes, the fastest way to learn is to create faster feedback loops around yourself ("move fast and break things"). The phrase "move fast and break things" appears disharmonious/ungraceful, but (if done in a limited way that "takes profits" before turning into full-blown mania), can be one of the fastest ways of achieving a more harmonious broader state, even when creating some local chaos/disharmony.

People who appear to have high levels of grace can also be extremely dangerous, because they can get people to trust them to the very end, especially if their project is an inherently destabilizing one. Ideally, you want a 1-1 correspondence between authenticity/robustness/lack of brittleness and grace, but people's ability to perceive gracefulness at all levels is not good enough for perceived gracefulness to be the most reliable signal.

Having grace often means doing "efficient calculations" without being explicit about these calculations. It's like keeping your words to yourself and not revealing your cards unless necessary (explicit calculations are clumsy/clunky). Sometimes, a proper understanding of Strauss is necessary to develop grace in some environments (what you say is not what you really mean, except to the readers who have enough context to jump all the layers of abstraction - it may also be needed to communicate unobvious messages in environments where discretion is important)

Patience is also grace (as is not getting into situations that cause you to "lose control"/be impatient/excitable/manic OR do things out of order). At the same time, there are ways of turning a reputation for ditching meetings into gracefulness (after all, most meetings do last longer than needed, as Yishan Wong once mentioned) [some projects also require a great deal of urgency, potentially including eras of accelerated AGI timelines].

Having the attitude of "whatever happens, happens" is graceful (being in command of your emotions no matter what life throws at you - eg John Young was very graceful when he navigated the Moon landing with a uniquely minimal increase in heart rate). Being able to keep a poker face is graceful. Not acting in distress/pain in order to gain people's sympathy is graceful. As someone who knows many in the longevity community, I know that having the appearance of "fearing death" or "wanting to live forever" is super-ungraceful (and creates PR image problems through its ungracefulness). There are some people in longevity who are closet immortalists and who can appear graceful because they don't appear to care that much about whether or not they live forever. In a similar way, doomerism about AI is extremely ungraceful (though those who are closeted doomers/immortalists can sometimes be secretly graceful to those who are less closeted about these things).

Things that are not the most graceful: over-correcting/over-compensating, irritability, appearing emotional enough to lose control, constantly seeking feedback (it implies lack of confidence), visibly chasing likes, obsessing over intermediate computations/near-term reinforcement loops, "people pleasing" (esp. when one is obvious about it), perseverating, laughing at one's own jokes, not being steadfast, not knowing when to stop (autistics are prone to this..), going for the food too early (semaglutide can help with grace..). Autistic people often lack grace, though some are able to develop it really well over long timescales.

Grace is having confidence over the process without becoming too attentive to short-term reinforcement/feedback loops (this includes patience as part of the process).

As with everything else, intelligence makes grace easier (and makes it possible to learn some things gracefully), but there is enough variation in grace that one can more than make up for lower intelligence with context + grace + strategic awareness. There is also loss of grace at older ages, as working-memory decline can increase impatience (Richard Posner said writing ability is the last to go, but that's because there's no real-time observation of the process, and there's grace in observing the dynamics).

Comment by Alex K. Chen (parrot) (alex-k-chen) on on the dollar-yen exchange rate · 2024-05-26T07:35:23.080Z · LW · GW

Wow, and Mexico's fertility rate just plunged to 1.82

Comment by Alex K. Chen (parrot) (alex-k-chen) on simeon_c's Shortform · 2024-05-25T09:41:46.149Z · LW · GW

Isn't that a non-disparagement clause, not an NDA?

Comment by Alex K. Chen (parrot) (alex-k-chen) on An Exploratory Toy AI Takeoff Model · 2024-05-25T03:57:16.655Z · LW · GW

This is a very promising start on a thesis (one that could go further into the theory of computation/Sid Mani's content/https://lifeiscomputation.com/), but the "intelligence growth curves" are not very intuitive. I wager that dimensionality is more important than the number of elements in determining intelligence growth curves, and especially the number of discontinuous jumps.

Why does F^4_65's intelligence peak out at such a low value at time 2040? Why does 's intelligence peak out at a lower value than equal-dimensional fields with fewer elements in them?

At some point the model may have to incorporate quality/diversity/taste, not just size.

Comment by Alex K. Chen (parrot) (alex-k-chen) on Which skincare products are evidence-based? · 2024-05-24T19:13:09.616Z · LW · GW

Has anyone tried Visia skin analysis to get feedback loops on skin health? (It reveals WAY more than ordinary pictures.) The problem with camera images is that visible light doesn't capture fine lines or wrinkles. My skin SEEMS to look as perfect as that of a 12-year-old on the outside, but there is a small amount of wrinkling under my eyes that a Visia scan reveals (which is why this thread prompted me to finally get Dermatica tretinoin).

Collagen peptides can also help increase collagen synthesis and relieve fine lines (it's my biggest pet peeve b/c I can't stand ingesting animal products, and this is the only thing Bryan Johnson will ditch veganism for). And it's really irritating that there isn't more vegan collagen available.

Comment by Alex K. Chen (parrot) (alex-k-chen) on Advice for Activists from the History of Environmentalism · 2024-05-23T05:01:02.736Z · LW · GW

Newt Gingrich started out as an environmentalist (he was a member of the Sierra Club), but later turned away from it.

Even after he left Congress, he still had some sympathy for environmental issues - he wrote the book "A Contract with the Earth" (with an E.O. Wilson foreword).

Newt can be surprisingly high-openness - a person oriented towards novelty can be pro-drilling (accel), pro-geoengineering, and pro-environment (which can be decel), and maybe not reconcile these in the most consistent way. He has been critical of both parties on climate change/environment issues (as has Mitt Romney, who scores low on the LCV but who really does care about addressing climate change, just not in the "punitive" way that Democrats want to see it addressed). Free-market environmentalists who do care have different approaches that might on the surface be seen as riskier (just as using more energy gives you more resources to address the problem faster, even while pumping more entropy into the system).

But his high openness (for a Republican) seems to have also made him more stochastic, or inconsistent.

The book generated a storm of media attention in late 2007 and early 2008 as the U.S. presidential campaign began to heat up. Gingrich in particular made numerous media appearances arguing that the Republican Party was losing popular support because their response to environmental policy was simply, as he put it, "NO!" Maple toured the country as Gingrich's stand-in, most notably before the Republicans for Environmental Protection (REP, www.repamerica.org) during their annual meeting (at which John McCain was endorsed as the most "green" of the Republican presidential candidates). In 2008 Gingrich published another book that advocated oil drilling, Drill Here, Drill Now, Pay Less, and many pundits called his environmental commitment into question. However, this book's fifth chapter provided an argument for environmental protection. Like many aspects of Gingrich's career, his interest in environmental issues has generated controversy.

https://archive.ph/LsZeh

Ronald Reagan was surprisingly pro-environment as governor of California (Gavin Newsom even spoke about it when he visited China), but as president was seen as anti-environmental by environmental groups (esp. due to his choice of Secretary of the Interior, https://www.cnn.com/2024/01/17/politics/supreme-court-epa-neil-gorsuch-chevron/index.html , and his generally pro-industry choices). George H.W. Bush was surprisingly pro-environment in his first 2 years (ozone, acid rain...), but was advised to stop being pro-environment b/c it would not sit well with his base...

worth reading: https://kansaspress.ku.edu/blog/2021/10/13/when-democrats-and-republicans-united-to-repair-the-earth/

===

The LCV seems to take the view that all drilling/resource extraction (or industry) is bad. But it still gets done somewhere, and if not done in America, it's just outsourced elsewhere (eg https://time.com/6294818/lithium-mining-us-maine/), where it is done with lower standards that cause more local destruction to the environment/pollution (albeit not the kind that Americans feel).

See https://www.energypolicy.columbia.edu/qa-the-debate-over-the-45x-tax-credit-and-critical-minerals-mining/

====

Now that CA appears likely to pass SB-1047, it seems more probable that Republican states will go against it (simply because they, esp. DeSantis [who valorizes not being CA], want to "own the libs" - as @BasedBeffJezos notes).

====

https://www.politico.com/newsletters/power-switch/2024/06/26/what-curtis-victory-in-utah-means-for-climate-00165123 is a possible source of hope at a time when a new Trump presidency may potentially gut much of the EPA and many other environmental regulations... Republican voices for the environment have especially high leverage when Trump frames much of his platform as the negation of the "other side" (just as he wants to revoke Biden's EV mandates and Biden's executive order on AI).

https://www.latimes.com/environment/newsletter/2024-01-18/column-meet-john-curtis-the-utah-republican-who-cares-about-climate-change-boiling-point

===

I once saw a graph showing which counties in the US believed that climate change came from humans... It strongly corresponded with partisan affiliation, though somewhat less in WA and CA - the two states where more than 50% in many red counties believed that it did... Source here: 

===

IFP (which has some writers who seem more right-wing than left-wing) has a lot to say on the cost-benefit analysis of environmental regulation. NEPA has done a lot to slow down infrastructure development of all kinds. But IFP also recognizes the positive externalities of reduced pollution levels.

Comment by Alex K. Chen (parrot) (alex-k-chen) on What comes after Roam's renaissance? · 2024-05-18T06:37:50.617Z · LW · GW

There's Tana: https://twitter.com/AndyAyrey/status/1791679301362016519?t=Wo8e4NcWJqY4pcHRjMYgAQ&s=19

Comment by Alex K. Chen (parrot) (alex-k-chen) on True Stories of Algorithmic Improvement · 2024-05-16T01:10:13.907Z · LW · GW

Now AlphaTensor - https://deepmind.google/discover/blog/discovering-novel-algorithms-with-alphatensor/

Comment by Alex K. Chen (parrot) (alex-k-chen) on Environmentalism in the United States Is Unusually Partisan · 2024-05-14T22:59:47.059Z · LW · GW

Bill Frist, the former Republican Senate Majority Leader under Bush (even though he scored low with the partisan, zero-compromise LCV), is now board chair of The Nature Conservancy (it's even his LinkedIn profile header) and frequently speaks out on environment and climate change issues. His kind of Republicanism is now way out of vogue.

https://www.tennessean.com/story/news/2022/08/16/tenneessee-former-senator-bill-frist-elected-chair-nonprofit-nature-conservancy/10328455002/

https://www.linkedin.com/posts/billfristmd_nature-conservation-activity-7114961629628227585-C5BY?utm_source=share&utm_medium=member_android

Republicans from Utah seem to disproportionately form the Republican climate change caucus - they tend to be somewhat more open-minded than Republicans elsewhere, and some of the current representatives have been outspoken on the need to combine conservation with conservatism (though this also means making some compromises with federal land ownership which has become an unusually partisan "don't compromise" issue). 

Comment by Alex K. Chen (parrot) (alex-k-chen) on What comes after Roam's renaissance? · 2024-05-14T22:52:57.042Z · LW · GW

No one mentioned RemNote? It's the one Roam replacement that seems to beat Roam on many of the things it was good at.

I way prefer remote storage, having lost a hard drive before, so I don't like Obsidian much. 

Comment by Alex K. Chen (parrot) (alex-k-chen) on The Cognitive-Theoretic Model of the Universe: A Partial Summary and Review · 2024-04-21T15:31:34.107Z · LW · GW

Related "As stated, one of the main things I make-believe is true is the overlighting intelligence with which I align myself. I speculate that I am in a co-creative relationship with an intelligence and will infinitely superior to my own. I observe that I exist within energetic patterns that flow like currents. I observe that when I act in alignment with these subtle energetic currents, all goes well, desires manifest, direction is clear, ease and smoothness are natural. I observe that I have developed a high degree of sensitivity to this energy, and that I’m able to make micro-corrections before any significant non-smoothness occurs.""

https://cosmos.art/cosmic-bulletin/2020/marco-mattei-cosmopsychism-and-the-philosophy-of-hope

roon once said "we are all a giant god dream"

Comment by Alex K. Chen (parrot) (alex-k-chen) on Thoughts on seed oil · 2024-04-21T02:18:37.343Z · LW · GW

It depends on how processed the PUFA is - many PUFAs in processed foods are heavily heated. Processing PUFAs at high heat is what produces reactive aldehydes (acrolein, 4-HNE), advanced lipid peroxidation end-products (ALEs), etc.

But PUFAs in soybeans (or sunflower seeds w/o extra processing) are far less likely to be bad, and this is what the epidemiological evidence hints at.

For whatever reason, PUFAs are VERY strongly protective against heart disease (b/c they lower LDL) and insulin resistance. These are the leading causes of death in Western populations, but this does not make PUFAs equally protective against all diseases, especially for people who already have a very low risk of death from heart disease/insulin resistance (if you don't account for confounders, some studies show that people with dementia have longer lifespans/"lower rates of aging", but that's b/c people with dementia tend not to die from the other causes of aging first).

Fish oil (omega-3s) is also WAY more easily damaged/peroxidized than even omega-6s. People usually don't fry food with omega-3s the way they do with omega-6s, but if they did, would we see the opposite of the association with omega-3s that we usually see? [Note that omega-3s still fail to increase lifespan as per the ITP.]

What I am concerned about is whether they change cell membrane composition long-term in a way that makes cell membranes more easily peroxidized (animals with more saturated lipid membranes live longer, though there are ways to fix the damage, as Gustavo Barja notes in Longevity and Evolution (Aging Issues, Health and Financial Alternatives)).

Whether omega-6's convert into pro-inflammatory or anti-inflammatory metabolites of arachidonic acid (BOTH are possible) depends highly on one's D6D genotype.

more info I collected: https://www.crsociety.org/topic/18298-are-omega-6s-healthy-or-really-bad-or-does-it-depend-on-how-theyre-processed-and-d6d-genotype/#comment-45956

Comment by Alex K. Chen (parrot) (alex-k-chen) on All About Concave and Convex Agents · 2024-03-31T01:09:45.929Z · LW · GW

https://vitalik.eth.limo/general/2020/11/08/concave.html

Comment by Alex K. Chen (parrot) (alex-k-chen) on The Cognitive-Theoretic Model of the Universe: A Partial Summary and Review · 2024-03-28T22:05:50.441Z · LW · GW

I view a part of this as "maximizing the probability that the world enables 'God's mind' to faithfully model reality [1] and operate at its best across all timescales". At minimum this means intelligence enhancement, human-brain symbiosis, microplastics/pollution reduction, reduction in the rate of brain aging, and reducing default-mode noise (eg tFUS, loosening up all tied knots).

The sooner we can achieve a harmonious global workspace, the better (b/c memory and our ability to hold the most faithful/error-minimizing representation will decay). There is a precipice, a period of danger where our minds are vulnerable to non-globally-coherent/self-deceptive thoughts that carry their own incentives toward self-destruction, but if we can get over this precipice, then the universe becomes more likely to generate futures with our faithful values and thoughts.

Some trade-offs involve difficult calculations with no clear answers (eg learning increases DNA error rates - https://twitter.com/gaurav_ven/status/1773415984931459160?t=8TChCcEfRzH60z0W1bCClQ&s=19 ); others include the "urgency vs verifiability" trade-off and the accel vs decel debate.

But there are still numerous Pareto-efficient improvements, and the sooner we make them (like semaglutide, canagliflozin, microplastic/pollution reduction, pain reduction, factoring out historic debt, QRI stuff), the higher the chances of ultimate alignment of "God's thought". It's interesting that the god of formal verification, davidad, is also concerned about microplastics.

Possibly relevant people

Sam Altman has this to say:

https://archive.ph/G7VVt#selection-1607.0-1887.9

book says ""As stated, one of the main things I make-believe is true is the overlighting intelligence with which I align myself. I speculate that I am in a co-creative relationship with an intelligence and will infinitely superior to my own. I observe that I exist within energetic patterns that flow like currents. I observe that when I act in alignment with these subtle energetic currents, all goes well, desires manifest, direction is clear, ease and smoothness are natural. I observe that I have developed a high degree of sensitivity to this energy, and that I’m able to make micro-corrections before any significant non-smoothness occurs.""

Bobby Azarian has a wonderful related book, "The Romance of Reality": https://www.informationphilosopher.com/solutions/scientists/layzer/

Maybe slightly related: https://twitter.com/shw0rma/status/1771212311753048135?t=qZx3U2PyFxiVCk8NBOjWqg&s=19

https://x.com/VictorTaelin?t=mPe_Orak_SG3X9f91aIWjw&s=09

https://twitter.com/AndyAyrey/status/1773428441498685569?t=sCGMUhlSH2e7M8sEPJu6cg&s=19

https://liberaugmen.com/#shock-level-3

Sid Mani!

Reducing noise: https://twitter.com/karpathy/status/1766509149297189274

[1] on some timescale, the best way to predict the future is to build it

Comment by Alex K. Chen (parrot) (alex-k-chen) on How to make food/water testing cheaper/more scalable? [eg for purity/toxin testing] · 2024-03-27T19:27:58.342Z · LW · GW

Does Germany have a lot of food/microplastic testing companies? Germany seems highly represented in analytical chemistry, as I saw at the SLAS2024 conference (for all those people who complain about "lack of innovation" in Europe: they're underrating analytical chemistry). This conforms to stereotypes about Germans and precision...

(And the culture of Germany is WAY more amenable to eco-consciousness/environmental health than the culture of America.)

It would be nice (even in fringe cases) to have one country/area dedicated to being microplastic/pollution-free, so that people could travel there and then test whether they feel healthier there (people who have multiple chemical sensitivities often have life-defining levels of motivation for this). This would be the very definition of a health-conscious resort/recovery/convalescence spa... (people used to go to the mountains for this).

 This documentary features Germans:

#sense-making

Agilent... https://explore.agilent.com/microplastics-8700ldir?gad_source=1&gclid=Cj0KCQjwsc24BhDPARIsAFXqAB2wvtd-2jIwweHdsbZQbQ-7mcxo8E7WQ94TXBLOIQm7O3lqhDVDAeAaAhV0EALw_wcB&gclsrc=aw.ds

https://www.linkedin.com/in/win-cowger

Comment by Alex K. Chen (parrot) (alex-k-chen) on How I turned doing therapy into object-level AI safety research · 2024-03-14T16:08:56.397Z · LW · GW

Isn't having boundaries also partly to do with full-on consent (proactive and retroactive), with your implied preferences being unknown?

Consent is tricky because almost no one who isn't unschooled grows up consenting to anything. People grow used to consenting to things that make them feel unhappy because they don't know themselves well enough, and they trap themselves into structures that punish them for dropping out or for not opting into anything. In that sense, the system does not respect your boundaries or your own self-autonomy - your actions don't have a proper Markov boundary from the rest of the system, and thus you can't act as an independent agent. Some unschooled people have the most robust Markov boundaries. The very structure of many school and work environments (structures that penalize working from home) inherently creates power structures that cross people's boundaries, especially their energetic ones.

Even the state starts out by eroding some of the boundaries between person and state, without consent..

These people have stronger boundaries on ONE layer of abstraction - https://www.thepsmiths.com/p/review-the-art-of-not-being-governed?utm_source=profile&utm_medium=reader2. This does not necessarily translate to better boundaries on the object level

https://twitter.com/karpathy/status/1766509149297189274?t=ms8cmXL0em2zB4xdJyUblA&s=19 on mimetic boundaries

(Now that AI is creating new wealth very quickly, it becomes more possible for people to default to not consenting to all the mazes that everyone else seemingly "consents to".) Zvi's mazes post makes sense here.

Comment by Alex K. Chen (parrot) (alex-k-chen) on InquilineKea's Shortform · 2024-03-13T08:19:20.141Z · LW · GW

multiscale entropy (see the sketch below)

netlify/vercel/heroku/shinyapps/fleek (find cool associated apps!) + replit

github 

modal/EC2/docker

photonic computing
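
Since "multiscale entropy" heads this list: a minimal sketch of the standard recipe (coarse-grain the signal at increasing scales, then take the sample entropy of each coarse-grained series). The parameters m=2 and r=0.2 are the conventional defaults, the test signal is synthetic, and this is an illustration rather than a vetted implementation:

```python
import numpy as np

def sample_entropy(x: np.ndarray, m: int = 2, r: float = 0.2) -> float:
    """SampEn = -ln(A/B): B counts template matches of length m, A counts
    matches of length m+1, within tolerance r*std(x) (Chebyshev distance)."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count_matches(length: int) -> int:
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance from template i to every later template
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= tol))
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return float("inf") if a == 0 or b == 0 else -np.log(a / b)

def multiscale_entropy(x: np.ndarray, max_scale: int = 5) -> list:
    """Coarse-grain the series at scales 1..max_scale (non-overlapping window
    averages), then compute sample entropy at each scale."""
    out = []
    for tau in range(1, max_scale + 1):
        n = (len(x) // tau) * tau
        coarse = np.asarray(x[:n]).reshape(-1, tau).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    white_noise = rng.normal(size=2000)
    # For white noise, entropy drops with scale; for 1/f-like signals it stays flatter.
    print([round(v, 2) for v in multiscale_entropy(white_noise)])
```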

Comment by Alex K. Chen (parrot) (alex-k-chen) on Alex K. Chen's Shortform · 2024-03-12T06:10:43.922Z · LW · GW

Are exotic computing paradigms (ECPs) pro-alignment?

cf https://twitter.com/niplav_site/status/1760277413907382685

They are orthogonal to the "scale is all you need" people, and the "scale is all you need" thesis is the hardest case for alignment/interpretability.

some examples of alternatives: https://www.lesswrong.com/posts/PyChB935jjtmL5fbo/time-and-energy-costs-to-erase-a-bit, Normal Computing, https://www.lesswrong.com/posts/ngqFnDjCtWqQcSHXZ/safety-of-self-assembled-neuromorphic-hardware, computing-related thiel fellows (eg Thomas Sohmers, Tapa Ghosh)

[this is also how to get into neglectedness again, which EA adopted as a principle but recently forgot]

from Charles Rosenbauer:

This is neat, but this does little to nothing to optimize non-AI compute. Modern CPUs are insanely wasteful with transistors, plenty of room for multiple orders of magnitude of optimization there. This is only a fraction of the future of physics-optimized compute.
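
For context on the linked "time and energy costs to erase a bit" post and on Rosenbauer's "physics-optimized compute" point, here is a minimal back-of-envelope sketch of the Landauer limit (the thermodynamic floor for irreversibly erasing a bit). The bits-per-second figure in the comment is an arbitrary illustrative number, not a claim about any real chip:

```python
import math

# Landauer's principle: erasing one bit of information dissipates at least
# k_B * T * ln(2) of heat, where k_B is Boltzmann's constant and T is the
# absolute temperature of the environment.
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_limit_joules(temperature_kelvin: float) -> float:
    """Minimum energy (in joules) to erase one bit at the given temperature."""
    return K_B * temperature_kelvin * math.log(2)

if __name__ == "__main__":
    t_room = 300.0  # K, roughly room temperature
    e_bit = landauer_limit_joules(t_room)
    print(f"Landauer limit at {t_room:.0f} K: {e_bit:.3e} J per bit erased")
    # ~2.87e-21 J per bit. Erasing 10^15 bits/s at this floor would dissipate
    # only ~3 microwatts, which is one way to see how much theoretical headroom
    # physics-optimized (or reversible) compute has over today's hardware.
```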

Comment by Alex K. Chen (parrot) (alex-k-chen) on Thomas Kwa's Shortform · 2024-03-09T21:49:10.878Z · LW · GW

Have you seen smartairfilters.com?

I've noticed that every air purifier I've used fails to reduce PM2.5 by much on highly polluted days or in highly polluted cities (for instance, the Aurea grouphouse in Berlin has a Dyson air purifier, but when I ran it at max, it still barely reduced the Berlin PM2.5 from its value of 15-20 ug/m^3, even at medium distances from the purifier). I live in Boston, where PM2.5 levels are usually low enough, and I still don't notice differences in PM [I use Sqairs], but I run them all the time anyway because they still capture enough dust over the day.
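
To make that observation concrete, here is a minimal single-zone mass-balance ("box model") sketch of steady-state indoor PM2.5. The CADR, room volume, air-exchange rate, penetration factor, and deposition rate below are illustrative assumptions, not measurements of the Dyson or the Sqair:

```python
# Steady-state indoor PM2.5 for a single well-mixed room (box model):
#   dC/dt = a * P * C_out - (a + CADR / V + k) * C
# At steady state:
#   C = a * P * C_out / (a + CADR / V + k)
# a    : air-exchange rate with outdoors (1/h)
# P    : particle penetration factor (0-1)
# CADR : clean air delivery rate of the purifier (m^3/h)
# V    : room volume (m^3)
# k    : deposition rate onto surfaces (1/h)

def steady_state_pm25(c_out: float, a: float = 1.0, p: float = 0.8,
                      cadr: float = 170.0, volume: float = 60.0,
                      k: float = 0.2) -> float:
    """Steady-state indoor PM2.5 (ug/m^3) given outdoor PM2.5 and room parameters."""
    return a * p * c_out / (a + cadr / volume + k)

if __name__ == "__main__":
    outdoor = 18.0  # ug/m^3, roughly the Berlin range mentioned above
    print("no purifier :", round(steady_state_pm25(outdoor, cadr=0.0), 1), "ug/m^3")
    print("one purifier:", round(steady_state_pm25(outdoor, cadr=170.0), 1), "ug/m^3")
    # A purifier does help, but with a leaky room (high a) or a CADR that is
    # small relative to the space, the relative reduction shrinks - one mundane
    # explanation for "it barely moved the number".
```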

Comment by Alex K. Chen (parrot) (alex-k-chen) on Alex K. Chen's Shortform · 2024-03-09T20:58:48.039Z · LW · GW

Using size-1 Piksters makes you really aware of all the subtle noise that your hidden plaque gives your mind (I noticed they cleared up plaque unreachable by floss + Waterpiks + electric toothbrushes). The first step to alignment/a faithful computation is reducing unnecessary noise (you notice this easily on microdoses of weed/psychedelics).

It's a Pareto-efficient improvement to give all alignment researchers Piksters to eliminate this source of noise (align the aligners first - reducing unnecessary noise is always the first step to alignment [and near-term tFUS is also a means to reduce noise]). I know that one of the alignment offices had a lot of "freebies" that anyone could use - so Piksters should be one of the usable freebies.

Comment by Alex K. Chen (parrot) (alex-k-chen) on Alex K. Chen's Shortform · 2024-03-09T20:04:26.850Z · LW · GW

What are some strategies you use to "reduce the hit" when you're about to take in potentially bad news? This matters b/c it's sometimes important to face up to "bad news" earlier rather than later, and there is social loss when some people can't face it until it's too late, esp. b/c some kinds of "bad news" aren't as uncorrectable as they may initially appear (it's just that you need out-of-distribution strategies to make the proper amends).

[some examples of bad news: irreversible data loss, cancer diagnosis, elevated epigenetic age, loss of important friend, someone overpromised and underdelivered on you and that affects many of the promises you made]

[as AGI timelines come "nearer", "bad news" may come at faster frequencies, but OOD ways to solve them may also come faster]

[Sometimes you can ask yourself "how much wealth would you need to be able to take in any bad news?"] Wealth is not fully interchangeable with youth/intelligence/universal social acceptance, but it DEFINITELY has potential for moving the needle...

Comment by Alex K. Chen (parrot) (alex-k-chen) on Notes on Awe · 2024-03-05T23:23:06.407Z · LW · GW

Do you think there are common threads between shock value, surprisal, and awe - both neurologically and sociologically?

Totalitarian societies use "awe" as a tool of control.

Did awe evolve from "something more primitive" into the complex emotion it is today? What is the simplest animal species that can feel something akin to awe? Jane Goodall wrote that even chimpanzees can feel "awe" from a waterfall, and some cetacean experts have mentioned that whales/elephants can pause at events humans might react with awe to.

https://en.wikipedia.org/wiki/Shock_and_awe

Infinities are a way to inspire awe - https://x.com/JDHamkins?t=yfENp4Ou23RggXDPRpo2yw&s=09

https://open.substack.com/pub/joeldavidhamkins/p/surreal-numbers?utm_source=share&utm_medium=android&r=60bo

(Max Tegmark's multiverse theory is another way.)

 

[The biggest moment of awe I ever felt in my life was when the Thiel Fellowship got announced for the first time. It just... shocked... every sense of my policy network... every sense of "what actions/life paths are worth following".. as it shocked the entire world... and I was shocked/impressed that it was possible that people could follow such life paths].

(I mean, feelings of "a whole new world" that come all at once also inspire awe..)

 

[As someone whose mental space was constantly consumed by having to impress gatekeepers, the Thiel Fellowship's announcement produced awe in the most cathartic way] 

Comment by Alex K. Chen (parrot) (alex-k-chen) on Mazes Sequence Roundup: Final Thoughts and Paths Forward · 2024-02-25T21:36:11.075Z · LW · GW

It's worth mentioning that (many) autistic people are often better at not getting into the higher layers of simulacra that cause people to be trapped by maze-dom.

[SBF is an obvious counterexample]

BTW the opposite of mazedom is Newscience.org

Comment by Alex K. Chen (parrot) (alex-k-chen) on Agent membranes and causal distance · 2024-02-12T18:53:25.475Z · LW · GW

Microplastics (and pollution - both memetic and actual) wreck boundaries by intercalating between boundaries/cell membranes and reducing the integrity of the boundary. To reinforce proper boundaries, it's important to maintain the organism's overall health (eg deuterated PUFAs like RT-011 help reduce oxidative stress on polyunsaturated fatty acids in the cell membrane).

[when the integrity of boundaries is weakened, the organism's channel capacity is reduced by the extra noise].

https://studio.ribbonfarm.com/p/boundary-intelligence

https://twitter.com/Sara_Imari/status/1755816761273032779?t=3k1rX1jIq0NKKAlWs5lphA&s=19

 

For an organism to have healthy boundaries/Markov blankets (within both its cells and organ systems [and also between the DMN and FPN networks of the brain]), its organs must compartmentalize their own compute, shielded from influences that would disrupt it.

Karl Friston often insulates his compute from that of the world, and this makes him act more as an independent thinker: https://blog.dropbox.com/topics/work-culture/the-mind-at-work--karl-friston-on-the-brain-s-surprising-energy. I often wonder whether extremely effective people (eg Andrej Karpathy) have stronger agent membranes than others (though the process of aging dissolves boundaries - "death is what happens when the rest of the environment has full predictive power over the agent").
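
For readers who haven't seen the term formalized: a minimal statement of the Markov blanket idea being gestured at here (the standard Bayesian-network definition, not anything specific to Friston's formulation):

```latex
% Markov blanket of a node X in a Bayesian network over variables V:
% its parents, its children, and its children's other parents.
\[
  \mathrm{MB}(X) \;=\; \big(\mathrm{Pa}(X) \cup \mathrm{Ch}(X) \cup \mathrm{Pa}(\mathrm{Ch}(X))\big) \setminus \{X\}
\]
% Conditioned on its blanket, X is independent of everything else in V:
\[
  X \;\perp\!\!\!\perp\; V \setminus \big(\mathrm{MB}(X) \cup \{X\}\big) \;\;\big|\;\; \mathrm{MB}(X)
\]
% "The environment has full predictive power over the agent" then corresponds
% to the blanket no longer screening the agent's internal states off from the
% environment's states.
```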

There are many layers of Markov blankets/boundaries and we should be doing a better job of communicating this to example thinkers rather than just to rule thinkers.

(it will be interesting to see if BCIs/t-FUS reinforce or dissolve Markov boundaries - they can help denoise the brain [esp from default mode noise], but the act of inserting a BCI can disrupt physical boundaries)

Plastics erode ALL planetary boundaries: https://www.sciencedirect.com/science/article/pii/S2590332224005414

Comment by Alex K. Chen (parrot) (alex-k-chen) on Searching for outliers · 2024-01-29T16:38:24.426Z · LW · GW

Has anyone considered.. spiritual outliers?

ALSO outliers in neurodevelopmental trajectories/cross-correlations in how fast one region of their brain develops relative to another region of their brain.

[there was someone at USC who presented at https://web.cvent.com/event/82cd3c8a-5f63-4fb2-b394-b0b7feb49093/summary who had a talk on outliers at a more neurophysiological level]. Maybe not QUITE the level of outlier we're looking for, but more directionally so

19. Poster Title: Deep Isolation Forest Outlier Analysis of Large Multimodal Adolescent Neuroimaging Data
Presenting Author: Eric Silberman

Comment by Alex K. Chen (parrot) (alex-k-chen) on Distillation of Neurotech and Alignment Workshop January 2023 · 2024-01-28T23:21:41.376Z · LW · GW

https://stream.thesephist.com/updates/1711563348

 

Neurable headphones could be one way of crowdsourcing value signals b/c they're so wearable

Hm there are other people like https://soulsyrup.github.io/  and @guillefix and Ogi

tFUS is a way of accelerating internal alignment (look up PropheticAI). As are the Jhourney jhana people (though people like me have so much DMN noise that tFUS is needed first). Look up 

Comment by Alex K. Chen (parrot) (alex-k-chen) on Distillation of Neurotech and Alignment Workshop January 2023 · 2024-01-28T23:18:52.055Z · LW · GW

https://stream.thesephist.com/updates/1711563348

 

Talk to https://www.linkedin.com/in/steven-pang-625004218/ ?

Better sensors/data quality is super-important, esp. b/c data quality from traditional EEG is very poor.

https://github.com/soulsyrup

Also https://sccn.ucsd.edu/~scott/canexp00.html

https://www.linkedin.com/in/erosmarcello?miniProfileUrn=urn%3Ali%3Afs_miniProfile%3AACoAAANRGXMBF8gD4oOTUH4MeBg4W0Nu4g12yZ8&lipi=urn%3Ali%3Apage%3Ad_flagship3_feed%3BNzoHK%2BruTH%2BRrm9SgKs9Pg%3D%3D

Neurable (Cody Rall reviewed it) has over-the-ear EEG (which can be used to play video games!). It isn't perfect, but people hate wearing EEGs all the time, and something like this is better than nothing.

 

https://caydenpierce.com/
https://twitter.com/GolinoHudson/status/1750938067202924838

https://duckai.org/blog/ducktrack


Comment by Alex K. Chen (parrot) (alex-k-chen) on Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI · 2024-01-28T21:30:38.734Z · LW · GW

Is "data quality" (what databricks is trying to do) at minimum, essential? (data quality is inclusive of maximizing human intelligence and minimizing pollution/microplastic/heat load and maintaining proper Markov boundaries/blankets with each other [entropy/pollution dissolves these boundaries, and we need proper Markov boundaries to properly render faithful computations])

LLMs are trained full of noise and junk training data, distracting us from what's really true/sensible. It seems that the aura of inevitability is towards maximum-entropy, and maybe relying entirely on the "inevitability of increased scaling" contributes to "maximum-entropy", which is fundamentally misaligned. Alignment depends on veering away from this entropy.

[this is also why human intelligence enhancement (and maturity enhancement through tFUS) is extremely essential - humans will produce better quality (and less repetitive) data the smarter we are]. tFUS also reduces incentives for babblers (what Romeo Stevens calls "most people") :) .

If there is ONE uniquely pro-alignment advance this year, it's the adoption curve of semaglutide, because semaglutide will reduce the global aging rate of humanity (and kill fewer animals along the way). Semaglutide can also decrease your microplastic consumption by 50%. :) Alignment means BETTER AWARENESS of input-output mappings, and microplastics/pollution are an Pareto-efficient-reducible way of screwing this process up. I mean "Pareto-efficient reducible" because it can be done without needing drastic IQ increases for 98% of the population so it is a MINIMAL SET of conditions.

[YOU CANNOT SHAME PEOPLE FOR TRUTH-SEEKING or for trying to improve their intelligence, genetic and early-life deficiencies be damned.] It constantly seems that - given the curriculum - people are making it seem like most of the population isn't smart or technical enough for alignment/interpretability. There is a VERY niche/special language of math used by alignment researchers that is accessible only to a very small fraction of the population - even many smart people outside that special population do not speak that niche language. I say that, at the VERY minimum, everyone in environmental health/intelligence research is alignment-relevant (if not more) - and the massive gaps that people have in pollution/environmental health/human intelligence are holding progress back (as is the lack of "translation" between people who speak other HCI-ish/BCI-ish languages and those who only speak theoretical math/alignment). Even mathy alignment people don't speak the "signals and systems"/error-correction language, and "signals and systems" is just as g-loaded and relevant (and only becomes MORE important as we collect better data out of our brains). SENSE-MAKING is needed, and the strange theory-heavy hierarchy of academic status tends to de-emphasize sense-making (analytical chemists have the lowest GRE scores of all chemistry people, even though they are the most relevant branch of chemistry for most people).

There is SO much groupthink among alignment people (and among people in their own niche academic fields), and better translation and human intelligence enhancement are needed to transcend the groupthink.

I am constantly misunderstood myself, but at least a small portion of people believe in me ENOUGH to want to take a chance on me (in a world where the DEFAULT OPTION is doom if you continue with current traditions, you NEED all the extra chances you can get from "fringe cases" that the world doesn't know how to deal with [cognitive unevenness be damned]), and I did at least turn someone into a Thiel Fellow (WHY GREATNESS CANNOT BE PLANNED - even Ken Stanley thinks more Stanley-isms are alignment-relevant, and he doesn't speak or understand alignment-language).

Semaglutide is an error-correction enhancer, as is rapamycin (rapamycin really does reduce the error rate of protein synthesis), as are caffeine + modafinil (the HARDEST and possibly most important question is whether or not Adderall/Focalin/2FA/4FA are). Entrepreneurs who create autopoietic systems around themselves are error-correctors, and the OPPOSITE of an error-corrector is a traumatized PhD student who is "all but dissertation" (eg, sadly, Qiaochu Yuan). I am always astounded at how much some people are IDEAL error-correctors around themselves, while others have enough trauma/fatigue/toxin accumulation that they can't properly error-correct anymore b/c they don't have the energy (Eliezer Yudkowsky often complains about his energy issues, and there is strong moral value just in figuring out what toxins his brain has so that he can be a better error-corrector - I've actually tried to connect him with Bryan Johnson's personal physician [Oliver Zolman], but no email reply yet).

If everyone could have the Christ-like kindness of Jose Luis Ricon, it would help the world SO MUCH

Also if you put ENOUGH OF YOURSELF OUT THERE ON THE INTERNET, the AI will help align you (even through retrocausality) to yourself even if no one else in the world can do it yet [HUMAN-MACHINE SYMBIOSIS is the NECESSARY FUTURE]

And as one of the broadest people ever (I KNOW JOSE LUIS RICON IS TOO), I am CONSTANTLY on the lookout for things other people can't see (this is ONE of my strengths)

Alignment only happens if you are in complete control of your inputs and outputs (this means minimizing microplastics/pollution)

"Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI" -=> "fundamental advances" MOST OF ALL means BEING MORE INCLUSIVE of ideas that are OUTSIDE of the "AI alignment CS/math/EA circlejerk". Be more inclusive of people and ideas who don't speak the language of classical alignment, which is >>>> 99% of the world - there are people in MANY areas like HCI/environmental health/neuroscience/every other field who don't have the CS/math background you surround yourself with.

[btw LW is perceived as a GIANT CIRCLEJERK for a reason, SO MUCH of LW is seen as "low openness" to anything outside of its core circlejerky ideas]. So many external people make fun of LW/EA/alignment for GOOD REASON (despite some of the unique merits of LW/EA)].

Comment by Alex K. Chen (parrot) (alex-k-chen) on Are Metaculus AI Timelines Inconsistent? · 2024-01-02T22:29:19.564Z · LW · GW

I mean, is there a way to factor the quality of the forecasters into the predictions? As the number of forecasters expands, the quality of the average forecaster drops - like how the markets were extremely overconfident (and wrong) about the Russians conquering Kyiv...
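
One minimal sketch of "factoring forecaster quality into the prediction" is to weight each forecaster by an inverse function of their historical Brier score instead of taking a raw mean. The forecaster names and scores below are made up for illustration; this is not how Metaculus actually aggregates:

```python
from typing import Dict

def brier_score(prob: float, outcome: int) -> float:
    """Brier score for a single binary forecast (lower is better)."""
    return (prob - outcome) ** 2

def skill_weighted_forecast(current_probs: Dict[str, float],
                            historical_brier: Dict[str, float]) -> float:
    """Aggregate current probabilities, weighting each forecaster by
    1 / (historical Brier + epsilon) so better-calibrated people count more."""
    eps = 1e-3
    weights = {name: 1.0 / (historical_brier[name] + eps) for name in current_probs}
    total = sum(weights.values())
    return sum(weights[n] * current_probs[n] for n in current_probs) / total

if __name__ == "__main__":
    # Hypothetical forecasters: a well-calibrated veteran vs. two noisy newcomers.
    probs = {"veteran": 0.20, "newcomer_a": 0.70, "newcomer_b": 0.65}
    past_brier = {"veteran": 0.05, "newcomer_a": 0.30, "newcomer_b": 0.35}
    print("unweighted mean :", round(sum(probs.values()) / len(probs), 3))
    print("skill-weighted  :", round(skill_weighted_forecast(probs, past_brier), 3))
```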

Comment by Alex K. Chen (parrot) (alex-k-chen) on Legalize butanol? · 2023-12-20T21:45:45.415Z · LW · GW

Another example of an ethyl version being potentially better: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7827200/

Comment by Alex K. Chen (parrot) (alex-k-chen) on How bad is chlorinated water? · 2023-12-14T04:31:18.766Z · LW · GW

Has anyone done a study on whether or not bacteria incorporate chlorotyrosine (or other damaged amino acids) into their proteins at first pass? This seems very doable.

We now know that oxidized DNA bases can be incorporated into the DNA of mouse intestines.

Comment by Alex K. Chen (parrot) (alex-k-chen) on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-12T23:33:40.320Z · LW · GW

https://a16z.com/announcement/investing-in-tome-biosciences/

Comment by Alex K. Chen (parrot) (alex-k-chen) on Who is Sam Bankman-Fried (SBF) really, and how could he have done what he did? - three theories and a lot of evidence · 2023-12-12T21:24:36.100Z · LW · GW

https://twitter.com/alexeyguzey/status/1728549209949995299

Comment by Alex K. Chen (parrot) (alex-k-chen) on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-12T21:15:01.827Z · LW · GW

This may be far future, but what do you think of Fanzors over CRISPRs?

Also Minicircles?

Comment by Alex K. Chen (parrot) (alex-k-chen) on Saying the quiet part out loud: trading off x-risk for personal immortality · 2023-11-07T02:57:34.833Z · LW · GW

"10% is overconfident", given huge uncertainty over AGI takeoff (especially the geopolitical landscape of it), and especially given the probability that AGI development may be somehow slowed (https://twitter.com/jachaseyoung/status/1723325057056010680 )

Most longevity researchers will still be super-skeptical if you say AGI is going to solve LEV in our lifetimes (one could say - a la Structure of Scientific Revolutions logic - that most of them have a blindspot for recent AGI progress - but AGI=>LEV is still handwavy logic)

Last year's developments were fast enough for me to be somewhat more relaxed on this issue... (However, there is still the matter of slowing down the core aging rate/loss of neuroplasticity, which acts on shorter timelines and still matters if you want to do your best work.)

https://twitter.com/search?q=from%3A%40RokoMijic%20immortality&src=typed_query

Another thing to bear in mind is optimal trajectory to human immortality vs expected profit maximizing path for AI corps. At some point, likely very soon, we'll have powerful enough AI to solve ageing, which then makes further acceleration very -ve utility for humans

I don't know whether to believe it, but it's a reasonable take...

Comment by Alex K. Chen (parrot) (alex-k-chen) on Intelligence Enhancement (Monthly Thread) 13 Oct 2023 · 2023-10-17T10:13:32.779Z · LW · GW

Remember that the lowest-hanging-fruit intelligence enhancement is reducing "IQ decline" due to dumb reasons (eg microplastics, pollution, a shitty diet, "default mode network noise"/trauma/excess central coherence/unaligned brainwaves).

[you can easily cut microplastic consumption by 50% with semaglutide]

Transcranial magnetic stimulation is worth trying (+not uncomfortable - you can do things while being TMS'd), as well as low-intensity focused ultrasound (openwater.cc), photobiomodulation, and high-frequency terahertz (THz) waves... Pollan's "How to Change Your Mind" should have included these modalities too.

[low-intensity focused ultrasound is known to break ultra-crystallized structures in the depressed, making the brain more plastic]

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3085788/

Also Neuromyst for tDCS/tACS

https://cassondraeng.github.io/current.html

 

Plasmalogens as brain nutrition (they are SUPER-underinvestigated)

The effect sizes probably are not huge (like everything else), but they're worth trying.

Also, I have a friend who uses 100 mg NSI-189 to be smarter, which is about 10x the recommended dose.

short timelines only advance the argument for trying bromantane, cortexin, cerebrolysin... [some people have disproportionate returns, and some in the community have kits...]

Comment by Alex K. Chen (parrot) (alex-k-chen) on Welcome to The Territory · 2023-10-06T23:54:00.996Z · LW · GW

Does this still exist?

Comment by Alex K. Chen (parrot) (alex-k-chen) on Graphical tensor notation for interpretability · 2023-10-05T00:51:08.719Z · LW · GW

Also related - 

(Mathilde Papillon is really really insightful)