Posts

How to make food/water testing cheaper/more scalable? [eg for purity/toxin testing] 2024-03-23T05:28:17.273Z
How do you improve the quality of your drinking water? 2024-03-13T00:37:40.389Z
Will posting any thread on LW guarantee that an LLM will index all my content, and that questions people ask the LLM about my name will surface all my LW content? 2023-08-11T01:40:10.933Z
How do I find all the items on LW that I've *favorited* or upvoted? 2023-08-07T23:51:05.711Z
Alex K. Chen's Shortform 2023-08-07T17:06:18.876Z
What can people not smart/technical/"competent" enough for AI research/AI risk work do to reduce AI-risk/maximize AI safety? (which is most people?) 2022-04-11T14:05:33.979Z

Comments

Comment by Alex K. Chen (parrot) (alex-k-chen) on The Cognitive-Theoretic Model of the Universe: A Partial Summary and Review · 2024-04-21T15:31:34.107Z · LW · GW

Related: "As stated, one of the main things I make-believe is true is the overlighting intelligence with which I align myself. I speculate that I am in a co-creative relationship with an intelligence and will infinitely superior to my own. I observe that I exist within energetic patterns that flow like currents. I observe that when I act in alignment with these subtle energetic currents, all goes well, desires manifest, direction is clear, ease and smoothness are natural. I observe that I have developed a high degree of sensitivity to this energy, and that I’m able to make micro-corrections before any significant non-smoothness occurs."

Comment by Alex K. Chen (parrot) (alex-k-chen) on Thoughts on seed oil · 2024-04-21T02:18:37.343Z · LW · GW

It depends on how processed the PUFA is - many PUFAs in processed foods are heavily heated, and processing PUFAs at high heat is what produces peroxidizable aldehydes (acrolein, 9-HNE) and advanced lipid peroxidation end-products (ALEs).

But PUFAs in soybeans (or sunflower seeds without extra processing) are much less likely to be bad, and this is what the epidemiological evidence hints at.

For whatever reason, PUFAs are VERY strongly protective against heart disease (because they lower LDL) and insulin resistance. These are the leading causes of death in Western populations, but this does not make PUFAs equally protective against all diseases, especially for people who already have very low risk of death from heart disease/insulin resistance.

Fish oil (omega-3s) is also WAY more easily damaged/peroxidized than even omega-6s. People usually don't fry food with omega-3s the way they do with omega-6s, but if they did, would we see the opposite of the association we usually see with omega-3s? [note that omega-3s still fail to increase lifespan, as per the ITP]

What I am concerned about is whether they change cell membrane composition long-term in a way that makes cell membranes more easily peroxidized (animals with more saturated lipid membranes live longer, though there are ways to fix the damage, as Gustavo Barja discusses in Longevity and Evolution (Aging Issues, Health and Financial Alternatives)).

Whether omega-6s convert into pro-inflammatory or anti-inflammatory metabolites of arachidonic acid (BOTH are possible) depends heavily on one's D6D genotype.

more info I collected: https://www.crsociety.org/topic/18298-are-omega-6s-healthy-or-really-bad-or-does-it-depend-on-how-theyre-processed-and-d6d-genotype/#comment-45956

Comment by Alex K. Chen (parrot) (alex-k-chen) on All About Concave and Convex Agents · 2024-03-31T01:09:45.929Z · LW · GW

https://vitalik.eth.limo/general/2020/11/08/concave.html

Comment by Alex K. Chen (parrot) (alex-k-chen) on The Cognitive-Theoretic Model of the Universe: A Partial Summary and Review · 2024-03-28T22:05:50.441Z · LW · GW

I view a part of this as "maximizing the probability that the world enables 'God's mind' to faithfully model reality [1] and operate at its best across all timescales". At minimum this means intelligence enhancement, human-brain symbiosis, microplastics/pollution reduction, reduction in brain aging rate, and reducing default mode noise (eg tFUS, loosening up all tied knots).

The sooner we can achieve a harmonious global workspace, the better (because memory and our ability to hold the most faithful/error-minimizing representation will decay). There is a precipice, a period of danger where our minds are vulnerable to non-globally-coherent/self-deceptive thoughts that could run their own incentives to self-destruct, but if we can get over this precipice, then the universe becomes more likely to generate futures with our faithful values and thoughts.

Some trade-offs have difficult calculations/no clear answers (eg learning increases DNA error rates - https://twitter.com/gaurav_ven/status/1773415984931459160?t=8TChCcEfRzH60z0W1bCClQ&s=19 ); others are the "urgency vs verifiability" tradeoff and the accel/decel debate.

But there are still numerous Pareto-efficient improvements, and the sooner we make them (like semaglutide, canagliflozin, microplastic/pollution reduction, pain reduction, factoring out historic debt, QRI stuff), the higher the chances of ultimate alignment of "God's thought". It's interesting that the god of formal verification, davidad, is also concerned about microplastics.

Possibly relevant people

Sam Altman has this to say:

https://archive.ph/G7VVt#selection-1607.0-1887.9

The book says: "As stated, one of the main things I make-believe is true is the overlighting intelligence with which I align myself. I speculate that I am in a co-creative relationship with an intelligence and will infinitely superior to my own. I observe that I exist within energetic patterns that flow like currents. I observe that when I act in alignment with these subtle energetic currents, all goes well, desires manifest, direction is clear, ease and smoothness are natural. I observe that I have developed a high degree of sensitivity to this energy, and that I’m able to make micro-corrections before any significant non-smoothness occurs."

Bobby Azarian has a wonderful related book, "The Romance of Reality": https://www.informationphilosopher.com/solutions/scientists/layzer/

Maybe slightly related: https://twitter.com/shw0rma/status/1771212311753048135?t=qZx3U2PyFxiVCk8NBOjWqg&s=19

https://x.com/VictorTaelin?t=mPe_Orak_SG3X9f91aIWjw&s=09

https://twitter.com/AndyAyrey/status/1773428441498685569?t=sCGMUhlSH2e7M8sEPJu6cg&s=19
https://liberaugmen.com/#shock-level-3
Sid Mani!
Reducing noise: https://twitter.com/karpathy/status/1766509149297189274

[1] on some timescale, the best way to predict the future is to build it

Comment by Alex K. Chen (parrot) (alex-k-chen) on How to make food/water testing cheaper/more scalable? [eg for purity/toxin testing] · 2024-03-27T19:27:58.342Z · LW · GW

Does Germany have a lot of food/microplastic testing companies? Germany seems highly represented in analytical chemistry, as I saw at the SLAS2024 conference (for all those people who complain about "lack of innovation" in Europe, they're underrating analytical chemistry). This conforms to stereotypes about Germans and precision...

(and the culture of Germany is WAY more amenable to eco-consciousness/environmental health than the culture of America)

It would be nice (even in fringe cases) to have one country/area dedicated to being microplastic/pollution free so that people could travel there and then test whether they feel healthier there (people who have multiple chemical sensitivities often have life-defining levels of motivation for this). This would be the very definition of a health-conscious resort/recovery/convalescence spa (people used to go to the mountains for this).

 This documentary features Germans:

#sense-making

Comment by Alex K. Chen (parrot) (alex-k-chen) on How I turned doing therapy into object-level AI safety research · 2024-03-14T16:08:56.397Z · LW · GW

Isn't having boundaries also partly about full consent (proactive and retroactive), given that your implied preferences are unknown?

Consent is tricky because almost no one who isn't unschooled grows up consenting to anything. People grow used to consenting to things that make them feel unhappy because they don't know themselves well enough, and they trap themselves in structures that punish them for dropping out or for not opting in to anything. In that sense, the system does not respect your boundaries or your self-autonomy - your actions don't have a proper Markov boundary from the rest of the system, and thus you can't act as an independent agent. Some unschooled people have the most robust Markov boundaries. The very structure of many school and work environments (one that penalizes working from home) inherently creates power structures that cross people's boundaries, especially their energetic ones.

Even the state starts out by eroding some of the boundaries between person and state, without consent..

These people have stronger boundaries on ONE layer of abstraction - https://www.thepsmiths.com/p/review-the-art-of-not-being-governed?utm_source=profile&utm_medium=reader2. This does not necessarily translate to better boundaries on the object level.

https://twitter.com/karpathy/status/1766509149297189274?t=ms8cmXL0em2zB4xdJyUblA&s=19 on mimetic boundaries

(Now that AI is creating new wealth very quickly, it becomes more possible for people to not consent by default to all the mazes that everyone else seemingly "consents to".) Zvi's mazes posts make sense here.

Comment by Alex K. Chen (parrot) (alex-k-chen) on InquilineKea's Shortform · 2024-03-13T08:19:20.141Z · LW · GW

multiscale entropy

netlify/vercel/heroku/shinyapps/fleek (find cool associated apps!) + replit

github 

modal/EC2/docker

Comment by Alex K. Chen (parrot) (alex-k-chen) on Alex K. Chen's Shortform · 2024-03-12T06:10:43.922Z · LW · GW

Are exotic computing paradigms (ECPs) pro-alignment?

cf https://twitter.com/niplav_site/status/1760277413907382685

They are orthogonal to the "scale is all you need" people, and the "scale is all you need" thesis is the hardest case for alignment/interpretability.

some examples of alternatives: https://www.lesswrong.com/posts/PyChB935jjtmL5fbo/time-and-energy-costs-to-erase-a-bit, Normal Computing, https://www.lesswrong.com/posts/ngqFnDjCtWqQcSHXZ/safety-of-self-assembled-neuromorphic-hardware, computing-related thiel fellows (eg Thomas Sohmers, Tapa Ghosh)

[this is also how to get into neglectedness again, which EA adopted as a principle but recently forgot]

from Charles Rosenbauer:

This is neat, but this does little to nothing to optimize non-AI compute. Modern CPUs are insanely wasteful with transistors, plenty of room for multiple orders of magnitude of optimization there. This is only a fraction of the future of physics-optimized compute.
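
For a sense of how much physical headroom "physics-optimized compute" has, the relevant floor is the Landauer bound (the subject of the "time and energy costs to erase a bit" link above) - the minimum energy to erase one bit at temperature T:

```latex
E_{\min} = k_B T \ln 2
         \approx (1.38\times 10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693)
         \approx 2.9\times 10^{-21}\ \mathrm{J\ per\ bit\ erased\ at\ room\ temperature}
```

Conventional hardware dissipates many orders of magnitude more than this per irreversible operation, which is the sense in which there is "plenty of room" below current designs.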

Comment by Alex K. Chen (parrot) (alex-k-chen) on Thomas Kwa's Shortform · 2024-03-09T21:49:10.878Z · LW · GW

Have you seen smartairfilters.com?

I've noticed that every air purifier I've used fails to reduce PM2.5 by much on highly polluted days or in polluted cities (for instance, the Aurea grouphouse in Berlin has a Dyson air purifier, but when I ran it at max, it still barely reduced the Berlin PM2.5 from its value of 15-20 ug/m^3, even at a moderate distance). I live in Boston, where PM2.5 levels are usually low, and I still don't notice differences in PM [I use Sqair purifiers], but I run them all the time anyway because they still capture plenty of dust over the day.
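
One rough way to reason about this is the standard single-zone mass-balance model for indoor air. The sketch below is a minimal illustration; all parameter values (room volume, air exchange rate, penetration factor, CADR) are assumptions for illustration, not measurements of any particular purifier or room.

```python
# Steady-state single-zone box model for indoor PM2.5:
#   C_in = (P * lam * C_out) / (lam + CADR / V)
# where lam = air changes per hour, P = penetration factor, V = room volume.
# All numbers below are illustrative assumptions, not measurements.

def indoor_pm25(c_out, air_exchange_per_hr, penetration, cadr_m3_per_hr, volume_m3):
    """Steady-state indoor PM2.5 (ug/m^3), ignoring indoor sources and deposition."""
    removal = air_exchange_per_hr + cadr_m3_per_hr / volume_m3
    return penetration * air_exchange_per_hr * c_out / removal

c_out = 18.0   # outdoor PM2.5, ug/m^3 (roughly the Berlin range mentioned above)
room = 50.0    # room volume, m^3 (assumed)
leaky = 2.0    # air changes per hour for a fairly leaky room (assumed)

no_purifier = indoor_pm25(c_out, leaky, penetration=0.8, cadr_m3_per_hr=0.0, volume_m3=room)
small_purifier = indoor_pm25(c_out, leaky, penetration=0.8, cadr_m3_per_hr=100.0, volume_m3=room)

print(f"without purifier: {no_purifier:.1f} ug/m^3")
print(f"with a 100 m^3/h CADR purifier: {small_purifier:.1f} ug/m^3")
# The reduction factor is 1 / (1 + CADR / (V * lam)): in a leaky room, or with a
# purifier whose CADR is small relative to room volume, the steady-state drop is modest.
```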

Comment by Alex K. Chen (parrot) (alex-k-chen) on Alex K. Chen's Shortform · 2024-03-09T20:58:48.039Z · LW · GW

Using size-1 Piksters (interdental brushes) makes you really aware of all the subtle noise that your hidden plaque gives your mind (I noticed they cleared up plaque unreachable by floss + Waterpiks + electric toothbrushes). The first step to alignment/a faithful computation is reducing unnecessary noise (you notice this easily on microdoses of weed/psychedelics).

It's a Pareto-efficient improvement to give all alignment researchers Piksters to eliminate this source of noise (align the aligners first - reducing unnecessary noise is always the first step to alignment [and near-term tFUS is also a means to reduce noise]). I know that one of the alignment offices had a lot of "freebies" that anyone could use - Piksters should be one of the usable freebies.



 

Comment by Alex K. Chen (parrot) (alex-k-chen) on Alex K. Chen's Shortform · 2024-03-09T20:04:26.850Z · LW · GW

What are some strategies you use to "reduce the hit" when you're about to take in potentially bad news? This is important because it's sometimes important to face "bad news" earlier rather than later, and there is social loss when some people can't face it until it's too late, especially because some kinds of "bad news" aren't as irreversible as they may initially appear (you just need out-of-distribution strategies to make the proper amends).

[some examples of bad news: irreversible data loss, cancer diagnosis, elevated epigenetic age, loss of important friend, someone overpromised and underdelivered on you and that affects many of the promises you made]

[as AGI timelines come "nearer", "bad news" may come at faster frequencies, but OOD ways to solve them may also come faster]

[Sometimes you can ask yourself "how much wealth would I need to be able to take in any bad news?"] Wealth is not fully interchangeable with youth/intelligence/universal social acceptance, but it DEFINITELY has potential for moving the needle...

Comment by Alex K. Chen (parrot) (alex-k-chen) on Notes on Awe · 2024-03-05T23:23:06.407Z · LW · GW

Do you think there are many common threads between shock value, surprisal, and awe - both neurologically and sociologically?

Totalitarian societies use "awe" as a tool of control.

Did awe evolve from "something more primitive" into the complex emotion it is today? What is the simplest animal species that can feel something akin to awe? Jane Goodall wrote that even chimpanzees can feel "awe" at a waterfall, and some cetacean experts have mentioned that whales/elephants can pause at events that humans might react to with awe.

https://en.wikipedia.org/wiki/Shock_and_awe

Infinities are a way to inspire awe - https://x.com/JDHamkins?t=yfENp4Ou23RggXDPRpo2yw&s=09

https://open.substack.com/pub/joeldavidhamkins/p/surreal-numbers?utm_source=share&utm_medium=android&r=60bo

(Max Tegmark's multiverse theory is another way)

 

[The biggest moment of awe I ever felt in my life was when the Thiel Fellowship got announced for the first time. It just... shocked... every sense of my policy network... every sense of "what actions/life paths are worth following".. as it shocked the entire world... and I was shocked/impressed that it was possible that people could follow such life paths].

(I mean, feelings of "a whole new world" that come all at once also inspire awe..)

 

[As someone whose mental space was constantly consumed by having to impress gatekeepers, the Thiel Fellowship's announcement produced awe in the most cathartic way] 

Comment by Alex K. Chen (parrot) (alex-k-chen) on Mazes Sequence Roundup: Final Thoughts and Paths Forward · 2024-02-25T21:36:11.075Z · LW · GW

It's worth mentioning that (many) autistic people are often better at not getting pulled into the higher simulacrum levels that cause people to be trapped by maze-dom.

[SBF is an obvious counterexample]

BTW the opposite of mazedom is Newscience.org

Comment by Alex K. Chen (parrot) (alex-k-chen) on Agent membranes and causal distance · 2024-02-12T18:53:25.475Z · LW · GW

Microplastics (and pollution - both mimetic and actual) wreck boundaries by intercalating between boundaries/cell membranes and reducing the integrity of the boundary. To reinforce proper boundaries, it's important to maintain the organism's overall health (eg deuterated PUFAs like RT-011 help reduce oxidative stress on polyunsaturated fatty acids in the cell membrane).

[when the integrity of boundaries is weakened, the organism's channel capacity is reduced by the extra noise].

https://studio.ribbonfarm.com/p/boundary-intelligence

https://twitter.com/Sara_Imari/status/1755816761273032779?t=3k1rX1jIq0NKKAlWs5lphA&s=19

 

For an organism to have healthy boundaries/Markov blanket (within both its cells and organ systems [and also between DMN and FPN networks of the brain]), organs must also compartmentalize their own compute shielded from influences that disrupt their compute. 

Karl Friston often insulates his compute from that of the world, and this makes him act more as an independent thinker. https://blog.dropbox.com/topics/work-culture/the-mind-at-work--karl-friston-on-the-brain-s-surprising-energy. I often wonder if extremely effective people (eg Andrej Karpathy) have stronger agent membranes than others (though the process of aging dissolves boundaries - "death is what happens when the rest of the environment has full predictive power over the agent").

There are many layers of Markov blankets/boundaries and we should be doing a better job of communicating this to example thinkers rather than just to rule thinkers.

(it will be interesting to see if BCIs/t-FUS reinforce or dissolve Markov boundaries - they can help denoise the brain [esp from default mode noise], but the act of inserting a BCI can disrupt physical boundaries)

Comment by Alex K. Chen (parrot) (alex-k-chen) on Searching for outliers · 2024-01-29T16:38:24.426Z · LW · GW

Has anyone considered.. spiritual outliers?

Comment by Alex K. Chen (parrot) (alex-k-chen) on Distillation of Neurotech and Alignment Workshop January 2023 · 2024-01-28T23:21:41.376Z · LW · GW

https://stream.thesephist.com/updates/1711563348

 

Neurable headphones could be one way of crowdsourcing value signals b/c they're so wearable

Hm there are other people like https://soulsyrup.github.io/  and @guillefix and Ogi

tFUS is a way of accelerating internal alignment (look up PropheticAI). As are the Jhourney jhana people (though people like me have so much DMN noise that tFUS is needed first). Look up 

Comment by Alex K. Chen (parrot) (alex-k-chen) on Distillation of Neurotech and Alignment Workshop January 2023 · 2024-01-28T23:18:52.055Z · LW · GW

https://stream.thesephist.com/updates/1711563348

 

Talk to https://www.linkedin.com/in/steven-pang-625004218/ ?

Better sensors/data quality is super-important, especially because data quality from traditional EEG is very poor.

https://github.com/soulsyrup

Also https://sccn.ucsd.edu/~scott/canexp00.html

https://www.linkedin.com/in/erosmarcello?miniProfileUrn=urn%3Ali%3Afs_miniProfile%3AACoAAANRGXMBF8gD4oOTUH4MeBg4W0Nu4g12yZ8&lipi=urn%3Ali%3Apage%3Ad_flagship3_feed%3BNzoHK%2BruTH%2BRrm9SgKs9Pg%3D%3D

Neurable (Cody Rall reviewed it) has over-the-ear EEG (which can be used to play video games!). It isn't perfect, but people hate wearing EEGs all the time, and something like this is better than nothing.

 

https://caydenpierce.com/
https://twitter.com/GolinoHudson/status/1750938067202924838

https://duckai.org/blog/ducktrack

 

Comment by Alex K. Chen (parrot) (alex-k-chen) on Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI · 2024-01-28T21:30:38.734Z · LW · GW

Is "data quality" (what databricks is trying to do) at minimum, essential? (data quality is inclusive of maximizing human intelligence and minimizing pollution/microplastic/heat load and maintaining proper Markov boundaries/blankets with each other [entropy/pollution dissolves these boundaries, and we need proper Markov boundaries to properly render faithful computations])

LLMs are trained on data full of noise and junk, distracting us from what's really true/sensible. It seems that the aura of inevitability points toward maximum entropy, and maybe relying entirely on the "inevitability of increased scaling" contributes to "maximum entropy", which is fundamentally misaligned. Alignment depends on veering away from this entropy.

[this is also why human intelligence enhancement (and maturity enhancement through tFUS) is extremely essential - humans will produce better quality (and less repetitive) data the smarter we are]. tFUS also reduces incentives for babblers (what Romeo Stevens calls "most people") :) .

If there is ONE uniquely pro-alignment advance this year, it's the adoption curve of semaglutide, because semaglutide will reduce the global aging rate of humanity (and kill fewer animals along the way). Semaglutide can also decrease your microplastic consumption by 50%. :) Alignment means BETTER AWARENESS of input-output mappings, and microplastics/pollution are a Pareto-efficiently-reducible way of screwing this process up. I say "Pareto-efficiently reducible" because it can be done without needing drastic IQ increases for 98% of the population, so it is a MINIMAL SET of conditions.

[YOU CANNOT SHAME PEOPLE FOR TRUTH-SEEKING or for trying to improve their intelligence, genetic and early-life deficiencies be damned.] It constantly seems that - given the curriculum - people are making it seem like most of the population isn't smart or technical enough for alignment/interpretability. There is a VERY niche/special language of math used by alignment researchers that is only accessible to a small fraction of the population, even among smart people who do not speak that niche language.

I say that, at VERY minimum, everyone in environmental health/intelligence research is alignment-relevant (if not more), and the massive gaps people have in pollution/environmental health/human intelligence are holding progress back (as is "translation" between people who speak other HCI-ish/BCI-ish languages and those who only speak theoretical math/alignment). Even mathy alignment people don't speak the "signals and systems"/error-correction language, and "signals and systems" is just as g-loaded and relevant (and only becomes MORE important as we collect better data out of our brains). SENSE-MAKING is needed, and the strange theory-heavy hierarchy of academic status tends to de-emphasize sense-making (analytical chemists have the lowest GRE scores of all chemistry people, even though analytical chemistry is the most relevant branch of chemistry for most people).

There is SO much groupthink among alignment people (and people in their own niche academic fields) and better translation and human intelligence enhancement to transcend the groupthink is needed.

I am constantly misunderstood myself, but at least a small portion of people believe in me ENOUGH to want to take a chance on me (in a world where the DEFAULT OPTION is doom if you continue with current traditions, you NEED all the extra chances you can take from "fringe cases" that the world doesn't know how to deal with [cognitive unevenness be damned]), and I did at least turn someone into a Thiel Fellow (WHY GREATNESS CANNOT BE PLANNED - even Ken Stanley thinks MORE STANLEY-ISMS are alignment-relevant, and he doesn't speak or understand alignment-language).

Semaglutide is an error-correction enhancer, as is rapamycin (rapamycin really reduces the error rate of protein synthesis), as are caffeine+modafinil (the HARDEST and possibly most important question is whether or not Adderall/Focalin/2FA/4FA are). Entrepreneurs who create autopoietic systems around themselves are error-correctors, and the OPPOSITE of an error-corrector is a traumatized PhD student who is "all but dissertation" (eg, sadly, Qiaochu Yuan). I am always astounded at how much some people are IDEAL error-correctors around themselves, while others have enough trauma/fatigue/toxin accumulation that they can't properly error-correct anymore because they don't have the energy (Eliezer Yudkowsky often complains about his energy issues, and there is strong moral value alone in figuring out what toxins his brain has so that he can be a better error-corrector - I've actually tried to connect him with Bryan Johnson's personal physician [Oliver Zolman], but no email reply yet).

If everyone could have the Christ-like kindness of Jose Luis Ricon, it would help the world SO MUCH

Also if you put ENOUGH OF YOURSELF OUT THERE ON THE INTERNET, the AI will help align you (even through retrocausality) to yourself even if no one else in the world can do it yet [HUMAN-MACHINE SYMBIOSIS is the NECESSARY FUTURE]

And as one of the broadest people ever (I KNOW JOSE LUIS RICON IS TOO), I am CONSTANTLY on the lookout for things other people can't see (this is ONE of my strengths)

Alignment only happens if you are in complete control of your inputs and outputs (this means minimizing microplastics/pollution)

"Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI" -=> "fundamental advances" MOST OF ALL means BEING MORE INCLUSIVE of ideas that are OUTSIDE of the "AI alignment CS/math/EA circlejerk". Be more inclusive of people and ideas who don't speak the language of classical alignment, which is >>>> 99% of the world - there are people in MANY areas like HCI/environmental health/neuroscience/every other field who don't have the CS/math background you surround yourself with.

[btw LW is perceived as a GIANT CIRCLEJERK for a reason; SO MUCH of LW is seen as "low openness" to anything outside of its core circlejerky ideas. So many external people make fun of LW/EA/alignment for GOOD REASON (despite some of the unique merits of LW/EA).]

Comment by Alex K. Chen (parrot) (alex-k-chen) on Are Metaculus AI Timelines Inconsistent? · 2024-01-02T22:29:19.564Z · LW · GW

I mean, is there a way to factor the quality of the forecasters into the predictions? As the number of forecasters expands, you get lower average forecaster quality. Like how the markets were extremely overconfident (and wrong) about the Russians conquering Kyiv...
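
One simple version of "factoring forecaster quality into the prediction" is to weight each forecast by the forecaster's historical accuracy (eg inverse Brier score) before pooling. The sketch below is a minimal illustration with made-up numbers; it is not Metaculus's actual aggregation method.

```python
import numpy as np

# Pool binary-event forecasts, weighting each forecaster by past accuracy.
# All numbers are made up for illustration; this is not Metaculus's algorithm.

probs = np.array([0.9, 0.85, 0.3, 0.25])         # current forecasts for one event
past_brier = np.array([0.25, 0.24, 0.10, 0.12])  # lower Brier score = better track record

weights = 1.0 / past_brier
weights /= weights.sum()

naive_mean = probs.mean()
weighted_mean = np.dot(weights, probs)

print(f"unweighted community mean: {naive_mean:.2f}")
print(f"track-record-weighted mean: {weighted_mean:.2f}")
# As the forecaster pool grows and average skill drops, the two numbers diverge:
# the weighted pool leans toward the forecasters who have actually been accurate.
```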

Comment by Alex K. Chen (parrot) (alex-k-chen) on Legalize butanol? · 2023-12-20T21:45:45.415Z · LW · GW

Another example of an ethyl version being potentially better: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7827200/

Comment by Alex K. Chen (parrot) (alex-k-chen) on How bad is chlorinated water? · 2023-12-14T04:31:18.766Z · LW · GW

Has anyone done a study on whether or not bacteria incorporate chlorotyrosine (or other damaged proteins) into their proteins at first pass? This seems very doable.

We now know that oxidized DNA bases can be incorporated into the DNA of mouse intestines.

Comment by Alex K. Chen (parrot) (alex-k-chen) on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-12T23:33:40.320Z · LW · GW

https://a16z.com/announcement/investing-in-tome-biosciences/

Comment by Alex K. Chen (parrot) (alex-k-chen) on Who is Sam Bankman-Fried (SBF) really, and how could he have done what he did? - three theories and a lot of evidence · 2023-12-12T21:24:36.100Z · LW · GW

https://twitter.com/alexeyguzey/status/1728549209949995299

Comment by Alex K. Chen (parrot) (alex-k-chen) on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-12T21:15:01.827Z · LW · GW

This may be far future, but what do you think of Fanzors over CRISPRs?

Also Minicircles?

Comment by Alex K. Chen (parrot) (alex-k-chen) on Saying the quiet part out loud: trading off x-risk for personal immortality · 2023-11-07T02:57:34.833Z · LW · GW

"10% is overconfident", given huge uncertainty over AGI takeoff (especially the geopolitical landscape of it), and especially given the probability that AGI development may be somehow slowed (https://twitter.com/jachaseyoung/status/1723325057056010680 )

Most longevity researchers will still be super-skeptical if you say AGI is going to solve LEV in our lifetimes (one could say - a la Structure of Scientific Revolutions logic - that most of them have a blindspot for recent AGI progress - but AGI=>LEV is still handwavy logic)

Last year's developments were fast enough for me to be somewhat more relaxed on this issue... (however, there is still slowing core aging rate/neuroplasticity loss down, which acts on shorter timelines, and still important if you want to do your best work)

https://twitter.com/search?q=from%3A%40RokoMijic%20immortality&src=typed_query

Another thing to bear in mind is the optimal trajectory to human immortality vs the expected profit-maximizing path for AI corps. At some point, likely very soon, we'll have powerful enough AI to solve ageing, which then makes further acceleration very -ve utility for humans.

I don't know whether to believe, but it's a reasonable take...

Comment by Alex K. Chen (parrot) (alex-k-chen) on Intelligence Enhancement (Monthly Thread) 13 Oct 2023 · 2023-10-17T10:13:32.779Z · LW · GW

Remember that the most low-hanging-fruit intelligence enhancement is reducing "IQ decline" due to dumb reasons (eg microplastics, pollution, shitty diet, "default mode network noise"/trauma/excess central coherence/unaligned brainwaves)

[you can easily cut microplastic consumption by 50% with semaglutide]

Transcranial magnetic stimulation is worth trying (+not uncomfortable - you can do things while being TMS'd), as well as low-intensity focused ultrasound (openwater.cc), photobiomodulation, and high-frequency terahertz (THz) waves... Pollan's "How to Change Your Mind" should have included these modalities too.

[low-intensity focused ultrasound is known to break ultra-crystallized structures in the depressed, making the brain more plastic]

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3085788/

Also Neuromyst for tDCS/tACS

https://cassondraeng.github.io/current.html

 

Plasmalogens as brain nutrition (they are SUPER-underinvestigated)

The effect sizes probably are not huge (like everything else), but worth trying.

Also, I have a friend who uses "100mg NSI-189" to be smarter, which is like 10x the recommended dose.

short timelines only advance the argument for trying bromantane, cortexin, cerebrolysin... [some people have disproportionate returns, and some in the community have kits...]

Comment by Alex K. Chen (parrot) (alex-k-chen) on Welcome to The Territory · 2023-10-06T23:54:00.996Z · LW · GW

Does this still exist?

Comment by Alex K. Chen (parrot) (alex-k-chen) on Graphical tensor notation for interpretability · 2023-10-05T00:51:08.719Z · LW · GW

Also related - 

(Mathilde Papillon is really really insightful)

Comment by Alex K. Chen (parrot) (alex-k-chen) on Knightian uncertainty in a Bayesian framework · 2023-09-29T12:14:50.963Z · LW · GW

Is infra-Bayesianism insufficient for covering Knightian uncertainty too?

Comment by Alex K. Chen (parrot) (alex-k-chen) on A mostly critical review of infra-Bayesianism · 2023-09-29T10:09:35.440Z · LW · GW

https://www.lesswrong.com/users/matolcsid?from=post_header now?

Is Knightian uncertainty more responsive to non-infra-Bayesian distributions? [These distributions being convex puts strong constraints on what they could be, but Knightian uncertainty assumes openness to any uncertainty.]

==

Is "portfolio optimization" infra-Bayesianism given it tends to be convex? [eg sometimes the payoff is a non-convex combination of the probability distribution payoff of the distribution payoffs of two separate stocks, perhaps if investing in one item in the portfolio affects performance on the other item, if "spreading your bets" disproportionately hits you relative to being all-in?]

Comment by Alex K. Chen (parrot) (alex-k-chen) on How have you become more hard-working? · 2023-09-26T19:17:24.908Z · LW · GW

Adderall microdosing: https://www.reddit.com/r/Stims/comments/3mbp3n/be_very_careful_with_low_doses_of_stimulants/

[I used to take heavier doses, but the neurotoxicity/tolerance risk was too much so I took a long break. Since then I found that loads of coffee/caffeine + a very small dose of Adderall seems to do the trick]

Also, Neuromyst tDCS/tACS (40Hz, 3.7 mA)

https://www.facebook.com/NeuroMyst/

https://www.neuromodulationjournal.org/article/S1094-7159(23)00009-0/fulltext

https://www.reddit.com/r/NootropicsDepot/comments/ld2wre/4dma78dhf_remarkable_with_a_serious_problem_that/

Comment by Alex K. Chen (parrot) (alex-k-chen) on formalizing the QACI alignment formal-goal · 2023-09-24T19:01:46.256Z · LW · GW

Is the constrained mass M of similar pasts taken over a dot-product of functions that might have similar pasts with each other?

Comment by Alex K. Chen (parrot) (alex-k-chen) on Understanding Machine Learning (III) · 2023-09-24T16:56:54.709Z · LW · GW

Can minimal description length be over S in addition to over the set of hypothesis classes?

Comment by Alex K. Chen (parrot) (alex-k-chen) on Understanding Machine Learning (I) · 2023-09-24T16:29:42.995Z · LW · GW

Has anyone used a Probably Approximately Correct (PAC) framework to develop a rough formalization of the taste of a learner A(S)? [and whether a learner "has it in them" to correctly classify H, a hypothesis space more complex than the S they've seen?]

Especially learners who come from "shitty environments" (have a poor S), but still have the A(S) in them to suddenly "jump" once they have exposure to the right people.

Some “A” functions might have unusually nonlinear behavior once a person is exposed to the right set of tutors or environment (and some “A” functions never have it in them)
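
For context, the standard realizable-case PAC bound for a finite hypothesis class, which any such formalization would have to go beyond - the bound depends only on the class, the sample size, and the confidence, not on any property ("taste") of the learner A itself (standard result, sketched here):

```latex
% Realizable PAC learning, finite hypothesis class \mathcal{H}:
% any learner that outputs a hypothesis consistent with a sample S of size m
% achieves error at most \epsilon with probability at least 1-\delta, provided
m \;\ge\; \frac{1}{\epsilon}\left( \ln|\mathcal{H}| + \ln\frac{1}{\delta} \right)
```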

Comment by Alex K. Chen (parrot) (alex-k-chen) on Some reasons why I frequently prefer communicating via text · 2023-09-19T00:10:14.724Z · LW · GW

Related - https://www.quora.com/Why-do-some-people-prefer-online-interactions-to-real-life-interactions/answer/Alex-K-Chen

Comment by Alex K. Chen (parrot) (alex-k-chen) on How ForumMagnum builds communities of inquiry · 2023-09-06T17:52:27.672Z · LW · GW

This product design builds the norm of long-form, async communication. This is an important norm on these sites, although not usually made explicit.

This is an example of a norm-building technique that I call friction. The word “friction” in UX design is often used negatively, but friction is a powerful way to steer users towards desired behavior! ForumMagnum uses several frictions to build the long-form, async norm. Notice there are no realtime notifications, and timestamps are only accurate to the hour.

Quora used to advertise itself as "long-form" and "forever" (the place where you would write THE best answer to every question, and ideally edit your answer years after writing the original [I don't see people constantly editing their old content on LessWrong]), but the answer ranking on each question wrecked it, because now the algorithm surfaces answers that attract more views ("feel good" answers) rather than answers that are objectively better. Because many higher-quality answers are now buried down the list of Quora answers, I move my better answers to other platforms like forum.longevitybase.org or crsociety.org.

I am super-ultra attracted to long-form (want all of my content to be easily accessible by all) for reasons similar to my obsession with longevity/archiving old content, and sometimes post responses to threads that have not gotten attention in years (just to make more complete threads). People are not aware enough of this, however.

https://www.quora.com/What-was-your-biggest-regret-on-Quora/answer/Alex-K-Chen (my biggest distillation from being arguably the most important user on Quora)

The upvoting/downvoting system penalizes people who want to post threads that aren't related to the rationalist fad/zeitgeist (especially ones related to alignment that they don't think are frontpageable, but which are still relevant for rationality (or progress studies!) and could still attract momentum/attention years down the line). This is why I do not post much on LessWrong (I have extremely broad interests, so I naturally end up discovering LW, but my views/opinions on what's important are way different from those of most LW/EA people, so I know my niche interests won't get much attention here). I don't feel the same kind of inhibition when posting content to the progress studies forum, which is smaller (small enough that you don't care at all about upvote/downvote dynamics) and way less prone to groupthink. Effective Altruism has historically valued neglectedness, but this does not show in forum upvoting patterns...

There are many scientific areas (and people with niche interests - the castration thread on LW is uniquely great, for example!) that could be discussed on LessWrong and analyzed/vetted via CFAR/rationality/Bayes-updating/superforecasting techniques, but which are not, simply because many people averse to the groupthink dynamics on LW don't feel like LW would value their content. A long-form platform should ideally insulate them from local upvote/downvote fads (as useful as that input is). For what it's worth, upvotes (from quality users) used to be the primary factor driving answer rankings on Quora (back when "all the smart SV people used it"), but with Quora's dilution it seems almost as if people no longer care about upvotes (now that upvotes almost all come from people I don't know rather than people I do know, I don't care about upvotes anymore - but I remember the golden days when I wrote answers that everyone on the Quora team upvoted...). Once you've been on a forum for years, how good a post is (even if it has been edited enough times that the initial upvoters never saw the better version) [as well as what comments it attracts] is more rewarding than how upvoted it is...

Stack Exchange is in some ways a better platform for long-form content (and makes it ultra-easy to find content that is many years old and makes it ultra-easy for people not to post duplicate threads), especially because it gives you multiple ways of organizing/ranking all your old content, making it easily accessible and for you to want to come back and edit multiple times. It just has moderators who are quick to mute/delete threads they don't like, making it much harder to post about niche interests. 

[but again, these don't make up for how there don't seem to be many threads where comments are made years after the original post]

--

It's also nice to reference other forum communities that have lasted for years (even if reddit was the original forum-killer).

Comment by Alex K. Chen (parrot) (alex-k-chen) on Alex K. Chen's Shortform · 2023-08-16T20:00:11.164Z · LW · GW

Random content I'm reading (could be important)

https://research.vu.nl/en/persons/natalia-goriounova/publications/
Natalia Goriounova – Research output — Vrije Universiteit Amsterdam
https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(22)00208-X?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS136466132200208X%3Fshowall%3Dtrue
Evolution of cortical neurons supporting human cognition: Trends in Cognitive Sciences
https://academic.oup.com/cercor/article/32/11/2343/6373557
Verbal and General IQ Associate with Supragranular Layer Thickness and Cell Properties of the Left Temporal Cortex | Cerebral Cortex | Oxford Academic
https://www.frontiersin.org/articles/10.3389/fnhum.2019.00044/full
Frontiers | Genes, Cells and Brain Areas of Intelligence
https://academic.oup.com/cercor/article/25/12/4839/311644?login=false
Dendritic and Axonal Architecture of Individual Pyramidal Neurons across Layers of Adult Human Neocortex | Cerebral Cortex | Oxford Academic
https://academic.oup.com/cercor/article/33/6/2857/6633911?login=false
Strong and reliable synaptic communication between pyramidal neurons in adult human cerebral cortex | Cerebral Cortex | Oxford Academic
https://www.nature.com/articles/s41467-023-39946-9
Genes associated with cognitive ability and HAR show overlapping expression patterns in human cortical neuron types | Nature Communications
https://www.nature.com/articles/s41586-021-03813-8
Human neocortical expansion involves glutamatergic neuron diversification | Nature
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10345092/
Genes associated with cognitive ability and HAR show overlapping expression patterns in human cortical neuron types - PMC
 

https://www.esi-frankfurt.de/people/hermanncuntz/
Dr. Hermann Cuntz | Ernst Strüngmann Institute (ESI) for Neuroscience
https://www.cell.com/neuron/fulltext/S0896-6273(21)00625-5?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0896627321006255%3Fshowall%3Dtrue
A general principle of dendritic constancy: A neuron’s size- and shape-invariant excitability: Neuron
https://www.pnas.org/doi/full/10.1073/pnas.1200430109
A scaling law derived from optimal dendritic wiring | PNAS
 

https://www.uni-giessen.de/de/fbz/zentren/icar3r/3r-symposium/speaker/name-2
Dr. Hermann Cuntz — 3R Symposium
https://scholar.google.com/citations?user=9g7Uj-MAAAAJ&hl=en
‪Hermann Cuntz‬ - ‪Google Scholar‬
https://www.biorxiv.org/content/10.1101/2023.02.27.530331v1
Topology recapitulates ontogeny of dendritic arbors | bioRxiv
https://www.treestoolbox.org/hermann/hermann_publications.html
Hermann Cuntz - homepage
https://www.biorxiv.org/content/10.1101/2023.03.15.532740v1.full
Skewed distribution of spines is independent of presynaptic transmitter release and synaptic plasticity and emerges early during adult neurogenesis | bioRxiv
https://www.treestoolbox.org/CNS2023_pareto_workshop/speakers.html
Optimality, evolutionary trade-offs, Pareto theory and degeneracy in neuronal modeling
https://www.biorxiv.org/content/10.1101/787911v1.full
A general principle of dendritic constancy – a neuron’s size and shape invariant excitability | bioRxiv
https://academic.oup.com/cercor/article/31/2/1008/5930850
Excess Neuronal Branching Allows for Local Innervation of Specific Dendritic Compartments in Mature Cortex | Cerebral Cortex | Oxford Academic
 

maybes

 

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4856390/
Optimal Current Transfer in Dendrites - PMC
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5474458/
Pyramidal Neurons in Different Cortical Layers Exhibit Distinct Dynamics and Plasticity of Apical Dendritic Spines - PMC
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6672209/
Dendritic Spikes in Apical Dendrites of Neocortical Layer 2/3 Pyramidal Neurons - PMC
https://www.nature.com/articles/s41598-017-09184-3
Branching morphology determines signal propagation dynamics in neurons | Scientific Reports

 https://www.nature.com/articles/s41467-021-22741-9
Diversity amongst human cortical pyramidal neurons revealed via their sag currents and frequency preferences | Nature Communications
https://elifesciences.org/articles/46876
Cell-type specific innervation of cortical pyramidal cells at their apical dendrites | eLife
https://www.science.org/doi/full/10.1126/science.aax6239
Dendritic action potentials and computation in human layer 2/3 cortical neurons | Science
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1011212
Biological complexity facilitates tuning of the neuronal parameter space | PLOS Computational Biology
https://alleninstitute.org/events/neuropixels-and-openscope-workshop/
2023 Neuropixels and OpenScope Workshop - Allen Institute
https://www.lifespan.io/news/extracellular-vesicles-from-stem-cells-reverse-senescence/
Extracellular Vesicles from Stem Cells Reverse Senescence
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8078853/
Classification of electrophysiological and morphological types in mouse visual cortex - PMC
https://www.biorxiv.org/content/10.1101/2020.03.31.018820v1.full
Human cortical expansion involves diversification and specialization of supragranular intratelencephalic-projecting neurons | bioRxiv
 

https://github.com/mousepixels/sanbomics_scripts/tree/main/RNAseq_method_comparison
sanbomics_scripts/RNAseq_method_comparison at main · mousepixels/sanbomics_scripts · GitHub
https://community.brain-map.org/t/introducing-the-allen-brain-cell-atlas/2444
Introducing the Allen Brain Cell Atlas! - How To / Allen Brain Cell (ABC) Atlas - Allen Brain Map Community Forum
https://knowledge.brain-map.org/data/LVDBJAW8BI5YSS1QUBG/summary
ABC Atlas - Mouse Whole Brain
https://community.brain-map.org/t/abc-atlas-user-guide-tools/2446
ABC Atlas User Guide: Tools - How To / Allen Brain Cell (ABC) Atlas - Allen Brain Map Community Forum
https://celltypes.brain-map.org/
Overview :: Allen Brain Atlas: Cell Types
https://portal.brain-map.org/explore/connectivity/synaptic-physiology/synaptic-physiology-experiment-methods/experimental-stimuli#intrinsic_stim
Synaptic Physiology Methods: Experimental Stimuli - brain-map.org
https://portal.brain-map.org/explore/connectivity/synaptic-physiology/synaptic-physiology-experiment-methods/cell-classification
Synaptic Physiology Methods: Cell Classification - brain-map.org
https://portal.brain-map.org/explore/connectivity/synaptic-physiology/synaptic-physiology-analysis-methods/synapse-characterization
Synaptic Physiology Methods: Synapse Characterization - brain-map.org
https://aisynphys.readthedocs.io/en/latest/matrix_analyzer.html#matrix-analyzer
Matrix Analyzer — aisynphys documentation
https://aisynphys.readthedocs.io/en/latest/matrix_analyzer.html#appendix
Matrix Analyzer — aisynphys documentation
https://portal.brain-map.org/explore/connectivity/synaptic-physiology/synaptic-physiology-analysis-methods/synapse-characterization#stp
Synaptic Physiology Methods: Synapse Characterization - brain-map.org
http://casestudies.brain-map.org/ggb#section_explorea
The Genetic Geography of the Brain :: Allen Institute for Brain Science
https://knowledge.brain-map.org/data?filter.program.title=CONTAINS~Allen%20Brain%20Map&limit=25&offset=0&sort=species.name~ASC
Data Catalog for Allen Institute's Brain Knowledge Platform
 

https://community.brain-map.org/t/how-to-download-raw-data-from-neuropixels-public-datasets/1923
How to download raw data from Neuropixels public datasets - How To / Brain Knowledge Platform - Allen Brain Map Community Forum
https://community.brain-map.org/t/new-signaling-mechanism-in-catecholaminergic-neurons/2507
New signaling mechanism in catecholaminergic neurons - OpenScope - Allen Brain Map Community Forum
https://www.sciencedirect.com/science/article/pii/S2211124719305777#bib48
Dissecting Sholl Analysis into Its Functional Components - ScienceDirect
https://scholar.google.com/citations?view_op=view_citation&hl=en&user=bcV-p4MAAAAJ&citation_for_view=bcV-p4MAAAAJ:YsMSGLbcyi4C
View article
 

Comment by Alex K. Chen (parrot) (alex-k-chen) on Should I test myself for microplastics? · 2023-08-10T22:20:18.114Z · LW · GW

I've written a lot about microplastics at https://forum.longevitybase.org/t/how-to-reduce-microplastics/126

and https://www.rapamycin.news/t/the-microplastics-thread-195-500-particles-gm-microplastics-in-apple-126-150-particles-gm-in-broccoli-coffee-etc/4734

Try to find the scientists who do research on it (eg microplastics in human blood), or exposomics people, or Snyder-lab biobank people (or UK/Danish/Estonian biobank samples), and ask them if you can get tested. Also biobank some of your own blood samples so you can test the change over time, even if you can't immediately test them.

(eg https://www.theguardian.com/environment/2022/mar/24/microplastics-found-in-human-blood-for-first-time (paper at https://www.sciencedirect.com/science/article/pii/S0160412022001258)  or https://pubs.acs.org/doi/epdf/10.1021/acs.est.3c04524 or https://publichealth.jhu.edu/2023/mapping-the-exposome-to-prevent-disease)

You can even apply for funding for microplastics testing (even if it's initially expensive) from the standard grants (eg https://www.sfrey.net/ ), because this is a socially important issue and making it easier for other people to do the same could help spread awareness fast enough for us to develop the necessary movement for finding alternatives earlier rather than later (minimizing microplastics levels is alignment-related because we don't want to lose our own human compute/intelligence for "dumb reasons" like lead and microplastics, and we know how many IQ points we've lost due to lead => reducing both lead and air pollution in the world was practically a Pareto-efficient improvement and came without a hit to the economy). This may be doubly important for the smartest people (where additional hits to their intelligence over their lifetime may cause them to lose reliability with age, even though we assume they might not [eg if you wonder what's happening to Yann LeCun...]). If microplastics hurt brain plasticity the same way lead does, they make people less able to change their minds/see their blind spots (doubly important in worlds where important decisions are made by older people).

Styrene is by far the most toxic microplastic, and samples of it have been found in drinking water, even drinking water in polyethylene bottles. 

[but all counts are undercounts if you don't report nanoplastics - see the Columbia University papers in 2024 that use a special form of stimulated Raman spectroscopy - and current microplastic testing services don't include nanoplastic counts]. The CU papers have even found that water filters are a source of nanoplastics.

Elizabeth got funding for testing iron deficiencies in vegans, so you can apply for similar funding using the same rationale (even testing for average blood levels over a span of a few years provides worth/use [especially if you test it in quantified-selfers who give the rest of their data to iollo, or in people who attend RAADfest], even if it takes longer to find health effects). James Clement has experience with taking blood samples from centenarians and is approachable, so you can ask him for advice.

(even testing trends in microplastic levels in common foods packaged in different kinds of food packaging over a period of time [eg canned food vs tetrapaks vs Trader Joe's vs Whole Foods] is socially important data)

It seems that microplastics are in soil, so figuring out how this relates to food sources is important.

https://pubs.acs.org/page/vi/new-approaches-microplastics-research?ref=vi_collection

Finally, the easiest way to reduce microplastics exposure is by far semaglutide (just by reducing appetite, and also reducing binge eating when traveling)

Comment by Alex K. Chen (parrot) (alex-k-chen) on Alex K. Chen's Shortform · 2023-08-10T21:45:15.478Z · LW · GW

Polls/surveys thread

Who do you subscribe to on reddit?

What are your scores on humanbenchmark.com?

Comment by Alex K. Chen (parrot) (alex-k-chen) on Could we breed/engineer intelligent parrots? · 2023-08-08T06:51:27.028Z · LW · GW

The smartest parrots (according to Michael Woodley's website) are the kea and the greater vasa parrot (he found cockatoos to be "middling" on the string-pulling task, but cockatoos seem to be more "generalist" than even African greys and seem better at tool use). Figuring out the genetic phylogeny of the "smarter parrots" vs the "dumber parrots" would be worthwhile (we've made similar papers + YouTube videos comparing regions of accelerated evolution in human genes vs chimpanzee brains, though the power would probably be lower since it's not super-clear which parrots are smarter).

Kea are smart enough to use touchscreens and easy enough to breed - there is a way to measure their "g-factor", as Michael Woodley is trying to do (see the sketch below). He is also in contact with the Vienna kea lab, where they do research on individual differences in kea problem-solving.

[Michael Woodley believes that there is a g-factor to birds, with corvids having unusually high g-factors. I don't know if he has used the g-factor to all broad metrics, including ones that go beyond string-pulling]
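
A minimal sketch of one common way a "g-factor" is extracted from a task battery: take the first principal component of individuals' scores across tasks. The data below is randomly generated for illustration; it is not Woodley's data or necessarily his method.

```python
import numpy as np

# Extract a crude "g-factor" as the first principal component of a task battery.
# Scores are randomly generated for illustration -- not real kea data.
rng = np.random.default_rng(0)
n_birds, n_tasks = 20, 5
g_true = rng.normal(size=n_birds)                         # latent ability
loadings = rng.uniform(0.5, 0.9, size=n_tasks)            # each task loads on g
scores = np.outer(g_true, loadings) + 0.5 * rng.normal(size=(n_birds, n_tasks))

z = (scores - scores.mean(axis=0)) / scores.std(axis=0)   # standardize each task
cov = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)                    # eigenvalues in ascending order
pc1 = eigvecs[:, -1]                                      # first principal component
g_est = z @ pc1                                           # each bird's estimated g

variance_explained = eigvals[-1] / eigvals.sum()
print(f"variance explained by first component: {variance_explained:.0%}")
print(f"correlation with the latent ability: {abs(np.corrcoef(g_est, g_true)[0, 1]):.2f}")
```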

What about just culturing parrot iPSC cells into neurons (where their growth might not be limited by the small size of the bird-brain skull)? Like those of the kea? Michael Woodley purchased a kea from a Spanish breeder - moreover, there are conservation + Geochurch-based reasons to culture/better understand iPSC neurons of endangered birds (ie these papers [just by culturing iPSC neurons alone] would be publishable for many reasons even if you couldn't get the neurons to do "interesting things"), plus organoids make it easier for us to do less animal testing.

 

[Michael Woodley - now figure out their individual differences, genotype them, do "all the metabolomic/transcriptomic + MRI" analysis on individual kea (John Marzluff has put wild crows in MRIs), and put them in a kea biobank, just as we have sequenced the genomes of all remaining kakapo.]

[Kea are also going through a population bottleneck due to their high death rates, though some have recently learned to breed in trees rather than on the ground, and some have learned to use sticks to set off stoat traps - these may improve selection for intelligence in kea on some timescale. Their small population size/bottleneck may affect their rate of brain evolution in some way, depending on how much genetic variation there is in the remaining populations of kea.] Kea are SUCH a weird bird that their inherently high entropy causes them to have high death rates - but they are sufficiently easy to breed in zoos that complete extinction seems unlikely, and some seem resourceful enough to stay out of death-causing levels of trouble, despite their population still decreasing/bottlenecking (possibly due to mammalian predation on their young, which is fixable given that they can adapt to breeding above ground).

There is also so little research on the brain architecture of parrots (I know Suzana Herculano-Houzel has done some, but neuron density is not enough now that connectomics is cheaper than before) that we still don't know how electrophysiological or synaptic connectivity properties vary from species to species [birds being much smaller makes the problem much more tractable than doing it for many marine mammals].

[related -https://www.anl.gov/article/contrary-to-expectations-study-finds-primate-neurons-have-fewer-synapses-than-mice-in-visual-cortex, https://www.genengnews.com/news/ion-channel-density-surprisingly-different-for-human-neurons/ ]

[also figure out what percent of brain of the more resourceful parrots is devoted to the pallium]

[redo the analysis on human accelerated regions for parrot brains]

https://www.sciencedaily.com/releases/2021/09/210902124922.htm

https://zuckermaninstitute.columbia.edu/finding-brainy-genes-make-us-human

[https://www.nature.com/articles/s41598-022-12953-4]

https://www.researchgate.net/publication/227464305_Rethinking_birdsong_evolution_Meta-analysis_of_the_relationship_between_song_complexity_and_reproductive_success

You do not need to change that many genes in order to induce island gigantism in a species, and while bigger brains are not necessarily smarter brains between lineages (ungulates have much larger brains than dog-like carnivores but don't appear any smarter, probably because their neuron architecture is less efficient), WITHIN LINEAGES brain size can matter (bigger dogs do appear to be smarter dogs - https://www.aaha.org/publications/newstat/articles/2019-02/are-big-dogs-smarter-than-small-dogs/ ).

Other relevant references:

https://www.quora.com/Which-bird-has-the-biggest-brain/answer/Alex-K-Chen

https://www.lesswrong.com/posts/eYFscbv5BJ8Fezauj/?commentId=LsgiACyug9AA6WL5e

It is argued here that a difference in neuronal density scaling is what differentiates primates from other mammals and is thus why large animals such  as elephants and whales are not more intelligent than humans despite their larger brains. Small mutations which affect neuronal density could thus lead to different humans having significantly different neuron counts (and hence scaling law IQs) despite having approximately the same gross brain volume.

(the scaling for parrots could be even better, but we just don't know yet. Worth investigating, given the stakes)
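
To make the scaling-law point concrete, here is a toy comparison. The exponents and constants are rough, assumed values chosen only to illustrate the shape of the argument; the actual fitted exponents differ by brain structure and clade (see Herculano-Houzel's work), and, as noted above, the parrot numbers are unknown.

```python
# Toy illustration of why brain mass alone does not predict neuron count.
# Exponents and constants below are rough, assumed values for illustration only;
# real fitted values differ by brain structure and clade.

def neurons(mass_g, k, exponent):
    """Neuron count as a power law of brain mass (illustrative)."""
    return k * mass_g ** exponent

# Assumed illustrative scaling: roughly linear for a "primate-like" rule,
# strongly sublinear for a "non-primate-like" rule.
primate_like = dict(k=1.0e9, exponent=1.0)
nonprimate_like = dict(k=1.0e9, exponent=0.6)

for mass in (1.0, 10.0, 1000.0):   # grams: small bird-ish, small primate-ish, elephant-ish
    n_p = neurons(mass, **primate_like)
    n_np = neurons(mass, **nonprimate_like)
    print(f"{mass:7.0f} g   primate-like: {n_p:.1e}   non-primate-like: {n_np:.1e}")
# At large brain sizes the sublinear curve falls far behind, which is the usual
# explanation for why very large non-primate brains are not proportionally smarter.
# Where parrots sit on such curves is, as noted above, still an open question.
```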

https://www.theguardian.com/science/2021/mar/24/scientists-discover-why-the-human-brain-is-so-big

https://www.sciencedirect.com/science/article/pii/S0960982218314179

(the genome of the kea has still not yet been sequenced..)

Wirthlin et al.'s (2018) comparison with 30 other bird species (not including corvids) revealed parrot-specific changes in gene expression that are associated with cognitive abilities in humans.

https://link.springer.com/article/10.1007/s10071-022-01733-2

Comment by Alex K. Chen (parrot) (alex-k-chen) on Alex K. Chen's Shortform · 2023-08-07T17:06:20.339Z · LW · GW

Generally interesting people I wish more people would appreciate

Arthur Juliani (see his Medium)/Adam Safron/Yohan John

Michael Woodley (kind of)

Jeremy Hadfield (https://imaginaries.substack.com/?utm_source=substack&utm_medium=web&utm_campaign=substack_profile)

Benjamin Anderson

http://augmentationlab.org (and their discord)

some neurofeedback people (still TRYING THIS OUT, the S/N ratio might not be high, but finding the right person alone can make it super-worth it). Need to try LENS neurofeedback (money for this would be helpful btw)

Stephen Frey (undersells himself)

SOME quantified-self'ers

https://twitter.com/i/lists/1546283699829936129

MAYBES: https://publichealth.jhu.edu/faculty/2308/thomas-hartung (on brain organoids and autism), Bobby Azarian, https://www.informationphilosopher.com/solutions/scientists/layzer/
https://www.readcodon.com/p/machine

 

Interesting LessWrong people

dkirmani

matolcsid (infra-bayesianism, which I could never understand)

http://niplav.site/index.html (another gwern! calls himself midwit). And more interested in neurotech

Ege Erdil

https://gormful.net/ (gaspode, he has SO MUCH taste)

p.b.

https://www.lesswrong.com/users/bhauth

https://www.lesswrong.com/users/beren-1 (blog @ https://www.beren.io/ ). Janus likes him a lot

Matthew Barnett

Jacob Cannell

Interesting links:

https://www.lesswrong.com/posts/KQSpRoQBz7f6FcXt3

Comment by Alex K. Chen (parrot) (alex-k-chen) on video games > IQ tests · 2023-08-07T17:00:28.619Z · LW · GW

Wow, what are your other scores on humanbenchmark? Have your skills changed with age? Do you play RTS or games other than standard FPS?

Comment by Alex K. Chen (parrot) (alex-k-chen) on video games > IQ tests · 2023-08-07T00:37:35.540Z · LW · GW

Which other games did you use to estimate the intelligence of people, and do you do it only by watching their learning curves or seeing their twitch.tv streams?

What older shooters do you do well in? Counterstrike is one of the hardest ever. Overwatch makes it easier for newbies to have even K/D ratios than many other games (TF2 historically also did, as did Star Wars Battlefront (3rd one), but not Call of Duty and especially not Battlefield)

Comment by Alex K. Chen (parrot) (alex-k-chen) on My current LK99 questions · 2023-08-06T04:25:48.267Z · LW · GW

At minimum, large amounts of fMRI data make it easier to conduct longitudinal investigations of what accelerates or reduces the rate of brain mass decline after age ~20 (eg would plasmalogens help? would taurine help? what are the associated metabolomics? what would an ANOVA of white matter hyperintensities against each of the metabolites in iollo show? a mass-parallel study of all of this is important [cf marton m from LBF2]). This would help preserve the clarity of thinking of experienced people, help people better vet the accuracy/helpfulness/informativeness of AI models over their lifetime, and reduce fluid intelligence decline with age - all relevant for helping humans keep up with machines, especially in a world where the average age [esp the age of people who have stayed in alignment for longer] is increasing to the point where that decline becomes relevant.
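
To illustrate the kind of analysis I mean (purely a hedged sketch, not a real study): below is a toy regression/ANOVA of a white-matter-hyperintensity burden score against one blood metabolite, adjusting for age. The data are synthetic and the column names (e.g. taurine_uM, wmh_burden) are hypothetical stand-ins for whatever a real metabolomics panel and imaging pipeline would report.

```python
# Hedged sketch: regress a WMH burden measure on a single metabolite while
# adjusting for age, then run an ANOVA on the fit. Synthetic data throughout.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(20, 85, n)
taurine = rng.normal(60, 15, n)  # hypothetical plasma level, uM
wmh = 0.05 * (age - 20) - 0.01 * taurine + rng.normal(0, 0.5, n)  # toy burden score

df = pd.DataFrame({"age": age, "taurine_uM": taurine, "wmh_burden": wmh})

model = smf.ols("wmh_burden ~ age + taurine_uM", data=df).fit()
print(model.summary().tables[1])   # per-covariate effect estimates
print(anova_lm(model, typ=2))      # Type-II ANOVA table

# A real mass-parallel version would loop this over every metabolite in the
# panel and correct for multiple comparisons (e.g. Benjamini-Hochberg).
```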

Humans have phenomenally poor memory (which worsens with age), and this causes MANY testimonies to be wrong and many people to say things that aren't true (and for alignment to happen we NEED people to be as truthful as possible, and especially not inaccurate due to dumb things like brain decline from excess blood glucose because they didn't combine acarbose/taurine with the shitty ultraprocessed food they do eat...)

RELEVANT:

https://www.frontiersin.org/articles/10.3389/fnagi.2022.895535/full

https://qualiacomputing.com/2022/10/27/on-rhythms-of-the-brain-jhanas-local-field-potentials-and-electromagnetic-theories-of-consciousness/

https://www.sciencedirect.com/science/article/pii/S0035378721006974

https://www.frontiersin.org/articles/10.3389/fnhum.2023.1123014/full

https://advancedconsciousness.org/protocol-003b-preparation-materials/

BTW all these threads are worth discussing on augmentationlab.org (and its discord!)

https://foresight.org/summary/owen-phillips-brain-aging-is-the-key-to-longevity/

Comment by Alex K. Chen (parrot) (alex-k-chen) on video games > IQ tests · 2023-08-05T21:50:32.742Z · LW · GW

Zachtronics games are great! (they have some integration with coding and later ones show your score relative to the rest of the distribution, though motivation may be higher for some than for others, since they aren't universally fun for people [Jordan Peterson once was skeptical of using video games to test conscientiousness, but Zachtronics games/Factorio are the kinds that require an involved effort that many don't quite have - even how you place things in city-builders is a test - eg earlier Anno games did not allow you to easily bulldoze buildings in the same way that later Anno games did]). As are spatially-involved 4x RTS games like Homeworld/Sins of a Solar Empire.

Other games I'd recommend (esp for one-offs + have learning curves easy enough for quick-multiplayer even with n00bs): Forts (close to optimal game length), Offworld Trading Company, XCOM2, Portal, Kerbal Space Program, anything that Chelsea Voss has played [they seem to eerily correspond to the definition of nerd-friendly games]. I would like to help organize a video game decathlon prioritizing new games most people haven't played (but which high-openness people like trying out!) some day.

AOE2 RM would be good if the first 13 minutes were not the same all the time - DM/Empire Wars is better.

Some short intense video games are great for warming up one's day!

[games are better as tests if they don't involve a massive competition scene where the number of hours invested as a child explains more of the variance in skill than raw "quickness" or raw generalization ability does]. Also, current-gen games are not great for measuring creativity. Since generative AI now lets us make new games with decreasing amounts of effort, we may soon be able to quickly make better games for measuring cognition

[it's beneficial to find games that allow you to access any puzzle from the start rather than forcing you to play through the entire sequence (even though some games have "finished savegame files") - also important to find games that don't give special advantages to people who "pay" for loot]

As data is cheap, it may be better for people to stream all their collective video game play somewhere (note twitch allows you to highlight everything in a video to save it before it gets deleted), and have the data analyzed for reaction time/learning speed/perseverance (esp amount of repetitive actions)/indicators of working memory/transfer learning (esp between RTS games)/etc. Starcraft II was a good test of skill ceilings (it gave you access to all your old replays, and its skill ceiling was so high that measured skill declined after age 25), and there was once a group of MIT students (including both Jacob Steinhardt and Paul Christiano [though their roommates got to diamond league more than Jacob/Paul did]) who played SC2 to the max back when SC2 was popular (sadly, SC2 is not popular anymore, and the replacement "hit games" aren't as cognitively demanding)
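
As a rough illustration of the "learning speed from replays" idea: the sketch below fits the classic power law of practice to synthetic per-match completion times and treats the fitted exponent as a crude learning-rate index. The data format and the metric are my assumptions; real replay or stream data would need its own parsing step.

```python
# Hedged sketch: estimate a learning-rate index from per-match telemetry
# (synthetic data). Fits completion_time ~ a * trial^(-b); a larger b means
# faster improvement per unit of practice.
import numpy as np
from scipy.optimize import curve_fit

def power_law_of_practice(trial, a, b):
    return a * trial ** (-b)

trials = np.arange(1, 51)
# Synthetic player: starts around 300 s per task, improves with noise.
times = 300 * trials ** (-0.35) + np.random.default_rng(1).normal(0, 8, trials.size)

(a_hat, b_hat), _ = curve_fit(power_law_of_practice, trials, times, p0=[300.0, 0.3])
print(f"initial performance ~{a_hat:.0f} s, learning exponent b ~{b_hat:.2f}")
# The exponent, not the final plateau skill, is arguably the quantity of
# interest if speed of learning matters more for life outcomes than skill.
```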

There were some mini-games I played when I was a test subject for some MIT BCS labs, and some of those involved tracking the motion and type of cards you couldn't see until later.

Video games can also be mapped to fNIRS/brainwave data to test cognitive effort/cognitive load + multiscale entropy, and they can be used to train responses to BCI input (in a fun way, rather than a boring way) - possibly even the kind of multimodal response that can distinguish between more than 4 distinct commands (I once did a test of this at Neurosity, but Neurosity later simplified)

Alex Milenkovic at curia.im is great to talk to on this!! (he has mapped my data on a Neurable while I played a random assortment of games - it would be important to use this to test player fatigue over time). Diversity/entropy of keyboard movements is also important (a good mark of brain quality/stamina is maintaining high diversity/entropy for hours on end, rather than ultimately spam-clicking the same things towards the very end of the game [eg AOE2 black forest maps])
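
A hedged sketch of that keyboard-entropy metric: compute the Shannon entropy of the action distribution in sliding time windows, so a collapse into late-game spam-clicking shows up as a drop in entropy. The event-log format below is made up purely for illustration.

```python
# Hedged sketch: windowed Shannon entropy of a player's input actions.
# The (timestamp, action) log format is a made-up stand-in for real telemetry.
from collections import Counter
from math import log2

def shannon_entropy(events):
    counts = Counter(events)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical log: varied hotkey use early, pure spam-clicking after t=3000s.
log = [(t, "attack-move" if t > 3000 else f"hotkey_{t % 7}") for t in range(0, 4000, 5)]

window = 600  # seconds
for start in range(0, 4000, window):
    actions = [a for t, a in log if start <= t < start + window]
    if actions:
        print(f"{start:>5}-{start + window:<5}s  entropy = {shannon_entropy(actions):.2f} bits")
```

On the synthetic log the entropy stays near log2(7) bits while input is varied, then drops to 0 in the last windows, which is the "spam-click at the end of a black forest game" signature.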

In an era where it becomes easier and easier to track the historical evolution of a player's skill up to point X, it may be possible (from screen recordings alone) to establish some index of cognitive variables. Planning (esp tracking one's mental representations of it) may be worth investigating even though it's harder to track than working memory (working memory can be estimated simply by seeing how long it takes to transfer mental representations from one window to another without relying on multiple consults with YouTube playthroughs).

[tracking historical evolution of player skill is important because speed of learning matters way more for life outcomes than actual skill - we still rarely see Starcraft or AOE2 professionals becoming twice-exceptional and "making it" elsewhere in life, even though I know Techers who were once very highly skilled in Starcraft or AOE2 (though not as many of those who played more cognitively involving ones like Sins or Homeworld, nevermind that Caltech used to be notorious for its massive WoW-playing population)]. The Shopify CEO once said he would hire people straight for Starcraft skill, and Charlie Cheever of Quora was also known for his Starcraft prowess.

Note that some brains seem to be really specialized for speed at video games and can't transfer-learn as well to other substrates - sometimes because they've been playing video games since they were so young that their brain organically grew into gaming and stayed annealed to it (rather than spending that time on programming or other higher-openness pursuits). It's healthier for one's environment to be so rich and diverse that games only become a "side curiosity" rather than something to get super-immersed in for months.

Some food for thought here:

https://www.guineapigzero.com/

https://twitter.com/ShedworksGreg/status/1417083081589239808

More relevant reading: https://neurocenter-unige.ch/research-groups/daphne-bavelier/, Nick Yee, Jane McGonigal (psychology/psychometrics of gaming is still a very small field so it's unlikely that the small number of experts in the field are interested in all the right things)

https://twitter.com/togelius (he's in MANY of the right spheres, though I know some respectable ppl disagree with his take on AI)

PYMETRICS (https://www.careers.ox.ac.uk/article/the-pymetrics-games-overview-and-practice-guidelines ), though the games are often "so lame" compared to real games (still WORTH using these as the fundamental components to transfer-learn onto real games) - it MAY be worth going on subreddits/Steam forums for less popular cognitively-involving games and asking people about "achievement bottlenecks" - achievements that fewer people tend to get, particularly the kind that NO AMOUNT OF ADDITIONAL EFFORT/gamification can overcome for those who are naturally less skilled at gaming (eg some missions have really hard bonus objectives or "very hard" difficulty ratings - even AOE2 and EU4 have lists of achievements that correspond to "nightmare mode" - and you want to find people who are just naturally skilled at getting to nightmare mode without investing extraordinary amounts of their precious time)

https://ddkang.github.io/ => video analytics (under Matei Zaharia, who was once an AOE2/AOM/EE forum megaposter)

[The global importance + Kardashev gradient of HeavenGames (AOMH/EEH/etc) will become recognized to LLMs/AGI due to its influence on Matei Zaharia alone (and it capturing a good fraction of his teenage years)]. Everything Matei touches will turn into Melange...

https://twitter.com/cremieuxrecueil/status/1690409880308293632

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6291255/

https://www.reddit.com/r/cognitiveTesting/

Comment by Alex K. Chen (parrot) (alex-k-chen) on What Does LessWrong/EA Think of Human Intelligence Augmentation as of mid-2023? · 2023-07-20T00:37:26.216Z · LW · GW

Discontinuous progress is possible (and in neuro areas it is way more possible than other areas). Making it easier for discontinuous progress to take off is the most important thing

[eg, reduced-inflammation neural interfaces].

MRI data can be used to deliver more precisely targeted ultrasound/tDCS/tACS (the effect sizes on intelligence may not be high, but they may still denoise brains (Jhourney wants to make this happen on faster timescales than meditation) and improve cognitive control/well-being, which still has huge downstream effects on most of the population)

Intelligence enhancement is not the only path [there are others, such as sensing/promoting better emotional regulation + neurofeedback, that have heavy disproportionate impact and are underinvestigated] (neurofeedback, in particular, seems to work really well for some people, but b/c there are so many practitioners and it's very hit-and-miss, it takes a lot of capital [more so than time] to see if it really works for any particular person)

Reducing the rate at which brains age is feasible, maximizes lifetime human intelligence/compute, and has lots of low-hanging fruit (healthier diets alone can give 10 extra years), especially because there is huge variation in how much brains age.

https://www.linkedin.com/posts/neuro1_lab-grown-human-brain-organoids-go-animal-free-activity-7085372203331936257-F8YB?utm_source=share&utm_medium=member_android

I'm friends with a group in Michigan that is trying to do this. The upside risk is unknown because there are so many unknowns (and so little investment, at the same time) - organoids also broaden the pool of people who can contribute, since contributors don't need to be math geniuses. There aren't really limits on how to grow organoids (a major question is whether or not one can grow them larger than the actual brain without causing them to have the degeneracies of autistic brains). More people use them for drug testing than for computation.

I know many are trying 2D solutions, but 3D is important too (https://scitechdaily.com/japanese-scientists-construct-complex-3d-organoids-with-ingenious-device/?expand_article=1&fbclid=IwAR0n429zFV4uQnyds94tuTCFbPNdSdJecpMreWilv6kpQTRacgw64LTTZp4)

Doing vasculature well is one of the hardest near-term problems (frontierbio is working on this, though some question whether or not the blood vessels are "real vessels"), but scaffolding is also a hard one (maybe there are different ways to achieve the same level of complexity with alternative scaffolding - https://www.nature.com/articles/s41598-022-16247-7 ). Thought Emporium used plant tissue exteriors for scaffolding - though this obvs isn't enough for complex brain tissue.

Bird brain organoids may be an interesting substrate b/c bird brains do more with limited volume than mammalian brains do, and also don't depend as much on 5-6 layer cortical architecture or complex gyrification/folding structure.

BTW, carbon-nanotube computing might be worth exploring. Here's a preliminary app: https://www.americanscientist.org/article/tiny-lights-in-the-brains-black-box

look up Thought Emporium!! Potentially tangentially relevant: https://www.nature.com/articles/s42003-023-04893-0, Morphoceuticals, https://www.frontiersin.org/articles/10.3389/fnins.2019.01156/full, augmentationlab.org, https://minibrain.beckman.illinois.edu/2022/05/06/webinar-review-understanding-human-brain-structure-and-function-with-cerebral-organoids/, https://www.spectrumnews.org/news/organoids-hint-at-origins-of-enlarged-brains-in-autistic-people/ (INSAR has several presentations from groups that grow autistic brain organoids)

(talins)!

[note: I know that current progress of organoid research seems like it will never go fast enough to "make it", but discontinuous rates of progress cannot be ruled out]

Comment by Alex K. Chen (parrot) (alex-k-chen) on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2023-06-26T23:58:09.112Z · LW · GW

Related reading - https://mattlakeman.org/2020/06/25/polygamy-human-sacrifices-and-steel-why-the-aztecs-were-awesome/

Comment by Alex K. Chen (parrot) (alex-k-chen) on Lessons on AI Takeover from the conquistadors · 2023-06-26T23:57:16.847Z · LW · GW

Related reading - https://mattlakeman.org/2020/06/25/polygamy-human-sacrifices-and-steel-why-the-aztecs-were-awesome/

Comment by Alex K. Chen (parrot) (alex-k-chen) on Change my mind: Veganism entails trade-offs, and health is one of the axes · 2023-06-14T04:17:27.719Z · LW · GW

Try iollo blood tests too, they're new and can test hidden deficiencies

Comment by Alex K. Chen (parrot) (alex-k-chen) on Launching Lightspeed Grants (Apply by July 6th) · 2023-06-08T13:54:43.204Z · LW · GW

Does "General Support" for a person count as a "project" to apply for?