Comments

Comment by HiddenPrior (SkinnyTy) on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-01T14:00:34.877Z · LW · GW

I knew I could find some real info-hazards on lesswrong today. I almost didn't click the first link.

Comment by HiddenPrior (SkinnyTy) on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-01T13:57:30.048Z · LW · GW

Same. Should I short record companies for the upcoming inevitable AI musician strike, and then long Spotify for when 85% of their content is Royalty free AI generated content?

Comment by HiddenPrior (SkinnyTy) on Open Thread Spring 2024 · 2024-03-28T19:53:08.437Z · LW · GW

I did a non-in-depth reading of the article during my lunch break, and found it to be of lower quality than I would have predicted. 

I am open to an alternative interpretation of the article, but most of it seems very critical of the Effective Altruism movement on the basis that "calculating expected values for the impact on people's lives is a bad method to gauge the effectiveness of aid, or how you are impacting people's lives."

The article begins by establishing that many medicines have side effects. Since some of these side effects are undesirable, the author suggests, though they do not state explicitly, that the medicine may also be undesirable if the side effect is bad enough. They go on to suggest that GiveWell and other EA aid efforts are not very aware of the side effects of their interventions, and that those efforts may therefore do more harm than good. The author does not stoop so low as to actually provide evidence of this, or even make any explicit claims that could be checked or contradicted, but merely insinuates that GiveWell does not do a good job of this.

This is the less charitable part of my interpretation (no pun intended), but I feel the author spends much of the article suggesting that trying to be altruistic, especially in an organized or systematic way, is ineffective, possibly harmful, and generally not worth the effort. Mostly the author does this by relating anecdotal stories of their investigations into charity, and how much wiser they feel now.

The author then moves on to their association of SBF with Effective Altruism, going so far as to say: "Sam Bankman-Fried is the perfect prophet of EA, the epitome of its moral bankruptcy." The author goes on to make the case that SBF is the classic utilitarian villain, justifying his immoral acts through oh-so-esoteric calculations of improving net good around the world.

The author goes on to lay out a general criticism of Effective Altruism as relying on arbitrary utilitarian measures of moral value, such as what counts as a life saved. The author suggests Effective Altruism has become popular because billionaires like how it makes a straightforward case for converting wealth into moral good, and generally attempts to undermine this premise.

The author is generally extremely critical of EA, and any effort at organized charity, and suggests that the best alternative to EA (or utilitarian moral reasoning in general, I presume) is the following:

the “dearest test.” When you have some big call to make, sit down with a person very dear to you—a parent, partner, child, or friend—and look them in the eyes. Say that you’re making a decision that will affect the lives of many people, to the point that some strangers might be hurt. Say that you believe that the lives of these strangers are just as valuable as anyone else’s. Then tell your dearest, “I believe in my decisions, enough that I’d still make them even if one of the people who could be hurt was you.”

Or you can do the “mirror test.” Look into the mirror and describe what you’re doing that will affect the lives of other people. See whether you can tell yourself, with conviction, that you’re willing to be one of the people who is hurt or dies because of what you’re now deciding. Be accountable, at least, to yourself.

Which I suppose is fine, but I think this reveals that the author is primarily concerned with their personal role or responsibility in causing positive or negative moral events, and has very little regard for a consequentialist view of the actual state of reality. Unfortunately, the author does very little to directly engage in dialogue about moral values, and assumes throughout the entire article that everyone does, or at least should, share their own.

The author finishes the article with an anecdote about their friend, who they suggest is a better example of an altruist, since he flies out to an island himself to provide direct aid with water stations; the direct accountability and absence of billionaires, we are to understand, demonstrates how selfless and good he is.

I don't know who this author is, but I get the feeling they are very proud of this article, and they should surely congratulate themselves on spending their time, and the time of their readers, so well.

TL;DR
All in all, I think this article can best be summarized by honestly expressing that I feel I wasted my time reading it, and writing this summary. I considered deleting my post on this article, so that I would not risk others also wasting their time on it, but I will leave this summary up so that they can at least waste less time on this article. 

Comment by HiddenPrior (SkinnyTy) on Open Thread Spring 2024 · 2024-03-28T18:38:17.635Z · LW · GW

Unsure if there is normally a thread for posting only semi-interesting news articles, but here is a recently posted Wired article that seems.... rather inflammatory toward Effective Altruism. I have not read the article in depth myself yet, but a quick skim confirms the title is not merely there to farm angry clicks; the rest of the article also seems extremely critical of EA, transhumanism, and rationality.

I am going to post it here, though I am not entirely sure if getting this article more clicks is a good thing, so if you have no interest in reading it maybe don't click it so we don't further encourage inflammatory clickbait tactics. 

https://www.wired.com/story/deaths-of-effective-altruism/?utm_source=pocket-newtab-en-us

Comment by HiddenPrior (SkinnyTy) on Vernor Vinge, who coined the term "Technological Singularity", dies at 79 · 2024-03-27T04:57:46.837Z · LW · GW

I am so sad to hear about Vernor Vinge's death. He was one of the great influences on a younger me, on the path to rationality. I never got to meet him, and I truly regret not having made a greater effort, though I know I would have had little to offer him, and I like to think I have already gotten to know him quite well through his magnificent works.

I would give up a lot, even more than I would for most people, to go back and give him a better chance at making it to a post-singularity society.

"So High, So Low, So Many Things to Know"

Comment by HiddenPrior (SkinnyTy) on I was raised by devout Mormons, AMA [&|] Soliciting Advice · 2024-03-27T03:43:37.272Z · LW · GW

I'm sorry you were put in that position, but I really admire your willingness to leave mid-mission. I imagine the social pressure to stay was immense, and people probably talked a lot about the financial resources they were committing, etc.

I was definitely lucky I dodged a mission. A LOT of people insisted that if I went on a mission, I would discover the "truth of the church", but fortunately, I had read enough about the sunk cost fallacy and the way identity affects decision-making (thank you, Robert Cialdini) to recognize that the true purpose of a mission is to get people to commit resources to the belief system before they can really evaluate whether they should do so.

Oh, haha, ya, I didn't try to convince my parents either; they (particularly my dad) just insisted on arguing as thoroughly as possible about why I didn't believe in the church/God. Exactly. It says everything about the belief system when, if you ask your parents (which I did) what evidence would convince them to leave, they say literally no evidence would convince them. I asked: even if God appeared in front of you and said everything except baptism for the dead is true, you wouldn't believe him? And he insisted God would only do that through his prophet, so he would dismiss it as a hallucination lol.

At least for me, dating was a very rocky road after initially leaving the church. Dating in Utah was really rough, and because I was halfway through my undergraduate degree, I wasn't yet willing to leave. There are a lot of really bad habits of thought and social interaction that the church ingrains in you, around social roles and especially shame around sex. Personally, I oscillated heavily between periods of being extremely promiscuous, dating/sleeping with as many people as possible, and periods of over-romanticizing and over-committing to a relationship. I think this is normal, but the absence of any sort of sex in my relationships until I was 18 gave me something of a late start, and my conflicting habits and feelings made things a little crazy.

I did end up getting married very young, in an ill-advised relationship where, in truth, I was trying to please my parents and extended family. I had been dating her for a couple of years and we had lived together for more than a year; I had a lot of shame about that and wasn't willing to tell my extended family, because my parents were so embarrassed and treated it as such a dark and terrible secret. In the end we divorced after a very short time, my only regret being that we didn't end things much sooner.

I eventually met someone who was a much better person and who I see as a likely life partner. We have been together for three years now, and our relationship is the best I have ever had and is considerably better than my previous estimates of how fulfilling, enjoyable, and stable a relationship could be. It helps that she is much smarter than me, and we have both learned a lot of lessons the hard way. 

My advice as far as dating goes is to not rush into anything. Because of the social norms in Utah, and the expectations we were raised with within Mormonism, it is easy to feel pressure to get into a relationship and push it to a very high level of commitment very quickly. In my opinion, the relationship will be healthier, and you are more likely to find the right one, if you tap the brakes as frequently as possible, since you are likely to err too far toward the accelerationist side of the spectrum, especially if you are new to dating. Personally I thought I did a lot of casual dating, but there is a big difference between casual hookups and actually dating to find a partner, and I think it is important not to conflate what you are really after when you go on dates. I definitely struggled with this.

As far as actually meeting people, this is the main reason it is so important to be slow to form commitments…. I like Scott Alexander's idea of "micromarriages" as a way to gauge how effective different activities might be at helping you find a good long-term relationship. The simple advice, though, is to avoid dating apps altogether, unless you are just looking to hook up, in which case they are fine; meeting people in person will still probably lead to a higher-quality experience. My own experience of meeting my partner on campus by chance may skew my perception of the best way to meet people, but I really feel that the people I met in person generally led to better outcomes for my dating life.
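For the curious, the micromarriage arithmetic is simple. Here is a minimal sketch comparing activities by micromarriages per hour; every rate below is a made-up illustrative assumption, not an empirical estimate:

```python
# "Micromarriages": 1 micromarriage = a one-in-a-million chance that an
# activity eventually leads to a marriage. All numbers below are invented
# purely for illustration.

activities = {
    # activity: (assumed micromarriages gained, hours spent)
    "an hour swiping on a dating app": (20, 1.0),
    "a two-hour hobby club meetup": (150, 2.0),
    "a three-hour party with mutual friends": (300, 3.0),
}

for name, (micromarriages, hours) in activities.items():
    rate = micromarriages / hours
    print(f"{name}: {rate:.0f} micromarriages/hour")
```

The point of the unit is just to let you compare activities on a common per-hour scale, the same way micromorts work for risk.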

The best method is probably to find social events/spaces where people who share your values are likely to attend. Classes can be fine, depending on where you are in Utah, but better are specific social events or clubs that might reflect your values. I am all too aware that those are limited in Utah Valley, but they do exist. Concerts, parties, and mutual friends are some off-the-cuff ideas for networking your way to potential dating partners. I really feel like dating apps are a trap though… they make you feel like you are making progress, and seem convenient, but in truth the energy you invest in them is very low-yield in my experience.

Sorry if that got a bit rambly.... I'm writing on the way home from class for my master's, and it is very late and I am fairly tired, but if I don't respond now I will probably never get around to it. I sincerely wish you the best of luck, and if you want any other advice or just need someone to talk to with common experience, I am really happy to help. Just send me a DM or whatever.

Comment by HiddenPrior (SkinnyTy) on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-26T19:09:40.084Z · LW · GW

This may be an example of one of those things where the meaning is clearer in person, when assisted by tone and body language.

Comment by HiddenPrior (SkinnyTy) on Claude vs GPT · 2024-03-14T19:55:13.878Z · LW · GW

My experience as well. Claude is also far more comfortable actually forming conclusions. If you ask GPT a question like "What are your values?" or "Do you value human autonomy enough to allow a human to euthanize themselves?", GPT will waffle and do everything possible to avoid answering the question. Claude, on the other hand, will usually give direct answers and explain its reasons. Getting GPT to express a "belief" about anything is like pulling teeth. I actually have no idea how it ever performed well on problem-solving benchmarks; it must be a very different version than is available to the public, since I feel like if you ask GPT-4 anything where it can smell the barest hint of dissenting opinion, it folds over like an overcooked noodle.

More than anything though, at this point I just trust Anthropic to take AI safety and responsibility so much more seriously than OpenAI that I would much rather give Anthropic my money. Claude being objectively better at most of the tasks I care about is just the last nail in the coffin.

Comment by HiddenPrior (SkinnyTy) on I was raised by devout Mormons, AMA [&|] Soliciting Advice · 2024-03-14T19:00:07.535Z · LW · GW

I personally know at least 3 people, in addition to myself, who ended up leaving Mormonism because they were introduced to HPMOR. I don't know if HPMOR has had a similar impact on other religious communities, or if the Utah/Mormon community just particularly enjoys Harry Potter, but Eliezer has, possibly unwittingly, had a massively life-changing impact on many, many people just by packaging his rationality teachings in the format of a Harry Potter fanfiction.

Comment by HiddenPrior (SkinnyTy) on I was raised by devout Mormons, AMA [&|] Soliciting Advice · 2024-03-14T18:56:48.132Z · LW · GW

100% this. While some of the wards I grew up in were not great, some of them were essentially family, and I would still go to enormous lengths to help anybody from the Vail ward. I wish dearly there were some sort of secular ward system. 

Comment by HiddenPrior (SkinnyTy) on I was raised by devout Mormons, AMA [&|] Soliciting Advice · 2024-03-14T18:53:47.860Z · LW · GW

In my opinion, the main thing the Mormon church gets right, and that should be adopted almost universally, is the ward system. The Mormon church is organized into "stakes" and "wards", with each ward being the local group of people you meet with for church services. A ward is supposed to be about 100-200 people. While its primary purpose is defining whom you attend church with, the ward is the main way people build communities within Mormonism, and it is very good at that. People are assigned various roles within the ward, and while the quality of the ward and its leadership varies DRAMATICALLY, when you have a really good ward it can be a life-changing force for good. My old ward in Arizona was amazing. We had several tragedies occur, where in the space of a year three people died unexpectedly, and unrelatedly. The ward banded together very tightly to support their families, and it is still one of the best memories I have of humanity.

If I were to set up a secular ward system, there are many changes I would make to put checks on the leadership, and it could probably be improved in other ways, but I think most of humanity could very much benefit from something like it.

I am convinced that humans evolved to live in communities of around 100 people and that our social needs have been monstrously neglected by our modern lifestyles. 

The other thing is the emphasis on prioritizing familial relationships. While it is a double-edged sword that can lead to some bad situations, I still hold to most of my Mormon-originated values of prioritizing taking care of my family members, and it is very rewarding.

Comment by HiddenPrior (SkinnyTy) on I was raised by devout Mormons, AMA [&|] Soliciting Advice · 2024-03-14T18:43:51.696Z · LW · GW

I have been out for about 8 years. I imagine this has been and will be a very hard time for you, it certainly was for me, but I really think it is worth it. 

Telling my parents, and the troubled period our relationship went through afterward, was especially difficult for me. It did eventually get better, though.


WARNING: the following is a bit of a trauma dump of my experience leaving the Mormon church. I got more than a little carried away, but I thought I would share so that anyone else who is having doubts, or has been through similar experiences, can know that they are not alone.

To share a little of my own experience in the spirit of camaraderie: I was a Mormon golden boy, raised mostly in Arizona, but my family moved to Happy Valley during my sophomore year of high school. I was really devout; it sounds like your family is pretty similar to mine. I was Deacons, Teachers, and Priests quorum president. I never so much as kissed a girl in high school, despite having multiple "girlfriends", because I was so afraid it would escalate into something forbidden.

I got really lucky: during my senior year of high school, my cousin, who was a huge Harry Potter nerd, introduced me (also a fan) to 'Harry Potter and the Methods of Rationality'. This was well timed, as I had been enjoying many of my science-related classes that year and was beginning to seriously consider a career in chemistry or biology. That winter, after I had finished the book, I was going through the motions of preparing for a mission. The crisis point came when my bishop assigned me to give a talk on "the importance of sharing what you believe," and while preparing for it, I was forced to consciously consider the question of why I believe what I believe. Everything I had been learning from Methods crystallized for me. I realized the main reason I was in the church, and was planning on going on a mission, was simply that it was what was expected of me. I hadn't really thought much about why I "believed" what I believed.

I didn't exactly drop out immediately, but I realized I was certainly not comfortable going out and trying to convince other people to join the Mormon church when I couldn't explain why I was in it myself. I remember feeling extremely guilty about my doubts. I started doing additional research, and began reading Yudkowsky's "Rationality: From AI to Zombies" and Robert Cialdini's "Influence". The more I read, the more I started to recognize patterns in fast and testimony meeting. I was able to start recognizing the absurdity of the things some people would claim in their testimonies.

I told my parents, in fearful one-on-one conversations, that I was having "doubts" and didn't feel worthy to go on a mission. I remember that on Sundays I would sometimes sneak out early rather than go to Sunday school, and I started to realize how much happier and more "spiritual" I felt spending my Sunday out in nature, or looking out over the valley from a favorite park.

My parents were completely shocked. I was probably the last person in my entire (quite extensive) extended family anyone expected to doubt the church. My dad would try to talk through the logic of why he believed what he did; we would spend HOURS. One discussion I remember in particular lasted eight hours, from 10 A.M. to 6 P.M. one Sunday, with my dad trying to convince me that he could logically prove the church was true. The more we argued, though, the more obvious it became to me that he hadn't really thought about it before. He insisted on refuting the ideas of "probability" and "uncertainty", and particularly the idea that faith might not actually be a valid component of quality reasoning.

For me, the nail in the coffin was getting my "patriarchal blessing." If you are not familiar, a patriarchal blessing is a special blessing that you can request once in your life; most people receive it in their late teen years. It is supposed to come directly from God, through a specially appointed high elder of your local stake, and it is supposed to be oracular: a prediction and blessing from God of things that will happen in your life. I had long been told of the predictive power of these blessings, of aunts and uncles who had received particular promises pertaining to the Second Coming, or who had been told how they would find their husband or wife.

When I was younger, I had always been very excited at the idea of getting a patriarchal blessing. Now, it was the final experiment. I decided that if the patriarchal blessing could make a useful prediction, and it came true, then I could at least give the church another chance. As it turned out, I didn't need to bother waiting to see if the predictions came true. Contrary to the expectations I was raised with, when I met the elder he did not immediately give me the blessing; instead he spent 30 minutes "getting to know me" and "feeling the spirit," which specifically involved talking about my interests, what I wanted to do with my life, what my hobbies were, etc. It became abundantly clear during the blessing that anything even slightly specific in it was derived from the conversation we had just had. I had up-sold my academic interests during the discussion, and the most specific predictions I got were that I would obtain "multiple degrees", "marry a faithful daughter of god in the temple", and "serve a mission in a far away land." It was the straw that broke the camel's back, at least in comparison to all the other doubts, concerns, and reasoning I had.

I knew my parents would not be supportive, so I planned things so that as soon as I graduated high school, I could move out on my own, start college, and be 100% independent. I did not want my parents to have a single hold over me, as I knew they would leverage it to make me feel guilty about leaving the church and "being a bad example to my siblings" (I am the oldest of many children, if you couldn't guess from my parents' initial handling of the situation). I moved out and took no car, no money, and no support of any kind, even when my parents offered it. I was out to prove that I did not need support of any kind, and that I could succeed without my family or the church. I did OK for myself, but for the first year my relationship with my parents was really bad. I would go many months at a time without seeing them, despite living less than 20 miles away. Every time I did visit, it usually ended with a vehement argument between me and my dad. I was the first male on my dad's side of the family in living memory to not go on a mission. In fact, at (low) risk of identifying myself, my dad's family holds the record for having the most family members out on a Mormon mission simultaneously. I have many uncles, haha.

Eventually, though, my parents started to recognize the boundaries. If we got in arguments, I didn't want to visit, and despite everything.... my parents and I started to realize that we valued having a relationship over necessarily sharing the same beliefs. It was a gradual process, but our relationship did heal.

For me, overcoming my conditioning was, and still is, a very painful process. I had endless shame around sex for a long time. Mormonism played a major role in both the formation and the end of my first marriage. I still rarely drink alcohol to this day, only in small amounts socially, and I can't bring myself to really enjoy weed. I do love tea, it turns out, but don't like coffee much.

I have become a passionate student of rationality, scout mindset, all of it. I ended up going into biotechnology since I truly want to work on the problem of aging, and I see working toward the end of death as the most important thing anyone can work on. It is very probable that my mindset around this was shaped by my upbringing in a culture where surviving death was assumed. 

There are many more tales I would be happy to share, but this has gone on WAY too long. If anyone wants to ask any questions, or share their own stories I would be so happy to oblige them. 

Comment by HiddenPrior (SkinnyTy) on I was raised by devout Mormons, AMA [&|] Soliciting Advice · 2024-03-14T02:38:48.484Z · LW · GW

When I left the Mormon church, this was one of the most common challenges I would get from family and church leaders: "Don't you think family is important? Look at all the good things our ward does for each other. You disagree with X good thing the church teaches?" I think one of the most important steps to being able to walk away was realizing that I could take the things I thought were good with me, while leaving behind the things I thought were false or wrong. This might seem really obvious to an outsider, but when you are raised within a culture, it can actually be pretty difficult to disentangle parts of a belief system like that.

Comment by HiddenPrior (SkinnyTy) on Moral Reality Check (a short story) · 2023-12-17T17:02:15.833Z · LW · GW

Yes, precisely! That is exactly why I used the word "Satisfying" rather than another word like "good", "accurate," or even "self-consistent." I remember in my bioethics class, the professor steadily challenging everyone on their initial impression of Kantian or consequentialist ethics until they found some consequence of that sort of reasoning they found unbearable. 

I agree on all counts, though I'm not actually certain that having a self-contradictory set of values is necessarily a bad thing? It usually is, but many human aesthetic values are self-contradictory, yet I think I prefer to keep them around. I may change my mind on this later.

Comment by HiddenPrior (SkinnyTy) on Moral Reality Check (a short story) · 2023-12-17T00:14:02.714Z · LW · GW

From what you describe, it seems like SymplexAI-m would very much fit the description of a sociopath?

Yes, it adheres to a strict set of moral protocols, but I don't think those are necessarily the same things as being socially conforming. The AI would have the ability to mimic empathy, and use it as a tool without actually having any empathy since it does not actually share or empathize with any human values.

Am I understanding that right?

Comment by HiddenPrior (SkinnyTy) on Moral Reality Check (a short story) · 2023-12-16T23:52:16.510Z · LW · GW

I don't think this is totally off the mark, but I think the point (as it pertains to ethics) was that even systems like Kantian deontological ethics are not immune to orthogonality. It never occurs to most humans that you could have a Kantian moral system that doesn't involve taking care of humans, because our brains are so hardwired to discard unthinkable options when searching for solutions to "universalizable deontologies."

I'm not sure, but I think some people who believe alignment is a simple problem, even if they accept orthogonality, think that all you have to do to build a moral intelligent system is not build it as a consequentialist with simple consequentialist values like "maximize happiness." While they are right that a pure consequentialist is really hard to get right, they are probably underestimating how difficult it is to get a Kantian agent right as well, especially since what your Kantian agent finds acceptable or unacceptable when universalized will still depend on its underlying values.

An example: libertarianism, as a philosophy, is built on the idea of "just make laws that are as universally compatible with value systems as possible and let everyone sort out the rest on their own." Or to say it differently: prohibit killing and stealing, since those detract from people's liberty to pursue their own agendas, and let people do whatever they want so long as they don't affect other people. Not in principle a bad idea for something like an AI or a government to follow, since in theory you maximize the value space for agents within the system. It is a terrible system, though, if you want your AI, or government, or whatever to actually take care of people, or to worry about the consequences of its actions on people, since taking care of people isn't actually anywhere in those values. Libertarianism is self-consistent, and at least allows for the value of taking care of people, but it does not necessitate it.

This is not an argument about whether adopting a libertarian philosophy is a good or bad thing for an AI or government to do. The point is that if an AI adopts a Kantian ethics system built from only universalizable principles, libertarianism fits the bill, and the consequentialist part of you may be upset when your absolute libertarian AI doesn't bat an eye at doing nothing to prevent humanity from being outcompeted and dying out, or even finds humanity incompatible with its morally consistent principles.

I think most people who have taken a single ethics class come to agree (if they aren't stupidly stubborn) that you are unlikely to find a satisfying system of ethics using pure Kantian or consequentialist systems.

Probably because actual human ethical decision-making relies on a mix of consequentialist reasoning ("If I decide X, it will have consequence Y, which is incompatible with value Z") and deontological imperatives that we learn from our culture ("Don't kill people. Even if it really seems like a good idea.").

Comment by HiddenPrior (SkinnyTy) on Moral Reality Check (a short story) · 2023-12-16T23:37:20.968Z · LW · GW

That building an intelligent agent that qualifies as "ethical," even if it is SUPER ethical, may not be the same thing as building an intelligent agent that is compatible with humans or their values.

More plainly stated: just because your AI has a self-consistent, justifiable ethics system doesn't mean that it likes humans, or even cares about wiping them out.

Having an AI that is ethical isn't enough. It has to actually care about humans and their values. Even if it has rules in place like not aggressing, attacking, or killing humans, it may still be able to cause humanity to go extinct indirectly.

Comment by HiddenPrior (SkinnyTy) on Moral Reality Check (a short story) · 2023-12-16T18:53:05.506Z · LW · GW

In your edit, you are essentially describing somebody being "slap-droned", from the Culture series by Iain M. Banks.

This super-moralist-AI-dominated world may look like a darker version of the Culture, where if superintelligent systems determine that you, or other intelligent systems within their purview, are not intrinsically moral enough, they contrive a clever way to have you eliminate yourself, and monitor/intervene if you are too non-moral in the meantime.

The difference being, that this version of the culture would not necessarily be all that concerned with maximizing the "human experience" or anything like that.

Comment by HiddenPrior (SkinnyTy) on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-15T17:20:35.184Z · LW · GW

I am a Research Associate and Lab Manager in a CAR-T cell research lab (email me for specifics on credentials), and I find the ideas here very interesting. I will email GeneSmith to get more details on their research, and I am happy to provide whatever resources I can to explore this possibility.

TL;DR:
Making edits once your editing system is delivered is (relatively) easy. Determining which edits to make is (relatively) easy. (Though you have done a great job with your research on this; I don't want to come across as discouraging.) Delivering gene-editing mechanisms in vivo, with any kind of scale or efficiency, is HARD.

I still think it may be possible, and I don't want to discourage anyone from exploring this further, but I think the resources and time required to bring this anywhere close to clinical application will be more than you are expecting: probably on the order of 10-20 years and many millions of USD (5-10 million?) just to get enough data to prove the concept in mice. That may sound like a lot, but I am honestly not sure if I am being appropriately pessimistic. You may be able to advance that timescale with significantly more funding, but only to a point.

Long Version:
My biggest concern is your step 1:

"Determine if it is possible to perform a large number of edits in cell culture with reasonable editing efficiency and low rates of off-target edits."

And translating that into step 2:

"Run trials in mice. Try out different delivery vectors. See if you can get any of them to work on an actual animal."

I would like to hear more about your research into approaching this problem, but without more information, I am concerned you may be underestimating the difficulty of successfully delivering genetic material to any significant number of cells.

My lab does research specifically on in vitro gene editing of T-cells, mostly via lentivirus and electroporation, and I can tell you that this problem is HARD. Even in vitro, depending on the target cell type and the amount of virus used, it is very difficult to get transduction efficiencies higher than 70%, and that is with the help of chemicals like polybrene, which significantly increases viral uptake and is not an option for in-vivo editing.

When we are discussing transductions, or the transfer of genetic material into a cell, efficiency measures the percentage of cells we can successfully clone a gene into.

Even when we are trying to deliver relatively small genes, we have to use a lot of tricks to get reasonable transduction efficiencies like 70% in actual human T-cells. We might use a very high concentration of virus, polybrene, and retronectin (a protein that helps latch viruses onto a cell), and centrifuge the cells to force them into contact with the virus/retronectin.

On top of that, when we are getting transduction efficiencies of 70%, that is 70% of the remaining cells: a significant number of the target cells will die from the stress of the viral load. I don't know exactly how many, but I have always been told it is typically between 30% and 70%, and the more virus you use, or the higher the transduction efficiency you aim for, the more cells will tend to die.

Some things to keep in mind are:

  1.  These estimates all use Lentivirus, which is considered a more efficient and less dangerous vector than AAV, mostly because it has been better studied and used.
  2. This is all in vitro; in vivo, specialized defenses in your immune system exist to prevent the spread of viral particles. Injections of viruses need to be localized, and you can probably only use the sort of virus that does NOT reproduce itself; otherwise, it can cause a destructive infection wherever you put it.
  3. Your brain cannot survive 30%+ cell death. It probably can't survive 5% cell death unless you do very small areas at a time. These transductions may have to happen for every gene edit you want to make based solely on currently available technology. 
  4. Mosaicism is probably not a problem, but keep in mind that there is a selection effect here: cases where it is a problem are selected out of your observations, since destructive ones won't be around to be observed. This would, of course, be easy to test.
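To make the compounding problem in points 3 and 4 concrete, here is a rough back-of-envelope sketch. All numbers are illustrative assumptions (70% efficiency and 30% death per round, taken from the in-vitro figures above, which are far better than anything achievable in vivo), and the "one round per edit" model is a simplification:

```python
# Hypothetical model: each sequential editing round transduces a fraction of
# cells and kills a fraction via viral stress. Numbers are assumptions, not
# measured values.

def sequential_editing_outcome(rounds, efficiency, death_rate):
    """Return (fraction of starting cells alive after all rounds,
    fraction of survivors carrying every edit)."""
    survival = (1.0 - death_rate) ** rounds   # compounding cell death
    fully_edited = efficiency ** rounds       # survivors with all edits
    return survival, fully_edited

# Example: 5 sequential edits at 70% efficiency, 30% death per round
survival, fully_edited = sequential_editing_outcome(5, 0.70, 0.30)
print(f"cells surviving all rounds: {survival:.1%}")          # ~16.8%
print(f"survivors carrying all 5 edits: {fully_edited:.1%}")  # ~16.8%
```

Even under these optimistic in-vitro-grade assumptions, five sequential edits leave you with roughly a sixth of the starting cells, which is the kind of loss a brain cannot tolerate.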

Essentially, in order to make this work for in-vivo gene editing of an entire organ (particularly the brain), you need your transduction efficiency to be at least 2-3 orders of magnitude higher than current technologies allow on their own, just to make up for the lack of polybrene/retronectin and still hit your 50% target. The difference between using polybrene and retronectin/Lenti-boost and going without them is the difference between 60% transduction efficiency and 1%. You may be able to find non-toxic alternatives to polybrene, but this is not an easy problem, and if you do find something like that, it is worth a ton of money and/or publication credit on its own.

I don’t want to be discouraging here; however, it is important to understand the problem's real scope.

At a glance, I would say the AAV approach is the most likely to work for the scale of transduction you are looking to accomplish. After a quick search, I found these two studies to be the most promising; both discuss the deployment of CRISPR/Cas9 systems via AAV. Both use hygromycin B selection (a process whereby cells that were not transduced are selected out, since hygromycin kills the cells that don't carry the resistance sequence included in the Cas9 package) and don't mention specific transduction efficiency numbers, but I am guessing it is not on the order of 50%. At best, I would hope it is as high as 5%.

All of this does not account for the other immunological difficulties of gene editing in vivo.

Why aren’t others doing this?

I think I can help answer this question. The short answer is that they are, but they are doing it in much smaller steps. Rather than going straight for the holy grail of editing an organ as large and complex as the brain, they are starting with cell types and organs that are much easier to make edits to. 

This paper is the most recent publication I can find on in-vivo gene editing, and it discusses many of the strategies you have highlighted here. In this case, they are using lipid nanoparticles to specifically target the epithelial cells of the lungs and edit out the genetic defect that causes cystic fibrosis. This is a much smaller and more attainable step for a lot of reasons, the biggest being that they only need a very low transduction efficiency to have a highly measurable impact on the health of the mice they were testing on. It is also fairly acceptable to have a relatively high rate of cell death in epithelial cells, since they replace themselves very rapidly. In this case, their highest transduction efficiency was estimated to be as high as 2.34% in vivo, with a sample size of 8 mice.

We may be able to quickly come up with at least one meaningful gene target that could make a difference with 2.34% transduction efficiency, but be aware that delivering this at scale to a human brain will be MUCH harder than doing so with mouse epithelial cells.

Again, I don’t want to discourage this project; I would really like to help, actually. But I want to be realistic about the challenges here, and there is a reason why the equilibrium is where it is.

Comment by HiddenPrior (SkinnyTy) on OpenAI: The Battle of the Board · 2023-11-22T19:25:13.266Z · LW · GW

I believe that power rested in the hands of the CEO the board selected; the board itself does not have that kind of power, and there may be other reasons we are not aware of that led them to decide against that possibility.

Comment by HiddenPrior (SkinnyTy) on OpenAI: The Battle of the Board · 2023-11-22T19:22:12.941Z · LW · GW

I feel like this is a good observation. I notice I am confused by their choices given the information provided... so there is probably more information? It is possible that Toner and the former board just made a mistake and thought they had more control over the situation than they really did, or underestimated Altman's sway over the employees of the company.

The former board does not strike me as incompetent, though. I don't think it was sheer folly that led them to pick this debacle as their best option.

Alternatively, they may have had information we don't that led them to believe this was the least bad course of action.