Posts

How likely is brain preservation to work? 2024-11-18T16:58:54.632Z
Refactoring cryonics as structural brain preservation 2024-09-11T18:36:30.285Z
Being against involuntary death and being open to change are compatible 2024-05-27T06:37:27.644Z
Cryonics p(success) estimates are only weakly associated with interest in pursuing cryonics in the LW 2023 Survey 2024-02-29T14:47:28.613Z
Why wasn't preservation with the goal of potential future revival started earlier in history? 2024-01-16T16:15:08.550Z
Transcript of Sam Altman's interview touching on AI safety 2023-01-20T16:14:18.974Z
Brain preservation to prevent involuntary death: a possible cause area 2022-03-22T12:36:16.135Z
A review of cryonics/brain preservation in 2016 2016-12-31T18:19:56.460Z
How computational approaches can contribute to brain preservation research 2016-12-16T18:56:11.746Z
Stand-up comedy as a way to improve rationality skills 2016-11-27T21:52:33.989Z
Update on the Brain Preservation Foundation Prize 2015-05-26T01:47:20.018Z
Calories per dollar vs calories per glycemic load: some notes on my diet 2015-03-14T16:07:32.893Z
2015 New Years Resolution Thread 2014-12-24T22:16:35.669Z
What are the most common and important trade-offs that decision makers face? 2014-11-03T05:03:16.968Z
One way to manipulate your level of abstraction related to a task 2013-08-19T05:47:10.920Z
[LINK] Hypothesis about the mechanism for storing long-term memory 2013-07-10T14:33:14.244Z
Which cognitive biases should we trust in? 2012-06-01T06:37:44.383Z
The Outside View Of Human Complexity 2011-10-08T18:12:03.504Z
What Makes My Attempt Special? 2010-09-26T06:55:38.929Z
Step Back 2009-05-09T18:07:34.526Z

Comments

Comment by Andy_McKenzie on How likely is brain preservation to work? · 2024-11-19T15:39:07.099Z · LW · GW

 identity is irretrievably lost when the brain activity stops 

My point here is that this is a very strong claim about neuroscience -- that molecular structure doesn’t encode identity/memories. 

Comment by Andy_McKenzie on Science advances one funeral at a time · 2024-11-02T04:23:40.132Z · LW · GW

The examples you provided don't actually support the "one funeral at a time" narrative in your title. Take Barbara McClintock's jumping genes or Barry Marshall's H. pylori discovery -- in both cases, many scientists changed their views based on compelling evidence while very much alive. There are plenty of other examples: the acceptance of prions as disease agents, the role of microbiomes in health, dark energy, and mitochondria's bacterial origins all show how consensus can shift rapidly once sufficient evidence has accumulated. Scientists change their minds all. the. time. 

This is not to say that there are no fads or incorrect beliefs in science -- of course there are. And sometimes it can take years or decades for them to be overturned. But the "funeral" framing in particular is not only historically inaccurate but also promotes a harmful view that death is necessary for progress. What we actually see in these examples is that scientific views change when sufficient evidence accumulates and a sufficient number of people are convinced, regardless of generational turnover. Suggesting we need scientists to die rather than be convinced by evidence is both incorrect and ethically fraught. I am saddened to see it here and therefore strong-downvoted this post. 

Comment by Andy_McKenzie on Refactoring cryonics as structural brain preservation · 2024-09-12T22:48:48.852Z · LW · GW

Thanks for the clarification and your thoughts. In my view, the question is whether polymer gel embedding helps maintain morphomolecular structure enough to be worth the trade-off of removing the lipids, which could potentially also carry information content. https://brainpreservation.github.io/Biomolecules#how-lipid-biomolecules-in-cell-membranes-could-affect-ion-flow

You are in good company in thinking that clearing and embedding the tissue in a hydrogel is the best approach. Others with expertise in the area have suggested the same thing to me. I'm just not convinced, so I think that more research is required to tell whether that is the best approach.  

ETA: Sorry, just saw your edit. Interesting thoughts on the interaction between preservation and reconstruction. Your perspective and goals make sense to me, although they are not exactly what we are pursuing at Oregon Brain Preservation. We are agnostic as to the potential method of revival and expect it to be relatively far in the future, if it ever becomes possible. 

Comment by Andy_McKenzie on Refactoring cryonics as structural brain preservation · 2024-09-12T22:05:10.881Z · LW · GW

Thanks for your interest! 

Does OBP plan to eventually expand their services outside the USA?

In terms of our staff traveling to other locations to do the preservation procedure, unfortunately not in the immediate future. We don't have the funding for this right now. 

And how much would it cost if you didn’t subsidize it?

There are so many factors. It depends a lot on where in the world we are talking about. If we are talking about someone who legally dies locally in Salem, perhaps a minimal estimated budget would be (off the top of my head, unofficial, subject to change): 

  • Labor cost of brain preservation = ~$500-1000
  • Cost of chemicals, disposable equipment = ~$100-300 
  • Cost of death certificate, cremation, funeral services fees for the rest of the body = ~$1000-1500
  • Long-term preservation cost = very difficult to estimate, depends on economies of scale, and other factors, perhaps ~$1000-2000 

These are rough estimates. But generally speaking, there is a reason that many brain banks around the world are able to accept brain donations for free. There are many, many thousands of brains preserved in this way throughout the world. It is not nearly as expensive as traditional cryonics. Here is some more information about costs for a similar type of procedure from the Brain Support Network: https://www.brainsupportnetwork.org/brain-donation/brain-donation-faq/ 
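To make the arithmetic explicit, here is a minimal sketch summing the unofficial ranges above (illustrative only; the real figures depend on the factors mentioned and are subject to change):

```python
# Rough, unofficial cost ranges in USD, copied from the estimates above.
cost_ranges = {
    "labor for brain preservation": (500, 1000),
    "chemicals and disposable equipment": (100, 300),
    "death certificate, cremation, funeral services": (1000, 1500),
    "long-term preservation": (1000, 2000),
}

low = sum(lo for lo, _ in cost_ranges.values())
high = sum(hi for _, hi in cost_ranges.values())
print(f"Estimated total: ${low:,} to ${high:,}")  # ~$2,600 to $4,800
```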

Cost is a common complaint about cryonics so I could see you becoming much bigger than the cryonics orgs, but judging by the website you look quite small. Do you know why that is?

I don't know. A couple of guesses: we are just getting started and not really doing any marketing, because our focus is on researching the preservation methods. Also, it is likely that many people are skeptical of the preservation methods we use, since they are new, different from those used elsewhere, and still experimental. Here is some more information about our research program, which you might find interesting: https://osf.io/preprints/osf/c28hm 

Comment by Andy_McKenzie on Refactoring cryonics as structural brain preservation · 2024-09-12T20:19:47.493Z · LW · GW

We discuss the possibility of fluid preservation after tissue clearing in our article: 

An alternative option is to perform tissue clearing prior to long-term preservation (118). This would remove the lipids in the brain, but offer several advantages, including repeated non-invasive imaging, and potentially reduced oxidative damage over time (119).

And also in our fluid preservation article we have a whole section on it. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11058410/#S7

I'm not sure why this option would be much more robust than formaldehyde fixation alone. I haven't seen any strong evidence for that. I do agree that it is potentially very useful for 3D reconstruction, but reconstruction is a much different problem than preservation. 

Comment by Andy_McKenzie on Refactoring cryonics as structural brain preservation · 2024-09-12T02:14:55.484Z · LW · GW

I can't speak for Adele, but here is one somewhat recent article by neuroscientists discussing memory storage mechanisms: https://bmcbiol.biomedcentral.com/articles/10.1186/s12915-016-0261-6

DNA is discussed as one possible storage mechanism in the context of epigenetic alterations to neurons. See the section by Andrii Rudenko and Li-Huei Tsai.

Comment by Andy_McKenzie on Refactoring cryonics as structural brain preservation · 2024-09-11T23:01:06.556Z · LW · GW

This is an important question. While I don't have a full answer, my impression is that yes, it seems to preserve the important information present in DNA. More information here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11058410/#S4.4

Comment by Andy_McKenzie on Cryonics p(success) estimates are only weakly associated with interest in pursuing cryonics in the LW 2023 Survey · 2024-03-01T04:04:13.924Z · LW · GW

Thanks for the comment. I'm definitely not assuming that p(success) would be a monocausal explanation. I'm mostly presenting this data to give evidence against that assumption, because people frequently make statements such as "of course almost nobody wants cryonics, they don't expect it will work". 

I also agree that "is being revived good in expectation / good with what probability" is another common concern. Personally, I think niplav has some good analysis of net-negative revival scenarios: https://niplav.site/considerations_on_cryonics.html

Btw, according to the author, 'Lena' is largely a critique of exploitive capitalism: https://qntm.org/uploading 

Comment by Andy_McKenzie on On the future of language models · 2023-12-21T01:34:24.366Z · LW · GW

Very high-effort, comprehensive post. Any interest in making some of your predictions into markets on Manifold or some other prediction market website? Might help get some quantifications. 

Comment by Andy_McKenzie on Principles For Product Liability (With Application To AI) · 2023-12-11T02:05:12.661Z · LW · GW

A simple solution is to just make doctors/hospitals liable for harm which occurs under their watch, period. Do not give them an out involving performative tests which don’t actually reduce harm, or the like. If doctors/hospitals are just generally liable for harm, then they’re incentivized to actually reduce it.

Can you explain more what you actually mean by this? Do you mean that if someone comes into the hospital and dies, the doctors are responsible, regardless of why the patient died? If instead you mean that we figure out whether the doctors are responsible for the death, then we are back to asking whether they did everything to prevent it -- and one of those things might be ordering lab tests to better figure out the diagnosis, which seems to put us back at the original problem, i.e. the status quo. I'm just not understanding what you mean. 

Comment by Andy_McKenzie on What did you change your mind about in the last year? · 2023-12-05T05:38:01.002Z · LW · GW

Out of curiosity, what makes you think that the initial freezing process causes too much information loss? 

Comment by Andy_McKenzie on Digital brains beat biological ones because diffusion is too slow · 2023-08-26T02:51:29.477Z · LW · GW

I agree with most of this post, but it doesn’t seem to address the possibility of whole brain emulation. However, many (most?) would argue this is unlikely to play a major role because AGI will come first.

Comment by Andy_McKenzie on Cryonics Career Survey (more jobs than you think) · 2023-06-18T11:01:02.153Z · LW · GW

Thanks so much for putting this together Mati! If people are interested in cryonics/brain preservation and would like to learn about (my perspective on) the field from a research perspective, please feel free to reach out to me: https://andrewtmckenzie.com/

I also have some external links/essays available here: https://brainpreservation.github.io/

Comment by Andy_McKenzie on Updating Drexler's CAIS model · 2023-06-16T23:21:16.115Z · LW · GW

It seems to me like your model is not sufficiently taking technical debt into account. https://neurobiology.substack.com/p/technical-debt-probably-the-main-roadblack-in-applying-machine-learning-to-medicine

It seems to me like this is the main thing that will slow down the extent to which foundation models can consistently beat newly trained specialized models.

Anecdotally, I know several people who don’t like to use chatgpt because its training cuts off in 2021. This seems like a form of technical debt.

I guess it depends on how easily adaptable foundation models are.

Comment by Andy_McKenzie on Transformative AGI by 2043 is <1% likely · 2023-06-16T21:04:13.545Z · LW · GW

Sounds good, can't find your email address, DM'd you. 

Comment by Andy_McKenzie on Transformative AGI by 2043 is <1% likely · 2023-06-16T20:48:53.612Z · LW · GW

Those sound good to me! I donated to your charity (the Animal Welfare Fund) to finalize it. Lmk if you want me to email you the receipt. Here's the manifold market: 

Bet

Andy will donate $50 to a charity of Daniel's choice now.

If, by January 2027, there is not a report from a reputable source confirming that at least three companies that would previously have relied upon programmers, and that meet a defined level of success, are being run without the need for human programmers, due to the independent capabilities of an AI developed by OpenAI or another AI organization, then Daniel will donate $100, adjusted for inflation as of June 2023, to a charity of Andy's choice.

Terms

Reputable Source: For the purpose of this bet, reputable sources include MIT Technology Review, Nature News, The Wall Street Journal, The New York Times, Wired, The Guardian, or TechCrunch, or similar publications of recognized journalistic professionalism. Personal blogs, social media sites, or tweets are excluded. 

AI's Capabilities: The AI must be capable of independently performing the full range of tasks typically carried out by a programmer, including but not limited to writing, debugging, maintaining code, and designing system architecture.

Equivalent Roles: Roles that involve tasks requiring comparable technical skills and knowledge to a programmer, such as maintaining codebases, approving code produced by AI, or prompting the AI with specific instructions about what code to write.

Level of Success: The companies must be generating a minimum annual revenue of $10 million (or likely generating this amount of revenue if it is not public knowledge).

Report: A single, substantive article or claim in one of the defined reputable sources that verifies the defined conditions.

AI Organization: An institution or entity recognized for conducting research in AI or developing AI technologies. This could include academic institutions, commercial entities, or government agencies.

Inflation Adjustment: The donation will be an equivalent amount of money as $100 as of June 2023, adjusted for inflation based on https://www.bls.gov/data/inflation_calculator.htm.

Regulatory Impact: In January 2027, Andy will use his best judgment to decide whether the conditions of the bet would have been met in the absence of any government regulation restricting or banning the types of AI that would have otherwise replaced programmers. 

Comment by Andy_McKenzie on Transformative AGI by 2043 is <1% likely · 2023-06-12T15:53:11.110Z · LW · GW

Sounds good, I'm happy with that arrangement once we get these details figured out. 

Regarding the human programmer formality, it seems like business owners would have to be really incompetent for this to be a factor. Plenty of managers have coding experience. If the programmers aren't doing anything useful then they will be let go or new companies will start that don't have them. They are a huge expense. I'm inclined to not include this since it's an ambiguity that seems implausible to me. 

Regarding the potential ban by the government, I wasn't really thinking of that as a possible option. What kind of ban do you have in mind? I imagine that regulation of AI is very likely by then, so if the automation of all programmers hasn't happened by Jan 2027, it seems very easy to argue that it would have happened in the absence of the regulation. 

Regarding these and a few of the other ambiguous things, one way we could do this is that you and I could just agree on it in Jan 2027. Otherwise, the bet resolves N/A and you don't donate anything. This could make it an interesting Manifold question because it's a bit adversarial. This way, we could also get rid of the requirement for it to be reported by a reputable source, which is going to be tricky to determine. 

Comment by Andy_McKenzie on Transformative AGI by 2043 is <1% likely · 2023-06-10T20:06:57.355Z · LW · GW

Understandable. How about this? 

Bet

Andy will donate $50 to a charity of Daniel's choice now.

If, by January 2027, there is not a report from a reputable source confirming that at least three companies that would previously have relied upon programmers, and that meet a defined level of success, are being run without the need for human programmers, due to the independent capabilities of an AI developed by OpenAI or another AI organization, then Daniel will donate $100, adjusted for inflation as of June 2023, to a charity of Andy's choice.

Terms

Reputable Source: For the purpose of this bet, reputable sources include MIT Technology Review, Nature News, The Wall Street Journal, The New York Times, Wired, The Guardian, or TechCrunch, or similar publications of recognized journalistic professionalism. Personal blogs, social media sites, or tweets are excluded.

AI's Capabilities: The AI must be capable of independently performing the full range of tasks typically carried out by a programmer, including but not limited to writing, debugging, maintaining code, and designing system architecture.

Equivalent Roles: Roles that involve tasks requiring comparable technical skills and knowledge to a programmer, such as maintaining codebases, approving code produced by AI, or prompting the AI with specific instructions about what code to write.

Level of Success: The companies must be generating a minimum annual revenue of $10 million (or likely generating this amount of revenue if it is not public knowledge).

Report: A single, substantive article or claim in one of the defined reputable sources that verifies the defined conditions.

AI Organization: An institution or entity recognized for conducting research in AI or developing AI technologies. This could include academic institutions, commercial entities, or government agencies.

Inflation Adjustment: The donation will be an equivalent amount of money as $100 as of June 2023, adjusted for inflation based on https://www.bls.gov/data/inflation_calculator.htm.

I guess that there might be some disagreements in these terms, so I'd be curious to hear your suggested improvements. 

Caveat: I don't have much disposable money right now, so it's not much money, but perhaps this is still interesting as a marker of our beliefs. Totally ok if it's not enough money to be worth it to you. 

Comment by Andy_McKenzie on Transformative AGI by 2043 is <1% likely · 2023-06-07T23:47:09.555Z · LW · GW

I’m wondering if we could make this into a bet. If remote workers include programmers, then I’d be willing to bet that GPT-5/6, depending upon what that means (it might be easier to say the top LLMs or other models trained by anyone by 2026?), will not be able to replace them.

Comment by Andy_McKenzie on The basic reasons I expect AGI ruin · 2023-05-12T11:24:06.375Z · LW · GW

These curves are due to temporary plateaus, not permanent ones. Moore's law is an example of a constraint that seems likely to plateau. I'm talking about takeoff speeds, not eventual capabilities with no resource limitations, which I agree would be quite high and which I have little idea how to estimate (there will probably still be some constraints, like within-system communication constraints). 

Comment by Andy_McKenzie on Geoff Hinton Quits Google · 2023-05-01T23:26:50.439Z · LW · GW

Does anyone know of any AI-related predictions by Hinton? 

Here's the only one I know of - "People should stop training radiologists now. It's just completely obvious within five years deep learning is going to do better than radiologists because it can get a lot more experience. And it might be ten years but we got plenty of radiologists already." - 2016, slightly paraphrased 

This still seems like a testable prediction: by November 2026, radiologists should be completely replaceable by deep learning methods, at least setting aside regulatory requirements for trained physicians. 

Comment by Andy_McKenzie on AI doom from an LLM-plateau-ist perspective · 2023-04-27T17:51:29.267Z · LW · GW

Thanks! I agree with you about all sorts of AI alignment essays being interesting and seemingly useful. My question was more about how to measure the net rate of AI safety research progress. But I agree with you that an/your expert inside view of how insights are accumulating is a reasonable metric. I also agree with you that the acceptance of TAI x-risk in the ML community as a real thing is useful and that - while I am slightly worried about the risk of overshooting, like Scott Alexander describes - this situation seems to be generally improving. 

Regarding (2), my question is why algorithmic progress leading to serious growth in AI capabilities would be so discontinuous. I agree that RL is much better in humans than in machines, but I doubt that replicating this in machines would require just one or a few algorithmic advances. Instead, my guess, based on previous technology growth stories I've read about, is that AI algorithmic progress is likely to occur through the accumulation of many small improvements over time. 

Comment by Andy_McKenzie on AI doom from an LLM-plateau-ist perspective · 2023-04-27T14:45:22.387Z · LW · GW

Good essay! Two questions if you have a moment: 

1. Can you flesh out your view of how the community is making "slow but steady progress right now on getting ready"? In my view, much of the AI safety community seems to be doing things that have unclear safety value to me, like (a) coordinating a pause in model training that seems likely to make things less safe if implemented (because it would lead to algorithmic and hardware overhangs) or (b) converting to capabilities work (quite common; it seems like an occupational hazard for someone with initially "pure" AI safety values). Of course, I don't mean to be disparaging, as plenty of AI safety work does seem useful qua safety to me, like making more precise estimates of takeoff speeds or doing cybersecurity work. I was just surprised by that statement and am curious how you are tracking progress here.

2. It seems like you think there are some key algorithmic insights that, once "unlocked", will lead to dramatically faster AI development. This suggests that not many people are working on algorithmic insights. But that doesn't seem quite right to me -- isn't that a huge group of researchers, many of whom have historically been anti-scaling? Or maybe you think there are core insights available, but the field hasn't had (enough of) its Einsteins or von Neumanns yet? Basically, I'm trying to get a sense of why you seem to have very fast takeoff speed estimates given certain algorithmic progress. But maybe I'm not understanding your worldview and/or maybe it's too infohazardous to discuss. 

Comment by Andy_McKenzie on But why would the AI kill us? · 2023-04-18T17:01:26.849Z · LW · GW

I didn't realize you had put so much time into estimating take-off speeds. I think this is a really good idea. 

This seems substantially slower than the implicit take-off speed estimates of Eliezer, but maybe I'm missing something. 

I think the amount of time you described is probably shorter than I would guess. But I haven't put nearly as much time into it as you have. In the future, I'd like to. 

Still, my guess is that this amount of time is enough that there are multiple competing groups, rather than only one. So it seems to me like there would probably be competition in the world you are describing, making a singleton AI less likely. 

Do you think that there will almost certainly be a singleton AI? 

Comment by Andy_McKenzie on The basic reasons I expect AGI ruin · 2023-04-18T15:29:39.220Z · LW · GW

Thanks for writing this up as a shorter summary Rob. Thanks also for engaging with people who disagree with you over the years. 

Here's my main area of disagreement: 

General intelligence is very powerful, and once we can build it at all, STEM-capable artificial general intelligence (AGI) is likely to vastly outperform human intelligence immediately (or very quickly).

I don't think this is likely to be true. Perhaps it is true of some cognitive architectures, but not for the connectionist architectures that are the only known examples of human-like AI intelligence and that are clearly the top AIs available today. In these cases, I expect human-level AI capabilities to grow to the point that they will vastly outperform humans much more slowly than immediately or "very quickly". This is basically the AI foom argument. 

And I think all of your other points are dependent on this one. Because if this is not true, then humanity will have time to iteratively deal with the problems that emerge, as we have in the past with all other technologies. 

My reasoning for not expecting ultra-rapid takeoff speeds is that I don't view connectionist intelligence as having a sort of "secret sauce" that, once found, can unlock all sorts of other things. I think it is the sort of thing that will increase in a plodding way over time, depending on scaling and other similar inputs that cannot be increased immediately. 

In the absence of some sort of "secret sauce", which seems necessary for sharp left turns and other such scenarios, I view AI capabilities growth as likely to follow the same trends as other historical growth trends. In the case of a hypothetical AI at a human intelligence level, it would face constraints on its resources allowing it to improve, such as bandwidth, capital, skills, private knowledge, energy, space, robotic manipulation capabilities, material inputs, cooling requirements, legal and regulatory barriers, social acceptance, cybersecurity concerns, competition with humans and other AIs, and of course value maintenance concerns (i.e. it would have its own alignment problem to solve). 

I guess if you are also taking those constraints into consideration, then it is really just a probabilistic feeling about how much those constraints will slow down AI growth. To me, those constraints each seem massive, and getting around all of them within hours or days would be nearly impossible, no matter how intelligent the AI was. 

As a result, rather than indefinite and immediate exponential growth, I expect real-world AI growth to follow a series of sigmoidal curves, each eventually plateauing before different types of growth curves take over to increase capabilities based on different input resources (with all of this overlapping). 
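To illustrate the shape I have in mind, here is a minimal toy sketch (made-up parameters, purely illustrative, not a forecast) of capability growth as a series of overlapping sigmoids rather than a single exponential:

```python
import math

def logistic(t, K, r, t0):
    """One sigmoidal (logistic) growth curve that saturates at K."""
    return K / (1 + math.exp(-r * (t - t0)))

def capability(t):
    # Toy model: total capability as a sum of overlapping S-curves, each
    # driven by a different input (e.g. compute, data, algorithms) that
    # eventually plateaus. All parameters (K, r, t0) are made up.
    phases = [(1.0, 1.5, 2.0), (3.0, 1.0, 6.0), (9.0, 0.8, 12.0)]
    return sum(logistic(t, K, r, t0) for K, r, t0 in phases)

for t in range(0, 17, 4):
    print(f"t={t:2d}  capability={capability(t):6.2f}")
```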

One area of uncertainty: I am concerned about there being a spectrum of takeoff speeds, from slow to immediate. In faster takeoff speed worlds, I view there as being more risk of bad outcomes generally, such as a totalitarian state using an AI to take over the world, or even the x-risk scenarios that you describe. 

This is why I favor regulations that will be helpful in slower takeoff worlds, such as requiring liability insurance, and that will not cause harm by increasing takeoff speed. For example, pausing AGI training runs seems likely to make takeoff more discontinuous, due to creating hardware, algorithmic, and digital autonomous agent overhangs, thereby making the whole situation more dangerous. This is why I am opposed to it and dismayed to see so many on LW in favor of it. 

I also recognize that I might be wrong about AI takeoff speeds not being fast. I am glad people are working on this, so long as they are not promoting policies that seem likely to make things more dangerous in the slower takeoff scenarios that I consider more likely. 

Another area of uncertainty: I'm not sure what is going to happen long-term in a slow takeoff world. I'm confused. While I think that the scenarios you describe are not likely because they are dependent upon there being a fast takeoff and a resulting singleton AI, I find outcomes in slow takeoff worlds extraordinarily difficult to predict. 

Overall I feel that AI x-risk is clearly the most likely x-risk of any in the coming years and am glad that you and others are focusing on it. My main hope for you is that you continue to be flexible in your thinking and make predictions that help you to decide if you should update your models. 

Here are some predictions of mine: 

  • Connectionist architectures will remain the dominant AI architecture in the next 10 years. Yes, they will be hooked up in larger deterministic systems, but humans will also be able to use connectionist architectures in this way, which will actually just increase competition and decrease the likelihood of ultra-rapid takeoffs. 
  • Hardware availability will remain a constraint on AI capabilities in the next 10 years. 
  • Robotic manipulation capabilities will remain a constraint on AI capabilities in the next 10 years. 

Comment by Andy_McKenzie on But why would the AI kill us? · 2023-04-18T15:24:33.403Z · LW · GW

I can see how both Yudkowsky's and Hanson's arguments can be problematic because they either assume fast or slow takeoff scenarios, respectively, and then nearly everything follows from that. So I can imagine why you'd disagree with every one of Hanson's paragraphs based on that. If you think there's something he said that is uncorrelated with the takeoff speed disagreement, I might be interested, but I don't agree with Hanson about everything either, so I'm mainly only interested if it's also central to AI x-risk. I don't want you to waste your time. 

I guess if you are taking those constraints into consideration, then it is really just a probabilistic feeling about how much those constraints will slow down AI growth? To me, those constraints each seem massive, and getting around all of them within hours or days would be nearly impossible, no matter how intelligent the AI was. Is there any other way we can distinguish between our beliefs? 

If I recall correctly from your writing, you have extremely near-term timelines. Is that correct? I don't think that AGI is likely to occur sooner than 2031, based on these criteria: https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/

Is this a prediction that we can use to decide in the future whose model of the world today was more reasonable? I know it's a timelines question, but timelines are pretty correlated with takeoff speeds I guess. 

Comment by Andy_McKenzie on But why would the AI kill us? · 2023-04-18T13:38:54.953Z · LW · GW

To clarify, when I mentioned growth curves, I wasn't talking about timelines, but rather takeoff speeds. 

In my view, rather than indefinite exponential growth based on exploiting a single resource, real-world growth follows sigmoidal curves, eventually plateauing. In the case of a hypothetical AI at a human intelligence level, it would face constraints on its resources allowing it to improve, such as bandwidth, capital, skills, private knowledge, energy, space, robotic manipulation capabilities, material inputs, cooling requirements, legal and regulatory barriers, social acceptance, cybersecurity concerns, competition with humans and other AIs, and of course safety concerns (i.e. it would have its own alignment problem to solve). 

I'm sorry you resent that implication. I certainly didn't mean to offend you or anyone else. It was my honest impression, for example, based on the fact that there hadn't seemed to be much if any discussion of Robin's recent article on AI on LW. It just seems to me that much of LW has moved past the foom argument and is solidly on Eliezer's side, potentially due to selection effects of non-foomers like me getting heavily downvoted like I was on my top-level comment. 

Comment by Andy_McKenzie on But why would the AI kill us? · 2023-04-17T20:35:18.231Z · LW · GW

Here's a nice recent summary by Mitchell Porter, in a comment on Robin Hanson's recent article (can't directly link to the actual comment unfortunately): 

Robin considers many scenarios. But his bottom line is that, even as various transhuman and posthuman transformations occur, societies of intelligent beings will almost always outweigh individual intelligent beings in power; and so the best ways to reduce risks associated with new intelligences, are socially mediated methods like rule of law, the free market (in which one is free to compete, but also has incentive to cooperate), and the approval and disapproval of one's peers.

The contrasting philosophy, associated especially with Eliezer Yudkowsky, is what Robin describes with foom (rapid self-enhancement) and doom (superintelligence that cares nothing for simpler beings). In this philosophy, the advantages of AI over biological intelligence are so great, that the power differential really will favor the individual self-enhanced AI, over the whole of humanity. Therefore, the best way to reduce risks is through "alignment" of individual AIs - giving them human-friendly values by design, and also a disposition which will prefer to retain and refine those values, even when they have the power to self-modify and self-enhance.

Eliezer has lately been very public about his conviction that AI has advanced way too far ahead of alignment theory and practice, so the only way to keep humanity safe is to shut down advanced AI research indefinitely - at least until the problems of alignment have been solved.

ETA: Basically I find Robin's arguments much more persuasive, and have ever since those heady days of 2008 when they had the "Foom" debate. A lot of people agreed with Robin, although SIAI/MIRI hasn't tended to directly engage with those arguments for whatever reason. 

This is a very common outsider view of LW/SIAI/MIRI-adjacent people: that they are "foomers" and that their views follow logically from foom. But a lot of people don't agree that foom is likely, because this is not how growth curves have worked for nearly anything historically. 

Comment by Andy_McKenzie on But why would the AI kill us? · 2023-04-17T20:33:03.188Z · LW · GW

AIs can potentially trade with humans too though, that's the whole point of the post. 

Especially if the AIs have architectures/values that are human brain-like and/or if humans have access to AI tools, intelligence augmentation, and/or whole brain emulation. 

Also, it's not clear why AIs would find it easier to coordinate with one another than humans do with each other, or than humans and AIs do. Coordination is hard for game-theoretic reasons. 

These are all standard points, I'm not saying anything new here. 

Comment by Andy_McKenzie on But why would the AI kill us? · 2023-04-17T18:48:35.258Z · LW · GW

When you write "the AI" throughout this essay, it seems like there is an implicit assumption that there is a singleton AI in charge of the world. Given that assumption, I agree with you. But if that assumption is wrong, then I would disagree with you.  And I think the assumption is pretty unlikely. 

No need to relitigate this core issue everywhere, just thought this might be useful to point out. 

Comment by Andy_McKenzie on Why should we expect AIs to coordinate well? · 2023-02-14T22:51:19.824Z · LW · GW

I agree this is a very important point and line of research. This is how humans deal with sociopaths, after all.

Here’s me asking a similar question and Rob Bensinger’s response: https://www.lesswrong.com/posts/LLRtjkvh9AackwuNB/on-a-list-of-lethalities?commentId=J42Fh7Sc53zNzDWCd

One potential wrinkle is that in a very fast takeoff world, AIs could potentially coordinate very well because they would basically be the same AI, or close branches of the same AI.

Comment by Andy_McKenzie on How much is death a limit on knowledge accumulation? · 2023-02-14T12:33:34.755Z · LW · GW

"Science advances one funeral at a time" -> this seems to be both generally not true as well as being a harmful meme (because it is a common argument used to argue against life extension research).

https://www.lesswrong.com/posts/fsSoAMsntpsmrEC6a/does-blind-review-slow-down-science 

Comment by Andy_McKenzie on Schizophrenia as a deficiency in long-range cortex-to-cortex communication · 2023-02-02T15:03:24.785Z · LW · GW

Interesting, thanks. All makes sense and no need to apologize. I just like it when people write/think about schizophrenia and want to encourage it, even as a side project. IMO, it's a very important thing for our society to think about. 

Comment by Andy_McKenzie on Schizophrenia as a deficiency in long-range cortex-to-cortex communication · 2023-02-02T13:59:28.028Z · LW · GW

A lot of the quotes do find decreased connectivity, but some of them find increased connectivity between certain regions. It makes me think that there might be something more complicated going on than just "increased or decreased" connectivity -- perhaps changes in specific types of connections. But that's just a guess, and I think an explanation across all cortical connections is more parsimonious and therefore more likely a priori. 

Of your criteria of "things to explain", here are some thoughts: 

4.1 (the onset of schizophrenia is typically in the late teens to twenties), 4.2 (positive symptoms: auditory hallucinations (hearing voices), “distortions of self-experience”, etc.), and 4.3 (negative symptoms) - yes, these are all critical to explain. 

4.4 Creativity - hm, this is tricky and probably needs to be contextualized. Some people disagree that schizophrenia is associated with increased creativity in relatives, although I personally agree with it. I don't think it's a core aspect. 

4.5 Anticorrelation with autism - I don't think this is a core aspect. I'm not even sure it's true. 

4.6 Relation to myelination - I think this is likely true, but it's too low-level to call a core aspect of the disease per se. I agree with your point about two terms always yielding search results; this is true of Alzheimer's disease as well. 

4.7 Schizophrenia and blindness - I don't think this is a core aspect, I agree with you it's probably not true. 

Other core aspects I think should be explained: 

1. Gene pathways: the specific pathways that are altered in people with schizophrenia should relate to the development/function of whatever physiologic mechanism is being hypothesized. Genetics are causal, so this is usually pretty helpful, albeit quite complex. 

2. Cognitive deficits: These include impairments in executive function, working memory, and other cognitive domains. They are usually considered distinct from negative symptoms (anhedonia, blunted affect, etc.), and usually involve a decline from premorbid/earlier-in-life functioning. 

3. Why nicotine is helpful. 

4. Why antipsychotics/neuroleptics seem to be helpful (at least in certain circumstances). 

5. Why there is so much variability in the disorder: why do some people end up with predominantly delusions, hallucinations, or negative symptoms as the core part of their experience with schizophrenia? 

Just some thoughts. As I said, I'm glad you're focused on this!

Comment by Andy_McKenzie on Schizophrenia as a deficiency in long-range cortex-to-cortex communication · 2023-02-02T00:38:13.578Z · LW · GW

Interesting theory and very important topic. 

I think the best data source here is probably neuroimaging. Here's a recent review: https://www.frontiersin.org/articles/10.3389/fnins.2022.1042814/full. Here are some quotes from that: 

For functional studies, be they fluorodeoxyglucose positron emission tomography (FDG PET), rs-fMRI, task-based fMRI, diffusion tensor imaging (DTI) or MEG there generally is hypoactivation and disconnection between brain regions. ...

Histologically this gray matter reduction is accompanied by dendritic and synaptic density decreases which likely signals a lack of communication (disconnection theory) across selected neural networks... 

According to Orliac et al. (2013), patients with schizophrenia have reduced functional connectivity in the default mode network and salience network. Furthermore, decreased connectivity in the paracingulate cortex is associated with difficulties with abstract thought, whereas decreased connectivity in the left striatum is associated with delusions and depression. Longer memory response time for face recognition was also associated with functional connectivity abnormalities in early-schizophrenia, centered in the anterior cingulate...

This is in line with the frontotemporoparietal network disruption theory in schizophrenia that is well-known (Friston and Frith, 1995). ...

In a study that conducted by Lottman et al., patients with schizophrenia showed an increased connectivity between auditory and subcortical networks ...

Both increased and decreased functional connectivity has been observed in patients with schizophrenia vs. controls, in resting state and during various tasks

Mondino et al. (2016) found that transcranial direct current stimulation can decrease negative symptoms course and the severity of auditory verbal hallucination in patients with schizophrenia. This improvement was associated with reduction in functional connectivity between the left anterior insula and left temporoparietal junction (middle and superior temporal gyri and Wernicke’s area)

Overall I think it's pretty complicated. I imagine that "explaining everything" was tongue-in-cheek when you wrote it, but I think there are a lot of things that need to be explained about schizophrenia beyond the seven that you wrote about. I hope you keep doing some research in this area and continue to refine your theory. 

Comment by Andy_McKenzie on My Model Of EA Burnout · 2023-01-28T00:45:30.902Z · LW · GW

A quote I find relevant: 

“A happy life is impossible, the highest thing that man can aspire to is a heroic life; such as a man lives, who is always fighting against unequal odds for the good of others; and wins in the end without any thanks. After the battle is over, he stands like the Prince in the Re Corvo of Gozzi, with dignity and nobility in his eyes, but turned to stone. His memory remains, and will be reverenced as a hero's; his will, that has been mortified all his life by toiling and struggling, by evil payment and ingratitude, is absorbed into Nirvana.” - Arthur Schopenhauer

Comment by Andy_McKenzie on Transcript of Sam Altman's interview touching on AI safety · 2023-01-22T01:26:25.432Z · LW · GW

Good point. 

I know your question was probably just rhetorical, but to answer it regardless -- I was confused in part because it would have made sense to me if he had said it would be "better" if AGI timelines were short. 

Lots of people want short AGI timelines because they think the alignment problem will be easy or otherwise aren't concerned about it and they want the perceived benefits of AGI for themselves/their family and friends/humanity (eg eliminating disease, eliminating involuntary death, abundance, etc). And he could have just said "better" without really changing the rest of his argument. 

At least the word "better" would make sense to me, even if, as you imply, it might be wrong and plenty of others would disagree with it. 

So I expect I am missing something in his internal model that made him use the word "safer" instead of "better". I can only guess at possibilities. Like thinking that if AGI timelines are too long, then the CCP might take over the USA/the West in AI capabilities, and care even less about AGI safety when it matters the most. 

Comment by Andy_McKenzie on Transcript of Sam Altman's interview touching on AI safety · 2023-01-22T00:23:43.204Z · LW · GW

One of the main counterarguments here is that the existence of multiple AGIs allows them to compete with one another in ways that could benefit humanity. E.g. policing one another to ensure alignment of the AGI community with human interests. Of course, whether this actually would outweigh your concern in practice is highly uncertain and depends on a lot of implementation details. 

Comment by Andy_McKenzie on Transcript of Sam Altman's interview touching on AI safety · 2023-01-20T18:37:38.610Z · LW · GW

You're right that the operative word in "seems more likely" is "seems"! I used the word "seems" because I find this whole topic really confusing and I have a lot of uncertainty. 

It sounds like there may be a concern that I am using the absurdity heuristic or something similar against the idea of fast take-off and associated AI apocalypse. Just to be clear, I most certainly do not buy absurdity heuristic arguments in this space, would not use them, and find them extremely annoying. We've never seen anything like AI before, so our intuition (which might suggest that the situation seems absurd) is liable to be very wrong. 

Comment by Andy_McKenzie on Transcript of Sam Altman's interview touching on AI safety · 2023-01-20T16:55:00.224Z · LW · GW

A few comments: 

  1. A lot of slow takeoff, gradual capabilities ramp-up, multipolar AGI world type of thinking. Personally, I agree with him that this sort of scenario seems both more desirable and more likely. But this seems to be his biggest area of disagreement with many others here. 
  2. The biggest surprise to me was when he said that he thought short timelines were safer than long timelines. The reason for that is not obvious to me. Maybe something to do with contingent geopolitics. 
  3. Doesn't seem great to dismiss people's views based on psychologizing about them. But these are off-the-cuff remarks, held to a lower standard than writing. 

Comment by Andy_McKenzie on We Need Holistic AI Macrostrategy · 2023-01-16T01:56:25.553Z · LW · GW

Got it. To avoid derailing with this object-level question, I’ll just say that it seems helpful to be explicit about takeoff speeds in macrostrategy discussions, ideally specifying how different strategies work over distributions of takeoff speeds.

Comment by Andy_McKenzie on We Need Holistic AI Macrostrategy · 2023-01-16T00:01:19.290Z · LW · GW

Thanks for this post. I agree with you that AI macrostrategy is extremely important and relatively neglected. 

However, I'm having some trouble understanding your specific world model. Most concretely: can you link to or explain what your definition of "AGI" is? 

Overall, I expect alignment outcomes to be significantly if not primarily determined by the quality of the "last mile" work done by the first AGI developer and other actors in close cooperation with them in the ~2 years prior to the development of AGI.

This makes me think that in your world model, there is most likely "one AGI" and that there is a "last mile" rather than a general continuous improvement. It seems to me to be basically a claim about very fast takeoff speeds. Because otherwise, it seems to me that we would expect multiple groups with access to AGIs with different strengths and weaknesses, a relatively slower and continuous improvement in their capabilities, etc. 

Comment by Andy_McKenzie on We don’t trade with ants · 2023-01-11T03:43:06.132Z · LW · GW

OK, I get your point now better, thanks for clarifying -- and I agree with it. 

In our current society, even if dogs could talk, I bet that we wouldn't allow humans to trade (or at least anywhere close to "free" trade) with them, due to concerns about exploitation. 

Comment by Andy_McKenzie on We don’t trade with ants · 2023-01-11T02:49:32.718Z · LW · GW

I quoted "And if she isn't a good girl, we genetically engineer and manufacture (ie. breed) an ex-wolf who is a good girl."

If genetic engineering a new animal would satisfy human goals, then this would imply that they don't care about their pet's preferences as individuals. 

Comment by Andy_McKenzie on We don’t trade with ants · 2023-01-11T02:10:44.567Z · LW · GW

At the end of the day, no matter how many millions her trainer earns, Lassie just gets a biscuit & ear scritches for being such a good girl. And if she isn't a good girl, we genetically engineer and manufacture (ie. breed) an ex-wolf who is a good girl.

I don't think it's accurate to claim that humans don't care about their pets' preferences as individuals and try to satisfy them. 

To point out one reason I think this: there are huge markets for pet welfare. There are even animal psychiatrists, and there are longevity companies for pets. 

I've also known many people who've been very distraught when their pets died. Cloning them would be a poor consolation. 

I also don't think that 'trade' necessarily captures the right dynamic. I think it's more like communism in the sense that families are often communist. But I also don't think that your comment, which sidesteps this important aspect of human-animal relations, is the whole story. 

Now, one could argue that the expansion of animal rights and caring about individual animals is a recent phenomenon, and that therefore these are merely dreamtime dynamics, but that requires a theory of dreamtime and why it will end. 

Comment by Andy_McKenzie on I’m mildly skeptical that blindness prevents schizophrenia · 2022-08-16T14:26:08.112Z · LW · GW

Thanks for this good post. A meta-level observation: the fact that people are grasping at straws like this is evidence that our knowledge of the causes of schizophrenia is quite limited. 

Comment by Andy_McKenzie on On A List of Lethalities · 2022-06-14T02:26:09.994Z · LW · GW

“One day, one of the AGI systems improves to the point where it unlocks a new technology that can reliably kill all humans, as well as destroying all of its AGI rivals. (E.g., molecular nanotechnology.) I predict that regardless of how well-behaved it's been up to that point, it uses the technology and takes over. Do you predict otherwise?”

I agree with this, given your assumptions. But this seems like a fast takeoff scenario, right? My main question wasn’t addressed — are we assuming a fast takeoff? I didn’t see that explicitly discussed.

My understanding is that common law isn’t easy to change, even if individual agents would prefer to. This is why there are Nash equilibria. Of course, if there’s a fast enough takeoff, then this is irrelevant.

Comment by Andy_McKenzie on On A List of Lethalities · 2022-06-13T18:28:29.275Z · LW · GW

Thanks for the write-up. I have very little knowledge in this field, but I'm confused on this point: 

> 34.  Coordination schemes between superintelligences are not things that humans can participate in (eg because humans can’t reason reliably about the code of superintelligences); a “multipolar” system of 20 superintelligences with different utility functions, plus humanity, has a natural and obvious equilibrium which looks like “the 20 superintelligences cooperate with each other but not with humanity”.

Yes. I am convinced that things like ‘oh we will be fine because the AGIs will want to establish proper rule of law’ or that we could somehow usefully be part of such deals are nonsense. I do think that the statement here on its own is unconvincing for someone not already convinced who isn’t inclined to be convinced. I agree with it because I was already convinced, but unlike many points that should be shorter this one should have probably been longer.

Can you link to or explain what convinced you of this? 

To me, part of it seems dependent on take-off speed. In slower take-off worlds, it seems that agents would develop in a world in which laws/culture/norms were enforced at each step of the intelligence development process. Thus at each stage of development, AI agents would be operating in a competitive/cooperative world, eventually leading to a world of competition between many superintelligent AI agents with established Schelling points of cooperation that human agents could still participate in.

On the other hand, in faster/hard take-off worlds, I agree that cooperation would not be possible because the AI (or few multipolar AIs) would obviously not have an incentive to cooperate with much less powerful agents like humans. 

Maybe there is an assumption of a hard take-off that I'm missing? Is this a part of M3? 

Comment by Andy_McKenzie on Who is doing Cryonics-relevant research? · 2022-03-15T17:34:05.673Z · LW · GW

It is so great you are interested in this area! Thank you. Here are a few options for cryonics-relevant research: 

- 21st Century Medicine: May be best to reach out to Brian Wowk (contact info here: https://pubmed.ncbi.nlm.nih.gov/25194588/) and/or Greg Fahy (possibly old contact info here: https://pubmed.ncbi.nlm.nih.gov/16706656/)

- Emil Kendziorra at Tomorrow Biostasis may know of opportunities. Contact info here: https://journals.plos.org/plosone/article/authors?id=10.1371/journal.pone.0244980

- Robert McIntyre at Nectome may know of opportunities. Contact: http://aurellem.org/aurellem/html/about-rlm.html 

- Chana Phaedra/Aschwin de Wolf at Advanced Neural Biosciences may know of opportunities. Contact info for Aschwin here: https://www.liebertpub.com/doi/10.1089/rej.2019.2225 

- Brain Preservation Foundation: https://www.brainpreservation.org/. No lab, but space for discussions, especially related to neuroscience and related computational modeling.

- Robert Freitas at the Institute for Molecular Manufacturing just published a book called Cryostasis Revival. I'm not sure, but it's possible people there may know of related computational modeling opportunities: http://www.imm.org/

As below, Laura Deming is also a good person to contact. 

As you may know, there is a somewhat big divide in methodology these days between people who favor aldehydes as a part of the preservation procedure and those who do not. But there are good options either way. 

With only 3 months, and since I'm not sure of your location or geographic flexibility, the best option might be some sort of computational modeling experiment, such as a molecular dynamics simulation: https://www.brainpreservation.org/how-computational-researchers-can-contribute-to-brain-preservation-research/ 

Regarding discussions with your profs, I totally understand, but I suspect that people may be more open to discussing it on an intellectual level than you think. 

You can also email me for further information/discussion, although this is not my personal area of research: amckenz at gmail dot com

Comment by Andy_McKenzie on My attitude towards death · 2022-02-25T21:24:23.905Z · LW · GW

But there’s also a significant utilitarian motivation - which is relevant here because utilitarianism doesn’t care about death for its own sake, as long as the dead are replaced by new people with equal welfare. Indeed, if our lives have diminishing marginal value over time (which seems hard to dispute if you’re taking our own preferences into account at all), and humanity can only support a fixed population size, utilitarianism actively prefers that older people die and are replaced.

I strongly disagree with this. I think the idea of human fungibility is flawed from a hedonistic quality of life perspective. In my view, much of human angst is due to the specter of involuntary death. There has been a lot of academic literature on this. One famous book is Ernest Becker's: https://en.wikipedia.org/wiki/The_Denial_of_Death/ 

Involuntary death is one of the great harms of life. Decreasing the probability and inevitability of involuntary death seems to have the potential to dramatically improve the quality of human lives. 

It is also not clear that future civilizations will want to create as many people as they can. It is quite plausible that future civilizations will be reticent to do this. For one, those people have not consented to be born and the quality of their lives may still be unpredictable. There is a good philosophical case for anti-natalism as a result of this lack of consent. I consider anti-natalism totally impractical - and even problematic - in today's world because we need the next generation to continue the project of humanity. But in the future that may not be an issue anymore. Whereas people who have opted for cryonics/biostasis are consenting to live longer lives. 

(As a side note, I'm a strong proponent of brain preservation/cryonics and I'm consistently surprised others are not more interested in it.) 

(updated from a previous comment I made on this topic here: https://forum.effectivealtruism.org/posts/vqaeCxRS9tc9PoWMq/why-are-some-eas-into-cryonics)