post by [deleted]

This is a link post for


comment by Viliam · 2023-11-04T23:40:26.844Z

This made me think about... not sure if I can express it coherently... what is the relation between AI capabilities and the total performance of humankind (machine tools included)?

Basically, the idea is that if we can build a machine with IQ 1000000, it will be able to solve aging, because from its perspective, aging will be simple. Or it may kill us instead. (This is a metaphor, don't take it literally.) Building one machine with IQ 300 will probably not be sufficient to solve aging, even if it is literally smarter than all existing humans. I mean, there are currently smart people working on that problem, and they solve something here and there, but it's all very complicated, so they are still at the beginning. The machine with IQ 300 might solve a bit more, but the problem may be so complicated that it wouldn't solve everything in 100 years anyway. What about a thousand machines with IQ 300? Well, maybe yes, maybe no. (Also, there will be other urgent problems to solve, such as how to prevent wars, how to feed people, how to prevent bad people from building their own IQ 300 machines, etc.)

Now from the opposite angle: even without AIs, just talking about normal people, organizations sometimes lack people, lack funding, lack talent. This is somewhat related; for example, if you have no money, you can't hire people... sometimes you can run the organization with volunteers only, but that usually sucks, because everyone is unreliable (their actual job comes first). But, generally speaking, an organization may be bottlenecked on money (they know a few competent people who would like to work on the problem, but those people need to pay their bills, and the organization cannot afford to pay them), or bottlenecked on talent (they actually have enough funding to hire a dozen people, but they can't find the right person: either no one has the specific talent, or people with the specific talent are not interested in working for this organization, or the organization is unable to recognize who has the talent and who does not). And... where exactly are the current AIs (GPT-4) in this picture, and where are the AIs of tomorrow, which will be smarter than today's but maybe not superhumanly smart yet?

For example, imagine that there is an organization doing advanced medical research, and you tell them all to take a one-week break, and you teach them about GPT-4: what it can do and what it can't do, how it works, so that they have a realistic idea about what to expect. Then they return to their original work, but you give each of them a paid GPT-4 account, plus maybe pay for a personal assistant for each of them (someone with decent knowledge of medicine, but no superstar, who also happens to be good with computers), basically to provide a human user interface for the researchers (so that the researcher does not waste time on prompt engineering, and can tell the assistant to "figure this out, do some sanity checks, and report the results to me"). Let's assume GPT-4 is also trained on Sci-Hub, or somehow connected to it. Would something like this help a lot... or not at all?

I think I am asking whether the actual bottleneck for current medical research is the... part where an extra intelligence could help... or rather something in the real world (such as preparing the samples, injecting them into mice, and waiting a few weeks to see what happens), where having a lot of cheap intelligence that is still below the level of the current researchers would actually not make a big difference?

Or I guess the question is: at which moment exactly can AI start being significantly helpful in aging research? Do we need to wait until the Singularity, or is there something we could do right now, if we notice the opportunity?

comment by exmateriae (Sefirosu) · 2023-11-04T09:29:02.783Z

Thanks for sharing this view.

I can relate to you on some aspects. I am not feeling depressed at all, but I'm a bit scared that I was born too early to really benefit from AGI.

This perspective ignores future generations, which is admittedly a weakness. However, prioritizing future generations above oneself and one's loved ones is psychologically hard.

Do you have children? I don't, but I am under the impression that people who do say this changes a bit once you have children. (Because of soon-to-be-born grandchildren, I guess?)

Replies from: Yarrow Bouchard
comment by [deactivated] (Yarrow Bouchard) · 2023-11-04T09:58:27.962Z

Do you have children? I don't, but I am under the impression that people who do say this changes a bit once you have children. (Because of soon-to-be-born grandchildren, I guess?)

Nope, I don't have kids. That might change how I feel about things, by a lot. 

Anyway, when I said "future generations", I wasn't thinking of kids, grandkids, or great-grandkids, but generations far, far into the future, which would — in an optimistic scenario — comprise 99.9%+ of the total population of humans (or transhumans or post-humans) over time. 

I wonder how much the typical person or the typical LessWrong enjoyer would viscerally, limbically care about, say, A) all the people alive today and born within the next 1,000 years vs. B) everyone born (or created) from 3024 A.D. onward.

comment by Sergii (sergey-kharagorgiev) · 2023-11-04T09:20:52.250Z

The biggest existential risk I personally face is probably clinical depression.

First and foremost, if you do have suicidal ideation, please talk to someone: use a hotline (https://988lifeline.org/talk-to-someone-now/), contact your doctor, or consider hospitalization.

---

And regarding your post, some questions:

The "Biological Anchors" approach suggests we might be three to six decades away from having the training compute required for AGI.

Even within your line of thinking, why is this bad? It's quite possible to live until then, or to do cryonics. Why is this option desperate?

A more generalizable line of thinking is: by default, I'm going to die of aging and so are all the people I love

Have you asked the people you love if they would prefer dying of aging to some sort of AI-induced immortality? It is possible that they would go with immortality, but it's not obvious. People, in general, do not fear death from aging. If that's not obvious to you, or you find it strange, you might need to talk to people more, and possibly do more therapy.

Might you have thanatophobia? Easy to check: there are lots of tests online.

Do you experience worry and anxiety in addition to the depression?

Did you try CBT? CBT has great tools for dealing with intrusive thoughts and irrational convictions.

And, finally, it's wonderful that you are aware that you are depressed. But you should not take the "reasons" for the illness, this "despair", at face value. Frankly, a lot of the stuff that you describe in this post is irrational. It does not make much sense. Some statements do not pass trivial fact-checking. You might review your conclusions; it might be even better if you do it not alone, but with a friend or with a therapist.

Replies from: Yarrow Bouchard
comment by [deactivated] (Yarrow Bouchard) · 2023-11-04T10:39:07.429Z

Thanks for your concern. I don't want my post to be alarming or extremely dark, but I did want to be totally frank about where my head's at. Maybe someone will relate and feel seen. Or maybe someone will give me some really good advice.


The other stuff, in reverse order:

Frankly, a lot of the stuff that you describe in this post is irrational. It does not make much sense.

I'm genuinely curious what you mean, and why you think so. I'm open to disagreement and pushback; that's part of why I published this post.

I'm especially curious about:

Some statements do not pass trivial fact-checking.

By all means, please fact-check away!

People, in general, do not fear death from aging. ... Might you have thanatophobia? Easy to check: there are lots of tests online.

Haha, I thought I was on LessWrong, where radical life extension is a common wish.

I don't think I have thanatophobia. The first test that shows up on Google is kind of ridiculous. It almost asks, "Do you have thanatophobia?"

Have you asked the people you love if they would prefer dying of aging to some sort of AI-induced immortality? It is possible that they would go with immortality, but it's not obvious.

I could ask. My strong hunch is that, if given the choice between dying of aging or reversing their biological aging by, say, 30 years, they would choose the extra 30 years. And if given the choice again 30 years later, and 30 years after that, they would probably choose the extra 30 years again and again.

But you're right. I don't know for sure.

Even within your line of thinking, why is this bad? It's quite possible to live until then...

Yes, you're right. Even six decades is not impossible for me (knock on wood). However, I also think of my older loved ones.

...or do cryonics?

If I knew cryonics had, say, a 99% chance of working, then I'd take great comfort in that. But, as it is, I'm not sure if assigning it a 1% chance of working is too optimistic. I just don't know.

One hope I have is that newer techniques like helium persufflation — or whatever becomes the next, new and improved thing after that — will be figured out and adopted by Alcor et al. by the time cryonics becomes my best option.

Nectome is also interesting, but I don't know enough about biology to say more than, "Huh, seems interesting."

Replies from: sergey-kharagorgiev
comment by Sergii (sergey-kharagorgiev) · 2023-11-04T13:04:28.942Z

Because 1) I want AGI to cure my depression, 2) I want AGI to cure aging before I or my loved ones die

You can try to look at these statements separately.

For 1):

Timelines and projections for depression treatments coming from medical/psychiatric research are much better than even optimistic timelines for (superintelligent) AGI.

Moreover, the acceleration of scientific/medical/biochemical research due to weaker but still advanced AI makes it even more likely that depression treatments will get better, well before AGI could cure anything.

I think that it is very likely that depression treatments can be significantly improved without waiting for AGI -- with human science and technology. 


I'm genuinely curious what you mean, and why you think so. I'm open to disagreement and pushback; that's part of why I published this post.

By all means, please fact-check away!

Tesla "autopilot" is a fancy driver-assist system. It might turn around in the future, but not with its current hardware. It's not a good way to measure self-driving progress.

Waymo has all but solved self-driving, and has been continuously improving on all the important metrics, exponentially on many of them.

I don't think I have thanatophobia. The first test that shows up on Google is kind of ridiculous. It almost asks, "Do you have thanatophobia?"

Yeah, I overestimated the quality of online tests. I guess if you had a phobia, you would know from panic attacks or strong anxiety?

What about this description of overthinking/rumination/obsession -- does it seem relevant to how you feel?

https://www.tranceformpsychology.com/problems/overthinking.html