Posts

arisAlexis's Shortform 2023-03-24T17:12:04.655Z
Why kill everyone? 2023-03-05T11:53:23.823Z
The burden of knowing 2023-02-28T18:40:24.567Z

Comments

Comment by arisAlexis (arisalexis) on Many arguments for AI x-risk are wrong · 2024-03-20T11:08:14.802Z · LW · GW

I think the fact that we have driven species extinct is a binary outcome that supports my argument. Why would the exact count matter? The fact alone shows that we can be exterminated.

Comment by arisAlexis (arisalexis) on If you weren't such an idiot... · 2024-03-09T18:27:23.890Z · LW · GW

I am really cautious about saying there are only two things on the list I am not doing, and I got a weird feeling when I ticked most of the boxes. Has anybody else had this feeling? (OK, I don't use pens and I don't have a consistent mentor.)

Comment by arisAlexis (arisalexis) on Many arguments for AI x-risk are wrong · 2024-03-09T18:15:28.248Z · LW · GW

Can you explain why "sub" is the most likely outcome, given that humans have made thousands of animal species extinct? Not semi-extinct; we made them 100% extinct.

Comment by arisAlexis (arisalexis) on Many arguments for AI x-risk are wrong · 2024-03-07T15:40:31.033Z · LW · GW

I don't want to be nihilistic about your article, but I stopped reading at the first paragraph because I disagree (along with others) on the most important point: it doesn't matter if the risk can't be proved. Since the outcome is unknown, the risk is unknown, and the risk includes possible annihilation (given the prior of intelligence in nature annihilating other species), the burden is on proving the opposite: safety. So any argument that the risk arguments are wrong is an error in itself.

Comment by arisAlexis (arisalexis) on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-13T10:26:58.440Z · LW · GW

Great article. I hope you realize your startup research/idea. One comment: I think the salaries derail the whole budget plan. From what I have seen of the startup world I have been involved in, founders make big sacrifices to get their thing going in return for a large equity stake in a startup they believe will someday become a unicorn.

Comment by arisAlexis (arisalexis) on You can just spontaneously call people you haven't met in years · 2023-11-18T14:53:47.676Z · LW · GW

How about texting vs. calling? Pros/cons? I frequently text people from my past, but I find calling a bit more awkward/intrusive.

Comment by arisAlexis (arisalexis) on I think I'm just confused. Once a model exists, how do you "red-team" it to see whether it's safe. Isn't it already dangerous? · 2023-11-18T14:38:33.554Z · LW · GW

We just "hope" that we will get first something that is dangerous but cannot outpower everyone, just trick some and then the rest will stop it. In your scenario, we are screwed yes. That's what this forum is about isn't it ;)

Comment by arisAlexis (arisalexis) on Impending AGI doesn’t make everything else unimportant · 2023-09-05T09:28:31.884Z · LW · GW

Regardless of content, I would say that I, along with what I suspect is the majority of people, have a natural aversion to titles starting with "No." It is confrontational and shows that the author has a strong conviction about something that is clearly not binary, and wants to shove the negative word in your face right from the start. I would urge everyone to refrain from using a title like that.

Comment by arisAlexis (arisalexis) on arisAlexis's Shortform · 2023-07-30T12:22:05.921Z · LW · GW

Has anyone seen MI7? I guess Tom is not the most popular guy on this forum, but the storyline of a rogue AI as presented (within the limits of a Mission: Impossible blockbuster) sounds not only plausible but also like a great story for raising public awareness of the dangers. It touches on the inability of governments to stop it (although obviously it will be stopped in the upcoming movie), their eagerness to control it in order to rule the world while the AI just wants to bring chaos (or does it have an ultimate goal?), and how some humans will align with and obey it even if it leads them to their own doom. Thoughts?

Comment by arisAlexis (arisalexis) on Cryonics and Regret · 2023-07-25T11:34:08.474Z · LW · GW

Can you explain your calculations? Isn't cryo around 50k right now?

Comment by arisAlexis (arisalexis) on Cryonics and Regret · 2023-07-24T20:57:00.847Z · LW · GW

A psychologist trained in dealing with feelings of remorse and loss could help you. I know you know, but sometimes it's good to be told.

Comment by arisAlexis (arisalexis) on Cryonics and Regret · 2023-07-24T20:56:00.223Z · LW · GW

Still, you can try to persuade them so you do not feel remorse afterwards.

Comment by arisAlexis (arisalexis) on Cryonics and Regret · 2023-07-24T20:54:57.104Z · LW · GW

Are you taking any steps to preserve your mother's data? Can you explain how?

Comment by arisAlexis (arisalexis) on The bullseye framework: My case against AI doom · 2023-06-01T15:36:10.364Z · LW · GW

"I suspect that AGI is decades away at minimum". But can you talk more about this? I mean if I say something against the general scientific consensus which is a bit blurry right now but certainly most of the signatories of the latest statements do not think it's that far away, I would need to think myself to be at least at the level of Bengio, Hinton or at least Andrew Ng. How can someone that is not remotely as accomplished as all the labs producing the AI we talk about can speculate contrary to their consensus? I am really curious. 

Another example would be me, as someone interested in geopolitics, thinking that the USA is making such-and-such a mistake in Ukraine. The truth is that there are many think tanks with insider knowledge and a lifetime of training that concluded this is the best course of action, so I would certainly express my opinion only in very low-probability terms and without drawing conclusions from it, because the consequences can be very grave.

Comment by arisAlexis (arisalexis) on arisAlexis's Shortform · 2023-05-24T08:34:21.341Z · LW · GW

After Hinton's and Bengio's articles, which I consider a moment in history, I struggle to understand how most people in tech dismiss them. If Einstein had written an article about the dangers of nuclear weapons in 1939, you wouldn't have had people without a physics background saying "nah, I don't understand how such a powerful explosion could happen." Hacker News is supposed to be *the* place for developers, startups, and such, and you can see comments there that make me despair. They range from "alarmism is boring" to "I have programmed MySQL databases and I know tech, and this can't happen." I wonder how much I should update my view on the intelligence and biases of humans right now.

Comment by arisAlexis (arisalexis) on Mental Health and the Alignment Problem: A Compilation of Resources (updated April 2023) · 2023-05-15T11:59:27.054Z · LW · GW

I think the Stoics (Seneca's letters, Meditations) talk a lot about how to live in the moment while awaiting probable death. The classic psychology book The Denial of Death would also be relevant. I guess The Myth of Sisyphus would be relevant too, but I haven't read it yet. The Metamorphosis of Prime Intellect is also a very interesting book, dealing with mortality being preferable to immortality and so on.

Comment by arisAlexis (arisalexis) on Mental Health and the Alignment Problem: A Compilation of Resources (updated April 2023) · 2023-05-11T09:41:28.962Z · LW · GW

I think there is an important paragraph missing from this post about books related to Stoicism, existential philosophy, etc.

Comment by arisAlexis (arisalexis) on Stop trying to have "interesting" friends · 2023-04-20T11:06:24.605Z · LW · GW

But sometimes something happens in the world and your "best man, always fun, forever" friends can't seem to understand reality. They think it happened because God wanted it this way, or because there is a worldwide conspiracy of Jews. Then you feel really alone.

Comment by arisAlexis (arisalexis) on AI Takeover Scenario with Scaled LLMs · 2023-04-17T18:43:22.806Z · LW · GW

The Metamorphosis of Prime Intellect is an excellent book.

Comment by arisAlexis (arisalexis) on arisAlexis's Shortform · 2023-03-24T17:12:05.115Z · LW · GW

What is the duration of P(doom)? 

What do people mean by that metric? Is it x-risk for the century? Forever? For the next 10 years? Until we figure out AGI, or after AGI on the road to superintelligence?

To me it's fundamentally different, because P(doom) forever must be much higher than doom over the next 10-20 years. Or is it implied that if we survive the next period, it means we have figured out alignment eternally, for all the next generations of AIs? It's confusing.

Comment by arisAlexis (arisalexis) on AI: Practical Advice for the Worried · 2023-03-02T09:58:03.036Z · LW · GW

Thank you. This is the kind of post I wanted to write when I posted "The burden of knowing" a few days ago, but I was not thinking rationally at that moment.