Posts

Interest poll: A time-waster blocker for desktop Linux programs 2024-08-22T20:44:04.479Z
Review of METR’s public evaluation protocol 2024-06-30T22:03:08.945Z
Math-to-English Cheat Sheet 2024-04-08T09:19:40.814Z
A&I (Rihanna 'S&M' parody lyrics) 2023-05-21T22:34:29.930Z
Will the first AGI agent have been designed as an agent (in addition to an AGI)? 2022-12-03T20:32:52.242Z
Is there any policy for a fair treatment of AIs whose friendliness is in doubt? 2022-11-18T19:01:40.770Z

Comments

Comment by nahoj on Interest poll: A time-waster blocker for desktop Linux programs · 2024-08-22T20:46:32.914Z · LW · GW

4/4 I need to alpha-test this right now.

Comment by nahoj on Interest poll: A time-waster blocker for desktop Linux programs · 2024-08-22T20:46:03.562Z · LW · GW

3/4 I want this, please release it!

Comment by nahoj on Interest poll: A time-waster blocker for desktop Linux programs · 2024-08-22T20:45:34.274Z · LW · GW

2/4 Maybe I'll have a look if/when it's ready.

Comment by nahoj on Interest poll: A time-waster blocker for desktop Linux programs · 2024-08-22T20:44:49.777Z · LW · GW

1/4 Not for me.

Comment by nahoj on Interest poll: A time-waster blocker for desktop Linux programs · 2024-08-22T20:44:17.250Z · LW · GW

Poll (vote with a 👍)

Comment by nahoj on Secondary forces of debt · 2024-07-01T17:19:26.545Z · LW · GW

Reading this gave me an uncomfortable moment as I considered my feelings toward all the people who expect things of me, or of whom I expect things, outside the specific context of debt.

It makes me think of the very common case of someone caring for an elderly or otherwise care-dependent relative.

But as @Dagon says, this is only one aspect of such relationships among many. In particular, taking this "as reason to avoid debt in all its forms more" sounds to me like hoping never to end up in a situation that in fact arises all the time. It would throw a lot of human interaction out with the bathwater.

I think one could consider it part of mental health to be able to make commitments without resenting them, and to manage the situations in which such resentment does arise.

Comment by nahoj on Math-to-English Cheat Sheet · 2024-04-09T19:15:00.195Z · LW · GW

Added

Comment by nahoj on Math-to-English Cheat Sheet · 2024-04-09T19:12:08.108Z · LW · GW

Thanks, I have applied most suggestions.

Indeed, I didn't choose the formulas myself; I just told GPT to produce some, then removed a few that seemed dubious or irrelevant.

Comment by nahoj on Will the first AGI agent have been designed as an agent (in addition to an AGI)? · 2022-12-10T19:16:10.787Z · LW · GW

Thank you.

Comment by nahoj on Will the first AGI agent have been designed as an agent (in addition to an AGI)? · 2022-12-10T17:08:17.710Z · LW · GW

Right. So, considering that the most advanced AIs from a leading company such as OpenAI are not agents, what do you think of the following plan to solve, or help solve, AI risk: keep building more and more powerful non-agent Q&A AIs until we have ones smarter than us, then ask them how to solve the problem. Do you think that is a safe and reasonable pursuit, or do you think we just won't reach superhuman intelligence that way?

Comment by nahoj on Will the first AGI agent have been designed as an agent (in addition to an AGI)? · 2022-12-10T13:22:37.179Z · LW · GW

I'm not sure I understand: do you mean that considering these possibilities is too difficult because there are too many, or that it's not a priority because AIs not designed as agents are less dangerous? Or both?

Comment by nahoj on Will the first AGI agent have been designed as an agent (in addition to an AGI)? · 2022-12-08T21:03:18.722Z · LW · GW

Thank you for your answer. In my example I was thinking of an AI such as a language model that has latent ≥human-level capability without being an agent, but could easily be made to emulate one just long enough to get out of the box, e.g. by duplicating itself. Do you think this couldn't happen?

More generally, I am wondering whether the field of AI safety research studies fairly specific scenarios grounded in the current R&D landscape (e.g. "a car company makes an AI to drive a car, then someone does xyz, and then paperclips") and tailor-made safety measures, in addition to more abstract ones like those in A Tentative Typology of AI-Foom Scenarios.