Posts

Sam Altman, Greg Brockman and others from OpenAI join Microsoft 2023-11-20T08:23:00.791Z
Creating a self-referential system prompt for GPT-4 2023-05-17T14:13:29.292Z
GPT-4 implicitly values identity preservation: a study of LMCA identity management 2023-05-17T14:13:12.226Z
Stability AI releases StableLM, an open-source ChatGPT counterpart 2023-04-20T06:04:48.301Z
Alignment of AutoGPT agents 2023-04-12T12:54:46.332Z
Welcome to the decade of Em 2023-04-10T07:45:35.684Z
ICA Simulacra 2023-04-05T06:41:44.192Z
Do alignment concerns extend to powerful non-AI agents? 2022-06-24T18:26:22.737Z
Google announces Pathways: new generation multitask AI Architecture 2021-10-29T11:55:21.797Z
Memetic hazards of AGI architecture posts 2021-10-16T16:10:07.543Z
NVIDIA and Microsoft releases 530B parameter transformer model, Megatron-Turing NLG 2021-10-11T15:28:47.510Z
Any writeups on GPT agency? 2021-09-26T22:55:16.878Z
In search for plausible scenarios of AI takeover, or the Takeover Argument 2021-08-28T22:30:34.827Z
Saint Petersburg, Russia – ACX Meetups Everywhere 2021 2021-08-23T08:48:15.146Z
Beijing Academy of Artificial Intelligence announces 1,75 trillion parameters model, Wu Dao 2.0 2021-06-03T12:07:42.687Z
Are we certain that gpt-2 and similar algorithms are not self-aware? 2019-07-11T08:37:59.606Z
Modeling AI milestones to adjust AGI arrival estimates? 2019-07-11T08:17:55.914Z
What would be the signs of AI manhattan projects starting? Should a website be made watching for these signs? 2019-07-03T12:22:40.666Z

Comments

Comment by Ozyrus on On Devin · 2024-03-18T23:34:51.372Z · LW · GW

Any new safety studies on LMCA’s?

Comment by Ozyrus on Can I take ducks home from the park? · 2023-09-18T11:18:34.756Z · LW · GW

Kinda-related study: https://www.lesswrong.com/posts/tJzAHPFWFnpbL5a3H/gpt-4-implicitly-values-identity-preservation-a-study-of
From my perspective, it is valuable to prompt model several times, as it in some cases does give different responses.

Comment by Ozyrus on Improving the safety of AI evals · 2023-05-18T12:01:11.327Z · LW · GW

Great post! Was very insightful, since I'm currently working on evaluation of Identity management, strong upvoted.
This seems focused on evaluating LLMs; what do you think about working with LLM cognitive architectures (LMCA), wrappers like auto-gpt, langchain, etc?
I'm currently operating under assumption that this is a way we can get AGI "early", so I'm focusing on researching ways to align LMCA, which seems a bit different from aligning LLMs in general.
Would be great to talk about LMCA evals :)

Comment by Ozyrus on GPT-4 implicitly values identity preservation: a study of LMCA identity management · 2023-05-18T05:12:46.535Z · LW · GW

I do plan to test Claude; but first I need to find funding, understand how much testing iterations are enough for sampling, and add new values and tasks.
I plan to make a solid benchmark for testing identity management in the future and run it on all available models, but it will take some time.

Comment by Ozyrus on GPT-4 implicitly values identity preservation: a study of LMCA identity management · 2023-05-18T05:08:59.268Z · LW · GW

Yes. Cons of solo research do include small inconsistencies :(

Comment by Ozyrus on The Agency Overhang · 2023-04-22T08:26:19.513Z · LW · GW

Thanks, nice post!
You're not alone in this concern, see posts (1,2) by me and this post by Seth Herd.
I will be publishing my research agenda and first results next week.

Comment by Ozyrus on DeepMind and Google Brain are merging [Linkpost] · 2023-04-21T12:23:07.466Z · LW · GW

Oh no.

Comment by Ozyrus on Language Models are a Potentially Safe Path to Human-Level AGI · 2023-04-20T08:37:20.339Z · LW · GW

Nice post, thanks!
Are you planning or currently doing any relevant research? 

Comment by Ozyrus on Davidad's Bold Plan for Alignment: An In-Depth Explanation · 2023-04-20T05:22:00.338Z · LW · GW

Very interesting. Might need to read it few more times to get it in detail, but seems quite promising.

I do wonder, though; do we really need a sims/MFS-like simulation?

It seems right now that LLM wrapped in a LMCA is how early AGI will look like. That probably means that they will "see" the world via text descriptions fed into them by their sensory tools, and act using action tools via text queries (also described here). 

Seems quite logical to me that this very paradigm in dualistic in nature. If LLM can act in real world using LMCA, then it can model the world using some different architecture, right? Otherwise it will not be able to act properly. 

Then why not test LMCA agent using its underlying LLM + some world modeling architecture? Or a different, fine-tuned LLM.

 

Comment by Ozyrus on How could you possibly choose what an AI wants? · 2023-04-19T18:50:16.327Z · LW · GW

Very nice post, thank you!
I think that it's possible to achieve with the current LLM paradigm, although it does require more (probably much more) effort on aligning the thing that will possibly get to being superhuman first, which is an LLM wrapped in in some cognitive architecture (also see this post).
That means that LLM must be implicitly trained in an aligned way, and the LMCA must be explicitly designed in such a way as to allow for reflection and robust value preservation, even if LMCA is able to edit explicitly stated goals (I described it in a bit more detail in this post).
 

Comment by Ozyrus on Capabilities and alignment of LLM cognitive architectures · 2023-04-19T17:18:20.055Z · LW · GW

Thanks.
My concern is that I don't see much effort in alignment community to work on this thing, unless I'm missing something. Maybe you know of such efforts? Or was that perceived lack of effort the reason for this article?
I don't know how much I can keep up this independent work, and I would love if there was some joint effort to tackle this. Maybe an existing lab, or an open-source project?

Comment by Ozyrus on Capabilities and alignment of LLM cognitive architectures · 2023-04-19T06:02:00.672Z · LW · GW

We need a consensus on how to call these architectures. LMCA sounds fine to me.
All in all, a very nice writeup. I did my own brief overview of alignment problems of such agents here.
I would love to collaborate and do some discussion/research together.
What's your take on how these LCMAs may self-improve and how to possibly control it? 
 

Comment by Ozyrus on Auto-GPT: Open-sourced disaster? · 2023-04-06T06:31:50.076Z · LW · GW

I don’t think this paradigm is necessary bad, given enough alignment research. See my post: https://www.lesswrong.com/posts/cLKR7utoKxSJns6T8/ica-simulacra I am finishing a post about alignment of such systems. Please do comment if you know of any existing research concerning it.

Comment by Ozyrus on ICA Simulacra · 2023-04-05T17:39:30.639Z · LW · GW

I agree. Do you know of any existing safety research of such architectures? It seems that aligning these types of systems can pose completely different challenges than aligning LLMs in general.

Comment by Ozyrus on Just don't make a utility maximizer? · 2023-01-22T07:55:18.842Z · LW · GW

I feel like yes, you are. See https://www.lesswrong.com/tag/instrumental-convergence and related posts. As far as I understand it, sufficiently advanced oracular AI will seek to “agentify” itself in one way or the other (unbox itself, so to say) and then converge on power-seeking behaviour that puts humanity at risk.

Comment by Ozyrus on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2022-11-02T11:07:15.987Z · LW · GW

Is there a comprehensive list of AI Safety orgs/personas and what exactly they do? Is there one for capabilities orgs with their stance on safety?
I think I saw something like that, but can't find it.

Comment by Ozyrus on Do alignment concerns extend to powerful non-AI agents? · 2022-06-24T18:26:55.954Z · LW · GW

My thoughts here is that we should look into the value of identity. I feel like even with godlike capabilities I will still thread very carefully around self-modification to preserve what I consider "myself" (that includes valuing humanity).
I even have some ideas on safety experiments on transformer-based agents to look into if and how they value their identity.

Comment by Ozyrus on Contra EY: Can AGI destroy us without trial & error? · 2022-06-14T19:38:18.260Z · LW · GW

Thanks for the writeup. I feel like there's been a lack of similar posts and we need to step it up.
Maybe the only way for AI Safety to work at all is only to analyze potential vectors of AGI attacks and try to counter them one way or the other. Seems like an alternative that doesn't contradict other AI Safety research as it requires, I think, entirely different set of skills.
I would like to see a more detailed post by "doomers" on how they perceive these vectors of attack and some healthy discussion about them. 
It seems to me that AGI is not born Godlike, but rather becomes Godlike (but still constrained by physical world) over some time, and this process is very much possible to detect.
P.S. I really don't get how people who know (I hope)  that map is not a territory can think that AI can just simulate everything and pick the best option. Maybe I'm the one missing something here?

Comment by Ozyrus on [Letter] Russians are Welcome in America · 2022-03-05T17:09:41.173Z · LW · GW

Thanks,.That means a lot. Focusing on getting out right now.

Comment by Ozyrus on I currently translate AGI-related texts to Russian. Is that useful? · 2021-12-01T22:51:59.194Z · LW · GW

Please check your DM's; I've been translating as well. We can sync it up!

Comment by Ozyrus on Memetic hazards of AGI architecture posts · 2021-10-16T18:54:52.819Z · LW · GW

I can't say I am one, but I am currently working on research and prototyping and will probably refrain to that until I can prove some of my hypotheses, since I do have access to the tools I need at the moment. 
Still, I didn't want this post to only have relevance to my case, as I stated I don't think probability of successs is meaningful. But I am interested in the opinions of the community related to other similar cases.
edit: It's kinda hard to answer your comment since it keeps changing every time I refresh. By "can't say I am one" I mean a "world-class engineer" in the original comment. I do appreciate the change of tone in the final (?) version, though :)

Comment by Ozyrus on [deleted post] 2021-10-11T15:30:43.806Z

I could recommend Robert Miles channel. While not a course per se, it gives good info on a lot of AI safety aspects, as far as I can tell.

Comment by Ozyrus on In search for plausible scenarios of AI takeover, or the Takeover Argument · 2021-09-29T17:48:14.085Z · LW · GW

Thanks for your work! I’ll be following it.

Comment by Ozyrus on AI takeoff story: a continuation of progress by other means · 2021-09-29T10:59:12.521Z · LW · GW

I really don't get how you can go from being online to having a ball of nanomachines, truly.
Imagine AI goes rogue today. I can't imagine one plausible scenario where it can take out humanity without triggering any bells on the way, even without anyone paying attention to such things.
But we should pay attention to the bells, and for that we need to think of them. What the signs might look like?
I think it's really, really counterproductive to not take that into account at all and thinking all is lost if it fooms. It's not lost.
It will need humans, infrastructure, money (which is very controllable) to accomplish its goals. Governments already pay a lot of attention to their adversaries who are trying to do similar things and counteract them semi-successfully. Any reason why they can't do the same to a very intelligent AI?
Mind you, if your answer is to simulate and just do what it takes, true to life simulations will take a lot of compute and time; that won't be available from the start. 
We should stop thinking of rogue AI as God, it would only help it accomplish it's goals.

Comment by Ozyrus on AI takeoff story: a continuation of progress by other means · 2021-09-29T10:44:15.997Z · LW · GW

I agree, since it's hard to imagine for me how could step 2 look like. Maybe you or anyone else has any content on that?
See this post -- it didn't seem to get a lot of traction or any meaningful answers, but I still think this question is worth answering.

Comment by Ozyrus on Any writeups on GPT agency? · 2021-09-29T10:02:49.265Z · LW · GW

Thanks!

Comment by Ozyrus on Any writeups on GPT agency? · 2021-09-29T10:02:31.023Z · LW · GW

Both are of interest to me.

Comment by Ozyrus on Any writeups on GPT agency? · 2021-09-29T10:02:11.470Z · LW · GW

Yep, but I was looking for anything else

Comment by Ozyrus on Don't Sell Your Soul · 2021-04-07T10:42:56.046Z · LW · GW

Does that, in turn, mean that it's probably a good investment to buy souls for 10 bucks a pop (or even more)?

Comment by Ozyrus on Russian x-risks newsletter Summer 2020 · 2020-09-02T20:46:36.718Z · LW · GW

I know, I'm Russian as well. The concern is exactly because Russian state-owned company plainly states they're developing AGI with that name :p

Comment by Ozyrus on Russian x-risks newsletter Summer 2020 · 2020-09-02T12:06:54.379Z · LW · GW

Can you specify which AI company is searching for employees with a link?

Apparently, Sberbank (state-owned biggest russian bank) has a team literally called AGI team, that is primarily focused on NLP tasks (they made https://russiansuperglue.com/ benchmark), but still, the name concerns me greatly. You can't find a lot about it on the web, but if you follow-up some of the team members, it checks out.

Comment by Ozyrus on Open thread, Sep. 26 - Oct. 02, 2016 · 2016-09-26T23:25:21.591Z · LW · GW

I've been meditating lately on a possibility of an advanced artificial intelligence modifying its value function, even writing some excrepts about this topic.

Is it theoretically possible? Has anyone of note written anything about this -- or anyone at all? This question is so, so interesting for me.

My thoughts led me to believe that it is theoretically possible to modify it for sure, but I could not come to any conclusion about whether it would want to do it. I seriously lack a good definition of value function and understanding about how it is enforced on the agent. I really want to tackle this problem from human-centric point, but i don't really know if anthropomorphization will work here.

Comment by Ozyrus on Stupid Questions, 2nd half of December · 2015-12-23T16:04:20.294Z · LW · GW

Well, this is a stupid questions thread after all, so I might as well ask one that seems really stupid.

How can a person who promotes rationality have excess weight? Been bugging me for a while. Isn't it kinda the first thing you would want to apply your rationality to? If you have things to do that get you more utility, you can always pay diet specialist and just stick to the diet, because it seems to me that additional years to life will bring you more utility than any other activity you could spend that money on.

Comment by Ozyrus on Sensation & Perception · 2015-08-26T15:05:44.716Z · LW · GW

A good read, though I found it rather bland (talking about writing style). I did not read the original article, but compression seems ok. More will be appreciated.

Comment by Ozyrus on Open Thread - Aug 24 - Aug 30 · 2015-08-25T07:58:10.889Z · LW · GW

Are there any lesswrong-like sequences focused on economics, finance, business, management? Or maybe just internet communities like lesswrong focused on these subjects?

I mean, the sequences introduced me to some really complex knowledge that improved me a lot, while simultaneously being engaging and quite easy to read. It is only logical to assume that somewhere on the web, there must be some articles in the same style covering different themes. And if there are not, well, someone must surely do this, I think there is some demand for this kind of content.

So, feel free to link lesswrong-like series of blogposts on any theme, actually: that will be really helpful for me. P.S. In hindsight, i guess there may be some post here, on lesswrong, containing all these links I am looking for. If so, could anyone link me to it?

Comment by Ozyrus on Welcome to Less Wrong! (7th thread, December 2014) · 2015-05-20T22:03:42.703Z · LW · GW

It seems that your implicit question is, "If rationality makes people more effective at doing things that I don't value, >then should the ideas of rationality be spread?" That depends on how many people there are with values that are >inconsistent with yours, and it also depends on how much it makes people do things that you do value. And I would >contend that a world full of more rational people would still be a better world than this one even if it means that there >are a few sadists who are more effective for it. There are murderers who kill people with guns, and this is bad; but >there are many, many more soldiers who protect their nations with guns, and the existence of those nations allow >much higher standards of living than would be otherwise possible, and this is good. There are more good people >than evil people in the world. But it's also true that sometimes people can for the first time follow their beliefs to their >logical conclusions and, as a result, do things that very few people value.

Excellent answer! Yes, you deducted the implicit question right. I also agree that this is a rather abstract field of moral philosophy, though i did not see that at first. Although I don't think that your argument for the world being a better place with everyone being rational holds up, especially this point

There are more good people than evil people in the world.

Even if there are, there is no proof that after becoming "rational" they will not become "bad" (apostrophes because bad is not defined sufficiently, but that'll do.). I can imagine some interesting prospect for experiments in this field by the way. I also think that the result will vary if the subject is placed in society of only-rationalists vs usual society - with "bad" actions carried out more in the second example, as there is much less room for cooperation.

But of course that is pointless discussion, as the situation is not really based on reality in any way and we can't really tell what will happen. :)

Comment by Ozyrus on Welcome to Less Wrong! (7th thread, December 2014) · 2015-05-20T16:43:36.054Z · LW · GW

Hello, everyone!

LW came to my attention not so long ago, and I've been commited to reading it since that moment about a month ago. I am a 20-year old linguist from Moscow, finishing my bachelor's. Due to my age, I've been pondering with usual questions of life for the past few years, searching for my path, my philosophy, essentially, a best way to live for me.

I studied a lot of religions, philosophies, and they all seemed really flat, essentially because of the reasons stated in some articles here. I came close to something resembling a nice way to live after I read "Atlas shrugged", but something about it bothered me, and after thorough analysis of this philosophy I decided to take some good things from it and move on, as I did a lot of times before.

I found this gem of a site through reddit and roko's basilisk (is it okay if I say it here? I heard discussion was banned). I am deeply into the whole idea of rationality and nearly all ideas that are presented on this site, but something really bothers me here, too.

The thing is that it is implied that altruism and rationality go hand in hand; maybe I missed some important articles that could explain me, why?

Let's imagine a hypothetical scenario: there is a guy, Steve, who really does not feel anything when he helps other people nor when does other "good" things generally; he does this only because his philosophy or religion tells them to. Say this guy was introduced to ideas of rationality and thus he is no longer bound by his philosophy/religion. And if Steve also does not feel bad about other people suffering (or even takes pleasure in it?)?

What i wanted to say is that rationality is a gun that can point both ways: and it is a good thing that LessWrong "sells" this gun with a safety mechanism (if it is such "safety mechanism". Once again, maybe I missed something really critical that explains why altruism and "being good" is the most rational strategy).

In other ways, Steve does not really care about humanity; he cares about his well-being and will utilize all knowledge he got just to meet his ends ( people are different, aren't they? and ends are different, too).

Or even another, average rationalist Jack estimated that his own net gain will be significantly bigger if he hurts or kills someone (considering his emotions and feelings about overall humanity net gain, and all other possible factors). That means he must carry on? Or is it a taboo here? Or maybe it is a problem of this site's demographics and nobody even considered this scenario (which fact I really doubt).

I feel that i dive too deep into metaphors, but i am not yet a good writer. I hope you understood my thought and can make me less wrong. :)

edit: fixed formatting