Posts

Comments

Comment by Patodesu on MIRI 2024 Communications Strategy · 2024-06-04T23:36:26.974Z · LW · GW

Cool, so MIRI is focusing on passive public support, while PauseAI and others focus on active public support.

Now, can an org focus on lobbying for pausing/stopping (or red lines for kill switches), then?

Comment by Patodesu on [deleted post] 2023-12-26T18:25:52.160Z

I'm interested in what people think are the best ways of doing advocacy in a way that gives more weight to the risks than to the (supposed) benefits.

Talking about all the risks? Focusing on the expert polls instead of the arguments?

Comment by Patodesu on What are the best non-LW places to read on alignment progress? · 2023-07-07T02:05:29.854Z · LW · GW

Some people post about AI Safety in the EA Forum without crossposting here

Comment by Patodesu on [Linkpost] "Governance of superintelligence" by OpenAI · 2023-05-23T17:51:43.056Z · LW · GW

When they say "stopping", I think they mean stopping it forever, as opposed to slowing down, regulating, or even pausing development.

And that, I think, is something pretty much everyone agrees on.

Comment by Patodesu on The self-unalignment problem · 2023-04-15T19:02:52.526Z · LW · GW

I think there are two different misalignments that you're talking about. So you could say there are actually two different problems that are not receiving enough attention.

One is obvious and is between different people. 

And the other is inside every person: the conflict between different preferences, and not knowing what they are or how to aggregate them into what we actually want.

Comment by Patodesu on [deleted post] 2023-03-08T22:50:37.515Z

Human empowerment is a really narrow target too.

Comment by Patodesu on My Model Of EA Burnout · 2023-01-30T04:51:28.959Z · LW · GW

I'm kinda new here, so where does all this EAF fear come from?

Comment by Patodesu on (My understanding of) What Everyone in Technical Alignment is Doing and Why · 2022-10-06T06:18:07.668Z · LW · GW

Even if you think S-risks from AGI are 70 times less likely than X-risks, you should consider how many times worse they would be. For me, they would be several orders of magnitude worse.

Comment by Patodesu on All AGI safety questions welcome (especially basic ones) [Sept 2022] · 2022-09-08T23:54:02.599Z · LW · GW

Can non-reinforcement-learning systems (supervised or unsupervised learning) become AGI/superintelligent and take over the world? If so, can you give an example?