Posts

What do you think will most probably happen to our consciousness when our simulation ends? 2022-04-12T08:23:17.859Z
Is there a possibility that the upcoming scaling of data in language models causes A.G.I.? 2022-04-08T06:56:50.146Z

Comments

Comment by ArtMi (richard-ford) on Takeoff speeds have a huge effect on what it means to work on AI x-risk · 2022-04-15T09:47:29.835Z · LW · GW

I agree that the thinking-on-the-margin version of alignment research is crucial, and that it is one of the biggest areas of opportunity for increasing the probability of success. Given the seemingly low current probability, it is at least worth trying.

 

In the context of the general public, one of the premises is whether they could provide any benefit to the problem. My intuition is that AI alignment should be debated much more in universities and in the technology industry. The current lack of awareness is concerning, even unbelievable.

We should evaluate the outcomes of raising awareness more thoroughly, taking into account all the options: a general (public) awareness strategy, a partial (experts-only) strategy, and the spectrum of variables in between. It seems that current safe-alignment leaders have strong reasons not to focus on expanding awareness, or even to dismiss the strategy as not possibly useful. I believe these reasons should not be treated as fixed; they should be debated more, rather than written off as too hard to implement.

We can't assume that someone capable of solving safe alignment is also aware of the problem. It seems probable that anyone capable of solving safe alignment currently doesn't understand its true magnitude. In that likely case, a necessary step on the path to success is for them to understand the problem, and we can be crucial in making that happen. I understand that with this strategy, as with many safe-alignment strategies, the probability that it reduces rather than increases our chances of success must be carefully evaluated.

 

In the current alignment-research context, there is also a possible positive opportunity in taking more thinking-on-the-margin approaches. The impact of present and near-future AI systems on AGI and safe alignment is very likely of high importance, more so than its current share of attention suggests, because these systems are very likely to shorten timelines (by how much is an important and currently ambiguous question). It seems we are not sufficiently evaluating the probably crucial impact of current deep models on the problem, but I'm glad the idea is gaining ground.

paulfchristiano (2022) states: "AI systems could have comparative disadvantage at alignment relative to causing trouble, so that AI systems are catastrophically risky before they solve alignment." I agree this is one of the most important issues: if AI systems are capable of improving safe-alignment research, they will very likely be even more capable of improving non-safe/default AI research, and probably superintelligence-creation research. This means the technology most likely to be crucial to the birth of superintelligence also lowers the probability of safe alignment. So two crucial questions are: How do we fight this? And, more fundamentally: How, and how much, can current and near-future AI systems improve AGI creation?

 

Now I will propose a polemical thinking-on-the-margin tactic for AI safe alignment: the (I argue highly) probable ideation of new, different, or better safe-alignment strategies by safe-alignment researchers taking advantage of stimulants and hallucinogens. We are definitely in a situation where we must take any advantage we can. Non-ordinary states of consciousness are very much worth trying given the almost negligible risks involved. (The same may apply to nootropics, but I'm barely familiar with them.)

 


Finally, I will share what I believe should currently be the most important issue for all versions of alignment research, and which sits above all the previous ideas: if trying to safely align will almost certainly not solve our x-risk, as EY states in "MIRI announces new 'Death With Dignity' strategy", then all it will have achieved is higher s-risk probabilities. (THANK YOU for the info hazards T.T). So one option is to aim to shorten the x-risk timeline, if that reduces the probability of s-risks: helping to build the superintelligence as soon as possible.
Another is to shift the whole strategy toward lowering s-risks. This is especially and highly relevant to us, because we have a higher probability of s-risk (thanks e.e). So we should focus on the issues that have increased our s-risk probabilities.


 

Comment by ArtMi (richard-ford) on What do you think will most probably happen to our consciousness when our simulation ends? · 2022-04-12T09:12:07.959Z · LW · GW

Under this premise, the "Creator" of our simulation seems not to share our ethical values.

This can be supported by the following premises:
A) A superintelligence can (easily) create simulations.
B) It is (really) hard to align a superintelligence with our ethical values.
C) There is suffering in our reality.

Each of these seems highly probable.

Comment by ArtMi (richard-ford) on Productive Mistakes, Not Perfect Answers · 2022-04-08T07:59:06.359Z · LW · GW

We don't know the status or evolution of MIRI's internal research, or of independent/individual safe-alignment research by LW members.

But it seems that A.G.I. has a (much?) higher probability of being invented elsewhere.

So the problem is not only discovering how to safely align A.G.I., but also inventing A.G.I.

Inventing A.G.I. seems to be a step that comes before discovering how to safely align A.G.I., right?
 

How probable is it estimated to be that the first A.G.I. will be the Singularity? Isn't it a spectrum? The answer probably lies in the take-off speed and acceleration.

If anyone could provide resources on this it would be much appreciated. 
 

Comment by ArtMi (richard-ford) on MIRI announces new "Death With Dignity" strategy · 2022-04-05T06:39:48.048Z · LW · GW

What other activities?

Comment by ArtMi (richard-ford) on MIRI announces new "Death With Dignity" strategy · 2022-04-04T05:25:16.760Z · LW · GW

"We can get lots of people to show up at protests and chant slogans in unison. I'm not sure how this solves technical problems."
-Consider the case in which there is someone on the planet who could solve alignment but still doesn't know about the problem. If so, this could be one of the ways to find him/her/them (we must estimate the probability of success along whichever path we take, and whether such people exist and where). It could bring more intellectual resources into technical safety research, via a social media campaign. And if it just accelerates the doom, weren't we still doomed anyway?

People should know, and deserve to know, the probable future. A surviving timeline could come from a major social revolt, and a social revolt arising from the low probability of alignment is possible.


So we must evaluate the probability of success of:

1) Creating loudness, and how.
2) Continuing to try relatively silently.


 

Comment by ArtMi (richard-ford) on MIRI announces new "Death With Dignity" strategy · 2022-04-02T19:00:19.677Z · LW · GW

Shouldn't we implement a loud strategy?

One of the biggest problems is that we haven't been able to reach and convince many people. This would be done most easily through a more efficient route: someone who already understands the importance of this issue to a certain level and has great power to act. I am talking about Elon Musk. If we showed him the dangerous state of the problem, he could be convinced to give it a more central focus. It is aligned with his mindset.

If one of the wealthiest and most influential people on the planet already cares about the problem, we must show him that the issue is even greater than he thinks and that it is important to act now. And not only him: there are many other technology and scientific leaders who would act if shown the serious existential risk we are facing.


Also, in the context of a loud strategy, I argue for making noise in the streets. If we must go to Congress, we should. If we must go to the Tesla, Amazon, etc. headquarters, we should. If we must fund and implement a public campaign, we should. It is worth trying.

Comment by ArtMi (richard-ford) on Anti-Aging: State of the Art · 2022-04-01T06:21:06.704Z · LW · GW

I might argue against the anti-aging field (though, now that I reflect on it, it probably seems more important than most non-essential productivity in society) that it is much more probable that we will reach an AGI singularity within our lifetimes, before we die of aging (depending on your age).
What do you think?