Why small phenomena are relevant to morality

post by Ryo (Flewrint Ophiuni) · 2023-11-13T15:25:46.738Z · LW · GW · 0 comments


Follow-up to this post [LW · GW]


One thing that seems important to me is that optionality implies a sort of high-level game theory:

It is good that we are in a complex environment with many phenomena, even when those phenomena don't seem directly relevant (it protects us against threats from our blind spots).

This is a precautionary principle.
"What is apparently meaningless now can be meaningful in a future space-time context"

The ethical system needs to be formalized in a way that procedurally includes all phenomena,

So that it scales and fits with the events and variables of our complex reality

 



One illustration of the importance of phenomenal diversity: being in a universe with many solar systems is good because it enhances the possibility of life.


So diversity of phenomena is in itself an ethical subgoal:
Fostering the phenomena that allow the most diversity of other phenomena is good

(But we need to define complexity/diversity; I'll expand on that at the end)

 

Although those far-away/small phenomena should not be the main concern, they should always be part of the equation


The issue is not simply to make AI care about what we want, but about what "we would like to want if we had perfect knowledge about both our ideal values and the consequences of our actions"

cf. Coherent Extrapolated Volition [? · GW]

 

 

One reason I'm talking about small/far phenomena is to anticipate the possibility of rogue AI

If the fundamental layer is about phenomena diversification, ecosystemic value, etc.
-> Even a rogue AI can't maximize paperclips

Because the procedure would inherently maximize "diversity" instead


If you need to define something for a superhuman AI, you have to give a definition that takes everything into account,

You have to define optionality with the least underfitting formula possible (an underfitted definition could be interpreted in unforeseen ways)

 

An AI that has just [human optionality] in its definition (or in its limited halo of attention) may be harmful to wildlife in unforeseen ways, which would then be bad for us (and is bad in itself).
I'm trying not to simply assume "ASI will be non-myopic in a good way"

Because this AI/AGI/ASI didn't take the time and energy to dive into certain irreducible calculations you have to make in order to really care about natural ecosystems

Similarly, if you didn't care about phenomena, you might have missed important elements of the environment, of physics, etc., that will come back later on, be harmful to wildlife, and harmful to us


The point is to escape the decrease of open-endedness, or even to avoid catastrophic threats
(While staying careful not to trigger such threats through the serendipitous creation of things/chaos)

 

The procedure is simple and easy:

Make things open-ended
In a non-myopic way

 

But if I say just that, one will ask me to define it.
Then I try to define what I mean:

Access to options, affordance, autonomy, understanding.
To me this is the Occam's razor way to deal with threats

 

We stated "AI is dangerous"

Why?
Because it may frantically maximize local reward
Because it can *not care*

Okay, so we need a clear definition:
A Very Precise, Complete Definition, so that there is no possibility of misunderstanding


If we formalize a concept so neatly that the AI just has to run it,

Then it cares by default;

It's just procedurally applying this non-local, truly general value we want


(Note: defining why 1 + 1 = 2 takes a whole book)


 

 

And thus I try to detail that:

Fostering the phenomena that allow the most diversity of other phenomena is good,

With respect to what I call "endemic time rates", which are the rates at which phenomena evolve without the intervention of a strong causal power such as an AI. This implies the notion of ecosystem, etc. And that's where the technical work should come along with a proper formalism.

 

How do you make sure your equation actually increases options and is not simply a wrong model?

 

We need to write down a scale-free procedure applying to small and big phenomena.

 

What's the process to causally increase the production of options and agency?

What's the proof of maximal open-endedness?

 

-> In the best case, you're aware that you'll have unknown variables, but you have a provably optimal equation
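
As a loose illustration of what "increasing the production of options" could mean (not the formalism this post is calling for), here is a minimal Python sketch under strong assumptions: the world is a tiny discrete toy model, and optionality is crudely proxied by the number of distinct states still reachable within a fixed horizon, in the spirit of empowerment-style measures. All names here (`reachable_states`, `make_transitions`, the 1-D grid) are hypothetical illustrations.

```python
# Toy, assumption-laden sketch: optionality as the number of distinct states
# an agent can still reach within a fixed horizon (a crude empowerment-style proxy).
# The 1-D world and the horizon are made up; this is not a proposed formalism.
from collections import deque

def reachable_states(start, transitions, horizon):
    """Count distinct states reachable from `start` in at most `horizon` steps."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        state, depth = frontier.popleft()
        if depth == horizon:
            continue
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return len(seen)

# Hypothetical 1-D world: the agent can move left/right unless a wall blocks it.
def make_transitions(walls):
    def transitions(x):
        return [nxt for nxt in (x - 1, x + 1) if nxt not in walls]
    return transitions

# Comparing two candidate situations by how many options they leave open:
open_world = make_transitions(walls=set())
boxed_in = make_transitions(walls={2, -2})        # a choice that erects walls
print(reachable_states(0, open_world, horizon=5))  # 11 states reachable
print(reachable_states(0, boxed_in, horizon=5))    # 3 states reachable
```

Under this toy proxy, an option-preserving agent prefers actions whose successor states keep the reachable set large; the "unknown variables" caveat above corresponds to the fact that the transition model itself may be wrong.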



 

I regularly mention "complexity" or "sophistication". It is often argued that one should not use such vague words.

They should have a proper formalism too (for now I have a less formal definition).
What I mean by complexity is something like a combination of these two parameters (a toy sketch follows the list below):

1) The diversity of dimensions of information (patterns/stimuli) computed/circulating in a system

-> So how dissimilar they are, and the number of different categories included in the continuum

(Individual objects are a category, but one very similar to the other objects of the same meta-category)

2) The causal power of a system 

-> Qualitative and quantitative external changes effected by the system

(So how dissimilar those changes are, and the number of different categories included)
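
To make these two parameters slightly more concrete, here is a toy Python sketch under heavy assumptions: a system is summarized only by the category labels of the information circulating in it and of the external changes it operates, Shannon entropy stands in for "how dissimilar / how many categories", and the combination rule (a simple product) is an arbitrary placeholder rather than a proposed formalism. The example systems and labels are made up.

```python
# Toy sketch of the two-parameter "complexity" above, under strong assumptions:
# a system is summarized by (a) the categories of information circulating in it
# and (b) the categories of external changes it effects. Shannon entropy stands in
# for "how many different categories, and how evenly spread"; combining the two
# scores with a product is an arbitrary placeholder.
from collections import Counter
from math import log2

def category_entropy(items):
    """Shannon entropy (bits) of the category distribution of `items`."""
    counts = Counter(items)
    total = sum(counts.values())
    return sum(-(c / total) * log2(c / total) for c in counts.values())

def toy_complexity(internal_signals, external_changes):
    informational_diversity = category_entropy(internal_signals)              # parameter 1
    causal_power = category_entropy(external_changes) * len(external_changes) # parameter 2: diversity x quantity
    return informational_diversity * causal_power

# Hypothetical example: a thermostat vs. a beaver (made-up category labels).
thermostat = toy_complexity(["temperature"] * 10, ["toggle_heater"] * 4)
beaver = toy_complexity(
    ["smell", "sound", "sight", "touch", "temperature"] * 2,
    ["cut_tree", "move_mud", "build_dam", "dig_canal", "signal_alarm"],
)
print(thermostat, beaver)  # the beaver scores far higher on both parameters
```

A real formalism would have to handle continuous signals, define the category continuum rigorously, and weigh qualitative versus quantitative change; this sketch only shows how the two parameters could enter a single score.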



I try to articulate and synthesize this "open-ended phenomenal" ethics here [LW · GW].
TLDR here [LW · GW]
