Can subjunctive dependence emerge from a simplicity prior?

post by Daniel C (harper-owen) · 2024-09-16T12:39:35.543Z

This is a question post.

Suppose that an embedded agent models its environment using an approximate simplicity prior: would it acquire a physicalist agent ontology or an algorithmic/logical agent ontology [LW · GW]?
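
For concreteness, one standard idealization of a simplicity prior is the Solomonoff prior, which weights each hypothesis (program) by its length:

$$M(x) \;=\; \sum_{p \,:\, U(p) \text{ outputs a string beginning with } x} 2^{-|p|}$$

where $U$ is a universal prefix machine. An "approximate" simplicity prior would then be any computable proxy for this, e.g. weighting hypotheses by their compressed description length.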

One argument for the logical agent ontology is that it lets the agent compress the parts of its observations that are subjunctively dependent: if two physical systems compute the same function, a logical agent ontology only has to store that function once, and can then model each system as an implementation of it. Under a physicalist agent ontology, by contrast, the information specifying that function is redundantly represented in the descriptions of both physical systems.
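
As a toy illustration of the compression argument, here is a minimal sketch that uses zlib's compressed size as a crude stand-in for description length (the world descriptions and function names are invented for illustration):

```python
import zlib

# Source of a function, playing the role of the computation that two
# physical systems both implement (contents are illustrative).
shared_fn = "def f(x):\n    return (x * 2654435761) % 2**32\n"
other_fn = "def g(x):\n    return ((x ^ 1234567) * 40503) % 2**16\n"

# World A: two subsystems implement the *same* function.
world_same = "subsystem_1:\n" + shared_fn + "subsystem_2:\n" + shared_fn

# World B: two subsystems implement *different* functions of similar size.
world_diff = "subsystem_1:\n" + shared_fn + "subsystem_2:\n" + other_fn

len_same = len(zlib.compress(world_same.encode()))
len_diff = len(zlib.compress(world_diff.encode()))

print(f"shared function:    {len_same} bytes")
print(f"distinct functions: {len_diff} bytes")
# World A compresses to fewer bytes: the compressor stores the function
# once and back-references the repeat, mirroring how a logical agent
# ontology stores the shared computation only once.
```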

Most of the decision theory literature seems to treat (physical) causal dependence as the default, with extra work required to formalize subjunctive dependence. But if a logical agent ontology emerges naturally from a simplicity prior, we might expect subjunctive dependence to arise by default for most agents.
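
To see what is at stake, here is a minimal sketch, using the standard twin prisoner's dilemma rather than anything from the post, of how the two ontologies disagree about counterfactuals when the opponent runs the same decision algorithm:

```python
# Row player's payoffs in a one-shot prisoner's dilemma.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def physicalist_value(my_move, twin_move):
    # Physicalist counterfactual: the twin's move is a separate physical
    # fact, held fixed while we vary our own move.
    return PAYOFF[(my_move, twin_move)]

def logical_value(my_move):
    # Logical counterfactual: both agents are instances of one function,
    # so intervening on its output changes both moves together
    # (subjunctive dependence).
    return PAYOFF[(my_move, my_move)]

for move in ("C", "D"):
    print(move,
          physicalist_value(move, twin_move="D"),  # twin's move held fixed
          logical_value(move))
# With the twin's move held fixed, defection dominates (physicalist view);
# under subjunctive dependence, cooperation scores higher (3 > 1).
```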


Thanks to Anthony Digiovanni for the discussion that inspired this post.
