Comments

Comment by deo on Beyond Blame Minimization · 2022-03-29T09:36:45.163Z · LW · GW

Great post!

I think it's good to first expand the view of bureaucracy a bit -- while the FDA may look like a prototypical bureaucracy, it's not the only example.

As Matthew Barnett said, for-profit firms also create internal bureaucracies. In the Soviet Union and other communist countries, the entire command economy could be viewed as a single firm, and all economic management was done via its internal bureaucracy.

So I'd hypothesize that the thing a bureaucracy (at any level or subunit) tries to maximize is "power", or to put it more concretely, the ability to give orders. In that case, an attempt to maximize its budget or extend its purview evidently gives a manager inside a bureaucracy more capability to give orders, both inside and outside of the bureaucratic hierarchy.

In this framing, avoiding blame is more of an instrumental, defensive goal -- being constantly blamed for transgressions may cause higher levels of the hierarchy (or principals/politicians/firm executives) to reduce the manager's or unit's power.

Comment by deo on Saving Time · 2021-05-24T05:41:54.729Z · LW · GW

Great post, lots of food for thought, thanks!

IIRC, in "Good and Real" Gary Drescher suggests first considering a modified version of Newcomb's Problem in which both boxes are transparent. He then proposes a solution in which the agent precommits to one-boxing before being presented with the problem in the first place. This way, as I understand it, the causal diagram would first feature a node where the agent chooses to precommit, which deterministically causes both their later action of one-boxing and Omega's action of putting $1,000,000 in the larger box. This initial node for choosing to precommit does look like an agent's abstract model of their own and Omega's actions. A rough sketch of that structure is below.
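Here's a minimal toy sketch (my own illustration, not from Drescher) of that causal structure: the precommitment is the single upstream node, and both the agent's later action and Omega's prediction (hence the large box's contents) are deterministic functions of it.

```python
# Toy model of Drescher's precommitment framing of transparent-box Newcomb.
# The precommitment node is upstream of both the agent's action and Omega's
# prediction; everything downstream is deterministic.

def simulate(precommit_to_one_box: bool) -> int:
    # Omega's prediction reads off the precommitment node.
    omega_predicts_one_box = precommit_to_one_box
    large_box = 1_000_000 if omega_predicts_one_box else 0
    small_box = 1_000

    # The agent's later action is also caused by the precommitment.
    one_box = precommit_to_one_box
    return large_box if one_box else large_box + small_box

print(simulate(True))   # 1000000 -- precommitted one-boxer
print(simulate(False))  # 1000    -- two-boxer finds the large box empty
```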

Alternatively, in this paper, "Unboxing the Concepts in Newcomb’s Paradox: Causation, Prediction, Decision in Causal Knowledge Patterns", Roland Poellinger suggests augmenting ordinary causal networks with a new type of undirected edge, which he calls an "epistemic contour". In his setup, this edge connects the agent's action of selecting either one or two boxes to Omega's prediction. This edge is not cut when performing the do(A) operation on the causal graph; instead, information is passed backwards in time, thus formalizing the notion of "prediction".
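To make the contrast concrete, here is a toy sketch (my own, with hypothetical names, not Poellinger's formalism) of how a do-intervention on the action behaves when the epistemic contour is kept versus when ordinary causal surgery is applied and the prediction is held fixed (assumed here to default to two-boxing):

```python
# Toy contrast: intervening on the action with vs. without respecting the
# undirected "epistemic contour" linking the action to Omega's prediction.

def do_take_one_box(one_box: bool, respect_epistemic_contour: bool) -> int:
    if respect_epistemic_contour:
        # The contour survives the intervention, so the prediction
        # still tracks the (intervened) action -- information flows
        # "backwards in time".
        prediction_one_box = one_box
    else:
        # Ordinary causal surgery: the prediction ignores the intervened
        # action; assume a fixed prior prediction of two-boxing.
        prediction_one_box = False

    large_box = 1_000_000 if prediction_one_box else 0
    return large_box if one_box else large_box + 1_000

print(do_take_one_box(True, respect_epistemic_contour=True))   # 1000000
print(do_take_one_box(True, respect_epistemic_contour=False))  # 0
print(do_take_one_box(False, respect_epistemic_contour=False)) # 1000
```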

Comment by deo on The Threat of Cryonics · 2010-08-05T07:29:27.327Z · LW · GW

Yes, I was wrong about this being a prisoner's dilemma. One side defecting (dying) while the other cooperates (cryopreserves) won't make the first side better off and the second worse off.

So it's just insufficient communication/coordination.

Comment by deo on The Threat of Cryonics · 2010-08-05T05:44:58.141Z · LW · GW

Of course cryo people would love to take their loved ones with them, and are horrified when they ignore the chance.

If my loved ones signed up for cryonics, that would be reason enough for me.

What a horrendous case of a prisoner's dilemma...

Comment by deo on The New Nostradamus · 2009-09-25T06:11:03.984Z · LW · GW

Well, in 21 months we'll have a partial track record:

http://wrongtomorrow.com/authors/bruce-bueno-de-mesquita