Ethicophysics II: Politics is the Mind-Savior

post by MadHatter · 2023-11-28T16:27:19.233Z · LW · GW · 9 comments

This is a link post for https://bittertruths.substack.com/p/ethicophysics-ii-affilliation-economics

We present an ethicophysical treatment of the nature of truth and consensus reality, within the framework of a historical approach to ludic futurecasting modeled on the work of Hegel and Marx. We prove an ethicophysical approximate conservation law: the Conservation of Bullshit. We conclude with a lengthy list of open questions, homework problems, and assigned readings, focused on many large and weighty questions about history and politics. We do not presume to know the answers to any of these questions, but we invite interested readers to submit answers, either as comments on this post or on my Substack, as direct messages on LessWrong, or by email.

We hope that, by providing a mathematically rigorous treatment of naturalistic game theory and connecting it to these weighty political and historical questions, we can encourage people to take history and morality more seriously without starting any unproductive flamewars. I will be setting this post's moderation policy to Reign of Terror to encourage the LessWrong moderators to enforce these norms to the limits of their ability and discretion.

We provide the list of open questions, homework exercises, and assigned reading from the PDF, in order to facilitate productive discussion on LessWrong. Please share your answers or work-in-progress for any of the following questions in the comments:

9 comments

Comments sorted by top scores.

comment by Dagon · 2023-11-28T17:08:30.956Z · LW(p) · GW(p)

Do you actually want discussion on LW, or is this just substack spam?  If you want discussion, you probably should put something to discuss in the post itself, rather than a link to a link to a PDF in academic language that isn't broken out or presented in a way that can be commented upon.

From a very light skim, it seems like your "mathematically rigorous treatment" isn't.  It includes some equations, but not much of a tie between the math and the topics you seem to want to analyze.  It deeply suffers from is-ought confusion.

Replies from: MadHatter, MadHatter
comment by MadHatter · 2023-11-28T18:50:01.987Z · LW(p) · GW(p)

I actually want discussion on LW. I'll post my list of open questions as a comment on this post and encourage people to respond to it by taking a crack at them.

Replies from: MadHatter
comment by MadHatter · 2023-11-28T19:00:50.024Z · LW(p) · GW(p)

Edited the post to include the homework/discussion questions from the PDF. Hopefully these questions are substantive enough, and phrased in neutral enough language, that people feel comfortable discussing politics on an explicitly apolitical forum?

comment by MadHatter · 2023-11-28T19:06:49.714Z · LW(p) · GW(p)

Also, any solution to the alignment problem must suffer from is-ought confusion when presented in plain language rather than as extensively worked-out theoretical equations with extensive empirical verification.

Which part would you have me remove: the plain language, the extensively worked-out theoretical equations, or my list of open problems that I hope people will use to help me assemble extensive empirical verification of my work?

Replies from: Dagon, Dagon
comment by Dagon · 2023-12-01T05:38:01.443Z · LW(p) · GW(p)

On reflection, I suspect that I'm struggling with the is-ought problem in the entire project.  Physics is "is" and ethics is "ought", and I'm very skeptical that "ethicophysics" is actually either, let alone a bridge between the two.

Replies from: MadHatter
comment by MadHatter · 2023-12-01T12:01:57.184Z · LW(p) · GW(p)

That's fair (strong up/agree vote).

If you consult my recent shortform, I lay out a more measured, skeptical description of the project. Basically, ethicophysics constitutes a globally computable Schelling Point, such that it can be used as a protocol between different RL agents that believe in "oughts" to achieve Pareto-optimal outcomes. As long as the largest coalition agrees to prefer Jesus to Hitler, I think (and I need to do far more to back this up) defectors can be effectively reined in, the same way that Bitcoin works because the majority of the computers hooked up to it don't want to destroy faith in the Bitcoin protocol.
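For concreteness, here is a minimal toy sketch of that coalition-enforcement argument in Python. Everything in it (the payoff numbers, the `round_payoffs` helper) is invented for illustration and is not drawn from the linked PDF; it only shows the arithmetic of why defection stops paying once the norm-following coalition is large enough.

```python
# Toy model of majority-coalition norm enforcement (illustrative numbers only).
# Each agent either follows a shared norm ("cooperates") or defects for a
# private gain; every norm-follower pays a small cost to punish each defector.

GAIN = 5.0          # private gain from defecting, per round (assumed)
PUNISH = 0.2        # penalty each coalition member imposes on each defector
PUNISH_COST = 0.05  # cost a coalition member pays per defector it punishes

def round_payoffs(n_agents: int, n_defectors: int) -> tuple[float, float]:
    """Return (payoff per defector, payoff per cooperator) for one round."""
    n_cooperators = n_agents - n_defectors
    defector_payoff = GAIN - PUNISH * n_cooperators
    cooperator_payoff = -PUNISH_COST * n_defectors
    return defector_payoff, cooperator_payoff

if __name__ == "__main__":
    n = 100
    for n_defectors in (1, 10, 50, 80):
        d, c = round_payoffs(n, n_defectors)
        print(f"{n_defectors:3d} defectors: defector {d:+6.2f}, cooperator {c:+6.2f}")
    # Defection only becomes profitable once the coalition shrinks below
    # GAIN / PUNISH = 25 norm-followers -- the analogue of Bitcoin's honest
    # majority keeping attacks on the protocol unprofitable.
```

The design choice here is the standard one from public-goods games with punishment: the defector's gain is fixed while the punishment scales with coalition size, so a large enough coalition makes defection a losing move.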

comment by Dagon · 2023-11-30T16:19:27.265Z · LW(p) · GW(p)

I suspect we have a disagreement about whether the "worked out theoretical equations" suffer from is-ought confusion any less than the plain language version. And if they are that fundamentally different, why should anyone think the equations CAN be explained in plain language?

I am currently unwilling to put in the work to figure out what the equations are actually describing. If they are not describing the same thing as the plain language claims (though with more rigor), that seriously devalues the work.

Replies from: MadHatter
comment by MadHatter · 2023-11-30T16:24:31.045Z · LW(p) · GW(p)

Check out my post "Enkrateia" in my sequence. It is a plain-language account of a safe model-based reinforcement learner, using established academic language and frameworks.

comment by the gears to ascension (lahwran) · 2023-11-30T04:30:25.701Z · LW(p) · GW(p)

Vote towards zero. I don't know what you're trying to say and would appreciate much more explanation without having to browse your other page.