The Alignment Agenda THEY Don't Want You to Know About

post by MadHatter · 2023-11-30T04:29:27.784Z · LW · GW · 16 comments

The title of this post is completely tongue-in-cheek. I have been advised to lean into the unpopularity of my opinions, so that's where it came from.

In this post we lay out perhaps the most surprising prediction of the ethicophysics: that any solution to the alignment problem will be wildly unpopular on LessWrong when it is first posted. This should surprise you. LessWrong has mortgaged everything else it holds dear to prioritize solving the alignment problem, so why would it react poorly to someone actually doing so?

Our model has the following components:

16 comments

Comments sorted by top scores.

comment by alexgieg · 2023-11-30T13:56:32.630Z · LW(p) · GW(p)

When this person goes to post the answer to the alignment problem to LessWrong, they will have low enough accumulated karma that the post will be poorly received.

I don't think this is accurate; it depends more on how it's presented.

In my experience, if someone posts something controversial relative to the general LW consensus but argues carefully and in detail, addressing the likely conflicts and recognizing where, how, and why their position differs from that consensus, then it's well received; in short, doing the hard work of properly presenting it pays off. It may earn downvotes on the agreement axis, which is natural and expected, but it also earns karma upvotes for the effort put into presenting the point, and those who disagree tend to engage with the author to explain their points of disagreement.

Your point would be valid on most online forums, where people who aren't as careful about arguments as LWers tend to conflate disliking with disagreeing, with the result that a downvote is a downvote is a downvote. Most LWers, in contrast, are well practiced at treating the two axes as orthogonal, and it shows.
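To make the two-axis point concrete, here's a minimal sketch in Python (the field names are hypothetical, not LW's actual schema):

```python
from dataclasses import dataclass

@dataclass
class CommentVotes:
    """Two independent voting axes, roughly as on LessWrong.

    Field names are hypothetical illustrations, not LW's real schema.
    """
    karma: int      # "was this well argued and worth reading?"
    agreement: int  # "do I think the claim is true?"

# A carefully argued but contrarian comment can score well on one
# axis and poorly on the other:
contrarian = CommentVotes(karma=12, agreement=-8)

# A forum with a single axis collapses both judgments into one number,
# so disliking and disagreeing become indistinguishable.
single_axis_score = contrarian.karma + contrarian.agreement  # 4
print(contrarian, single_axis_score)
```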

Replies from: MadHatter
comment by MadHatter · 2023-11-30T15:39:17.683Z · LW(p) · GW(p)

That's pretty fair, and an argument for me to be less trollish in my presentation. I have strong-agreed with you.

comment by Said Achmiz (SaidAchmiz) · 2023-11-30T07:38:21.754Z · LW(p) · GW(p)

What the heck is “ethicophysics”?

Replies from: MadHatter
comment by MadHatter · 2023-11-30T07:47:39.794Z · LW(p) · GW(p)

A novel theory of the interactions between the physical and cybernetic components of a cyberphysical system. Please see the sequence I have published for more details.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-11-30T08:00:27.966Z · LW(p) · GW(p)

All I can find is this post [LW · GW], which links to a Substack post, which links to an Academia.edu page, which links to a PDF… but doesn’t let me view or download the PDF unless I log in.

Do you have an explanation of “ethicophysics” available somewhere… more accessible?

Replies from: MadHatter, rsaarelm
comment by MadHatter · 2023-11-30T08:06:11.925Z · LW(p) · GW(p)

Look at the most recent post on my Substack, which links to this GitHub repo: https://github.com/epurdy/ethicophysics

comment by rsaarelm · 2023-11-30T08:29:02.573Z · LW(p) · GW(p)

The PDF is shown in full for me when I scroll down the academia.edu page; here's an archive.is capture in case this is some sort of intermittent A/B testing thing.

comment by Chris_Leong · 2023-11-30T08:17:04.269Z · LW(p) · GW(p)

Some thoughts, since you asked for feedback elsewhere:

1) Part of the reason this post is likely being downvoted is the clickbait title. This is not looked upon favorably on Less Wrong.
2) You make some pretty good points in this post, but you state them far too confidently, almost as if they were a mathematical proof. If you want your posts to do well on Less Wrong, try not to make strong claims without correspondingly strong evidence.

comment by interstice · 2023-11-30T04:49:27.450Z · LW(p) · GW(p)

Basically, people will reason, if this guy was about to knock a baseball into outer space, wouldn’t we already know his name and have his rookie card?

This is the crux for me. I think a person capable of making a significant contribution to alignment would probably also be capable of making some sort of smaller but more legible/uncontroversial contribution to show their competence. To go with the Einstein example, he got a PhD in physics before producing his groundbreaking results, and then was able to present those results in a format that was accepted by the physics establishment.

Replies from: MadHatter
comment by MadHatter · 2023-11-30T05:01:18.452Z · LW(p) · GW(p)

I have a PhD in Computer Science (2013, University of Chicago). My dissertation was entitled "Grammatical Methods in Computer Vision". My master's thesis was in complexity theory and was entitled "Locally Expanding Hypergraphs and the Unique Games Conjecture". I also have one publication in ACM Transactions on Computation Theory, on proving lower bounds in a toy model of computation.

I am an Engineering Fellow at [redacted] AI. My company went to Series A while I was leading its machine learning team. (I have since transitioned to being an individual contributor, because management sucks and is boring and I'm no good at it.) My company has twice received the most prestigious award handed out in its industry. I hold multiple patents related to my contributions at [redacted] AI. 

I hold a patent for my work at Vicarious, where I was a senior researcher.

At one point, I quit my job and started a generative AI startup dedicated to providing psychotherapy. This model is online, and I can share a link to it in a DM if you are interested.

The state-sponsored German physics establishment famously sneered at Einstein's work; the Nazi regime derided it as degenerate, "Jewish" physics. Sure, everyone whom we now actually respect recognized the value of his work after he started predicting novel astronomical phenomena. But it's not like he could ever have gotten a job at a German university while the Nazis were in charge.

Maybe the problem is my poor writing and sloppy craftsmanship, but maybe it is also partly a matter of LessWrong expecting the solution to the alignment problem to come with far less emotionally charged language and politically charged content than it logically must come with?

Replies from: interstice
comment by interstice · 2023-11-30T05:39:16.548Z · LW(p) · GW(p)

Interesting; this is more competence-requiring stuff than I expected.

comment by Odd anon · 2023-11-30T08:23:20.427Z · LW(p) · GW(p)

When this person goes to post the answer to the alignment problem to LessWrong, they will have low enough accumulated karma that the post will be poorly received.

Does an author's low karma actually cause their posts to be received more poorly? The author's karma isn't visible anywhere on the post, or even in the hover-tooltip by the author's name. (One has to click through to the profile to find out.) Even if readers did know the author's karma, would that really stop them from judging the post by its content? I would be surprised.

Replies from: MadHatter
comment by MadHatter · 2023-11-30T08:47:16.158Z · LW(p) · GW(p)

Well, I'm talking more about the reputation one has in the alignment community, since that's what's actually relevant to how a post is received. I have no idea what my reputation is like, but it would almost have to be "total unknown".

comment by MadHatter · 2023-11-30T07:30:05.232Z · LW(p) · GW(p)

Since predictions about the past are cheap, let's make some ethicophysical predictions about the future.

My work will remain unpopular until it is endorsed by someone who is high status (say, someone with karma comparable to John Wentworth's), and will not truly enter the LessWrong canon until Eliezer Yudkowsky seriously engages with it.

These predictions are pretty obviously true, so I don't claim many Brier points for making them. But then, I can't claim many Brier points for much of the ethicophysics either, since it's really just a formalization of common sense applied to the moral domain.
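For readers unfamiliar with the scoring metaphor: a Brier score is the squared error between a stated probability and the eventual 0/1 outcome, so a prediction everyone already assigns high probability earns essentially the same score for everyone when it comes true. A minimal sketch in Python, with hypothetical probabilities chosen only to illustrate the point:

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome.

    Lower is better: 0.0 is a perfect forecast, 0.25 matches an
    uninformative 50/50 guess, 1.0 is full confidence in the wrong answer.
    """
    return (forecast - outcome) ** 2

# An "obviously true" prediction that everyone assigns ~0.95 scores
# well for everyone alike when it comes true, so being right earns
# no comparative credit over other forecasters.
print(brier_score(0.95, 1))  # 0.0025
print(brier_score(0.50, 1))  # 0.25
```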

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2023-11-30T23:46:30.572Z · LW(p) · GW(p)

I think it's more likely you just need to explain yourself better. try making a single post that is not a linkpost, does not ask the reader to read anything else, and is less than 5k words, and that explains your ideas end to end without claiming them correct; simply describe what the proposal is without asserting results you don't have. if you have more than one author, you can use the multiple-author "we", but use "I" otherwise. in other words, stop propping your ideas up with pompous writing and just explain yourself already.

Replies from: MadHatter
comment by MadHatter · 2023-12-01T02:12:30.348Z · LW(p) · GW(p)

Here is the best I could muster on short notice: https://bittertruths.substack.com/p/ethicophysics-for-skeptics

Since I'm currently rate-limited, I cannot post it officially.