Challenge to the notion that anything is (maybe) possible with AGI

post by Remmelt (remmelt-ellen), flandry19 · 2023-01-01T03:57:04.213Z · LW · GW · 4 comments

This is a link post for https://mflb.com/ai_alignment_1/af_proof_irrationality_psr.html


4 comments

Comments sorted by top scores.

comment by Shmi (shminux) · 2023-01-01T05:00:30.230Z · LW(p) · GW(p)

I clicked on the link, so you don't have to: content-free stream of consciousness.

Replies from: quintin-pope
comment by Quintin Pope (quintin-pope) · 2023-01-01T06:18:08.782Z · LW(p) · GW(p)

Not entirely content-free, but very much stream of consciousness and thoroughly mistaken. The main argument seems to be:

  • Making logical arguments requires accepting some (unspecified) axioms about "symmetry".
  • These axioms must be accepted with 100% credence.
  • This conflicts with the common (though not universally accepted) LW position that nothing can be known with literally 100% credence.
  • Until LW accepts the author's preferred epistemology, there's little point in engaging directly with discussion on LW.
  • Thus, there's no point in writing up an actual proof of their claim that alignment is impossible.

The author is also pretty rude, repeatedly calling rationalists unreasonable and irrational, and generally seems very offended over a difference in opinion about a fairly trivial (IMO) point of epistemology.

Relevant context: other work by the author was linked previously [LW · GW], and Paul Christiano said that work seemed "cranky", so I don't hold the author's abrasiveness fully against him.

I still singly-downvoted this post because I think the core of the provided argument is extremely weak. As far as I saw, the author just repeatedly asserts that performing logical reasoning implies you should assign 100% confidence to at least some claims, and that rationalists are completely irrational for thinking otherwise. All the while, the author made no reference whatsoever to preexisting work in this area. For example, MIRI's Logical Induction paper directly explains one way to have coherent uncertainties over logical / mathematical facts, as well as over the limits of one's own reasoning process, despite Gödel incompleteness.
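As a minimal toy sketch of that point (this is not the logical induction construction itself, and the Fibonacci claim below is an arbitrary illustrative example): an agent with bounded compute can coherently hold a credence strictly between 0 and 1 about a purely logical fact, and then update once it actually does the computation.

```python
# Toy sketch, not MIRI's logical induction algorithm: a bounded reasoner can
# coherently assign a credence strictly between 0 and 1 to a purely logical
# claim, then update once it pays the cost of checking.
#
# Illustrative claim (chosen arbitrarily): "the last decimal digit of the
# 1,000,000th Fibonacci number is 7."

N = 1_000_000
CLAIMED_DIGIT = 7

# Before computing, a naive uniform prior over the ten possible digits is a
# coherent stance; it reflects bounded computation, not irrationality.
prior = 0.1
print(f"Credence before computing: {prior:.2f}")

# Pay the compute cost: iterate the Fibonacci recurrence modulo 10.
a, b = 0, 1
for _ in range(N):
    a, b = b, (a + b) % 10
actual_digit = a  # last decimal digit of F(N)

# The logical fact is now resolved, so the credence collapses to 0 or 1.
posterior = 1.0 if actual_digit == CLAIMED_DIGIT else 0.0
print(f"Last digit of F({N}) is {actual_digit}; "
      f"credence after computing: {posterior:.2f}")
```

Nothing in this toy requires assigning 100% credence to any axiom up front; it only requires that credences about logical claims update consistently as computations get carried out.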

comment by the gears to ascension (lahwran) · 2023-01-01T19:47:09.876Z · LW(p) · GW(p)

y'all seem like you're mostly echoing stuff other people have thought of and then getting frustrated when you get the response "this is known, also you don't discuss important modulating factors, and also this is hard to read". Speaking as a crank myself: you gotta give your reader a lot of charity for thinking you sound crazy when you do in fact sound crazy.

I agree this is all obvious, and that some things, like how symbols relate to one another under the rules of math, deserve extraordinarily high confidence, but geez, we've been there before in AI safety.

comment by IrenicTruth · 2023-01-01T23:54:43.198Z · LW(p) · GW(p)

Hint for those who want to read the text at the link: go to the bottom and click "view source" to get something that is not an SVG.