Yeah, Eliezer really isn't the most efficient communicator, at least in discussions. Being able to predict how someone will interpret your words and adjusting them to elicit the correct interpretation is an impossible skill to perfect, and nearly as difficult to master. Unfortunately, it's the one skill that's utterly critical for at least one party to possess if a conversation is going to go anywhere, and in this case neither party did a good job of efficiently correcting incorrect interpretations. Eliezer did a better job, though, for what it's worth.
It was absolutely wild reading twitter reactions from e/acc people who clearly hadn't watched a second of the discussion say stuff like "Eliezer got destroyed lol".
Wolfram literally partially conceded. If you skip to the closing remarks, he admits he "could be convinced" and that he understands Eliezer's point better now.
I mean, it's depressing that someone as smart as Wolfram, who is also quite knowledgeable about AI, apparently hadn't genuinely considered Eliezer's whole optimization-target problem before. That speaks to how potentially screwed we are. But it was cathartic that Wolfram sorta kinda seemed to "get it" at the end.
I doubt he'll start advocating to STOP AI, though.