Brief Response to Suspended Reason on Parallels Between Skyrms on Signaling and Yudkowsky on Language and Evidence

post by Zack_M_Davis · 2020-04-16T03:44:06.940Z

(This is the continuation of a Twitter conversation, hosted as a Less Wrong post because Less Wrong lets me post more than 280 characters at a time.)

I'm having trouble finding a "juicy" passage to quote due to ... uh, genre differences? Skyrms isn't preaching about the True Nature of Language to a popular audience, but rather explaining game-theory models to an academic audience with occasional remarks about how these ideas apply to human language, animal behavior, cellular systems, &c. The nine-page paper "Evolution of Signalling Systems with Multiple Senders and Receivers" gives a much shorter taste of the technical content, but I don't anticipate you counting it as a precedent on its own, so I'll give my brief summary of some of what I got out of Signals: Evolution, Learning, and Information and how it parallels Yudkowsky's work on language and evidence.

(One could argue that the fact that I feel like I have to do my own summarizing rather than just quoting Skyrms is a hint that what I'm seeing, I got from Skyrms-reinterpreted-in-light-of-Yudkowsky, rather than it being easy to get from Skyrms himself. But I think the insight is mostly in the math, and the relative ease of juicy quoting is just a genre difference.)

Skyrms expounds on sender–receiver games as introduced by David Lewis in 1969. We have two abstract agents, a "sender" and a "receiver". The sender gets to observe some state of Nature and send a "signal" to the receiver, who will then take some action, and get a payoff depending on the action taken and the state of Nature. For simplicity, suppose there are two states ($s_1$, $s_2$), two possible signals ($m_1$, $m_2$), and two possible acts ($a_1$, $a_2$), and the receiver gets a better payoff when it takes action $a_1$ (respectively $a_2$) when reality is in $s_1$ (respectively $s_2$).
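(To pin down the payoff structure in my own notation, which isn't Skyrms's verbatim: in the simplest common-interest version of the game, sender and receiver both score 1 exactly when the act matches the state,

$$u(s_i, a_j) \;=\; \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{otherwise,} \end{cases}$$

so the only way for either party to do well is for the right acts to somehow get matched to the right states.)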

The thing is, the signals are completely opaque—the receiver can distinguish one from the other, but there's no reason for $m_1$ to mean that the state of Nature is $s_1$ rather than $s_2$—the names are totally arbitrary. (I could have said "red" and "blue", or "♥" and "❄" rather than $m_1$ and $m_2$.)

And yet, if you run an evolutionary-game-theory simulation where behavior that gets payoffs is more likely to be copied, our agents learn to form one of the two consistent signaling systems where each of the signals corresponds to one of the states of Nature.
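Here's a minimal sketch (mine, not from Skyrms or the original Twitter thread) of that kind of simulation: the two-state game above, with the simple "urn"-style Roth–Erev reinforcement learning that Skyrms discusses alongside replicator dynamics; all names in the code are my own.

```python
# Two-state Lewis signaling game with Roth–Erev ("urn") reinforcement
# learning -- a sketch, not Skyrms's own code.
import random

N_ROUNDS = 20_000
states, signals, acts = [0, 1], [0, 1], [0, 1]

# The sender keeps an urn of signals for each state; the receiver keeps an
# urn of acts for each signal.  All weights start out equal (uniform).
sender = {s: {m: 1.0 for m in signals} for s in states}
receiver = {m: {a: 1.0 for a in acts} for m in signals}

def draw(urn):
    """Sample a key with probability proportional to its weight."""
    r = random.uniform(0, sum(urn.values()))
    for key, weight in urn.items():
        r -= weight
        if r <= 0:
            return key
    return key  # floating-point edge case

for _ in range(N_ROUNDS):
    state = random.choice(states)        # Nature picks a state
    signal = draw(sender[state])         # sender emits a signal
    act = draw(receiver[signal])         # receiver acts on it
    if act == state:                     # payoff 1 on a match, 0 otherwise
        sender[state][signal] += 1.0     # reinforce what just worked
        receiver[signal][act] += 1.0

# By the end, each state maps (almost) deterministically to one signal and
# each signal to the matching act -- one of the two signaling systems.
for s in states:
    m = max(sender[s], key=sender[s].get)
    a = max(receiver[m], key=receiver[m].get)
    print(f"state {s} -> signal {m} -> act {a}")
```

Run it a few times: you always end up at one of the two consistent signaling systems, but which one depends only on the random seed, because nothing about the signals themselves favors either assignment.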

And that's where language comes from!! Starting with completely opaque, symmetric signals, the forces of evolution engineer meaning from nothing—where the "meaning" of a signal is just how it changes a receiver's probabilities. If you've read the Sequences, this should be a familiar theme: "the true import of a thing is its entanglement with other things".
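(As I recall, Skyrms makes "how it changes a receiver's probabilities" precise with a quantity like the following, here in my own notation: the informational content of a signal $m$ is how far it moves the probabilities of the states, measured as the Kullback–Leibler divergence of the posterior from the prior,

$$I(m) \;=\; \sum_i p(s_i \mid m)\,\log_2\!\frac{p(s_i \mid m)}{p(s_i)}.$$

In a perfected two-state system with equiprobable states, each signal moves its state's probability from 1/2 to 1, so each signal carries exactly one bit.)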

There's a lot more to the book (what happens with more agents?—more signals?—what is deception?—how do new signals evolve? &c.) but I hope this suffices to point out the parallel to Yudkowskian thought.

Skyrms's work appears to post-date Yudkowsky's

I mean, yes, the specific book I'm citing was published in 2010 whereas the Sequences were 2007–2009, but Brian Skyrms has been doing this style of philosophy for quite a while, and I would be pretty surprised if he were an Overcoming Bias reader, so it seems like the kind of causal independence we're looking for when we talk about "priority", even if some of the publication dates overlap or are reversed? (I want to say, Skyrms and Yudkowsky are conditionally independent given Claude Shannon??)
