Posts

Situating LessWrong in contemporary philosophy: An interview with Jon Livengood 2020-07-01T00:37:00.695Z · score: 96 (40 votes)
The Dark Miracle of Optics 2020-06-24T03:09:29.874Z · score: 18 (7 votes)
Conceptual engineering: the revolution in philosophy you've never heard of 2020-06-02T18:30:30.495Z · score: 81 (30 votes)

Comments

Comment by suspended-reason on Signaling: Why People Have Conversations · 2020-07-13T06:33:42.015Z · score: 3 (2 votes) · LW · GW

Can you talk more about the movement of signaling frontiers? I'd be super appreciative of an example if possible. I assume your mention of Goodharting is the idea that as soon as something becomes legible as a reliable signal of a quality, it'll be optimized for and cease being reliable. This is the movement of the signaling frontier, I take it?

I've read through the papers you recommended in a previous comment, which I incorporated into The Dark Miracle of Optics, but I'd love to continue this conversation with you. Is there somewhere—your own post, or elsewhere—I should explore re: this moving frontier, and the constant Goodhart-led inflation of signals?

Comment by suspended-reason on Situating LessWrong in contemporary philosophy: An interview with Jon Livengood · 2020-07-13T03:44:46.280Z · score: 1 (1 votes) · LW · GW

Thank you! I'd be very curious to hear what didn't resonate, since I'm working on the ongoing MetaSequences project, but of course you're very busy, so only if you think it'd be valuable for both of us!

Comment by suspended-reason on Conceptual engineering: the revolution in philosophy you've never heard of · 2020-07-06T19:20:20.157Z · score: 5 (2 votes) · LW · GW

re: meta-sequences, thank you! It's proven a much bigger and more difficult project than I'd naively imagined, insofar as I began realizing that my own readings of the original or secondary texts were not even remotely adequate, and that I needed to have extensive conversations with people closer to the field in order to understand the intellectual context that makes e.g. the subtle differences in Carnapian linguistics vs that of other logical positivists so salient.

The project will likely end up focusing mostly on language and reality (map and territory) for better or worse. I think it's a big enough cross-section of LW's intellectual history, and also enough of a conversation-in-progress in philosophy, that it will hopefully shed light on the larger whole.

As for damning philosophy—I think there are some real self-selection effects. Russell has his quote about young men wanting to think "deep thoughts," which is reflected in Livengood's description of Pitt philosophy; Stove's "What Is Wrong With Our Thoughts?" touches on some of the cognitive biases that might guide individuals to certain views, and increase the likelihood of their having a prominent reception and legacy. (One can understand, for instance, why we might be inclined and incentivized to battle against reductionism, or determinism, or relativism.)

There's a certain view of philosophy which sees the Sophists as basically getting the broad brushstrokes right, and much of the philosophical history that follows them as an attempt to "cope" and argue against their uncomfortable truths—that, e.g., the ethical and ontological relativism the Sophists pushed was too emotionally and morally destructive to Athens, and Plato's defense of the forms of beauty, or the just, was an attempt to re-stabilize or re-structure a world that had come undone. (I understand "relativism" is in some ways the nemesis of LW philosophy, but I believe this is solely because certain late-20th-century relativists took the concept too far, and that a more moderate form is implicit in the LW worldview: there is no such "thing" as "the good" out in the world, e.g.)

This is a very partial narrative of philosophy, like any other, but it does resonate with why, e.g., neoplatonism was so popular in the Christian Dark Ages: its idea of an "essence" to things like the Good, or the Just, or a table, or a human being is very much in accord with Christian theology. And the way Eastern philosophies, given a very different religious background, avoided this reifying ontology seals the deal. Still, I'd like to do quite a bit more research before taking that argument too seriously.

OTOH, I can't help but think of philosophy as akin to an aesthetic or cultural endeavour: it can take years of consumption and knowledge to develop sophisticated taste in jazz, and perhaps philosophy is somewhat the same. Sure, LessWrong has a kind of clarity in its worldview which isn't mirrored in philosophy, but as Stove points out and Livengood seconds, the main problem here is that we still have no way of successfully arguing against bad arguments. The positivists tried with their description of "nonsense" (non-analytic or non-verifiable), but this carving fails to adequately argue against most of what LWers would consider "philosofolly," and at the same time hacks off large quadrants of humanistically meaningful utterances.

So long as people who want to become philosophers see value in "philosofolly," and find readers who see value in it too, what can the more analytic types say? That they do not see the value in it? Its fans will merely say that they do, and that the analytic conception of the world is too narrow, too cold-blooded. I think the real takeaway is that we don't yet have a good enough understanding of language and communication to articulate what is good and productive versus what is not, and to ground a criticism of one school against the other. (Or even to verify, on solid epistemic ground, that such arguments are folly, that they are wrong rather than useful.) This is a big problem for the discipline, as it becomes a pitting of intuitions and tastes against one another.

Comment by suspended-reason on Situating LessWrong in contemporary philosophy: An interview with Jon Livengood · 2020-07-02T22:06:07.762Z · score: 1 (1 votes) · LW · GW

Thank you! I'd seen the poll but not the repo.

Comment by suspended-reason on Simulacra Levels and their Interactions · 2020-06-26T23:01:21.401Z · score: 1 (1 votes) · LW · GW

Re-reading this, it strikes me that an entity communicating purely on the first level is itself a drone, not an agent. It is a slave to the territory, and can only report its condition, even when doing so may harm it. (See Kant's thought experiment about an ax murderer who enters your home and demands to know where your friend is hidden.)

Comment by suspended-reason on Simulacra Levels and their Interactions · 2020-06-21T02:00:06.392Z · score: 1 (1 votes) · LW · GW

Thank ya!

Comment by suspended-reason on Simulacra Levels and their Interactions · 2020-06-19T06:47:25.849Z · score: 1 (1 votes) · LW · GW

Any chance you could point me to some keywords/authors/texts on this topic? I'd love to learn more.

Comment by suspended-reason on Simulacra Levels and their Interactions · 2020-06-17T00:30:06.535Z · score: 27 (10 votes) · LW · GW

My research into animal mimicry, which closely resembles Baudrillardian simulacra, makes me think the slide in language/signaling from the first to the second step is a potentially intractable problem. Once some association in information-space develops a reputation among situated actors, and is recognized as open to manipulation which benefits some of those actors at the cost of others... well, there's no way to shake off the free riders of dishonest signaling.

Let's say that a black and red phenotype on a butterfly develops a reputation among predators as inedible (the butterfly releases toxins on being eaten). Now it's protected, great! What used to be a lose-lose (predator eats toxins, butterfly gets eaten) is transformed to a win-win (predator avoids toxins, butterfly survives) by the power of information: honest signaling benefits everyone. This is "step 1."

Unfortunately, the next step is other, non-toxic butterflies "noticing" (which is to say, evolution exploiting) this statistical association, and protecting themselves by dishonestly signaling the black and red, protected phenotype. This works alright at first, but it's driven by frequency-dependent selection: the more dishonest signalers, the less protection for everyone, toxic or not. This is "step 2."

But the actually toxic butterflies—the original honest signalers—they can't go anywhere. They're just stuck. One might happen to evolve a new phenotype, but that phenotype isn't protected by reputational association, and it's going to take a very long time for the new signal-association to take hold in predators. Once other insects have learned how to replicate the proxy-association or symbol that protected them, they can only wait it out until it's no longer protective.

You may have noticed this is a very similar mechanism to Goodhart's Law; as far as I can tell, the mechanism is the same. It's all about a publicly visible signal that proxies for a hidden quality which outsiders do not have access to. (E.g. the lemon problem in used-car sales, or size/confidence as a proxy for fighting ability in macaque hierarchies.) It can be easier and more reliable to just learn and copy the proxy than to evolve the hidden quality and hope others catch on. (Think how many black and red butterflies got munched before the predators learned.) It's a bleak problem; I haven't been able to make much progress on it, though I'd be super curious to hear if you think I've made errors in my premises, or if there's literature in game theory on this problem.
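The frequency-dependent dynamic above can be sketched as a toy model. This is my own illustrative sketch, not anything from the mimicry literature, and all parameter values (meal value, toxin cost) are hypothetical: a predator deciding whether to attack a black-and-red prey item faces an expected payoff that improves as harmless mimics dilute the honest, toxic signalers, so the signal's protection collapses past a threshold mimic frequency.

```python
# Toy model of frequency-dependent selection in Batesian mimicry.
# All numbers are hypothetical; the point is only the threshold behavior:
# the more dishonest signalers, the less the signal protects anyone.

def attack_payoff(mimic_fraction, meal_value=1.0, toxin_cost=2.0):
    """Predator's expected payoff for attacking a signal-bearer.

    With probability `mimic_fraction` the prey is a harmless mimic
    (predator gains a meal); otherwise it is toxic (predator pays a cost).
    """
    return mimic_fraction * meal_value - (1 - mimic_fraction) * toxin_cost

def signal_protects(mimic_fraction):
    """The signal deters attack only while attacking has negative expected value."""
    return attack_payoff(mimic_fraction) < 0

if __name__ == "__main__":
    for f in [0.0, 0.25, 0.5, 0.75]:
        print(f"mimics={f:.2f}  attack payoff={attack_payoff(f):+.2f}  "
              f"protected={signal_protects(f)}")
```

With these (made-up) payoffs the signal stops protecting once mimics exceed two-thirds of signal-bearers, which is the "step 2" erosion described above: the honest signalers did nothing differently, but the proxy they rely on has been inflated away.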

Comment by suspended-reason on Conceptual engineering: the revolution in philosophy you've never heard of · 2020-06-03T15:09:09.442Z · score: 5 (6 votes) · LW · GW

Yes, I think it all depends whether you find the criticisms of Socratic dialogue, logical positivism, and "tree falls in a forest"-type questions raised on this board since the late 00s compelling.

Comment by suspended-reason on Conceptual engineering: the revolution in philosophy you've never heard of · 2020-06-03T15:07:56.354Z · score: 1 (1 votes) · LW · GW

I agree, and think many conceptual engineering-type philosophers would agree, about natural language. The problem is that when you're applying rigorous analysis to a "naturally" grown structure like "truth" or "knowledge," you run into serious issues. Kevin Scharp's project (e.g.) is just to improve the philosophical terms, not to interfere with mainstream use.

Comment by suspended-reason on Conceptual engineering: the revolution in philosophy you've never heard of · 2020-06-03T05:37:19.267Z · score: 5 (3 votes) · LW · GW

Though I don't know much about it, I take "meaning as use" as a vague proto-version of the more explicit theories of fuzziness, polysemy, and "family resemblance" he'd develop later in his life. In some sense, it merely restates descriptivism; in another less literal sense, it's a tonal subversion of more classical understandings of meaning.

Conceptual engineering takes a very different stance from mere descriptivism; it specifically thinks philosophers ought to "grasp the language by its reins" and carve up words and concepts in more useful ways. "Useful," of course, depends on the fields, but e.g. in metaphysics, the disambiguation would be focused on evading common language traps. In that way, it's a bit like Yudkowsky's "Taboo Your Words."

Thanks for reading!

Comment by suspended-reason on Conceptual engineering: the revolution in philosophy you've never heard of · 2020-06-03T05:31:10.017Z · score: 5 (3 votes) · LW · GW

Yes, so the premise of Chalmers's lecture, and many other texts being published right now in conceptual engineering (a quickly growing field) is to first treat and define "conceptual engineering" using conceptual engineering—a strange ouroboros. Other philosophers are doing more applied work; see Kevin Scharp's version of conceptual engineering in his work on truth, or Sally Haslanger's version of it, "ameliorative analysis." But broadly, Chalmers's tentative definition is fine as a generic-enough umbrella: constructing, analyzing, renovating, etc. Right now, really anything in the ballpark of what "conceptual engineering" intuitively connotes is a fine description.

One place to start, as Cappelen does in his monographs on the subject, is with Nietzsche's Will to Power, so I'll quote that here:

Philosophers … have trusted in concepts as completely as they have mistrusted the senses: they have not stopped to consider that concepts and words are our inheritance from ages in which thinking was very modest and unclear. … What dawns on philosophers last of all: they must no longer accept concepts as a gift, nor merely purify and polish them, but first make and create them, present them and make them convincing. Hitherto one has generally trusted one's concepts as if they were a wonderful dowry from some sort of wonderland: but they are, after all, the inheritance from our most remote, most foolish as well as most intelligent ancestors. …What is needed above all is an absolute skepticism toward all inherited concepts.

Might add to the main post as well for clarity.

EDIT: Also, to be clear, my problem is not that Chalmers attempts to offer a definition. It's that, when presented with an intellectual problem, his first recourse in designing a solution is to consult a dictionary. And to make it worse, the concept he is looking up in the dictionary is a metaphor that a scholar twenty years ago thought was a nice linguistic turn of phrase.

Comment by suspended-reason on In Search of Slack · 2020-06-02T05:04:12.245Z · score: 1 (1 votes) · LW · GW
If there was a single allele that coded for the half of the irreducibly complex eye it could become fixed even though having a half of eye is, strictly speaking, worse than not having an eye at all.

I understand this was a toy example, so I feel bad nitpicking, but I've never quite understood why this example is so popular. While eyeballs are incredibly complex, one must imagine that "half an eyeball" is in fact very advantageous: it can likely sense light and some movement.

Thought the connection of slack to randomness was provocative, though!