Comment by less on Are we in an AI overhang? · 2020-08-03T01:03:35.224Z · LW · GW

Return on investment in the field of AI seems to be sub-linear beyond a certain point. Because it's still the sort of domain that relies on specific breakthroughs, it's dubious that parallel research can be effective. Hence, my guess would be that we don't scale because, at present, we can't.

Comment by less on A Rational Argument · 2020-05-12T23:54:33.814Z · LW · GW

What you need to remember is that all of this applies to probabilistic arguments with probabilistic conclusions - deductive reasoning can, of course, be done backward. However, when evidence is presented as a contribution to a belief, omitting some of it (as you inevitably will when reasoning backward) disconnects the resulting belief from its object. If some of the evidence doesn't contribute, the (probabilistic) belief can't reflect reality. You seem to conceptualize arguments as guaranteeing their conclusion whenever they're valid and their premises are true, but that describes only deductive arguments, not the vast majority.
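To make the point concrete, here is a minimal sketch of odds-form Bayesian updating with purely illustrative, made-up likelihood ratios: multiplying in every piece of evidence gives one posterior, while a "backward-written" argument that presents only the favorable evidence gives a very different one.

```python
from math import prod

# Hypothetical likelihood ratios for five pieces of evidence about a
# hypothesis H (ratio > 1 favors H, ratio < 1 favors ~H).
# These numbers are purely illustrative.
likelihood_ratios = [2.0, 0.25, 3.0, 0.2, 1.5]

prior_odds = 1.0  # 1:1 prior odds on H

# Honest update: multiply in every likelihood ratio.
honest_odds = prior_odds * prod(likelihood_ratios)

# Argument written backward: only the evidence favoring H is presented.
cherry_picked_odds = prior_odds * prod(r for r in likelihood_ratios if r > 1)

print(honest_odds)         # 0.45 -- on net, the evidence disfavors H
print(cherry_picked_odds)  # 9.0  -- the filtered case looks strong
```

The filtered posterior isn't a belief about H at all; it's a belief about which evidence the arguer chose to show.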

Comment by less on Reversed Stupidity Is Not Intelligence · 2020-05-12T01:10:43.097Z · LW · GW

I've just realized that there's a footnote addressing this. My apologies.

Comment by less on Reversed Stupidity Is Not Intelligence · 2020-05-12T01:04:31.532Z · LW · GW

"The conditional probability P(cults|aliens) isn’t less than P(cults|aliens)"

Shouldn't this be "The conditional probability P(cults|~aliens) isn’t less than P(cults|aliens)"? It seems trivial that a probability is not less than itself, and the preceding text seems to propose the modified version included in this comment.

Comment by less on Player vs. Character: A Two-Level Model of Ethics · 2020-05-11T18:32:22.615Z · LW · GW

I haven't quite observed this; even extremely broad patterns of behavior frequently seem to deviate from any effective strategy (where said strategy is built around a reasonable utility function). In the other direction, how would this model be falsified? Retrospective validation might be available (though I personally can't find it), but making predictions with this dichotomy seems ambiguous.

Comment by less on The Simple Truth · 2020-05-11T17:56:26.089Z · LW · GW

This is quite a charming allegory, though my notion of truth was already simple and absolute. It's certainly an argument worth reading.

Comment by less on Negative Feedback and Simulacra · 2020-05-11T14:09:33.705Z · LW · GW

That's very much true. However, it appears to me that the object of frustration is what the gesture communicated (as evidenced by the girlfriend's focus on the gesture specifically). Thus, I find it dubious that the girlfriend's primary concern was any change in her own beliefs about her cooking.

Comment by less on Negative Feedback and Simulacra · 2020-05-06T20:32:40.603Z · LW · GW

I don't quite see how, in the hot sauce example, the girlfriend is "treating [the OP] like he retroactively made her cooking worse." Hot sauce tends to improve the taste of food, so it appears that she perceives his addition of the condiment (increasing his appreciation of the food) as implying that her food isn't of sufficient quality to be palatable on its own.

Comment by less on Religion's Claim to be Non-Disprovable · 2020-05-06T17:28:30.834Z · LW · GW

Each has no grounds to believe in the other's existence, so rationally each ought to say that the other doesn't exist.