Yeah, I later realized that my comment was not really addressing what you were interested in.
I read you as questioning the argument "separation of concerns, therefore, separation of epistemic vs instrumental" -- not questioning the conclusion, which is what I initially responded to.
I think separation-of-concerns just shouldn't be viewed as an argument in itself (i.e., identifying some concerns you can draw a distinction between does not mean you should separate them). That conclusion rests on many other considerations.
Part of my thinking in writing the post was that humans have a relatively high degree of separation between epistemic and instrumental even without special scientific/rationalist memes. So, you can observe the phenomenon, take it as an example of separation-of-concerns, and think about why that may happen without thinking about abandoning evolved strategies.
Sort of like the question "why would an evolved species invent mathematics?" -- why would an evolved species have a concept of truth? (But, I'm somewhat conflating 'having a concept of truth' and 'having beliefs at all, which an outside observer might meaningfully apply a concept of truth to'.)

dagon on Separation of Concerns
Wait. Some thoughts enable actions, which can change reality. Some thoughts may be directly detectable and thereby change reality (say, pausing before answering a question, or viewers watching an fMRI as you're thinking different things). But very few hypothetical and counterfactual thoughts in today's humans actually affect reality in either of these ways.
Are you claiming that someone who understands cooperation and superrationality can change reality by thinking more about it than usual, or just that knowledge increases the search space and selection power over potential actions?
Or to put it another way, if the principle you discovered is useful for more than running the same program with a different seed, shouldn't it be possible to test it by some means other than running the same program with a different seed?
Certainly. But even if the results are not useful and can't be generalized to other situations, it's probably possible to replicate them in a way that's slightly different from running the same program with a different seed. (E.g. you could run the same algorithm on a different environment that was constructed to be the kind of environment that algorithm could solve.) So this wouldn't work as a test to distinguish between useful results and non-useful results.

pattern on Coherent decisions imply consistent utilities
I'd hold that it's the reverse that seems more questionable. If n is a large number then the Law of Large Numbers may be applicable ("the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.").

habryka4 on Comment section from 05/19/2019
Those posts are definitely permissible on LessWrong from the site-rule perspective, though there is a sense in which they are off-topic in that we didn't promote them to the frontpage.
I do think that imbalance of frontpage vs. personal already creates some problems, though I think the distinction is doing a bunch of important work that I don't know how to achieve in other ways.

elo on Open Thread May 2019
There are definitely rationalist positions that have unexamined potential in the post-rational direction, where a good excuse is "I haven't looked yet" (and a bad excuse might be "that's dumb, I don't want to look there"). In that sense there is rationality that is not yet at post-rational investigations.
I had to have some sense and experience of investigating and knowing the world before I turned that machine on itself and started to explore the inner workings of the investigation mechanism.

totallybogus on Comment section from 05/19/2019
The rationality community itself is far from static; it tends to steadily improve over time, even in the sorts of proposals that it tends to favor. If you go browse RationalWiki (a very early example indeed of something that's at least comparable to the modern "rationalist" memeplex) you'll in fact see plenty of content connoting a view of theists as "people who are zealously pushing for false beliefs (and this is bad, really really bad)". Ask around now on LW itself, or even more clearly on SSC, and you'll very likely see a far more nuanced view of theism, one that de-emphasizes the "pushing for false beliefs" side while pointing out the socially-beneficial orientation towards harmony and community building that might perhaps be inherent in theists' way of life.

But such change cannot and will not happen unless current standards are themselves up for debate! One simply cannot afford to reject debate simply on the view that this might make standards "hazy" or "fuzzy", and thus less effective at promoting some desirable goals (including, perhaps, the goal of protecting vulnerable people from very real harm and from a low quality of life more generally). An ineffective standard, as the case of views-of-theism shows, is far more dangerous than one that's temporarily "hazy" or "fuzzy". Preventing all rational debate on the most "sensitive" issues is the very opposite of an effective, truth-promoting policy; it systematically pushes us towards having the wrong sorts of views, and away from having the right ones.
One should also note that it's hard to predict how our current standards are going to change in the future. For instance, at least among rationalists, the more recent view "theism? meh, whatever floats your boat" tends to practically go hand-in-hand with a "post-rationalist" redefinition of "what exactly it is that theists mean by 'God' ". You can see this very explicitly in the popularity of egregores like "Gnon", "Moloch", "Elua" or "Ra", which are arguably indistinguishable, at least within a post-rationalist POV, from the "gods" of classical myths! But such a "twist" would be far beyond what the average RationalWiki contributor would have been able to predict as the consensus view about the issue back in that site's heyday, even one unusually favorable to theists! Clearly, if we retroactively tried to apply the argument "we (RationalWiki/the rationalist community) should be a lot more pro-theist than we are, and we cannot allow this to be debated under any circumstances because that would clearly lead to very bad consequences", we would've been selling the community short.

saidachmiz on Comment section from 05/19/2019
The latter.

benquo on A War of Ants and Grasshoppers
The fact that evolution is adequate to produce ants doesn't really have much bearing on anything here, unless there's also reason to believe that lookahead can't do better than ants, which is clearly absurd. Even if the moon were a rich source of calories (say, by having comparatively unimpeded access to sunlight), evolution just doesn't know how to get there and can't figure it out by iteration. Humans clearly can in principle; it's hard for us, but obviously within our reach as a species, and not by natural selection for flight.

shminux on Does the Higgs-boson exist?
The way you think about the concept of "phlogiston" I think about the concept of "truth". Useful to a point, but then breaking down when pushed.