Posts
Comments
Thanks for that clarification! I think it would be OK to discuss the merits of importing any given page, perhaps in this very LW thread. Separately, there is quite a bit of Wiki content that's now been 'hidden' in the new system as a result of being merged with an existing tag, and the more "in-depth" portions of that content, if considered worthwhile, should probably be moved to newly-created 'wiki-only' pages, so as to reduce confusion among users who only care about the bare "tagging" aspect.
(I have in mind, e.g. the discussion of problematic 'persuasion' technology in the Dark Arts wiki page, or the 'community' conceptual metaphor for computer-mediated communication as discussed in the page on "Groupthink". That kind of content can make sense on a "wiki only" page, not so much in the bare description of a "tag"!)
I can still identify a few pages on the old wiki that seem to have no matching entity in the new "tagging" system, e.g. Adversarial process (a general, widely-used notion wrt. which the rationalist Adversarial collaboration may be a special case -- so it seems like a fairly important thing to have!). Will these pages be imported in the future?
We're at a point where gender studies shouldn't even be considered part of the humanities anymore, I'd say. As you remind us, they're severely in denial about what biology, medicine and psychology have established and their experimental data. They're the intellectual equivalent of anti-vax "activists" (except that the latter have yet to reach the same degree of entryism and grift).
There are other adjacent fields that are similarly problematic, being committed to discredited ideas like Marxist economics, or to what's sometimes naïvely called "post-modernism" (actually a huge misreading of what the original postmodernists were in fact trying to achieve!). All of that stuff is way too toxic and radioactive to even think about seeking it out explicitly.
For what it's worth, your struggles with modeling others via ToM probably had very little to do with your interest in Objectivism, individualism and the like. It seems that many, perhaps most children and teenagers share this trait in the first place; moral development is a slow process, even for those with entirely normal emotions and a normal substrate for affective empathy (i.e. those without psychopathy/ODD/ASPD!).
I do have to caution though that the basic other-awareness that being non-psychopathic gives you also makes you a lot more effective at modeling others' preferences and being able to enter into efficient win-win deals and arrangements with them. Renouncing that other-awareness thus has very real costs, while OTOH the benefits of doing so are quite dubious. After all, even though you're obviously self-interested in some sense, you aren't trying to pursue the same preferences as a psychopath/ASPD would. And when you say "I’m able to constrain others rather heavily" by doing this, you're probably fooling yourself since expectations, implicit demands and social constraints are inherently a two-way street - they empower you to influence others even as they act as constraints on your own behavior!
It’s surprising to me that people are even debating whether mistake- or conflict-theory is the “correct” way of viewing politics. Conflict theory is always true ex ante, because the very definition of politics is the stuff that people might physically fight over, in the real world! You can’t get much more "conflict-theory" than that. Now of course, this is not to say that debate and deliberation might not also become important, and such practices do promote a "mistake-oriented" view of political processes. But that’s a means of de-escalation and creative problem solving, not some sort of proof that conflict is irrelevant to politics. Indeed, this is the whole reason why norms of fairness are taken to be especially important in politics, and in related areas such as law: a "fair" deliberation is generally successful at de-escalating conflict, in a way that a transparently "unfair" one (perhaps due to rampant elitism or over-intellectualism) -- even one that’s less "mistaken" in a broader sense -- might not be.
I'm very sorry that we seem to be going around in circles on this one. In many ways, the whole point of that call to doing "post-rationality" was indeed an attempt to better engage with the sort of people who, as you say, "have epistemology as a dumpstat". It was a call to understand that no, engaging in dark side epistemology does not necessarily make one a werewolf that's just trying to muddy the surface-level issues, that indeed there is a there there. Absent a very carefully laid-out argument about what exactly it is that's being expected of us, I'm never going to accept the prospect that the rationalist community should be apologizing for our incredibly hard work in trying to salvage something workable out of the surface-level craziness that is the rhetoric and arguments that these people ordinarily make. Because, as a matter of fact, calling for that would be the quickest way by far of plunging the community back to the RationalWiki-level knee-jerk reflex of shouting "werewolf, werewolf! Out, out, out, begone from this community!" whenever we see a "dark-side-epistemology" pattern being deployed.
(I also think that this whole concern with "safety" is something that I've addressed already. But of course, in principle, there's no reason why we couldn't simply encompass that into what we mean by a standard/norm being "ineffective" - and I think that I have been explicitly allowing for this with my previous comment.)
The rationality community itself is far from static; it tends to steadily improve over time, even in the sorts of proposals that it tends to favor. If you go browse RationalWiki (a very early example indeed of something that's at least comparable to the modern "rationalist" memeplex) you'll in fact see plenty of content connoting a view of theists as "people who are zealously pushing for false beliefs (and this is bad, really really bad)". Ask around now on LW itself, or even more clearly on SSC, and you'll very likely see a far more nuanced view of theism, that de-emphasizes the "pushing for false beliefs" side while pointing out the socially-beneficial orientation towards harmony and community building that might perhaps be inherent in theists' way of life. But such change cannot and will not happen unless current standards are themselves up for debate! One simply cannot afford to reject debate simply on the view that this might make standards "hazy" or "fuzzy", and thus less effective at promoting some desirable goals (including, perhaps, the goal of protecting vulnerable people from very real harm and from a low quality of life more generally). An ineffective standard, as the case of views-of-theism shows, is far more dangerous than one that's temporarily "hazy" or "fuzzy". Preventing all rational debate on the most "sensitive" issues is the very opposite of an effective, truth-promoting policy; it systematically pushes us towards having the wrong sorts of views, and away from having the right ones.
One should also note that it's hard to predict how our current standards are going to change in the future. For instance, at least among rationalists, the more recent view "theism? meh, whatever floats your boat" tends to practically go hand-in-hand with a "post-rationalist" redefinition of "what exactly it is that theists mean by 'God' ". You can see this very explicitly in the popularity of egregores like "Gnon", "Moloch", "Elua" or "Ra", which are arguably indistinguishable, at least within a post-rationalist POV, from the "gods" of classical myths! But such a "twist" would be far beyond what the average RationalWiki contributor would have been able to predict as the consensus view about the issue back in that site's heyday - even if he was unusually favorable to theists! Clearly, if we retroactively tried to apply the argument "we (RationalWiki/the rationalist community) should be a lot more pro-theist than we are, and we cannot allow this to be debated under any circumstances because that would clearly lead to very bad consequences", we would've been selling the community short.
Okay, so where exactly do you see Zack M. Davis as having expressed claims/viewpoints of the "ought" sort? (i.e. viewpoints that might actually be said to involve a preferred agenda of some kind?) Or are you merely saying that this seems to be what Vanessa's argument implies/relies on, without necessarily agreeing one way or the other?
Some speech acts lower the message length of proposals to attack some groups, or raise the message length of attempts to prevent such attacks.
Rationality is the common interest of many causes; the whole point of it is to lower the message-description-length of proposals that will improve overall utility, while (conversely, and inevitably) raising the message-description-length of other possible proposals that can be expected to worsen it. To be against rationality on such a basis would seem to be quite incoherent. Yes, in rare cases, it might be that this also involves an "attack" on some identified groups, such as theists. But I don't know of a plausible case that theists have been put in physical danger because rationality has now made their distinctive ideas harder to express! (In this as in many other cases, the more rationality, the less religious persecution/conflict we see out there in the real world!) And I have no reason to think that substituting "trans advocacy that makes plausibly-wrong claims about how the real-world (in this case: human psychology) factually works" for "theism" would lead to a different conclusion. Both instances of this claim seem just as unhinged and plausibly self-serving, in a way that's hard not to describe as involving bad faith.
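The "message length" framing above is just Shannon coding length, which a toy sketch can make concrete. Under a shared prior, a proposal with probability p costs about -log2(p) bits to express, so shifting the community's prior toward or away from a class of proposals literally shortens or lengthens their descriptions. (The probabilities below are made up purely for illustration.)

```python
import math

def description_length_bits(p: float) -> float:
    """Shannon coding length: an event with probability p costs -log2(p) bits."""
    return -math.log2(p)

common_proposal = 0.25   # assumed prior for a proposal favored by shared norms
fringe_proposal = 0.001  # assumed prior once norms shift against it

# The favored proposal is cheap to express; the disfavored one costs far more bits.
print(description_length_bits(common_proposal))  # 2.0
print(description_length_bits(fringe_proposal))  # ~9.97
```

The point of the sketch is only that "raising the message length" of bad proposals is an automatic consequence of any shared prior that assigns them low probability, not a separate hostile act.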
You do realize that viewpoints about the state-of-Nature don't have preferred agendas? Hume teaches us that you can't derive an ought from an is. By the same token, you can't refute an is from an ought!
The clearest issue with OP's scenarios is that all the "accusations" portrayed involve cheap talk - thus, they are of no use other than as a pure "sunspot" or coordination mechanism. This is why you want privacy in such a world; there is no real information anyway, so not having "privacy" just makes you more vulnerable! Back in the real world, even the very act of accusing someone may be endowed with enough information that some truthful evidence is actually available to third parties. And this makes it feasible to coordinate around "telling the truth" - though truth-tellers still have to work hard at finding the best feasible signals and screening mechanisms! Yes, "human implies political" - but even useful truth-telling involves playing politics, of a sort. (This is something that the local OB/LW subculture is not always ready to acknowledge, of course. It's why we have a deeper problem with politics here.)
One obvious problem with your predicted "good king" scenario is that a high rank in the "pecking order" inherently attracts bad actors to it - which in turn are precisely the agents who will use that rank to do the most damage, both to other actors within the group and indeed to the organizational goal itself! Separating "pecking order" and "decision-making order" would seem to be the right answer - except for another wrinkle: among the few ways we know of to semi-reliably screen off bad actors, two seem especially important: (1) requiring proof of having reached good decisions recently, especially on non-trivial and long-term matters; and (2) giving a boost to "prophet" types who can provide strong, complex signals of pro-sociality, not just momentarily but over the long term. (And yes, such reliable signals do seem to exist: sound goal analysis that doesn't reduce to mere politicking and is based on a principled assessment of the group - this very blogpost provides a fine example! - impressive art, high-quality work in general, even something as simple as humour, perhaps!) And both of these concerns would seem to push in the other direction, of commingling the "pecking order" and the "decision-making" hierarchy to some extent!
I think it would be interesting to try and design honeypot hierarchies, that are expressly intended for bad actors to have harmless fun in, without dealing extensive damage to others. But a pecking order is not that; being low in the pecking order, especially with someone malicious at the top, is really bad. Thus, arguably, this is a goal that's best pursued by the market system as a whole, not by a small-scale social structure that - like all social structures - comes with inherently "soft" and "pliable" incentives that will never manage to keep the most toxic agents on the shortest leash.
modify their individual utility functions into some compromise utility function, in a mutually verifiable way, or equivalently to jointly construct a successor AI with the same compromise utility function and then hand over control of resources to the successor AI
This is precisely equivalent to Coasean efficiency, FWIW - indeed, correspondence with some "compromise" welfare function is what it means for an outcome to be efficient in this sense. It's definitely the case that humans, and agents more generally, can face obstacles to achieving this, so that they're limited to some constrained-efficient outcome - something that does maximize some welfare function, but only after taking some inevitable constraints into account!
(For instance, if the pricing of some commodity, service or whatever is bounded due to an information problem, so that "cheap" versions of it predominate, then the marginal rates of transformation won't necessarily be equalized across agents. Agent A might put her endowment towards goal X, while agent B will use her own resources to pursue some goal Y. But that's a constraint that could in principle be well-defined - a transaction cost. Put them all together, and you'll understand how these constraints determine what you lose to inefficiency - the "price of anarchy", so to speak.)
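The "constrained-efficient" idea above can be sketched numerically. In this toy example (all numbers and functional forms hypothetical), two agents pool 20 units of resources across goals X and Y; the unconstrained (Coasean) optimum equalizes marginal returns, while a transaction cost that forces over-allocation toward X yields a strictly lower welfare maximum - the gap being the "price of anarchy" in the sense described.

```python
import math

def welfare(x_total: float, y_total: float) -> float:
    # Concave welfare in each goal, so the free optimum splits resources evenly.
    return math.sqrt(x_total) + math.sqrt(y_total)

endowment = 20.0  # combined resources of agents A and B (assumed)

grid = [i / 100 for i in range(0, 2001)]

# Unconstrained optimum: search over every split of the endowment.
best_free = max(welfare(x, endowment - x) for x in grid)

# Constrained optimum: an information problem forces at least 15 units toward X.
best_constrained = max(welfare(x, endowment - x) for x in grid if x >= 15.0)

# The welfare gap is what these constraints cost us - the "price of anarchy".
efficiency_loss = best_free - best_constrained
print(best_free > best_constrained)  # True
```

Both outcomes maximize *some* welfare function given their feasible sets; the constrained one is still "efficient" relative to the transaction cost, exactly as in the comment above.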
Is Clickbait Destroying Our General Intelligence? You Won't BELIEVE What Comes Next!
(Personally, I don't buy it. I think persuasion technology - think PowerPoint et al., but also possibly new varieties of e.g. "viral" political advertising and propaganda, powered by the Internet and social media - has the potential to be rather more dangerous than BuzzFeed-style clickbait content. If only because clickbait is still optimizing for curiosity and intellectual engagement, if maybe in a slightly unconventional way compared to, e.g. 1960s sci-fi.)
$12k per year UBI and socialized healthcare? I'm sorry, but this cannot possibly work - the taxes required to pay for both would be a huge disincentive to individual effort. Make it more like $6k per year plus a mandatory healthcare component (to be placed in an individual HSA, as per the Singaporean model) and it starts to look like a workable idea. Giving everyone money for doing nothing turns out to be really, really expensive, so the less you do it, the better. Who'd have thunk it?
A large reason for the decline in norms around building local communities is that there is a new source of competition for organizational talent: building online communities. ... we don’t know how to make a complete civil society out of online institutions.
I'm not exactly disagreeing with your overall point here, but the very notion of "online communities" is simply nonsensical: a social club or social group is not a "community" in the sense that applies in the physical world. Thus, any goal of "mak[ing] a complete civil society" that operates entirely online is even more nonsensical. The rule of thumb, as always, is to "think globally [about global issues], act locally [leveraging your local social groups]".
Not necessarily; if anything, I was in fact agreeing with you that some portion of people's 'existing acculturation' to middle-class culture is not, strictly speaking, neutral, due to historical path dependence if nothing else. But I still think it may be unproductive and even pointless for people to act overly "touchy" about such subjects. Should, e.g. Quebeckers, and perhaps Francophones in general, feel justified about their "touchy" attitude wrt. the cultural dominance of English?
Even if he was, it’s not obvious that the actually existing acculturation people do to participate in the global cultural middle class is entirely composed of culturally universal middle-class traits, rather than accidental traits attributable to the particular areas where this culture emerged first.
Some such traits undoubtedly exist; for instance, people throughout the world learn English for no other reason than to take part in a successful culture where "middle class" traits are relatively common. But it's not clear that there could be any alternative to English that would not be "attributable to [some] particular area"; for example, Esperanto is culturally European and perhaps even specifically Eastern-European; Lojban was indeed designed to be culturally and areally neutral but this doesn't seem to help its popularity, since the Lojban-speaking community is in fact quite tiny.
...It’s not obvious that “middle class” as a concept is a cultural universal, much less that middle class norms are the same across cultures.
The concept of "middle class" (in the "middle class norms" sense) is increasingly co-evolving with existing cultures in a way that makes it more of a cultural universal. And cultures which don't adopt the middle class concept tend to fail at basic human flourishing, which is as close to a universal as it gets. Marx was well aware of this BTW; he thought socialism would be infeasible unless and until the "middle class norms"-based stage of history (originating from early-modern-age Europe at the latest, not the 20th-century Anglosphere) had fully played out in most of the world, at which point it would be superseded in a quite natural way. See also Scott's post "How the West was won", which is relevant to this question.
There’s a whole chain of schools that teach poor, mostly minority students business social norms, by which they mean white-middle-class norms.
Are "white middle class norms" substantially different from, um, black middle class norms, hispanic middle class norms, asian middle class norms and the like? If they are, the article should perhaps hint at this, and at some relevant evidence. If they aren't, the "white" bit seems pointlessly divisive in a rather obnoxious way. Either way, you're creating quite a bit of "interpretive debt" that the reader will have to pay down via interpretive labor.
Crucial Conversations/Non-Violent Communication/John Gottman’s couples therapy books/How to Talk so Kids Will Listen and Listen so Kids Will Talk are all training for interpretive labor.
We could add the guide to How to Ask Questions the Smart Way to this list. Pithily, the "smart way" to ask a question in a technically-complex setting is the one that minimizes interpretive debt, via adopting "tell culture" norms. There are other best practices that point in a related direction, such as, in a work environment, being very clear about whether you actually understand what's being asked of you, and whether you're taking on a serious commitment to achieve it (something that plenty of people don't seem to realize as being important).
I think a large part of the anger around the concept of trigger warnings is related to interpretive labor.
I think a large part of the anger about trigger warnings - on both sides - is no longer about sensible and effective trigger warnings. Trigger warnings make sense precisely when they shift a large interpretive or emotional burden away from the person who is least equipped to handle it.
I think historically, most of the gain of increasing cooperation has occurred not by opposing tribalism, but rather by channeling it in broadly socially-useful directions. Democrats and Republicans might not kill each other, but that's not because they aren't "tribes" of some sort. Indeed, it's not clear how politics itself could even work absent some degree of 'tribalism' (which should rather be called factionalism, but never mind that) as a basic organizing principle.
Isn't a "researcher's basic income" just another word for, um... tenure? I think the proper solution is to tighten standards for what's considered "good" research (fix the replication crisis) and to increase the status of other sorts of scholarship which aren't highly valued at present (at least in STEM) but are very much needed, such as review articles and in-depth monographs. These things don't have the problem where only an unambiguously "positive" result demonstrates the value of one's scholarship, and reaching positive results is largely a matter of luck.
The author does suggest a system where "academic researchers are rewarded for running high quality studies with these sorts of attributes, regardless of outcome", but, barring highly-selective preregistration, I'm not sure how this can work - other things being equal, an unambiguous outcome does signal a higher-quality study, so researchers will always prefer clear (i.e. "positive") outcomes.
A mathematical definition is what the answer to a philosophical problem looks like. ... An example I particularly like is the definition of a topological space. I don’t know for a fact that this is what people “really meant” when they pondered the nature of “space” ... it doesn’t matter, because the power of this definition shows that it is what they should have meant.
I fear that this particular example might be a bit, um, pointless these days. But perhaps this simply reflects different intuitions as to what "a powerful definition" should ultimately look like.
A resolve arose within me:
I will solve AI alignment, and then I will work to further AGI, bringing that day closer and closer to prevent the loss of another grandson, another nephew, another friend.
Good decision. Also, even if we can't quite solve AI alignment in the fully general case, we're likely close to having AIs which are safe enough in practice to successfully prevent almost all car wrecks.
"If A then B" is logically equivalent to "if not B then not A", which is sometimes much easier to prove. Et cetera, et cetera.
Careful here, because this transformation is enough to make your proof non-constructive! Since we're learning "how to write proofs", it's worthwhile to follow good proof-structuring rules, one of which is to keep things constructive as far as practicable.
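To spell out the asymmetry: one direction of the contrapositive equivalence is constructive, while the other genuinely requires classical logic. A minimal sketch in Lean 4 (theorem names are my own):

```lean
-- (A → B) → (¬B → ¬A) is fully constructive:
theorem contrapose_fwd {A B : Prop} (h : A → B) : ¬B → ¬A :=
  fun hnb ha => hnb (h ha)

-- The converse (¬B → ¬A) → (A → B) needs proof by contradiction,
-- i.e. classical reasoning -- this is the direction used when you
-- prove "if A then B" by proving "if not B then not A".
theorem contrapose_rev {A B : Prop} (h : ¬B → ¬A) : A → B :=
  fun ha => Classical.byContradiction (fun hnb => h hnb ha)
```

So the transformation is harmless for the truth of the theorem, but a proof routed through `contrapose_rev` is non-constructive, which is exactly the caution above.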
...I theoretically ought to answer “I can’t confirm or deny what I was doing last night” because some of my counterfactual selves were hiding fugitive marijuana sellers from the Feds. ...
This seems easy to fix in principle. If, conditioned on the info that's known, or that probabilistically might be known to your asker, your counterfactual selves were especially likely to hide fugitives, you ought to say "I can’t confirm or deny"; otherwise, you can be truthful, and accept the consequence that some negligible fraction of your counterfactual selves are going to be exposed. Of course, the frequency of being truthful depends on how much you'd care about being counterfactually exposed, compared to the counterfactual worlds where providing true info to the asker is beneficial.
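The decision rule just described can be written as a tiny policy sketch (function name and threshold are purely hypothetical): glomarize only when the asker's posterior on the sensitive possibility, given what they already know, is non-negligible.

```python
def respond(p_hiding_given_asker_info: float, threshold: float = 0.01) -> str:
    """Toy glomarization policy for 'what were you doing last night?'.

    p_hiding_given_asker_info: the asker's posterior probability that you
    were hiding a fugitive, conditioned on everything they already know.
    """
    if p_hiding_given_asker_info > threshold:
        # Too many counterfactual selves would be exposed: glomarize.
        return "I can't confirm or deny"
    # Exposure risk is negligible; answer truthfully and accept that a
    # tiny fraction of counterfactual selves get exposed.
    return "truthful answer"

print(respond(0.001))  # truthful answer
print(respond(0.2))    # I can't confirm or deny
```

The right `threshold` then falls out of the trade-off in the last sentence above: how costly counterfactual exposure is versus how valuable truthful information is to the asker.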
I'm quite skeptical that war is a big influence on how we organize our societies, because national defense is quite a small fraction of GDP in modern Western countries (including the U.S.!) And that fraction is even dropping over time. Legibility may matter more, but legibility also correlates with other features of our way of life, like an extensive division of labor/specialization, that most people agree are very important.
Is Duncan/Conor OK with you linking his content here at LW? (There are of course reasons why I think this is a sensible question to ask, but I won't be going into them here.)
Aside from that, the post is very much of the TL;DR variety, and should ideally be broken into a series of self-contained posts, each pointing out some well-defined inferential step. I'm really quite skeptical that productive discussion about the OP is feasible as is. (But I'm of course willing to be proven wrong, if anyone wants to try!)
Also, if enlightenment here refers to the increase of knowledge I don’t see how that necessarily reduces suffering.
This is also what Daniel Ingram heavily implies in Mastering the Core Teachings of the Buddha. The very first "training" Ingram discusses is ethical/practical training; he states pretty much overtly that you should 'set your house in perfect order' before you pursue enlightenment, and keep working on that even as you engage other "trainings" or "teachings".
I think positive psychology has a lot of potential in EA, but AIUI, even the author of "Mastering the Core Teachings of the Buddha" is far from saying that these teachings can trivially eradicate all human suffering. They are quite worthwhile in other ways and should definitely be part of positive psychology in a broader sense, but they're not a silver bullet.
Should I just let them do whatever they want with my corpse?
Why not? I mean, it doesn't really seem like you can currently afford to pay for even life-insurance-funded cryopreservation (given that you report having trouble with basic necessities), so unless that were to change in some way, why not let your surviving friends and social allies make their preferred choice about the matter?
Tsai Wo asked about the three years’ mourning for parents, saying that one year was long enough.
‘If the superior man,’ said he, ‘abstains for three years from the observances of propriety, those observances will be quite lost. If for three years he abstains from music, music will be ruined.’
‘Within a year the old grain is exhausted, and the new grain has sprung up, and, in procuring fire by friction, we go through all the changes of wood for that purpose. After a complete year, the mourning may stop.’
Confucius said, ‘If you were, after a year, to eat good rice, and wear embroidered clothes, would you feel at ease?’ ‘I should,’ replied Wo.
Confucius said, ‘If you can feel at ease, do it. But a superior man, during the whole period of mourning, does not enjoy pleasant food which he may eat, nor derive pleasure from music which he may hear. He also does not feel at ease, if he is comfortably lodged. Therefore he does not do what you propose. But now you feel at ease and may do it.’
Tsai Wo then went out, and Confucius said,
‘This shows Tsai Yu’s lack of humaneness! It is not till a child is three years old that it is allowed to leave the arms of its parents. The three years’ mourning is universally observed throughout the land. And did Tsai Yu not enjoy the three years’ love of his parents?’
Naturally, in the interesting classes (read: science, math, and tech), I was engaged enough to counteract this drowsiness; in the useless classes (read: literature, art, music, foreign language), I was not.
Understood. But even allowing that this school did feature quite a few engaging classes (and again, it's not like OP denies this), is it really fair to praise a school as 'top-class' or 'the best school in city X' when its narrow STEM focus leads it to provide markedly-substandard education in such subjects as literature, art, history and foreign languages? Note that this is not at all an unrealistic or "Platonic" standard, since there are plenty of schools that obviously succeed in engaging their students academically wrt. these subjects (in a relative sense, obviously)! So what exactly is "ridiculous" or hyperbole about such a claim?
What's the average bus factor in the typical EA local community (either at Melbourne or elsewhere)? EA is still a very small and fragile movement, so we're very much at the point where loss of even one locally-knowledgeable person can actually be a very serious setback for the movement.
I am saying that top-class high schools (or, at least, one top-class high school) are not, in fact, “so bad”.
But OP is saying that they are, and you don't really address any of his claims. Seriously, if you can be spending most of your time correcting your own lecturers when they get things wrong, and being otherwise bored to death -- or being denied access to electives because of your scores in unrelated subjects -- that's terrible enough. We wouldn't accept this in any institution which was attempting to provide even minimally-"engaging" academics.
There’s basically no bullying, there are no “jocks” to speak of, and the concept of being made fun of for being smart, or for being a “nerd”, or for being focused on schoolwork, is absurd even to contemplate.
I didn't see any mention of bullying or the like in the OP? (Leaving aside the fact that putting forward "just select the smartest kids w/ a high-stakes entrance exam" as a solution to bullying is preposterous.) OP's complaints have to do with ineffective teachers and institutional red tape, and if top-class high schools are so bad, I struggle to think what the average school must be like!
Well, since I have now created a new account, this is not a problem in any real sense to me. The biggest problem is that the recovery email for my LW1 account was never properly set as such on LW2, which means the otherwise foolproof "ask for a password reset" does not work. It should be easy to spot whether there are any other accounts with the same issue (legacy accounts w/ no password recovery email set, even though there is one in the LW2 "Email" field) with some sort of database query, and maybe even fix them up semi-automatically.
Any progress on this bug that was seemingly leading to a number of LW1 ("legacy") accounts getting locked out of the site? LW developers?
If the 1567 figure is wrong, Aubrey de Grey needs to amend that arXiv paper... Also I wouldn't put too much trust in a result that was reached "by a SAT solver" either, unless either the SAT results came with a proof certificate that can be fed to a formally correct checker, or (in the UNSAT case) the solver itself was formally verified to provide correct and complete results.
Classical rhetoric is old hat these days. The really persuasive Art is making PowerPoint slides!
But I mean, isn't it obvious that damage to the truck alone as a result of the attack would imply a far higher cost than whatever the shotgun was worth? (And yes, I think this is clearly the case even when you consider that the probability of being attacked is quite a bit less than 100%.) I don't think this shows lives being insufficiently valued in the military; I think it just shows the sort of pervasive dysfunction we would expect in any large-scale organization lacking internal mechanisms to ensure accountability and proper response to incentives.
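The expected-cost comparison is trivial to make explicit. With purely hypothetical numbers - the source gives none - even a modest attack probability makes the expected truck damage dwarf the one-time cost of the deterrent:

```python
# All figures below are assumptions for illustration only.
p_attack = 0.05           # assumed probability of an attack on any given run
truck_damage = 50_000.0   # assumed repair/replacement cost if attacked
shotgun_cost = 500.0      # assumed one-time cost of issuing the shotgun

expected_damage = p_attack * truck_damage  # 2500.0
print(expected_damage > shotgun_cost)      # True: deterrent pays for itself
```

And that's before counting the value of the crew's lives, which only strengthens the conclusion - the failure to issue the deterrent is dysfunction, not a considered trade-off.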
blockchain
That's it, Effective Altruism has now officially jumped the shark.
PROTIP: Read this carefully before you take any of this tech seriously. The use cases for anything regarding "blockchain" or "crypto-currency" are extremely limited right now, and not even close to EA's core advantages. If anything, EA proponents should work on relaxing existing regulations around access to mainstream finance platforms (similar to how recent regulatory efforts made "crowdfunding" significantly more accessible to casual investors), so that things like smart markets can be developed out in the open, without needing to resort to exotic and unreliable tech.
Since short-term satiation after orgasm (the 'refractory period') is much less of an issue in women, it's at least reasonable to expect that they might have far less long-term orgasm satiation as well. Which is not to say that loss of relationship energy is not a problem more broadly (the stereotype of "lesbian bed death" indicates as much!), just that we shouldn't necessarily expect orgasms to be the causal link in that case.
Therefore, we should relieve sexual pressure without orgasm and engage in more pair-bonding behavior
Note that this is not exactly a novel claim - many highly-developed sexual practices promote mate-bonding behavior in a broad sense, while discouraging mere ejaculation. Often this is accompanied by a claim that too frequent ejaculation 'drains' sexual and relationship energy, which would mesh quite well with it being a causal factor in satiety!
Crowdfunding approaches as seen e.g. in Kickstarter or Patreon have recently made it a lot easier for artists to capture significant amounts of value for their efforts. (This could still be supplemented though, e.g. via after-the-fact prize awards for especially impressive art.) It's interesting to think of what comparable approaches may be applicable to goods and services that are very much unlike art, and where value may nonetheless be hard to capture efficiently.
FWIW, I didn't necessarily intend the term "dissident" in an especially negative sense, or even with any real negative connotation. I literally mean "someone who disagrees or dissents, one who separates themself from the established religion; a dissenter." It was also meant to highlight the fact that there are clearly a lot of people like that, as a necessary consequence of LW's overall nature as a remarkably developed 'memeplex' (just like most real-world religions and perhaps political ideologies).
Sure, but my claims weren't actually about libertarians and conservatives in general, only the fraction among them who support and oppose social insurance, respectively. It doesn't actually take much formal evidence (that is, evidence that also reaches a high 'admissibility' standard - which 'who I run into in my filter bubble?' might not!) to show that sizeable such groups do exist, or to talk about their ideas.
I think academic math has a problem where it’s more culturally valorized to be really smart than to teach well
I don't think that's the issue exactly. My guess is that academic math has a culture of teaching something quite different from what most applied practitioners actually want. The culture is to focus really hard on how you reliably prove new results, and to get as quickly as possible to the frontier of things that are still a subject of research and aren't quite "done" just yet. Under this POV, focusing on detailed explanations about existing knowledge, even really effective ones, might just be a waste of time and effort that's better spent elsewhere!