Comments
My working theory is that Putin could be worried about some kind of internal threat to himself and his power.
He's betting a lot on his image as a strong, dangerous leader to stay afloat. However, the constant Russian propaganda that keeps up that image was becoming more and more widely recognised and ineffective.
Europe has also been trying to free itself of Russian influence through gas for a while, and would likely have managed it in a few more years. Then it would have been free to be less accepting of his anti-human-rights antics.
Ukraine joining NATO would have made him look extremely weak, and it would have made it easier to make him look weak in the future.
Once his strong image faded, he might have been worried that reforming forces within Russia could manage to oust him from office with an actual election and mass protests if the ball got rolling enough, or he might be worried about someone taking a more direct approach to eliminating him (he has killed enough people to be extremely worried about being murdered, I think).
So this is his extreme move to deny weakness. Better to be seen as the tyrant who's willing to do anything if provoked, than as the formerly strong leader who can be removed from office.
Indeed, including the people who willingly caused it. But profiting from a problem is not the same as fixing it.
Since I wrote my comment I've had lots of chances to prod at people's apathy about acting against imminent horrible doom.
I do believe that a large obstacle is that going "well, maybe I should do something about it, then. Let's actually do that" requires a sudden level of mental effort and responsibility that's... well, not quite as unlikely as oxygen turning into gold, but you shouldn't just expect people to do it (it took me a ridiculous amount of time before I started to).
People are going to require a lot of prodding, or an environment where taking personal responsibility for a collective crisis is the social norm, to get moving. 10 million would count as a lot of prodding, yeah. 100k... eh, I'd guess lots of people would still jump at that, but not many of those who are already paid the same amount or more.
So a calculation like "I can enjoy my life more by doing nothing, lots of other people can try to save the world in my place" might be involved, even if not explicitly. It's a mixture of the Tragedy of the Commons and of Bystander Apathy, two psychological mechanisms with plenty of literature.
She gave me the answer of someone who had recently stopped liking fritos through an act of will. Her answer went something like this: "Just start noticing how greasy they are, and how the grease gets all over your fingers and coats the inside of the bag. Notice that you don't want to eat things soaked in that much grease. Become repulsed by it, and then you won't like them either."
This woman's technique stuck with me. She picked out a very specific property of a thing she wanted to stop enjoying and convinced herself that it repulsed her.
I completely stopped smoking four years ago with the exact same method. It's pretty powerful, I'm definitely making a technique out of this.
I think I've been able to make outstanding progress last year in improving rationality and starting to work on real problems mostly because of megalomaniac beliefs that were somewhat compartmentalised but that I was able to feel at a gut level each time I had to start working.
Lately, as a result of my progress, I've started slowing down, because I was able to come to terms with these megalomaniac beliefs and realise at a gut level that they weren't accurate. A huge chunk of my drive faded, and my predictions about my goals updated on what I felt I could achieve with the drive I was feeling, even though I knew that destroying these beliefs was a sign I had really improved and was learning how hard it actually is to do world-changing stuff...
I'll definitely give this a trial run, trying to chain down those beliefs and pull them out as fuel when I need to.
Hm... I guess "holy madman" is too vague a definition to have a rational debate about? I had interpreted it as "sacrifice everything that won't negatively affect your utility function later on". So the interpretation I imagined was someone who won't leave himself an inch of comfort beyond what's needed to keep the quality of his work constant.
I see slack as leaving yourself enough comfort that you'd be ready to use your free energy in ways you can't see at the moment, so I guess I was automatically assuming a "holy madman" would optimise for outputting the best effort he currently can over the long term, rather than sacrificing some current effort to bet on future chances to improve the future output.
I'd define someone who's leaving this level of slack as someone who's making a serious or full effort, but not a holy madman, though I guess this doesn't mean much.
If I were to try to summarise my thoughts on what would happen in reality if someone were to try these options... I think the slack one would work better in general, both by avoiding pitfalls and by better exploiting your potential for growth.
I still feel there's a lot of danger to oneself in trying to take ideas seriously, though. If you start trying to act like it's your responsibility to solve a problem that's killing people, the moment you lose your grip on your thoughts is the moment you cut yourself badly, at least in my experience.
Recently I've managed to reduce the harm that some recurrent thoughts were doing by focusing on distinguishing between 1) me legitimately wanting A and planning/acting to achieve A, and 2) my worries about not being able to get A, or distress about things currently not being A. I tell myself that 2) doesn't help me get what I want in the least, and that I can still make a full effort toward 1), likely a better one, without paying much attention to 2).
(I'm afraid I've started to slightly rant from this point. I'm leaving it because I still feel it might be useful)
This strategy worked for my gender transition.
I'm not sure how I'd react if I were to try telling myself I shouldn't care/feel bad/worry about people dying because I'm not managing to fix the problem. Even if I KNOW that worrying about people dying hinders my effort to fix the problem, because feeling sick and worried and tired doesn't in any way help with actually working on it, I still don't trust my corrupted hardware not to start running some guilt trip against me for trying to be callous, in a sense that isn't utilitarian at all, by trying not to care/feel bad/worry about something like that.
Also, as a personal anecdote about possible pitfalls: trying to take personal responsibility for a global problem drained my resources in ways I couldn't easily have foreseen. When I got jumped by an unrelated problem about my gender, I found myself without the emotional resources to deal with both stresses at once, so some recurrent thoughts started blaming me for letting a personal problem, one that was in no way as bad as being dead and didn't register at all next to a large number of deaths, screw up my attempt to work on something that was actually relevant. I realised immediately that this was a stupid and unhealthy thing to think, but that didn't do much to stop it, and climbing out of that pit of stress and guilt took a while.
In short, my emotional hardware is stupid and buggy, and it irritates me to no end how it can just go ahead and ignore my attempts at thinking sanely about stuff.
I'm not sure if I'm just particularly bad at this, or if I just have expectations that are too high. An external view would likely tell me that it's ridiculous to expect to be able to go from "lazy and detached" to "saving the world (read: reducing X-risk), while effortlessly holding at bay emotional problems that would trip up most people". I'd surely tell anyone else that. On the other hand, it just feels like a stupid thing not to manage.
(end of the rant)
(in contrast to me; I'm closer to the standard 40 hours)
Can I ask if you have some sort of external force that makes you do these hours? If not, any advice on how to do that?
I'm coming from a really long tradition of not doing any work whatsoever, and so far I'm struggling to meet my current goal of 24 hours (also because the only deadlines are the ones I manage to give myself... and for reasons I guess I have explained above).
Getting to this was a massive improvement, but again, I feel like I'm exceptionally bad at working hard.
I think that the approaches based on being a holy madman greatly underestimate the difficulty of being a value maximiser running on corrupted, basic human hardware.
I'd be extremely skeptical of anyone who claims to have found a way to truly maximise their utility function, even if they claim to have avoided all the obvious pitfalls of burning out and so on.
It would be extremely hard to reconcile "put forth your full effort" with staying rational enough to notice you're burning yourself out, or that you're getting stuck on some suboptimal route because you're not leaving yourself enough slack to notice better opportunities.
The detached academic seems to me an odd way to describe Scott Alexander, who seems to make a really effective effort to spread his values and live his life rationally. Most of the issues he talks about seem pretty practical and relevant to him, even if he often takes an interest in whatever makes him curious and isn't dropping everything to work on AI or to maximise the number of competent people who would work on AI.
I'm currently in a now-nine-months-long attempt to move from detached-lazy-academic to making an extraordinary effort.
So far every attempt to accurately predict how much of a full effort I can make without getting backlash that makes me worse at it in the next period has failed.
Lots of my plans have failed, so going along with plans that required me to make sacrifices, as taking an idea Seriously would require you to do, would have left me at a serious loss.
What worked best and got the most results was keeping a curious attitude toward plans and subjects related to my goal, studying to increase my competence in related areas even when I don't see any immediate way they could help, and monitoring how much "weight" I'm putting on the activities that produce the results I need.
I feel I started out unbelievably bad at working seriously on something, but in nine months I got more results than in a lifetime before (in a broad sense, not just related to my goal), and I feel like I went up a couple of levels.
I try to avoid going toward any state that resembles a "holy madman" for fear of crashing hard, and I notice that what I'm doing already makes me pass as one even to my friends who are most informed on related subjects, when I don't censor myself to look normally modest and uninterested.
I might just be at such a low level in the skill of "actually working" that anything that would work great for a functional adult with a good work ethic is deadly to me.
But I'd strongly advise anyone trying the holy madman path to actively pump for as much "anti-holy-madman-ness" as they can, since making a full effort to maximise for something seems to me the best way to make sure your ambition burns through any defence your naive, optimistic plans think you have put in place to protect your rationality and your mental health.
Cults are bad; becoming a one-man cult is entirely possible and slightly worse.
The review seems pretty balanced and interesting; however, the bit about Bailey struck me as really misguided.
I'll try to explain why. I apologise if at times I come off as angry, but the whole issue of autogynephilia annoys me both at a personal level, as a trans person, and at a professional level, as a psychology graduate and scientist. Alice Dreger seems to have massively botched this part of her work.
In 2006, Dreger decided to investigate the controversy around J. Michael Bailey's book The Man Who Would be Queen. The book is a popularized account of research on transgenderism, including a typology of transsexualism developed by Ray Blanchard. This typology differentiates between homosexual transsexuals, who are very feminine boys who grow up into gay men or straight trans women, and autogynephiles, men who are sexually aroused by imagining themselves as women and become transvestites or lesbian trans women.
Bailey's position is that all transgender people deserve love and respect, and that sexual desire is as good a reason as any to transition. This position is so progressive that it could only cause outrage from self-proclaimed progressives.
Bailey's position caused outrage in nearly every trans woman who read the book or heard the theory, and in a lot of other trans people who felt delegitimised and misrepresented by the implications.
If you are transgender, you are suffering from gender dysphoria and you aren't transitioning for sexual reasons at all, though your sexual health will often improve. You are doing what science shows to be the one thing that resolves the symptoms that are ruining your life and making you miserable.
But then, someone who's not trans comes along and says "no, it's really a sex thing" based on a single paper that presented no evidence whatsoever.
This person, rather than rigorously trying to test the theory with careful research, which is what everyone should do, especially someone who isn't feeling what trans women are feeling and is thus extremely clueless about the subject, since it's really easy to misunderstand a sensation your brain isn't capable of feeling, bases one of the book's two clusters mostly on a single case study of a trans woman whose sex life isn't representative of the average trans woman at all, but who makes for a very vivid, very peculiar account of sexual practices. The rest of the "evidence" is just unstructured observations and interviews.
The book doesn't talk at all about how most trans people, men, women and non-binary, discover they are trans, and doesn't describe their internal experience accurately at all. It instead presents all trans women as being motivated by sex, and half of them by sexual tendencies that psychology depicts as pathological.
And then, somehow, this completely unfounded theory becomes one of the most known theories about trans women.
So, if you are a trans woman, the best case is that your extremely progressive friends and family come to you and say "oh, we didn't know it was just a sex thing, you could have told us you had these very weird sexual tendencies rather than make up all that stuff about how your body and society's way of treating you like a man make you feel horrible, it's fine, we understand and love you anyway".
In the worst and more common case, your friends, family, work associates and so on aren't extremely progressive. They still believe Blanchard's and Bailey's theory about you, though.
And then, when the trans community starts yelling more or less in unison "what the hell?!" at what Bailey wrote in his book, the best response he can come up with is to say that the trans women attacking him are in a narcissistic rage because they are narcissists whose pride has been wounded by the truths he wrote, and that they are autogynephiles in denial.
Bailey attracted the ire of three prominent transgender activists who proceeded to falsely accuse him of a whole slew of crimes and research ethics violations. The three also threatened and harassed anyone who would defend Bailey; this group included mostly a lot of trans women who were grateful for Bailey's work, and Alice Dreger.
I'm not aware whether some transgender women tried to defend the book, but "a lot of transgender women" seems a more accurate description of the book's detractors than of its supporters.
I'm aware that the three activists mentioned went way too far to be justified in any way. But presenting them as the only critics he received is completely wrong, because there was a huge number of wounded people who saw their lives get worse because of the book.
Autogynephilia was made popular as a theory mostly by Bailey's book, and trans-exclusionary radical feminist groups, which are currently doing huge damage to trans rights and healthcare, use it as one of their main arguments to delegitimise trans women and routinely attack trans women with it. Even if Bailey's intentions were good, he failed miserably and produced far more harm than anything else.
I'll try my best to express it, even if I feel it makes me look stupid:
Short version:
Trying to improve how activism is done: figuring out ways, that can reasonably be taught, to maximise the positive impact activists and activist organisations can have in advancing their cause.
Reasoning
Activist organisations that are composed of volunteers and don't hire professionals are limited in what they can learn about their craft. Typically, activists can figure out by trial and error, and by looking at others, what seems to work and what doesn't, but only when there is feedback one can correctly eyeball.
So there is no reason to believe that the efficiency of these activists and organisations can't be improved.
An individual studying communication and organisation wouldn't likely be able to push the frontier of efficiency in marketing or in professional organisations that deal in communication, but even bringing the efficiency of volunteers closer to the current efficiency of professionals would be a huge improvement, able to produce a lot of positive value for the world, if one chooses the right organisations to boost.
Currently I'm focusing on the communication of mainstream causes that deal with x-risk-related issues; the second step would be to use the strategies learned to boost non-mainstream causes that try to address things even more relevant to x-risks (if anyone is already involved in a similar attempt or cause, they are welcome to contact me, I'd love a chance to talk about this and see if cooperation is possible).
First steps I should currently be doing
Essentially, I think I should be developing a system in Excel that would allow one to classify social media posts according to their characteristics and to investigate, at a statistical level, what works and what doesn't.
I've started, but I'm continuing slowly, because it's hard, it's something I'm really not familiar with, and I have an unhealthy habit of flinching away from anything that's hard in a way that makes me feel stupid and out of my league.
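To make the idea concrete, here's a minimal sketch of the kind of analysis I have in mind, written in Python rather than Excel, and with entirely made-up column names (topic, has_image, tone, post_length, shares), so treat it as an illustration of the approach rather than the actual system:

```python
# Sketch only: the CSV file and the column names are hypothetical placeholders.
import pandas as pd

# One row per post, with characteristics classified by hand plus an engagement metric.
posts = pd.read_csv("posts.csv")

# Average engagement for each combination of characteristics, with sample sizes.
summary = (
    posts.groupby(["topic", "has_image", "tone"])["shares"]
         .agg(["mean", "count"])
         .sort_values("mean", ascending=False)
)
print(summary)

# Rough check of whether a numeric characteristic tracks engagement at all.
print(posts[["post_length", "shares"]].corr())
```

Even something this crude would give the feedback loop I mentioned above: which combinations of characteristics actually correlate with engagement, rather than what one can eyeball.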
The second thing I should be doing is an "inadequacy analysis" of the current processes in the organisation I'm in, to see all the low-hanging fruit one could pick to improve performance.
So far I have failed to identify more than two fruits (the statistical analysis is one; the second is how work is distributed to volunteers, which seems an easier fix), because I'm likely overly worried about "shooting my foot off and falling flat on my face in a way that makes me look stupid", so I'm flinching away again.
I did manage to correct some other major procrastination problems and I'm now able to reliably get hours of work done for this project, but so far I have oriented this work in too many directions (like trying to study negotiation tactics for the future, rationality, persuasion strategy, and communication strategies on social media, all at once), so I couldn't really focus a significant enough effort on actually making progress with any one thing.
I'm trying to fix the problem by creating habits and incentives that orient me toward the most important things I should be doing, rather than the most "interesting" things I could be doing that are somehow related to the project.
I might also need to learn more efficient ways to study and practice things; so far I'm still studying as if to pass a written exam.
I'm not 100% sure I understood the first paragraph, could you clarify it for me if I got it wrong?
Essentially, the "efficient-markets-as-high-status-authorities" mindset I was trying to describe seems to me that work as such:
Given a problem A, let's say providing life-saving medicine to the maximum number of people, it assumes that letting agents motivated by profit act freely, unrestricted by regulations or policies, even ones aimed at trying to fix problem A, would provide said medicine to more people than an intentional policy of a government trying to provide it to the maximum number of people.
The market doesn't seem to have a utility function in this model, but any agent in this market (that is able to survive in it) is motivated by a utility function that just wants to maximise profit.
Part of the reason for the assumption that a "free market of agents motivated by profit" should be so good at producing solutions to problem A (saving lives with medicine) is that the "free market" is awesomely good at pricing actions and at finding ways to make profits, because a lot of agents are trying different things as best they can to make a profit, and everything that works gets copied. (If anyone has a roughly related theory and feels I butchered or got wrong the reasoning involved, you are welcome to set it right, I'm genuinely interested.)
My main objection to this is that I fail to see how this is different from asking an unaligned AI that's not superintelligent, but still a lot smarter than you, to get your mother out of a burning building so you'll press the reward button the AI wants you to press.
If I understood your first paragraph correctly, we are both generally skeptical that a market of agents set about to maximise profit would be, on average across many different possible cases, good at generating value that's different from maximising profit.
Thank you for the clarification between unregulated and free.
I was aware of how one wouldn't lead to the other, but I'm now unsure about how many of the people I talked to about this had this distinction in mind.
I saw a lot of arguments for deregulation in the political press that made appeals to the idea of the "free market", so I think I usually assumed that someone arguing for one of these positions would assume that a free market would be an unregulated one and not foresee this obvious problem.
I actually can’t recall seeing anyone make the mistake of treating efficient markets like high-status authorities in a social pecking order.
I've seen often enough, or at least I think I've seen often enough, people treating efficient markets, or just a "free, deregulated market", as some kind of benevolent godly being that is able to fix just about any problem.
I admit that I came from the opposite corner and that I flinched at the first paragraphs of the explanation of efficient markets, but I still feel that a lot of bright people aren't asking questions like:
"Is it more profit-efficient to fix the problem or to just cheat?"
"Can actors get more profit by causing damages worse than the benefits they provide?"
"Is the share of actors that, seeing that the cheaters niche of the market is already filled when they get there, would go on to do okayish profits by trying to genuinely fix the problem able to produce more public value than the damage cheaters produce?"
Asking an unregulated free market to fix a problem in exchange for rewards is like asking an unaligned human intelligence with thousands of brains to do it.
I have seen more blatant examples of this toward the concept of the free market, but a lot of people still seem to interpret the notion of "efficient market" as "and given the wisdom of the efficient market, the economy will improve and produce more value for everyone", and I feel the two views are related, though I might be wrong about how many people keep a clear distinction between the two concepts in their heads.
"If these investments really are bogus and will horribly crush the economy when they collapse, surely someone in the efficient market would have seen it coming" is the mindset I'm trying to describe, though this mindset seem to have a blurry idea of what an efficient market is about.
A journalist thinks that a candidate who talks about ending the War on Drugs isn’t a “serious candidate.” And the newspaper won’t cover that candidate because the newspaper itself wants to look serious… or they think voters won’t be interested because everyone knows that candidate can’t win, or something? Maybe in a US-style system, only contrarians and other people who lack the social skill of getting along with the System are voting for Carol, so Carol is uncool the same way Velcro is uncool and so are all her policies and ideas? I’m not sure exactly what the journalists are thinking subjectively, since I’m not a journalist. But if an existing politician talks about a policy outside of what journalists think is appealing to voters, the journalists think the politician has committed a gaffe, and they write about this sports blunder by the politician, and the actual voters take their cues from that. So no politician talks about things that a journalist believes it would be a blunder for a politician to talk about. The space of what it isn’t a “blunder” for a politician to talk about is conventionally termed the “Overton window.”
I'd agree with Simplicio that voters' "stupidity", as in "ignorance and inability to judge correctly even issues where a scientific consensus has been reached and where it really feels like a good, intuitive idea to do a ten-minute internet search and check what the most accredited institutions are saying on the matter", would interact a lot with the border of the Overton window.
If 90% of voters were able to mock any "stupid" idea suggested, moving out of the Overton window by lowering the quality of the ideas discussed would be plain suicide, while moving up would sometimes be rewarded. Attempts to shift the Overton window downward, such as "hey, let's completely go against what (insert science field) says about (insert important issue), and let's (choose between: prohibit a particular subgroup's therapies even if science says it's really a good idea to provide them / argue against preventing a key crisis that will produce unbelievable damage in the near future / suggest a completely unfounded model of how social issue X works and propose a solution unrelated to any actual finding on the matter, with a track record of failures)", would be harshly punished by the voters, while right now these seem to make up roughly 30% of the politics discussed.
Still, I guess Cecie's theory can explain the source of this "stupidity" with systemic failures that happen in other parts of society, such as information and education, while if we just ascribe it to widespread individual "stupidity" and "sheepishness" we are not less confused, but perhaps more so.
I wonder about that.
I'd expect we'd first see a huge number of newspaper articles and internet websites trying to stir up health scares about "lab meat", and an ungodly amount of memes about "real men eating real meat" or "only real meat has real taste", and then governments would ramp up subsidies to traditional farms because "cultural activities" and whatever. Oh, and a lot of jokes about the synthetic meat that many sci-fi dystopias have as an element.
Old, powerful lobbies don't like the free market regulating itself, at all, and making harmful/obsolete stuff into a cultural/identity/political tribal battle is their first strategy to hinder it.
I'd agree it will eventually become the solution, but I expect it to go slightly worse than the energy transition.
Computer game characters also exhibit ”intentions” and such, but there’s nobody home a lot of the time, unless you’re playing against another person.
Yes, but what we know about the structure of a computer program is greatly different from what we know about the structure of an animal brain. More complex brains seem to share a lot of our own architecture, mammal brains are ridiculously complex, and mammals show a lot of behaviour that isn't purely directed at acquiring food, reproducing and running from predators.
For animals such as frogs and bugs, which seem to be built more like "sensory input goes in, reflex goes out", I'd accept more doubt about whether the "somebody's home" metaphor can be considered true; for mammals and other smarter animals, the doubts are a lot less believable.
It seems cows might be smarter than dogs and highly intelligent, and right now dogs are discussed as possibly having self-recognition, since they pass olfactory tests that require self-recognition (from what I saw, the tests seem a bit more complex than just requiring the dog to have a "this-is-your-urine-mark-for-your-territory.exe" in its brain).
Generally speaking, cows are shown to have long-term social relations with each other, good problem-solving skills, and long-term effects on their emotional range from negative experiences. I haven't been able to find information on cows passing or failing self-recognition tests, visual or not, but from the intelligence they show I'd put them pretty high on moral meaningfulness.
Pigs are notoriously smart and have passed the self-recognition test, as Pattern commented.
Though I think my main point is that even simpler animals, as long as their brain architecture allows for the possibility that our experience of "being home", feeling pain and so on, is in some way generalisable to theirs, would have some scaled-down moral weight.
If I had to lose my higher cognitive functions and be reduced to animal levels of intelligence, I wouldn't really be okay with agreeing now to be subjected to significant pain in exchange for a trivial benefit, on the grounds that I wouldn't be sapient.
Note: this isn't really aimed at turning lesswrongers vegan. There are convincing reasons to be vegan based on the impact on humans, but if you are already trying to be an effective altruist by doing a hard job, I can accept the need to conserve willpower and efficiency, though I guess one could consider whether one could reduce consumption without risks.
I think the issue of the moral weight of animals should be considered independently from the consequences it might hold for one's diet or behaviour, or we're just back to plain rationalisation.
I do agree with everything you said.
Right now, farming animals seems to be a huge risk for zoonoses; if I remember correctly, Covid-19 could have spread from exotic animals being sold in high numbers, and it jumped from humans to minks in farms, spread like wildfire in the packed environment, gathering all sorts of mutations, and then jumped back to humans.
Farming animals is also not sustainable at all with the level of tech, resources and consumption we have now. I'd expect the impact of farming to kill at least some tens of millions of people in a moderately bad global warming scenario; it's already producing humanitarian crises now, and I'm afraid global warming increases extinction risk by making us more likely to botch AGI.
I had just suggested the rule for an entirely hypothetical scenario where we are asked to trade human lives against animal lives, because I was trying to discuss the moral situation "trade animal lives and suffering against human convenience" on its own.
I generally avoid commenting only if I feel I have nothing relevant to say. The only thing that makes me delete a comment mid-writing is realising that I'm writing something that's wrong.
If I notice I made a mistake mid-discussion, or after I've already posted a comment people have read, I admit it, and I've seen that upvotes usually show it's appreciated.
Usually when I comment it's because I have... let's call them "political beliefs", though they are always about concrete things and decisions, that are a lot more "left leaning" than the average position here. As long as I'm confident in my reasons for having such beliefs, I don't seem to worry about my reputation at all, even if I think I'm about to say something "unpopular". As long as I'm willing to explain myself and change my mind if I'm wrong, I think that holding back on expressing such ideas makes the site weaker and betrays its spirit (I do try to keep the discussion as apolitical as possible). I don't comment unpopular opinions if I don't think I can put in the effort to explain them well.
Often commenting on LessWrong is a useful test of my belief in something; the thought of having to justify your disagreement with the "smart kids club" makes me check my reasons for believing things more carefully, and put in some research work.
The reputation system seems to work fine for me, since it gets me to improve. The few times I tried discussing something in PMs, though, it turned out less confrontational and more productive, so I think that's a good approach (and it's much more enjoyable).
I also try to remember to make short comments of agreement, to make our kind cooperate.
I do feel stupid and irritated each time I get downvoted; since I try not to comment on stuff I don't know about or to write shallow statements, I can't help but think "wow, whoever this person was is very biased against my idea", which... likely isn't a mature reaction. I'd like to know why I get downvoted, though.
I'm a hundred times more self-conscious about making posts, though. I feel the stress of having a post come under the scrutiny of the community would make me obsessively edit and quadruple-check everything, so at least four ideas for posts died this way without any good reason (so far I've managed to post just two questions).
Using an anonymous account or something like that wouldn't work at all; I'm not concerned about lesswrongers writing off Emiya as an idiot, I'm afraid of thinking I'm an idiot myself because my ideas got shredded apart, which... is not a way of thinking about this that's in any way good or useful, and it's hindering my progress, so I should really try to break through it.
"Meaningfully conscious" seem a tricky definition, and consciousness a rather slippery word.
Animals clearly aren't sapient, but saying they aren't conscious seems to also sneak in the connotation that there's "nobody home" to feel the pain and the experience, like a philosophical zombie.
It's pretty clear that animals seem to act like there's somebody home, feeling sensations and emotions and having intentions, and what we know about neurology also suggests that.
Given how some animals even pass self-recognition tests, sapience seems the only hard cut-off we can trace between animals and humans.
I'd certainly agree that we should value life based on how "complex" its mental life is (perhaps with a ceiling reached when we hit sapience, which I'd like to introduce for our convenience), and it certainly makes sense that we shouldn't concern ourselves with the wellbeing of stuff that has no mind at all, but it doesn't seem intuitive that the lack of sapience should mean that whatever suffering strikes a mind has zero moral weight.
If we agree that the suffering of a mind has a certain weight, then yeah, the "flesh-eating monster hell" is a quantitatively reduced version of doing the same thing to human beings (measuring in total moral wrongness; some consequences of doing it to humans would be totally absent, and others wouldn't be scaled down at all). We can of course discuss how much the moral wrongness is reduced.
One might argue that it's certainly preferable to slaughter a cow than to have a human die of hunger, or to slaughter a cow (with exactly as much meat as a human, for the convenience of our example) to feed two humans and save them from starvation than to slaughter a human to save two humans, and I'd agree.
I'd even agree that one might have much more urgent things to do for the wellbeing of others than become vegan.
But the fact that we value human lives more than animal lives, because of sapience, doesn't imply that animal lives and suffering have no value whatsoever, and as long as animal lives have some value, there are some trade-offs of animal pain for human convenience we should refuse, or we're not thinking quantitatively about morals.
Deontological rules such as "let's let any number of animals die to save even a single human life" might be considered a temporary placeholder to separate the issue of human lives from the issue of human convenience; I think it might make discussing the issue easier.
Ziz adheres to a moral principle which classifies all life which has even the potential to be sentient as people and believes that all beings with enough of a mind to possess some semblance of selfhood should have the same rights that are afforded to humans. To her, carnism is a literal holocaust, on ongoing and perpetual nightmare of torture, rape, and murder being conducted on a horrifyingly vast scale by a race of flesh eating monsters. If you’ve read Three Worlds Collide, Ziz seems to view most of humanity the way the humans view the babyeaters.
... well.
... I mean, leaving aside the holocaust comparison, which is just asking to have the whole discourse pulled astray, can you really make rational arguments that it's not at least as bad as a quantitatively reduced version of doing the same things to human beings?
Having said this, I'm just puzzled about why she seems to think that the "flesh-eating monster hell" would survive a positive singularity with a human-aligned AI.
I can't really imagine a future with a positive singularity where there just isn't a more convenient way to get meat than actually growing and butchering a live animal. Humans, save perhaps a handful of sadistic psychopaths or a few people really wanting to cling to fringe stuff like "the moral value of the hunt" or barbaric recipes that supposedly improve taste, would choose to have their meat sans suffering if that were an option. You'd have to model people worse than Quirrelmort does, because they wouldn't even be able to role-play an act of being a good person as simple as answering yes to that question.
Or should I interpret her utopia as making all life immortal as well, protecting animals from accidental deaths, from each other, and so on...? I'd say it still seems like a trivial fix, and not something to threaten or sabotage people working on the singularity over.
I honestly felt under a full cognitive assault reading the links to her writing and had to commit to never opening links to her blog again, but I think this is mostly my own issues resonating hard.
Two months ago I kicked open the lid of my gender dysphoria, after having repressed it for some 15 years. I quickly found out that rationality plus a kind of distress that doesn't get better when you can think clearly about it (not to say that rationality can't help make it go away in other ways) can quickly degenerate into paranoia, since you can't seem to push a stop button on whatever search process your mind is attempting in order to solve your pain.
I had overthought what I was feeling a dozen different ways, and the way she seems to model other people's thoughts struck a lot of resonating chords with me about what people I talked to could actually be thinking about me.
I'm going to focus most of the post on the theme of trans people; I think it exemplifies the first of the two main problems behind a "social conservatism" approach.
1. Conservatives generally don't provide good arguments for their worries.
The model they'll present will rarely stand as coherent reasoning or have moving parts you can examine. "Gay marriage -> loss of societal cohesion", with a not-really-explained "loss of validity for traditional marriage" in between.
When there are detailed models, the scientific literature will usually prove most of the concerns wrong. Progressives are usually the ones who seem to be aligned with the science, at least in the recent struggles (if anyone can provide counterexamples they are free to do so; currently the strongest one I can think of is transgender athletes in sport, where both sides are misaligned with the studies: transgender athletes seem to retain an advantage under the current guidelines, but it's not enough to make women's sport a one-sided battle dominated by trans athletes, or to be reasonably certain the advantage is there, and treating all sports as being influenced in the same way is nonsense).
2. The current situation might well be running at full speed toward a crash.
A cautionary approach that says "don't change anything, you don't know what you could break in our society" would be worth considering only if it seemed we were at a really good and stable point.
But broken stuff in our society can carry costs and problems that compound, which seems to be pretty much the situation we are in at the moment, given the number of crises our society is facing, up to extinction risks, so stasis doesn't seem to be an option.
I should add that none of this is hypothetical. Right now, as we speak, young people are being actively encouraged by progressive parents, teachers and activists to ask themselves the question if maybe they’ve been born in the wrong body. And while progressives insist that this can’t possibly do any harm because all sex-related matters are unique in being the only human traits that are fully genetic and on which environment has zero effect, my counterargument is that that’s horseshit.
People are actively being told that some people are born in the wrong body, not encouraged to ask whether they themselves are really cis. I've yet to see an activist write something that would encourage random people to question their gender identity, save in the kind of internet places you go looking for if you are questioning your gender identity, and those generally say things like "if you think these experiences match your own, you might want to keep questioning your gender identity; here are other experiences that aren't related to that, to help you differentiate".
There is very strong evidence that people can't change their sexual orientation or their gender identity on command, such as the staggering, complete failure rate of every sort of therapy that ever tried to achieve that.
My (more or less informed) guess is that a lot of people have a sexual orientation that's more or less on the bisexual spectrum, and so their environment would influence whether they acknowledge and/or act on it. But the environment can only work on cases where the innate preference isn't too marked. If you're bisexual with a 50/50 preference, you'd have a hard time not noticing it. If you have a 90/10 preference, you might believe you are straight (or gay, if the preference is for the same sex) if you grow up in one environment, or notice that 10% if you grow up in another.
Similarly, gender identity also seems to follow a continuous scale. Some people might go either way; since transitioning isn't exactly easy, the environment would likely influence their decisions on the matter. A lot of the people who "pop out" as trans depending on the environment will likely be people who are very much trans, and who notice they are because they are informed on the subject.
I do agree that some harm might come of it. The number of people who transition and then de-transition will rise, and they will bear the social stigma, huge hassles, and other problems associated with transitioning.
But, given that the numbers seem to be hundreds of de-transitioners who would suffer from it and millions of trans people who would greatly reduce their suffering, it seems that for now we should floor it on "more education on transgenderism" and watch what follows. Being trans and not noticing it, or not being able to act on it, is a real harm.
Even if you insist that the number of trans people is kept constant across time and space by some kind of universal law, their suicide rates are still some factor ~18 higher than the rest of society, and you cannot possibly expect me to believe that this has nothing to do with them being constantly told by trans activists that the world hates them and that there is nothing they can do about it (by the way, I don’t hate you.) So from my point of view, progressives are only making impressionable young people more miserable by convincing them that their current reality is intolerable and evil.
As a trans person this is not my experience at all. Trans activists usually provide more support to trans people than depressing content. My worries about how the world would treat me if I were trans decreased as trans activism became more prevalent, because you get a sense that a growing number of people would just accept you.
Transphobia is, by far, the most likely suspect in the suicide rates of trans people who have transitioned. Trans people are still shown to be heavily discriminated against and at a much higher risk of assault or unemployment, and discrimination correlates strongly with these increased suicide rates.
The locker room talk you hear in high school, transphobic media of all kinds, stuff like that will convince you that the current reality is intolerable and evil, even if you don't hear about assaults on and discrimination against trans people in the news. In high school I went straight to "nope, not worth even considering the question" because of these things, and trans visibility was basically zero (and by the way, thank you for expressing support!).
Activism will tell you "it's bad, but it's getting better, there are people and places that will accept you right now, and we can make this even better". So, in this sense, it will lead to more trans people being out, and more trans people realising they're trans, which is not a bad thing, because gender dysphoria and all that follows from it take the lion's share of the suicide rates before transitioning. Even despite transphobia and discrimination, trans people are much more at risk of suicide before transitioning; gender dysphoria is just that bad.
The trans problem wouldn't get worse if society tried to go the stasis route, but it would still mean paying a huge amount of suffering and deaths for no good reason.
Alright, I don't think I have any problem talking a bit about it in private with you; for the time being I'd rather avoid sharing more in public, though.
If anyone else thinks information on this could be helpful, they can contact me, but please only do so if you think it's really relevant that you know.
Pick as small of an internal conflict as possible and try to IDC it.
Whoops. Yeah, starting small definitely sounded like an obviously good idea, in hindsight.
I might have gone ahead and used as my first try figuring out my... gender identity, yeah.
It frigging worked, as far as I can tell. I used this yesterday and ever since I've felt a lot better than I have in days. This was unbelievably helpful to me, and I'm really grateful to you for having written this post.
To clarify my experience, in case someone is considering trying this for something on this scale after reading my comment:
I had gone in to identify stuff I already knew to be related to my gender, and that was sending out signals too confused and too conflicted to make sense of, but that was making me feel worse and worse.
I went in ready to accept anything I found, just wanting to know what it was, and it turned out that what was making me feel bad was misinterpreting/ignoring the stuff I was able to figure out by using IDC.
I have to say this looks like a long overdue change of policy. I seriously hope that this site will finally stop talking all day long about rationality and finally focus on how we can get more paperclips.
Please kindly remove the CAPTCHA though, I'm finding it a slight annoyance.
In contrast, APA is a professional organization of health care providers, writing guidelines for practicing therapists who deal with vulnerable men who come to them for help. The standards are quite different.
The content is quite different also.
Here is a list of things APA considers “harmful”, under the umbrella term of “traditional masculinity”:
Saying that the items listed below are the most likely problems you are going to see in the subgroup of men who end up looking for therapy is not the same as saying that these traits are always harmful.
Similarly, citing a prominent figure who did well for himself while showing high doses of these traits is not good evidence that these traits will favour the average person.
The argument that these traits can be either vices or virtues is technically correct, I guess, but what seems to be happening is that men who are, let's say, "traditionally educated" are often pressured into adopting these traits and keeping them on full time.
So these traits seem to be something society is consistently teaching wrong to a very large number of men, and they also seem to form a pattern of a wrong way to educate men into masculinity, which studies show to be tied to all kinds of problems, especially because a large number of men are taught or pick up a dysfunctional way to express masculinity or these traits.
So the goal, especially for those who read the studies about masculinity, is not to attack masculinity, just to teach people the broad points that a) it's okay not to be masculine if you don't want to be, b) you can be masculine or male and still show traits that certain models of masculinity classify as feminine or weak, such as cooperation, kindness, etc., and c) you can be successful and do well for society using "feminine" traits and strategies too, and this holds true even in environments that are usually seen as relying on "masculine" traits.
There certainly isn't a war on competence in general. If you go check the suggestions for facing the problems that "competitiveness" in the workplace creates, you'd usually find things that aim to reduce strategies for gaming the scoring system of competence assessment, like trying to talk over your competitor or to socially diminish them, or things that try to help people notice that certain traits, like looking for compromises or avoiding conflict, have on average been socialised into women more than into men, and to look for ways to even the field around them. I've never once heard the suggestion that, in a workplace or in research or whatever, you should consider competence less, save when people opposed to these kinds of attempts misreport their contents.
Similarly, there are some groups that are, I guess, just pissed off at men and try to attack masculinity in general, but these groups are a small minority, and likely don't control the APA. But they are the groups that someone attacking the above attempts at changing masculinity would quote the most, trying to lump everyone into the same category.
Jordan Peterson, from the talks I've heard him give, is absolutely guilty of this and does not seem to argue on these subjects in good faith.
Instead of dealing with hard questions, it’s easier to reuse the tricks that worked in the past like saying that any majority-male hierarchy is nefarious and privileged. The APA was quick to point out that 95% of Fortune 500 CEOs are men. So are 80% of Google engineers and 80% of top-grossing actors. Also 99% of HVAC mechanics, but only 2% of dental hygienists. Are those examples of privilege or of competence?
The answers to all of the above are “almost certainly both, it’s complicated”. But this answer doesn’t help you climb the hierarchy of progressive politics. To maintain that those are all examples of pure male privilege, one has to completely deny the role of competence. As people on the left compete to demonstrate their commitment to dismantling privilege, the entire concept of competence gets wholly ignored and the pursuit of it is seen as pathological. I think that this impulse is at the root of the “war on competence”.
People can't just start to look exclusively for competence and ignore prejudice. The argument made is that women are not evaluated on competence fairly, because, unless a woman is clearly superior to a man, she'll often be evaluated as less competent.
Unless you have a clear biological discriminant that says males would dominate a field even if women were given equal chances to compete (like in, say, powerlifting), then any majority-male hierarchy is privileged, because there is no fair reason for having a higher percentage of males higher up in the hierarchy than the base rate of males in the field (which is a separate topic with its own issues).
Even if we assume that this discrimination is partly competence-based: let's say, entirely as an example, that 70% of the most competent individuals among Google engineers are male, and that only 60% of Google engineers are male, which would be the strongest possible case I can think of for a genuine difference in competence existing alongside pure bias. Unless you have strong evidence that men are just better engineers than women for biological reasons, then the difference in competence still has to be determined by some kind of privilege, namely men receiving more training and/or more chances to improve their competence than women. As a possible example of this, I remember hearing that in the STEM sciences a common observation is that males are a lot more likely to have been habituated to tinker with computers and programs and to experiment when young, and that there is also evidence that men are socialised to react to their failures in "traditionally male fields" differently from women, who are more likely to receive negative feedback on their competence for failing. So you have a real disparity in competence, which is being caused by privilege and unfair reasons.
That might be what you were referring to with "it's complicated", I don't know.
But the point is that, even if you still want to value competence over fairness, which makes sense for jobs that have large consequences, you still need to dismantle that privilege as aggressively as you can, because you are currently missing out on a lot of competence that could be developed in women and that would give you more competent individuals.
Now, to be fair, I'm usually pretty selective about the media I consume, so I'm unlikely to stick around an outlet that consistently gets wrong the things I agree with.
But still, I think there is a consistent attempt by a number of people in the conservative and reactionary crowd to frame the war on toxic masculinity as a war on masculinity/competence, and that this attempt has a lot more influence on communication than the subgroups of people on the "other side" who genuinely want to attack masculinity or who get these things wrong, so the post is kind of attacking an issue that's not as relevant as it would seem. That is, referring to the issue of the "war on competence". Our society is not good at effectively teaching competence and how to improve, so attempts to overcome that are pretty relevant.
I do agree that "hierarchy climbing traits" can guide someone to improve, and do think sport can be a good way to learn the "climbing hierarchy traits" in a positive way, but I fear it depends heavily by who you are and where you are.
Soccer in Italy is... pretty much the opposite of what you describe, in my experience. The incentives to just rack up a win are huge compared to other sports here, so you see all kinds of foul play and unsportsmanlike behaviour and such. Acrimony between teams is also pretty high, so people are quick to pounce on the other side's cheating and to justify or forget their own team's cheating.
You can, of course, still find beautiful examples of sportsmanship and players who are renowned for their fair play.
I suspect that sportsmanship and the types of positive competition you describe are more common when the monetary stakes are lower. Unfair competition won't be penalised by fans if its rise is very slow, and when every team displays it, the sport can still be hugely popular while penalising the "fair" teams.
Sadly, I think that briefly stopping the AstraZeneca vaccine (in Italy it was restarted just about today, I think) was a rational decision, made necessary by absolutely rampant stupidity.
I've heard of several people I know getting unreasonably scared about blood clots, and several people had commented on the vaccine being "unsafe" before that. If they hadn't suspended it after those nonsense reports, we'd have faced months of general idiocy about it, with every single case of thrombosis in people who had received the AstraZeneca vaccine becoming a news story. As a result, a lot of people would have resisted vaccination or tried to receive a "safer" vaccine. If things had spiralled out of control, I fear that would have killed a lot more people than a week's worth of AstraZeneca suspension.
Just saying that there was no blood clot problem while continuing vaccinations would have left the anti-vax crowd free to spread doubts.
Stopping the vaccinations for a week, instead, is a commitment to "safety" so extreme that it almost comes across as villainous: "we don't care if 2000 or more people die; if there is even a 1/300,000 chance our vaccine might hurt you, we will stop it and check it out". It leaves no doubt about where the government's priorities lie, safety above all, to the point of seeming evil, and it also made people really angry about the pause, so I'd guess the talk about the government pushing unsafe vaccines onto you took a serious hit.
So yes, it was a decision to kill (in my country) about 2000 people to guard against a panic that might or might not have spread, with the goal of getting even more people vaccinated. It makes me mad that it was necessary, and it was necessary for very stupid reasons, but it was not a stupid decision in itself.
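To make the trade-off explicit, a rough back-of-the-envelope using the figures above (the number of doses administered in the suspended week is my own hypothetical figure, not from any source):

$$\text{expected clot cases} \approx \frac{D}{300{,}000} \approx 5 \quad (D \approx 1.5 \text{ million doses, hypothetical}), \qquad \text{expected deaths from a week's delay} \approx 2000.$$

Even granting the clot risk as real, the two expected harms differ by more than two orders of magnitude, which is exactly why the suspension reads as a deliberate sacrifice rather than a safety measure.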
I'm not sure how much of this reasoning was actually behind the decision and how much was about popularity/liability, but I'm sure that if they hadn't stopped it, things could have turned out worse.
One perspective would be to say that when Ben read the sequences at 13, he adopted a suboptimal paradigm and later moved on from that paradigm. From the perspective of Kegan's framework, adopting that paradigm was however likely very good for Ben's development as it allowed him to go from Kegan 3 to Kegan 4 which is an important step in development. Not everyone moves from Kegan 3 to Kegan 4 and many people need a good university education to make the transition. Making that step at 13 is fast cognitive development.
I think this would be smoother to follow if you defined what Kegan 3 means before using the term in this paragraph. It left me struggling a bit, but I'm pretty tired at the moment, so maybe that's just on me.
It was a very interesting read and I think it's a good frame to look at things.
I'm a bit puzzled that someone who read (if that means "learned") the Sequences would be left at Kegan 4. If I understood correctly and all there is to Kegan 5 is being able to engage with other schools of thought, the Sequences hammer on that point pretty often. I've adopted a personal rule of letting people try to persuade me of their reasons/schools of thought specifically because it was one of the main points of the Sequences.
It has less benign forms. Governments and other bandits look for wealth and take it. Sometimes those bandits are your friends, family and neighbors. A little giving back is a good thing, but in many cultures demands for help and redistribution rapidly approach 100% – life is tough, and your fellow tribe members, or at least family members, are endless pits of need, so any wealth that can be given away must be hidden if you want to remain in good standing. Savings, security and investment in anything but status are all but impossible. There is no hope for prosperity.
I'm not sure how literally I should take this part. Governments and systems seem to be on a trend of taxing poverty more than wealth: past a certain level of wealth you definitely pay less per dollar earned than someone who's poor, even counting official taxes alone.
Poor people do seem to be forced to dissipate any extra wealth they accumulate through social obligations, and for slack and status purchases the claim definitely seems to hold; I'm just puzzled by the government part.
Characters often want change as part of their role. And just as importantly, their role often requires that they can't achieve that change. The tension between craving and deprivation gives birth to the character's dramatic raison d'être. The "wife" can't be as clingy and anxious if the "husband" opens up, so "she" enacts behavior that "she" knows will make "him" close down. "She" can't really choose to change this because "her" thwarted desire for change is part of "her" role.
I'm conflicted about drawing this kind of conclusion from people's behaviour; it opens a door that lets you interpret anything any way you like.
A simpler explanation is that if a "wife" knew how to interact with the husband in a way that got him to open up and talk about what's happening, the conflict would get resolved and you wouldn't be observing a clingy, anxious "wife" anymore.
It's actually hard to communicate openness while you're feeling anxious and clingy, so you'd expect to see a lot of people acting in ways that "discharge" their anxiety rather than fix their problem. You don't need to go as far as postulating that they're acting like this "on purpose".
Even if the "wife" is clearly showing a stereotypical script, it might just be that "she" has no utter clue of what else could be done about "her" situation. "She" could be just assuming that it's the correct way to face the problem, nag the "husband" until it finally works. Yeah, "she" would likely feel nervous and lost if considering the option of going off script and trying something else, and would avoid doing that because of that. But people have been using "punishments" in contexts where they have no hopes to work for countless millennia now, and there's no reason to assume everyone just secretly wants the target to persist in unwanted behaviour so they can punish him some more.
There are other circumstances where simpler explanations are harder to draw, and then you can start to wonder whether this kind of "purpose" is behind someone's actions. Self-sabotage is definitely a real thing sometimes. But I think you're safer going with the simplest explanation first, because "secret reasons" can be used to explain anything in psychology.
Aside from this, the post was really good and insightful. It got me thinking about what roles I'm being pushed into and which roles I'm pushing my friends into.
I often see people I know making assumptions about me being the rational one of the group, such as assuming I'd commit the stereotypical mistakes of someone who follows Hollywood rationality... which I always found weird as hell, because 1) in other contexts it's basically a meme that I'm really genre-savvy (for example, I DM games for the group and people have a habit of worrying about at least the first four or five levels of subversions and recursions in my twists and plots), so I thought they'd realise I'd have seen the obvious clichéd mistake coming, and 2) because I never showed any hint of such behaviour and regularly do the opposite thing. But I guess it makes more sense now.
My role, according to them, is to be incredibly devious and intelligent and do the non-supervillain equivalent of having the hero fall into my devious four-levels-of-deception trap, and then screw up something obvious, like leaving him unattended to free himself, or falling to my own hubris, or [insert clichéd genius mistake x], so that the "balance" between intelligence and heart is reaffirmed.
Given what I’ve actually seen of people’s psychology, if you want anything done about global warming (like building 1000 nuclear power plants and moving on to real problems), then, yes, you should urge people to sign up for Alcor.
I realise this is a 13-year-old post, but please don't dismiss global-scale problems with the first idea that comes to mind and without doing serious research first; your opinion is (to say the least) highly respected on this site, and lots of people would assume you were right about it.
By IPCC data from 2014, electricity and heat production accounts for a mere 35% of global emissions (in total, counting all associated emissions). Even if we convinced everyone to switch to electric cars and transport AND to electric heating, which would not be trivial at all, we'd have addressed a total of 55% of emissions.
https://www.ipcc.ch/site/assets/uploads/2018/02/SYR_AR5_FINAL_full.pdf (page 102)
Also by IPCC data, a nuclear phase-out would add about 7% to the cost of stopping climate change, while each year of delayed action between 2014 and 2030 increases that cost by roughly 3%. Of course, that is due to the low share of nuclear power among energy sources, but it still goes to show that nuclear energy is far from being the linchpin here. (Same link as above, page 41.)
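To make the gap explicit, rough arithmetic on the figures above (the split between transport and heating is my own guess, not an IPCC number):

$$\underbrace{35\%}_{\text{electricity \& heat}} + \underbrace{\sim 20\%}_{\text{transport + heating, once electrified (assumed)}} \approx 55\%, \qquad 100\% - 55\% = 45\% \text{ of emissions left untouched.}$$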
If you could persuade everyone to build 1000 nuclear plants, switch to electric cars and to electric heating, then you'd also be able to solve the problem in a dozen more ways.
I agree with everything else in the post and that there are worse problems than climate change (though my guess is that it would still increase existential risk by at least 5% if botched, mostly because it would increase the likelihood of someone botching AGI).
Can anyone suggest good background reading for understanding the technical language and background knowledge here and, more generally, for decision theory?
I'm puzzled that a really effective piece of activism, a post that managed to get me to commit to giving 10% of my income to charity, is the one saying that activism and spreading the cause isn't an effective way to get things done.
I also think an hour of protesting can buy a cause a lot more political shift than donating the participant's average hourly pay would. Millions of protesters seem to shift the political landscape a lot more than tens of millions of dollars spent on lobbying and ads.
I shouldn’t pretend I’m worried about this for the sake of the poor. I’m worried for me.
At this point I should just run a poll asking whether there's a level of intelligence at which you eventually stop worrying about whether you could ever catch up to the level above yours.
Maybe if you were literally the highest-IQ person in the entire world you would feel good about yourself, but any system where only one person in the world is allowed to feel good about themselves at a time is a bad system.
Well, that's fricking encouraging.
This was amazingly good.
On a side note:
But things that work from a god’s-eye view don’t work from within the system. No individual scientist has an incentive to unilaterally switch to the new statistical technique for her own research, since it would make her research less likely to produce earth-shattering results and since it would just confuse all the other scientists. They just have an incentive to want everybody else to do it, at which point they would follow along. And no individual journal has an incentive to unilaterally switch to early registration and publishing negative results, since it would just mean their results are less interesting than that other journal who only publishes ground-breaking discoveries. From within the system, everyone is following their own incentives and will continue to do so.
You can, as an individual scientist, start praising and giving status to any other scientist who follows stricter guidelines than the average, and comment negatively on any scientist using guidelines laxer than the average and than your own. Eventually the really lax scientists stop having an edge, the slightly stricter ones gain it, and the standards of the field move up.
It doesn't require simultaneous coordination and it's a rule of thumb any scientist can adopt without harming their own fitness too much.
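A toy numerical sketch of that ratchet (entirely my own construction, not from the post; the specific numbers, the "slightly above average" bar and the adjustment rate, are arbitrary assumptions):

```python
import random

# Toy model: each scientist has a "strictness" level in [0, 1]. Each round,
# peers praise whoever is stricter than the current field average, so the
# locally rewarded move is to be slightly above that average. Scientists below
# the bar drift toward it; the bar itself rises as the average rises, so
# standards ratchet up without any coordinated, simultaneous switch.
random.seed(0)
N_SCIENTISTS, N_YEARS, ADJUST_RATE, BAR = 100, 50, 0.2, 0.05
strictness = [random.random() for _ in range(N_SCIENTISTS)]

for _ in range(N_YEARS):
    avg = sum(strictness) / N_SCIENTISTS
    target = min(1.0, avg + BAR)  # "slightly stricter than average" gets the praise
    strictness = [s + ADJUST_RATE * (target - s) if s < target else s
                  for s in strictness]

print(f"field-average strictness after {N_YEARS} rounds: "
      f"{sum(strictness) / N_SCIENTISTS:.2f}")
```

The point of the toy model is just that no round requires coordination: each scientist only responds to where the praise currently sits, and the field average drifts upward anyway.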
This was pretty interesting, and pretty different from the kind of content you usually find on LessWrong.
I often see arguments against "spontaneous, inconvenient moral behaviours", such as worrying about whether to kill the ants infesting your house or giving up meat, which advocate replacing these behaviours with more effective, planned ones; but I don't really think the former behaviours prevent the latter.
Suggesting that someone at home should stop thinking about how to humanely get rid of the ants, work an extra hour instead, and donate the overtime money to an ant charity isn't a feasible model, since most people don't have a job where they can take an hour of spare time whenever they want and convert it into extra money. You'd really be converting "fun time" into "care about the ants time".
Thinking about how to produce charity or moral value more effectively is certainly a good idea: 15 minutes of your time can easily improve the charity you output over the next few years by ten times or more, with no real drawback. But the kind of "moral rigour" that gets invoked when someone wants to contest a behaviour they don't want to adopt is usually the level of rigour that would require dropping your career, working on friendly AI full time, and donating to friendly AI research every material possession not needed to keep your productivity high.
You'll need a Schelling point for your morality if you don't want to donate everything you value to friendly AI research (if you do want to, I certainly won't try to stop you). At some point you have to go "screw it, I'll do this less effective thing instead because I want to", and this Schelling point will likely include a lot of behaviours that are spontaneous things you care about but that are also ineffective.
Also, the way some of these critiques evaluate non-human lives doesn't really make sense. I agree with a "humans > complex animals > simple animals" logic, but there should be some kind of quantitative relation between the wellbeing of the groups. You can argue that you'd save a human over any number of cows, and I guess that can sort of make sense, but there should still be some amount of human pleasure you'd be willing to give up to prevent some amount of animal suffering, or you might as well give up on quantitative morality altogether.
If someone is suggesting a 1:1000 exchange of human pleasure to animal suffering, you can't refuse it by arguing that you'd refuse a 10:10 exchange.
I'd inquire about the subjective vs objective duration of that millisecond. If there are no bad surprises there, I'd pick torture before my mind can start guessing how badly it will hurt.
In torture vs dust specks I'd choose dust specks, provided they weren't allowed to cause ripple effects and were guaranteed to be spread out at only one dust speck per human. Here there is a similar consideration: the pain is packed into a time interval so small that it will be basically inconsequential (absent his guarantee that I won't suffer lasting consequences, I'd fully expect such pain to fry my brain and possibly melt it out of my eyes or something).
I'm basically choosing to screw over my future self of that millisecond to protect all my other future selves.
Both decisions should work fine as long as I'm not approached by a large number of Pascal's muggers; if it risks becoming a trend, I should review my decision theory.
For another human... I'd choose torture for the same reasons. If he chose torture I wouldn't override it; I'd have emotional qualms about overriding a "death" choice, but I likely would.
The math of pain vs the pleasure of being alive would likely say my decisions are wrong, but I think the math stops helping in these limit cases; picking death strikes me as two-boxing with Omega (though I think the math there shows one-boxing is right if you manage to take the backward causal link into account). You'll be pretty glad you chose torture exactly one millisecond later and for the rest of your life, and so will the stranger (unless he was suicidal, but it doesn't seem I'm allowed to know that before picking).
I think the only... slight divergence of the situation from reality is that the bad guys figured out most of this stuff already (though I doubt they did so explicitly).
There has been a lot of talk about how "the political divide has grown harsher than ever" as if this kind of shift just happened because of random cosmic variations.
What actually happened is that, in country after country, the local "bad guy" wannabe grabs the loudest mic he can get and says something absolutely hateful over and over, doing everything he can to poison the well and stop people from talking with each other, getting the two sides to yell insults at each other instead.
Pretty sure Democrats didn't just go "hey, you know that Trump guy? For no real reason, I hate him and his supporters way more than I hated Romney and his supporters, even though I don't perceive his communication as having taken a harsh shift away from democracy and basic human decency. Let's abandon debate and go tell them what ignorant dumb faces they have".
It's a scarily effective trap, and a strong argument in favour of the tactic the post suggests.
And yes, I know this isn't a helpful argument to bring up if you want to propose "look, maybe we'd better just agree to sit down and talk politics civilly" to a "bad guy supporter", but I think it would be great if the discussion somehow also included an agreement to shun the next politician who tries to poison the well, no matter which party he's from.
Creationists lie. Homeopaths lie. Anti-vaxxers lie. This is part of the Great Circle of Life. It is not necessary to call out every lie by a creationist, because the sort of person who is still listening to creationists is not the sort of person who is likely to be moved by call-outs. There is a role for organized action against creationists, like preventing them from getting their opinions taught in schools, but the marginal blog post “debunking” a creationist on something is a waste of time. Everybody who wants to discuss things rationally has already formed a walled garden and locked the creationists outside of it.
This was a very useful insight; I think I had realised it a while ago but hadn't yet made it explicit.
Generally the post is pretty good. I think another key point about how civilisation evolved is that the "smarter than you" guy who goes "hey, I can refuse to play by the rules if I'm effective enough; that way I'll get an even bigger advantage and be unstoppable. I'm just going to blitzkrieg these schmucks and take everything over" regularly gets ganged up on and beaten to the ground by everyone else.
Julius Caesar, Hitler, Napoleon, Genghis Khan, possibly Alexander the Great... the great conquerors who try to impose a new world order seem either to get beaten by an alliance of fed-up people or to be murdered if they don't go down that way, and I think most of them honestly didn't see how their "screw everything, I'll just play to win" scheme could possibly backfire.
It seems humans, when someone goes "screw the rules", tend to answer with "well, screw you too".
"Yeah, I can totally do my master thesis in six months, even it if involves examining a large database of newspaper articles by myself, inventing a methodology to analyse them that translates in quantitative data, invent an observation grid for what people would usually treat as subjective evaluations, mapping and quantifying the business relationships between newspapers and other industries, and generally pushing past the methodology limits that prevented studies I saw so far to actually prove quantitatively that there were in fact a relationship between newspaper relationships with fossil fuels industries and their treatment of climate change in the news, while I know nothing about journalism studies or text analysis. No, my tendency to procrastinate hard or unpleasant things I don't know how to do won't be a problem. Why do you ask?"
It took a bit less than one and a half years.
The more I read about simulated humans, the more I'm convinced that a hard ban on simulating new humans and duplicating existing ones is a key part of what separates dystopias too horrible to even grasp, and hyper-existential failures, from sane futures, at least until we have aligned AI.
He’s even right that on utilitarian grounds, it’s hard to argue with an em era where everyone is really happy working eighteen hours a day for their entire lives because we selected for people who feel that way. But at some point, can we make the Lovecraftian argument of “I know my values are provincial and arbitrary, but they’re my provincial arbitrary values and I will make any sacrifice of blood or tears necessary to defend them, even unto the gates of Hell?”
I also think that if we don't, we run fast into what we might call... Cenobite Existential Failures? (Cenobites are the Hellraiser demons who see excruciating pain as the best thing in the universe.)
Or into a lot of very tiny people really happy about hydrogen atoms (or about working overtime).
I'd also strongly argue for making this stand before we select untold billions of people who don't care whether they live or die, and before they outcompete everyone who does care out of business.
Now take it even further, and imagine this is what’s happened everywhere. There are no humans left; it isn’t economically efficient to continue having humans. Algorithm-run banks lend money to algorithm-run companies that produce goods for other algorithm-run companies and so on ad infinitum. Such a masturbatory economy would have all the signs of economic growth we have today. It could build itself new mines to create raw materials, construct new roads and railways to transport them, build huge factories to manufacture them into robots, then sell the robots to whatever companies need more robot workers. It might even eventually invent space travel to reach new worlds full of raw materials. Maybe it would develop powerful militaries to conquer alien worlds and steal their technological secrets that could increase efficiency. It would be vast, incredibly efficient, and utterly pointless. The real-life incarnation of those strategy games where you mine Resources to build new Weapons to conquer new Territories from which you mine more Resources and so on forever.
Economic growth has stopped correlating with nearly every measure of population wellbeing in first-world nations. It seems we're already more than halfway there.
I'd think that some of these alien civilisations would have figured it out in time: implanted everyone with neural chips that override any world-ending decision, kept technological discoveries above a certain level available only to a small fraction of the population or in the hands of an aligned AI, or something.
An aligned AI definitely seems able to handle a problem of this magnitude, and we'd likely either get that or botch it before reaching the technological level where any lunatic can blow up the planet.
How many of the experts in this survey are victims of the same problem? “Do you believe powerful AI is coming soon?” “Yeah.” “Do you believe it could be really dangerous?” “Yeah.” “Then shouldn’t you worry about this?” “Hey, what? Nobody does that! That would be a lot of work and make me look really weird!”
It does seem to be the default response of groups of humans to this kind of crisis. People died in burning restaurants because nobody else got up to run.
"Why should I, an expert in this field, react to the existential risk I acknowledge as a chance as if I were urgently worried, if all the other experts I know are just continuing with their research as always and they know what I know? It's clear that existential risk is no good reason to abandon routine".
As in the Asch conformity experiment, where a single other dissenter was enough to break compliance with the consensus, perhaps the example of even one person who acts coherently with the belief that the threat is serious, and who doesn't come across as weird, could break some of this apathy born of pluralistic ignorance. Such examples have been one of the main factors pushing me to align my efforts with my beliefs about what threatens mankind's future, twice so far.
This was a remarkably successful attempt to summarise the whole issue in one post, well done.
On a side note, I think that getting clever people to put themselves in the shoes of a cold, amoral AI can be an effective way to persuade them of the danger. "What would you do if some idiot tried to make you cure cancer, but you had near omnipotence and didn't care one bit whether humans lived or died?" It makes people go from using their intelligence to argue why containment would work to using it to think about how containment could fail.
When I first met the subject in the Sequences I tried asking myself what I would do as an unaligned AI. Most of my hopes for containment died within half an hour or so.
A common complaint about immigration is "they're taking our jobs." For a group whose primary asset is their ability to do labor, this seems pretty fair to characterize as "our resources are being appropriated," and it's easy to notice that many billionaires who are made better off by mass immigration support decreasing regulatory barriers to immigration.
[Of course, open borders seem like a good idea to economists, and billionaires are more likely to have economist-approved views on economic policy, so I don't think this is just a 'self-interest' story; I just think it's worth noticing that the same "disenfranchised group having their resources appropriated" story does in fact go through for those groups.]
Sorry, I guess I could have explained this part more clearly. I agree that groups like rural Brits and American Reds often believe in a narrative about some external power attacking and erasing them (the evil EU ruling council, billionaires engaged in philanthropy, etc.). My point was that the difference in the sympathy these groups receive from a third party is best explained:
1) by whether that third party believes the external power exists. Most people criticising these groups believe in China's human rights violations but not in evil billionaires controlling immigration policy.
2) by the strategy these groups adopt to defend their culture. If Tibetans started harassing refugees from a war-torn country, I would sympathise with them less than I do with their current attempts to defend their traditions by simply practicing them.
I feel like this is missing the core point of the article, which is that the "colonizer / colonized" narrative misses the transition from the 'traditional cultures' of Britain and America to universal culture. Why did universalism win in Britain and America? If it was because those places were torn apart in order to exploit the hell out of them, then the flavor of this analysis changes significantly.
First, I think a lot of the universal culture actually comes straight from the "traditional cultures" of Britain and America; it's just harder to see it as something non-universal when we grew up inside it. I often feel a cultural barrier getting in the way when I discuss certain subjects with Americans on this site, and I'm from Italy, so still within Western culture myself. It is, however, a complex subject, and debating exactly what belongs where would be pretty hard.
I also think it's not clear what counts as the "traditional cultures" of these places. If we're talking about their cultural traditions from before industrialisation... then those were changed there too, to better fit industrialisation's requirements. Other Western countries started industrialising as fast as they could because the first ones to do it were starting to gain military and economic supremacy over them.
Non-Western countries weren't fast enough to adapt, or didn't have enough weapons to stave off those who had, so they were colonised, invaded and so on until they either managed to build up an industry and a military or were torn apart and exploited.
I'm of course generalising a bit, but I think that 90% of this "culture war" was actually a war of might. Industrialisation gives you an edge that everyone wants, so everyone either tries to copy it or is invaded and exploited until they do it anyway.
If nations didn't have to compete for domination and freedom, I think a lot of them would have picked only some bits of the "universal culture" rather than the whole package, whether out of inertia or because some bits you can simply leave out and your population will be better off. (I guess deciding whether that would have been better or worse would require tallying a lot of deaths and changes in quality of life. A lot of the costs will hit us in the face in the coming years if they aren't prevented, so the question would be left open anyway.)
The bits these nations would usually pick would be the "universal culture" that fits the post's description, since they'd be the practices that win over others in a fair cultural fight. But the main driver of these norms' expansion was the increased military and economic effectiveness that came with industrialisation, so we can't really call Coca-Cola a universal winner: we have no idea how things would have gone in a purely cultural fight, because what we mostly saw was a military and economic one.
Human rights and democracy do seem like cultural universal winners. I gave it some thought and realised that yes, a lot of places seem to have people who buy the whole "not being exploited by our local feudal overlords" idea once they hear the concept. Unfortunately, Coca-Cola itself and other... competitive spreaders had a few words to say against it in a lot of those places.
Other cultural practices have also expanded peacefully within Western countries, but they're usually exported to other countries as part of the whole industrialisation package, so it would be hard to call them universal winners.
There's also the whole subject of mass media, which I think is pretty effective at overwhelming any kind of culture with new content. I do hope Nazism and Fascism aren't universal winners, and that they managed to take over Germany and Italy just because they had found a way to be louder than everyone else for a while. The same thing can happen with McDonald's or action movies or whatever.
This is a really tangled subject, so I guess I was a fair bit harsh in my comment, but missing the points I mentioned made for a rather biased way of looking at it.
To summarise, I guess I understood the main idea of the article, and I'm interested in how exactly reality could be shaped to maximise the benefits of "true cultural universal winners" without erasing the parts of local culture that don't make people miserable.
But I think the post didn't manage to carve reality at its joints and conflated different kinds of victories.
Edit: I've changed my original post a bit because I couldn't tell if it came across as aggressive and I was starting to really obsess about it.
I'm... kind of puzzled by the questions and the situation described in this post. It seems to be missing a couple of points that are a relevant part of the whole picture. These points are also extremely relevant to the motivations of those who treat "local conservatives" and "foreign populations trying to defend their cultures" differently, and to most reasoned objections to the spread of the "universal ideology" (I've also met plenty of stupid objections that argue against it for worse reasons). My own position is to support the spread of some elements of this "universal ideology" and to oppose the spread of others.
- The clear distinction you can draw between Australian Aborigines, Tibetans and Native Americans on one side, and rural Brits and "American rednecks" on the other, is that for the first group a foreign culture with overwhelming power has come to their home and is erasing both their culture and their property/territory/wellbeing in general. Their cultural erasure also goes hand in hand with exploitation by the very power attempting to erase their culture. For the second group... not at all. Rural Brits and American rednecks are certainly not seeing their resources appropriated by the powers behind the immigrants. It is only their culture that's under "siege", and it's a different kind of siege, involving no laws or planned attempts to erase their cultural ways; the attack comes from mere exposure to different ideas and customs. So yes, it makes perfect sense to sympathise with Tibetans trying to shield what's left of their culture and not with Brits trying to do the same, especially since the attempts that elicit these different reactions are usually very different in nature. It would take a special kind of fanatic to go bother Brits trying to have a traditional warm pint of beer with shepherd's pie in their pubs (I apologise to any Brits reading this for stereotyping and not bothering to look up a cherished British tradition) because "sushi is better, you uncultured simpletons". Usually you contest Brits for trying to defend their culture in ways that make other people miserable or will break a lot of stuff, such as banning immigration or exiting the EU. If Tibetans started throwing rocks and making racist signs against poor North Korean immigrants escaping persecution by a dictatorship and trying to make a new life for themselves, support would evaporate fast.
- I think the idea of a Western Culture that needs defending from barbarism is often actually talking about universal rights: a reasoned attempt to work out which rights every human should be granted. (There is some opposition to Western Culture choosing universal rights for everyone, but most objections to universal rights I've heard seem to melt under the basic kind of pragmatism required to let Zeno of Elea not starve before reaching his kitchen; it just takes asking concrete things like "okay, then are you fine with being eye-gouged if the other guy's culture insists it's really necessary?".) The current set of universal rights fits the Noahide Laws example in spades: they're awesomely tolerant of everything that doesn't involve oppressing people or atrocities, and, applied correctly, they would take a lot of the fanaticism out of the fight over transgender bathrooms. People don't get that pissed off about the bathrooms themselves; people get really pissed off because of a myriad of bigger and smaller things that oppress category x, and then every fight for category x's rights becomes a crusade for some of them. It would be really hard to get that heated about the bathroom issue by itself, I think. Sadly, Coca-Cola seems to be more competitive than universal rights when things are left to take their course, so we might want to give universal rights a hand there.
- I'd also point out that a lot of the "fair fights" that universal culture and colonialism picked were more about bombing the other guys to hell and/or setting up a corrupt, bloodthirsty local dictatorship/protectorate/whatever from which to "buy" their resources for pennies than about seeing who would win between the Dreamtime and sushi restaurants in a free-market fight. It's a bit weird to say that Western/universal culture wins fair fights when it has mostly been exported through superior weaponry. Most of the places where universal culture is replacing the local one were first torn apart to exploit the hell out of them. If this war of cultures was an experiment, I'd say that was a hell of a confounder.
I guess what I'm trying to say is that, if you take a step back and look at the whole picture, the situation goes back to being... not so complicated, at least as far as the goals we can pick. We can go big in support of universal rights and of attempts to preserve individual cultures that don't involve deeply problematic strategies. We can also go big against large countries invading and exploiting the hell out of small ones and erasing their cultures as they do so. Then we can see what problems are actually left after this approach and deal with them.
I'd strongly suggest that anyone looking into these kinds of issues explore the current research on how wealth distribution affects wellbeing. I recommend The Spirit Level by Wilkinson and Pickett as a starting point; it's the single most relevant book I read in my whole psychology curriculum.
Countries are rarely better off due to economic growth and GDP alone; what matters most is how the increased wealth is distributed, and economic growth is getting more and more decoupled from people's finances.
A separate problem is that people seem pretty bad at finding an anchor against which to evaluate their happiness. I'd be pretty skeptical of any program that tried to improve quality of life and used people's subjective reports of happiness as its measurement.
A few years later, another Dutch trader comes to the little kingdom. Everyone asks if he is there to buy tulips, and he says no, the Netherlands’ tulip bubble has long since collapsed, and the price is down to a guilder or two. The people of the kingdom are very surprised to hear that, since the price of their own tulips has never stopped going up, and is now in the range of tens of thousands of guilders. Nevertheless, they are glad that, however high tulip prices may be for them, they know the government is always there to help. Sure, the roads are falling apart and the army is going hungry for lack of rations, but at least everyone who wants to marry is able to do so.
A kingdom with no preconceptions about the state's legitimate role in the economy could have just started some tulip farms and handed the tulips to the poor, free of charge. I guess that would lower the price tulips could reach, but given the damage bubbles do to a country's economy, I see that as a plus.
There's also a harsh lesson to be learned about allowing speculation on goods that are "basic necessities".
Higher education is in a bubble much like the old tulip bubble. In the past forty years, the price of college has dectupled (quadrupled when adjusting for inflation). It used to be easy to pay for college with a summer job; now it is impossible. At the same time, the unemployment rate of people without college degrees is twice that of people who have them. Things are clearly very bad and Senator Sanders is right to be concerned.
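Unpacking the quoted figures with rough arithmetic (the ~40-year window comes from the quote itself):

$$\frac{\times 10 \ \text{(nominal)}}{\times 4 \ \text{(inflation-adjusted)}} = \times 2.5 \ \text{cumulative inflation}, \qquad 2.5^{1/40} \approx 1.023 \;\Rightarrow\; \text{roughly } 2.3\% \text{ per year,}$$

which is an ordinary inflation rate, so the real quadrupling is the part that actually needs explaining.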
The price of education has quadrupled, not the costs. Just fund good public universities and call it a day. Nations that manage to spread education do so by spreading good "cheap" education.
If, for reasons I can't imagine, a degree in Medieval History has a production cost of $100,000, then make a good public online university and call that a day.
I think that if education was deemed a basic necessity good, with governments supplying it at fixed prices for those who can't afford it, the world would be way better off.
There would be some ifs and hows about who qualifies for it, but it would definitely be an improvement.
People also stocked up with disinfectants. (I don't remember whether authorities mentioned these, or it was just common sense.) This seemed more tricky, because making disinfectants at home... well, you could burn some strong alcohol, you wouldn't even have to worry about toxicity if you do not intend to drink it;
This one they handled better: I'm 99% sure the government started handing out instructions on how to make disinfectant at home the minute people started trying to do it on their own... I guess it fits my hunch that "prevent flashy, showy bad consequences" is the decision process, since people giving themselves chemical burns of whatever degree would make the news fast.
Which again makes me think that if there is a risk of panic and shortage, you might want it to happen sooner rather than later, so that the market has enough time to adapt before the worst happens.
I think I disagree on this one. The market starts producing as soon as it suspects there might be panic and a shortage; I don't think shops actually need to run out for industries to get the message. But once shortages start happening, people go crazy and stockpile even more, so you get one random family owning more disinfectant than they'll consume in the next three years and a lot of families with none. Then the behaviour spreads further, people worry about what might run out next, and so on.
As a government, you could even contribute to the shortage, by buying tons of stuff... and later redistributing it to the places of greatest need: sell it to hospitals for the original price, thus shielding them from shortage and price hikes.
I guess any politician would say "no" at the mere thought of the backlash in public support. The opposition party can jump on the "Soviet requisitions" bandwagon and pitch the government as an adversary of the people, fighting them over the very products they need to survive.
Even leaving political games aside... I think it would have backfired. Governments back then had the difficult task of convincing people to grant them more authority over their lives and to follow restrictions; "health dictatorship" has already become a rallying cry for protests. Stuff like this would have made people revolt from day one.
This was a more embarrassing question than I was expecting... well, here it goes.
Who the hell do you think we are?
Do the impossible, break the unbreakable
Row, row, fight the power!
Kick reason to the curb and do the impossible.
These three are straight from Gurren Lagann. I often use them as mental rallying cries when I feel at a loss for hope or about to give up on something.
The first is a vague "don't give up and persevere" mostly for getting grit in the moment.
The second is more for my long-term plans that are still a long way out of reach (I know it's a misquote, but I like it better expressed this way).
The third I've always interpreted as "kick common sense to the curb and think of a way to do the impossible"; I use it in situations where I see no way of winning and have to make myself find one anyway.
Shut up and just do the impossible.
From HPMoR and LessWrong; for when I have to solve something I'm certain I can't, and the price of failure is high. Most of the times I've used it, I managed to at least make the situation better.
I can do anything if I study hard enough!
I can do anything if I think hard enough!
From HPMoR, courtesy of General Sunshine. The second works more or less like the one above; the first is for pushing through plans or reaching goals that would require me to study for a long time.
Are you making an extraordinary effort?
Are you doing everything you can?
Less pleasant than the ones above. I use these two to both step up my game and take ideas seriously.
The first one is from LessWrong, but I had already been using the second for a while, so they got paired up. I got the second from reading something about Greta Thunberg: it struck me that she was the first "ordinary" person I'd seen described in detail who was actually taking the climate change issue seriously and behaving as if she believed that much was at risk.
(To clarify: I don't think she's the only one doing so, but her behaviour struck me as impressively more coherent than what one would usually expect, and it left an impression.)
They do put a lot of stress on me, though, since they require feeling as if everything depends solely on my ability to break my limits, give 100% effort, and then somehow reach an order of magnitude past that. I'm using them only on my current life goal. So far my results have ramped up.
Are you trying to argue you are right or to understand where the truth is?
This is my "rationality mindset, on!" mantra, it seems to be pretty effective on stopping certain bias when they activate and make me look at a question with the right mentality. I've often changed my mind and ideas when I used it, so I think it works pretty well.
Remember you can choose not to care.
Remember you can regulate your emotions.
I use these when I'm feeling bad for something I can't change or that I don't think I should be feeling bad about, or when I'm in the grip of anger or some other emotional state that's hindering me.
I'm not 100% sure it's the healthiest thing I could say to myself, but it did get me through a light depression when I was a teenager and stabilised my mood a lot, so it stuck.
Thanks, I'll check them out as well!
So, I remain firmly convinced that discouraging people from wearing masks caused deaths. In short term, by making the pandemic spread faster. In long term, by undermining public trusts in experts.
Maybe, I guess... I'm starting to wonder whether my memories of the "no mask" period match how most people experienced it.
The way I experienced it, in Italy masks quickly started selling out. People began making masks at home here too; I don't trust my memory 100%, but I think that within a week or so most people I saw outside were wearing one.
The narrative I remember experts pushing was "if you aren't elderly or otherwise at risk, you don't need one", so I think the attempt was to make sure the few masks available went to the most vulnerable citizens... but it might have been that they were just worried about panic and fistfights breaking out in pharmacies or something, consequences that would "look bad" or weaken the perception of how well the government was handling things. Then again, if people had started to panic, it's hard to tell in advance how serious the consequences would have been.
I think the shift toward "wear a mask" here was done gradually and quickly as more masks were being produced, but as I said I wouldn't trust my memories too much.
A problem with my memories is that I remember interpreting the mask message that was being pushed as... ambivalent from day one.
I'm sure I hadn't read LessWrong's opinions on it, but I clearly remember concluding from the start that masks had to help reduce the spread of the virus, because they would reduce how far your breath travels. I figured the experts would be able to realise this too, so the messaging had to be an attempt to slow down the incipient panic rush for masks.
A lot of people instead apparently polarised and started arguing against masks because the far right had jumped on the "Chinese virus" bandwagon. At the time, though, I was avoiding any mention of the virus I could, because I was fed up with it (I had recently finished a long piece of work on how the media weren't focusing enough on global warming even as coverage was finally improving, and seeing all this attention for a virus that, even in the worst case of "hundreds of millions of people infected", would kill significantly fewer people annoyed me a great deal. Not a rational reaction on my end, but I couldn't help it). I was just checking a couple of sites for the infection numbers and Rt, avoiding people and wearing what I could on my face, so I think I missed most of the confusion about it.
Thinking through it now, my guess is that instructions on how to make a mask at home or what to use as a quick fix would have likely contained the pandemic more and prevented more deaths.
But if people had panicked the wrong way, the pandemic would have spread faster.
Governments pushed the "don't panic" button out of habit and didn't really try to think the issue through.
In hindsight I guess they should have tried a different way to keep people calm; I think they underestimated how widespread the pandemic would become. Back then the call was harder (though I don't think they really weighed it rationally).
I wouldn't be able to estimate the damage from "undermining public trust in experts".
Part of the problem is that facebook has a lot of moderators who can just ban people. Ron Paul is strong enough to complain and get a decision reversed but average people who get banned by a random moderator can't.
I agree it's a big problem, the inability of average people to complain worries me as well.
I think Facebook should draw up strict guidelines for its moderators, hold them accountable for how they decide, keep track of how they've acted in the past, and reward accuracy while punishing "interpretations". For such a big organisation, it wouldn't really be excusable to leave moderators as free to interpret the rules as an average forum would.
This would likely help a bit: if a moderator thinks he's acting properly when shooting down people from the "enemy and obviously wrong" faction, things turn sour really fast.
If the solution doesn't do the trick and there are still too many "mistakes" then some other way to implement controls on the decision system would be needed.