Comments
I think the new communication systems could be a catalyst, but stopping at this conclusion obscures the actual cause of cancel culture. I think the answer is something like what Kaczynski said about oversocialization, and that social media somehow worsens the social dynamics responsible. How exactly these dynamics work socially and psychologically is an interesting question, so for me, "it's the new communication systems" is not a conclusion but a possible first step in finding the answer.
My own expectation is that limitations result in creativity. Writer's block is usually a result of having too many possibilities and choices. If I tell you "You can write a story about anything", it's likely harder for you to think of something than if I tell you "Write a story about an orange cat". In the latter situation, you're more limited, but you also have something to work with.
I'm not sure if it's as true for computers as it is for humans (that would imply information-theoretic factors), but there are plenty of factors in humans, like analysis paralysis and the "See also" section of that page.
If that is really his view, Sam Harris didn't think things through at all, nor did he think very deeply.
Qualia are created by the brain, not by anything external. Touching a hot stove feels bad because we are more likely to survive when we feel this way. There's no reason why it can't feel pleasurable to damage yourself; it just seems like a bad design choice. The brain uses qualia to reward and punish us so that we end up surviving and reproducing. Our defense mechanisms are basically just toying with us because it helps us in the end (it's merely the means to survival), and our brains somewhat resist our attempts at hacking our own reward mechanisms because those who could do that likely ended up dying more often.
You could use Harris's arguments to imply that objective beauty exists, too. This is of course also not correct.
The argument also implies that all life or all consciousness can feel positive and negative qualia, but that's not necessarily true. He should have written "made our corner of the universe suck less, for us, according to us". (What if a change feels good for us but causes great suffering to some alien race?)
Lastly, if these philosophers experienced actual, severe suffering for long periods of time, they would likely realize that suffering isn't even the issue; the issue is suffering that one feels is meaningless. Meaningful pain is not bothersome at all, and it doesn't even need to reduce further pain. Has Harris never read "Man's Search for Meaning" or other works which explain this?
Thank you! Writing is not my strong suit, but I'm quite confident about the ideas. I've written a lot, so it's alright if you don't want to engage with all of it. No pressure!
I should explain the thing about suffering better:
We don't suffer from the state of the world, but from how we think about it. This is crucial. When people try to improve other people's happiness, they talk about making changes to reality, but that's the least effective way they could go about it.
I believe this is even sufficient. That we can enjoy life as it is now, without making any changes to it, by simply adopting a better perspective on things.
For example, inequality is a part of life, likely an unavoidable one (The Pareto principle seems to apply in every society no matter its type). And even under inequality, people have been happy, so it's not even an issue in itself. But now we're teaching people in lower positions that they're suffering from injustice, that they're pitiful, that they're victims, and we're teaching everyone else that life could be a paradise, if only evil and immoral influences weren't preventing it. But this is a sure way to make people unhappy with their existence. To make them imagine how much better things could be, and make comparisons between a naive ideal and reality. Comparison is the thief of joy, and most people are happy with their lot unless you teach them not to be.
Teaching people about suffering doesn't cause it per se, but if you make people look for suffering, they will find it. If you condition your perception to notice something unpleasant, you will see it everywhere. Training yourself to notice suffering may have side-effects. I have a bit of tinnitus, and I got over it by not paying it any attention. It's only like this that my mind will start to filter it away, so that I can forget about it.
The marketing perspective
I don't think you need pain to motivate people to change; the carrot is as good as the stick. But you need one of the two at minimum (curiosity and other such drives make you act naturally, but do so by making it uncomfortable not to act and rewarding to act).
I don't think that suffering is bearable because of reward itself, but because of perceived value and meaning. Birth is really painful, but the event is so meaningful that the pain becomes secondary. The same goes for people who compete in the Olympics: they have found something meaningful enough that a bit of physical pain is a non-issue.
You can teach this to people, but it's hard to apply. It's better to help them avoid the sort of nihilism which makes them question whether things are worth it. I think one of the causes of modern nihilism is a lack of aesthetics.
My 2nd perspective
I don't think understanding translates directly into power. It's a common problem to think "I know what I should be doing, but I can't bring myself to do it". If understanding something granted you power over it, I'd practically be a wizard by now.
You can shift the problem that people attack, but if they have actual problems which put them in danger, I think their focus should remain on these. You can always create dissatisfaction by luring them towards better futures, in a way which benefits both them and others at the same time.
I'm never motivated by moral arguments, but some self-help books are alluring to me because they prey on my selfishness in a healthy manner which also demands responsibility and hard work.
As for the third possibility, that sounds a bit pessimistic. But I don't think it would be a worthless outcome as long as the image of what could be isn't a dangerous delusion. Other proposed roads to happiness include "Destroy your ego", "Be content with nothing", "Eat SSRIs forever", and various self-help which asks you to "hustle" and overwork.
who want to prevent human extinction
I see! That's something deeper than preventing suffering. I even think that there are some conflicts between the two goals. But motivating people towards this should be easier, since they're preventing their own destruction as well, and not just helping other people.
it is difficult to know what the far-reaching consequences of this hypothetical world would be
It really is. But it's interesting to me how neither of us has used this information to decrease our own suffering. It's like I can't value things if they come too easily, and like I want to find something which is worth my suffering.
But we can agree that wasted suffering is a thing. That state of indecision, being unable to either die or live, yield or fight back, fix the cause of suffering or come to terms with it.
The scarcity mindset is definitely a problem, but many resources are limited. I think a more complex problem is that people tend to look for bad actions to avoid rather than positive actions to adopt. It's all "we need to stop doing X" and "Y is bad" and "Z is evil". It's all about reduction, restrictions, avoidance. It simply chokes us. Many good people trap themselves with excessive limitations and become unable to move freely. Simply using positives like "You should be brave", "You should stand up for what you believe in", and "You should accept people for who they are" would likely help improve this problem.
there are certain important aspects of human psychology that I'm still unsure about
I think pain and such are thresholds between competing things. If I'm tired and hungry, whether or not I will cook some food depends on which of the two causes the greatest discomfort.
When procrastinating I've also found that deadlines helped me. Once I was backed into a corner and had to take action, I suddenly did. I ran away for as long as I could. The stress from deadlines might also result in dopamine and adrenaline, which help in the short term.
"Acceptance of suffering" is a bit ambigious. Accepting something usually reduces the suffering it causes, and accepting suffering lessens it too. But one can get too used to suffering, which makes them wait too long before they change anything, like the "This is fine" meme or the boiling frog that I mentioned earlier
Spread logical decisionmaking
Logic can defend against mistakes caused by logic, but we did not destroy ourselves in the past when we were less logical than now. I also don't think that logic reduces suffering. Many philosophers have been unhappy, and many people with Down syndrome are all smiles. Less intelligent people often have a sort of wisdom about them, often called "street smarts" when observed, but I think that the lack of knowledge leads them to make fewer map-territory errors. They're nearer to reality because they have less knowledge which can mislead them.
I personally think that intellect past a certain level gives humans the ability to deliberately manipulate their suffering
I don't think any human being is intelligent enough to do this (Buddha managed, but the method was crude, reducing not only suffering). What we can do is manipulate our reward systems. But this leaves us feeling empty, as we cannot fake meaning. Religion basically tells us to live a good life according to a fixed structure, and while most people don't like this lack of freedom, it probably leads to more happiness in the long run (for the same reason that neuroticism and conscientiousness are inversely correlated).
since human meaning is constructed by our psychological systems
Yes, the philosophical question of meaning and the psychology of meaning are different. To solve meaninglessness by proving external meaning (this is impossible, but let's assume you could) is like curing depression by arguing that one should be happy. Meaning is basically investment, engagement, and involvement in something which feels like it has substance.
I recommend just considering humanity as a set of axioms. Like with mathematical axioms, this gives us a foundation. Like with mathematics, it doesn't matter that this foundation is arbitrary, for no "absolute" foundation can exist (in other words, no set of axioms is more correct than any other. Objectivity does not exist, even in mathematics; everything is inherently relative).
Since attempting to prove axioms is silly, considering human nature (or yourself) as a set of axioms allows you not to worry about meaning and values anymore. If you want humanity to survive, you no longer have to justify this preference.
Maybe you can point me to a source of information that will help me see your perspective on this?
That would be difficult as it's my own conclusion. But do you know this quote by Taleb?
"I am, at the Fed level, libertarian;
at the state level, Republican;
at the local level, Democrat;
and at the family and friends level, a socialist."
The smaller the scope, the better. The reason stupid people are happier than smart people is that their scope of consideration is smaller. Being a big fish in a small pond feels good, but increase your scope of comparison to an entire country, and you become a nobody. Politics makes people miserable because the scope is too big; it feeds your brain with problems that you have no possibility of solving by yourself. "Community" is essential to human well-being because it's cohesion on a local level. "Family values" are important for the same reason. There's more crime in bigger cities than smaller ones. Smaller communities have less crazy behaviour; they're more down-to-earth. A lot of terrible things emerge when you increase the scale of things.
Multiple things on a smaller scale do not seem to have a cost. One family can have great coherence. You can have 100 families living side by side, still great. But force them all to live together in one big house, and you will notice the cost of centralization. You will need hierarchies, coordination, and more rules. This is similar to urbanization. It's also similar to how the internet went from being millions of websites to being a few hundred popular websites. It's even similar to companies merging into giants that most people consider evil.
An important antidote is isolation (gatekeeping, borders, personal boundaries, independence, separation of powers, the single-responsibility principle, live-and-let-live philosophies, privacy and other rights, preservation).
I wish it was just "reduced efficiency" which was the problem. And sadly, it seems that the optimal way to increase the efficiency between many things is simply to force them towards similarity. For society, this means the destruction of different cultures, the destruction of different ways of thinking, the destruction of different moralities and different social norms.
I presume you're referring to management claiming credit
It's much more abstract than that. The number of countries, brands, languages, accents, standards, websites, communities, religions, animals, etc. is decreasing. Everything is slowly tending towards one thing having a monopoly, with this one thing being the average of what was merged.
Don't worry if you don't get the last few points. I've tried to explain them before, but I have yet to be understood.
I wonder about what basis/set of information you're using to make these 3 claims?
Once a Moloch problem has been started, you "either join or die", like you said. But we can prevent Moloch problems from occurring in the first place, by preventing the world from becoming legible enough. For this idea, I was inspired by "Seeing Like a State" and this
There are many prisoner's-dilemma-like situations in society which do not cause problems simply because people don't have enough information to see them. If enough people cannot see them, then the games are only played by a few people. But that's the only solution to Moloch: collectively agree not to play (or, I suppose, never start playing in the first place). The number of Moloch-like problems has increased as a side-effect of the increased accessibility of information. Dating apps ruined dating by making it more legible. As information became more visible, and people had more choices and could make more informed decisions, they became less happy. The hidden information in traditional dating made it more "human", and less materialistic as well. Since rationalists, academics, and intellectuals in general want to increase the openness of information and seem rather naive about the consequences, I don't want to become either.
I agree with the factors leading to human extinction. My solution is "go back". This may not be possible, and like you say, we need to use intelligence and technology to go forwards instead. But like the alignment problem, this is rather difficult. I haven't even taught myself high-level mathematics, I've noticed all this through intuition alone.
I think letting small disasters happen naturally could help us prevent black-swan like events. Just like burning small patches of trees can prevent large forest fires. Humanity is doing the opposite. By putting all its eggs in one basket and making things "too big to fail", we make sure that once a disaster happens, it hits hard.
Related to all of this: https://slatestarcodex.com/2017/03/16/book-review-seeing-like-a-state/ (the page mentions black swan risks, Taleb, Ribbonfarm, legibility and centralization). I actually had most of these thoughts before I knew about this page, so that gives me some confidence that I'm not just connecting unrelated concepts like a schizophrenic.
My argumentation is a little messy, but I don't want to invest my life in understanding this issue or anything. Kaczynski's books have a few arguments which overlap with mine, and the other books I know are even more crazy, so I can't recommend them.
But maybe I'm just worrying over nothing. I'm extrapolating things as linear or exponential, but they may be s-shaped or self-correcting cycles. And any partial collapse of society will probably go back to normal or even bring improvements with it in the long run. A lot of people have ruined themselves worrying over things which turned out just fine in the end.
I like this post, but I have some problems with it. Don't take it too hard, as I'm not the average LW reader. I think your post is quite in line with what most people here believe (but you're quite ambitious in the tasks you give yourself, so you might get downvoted as a result of minor mistakes and incompleteness resulting from that). I'm just an anomaly who happened to read your post.
By bringing attention to tactical/emotionally pulling patterns of suffering, people will recognize it in their own life, and we will create an unfulfilled desire that only we have the solution for.
I think this might make suffering worse. Suffering is subjective, so if you make people believe that they should be suffering, or that suffering is justified, they may suffer needlessly. For example, poverty doesn't make people as dissatisfied with life as relative poverty does. It's when people compare themselves to others and realize that they could have it better that they start disliking what they have at the moment. If you create ideals, then people will work towards achieving them, but they will also suffer from the gap between the current state and the ideal. You may argue "the reward redeems the suffering and makes it bearable", and yes, but only as long as people believe that they're getting closer to the goal. Most positive emotion we experience is a result of feeling ourselves moving towards our goals.
Personal concurrent life-satisfaction is possible in-spite of punishment/suffering when punishment/suffering is perceived as a necessary sacrifice for an impending reward.
Yes, which is why one should not reduce "suffering" but "the causes of unproductive suffering". Just like one shouldn't avoid "pain", but "actions which are painful and without benefit". The conclusion of "Man's Search for Meaning" was that suffering is bearable as long as it has meaning, and that only meaningless suffering is unbearable. I've personally felt this as well. One of the times I was the most happy, I was also the most depressed. But that might just have been a mixed episode as is known from bipolar disorder.
I'm nitpicking, but I believe it's important to state that "suffering" isn't a fundamental issue. If I touch a flame and burn my hand, then the flame is the issue, not the pain. In fact, the pain is protecting me from touching the flame again. Suffering is good for survival, for the same reason that pain is good for survival. The proof is that evolution made us suffer, that those who didn't suffer didn't pass on their genes.
We are products of EA
I'm not sure this is true? EA seems to be the opposite of Darwinism, and survival of the fittest has been the standard until recently (everyone suddenly cares about reducing negative emotions and unfairness, to an almost pathological degree). But even if various forces helped me avoid suffering, would that really be a good thing?
I personally grew the most as a person as a result of suffering. You're probably right that you were the least productive when you didn't eat, but suffering is merely a signal that change is necessary, and when you experience great suffering, you become open to the idea of change. It's not uncommon that somebody hits rock bottom and turns their lives around for the better as a result. But while suffering is bearable, we can continue enduring, until we suffer the death of a thousand papercuts (or the death of the boiling frog, by our own hands)
That said, growth is usually a result of internal pressure, in which an inconsistency inside oneself finally snaps, so that one can focus on a single direction with determination. It's like a fever - the body almost kills itself, so that something harmful to it can die sooner.
We are still in trouble if the average human is as stupid as I am.
Are you sure suffering is caused by a lack of intelligence, and not by too much intelligence? ('Forbidden fruit' argument) And that we suffer from a lack of tech rather than from an abundance of tech? (As Ted Kaczynski and the Amish seem to think)
Many animals are thriving despite their lack of intelligence. Any problem more complicated than "Get water, food and shelter. Find a mate, and reproduce" is a fabricated problem. It's because we're more intelligent than animals that we fabricate more difficult problems. And if something was within our ability, we wouldn't consider it a problem, which is why we always fabricate problems which are beyond our current capacities; that is how we trick ourselves into growth and improvement. Growth and improvement which somehow resulted in us being so powerful that we can destroy ourselves. Horseshoe crabs seem content with themselves, and even after 400 million years they just do their own thing. Some of them seem endangered now, but that's because of us?
Bureaucracy
Caused by too much centralization, I think. Merging structures into fewer, bigger structures causes an overhead which doesn't seem to be worth it. Decentralizing everything may actually save the world, or at least decrease the feedback loop which causes a few entities to hog all the resources.
Moloch
Caused by too much information and optimization, and therefore unlikely to be solved with information and optimization. My take here is the same as with intelligence and tech. Why hasn't moloch killed us sooner? I believe it's because the conditions for moloch weren't yet reached (optimal strategies weren't visible, as the world wasn't legible and transparent enough), in which case, going back might be better than going forwards.
The tools you wish to use to solve human extinction are, from my perspective, what is currently leading us towards extinction. You can add AGI to this list of things if you want.
Great post!
It's a habit of mine to think in very high levels of abstraction (I haven't looked much into category theory though, admittedly), and while it's fun, it's rarely very useful. I think it's because of a width-depth trade-off. Concrete real-world problems have a lot of information specific to that problem, you might even say that the unique information is the problem. An abstract idea which applies to all of mathematics is way too general to help much with a specific problem, it can just help a tiny bit with a million different problems.
I also doubt the need for things which are so complicated that you need a team of people to make sense of them. I think that's likely a result of bad design. If a beginner programmer made a slot machine game, the code would likely be convoluted and unintuitive, but you could probably design the program in a way that all of it fits in your working memory at once. Something like "A slot machine is a function from the cartesian product of wheels to a set of rewards". An understanding which would simplify the problem so that you could write it much shorter and simpler than the beginner. What I mean is that there may exist simple designs for most problems in the world, with complicated designs being due to a lack of understanding.
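To make the "slot machine as a function" framing concrete, here's a minimal sketch in Python; the wheel symbols and payout table are invented for illustration, not taken from any real game:

```python
import random

# A slot machine reduced to two small functions: sample one element of the
# Cartesian product of the wheels, then map that element to a reward.
# Symbols and payouts below are arbitrary examples.

WHEELS = [["cherry", "lemon", "bell"]] * 3  # three identical wheels

def spin() -> tuple:
    """Pick one symbol per wheel, i.e. sample the Cartesian product of wheels."""
    return tuple(random.choice(wheel) for wheel in WHEELS)

def payout(combination: tuple) -> int:
    """Map a combination of symbols to a reward."""
    distinct = len(set(combination))
    if distinct == 1:   # all three symbols match
        return 50
    if distinct == 2:   # exactly two symbols match
        return 5
    return 0            # no match

result = spin()
print(result, "pays", payout(result))
```

The whole game fits in working memory because the design mirrors the one-sentence description, which is the point about simple designs.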
The real world values the practical way more than the theoretical, and the practical is often quite sloppy and imperfect, and made to fit with other sloppy and imperfect things.
The best things in society are obscure by statistical necessity, and it's painful to see people at the tail ends doubt themselves at the inevitable lack of recognition and reward.
I think there's a problem with the entire idea of terminal goals, and that AI alignment is difficult because of it.
"What terminal state does you want?" is off-putting because I specifically don't want a terminal state. Any goal I come up with has to be unachievable, or at least cover my entire life, otherwise I would just be answering "What needs to happen before you'd be okay with dying?"
An AI does not have a goal, but a utility function. Goals have terminal states: once you achieve them, you're done, and the program can shut down. A utility function goes on forever. But generally, wanting just one thing so badly that you'd sacrifice everything else for it seems like a bad idea. Such a bad idea that no person has ever been able to define a utility function which wouldn't destroy the universe when fed to a sufficiently strong AI.
I don't wish to achieve a state, I want to remain in a state. There's actually a large space of states that I would be happy with, so it's a region that I try to stay within. The space of good states forms a finite region, meaning that you'd have to stay within this region indefinitely, sustaining it. But something which optimizes seeks to head towards a "better state"; it does not want to stagnate, but this is precisely what makes it unsustainable. Something unsustainable is finite, something finite must eventually end, and something which optimizes towards an end is just racing to die. A human would likely realize this if they had enough power, but because life offers enough resistance, none of us ever win all our battles. The problem with AGIs is that they don't have this resistance.
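As a toy illustration of the distinction I'm drawing (the numbers and predicates below are made up, and a real agent's state is obviously not one float):

```python
# Three framings of "what an agent wants", over a single made-up numeric state.

def terminal_goal_reached(state: float) -> bool:
    """Goal framing: a predicate on states. Once True, there is nothing left to do."""
    return state >= 100

def utility(state: float) -> float:
    """Utility framing: a score with no finish line; higher is always preferred."""
    return state

def in_good_region(state: float) -> bool:
    """Region framing: success is staying inside an acceptable band indefinitely."""
    return 20 <= state <= 80

print(terminal_goal_reached(150))   # True: the goal-based agent is "done"
print(utility(151) > utility(150))  # True: the optimizer always wants to move on
print(in_good_region(150))          # False: pure optimization has left the region I'd want to stay in
```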
The after-lives we have created so far are either sustainable or the wish to die. Escaping samsara means disappearing, heaven is eternal life (stagnation) and Valhalla is an infinite battlefield (a process which never ends). We wish for continuance. It's the journey which has value, not the goal. But I don't wish to journey faster.
I meant that they were functionally booleans, as a single condition is either fulfilled or not: "is rich", "has anvil", "AGI achieved". In the anvil example, any count of 1 or more corresponds to true. In programming, casting positive integers to booleans results in "true" for all positive numbers and "false" for zero, just like in the anvil example. The intuition carries over too well for me to ignore.
The first example which came to mind for me when reading the post was confidence, which is often treated as a boolean "Does he have confidence? yes/no". So you don't need any countable objects, only a condition/threshold which is either reached or not, with anything past "yes" still being "yes".
A function where everything past a threshold maps to true, and everything before it maps to false, is similar to the anvil example, and to a function like "is positive" (since a more positive number is still positive). But for the threshold to be exactly 1 unit, you need to choose a unit which is large enough. $1 is not rich, and having one water droplet on you is not "wet", but with the appropriate unit (exactly the size of the threshold/condition) these become functionally similar.
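A minimal sketch of what I mean, with made-up thresholds (the dollar figure is just an arbitrary stand-in for "rich"):

```python
# Threshold functions: everything at or past the threshold maps to True,
# everything below maps to False. With the unit chosen as large as the
# threshold itself, each of these behaves like the one-anvil case.

def has_anvil(anvils: int) -> bool:
    return anvils >= 1                      # one anvil is already "true"

def is_rich(dollars: float, threshold: float = 1_000_000) -> bool:
    return dollars >= threshold             # $1 is not rich; the unit must be big enough

def is_positive(x: float) -> bool:
    return x > 0                            # a more positive number is still positive

print(has_anvil(0), has_anvil(3))           # False True
print(is_rich(1), is_rich(2_000_000))       # False True
print(is_positive(-2), is_positive(0.5))    # False True
```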
I'm hoping there is simple and intuitive mathematics for generalizing this class of problems. And now that I think about it, most of these things (the ones which can be used for making more of themselves) are catalysts (something used but not consumed in the process of making something). Using money to make more money, anvils to make more anvils, breeding more of a species before it goes extinct.
This probably makes more sense if you view it as a boolean type, you either "have an anvil" or you don't, and you either have access to fire or you don't. We view a lot of things as booleans (if your clothes get wet, then wet is a boolean). This might be helpful? It connects what might seem like a sort of edge case into something familiar.
But "something that relies on itself" and "something which is usually hard to get, but easy to get more of once you have a bit of it" are a bit more special I suppose. "Catalyst" is a sort of similar yet different idea. You could graph these concepts as dependency relations and try out all permutations to see if more types of problems exists
The short version is that I'm not sold on rationality, and while I haven't read 100% of the Sequences, it's also not like my understanding is 0%. I'd have read more if they weren't so long. And while an intelligent person can come up with intelligent ways of thinking, I'm not sure this is reversible. I'm also mostly interested in tail-end knowledge. For some posts, I can guess the content from the title, which is boring. Finally, teaching people what not to do is really inefficient, since the space of possible mistakes is really big.
Your last link needs an s before the dot.
Anyway, I respect your decision, and I understand the purpose of this site a lot better now (though there's still a small, misleading difference between the explanation of rationality and in how users are behaving. Even the name of the website gave the wrong impression).
Yes intuitions can be wrong welcome to reality
But these ways of looking at the world are not factually wrong, they're just perverted in a sense.
I agree that schools are quite terrible in general.
how could I have come up with this myself?
That helps for learning facts, but one can teach the same things in many different ways. A math book from 80 years ago may be confusing now, even if the knowledge it covers is something that you know already, because the terms, notation and ideas are slightly different.
we need wisdom because people cannot think
In a way. But some people who have never learned psychology have great social skills, and some people who are excellent with psychology are poor socializers. Some people also dislike "nerdy" subjects, and it's much more likely that they'd listen to a TED talk on body language than read a book on evolutionary psychology and non-verbal communication. Having an "easy version" of knowledge available which requires 20 IQ points less than the hard version seems like a good idea.
Some of the wisest and most psychologically healthy people I have met have been non-intellectual and non-ideological, and even teenagers or young adults. Remember your "Things to unlearn from school" post? Some people may have less knowledge than the average person, and thus fewer errors, making them clear-sighted in a way that makes them seem well-read. Teaching these people philosophy could very well ruin their beautiful worldviews rather than improve on them.
if you know enough rationality you can easily get past all that.
I don't think "rationality" is required. Somebody who has never heard about the concept of rationality, but who is highly intelligent and thinks things through for himself, will be alright (outside of existential issues and infohazards, which have killed or ruined a fair share of actual geniuses).
But we're both describing conditions which apply to less than 2% of the population, so at best we have to suffer from the errors of the 98%.
I'm not sure what you mean by "when you dissent when you have an overwhelming reason". The article you linked to worded it "only when", as if one should dissent more often, but it also warns against dissenting since it's dangerous.
By the way, I don't like most rationalist communities very much, and one of the reasons is that they have a lot of snobs who will treat you badly if you disagree with them. The social mockery I've experienced is also quite strong, which is strange, since you'd expect intelligence to correlate with openness, and the high rate of autistic people to combat some of the conformity.
I also don't like activism, and the only reason I care about the stupid ideas of the world is that all the errors are making life harder for me and the people that I care about. Like I said, not being an egoist is impossible, and there's no strong evidence that all egoism is bad, only that egoism can be bad. The same goes for money and power, I think they're neutral and both potentially good/bad. But being egoistic can make other people afraid of me if I don't act like I don't realize what I'm doing.
It's more optimal to be passionate about a field
I think this is mostly correct. But optimization can kill passion (since you're just following the meta and not your own desires). And common wisdom says "Follow your dreams" which is sort of naive and sort of valid at the same time.
Believing false things purposefully is impossible
I think believing something you think is false, intentionally, may be impossible. But false beliefs exist, so believing in false things is possible. For something where you're between 10% and 90% sure, you can choose whether you want to believe in it or not, and then use the following algorithm:
Say "X is true because" and then allow your brain to search through your memoy for evidence. It will find them.
The articles you posted on beliefs are about the rules of linguistics ("belief in belief" is a valid string) and logic, but how belief works psychologically may be different. I agree that real beliefs are internalized (exist in system 1) to the point that they're just part of how you anticipate reality. But some beliefs are situational and easy to consciously manipulate (example: self-esteem. You can improve or harm your own self-esteem in about 5 minutes if you try, since you just pick a perspective and set of standards in which you appear to be doing well or badly). Self-esteem is subjective, but I don't think the brain differentiates between subjective and objective things; it doesn't even know the difference.
And it doesn't seem like you value truth itself, but that you value the utility of some truths, and only because they help you towards something you value more?
Ethically yes, epistemically no
You may believe this because a worldview will have to be formed through interactions with the territory, which means that a worldview cannot be totally unrelated to reality? You may also mean this: that if somebody has both knowledge and value judgements about life, then the knowledge is either true or false, while the value judgements are a function of the person. A happy person might say "Life is good" and a depressed person might say "Life is cruel", and they might even know the same facts.
Online "black pills" are dangerous, because the truth value of the knowledge doesn't imply that the negative worldview of the person sharing it is justified. Somebody reading the vasistha yoga might become depressed because he cannot refute it, but this is quite an advanced error in thinking, as you don't need to refute it for its negative tone to be false.
Rationality is about having cognitive algorithms which have higher returns
But then it's not about maximizing truth, virtue, or logic.
If reality operates by different axioms than logic, then one should not be logical.
The word "virtue" is overloaded, so people write like the word is related to morality, but it's really just about thinking in ways which makes one more clear-sighted. So people who tell me to have "humility" are "correct" in that being open to changing my beliefs makes it easier for me to learn, which is rational, but they often act as if they're better people than me (as if I've made an ethical/moral mistake in being stubborn or certain of myself).
By truth, one means "reality" and not the concept "truth" as the result of a logic expression. This concept is overloaded too, so that it's easy for people to manipulate a map with logical rules and then tell another person "You're clearly not seeing the territory right".
physics is more accurate than intuitive world models
Physics is our own constructed reality, which seems to act a lot like the actual reality. But I think an infinite number of physics could exist which predict reality with high accuracy. In other words, "There's no one true map". We reverse-engineer experiences into models, but experience can create multiple models, and multiple models can predict experiences.
One of the limitations is "there's no universal truth", but this is not even a problem, as the universe is finite. But "universal" in mathematics is assumed to be truly universal, covering all things, and it's precisely this which is not possible. But we don't notice, and thus come up with the illusion of uniqueness. And it's this illusion which creates conflict between people, because they disagree with each other about what the truth is, claiming that conflicting things cannot both be true. I dislike the consensus because it's the consensus and not a consensus.
A good portion of hardcore rationalists tend to have something to protect, a humanistic cause
My bad for misrepresenting your position. Though I don't agree that many hardcore rationalists care for humanistic causes. I see them as placing rationality above humanity, and thus preferring robots, cyborgs, and AIs above humanity. They think they prefer an "improvement" of humanity, but this functionally means the destruction of humanity. If you remove negative emotions (or all emotions entirely; after all, these are the source of mistakes, right?), subjectivity, and flaws from humans, and align them with each other by giving them the same personality, or get rid of the ego (it's also a source of errors and unhappiness), what you're left with is not human. It's at best a sentient robot. And this robot can achieve goals, but it cannot enjoy them.
I just remembered seeing the quote "Rationality is winning", and I'll admit this idea sounds appealing. But a book I really like (EST: Playing the game the new way, by Carl Frederick) is precisely about winning, and its main point is this: You need to give up on being correct. The human brain wants to have its beliefs validated, that's all. So you let other people be correct, and then you ask them for what you want, even if it's completely unreasonable.
Rationality doesn't necessarily have nature as a terminal value
I meant nature as its source (of evidence/truth/wisdom/knowledge). "Nature" meaning reality/the dao/the laws of physics/the universe/GNON. I think most schools of thought draw their conclusions from reality itself. The only kind of worldviews which seems disconnected from reality is religions which create ideals out of what's lacking in life and making those out to be virtue and the will of god.
None of that is incompatible with rationality
What I dislike might not be rationality, but how people apply it, and psychological tendencies in people who apply it. But upvotes and downvotes seem very biased in favor of consensus and verifiability, rather than simply being about getting what you want out of life. People also don't seem to like being told accurate heuristics which seem immoral or irrational (in the colloquial sense that regular people use) even if they predict reality well. There's also an implicit bias towards altruism which cannot be derived from objective truth.
About my values: they already exist even if I'm not aware of them; they're just unconscious until I make them conscious. But if system 1 functions well, then you don't really need to train system 2 to function well, and it's a pain to force system 2 rationality onto system 1 (your brain resists most attempts at self-modification). I like the topic of self-modification, but that line of study doesn't come up on LW very often, which is strange to me. I still believe that the LW community downplays the importance of human nature and psychology. It may even undervalue system 1 knowledge (street smarts and personal experiences) and overvalue system 2 knowledge (authority, book smarts, and reasoning).
There's a lot to unfold for this first point:
Another issue with teaching it academically is that academic thought, like I already said, frames things in a mathematical and thus non-human way. And treating people like objects to be manipulated for certain goals (a common consequence of this way of thinking) is not only bad taste, it makes the game of life less enjoyable.
Learning how to program has harmed my immersion in games, and I have a tendency to powergame, which makes me learn new videogames way faster than other people, but also with the result that I'm having less fun than them. I think rationality can result in the same thing. Why do people dislike "sellouts" and "car salesmen" if not for the fact that they simply optimize for gains in a way which conflicts with taste? But if we all just treat taste like it's important, or refuse to collect so much information that we can see the optimal routes, then Moloch won't be able to hurt us.
If you want something to be part of you, then you simply need to come up with it yourself; it will be your own knowledge. Learning other people's knowledge, however, feels to me like consuming something foreign.
Of course, my defense of ancient wisdom so far has simply been to translate it into an academic language in which it makes sense. "Be like water" is street-smarts, and "adaptability is a core component of growth/improvement/fitness" is the book-smarts. But the "street-smarts" version is easier to teach, and now that I think about it, that's what the bible was for.
Most things that society wastes its time discussing are wrong. And they're wrong in the sense that even an 8-year-old should be able to see that all controversies going on right now are frankly nonsense. But even academics cannot seem to frame things in a way that isn't riddled with contradictions and hypocrisy. Does "We are good, but some people are evil, and we need to fight evil with evil, otherwise the evil people will win by being evil while we're being good" not sound silly? A single thought will get you Karl Popper's "paradox of tolerance", and a single thought more will make you realize that it's not a paradox but a kind of neutrality/reflexivity which makes both sides equal, and that "We need to fight evil" means "We want our brand of evil to win" as long as people don't dislike evil itself but rather how it's used. Again, this is not more complicated than "I punched my little brother because I was afraid he'd punch me first, and punching is bad", which I expect most children to see the problem with.
astrology
The thought experiment I had in mind was limited to a single isolated situation; you took it much further, haha. My point was simply "If you use astrology for yourself, the outcomes are usually alright". Same with tarot cards: as far as I'm concerned, they're a way to talk with your subconscious without your ego getting in the way, which requires acting as if something else is present. Even crystal balls are probably a kind of Rorschach test, and should not be used to "read other people" for this reason. Finally, I don't disagree with the low utility of astrology, but false hope gives people the same reassurance as real hope. People don't suffer from the non-existence of god, but from the doubt of his existence. The actual truth value of beliefs has no psychological effect (proof: otherwise we could use beliefs to measure the state of reality).
are more rational w.r.t. to that goal
I disagree, as I know of counter-examples. It's more likely for somebody to become rich making music if their goal is simply to make music and enjoy themselves than if their goal is to become rich making music. You see similar effects for people who try to get girlfriends, or happiness for that matter. If X results in Y, then you should optimize for X and not for Y. Many companies are dying because they don't realize such a simple thing (they try to exploit something pre-existing rather than making more of what they're exploiting, for instance the trust in previous IPs). Ancient wisdom tackles this. Wu Wei is about doing the right thing by not trying to do it. I don't know how often this works, but it sometimes does.
I have to disagree that anyone's goal is truth. I've seen strong evidence that knowledge of an environment is optimal for survival, and that knowledge-optimizing beats self-delusion every time, but even in this case, the real goal is "survival" and not "truth". And my proof is the following: if you optimize for truth because it feels correct or because you believe it's what's best, then your core motivation is feelings or beliefs respectively. For similar reasons, non-egoism is trivially impossible. But the "Something to protect" link you sent seems to argue for this as well?
And truth is not always optimal for goals. The belief that you're justified and the belief that you can do something are both helpful. The average person is 5/10 but tends to rate themselves as 7/10, which may be around the optimal bias.
By the way, most of my disagreements so far seem to be "Well, that makes sense logically, but if you throw human nature into the equation then it's wrong"
Some people may find fulfillment from that
I find myself a little doubtful here. People usually chase fame not because they value it, but because other people seem to value it. They might even agree cognitively on what's valuable, but it's no use if they don't feel it.
I think you would need to provide evidence for such claims
How many great people's autobiographies and life stories have you read? The nearer you get to them, the more human they seem, and if you get too close you may even find yourself crushed by pity. About Isaac Newton it was even said "As a man he was a failure; as a monster he was superb". Boltzmann committed suicide; John Nash suffered from schizophrenia. Philosophy is even worse off; titles like "suicide or coffee?" do not come from healthy states of mind. And have you read the Vasistha Yoga? It's basically poison. But it's ultimately a projection; a worldview does not reveal the world, but rather the person with the worldview.
Then you weren't thinking rationally
But what saved me was not changing my knowledge, but my interpretation of it. I was right that people lie a lot, but I thought it was for their own sake, when it's mostly out of consideration for others. I was right that people were irrational, but I didn't realize that this could be a good thing.
No one can exempt you from laws of rationality
That seems like it's saying "I define rationality as what's correct, so rationality can never be wrong, because that would mean you weren't being rational". By treating rationality as something which is discovered rather than created (by creating a map and calling it the territory), any flaw can be justified as "that wasn't real rationality, we just didn't act completely rationally because we're flawed human beings! (our map was simply wrong!)".
There can be no universal knowledge, maps of the territory are inherently limited (and I can prove this). As far as rationality uses math and verbal or written communication, it can only approximate something which cannot be put into words "The dao of which can be spoken is not the dao" simply means "the map is not the territory".
By the way, I think I've found a big difference between our views. You're (as far as I can tell) optimizing for "Optimization power over reality / a more reliable map", while I'm optimizing for "Biological health, psychological well-being and enjoyment of existence".
And they do not seem to have as much in common as rationalists believe.
But if rationality in the end worships reality and nature, that's quite interesting, because that puts it in the same boat as Taoism and myself. Some people even put Nature=God.
Finally, if my goal is being a good programmer, then a million factors will matter, including my mood, how much I sleep, how much I enjoy programming, and so on. But somebody who naively optimizes for programming skills might practice at the cost of mood, sleep, and enjoyment, and thus ultimately end up with a mediocre result. So in this case, a heuristic like "Take care of your health and try to enjoy your life" might not lose out to a rat-race-like mentality in performance. Meta-level knowledge might help here, but I still don't think it's enough. And the tendency to dismiss things which seem unlikely, illogical, or silly is not as great a heuristic as one would think, perhaps because any beliefs which manage to stay alive despite being silly have something special about them.
I think majority of people aren't aware of psychology and various fields under it
I don't think there's a reason for most people to learn psychology or game theory, as you can teach basic human behaviour and such without the academic perspective. I even think it's a danger to be more "book smart" than "street smart" about social things. So rather than teaching game theory in college, schools could make children read and write a book report on "How to Win Friends & Influence People" in 4th grade or whatever. Academic knowledge which doesn't make it to 99% of the population doesn't help ordinary people much. But a lot of this knowledge is simple and easier than the math homework children tend to struggle with.
I don't particularly believe in morality myself, and I also came to the conclusion that having shared beliefs and values is really useful, even if it means that a large group of people are stuck in a local maximum. As a result of this, I'm against people forcing their "moral" beliefs on foreign groups, especially when these groups are content and functional already. So I reject any global consensus of what's "good". No language is more correct than another language, and the same applies for cultures and such.
Well it depends on your definition of inhuman
It's funny that you should link that post, since it introduces an idea that I already came up with myself. What I meant was that people tend to value what's objective over what's subjective, so that their rational thinking becomes self-destructive or self-denying in a sense. Rationality helps us overcome our biases, but thinking of rationality as perfect and of ourselves as defective is not exactly healthy. A lot of people who think they're "super-humans" are closer to being "half-humans", since what they're doing is closer to destroying their humanity than overcoming or going beyond it. And I'm saying this despite the fact that some of these people are better at climbing social hierarchies or getting rich than me. In short, the objective should serve the subjective, not the other way around. "The lens that sees its own flaws" merely conditions itself to see flaws in everything. Some of my friends are artists, and they hate their own work because they're good at spotting imperfections in it; I don't consider this level of optimization to be any good for me. When I'm rational, it's because it's useful for me, so I'm not going to harm myself in order to become more rational. That's like wanting money thinking it will make me happy, and then sacrificing my happiness in order to make money.
But the fields like cognitive biases etc are not
I'll agree as long as these fields haven't been subverted by ideologies or psychological copes against reality yet (as that's what tends to make soft sciences pathetic). The "tall poppy syndrome" has warped the public's perception of the "Dunning-Kruger effect", so that it becomes an insult you can use against anyone you disagree with who is certain of themselves, especially in a social situation in which a majority disagree.
Astrology
Astrology is wrong and unscientific, but I can see why it would originate. It's a kind of pattern recognition gone awry. Since everything is related, and the brain is sometimes lazy and thinks that correlation equals causation and that "X implies Y" is the same as "Y implies X", people use patterns to predict things, and assume that recreating the patterns will recreate the things. This is mostly wrong, of course, but not always. People who are happy are likely to smile, but smiling actually tends to make you happier as well. Do you know the tragic story behind the person who introduced handwashing? He found the right pattern, and the results were verifiable, but because his idea sounded silly, he ended up suffering.
If you had used astrology yourself, it might have ended better, as you'd be likely to interpret what you wanted to be true, and your belief that your goal in life was fated to come true would help against the periodic doubt that people face in life.
I would strongly disagree on the front of intelligence
Intelligence is not something you are; it's something you have. Identifying with your intelligence is how you disown 90% of yourself. Seeing intelligence as something available to you rather than as something you are helps eliminate internal conflict. Every "gifted kid burnout" and "depressed intelligent person" situation I have seen was partly caused by this dangerous identification. Even if you dismiss everything else I've said so far, I want to stress the importance of this one thing. Lastly, "systematic optimality" seems to suffer from something like Goodhart's law. When you optimize for one variable, you may harm 100 other variables slightly without realizing it (paperclip optimizers seem like the mathematical limit of this idea). Holistic perspectives tend to go wrong less often.
I like the Internal Family Systems view. I think the brain has competing impulses whose strength depends on your physical and psychological needs. But while I think your brain is rational according to what it wants, I don't think it's rational according to what you want. In fact, I think people's brains tend to toy with them completely. It creates suffering to motivate you, it creates anxiety to get you to defend yourself, it creates displeasure and tells you that you will be happy if you achieve your goals. Being happy all the time is easy, but our brain makes this hard to realize so that we don't hack our own reward systems and die. If you only care about a few goals, your worldview is extremely simple. You have a complex life with millions of factors, but you only care about a few objective metrics? I'm personally glad that people who chase money or fame above all end up feeling empty, for you might as well just replace humanity with robots if you care so little for experiencing what life has to offer.
there is a good amount of coorelations with IQ
Oh, I know, I have a few bans from various websites myself (and I once got rate-limited on here). And intelligence correlates with nihilism, meta-thinking, systemization, and anxiety (I know a study found the correlation to mental illness to be false, but I think the correlation is negative until about 120 IQ and then positive after). But why did Nikola Tesla's intelligence not prevent him from dying poor and lonely? Why was Einstein so awkward? Why do so many intelligent people not enjoy life very much? My answer is that these are consequences of lacking humanity / healthy ways of thinking. It's not just that stupid people are delusional. I personally like the idea that intelligence comes at the cost of instinct. For reference, I used to think rationally: I hated the world, I hated people, I couldn't make friends, I couldn't understand myself. Now I'm completely fine, I even overcame depression. I don't suffer and I don't even dislike suffering, I love life, I like socializing. I don't worry about injustice, immorality or death.
I just found a highlight of the sequences, and it turns out that I have read most of the posts already, or just discovered the principles myself previously. And I disagree with a few of the moral rules because they decrease my performance in life by making me help society. Finally, my value system is what I like, not what is mathematically optimal for some metric which people think could help society experience less negative emotions (I don't even think this is true or desirable)
There's an entire field of psychology, yes, but most men are still confused by women saying "it's fine" when they are clearly annoyed. Another thing is women dressing up because they want attention from specific men. Dressing up in a sexy manner is not a free ticket for any man to harass them, but socially inept men will say "they were asking for it" because the whole concept of selection and standards doesn't occur to them in that context. And have you read Niccolò Machiavelli's "The Prince"? It predates psychology, but it is psychology, and it's no worse than modern books on office politics and such, as far as I can tell. Some things just aren't improving over time.
wisdom goes wrong a lot of time
You gave the example of the ayurvedic textbook, but I'm not sure I'd call that "wisdom". If we compare ancient medicine to modern medicine, then modern medicine wins in like 95% of cases. But for things relating to humanity itself, I think that ancient literature comes out ahead. Modern hard sciences like mathematics are too inhuman (autistic people are worse at socializing because they're more logical and objective). And modern soft sciences are frankly pathetic quite often (Gardner's Theory of Multiple Intelligences is nothing but a psychological defense against the idea that some people aren't very bright. Whoever doesn't realize this should not be in charge of helping other people with psychological issues)
I don't understand where it may apply other than being a nice way to say "be more adaptive"
It's a core concept which applies to all areas of life. Humans won against other species because we were better at adapting. Nietzsche wrote "The snake which cannot cast its skin has to die. As well the minds which are prevented from changing their opinions; they cease to be mind". This community speaks a lot about "updating beliefs" and "intellectual humility" because thinking that one has all the answers, and not updating one's beliefs over time, leads to cognitive inflexibility/stagnation, which prevents learning. Principles are incredibly powerful, and most human knowledge probably boils down to about 200 or 300 core principles.
I have found that I can bypass a lot of wisdom by using these axioms
Would I be right to guess that ancient wisdom fails you the most in objective areas of life, and that it hasn't failed you much in the social parts? I don't disagree that modern axioms can be useful, but I think there are many areas where "intelligent" approaches lead to worse outcomes. For the most part, attempting to control things leads to failure. I've had more unpleasant experiences on heavily moderated platforms than I have had in completely unmoderated spaces. I think it's because self-organization can take place once disturbance from the outside ceases. But we will likely never know.
I think the failure to general purpose overcome akrasia is a failure of rationality
You could put it like that. I'd say something like "The rules of the brain are different than those of math, if you treat the brain like it's supposed to be rational, you will always find it to be malfunctioning for reasons that you don't understand". Too many geniuses have failed at living good lives for me to believe that intelligence is enough. I have friends with IQs above 145 who are depressed because they think too rationally to understand their own nature. They reject the things which could help them, because they look down on them as subjective/silly/irrational.
David Goggins' story is pretty interesting. I can't say I went through as much as him, but we do have things in common. This might be why I have the courage to criticize science on LW in the first place.
No problem! Little note though, your psychiatrist might doubt you if it seems like you're trying to self-diagnose because of something you read online. It may be better not to name it directly unless they bring it up first
Some false beliefs can lead to bad actions, but I don't think it's all of them. After all, human nature is biased, because having a bias aided in survival. The psyche also seems like it deceives itself as a defense mechanism fairly often. And I think that "believe in yourself" is good advice even for the mediocre.
I'm not sure which part of my message each part of your message is in response to exactly, but some realizations are harmful because they're too disillusioning. It's often useful to act like certain things are true - that's what axioms and definitions are, after all. But these things are not inherently true or real, they become so when we decide that they are, but in a way it's just that we created them. But I usually have to not think about that for a while before these things go back to looking like they're solid pieces of reality rather than just agreements.
Ancient wisdom can fail, but it's quite trivial for me to find examples in which common sense can go terribly wrong. It's hard to fool-proof anything, be it technology or wisdom.
Some things progress. Math definitely does. But like you said, a lot of wisdom is rediscovered periodically. Science hasn't increased our social skills nor our understanding of ourselves; modern wisdom and life advice is not better than it was 2000 years ago. And it's not even because science cannot deal with these. The whole "Be like water" thing is just flexibility/adaptability. Glass is easier to break than plastic. What's useful is that somebody who has never taken a physics class or heard about Darwinism can learn and apply this principle anyway. And this may still apply to some wisdom which accidentally reveals something which is beyond the current standard of science.
As for that which is not connected to reality much (wisdom which doesn't seem to apply to reality), it's mostly just the axioms of human cognition/nature. It applies to us more than to the world. "As within, so without", in short, internal changes seem to cause external changes. If you're in a good mood then the external world will seem better too. A related quote is "As you think, so you shall become", which is oddly similar to the idea of cognitive behavioural therapy.
I see this problem quite often in communities for people with ADHD. People describe being unable to relax or start any task if they have any plans later, seemingly going into a sort of "waiting mode" until that event happens. This may be a common problem which is simply stronger in people with ADHD, I'm not sure.
If you Google "ADHD Waiting mode", you should be able to find posts on this. I don't know how many of these are scientific or otherwise high-quality, and how many of them are unhealthy self-victimization and other such things. I'm not judging, as I'm diagnosed with ADHD and a few other things myself, I just don't recommend identifying as one's medical diagnoses nor considering them as inherently impossible to overcome.
Then, I'd argue, they're being wrong or pedantic. Since I don't believe my evidence is wrong, it's at most incomplete, and one could argue that an incomplete answer is incorrect in a sense, not because it says anything wrong, but because it doesn't convey the whole truth. If either reason applied to anyone reading that comment, I'd have loved to discuss it with them, which is why I wrote that initial comment in a slightly provocative or cocky way (which I believe is not inappropriate as it reflects my level of confidence quite accurately). This may conflict with some people's intellectual virtues, but I think a bit of conflict is healthy/necessary for learning
Maybe people care way less about the difference between the two kinds of downvotes than I do. Even if the comment was bad or poorly communicated, I don't think the disagree downvote is appropriate as long as the answer is correct. I see the votes as being "subjective" and "objective" respectively. I agree about the noise thing
I don't think any one option is precise enough that it's correct on its own, so I will have to say "5" as well.
Here's my take:
- Altruism can be a result of both good and bad mental states.
- Helping others tends to be good for them, at least temporarily.
- Helping people can prevent them from helping themselves, and from growing.
- Helping something exist which wouldn't exist without your help is to get in the way of natural selection, which over time can result in many groups who are a net negative for society in that they require more than they provide. They might also remain dependent on others.
- Finally, (and I expect some people to disagree with this) I think that moral good is a luxury. Luxuries are pleasant, but expensive, so when you engage in more luxury than you can afford, it stops being sustainable. And putting luxuries above necessities seems to me a good definition of decadence.
Everything has dose-dependent and context-dependent pros and cons.
I think you're expecting too much of the word "good". I don't think any "good" exists such that more of it is always better, so I think "good" is a region of space rather than a direction. If optimization is gradient descent, then the "good" direction might change with every step you take. But if optimization means "what metric should we optimize for?" then we don't know (we have yet to find a single metric which an AGI could maximize without destroying humanity. Heading too far in any direction seems dangerous). So I think many people's intuition of the word "good" can prevent them from ever hitting a satisfactory answer (as they're actually searching for something which can be taken to infinity without anything bad happening as a result, and not even considering the context in question)
That sounds about right. And "people sometimes feel that way" is a good explanation for the downvote in my opinion. I was arguing the object-level premises of the post because the "disagree" downvote was factually wrong, and this factual wrongness, I argue, is caused by a faulty understanding of how truth works, and this faulty understanding is most common in the western world and in educated people, and in the ideologies which correlate with western thought and academia.
If you disagree with something which is true, I think the only likely explanations are "Does not understand" and "Has a dislike of", and the bias I pointed out covers both of these possibilities (the former is a "map vs territory" issue and the latter is a "morality vs reality" issue).
I think you figured out what went wrong nicely, but in the end the disagreement remains. I still consider my point likely. If somebody comes along and tells me that they disagreed with it for other reasons, I might even argue that they're lying to themselves, as I'm way too disillusioned to think that a "will to truth" exists. I think social status, moral values and other such things are stronger motivators than people will admit even to themselves.
I referred to that too (specifically, the assumption). By true I meant that the bias which I think is to blame certainly exists, not that it was certain to be the main reason (but I'd like to push against this bias in general, so even if this bias only applies to some of the people who see my comment, I think it's an important topic to bring up, and that it likely has enough indirect influence to matter)
To address your points:
1: Of course it's mixed. But the mixed advice averages out to be "wise", something generally useful.
2: I think it's necessarily trial and error, but a good question is "does the wisdom generalize to now?".
3: This of course depends on the examples that you choose. A passage on the ideal age of marriage might generalize to our time less gracefully than a passage on meditation. I think this goes without saying, but if we assume these things aren't intuitive, then a proper answer would be maybe 5 pages long.
4: Would interpreting it as "negative" not mean that it has been misunderstood? That one can learn without understanding is precisely why they could prosper with a level of education which pales in comparison to that of modern times. We learned that bad smells were associated with sickness way before we discovered germs. If our tech requires intelligence to use, then the lower quartile of society might struggle. And with the blind approach you can use genius strategies even if you're mediocre.
5: along with 4, I think this is an example of the bias that I talked about above. What we think of as "real" tends to be sufficiently disconnected from humanity. Religion and traditional ways of living seem to correlate with mental health, so the types of people who think that wealth inequality is the only source of suffering in the world are too materialistic and disconnected. Not to commit the naturalistic fallacy, but nature does optimize in its own way, and imitating nature tends to go much better than "correcting" it.
You don't think the entire western world is biased in favor of science to a degree which is a little naive? In addition to this, I think that people idolize intelligence and famous scientists, that they largely consider people born before the 1950s to have repulsive moral values, that they dislike tradition, that they consider it very important to be "educated", that they overestimate book smarts and underestimate the common sense of people living simple lives, and that they believe that things generally improve over time (such that older books are rarely worth bothering with), and I believe that social status in general makes people associate with newer ideas over older ones. There's also a lot of people who have grown up around old, strict and religious people and who now dislike them. It doesn't help that more intelligent people are higher in openness in general, and that rationalism correlates with a materialistic and mechanical worldview.
Many topics receive a lot more hostility than they deserve because of these biases, and usually because they're explained in a crazy way (for instance, Carl Jung's ideas are often called pseudoscience, and if you take the Bible literally then it's clearly wrong) or because people associate them with immorality (say, the idea that casual sex is disliked by traditional people because they were mean and narrow-minded, and not because casual sex caused problems for them, or because it might cause problems for us)
A lot of things are disliked or discarded despite being useful, and a lot of wisdom is in this category. All of this was packed in the message that "people dislike old things because they sound irrational or immoral" (people tend to dislike long comments)
I don't see it as unkind, and I don't think "trial and error" is a wrong explanation either. It seems very unlikely that ideas which are strictly harmful stick around for a very long time. So much that it must necessarily tend in the other direction (I won't attempt to prove this though)
I'm good at navigating hypothesis space, so any difficulties are likely related to theory of mind of people who are very different from myself (being intelligent but out of sync in a way). Still, I don't buy the idea that people can't or shouldn't do this. You're even guessing at my intentions right now, and if somebody is going to downvote me for acting in bad faith, they'll also need to guess at my intentions. So this seems like a common and sensible thing to do in moderation, rather than an intellectual sin of sorts
They did answer the question, there's just a little bit of deduction required? I understood it at a glance and didn't even notice any typos. Situations in which agents can learn something without understanding the reasons behind what they learn are quite common, it's not a novel idea, it just raises a red flag in people who are used to scientific thinking. The general bias in society against tradition/spirituality/religion is too strong compared to the utility (even if not correctness) of these three.
That useless extra text in my previous comment saves a future comment or two by taking things into account in advance. I even wrote the "I didn't understand the explanation" reaction above (as something one might have thought before downvoting the comment), so it's not that I didn't think of it, I just considered it an unlikely reaction as I disagree with it
This seems like an argument in favor of:
Stability over potential improvement, tradition over change, mutation over identical offspring, settling in a local maximum over shaking things up, and specialization vs generalization.
It seems like a hyperparameter. A bit like the learning rate in AI perhaps? Echo chambers are a common consequence, so I think the optimal ratio of preaching to the choir is something like 0.8-0.9 rather than 1. In fact, I personally prefer the /allPosts suburl over the LW frontpage because the first few votes result in a feedback loop of engagement and upvotes (forming a temporary consensus on which new posts are better, in a way which seems unfairly weighted towards the first few votes). If the posts chosen for the frontpage used the ratio of upvotes to downvotes rather than the absolute amount, then I don't think this bias would occur (conformity might still create a weak feedback loop though).
I'm simplifying some of these dynamics though.
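A minimal sketch of the difference I mean, assuming a toy vote model (purely my own illustration, not LessWrong's actual ranking code):

```python
# Toy comparison of absolute-score ranking vs ratio-based ranking for new posts.
# The post data and scoring are invented for illustration only.

def rank_by_total(posts):
    # Early exposure dominates: the post seen first accumulates the largest
    # absolute score, which feeds back into more visibility.
    return sorted(posts, key=lambda p: p["up"] - p["down"], reverse=True)

def rank_by_ratio(posts):
    # The up/down ratio is less sensitive to how many people happened to
    # see the post early on.
    return sorted(posts, key=lambda p: p["up"] / max(1, p["up"] + p["down"]),
                  reverse=True)

posts = [
    {"title": "seen early", "up": 30, "down": 10},  # 75% positive
    {"title": "seen late",  "up": 9,  "down": 1},   # 90% positive
]

print([p["title"] for p in rank_by_total(posts)])  # ['seen early', 'seen late']
print([p["title"] for p in rank_by_ratio(posts)])  # ['seen late', 'seen early']
```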
I worded that a bit badly; I meant I had a hard time thinking of better (meaning kinder) explanations, not better (meaning more likely) explanations. Across all websites I've been on in my life, I have posted more than 100,000 comments (resulting in many interactions), so while things like psychoanalyzing people, assuming intentions, and stereotyping are "bad", I simply have too much training data, and too few incorrect guesses, not to do this. I do, however, intentionally overestimate people (since I want to talk to intelligent people, I give people the benefit of the doubt for as long as possible), but this means that mistakes are attributed to their intentions, personality or values, rather than to carelessness or superficial heuristics. In this situation, I've assumed that they're offended by the idea that traditional societies rival the scientific method in some situations. But it may be something more superficial like "I find short comments to be effortless", "somebody else already said that" or "I didn't understand your explanation and I consider it your fault". But like I said in another comment, I remember the first downvotes being disagreements (red X) rather than regular downvotes, so I took them as meaning "this is wrong" rather than "I don't like this comment". Not that any of this matters very much, admittedly
That makes sense, I just evaluated the comment in isolation. But I believe that the first few downvotes were "incorrect" votes (the red X) rather than regular downvotes (down arrow), which is why the feedback occurred to me as simply mistaken (as the comment is not false).
I've noticed, by the way, that most comments posted tend to get downvoted initially and then return to 0 over time. There may be a few regular, highly active users with high standards or something, and more casual users with lower standards who balance them out over time. I've gone to -10 and back before.
I don't think good and evil are objectively real as moral terms, but if something makes us select against certain behaviour, it may be because said behaviour results in organisms deleting themselves from existence. So "evil" actually means "unsustainable". But this makes it situational (your sustainable expenditure depends on your income, for instance, so spending $100 cannot be objectively good or evil).
Moral judgments vary between individuals, cultures and societies
Yes, and which actions result in you not existing will also vary. There's no universal morality for the same reason that there's no universal "best food" or "most fitting zoo enclosure", for "best" cannot exist on its own. Calling something "best" is a kind of shortcut, there's implicit things being referred to.
What's the best move in Tetris? The correct answer depends on the game state. When you're looking for "objectively correct universal moral rules" you might also be throwing away the game state on which the answer depends.
I'd go as far as to say that all situations where people are looking for universal solutions are mistaken, as there may (necessarily? I'm not sure) exist many local solutions which are objectively better in the smaller scope. For instance, you cannot design a tool which is the best tool for fixing any machine, instead you will have to create 100s of tools which are the best for each part of each machine. So hammers, saws, wrenches, etc. exist and you cannot unify all of them to get something which is objectively better than any of them in any situation. But does this imply that tools are not objective? Does it not rather imply that good is a function taking at least two inputs (tool, object) and outputting a value based on the relation between the two? (a third input could be context, i.e. water is good for me in the context that I'm thirsty).
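If it helps, here's a toy sketch of what I mean by "good" being a function of several inputs rather than a property of the tool alone (all names and scores below are made up purely for illustration):

```python
# Toy illustration: "goodness" as a relation between tool, task and context,
# not an attribute of the tool by itself. The table below is invented.

def goodness(tool: str, task: str, context: str) -> float:
    scores = {
        ("hammer", "drive nail", "carpentry"): 0.9,
        ("hammer", "cut plank",  "carpentry"): 0.05,
        ("saw",    "cut plank",  "carpentry"): 0.9,
        ("water",  "drink",      "thirsty"):   1.0,
        ("water",  "drink",      "drowning"):  0.0,
    }
    return scores.get((tool, task, context), 0.0)

# The same tool or object gets a different value depending on the other inputs:
print(goodness("hammer", "drive nail", "carpentry"))  # 0.9
print(goodness("hammer", "cut plank", "carpentry"))   # 0.05
print(goodness("water", "drink", "thirsty"))          # 1.0
print(goodness("water", "drink", "drowning"))         # 0.0
```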
If my take is right, then like 80% of all philosophical problems turn out to be nonsense. In other words, most unsolved problems might be due to flawed questions. I'm fairly certain in this take, but I don't know if it's obvious or profound.
Yeah I'm asking because downvotes are far too ambiguous. I think they're ambiguous to the point that they don't make for useful feedback (You can't update a worldview for the better if you don't know what's wrong with it). I don't think downvotes are necessarily bad as a concept though. And about humanity - sure, and on any other website I'd largely have agreed with your view, but when I talk about intellectual things I largely push my own humanity to the side. And even if somebody downvotes because of irrational feelings, I'm interested in what those feelings are.
But I know that people on here frequently value truth, and I'm quite brutal to those values as I think truth is about as valid of a concept as a semicolon (the language is just math/logic rather than English). And if we are to talk about Truth with a capital T, then we're speaking about reality, which is more fundamental than language (the territory, reality, is important. But I rarely see any good maps, even on this website. So when Taoists seem to suggest throwing the map away entirely, I do think that's a good idea for everyday life. It's only for science, research and tech that I value maps). That makes me an outlier though, haha.
I'm curious why you were downvoted, for you hit the nail on the head. For a short and concise answer, yours is the best.
Does anyone know? Otherwise I will just assume that they're rationalists who dislike (and look down on) traditional/old things for moral reasons. This is not very flattering of me but I can't think of better explanations.
Ancient wisdom is not scientific, and it might even be false, but the benefits are very real, and these benefits sort of work to make the wisdom true.
The best example I can give is placebo, the belief that something is true helps make it true, so even if it's not true, you get the benefits of it being true. The special trait ancient wisdom has is this: The outcome is influenced by your belief in the outcome. This tends to be true for psychological things, and advice like "Belief can move mountains" is entirely true in the psychological realm. But scientific people, who deal with reality, tend to reject all of this and consider it as nonsense, as the problems they're used to aren't influenced by belief.
Another case in which belief matters includes treating things with weight/respect/sacredness/divinity. These things are just human constructs, but they have very real benefits. Of course, you can be an obnoxious atheist and break these illusions all you want, but the consequences of doing this will be nihilism. Why? Because treating things as if they have weight is what gives them weight, and nihilism is basically the lack of perceived weight. There's nothing objectively valid about filial piety, but it does have benefits, and acting as if it's something special makes it so.
Ancient wisdom often gets the conclusions right, but gets the explanations wrong, and this is likely in order to make people take the conclusions seriously. Meditation has been shown to be good for you. Are you feeling "Ki" or does your body just feel warm when you concentrate on it? Do you become "one with everything" or does your perception just discard duality for a moment? Do you "meet god" or do you merely experience a peace of mind as you let go of resistance? The true answer is the boring one, but the fantastical explanation helps make these ideas more contagious, and it's likely that the false explanations have stuck around because they're stronger memetically.
Ancient wisdom has one advantage that modern science does not: It can deal with things which are beyond our understanding. The opposite is dangerous: If you reject something just because you don't understand why it might be good (or because the people who like it aren't intellectual enough to defend it), then you're being rational in the map rather than in the territory. Maybe the thing you're dismissing is actually good for reasons that we won't understand for another 20 years.
You can compare this with money, money is "real but not real" in a similar way. And this all generalizes far beyond my examples, but the main benefits are found, like I said, in everything human (psychological and spiritual) and in areas in which the consensus has an incomplete map. I believe that nature has its own intelligence in a way, and that we tend to underestimate it.
Edit: Downvotes came fast. Surely I wrote enough that I've made it very easy to attack my position? This topic is interesting and holds a lot of utility, so feel free to reply.
While you could format questions in such a way that you can divide them into A and B in a sensible manner, my usual reaction to thought experiments which seem to make naive assumptions about reality is that the topic isn't understood very deeply. The problem with looking at the surface (and this is mainly why average people don't hold valuable opinions) is that people conclude that solar panels, windmills and electric cars are 100% "green", without taking into account the production and recycling of these things. Many people think that charging stations for electric cars are green, but they don't see the coal power plant which supplies power to the charging station. In other words "Does X solution actually work?" is never asked. Society often acts like me when I'm being neurotic. When I say "I will clean my house next week" I allow my house to stay messy while also helping myself to forget the matter for now. But this is exactly like saying "We plan to be carbon neutral by 2040" and then doing nothing for another 5 years.
And yes, that does clarify things!
- Valid, but knowing what's important might require understanding the problem in the first place. A lot of people want you to think that the thing they're yelling about is really important.
- Then the axioms do not account for a lot of controversial subjects. I think the abortion debate also depends on definitions: "At how many weeks can the child be said to be alive?" "When is it your own body and when is it another body living inside you?"
- I'm afraid it doesn't. I believe that morality has little to no correlation with intelligence, and that truth has little to do with morality. I'd go as far as to say that morality is one of the biases that people have, but you could call these "values" instead of biases.
To actually answer your question, I think understanding human nature and the people you're speaking to is helpful. Also the advantages of pushing certain beliefs, and the advantages of holding certain beliefs.
If somebody grew up with really strict parents, they might value freedom, whereas somebody who lacked guidance might recognize the danger of doing whatever one feels like doing. And whether somebody leans left or right economically seems influenced by their own perceived ability to support themselves. One's level of pity for others seems to be influenced by one's own confidence, since there's a tendency to project one's own level of perceived fragility.
If you could measure a group's biases perfectly, then you could subtract them from the position they hold. If there are strong reasons to lean towards X, but X is only winning by a little bit, then X might not be true. You can often also use reason to find inconsistencies. I'd go as far as saying that inconsistencies are obvious everywhere unless you unconsciously try to avoid seeing them. Discrimination based on inherent traits is wrong, but it's socially acceptable to make fun of stupid people, ugly people, short people and weirdos? The real rule is obviously closer to something like "Discrimination is only acceptable towards those who are either perceived to be strong enough to handle it, or those who are deemed to be immoral in general". If you think about it enough, you will likely find that most things people say are lies. There's also some who have settled on "It's all social status games and signaling" which is probably just another way of looking at the same thing. Speaking of thinking, if you start to deconstruct morality and question it, you might put yourself out of sync with other people permanently, so you've been warned.
But the best advice I can give is likely just to read the 10 or so strongest arguments you can find on both sides of the issue and then judge for yourself. If you can't trust your own judgement, then you likely also can't trust your own judgement about who you can trust to judge for you. And if you can judge this comment of mine, then you can likely judge people's takes on things in general, and if you can't judge this comment of mine, then you won't be able to judge the advice you get about judging advice, and you're stuck in a sort of loop.
I'm sometimes busy for a day or two, I don't think I will have longer delays in replying than that
I misread a small bit, but I still stand by my answer. It is however still unclear to me if you value truth or not. You mention moral frameworks and opinions, but also sound like you want to get rid of biases? I think these conflict.
I guess I should give examples to show how I think:
- Suppose that climate change is real, but that the proposed causes and solutions are wrong. Or that for some problem X, people call for solution Y, but you expect that Y will actually only make X worse (or be a pretend-solution which gives people a false sense of security and which is only adopted because it signals virtue)
- Suppose that X is slightly bad, but not really worth bothering about; however, team A thinks that X is terrible and team B thinks that X is the best thing ever.
- Suppose that something is entirely up to definition, such that truth doesn't matter (for instance, if X is a mental illness or not). Also, suppose that whatever definition you choose will be perceived as either hatred or support.
- I don't think it's good to get any opinions from the general population. If actual intelligent people are discussing an issue, they will likely have more nuanced takes than both the general population and the media.
- Let's say that X personality trait is positively correlated with both intelligence and sexual deviancy. One side argues that this makes them good, another side argues that this makes them bad. Not only is this subjective, people would be confusing the utilitarian "good/bad" with the moral "good/bad" (easy example: Breaking a leg is bad, but having a broken leg does not make you a bad person).
I think being rational/unbiased results in breaking away from society's opinions about almost everything. I also think that being biased is to be human. The least biased of all is reality itself, and a lot of people seem really keen on fixing/correcting reality to be more moral. In my worldview, a lot of things stop making sense, so I don't bother with them, and I wonder why other people are so bothered by so many things.
I might be unable to respond for a little while myself, sorry about that
I think it's often the case that neither A nor B is true. Common opinions are shallow, often simplified and exaggerated, or even entirely beside the point.
Now, you're asking what a good way to form opinions is, well, it depends on what you want.
Do you want to know which side you should vote for to bring the future towards the state that you want?
Do you want to figure out which side is the most correct?
Do you want to figure out the actual truth behind the political issue?
Do you want to hold an opinion which won't disrupt your social life too much or make you unpopular?
I expect that these four will bring you to different answers.
(While I think I understand the problem well, I can't promise that I have a good solution. Besides, it's subjective. Since the topic is controversial, any answer I give will be influenced by the very biases that we're potentially interested in avoiding)
By the way, personally, I don't care much what foreign actors (or team A and B) have to say about anything, so it's not a factor which makes a difference to me.
Edit: I should probably have submitted this as a comment and not an answer. Oh well, I will think up an answer if you respond.
Most of my learning took place in my head, causing it to be isolated from other senses, so that's likely one of the reasons. In some of the examples I know of people forgetting other things, they did things like learning 2000 digits of pi in 3 days, which is exactly something which doesn't really connect to anything else. So you're likely correct (at least, I don't know enough instances of forgetting to make any counter-arguments)
most of it isn't really useful at helping you address the problems that you're facing
This is a rather commonly known technique, but you can work backwards from the problems, learning everything related to them, rather than learning a lot and hoping that you can solve whatever problems might appear.
What I personally did, which might have been unhealthy, was wanting to fully understand what I was working with in general. So I'd always throw myself at material 5 years of studies above what I currently understood. When introduced to the Bayes chain rule, I started looking into the nature of chain rules, wanting to know how many existed across mathematics and if they were connected with one another. Doing things like this isn't always a waste of time, though, sometimes you really can skip ahead. If you Google summaries of about 100 different books written by people who are experts in their fields or highly intelligent in general, you will gain a lot of insights into things.
I have the same problem. I think my non-verbal IQ might be about 45 points above my verbal IQ, so that could be a factor. I also think mostly in concepts, since I'm afraid that thinking in words would blind me to insights which do not yet have words to describe them.
But translation from "idea in my mind" to "words that others can understand" is hard. I hear that information in the mind has a relational (mindmap) structure, while writing is linear and left-to-right. So the data structures are quite different.
I'm autistic, which harms my ability to communicate. I also tend to create my own vocabulary, and to use grammar in a mathematical sense. I might add "un-" or "-izable" affixes to words which shouldn't have them, or use set-builder notation in my personal notes, even if they contain no mathematics at all. This causes me to have my own efficient symbolic language which is incompatible with other peoples models/associations/tokens.
There's two other things I try to avoid:
1: Subvocalization (it slows me down)
2: Explaining things to myself. I know what I mean, always. If I catch myself thinking to myself as if other people were listening, I stop. Is this a natural habit meant to improve communication, or caused by trauma and fear of being misunderstood (like imagining social scenarios while in the shower)? For I imagine that it causes a dramatic reduction in thinking speed, even if you get the benefits of rubberducking.
In short, I'm guessing that people with high verbal intelligence, and those who tend to think purely in words don't have much difficulty writing. I don't have any contrary evidence in any of my memories, so I will believe this for now
I will not argue that any of what you said is wrong, because I don't believe it is, but I've personally found that learning too much too fast makes me sick. "Rapid" learning may be fine, but anything faster could have serious trade-offs. Consuming and digesting food is similar enough to consuming and digesting knowledge that many intuitions carry over (like the nausea from overeating, or getting tired of eating the same thing for too long, etc).
When cramming for exams, I would sometimes go through 4000 pages in about two weeks, and it would result in a sort of confusion and nausea, and I'd have lots of loose ends and scattered thoughts floating around. Now, I didn't always do the exercises like I should, and my learning was more theoretical than practical, so it may just be that I didn't finish anything before moving on to the next part, leaving my knowledge unsolidified. So "sort of understanding" is definitely not a good stage to stop at, that's my mistake, and most people here probably know better than to do that.
However, I've heard of people trying really hard to study or remember something for hours a day, and then forgetting other important events going back like two weeks. Like memories of last weekend just disappearing and such. I'm not sure if older knowledge is at risk (if you can accidentally erase important things if you're too aggressive in your learning).
Maybe some people on here have stories to share? Not that it's likely. You need to be really low in conscientiousness to be as unstructured as I am, and to have a messy desk, a messy house, messy notes, and be obsessed enough with something that you forget to eat, or forget if it's currently morning or evening. And people like 'us' don't fit in here since we avoid "tedious" things, leading to messy and informal writing, and leading us to avoid knowledge which doesn't interest us but which is relevant if one wants to write an article on a subject. Perhaps med students have enough of a workload to understand the consequences of excessive learning, but I don't know how many of them use this site.
Apologies if your areas of interest don't extend to what I'm discussing here.
I thought this was quite obvious. It's why skin exists, and why internal bleeding is bad, and it's also why borders are a good idea. That last statement will probably make a lot of people angry with me, but that's because a moral ideal clashes with reality. I'm personally on the side of reality, since I know the consequences of opposing it.
There's a social hierarchy which looks a bit like the following:
Self > Family > Friends > Community > Country > Nation > World
And these layers of separation protect the inside against negative change or entropy or whatever. This website also has its own border which I consider a good thing (as much as society wants to convince me that gatekeeping is a sin).
What I hope society will realize soon is that dissolving these boundaries can have terrible consequences. I do include top-down moderation in that (as inheriting rules from upper layers interferes with the agency of lower layers). For instance, if this website has to censor certain information because an authority 2 or 3 levels further up in the hierarchy demands it, then this website will lose some of its agency and thus its individuality (uniqueness/difference). I consider it a danger when structures impose moral criteria on entities further down the hierarchy. "Thought crimes" are one example, but another is when Mojang says "You cannot allow swearing on your Minecraft realm server" or when Google says "Your website has offensive content, we need to delist it", or when the world tells Japan "Strong borders are immoral".
Now, you could make a case for the upper layers rejecting something inside them for a good reason (like the body expelling harmful things). For instance, society might tell parents "You can generally parent how you want, but violence against children is not allowed". But in this instance, we could simply say "Because they would be compromising the agency of children by harming them".
Interference seems to be harmful in general though. By which I mean this seems like an axiomatic truth. The best teacher tries to aid the growth of students, but they do not try to control the direction of growth. Being controlling in relationships is also harmful. I've heard this described as "Don't mess with other people's destiny". There are also Daoist ideals which say not to interfere. And what's the best advice for people with social anxiety? "Just be yourself". In short, people screw up when they try to control themselves too much rather than letting go and letting things flow naturally. It's like everything in life evolves in a good direction by itself as long as you don't mess with it. Even some meditation techniques can be described as "Shut up and just listen/observe". Tyrannies impose excess control. Communism might have failed because of similar issues with control vs letting people (or complex systems) organize themselves.
TL;DR: Interference might be, in a mathematical sense, harmful, because it prevents... something akin to self-organization/self-assembly. (Or perhaps because "inheriting" restrictions from every upper layer cuts off too many possibilities for the lower layers. Like when the educational system fails to foster each student's individuality because it only fosters what students have in common. (∪ vs ∩)). And somehow, borders (agency) seem to help against this problem.
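A small illustration of that "inherited restrictions" point (my own toy example, with made-up option sets): if each layer above you contributes a set of allowed options, then inheriting every layer's rules leaves only the intersection, which can only shrink as layers are added.

```python
# Toy example: each layer allows a set of options; a lower layer that inherits
# every upper layer's restrictions is left with the intersection of them all.

world    = {"a", "b", "c", "d", "e", "f"}
country  = {"a", "b", "c", "d"}
platform = {"b", "c", "d"}
site     = {"c", "d", "e"}

allowed = world & country & platform & site
print(allowed)  # {'c', 'd'} -- each added layer can only narrow the space further
```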
I think it's necessarily true given the statistical distribution of things. If I say "There are necessarily fewer people with PhDs than with master's degrees, and fewer with master's degrees than college graduates" you'd probably agree.
The theory that "If you understand something, you can explain it simply" is mostly true, but this does not make it easy to understand, as simplicity is not ease (Just try to explain enlightenment / the map-territory distinction to a stupid person). What you understand will seem trivial to you, and what you don't understand will seem difficult. This is just the mental representation of things getting more efficient and us building mental shortcuts for things and getting used to patterns.
Proof: There's people who understand high level mathematics, so they must be able to explain these concepts simply. In theory, they should be able to write a book of these simple concepts, which even 4th graders can read. Thus, we should already have plenty of 4th graders who understand high level mathematics. But this is not the case, most 4th graders are still 10 years of education away from understanding things on a high level. Ergo, either the initial claim (that what you understand can be explained simply) is false, or else "explained simply" does not imply "understood easily"
The excessive humility is a kind of signaling or defense mechanism against criticism and excessive expectations from other people, and it's rewarded because of its moralistic nature. It's not true, it's mainly pleasant-sounding nonsense originating in herd morality.
Right, I agree with that.
A right shift by 2SDs would make people like Hawking, Einstein, Tesla, etc. about 100 times more common, and make it so that a few people who are 1-2SDs above these people are likely to appear soon. I think this is sufficient, but I don't know enough about human intelligence to guarantee it.
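As a rough sanity check on that figure (my own back-of-the-envelope calculation, treating "Einstein-level" as roughly +3SD on today's distribution, which is an assumption):

```python
# Back-of-the-envelope check using only the standard library.
# Assumes "genius-level" means roughly +3SD on today's distribution.
from math import erfc, sqrt

def tail(z):
    # P(X > z) for a standard normal variable
    return 0.5 * erfc(z / sqrt(2))

before = tail(3.0)     # fraction above +3SD today                 (~0.00135)
after  = tail(1.0)     # fraction above the same absolute bar
                       # after the whole curve shifts right by 2SD (~0.159)
print(after / before)  # ~117, i.e. on the order of 100x more common
```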
I think it depends on how the SD is increased. If you "merely" create a 150-IQ person with a 20-item working memory, or with an 8SD processing speed, this may not be enough to understand the problem and to solve it. Of course, you can substitute with verbal intelligence, which I think a lot of mathematicians do. I can't rotate 5D objects in my head, but I can write equations on paper which can rotate 5D objects and get the right answer. I think this is how mathematics is progressing past what we can intuitively understand. Of course, if your non-verbal intelligence can keep up, you're much better off, since you can combine any insights from any area of life and get something new out of it.
You're correct that the average IQ could be increased in various ways, and that increasing the minimum IQ of the population wouldn't help us here. I was imagining shifting the entire normal distribution two SDs to the right, so that those who are already +4-5SDs would become +5-7SDs.
As far as I'm concerned, the progress of humanity stands on the shoulders of giants, and the bottom 99.999% aren't making much of a difference.
The threshold for recursive self-improvement in humans, if one exists, is quite high. Perhaps if somebody like von Neumann lived today it would be possible. By the way, most of the people who look into nootropics, meditation and other such things do so because they're not functional, so in a way it's a bit like asking "Why are there so many sick people in hospitals if it's a place for recovery?", though you could make the argument that geniuses would be doing these things if they worked.
My score on IQ tests has increased about 15 points since I was 18, but it's hard to say if I succeeded in increasing my intelligence or if it's just a result of improving my mental health and actually putting a bit of effort into my life. I still think that very high levels of concentration and effort can force the brain to reconstruct itself, but that this process is so unpleasant that people stop doing it once they're good enough (for instance, most people can't read all that fast, despite reading texts for 1000s of hours. But if they spend just a few weeks practicing, they can improve their reading speed by a lot, so this kind of shows how improvement stops once you stop applying pressure)
By the way, I don't know much about neurons. It could be that 4-5SD people are much harder to improve since the ratio of better states to worse states is much lower
Gwern and Scott are great writers, which is different from writing great things. It's like high-purity silver rather than rough gold, if that makes sense.
I do think they write a lot of great things, but not excellent things. Posts like "Maybe Your Zoloft Stopped Working Because A Liver Fluke Tried To Turn Your Nth-Great-Grandmother Into A Zombie" are probably around the limit of how difficult of an idea somebody can communicate while retaining some level of popularity. Somebody wanting to communicate ideas one or two standard deviations above this would find themselves in obscurity. I think there's more intelligent people out there sharing ideas which don't really reach anyone. Of course, it's hard for me to provide examples, as obscure things are hard to find, and I won't be able to prove that said ideas are good, for if they were easy to recognize as such, then they'd already be popular. And once you get abstract enough, the things you say will basically be indistinguishable from nonsense to anyone below a certain threshold of intelligence.
Of course, it may just be that high levels of abstraction aren't useful, leading intelligent people towards width and expertise with the mundane, rather than rabbit holes. Or it may be that people give up attempting to communicate certain concepts in language, and just make the attempt at showing them instead.
I saw a biologist on here comparing people to fire (as chemical processes) and immediately found the idea familiar as I had made the same connection myself before. To most people, it probably seems like a weird idea?
Short note: We don't need 7SDs to get 7SDs.
If we could increase the average IQ by 2SDs, then we'd have lots of intelligent people looking into intelligence enhancement. In short, intelligence feeds into itself, it might be possible to start the AGI explosion in humans.
Generally, some of the ideas here are still potentially useful, they just don't get you any guarantees.
When I say "There's nothing you can do about journalists screwing you over" I mean it like "There's nothing you can do about the police screwing you over". In 90% of cases, you probably won't be screwed over, but the distribution of power makes it easy for them to make things difficult for you if they hate you enough. Another example is "Unprotected Wi-Fi isn't secure": you can use McDonald's internet for your online banking for years without being hacked, so in practice you're only a little insecure, but the statement "It's insecure" just means "Whether or not you're safe no longer depends on yourself, but on other people's intentions".
From this perspective, I'm warning against something which may not even happen. But it's merely because a bad actor could exploit these attack vectors. I'm also speaking very generally, in a larger scope than just Lesswrong users talking to journalists. This probably adds to the feeling of our conversations being disconnected.
But I will have to disagree about nobody being naive. When two entities interact, and one of the entities is barely making an effort in pleasing the other party, it's because of a difference in power. A small company may go out of its way to help you if you call its customer support line, whereas even getting in touch with a website like Facebook (unless it's through the police) is genuinely hard.
The content says "Journalists exist to help us understand the world. But if you are a journalist, you have to be good enough to deserve the name" Which seems to mean "If you're going to trade, you need to provide something of value yourself, like offering a service". I think this is true for journalists as individuals, but not for companies which employ journalists. If these people won't treat you with respect, it's because they don't have to, and arguing with them is entirely pointless, even if you're right. Nothing but power will guarantee a difference, and if a journalist treats you kindly it's probably because they have integrity (which is one of the forces capable of resisting Moloch).
Repeating myself a bit here, but hopefully I made my position clearer in the process.
You seem to dislike reality. Could it not be that the worldview which clashes with reality is wrong (or rather, in the wrong), rather than reality being wrong/in the wrong? For instance that "nothing is forever" isn't a design flaw, but one of the required properties that a universe must have in order to support life?
Right, a constraint is power. This constraint is actually the most important. In case of a power imbalance though, there's nothing the weaker party can really do but to rely on the good-will of the other party. It's their choice how things work out, to the extent that the game board favors them.
If the journalist isn't too powerful, and if they benefit from listening to you, and they're not entirely obsessed about pushing a narrative which goes against your interests or knowledge, then things are favorable and more likely to turn out well.
My argument is that we can consider these things (power difference, alignment of views, the good/bad faith of the journalist in question, etc) as parameters, and that the outcome depends entirely on these parameters, and not on the things that we pretend to be important.
Is it, for instance, good advice to say "Word yourself carefully so that you cannot be misinterpreted"? For how much effort Jordan Peterson put into this, it didn't do much to help his reputation.
Reputation, power and interests matter, they are the real factors. Things like honesty, truthfulness, competence and morality are the things that we pretend matter, and it's even a rule that we must pretend they matter, as breaking the fourth wall (as I'm doing here) is considered bad taste. But the pretend-game gets in the way of thinking clearly. And I think this "advice for journalists" post was submitted in the first place because somebody noticed that the game being played didn't align with what it was "supposed" to be. The reason they noticed is because journalists aren't putting much effort into their deception anymore, which is because the balance of power has been skewed so much
If it's not about truth value, then it's not about misinformation. It's more about manipulation and the harmfulness of certain information, no?
My point is about the imperfections/limitations of language. If I say "the vaccine is safe", how safe does it have to be for my statement to be true? Is a one-in-a-million risk a proof by contradiction, or is it evidence of safety? Where's the cut-off for 'safety'?
I do think fighting "bad-faith manipulation" is doable at times, but I don't think you can label anything as being true/false for certain.
Another point, which I should have mentioned earlier, is that removing false information can be harmful. Better to let it stay along with the counter-arguments which are posted, so that observers can read both sides and judge for themselves. Believing in something false is a human right. Imagine, for instance, if believing (or not believing) in God was actually illegal
Why does it confuse you? The attention something gets doesn't depend strongly on its quality, but on how accessible it is.
If I get a lot of karma/upvotes/thumbs/hearts/whatever online, then I feel bad, because it likely means I wrote something poor.
My best comments are usually ignored, with the occasional reply from somebody who misunderstands me entirely, and the even rarer event that somebody understands me (this type is usually so aligned with what I wrote that they have nothing to add).
The nature of the normal distribution makes it so that popularity and quality never correlate very strongly. This is discouraging to people who do their best in some field with the hope that they will be recognized for it. I've seen many artists troubled by this as well, everything they consider a "masterpiece" is somewhat obscure, while most popular things go against their taste. An example that many can agree with is probably pop music, but I don't think any examples exists which more than 50% of people agree with, because then said example wouldn't exist in the first place.
But things like that happen all the time, and most things that people know about most topics are superficial, meaning that they've only heard the accusations, and that they're only going to encounter the correction if they care to have a conversation about the topic. If the topic is politically biased, and these people spend time in politically biased communities, then it's unlikely that anyone is going to show them the evidence that they're wrong. You're not incorrect, but think about the ratio of rationalists to non-rationalists. The reach of the media vs the amount of people who will bother to correct people who don't know the full story.
It would also be easy for the website in question to say "You've been accused of doing X, which is bad. We don't tolerate bad behaviour on our platform" and ban you before you get to defend yourself. If the misunderstanding is bad enough, online websites can simply decide that even talking about you, or "defending you", is a sign of bad behaviour (I think this sort of happened to Kanye West because he had a manic episode in which he communicated things which are hard to understand and easy to misunderstand)
we could collectively keep one wiki page containing all of this
There's a Wikipedia page on "Gamergate", written largely by people who don't know what happened. And there's a "Gamergate Wiki" with tons of information (44 pages) with every detail documented in chronological order. I want to ask you two questions about this Wiki with the "other side of the story":
1: Have you ever heard of it?
2: Can you even find it? (the only link I have myself is an archived page)