Comment by viliam on Ask LW: Have you read Yudkowsky's AI to Zombie book? · 2019-03-18T22:00:27.421Z · score: 12 (3 votes) · LW · GW

I read the website before the book existed. Actually, I argued that it should be turned into a book, because books in general have higher status than websites. Then I read the book, and translated it into Slovak.

My opinion on reading the comments is... they are interesting, but the added value per minute spent is significantly lower than reading the book. (Some of the comments are awesome, but most are not, and there are a lot of them.) Thus, if you have anything useful to do, reading the comments after you have read the book is probably a waste of time. (Perhaps, if you have specific questions or objections to specific chapters, you should only read the comments in those chapters.)

Your time would probably be better spent reading high-karma articles which are not part of the book (is there a way to see the highest-karma articles? if not, look here), and... you know, going outside and actually doing things.

Comment by viliam on The Impossibility of the Intelligence Explosion · 2019-03-18T21:44:03.530Z · score: 16 (3 votes) · LW · GW
> there is no intelligent algorithm that outperforms random search on most problems; this profound result is called the No Free Lunch Theorem.

I am not familiar with the context of this theorem, but I believe that this is a grave misinterpretation. From a brief reading, my impression is that the theorem says something like "you cannot find useful patterns in random data; and if you take all possible data, most of them are (Kolmogorov) random".

This is true, but it is relevant only for situations where any dataset is equally likely. Our physical universe does not seem to be that kind of place. (It is true that in a completely randomly behaving universe, intelligence would not be possible, because any action or belief would have the same chance of being right or wrong.)

When I think about superintelligent machines, I imagine ones that would outperform humans in this universe. The fact that they would be equally helpless in a universe of pure randomness doesn't seem relevant to me. Saying that an AI is not "truly intelligent" unless it can handle the impossible task of skillfully navigating completely random universes... that's trying to win a debate by using silly criteria.
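A toy sketch of my own (not from the post) of why the base rate of structure matters: on a patterned landscape, a search that exploits the pattern beats blind sampling, while on a landscape of independent random values no searcher can do better on average, which is all the theorem says. The functions and parameters below are illustrative assumptions.

```python
import random

def hill_climb(f, n_bits=16, steps=300, seed=0):
    """Greedy bit-flip search: exploits whatever regularity the landscape has."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    best = f(x)
    for _ in range(steps):
        i = rng.randrange(n_bits)
        x[i] ^= 1              # flip one bit
        value = f(x)
        if value >= best:
            best = value       # keep the change
        else:
            x[i] ^= 1          # undo the change
    return best

def random_search(f, n_bits=16, steps=300, seed=1):
    """Blind sampling: the baseline that No Free Lunch compares against."""
    rng = random.Random(seed)
    return max(f([rng.randint(0, 1) for _ in range(n_bits)]) for _ in range(steps))

# A structured objective: count the ones. Our universe is full of regularities like this.
structured = sum

# An unstructured objective: each point gets an independent pseudo-random value,
# a stand-in for a (Kolmogorov-)random landscape where no pattern can help any searcher.
def unstructured(x):
    return random.Random(hash(tuple(x))).random()

best_climbing = hill_climb(structured)    # climbs toward the all-ones optimum
best_sampling = random_search(structured)
```

On `structured`, the hill-climber wins; on `unstructured`, averaged over such landscapes, no method beats any other. The theorem is about the second case, and says nothing about the structured problems a machine in our universe would actually face.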

Comment by viliam on Has "politics is the mind-killer" been a mind-killer? · 2019-03-18T21:19:17.564Z · score: 4 (2 votes) · LW · GW

The effectiveness of truth and lying depends on the environment. For example, imagine a culture where political debates on TV would be immediately followed by impartial fact checking. Or a culture where politicians have to make predictions about future events ("I don't know" also counts as a valid prediction), and these are later publicly reviewed and evaluated. And, importantly, where the citizens actually care about the results. I suppose such an environment would bring more truth into politics.

But this is a chicken-and-egg problem, because changing the environment, that's kinda what politics is about. Also, there are many obvious counter-strategies, such as having loyal people do the "fact checking" in your tribe's favor. (For example, when a politician says something that is approximately correct, like saying that some number is 100, when in reality it is 96, it would be evaluated as "a correct approximation -> TRUE" when your side does it, or as "FALSE" when your opponent does it. You could evaluate opponent's metaphorical statements literally, but the other way round for your allies; etc.)

> Could someone be completely honest and still be effective?

That mostly depends on other people. Such as voters (whether they bother to check facts) and media (whether they report on the fact that your statements are more likely to be true). If instead the media decide to publish a completely made up story about you, and most readers accept the story uncritically, you are screwed.

(There are also ways to hurt 100% honest people without lying about them, such as making them publicly answer a question where the majority of the population believes a wrong answer and gets offended by hearing the correct one. "Is God real?")

Comment by viliam on Has "politics is the mind-killer" been a mind-killer? · 2019-03-17T21:54:29.280Z · score: 10 (6 votes) · LW · GW

I agree with your disclaimers that not all people go crazy when they start talking politics, and the predicted bad things do not always happen. Problem is, I already see how most people would react to a text saying that sometimes, some people go crazy when talking politics: "Meh, 'some people', that definitely doesn't apply to me. Now let me start screaming about why unconditionally supporting my faction is the most important thing ever, and why everyone who doesn't join us is inherently evil and deserves to die painfully." Or just keep inserting their political beliefs in every other discussion endlessly, because "hey, my political beliefs are rational (unlike political beliefs of those idiots who disagree with me), and this is a website about rationality, therefore it is important for people here to discuss and accept my political beliefs. If they disagree with me, they fail at rationality forever."

We tried to debate politics here; it usually failed. Apparently, believing in one's own rationality is not enough.

(There is also another way how political topics can destroy rational debate: they attract people who don't really care about the main topic of this website, but only came here to fight for a specific political belief.)

From my perspective, the main problem of "rationality vs politics" is that in a political fight, being transparent about your beliefs is usually not a winning strategy. (Saying "I am 80% sure I am right" is not going to bring masses to your side. Neither is replying to slogans and tweets by peer-reviewed articles full of numbers.) If you had a completely honest debate about politics, it would have to be done in private, because the participants would have to write things that could ruin their political careers if quoted publicly. (Imagine things like: "Yeah, I know that this specific important person in our party is a criminal, or that this specific popular argument is actually a lie, but I still support them because the future where they prevail seems like a lesser evil compared to the alternatives, for the following reasons: ...") So you get the multiplayer Prisoners' Dilemma with high motivation to defect, because breaking the rules of the game in favor of doing the right thing (which is how acting on a strong political belief feels from inside) seems like the right thing to do.

Comment by viliam on Privacy · 2019-03-17T16:33:11.067Z · score: 7 (3 votes) · LW · GW

LW karma obviously has its flaws, per Goodhart's law. It is used anyway, because the alternatives have other problems, and for the moment this seems like a reasonable trade-off.

The punishment for "heresies" is actually very mild. As long as one posts respected content in general, posting a "heretical" comment every now and then does not ruin their karma. (Compare to people having their lives changed dramatically because of one tweet.) The punishment accumulates mostly for people whose only purpose here is to post "heresies". Also, LW karma does not prevent anyone from posting "heresies" on a different website. Thus, people can keep positive LW karma even if their main topic is talking about how LW is fundamentally wrong, as long as they avoid being annoying (for example, by posting a hundred LW-critical posts on their personal website, posting a short summary with hyperlinks on LW, and afterwards using LW mostly to debate other topics).

Blackmail typically attacks you in real life, i.e. you can't limit the scope of impact. If losing an online account on a website X would be the worst possible outcome of one's behavior at the website X, life would be easy. (You would only need to keep your accounts on different websites separated from each other.) It was already mentioned somewhere in this debate that blackmail often uses the difference between norms in different communities, i.e. that your local-norm-following behavior in one context can be local-norm-breaking in another context. This is quite unlike LW karma.

Comment by viliam on Privacy · 2019-03-17T00:59:36.942Z · score: 31 (8 votes) · LW · GW
> If others know exactly what resources we have, they can and will take all of them.
Implication: the bad guys won; we have rule by gangsters, who aren't concerned with sustainable production, and just take as much stuff as possible in the short term.

To me this feels like Zvi is talking about some impersonal universal law of economics (whether such a law really exists is debatable), and you are making it about people ("the bad guys", "gangsters") and their intentions, like we could get a better outcome instead by simply replacing the government or something.

I see it as something similar to Moloch. If you have resources, it creates a temptation for others to try taking them. Nice people will resist the temptation... but in a prisoners' dilemma with a sufficient number of players, sooner or later someone will choose to defect, and it only takes one such person for you to get hurt. You can defend against an attempt to steal your resources, but the defense also costs you some resources. And perhaps... in the hypothetical state of perfect information... the only stable equilibrium is when you spend so much on defense that there is almost nothing left to steal from you.

And there is nothing special about the "bad guys" other than the fact that, statistically, they exist. Actually, if the hypothesis is correct, then... in the hypothetical state of perfect information... the bad guys would themselves end up in the very same situation, having to spend almost all successfully stolen resources to defend themselves against theft by other bad guys.

To defend yourself from ordinary thieves, you need the police. The police need some money to be able to do their job. But what prevents them from abusing their power to take more from you? So you have the government to protect you from the police, but the government also needs money to do its job, and it is also tempted to take more. In a democratic government, politicians compete against each other... and the good guy who doesn't want to take more of your money than he actually needs to do his job may be outcompeted by a bad guy who takes more of your resources and uses the surplus to defeat the good guy. Also, different countries expend resources on defending against each other. And you have corruption inside all organizations, including the government, the police, the army. The corruption costs resources, and so does fighting against it. It is a fractal of burning resources.

So... perhaps there is an economic law saying that this process continues until the available resources are exhausted (because otherwise, someone would be tempted to take some of the remaining resources, and then more resources would have to be spent to stop them). Unless there is some kind of "friction", such as people not knowing exactly how much money you have, or how exactly you would react if pushed further (where exactly your "now I have nothing to lose anymore" point is, when instead of providing the requested resources you start doing something undesired, even if doing so is likely to hurt you more); or when it becomes too difficult for the government to coordinate to take each available penny (because oversight and money extraction also have a cost). And making the situation more transparent reduces this "friction".

In this model, the difference between the "good guy" and the "bad guy" becomes smaller than you might expect, simply because the good guy still needs (your) resources to fight against the bad guy, so he can't leave you alone either.

Comment by viliam on Pedagogy as Struggle · 2019-02-16T23:57:52.130Z · score: 12 (5 votes) · LW · GW

The idea of "purposefully telling people incorrect information to make them learn even faster than by giving them correct information" feels like rationalization. I strongly doubt that people who claim to use this method actually bother measuring its efficiency. It is probably more like: "I gave them wrong information, some students came to the right conclusion anyway, which proves that I am a fantastic teacher, and other students came to a wrong conclusion, which proves that those students were stupid and unworthy of my time." Congratulations, now the teacher can do nothing wrong!

The goal of abstruse writing (if done intentionally, as opposed to merely lacking the skill to write clearly) is to avoid falsification. If my belief is never stated explicitly, and I only give you vague hints, you can never prove me wrong. Even if you guess correctly that I believe X, and then you write an argument about why X is false, I still have an option to deny believing X, and can in turn accuse you of strawmanning me (and being too stupid to understand the true depths of my thinking). If my writing becomes popular, I can let other people steelman my ideas, and then wisely smile and say "yes, that was a part of the deep wisdom I wanted to convey, but it goes even deeper than that", taking credit for their work and making them happy by doing so.

Comment by Viliam on [deleted post] 2019-02-12T22:46:02.391Z

Even people who don't believe in singularity?

Comment by viliam on The Case for a Bigger Audience · 2019-02-09T16:38:21.385Z · score: 26 (9 votes) · LW · GW

More articles, fewer comments per article -- perhaps these two are connected. ;)

In general, I agree that I would also prefer deeper debates below the articles, and more smart people to participate in them. However, I am afraid that the number of smart people on the internet is quite limited (perhaps more limited than even the most pessimistic of us would imagine), and they usually have other things to do with higher priority than commenting on LW.

Also, LW is no longer new and exciting -- the people who wanted to say something, often already said it; the people who would be attracted to LW probably already found it; the people able and willing to write high-quality content typically already have their personal blogs. Of course this does not stop the discussion here completely; it just slows it down.

Comment by viliam on What is learning? · 2019-02-08T20:50:38.375Z · score: 6 (3 votes) · LW · GW

Learning = changing in a way that allows you to solve (a certain class of) problems more efficiently (on average).

Not learning = either not changing, or changing in a way that does not make you more efficient at solving problems.

(Note: I am saying "on average", because... suppose your original algorithm for solving math problems is simply yelling "five!" regardless of the problem. Now you learn math, and it makes you better at solving math problems in general... but it makes you slower at solving those problems where "five" actually happens to be the correct answer.)
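The "on average" caveat can be made concrete with a toy sketch (the two agents and the uniform problem distribution are made up for illustration):

```python
import random

def yell_five(problem):
    """The original 'algorithm': answer 5 regardless of the problem."""
    return 5

def do_math(problem):
    """After learning: actually compute a + b. More work, but correct in general."""
    a, b = problem
    return a + b

rng = random.Random(0)
problems = [(rng.randint(0, 5), rng.randint(0, 5)) for _ in range(1000)]

accuracy_before = sum(yell_five(p) == sum(p) for p in problems) / len(problems)
accuracy_after = sum(do_math(p) == sum(p) for p in problems) / len(problems)
# Learning helps on average, even though on the problems whose answer really
# is 5, the untrained agent was already "right" (and faster).
```

So the change counts as learning because `accuracy_after` exceeds `accuracy_before` over the whole class of problems, not because it wins on every single instance.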

Comment by viliam on Thoughts on Ben Garfinkel's "How sure are we about this AI stuff?" · 2019-02-07T10:28:18.168Z · score: 4 (3 votes) · LW · GW

I feel it's like "A -> likely B" being an evidence for "B -> likely A"; generally true, but it could be either very strong or very weak evidence depending on the base rates of A and B.

Not having knowledgeable criticism against the position "2 + 2 = 4" is strong evidence, because many people are familiar with the statement and many use it in their life or work, so if it were wrong, someone would likely have already offered some solid criticism.

But for statements that are less known or less cared about, it becomes more likely that there are good arguments against them which no one has noticed yet, or which no one has bothered to write a solid paper about.
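The base-rate point can be written out with Bayes' rule (the probabilities below are illustrative numbers of my own choosing, not anything from the linked discussion):

```python
def posterior(p_b_given_a, p_b_given_not_a, p_a):
    """P(A|B) by Bayes' rule: how strongly observing B supports A."""
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1.0 - p_a)
    return p_b_given_a * p_a / p_b

# The same link "A -> likely B" (P(B|A) = 0.9) in two different worlds:
strong = posterior(0.9, 0.01, 0.5)   # B is rare without A  -> observing B almost settles A
weak = posterior(0.9, 0.5, 0.001)    # B is common anyway, A is rare -> B barely matters
```

With widely known statements, "someone would have criticized it by now" behaves like the first case; with obscure ones, like the second.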

Comment by viliam on X-risks are a tragedies of the commons · 2019-02-07T10:03:18.785Z · score: 4 (3 votes) · LW · GW

An important aspect is that people disagree about which (if any) X-risks are real.

That makes it quite different from the usual scenario, where people agree that the situation sucks but each of them has individual incentives to contribute to making it worse. Such a situation allows solutions like collectively agreeing to impose a penalty on people who make things worse (thus changing their individual incentive gradient). But if people disagree, imposing the penalty is politically impossible.

Comment by viliam on How to notice being mind-hacked · 2019-02-06T02:05:20.511Z · score: 2 (1 votes) · LW · GW

Another frequent feature of a mind hack is that suddenly there is an important authority which wasn't important before (probably because you were not even aware of its existence).

In the case of manipulation, the new authority would be your new guru, etc.

But in the case of healthy growth, for example if you start studying mathematics or something, it would be the experts in the given area.

Comment by viliam on How to notice being mind-hacked · 2019-02-06T01:59:33.283Z · score: 3 (2 votes) · LW · GW

It doesn't always have to be like this, but it seems to me that the process of conversion often includes installing some kind of threat. "If you stop following the rules, all these wonderful and friendly people will suddenly leave you alone, and also you will suffer horrible pain in hell." So the mind of a converted person automatically adds a feeling of danger to sinful thoughts.

The process of deconversion would then mean removing those threats. For example, by being gradually exposed to sinful thoughts and seeing that there is no horrible consequence. Realizing that you have close friends outside the religious community who won't leave you if you stop going to church, etc.

More generally: less freedom vs more freedom. (An atheist is free to pray or visit a church, they just see it as a waste of time. A religious person can skip praying or church, but it comes with a feeling of fear or guilt.)

Comment by viliam on What makes a good culture? · 2019-02-06T01:42:48.738Z · score: 6 (4 votes) · LW · GW

Seems to me that an important aspect of culture is how it organizes "zero-sum" games between its members. I am using scare quotes because a game which is zero-sum (or negative-sum) for its two active players can still generate positive or negative externalities for the rest of the tribe. And because some resources are scarce, and there will be a competition for them, it is nice when the energy of the competition can be channeled into some benefit for the rest of the tribe.

For example, in the hacker culture, one gains status by contributing quality code, so while coders compete for imaginary "most awesome coder" points, billions of people get free software. Or there are cultures where the traditional way to signal wealth is to donate stuff to other members. The opposite would be a culture where people signal wealth e.g. by wearing expensive watches. (Although it could be argued that this creates some positive externalities too, e.g. job opportunities for watchmakers.)

I am not sure about this, but I have a feeling that if you want to design a culture that is a nice place to live in, you should encourage pro-social activities as the recommended way to do costly signaling.

Sports are probably also an example of this, where people translate their desire to win (as individuals or teams) into entertainment for others. As opposed to e.g. street fighting, which would put the lives and property of others at risk. (But people are already aware that sports are "violence made harmless"; my suggestion is to focus on competition in the abstract, not only physical violence as its one specific form.)

Comment by viliam on My atheism story · 2019-02-05T14:21:00.530Z · score: 11 (5 votes) · LW · GW


Now be careful, and don't get killed by stupid people.

I notice some similarities between what you wrote, and what other people wrote about similar experiences. You focus on technical details that don't fit. It makes sense, of course, if the discussed text is supposed to be flawless. But it means that you are still at the beginning of the long way out of religion. You don't believe it, but you still kinda respect it. I mean, you consider those technical details worthy of your time and attention.

Imagine that we were discussing some other religion, e.g. Hinduism. And I would say that the 1234th word in the Whatever Veda could not be original, because it contains a consonant that didn't exist thousands of years ago. You would probably feel like "yeah, whatever, who cares about a consonant; the whole story about blue people with four arms leading armies of 10^10 monkeys from other planets is completely ridiculous!" At the end of the road, you may feel the same about the religion you grew up with. The technical details that now seem important to you will feel unimportant compared with the utter falseness of the whole thing.


I think that the important thing to see the big picture is reductionism. Like, let's not talk about the holy texts and evidence; instead tell me what your God is composed of. Is it built of atoms? Of something else, e.g. some mysterious "spiritual atoms"? When it becomes angry or happy, does it literally have such hormones in its bloodstream? When it thinks or remembers, are its "spiritual neurons" exchanging the "spiritual atoms"? Hey, I am not denying your God, I am actually eager to listen to your story about it... as long as you can focus on the technical details and keep making sense. I want to have a sufficiently good model of your God so that I could build one in my laboratory (given enough resources, hypothetically).

And here is when people jump to some bullshit. The Christian version is like "He is not made of ordinary matter; He is outside of the universe", and I am like: okay, let's talk about the non-ordinary matter that His non-ordinary neurons and non-ordinary brain are built of, in His reality-outside-the-universe. But, you know, to be able to think or feel, there needs to be some kind of metabolism -- even if it's a 13-dimensional metabolism built from dark matter -- right? Then the more sophisticated crap is like "but actually God is the most simple possible thing" or something like that, and I am like: dude, just read something about Kolmogorov complexity, and come back when you realize how ridiculous you sound.

Of course, such complicated dialogs only happen in my imagination :D because... in real life, when you start asking questions, the typical answer is just "this is all very mysterious stuff that humans like us can't even begin to understand", and it doesn't go far beyond that. Also "read these thousand books, they contain answers to all your questions" (spoiler: they don't; this is just an attempt to make you tired and give up).


For most people, however, religion is not about making sense. It is about belonging to a community. If they start doubting it, they will feel alone. Humans have a desire to associate with those who "believe" the same things. It is unfortunate that sometimes the fairy tales they associate around compel them to do horrible things...

Comment by viliam on Book Trilogy Review: Remembrance of Earth’s Past (The Three Body Problem) · 2019-01-31T23:45:45.879Z · score: 7 (4 votes) · LW · GW

I have read the trilogy, I enjoyed it a lot, and I have only two objections: the happy ending, and the lack of a serious effort to kill Luo Ji. The latter is especially weird coming from aliens who would have no problem exterminating half of the human population.

My impression is that the Three Body trilogy is essentially a universe-sized meditation on Moloch.

I am however completely surprised at your indignation at how the book depicts humans. Because I find it quite plausible, at least the parts about how "no good deed goes unpunished". Do we live in so different bubbles?

I see politicians gaining votes for populism, and losing votes for solving difficult problems. I see clickbait making tons of money, and scientists desperately fighting for funding. There was a guy who landed a rocket on a comet, or something like that, and then a mob of internet assholes brought him to tears because he had a tacky shirt. There are scientists who write books explaining psychometric research, and end up physically attacked and called Nazis. With humans like this, what is so implausible about a person who would literally save humanity from annihilation, being sentenced to death? Just imagine that it brings ad clicks or votes from idiots or whatever is the mob currency of the future, and that's all the incentive you need for this to happen.

As the beginning of the trilogy shows, we do not need to imagine a fictionally evil or fictionally stupid humanity to accomplish this. We just need to imagine exactly the same humanity that brought us the wonders of Nazism and Communism. The bell curve where the people on one end wear Che shirts and cry "but socialism has never been tried", and on the other end we have Noam "Pol Pot did nothing wrong" Chomsky in academia. Do you feel safe living on the same planet as these people? Do you trust them to handle the future x-threats in a sane way? I definitely don't.

The unrealistic part perhaps is that these future (realistically stupid and evil) people are too consistent, and have things too much under control. I would expect more randomness, e.g. one person who saves the world would be executed, but another would be celebrated, for some completely random reason unrelated to saving the world. Also, I would expect that despite the suicide pact being the official policy of humankind, some sufficiently powerful people would prepare an exit for themselves anyway. (But maybe the future has better surveillance which makes going against the official policy impossible.)

Comment by viliam on Does anti-malaria charity destroy the local anti-malaria industry? · 2019-01-06T21:59:34.609Z · score: 7 (5 votes) · LW · GW

After a brief reading, the conclusion seems to be: "At market prices, most people would not use the anti-malaria nets; this is empirically verified. Therefore, we provide the nets for free, and we give the nets instead of cash to buy the nets."

The obvious question is why are people unwilling to buy the nets?

Is there a rational reason, such as "the money is needed to prevent more immediate dangers, such as starvation"? Or is it an irrational one, such as underestimating the danger of malaria, not understanding how malaria spreads, or fatalism about diseases?

Comment by viliam on Does anti-malaria charity destroy the local anti-malaria industry? · 2019-01-06T21:45:23.672Z · score: 5 (3 votes) · LW · GW

I am skeptical about armchair Econ-101 reasoning unless it is also supported by empirical data. Many things can go wrong. (Also, it has a flavor of "map over territory".) For example:

  • The models are based on some assumptions, which is necessary to create models, but in real life the assumptions may be so wrong that it changes the outcome. The players are supposed to be 100% rational and all-knowing; the transactions are supposed to be completely friction-less; it is assumed that the market is the only game in town. So when this perfect market notices that e.g. there is an opportunity to sell more food, -- POOF! -- and there is instantly a new farm with food to sell. In reality, people may be slow to notice, risk-averse in the face of uncertainty, there may be tons of bribes or paperwork necessary to start a new farm, growing the food may require a lot of time, and if too many food producers happen to belong to a minority ethnicity it might result in their genocide. When Econ 101 says "there shall be a balance", it usually does not specify how long we have to wait for its coming: days, weeks, years, or centuries? ("The market can stay irrational longer than you can stay solvent." In case of Africa, longer than you and your clan can survive.)
  • It is easy to notice some relevant forces, and miss others. (The archetypal example.)
  • Seems to me that some armchair conclusions can be weakened or even reversed simply by reasoning "one level higher". Is increasing human capital a good thing? Sounds uncontroversial, ceteris paribus, but suppose I wave a magic wand and every African magically acquires a PhD in anti-malaria net making. I would still suppose they would have problems feeding their families. And I wouldn't be too surprised to learn afterwards that there are still not enough anti-malaria nets produced.

Sorry for providing a fully general counter-argument. But this is exactly my point: with enough sophistication, you can make Econ-101 arguments either way. I have already seen a clever Econ-101 argument against the anti-malaria nets. What I need is a reality check.

Does anti-malaria charity destroy the local anti-malaria industry?

2019-01-05T19:04:57.601Z · score: 64 (17 votes)
Comment by viliam on Why do Contemplative Practitioners Make so Many Metaphysical Claims? · 2018-12-31T23:40:28.057Z · score: 42 (12 votes) · LW · GW

A simpler explanation:

  • Most people have completely crappy epistemic standards.
  • Intelligent people make up their own crazy ideas (in addition to repeating crazy ideas from others).
  • When you have mastered something other people want to learn, you have an audience.

Comment by viliam on In what ways are holidays good? · 2018-12-28T11:49:13.992Z · score: 7 (4 votes) · LW · GW

This is highly individual. I never go somewhere to "learn". (I completely agree that you can read Wikipedia, and most of the time I don't even care.) Some people seem to have different preferences, hence the profession of a tour guide.

From the perspective of feelings, being somewhere provides you the full 3D experience, which is stronger than just seeing a picture on the screen. Even watching things in a museum or a gallery feels different than looking at pictures on the screen.

Some places provide you things you don't have at home: a sea, a mountain, a forest, a jungle... My typical vacation is going somewhere close to nature.

Sometimes you want to experience a different culture. Could be any kind of difference, just to have a novel experience; or could be a specific culture that you enjoy. People have emotional associations with some cultures, based on stereotypes, books, and movies: romance, adventure... You may want to practice a language.

Traveling somewhere means that you don't sleep at home. If you stay at a hotel, you have to pay for the bed and food, but you don't have to do the dishes and other annoying work; it is expensive, but pleasant. (Yeah, you could also rent a hotel room right next to your house, but this way you combine the necessity of living somewhere on a vacation with the luxury of having more service.)

And then there are the thousand different details, many of them unpredictable, that you can find in a different place. You may find a nice park, a nice cafeteria, or perhaps just a nice view, at the new place. Around your house, you probably already know most of these things.

Sometimes you go to a specific place to meet people who live there. Sometimes you coordinate with people who live far from you, to spend the vacation together at the same place.

...I probably forgot a few things here, but the idea is that a vacation is many kinds of things. Different people put different weight on individual components, thus you get different kinds of vacations (hotel vs camp, distant country vs nearby place, etc.).

> Does visiting family count as a holiday in the relevant sense?

Depends. It will be closer to the "archetypal vacation" if they live far away from you, in a different kind of place (e.g. big town vs village), if their neighborhood is interesting, etc. But there is no official definition, and in real life it probably depends on how your partner feels about it.

How much money should I be willing to spend on holidays?

Depends on your income and spending habits, and on what type of experience you aim for. Foreign travel will be expensive; a local train can be cheap. A hotel will be expensive; you can also find cheap accommodation. Some countries or regions are expensive, some are cheap.

My algorithm is something like: First, choose the type of experience, e.g. "somewhere near a forest, where I can take walks in multiple directions; also not one of the places I was at recently". Then choose the minimum level of comfort you can live with, e.g. "no cooking, a comfortable bed and a warm shower". Then look at the options, prioritizing the ones that seem interesting and are cheaper.
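For what it's worth, this heuristic could be sketched roughly like this (a toy illustration only; the criteria, scores, and option data are all made up, not a real recommendation engine):

```python
# Toy sketch of the "filter by type and comfort, then rank" heuristic.
# All option data and scoring criteria below are invented for illustration.

def choose_vacation(options, required_comfort, recent_places):
    """Filter by desired experience type and minimum comfort,
    then rank by interest (higher first) and price (lower first)."""
    candidates = [
        o for o in options
        if o["near_forest"]                    # desired type of experience
        and o["name"] not in recent_places     # skip recently visited places
        and o["comfort"] >= required_comfort   # minimum acceptable comfort
    ]
    # Prefer more interesting options; break ties with lower price.
    return sorted(candidates, key=lambda o: (-o["interest"], o["price"]))

options = [
    {"name": "Mountain lodge", "near_forest": True,  "comfort": 3, "interest": 8, "price": 400},
    {"name": "Beach resort",   "near_forest": False, "comfort": 5, "interest": 7, "price": 900},
    {"name": "Forest cabin",   "near_forest": True,  "comfort": 4, "interest": 8, "price": 250},
]
ranked = choose_vacation(options, required_comfort=3, recent_places={"Mountain lodge"})
print([o["name"] for o in ranked])  # only "Forest cabin" passes the filters
```

The point of the sketch is just that the hard constraints (type, comfort) come first, and "interesting and cheap" only ranks whatever survives them.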

Is 'holiday' a coherent enough category that I can treat it as primitive for the purpose of this question?

Yeah, I think so. "Going to a different place, with the intention to have a pleasant and/or novel experience." But of course some examples are more central than others (a hotel at the beach; a hike in the mountains; a guided tour in a foreign country).

Seems to me that "not being at home" itself makes a large part of the definition, although people can also go on business trips. (And you can make an edge case, such as a business trip in an exotic country, where the work is mostly a pretext to be there.)

feel free to answer ones not listed here

Try different things, find out what makes you happy. If you want to go with other people, different kinds of vacation may require different people.

Comment by viliam on Act of Charity · 2018-12-23T23:08:42.089Z · score: 4 (2 votes) · LW · GW
[16:15] There's a mosquito-net maker in Africa. He manufactures around 500 nets a week, employs ten people who as with many African countries each have to support upwards of 15 relatives. As someone that lives in West Africa, I can corroborate that however hard they work they can't make enough nets to combat the malaria-carrying mosquitoes.

This is specifically the part that my understanding of Econ 101 fails to process.

There is this one guy and his 10 employees, and they can't make enough nets for the whole Africa. Okay, that part is simple to imagine. But why doesn't he employ more people and increase the production? Or why someone else doesn't copy his business model?

If my memory serves me well, Econ 101 assumes that if there is effective demand, sooner or later there will also be supply to match it.

Should I assume that there is no effective demand to buy more nets? Like, perhaps people are not aware of the dangers of malaria-carrying mosquitoes, or they don't believe the nets are helpful, or they simply do not have enough money to match the production costs of the nets: either because the nets are too expensive, or because they have to prioritize other necessities. But then, the problem is not that they "can't make enough nets", but rather that they "can't sell enough nets".

Another thing about Econ 101 is the principle of comparative advantage. According to this principle, trying to produce everything at home is worse than trading internationally. (Otherwise an embargo would be a blessing instead of a punishment.) But international trade will inevitably put some of your local producers out of business. Is it possible that Africa has a comparative disadvantage at producing anti-malaria nets?

It seems to me that the author is pattern-matching net aid to food aid. But that is a different situation. In a situation without foreign aid, you cannot have a long-term imbalance between local food production and local food needs... the starving people will die. But you can have a long-term imbalance between local anti-malaria-net production and local anti-malaria-net needs... people unprotected against malaria only get it with some probability, not certainty; some who get malaria will die, but some will survive. In other words, the resulting balance cannot include people who don't eat, but it can include unprotected people, some of whom survive long enough to reproduce. Foreign food aid tries to fix a temporary imbalance, and in the process perhaps causes more harm than good. But foreign anti-malaria-net aid tries to change the long-term balance.

the problems caused by aid are extremely bad in some of the countries that are targets of aid (like, they essentially destroy people's motivation to solve their community's problems).

I understand how an intervention that puts half of your population out of business can have this effect. I find it less likely that an intervention that puts one person in a million out of business would have the same effect. That is why I asked how many people are employed in the anti-malaria-net industry, compared with agriculture.

This seems to me like a mistaken pattern-matching. Pretty much anything can make someone lose their job. But there is a difference between "save thousand people, destroy thousand jobs" (food aid) and "save thousand people, destroy one job" (anti-malaria nets).

Comment by viliam on Act of Charity · 2018-12-23T22:03:28.389Z · score: 2 (1 votes) · LW · GW
EAs are good at explaining why you shouldn't do what they're (we're?) doing. That's different than actually doing the right thing.

I agree (with the second part, at least).

The author of the video said "according to Singer" repeatedly, so I assumed he also disagrees with what EAs are saying. If the real objection was "Singer says to do the right thing, but then actually does exactly the thing he said was wrong", I didn't get that message from the video. (Maybe the problem is on my side.)

Comment by viliam on Why Don't Creators Switch to their Own Platforms? · 2018-12-23T16:38:07.318Z · score: 4 (3 votes) · LW · GW

In addition to the IQ difference, the "cluster in thingspace" that includes rationalists and Sam Harris fans contains disproportionately many people with IT skills.

And as you say, Sam Harris is less "fungible" than PDP. He already exists outside of YouTube, while PDP was made (and therefore can be replaced) by YouTube.

Comment by viliam on Open and Welcome Thread December 2018 · 2018-12-23T16:30:02.491Z · score: 3 (2 votes) · LW · GW
Can you think of any reasons we couldn't make the coordinated city's counterpart to the FSP's Statement of Intent contract legally binding, imposing large fines on anyone who fails to keep to their commitment?

Because then even fewer people would sign it. And the remaining ones would be looking for loopholes.

For a lot of people a scheme like this will be the only hope they'll ever have of owning (a share in) any urban property

Unfortunately, those people would be the most scared of the "large fines".

Comment by viliam on Act of Charity · 2018-12-23T15:39:39.403Z · score: 19 (3 votes) · LW · GW

It is a bit costly for me to review a 48-minute video without a summary, but here goes:

[8:00] If you're donating a dollar, according to Singer you would not want 99 cents to go to overhead costs and only one cent to go to the actual process of saving lives. Where should you donate your 5 dollars, if you could choose between a program that would take $4.95 to pay their employees and cover other costs, and only take the five cents to pay for bed nets; or one that spends the $4.95 on the bed nets, and only five cents on the overheads? You probably want to pick the latter, at least according to Singer. And this criterion is going to naturally privilege some causes over others, because we're focusing on that specific impact, that physical tangible impact that you're having.

This seems like missing the point (surprisingly, right after having described how consequentialism means that only the consequences matter), or perhaps it is a motte-and-bailey statement.

The overhead is not the problem. I believe this is actually one of the central messages of effective altruism: what matters is what the money as a whole causes to happen. Overhead is a part of the equation. Yes, if we -- ceteris paribus -- just randomly increase the overhead for no good reason, it obviously makes us less effective. But if spending 20% on overhead instead of 10% would double the effect of the remaining 80% (for example by hiring more qualified people, or by double-checking things to prevent theft), then increasing the overhead that way would be the right thing to do. I strongly believe Singer would approve of this.

So, the motte of the statement is that if we have two processes that convert money to anti-malaria nets in exactly the same ratio, only one of them also has a 5% administrative overhead and the other has a 95% overhead, it is better to choose the former. The bailey of the statement would be concluding that...

Of the eight charities listed on the [GiveWell] web page, seven focused solely on directly providing health care: providing bed nets, providing cures for various diseases, and so on. These programs had the lowest costs and according to Singer the highest return in lives saved and pain averted.

No, it's not about "the least overhead is when we provide health care, therefore health care is what we should do". It is about the ratio between the donated dollars and the generated outcome (where "many lives saved" is considered a quite impressive outcome). The overhead is a red herring.

[11:50] I will argue in fact that does more harm than good. I will claim that charities which minimize overhead actually are less effective than those that use their funds to address other concerns.

Sigh. Go on, fight the straw-Singer!

Then the video explains how donating X to a country can ruin the local X-providing industry. Duh. That's why I'm asking: Where is the local African anti-malaria-net industry that is being so thoughtlessly ruined by GiveWell?

Because I understand how donating food ruins the local food producers, or how donating t-shirts ruins the local t-shirt producers, so I would also understand how donating anti-malaria nets would ruin the local anti-malaria-net producers... the question is, do these "local anti-malaria-net producers" exist in real life, or only as a part of the thought experiment? What fraction of the African population is employed in this industry? (My uninformed prejudices say "probably several orders of magnitude less than in local food production", but I may be wrong. Please give me data, not thought experiments designed to prove your point.) Because I believe there is a number X such that "X people saved from malaria" outweighs "1 person losing a job", even in the extreme case where the person losing the job would literally starve to death. (By the way, what about those people who now don't die from malaria? What if they also take someone's job?)

[16:15] There's a mosquito-net maker in Africa. He manufactures around 500 nets a week, employs ten people who as with many African countries each have to support upwards of 15 relatives. As someone that lives in West Africa, I can corroborate that however hard they work they can't make enough nets to combat the malaria-carrying mosquitoes.

Okay, so here is the local industry. I feel uncertain about the "×15" multiplier for the supported relatives, as an argument used in weighing the benefit of "more people saved from malaria" vs "local net producers not going out of business". Some of those people dying from malaria also have relatives they support, don't they? On the other hand, some of them are the supported relatives. (If I overthink it, some of them might even be the supported relatives of the local net producers.)

Now the argument goes: GiveWell sends the nets, puts the local producers (and their ×15 families) into poverty, and "in a maximum of five years the majority of the imported Nets will be torn, damaged, and of no further use". Now we have 165 more people depending on foreign aid. (Hey, what about those whose lives were saved from malaria? Some of them will depend on foreign aid, too!) Also, no one will restart the local net industry, because now it is seen as a risky business.

The problem I have with this line of thought is that, hypothetically speaking, if I had a magic wand I could just wave to make malaria disappear forever... by this logic, that would also be an evil thing to do. There would still be the 165 people depending on foreign aid, right?

Then comes the general argument that by giving people specific aid, we are depriving them of the freedom to choose which aid they would prefer to get. In general, this is a good point, and there is a charity called GiveDirectly which addresses it... oh crap, donating cash to people makes them dependent, too! :( It seems like even keeping someone alive makes them dependent, because in the future, such a person will require food, anti-malaria nets, etc.

Seems like the recommended solution is... to let Africa solve its own problems, without any kind of aid. Because that way, the solution will be sustainable. This will also prevent "brain drain", because the smartest people will be motivated to keep living in a shitty environment if they believe they are the only ones who can save it. (Win/win! Now even the jobs of the Westerners are safe.) Then those smart people will invest their savings in their countries of origin, and everything will become exponentially better.

[43:45] Singer and Give Well underestimate the amount of good that is done and pleasure created by bringing someone out of dependence.

Okay, here is a thought experiment: Your family is sick, you go to the doctor, and the doctor tells you: "I could give you a medicine, but imagine how much better it would feel to invent it yourself! Yeah, it may cost a lot, and your family may die while you are researching, but I am giving you the long-term perspective here, for your own good."

What I am trying to say is that there is a trade-off between the pleasure of being independent and the pleasure of having your relatives alive. Speaking for myself... well, I actually don't think my country is economically independent, and I definitely wouldn't trade my kids' lives to make it so.

But perhaps the next time I will see a person suffering, I will remember that it is a superior option to just let them be, and not take away their motivation to become a well-paid software developer.

Additionally, everything he says seems quite likely according to Econ 101 models.

To me it seems like the Just-World Hypothesis. Specifically, the part about how even donating cash to someone makes their life ultimately worse feels like status quo worship.

By this logic, there is no way to help another person, ever, without inflicting on them a horrible curse. You give your kids an Xmas present, and you just ruined their motivation to become financially independent. You help an old lady to cross the street, and you ruined her motivation to maintain her vision or to keep good relationships with her relatives. You invent the virus that kills mosquitoes worldwide, and you deprived the Africans of their motivation to study medicine. Any help is just a harm in disguise. (Or perhaps this only applies to helping Africans? Dunno.)

Comment by viliam on Act of Charity · 2018-12-22T23:35:49.387Z · score: 4 (2 votes) · LW · GW

Reading the comments below the linked video... there are responses written 2 years ago that the author never found time to reply to... How specifically does donating anti-malaria nets "keep populations dependent, economically weak, and slaves to the whims of international donors"?

I would understand the part about whims: yeah, tomorrow some influential organization might decide that the nets are bad and you should stop supporting them, and there would be no more nets. Still, would that outcome be worse compared to a parallel reality, where there never was a movement to support the anti-malaria nets? The people would still have gained a few years of health.

Are the donated nets ruining a previously existing huge local anti-malaria-net industry?

there are systemic issues that make things tend to become scams, and charity evaluators aren't in a better position with respect to this problem than charities themselves, one should expect charity evaluators to become scams as well.

This part feels true. Similarly to how in medicine people started reading meta-reviews, and soon the homeopaths, in addition to their studies, also started producing their own meta-reviews supporting their own conclusions... as soon as charity evaluation becomes a generally known thing, some of the currently ineffective charities will produce their own charity evaluators, which will support whatever needs to be supported.

It's just, instead of "GiveWell will become a scam", to me the more likely scenario seems "in a few years there will be so many charity evaluator scams that when you google for 'charity evaluator', you won't find GiveWell on the first three pages of results".

Comment by viliam on Open and Welcome Thread December 2018 · 2018-12-22T23:12:44.542Z · score: 5 (4 votes) · LW · GW

Seems to me that when we think about animals, there are two opposite mistakes one can make. The first is too much anthropomorphism: "the dog that is looking at the moon must be thinking about its existential problems, because that is what I would do during a sleepless night". The second is treating animals as p-zombies: "yeah, the pig seems to suffer, but don't make a mistake, only humans can really suffer; the pig makes the suffering-like movements and noises for a completely unrelated reason".

As usual, the easiest way to get into one of these extremes is trying hard to avoid the other one.

Comment by viliam on Open and Welcome Thread December 2018 · 2018-12-22T23:00:53.023Z · score: 5 (3 votes) · LW · GW

No comment on the voting strategy, just wanted to focus on the idea that "the value of the land is mostly the proximity of other people, so why not coordinate and move to a new cheap place together?"

First, I wonder whether it is actually true. As far as I know, most cities are at a place that has some intrinsic value, such as a crossing of trade roads, a port, or a mine. I wonder how much this is necessary, and how much it is just history's way to solve the chicken-and-egg problem of coordination by saying "first movers come here because of the intrinsic advantage, everyone else moves here because someone already moved here before them".

On one hand, for many people "the value is the proximity of neighbors" is true. If you have a shop, you want to have many customers near you. If you are an employee, you want many employers near you, and vice versa. People move to e.g. Silicon Valley because of everything that is already in Silicon Valley; if you could somehow teleport the whole Silicon Valley to a not-very-awful place, this dynamic would probably remain. On the other hand, you have cities like Detroit, where removing an important piece (jobs in the car industry) made everything fall apart; the "proximity to many neighbors" was not enough to save it. So having many people at the same place is not necessarily a recipe for success; the whole "ecosystem" needs to be in some kind of balance, which would be difficult to achieve with a new city.

Second, yeah, coordinating people is hard. Look at the Free State Project, where people coordinated to move to the same US state. It took them a few years to coordinate 20 000 people, just to move to existing cities, with existing infrastructure and job opportunities, within USA. How long would it take to coordinate people to move somewhere to a desert, and how many people would actually go there?

Comment by viliam on New edition of "Rationality: From AI to Zombies" · 2018-12-19T20:48:24.478Z · score: 5 (3 votes) · LW · GW

To put it bluntly, a science-related book with bad design just screams "crackpot".

Comment by viliam on New edition of "Rationality: From AI to Zombies" · 2018-12-15T23:36:43.338Z · score: 11 (8 votes) · LW · GW

This is impressive. I am not going to read the text again to see what has changed, so no opinion on that part, but visually, it finally looks like a book. (Not like a bunch of screenshots from a blog.) Especially how the hyperlinks were changed; that was quite painful previously.

A part of me hopes that after publishing the 6 parts of the original Sequences, the series will continue. There were other things written during those years, both by Eliezer and by others.

Comment by viliam on Worth keeping · 2018-12-08T01:09:42.269Z · score: 8 (4 votes) · LW · GW

I think there is a limit on the replaceability of friends, even if you are surrounded by people you like. First, part of the value of friendship comes from understanding each other. Exchanging information about each other takes time; with a new friend you have to start from the beginning. Second, if you have known a person only for a short time, you don't know whether their current behavior is typical, and you don't know what their bad moments look like. So you can rely on them less than on a person you have known for years.

I would still assume a difference between an environment where potential friends are scarce and one where potential friends are plentiful. But the cost of replacing a friend cannot become arbitrarily small.

Some reality checks:

  • You are supposed to make good first impressions. That makes sense according to this model, because for a person you just met, you are most replaceable, so you have to try your hardest.
  • You can be more open about your weaknesses to people who are strongly connected to you. Makes sense; the least risk of being replaced. On the other hand, sometimes you can be very open to strangers (including therapists). Also makes sense; in this case you will certainly be replaced soon, so there is nothing to lose.

Comment by viliam on Playing Politics · 2018-12-07T23:57:13.998Z · score: 8 (3 votes) · LW · GW
"People disagreeing with you" isn't a threat in itself.

Depends on why they disagree. For example, some people just love to argue. If you say "X", they are going to say "non-X" even if a minute before they had absolutely no opinion about it. It could be their idea of fun; it could be a status move. Some people have to inject themselves into everything, because it makes them feel important. Suddenly you are stuck talking to people who do not provide the truth-seeking value an honest opponent would.

Even if you ignore death threats, a stalker who follows you everywhere and keeps disagreeing with you publicly, can be a waste of your energy. Crazy people can write an insane amount of content, because they can type without thinking and they have nothing better to do at the moment. Even if they don't convince anyone, they can disrupt a meaningful debate, and make you seem bad by association with them.

Comment by viliam on Playing Politics · 2018-12-07T23:40:33.326Z · score: 4 (3 votes) · LW · GW
If you step back then other guests can fill the power vacuum with a different purpose than the one you intended
good moderators are prepared to ask if an audience member can turn a statement into a question or to cut them off for the benefit of the majority

This, so much!

By nature, I am completely "it's unjust to have a master, and more so to be a master", but experience has taught me that if I end up in a role of a boss, I have to play it, no matter how much I dislike it, because usually there is someone in the audience who loves the role and is waiting for the opportunity to grab it.

You try to share the power with the audience equally? Someone from the audience is going to take 80% of that share for himself alone, unless you stop him. When you give a talk, it means people came to listen to you, not to some overconfident rando from the crowd. (The rando can offer his own talk separately, at a different time or place.)

Comment by viliam on Playing Politics · 2018-12-05T23:15:24.329Z · score: 4 (3 votes) · LW · GW

My interpretation of the message in The Incredibles would be: "Nobody likes whiners". (Note: I haven't actually seen the movies, so there may be a nuance I am missing here completely.)

Part of the reason is that in many situations whining is unproductive. For example, there may be a situation where no available choice is perfect, and any solution necessarily contains a trade-off, but some people waste everyone's time by refusing to accept that, and by claiming the moral high ground without proposing their own solution (which would expose them to criticism). Or people may defend an obviously suboptimal choice by selectively applying the nirvana fallacy to all alternatives.

But another part is that we are instinctively wired to win social conflicts. If someone complains too much, it suggests they are too weak, and therefore a useless ally. You want to join someone who is frustrated today, but has a solid chance to prevail tomorrow; not someone who will predictably remain at the bottom. And there are all kinds of biases that can make your "elephant" perceive something as too much whining.

I occasionally find myself in situations where I feel I’m being asked to take a sort of Straussian stance — if you want to get important things done, you can’t be totally transparent about what you’re doing, because the general public will stop you. I’m not sure these people are wrong. But I really hope they are. I have a bad feeling about maintaining information asymmetries as a general policy.

Among rational and mutually friendly agents, hiding information would be bad. But most people are pretty far from being rational, and some people are pretty far from being friendly. Secrecy is a defense against an unknown but statistically real enemy.

If you say something in front of a sufficiently large audience, inevitably some people will disagree for the wrong reasons. Some of them are crazy, or just completely misinformed about the topic (in a way that cannot be fixed in the short term). Some of them see "disagreeing with you" as a way to get status points at your expense, even if they don't truly disagree with you on the object level. Yes, it's true that some people might disagree for the right reasons. But how would you solicit feedback from the latter without exposing yourself to the reaction of the former?

Comment by viliam on Playing Politics · 2018-12-05T22:30:52.384Z · score: 3 (2 votes) · LW · GW
when asked for a choice (like "what should we eat" or "which of these meeting places do you prefer"), I frequently replied with some variant of "no preference".

With some people, I once had a norm that the answer in such situations always consists of two parts: (1) the choice, you have to make one; and (2) a number from 1 to 10 expressing how strongly you prefer this choice.

With the right kind of person, this works quite well. You can have e.g. "option A, strength 2" and "option B, strength 4", then go with option B without feeling guilty, but perhaps acknowledging a small debt towards the person who wanted A. (The debt will probably be erased soon when the next decision goes the other direction, but if it happens to accumulate, you can discuss that explicitly.)
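The norm could be sketched as a tiny decision rule (an illustrative toy only; the names, the tie-breaking behavior, and the generalization to summed strengths are my own assumptions, not part of the original norm):

```python
# Sketch of the "choice + strength 1-10" decision norm described above.
# Tie-breaking and the idea of summing strengths are assumptions for illustration.

def decide(answers):
    """Each answer is (person, option, strength 1-10).
    Sum the strengths per option, pick the option with the strongest
    total preference, and report the 'debt' owed to everyone who
    preferred a different option."""
    totals = {}
    for _, option, strength in answers:
        totals[option] = totals.get(option, 0) + strength
    winner = max(totals, key=totals.get)
    debts = {person: strength
             for person, option, strength in answers
             if option != winner}
    return winner, debts

winner, debts = decide([("Alice", "A", 2), ("Bob", "B", 4)])
print(winner, debts)  # B wins; a small debt of 2 is owed to Alice
```

This matches the example above: "option A, strength 2" loses to "option B, strength 4", and the recorded debt is what you might informally keep in mind for the next decision.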

Comment by viliam on Preschool: Much Less Than You Wanted To Know · 2018-12-05T22:14:56.813Z · score: 4 (2 votes) · LW · GW

Yeah, the correct conclusion is probably to give me partial credit. First, to account for the fact that my intervention was only a part of a larger causal chain (some credit rightfully goes to whoever bought the bricks, right?), some parts of which I don't even know about (this becomes prominent now in kindergarten, where I have no idea about all the little things they do). Second, because that's how one deals with probabilities (if you assume a 20% chance that you caused something, take 20% of the credit; it will work out on average).

But I try to be humble, because I believe that people overestimate their impact. First, because they forget about many other influences (including the genes, and the child's own work); second, because they assume 100% probability whenever there is a plausible story (and there usually is one). So, whenever I see an opportunity to impart some knowledge painlessly, I go for it, but in far mode I believe I deserve much less credit than it feels I do. (Not "less credit" as in "less than other parents", but as in "parents in general deserve less credit than they feel they do".)

Related: Bryan Caplan's Selfish Reasons to Have More Kids.

Comment by viliam on Clickbait might not be destroying our general Intelligence · 2018-11-20T23:58:08.103Z · score: 6 (3 votes) · LW · GW

This reminds me of the book Gang Leader for a Day, where the author describes how leaders of various gangs prefer peace, because that means greater profit from selling drugs (when gang members are shooting each other in the streets, customers are afraid to approach), while footsoldiers prefer war, because that is their best opportunity to increase their status.

Perhaps it is similar with politics. Online, people compete to get closer to some extreme archetype. That is their only way to increase their status. In office, politicians have to cooperate with people holding different opinions, and have to make deals with them. Also, online, people can be fragmented into a thousand groups, each of them intolerant towards the others; but a politician needs to be acceptable to a sufficient number of people to get elected.

Before social networks, politics was mostly "rich people's business". Now ordinary people can compete against each other by posting "edgy" comments.

Comment by viliam on Preschool: Much Less Than You Wanted To Know · 2018-11-20T23:32:16.642Z · score: 15 (8 votes) · LW · GW

When my daughter was 1 year old, I tried to teach her how to put big Lego-ish bricks on each other properly. She randomly rotated the brick in her hand, and tried to put it on the other brick. When the brick in hand had the hole at the bottom, sometimes the bricks connected successfully; when the brick in hand was rotated differently, she tried to push them together, and then threw the brick away in frustration. I tried to explain, by talking and showing repeatedly, how the brick in the hand needs to be held with the hole facing down... and then I just gave up, because there was no progress. I decided to simply ignore the Lego.

A week or two later, my daughter found the Lego bricks again, and this time she was putting them together correctly.

There were a few experiences like this, after which I concluded that sometimes the right thing to do is to wait. Things that are difficult now may become simple later. If instead I had stubbornly tried to teach her the Lego bricks every day, likely making us both frustrated, one day she would have learned to do it properly, and I would have congratulated myself on my patience. But I would have been wrong, because simply not doing anything would have achieved exactly the same outcome.


When my daughter was 2 years old, I taught her a few words in English, and also how to draw. People were impressed by both outcomes. Later I didn't have the time and patience to practice English regularly, so she gradually forgot most of what she knew. But drawing remained her favorite activity, and she kept drawing almost every day. People continue to be impressed with her drawing skills.

This suggests that when you teach a specific skill (after you have waited long enough to make teaching it possible), the important thing is to keep going. If you keep going, the Matthew effect will bless you; if you stop, you will start reverting to the mean. This may not be immediately visible at an age when "the mean" also means progress, only much slower than it could have been otherwise.


So, I think that both Zvi's and moridinamael's conclusions may be correct, depending on the situation. Sometimes the problem is trying to teach a skill too soon. Sometimes the problem is teaching the skill and then letting it revert.

Also, the right timing for a skill may depend on the child's IQ. Some children are ready to start reading at 3, others are ready at 6. A kindergarten trying to teach reading at 4 or 5 may fail to achieve long-term improvement with different kids for different reasons.

Comment by viliam on “She Wanted It” · 2018-11-14T00:50:18.597Z · score: 9 (8 votes) · LW · GW

I wonder whether Fifty Shades of Grey could be an example of exceptionally successful marketing. People buy it because the media told them it is hugely popular (so they are curious, and don't want to seem ignorant), and when they find out they don't like it, they go: "Well, if everyone else likes it, I'd better shut up or I will seem like a prude or worse" (the zeitgeist discourages saying anything negative about other people's sexuality, especially when it's weird). Anyway...

The problem with revealed preferences is that it kinda assumes that in any internal conflict, the side that won was the Truth, and the side that lost was Fake all along. This assumes that people never make mistakes (or that they truly want to make exactly the mistakes they made), and that willpower is just a synonym for hypocrisy -- unless the willpower happens to prevail, in which case it turns out it was the true will all along (and if your first attempts failed but you later succeeded, that means you truly wanted to fail first and succeed later, duh).

And the usual mistake when discussing nature and evolution is ignoring the evolutionary-cognitive boundary, plus the fact that our environment differs from the one we are adapted for. Thus "in the ancient past, X provided a reproductive advantage, on average" becomes "X provides an advantage (now and always)" becomes "you want X, and I am not listening to your lies, you hypocrite!". And it's hard to argue otherwise, when there is in fact a part of you that somehow pushes you towards X. But if we follow the same logic, then the True Will of humanity is to eat sugar, become fat, get diabetes, and die; because that's what keeps happening when we give it a chance. (So if a superhuman AI happens to kill us, it has just fulfilled our desires faster. Actually, the fact that we built the AI that killed us already makes our extinction our revealed preference.)

I see two big differences between our ancient evolutionary past and current civilization, with regard to the current topic:

1) Before agriculture, we spent all our time together in tribes; today we live in families, often nuclear ones. That means the connection between "who you have sex with" and "who you spend most of your time with" is relatively recent. Obviously, spending more time with an abusive asshole is a bad idea. But when the whole group lives together, having sex with someone doesn't mean spending more (non-sexual) time with them. Each member of the tribe is within reach of the alpha male's fist, whether they have sex with him or not. Women used to choose which man's genes they wanted for their children, and that was the whole story. (And yes, it makes sense to choose a stronger one over a weaker one, and a winner over a loser.) This evolutionary calculation did not include the danger of spending a lot of time alone with him.

2) The ancient environment also put limits on male aggression. The alphas often didn't win as individuals, but as coalitions. They had to beat challengers into submission, but when not challenged, they often acted as keepers of peace and justice. Being an asshole to everyone meant that the three or four guys you had hurt recently would gang up against you, beat you, and probably kill you to protect themselves against possible revenge. To keep the throne, you needed allies. Ironically, it is civilized society that allows some individuals to be assholes to everyone and survive. Many annoying people live only because no one considers killing them worth risking prison for. In the past, being an asshole and remaining alive would have been powerful counter-signaling. Today, pretty much any loser can do it, and many do. Of course this messes up our instincts.

anti-feminists often jump to believing that women are more attracted to men who are violent to them.

Then they are deeply ignorant of women's literature: the proper archetype is the guy who is violent to everyone else, but is mysteriously tamed by the charms of the heroine, i.e. Beauty and the Beast.

The women who date "bad guys" don't do it because they have a preference for being punched in the face. They do it because they have a fantasy in which they (and they alone) will not get punched in the face. Which would actually make sense in a sufficiently ancient past, but makes much less sense in recent millennia. Well, evolution sometimes updates slowly. (He who wants to throw a stone, first tell me how much sugar and salt you ate this week. You realize that shit is killing you slowly, don't you?) Instead of a preference, this is more like a cognitive bias. From the inside, the idea "he will punch everyone else, but not me, because he will love me" seems like a perfect reflection of reality. (And if he has already punched her, that does not falsify the hypothesis. "Sometimes true love requires a lot of time, patience, and sacrifice. It will all turn out well in the end." Read the Harlequin novel where the man first hurts the heroine, but then falls in love with her and deeply regrets it. Which one? A random choice will probably be the right one.)

The abuse isn’t being read as wish-fulfillment, but as verisimilitude.  I wouldn’t be surprised if the author and many of the fans have been in abusive relationships or grew up in abusive households. (...) Maybe abused people really do have a higher risk of seeking out a repetition of the harm they experienced and were taught to believe was normal.

Anecdote time: I met a woman who complained about how all her boyfriends were alcoholics. Yet, after breaking up, she was soon dating another one. When I tried to talk some reason into her, she told me that actually all men are alcoholics, only some of them are honest about it, and others are in denial; and those in denial are actually much worse. -- To me it seemed obvious that such a belief is false and self-harming, but of course trying to argue otherwise would have merely put me in the "in denial" category.

I can imagine how similar beliefs about male aggressiveness could arise as a consequence of an abusive childhood (as a defense mechanism against admitting that your father just happened to be an exceptional asshole), and could be further reinforced by seeking out aggressive partners, because the non-aggressive ones are perceived as somehow weird or fake. -- And perhaps together with the Beauty and the Beast fantasy, this could result in a model where all men are aggressive and only true love can tame them. (Plus there are the Nice Guys who are too pathetic to be aggressive openly, but luckily our mindreading skills allow us to see that deep inside they are even worse.)

It probably doesn't help that the idea about all men being violent and evil is... zeitgeist-compatible.

It’s not that women want men to hurt them. It’s that men hurt women a lot.

Yep. Connotational sidenote: there is a difference between "men hurt women a lot" and "many men hurt women". It is possible that a disproportionately large amount of the hurting comes from a small minority of men. (Pointing towards statistics about psychopaths having above-average numbers of sexual partners, etc.)


I agree with most of your article; I just believe it could be simultaneously true that (1) women who were previously abused, especially in childhood, may seek out abusive partners because they perceive such behavior as "normal"; which does not excuse the next abuser, and the "revealed preference" answer is bullshit, because the woman is acting on her incorrect model of the world, and the friendly thing to do would be to try to fix that model instead of exploiting it; and (2) women in general have a systematic bias towards perceiving violent men as more attractive, and less dangerous than they actually are, for evolutionary reasons which may no longer apply to our current environment. In my model, "sane" women can use their reason to overcome the temptation and realize that the extra excitement is not worth getting punched in the face regularly (similarly to how men attracted to pathological women can decide to "not stick their dick in crazy"), but there are reasons that can make a woman either underestimate the danger or take it as inevitable, in which case dating the violent guy seems like a good choice.

If I understand it correctly, you used the former to explain away the latter, and that seems wrong to me. (I still approve of bringing attention to the former.)

Comment by viliam on Schools Proliferating Without Practicioners · 2018-11-06T00:12:39.326Z · score: 2 (1 votes) · LW · GW

I am not sure how to best handle the topic of religion in a community blog.

If it is a single-person blog, the optimal solution would probably be mostly not to even mention it (just focus on naturalistic explanations of the world), and once in a long while to explain, politely, why it is false (without offending people who disagree).

With a community blog, the problem is that being polite towards religion may be interpreted by religious people as an invitation to contribute, but their contributions would inevitably include pro-religious statements, at least sometimes.

And if you make it explicit, like "religious people are welcome, but any pro-religious statements will be immediately deleted, and the author may be banned", that sounds like your atheism is a dogma, not the outcome of a logical process (which you merely don't want to repeat over and over again, because you have more interesting stuff to write about). And even then I would expect a lot of rules-lawyering, strong hinting, etc.

Comment by viliam on Schools Proliferating Without Practicioners · 2018-11-05T23:59:56.728Z · score: 4 (2 votes) · LW · GW

I think the main argument against podcasting preaching is that religion is mostly about the social experience. Replace "hundreds of people who meet in real space regularly" with a podcast, and all that's left is some theology, which frankly most religious people don't care that much about.

Comment by viliam on Open Thread November 2018 · 2018-11-05T23:45:18.142Z · score: 12 (4 votes) · LW · GW
What other nice places exist?

Blogs of nice individuals, I guess. (That includes SSC.) If you make a blog for yourself (for your persona) and keep commenting on the few blogs you read, that could easily be the nicest possible way of using the internet.

Comment by viliam on No Really, Why Aren't Rationalists Winning? · 2018-11-05T23:42:16.711Z · score: 39 (9 votes) · LW · GW
If LessWrong had originally been targeted at and introduced to an audience of competent business people and self-improvement health buffs instead of an audience of STEM specialists and Harry Potter fans, things would have been drastically different. Rationalists would be winning.

This sounds like "the best way to make sure your readers are successful is to write for people who are already successful". It makes sense if you want to brag about how successful your readers are. But if your goal is to improve the world, how much change would that bring? There is already a ton of material for business people and self-help fans; what is yet another website for them going to accomplish? If people are already competent, self-improving businessmen, what motive would they have to learn about rationality?

The Bayesian Conspiracy podcast ... proposed ... that rationality can only help us improve a limited amount relative to where we started out. They predicted that people who started out at a lower level of life success/cognitive functioning/talent cannot outperform non-rationalists who started out at a sufficiently high level.

The past matters, because changing things takes time. Obtaining "the same knowledge, skills, qualities or experience" requires time and money. (Money, because while you are chasing the knowledge and experience, you are not maximizing your short-term income.) Sometimes I wonder what life would be like in a parallel universe where LessWrong had appeared when I was 15, as opposed to 35. I had a ton of free time while studying at university; I have barely any free time now. I lived with my parents; now I need a day job to pay my bills. Even putting money into index funds (such simple advice, yet no one in my social group was able to give it to me) would have been earning compound interest all this time. In this universe I cannot, on average, get as far as I could have in the parallel one.

So why haven't we been dominating prediction and stock markets? Why aren't we dominating them right now?

Because there are people who have already spent years learning all that stuff, and now do it as a day job; some of them 16 hours a day. Those are the ones you would have to compete against, not the average person.

In my own case, ... I can't afford to bet on things since I don't have enough money of my own for it, and my income is highly irregular and hard to predict so it’s difficult to budget things. ... Do a lot of other people here have such extenuating circumstances? Somehow that would feel like too much of a coincidence.

For me it feels like too much of a coincidence when a person complaining about why others aren't achieving greater success immediately comes up with a good excuse for themselves.

Speaking for myself, my income is nice and regular, but I have kids to feed and care for, and between my day job and taking care of my kids, I don't have enough time to research things to bet on, or to learn to understand finance like a pro. That is a quite different situation, but still one that makes miracles difficult. I suppose some people are busy with their scientific careers, etc.

And then, a few people actually are winning a lot. Now, maybe you overestimate the size of the rationalist community. How many people, even among those who participate in meetups, are truly serious about this rationality stuff (and how many are there merely for social reasons)? Maybe it's only a few dozen people in total, worldwide. Some of them have good reasons not to be super winning, and some of them are super winning. Not a very surprising outcome.

I believe there is a lot of space for improvement. I believe there is specifically a lot to improve in "making our kind cooperate". But at the end of the day, we are just a handful of people.

And of course I'm looking forward to your friend's articles.

Comment by viliam on No Really, Why Aren't Rationalists Winning? · 2018-11-05T22:55:07.333Z · score: 11 (3 votes) · LW · GW

Not the author, but my guess would be this:

On various metrics, there can be differences in quantity, e.g. "a job that pays $10k" vs "a job that pays $20k", and differences in quality, e.g. "a job" vs "early retirement". Merely improving quantity does not make a good story. And perhaps it is foolish, but I imagine "winning" as a qualitative improvement, rather than merely 30% or 300% more of something.

And maybe this is wrong, because a quantitative improvement brings qualitative improvements as a side effect. A change from "$X income" to "$Y income" can also mean a change from "worrying about survival" to "not worrying about survival", a change from "cannot afford X" to "bought the X", or even a change from "the future is dark" to "I am going to retire early in 10 years, but as of today, I am not there yet". Maybe we insufficiently emphasize these qualitative changes, because... uhm, illusion of transparency?

Comment by viliam on UBI for President · 2018-11-05T22:18:58.833Z · score: 2 (1 votes) · LW · GW

Yeah, could be any of that.

I guess a part of my objection still remains... that unlike the article's suggestion that "humans value consumption, which is why they choose to work a lot", it is sometimes more about "employers prefer employees who work a lot (why exactly, that is debated), and in such cases employees are only given the options to work a lot or not get the job, with no middle ground".

Comment by viliam on Do Animals Have Rights? · 2018-10-18T21:39:00.863Z · score: 2 (3 votes) · LW · GW

You seem to write exclusively about political topics. If your goal is to become more rational, this is probably a bad idea.

Your disagreement with Peterson seems to be mostly about the definition of the word "rights". Based on your paraphrase, Peterson seems to define "having rights" roughly as being part of a network of mutual obligations, and concludes that we could hardly have mutual obligations with animals; we could still choose to have unilateral obligations towards them, but that's not the same thing. (Calling it slavery is the usual political exaggeration, though.) Your definition of "rights" seems more like a list of things that should happen to you in a society that aspires to be nice, and does not depend on imagined symmetry. In short: you two are using the same word for slightly different concepts.

Comment by viliam on UBI for President · 2018-10-18T21:20:55.306Z · score: 16 (8 votes) · LW · GW
But the reason we don’t work 15 hours a week is the weird equilibrium we’re in of what is valued by society.
Humans don’t intrinsically value “hours worked”. We value things like status, sex, community, pleasure. In modern society, we learned to associate a lot of this with work and consumption. This is especially true of men, which is why men left out of the work-consumption cycle fall into greater despondency than women.

For me, it was none of that. Ten years ago, when I was single and childless, I could have easily lived on 50% of my income. My status would have been the same, and I would have had more time to spend on things like sex, community, and pleasure. The problem was completely different, namely... signaling, when looking for a job.

When almost everyone works 40 hours a week, you signal conformity (one of the main traits employers look for) by working 40 hours a week. Working 40 hours is normal; wanting to work any other number of hours is weird. Why would anyone hire a weird person when they have the option to hire a perfectly normal person instead?

How exactly are you going to explain, during the job interview, why the option that is good enough for everyone else is not good enough for you? "You know, I work to live; I don't live to work. I have dreams beyond working hard to make someone else rich; and I value things that don't require much money but do require time, such as watching the sunset or doing math. I already suspect that on my deathbed I will regret not spending more time following my dreams, but I still need to pay my bills today somehow, and none of my hobbies is profitable, at least in the short term. Half of my market salary would cover my expenses, and I don't see a reason to spend any more time at a job than necessary. So, what benefits does your company offer?" -- probably not going to win any hearts.

Sometimes you have a socially acceptable excuse for not working full time: you can be a student, or disabled, or a woman with kindergarten-age children. In other words, you would like to work 40 hours just like everyone else, but unfortunately you can't, as everyone can see. When someone offers a part-time job, something like this is what they have in mind. None of that applied to me. When I explored my chances of getting a part-time job, I found out that I would have to sacrifice a disproportionate part of my income. The best offer I got was working 4 days a week, i.e. 80% of the usual time, for 50% of my usual salary. And the employer still felt like they were doing me a huge favor by accommodating my weird desire for more free time. Seeing that I couldn't get the time down to the 50% I wanted, I gave up and returned to 40 hours a week.

tl;dr -- working 40 hours a week is a conformity-signaling equilibrium, and it is difficult to get a job otherwise

Comment by viliam on One-person Universe · 2018-10-10T23:33:15.786Z · score: 5 (3 votes) · LW · GW

And the reason all men don't have the same mass is... the weight of wisdom accumulated through all these reincarnations. (Unlike the electrons, which can't learn.)

I like this!

Comment by viliam on On insecurity as a friend · 2018-10-10T23:29:12.843Z · score: 26 (8 votes) · LW · GW
I cannot express too strongly my utter opposition to the thesis of this post.

And I enjoyed reading both the article and this reply.

Perhaps the Law of Equal and Opposite Advice applies here; it depends on how much of your feeling of insecurity is just an awareness of your actual lack of skills, and how much is the result of manipulation by others. Manipulators exist, but lack of skills exists, too.

(In my opinion, going ahead and trying stuff is better than listening to your insecurity: maybe you are right, maybe you are wrong, once in a while you break something, but you will learn something either way. But I can imagine a person/situation for whom the balance could be the other way round.)

Rationality Bratislava Meetup

2018-09-16T20:31:42.409Z · score: 18 (5 votes)

Rationality Vienna Meetup, April 2018

2018-04-12T19:41:40.923Z · score: 10 (2 votes)

Rationality Vienna Meetup, March 2018

2018-03-12T21:10:44.228Z · score: 10 (2 votes)

Welcome to Rationality Vienna

2018-03-12T21:07:07.921Z · score: 4 (1 votes)

Feedback on LW 2.0

2017-10-01T15:18:09.682Z · score: 11 (11 votes)

Bring up Genius

2017-06-08T17:44:03.696Z · score: 48 (48 votes)

How to not earn a delta (Change My View)

2017-02-14T10:04:30.853Z · score: 10 (11 votes)

Group Rationality Diary, February 2017

2017-02-01T12:11:44.212Z · score: 1 (3 votes)

How to talk rationally about cults

2017-01-08T20:12:51.340Z · score: 5 (10 votes)

Meetup : Rationality Meetup Vienna

2016-09-11T20:57:16.910Z · score: 0 (1 votes)

Meetup : Rationality Meetup Vienna

2016-08-16T20:21:10.911Z · score: 0 (1 votes)

Two forms of procrastination

2016-07-16T20:30:55.911Z · score: 10 (11 votes)

Welcome to Less Wrong! (9th thread, May 2016)

2016-05-17T08:26:07.420Z · score: 4 (5 votes)

Positivity Thread :)

2016-04-08T21:34:03.535Z · score: 26 (28 votes)

Require contributions in advance

2016-02-08T12:55:58.720Z · score: 61 (61 votes)

Marketing Rationality

2015-11-18T13:43:02.802Z · score: 28 (31 votes)

Manhood of Humanity

2015-08-24T18:31:22.099Z · score: 10 (13 votes)


2015-08-14T17:38:03.686Z · score: 17 (18 votes)

Bragging Thread July 2015

2015-07-13T22:01:03.320Z · score: 4 (5 votes)

Group Bragging Thread (May 2015)

2015-05-29T22:36:27.000Z · score: 7 (8 votes)

Meetup : Bratislava Meetup

2015-05-21T19:21:00.320Z · score: 1 (2 votes)