Rationality Vienna Meetup June 2019 2019-04-28T21:05:15.818Z · score: 9 (2 votes)
Rationality Vienna Meetup May 2019 2019-04-28T21:01:12.804Z · score: 9 (2 votes)
Rationality Vienna Meetup April 2019 2019-03-31T00:46:36.398Z · score: 8 (1 votes)
Does anti-malaria charity destroy the local anti-malaria industry? 2019-01-05T19:04:57.601Z · score: 64 (17 votes)
Rationality Bratislava Meetup 2018-09-16T20:31:42.409Z · score: 18 (5 votes)
Rationality Vienna Meetup, April 2018 2018-04-12T19:41:40.923Z · score: 10 (2 votes)
Rationality Vienna Meetup, March 2018 2018-03-12T21:10:44.228Z · score: 10 (2 votes)
Welcome to Rationality Vienna 2018-03-12T21:07:07.921Z · score: 4 (1 votes)
Feedback on LW 2.0 2017-10-01T15:18:09.682Z · score: 11 (11 votes)
Bring up Genius 2017-06-08T17:44:03.696Z · score: 56 (51 votes)
How to not earn a delta (Change My View) 2017-02-14T10:04:30.853Z · score: 10 (11 votes)
Group Rationality Diary, February 2017 2017-02-01T12:11:44.212Z · score: 1 (3 votes)
How to talk rationally about cults 2017-01-08T20:12:51.340Z · score: 5 (10 votes)
Meetup : Rationality Meetup Vienna 2016-09-11T20:57:16.910Z · score: 0 (1 votes)
Meetup : Rationality Meetup Vienna 2016-08-16T20:21:10.911Z · score: 0 (1 votes)
Two forms of procrastination 2016-07-16T20:30:55.911Z · score: 10 (11 votes)
Welcome to Less Wrong! (9th thread, May 2016) 2016-05-17T08:26:07.420Z · score: 4 (5 votes)
Positivity Thread :) 2016-04-08T21:34:03.535Z · score: 26 (28 votes)
Require contributions in advance 2016-02-08T12:55:58.720Z · score: 62 (62 votes)
Marketing Rationality 2015-11-18T13:43:02.802Z · score: 28 (31 votes)
Manhood of Humanity 2015-08-24T18:31:22.099Z · score: 10 (13 votes)
Time-Binding 2015-08-14T17:38:03.686Z · score: 17 (18 votes)
Bragging Thread July 2015 2015-07-13T22:01:03.320Z · score: 4 (5 votes)
Group Bragging Thread (May 2015) 2015-05-29T22:36:27.000Z · score: 7 (8 votes)
Meetup : Bratislava Meetup 2015-05-21T19:21:00.320Z · score: 1 (2 votes)


Comment by viliam on Have epistemic conditions always been this bad? · 2020-01-26T23:28:47.822Z · score: 4 (2 votes) · LW · GW
Were people in the USSR getting barred from their constitutional duty to work?

You could be fired from your job and then put into prison for violating your constitutional duty, and no one would care.

But in practice, you were supposed to find a job that was sufficiently low-status, or dangerous to health, or something like that. Such jobs were allowed to hire even "politically unreliable" people. (Refusing to take one of those jobs would be a violation of your constitutional duty.)

Comment by viliam on Matt Goldenberg's Short Form Feed · 2020-01-26T22:36:33.872Z · score: 7 (3 votes) · LW · GW

Being a rationalist is not the only trait individual rationalists have. Their other traits may prevent you from clicking with them. There may be traits common in the Bay Area that you find unpleasant.

Also, being an aspiring rationalist is not a binary thing. Some people try harder; some join only for the social experience. Assuming the base rate of people "trying things hard" is very low, I would expect that even among people who identify as rationalists, the majority are there mostly for social reasons. If you try to fit in with the group as a whole, it means you will mostly try to fit in with these people. But if you are not there primarily for social reasons, that is already one thing that will make you not fit in. (No disrespect meant here, by the way. Most people who identify as rationalists only for social reasons are very nice people.)

What you could do, in my opinion, is find a subgroup you feel comfortable with, and accept that this is the natural state of things. Also, speaking as an introvert, I can more easily connect with individuals than with groups. The group is simply a place where I can find such individuals with greater frequency, and conveniently meet more of them at the same place.

Or -- as you wrote -- you could create such a subgroup around yourself. Hopefully that will be easier in the Bay Area than it would be elsewhere.

Comment by viliam on G Gordon Worley III's Shortform · 2020-01-26T22:13:41.173Z · score: 2 (1 votes) · LW · GW

I wonder how much the "great loneliness for creatures like us" is a necessary outcome of realizing that you are an individual, and how much it is a consequence of e.g. not having the kinds of friends you want to have, i.e. something that you wouldn't feel under the right circumstances.

From my perspective, what I miss is people similar to me living close to me. I can find like-minded people, but they live in different countries (I met them at LW meetups). Thus, I feel more lonely than I would if I lived in a different city. Similarly, being more extraverted or having greater social skills might help me find similar people nearby. Also, sometimes I meet people who seem like they could be what I miss in my life, but they are not interested in being friends with me. Again, this is probably a numbers game; if I could meet ten or a hundred times more people of that type, some of them would be interested in me.

(In other words, I wonder whether this is not yet another case of "my personal problems, interpreted as a universal experience of the humankind".)

Yet another possible factor is the feeling of safety. The less safe I feel, the greater the desire to have allies, preferably perfect allies, preferably loyal clones of myself.

Plus the fear of death. If, in some sense, there are copies of me out there, then, in some sense, I am immortal. If I am unique, then at my death something unique (and valuable, at least to me) will disappear from this universe, forever.

Comment by viliam on How Doomed are Large Organizations? · 2020-01-26T16:06:48.310Z · score: 4 (2 votes) · LW · GW

Depends on the situation. Sometimes people can do things independently of each other. Sometimes people do things together because it is more efficient that way. And sometimes people do things together because an artificial obstacle prevents them from doing things individually. (In other words, mazes are trying to change the world in a way that makes mazes mandatory.)

As a made-up example, imagine three cities, with one shop in each city, each shop having a different owner. (Assume most people buy in their local shop.) Maybe the situation is such that it would be more profitable if a single shop chain operated in all three cities. But maybe there is a shop chain successfully lobbying to make it illegal to own an individual shop. Or not literally illegal, but perhaps they propose a law that imposes a huge fixed cost on each shop or shop chain, so the owners of individual shops each have to pay this tax per shop, while the owner of a chain pays it only once for the entire chain. Such a law could make shop chains more profitable than uncoordinated shops, even in situations where without the law they would be less profitable.

So, we have two levels of the game here: what is more profitable assuming no artificial obstacles, and what is more profitable when players are allowed to lobby for artificial obstacles against competitors who use a different strategy. (That is, suppose the state is not so corrupt that it would pass a law making life specifically easy for corporation A and difficult for an equivalent corporation B, but it can be convinced to pass a law that makes life easier for certain types of corporations and more difficult for other types. So corporation A cannot use the law as a weapon against an equivalent corporation B, but e.g. large companies can use the law as a weapon against small companies. Creating a large fixed cost for everyone is a typical example.)
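The made-up example can be run as a toy calculation. All the numbers here are invented for illustration; the point is only that a fixed cost paid once per entity flips which structure is more profitable:

```python
# Illustrative numbers only: three shops, each earning 100 in profit.
# The chain is assumed slightly less efficient (maze overhead of 5 per shop).
independent_profit_per_shop = 100
chain_profit_per_shop = 95

def total_profit(per_shop, shops, fixed_cost, payments):
    """Total profit after paying the lobbied fixed cost `payments` times."""
    return per_shop * shops - fixed_cost * payments

for fixed_cost in (0, 30):
    independents = total_profit(independent_profit_per_shop, 3, fixed_cost, payments=3)
    chain = total_profit(chain_profit_per_shop, 3, fixed_cost, payments=1)
    print(f"fixed cost {fixed_cost}: independents {independents}, chain {chain}")
# → fixed cost 0: independents 300, chain 285
# → fixed cost 30: independents 210, chain 255
```

Without the law, the independents win (300 vs 285); with the fixed cost, the chain wins (255 vs 210), even though nothing about its actual efficiency changed.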

To answer your question: maybe sometimes things suck because there are more people, but sometimes things suck only because mazes have the power to change the law to make things suck.

Comment by viliam on How Doomed are Large Organizations? · 2020-01-23T22:13:15.829Z · score: 9 (5 votes) · LW · GW

It's as if the power of an organization were the square root, or perhaps only the logarithm, of the number of people who work for it. It is horrible to see the diminishing returns, but larger still means stronger.
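Taking the square-root and logarithm metaphors literally (they are only stand-ins for "sublinear", not a measured law), the diminishing returns look like this:

```python
import math

# Hypothetical sublinear "power" as a function of headcount.
# sqrt and log are the comment's own metaphors, nothing more.
for people in (10, 100, 1000, 10000):
    print(people, round(math.sqrt(people), 1), round(math.log10(people), 1))
# Each 10x in headcount multiplies sqrt-power by only ~3.2 and adds just 1
# to log-power: larger still means stronger, but far less than 10x stronger.
```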

Maybe this is the actual reason why a centralized economy sucks. Not mere lack of information (as Hayek assumed), because in theory the government could employ thousands of local information collectors and process the collected data on computers. It's the maze-nature that prevents it from doing this in a sane way. The distributed economy wins, despite all its inefficiencies (everyone reinventing the copyrighted wheels, burning money in zero-sum games, etc.), because the total size of all its mazes is smaller.

But in the long term, the successful mazes try to convert the entire country into one large maze: increasing regulation, raising the fixed costs of doing things, and otherwise changing the playing field so that total power matters more than power per individual.

Comment by viliam on How Doomed are Large Organizations? · 2020-01-22T20:55:47.512Z · score: 6 (3 votes) · LW · GW

I suppose that an increase in mazes means that when external pressure appears politically fashionable, more people in positions of relative power are motivated to (appear to) move in the direction of the pressure, whatever it is, because they don't really care either way. This is how companies become woke, ecological, etc. (At least in appearance, because they will of course Goodhart the shit out of it.)

A different question is why pressure in the direction of e.g. social justice is stronger than pressure in the direction of e.g. Christianity. More activists? Better coordination? Strategic capture of important resources, such as the media? Or maybe it is something completely different, e.g. social justice warriors pay less attention when their goals are Goodharted? (Firing one employee who said something politically incorrect is much cheaper than e.g. closing the shops on Sunday.) Before you say "left vs right", consider that e.g. veganism is coded left-wing, but we don't hear about companies turning vegan under external pressure. Or perhaps it's all just a huge Keynesian beauty contest, where anything, once successful, becomes fixed, and the social justice warriors just had lucky timing. I don't know.

Comment by viliam on Is backwards causation necessarily absurd? · 2020-01-14T23:33:23.979Z · score: 5 (3 votes) · LW · GW
Another relativistic argument against time flowing is that simultaneity is only defined relative to a reference frame. Therefore, there is no unified present which is supposed to be what is flowing.

Relativity does not make the arrow of time relative to the observer. Events in one's future light cone remain in their future light cone from anyone else's perspective, too.

Comment by viliam on Predictors exist: CDT going bonkers... forever · 2020-01-14T23:22:11.343Z · score: 4 (2 votes) · LW · GW

Even if most people on LW are probably familiar with the abbreviation, someone may come here following a link from elsewhere.

Comment by viliam on Is it worthwhile to save the cord blood and tissue? · 2020-01-12T20:57:38.339Z · score: 4 (2 votes) · LW · GW

There is also the question of how soon to cut the cord. The reason for cutting it a bit later is that blood from the cord keeps flowing into the baby. Unfortunately, I have completely forgotten why those few extra drops are supposed to be so important, but I was told the reason years ago and it sounded just as important as the reason for storing the cord blood.

Comment by Viliam on [deleted post] 2020-01-11T21:42:05.048Z

Hello, anonymous person posting an article called "MattG's Shortform". :D

Comment by viliam on Rationalist Scriptures? · 2020-01-10T23:50:19.294Z · score: 4 (2 votes) · LW · GW
Related, has anyone compiled a list of "Rationalist Wisdom"? Like a bunch of sayings that distill Rationalism down that we can point newbs to?

Writing is a skill; you can't simply decide to do it and automatically do it well, even if you believe it is an important thing to do. I hope that in the future, some people with sufficiently strong writing skills will become rationalists, and one of them will prioritize making simple, accessible rationality materials for beginners.

More precisely, writing is more than one skill. I mean, Eliezer is definitely good at writing -- the success of HPMoR is evidence of that -- and yet it's his Sequences that people complain about. Seemingly, "good at blogging" and "good at writing fiction" don't imply "good at writing textbooks for beginners". So it's a person good at writing textbooks for beginners that we are waiting for, to join the rationality community and produce the textbooks.

Comment by viliam on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-06T23:07:29.741Z · score: 2 (1 votes) · LW · GW

Yep. Looking around me, getting Slovakia out of the EU would be a relatively easier task than making it adopt UBI, for the reasons you mentioned (plus one you didn't: the availability of foreign helpers).

Comment by viliam on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-05T14:02:12.599Z · score: 5 (3 votes) · LW · GW

Burning down a building is easier than constructing it.

People are celebrating Dominic Cummings for changing the building. I'd like to wait and see what specific kind of change it turns out to have been.

In the meanwhile, I accept the argument that even burning down the building requires more skills and agency than merely talking about the building. In this way, Dominic Cummings has already risen above the level of the rationalist plebs. But how high, that still remains to be seen.

Comment by viliam on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-05T13:33:17.905Z · score: 15 (3 votes) · LW · GW
There is something in the process there that ought to be emulated, even if you disagree with the instrumental outcome.

I see your point, but the outcome is important if you want to improve things, not just become famous for changing them.

Comment by viliam on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-04T23:54:26.146Z · score: 21 (8 votes) · LW · GW

If I may offer my opinion, it seems to me that this debate was a proxy for a long-term problem, which I would roughly describe as "how much exactness should be the norm on LW?".

When Eliezer was writing the Sequences, it was simple: whatever he considered right, that was the norm. There were articles with numbers and equations, articles that quoted scientific research, articles that expressed personal opinion or preference, and articles with fictional evidence. And because all those articles came from the same person, together they created the style that has attracted many readers.

But now that it is a community blog, there are people with a preference for numbers and equations, and people with a preference for personal opinion. It's like they speak different languages. And sometimes they disagree with each other. And when they do, it is difficult to resolve the situation, because each of them expects different norms of... what kind of argument is valid, and what kind of content belongs here.

If we limit ourselves to things we can define and describe exactly, the extreme of that would be merely discussing equations. Because the real world is messy and complicated, and people are even more messy and complicated. And there is nothing wrong with the equations -- the articles on math or decision theory are great and definitely a part of the LW intellectual tradition -- but we also want to use rationality in real life, as humans, in interaction with other humans, and we want to optimize this, even if we cannot describe it exactly.

The opposite extreme, obviously, is introducing all kinds of woo. Meditation feels right, and Buddhism feels right, and Circling feels right, and... dunno, maybe tomorrow praying will feel right, and homeopathy will feel right. (And even if they won't, the question is what algorithm will draw the line. Is it "I was introduced to it by a person identifying as a rationalist" vs "I have already seen this done by people who don't identify as rationalists"?)

I would like this community to retain the ability to speak both languages. But it doesn't work well when different people specialize in different languages. At best, it would be a website that hosts two kinds of completely unrelated topics. At worst, those two groups would attack each other.

Comment by viliam on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-04T23:11:52.134Z · score: 4 (2 votes) · LW · GW
I think of Schelling points as the things that result without specific coordination, but only common background knowledge.

Yes, but specific coordination today can create the common background knowledge for tomorrow.

Comment by viliam on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-04T23:07:28.229Z · score: 7 (4 votes) · LW · GW

Similarly to Eliezer, I am impressed to see someone who "speaks our tribe's language" in a position of political power, but also confused why their list of achievements contains (or consists entirely of) Brexit.

To me it seems like the original strategy behind the Brexit referendum was simply "let's hold a referendum that will lose, but will give us the power to convert any future complaints into political points by saying 'we told you'". And when the referendum succeeded, it became obvious that no one actually expected this outcome, and the people tasked with handling the success are mostly trying to run away and hide, wait for a miracle, or delegate the responsibility to someone else. (Because now it puts them in the position where any future complaints will generate political points for their opponents. And future complaints are inevitable, always.)

I expect that as soon as Brexit is resolved either way -- i.e. when the decision about staying or leaving is definitely made, and the blame for it is definitely assigned -- the situation will revert to politics as usual.

Comment by viliam on Predictive coding & depression · 2020-01-04T22:36:19.711Z · score: 2 (1 votes) · LW · GW

Just a random thought: this could also explain why rationality and depression often seem to go together. Rational people are more likely to notice things that could go wrong, uncertainty, the planning fallacy, etc. -- but in this model, those are exactly the things that assign a lower probability to success.

Even in the usual debates about "whether rationality is useful", the usual conclusion is that rationality won't make you win a lottery (not even the startup lottery), but mostly helps you to avoid all kinds of crazy stuff that people sometimes do. Which from some perspective sounds good (imagine seeing a long list of various risks with their base rates, and then someone telling you "this pill will reduce the probability of each of them to 10% of the original value or less"), but is also quite disappointing from the perspective of wanting strong positive outcomes ("will rationality make me a Hollywood superstar?" "no"; "a billionaire, then?" "it may slightly increase your chance, but looking at absolute values, no"; "and what about ...?" "just stop, for anything other than slightly above average version of ordinary life, the answer is no"). Meanwhile, irrationality tells you to follow your passion, because if you think positively, success is 100% guaranteed, and shouldn't take more than a year or two.

Comment by viliam on Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk · 2020-01-04T17:23:04.528Z · score: 9 (4 votes) · LW · GW

Well, that sucks. Good point that no matter what the rules are, people can simply break them. The more you think about the details of the rules, the easier it is to forget that the rules do not become physical law.

Though I'd expect the social consequences for breaking such rules to be quite severe. Which, again, deters some kinds of people more and others less.

Comment by viliam on Normalization of Deviance · 2020-01-04T17:06:42.396Z · score: 7 (4 votes) · LW · GW

I was shocked to hear about doctors in hospitals not washing their hands (from a medical student who was shocked to see it during his internship), and when I discussed it privately with some doctors, they told me it all depends on the boss. When the boss in the hospital washes his hands religiously, and insists that all employees wash their hands all the time, they will. But when the boss ignores this norm, then... ignoring the norm becomes a local symbol of status. So the norm within the same hospital may change dramatically in a short time, in either direction, when the boss is replaced.

I saw a similar thing in software projects. You almost always have a list of "best practices", but it makes a big difference whether the highest-status developer says "we do this all the time, no exceptions", or -- much more frequently -- "of course, sometimes it doesn't make much sense to...", in which case the scope of "sometimes" gradually expands, and it becomes a symbol of high status not to write unit tests. You can have two projects in the same company, with the same set of "best practices" on paper and the same tools for automatically checking conformance (only, in one team, the sending of error messages is turned off), and still dramatically different code quality.

(And actually this reminds me of a period when making fun of "read the Sequences" was kinda high-status here. I don't hear it recently, and I am not sure what that means: maybe everyone has read the Sequences, or everyone has forgotten about them so the joke is no longer funny because no one would know what it refers to, or maybe both sides just agreed not to discuss this topic publicly anymore.)

Comment by viliam on bgold's Shortform · 2020-01-02T20:25:38.366Z · score: 4 (3 votes) · LW · GW

Related: Reason as memetic immune disorder

I like the idea that having some parts of you protected from yourself makes them indirectly protected from people or memes who have power over you (and want to optimize you for their benefit, not yours). Being irrational is better than being transparently rational when someone is holding a gun to your head. If you could do something, you would be forced to do it (against your interests), so it's better for you if you can't.

But what now? It seems like rationality and introspection are a bit like defusing a bomb -- great if you can do it perfectly, but it kills you when you do it halfway.

It reminds me of a fantasy book with a system of magic where wizards could achieve four levels of power. Being known as a 3rd-level wizard was a very bad thing, because all the 4th-level wizards were trying to magically enslave you -- to get rid of a potential competitor, and to gain a powerful slave (I suppose the magical cost of enslaving someone didn't grow proportionally to the victim's level).

To use an analogy, being biologically incapable of reaching the 3rd level of magic might be an evolutionary advantage. But at the same time, it would prevent you from ever reaching the 4th level.

Comment by viliam on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-02T17:40:01.687Z · score: 14 (3 votes) · LW · GW

I believe there is a possible middle way between two extremes:

1) There are no questions, ever.

2) When someone writes "today I had an ice-cream and it made me happy", they get a comment: "define 'happiness', or you are not rational".

As Habryka already explained somewhere, the problem is not asking questions per se, but asking them in a specific low-effort way.

I assume that most of us have some idea of what "authentic" (or other such words) means, but also that it would be difficult to provide a full definition. So the person who asks should provide some hints about the purpose of the question. Are they a p-zombie who has absolutely no idea what the words refer to? Do they see multiple possible interpretations of the word? In that case it would help to point at the difference, which would allow the author to say "the first one" or maybe "neither, it's actually more like X". Do they see some contradiction in the naive definition? For example, what would "authentic" refer to if the person simply has two brain modules that want contradictory things? Again, it would help to ask about the specific thing. Otherwise there is a risk that the author will spend 20 minutes trying to write a good answer, only to get "nope, that's not what I wanted" in return.

Comment by viliam on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-01T16:16:51.307Z · score: 25 (7 votes) · LW · GW

Let's try: "Authenticity" is an opposite of "pretending".

There are situations where it is useful to pretend to have thoughts or feelings, to manipulate other people's perception of us. This can be relatively straightforward, such as signaling loyalty to a group by displaying positive emotions toward things associated with the group and negative emotions toward its enemies. Or more complicated, such as trying to appear harmless in order to deceive opponents, or pretending to be irrational about something as a way to signal a credible precommitment.

As a first approximation, "authenticity" means communicating one's thoughts and feelings as one feels them, without adding thoughts and feelings made up for strategic purposes.

This is complicated by the fact that humans are not perfect liars; they do not have one brain module for truth and another brain module for deception. Sometimes deception is best achieved by self-deception, which raises the question of what "authenticity" means for a self-deceiving person. But going further, self-deception is also often imperfect, and requires some kind of active maintenance, for example noticing thoughts that contradict the projected image and removing them. In this case, "authenticity" also includes abandoning the maintenance and acknowledging the heretical thoughts.

Comment by viliam on Plausible A.I. Takeoff Scenario Short Story · 2020-01-01T14:58:10.600Z · score: 3 (2 votes) · LW · GW

Related: Universal Paperclips

Comment by viliam on Programmers Should Plan For Lower Pay · 2019-12-31T22:16:01.604Z · score: 9 (4 votes) · LW · GW
Why should somebody whom society left behind be expected to pay in their pursuit to have a normal life like everybody else? These people are just getting their lives started, I don't want them to have a looming payment hanging over their heads.

Do as you wish, of course; it's your (potential) money and your time. My perspective was that getting some of the money back might allow you to teach more people. Like, you can afford to donate money to ten people, but you could loan money to a hundred people; and although getting a gift is better than getting a loan, a hundred is also more than ten. On the other hand, if money is not the bottleneck but your time is, then this doesn't make sense. No "should"s were involved in the calculation.

Also, payments in the style of Lambda School are not that bad. They are limited in time (unlike school loans), and you only pay if you get a well-paying job. That means that having the new job and the debt is already an improvement over having the old job (and then the debt expires, so it becomes even better), and if you fail to get the promised new job, there is no payment.

Comment by viliam on Programmers Should Plan For Lower Pay · 2019-12-31T22:02:08.349Z · score: 2 (1 votes) · LW · GW

Seems to me that there is pressure on developers to become "full-stack developers" and "dev-ops", which would make them more fungible. But there are also other forces working in the opposite direction, which seem to be stronger at the moment.

Comment by viliam on l · 2019-12-30T14:30:26.514Z · score: 3 (2 votes) · LW · GW

I have recently found this nice note-taking program: cherrytree. You probably have different needs and preferences, so it may not be the best choice for you, but it works on Linux and can save notes in an XML format (which allows e.g. searching them with command-line tools).
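To illustrate the "search with command-line tools" point, here is a sketch of a grep-like search over an XML notes file using only the standard library. The tiny demo file and element names are made up for the example; cherrytree's actual schema differs, so you would adapt the iteration to your own file:

```python
import xml.etree.ElementTree as ET

def search_notes(path, keyword):
    """Yield text fragments from any element that contains `keyword`."""
    for elem in ET.parse(path).iter():
        text = elem.text or ""
        if keyword.lower() in text.lower():
            yield text.strip()

# Tiny demo file standing in for a real notes export.
with open("notes_demo.xml", "w") as f:
    f.write("<notes><node>buy bread</node><node>read Hayek</node></notes>")

print(list(search_notes("notes_demo.xml", "hayek")))  # → ['read Hayek']
```

Plain `grep` over the XML file works too, of course; the advantage of parsing is that you can filter by node structure, not just raw text.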

Comment by viliam on Programmers Should Plan For Lower Pay · 2019-12-30T01:38:48.611Z · score: 6 (3 votes) · LW · GW

I think that automation can save a company a lot of money. As an individual, if you automate something for yourself, you have probably spent more time analyzing the problem and writing the code than the task originally took. But in a company, you can automate a repetitive task done by hundreds of people. And those people made errors when they did it manually, so you have also improved quality. If you save 40 people 1 hour a week, you have already paid for your salary. Actually, the company now gets 1 extra hour from those 40 people forever, but they only paid you for the automation once.
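The "40 people, 1 hour a week" claim is easy to check with back-of-envelope numbers (the working weeks per year and the hourly cost are assumptions for illustration, not from the comment):

```python
# Back-of-envelope value of an automation that saves 40 people 1 hour/week.
people_helped = 40
hours_saved_per_week = 1
weeks_per_year = 48   # assumed working weeks per year
cost_per_hour = 30    # assumed fully loaded cost of one employee-hour

yearly_saving = people_helped * hours_saved_per_week * weeks_per_year * cost_per_hour
print(yearly_saving)  # → 57600, recurring every year, for a one-time cost
```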

Well, that would be the ideal case. Then, the programmers would be drowning in money.

But in reality, programmers often spend a lot of time adding features that don't really save costs, just because someone thought they would be nice to have (why read the numeric constant from a configuration file, if you can have a configuration dialog, preferably as a REST service?), or because the idea of having a program makes everyone suggest that it should also do this and that: things that would be completely crazy to do by hand, and become "not crazy, but not profitable either" when automated (the company used to calculate some value once per year; now it is calculated every 5 minutes every day, and you have on-call duty during Christmas in case the server stops responding). Also, the people who make decisions don't really think things through, and then they keep changing their minds during development, so the programmers have to rewrite the code over and over. (But the programmers need to do "sprints" and keep their tasks in JIRA to make sure they don't waste time on unproductive things, such as studying new technologies.)

As a result of these two things, the programmer salaries become what they are.

There is also the problem that companies usually can't distinguish between good and bad programmers, so the bad ones are overpaid and the good ones underpaid, to make it work on average. Similarly, junior programmers are somewhat overpaid and senior programmers underpaid.

Also, good programmers keep learning all their life. But the company is usually not going to pay for any of that. ("Why would we pay anyone for learning X? If we ever need a person who knows X, we can simply hire someone who already knows.")

In my experience, companies are inflexible. When you read the online debates, the company owners typically complain "we need someone who knows X, Y, Z, and we have been looking for years and can't find anyone", and the audience asks "so how much do you offer?", and the boss says the number, and the audience goes "but that's below the market wage, you should add 50% and people will start coming to you", and the boss goes "no way", and the audience goes "so offer some other benefit, such as part-time work or remote work", and the boss goes "no way", and that's the end of the debate. (That's the equivalent of me precommitting to never buy bread for more than 10 cents, and then writing blog posts about a national bread crisis while walking past shops that are full of bread. It's not a crisis; it's my stubbornness about how much things should cost.)

Comment by viliam on Programmers Should Plan For Lower Pay · 2019-12-30T01:04:04.709Z · score: 10 (5 votes) · LW · GW

Also, smart people often live in a bubble of other smart people. Get out of the bubble and then try again teaching programming.

Recently I got a temporary side job teaching "computer skills" to random people. Most of them had serious problems understanding the "IF" statement in Excel.

Comment by viliam on Programmers Should Plan For Lower Pay · 2019-12-30T00:20:14.124Z · score: 2 (1 votes) · LW · GW

The idea of paying your students for studying sounds fantastic! Maybe you could make a contract with them that they will return you the money if they get a software development job (similar to how Lambda School does it, except they don't pay their students, only teach them for free).

Comment by viliam on Good goals for leveling up? · 2019-12-29T23:48:32.250Z · score: 11 (6 votes) · LW · GW

Welcome! As a dinosaur in the community, I can only upvote your answer.

I think about things in life as either "required" or "optional". Required are the ones whose absence would hurt you, and which cannot be fully compensated by being great at something else; the archetypal example would be health. Optional are the ones that may be great to have, but you could have a fully satisfying life without them, too; the archetypal example would be playing a piano. (Of course it's not completely black and white.)

A good life would then require being good (even if not great) at the required things, and excelling at one or two of the optional ones. Because the optional ones are usually what gets you money. Health may be one of the most important things in your life, but having superior health doesn't buy you food or a place to sleep. However, being a great piano player could get you a job (traditional or not), which could pay your bills.

Going into more technical detail, the required things have a concave usefulness curve (diminishing returns). It is much better to exercise than to not exercise at all, but you should not worry too much about having the best exercise regime in the world (unless you want to turn this into a source of income, which is in the "optional" territory). In other words, you should focus on improving the required skill you currently suck at most, because improving that one will give you the most benefit. -- With the optional things it is the other way round: the curve is convex. Being a mediocre piano player is not much better than not playing piano at all; it is a waste of time either way. Among the optional things, pick one and become great at it; and maybe have another one as a backup.

Holistic leveling up would then consist of making a list of all "required" things, evaluating sincerely how good you are at each of them, and focusing on the ones you have most neglected. Plus doing something about your selected "optional" thing.

Going too wide on the optional things... trust me, I understand the allure... but if you try writing blogs and playing a piano and learning a foreign language and studying math and coding your website and working on your novel... while you eat chocolate, drink Cola, barely go outside, and sleep 4 hours a night... you are doing it wrong. Yet it is easy to deny this wrongness by focusing on the "holistic" part of the picture ("writing and piano and languages, how richer and more balanced than those people who only play the piano!"). Gods know how much my brain tries to drag me this way.

I am saying this because I have a feeling that "writing X blogs per month" is very likely a bad goal. It feels like something that should be done -- if you are reading a community blog, it feels fair to reciprocate by writing; also, writing good articles is high-status here -- but unless you have a comparative advantage here, it is most likely not the best course of action.

Disclaimer: I don't discourage anyone from writing, just because they are not a superstar! Only from having "X blogs per month" as a goal. Do the thing that is best for you to do; and then perhaps write a report on it when you achieved progress. Just don't write instead of doing the best thing.

Comment by viliam on Stupidity and Dishonesty Explain Each Other Away · 2019-12-29T22:47:55.799Z · score: 13 (4 votes) · LW · GW

I already heard a similar idea, expressed like: "To claim this, you must be either extremely stupid or extremely dishonest. And I believe you are a smart person."

But although the separation seems nice on paper, I wonder how it works in real life.

On one hand, I can imagine the prototypes of (1) an intelligent manipulative cult leader, and (2) a naive brainwashed follower; which would represent intelligent dishonesty and stupid honesty respectively.

On the other hand, dishonest people sometimes "get high on their own supply", because it is difficult to keep two separate models of the world (the true one and the official one) without mixing them up somewhat, and because by spreading false ideas you create an environment full of false ideas, which in turn exerts social pressure on you. I think I read about some cult leaders who started as cynical manipulators and later started reverse-doubting themselves: "What if I accidentally stumbled upon the Truth or got a message from God, and the things I believed I was making up when preaching to my followers were actually the real thing?" Similarly, a dishonest politically active person will find themselves in a bubble they helped to create, and then their inputs are filtered by that bubble. -- And stupid people can get defensive when called out on their stupid ideas, which can easily lead to dishonesty. (My idea is stupid, but I believe it sincerely. You show me a counter-example. Now I get defensive and start lying just to have a counter-argument.)

Comment by viliam on Has there been a "memetic collapse"? · 2019-12-28T14:36:47.246Z · score: 20 (7 votes) · LW · GW

That is also a factor, but I think a stronger impact comes from people meeting each other offline less. Even before the internet, TV already had this effect -- the time you spend watching TV is time you don't spend interacting with other humans.

You get an opportunity to see other people's parenting when you visit them at their homes. You lose this opportunity if each of you spends the evening in front of a TV or at a computer (even if you send messages to each other, you don't see their interaction with their kids); if everyone goes on vacation alone (because that is easier to organize, and everyone has different preferences); etc.

Living in big cities also changes things. If you live in a small village, your friends are mostly in the same village, and it takes 5 minutes to see each other. If you live in a big city and your friends are 30 minutes away, you probably won't meet them just because you'd like to spend an hour outside. (My situation right now: I have small kids, my friends have small kids; it would be great to just drop them at the same playground and talk while watching them. But each of us has a playground next door, which is easier than driving 30 minutes. If we lived in a village instead, the kids would naturally play at the same place.)

There is also a thing I observe in my country: before communism, there were many activities for people. During communism, everything spontaneous was illegal, but there were activities organized by the Communist Party instead. After communism, the activities organized by the Party stopped, but the original ones did not regenerate successfully. (Maybe because now they have to compete with TV and the internet.) The few activities that exist for kids these days are mostly organized by schools; which is good to have, but the problem is that the kids meet the same classmates there that they meet every day, instead of meeting strangers with a common interest, as used to be more frequent when I was a kid. For example, I used to attend a "math round" (a voluntary extracurricular activity for math olympiad participants) when I was a kid; but as far as I know, such a thing simply does not exist here today.

I don't want to make too big a generalization, because I may live in a bubble. But it seems to me that, generally, people meet offline less. You need to be strategic about this; most people are not.

Comment by viliam on Vaccine... Help? Deprogramming? Something? · 2019-12-28T13:44:05.945Z · score: 3 (2 votes) · LW · GW

Instead of reacting to individual comments, I'll try to summarize my thoughts here. Starting with disclosure: I have zero medical education, but I have two kids who are vaccinated, and my wife (a biochemist, used to search things in medical papers) did some research about it, and we talked about it; though there is a chance I may be misinterpreting what she told me.

For all practical purposes, if you are an average person, just get vaccinated. The research will cost you time and energy, and there is 99%+ chance your conclusion will be "get vaccinated" anyway. The rest of this comment is only useful if you are interested in truth and clear thinking per se.

First, remember the virtue of narrowness. More can be said about one specific vaccine than about vaccines in general. For example, if you want to know whether vaccine X contains thimerosal, try finding the composition of vaccine X (assuming optimistically that it is publicly available knowledge), rather than talking about "thimerosal" or "vaccines" in general. Consider the possibility that there may be multiple different recipes for vaccine X; in that case, find out which one is relevant for you.

Second, quantities matter. It's not just whether "aluminum is bad", but also how much aluminum we are talking about. To give an example from a different area, we probably agree that radiation is bad for health. But you get some dose of radiation simply by walking outside for an hour on a sunny day. So if something gives you radiation that is, say, 1% of an hour's walk outside, you can probably ignore it, although on paper it will look scary. (On the other hand, I don't consider the comparison between "aluminum in food" and "aluminum injected into the body" completely fair. You'd have to find a coefficient for how much of the aluminum in food actually gets into the bloodstream.)
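For concreteness, here is the shape of that comparison as a sketch; every number in it is made up purely for illustration (the doses, the absorption coefficient, all of it), not real dosimetry or pharmacokinetics:

```python
# Illustrative only: all numbers below are invented to show the *kind* of
# comparison, not real measurements.

# Radiation: compare the scary-sounding source against an everyday baseline.
background_dose = 100.0   # hypothetical units from an hour's walk on a sunny day
gadget_dose = 1.0         # hypothetical units from the source in question
print(f"gadget dose = {gadget_dose / background_dose:.0%} of the walk")

# Aluminum: a fair food-vs-injection comparison needs an absorption
# coefficient, because only a fraction of ingested aluminum reaches the
# bloodstream, while an injected dose reaches it entirely.
ingested_mg = 10.0        # hypothetical aluminum eaten per day
absorption = 0.003        # hypothetical fraction absorbed from the gut
injected_mg = 0.5         # hypothetical aluminum in one injection
print(f"absorbed from food: {ingested_mg * absorption} mg")
print(f"injected:           {injected_mg} mg")
```

The point is only that the comparison is a ratio of quantities, not a yes/no question about a substance; plug in real numbers for the real answer.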

Third, there is a difference between the ideal case and the real case. If a study tells you the side effects of a vaccine, you should expect that in real life they are probably going to be greater. Unlike the scientists carefully performing the study, some bored person in production will sometimes get the dosage wrong, cook the viruses for 5 minutes instead of 15, forget to turn on the fridge, administer an expired vaccine rather than throwing it out, etc. On the other hand, the same is true of any other medicine or food, so it probably does not make vaccines more dangerous than the other things.

Now we get to the utilitarian calculus. The vaccines have some negative side effects (to say the very least, they hurt). But the diseases also have some negative side effects, and if you crunch the numbers, it turns out that it is better to vaccinate everyone than to have a fraction of your unvaccinated population die, or something like that.
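The kind of number-crunching meant here can be sketched as a toy expected-harm comparison; all the probabilities and harm values below are invented for illustration, not real epidemiology:

```python
# Toy trolley calculus with made-up numbers: per-person expected harm,
# on an arbitrary common scale.
p_disease_if_unvaccinated = 0.01  # hypothetical chance of catching the disease
harm_of_disease = 100.0           # hypothetical harm if you catch it
p_side_effect = 0.001             # hypothetical chance of an adverse reaction
harm_of_side_effect = 5.0         # hypothetical harm of that reaction

expected_harm_unvaccinated = p_disease_if_unvaccinated * harm_of_disease
expected_harm_vaccinated = p_side_effect * harm_of_side_effect

print("unvaccinated:", expected_harm_unvaccinated)
print("vaccinated:  ", expected_harm_vaccinated)
```

A real calculation is much harder, mainly because `p_disease_if_unvaccinated` is not a constant: it depends on how many other people are vaccinated, which is exactly the herd-immunity and free-riding issue.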

Okay, the virtue of narrowness again: is this true for each vaccine individually, or only for vaccination in general? Good luck; getting all the data will take you a lot of time! You must explore each disease separately. What is the chance your unvaccinated child would get sick? (In a counterfactual world where no one is vaccinated, or where only your child is free-riding on the herd immunity?) Some diseases are unlikely to get to you if most people around you are vaccinated. Other diseases will likely get to you anyway. What are the consequences? What is the second-best treatment for the unvaccinated person?

The tricky thing is that these numbers change all the time. The prevalence of diseases gradually decreases with increased hygiene and vaccination, then suddenly increases with a wave of immigration, possibly depends on weather, etc. The alternative treatments evolve. The vaccines evolve, too, to have fewer side effects. In other words, the result calculated ten years ago may be different today, in either direction. This is mostly relevant when the result calculated ten years ago was relatively close to zero.

Notice that there are differences between countries: a vaccination mandatory in country A may be optional in country B and virtually unused in country C. Find out why. Sometimes it is caused by genuinely different situations in different countries (e.g. climate, disease prevalence). Sometimes different high-status experts just have different opinions, for bad reasons. It is quite possible that citizens of countries A and B who live close to each other near the border have more similar environments than citizens living on opposite sides of country A; yet the former will have different sets of mandatory vaccines, and the latter the same set.

One more annoying complication: vaccines are usually not administered individually, but in batches. If there are 30 vaccines (number totally made up), instead of 30 individual shots you will get e.g. 3 shots containing 10 vaccines each. So even if it turns out that of the 10 vaccines in the same shot, 9 are necessary but 1 is useless, it may be more practical to just accept the standard batch, instead of trying to get each of those 9 necessary vaccines separately. (Maybe they are no longer produced and sold separately.) Yeah, this sucks.

...and as I said, after you spend a year researching all this, you will likely come to the conclusion that you should simply get the standard set of vaccines, because most of them are useful, instead of spending money and time (and taking your child to the doctor 9 times instead of once) just to get rid of the useless ones that do no significant harm anyway.

I second the recommendation to ask specific questions at Stack Exchange. (I would vote against RationalWiki though, because that site is more about winning debates and making fun of political opponents, than about getting things completely right.)

Comment by viliam on Vaccine... Help? Deprogramming? Something? · 2019-12-28T12:16:44.190Z · score: 2 (1 votes) · LW · GW
Evidence for your Uncle's claims?

It was here: "My nephew had seizures after I think the DTAP, but I'm not sure." Yep, needs to be verified first.

Billions of safe vaccinations in the past, widely accepted as safe, solid scientific basis, etc.

Not an expert, but as far as I know, adverse reactions to vaccines are also a known thing. It's just that when you do the trolley calculation, the total damage from the adverse reactions is much smaller than the counterfactual total damage from the disease if people were not vaccinated.

If the chance of adverse reaction is roughly the same for everyone, there is nothing you can do about it. But if it turns out that e.g. a minority of people with some specific gene has an unusually strong adverse reaction, it would make sense to make an exception for them.

Comment by viliam on rmoehn's Shortform · 2019-12-28T11:46:44.422Z · score: 5 (2 votes) · LW · GW

To check for sleep apnea, you can borrow a sensor from a doctor for one night. You attach the sensor to your finger, and it measures... something... during the night. The next morning the doctor uploads the data to a computer and tells you how serious it is; you will see a graph of the oxygen level in your blood during the night.

If it turns out you do have sleep apnea, one interesting solution is Velumount. If the reason for apnea is that the... thing in your throat... is blocking the airways when you sleep, the "so simple it shocks you" solution is to stick a wire into your throat each night to keep the airways open. It's a lot of "fun" when you learn to do it without vomiting, heh, but then it works like magic; no surgery or electric device needed.

Comment by viliam on Values Assimilation Premortem · 2019-12-27T23:28:25.290Z · score: 9 (4 votes) · LW · GW


Not sure how relevant my advice can be, because I was never in your position. I was never religious. I grew up in a communist country, which is kinda similar to growing up in a cult, but I wasn't a true believer of that either.

My prediction is that in the process of your change, you will fail to update on some points, and overcompensate on other points. Which is okay, because growing up happens in multiple iterations. What you do wrong in the first step, you can fix in the second one. As long as you keep some basic humility and admit that you still may be wrong, even after you got rid of your previous wrong ideas. Your current position is the next step in your personal evolution; it does not have to be the final step.

Here are some potential mistakes to avoid:

  • package fallacy: "either the Christianity I grew up in is 100% correct, or the rationalism as I understand it today is 100% correct", or "either everything I believed in the past was 100% correct, or everything I believed in the past was 100% wrong". Belief packages are collections of statements, some of them dependent on each other, but most of them independent. There is nothing wrong with choosing A and rejecting B from package 1, and choosing X and rejecting Y from package 2. Each statement is true or false individually. You can apply this to religious beliefs, political beliefs, beliefs of rationalists, etc. (This does not imply the fallacy of grey; some packages contain more true statements than others. You can still sometimes find a gem in a 90% wrong package, though.)
  • losing your cool. What is true is already true; and it all adds up to normality. Don't kill yourself after reading about quantum immortality, don't freak out after reading about basilisk, don't do anything crazy just because the latest article on LW or someone identifying as rationalist told you so. Don't burn bridges. Do reductionism properly: after learning that the apple is actually composed of atoms, you can still eat it and enjoy its taste. Evolution is a fact, but the goals of evolution are not your goals (for example, evolution doesn't give a fuck about your suffering).
  • identification and tribalism. "Rationalists" are a tribe; rationality is not. Rationality does not depend on what rationalists believe; the entire point of rationality is doing things the other way round: changing your beliefs to fit the facts, not ignoring facts in order to fit in better. What is true is true regardless of what rationalists believe.
There's also a larger meta-issue here. I have a lifelong wholeness project of fighting perfectionism. It's so ingrained in me that I'm pretty confident that fight will be lifelong for me. In that vein, this whole exercise could be seen as just another attempt to Do it Right The First Time™ and Never Make a Mistake®. So I do need to give myself a little freedom to screw this up, or I will really screw it up the way that I screwed up every relationship I never had before this. (Yes, I actually never dated anyone before this. I blame it on fear, shame & perfectionism + Evangelical sexual ethics taken a bit too far.)

Go one step more meta, and realize that perfectionism itself is imperfect (i.e. it does not lead to optimal outcomes in life). Drawing conclusions before gathering data is a mistake. It is okay to do the right thing, as long as it is actually the right thing, instead of something that merely feels right (such as following perfectionist rituals even when they lead to suboptimal outcomes). Relax (relaxation improves your life, how dare you ignore that).

Copying your partner's opinions feels wrong, but hey, what can I do here? Offer you my opinion to copy instead? Heh.

If it's an issue that I don't have strong priors on and is not likely to significantly influence any major decisions I make with regard to her, I might as well just go with the flow and not complicate things unnecessarily.

You might also adopt the position "I don't know". It is a valid position if you really don't know. Also, the point of having opinions is that they may influence your decisions. If something is too abstract to have an impact on anything, ignoring it may be the smart choice.

Comment by viliam on Defining "Antimeme" · 2019-12-27T21:53:44.434Z · score: 6 (4 votes) · LW · GW

Words can't be defined arbitrarily, so I am going to examine your definition first.

First, I am not sure what exactly counts as "mainstream", and why it is even important. What you describe seems like a relationship between a meme and a culture, whether large or small. So you could have "anti-memes of antimemes", as Isnasene describes. Or you could have a polarized society with two approximately equally large cultures, each of them having their own "anti-memes". Or a small minority, such as a cult, that strongly ignores the surrounding culture.

What did you mean by "mainstream knowledge"? Is it something most people sincerely believe, or just something they profess? They may react differently. Sincerely believing people may listen to arguments when those have the proper form; but you can't convince a person whose "belief" is simply an expression of belonging to a team.

A symbiotic war half-meme encourages you attack its parity inverse as "wrong". The meme in a meme-antimeme pair nudges you to dismiss its antimeme as "unimportant" or invisibly ignore it altogether.

I am thinking now about "culture wars" where attacking other people's opinion as wrong has gradually changed into "no-platforming". I wonder whether there is a spectrum where sufficiently "no-platformed" opinions change into "unimportant" when the side defending them is completely defeated.


Also, I am afraid that the actual usage of the word "anti-meme" would be to defend ideas from valid criticism. ("You only disagree with me because this is an anti-meme that threatens your ego!")

The example of Lisp is a good one here: we have a decades long holy war where one side shouts "Lisp is superior (and so am I by recognizing this fact)!", the other side goes "where are the libraries? where are the tools? where are solutions to problems X, Y, and Z?", but the former side goes "la la la, I can't hear you over the sound of how Lisp is superior!". Then suddenly someone with a good object-oriented background fixes the usual problems with Lisp, creating Clojure, and -- lo and behold! -- suddenly the mainstream is happy with the result.

That is, focusing too much on how your idea is an "anti-meme" makes you blind to its actual flaws.

Comment by viliam on Vaccine... Help? Deprogramming? Something? · 2019-12-27T21:43:36.674Z · score: 10 (2 votes) · LW · GW

This probably needs to be discussed for each vaccine separately. I am not an expert, but I can easily imagine a world where vaccine A contains harmful content, and vaccine B does not; or where vaccine C needs to be taken at very young age (e.g. because the disease is extra dangerous for the babies), but vaccine D does not. I can imagine some vaccines being harmful for people with specific genes.

Any of these claims about a specific vaccine can be right or wrong, and proving them right or wrong for a specific vaccine X does not tell us whether they are right or wrong for a different vaccine Y. So the claim of your relative about a specific vaccine can be correct, or can be complete bullshit, or anything in between (e.g. kinda true, but the risk in real life is negligible).

They also believe its an affront to freedom in general to force vaccinations.

This part is a value debate, not a factual debate. Vaccination is a form of trolley problem: we sacrifice the few people who get an adverse reaction to the vaccine, to save health and lives of the majority. Makes sense statistically; also makes you mad when it is your child thrown under the trolley. (The converse point is that when everyone else vaccinates their kids and you do not, you are free-riding on other people's sacrifice, and your ethical concerns seem to them like self-serving bullshit.)

So... which vaccine specifically are we talking about, and what specifically is the "history of reactions in a family"? (Because many babies have a minor reaction; they may be crying for a day or for a week. Are we talking about that, or about something more serious?)

Note: I am not an expert, so even if you give me these answers, I can't help you. But the data will probably be necessary for any expert who happens to join this debate.

Comment by viliam on Vaccine... Help? Deprogramming? Something? · 2019-12-27T20:32:25.806Z · score: 7 (4 votes) · LW · GW

You could start by writing the exact argument(s) by your relative. How can we respond to a claim we never heard? (Or did you just want very general pro-vaccination statements? I am sure google can help with this.)

Comment by viliam on Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think · 2019-12-27T14:58:06.870Z · score: 8 (3 votes) · LW · GW

I wouldn't mind removing hyperboles from socially accepted language. Don't say "everyone" if you don't mean literally everyone, duh. (I suppose that many General Semantics fans would agree with this.)

For me a complicated question is one that compares against an unspecified standard, such as "is this cake sweet?" I don't know what kind of cakes you are used to eating, so maybe what's "quite sweet" to me is "only a bit sweet" to you. Telling literal truths, such as "yes, it has a nonzero amount of sugar, but also a nonzero amount of other things", will not help here. I don't know exactly how much sugar it contains. So, "it tastes quite sweet to me" is the best I can do here. Maybe that should be the norm.

I agree about the "nearest unblocked strategy". You make the rules; people maximize within the rules (or break them when you are not watching). People wanting to do X will do the thing closest to X that doesn't break the most literal interpretation of the anti-X rules (or break the rules in a deniable way). -- On the other hand, even trivial inconveniences can make a difference. We are not discussing superhuman AI trying to get out of the box, but humans with limited willpower who may at some level of difficulty simply give up.

The linked article "telling truth is social aggression" ignores the fact that even in competition, people make coalitions. And if you have large amounts of players, math is in favor of cooperation, at least on relatively small scale. If your school grades on a curve, it discourages helping your classmate without getting anything in return. But mutual cooperation with one classmate still helps you both against the rest of the class. The same is true about helping people create better models of the world, when the size of your group is tiny compared to the rest of the population.

The real danger these days usually isn't the Gestapo, but thousands of Twitter celebrities trying to convert parts of your writing taken out of context into polarizing tweets, and journalists trying to convert those tweets into clickbait, where the damage caused to you and your family is just an externality no one cares about. This is the elephant in the room: "I personally don't disagree with X; or I disagree with X but I think there is no great harm in discussing it per se... but the social consequences of me being publicly known as 'person who talks about X' are huge, and I need to pick my battles. I have things to protect that are more important to me than my mere academic interest in X." Faced by: "But if you lie about X, how can I trust that you are not lying about Y, too?"

Comment by viliam on Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think · 2019-12-27T12:26:39.857Z · score: 26 (7 votes) · LW · GW

In situations where others can hurt you, a clever solution like "no comment - because this is the kind of situation where, in some counterfactual world, I would prefer to be silent" results in you getting hurt.

(A few weeks ago, everyone in the company I am working for got a questionnaire from management where they were asked to list the strengths and weaknesses of their colleagues. Cleverly refusing to answer, beyond plausible excuses such as "this guy works on a different project, so I haven't really interacted with him much", would probably cost me my job, which would be inconvenient in multiple ways. At the same time, I consider this type of request deeply repulsive -- on Monday I am supposed to be a good team member who enjoys cooperation and teambuilding, and on Tuesday I am asked to snitch on my coworkers -- from my perspective this would hurt my personal integrity much more than mere lying. Sorry, I am officially a dummy who never notices a non-trivial weakness in anyone; now go ahead and try proving that I do.)

Also, it seems to me that in the real world, building the prestige of a person who never lies is trickier than just never lying and cleverly glomarizing. For example, the prestige you keep building for years can be ruined overnight by a third party lying about you having lied to them. (And conversely, you could actually have a strategy of never lying... except to a designated set of "victims", in situations where there is no record of what you said, and who are sufficiently lower-status than you that if they choose to accuse you publicly, they will be perceived as liars.)

Comment by Viliam on [deleted post] 2019-12-27T11:04:42.753Z
I...right now I'm kind of in disbelief that I am so far ahead of everyone else that I could *literally buy the entire planet for under $20USD,* and no one stopped me.

It's called progress. In my youth, we only had a bridge to sell you.

Comment by viliam on Get a Life · 2019-12-26T08:26:57.582Z · score: 3 (2 votes) · LW · GW

I suppose that "Get a Life" advice, like many general sayings, is useful in some situations and harmful in others.

It can mean "stop doing the thing I consider low-status, and do more of what I like/approve of instead". This can be said to make one's supposedly low status common knowledge, i.e. as an attack. It can also be relatively well-meant advice to someone who seems to have a blind spot regarding how their actions impact their status. It can be both at the same time.

It can also mean "your focus is so narrow that you are missing a lot of experience/data that would actually help you achieve your goal", i.e. that you optimize prematurely in a way that creates an epistemic vicious cycle. Something you say to people trying to draw a map of the city from first principles, who are so busy with drawing that they have no time to actually look out of the window, or walk the streets of the city, which would make them immediately realize how utterly wrong their map is.

Comment by viliam on Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk · 2019-12-26T00:10:45.933Z · score: 4 (2 votes) · LW · GW

Both sides make good points. One side being Zack, and the other side being everyone else. :D

Instead of debating the object-level stuff here, everyone talks in metaphors. Which is supposed to be a good way to avoid political mindkilling. Except that mostly everyone knows what the metaphors represent, so I doubt this really works. And it seems to me that rationality requires looking at the specific things. So, do I wish that people stopped using metaphors and addressed the underlying specific topics? Aaah... yes and no. Yes, because it seems to me there is otherwise no way to come to a meaningful conclusion. No, because that would invite other people, encourage people on Twitter to share carefully selected screenshots, and make everyone worry about having parts of their text quoted out of context. So maybe the metaphors actually do something useful by adding extra complexity.

In real life, I'd say: "Ok guys, let's sit in this room, everyone turn off their recording devices, and let's talk, with the agreement that what happens in this room stays in this room." Which is exactly the thing that is difficult to do online. (On second thought, is it? What about a chat where only selected people can join, but everyone gets assigned a random nickname, and perhaps the nicknames also reset randomly in the middle of a conversation...)

Paul Graham recommends: "Draw a sharp line between your thoughts and your speech. Inside your head, anything is allowed. [...] But, as in a secret society, nothing that happens within the building should be told to outsiders. The first rule of Fight Club is, you do not talk about Fight Club."

The problem is, how to apply this to an online community, where anything that happens automatically has a written record; and how to allow new members to join the community without making everything that happens there automatically public. (How would you keep the Fight Club secret when people have smartphones?)

Comment by viliam on Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk · 2019-12-25T23:33:57.075Z · score: 4 (2 votes) · LW · GW

Debating politics is meta-complicated, because even the decisions on whether/how to debate politics can themselves be (or be interpreted as) moves in the political game.

"You don't want to have a debate about X? That means you agree with the majority opinion on X! While pretending to be neutral! And these people call themselves 'rationalists' without anyone calling you out on such blatant political move?" (Heh, heh, I just made them debate X anyway, again.)

Comment by viliam on How asexuality became an identity · 2019-12-25T22:43:16.943Z · score: 6 (4 votes) · LW · GW

Based on what we know about the role of reproduction (and sexuality) in evolution, my priors for "sex is only a big deal because of some historical accident" are pretty low. I would need much stronger evidence than an opinion of Foucault to even consider this possibility seriously.

That said, of course different cultures may have different norms for which sexual behavior is acceptable, and which beliefs about sexual behavior are acceptable.

Knowing about sex is not the loss of innocence. It makes more life. Death should be the loss of innocence. Human culture indeed is centered around keeping life interesting and death far away. Knowing death hurts humans in this culture and knowing sex strengthens. If grownups really want to protect the innocence of children, they would concentrate on eliminating death and celebrating sex and birth.

I am just guessing here, but the thing that removes "innocence" is probably the knowledge of the zero-sum nature of sexual competition.

What I mean is that kids, in order to survive, require a lot of cooperation from adults. In a friendly environment, they can grow up with an ideal of how nice the world would be if just everyone cooperated with everyone else. Of course they know the world is currently not that way (although they vastly underestimate how much). But it still seems like an attainable goal. Death is a thing that happens to other people (mostly the old ones, who seem like a different species). The bad guys can be defeated, and sometimes even converted to become good guys. Heaven on earth is an open possibility in the future.

And then you learn (in a monogamous society) that when two boys love the same girl, at least one of them is going to lose. Etc. Then you realize, in near mode, that this threat very likely applies to you, and that too many people are potential competitors. The world becomes a battleground.

(Some people go one level meta, and promote polyamory as a solution. Okay, that solves the part with sex, but not the part with reproduction. At some moment of their lives many people decide to have kids, and their preferred partners are usually not willing to have and support an unlimited number of kids. Competition again.)

Foucault's theory is like this: the modern Western attitude on sex is heavily corrupted (he had fierce opinions) by the governments that wished to control its subjects.

If China is an example of the "West", what exactly would be the "East" (and "North" and "South") here? Where are the governments that do not wish to control their subjects?

In the 16th century, the Roman Catholic Church called for its followers to confess their sinful desires as well as their actions. The priests encouraged their confessioners to talk endlessly about their particular sexual thoughts.

Yet those people had more kids than most people in developed countries have today.

Something as simple as the contraceptive pill has fundamentally changed human sex and reproduction.

Yet many people would feel jealous if their partner had protected sex with someone else. Human nature does not adapt so quickly. (Sometimes for a good reason. Your partner having protected sex with someone else is sometimes the first step towards having unprotected sex with them.)


Let me suggest an alternative explanation: I imagine that for an asexual person the behavior of their sexual peers may be confusing, and explanations in the style of "they are brainwashed" may be tempting because they explain the problem away. That does not mean they are good explanations, though.

Comment by viliam on How asexuality became an identity · 2019-12-25T21:47:27.875Z · score: 6 (3 votes) · LW · GW

If I remember correctly, the argument was that, I'll use my own words here, dealing with problems in the style of "relationships between genders are too difficult, let's use sexbots instead and everyone will be happy" ultimately leads to wireheading, if you apply it consistently to all your desires, not just sexual ones.

This is not a general argument against making things easier. If the difficulty is something that kills you, or hurts you irreversibly, that kind of difficulty should be removed. Or humans should be modified to be more resilient, so they can overcome any existing difficulty without permanent damage. In the Nietzschean "what doesn't kill you makes you stronger", the first option is bad, but the second option is good.

The context for the argument was the long game: if we succeed in conquering the universe by inventing a friendly superhuman intelligence, what then? If you could do literally anything, what is the meaningful thing to do? Eliezer's proposal was that the superintelligence should make us immortal (and otherwise not-irreversibly-damageable), but then let us overcome the remaining problems by ourselves. To give us unlimited time and opportunities to become stronger. As opposed to a nanny-bot that would make us weaker by satisfying all our base desires without us having to do anything, which would lead to our skills and intelligence gradually atrophying to sub-human, and even sub-animal, levels. (By the way, the theme of "does this make you weaker or stronger?" is also present in other parts of the Sequences.)

Comment by viliam on Dony's Shortform Feed · 2019-12-25T20:25:40.672Z · score: 6 (2 votes) · LW · GW

Somewhat related: In which ways have you self-improved that made you feel bad for not having done it earlier?

I doubt the premise of the "one thing" book. Just looking at their example -- Bill Gates -- if he had only one skill, computer programming, he would never have gotten rich. (The actual developers of MS DOS did not get nearly as rich as Bill Gates.) Instead it was a combination of understanding software, being at the right place at the right moment with the right connections, abusing the monopoly and winning the legal battles, etc. So at the very least, it was software development skills plus business skills; the latter much more important than the former.

(To see a real software development demigod, look at Donald Knuth. Famous among the people who care about the craft, but nowhere as rich or generally famous as Bill Gates.)

I would expect similar stories to be mostly post-hoc fairy tales. You do a dozen things; you succeed or get lucky at one and fail at the rest; you become famous for the one thing and everything else is forgotten; a few years later, self-improvement gurus write books using you as an example of laser-sharp focus and of whatever their current hypothesis is about the magic that creates success. "Just choose one thing, and if you make your choice well, you can ignore everything else and your life will still become a success" is wishful thinking.

Some people get successful by iteration. They do X, and fail. Then they switch to Y, which has enough similarity with X that they have a comparative advantage over people who do Y from scratch, but they fail again. Then they switch to Z, which again has some similarity with Y... and finally they succeed. The switch to Y may happen after decades of trying something else.

Some people get successful by following their dream for decades, but it takes a long time until that dream starts making a profit (some artists only get world-wide recognition after they die), so they need a day job. They probably also need some skills to do the day job well.

To answer your question directly, recent useful habits are exercising and cooking.

(I also exercised before, but that was some random thing that came to my mind at the moment, e.g. only push-ups; the recent habit is a sequence of pull-ups, one-legged squats, push-ups, and leg raises. I also cooked before, but I recently switched to mostly vegetarian meals, and found a subset that my family is happy to eat. I also cook more frequently, and remember some of the frequent recipes, so at the shop I can spontaneously decide what to cook today and buy the ingredients on the spot, and I can easily multi-task while cooking.)

My next decade will mostly be focused on teaching habits to my kids, because what they do also has a big impact on my daily life. The less they need me to micromanage them, the more time I have for everything else.

Comment by viliam on What determines the balance between intelligence signaling and virtue signaling? · 2019-12-16T21:21:40.780Z · score: 5 (2 votes) · LW · GW

Sometimes you can sacrifice a bit of virtue to signal intelligence. For example, when people talk in real life, interrupting other people may give you an opportunity to say something clever first. Or you can make a funny joke that shows how smart and quick you are, even if you know that this will derail the debate.

Then there is contrarianism for signalling sake. You disagree with people not because you truly believe they are wrong, but to show that they are unthinking sheep and you are the brave one who dares to oppose the popular opinion (even if you actually believe the popular opinion to be correct, and the thing you said is just an exercise in finding clever excuses for what is most likely the wrong answer). This can cause actual harm, when people convinced by your speech do the wrong thing instead of the right one.