Comment by viliam on Nutrition is Satisficing · 2019-07-16T20:36:15.020Z · score: 4 (2 votes) · LW · GW

Nutrition is one of those cases where perfect is the enemy of good. I mean, if you have sufficient education and enough time, go ahead: research everything and create the perfect diet for yourself. But otherwise, your only choices are using a few heuristics, or giving up. (Note that "I will research this topic later, and meanwhile I will continue eating junk food" is also a form of giving up. The "later" usually never happens.)

In the past, when I asked people what "healthy nutrition" is, I got two kinds of unhelpful advice:

a) an incredibly complicated theory that would require me to spend a few weeks or months studying, and afterwards measuring and analyzing everything I eat and calculating how many calories and vitamins it contained;

b) a list of "forbidden foods" which contained pretty much everything I could think of, except for fruits and vegetables (actually, if I remember correctly, even bananas were bad somehow; also potatoes).

It also doesn't help that different people have contradictory theories, e.g. meat, eggs, and dairy are either very important to eat, or very important to avoid. More precisely, the best form of meat is fish. Except you shouldn't eat fish, because they are full of deadly mercury. Also, you should eat less to avoid obesity, but you need to eat enough to get enough nutrients. (It is even easier to lose weight if you exercise a lot; but if you exercise seriously, it is even more important to eat enough nutrients.) As feedback, you should measure your BMI; except that BMI is completely misleading, because it doesn't distinguish between fat (bad) and muscle (good).

(And if you happen to be a woman, it is even more socially important to lose weight, but at the same time getting rid of too much fat will damage your metabolism, because some female hormones need a certain amount of body fat to function properly. Also, you probably don't want to get rid of your boobs, which are mostly fat.)

Well... thanks for the helpful advice, I guess. /s


Eat plenty of vegetables.

Notice that this is one of the few things the contradictory nutrition theories all happen to agree on. Vegetables are important in both paleo and vegan diets.

I would also recommend fruit. Unlike vegetables, fruit is usually not a part of a meal... so the simple solution is to eat it between large meals.


My favorite heuristic is Dr. Greger's Daily Dozen (also available as an Android app).

Comment by viliam on Economic Thinking · 2019-07-16T19:37:36.926Z · score: 2 (1 votes) · LW · GW

I agree with most of the text, however...

For [efficient market hypothesis] to be true, every trader must have access to all relevant information at all times, react instantaneously to changes and make decisions completely rationally.

...this is an unnecessarily strong assumption. It suffices that, for every publicly traded asset, there are enough traders (with enough money) who act upon the relevant information.

If the market price is balanced, a random person won't throw it out of balance by selling a few shares incredibly cheaply (or buying a few shares incredibly expensively). The wiser players will buy (or sell) those few shares, and the original price will soon be restored.
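A toy simulation of that self-correcting mechanism (a minimal sketch; all numbers are hypothetical, and `FAIR_VALUE` stands for the price implied by the relevant information):

```python
FAIR_VALUE = 100.0  # price implied by the information the informed traders act on

def simulate(initial_shock, steps=10):
    """A noise trader briefly pushes the price away from the consensus
    value; informed traders trade against the gap until it closes."""
    price = FAIR_VALUE + initial_shock  # e.g. -5.0: someone sold a few shares too cheap
    history = [round(price, 3)]
    for _ in range(steps):
        # informed traders buy below / sell above fair value,
        # absorbing half of the remaining gap each step
        price += 0.5 * (FAIR_VALUE - price)
        history.append(round(price, 3))
    return history

print(simulate(-5.0))  # [95.0, 97.5, 98.75, ...] -- climbing back toward 100.0
```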

Comment by viliam on The AI Timelines Scam · 2019-07-13T21:56:15.690Z · score: 14 (4 votes) · LW · GW

Well, it is not the "Bayesian way" to take a random controversial statement and say "the prior is 50% that it's true, and 50% that it's false".

(That would be true only if you had zero knowledge about... anything related to the statement. Or if your knowledge were so precisely balanced that the sum of the evidence was exactly zero.)
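In log-odds form (the standard identity, assuming the pieces of evidence $E_1, \dots, E_n$ are conditionally independent):

$$\log \frac{P(H \mid E_1, \dots, E_n)}{P(\neg H \mid E_1, \dots, E_n)} = \log \frac{P(H)}{P(\neg H)} + \sum_{i=1}^{n} \log \frac{P(E_i \mid H)}{P(E_i \mid \neg H)}$$

A posterior of exactly 50% means the left-hand side is zero, which requires the prior term and all the evidence terms to cancel out exactly.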

But the factual wrongness is only a partial answer. The other part is more difficult to articulate, but it's something like... if someone uses "your keywords" to argue complete nonsense, that kinda implies you are expected to be so stupid that you would accept the nonsense as long as it is accompanied by the proper keywords... which is quite offensive.

Comment by viliam on 87,000 Hours or: Thoughts on Home Ownership · 2019-07-07T00:33:00.207Z · score: 2 (1 votes) · LW · GW

When I bought my current place, I learned that "partial reconstruction" means last-minute changes that look pretty when you are inspecting the place, but start falling apart once you actually use it for a few months. Still, the location in the center of the city and the area in square meters remain the same; and those make up about 80% of the value.

With funds, getting 80% of what I was promised would have been much better than anything I ever actually got.

Comment by viliam on 87,000 Hours or: Thoughts on Home Ownership · 2019-07-06T14:49:33.957Z · score: 6 (4 votes) · LW · GW

Speaking from my "N = 1" perspective: Yes, there is the disadvantage that if something bad happens, e.g. some idiots win the election and decide that my country leaves the EU, the value of my lifetime savings could drop dramatically overnight. Same thing if there is e.g. a terrorist attack and my lifetime savings become a cloud of smoke and a heap of rubble, in an even shorter time interval.

However, if the above-described things do not happen, then I live in the middle of a city with job opportunities left and right (since I moved here, my commute is a walk on foot), and I am the boss of what happens inside my place. New buildings growing around me increase the value of my property (they bring my place closer to even more job opportunities), while inflation reduces my mortgage payments to peanuts. The rent collected from the other place I own (where I lived previously) keeps increasing. At this very moment the rent I collect from one place (much smaller and further from the city center) equals the mortgage payment on the other place, so it cancels out; I am looking forward to the surplus in the future.

I am not saying this is proof that owning the place is better. As Moses said, it is a different risk profile; different advantages, different disadvantages, different probability distribution. I am just saying that from the inside, for me, having invested in buying my place feels good. I do not regret not having rented instead.

In theory, the parallel-universe me could have rented the places, and invested the extra money instead. In practice, I shiver when I imagine what kind of investment the 15-years-younger me would have made. Because my 15-years-younger self actually had some extra money, invested it, and the money just evaporated. In my defense, it was before I ever heard about passively managed index funds.

I live in a post-communist country, where people do not know how to handle having extra money, because in the past this kind of problem simply did not exist. All financial advice you can get here is a scam, regardless of the source (the bank I keep my money in regularly offers me "opportunities" that are obvious shit). Today I understand this; 15 years ago I didn't. I am really happy that I bought my first place instead. This lesson may not generalize to you; but I think that investing in your own place can make sense for people who are not financial experts, because the situation is relatively more legible. (Unlike with various funds, a scammer just cannot sell you a cardboard house pretending it was built from concrete; and the house you bought won't keep changing its shape and address over the following years.)

Comment by viliam on Self-consciousness wants to make everything about itself · 2019-07-03T21:06:46.437Z · score: 9 (4 votes) · LW · GW

I imagine that Susan's position is complicated, because in the social justice framework, in most interactions she is considered the less-privileged one, and then suddenly in a few of them she becomes the more-privileged one. And in different positions, different behavior is expected. Which, I suppose, is emotionally difficult, even if intellectually the person accepts the idea of intersectionality.

If using tears is the winning strategy in most situations, it will be difficult to avoid crying when it suddenly becomes inappropriate for reasons invisible to her System 1. (Ironically, the less racist she is, the less likely she is to notice "oops, this is a non-white person talking to me, I need to react differently".)

Here a white man has it easier, because his expected reaction on Monday is the same as his expected reaction on Tuesday, so he can use one behavior consistently.

Comment by viliam on Self-consciousness wants to make everything about itself · 2019-07-03T20:35:27.714Z · score: 2 (1 votes) · LW · GW

I think it also depends on what model of morality you subscribe to.

In the consequentialist framework, there is a best action in the set of your possible actions, and that's what you should do. (Though we may argue that no person chooses the best action consistently all the time, and thus we are all bad.)

In the deontological framework, sometimes all your possible actions either break some rule or neglect some duty, if you get into a bad situation where the only possible way to fulfill a duty is to break a rule. (Here it is possible to turn all duties up to 11, so that fulfilling them all becomes impossible for anyone.)

Comment by viliam on Self-consciousness wants to make everything about itself · 2019-07-03T19:36:18.704Z · score: 9 (6 votes) · LW · GW

Seems to me that many pieces of advice or points of view can be helpful when used in a certain way, and harmful when used in a different way. The idea "I am already infinitely bad" is helpful when it removes the need to protect one's ego, but it can also make a person stop trying to improve.

The effect is similar with the idea of "heroic responsibility"; it can help you overcome some learned helplessness, or it can make you feel guilty for all the evils in the world you are not fixing right now. Also, it can be abused by other people as an excuse for their behavior ("what do you mean by saying I hurt you by doing this and that? take some heroic responsibility for your own well-being, and stop making excuses!").

Less directly related: Knowing About Biases Can Hurt People (how good advice about avoiding biases can be used to actually defend them), plus there is a quote I can't find now about how "doubting your math skills and checking your homework twice" can be helpful, but "doubting your math skills so much that you won't even attempt to do your homework" is harmful.

Comment by viliam on The Competence Myth · 2019-07-01T19:46:29.472Z · score: 4 (3 votes) · LW · GW

An intelligent and self-reflecting person will realize that luck plays an important role in their success. It's just that if they do not expect this to be the general rule, they will feel guilty about it.

If luck does not play a role in your success, it means you remain completely within your comfort zone, and you could on average profit by doing something more difficult, where, let's say, your chance of success is only 80%, but the potential profit is double (in expectation, 0.8 × 2 = 1.6 times as much). As a side effect, this will also provide an opportunity to learn more.

(So I am talking about two things here: The fact that the more difficult job allowed you to learn more, that is a result of your good strategy. But the fact that you had enough time in the more difficult job to learn more, before a problem happened that you would be unable to solve, that part was luck. "What doesn't kill you makes you stronger", but first you need the luck to avoid getting killed.)

Comment by viliam on The Competence Myth · 2019-07-01T19:27:27.067Z · score: 6 (4 votes) · LW · GW

This is a disturbing line of thought, and... while I think this is not a complete explanation, it feels like it explains a lot.

The word "incompetent" does a lot of heavy lifting here, though. What exactly it means in this context? Do we play motte and bailey with meanings "a complete retard, making random decisions" and "less awesome than Elon Musk"?

Because if we assume that people are complete retards, then market selection is not strong enough to explain what keeps the lights on. (Mathematically speaking, there are far more ways to screw things up than there are companies trying to provide electric power.)

So I guess the correct interpretation is that people are sort of competent, but the selection effect allows (some of) them to achieve more than their personal competence alone would predict. A complete retard would ruin any company, but if you find people who have a 10% chance of keeping the company running... then you simply need dozens of them in dozens of companies, and some of the companies will survive. But because the situation changes, the survival dice will be rolled again later, and the person who kept the company running yesterday may fail to keep it running today.

EDIT: This interpretation also fits the "low hanging fruit" hypothesis: For simple types of companies, you can find people who have a 10% chance to keep them running. For difficult types of companies, you can only find people who have a 1% chance. But if there is enough money, enough people will try it, and some of these companies will survive, too. Their life expectancy will be much shorter, though.

It could even provide an answer to Eliezer's post about competent elites: similarly to the Peter Principle, if you are a super-competent person whose chance of keeping the company running is not the usual 10%, but rather 50%, it is more profitable for you to actually try running a more complex company (where you have the 10% chance, and anyone else has 1%) instead. So it is true that more competent people end up in more complex companies, and at the same time, people end up in positions that exceed their personal competence.
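A quick simulation of this selection story (a minimal sketch; the probabilities are the hypothetical ones from above):

```python
import random

def survivors(n_companies, p_keep_running, periods):
    # Each company is run by a manager who has probability p_keep_running
    # of getting it through each period; the dice are re-rolled every period.
    alive = n_companies
    for _ in range(periods):
        alive = sum(1 for _ in range(alive) if random.random() < p_keep_running)
    return alive

random.seed(0)
print(survivors(100, 0.10, 1))     # ~10 of 100 simple companies survive the first period
print(survivors(100, 0.10, 3))     # after three re-rolls, usually 0 or 1 remain
print(survivors(10_000, 0.01, 1))  # hard companies: enough attempts still yield ~100 survivors
```

The point the numbers make: selection produces survivors even when nobody is individually reliable; it just needs enough attempts, and a survivor's track record says little about their next roll.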

Comment by viliam on Do children lose 'childlike curiosity?' Why? · 2019-06-30T22:26:31.974Z · score: 7 (4 votes) · LW · GW

Yup, similar with my child. Maybe the first time the question is motivated by actual curiosity, but the following 99 repetitions of the same question have to be motivated by something else.

Most questions I get are repetitions of something that was already asked and already answered, and the child actually remembers the answer.

Comment by viliam on The Competence Myth · 2019-06-30T22:20:32.288Z · score: 18 (8 votes) · LW · GW

Yeah, the more I know about how some things work, the more I am surprised that anything at all works.

Maybe we are so insanely productive that even with 99% of human output wasted, the remaining 1% is enough to keep the lights on? But then, how wonderful could the world be if we could somehow use 10% of the output instead?

But more likely, it is something like individuals believing all kinds of bullshit in far mode, but acting rather reasonably in near mode. There are all kinds of incompetence in places where incompetence does not mean immediate disaster; but when the dangers become imminent, someone will use their common sense (and perhaps work overtime) to do the right thing, and to prevent the lights from going out. Well, most of the time; because sometimes the last moment is already too late to prevent things from blowing up.

Or perhaps human work is mostly repetition, and you don't need highly competent people to do the same thing they did yesterday. Only once in a while do circumstances change enough that someone has to work overtime and/or things blow up.

Comment by viliam on What does the word "collaborative" mean in the phrase "collaborative truthseeking"? · 2019-06-26T21:19:00.865Z · score: 19 (5 votes) · LW · GW

Assumption: Most people are not truthseeking.

Therefore, a rational truthseeking person's priors would still be that the person they are debating with is optimizing for something else, such as creating an alliance, or competing for status.

Collaborative truthseeking would then be what happens when all participants trust each other to care about truth. Not only does each of them care about truth privately; this is also common knowledge.

If I believe that the other person genuinely cares about truth, then I will take their arguments more seriously, and if I am surprised, I will be more likely to ask for more info.

Comment by viliam on The Foundational Toolbox for Life: Introduction · 2019-06-22T22:05:23.195Z · score: 5 (3 votes) · LW · GW
Many people grow up learning how to go through the motions of a skill without understanding why it works, and that makes it harder for them to use it effectively, to adapt their skill to different contexts, and to learn other similar skills.

In mathematics, you address this problem by having students solve problems with different numbers, throwing in some irrelevant numbers, etc.

In programming, you give people a small task to code.

Could this be somehow generalized? I suppose the problem is that in many situations, running an experiment would be long and costly, and the outcome would depend more on random noise.

Comment by viliam on Let Values Drift · 2019-06-21T21:41:53.961Z · score: 9 (5 votes) · LW · GW

From my perspective, this style means that although I feel pretty sure that you made a relatively simple mistake somewhere, I am unable to explain it, because the text is just too hard to work with.

I'd say this style works fine for some purposes, but "finding the truth" isn't one of them. (The same is probably true about continental philosophy in general.)

My guess is that you use the words "value drift" to mean many other things, such as "extrapolation of your values as you learn", "changes in priorities", etc.

Comment by viliam on No, it's not The Incentives—it's you · 2019-06-13T21:48:50.962Z · score: 3 (2 votes) · LW · GW

As a synthesis of points 1 and 4: it is both the incentives and you. The incentives explain why the game is so bad, but you have to ask yourself why you still keep playing it.

A researcher with more personal integrity would avoid the temptation/pressure to do sloppy science... and perhaps lose their job as a result. The sloppy science itself would remain, only done by someone else.

Comment by viliam on On why mathematics appear to be non-cosmic · 2019-06-13T21:42:44.648Z · score: 2 (1 votes) · LW · GW
My point is that it is a bit suspect (granted, this is just intuitive) that so simple and distinct a 2d geometrical form as an ellipse, is actually for us humans front and center in phenomena including the movement of heavenly bodies.

Coincidentally, some complex mathematical things are also related to the movement of heavenly bodies. So I'd say humans are good at noticing both simplicity and complexity.

Comment by viliam on On why mathematics appear to be non-cosmic · 2019-06-12T19:52:12.950Z · score: 3 (2 votes) · LW · GW
In my view nothing describes the actual universe, but there are many possible (species-dependent) translations of the universe.

Well, here is the point where we disagree. In my view, equations for e.g. gravity or quantum physics are given by nature. Different species may use different syntax to describe them, but the freedom to do so would be quite limited.

Recall how even Kepler was originally regarding the ellipsis as way too easy and convenient a form to account for movement in space, and was considering complicated arrangements of the platonic solids :)

The fact that Kepler tried to have it one way, but it turned out to be the other way, is evidence for "the universe having its own mind about the equations", isn't it?

Of course an alternative explanation is that scientists -- mostly men, at least historically -- unconsciously prefer shapes that remind them of boobs.

Comment by viliam on Personal musings on Individualism and Empathy · 2019-06-11T22:29:14.428Z · score: 2 (1 votes) · LW · GW

Some of what you describe (specifically when you mention theory of mind) seems to me like Asperger syndrome.

Comment by viliam on On why mathematics appear to be non-cosmic · 2019-06-11T22:25:56.482Z · score: 3 (2 votes) · LW · GW

If the hypothetical aliens live in the same universe, they will probably develop natural numbers, some version of calculus, probably complex numbers, etc. Because those are things that describe the universe.

They may not have things like Fibonacci numbers, or ZFC axioms, because those are things humans pay attention to for random historical reasons. Analogously, they may have other concepts that never seemed important to us, such as other sequences, or other sets of axioms. Learning those things could be interesting, but it probably wouldn't feel like a dramatic improvement in math; more like another interesting puzzle to solve.

Comment by viliam on Logic, Buddhism, and the Dialetheia · 2019-06-11T22:14:11.357Z · score: 6 (3 votes) · LW · GW

Buddhism as an applause light, quantum mumbo jumbo...

Not Less Wrong material, in my opinion.

Comment by viliam on On pointless waiting · 2019-06-11T22:07:28.228Z · score: 4 (2 votes) · LW · GW
In elementary school, there’s no real goal for your studies. Mostly it’s just coming there, doing the things that teachers want you to do, until the day is over and you get to go.
In that environment, every minute that passes means winning. Every minute takes you a bit closer to being out of there. That’s the real goal: getting out so you can finally do something fun.

How much difference is there really for an employee?

Unless you are doing the "early retirement" thing, your job is also something that will never be done. Doing the tasks only results in getting more tasks; completing a project gets you assigned to another project.

The difference is that you must keep a certain non-trivial level of productivity to keep the job. Exceeding this level, however, usually brings little benefit -- in the worst case it only brings extra work with no benefit; in the best case, there is a sublinear reward (e.g. permanently doubling your productivity could result in a 30% salary increase).

(It doesn't necessarily have to be like this. There are situations where doubling your productivity could result in only working half the time -- as would be the natural outcome of working for yourself. But in my experience this usually happens informally and unreliably.)

Comment by viliam on FB/Discord Style Reacts · 2019-06-04T21:28:20.432Z · score: 5 (2 votes) · LW · GW

I believe that "like" and "dislike" are good choices, especially if you want people to make a lot of votes, without spending too much time thinking about it. Anything more complex, and most people will not use it; and if that means they cannot vote, then less people will vote (and the results of voting will represent a smaller set of people, mostly the compulsive voters). Time spent voting (not per one comment, but site-wide) is a limited resource.

I think that when websites try to measure more than one dimension, the usual outcome is that the answers correlate so strongly it was worthless to distinguish between them. No matter how precisely you specify the voting rules, when people like something, they will usually give it the highest rating in all dimensions (even completely irrelevant ones), and when they dislike something, they will give it the lowest rating in all dimensions. The ones who think more deeply about it will be "less productive" voters than the ones who don't.

So it should be like/dislike first, and then optionally a flavor. (Different flavors for likes and dislikes, obviously.)

Perhaps adding a third option, "meh", would still work okay. I mean, "meh" is neither an upvote nor a downvote; it means no strong reaction either way. It doesn't provide information for others, but it could be useful for you to distinguish comments you have already voted "meh" on from comments you haven't voted on yet. And, if we go the way of "vote + flavor", there could be a collection of flavors for "meh" (probably with some overlap with the flavors for "like" and "dislike").

Maybe it should be possible to write your own flavor text for a vote, and the most frequent choices (for the specific vote type: upvote/meh/downvote) should be offered as a menu. So you could either pick what other people mostly use, or write your own text.
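A minimal sketch of the data model this describes (all names are hypothetical, not an actual LW or Discord API):

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

VOTE_TYPES = ("like", "meh", "dislike")

@dataclass
class Vote:
    comment_id: int
    voter_id: int
    vote_type: str                 # one of VOTE_TYPES
    flavor: Optional[str] = None   # optional free-text flavor

def flavor_menu(votes, vote_type, top_n=5):
    # Offer the most frequent flavor texts for this vote type as a menu,
    # so voters can pick what others mostly use or write their own.
    counts = Counter(v.flavor for v in votes
                     if v.vote_type == vote_type and v.flavor)
    return [flavor for flavor, _ in counts.most_common(top_n)]
```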

Comment by viliam on No Really, Why Aren't Rationalists Winning? · 2019-06-04T21:06:00.726Z · score: 3 (2 votes) · LW · GW

Yeah, this. It is a mistake -- and I suspect a popular one -- to think that rationality trumps any amount of domain-specific knowledge or resources.

Ceteris paribus, a rational person playing the stock market should have an advantage over an irrational one with the same amount of skill, experience, time spent, etc. The question is whether this advantage creates a big difference or a rounding error. Another question is whether playing the stock market is actually a winning move: how much is skill, how much is luck, and whether the part that is skill is adequately rewarded... compared to using the same amount of skill somewhere else, and putting your savings into a passively managed index fund.

If you invest your own money, even if you do everything right, your profit will be 1000 times smaller than that of a person who invests 1000× more money equally well (a 10% return turns $10,000 into $11,000, but $10,000,000 into $11,000,000). So, even if you make a profit, it may be less than your potential salary somewhere else, because you are producing a multiplier on only a moderate amount of money (unless you started as a millionaire).

On the other hand, if you invest other people's money, it depends on the structure of the market: how much of other people's money is there to be invested, and how many people are competing for this opportunity. Maybe there are thousands of wannabe investors competing for the opportunity to manage a few dozen funds. Then, even if the smartest ones make a big profit, their personal reward may be quite small. Because the absolute value of your skill is not relevant here; what matters is the relative value of employing you versus employing the other guy who would love to take your position; and the other guy is pretty smart, too.

Comment by viliam on The Relationship Between the Village and the Mission · 2019-05-16T23:43:26.149Z · score: 5 (3 votes) · LW · GW
"if you propose a new thing, especially a new confusing thing, there's a good chance you'll get a disproportionate amount of vocal opposition compared to support" ... if I interpreted wrong please correct me

Yes, this is how I meant it, but in the context of Less Wrong, especially when the new thing is about rationalists having some emotional experience and becoming closer to each other. Even if it is an obviously voluntary activity no one is pressured to join. Unusual and confusing suggestions that would involve studying math or playing poker would not get that intensity of reaction.

(The surprising part is why singing songs together or living in the Dragon Army house is perceived as more dangerous than polyamory. But maybe the idea of polyamory came first, so the people who strongly objected to it were already gone when the other ideas arrived.)

Comment by viliam on The Relationship Between the Village and the Mission · 2019-05-16T23:32:02.657Z · score: 7 (4 votes) · LW · GW
Your comment is, I’m afraid, full of the most egregious strawmen

Looking at the discussion you linked... I admit I cannot find the horrible examples my mind keeps telling me I have seen. So, maybe I was wrong. Or maybe it was a different article, dunno. A few negative comments were deleted; but those were all written by the same person, so in either case they do not represent a mass reaction. The remaining comment closest to what I wanted to say is this one...

The whole point of rituals like this in religion is to switch off thinking and get people going with the flow. The epistemic danger should be pretty obvious. Ritual = irrational. [1]

...but even that one is not too bad.

It is only “a perfectly normal thing” because everyone who didn’t think it was perfectly normal, has left! ... It is a simple case of evaporative cooling!

This is a good point. Whatever the community does, if it causes the opposing people to leave, will in hindsight be seen as the obviously right thing to do (because those who disagree have already left), even if in a parallel Everett branch doing the opposite thing is seen as the obviously right thing.

I still feel weird about people who would leave a community just because a few members of the community sang a song together. Also, people keep leaving for all kinds of reasons. I am pretty sure some have left because of a lack of emotional connection, such as, uhm, doing things together.

Meta:

Okay, at this moment I feel quite confused about this comment I just wrote. Like, from certain perspectives it seems like you are right, and I am simply refusing to say "oops". At the very least, I failed to find a sufficiently horrible anti-Solstice comment.

Yet, somehow, it is you who is saying that there were people who left the rationality movement because of the Solstice ritual, which is the kind of hysterical reaction I tried to point at. (I can't imagine myself leaving a movement just because a few of its members decided to meet and sing a song together.)

Comment by viliam on The Relationship Between the Village and the Mission · 2019-05-15T22:16:31.734Z · score: 20 (7 votes) · LW · GW

I had in mind the proposals on Less Wrong to organize (1) the Solstice celebration and (2) the Dragon Army.

From my perspective, both cases were "hey, I have an idea of a weird but potentially awesome activity, here is an outline, contact me if you are interested", and in both cases, the debate was mostly about why this is a horrible thing to do, because only cultists would organize a weird activity in real life.

The Dragon Army pushed the Overton window so far that it is now difficult to remember what exactly was so horrifying about the Solstice celebration. But back then, the mere idea of singing together was quite triggering for a few people: singing is an irrational activity, it manipulates your emotions, it increases group cohesion which rubs contrarians the wrong way, it's what religious people do, yadda yadda yadda, therefore meeting with a group of friends and singing a song together means abandoning your rationality forever.

Now the Solstice celebration is a perfectly normal thing, and no one freaks out about it anymore. And I suppose that if there were a second and third attempt to do something like the Dragon Army, people would get used to that, too. But the reactions to the first attempts felt quite discouraging.

Comment by viliam on The Relationship Between the Village and the Mission · 2019-05-15T20:36:42.713Z · score: 9 (5 votes) · LW · GW
people whose coalitional membership is constituted by their shared adherence to “rational,” scientific propositions have a problem when—as is generally the case—new information arises which requires belief revision.

My first reaction was that perhaps the community should be centered around updating on evidence rather than any specific science.

But of course, that can fail, too. For example, people can signal their virtue by updating on tinier and tinier pieces of evidence. Like, when the probability increases from 0.000001 to 0.0000011, people start yelling about how this changes everything, and if you say "huh, for me that is almost no change at all", you become the unworthy one who refuses to update in the face of evidence.

(The people updating on the tiny evidence most likely won't even be technically correct, because purposefully looking for microscopic pieces of evidence will naturally introduce selection bias and double counting.)
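For scale, here is how small the update in that example actually is (assuming a straightforward Bayesian reading of the numbers):

```python
import math

def log_odds(p):
    return math.log(p / (1 - p))

before, after = 0.000001, 0.0000011
# The implied total strength of the evidence, in bits:
bits = (log_odds(after) - log_odds(before)) / math.log(2)
print(f"{bits:.2f} bits")  # ~0.14 bits, i.e. a likelihood ratio of about 1.1:1
```

Treating 0.14 bits of evidence as "this changes everything" is exactly the behavior described above.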

Comment by viliam on The Relationship Between the Village and the Mission · 2019-05-14T23:22:33.040Z · score: 15 (6 votes) · LW · GW

Could some of this be connected to the "geek social fallacies"? Specifically: some people seem to be community material; some people seem corrosive to any community; most are probably somewhere on the spectrum. If you try to make a community that includes the corrosive people, it will quickly and inevitably fall apart. However, some communities have "inclusion" as their applause light, so it requires some degree of hypocrisy and tacit coordination to navigate this successfully.

I suppose that even the religious communities that try to save everyone's soul are ultimately exclusive. This happens in two ways:

First, "doing some actual work" filters out lazy people, or people who prefer talking about things to actually doing things. There are people who could endlessly talk about helping the poor; but if you ask for volunteers who will cook the soup for the homeless, when the time comes to actually cook the soup, these talkers will not be there. Good!

Second, some people take more than they give, but you can balance this by making "taking" low status, and "giving" high status; and then having the high-status people meet separately. So you spend one afternoon cooking the soup and giving it to the homeless; but then you spend another afternoon or two with the fellow cooks in a place where the homeless people are not invited.

So, on one level you have people who love everyone so much that they even spend their free time cooking soup for the homeless. But on another level, you have a clever algorithm to filter out a kind of elite -- people who are altruistic and willing to work -- and have them network with each other, in the absence of the less worthy ones. No one mentions this explicitly, because debating it explicitly would probably ruin the effect: people uninterested in cooking soup for the homeless would start participating anyway, because they would realize the benefits of networking with the altruistic and hard-working ones.

I suspect that the atheist community meetup will be full of annoying and disagreeable people who would filter themselves out from the "religious people cooking soup for the homeless" meetup. They don't have to be all annoying and disagreeable, of course, but even a few of them can ruin the atmosphere.

Coordinating online probably also makes things worse. When you announce an activity, people who dislike the activity will give vocal feedback, and you suddenly find yourself in a debate with them, which is a complete waste of your time. As opposed to announcing the time and place on a flyer, so that people who are interested will come, and the people who are not will stay at home.

In my personal experience, I found the highest-quality people in various volunteer groups. It doesn't matter which: they could be campaigning for human rights, organizing a summer camp for kids, preparing educational reform materials, or mowing a meadow to save endangered plant species. Some of these activities have specific filters on profession or political alignment, but each of them at the same time filters for... I am not sure I can describe it correctly, but it is a good filter.

Comment by viliam on The Relationship Between the Village and the Mission · 2019-05-14T22:44:54.974Z · score: 4 (2 votes) · LW · GW
Getting a sense of who is already working on what. ...

I would love to read an overview of things that are being done, in the rationalist community. By reading Less Wrong regularly, I am exposed to many random things, but I may have large blind spots. I would like to see the curated big picture.

In addition to the big picture (a list of meetups or podcasts or research groups), it would also be nice to have a database of helpful people (who organize the meetups, or bring cookies), but the latter should probably not be public. I have heard stories of people who come to the rationalist community with the goal of extracting free work (under vague and non-committal promises of improving the world or contributing to charity) from naive people. So, if someone loves to bake cookies and bring them to meetups, it would be nice to give their contact to local meetup organizers, but not to make it completely public, where random parasites would spam them. Maybe a trivial inconvenience of "show me the specific work you have already done, before I give you the list of contacts" would be enough.

Comment by viliam on The Relationship Between the Village and the Mission · 2019-05-14T20:59:12.330Z · score: 7 (3 votes) · LW · GW

I would like to have a community that strives to be rational also "outside the lab". The words "professional bayesianism" feel like bayesianism within the lab. (I haven't read the book, so perhaps I am misinterpreting the author's intent.)

Google seems to invest huge amounts of effort into making sure they have a good internal community.

That's nice, but ultimately, if there is a tension between "what is better for you" and "what is better for Google", Google will probably choose the latter. What could possibly be good for you but bad for Google? Thinking for less than one minute I'd say: becoming financially independent, so you no longer have to work; building your own startup; finding a spouse, having kids, and refusing to work overtime...

Yeah, this is a fully general argument against any society, but it seems to me that a Village, simply by not being profit-oriented, would have greater freedom to optimize for the benefit of its members. For a business company, every employee is a cost. In a village, well-behaved citizens pay their own bills, and provide some value to each other; whether that value is greater or smaller, it is still positive or zero.

"Church" is something that can continues to succeed even in a large town or city where people come and go more easily (although I'm not confident this is a stable arrangement – once you have large cities, atomic individualism and the gradual erosion of Church might be inevitable)

An important part of being in the Church is being physically present at its religious activities, e.g. every Sunday morning. So even if you happen to be surrounded mostly by non-believers in your city, at least once a week you become physically surrounded by believers. (A temporary Village.) Physical proximity creates the kind of emotions that the internet cannot substitute for.

Church is an "eukaryotic" organization: it has a boundary on the outside (believers vs non-believers), but also inside (clergy vs lay members). This slows down value shift: you can accept many believers, while only worrying about value alignment of the clergy: potential heretical opinions of the lay members are just their personal opinions, not the official teaching; if necessary, the clergy will make this clear in a coordinated way. Having stronger filter in the inner boundary allows you to have weaker filter on the outer boundary, because there is no democracy in the outer circle.

Translated into the language of the article: a Mission can have multiple Villages, but a Village can only have one Mission. As an example, if meditation becomes popular among some rationalists, and they start going to Buddhist retreats and hanging out with Buddhists, and then they bring their nerdy Buddhist friends to rationality meetups... it should be clear that the rationalist community is at absolutely no risk of becoming a religious community, because the mysterious bullshit of Buddhism will be rejected (at least by the inner circle) just like the mysterious bullshit of any other religion. Similarly when people try to conquer the rationalist community for their political faction; but I believe we are doing quite well here.

You listen to sermons that establish common knowledge of what your people do-and-don't-do.

The important thing here is that the sermons come from the top. They do not represent the latest fashionable contrarian opinion. The Church provides many things for its members, but the freedom to give sermons is not one of them.

(To avoid misunderstanding: I am not praising dictatorship for dictatorship's sake here. Rather, it is my experience from various projects that there is a type of person who comes to introduce controversy, but doesn't contribute to the core mission. These people will cause drama, and provide nothing useful in return. If they win, they will only keep pushing further; if they lose, they will ragequit and maybe spend some time slandering you. It is nice to have a mechanism that stops them at the door. This is even more important in a group that attracts so many contrarians, and where "hey, you call yourselves 'rationalists', but you irrationally refuse my opinion before you have spent a thousand hours debating it thoroughly?!" is a powerful argument. The sermons are a tool of coordination, and coordination is hard.)

Comment by viliam on The Relationship Between the Village and the Mission · 2019-05-14T20:40:07.409Z · score: 7 (3 votes) · LW · GW

If the Mission requires a lot of work (or isn't paid well, so you need an extra job to pay your bills), people will have to reduce their involvement when they have kids. And most people are going to have kids at some point in their lives.

On the other hand, a Village without kids... should more properly be called a Hotel or a Campus.

Thus, the Village helps the Mission by keeping currently inactive people close, so even if you cannot use their work at the moment, you can still use some of their expertise. Also, the involvement doesn't have to be "all or nothing"; people with school-age kids can be involved part-time.

A Mission without a Village will keep losing tacit knowledge, and will probably have to put stronger pressure on keeping and recruiting members. (Which can become a positive feedback loop, if members start leaving because of the increased pressure, and the pressure increases as a reaction to the threat of losing members.)

Comment by viliam on Hierarchy and wings · 2019-05-09T14:57:09.921Z · score: 11 (3 votes) · LW · GW

I am happy you posted it here. It sounds reasonable, and from my perspective it doesn't feel mindkilling.

I have already assumed that state power is about taking resources (or defending yourself from having more taken from you; but if you have the power to achieve that, most people won't stop there). Saying that the right and the left are two different strategies for creating coalitions to achieve this goal sounds quite impartial to me. Maybe I have mentally edited out something offensive, dunno. But I like the definitions of "the Schelling point of power" and "the natural opposition to the former" (this is how I abbreviate it for myself). Definitely more useful than "the guys who would hypothetically sit on the left/right side in the 18th-century French parliament" or "it's just completely random coalitions".

Two things I would like to add:

1) This model seems to work for e.g. the USA, but the situation in e.g. post-communist Eastern Europe is the other way round. The Schelling point of power is "let's bring back communism", its natural leaders being the former apparatchiks and secret service officers (many of them, or their sons, still active in the current army and police). Yet this is considered "left-wing". And "right-wing" is the hodgepodge of free market and religious fundamentalism and everyone whose vision of the future does not include the return of communism.

More abstractly, the Schelling point of power depends on the recent historical events in a given country. If the previous military power self-identified as "left-wing", the naming gets reversed.

2) It seems possible to go at least one level deeper in this analysis. You have the "natural Schelling point", and "its natural opposition" defined as the people most likely to be oppressed by the former. But even the opposition oppresses someone -- there are "minorities within minorities" -- and thus we can sometimes get a second-order opposition, which may ally itself with the enemy of its enemy, despite not belonging there "naturally". Generally, the "enemy of my enemy" strategy can create weird coalitions.

To give a real-life example from American politics, the left-wing coalition includes feminists, gays, and ethnic minorities. But what if you are an ethnic minority member who criticizes how your minority treats its own women or gays? You will get labeled "right-wing". Even if you identify as left-wing, and your opinions and arguments are traditionally left-wing, picking the wrong target gets you thrown out of the coalition.

Comment by viliam on Reference request: human as "backup" · 2019-04-29T18:57:13.482Z · score: 4 (2 votes) · LW · GW

In The Matrix, the role of humans was quite similar to the role of mitochondria. (Except that it does not make sense.)

I imagine that at the beginning, humans could be useful to young AIs that would excel at some skills but fail at others. (One important role would be providing a human "face" in interactions with humans who don't like AIs.) However, that usefulness would only be temporary.

A eukaryotic cell cannot find a short-term replacement for mitochondria, and in evolution the long term does not happen without the short term. An intelligent designer -- such as a self-improving AI -- could, however, spend the time and resources to research a more efficient replacement for the functions the humans provide, if it made sense in the long term.

On the other hand, if the AI is under so much pressure that it cannot afford to do research, it probably also cannot afford to provide luxuries to its humans. So the humans will become the equivalent of cage-bred chickens.

Comment by Viliam on [deleted post] 2019-04-29T00:01:19.446Z

Thank you for writing this; it is an inspiration to many thoughts!

Seems to me that according to the "Copenhagen interpretation of ethics", knowing yourself makes you less moral; or makes your life more difficult if you want to remain moral.

If you don't understand your brain's [player's] Machiavellian moves, you cannot be blamed for them, as long as your [character's] intentions are pure. You simply do whatever feels right to you at the moment, and then you reap the rewards of the unconscious strategy given to you by evolution. You execute the shrewd moves with perfect innocence, and the outcome feels like good luck, or even good karma for... some random thing.

("My success is a result of my positive thinking and hard work. It is completely unrelated to the fact that I stabbed my former friends in the back when they outlived their purpose, and always kissed the asses of powerful people. No; I have simply found out that some people whom I considered friends in the past actually suck, and instead I decided to spend my time with genuinely awesome people whom I admire. And now I observe, full of gratitude, that the Universe has rewarded my constant striving for virtuous life.")

On the other hand, suppose you read a lot about evolutionary psychology, and get good at understanding your brain's motives. Your brain prompts you with a Machiavellian move, and despite feeling a genuine desire to act that way, you also clearly see it for what it is. ("My friend's behavior has felt really annoying recently; sometimes I am so irritated I wish we would just stop seeing each other. On a different level, I am also aware that he is no longer a useful ally to me. I have surpassed him in education, wealth, and social status; he can no longer offer me anything of use, other than sharing a few childhood memories. The time I spend with him these days would be much better spent networking with people in my current professional and social circles. A funny thing I notice is that exactly the same behavior of his seemed really cool while we were in high school, when he was a popular kid, and I was just an unpopular nerd who by sheer luck became his friend.")

The problem is, now that you [the character] see the true meaning of your brain's [player's] moves, you become complicit if you decide to follow through. It still feels like the desirable thing to do; you just no longer have the privilege of denying its strategic value. So you do it anyway, but now you feel dirty. (Or you don't do it, but now you feel like a sucker, because you are aware that most people in your situation probably would have done it, and would have benefited from doing so.) When you follow your brain's path and reach success, you know exactly what to attribute it to, and it probably doesn't make you proud. It is tempting to simply pretend that some things didn't happen for the reasons they did.

Comment by viliam on The Forces of Blandness and the Disagreeable Majority · 2019-04-28T21:39:02.242Z · score: 11 (7 votes) · LW · GW

I wonder how much of this is a consequence of the fact that in the offline world, rich people usually associate with rich people, and poor people associate with poor people (and when a poor person associates with a rich person, e.g. in the role of a servant, the poor person must behave in a way that the rich person finds proper)... but in the online world, we all use the same Facebook, Twitter, Reddit, etc.

So now rich people have the cultural shock of meeting the unwashed masses who don't give a fuck about their sensibilities, and will even laugh in their faces, protected by (perceived) online anonymity.

It should be possible to create separate gardens for the elites. Like, make a clone of a famous website, but require e.g. a $1000 yearly membership fee, and you get rid of the plebs. There already are projects like that. But as far as I know, they fail. On the internet, people provide value to each other, so a website for the 0.1% would have far fewer interesting stories, fewer cat videos, etc. It would be less offensive, but mostly because it would be dead.

It is probably also hard to find the exact line; I suppose the elites would prefer to avoid dealing with people too far below them, but would welcome the presence of people slightly below them -- those are not that different culturally, and because of how the top of the pyramid is shaped, there are lots of them, which means lots of useful content.

So instead, the rich people are trying to kick out the plebs from the online places they like. Using politeness and other things correlated with social class as an excuse.

Rationality Vienna Meetup June 2019

2019-04-28T21:05:15.818Z · score: 9 (2 votes)

Rationality Vienna Meetup May 2019

2019-04-28T21:01:12.804Z · score: 9 (2 votes)
Comment by viliam on 10 Good Things about Antifragile: a positivist book review · 2019-04-28T20:50:32.298Z · score: 4 (2 votes) · LW · GW

I guess we mostly agree here.

The current system of restaurants could suffer greatly if (1) some company started providing cheap delivery of high-quality food by drones, or (2) some epidemic made it dangerous for people to eat in public. Well, neither of these would wipe out the whole system, but that's just what I thought of in a few seconds; worse things could probably happen. Also, luck would play a great role: e.g. if first we had food delivery by drones, and a few months later the epidemic, with the proper timing the combined impact could be much greater than either of these individually. A machine that could automatically cook an (almost) arbitrary recipe at home (plus convenient delivery of the raw materials) -- at least the recipes usually found in restaurants -- could also change a lot.

Yes, having many parallel solutions that work slightly differently makes things more robust. This is a lesson I would love to see implemented in the school system: have hundreds of different types of schools, each providing education in a different way.

Comment by viliam on 10 Good Things about Antifragile: a positivist book review · 2019-04-27T21:44:39.750Z · score: 4 (2 votes) · LW · GW
There are bad events which cannot in principle be predicted.

However, Taleb can already predict which systems will benefit from those events. /s

This is my general problem with Taleb: it feels like his books keep telling you that no one can actually predict or understand something, only to suggest that Taleb has some kind of knowledge beyond knowledge that allows him to predict the unpredictable and explain the incomprehensible. Sorry, I don't buy this. If no one can predict stuff, then Taleb can't either; if Taleb can predict a thing or two about stuff, then possibly someone else can, too.

Of course, the "motte" is that institutions which are inflexible and their success is based on too many dubious assumptions, will break when something important changes, and such changes happen once in a while.

But beyond this, I think it is more likely to be a trade-off. A bet on things remaining the same, versus a bet on things changing quickly enough that we can actually benefit from being prepared for the change. A huge empire may gradually fall apart as a result of its own complexity and bureaucracy; but in the meanwhile, it will destroy hundreds of communities that weren't large enough and coordinated enough to resist the attack of the huge army of a centralized state. Other hundreds of communities will avoid the attention of the empire and survive. It is not obvious that being a member of a randomly selected community is better than being a citizen of the centralized state. Even a reliable prophecy that one day -- at an unspecified moment between today and 500 years from now -- the empire will fall apart will not make the choice easier. Or maybe one day Microsoft Windows will be completely replaced by thousands of competing flavors of Linux; I just don't believe that Bill Gates should lose sleep over that. One day Java will be the new Cobol, and all Python and Ruby developers will have a good laugh about it (that is, until Python and Ruby become new Cobols, too), but in the meanwhile, my Java skills are paying my bills. Etc.

So, one problem is that unless the changes come soon enough, your anti-fragility features are going to be just dead weight. (If they are providing some benefit in the meanwhile, it means you could have designed them for the purpose of that benefit, even without worrying about anti-fragility.) Another problem is that a genuinely unpredictable bad event can wipe out your anti-fragile solution, too. (Maybe the "anti-fragile" features you designed make it actually more susceptible to the event, not less. That's what genuine unpredictability means.)

tl;dr -- robust systems are usually more desirable than fragile ones, but "anti-fragility" is a pipe dream

Comment by viliam on When is rationality useful? · 2019-04-27T20:18:45.535Z · score: 3 (2 votes) · LW · GW
I feel like people who want to do X (in the sense of the word "want" where it's an actual desire, no Elephant-in-the-brain bullshit) do X, so they don't have time to set timers to think about how to do X.

Yeah. When someone does not do X, they probably have a psychological problem, most likely involving lying to themselves. Setting up a timer won't make the problem go away. (The rebelling part of the brain will find a way to undermine the progress.) See a therapist instead, or change your peer group.

The proper moment to go meta is when you are already doing X, already achieving some outcomes, and your question is how to make the already existing process more efficient. Then, 5 minutes of thinking can make you realize e.g. that some parts of the process can be outsourced or done differently or skipped completely. Which can translate to immediate gains.

In other words, you should not go meta to skip doing your ABC, but rather to progress from ABC to D.

If instead you believe that by enough armchair thinking you can skip directly to Z, you are using "rationality" as a substitute for prayer. Also, as another excuse for why you are not moving your ass.

Comment by viliam on When is rationality useful? · 2019-04-27T20:03:58.495Z · score: 4 (2 votes) · LW · GW

I guess we are talking about two different things, both of them useful. One is excellence in a given field, where success could be described as "you got a Nobel Prize, a bunch of stuff is named after you, and kids learn your name in high school". The other is keeping all aspects of your life in good shape, where success could be described as "you lived to age 100, fit and mostly healthy, with a ton of money, surrounded by a harem of girlfriends". In other words, it can refer to being in the top 0.0001% at one thing, or in the top 1-10% at many things that matter personally.

One can be successful at both (I am thinking of Richard Feynman now), but it is also possible to excel at something while your life otherwise sucks, or to live a great life that leaves no impact on history.

My advice was specifically meant for the latter (the general goodness of personal life). I agree that achieving extraordinary results at one thing requires spending extraordinary amounts of time and attention on it. And you probably need to put emphasis on different rationality techniques; I assume that everyday life would benefit greatly from "spend 5 minutes actually thinking about it" (especially when it is a thing you habitually avoid thinking about), while scientists may benefit relatively more from recognizing "teachers' passwords" and "mysterious answers".

How much could a leading mathematician gain by being more meta, for example?

If you are leading, then what you are already doing works fine, and you don't need my advice. But in general, according to some rumors, category theory is the part of mathematics where you go more meta than usual. I am not going to pretend to have any actual knowledge in this area, though.

In physics, I believe it is sometimes fruitful (or at least it was, a few decades ago) to think about "the nature of the physical law". Like, instead of just trying to find a law that would explain the experimental results, looking at the already known laws, asking what they have in common, and using those common parts as building blocks in the area you research. I am not an expert here, either.

In computer science, a simple example of going meta is "design patterns"; a more complex example would be thinking about programming languages and what their desirable traits are (as opposed to simply being an "X developer"), in extreme cases creating your own framework or programming language. Lisp or TeX would be among the high-status examples here, but even jQuery in its era revolutionized writing JavaScript code. You may want to be the kind of developer who looks at JavaScript and invents jQuery, or looks at book publishing and invents TeX.
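To make the last point concrete, here is a toy sketch in TypeScript (emphatically not the real jQuery implementation, just the shape of the idea): the meta step is noticing the plumbing you keep repeating at the object level, and wrapping it once in a chainable abstraction.

```typescript
// Object level: the same plumbing, repeated at every call site.
for (const el of Array.from(document.querySelectorAll<HTMLElement>(".warning"))) {
  el.style.color = "red";
  el.addEventListener("click", () => el.remove());
}

// Meta level: notice the pattern and wrap it once, jQuery-style.
class Query {
  constructor(private els: HTMLElement[]) {}

  static select(selector: string): Query {
    return new Query(Array.from(document.querySelectorAll<HTMLElement>(selector)));
  }

  // Each method returns `this`, which is what makes the chaining work.
  css(prop: string, value: string): Query {
    this.els.forEach(el => el.style.setProperty(prop, value));
    return this;
  }

  on(event: string, handler: (el: HTMLElement) => void): Query {
    this.els.forEach(el => el.addEventListener(event, () => handler(el)));
    return this;
  }
}

// The same behavior as the loop above, now a one-liner at every call site.
Query.select(".warning").css("color", "red").on("click", el => el.remove());
```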

Comment by viliam on When is rationality useful? · 2019-04-26T19:40:35.714Z · score: 10 (5 votes) · LW · GW
But I don't think there's a good reason to expect rationalists to do better unprompted—to have more unprompted imagination, creativity, to generate strategies—or to notice things better: their blind spots, additional dimensions in the solution space.

I wonder if it would help to build a habit around this. Something like dedicating 15 minutes every day to a rationalist ritual, which would contain tasks like "spend 5 minutes listing your current problems, 5 minutes choosing the most important one, and 5 minutes actually thinking about that problem".

Another task could be "here is a list of important topics in human life { health, wealth, relationships... }; spend 5 minutes writing down a short improvement idea for each of them, choose one topic, and spend 5 minutes expanding the idea into a specific plan". Or perhaps "make a list of your strengths, now think about how you could apply them to your current problems" or "make a list of your weaknesses, now think about how you could fix them at least a little" or... Seven tasks for seven days of the week. Or maybe six tasks, and one day spent reviewing the week and planning the next one.

The idea is to have a system that has a chance to give you the prompt to actually think about something.

Comment by viliam on How to make plans? · 2019-04-23T21:07:23.267Z · score: 14 (3 votes) · LW · GW

My guess for the most common planning mistakes:

1) Not having an actual plan, only a goal. Essentially, just saying "I want to be X", and then waiting for it to somehow magically happen. As opposed to researching how people actually get from "here" to "there", what kind of tasks they do, which skills they need, and then actually practicing those skills. In other words, not taking the first step, but instead waiting for the "right moment", which somehow never arrives; or if it does, it finds you unprepared.

2) Expecting the whole thing to happen in one big step, as opposed to setting up your activities and habits so that they keep drawing you in the desired direction.

For example, if you want to get fit, a typical failure is to buy an annual gym membership... and then never actually go there. (Unlike the previous example, you have actually taken the first step. But then you wait for the second step to happen magically.) A more successful plan would be to simply start doing push-ups every morning; and perhaps to think about how to reward yourself for doing so.

Or, if your goal is to become a writer, a typical failure is to start writing your big novel... only to end up a few years later with hundreds of pages of horribly written text, which obviously doesn't have a future, but the sunk costs are breaking your heart. (Here the problem is that you have skipped a few necessary steps.) A more successful plan would involve reading other people's texts and doing writing exercises, at a specified time every week. (Similarly for computer programming.)

Comment by viliam on On the Nature of Programming Languages · 2019-04-22T12:40:28.360Z · score: 6 (4 votes) · LW · GW

I never designed an actual programming language, but I imagine these would be some of the things to consider when doing so:

1. How much functionality do I want to (a) hardcode in the programming language itself, (b) provide as a "standard library", or (c) leave for the programmer to implement?

If the programming language provides something, some users will be happy that they can use it immediately, and other users will be unhappy because they would prefer to do it differently. If I wait until the "free market" delivers a good solution, there is a chance that someone much smarter than me will develop something better than I ever could, and it won't even cost me a minute of my time. There is also a chance that this doesn't happen (why would the supergenius decide to use my new language?) and users will keep complaining about my language missing important functionality. Also, there is a risk that the market will provide a dozen different solutions in parallel, each great at some aspect and frustrating at another.

Sometimes having more options is better. Sometimes it means you spend 5 years learning framework X, which then goes out of fashion, and you have to learn framework Y, which is not even significantly better, only different.

It seems like a good solution would be to provide the language, and the set of officially recommended libraries, so that users have a solution ready, but they are free to invent a better alternative. However, some things are difficult to do this way. For example, the type system: either your core libraries have one, or they don't.
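A concrete historical illustration of option (b) versus option (c), sketched in TypeScript: left-padding a string was famously left to the package ecosystem (the left-pad incident), and was later absorbed into the JavaScript standard library as String.prototype.padStart in ES2017.

```typescript
// Option (c): leave it to the programmer or the package ecosystem.
function leftPad(s: string, width: number, fill: string = " "): string {
  if (s.length >= width) return s;
  return fill.repeat(width - s.length).slice(0, width - s.length) + s;
}

// Option (b): the standard library provides it (added in ES2017).
console.log(leftPad("42", 5, "0")); // "00042"
console.log("42".padStart(5, "0")); // "00042"
```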

2. Who is the target audience: noobs or hackers?

Before giving a high-status answer, please consider that there are several orders of magnitude more noobs than hackers; and that most companies prefer to hire noobs (or perhaps someone in the middle) because they are cheaper and easier to replace. Therefore, a noob-oriented language may become popular among developers, used in jobs, taught at universities, and develop an ecosystem of thousands of libraries and frameworks... while a hacker-oriented language may be the preferred toy or an object of worship of a few dozen people, but will be generally unknown, and as a consequence it will be almost impossible to find a library you need, or get an answer on Stack Exchange.

Hackers prefer elegance and abstraction; programming languages that feel like mathematics. Noobs prefer whatever their simple minds perceive as "simple", which is usually some horrible irregular hack; tons of syntactic sugar for completely trivial things (the only things the noob cares about), optional syntax that introduces ambiguity into parsing but hey, it saves you a keystroke now and then (mostly-optional semicolons, end of line as end of statement except when not), etc.
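The "end of line as end of statement, except when not" jab has a canonical example: automatic semicolon insertion in JavaScript/TypeScript. A minimal sketch of how the saved keystrokes bite back:

```typescript
// The parser silently inserts a semicolon after `return`, so the function
// returns undefined and the object literal below is never evaluated.
function broken(): { value: number } | undefined {
  return
    { value: 1 };
}

console.log(broken()); // prints "undefined", not "{ value: 1 }"
```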

Hacker-oriented languages do not prevent you from shooting yourself in the foot, because they assume that either you are not going to, or that you are doing it for a good reason, such as improvised foot surgery. Noob-oriented languages often come with lots of training wheels (such as declaring your classes and variables "private", because just asking your colleagues nicely to avoid using undocumented features would have zero effect), and then sometimes with power tools designed to remove those training wheels (like when you find out that there actually may be a legitimate reason to access the "private" variables, e.g. for the purpose of externalization).
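A sketch of the "training wheels plus power tools" pattern in TypeScript, where the situation is especially stark because the access check exists only at compile time:

```typescript
class Connection {
  private handle = 42; // training wheel: the compiler forbids outside access
}

const conn = new Connection();
// console.log(conn.handle);       // compile error: 'handle' is private
console.log((conn as any).handle); // power tool: the check is erased at runtime,
                                   // so a cast walks right past it -- e.g. when
                                   // hand-rolling serialization of the object
```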

Unfortunately, this distinction cannot be communicated openly, because when you say "this is only meant for hackers to use", every other noob will raise their hand and say "yep, that means me". You won't have companies admit that their business model is to hire cheap and replaceable noobs, because most of their energy will be wasted through mismanagement and lack of analysis anyway. But when designing a language, you need to consider all the usual horrible things the average developer is going to do with it... and either add a training wheel, or decide that you don't care.

3. It may depend on the type of project. But I fear that in 9 out of 10 cases where someone uses this argument, it is actually a matter of premature optimization.

Comment by viliam on Slack Club · 2019-04-19T22:19:03.339Z · score: 4 (2 votes) · LW · GW

I think I get what you mean.

Maybe this is somehow related to the "openness to experience" (and/or autism). If you are willing to interact with weird people, you can learn many interesting things most people will never hear about. But you are also more likely to get hurt in a weird way, which is probably the reason most people stay away from weird people.

And as a consequence, you develop some defenses, such as allowing interaction only to some specific degree, and no further. Instead of filtering for safe people, you filter for safe circumstances. Which protects you, but also cuts you off from possible gains, because in reality, some people are more trustworthy than others, and trustworthiness correlates negatively with some types of weirdness.

Like, instead of "I would probably be okay inviting X and Y to my home, but I have a bad feeling about inviting Z to my home", you are likely to have a rule like "meeting people in a cafeteria is okay, inviting them home is taboo". Similarly, "explaining concepts to someone is okay, investing money together is not".

So on one hand, you are willing to tell a complete stranger in a cafeteria the story of your religious deconversion and your opinion on Boltzmann brains (which would be shocking to average people); but you will probably never spend a vacation together with the people who are closest to you in intellect and values (which average people do all the time).

Comment by viliam on Slack Club · 2019-04-17T21:52:19.866Z · score: 9 (5 votes) · LW · GW

Seems to me that we have members at both extremes. Some of them drop all caution the moment someone else calls themselves a rationalist. Some of them freak out when someone suggests that rationalists should do something together, because that already feels too cultish to them.

My personal experience is mostly with the Vienna community, which may be unusual, because I haven't seen either extreme there. (Maybe I just didn't pay enough attention.) I learn about the extremes on the internet.

I wonder what the distribution would be in the Bay Area. Specifically, on one axis I would like to see people ranked from "extremely trusting" to "extremely mistrusting", and on the other axis, how deeply those people are involved with the rationalist community. That is, whether the extreme people are in the center of the community, or somewhere on the fringe.

Comment by viliam on Slack Club · 2019-04-16T22:18:21.609Z · score: 41 (12 votes) · LW · GW
My suspicion is that people see that Eliezer gained a lot of prestige via his writing ... and I suspect people make the (reasonable) assumption that if they do something similar maybe they will gain prestige from their writing targeted to other rationalists.

I'd like to emphasize the idea "people try to copy Eliezer", separately from the "naming new concepts" part.

It was my experience from Mensa that highly intelligent people are often too busy participating in pissing contests, instead of actually winning at life by engaging in lower-status behaviors such as cooperation or hard work. And, Gods forgive me, I believed we (the rationalist community) were better than that. But perhaps we are just doing it in a less obvious way.

Trying to "copy Eliezer" is a waste of resources. We already have Eliezer. His online articles can be read by any number of people; at least this aspect of Eliezer scales easily. So if you are tempted to copy him anyway, you should consider the hypothesis that you are actually trying to copy his local status. You have found a community where "being Eliezer" is high-status, and you are unconsciously pushed towards increasing your status. (The only thing you cannot copy is his position as a founder. To achieve this, you would have to rebrand the movement, and position yourself in the new center. Welcome, post-rationalists, et al.)

Instead, the right thing to do is:

  • cooperate with Eliezer, especially if your skills complement his. (The question is how good Eliezer himself is at this kind of cooperation. I am on the opposite side of the planet, so I have no idea.) Simply said, anything Eliezer needs to get done but doesn't have a comparative advantage at, if you do it for him, you free his hands and head to do the things he actually excels at. Yes, this can mean doing low-status things. Again, the question is whether you are optimizing for your status, or for something else.
  • try alternative approaches where the rationalist community seems to have blind spots. Such as Dragon Army, which really challenged the local crab mentality. My great wish is to see other people build their own experiments on top of this one: to read Duncan's retrospective, form their own idea of "we want to copy this, we don't want to copy that, and we want to introduce these new ideas", and then go ahead and actually do it. And post their own retrospective, etc. So that finally we may find a working model of a rationalist community that actually wins at life, as a community. (And of course, anyone who tries this has to expect strong negative reactions.)

I strongly suspect that the internet itself (the fact that rationalists often coordinate as an online community) is a negative pressure. The internet is inherently biased in favor of insight porn. Insights get "likes" and "shares"; verbal arguments receive fast rewards. Actions in the real world usually take a lot of time, and thus don't make a good online conversation. (Imagine that every few months you acquire one boring habit that makes you more productive, and as a cumulative result of ten such years you achieve your dreams. Impressive, isn't it? Now imagine a blog that every few months publishes a short article about the new boring habit. Such a blog would be a complete failure.) I would expect rationalists living close to each other, and thus mostly interacting offline, to be much more successful.

Comment by viliam on Agency and Sphexishness: A Second Glance · 2019-04-16T20:54:57.326Z · score: 4 (2 votes) · LW · GW

Perhaps there is an optimal balance between habits and deliberation.

Too much on the side of habits, and you just keep doing the same behavior over and over again. Not necessarily a bad thing; sometimes you get lucky and the strategy you started with is actually a good one, and can bring you success in life. But you need to be lucky.

Too much on the side of deliberation, and your clever ideas get undermined by the lack of "automated operations" that would keep you moving forward. The result is procrastination, well known among the readers of this website.

And the optimal balance probably depends on your current situation in life. After you achieve some success, you have more choices, and deliberation probably becomes more useful. But again, there is such a thing as too much meta-deliberation; obsessing over "exactly how much time should I spend thinking and how much time should I spend working" generates neither useful work nor useful directions for work.

I guess the more meta it is, the less time you should give it, unless you already have evidence that the previous level of meta was useful to you. (When you notice that spending some time thinking increases the productivity of the time you spend working, that is the right moment to think about how much time you actually want to spend planning.) Also, meta decisions take time to bear fruit at the object level, so when you make plans, you should spend the following days executing the plans instead of adjusting them; otherwise you are deciding without feedback.

Comment by viliam on Why is multi worlds not a good explanation for abiogenesis · 2019-04-13T15:40:14.146Z · score: 7 (4 votes) · LW · GW
nearly anything can be a consequence of infinitely many worlds

This feels like complaining that if you flip a coin a million times, all outcomes are possible.
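To spell out the analogy: "possible" does almost no work here, because any particular outcome is astronomically unlikely. For a million fair flips:

```latex
P(\text{one specific sequence of } n \text{ flips}) = 2^{-n},
\qquad 2^{-10^6} \approx 10^{-301030}
```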

Comment by viliam on Why is multi worlds not a good explanation for abiogenesis · 2019-04-12T21:57:21.521Z · score: 22 (11 votes) · LW · GW

In many worlds, everything happens, but not everything happens with equal "probability". Less miraculous paths towards life are more likely than more miraculous paths towards life. Thus, even if life sees itself with probability 100%, it most likely sees itself as having evolved in the least miraculous way.

So, in the end, we are in the same situation as we were before considering many worlds: looking for the most likely way life could have evolved, because that is most likely our history.

(In other words, many worlds do introduce miracles, but they still favor the solutions that didn't use them.)
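One way to write the intuition down (a sketch, assuming Born-rule weights $w(h)$ over histories $h$, with $L$ = "life evolved here"):

```latex
P(h \mid L) = \frac{P(L \mid h)\, w(h)}{\sum_{h'} P(L \mid h')\, w(h')}
```

Conditioning on our existence rescales the weights but does not erase them, so among the histories that do produce life, the highest-weight (least miraculous) ones still dominate.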

Comment by viliam on Is reality warping theoretically possible ? · 2019-04-11T17:23:41.169Z · score: 3 (2 votes) · LW · GW

Logic? Almost certainly no. I have no idea what kind of activity could even in theory lead to a change in logic.

Physics? Depends on what you mean by "laws". I don't really understand these things, but I think there is a hypothesis that some physical constants were established near the beginning of the universe. So perhaps if we could create similar conditions again, and somehow make the constants different...

But it doesn't seem technically possible, because we live inside the universe, and we would have to collect a lot of its energy together again. It's not like we can find, inside the universe, a source of energy as big as the universe itself was at the beginning, when all its energy was concentrated in a small place.

Now a different question is whether we could discover new laws of physics. Then perhaps some of these new laws could help us create unimaginable amounts of energy, and maybe even create a new universe with different laws. I think it is quite likely that we already know too much for that, and whatever new discoveries remain would not give us that kind of magical power.

Rationality Vienna Meetup April 2019 · 2019-03-31T00:46:36.398Z · score: 8 (1 votes)

Does anti-malaria charity destroy the local anti-malaria industry? · 2019-01-05T19:04:57.601Z · score: 64 (17 votes)

Rationality Bratislava Meetup · 2018-09-16T20:31:42.409Z · score: 18 (5 votes)

Rationality Vienna Meetup, April 2018 · 2018-04-12T19:41:40.923Z · score: 10 (2 votes)

Rationality Vienna Meetup, March 2018 · 2018-03-12T21:10:44.228Z · score: 10 (2 votes)

Welcome to Rationality Vienna · 2018-03-12T21:07:07.921Z · score: 4 (1 votes)

Feedback on LW 2.0 · 2017-10-01T15:18:09.682Z · score: 11 (11 votes)

Bring up Genius · 2017-06-08T17:44:03.696Z · score: 55 (50 votes)

How to not earn a delta (Change My View) · 2017-02-14T10:04:30.853Z · score: 10 (11 votes)

Group Rationality Diary, February 2017 · 2017-02-01T12:11:44.212Z · score: 1 (3 votes)

How to talk rationally about cults · 2017-01-08T20:12:51.340Z · score: 5 (10 votes)

Meetup : Rationality Meetup Vienna · 2016-09-11T20:57:16.910Z · score: 0 (1 votes)

Meetup : Rationality Meetup Vienna · 2016-08-16T20:21:10.911Z · score: 0 (1 votes)

Two forms of procrastination · 2016-07-16T20:30:55.911Z · score: 10 (11 votes)

Welcome to Less Wrong! (9th thread, May 2016) · 2016-05-17T08:26:07.420Z · score: 4 (5 votes)

Positivity Thread :) · 2016-04-08T21:34:03.535Z · score: 26 (28 votes)

Require contributions in advance · 2016-02-08T12:55:58.720Z · score: 61 (61 votes)

Marketing Rationality · 2015-11-18T13:43:02.802Z · score: 28 (31 votes)

Manhood of Humanity · 2015-08-24T18:31:22.099Z · score: 10 (13 votes)

Time-Binding · 2015-08-14T17:38:03.686Z · score: 17 (18 votes)

Bragging Thread July 2015 · 2015-07-13T22:01:03.320Z · score: 4 (5 votes)

Group Bragging Thread (May 2015) · 2015-05-29T22:36:27.000Z · score: 7 (8 votes)

Meetup : Bratislava Meetup · 2015-05-21T19:21:00.320Z · score: 1 (2 votes)