Rationality Vienna Meetup June 2019 2019-04-28T21:05:15.818Z · score: 9 (2 votes)
Rationality Vienna Meetup May 2019 2019-04-28T21:01:12.804Z · score: 9 (2 votes)
Rationality Vienna Meetup April 2019 2019-03-31T00:46:36.398Z · score: 8 (1 votes)
Does anti-malaria charity destroy the local anti-malaria industry? 2019-01-05T19:04:57.601Z · score: 64 (17 votes)
Rationality Bratislava Meetup 2018-09-16T20:31:42.409Z · score: 18 (5 votes)
Rationality Vienna Meetup, April 2018 2018-04-12T19:41:40.923Z · score: 10 (2 votes)
Rationality Vienna Meetup, March 2018 2018-03-12T21:10:44.228Z · score: 10 (2 votes)
Welcome to Rationality Vienna 2018-03-12T21:07:07.921Z · score: 4 (1 votes)
Feedback on LW 2.0 2017-10-01T15:18:09.682Z · score: 11 (11 votes)
Bring up Genius 2017-06-08T17:44:03.696Z · score: 56 (51 votes)
How to not earn a delta (Change My View) 2017-02-14T10:04:30.853Z · score: 10 (11 votes)
Group Rationality Diary, February 2017 2017-02-01T12:11:44.212Z · score: 1 (3 votes)
How to talk rationally about cults 2017-01-08T20:12:51.340Z · score: 5 (10 votes)
Meetup : Rationality Meetup Vienna 2016-09-11T20:57:16.910Z · score: 0 (1 votes)
Meetup : Rationality Meetup Vienna 2016-08-16T20:21:10.911Z · score: 0 (1 votes)
Two forms of procrastination 2016-07-16T20:30:55.911Z · score: 10 (11 votes)
Welcome to Less Wrong! (9th thread, May 2016) 2016-05-17T08:26:07.420Z · score: 4 (5 votes)
Positivity Thread :) 2016-04-08T21:34:03.535Z · score: 26 (28 votes)
Require contributions in advance 2016-02-08T12:55:58.720Z · score: 62 (62 votes)
Marketing Rationality 2015-11-18T13:43:02.802Z · score: 28 (31 votes)
Manhood of Humanity 2015-08-24T18:31:22.099Z · score: 10 (13 votes)
Time-Binding 2015-08-14T17:38:03.686Z · score: 17 (18 votes)
Bragging Thread July 2015 2015-07-13T22:01:03.320Z · score: 4 (5 votes)
Group Bragging Thread (May 2015) 2015-05-29T22:36:27.000Z · score: 7 (8 votes)
Meetup : Bratislava Meetup 2015-05-21T19:21:00.320Z · score: 1 (2 votes)


Comment by viliam on The horse-sized duck: a theory of innovation, individuals and society · 2020-04-02T22:59:16.860Z · score: 7 (6 votes) · LW · GW

Sometimes I get the impression that people on the autistic spectrum "have outlived their usefulness" (TV Tropes) from the perspective of society. There was a time, not that long ago, when normies didn't care about computers, because using them required esoteric knowledge of things such as binary numbers, and they didn't care about the internet, because it was mostly a way to interact with people who cared about those esoteric things. To become good with computers, you had to spend a lot of time obsessively studying something that didn't have much value in the eyes of most people.

Then it became common knowledge that IT is where the money is, and working with computers also became easier. Suddenly, people with no intrinsic interest in esoteric knowledge started paying attention to IT. And now you have students of computer science who freely admit that they don't actually like programming and consider it boring... but they are willing to do it for money (because presumably all other jobs are boring, too).

The weirdos became a minority in the field they created, and the social norms are turning against them. Caring about the craft has already become low-status; if you care about clean code and algorithmic complexity, you are obviously not paying attention to the larger picture, i.e. the buzzwords the management is currently most happy about. There are not enough resources to do anything properly (although there sometimes are resources to do the same thing over and over again, as the old solutions keep falling apart under their technical debt). Social skills are more important than technical ones. Even in open source, people are kicked out of projects for being bad at political games.

Of course, there is value in social skills, and there is harm in excessive weirdness. People can have long unproductive wars about the minutiae of formatting source code. Lack of communication within a project can waste a lot of resources. Documentation sucks when it is written by people who hate talking to others. Introducing social skills to the project should be good... if we could keep the balance. If the people with social skills could respect the people with technical skills, and vice versa. But it seems to me that after the initial resistance is broken, the pendulum swings to the opposite extreme, and suddenly we have a formerly nerdy profession where people are regularly reminded that nerds suck.

Normie-ness is a positive feedback loop; the more normies you have, the greater the pressure to eliminate the non-normies. People with better social skills will almost by definition succeed at pushing the narrative that what we really need is to give even more power to people with social skills. And when things start falling apart, instead of shutting up and fixing the code, more and more meetings are scheduled, because for a normie, talking endlessly is the preferred (and the only known) way to solve all problems.

To some degree, this is not as bad as it sounds. Software is easy to copy. You could have 99% of software projects completely dysfunctional, and the remaining 1% would still move the planet forward. Similarly, you can have a million anti-vaxxers, but as long as you have one Einstein, science can still move forward. One person doing the right thing is more important than millions wasting time, if the solution can be copied.

But ultimately, the resources are scarce, and the people pretending to care are competing against the people who actually care. When you get to the point where the Einstein can't get a job, because he is outcompeted at every position by people with better social skills, then -- unless he is independently wealthy (but how could he save for early retirement if he can't get a good job?) or he has a generous sponsor (but here he also competes against people who have better social skills) -- he will not be able to work on his theory of relativity. And if only 1% of programmers care about clean code, you won't get clean code in 1% of projects; it will be much less, because most projects are developed by teams, and you would need a majority of the team to actually care.

Comment by viliam on Why do we have offices? · 2020-04-01T22:08:22.730Z · score: 4 (2 votes) · LW · GW

There are multiple reasons, and here is one of them:

Imagine yourself as a boss. How would you check whether your employees are doing the stuff you pay them for, or just taking your money and slacking? (Because there are many people who would enjoy the opportunity to take your money for nothing.)

This depends on the work. Sometimes the outputs are easy to measure and easy to predict. Suppose your employees are making boxes out of cardboard. You know how many boxes per hour the average worker can make, so you have a simple transformation of your money into the number of boxes produced. If someone does not produce enough boxes, they are either incompetent or slacking; in both cases it would make sense to replace them with someone who will produce enough boxes.

This is the type of work that would be safe to let people do remotely -- as long as the same number of boxes is produced, you get the value you paid for -- although there may be other reasons that make it difficult: transporting the cardboard and the boxes, or perhaps the need for a machine.

But imagine work like software development. To the eternal frustration of managers, the output is hard to measure. Both because of the inherent randomness of the work (bugs appear unexpectedly and may take a lot of time to fix), and because the people who supervise the work are usually not programmers themselves (so they have no idea how much time "writing a REST controller which provides data serialized in XML format" should take -- are we talking minutes or weeks?). Different people have different strong opinions on what quality means, but it is a fact that some projects can grow steadily for years, while others soon collapse under their own weight.

Having this kind of work done remotely, how do you distinguish between the case where the employee solved a difficult problem, fixed someone else's bug, and spent some time preventing future bugs... and the case where someone did some quick and dirty work in 2 hours, spent the remaining 6 hours watching Netflix, and afterwards reported 8 hours of work? Trying to impose some simple metric such as "lines of code written per day" is more likely to hurt than help, because it punishes useful legitimate work, such as designing, or fixing bugs.

Making people stay in the office guarantees that they will not spend 6 hours watching Netflix. They may do good work, they may do bad work, or they may find ways to procrastinate (e.g. watch YouTube videos instead). But at least there is a long list of things they can't do.

It seems like a problem of trust, but on a deeper level it is a problem that you can't even "trust but verify" if you can't actually verify the quality of the output. So you have to rely on things like "spent enough time looking busy", which sucks for both sides.

Comment by viliam on Mati_Roy's Shortform · 2020-03-30T20:14:48.266Z · score: 3 (2 votes) · LW · GW

High status feels better when you are near your subordinates (when you can watch them, randomly disrupt them, etc.). High-status people make the decision whether remote work is allowed or not.

Comment by viliam on Thomas Kwa's Shortform · 2020-03-24T00:46:12.470Z · score: 3 (2 votes) · LW · GW

Something like Goodhart's Law, I suppose. There are natural situations where X is associated with something good, but literally maximizing X is actually quite bad. (Having more gold would be nice. Converting the entire universe into atoms of gold, not necessarily so.)

EY has practiced the skill of trying to see things like a machine. When people talk about "maximizing X", they usually mean "trying to increase X in a way that proves my point"; i.e. they use motivated thinking.

Whatever X you take, the priors are almost 100% that literally maximizing X would be horrible. That includes the usual applause lights, whether they appeal to normies or nerds.

Comment by viliam on Should we all be more hygenic in normal times? · 2020-03-17T18:44:22.529Z · score: 4 (2 votes) · LW · GW

Should humans be less disgusting? All in favor, raise your tentacle...

There is so much low-hanging fruit. Doctors don't wash their hands consistently. Parents send sick kids to kindergartens and schools. They are told repeatedly not to, and they ignore it. Cost/benefit analysis? If I send my sick child to kindergarten, it's your cost and my benefit; that's all I need to know.

Comment by viliam on How would you take math notes to make the most of them? · 2020-03-17T01:12:53.817Z · score: 6 (3 votes) · LW · GW

It helps me to imagine that I am explaining the topic to someone else. If I had enough time, I would never copy the textbook; I would rewrite it using my own words, and probably change the entire structure. (In other words, instead of "paper1 -> paper2", it would go "paper1 -> internal model -> paper2".) Unfortunately, doing things the way I wish takes a lot of time.

For example, if I make notes about programming, I am trying to write the simplest code that illustrates the concept in isolation from other concepts. (Most examples I find online are introducing multiple concepts at the same time. Okay, I suppose in reality, you usually use X and Y and Z together in the same project. But I still want to see X used separately, and Y separately, and Z separately. And then an example of how X and Y and Z go together.)

I would suggest exploring the concept in unusual ways. For example, when you learn about commutative operators, don't just use "addition" and "multiplication" as the obvious examples, but also think about ones like "least common multiple" or even "these words have the same number of strokes in Chinese". (Ultimately arriving at "there is an arbitrary undirected graph, where the nodes are the possible inputs, and each edge carries an arbitrary output as a label".)
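A quick way to play with a less obvious operator is to spot-check its commutativity mechanically. A minimal sketch, using least common multiple as the example (the helper function is defined here for clarity; newer Python versions ship one in the standard library):

```python
from math import gcd

def lcm(a, b):
    # least common multiple -- commutative, like addition and multiplication
    return a * b // gcd(a, b)

# spot-check commutativity on a few pairs
pairs = [(4, 6), (3, 7), (12, 18), (5, 5)]
assert all(lcm(a, b) == lcm(b, a) for a, b in pairs)
assert lcm(4, 6) == 12
```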

Also, when you learn things, the value is not merely in the individual things, but also (mostly?) in their connections to other things. That is the difference between a newbie who can recite the facts but cannot apply them, and an expert who can immediately take three abstract concepts and chain them together to solve a problem. (Not sure what exactly this implies for note-taking and the zettelkasten method. My preferred way to make notes would be like making wiki pages, so I would mention these connections at the bottom of the page.) For example, there are many proofs that there are infinitely many primes, but I enjoyed reading an argument about how having finitely many primes would allow us to create an insane compression algorithm. (You take the input as a binary number, factorize it, and save the factors. If your input is much larger than the hypothetical largest prime, the output file size will be a logarithm of the input file size.)
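The compression argument can be made concrete with a toy sketch. Assume, counterfactually, that the only primes were 2, 3, 5 and 7; then every number would factor over this fixed basis, and could be stored as a short exponent vector (each exponent is at most the bit length of the number, so the encoding shrinks roughly logarithmically):

```python
PRIMES = [2, 3, 5, 7]  # the hypothetical "complete" list of primes

def encode(n):
    """Store n as its exponent vector over the fixed prime basis."""
    exponents = []
    for p in PRIMES:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        exponents.append(e)
    assert n == 1, "n has a prime factor outside the basis"
    return exponents

n = 2**20 * 3**10 * 5**5 * 7**2
print(n.bit_length())   # 54 bits as a plain binary number...
print(encode(n))        # ...versus four small exponents: [20, 10, 5, 2]
```

Of course the premise is false -- real numbers have arbitrarily large prime factors, which is exactly why the argument proves there are infinitely many primes.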

Comment by Viliam on [deleted post] 2020-03-17T00:42:52.328Z

So, you did something that made you feel smarter. To make sure the effect is real, you could take an IQ test, assuming you took one in the past, and compare the numbers.

I think it is relatively common to have a feeling of becoming smarter without actually being so. The change of mood already can move you from "ignores things" to "observes things and wonders about details". Learning something gives you domain-specific knowledge. Abstract mumbo-jumbo can make you feel like you understand the deep truths about the world. Good speakers know how to induce these feelings in their audience. Crackpots can induce them in themselves.

But the feeling doesn't necessarily correspond to reality. In fact, it is often the other way round; e.g. when people are on drugs, their critical thinking turns off, and they believe themselves to be super-smart. Only when they write down their supreme wisdom, it turns out to be garbage once they sober up. Your hormones can have a similar effect, e.g. if you are super excited about something.

Try doing actual tasks that someone else gave you, and see whether you actually became better according to that other person's criteria. Anything else is just potentially deluding yourself.

Comment by viliam on How Do You Convince Your Parents To Prep? To Quarantine? · 2020-03-17T00:28:22.424Z · score: 2 (1 votes) · LW · GW

For me, "Italy" sounds convincing, because it is closer to us -- I live in Europe -- geographically and culturally than China is. (Talking about China feels about as relevant as talking about Mars.)

A video from Italy, showing the crowded hospitals and soldiers on streets, would probably feel more convincing than citing numbers. (Also, this was shared on SSC.) I would only cite numbers afterwards to say something like "see, two or three weeks ago they also had only X known cases".

I would probably try convincing along the lines of: (1) if everyone is going to stop their social life in two weeks anyway, we might as well do it today, and (2) many people are asymptomatic or have mild symptoms, and the incubation time is several days during which people already spread the virus, so by the time you know of 1 person in your neighborhood with severe symptoms, there are probably already a hundred spreading the virus.

Also, when talking about the probability of death, I would add that even "non-death" can mean a lot of pain and irreversibly damaged health.

Most people are altruistic, therefore I would emphasise "you might unknowingly infect people you care about" over "you might get sick and die". (Also, gender stereotypes: men are socially conditioned to not worry about what happens to them, but they are supposed to protect their families.)

If your parents don't have Skype (or equivalent) ready, install it now.

Start buying stuff for your parents even before you have convinced them. Say "I know you don't share my worries, but knowing that you have this stuff makes me feel much better, please accept it".

Comment by viliam on How Do You Convince Your Parents To Prep? To Quarantine? · 2020-03-17T00:13:17.081Z · score: 2 (1 votes) · LW · GW

If she manages to convince them later, the supplies will already be there, so it's definitely a good move.

Comment by viliam on Positive Feedback -> Optimization? · 2020-03-17T00:09:17.833Z · score: 4 (2 votes) · LW · GW

Not sure whether this is what you meant, but there is a difference between a situation where resources are abundant and growth is an exponential function of the speed of reproduction, and one where resources become scarce and reproduction is only one important parameter, along with survival and interaction with competitors.

To continue with your example, imagine that Y has a faster doubling rate than X (assuming abundant resources), but X can disassemble Y to create its own copies, while Y can't do the same to X. So there will first be a period when Y exponentially outgrows X, followed by a period where Y gradually disappears.

If you want to model this by matrices or something similar, you need to somehow include this aspect.

Also, the reality will be more complicated, because the values of X and Y and their interaction may depend on local environment. So it is possible that X eliminates Y in warm waters, but Y survives around the poles. Then it is possible that X evolves into intelligent species that causes global warming... okay, this is probably outside the scope of the original question.
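A toy simulation of the first point shows the two phases; all parameter values here are made up for illustration:

```python
# Toy model (all numbers invented): Y has the faster doubling rate,
# but X can disassemble Y and turn it into more X.
x, y = 1.0, 1.0
rx, ry = 0.3, 0.7      # per-step growth rates: Y reproduces faster
c = 0.001              # rate at which X converts Y into copies of itself
history = []
for step in range(60):
    converted = c * x * y                  # amount of Y disassembled this step
    x = x * (1 + rx) + converted
    y = max(y * (1 + ry) - converted, 0.0)
    history.append((x, y))

# Phase 1: Y exponentially outgrows X while X is still rare.
assert history[5][1] > history[5][0]
# Phase 2: once X is numerous, Y is driven to zero.
assert history[-1][1] == 0.0
```

A plain growth-rate matrix would only capture phase 1; the nonlinear `c * x * y` interaction term is what produces the reversal.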

Comment by viliam on When are the most important times to wash your hands? · 2020-03-16T20:46:07.235Z · score: 2 (1 votes) · LW · GW

So far, I have never protected myself against coronavirus in summer.

Under more usual circumstances, I simply don't think about my phone as a possible infection vector. Which is possibly a big mistake.

The wallet is usually in some bag.

Comment by viliam on When are the most important times to wash your hands? · 2020-03-15T15:19:28.241Z · score: 6 (4 votes) · LW · GW

From my perspective, my wallet is part of "outside the house". I don't literally leave it outside, but I leave it in my coat pocket and never touch it when I am inside. Now I have learned to do the same thing with my keys -- I open the door, put the keys in the pocket, take off the coat, and hang it up. Then I wash my hands. The wallet and keys are thus not touched until I go for a walk again, so it's kinda equivalent to leaving them outside.

The most problematic thing is the phone. That one I use both inside and outside, so I have to clean it a lot. (It would be nice to have two phones, where you could use one to remotely activate or deactivate the other. Then I would have an inside phone and an outside phone.)

More generally, this strategy seems like what cultures more obsessed with purity do. Instead of cleaning everything all the time, you specify various zones of cleanness, clean things when they cross the boundary in the wrong direction, and develop instincts against unthinkingly crossing the boundary in the wrong direction.

If your home is "pure" and the wallet is "impure", then obviously you shouldn't handle the wallet at home, unless you carefully perform the "purification ritual". You don't even have to remember why the wallet is "impure", just the fact that it is. And if you keep these rules all your life, you won't forget them, because the thought of using the wallet at home will automatically invoke a feeling of "dirtiness".

Comment by viliam on DrAlta's Shortform · 2020-03-15T13:51:50.353Z · score: 2 (1 votes) · LW · GW

In my opinion:

"The closer an idea is to what you already believe the easier it is to think of it." -- Yes.

"The closer an idea is to the truth the easier it is to think of it." -- No.

There is this idea of systematic bias; of errors that all people make for the same reasons (e.g. because making this type of error often provided an evolutionary advantage, or because neural networks are likely to make this type of error). Ideas like "there are supernatural agents that act in our world" are easy; discovering electricity is hard.

Comment by viliam on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-15T00:41:58.017Z · score: 3 (2 votes) · LW · GW

A related thing I was thinking about for some time: Seems to me that the line between "building on X" and "disagreeing with X" is sometimes unclear, and the final choice is often made because of social reasons rather than because of the natural structure of the idea-space. (In other words, the ideology is not the community; therefore the relations between two ideologies often do not determine the relations between the respective communities.)

Imagine that there was a guy X who said some wise things: A, B, and C. Later, there was another guy Y who said: A, B, C, and D. Now, depending on how Y feels about X, he could describe his own wisdom either as "standing on the shoulders of giants, such as X", or as "debunking the teachings of X, who was foolishly ignorant about D". (Sometimes it's not really Y alone, but rather the followers of Y, who make the choice.) Two descriptions of the same situation; very different connotations.

To give a specific example, is Scott Alexander a post-rationalist? (I am not sure whether he ever wrote anything on this topic, but even if he did, let's ignore it completely now, because... well, he could be mistaken about where he really belongs.) Let's try to find out the answer based on his online behavior.

There are some similarities: He writes a blog outside of LW. He goes against some norms of LW (e.g. he debates politics). He is admired by many people on LW, because he writes things they find insightful. At the same time, a large part of his audience disagrees with some core LW teachings (e.g. all religious SSC readers presumably disagree with LW taking atheism as the obviously rational conclusion).

So it seems like he is in a perfect position to brand himself as something that means "kinda like the rationalists, only better". Why didn't this happen? First, because Scott is not interested in doing this. Second, because Scott writes about the rationalist community in a way that doesn't even allow his fans (e.g. the large part that disagrees with LW) to do this for him. Scott is loyal to the rationalist project and community.

If we agree that this is what makes Scott a non-post-rationalist, despite all the similarities with them, then it provides some information about what being a post-rationalist means. (Essentially, what you wrote in the article.)

Comment by viliam on What is a School? · 2020-03-14T23:52:11.593Z · score: 4 (2 votes) · LW · GW
this may be because they think online schooling or homework will be an adequate substitute for in-person schooling

Then we have another curious fact: that we needed the coronavirus to notice that schools can be replaced by a much cheaper alternative.

(I mean, previously people have tried to start new online schools, but as far as I know, they didn't try to replace the existing schools with online schools. But now we see it as a realistic option.)

Comment by viliam on Ineffective Response to COVID-19 and Risk Compensation · 2020-03-08T17:37:21.039Z · score: 10 (8 votes) · LW · GW
Given that, if I propose an intervention like making homemade masks from fabric which reduced handwashing compliance by 1% (perhaps due to distracting people or making them think handwashing is less critical,) it would need to be astonishingly effective to be net positive. And most such approaches being discussed are, as far as I can tell, nowhere near that level of effectiveness.

This argument depends a lot on the correctness of your model. How do you know which proposals reduce handwashing compliance by 1%? Without numbers, it becomes a fully general argument against doing or even debating anything (other than washing your hands).

Comment by viliam on At what level of coronavirus cases in a population should the people in that population start self-quarantining? · 2020-03-08T17:23:16.334Z · score: 6 (4 votes) · LW · GW

Sorry, no specific number here, but my reasoning would approximately go like this:

First, how long can I afford to be self-quarantined? Buying food is not the main problem for me (I can cook, rice is cheap, and I have enough money to hypothetically buy enough rice for a year); the limiting factor is how much vacation I can get, and whether I want to burn it all now. Even assuming that coronavirus is the highest priority, I suspect there may be two major waves of infection: one now, and one during the autumn. (Quitting is not an option; then I would have to pay my own health insurance and would run out of money much faster.)

Second, assume exponential growth, until almost everyone is sick. Now you can estimate the peak, and time your self-quarantine so that it is around that peak.

The problem is, the noise in estimation of the peak is probably greater than my vacation time, so I can't really do this in practice. Oops.

The second best option is partial self-quarantine, that is, reducing exposure to the minimum level I can sustain for a few months. Try to work remotely whenever possible, never eat outside your home, reduce social activities to a minimum. When? Well, I already started this week -- on Monday I asked my boss to let me work from home, started cooking every day, took my child out of kindergarten, and cancelled a birthday party this weekend. It seemed a bit paranoid... and then on Friday we got the first confirmed coronavirus case in my country.

Comment by viliam on Why would panic during this coronavirus pandemic be a bad thing? · 2020-03-08T16:51:38.143Z · score: 3 (2 votes) · LW · GW
When there are sufficient supplies of things like food, like now, and people start hoarding, shortages become a self-fulfilling prophecy.

Would it make sense to encourage the panic to start early? First the customers would cause a shortage, then the producers would increase their production in hope of easy profit, then the shortage would end with everyone having enough stuff at home... and then the actual need would come.

More simply, if people are going to empty the shops eventually, I prefer if they do it one month before the actual crisis rather than one week before it. Because during one month, the market may fix the shortage, but one week is not enough time to do much.

Comment by viliam on Exercises in Comprehensive Information Gathering · 2020-02-17T01:07:21.827Z · score: 8 (4 votes) · LW · GW
Given how inexpensive and useful it is to do this, why do so few people do it?

Because there are so many possible topics, that even if each of them takes relatively little time, together they would take a lot?

For example, you mentioned "an obscure country" and "a particular era", and also a focus on politics and the military (as opposed to science, or art, or sport). Okay, maybe you can do it in a week, or in an afternoon. But why that country, and why that era? How much would it cost to get comparable knowledge of all countries and, uhm, let's say the entire 20th century?

Comment by viliam on Taking the Outgroup Seriously · 2020-02-17T01:00:03.131Z · score: 12 (4 votes) · LW · GW

I agree that one should be aware of what their opponents literally believe, instead of strawmanning them. Also, it should be acceptable to say: "I didn't really spend time to research what they believe, but they have a bad reputation among the people I trust, so I go along with that judgment", if that indeed is the case.

On the other hand, the example about religious proselytising -- there may be a difference between why people do things, and why it works. Like this, but on a group level. So, you should understand the motivation of your outgroup, but also the mechanism. More generally, you should understand the mechanism of everything, including yourself. Your opponents are implemented on broken hardware, and so are you, and it's actually the same type of hardware. But when you work on this level, you should be skeptical not only about your opponents, but also about yourself and your allies. If you fail to apply the same skepticism towards yourself, you are doing it wrong -- not because you are too unfair to your opponents, but because you are too naive about yourself.

Comment by viliam on Jimrandomh's Shortform · 2020-02-10T21:43:27.840Z · score: 4 (2 votes) · LW · GW

I wonder what would be a non-software analogy of this.

Perhaps those tiny packages with labels "throw away, do not eat" you find in some products. That is, in a parallel world where 99% of customers would actually eat them anyway. But even there it isn't obvious how the producer would profit from them eating the thing. So, no good analogy.

Comment by viliam on Source of Karma · 2020-02-10T21:37:16.949Z · score: 6 (3 votes) · LW · GW

Look at the karma numbers in this debate, and imagine them divided by ten. Oops, nothing is left.

Now imagine the same thing, except that one person (for whatever reason) bothered to vote. Now that one person's opinion is all the feedback you have.

Also, that person is probably going to be someone with too much free time.

(For the record, I agree that better feedback would be nice to have; it's just that I find this cure worse than the original problem. The problem is that better feedback is costly in terms of time and effort, and when you increase the costs, instead of better feedback you simply get less feedback. I mean, currently nothing prevents the people who vote from also writing a comment. I am also pessimistic about finding a simple solution that would improve things, mostly because I think that if such a simple fix existed, someone else would have already tried it on a different website.)

Comment by viliam on Source of Karma · 2020-02-09T20:31:14.427Z · score: 6 (3 votes) · LW · GW
why not require any vote to post a comment as to why that vote is made?

That would increase the voting time by orders of magnitude, which would result in fewer votes. The fewer votes there are, the more random the outcome.

Making the votes non-anonymous (which is what a mandatory vote comment would mean) would open yet another layer of karma obsession. People replying to vote comments, people complaining about voting patterns of other people, people taking revenge for downvotes or forming mutually upvoting cliques (not necessarily consciously).

Comment by viliam on Looking for books about software engineering as a field · 2020-02-04T22:54:15.318Z · score: 2 (1 votes) · LW · GW

If the books don't satisfy you, you can still ask individual questions here. But, as philh said, there is a chance that fully understanding something requires one to actually use it. Otherwise, your understanding will only be powered by analogies, and it will stop the moment there is no convenient analogy for something complex (that is, complex for people who have never used it, but kinda intuitive for people who use it regularly), or you may stretch the analogy beyond its purpose. Also, you will be unable to distinguish between correct answers and wrong answers. -- That said, I am curious how far you can actually get like this.

For example, I've had three people try to explain exactly what an API is to me, for more than two hours total, but I just can't internalize it.

An API is a list of functionality you are supposed to use (because the authors guarantee it will keep working tomorrow), as opposed to functionality that is either inaccessible to you (so you can't use it directly) or technically accessible but not to be touched anyway, because the authors make no guarantee they won't change it tomorrow.

More generally, this idea of distinguishing between what is meant for use by others, and what are the internal details the others should not touch, is called "encapsulation".

Encapsulation can happen on multiple levels. You have small units of code, let's call them classes, which provide some functionality to the outside world and keep some private details to themselves. Then you can compose a larger unit of code, let's call it a module, out of a hundred such classes. Now there is functionality that this module as a whole exposes to the outside world, and some details it wants to keep private. Then you compose the entire program or library out of a dozen modules, and it provides some public functionality. API usually refers to the public functionality at the level of the program or library.
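A minimal sketch of this idea in code (the class and names here are made up for illustration): the methods are the API, the underscore-prefixed attribute is an internal detail that, by convention, callers should not touch.

```python
class Counter:
    """The public methods form the API; the underscore-prefixed
    attribute is an internal detail, free to change tomorrow."""

    def __init__(self):
        self._value = 0  # internal detail, not part of the API

    def increment(self):
        """Part of the API: guaranteed to keep working."""
        self._value += 1

    def current(self):
        """Part of the API: returns the current count."""
        return self._value


c = Counter()
c.increment()
print(c.current())   # 1 -- using the API
# print(c._value)    # works today, but may silently break tomorrow
```

Python only enforces this by convention (a leading underscore means "private"); languages like Java or C++ enforce it with `private` keywords, but the principle is the same.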

Analogy time:

Imagine that I am a robot that offers some useful service. For example, I can remember numbers. The officially recommended way to use me is to come to my desk and say "Remember the number X" (some specific number, such as 42) and I will remember it; later you can come to my desk and say "Tell me the number I told you the last time" and I will tell you the number (42, in this case). The list of commands you should officially use with me is my API.

You can either use my services as a person, or you can send your own robots to interact with me. My behavior is the same in either case.

Now you are a curious person, and you notice that when you tell me a number, I will write it down on a piece of paper. When you ask me later, I will read the number from the paper. This inspires you to make an improvement to your process. You tell your robots that instead of asking me about the number, they should simply look at my desk and read the number on the paper. This is 40% faster, and that makes you happy!

Five weeks later, your factory stops producing stuff. It takes you a few hours to find out why. The robots that occasionally come to use my service stay frozen at my desk and never return. That's because there is no paper on my desk anymore.

You complain to my owner and threaten to sue them. But my owner shows you the original contract, which specifies that you (or your robots) are supposed to ask me about the number; there is nothing in it about a paper on the desk. When you ask me, I give you the right answer. That's because this morning I had a new memory installed, so I don't need to write things down anymore. (Which, by the way, makes me 80% faster than before.) All other users of my services are happy about this change. Only you are mad, because now you have to reprogram all your robots that interact with me; otherwise your factory remains unproductive. It takes you three days to reprogram your robots, which means a great financial loss for you.

End of analogy.

The lesson is that when the user limits themselves to the stuff they are supposed to use, it allows the service provider to make improvements without breaking things. The only way to allow future improvements is to make some parts of the operation off-limits to customers; otherwise you could not change them, and you can hardly improve things if you are not allowed to change anything.
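The analogy maps directly onto code. A small sketch (all names hypothetical): the `remember`/`tell_me` commands are the API, the "paper" is an internal detail, and the client that peeks at the internals breaks when version 2 arrives.

```python
class RobotV1:
    """Version 1: internally writes the number on a 'paper'."""
    def __init__(self):
        self.paper = None        # internal detail, not part of the contract
    def remember(self, n):       # API
        self.paper = n
    def tell_me(self):           # API
        return self.paper


class RobotV2:
    """Version 2: new memory installed; no paper anymore."""
    def __init__(self):
        self._memory = None
    def remember(self, n):       # same API as before
        self._memory = n
    def tell_me(self):           # same API as before
        return self._memory


def good_client(robot):
    """Uses only the API; works with any version of the robot."""
    robot.remember(42)
    return robot.tell_me()

def bad_client(robot):
    """Peeks at the internals; 40% faster today, frozen tomorrow."""
    robot.remember(42)
    return robot.paper


print(good_client(RobotV1()), good_client(RobotV2()))  # 42 42
print(bad_client(RobotV1()))   # 42 -- works, but only by accident
# bad_client(RobotV2())        # AttributeError: no 'paper' anymore
```

The good client never notices the upgrade; the bad client is the factory that stops producing stuff.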

Comment by viliam on Healing vs. exercise analogies for emotional work · 2020-01-27T20:53:03.831Z · score: 2 (1 votes) · LW · GW

When people say something helped them a lot, how much did it actually help?

My guess is that people are likely to overestimate this. Like, imagine that life has 1000 different aspects you need to get in order, and one day you find something that makes you better at one of them by 10%.

From your perspective at the moment, it probably feels like a lot. You probably spent a lot of time in the past practicing this thing, with mixed success... and suddenly it improves by 10% almost overnight? That's wonderful! And because you are focusing on this thing at the moment, it feels like a very important thing.

Globally, improving one of 1000 things by 10% means improving your life by 0.01%. That's practically invisible from outside. Yeah, you are now better at one thing, but the other 999 things remained the same.

And you don't have the same success every day, so a 0.01% improvement in one day doesn't translate into a 3% improvement in a year. You probably can't even repeat the same success in the same thing, because you get diminishing returns.

Numbers obviously made up to illustrate the point.

So when people say something helps them a lot (whether it is the same thing for years, or a different thing every week), I expect something like this to happen. Maybe it feels like a huge change from inside, at the moment when they are focusing on the one thing that improved. But from outside, I don't expect to see a dramatic change soon.

And it's not just when other people tell me about their successes. It took me a few dozen epiphanies to realize that even a few dozen epiphanies won't turn me into a superman. One epiphany achieves even less.

To make an analogy with exercise, what helps is actually doing the exercise over and over again, several times a week, for years. Just one afternoon spent exercising hard changes nothing.

Miracles are cheap, integrating them into your daily routine is hard?

Comment by viliam on Matt Goldenberg's Short Form Feed · 2020-01-27T19:46:15.778Z · score: 2 (1 votes) · LW · GW
Definitely, but why limit it to just rationalists in that case?

Good point.

Not sure how well a mixed group of rationalists and non-rationalists would function. But you could create more than one group.

Comment by viliam on Have epistemic conditions always been this bad? · 2020-01-26T23:28:47.822Z · score: 12 (6 votes) · LW · GW
Were people in the USSR getting barred from their constitutional duty to work?

You could be fired from your job and then put into prison for violating your constitutional duty, and no one would care.

But in practice, you were supposed to find a job that was sufficiently low-status, or dangerous to health, or something like that. Such jobs were allowed to hire even "politically unreliable" people. (Refusing to take one of those jobs would be a violation of your constitutional duty.)

Comment by viliam on Matt Goldenberg's Short Form Feed · 2020-01-26T22:36:33.872Z · score: 7 (3 votes) · LW · GW

Being a rationalist is not the only trait the individual rationalists have. Other traits may prevent you from clicking with them. There may be traits frequent in the Bay Area that are unpleasant to you.

Also, being an aspiring rationalist is not a binary thing. Some people try harder; some only join for the social experience. Assuming that the base rate of people "trying things hard" is very low, I would expect that even among people who identify as rationalists, the majority are there only for social reasons. If you try to fit in with the group as a whole, it means you will mostly try to fit in with these people. But if you are not there primarily for social reasons, that is already one thing that will make you not fit in. (By the way, no disrespect meant here. Most people who identify as rationalists only for social reasons are very nice people.)

What you could do, in my opinion, is find a subgroup you feel comfortable with, and accept that this is the natural state of things. Also, speaking as an introvert, I can more easily connect with individuals than with groups. The group is simply a place where I can find such individuals with greater frequency, and conveniently meet more of them at the same place.

Or -- as you wrote -- you could create such a subgroup around yourself. Hopefully it will be easier in the Bay Area than it would be elsewhere.

Comment by viliam on G Gordon Worley III's Shortform · 2020-01-26T22:13:41.173Z · score: 4 (2 votes) · LW · GW

I wonder how much the "great loneliness for creatures like us" is a necessary outcome of realizing that you are an individual, and how much it is a consequence of e.g. not having the kinds of friends you want to have, i.e. something that you wouldn't feel under the right circumstances.

From my perspective, what I miss is people similar to me, living close to me. I can find like-minded people, but they live in different countries (I met them at LW meetups). Thus, I feel more lonely than I would if I lived in a different city. Similarly, being more extraverted and/or having greater social skills might help me find similar people in my proximity. Also, sometimes I meet people who seem like they could be what I miss in my life, but they are not interested in being friends with me. Again, this is probably a numbers game; if I could meet ten or a hundred times more people of that type, some of them might be interested in me.

(In other words, I wonder whether this is not yet another case of "my personal problems, interpreted as a universal experience of the humankind".)

Yet another possible factor is the feeling of safety. The less safe I feel, the greater my desire to have allies, preferably perfect allies, preferably loyal clones of myself.

Plus the fear of death. If, in some sense, there are copies of me out there, then, in some sense, I am immortal. If I am unique, then at my death something unique (and valuable, at least to me) will disappear from this universe, forever.

Comment by viliam on How Doomed are Large Organizations? · 2020-01-26T16:06:48.310Z · score: 4 (2 votes) · LW · GW

Depends on the situation. Sometimes people can do things independently of each other. Sometimes people do things together because it is more efficient that way. And sometimes people do things together because there is an artificial obstacle that prevents them from doing things individually. (In other words, mazes are trying to change the world in a way that makes mazes mandatory.)

As a made-up example, imagine that there are three cities, and there is a shop in each city, each shop having a different owner. (It is assumed that most people buy from their local shop.) Maybe the situation is such that it would be more profitable if there were only one shop chain operating in all three cities. But maybe there is a shop chain successfully lobbying to make it illegal to own individual shops. Or not literally illegal, but perhaps they propose a law that imposes a huge fixed cost on each shop or shop chain, so the owner of one shop would have to pay this tax per shop, while the owner of a chain only has to pay it once per entire chain. Such a law could make shop chains more profitable than uncoordinated shops, even in situations where without that law they might be less profitable.

So, we have two levels of the game here: what is more profitable assuming no artificial obstacles, and what is more profitable when players are allowed to lobby for artificial obstacles against competitors using a different strategy. (That is, suppose the state is not so corrupt that it would make a law that makes life specifically easy for corporation A and difficult for an equivalent corporation B, but it can be convinced to make a law that makes life easier for certain types of corporations and more difficult for other types. So corporation A cannot use the law as a weapon against an equivalent corporation B, but e.g. large companies could use the law as a weapon against small companies. Creating a large fixed cost for everyone is a typical example.)

To answer your question, maybe sometimes things suck because there are more people, but sometimes things only suck because mazes have the power to change the law to make things suck.

Comment by viliam on How Doomed are Large Organizations? · 2020-01-23T22:13:15.829Z · score: 11 (6 votes) · LW · GW

It's as if the power of an organization were the square root, or perhaps only the logarithm, of how many people work for it. It is horrible to see the diminishing returns, but larger still means stronger.

Maybe this is the actual reason why a centralized economy sucks. Not because of a mere lack of information (as Hayek assumed), because in theory the government could employ thousands of local information collectors and process the collected data on computers. But it's the maze-nature that prevents it from doing this in a sane way. The distributed economy wins, despite all its inefficiencies (everyone reinventing the copyrighted wheels, burning money in zero-sum games, etc.), because the total size of all mazes is smaller.

But in the long term, successful mazes try to convert the entire country into one large maze, by increasing regulation, raising the fixed costs of doing stuff, and doing other things that change the playing field so that total power matters more than power per individual.

Comment by viliam on How Doomed are Large Organizations? · 2020-01-22T20:55:47.512Z · score: 6 (3 votes) · LW · GW

I suppose that increase in mazes means that if there is external pressure that appears politically fashionable, more people in the positions of relative power are motivated to (appear to) move in the direction of the pressure, whatever it is, because they don't really care either way. This is how companies become woke, ecological, etc. (At least in appearance, because they will of course Goodhart the shit out of it.)

A different question is why pressure in the direction of e.g. social justice is stronger than pressure in the direction of e.g. Christianity. More activists? Better coordination? Strategic capture of important resources, such as the media? Or maybe it is something completely different, e.g. social justice warriors pay less attention when their goals are Goodharted? (Firing one employee who said something politically incorrect is much cheaper than e.g. closing the shops on Sunday.) Before you say "left vs right", consider that e.g. veganism is coded left-wing, but we don't hear about companies turning vegan under external pressure. Or perhaps it's all just a huge Keynesian beauty contest, where anything, once successful, becomes fixed, and the social justice warriors just had lucky timing. I don't know.

Comment by viliam on Is backwards causation necessarily absurd? · 2020-01-14T23:33:23.979Z · score: 5 (3 votes) · LW · GW
Another relativistic argument against time flowing is that simultaneity is only defined relative to a reference frame. Therefore, there is no unified present which is supposed to be what is flowing.

Relativity does not make the arrow of time relative to the observer. Events in one's future light cone remain in their future light cone also from the perspective of someone else.

Comment by viliam on Predictors exist: CDT going bonkers... forever · 2020-01-14T23:22:11.343Z · score: 4 (2 votes) · LW · GW

Even if most people on LW are probably familiar with the abbreviation, someone may come here following a link from elsewhere.

Comment by viliam on Is it worthwhile to save the cord blood and tissue? · 2020-01-12T20:57:38.339Z · score: 5 (3 votes) · LW · GW

There is also the question of how soon to cut the cord. The reason for cutting it a bit later is that the blood from the cord still keeps flowing into the baby. Unfortunately, I completely forgot why those few extra drops are supposed to be so important, but I was told the reason years ago and it sounded just as important as the reason for storing the cord blood.

Comment by Viliam on [deleted post] 2020-01-11T21:42:05.048Z

Hello, anonymous person posting an article called "MattG's Shortform". :D

Comment by viliam on Rationalist Scriptures? · 2020-01-10T23:50:19.294Z · score: 4 (2 votes) · LW · GW
Related, has anyone compiled a list of "Rationalist Wisdom"? Like a bunch of sayings that distill Rationalism down that we can point newbs to?

Writing is a skill; you can't simply decide to do it and automatically do it well, even if you believe it is an important thing to do. I hope that in the future, some people with sufficiently high writing skills will become rationalists, and one of them will prioritize making simple, accessible rationality materials for beginners.

More precisely, writing is more than one skill. I mean, Eliezer definitely is good at writing -- the success of HPMoR is evidence of that -- and yet it's his Sequences that people complain about. Seemingly, "good at blogging" and "good at writing fiction" don't imply "good at writing textbooks for beginners". So it's the person good at writing textbooks for beginners we are waiting for, to join the rationality community and produce the textbooks.

Comment by viliam on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-06T23:07:29.741Z · score: 2 (1 votes) · LW · GW

Yep. Looking around me, getting Slovakia out of the EU would be a relatively easier task than making it adopt UBI, for the reasons you mentioned (plus one you didn't: the availability of foreign helpers).

Comment by viliam on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-05T14:02:12.599Z · score: 5 (3 votes) · LW · GW

Burning down a building is easier than constructing it.

People are celebrating Dominic Cummings for changing the building. I'd like to wait until it turns out what specific kind of change it was.

In the meantime, I accept the argument that even burning down a building requires more skill and agency than merely talking about the building. In this way, Dominic Cummings has already risen above the level of the rationalist plebs. But how high, that still remains to be seen.

Comment by viliam on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-05T13:33:17.905Z · score: 16 (4 votes) · LW · GW
There is something in the process there that ought to be emulated, even if you disagree with the instrumental outcome.

I see your point, but the outcome is important, if you want to improve things, not just become famous for changing them.

Comment by viliam on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-04T23:54:26.146Z · score: 21 (8 votes) · LW · GW

If I may offer my opinion, it seems to me that this debate was a proxy for a long-term problem, which I would roughly describe as "how much exactness should be the norm on LW?".

When Eliezer was writing the Sequences, it was simple: whatever he considered right, that was the norm. There were articles with numbers and equations, articles that quoted scientific research, articles that expressed personal opinion or preference, and articles with fictional evidence. And because all those articles came from the same person, together they created the style that has attracted many readers.

But, now that it is a community blog, there are people with preference for numbers and equations, and people with preference for personal opinion. It's like they speak different languages. And sometimes they disagree with each other. And when they do, it is difficult to resolve the situation, because each of them expects different norms of... what kind of argument is valid, and what kind of content belongs here.

If we limit ourselves to things we can define and describe exactly, the extreme of that would be merely discussing equations. Because the real world is messy and complicated, and people are even more messy and complicated. And there is nothing wrong with the equations -- the articles on math or decision theory are great and definitely a part of the LW intellectual tradition -- but we also want to use rationality in real life, as humans, in interaction with other humans, and we want to optimize this, even if we cannot describe it exactly.

The opposite extreme, obviously, is introducing all kinds of woo. Meditation feels right, and Buddhism feels right, and Circling feels right, and... dunno, maybe tomorrow praying will feel right, and homeopathy will feel right. (And even if they won't, the question is what algorithm will draw the line. Is it "I was introduced to it by a person identifying as a rationalist" vs "I have already seen this done by people who don't identify as rationalists"?)

I would like this community to retain the ability to speak both languages. But it doesn't work well when different people specialize in different languages. At best, it would be a website that hosts two kinds of completely unrelated topics. At worst, those two groups would attack each other.

Comment by viliam on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-04T23:11:52.134Z · score: 4 (2 votes) · LW · GW
I think of Schelling points as the things that result without specific coordination, but only common background knowledge.

Yes, but specific coordination today can create the common background knowledge for tomorrow.

Comment by viliam on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-04T23:07:28.229Z · score: 7 (4 votes) · LW · GW

Similarly to Eliezer, I am impressed to see someone who "speaks our tribe's language" in a position of political power, but also confused why their list of achievements contains (or consists entirely of) Brexit.

To me it seems like the original strategy behind the Brexit referendum was simply "let's make a referendum that will lose, but it will give us power to convert any future complaints into political points by saying 'we told you so'". And when the referendum succeeded, it became obvious that no one actually expected this outcome, and the people tasked with handling the success are mostly trying to run away and hide, wait for a miracle, or delegate the responsibility to someone else. (Because now it puts them into a position where any future complaints will generate political points for their opponents. And future complaints are inevitable, always.)

I expect that as soon as Brexit is resolved either way -- i.e. when the decision about staying or leaving is definitely made, and the blame for it is definitely assigned -- the situation will revert to politics as usual.

Comment by viliam on Predictive coding & depression · 2020-01-04T22:36:19.711Z · score: 2 (1 votes) · LW · GW

Just a random thought: This could also explain why rationality and depression seem to often go together. Rational people are more likely to notice things that could go wrong, uncertainty, planning fallacy, etc. -- but in this model those are mostly things that assign lower probability to success.

Even in the usual debates about "whether rationality is useful", the usual conclusion is that rationality won't make you win a lottery (not even the startup lottery), but mostly helps you to avoid all kinds of crazy stuff that people sometimes do. Which from some perspective sounds good (imagine seeing a long list of various risks with their base rates, and then someone telling you "this pill will reduce the probability of each of them to 10% of the original value or less"), but is also quite disappointing from the perspective of wanting strong positive outcomes ("will rationality make me a Hollywood superstar?" "no"; "a billionaire, then?" "it may slightly increase your chance, but looking at absolute values, no"; "and what about ...?" "just stop, for anything other than slightly above average version of ordinary life, the answer is no"). Meanwhile, irrationality tells you to follow your passion, because if you think positively, success is 100% guaranteed, and shouldn't take more than a year or two.

Comment by viliam on Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk · 2020-01-04T17:23:04.528Z · score: 9 (4 votes) · LW · GW

Well, that sucks. Good point that no matter what the rules are, people can simply break them. The more you think about the details of the rules, the easier it is to forget that the rules do not become physical law.

Though I'd expect the social consequences for breaking such rules to be quite severe. Which, again, deters some kinds of people more, and some of them less.

Comment by viliam on Normalization of Deviance · 2020-01-04T17:06:42.396Z · score: 7 (4 votes) · LW · GW

I was shocked to hear about doctors in hospitals not washing their hands (from a medical student who was shocked to see it during his internship), and when I discussed it privately with some doctors, they told me it all depends on the boss. When the boss in the hospital washes his hands religiously, and insists that all employees wash their hands all the time, they will. But when the boss ignores this norm, then... ignoring the norm becomes a local symbol of status. So the norm within the same hospital may change dramatically in a short time, in either direction, when the boss is replaced.

I saw a similar thing in software projects. You almost always have a list of "best practices", but it makes a big difference whether the highest-status developer is like "we do this all the time, no exceptions", or -- much more frequently -- he is like "of course, sometimes it doesn't make much sense to ... ", and of course the scope of "sometimes" gradually expands, and it becomes a symbol of high status to not write unit tests. You can have two projects in the same company, with the same set of "best practices" on paper, with the same tools for automatically checking conformance (only, in one team, sending of the error messages is turned off), and still dramatically different code quality.

(And actually this reminds me of a time period when making fun of "read the Sequences" was kinda high-status here. I don't hear it recently, and I am not sure what it means: maybe everyone read the Sequences, or everyone forgot about them so the joke is no longer funny because no one would know what it refers to, or maybe both sides just agreed not to discuss this topic publicly anymore.)

Comment by viliam on bgold's Shortform · 2020-01-02T20:25:38.366Z · score: 4 (3 votes) · LW · GW

Related: Reason as memetic immune disorder

I like the idea that having some parts of you protected from yourself makes them indirectly protected from people or memes who have power over you (and want to optimize you for their benefit, not yours). Being irrational is better than being transparently rational when someone is holding a gun at your head. If you could do something, you would be forced to do it (against your interests), so it's better for you if you can't.

But, what now? It seems like rationality and introspection are a bit like defusing a bomb -- great if you can do it perfectly, but it kills you when you do it halfway.

It reminds me of a fantasy book which had a system of magic where wizards could achieve 4 levels of power. Being known as a 3rd-level wizard was a very bad thing, because all 4th-level wizards were trying to magically enslave you -- to get rid of a potential competitor, and to get a powerful slave (I suppose the magical cost of enslaving someone didn't grow proportionally to the victim's level).

To use an analogy, being biologically incapable of reaching the 3rd level of magic might be an evolutionary advantage. But at the same time, it would prevent you from ever reaching the 4th level.

Comment by viliam on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-02T17:40:01.687Z · score: 14 (3 votes) · LW · GW

I believe there is a possible middle way between two extremes:

1) There are no questions, ever.

2) When someone writes "today I had an ice-cream and it made me happy", they get a comment: "define 'happiness', or you are not rational".

As Habryka already explained somewhere, the problem is not asking questions per se, but asking them in a specific low-effort way.

I assume that most of us have some idea of what "authentic" (or other words) means, but also that it would be difficult to provide a full definition. So the person who asks should provide some hints about the purpose of the question. Are they a p-zombie who has absolutely no idea what words refer to? Do they see multiple possible interpretations of the word? In that case it would help to point at the difference, which would allow the author to say "the first one" or maybe "neither, it's actually more like X". Do they see some contradiction in the naive definition? For example, what would "authentic" refer to, if the person simply has two brain modules that want contradictory things? Again, it would help to ask about the specific thing. Otherwise there is a risk that the author would spend 20 minutes trying to write a good answer, only to get "nope, that's not what I wanted" in return.

Comment by viliam on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-01T16:16:51.307Z · score: 25 (7 votes) · LW · GW

Let's try: "Authenticity" is an opposite of "pretending".

There are situations where it is useful to pretend to have thoughts or feelings, to manipulate other people's perception of us. This can be relatively straightforward, such as signaling loyalty to a group by displaying positive emotions toward things associated with the group, and negative emotions toward enemies of the group. Or more complicated, such as trying to appear harmless in order to deceive opponents, or pretending to be irrational about something as a way to signal a credible precommitment.

As a first approximation, "authenticity" means communicating one's thoughts and feelings as one feels them, without adding thoughts and feelings made up for strategic purposes.

This is complicated by the fact that humans are not perfect liars; they do not have one brain module for truth and another brain module for deception. Sometimes deception is best achieved by self-deception, which raises the question of what "authenticity" means for a self-deceiving person. But going further, self-deception is also often imperfect, and requires some kind of active maintenance, for example noticing thoughts that contradict the projected image, and removing them. In this case, "authenticity" also includes abandoning the maintenance, and acknowledging the heretical thoughts.

Comment by viliam on Plausible A.I. Takeoff Scenario Short Story · 2020-01-01T14:58:10.600Z · score: 3 (2 votes) · LW · GW

Related: Universal Paperclips