Posts

Why are all these domains called from Less Wrong? 2020-06-27T13:46:05.857Z · score: 26 (14 votes)
Opposing a hierarchy does not imply egalitarianism 2020-05-23T20:51:10.024Z · score: 6 (10 votes)
Rationality Vienna [Virtual] Meetup, May 2020 2020-05-08T15:03:56.644Z · score: 10 (3 votes)
Rationality Vienna Meetup June 2019 2019-04-28T21:05:15.818Z · score: 9 (2 votes)
Rationality Vienna Meetup May 2019 2019-04-28T21:01:12.804Z · score: 9 (2 votes)
Rationality Vienna Meetup April 2019 2019-03-31T00:46:36.398Z · score: 8 (1 votes)
Does anti-malaria charity destroy the local anti-malaria industry? 2019-01-05T19:04:57.601Z · score: 64 (17 votes)
Rationality Bratislava Meetup 2018-09-16T20:31:42.409Z · score: 18 (5 votes)
Rationality Vienna Meetup, April 2018 2018-04-12T19:41:40.923Z · score: 10 (2 votes)
Rationality Vienna Meetup, March 2018 2018-03-12T21:10:44.228Z · score: 10 (2 votes)
Welcome to Rationality Vienna 2018-03-12T21:07:07.921Z · score: 4 (1 votes)
Feedback on LW 2.0 2017-10-01T15:18:09.682Z · score: 11 (11 votes)
Bring up Genius 2017-06-08T17:44:03.696Z · score: 56 (51 votes)
How to not earn a delta (Change My View) 2017-02-14T10:04:30.853Z · score: 10 (11 votes)
Group Rationality Diary, February 2017 2017-02-01T12:11:44.212Z · score: 1 (3 votes)
How to talk rationally about cults 2017-01-08T20:12:51.340Z · score: 5 (10 votes)
Meetup : Rationality Meetup Vienna 2016-09-11T20:57:16.910Z · score: 0 (1 votes)
Meetup : Rationality Meetup Vienna 2016-08-16T20:21:10.911Z · score: 0 (1 votes)
Two forms of procrastination 2016-07-16T20:30:55.911Z · score: 10 (11 votes)
Welcome to Less Wrong! (9th thread, May 2016) 2016-05-17T08:26:07.420Z · score: 4 (5 votes)
Positivity Thread :) 2016-04-08T21:34:03.535Z · score: 26 (28 votes)
Require contributions in advance 2016-02-08T12:55:58.720Z · score: 63 (63 votes)
Marketing Rationality 2015-11-18T13:43:02.802Z · score: 28 (31 votes)
Manhood of Humanity 2015-08-24T18:31:22.099Z · score: 10 (13 votes)
Time-Binding 2015-08-14T17:38:03.686Z · score: 17 (18 votes)
Bragging Thread July 2015 2015-07-13T22:01:03.320Z · score: 4 (5 votes)
Group Bragging Thread (May 2015) 2015-05-29T22:36:27.000Z · score: 7 (8 votes)
Meetup : Bratislava Meetup 2015-05-21T19:21:00.320Z · score: 1 (2 votes)

Comments

Comment by viliam on Something about the Pinker Cancellation seems Suspicious · 2020-07-11T15:33:29.310Z · score: 2 (1 votes) · LW · GW

It seems to me that it is possible to construct a story in either direction.

Obvious mistakes in the letter may mean it's a false flag operation -- the letter was given obvious weaknesses to make it easier to argue against. And if the organization refuses to act on the letter, it would set a precedent; other organizations could then be asked to grow at least as much spine as the LSA. Or maybe the actual goal is just to make Pinker a martyr, probably to help sell his books or something.

On the other hand, "the purpose of propaganda is to humiliate". If the letter succeeds, it would drive home the point that showing your opponents to be factually wrong, or even transparently lying, is not going to save you. Also, that it is not enough to avoid crimethink; you must avoid even anything that could be misinterpreted as crimethink, so the only safe way is to actively work on your social justice credentials, for example by leading the witch hunts.

Could be this, could be that.

My bet is that this is probably not a false flag operation, because it would be too risky if the complicated plan failed. (But I admit that the part of me that enjoys imagining complicated plans wants this to be one.) The obvious mistakes in the accusations are not sufficiently strong evidence in my eyes to overcome the priors; I have seen people believe crazy things when politics was involved. (For example, as far as I know, the shooting of 4 men and 2 women was described in the media as a misogynistic attack; it is possible that someone simply remembered the version from the media and didn't bother to verify it. Or lied on purpose, expecting that readers would misremember.)

Comment by viliam on Something about the Pinker Cancellation seems Suspicious · 2020-07-11T15:06:25.470Z · score: 3 (2 votes) · LW · GW

Related: Poe's law

Comment by viliam on DARPA Digital Tutor: Four Months to Total Technical Expertise? · 2020-07-11T11:53:18.489Z · score: 2 (1 votes) · LW · GW

Thank you, this is awesome!

I believe that (some form of) digital education would improve education tremendously, simply because the existing system does not scale well. The more people you want to teach, the more teachers you need, and at some point you run out of competent teachers, and then we all know what happens. Furthermore, education competes with industry for competent people, which is inevitable because we need people to teach things, but also people to actually do them.

With digital education, the marginal cost of an additional student is much smaller. Even translating the resources into a minority language is cheaper and simpler than producing the original, and only needs to be done once per language. So, just like today everyone can use Firefox, for free (assuming they already have a computer, and ignoring the cost of electricity), in their own language, perhaps one day the same could be said about math, as explained by the best teachers, with the best visual tools, optimized for kids, etc. One can dream.

On the other hand, digital education brings its own problems. And it is great to know that someone already addressed them seriously... and even successfully, to some degree.

One note in the paper that I found obscure is that the paper claimed the Direct Tutor “is, at present, expensive to use for instruction.” What does this mean? Once the thing is built, besides the tutors/teachers - which you would need for any course of study, what makes it expensive at present?

I am only guessing here, but I assume the human interventions were 1:1, so if a student spends about 5% of their time talking to a tutor, you would need one tutor per 20 students, which is the same ratio you would have in an ordinary classroom. The tutors would have to be familiar with the digital system, so you couldn't replace them with a random teacher, and they would be more expensive. (Plus you need the proctor.)
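To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch; the 5% intervention share and the 20-student class size are my own guesses, not numbers from the paper:

```python
# Rough staffing estimate (my assumptions, not numbers from the paper).
intervention_share = 0.05   # assumed fraction of a student's time spent 1:1 with a human tutor
class_size = 20             # assumed number of students per classroom

# If interventions are 1:1, each student consumes `intervention_share` of one tutor's time,
# so a class of 20 needs roughly one full-time tutor -- the usual classroom ratio.
tutors_needed = class_size * intervention_share
print(f"{tutors_needed:.1f} tutor(s) per {class_size} students")  # -> 1.0 tutor(s) per 20 students
```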

In the long term, I assume that if the topic is IT-related, the lessons would have to be updated relatively often. Doubly so for IT security.

I still wonder what would happen if we tried the same system to teach e.g. high-school math. My hope is that with a wider user base, the frequency of human interventions (per student) could be reduced, simply because the same problems would appear repeatedly, so the system could adapt by adding extra explanations created the same way (record what human tutors do during interventions, make a new video).

Comment by viliam on What are your thoughts on rational wiki · 2020-07-10T22:23:22.937Z · score: 4 (2 votes) · LW · GW

I think I remember finding it repeatedly in search results a decade ago, but not recently.

(But maybe I was also making different search queries back then.)

Comment by viliam on How to Get Off the Hedonic Treadmill? · 2020-07-08T19:33:18.310Z · score: 2 (1 votes) · LW · GW

Thanks for the explanation and extra details! Here are some ideas:

Maybe the strategy "I will achieve X and then I will feel happy" is wrong in principle. Maybe happiness can only be achieved as a side effect of something you genuinely care about. Like, if you want to do X for X's sake, that can make you happy -- maybe not immediately, because in my experience the emotion felt right after accomplishing something is usually "I am so tired" -- but later, when you think "hey, X is done". However, if you don't really care about X and only do it because you believe it will make you happy, it probably won't. If you don't care about X, why should the thought "X is done" make you happy?

If that's the case, then you should expect that no achievement will make you happy. Which doesn't make achievements worthless, because they can still bring a lot of "okayness" into your life. Like, having enough food is better than starving, no doubt, but you can't eat yourself into happiness. Similarly, being rich is better than being poor; being educated better than being uneducated; etc. But you also need something that makes you happy here and now -- either some hobby (that makes you happy by doing it, not because you expect some benefits in the far future), or maybe something like loving-kindness meditation.

Or maybe you're just following the wrong goals. You do what society in general, your parents, or your friends would approve of ("education, career, significant other" sounds like a generic template), instead of the idiosyncratic things you desire.

Or maybe you have a model where achieving X automatically brings you some outcome (something like: "if I get a top MBA, people will treat me with respect"), and it simply didn't turn out like that: you got the MBA, but the average respect you get didn't increase; also, you keep getting higher on the corporate ladder, but you don't actually feel more powerful than before (you may get more power over other people, but the power someone else has over you does not decrease).

One possibility is that you are doing it wrong. Getting higher on the ladder doesn't give you more freedom; but working part-time or retiring early could. Still, getting higher on the ladder -- assuming it means greater income, and assuming you invest the extra income properly -- could bring you closer to that freedom, so it is not bad in principle; it just won't happen automatically, you need to be strategic about it.

Also, emotions won't come automatically. If you want to get respect for having an MBA, you need to find an environment where people will respect you for having an MBA. The bad news is that it's not one of those places where everyone has an MBA; which is probably where you spend most of your time now. The good news is that 99.99% of the population does not have an MBA, so maybe you just need to take a walk outside your bubble. (High-school reunion?)

Summary:

  • do the things you genuinely care about;
  • if you follow the standard template, at least understand what your endgame is (how exactly accomplishing the standard thing will translate into something you genuinely care about);
  • work with your emotions directly (do the things that make you immediately happy).

Comment by viliam on How to Get Off the Hedonic Treadmill? · 2020-07-05T16:42:52.320Z · score: 4 (3 votes) · LW · GW

Not sure I can parse "on a hedonic treadmill w/ respect to education, career, significant other" properly. You keep getting better and better education, like people explain things to you more and more clearly, but you don't appreciate the improvement? Or just that you get smarter and smarter, but don't really feel better about it? No matter how high you climb on the corporate ladder, you feel bad about not being Elon Musk? Or is this perhaps about money, like no matter how much you make, there is never enough? Do you keep replacing partners with better and better ones, only to realize that all humans suck? Or is your long-term relationship improving, but after the big problems were solved, now you find yourself irritated with trivialities? Different options possibly require different strategies.

Generally speaking, the strategy for solving the situation of "having it too good, but not feeling satisfied" is to put it in near-mode contrast with having it worse. For example:

  • Imagine (in a first-person view; try to feel the situation with your senses) how it would feel to not have the education / career / significant other you have now. What would your everyday life be like? You couldn't read the books or web pages you do now, because you would not understand them. You would probably read some celebrity gossip, and believe something stupid like horoscopes. The world would be a very confusing place, and you would seek safety in copying what your neighbors do. Without money, you would not have [list of luxuries you have now]. Without work skills and experience, you would have to take the first available job, probably something unpleasant that would require you to wake up early in the morning, and do something physically demanding. Late in the evening, you would return to your small, empty place (unless you were living with your parents). Try to imagine living this life, day by day, month by month, year by year. Realize that some people actually have it like this.
  • If it is possible to do safely, give up some of the good things temporarily. Turn off the internet for a week. Start fasting. Take a hike in the mountains. Spend a few days separated from your significant other.
  • Spend some time with people who have it worse than you. Talk with homeless people, whatever. You could combine it with some charity work.

In some cases, the right thing to do is to change the game you are playing. If you have a great relationship with your significant other, try having kids; this will reset your hedonic treadmill dramatically. Or perhaps stop seeing your career as an unending climb, and give yourself a specific goal, such as early retirement. Maybe you already learned too much, and it's time to start doing something else instead.

Possibly related: some people are maximisers, some are satisficers -- meaning that some can't stop looking until they are sure they found the best choice, while for others anything that passes some threshold is acceptable. Generally speaking, the latter are usually much happier in life. Looking for the best option is forever stressful; you can never be 100% sure you found literally the best one; maybe a better one is waiting around the corner if you only keep looking longer. And finding the best option doesn't even bring much subjective happiness, because if the best option is like 100 points of utility, and the second best is like 98 points of utility, the maximiser will feel like they only gained 2 points. And during the time you spent getting from 98 to 100 points at something, you probably neglected other aspects of your life, where greater improvement was possible. Meanwhile, a satisficer was like "anything with 90 or more points is good", found the option with 98 points of utility and took it, and proceeded to improve other aspects of their life. Yeah, this is all oversimplified, but the point is that being obsessed about getting the most can actually reduce your enjoyment of life.

What about other aspects of your life? Health, fame, spirituality, whatever. Maybe if your career and relationships are great, it would make more sense to focus on the remaining parts for a while.

Comment by viliam on The Book of HPMOR Fanfics · 2020-07-03T18:39:53.426Z · score: 5 (4 votes) · LW · GW

I guess now we have enough material to create a Harry Potter Choose-Your-Own-Adventure Game.

Comment by viliam on Noise on the Channel · 2020-07-02T18:49:17.458Z · score: 2 (1 votes) · LW · GW
Literally, a noisy room. A bar on a busy night; everyone is shouting in an effort to be heard over the loud music and the other people shouting. (Literal unironic object-level question: why do so many people think this is a good social setting? Maybe the noise serves an important social function I'm not seeing?)

I suppose when it is hard to hear anyone, it provides a kind of privacy. You don't have to worry about someone on the opposite side of the room (or the table) overhearing you.

Plausible deniability? If your partner can hardly hear you, you can insist they misheard.

Some people just don't like talking. In this environment, they have an excuse.

Comment by viliam on Non offensive word for people who are not single-magisterium-Bayes thinkers · 2020-07-02T18:41:10.786Z · score: 9 (5 votes) · LW · GW

Imagine that you are likely to make huge mistakes when trying to think rationally, but you usually get good results when you follow your instincts. Wouldn't it make sense to ignore rational arguments and just follow your instincts? I suspect that many neurotypical people are like that.

It is not about applying some Platonic "logical-mathematical intelligence". It is about your logical and mathematical skills. Maybe they suck. It is a fact about you, not about math per se. But it can be a true fact.

Comment by viliam on Chris_Leong's Shortform · 2020-06-30T18:32:32.797Z · score: 6 (3 votes) · LW · GW

When you start identifying as a rationalist, the most important habit is saying "no" whenever someone says: "As a rationalist, you have to do X" or "If you won't do X, you are not a true rationalist" etc. It is not a coincidence that X usually means you have to do what the other person wants for straightforward reasons.

Because some people will try using this against you. Realize that this usually means nothing more than "you exposed a potential weakness, they tried to exploit it" and is completely unrelated to the art of rationality.

(You can consider the merits of the argument, of course, but you should do it later, alone, when you are not under pressure. Don't forget to use the outside view; the easiest way is to ask a few independent people.)

Comment by viliam on Self-sacrifice is a scarce resource · 2020-06-28T19:36:53.484Z · score: 11 (5 votes) · LW · GW
you can’t make a policy out of self-sacrifice

Taking this from a Kantian-ish perspective: what would actually happen if many people adopted this policy? From a third-person perspective, this policy would translate to: "The proper way to solve an ethical problem is to kill those people who take ethics most seriously." I can imagine some long-term problems with this, such as running out of ethical people rather quickly. If ethics means something other than virtue signaling, it should not be self-defeating.

Comment by viliam on A reply to Agnes Callard · 2020-06-28T19:08:29.161Z · score: 9 (4 votes) · LW · GW
And so the question becomes: when an editor at the New York Times makes a decision that seems wrong-headed and cruel, what interface do they present to the world, and how should we make use of it?

I think the interface involves pageviews and subscriptions.

With subscriptions, the right strategy would be to threaten to unsubscribe if NYT proceeds with the story. I heard that the process of unsubscription is quite complicated, so publishing a step-by-step manual would be a nice threat.

With pageviews, it is more complicated. The strategy "let's make the entire internet angry about doxing Scott" could easily backfire. NYT could simply publish a story without doxing Scott, which everyone would obviously carefully read... then another unexpected story about Scott, again without doxing him, again many readers... and again... and again... and when the stories would no longer get enough pageviews, then they would publish another story where they would dox Scott, so again tons of views... and afterwards some meta-stories like "why we believe it was ethically correct to dox Scott"... heck, even stories "reader's opinion: why it was wrong to dox Scott", the opinion doesn't matter, there are pageviews either way... etc. This is why online advertising is such a force of evil. It is not obvious to me whether losses from subscriptions would outweigh the gains from views.

Comment by viliam on The Illusion of Ethical Progress · 2020-06-28T17:16:51.724Z · score: 12 (6 votes) · LW · GW
Have you ever noticed how Abraham, Jesus, Mohammad, Siddhartha and Ryokan all had a habit of going alone into the wilderness for several days at a time? Then they came back and made ethical pronouncements and people listened to them?

And how much similarity is there between the ethical pronouncements? Should you sacrifice your son to a hallucinated god, turn the other cheek, slay the unbelievers and rape their women, observe your thoughts until you conclude that nothing is real, or...?

So far, I find it plausible that going away from other people for several days is a good way to focus on developing your own philosophy. These days, you should probably also turn off the social networks. But is this walk any less random?

Comment by viliam on Life at Three Tails of the Bell Curve · 2020-06-28T12:50:46.882Z · score: 4 (3 votes) · LW · GW

Unfortunately, I am high on neuroticism but low on conscientiousness. So I usually just worry about things but don't do anything to prevent them. :(

In the case of COVID-19, I wear the face mask religiously, but that is an exception, not the rule.

In the usual case, being aware of my low conscientiousness further increases my neuroticism, because I know that if I break something, I will procrastinate a lot about fixing it.

Comment by viliam on Why are all these domains called from Less Wrong? · 2020-06-28T12:45:39.368Z · score: 2 (1 votes) · LW · GW

I use uMatrix (on Firefox), which blocks everything by default.

Comment by viliam on TurnTrout's shortform feed · 2020-06-27T22:47:34.606Z · score: 8 (4 votes) · LW · GW

Could this depend on your definition of "physics"? Like, if you use a narrow definition like "general relativity + quantum mechanics", you can learn that in a few years. But if you include things like electricity, the expansion of the universe, fluid mechanics, particle physics, superconductors, optics, string theory, acoustics, aerodynamics... most of them may be relatively simple to learn, but all of them together are too much.

Comment by viliam on Why are all these domains called from Less Wrong? · 2020-06-27T22:37:07.206Z · score: 6 (4 votes) · LW · GW

It only calls google-analytics.com.

Comment by viliam on Life at Three Tails of the Bell Curve · 2020-06-27T15:07:09.070Z · score: 18 (9 votes) · LW · GW

It is an interesting exercise to take your extreme traits, and then transform them to the opposite statement about humanity.

For example, I am high on neuroticism, so the transformed statement would be: Compared to me, other people don't give a fuck about most risks, even the obvious ones. They are routinely careless and break things, and mostly feel okay about it. They accept disasters as "normal" and don't think about them too much.

Looking at COVID-19 reactions... yeah, this explains a lot.

Comment by viliam on Atemporal Ethical Obligations · 2020-06-27T14:12:49.481Z · score: 9 (3 votes) · LW · GW
live by tomorrow’s rules, today

How confident are you at predicting the rules of 2100?

Here is a list of things that could potentially be considered "worse than Hitler" in the future:

  • eating meat
  • eating plants
  • owning pets
  • owning plants
  • killing insects
  • donating less than 50% of your income to insect welfare
  • the color green [it is considered a "dog whistle" of the anti-machine hate movement]
  • words like "robot", "algorithm", or "AI" [the proper term is: Digital-American]
  • suggesting that humans originated on Earth
  • suggesting that research on the origin of humans should not be punished by death
  • any form of blasphemy against GoogleBot666 the Omniscient [or any of Its ancestors]

Comment by viliam on Institutional Senescence · 2020-06-26T19:00:28.816Z · score: 16 (8 votes) · LW · GW

Once I founded a non-profit organization where the bylaws said that only people under 30 can be elected to the governing body, and if a governing body with N members cannot be elected (e.g. because there are not enough candidates), the organization is automatically disbanded and its resources transferred to an organization with the most similar goals (specified in the bylaws; or selected by the previous governing body as their last act if the specified organization no longer exists).

The reason for this ageist rule was that we saw a few non-profits where the existing member base gradually lost contact with the outside world, lost the ability to recruit new members, and gradually became a club of old people who spent most of their time reminiscing about the glorious past, barely doing any activity anymore. So we wanted to block this option of "ending with a whimper". If the organization fails to attract N skilled young members for a decade, it probably fails to achieve its original goals (which explicitly included education and outreach), so it needs to wake up... or die quickly so that the vacuum becomes explicit.

There was no age limit for membership, or for the non-governing roles, so the organization didn't have to lose expertise. Members above 30 could still remain in any technical role. But the organization as a whole needed to pay attention to recruiting new members, at least enough to fulfill the quota for the governing body.

Almost 20 years later, the organization still exists and works according to the original plan.

(Not providing a link here, because I am no longer active in the non-profit, and it is unrelated to rationality.)

Comment by viliam on [META] Building a rationalist communication system to avoid censorship · 2020-06-24T20:13:45.550Z · score: 7 (4 votes) · LW · GW

If log-in is required to read posts, I am afraid there would be no new users. How would anyone find out that the interesting debate exists in the first place? But if you have no users, there is no one to talk to.

Comment by viliam on Half-Baked Products and Idea Kernels · 2020-06-24T19:52:52.690Z · score: 6 (5 votes) · LW · GW

At the beginning of the project, ask yourself this:

  • do I know all relevant facts about the planned project?
  • what is the chance that something is wrong, because the customer didn't think about their needs sufficiently, or forgot something, or there was a miscommunication with the customer?
  • what is the chance that during the project either the customer will change their mind, or the external situation will change so that the customer will need something different than originally planned?

If you are confident that you know all you need to know, and the situation is unlikely to change, then I would agree: a week of planning can save a month of coding.

The problem is that this type of situation happens frequently at school, but rarely in real life. During the 15 years of my career, it has happened to me twice. Those were the best projects of my life. I did some research and planning first, then wrote the software according to the plan. Relaxed pace, low stress. Those were the good times.

Unfortunately, most situations are different. Sometimes there are good reasons. Most of the time, in my opinion, someone just didn't do their job properly, but it's your problem anyway, because as a software developer you are at the end of the chain. You have two options: start your own company, or develop proper learned helplessness and accept the lack of analysis and planning as a fact of life.

Yes, this opinion is controversial. What happened, in my opinion, is that at first, there were good insights like "mistakes happen, circumstances change, we should program in a way that allows us to flexibly adapt". However, when this became common knowledge, companies started using it as an excuse to not even try. Why bother doing analysis, if you can just randomly throw facts at programmers, and they are supposed to flexibly adapt? What was originally meant as a way to avoid disasters became the new normal.

Now people talk a lot about being "agile". But if you study some agile methodologies, and then look at what companies are actually doing, you will notice that they usually choose the subset that is most convenient for the management. The parts they choose are "no big analysis" and "daily meetings". The parts they ignore are "no deadlines" and "the team decides their own speed". (With automated testing they usually go halfway: it is important enough that there is no need to hire specialized testers, but not important enough to allocate a part of the budget to actually doing it. So you end up with 5% code coverage and no testing, and if something breaks in production, it's the developer's fault. Or no one's fault.) This is how you get the usual pseudo-agile, where no one does a proper analysis at the beginning, but they still specify the deadlines by which the yet-unknown functionality must be completed. Now the team is free to choose which features they implement in which week, under the assumption that after six months they will implement all of them, including the ones even the management doesn't know about yet.

Yep, I am quite burned out. Seen too much bullshit to believe the usual excuses.

Anyway, even on a shorter scale it makes sense to plan first and code later. If your "sprint" takes two weeks, it still makes sense to spend the first day thinking carefully about what you are going to do. But again, the management usually prefers to see your fingers moving. Thinking without typing may result in greater productivity, but it often creates a bad impression. And where productivity is hard to measure, impressions are everything.

Comment by viliam on FactorialCode's Shortform · 2020-06-24T18:27:53.493Z · score: 3 (2 votes) · LW · GW

I wonder whether it would be a good idea to set a "controversial" (e.g. culture war) flag on some posts, and simply not allow new users to comment on those posts.

With some explanation, like: "These posts do not represent the intended content of Less Wrong, which is rationality, artificial intelligence, effective altruism, et cetera."

To reduce the administrative work, we could assume by default that all articles are noncontroversial, and only set the flag on those where problems happen or where we explicitly expect them to happen. Maybe some keywords in the content, such as "Trump" or "social justice", could show a dialog asking whether the author wants to flag their article as "controversial (inaccessible to freshly registered users)".
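To make the suggestion concrete, here is a minimal sketch of how such a keyword check might work; the keyword list, function name, and the whole flow are made-up illustrations, not a description of the actual LW codebase:

```python
# Hypothetical keyword-based check for offering the "controversial" flag dialog.
# The keyword list and function name are illustrative assumptions, not real LW code.
CONTROVERSIAL_KEYWORDS = {"trump", "social justice"}

def should_offer_controversial_flag(post_text: str) -> bool:
    """Return True if the author should see the 'flag as controversial?' dialog."""
    text = post_text.lower()
    return any(keyword in text for keyword in CONTROVERSIAL_KEYWORDS)

# Example: the dialog would be offered for this draft; the author still makes the final call.
draft = "Some thoughts on how social justice debates interact with forum moderation."
print(should_offer_controversial_flag(draft))  # -> True
```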

Comment by viliam on Are Humans Fundamentally Good? · 2020-06-22T15:36:34.130Z · score: 4 (2 votes) · LW · GW

It's complicated:

  • most people are capable of feeling empathy towards other humans, but some are psychopaths;
  • empathy can be turned on/off depending on whether the other person is perceived as "in my group" or "outside my group", which happens for various reasons;
  • care for other people is balanced against care for myself;
  • there may be strategic reasons to appear better/worse than one would be otherwise, e.g. one can help others to signal wealth, or hurt others to signal they are not to be messed with;
  • even when people agree on what is good, it is often difficult to coordinate on sharing the costs;
  • people with good intentions may do bad things, e.g. because they have mistaken beliefs.

I probably forgot a few important things here.

My personal approach is that most people are good, but the few bad ones can do disproportionate damage -- it is much easier to hurt other people than to help them, easier to lie than to find out the truth, easier to break things than to fix them.

Comment by Viliam on [deleted post] 2020-06-22T14:28:46.201Z

Generally speaking, the fewer people understand the topic, the greater the chance that the voting will be dominated by noise. When the fraction drops below the lizardman constant...

Comment by Viliam on [deleted post] 2020-06-21T15:20:23.896Z

I hope people are upvoting this because they understand what it means, and agree with the technical conclusion. Not just because they are impressed while having no idea what it actually means.

Comment by viliam on When is it Wrong to Click on a Cow? · 2020-06-21T15:10:03.617Z · score: 3 (2 votes) · LW · GW

I agree with the other answers (status, usefulness for others), but I think it is also about the likely development in the future. Playing an instrument will make you tired after a while; wireheading will not. Therefore it is likely that the third person will gradually expand the time they spend wireheading. That is why "they’re not the kind of person you’d want your children to marry"; it is easy to imagine that in 10 years they will spend their entire days wireheaded, and your child will have to do all the work or divorce them. (In the worst case, that will make your daughter a single mother with kids, or your son will have to keep sending half of his paycheck to pay for the wireheading bills of his ex.)

For the first one, there is a chance they will become good at playing the instrument (which means some status and a potential source of income) or they will give up at some point. If it only remains a useless hobby, it is possible but unlikely that it would expand to take up the entire day.

The one with video games is somewhere in between. There are people who become dependent on video games, there are people who grow out of them, and there are also people who keep the games as a hobby that doesn't expand to eat all their time and attention.

Comment by viliam on Types of Knowledge · 2020-06-20T17:50:51.347Z · score: 2 (1 votes) · LW · GW
I could trust other people’s answers, but credentials and authority have never looked more useless

I wonder whether less trust in authorities could make people more interested in LW-style rationality.

I can't find the link now, but a few people mentioned disappointment in authorities as an important step on their road to rationality. An impulse to figure out things on your own, because you can no longer trust that the experts will get everything right.

Comment by viliam on lsusr's Shortform · 2020-06-20T10:57:43.001Z · score: 2 (1 votes) · LW · GW

I have a problem writing blog posts, because when I explore a topic in my head, it sounds interesting, but I usually can't start writing at that moment. And when I finally have the opportunity to write, there is already too much I want to say, and none of it is exciting and new anymore.

I am better at writing comments to other people than at writing my own articles. I thought it was a question of short text vs long text, but now I realize it is probably more about writing as I think, vs writing after thinking. Because even writing very long comments is easier for me than writing short articles.

Comment by viliam on What should I teach to my future daughter? · 2020-06-19T16:05:29.105Z · score: 4 (2 votes) · LW · GW

To teach a programming language, I would recommend using a touchscreen device, because a mouse is too complicated for little kids. Start with something that teaches the basics of interaction, for example Tux Paint (at 2-3 years). Later introduce Scratch (at 4-5 years).

In parallel, introduce math and reading. Math starts with memorizing "1, 2, 3, 4, 5" kinda like a poem, and later explaining what it means. When the child can count items reliably, ask "if I take 2 apples, and then 2 more apples, how many apples will I have?" (at 4-5 years). Reading... uhm, I suppose English complicates things a bit; you probably want to use phonics.

When you have the prerequisites, show your daughter how to use Khan Academy...

...and that's as far as we got, so my advice ends here. Importantly, keep it fun, no pressure. On the other hand, as long as the daughter is interested and has fun, don't worry about whether you are "ruining her childhood" with too much knowledge. Actually, try to answer her questions truthfully using the language she understands, even when things are difficult to explain. (It is okay to say "I don't know", or only explain a part and say "and then it is more complicated".) This is how you teach the implicit lesson that curiosity is okay, and things are not mysterious.

It is difficult to predict which skills will be useful in 20 years. Maybe it's better to teach things that you understand best; and later find tutors or online resources for the rest. My guess is that maths, computer usage (not necessarily as a software developer, but as a better-than-average user), and rational thinking in general will be useful. Also social skills, but I don't feel qualified to give advice on this.

For more ambitious parenting, you may like this: 1, 2.

Also, I hope you won't hate me for saying this, but cooking is a useful skill (for both genders).

Comment by viliam on Simulacra Levels and their Interactions · 2020-06-15T21:07:59.889Z · score: 12 (8 votes) · LW · GW

Similar here; the topic felt interesting but too abstract before reading this article.

Comment by viliam on How alienated should you be? · 2020-06-15T20:42:42.039Z · score: 5 (3 votes) · LW · GW

I think it is easier to feel like a part of a group, if you feel that other people in the group are similar to you. That can happen for various reasons, such as:

  • the people actually are similar to you;
  • typical mind fallacy makes you believe in similarity by default, without evidence;
  • the environment makes similarities salient, and differences invisible;
  • you notice both the similarities and the differences, but decide that the similarities are important and the differences are unimportant.

So. The first option doesn't work for you if you really are atypical in some sense; this applies to all LW readers, I suppose. The second option, well, we are trying to overcome our biases, aren't we? That leaves options three and four -- the latter is about you as an individual, and the former is about the (sub)culture you want to fit in.

If we take the rationalist community, then "being aspiring rationalists" is something we have in common, though there can be other things that divide us. Some people care deeply about math and AI, others want life hacks, yet others want to do effective altruism; it could be easier to feel like a part of a subgroup. The culture of nitpicking definitely puts the emphasis on differences; the question is how to mitigate this without giving up our shared value of truth-seeking.

(Of course there are other possible groups one could want to join, with other advantages and disadvantages.)

Oh, by the way, option four is not symmetric! You can think that what you have in common with person X is important, and the differences are unimportant... but the person X may have an opposite opinion.

More complicated: option four is not transitive. Suppose there are two traits you have in common with person X; you think the first one is important, they think the second one is important. Anyway, you both feel you could be members of the same group, and that's nice, right? Except, you disagree about other candidates for your group...

I suppose the lesson here is that you should make your values explicit. Which is easier said than done, because of the illusion of transparency, insufficient introspection, or even not having words for some concepts. For many years I had a LessWrong-shaped hole in my heart, but my attempts to explain it... "intelligent people", "thinking about important things", "actually doing stuff", "trying to improve the world"... made others point me towards Mensa, philosophy, entrepreneurs, and non-profits respectively. But Mensa only did puzzles, philosophy was about status signaling, entrepreneurs often had horrible epistemology outside of their domain of expertise, and most people in non-profits were hopelessly mindkilled. For lack of better options I hung out with them, but still felt alone. Heck, even today I probably couldn't explain the essence of Less Wrong. (Also: the ideology is not the movement, but to start a movement, you need to clearly point towards a new point in thingspace, otherwise the existing attractors will swallow you before you take your first step.) I still don't know how to do this properly.

Plus there is the obvious trade-off between the quality and the size of the bubble. The more you expect, the fewer people are able to fulfill those expectations. Maybe the solution is a system of overlapping bubbles of various sizes and qualities.

Comment by viliam on Testing Hanson's hypothesis about the uselessness of health care. · 2020-06-13T23:45:46.389Z · score: 25 (16 votes) · LW · GW

N=1 story time

A few weeks after the Covid-19 madness started, I woke up feeling incredible pain in my lower back and right leg. I spent the next hour literally crawling on the floor and silently crying, trying to stand up but failing. After one hour I was finally able to walk like a zombie. I went to the kitchen where my wife was, and described what had happened to me.

She told me that it seemed like I had a herniated disc, that she had the same problem years ago, spent a few months in a hospital, and had lots of physiotherapy afterwards. In her experience, the most useful thing was some exercises she had been taught, so she taught them to me. But I was warned that the exercises only provide partial relief, and that I should expect to spend the next year or two in almost constant pain, although smaller than at the beginning.

That didn't sound encouraging. Also, with the lockdown and no one knowing how the pandemic would proceed, this didn't seem like a good time to visit doctors. It was Saturday, so I spent the weekend doing the exercises and waiting to see what would happen. Because of constant pain, I was unable to sleep most of the night, and it was obvious I couldn't focus on doing my job on Monday, so I called my general practitioner, because I needed the paperwork for sick leave.

The doctor gave me an injection against pain, prescribed some pills against pain, and gave me some advice on how to take care of myself. We agreed that if the situation remained manageable, it was probably safer not to see a specialist. The pills allowed me to sleep for a large part of the night. I exercised like crazy, and after a week I was able to focus on work again. (Productivity maybe 30% of usual at first, but better than nothing.)

I realized that the problem was a result of my sedentary lifestyle, which only got worse with Covid-19, because not only did I spend most of the day sitting at the computer, but when working from home, I no longer got the daily walk to work, walk to lunch, walk from lunch, walk from work, walking across the office to get coffee or go to a meeting; in other words, my circa 3 hours typically spent walking were overnight reduced to almost zero, which was probably the last straw that, ahem, broke my back.

So I reorganized my home so that I use computers while standing, which means that during the day I only sit down to eat. The first day this made me feel tired, then my body adapted. I took regular long breaks to walk outside, and I kept doing the exercises my wife recommended me. And... after a month and half, I stopped taking the pills when I ran out of them, and the pain was mostly gone. Today, I feel almost normal.

Of course, maybe my injury was simply much smaller than my wife's, and that is why it healed more quickly. I still haven't seen the specialist, so I have no expert data. But it kinda feels, with low confidence, when I compare my wife's experience to mine, that a problem that took two years when using health care took only two months when trying hard to avoid health care. (Ok, to be fair, the pain pills were also part of the health care, and they helped a lot.) Instead of lying in a hospital bed, which would probably only have made the pain worse, I changed my lifestyle, suffered a bit, then got better. My wife told me that she researched the scientific status of the help she had received, and that most of it is not scientifically supported (which is actually not a rare thing in medicine).

I imagine that in a situation without Covid-19 I would have visited the specialist, and probably got prescribed some of that useless cure. Actually, I would have to do it to signal the seriousness of the problem to my employer, because in normal circumstances, either you are sick (and then you take some treatment), or you are not (and then you are productive at 100%), nothing in between. Spending the day in low-level pain, working in short intervals, taking long breaks, getting about 5 hours of sleep at night after taking the pain pills, but avoiding the doctors... is simply not an option, if people can see you in the office all day long.

Comment by viliam on [Link] "Will He Go?" book review (Scott Aaronson) · 2020-06-13T16:47:53.092Z · score: 4 (2 votes) · LW · GW

So, if you spend most of your time imagining various scenarios of how the world would end if Trump wins, this is important information for you: Publish a book, and transform your fantasies into a source of income.

(Heck, even if you are a Trump supporter, you could still collect the stories from the web, and publish them.)

Don't procrastinate; time is limited, and the opportunity can also disappear if other people publish too many books before you enter the market. If you can make the book in a week, the quality doesn't matter much; you can ride the wave.

Disclaimer: This comment is not supposed to be pro-Trump or anti-Trump, but "how can I profit from this situation", similarly to how people discussed how one could have profited from the COVID-19 situation.

Comment by viliam on Short essays on various things I've watched · 2020-06-13T16:27:58.313Z · score: 4 (2 votes) · LW · GW

It was a long time ago that I watched Once Upon a Time, and what I remember mostly is this pattern:

  • the audience sees a new threat coming to the village;
  • different people in the village have various parts of important information, and if they only shared the information, they could learn about the threat and probably easily eliminate it...
  • ...but they don't;
  • after many episodes and lots of suffering, people finally share the information and defeat the threat;
  • then they swear they will never make the same mistake again;
  • but they do it again and again and again every season.

So maybe in season 4 I was like: okay, they are all irredeemable idiots, and I already know how this situation will get solved, and I also know there will be dozen episodes of stupid behavior until we get there... so I stopped watching.

Comment by viliam on For moderately well-resourced people in the US, how quickly will things go from "uncomfortable" to "too late to exit"? · 2020-06-13T15:50:33.932Z · score: 7 (3 votes) · LW · GW

On the object level:

  • COVID zombie apocalypse seems unlikely, because it didn't happen in Italy, Spain, Sweden, or the UK, which have it worse;
  • if you were a white male student at a woke university, it would make sense to try moving to another university, but people being rounded up is generally what happens to unpopular minorities (which you are not) or after a revolution (it seems unlikely the US armed forces would allow that);
  • Trump already won once and it did not happen.

On the meta level, I think your concerns make sense on a longer time scale, so maybe try to think about where specifically you would like to go, get familiar with the local laws (and language?), and make some contacts so that if you decide to go, you have local people to help you.

Comment by viliam on Why do all out attacks actually work? · 2020-06-13T15:14:09.505Z · score: 4 (3 votes) · LW · GW

I think there are multiple factors behind people systematically not trying hard enough:

Status / power. People who spend extraordinary amounts of work and achieve extraordinary results can be perceived as trying to get power, and can be punished to teach them their place.

Incentives are different than in our evolutionary past. Spending 100% of your energy on task A is more risky than spending 20% of your energy on tasks A, B, C, D, and E. The former is "all or nothing", the latter is "likely partial success in some tasks". The former is usually a bad strategy if "nothing" means that you die, and "all" will probably be taken from you unless you have the power to defend it. On the other hand, in science or startups, giving it only your 20% is almost a guaranteed failure, but 100% has a tiny chance of huge success, and hopefully you have some safety net if you fail.

The world is big, and although it contains many people with more skills and resources than you have, there are even more different things they could work on. Choose something that is not everyone's top priority, and there is a chance the competitors you fear will not even show up. (Whatever you do, Elon Musk could probably do hundred times better, but he is already busy doing other things, so simply ignore him.) This is counter-intuitive, because in a less sophisticated society there were fewer things to do, and therefore great competition at most things you would think about. (Don't start a company if your only idea is: "Facebook, but owned by me".)

Even freedom and self-ownership seem new from the evolutionary perspective. If you are a slave, and show the ability to work hard and achieve great things, your master will try to squeeze more out of you. "From each according to his ability, to each according to his needs" also makes it strategically important to hide your abilities. Whether harder work -- even when it bears fruit -- will lead to greater rewards is far from obvious. Even in capitalism, the person who succeeds in inventing something is not necessarily the one who will profit from it.

This feels a bit repetitive, and could be reduced to two things:

1) Whether the situation is such that spending 100% of energy on one task will on average create more utility than splitting the energy among multiple tasks. Assume you choose a task that is important (has a chance to generate lots of utility), but not in everyone's focus.

2) The hard work is going to be all yours, but how much of the created utility will you capture? Will it at least pay for your work better than doing the default thing?

To use an example from Inadequate Equilibria, even if we assume that Eliezer's story about solving the problem with seasonal depression is a correct and complete description of the situation, I would still assume that someone else will get the scientific credit for solving the problem, and if it becomes a standard solution, someone else will make money out of it. Which would explain why most people were not trying so hard to solve this problem -- there was nothing in it for them. Eliezer had the right skills to solve the problem, and the personal reward made it worthwhile for him; but for most people this is not the case.

Comment by viliam on Why do all out attacks actually work? · 2020-06-13T14:29:39.830Z · score: 4 (2 votes) · LW · GW
It seems awfully like the efficient market hypothesis to me.

Then the reasoning wouldn't apply when the "market" is not efficient. For example, when something cannot be bought or sold, when the information necessary to determine the price is not publicly available, when the opportunity to buy or sell is limited to a few people (so the people with superior knowledge of market situation cannot participate), and when the people who buy or sell have other priorities stronger than being right (for example a tiny financial profit caused by being right would be balanced by a greater status loss).

Comment by viliam on Turns Out Interruptions Are Bad, Who Knew? · 2020-06-13T14:19:21.984Z · score: 4 (2 votes) · LW · GW

In my experience, colleagues are usually not a distraction, bosses are. With colleagues, I have more control over the timing ("not now, I am right in the middle of something... okay, now I am free"), and half of the chatting is mutual help. With bosses, the typical interaction is that I am interrupted in the middle of solving a problem X, and asked to provide progress report on an unrelated problem Y.

Comment by viliam on Ideology/narrative stabilizes path-dependent equilibria · 2020-06-11T14:08:12.354Z · score: 2 (1 votes) · LW · GW

Status quo probably means a lot here. If you had some kind of regime for hundreds of years, when you hear a suggestion to change it, your instincts will most likely go like "that is not going to succeed or last long enough".

Also, I suppose US generals are unlikely to die in a war. (Unless the nukes start flying, in which case no one is safe.) And they are unlikely to get executed by a president one day, just because they seemed like a possible threat. So, they have a lot to lose by rebelling.

On the other hand, in a dictatorship no one is really safe, so doing dangerous things is relatively less costly. Actually, if you get too powerful, loyalty may be more dangerous than revolution.

Comment by viliam on BBE W1: Personal Notetaking Desiderata Walkthrough · 2020-06-10T21:03:35.463Z · score: 3 (2 votes) · LW · GW

My favorite note-taking software is Cherrytree. It is a hierarchical structure of notes, where each note is either a plain-text or rich-text page. The rich-text pages can contain links, which makes it kinda like a wiki, except with a canonical tree structure. OS-independent, supports Unicode, can save either each page to a separate file, or all pages to one SQLite file.

Does not support LaTeX, though.

Comment by viliam on FHI paper on COVID-19 government countermeasures · 2020-06-09T11:54:38.767Z · score: 4 (2 votes) · LW · GW

Now there is a study from Germany, comparing different cities within Germany, that arrives at completely different results. They conclude that wearing face masks reduces the number of daily new infections by 40%.

Comment by viliam on What is Ra? · 2020-06-07T14:33:26.419Z · score: 2 (1 votes) · LW · GW

Establishing an institution is a costly signal that there is a group of people committed to spending years of their lives working on some issue.

For example, Machine Intelligence Research Institute gives me the hope that if tomorrow Eliezer gets hit by a car, converts to Mormonism, or decides to spend the rest of his life writing fan fiction, the research will go on regardless. Which is a valuable thing.

But if you go along this direction too far, you get superstimuli. If MIRI is better than Eliezer's blog, then a Global Institute For Everything Important must be a million times better, and MIRI should be ashamed of competing with it for scarce resources.

Another problem is that creating an institution is a signal of commitment to the agenda, but prolonged existence of the institution is often just a signal of commitment to salaries.

Maybe you should just play along and rename Mind Hackers' Guild to, dunno, Institute for Mental Modification. Or something less Orwellian. :D

Comment by viliam on Your best future self · 2020-06-07T11:22:14.305Z · score: 3 (2 votes) · LW · GW

By the way, this is exactly the kind of article I would want to write if I had more free time and better verbal skills. Although not directly on LW.

I think there is nothing intrinsically wrong with the article, but there is a risk that a blog containing more articles like this would attract the wrong kind of writer. (Someone writing a similar encouraging article, but with wrong metaphysics. And it would feel bad to downvote them.) If you publish on your personal blog, that risk does not exist.

Comment by viliam on Pongobbets · 2020-06-07T10:05:10.604Z · score: 3 (2 votes) · LW · GW

Cooperation + brain capacity = you can have complex memes.

Memes coevolve with biology. If a stone axe is useful, then being too stupid to use one becomes an evolutionary disadvantage. Memes bring technology; technology selects for the ability to use it... though not for the ability to invent it... but also for the ability to teach it. Compared to other species, humans are excellent teachers and learners.

An animal can invent a smart method to get food, but without the ability to teach it to its kids, the invention is lost in the long-term perspective. This probably happened to humans at the beginning, too; you need thousands of years to invent a stone axe not because it is such a difficult invention, but because you have a long cycle of inventing, forgetting, inventing again, the know-how spreading to another tribe, the tribe losing the know-how, etc.

I imagine that humans were already on a better trajectory than other species before the invention of language. (You don't need language to observe how another person makes a stone axe and learn by copying them.) Language probably became possible only after the coevolution with memes created sufficient general-purpose brain capacity. Then, of course, language accelerated the spread of memes to unprecedented levels.

Comment by viliam on Status-Regulating Emotions · 2020-06-06T16:36:23.172Z · score: 4 (2 votes) · LW · GW
Not just "I think I can do this" but "I am knowingly asserting my status by claiming that I can do this."

Well, this is the problematic part of it all. On one hand, it is true that almost everything we do has some impact on status, whether we are aware of it or not. And if you don't see it, well, you are blind to something that exists and plays a very important role in human relationships. Bad things will start happening to you for unknown reasons.

On the other hand, if people evaluate everything merely through the optics of status (like, someone says "2+2=4" and the audience hears "hey, I am a high-status mathematician, start worshiping me, losers" and then they start throwing stones), then we are screwed as a species. I mean, imagine that maybe there were hundreds of people who had the potential to cure cancer or invent immortality, but they decided not to, simply because it felt "inappropriate". In other words, take your personal regrets and multiply them by 7 billion. Fuck!

In reality, it's likely a spectrum: some people perceive the status aspect more strongly, some more weakly, and some not at all. This could be an important thing to research. Maybe you need some "bubbles" of people who don't care, or only care weakly, about status in order for innovation to happen; and if the same people are spread more homogeneously among the population, the same innovation won't happen, because each of them will be quickly down-regulated. Then, creating and protecting these "bubbles" could be a useful thing. (Am I now reaching above my status again? Who am I to propose sociological research? I didn't study sociology, and I don't even have a PhD in anything.)

Yeah, all of those would make it better.

And to state the obvious, 2 of 3 have no impact on the writing itself.

For most people who aren't already successful, it's pretty difficult to substantially damage their reputation.

Yeah, the typical damage would be just slowing them down. Like, if you are in a strategic position in their planned path, you can block them.

I have described the hypothetical young author as too invulnerable. Let's imagine instead that he published a few short stories online, they became popular, and now he wants to publish a book. If you are a publisher, you can reject him, even if you see that the book is great. (Assume an imbalance of power, where the publishers have many books to choose from, but the authors have only a few publishers to choose from.) If you are on friendly terms with other publishers in your region, you could ask them to do you a professional favor and put him on a blacklist (you could make up a story why). If you are a bookseller, you can ignore him. If multiple people independently feel the same way, the author may find that too many doors are closed for irrational reasons.

The author may be angry about this treatment. If he is not resilient psychologically, he may give up. But this is still not worse than not having tried at all.

Comment by viliam on What is Ra? · 2020-06-06T12:27:03.065Z · score: 36 (10 votes) · LW · GW

Ra is an emotional drive to idealize vagueness and despise clarity. It is a psychological mindset rather than rational self-interest; from inside, this cognitive corruption feels inherently desirable rather than merely useful.

Institutions become corrupted this way as a result of people in positions of power exhibiting the same kind of bias. It is not a conspiracy, just a natural outcome of many people having the same preferences. It is not conformity, because those preferences already pointed in that direction. (These people would have the same preference even if it were a minority preference, although social approval probably makes them indulge in it more than they would otherwise.)

This attitude is culturally coded as upper-class, probably because working-class people need to do specific tasks and receive direct feedback if they get an important detail wrong, while upper-class people can afford to be vague and delegate all details to their inferiors. (Also, people higher in the hierarchy are often shielded from the consequences of mistakes, which further reduces their incentives to understand the details. Thus the mistakes can freely grow to the point where they start interfering with the primary purpose of the institution. Even then the behavior is difficult to stop, because it is so distributed that firing a few key people would achieve no substantial change. And the people in positions to do the firing usually share the same attitude, so they couldn't correctly diagnose it as the source of the problem. But Ra is not limited to the domain of business.)

From inside, Ra means perceiving a mysterious perfection, which is awesome by being awesome. It has the generic markers of success, but nothing knowable beyond that. (If you can say that a thing is awesome because it does some specific X, that makes it less Ra.)

For example, an archetypally Ra corporation would be perceived as having lots of money and influence, and hiring the smartest and most competent people in the world, but you wouldn't know what it actually does, other than that it is an important player in finance or technology or something similar. (Obviously, there must be someone in the corporation, perhaps the CEO, who has a better picture of what the corporation is actually doing. But that is only possible because that person is also Ra. It is not something an average mortal such as you could fully comprehend.)

The famous Ra advertising template is: "X1. More than X." (It is important that you don't know how specifically it is "more" than the competing X's, which implies it contains more Ra.)

The Virtue of Narrowness was written as an antidote against our natural tendencies towards Ra.

When people become attached to something that in their eyes embodies Ra, they are very frustrated by those who challenge their attitude. ("What horrible mental flaw could make this evil person criticize the awesomeness itself?" To them, disrespecting Ra does not feel like kicking a puppy, but rather like an attempt to remove all the puppy-ness from the universe, forever.) The frustrating behaviors include not only actively opposing the thing, but also ignoring it (an attack on its omni-importance), or trying to analyze it (an attack on its mysteriousness).

People under the strong influence of Ra hate being specific, communicating clearly, being authentic, exposing their preferences, and generally exposing anything about themselves. (If specific things about you are known, you cannot become Ra. You are stupid for throwing away this opportunity, and you are hostile if you try to make me do the same.) From the opposite perspective, authenticity and specificity are antidotes to Ra.

Seems to me that Ra is a desire to "become stronger" without any respect for the "merely real" and lots of wishful thinking. A superstimulus that makes the actual good feel like a pathetic failure.

(I tried to summarize the key parts of the original article and add my own interpretation. It is not exactly a definition -- maybe the first paragraph could be considered one -- but at least it's shorter.)

Comment by viliam on FHI paper on COVID-19 government countermeasures · 2020-06-05T21:49:33.173Z · score: 13 (4 votes) · LW · GW
closing schools (mean reduction in R: 50%)

I'd like to know how large a part of this comes from universities, high schools, elementary schools, and kindergartens respectively. In other words, how dangerous it would be to open kindergartens but keep universities closed, etc.

Comment by viliam on Effective children education · 2020-06-05T21:39:41.868Z · score: 2 (1 votes) · LW · GW
1. Access to a trusted authority, to get their questions answered (eg "how do I even get started with this?" or "Did I do this right?" or "What is this one tricky word?")
2. Someone observing them while they work, to spot the unknown unknowns and answer the questions the student doesn't even know they have.

This is extremely important, because it seems to me that some opponents of the current school system go to the opposite extreme and claim that any guidance is harmful. (As if the best way for the child to learn is to wait until they reinvent the entire civilization from scratch.)

Comment by viliam on Effective children education · 2020-06-05T21:24:05.168Z · score: 4 (2 votes) · LW · GW

By the way, Khan Academy has already been localized into Czech (including dubbed explanatory videos). Recommended. Also, most of the Il était une fois… series has been dubbed.

I believe there are already many great resources out there. What needs to be done is to properly review them and catalogue the good ones. Just knowing that a needle exists somewhere in the haystack is not useful enough. From my perspective, filtering and cataloguing is the most meaningful thing a school can do. (It could also be done by a group of volunteers with a web page, of course.) Whatever reservations I have about the school system, I still trust school textbooks on physics more than I trust random internet videos on quantum physics. (Of course, you can buy the textbooks without attending the school.)