Posts

Viliam's Shortform 2020-07-22T17:42:22.357Z · score: 8 (1 votes)
Why are all these domains called from Less Wrong? 2020-06-27T13:46:05.857Z · score: 26 (14 votes)
Opposing a hierarchy does not imply egalitarianism 2020-05-23T20:51:10.024Z · score: 5 (11 votes)
Rationality Vienna [Virtual] Meetup, May 2020 2020-05-08T15:03:56.644Z · score: 10 (3 votes)
Rationality Vienna Meetup June 2019 2019-04-28T21:05:15.818Z · score: 9 (2 votes)
Rationality Vienna Meetup May 2019 2019-04-28T21:01:12.804Z · score: 9 (2 votes)
Rationality Vienna Meetup April 2019 2019-03-31T00:46:36.398Z · score: 8 (1 votes)
Does anti-malaria charity destroy the local anti-malaria industry? 2019-01-05T19:04:57.601Z · score: 64 (17 votes)
Rationality Bratislava Meetup 2018-09-16T20:31:42.409Z · score: 18 (5 votes)
Rationality Vienna Meetup, April 2018 2018-04-12T19:41:40.923Z · score: 10 (2 votes)
Rationality Vienna Meetup, March 2018 2018-03-12T21:10:44.228Z · score: 10 (2 votes)
Welcome to Rationality Vienna 2018-03-12T21:07:07.921Z · score: 4 (1 votes)
Feedback on LW 2.0 2017-10-01T15:18:09.682Z · score: 11 (11 votes)
Bring up Genius 2017-06-08T17:44:03.696Z · score: 57 (52 votes)
How to not earn a delta (Change My View) 2017-02-14T10:04:30.853Z · score: 10 (11 votes)
Group Rationality Diary, February 2017 2017-02-01T12:11:44.212Z · score: 1 (3 votes)
How to talk rationally about cults 2017-01-08T20:12:51.340Z · score: 5 (10 votes)
Meetup : Rationality Meetup Vienna 2016-09-11T20:57:16.910Z · score: 0 (1 votes)
Meetup : Rationality Meetup Vienna 2016-08-16T20:21:10.911Z · score: 0 (1 votes)
Two forms of procrastination 2016-07-16T20:30:55.911Z · score: 10 (11 votes)
Welcome to Less Wrong! (9th thread, May 2016) 2016-05-17T08:26:07.420Z · score: 4 (5 votes)
Positivity Thread :) 2016-04-08T21:34:03.535Z · score: 26 (28 votes)
Require contributions in advance 2016-02-08T12:55:58.720Z · score: 64 (64 votes)
Marketing Rationality 2015-11-18T13:43:02.802Z · score: 28 (31 votes)
Manhood of Humanity 2015-08-24T18:31:22.099Z · score: 10 (13 votes)
Time-Binding 2015-08-14T17:38:03.686Z · score: 17 (18 votes)
Bragging Thread July 2015 2015-07-13T22:01:03.320Z · score: 4 (5 votes)
Group Bragging Thread (May 2015) 2015-05-29T22:36:27.000Z · score: 7 (8 votes)
Meetup : Bratislava Meetup 2015-05-21T19:21:00.320Z · score: 1 (2 votes)

Comments

Comment by viliam on Blog posts as epistemic trust builders · 2020-09-27T09:34:47.539Z · score: 7 (4 votes) · LW · GW

Building reputation by repeated interaction.

But it needs to be the type of interaction where you notice and remember the author. For example, if you go to LessWrong, you are more likely to associate "I read this on LessWrong" with the information than if you just visited LessWrong articles from links shared on social networks. (And it is probably easier to remember Zvi than an average author at LessWrong, because Zvi recently posted a sequence of articles, which is easier to remember than an equal number of articles on unrelated topics.) You need to notice "articles by Zvi" as a separate category first, and only then can your brain decide to associate trust with this category.

(Slate Star Codex takes this a bit further, because for my brain it is easier to remember "I read this on SSC" than to remember the set of articles written by Scott on LessWrong. This is branding. If your quality is consistently high, making the fact "this was written by me" more noticeable increases your reputation.)

The flip side of the coin is that the culture of sharing hyperlinks on social networks destroys trust. If you read a hundred articles from a hundred different sources every day, your brain has a hard time keeping tabs. Before the internet, when you regularly read maybe 10 different journals, you gradually noticed that X is reliable and Y is unreliable. Sometimes you read ten reliable stories on one day and ten unreliable stories on a different day, and it felt different. But on the internet, there are a hundred websites, and you switch between them, so even if a few of them are notoriously bad, it is hard to notice. It is even harder because the same website can have multiple authors of wildly different quality. A scientist and a crackpot can have a blog on the same domain. With paper sources, the authors within one source were more balanced. (LessWrong is also kinda balanced, especially if you only consider the upvoted articles.)

Comment by viliam on niplav's Shortform · 2020-09-27T09:04:21.178Z · score: 2 (1 votes) · LW · GW

If the adblockers become too popular, websites will update to circumvent them. It will be a lot of work at the beginning, but probably possible.

Currently, most ads are injected by JavaScript that downloads them from a different domain. That allows adblockers to block anything coming from a different domain, which makes the ads relatively easy to block.

The straightforward solution would be to move ad injection to the server side. The PHP (or whatever language) code generating the page would contact the ad server, download the ad, and inject it into the generated HTML file. From the client perspective, it is now all coming from the same domain; it is even part of the same page. The client cannot see the interaction between server and third party.

The problem with this solution is that it is too easy for the server to cheat: to download a thousand extra ads without displaying them to anyone. The advertising companies must find a way to protect themselves from fraud.

But if smart people start thinking about it, they will probably find a solution. The solution doesn't have to work perfectly, only statistically. For example, the server displaying the ad could also take the client's fingerprint and send it to the advertising company. This fingerprint can of course be either real, or fictional if the server is cheating. But the advertising company could cross-compare fingerprints coming from a thousand servers. If many different servers report having noticed the same identity, the identity is probably real. If a server reports too many identities that no one else has ever seen, the identities are probably made up. The advertising company would suspect fraud if the fraction of unique identities reported by one server exceeded, say, 20%. Something like this.
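The cross-comparison heuristic described above can be sketched in a few lines. This is a minimal illustration, not a real fraud-detection system; the data shapes, the function name, and the 20% threshold are assumptions for the sake of the example:

```python
from collections import Counter

def suspicious_servers(reports, threshold=0.20):
    """Flag servers whose reported fingerprints are mostly unseen elsewhere.

    reports: dict mapping server name -> list of client fingerprints it reported.
    threshold: maximum tolerated fraction of fingerprints unique to one server.
    """
    # Count how many distinct servers reported each fingerprint.
    seen_by = Counter()
    for server, fingerprints in reports.items():
        for fp in set(fingerprints):
            seen_by[fp] += 1

    flagged = []
    for server, fingerprints in reports.items():
        fps = set(fingerprints)
        if not fps:
            continue
        # Fingerprints that no other server has ever reported.
        unique = sum(1 for fp in fps if seen_by[fp] == 1)
        if unique / len(fps) > threshold:
            flagged.append(server)
    return flagged

reports = {
    "honest1": ["a", "b", "c", "d", "e"],
    "honest2": ["a", "b", "c", "d", "f"],
    "cheater": ["x1", "x2", "x3", "x4", "a"],
}
print(suspicious_servers(reports))  # ['cheater']
```

The honest servers overlap heavily because they see the same real visitors, so only the server inventing fingerprints crosses the threshold. A real system would of course need to account for genuinely niche websites with unusual audiences.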

Comment by viliam on Most People Aren't Fishermen · 2020-09-26T21:56:02.992Z · score: 2 (1 votes) · LW · GW

My opinion on marriage is conservative -- people should get married when they want to have kids. They don't make sacrifices for each other; together they pay the costs of creating a good environment for their kids to grow up in.

If you don't want to have kids, you can have sex or live together without marriage, too, and divorce has made marriage kinda useless as a signal of commitment. (Okay, there are other reasons, too, such as tax benefits.)

From this perspective, I am quite surprised that you see marriage as the opposite of a growth mindset. Making a commitment to radically change your everyday life for the next 20 years, and taking responsibility for challenges you never experienced before, knowing that there is no way to stop this train without someone getting hurt...

Similarly, strategically making a sacrifice counts as "growth" in my book. (Jordan Peterson agrees.)

Not knowing your friend's buddy of course makes it impossible for me to guess whether his decision was a result of maturity or... something completely different.

They don’t want to take the risky leap in becoming fishermen. As long as they keep receiving enough fish, they’ll tolerate the misery.

Who knows what would happen if the risk became smaller, e.g. thanks to a UBI. You seem to assume that people who don't accept risk now are simply the type of person who would never take a risk. But maybe many people consider some smaller levels of risk acceptable (e.g. "there is a chance I will spend three years working on something that ultimately fails, and if I switch to a regular career later, I will be three years behind my peers"), and some higher levels of risk unacceptable (e.g. "there is a chance I will lose my lifelong savings and live in poverty, or get sick without having good healthcare"). And maybe too many people live in a situation where trying something revolutionary would require the unacceptable level of risk.

By the way, some people work in corporations because they need to accumulate the capital necessary for starting their own company. And some people work in corporations because their company failed and now they have to pay their debts. Both of these can take many years.

Comment by viliam on What is complexity science? (Not computational complexity theory) How useful is it? What areas is it related to? · 2020-09-26T15:43:51.847Z · score: 3 (2 votes) · LW · GW

Yes, this is a motte of "emergence".

The problematic part is when you turn the concept of "despite understanding the rules of all little pieces, it is still difficult for a human to predict some patterns of their interaction" into a noun, and then kinda suggest that it refers to a mysterious thing that many difficult-to-predict patterns have in common, and that there is a way to study this mysterious thing itself, and by doing so gain insight (going beyond "yep, complex things with many parts are often difficult to predict") into all these difficult-to-predict patterns.

In other words, if you make it seem as if understanding of e.g. gliders and biological evolution (two examples of "emergence") allows you to better predict stock markets (another example of "emergence"... therefore, they all should have something in common, and you can study that).

Quoting Eliezer: (source)

Taken literally, that description fits every phenomenon in our universe above the level of individual quarks [...] There’s nothing wrong with saying “X emerges from Y,” where Y is some specific, detailed model with internal moving parts. [...] Gravity arises from the curvature of spacetime, according to the specific mathematical model of General Relativity. Chemistry arises from interactions between atoms, according to the specific model of quantum electrodynamics.

The phrase “emerges from” is acceptable, just like “arises from” or “is caused by” are acceptable, if the phrase precedes some specific model to be judged on its own merits. However, this is not the way “emergence” is commonly used. “Emergence” is commonly used as an explanation in its own right.

Comment by viliam on What is complexity science? (Not computational complexity theory) How useful is it? What areas is it related to? · 2020-09-26T13:29:12.483Z · score: 3 (2 votes) · LW · GW

Same here. Reading the title, thinking "explaining how exponential complexity is worse than linear will be a piece of cake". Reading the text, thinking "okay, how is this different from cybernetics?"

Even Wikipedia just says "study of complexity and complex systems", and then points towards computational complexity and systems theory. Wikipedia has its flaws, but...

Even among the resources linked as "some courses/primers/introductions", half of them do not contain the words "complexity theory" or "complexity science". Which makes me doubt this:

It is at least not 100% crackpottery, since some books are published by Princeton university press and Oxford university press.

Just because those books contain the word "complex" or "complexity" doesn't mean they support the idea of a "complexity science".

Comment by viliam on MikkW's Shortform · 2020-09-26T12:34:09.102Z · score: 3 (2 votes) · LW · GW

either A) most LW'ers aren't investing in stocks

Does LW 2.0 still have the functionality to make polls in comments? (I don't remember seeing any recently.) This seems like the question that could be easily answered by a poll.

Comment by viliam on Shittests are actually good · 2020-09-25T20:55:22.922Z · score: 2 (1 votes) · LW · GW

Seems that we mostly agree here, the major disagreement is about terminology.

I disagree with the overly wide use of "shit-testing" to include... maybe not testing in general, but still more than the narrow meaning in the PUA literature... which is approximately "purposefully annoying your partner in order to find out whether the partner is good at keeping their boundaries".

I agree that if there are incompatibilities between people, it's better to find them sooner rather than later. And that sometimes you need to search for the possible incompatibilities actively.

Comment by viliam on Losing the forest for the trees with grid drawings · 2020-09-24T22:38:33.246Z · score: 6 (4 votes) · LW · GW

Ironically, Drawing on the Right Side of the Brain recommends as an exercise to draw the picture upside-down, so that the "forest" does not distract you from getting the details right.

(But it is not assumed that the resulting picture will be beautiful, and there is also no grid that would introduce artificial line bends.)

Perhaps a metaphor could be made for that, too, that sometimes focusing on the big picture prevents you from noticing that you got the details wrong, which can also ruin the outcome.

Comment by viliam on Shittests are actually good · 2020-09-24T22:28:24.901Z · score: 10 (6 votes) · LW · GW

The phrase "If you can't handle me at my worst, you don't deserve me at my best" sometimes captures the idea.

I would assume that your current "worst" is the best predictor of your future behavior. And frankly, shouldn't I? I think there is a consensus among the people who use the word that shit-tests never end.

I am ambivalent about the whole idea of shit-testing. On one hand, it makes sense to test your partner's reaction to your bad behavior. Because, if you stay together for a long time, sooner or later some bad behavior will happen; life will throw a lot of stress on you, and you will snap. You need the kind of partner who can survive it gracefully. If it is someone who would collapse, or go nuclear, that is a time-bomb; better avoid that.

On the other hand, if someone occasionally behaves badly even when everything goes fine, it doesn't exactly give me confidence that the person will try their best when things get hard. When a life-or-death situation happens (and by the same logic, sooner or later it will), would you want your partner to choose exactly that moment for their next shit-test? And what makes you so sure they wouldn't, if they already do it habitually?

So... shit-testing allows you to select a better partner... but at the same time, "being the kind of person who shit-tests their partner" makes you a worse partner. (Which is kinda your partner's problem, not yours, but still...)

It's like those "if you really love me, you will do X for me" situations, when someone demands an arbitrary sacrifice X as a proof of love. If you are too focused on signaling your love, you may miss the larger picture, which is that a person who loves you would not ask you to make arbitrary sacrifices. So you are setting yourself up for a one-sided relationship; and the right answer would be to walk away, and find someone else who is willing to reciprocate your love. (Even if you believe that sufficiently strong one-sided love may eventually elicit the same feelings in the other party, it still makes more sense to choose someone who will not abuse you before that happens, assuming it happens at all.)

How to get out of this dilemma? Arbitrarily testing your partner is bad, leaving them untested is dangerous...

Perhaps, if you could observe your partner in tests that life throws at them naturally. That would require spending a lot of time together. If you want to speed it up, you could choose a situation that increases stress levels naturally, for some good reason. For example, spend a vacation in the mountains together. Or something else that gets you tired and uncomfortable, but for reasons better than one person choosing to annoy the other.

(I wonder if shit-testing was as frequent in the past, or whether it is an adaptation to the modern dating market where you have to test your partners quickly.)

Comment by viliam on The rationalist community's location problem · 2020-09-24T17:33:06.219Z · score: 4 (2 votes) · LW · GW

Rationalists may be less likely than average to want kids, but that doesn't mean none of us are having them.

Many people don't want to have kids in their 20s, and change their mind later. Ten years later, I could imagine that many rationalists will feel ambivalent, and then something could start a chain reaction of having kids.

Actually, I think it would be super cool to have a generation of kids of approximately the same age, whose parents are rationalists living next to each other and can coordinate on school choice / homeschooling / providing extra lessons in free time.

Comment by viliam on I have discovered a new kind of unemployment. · 2020-09-24T17:12:59.697Z · score: 2 (1 votes) · LW · GW

Firms impose higher effort demands on workers; workers have to complete more tasks (for a higher wage) or be fired.

This sounds correct, but I thought it was specific to IT. I mean the popular trends of being a "full-stack developer" and doing "dev-ops", which in my opinion both mean: -- Why should I hire two or three specialists, when one person could do everything alone? And if the project size requires hiring two or three people anyway, at least this will make them more replaceable, and I can immediately move one of them to another project when the worst crisis is over. And if being unable to maintain top-level expertise in too many things at the same time makes them feel like impostors, at least it will keep them humble.

Do you suggest it also happens in other industries? Your article has "technology" in its title, but seems to talk about the economy in general. Could you perhaps provide more specific information about other industries? Unfortunately, I am not qualified to comment on the second part of your article.

I will admit that the claim American workers have become Stakhanovites is a bold one. It's the sort of claim that immediately raises all sorts of objections and questions, like: how is that even possible in a capitalist economy, and why hasn't it also happened outside of the U.S?

I don't have enough data, but is it possible that this is more about IT (in other countries, too) than about Americans? Because the hypothesis "nerds suck at negotiation, and are easily brainwashed" would explain a few things we see. I mean, even comrade Stakhanov didn't spend his free time improving his Github portfolio.

Comment by viliam on AllAmericanBreakfast's Shortform · 2020-09-24T15:31:15.963Z · score: 6 (3 votes) · LW · GW

There are things like "lying for a good cause", which is a textbook example of what will go horribly wrong because you almost certainly underestimate the second-order effects. Like the "do not wear face masks, they are useless" expert advice for COVID-19, which was a "clever" dark-arts move aimed to prevent people from buying up necessary medical supplies. A few months later, hundreds of thousands have died (also) thanks to this advice.

(It would probably be useful to compile a list of lying for a good cause gone wrong, just to drive home this point.)

Thinking about the historical record of people promoting the use of dark arts within the rationalist community, consider Intentional Insights. It turned out that the organization was also using the dark arts against the rationalist community itself. (There is a more general lesson here: whenever a fan of dark arts tries to make you see the wisdom of their ways, you should assume that at this very moment they are probably already using the same techniques on you. Why wouldn't they, given their expressed belief that this is the right thing to do?)

The general problem with lying is that people are bad at keeping multiple independent models of the world in their brains. The easiest, instinctive way to convince others about something is to start believing it yourself. Today you decide that X is a strategic lie necessary for achieving goal Y, and tomorrow you realize that actually X is more correct than you originally assumed (this is how self-deception feels from inside). This is in conflict with our goal to understand the world better. Also, how would you strategically lie as a group? Post it openly online: "Hey, we are going to spread the lie X for instrumental reasons, don't tell anyone!" :)

Then there are things like "using techniques-orthogonal-to-truth to promote true things". Here I am quite guilty myself, because I have long ago advocated turning the Sequences into a book, reasoning, among other things, that for many people, a book is inherently higher-status than a website. Obviously, converting a website to a book doesn't increase its truth value. This comes with smaller risks, such as getting high on your own supply (convincing ourselves that articles in the book are inherently more valuable than those that didn't make it for whatever reason, e.g. being written after the book was published), or wasting too many resources on things that are not our goal.

But at least, in this category, one can openly and correctly describe their beliefs and goals.

Metaphorically, reason is traditionally associated with vision/light (e.g. "enlightenment"), ignorance and deception with blindness/darkness. The "dark side" also references Star Wars, which this nerdy audience is familiar with. So, if the use of the term itself is an example of dark arts (which I suppose it is), at least it is the type where I can openly explain how it works and why we do it, without ruining its effect.

But does it make us update too far against the use of deception? Uhm, I don't know what is the optimal amount of deception. Unlike Kant, I don't believe it's literally zero. I also believe that people err on the side of lying more than is optimal, so a nudge in the opposite direction is on average an improvement, but I don't have a proof for this.

Comment by viliam on Comparative advantage and when to blow up your island · 2020-09-23T20:58:53.212Z · score: 2 (1 votes) · LW · GW

In this example it is assumed that the entire island is literally owned by one person. So, if you wish, this person may be a metaphor for a strong centralized government.

Destroying your production capacity is a strategic mistake, and exposes you to blackmail in the future. A smart owner (or a smart centralized government) would not let that happen. If you want to give me free bananas, okay, I will take them; but I will still keep my banana plantation ready. That way, I get free bananas today and keep my ability to produce bananas tomorrow.

(And the other side of the same coin is that a smart owner -- or centralized government -- will try to expand their future production capacities. For example, if today it is for me more profitable to grow bananas than to write computer software, I might strategically decide to write software anyway, at least part-time, because two or three years later my software-writing skills are likely to increase dramatically, while my banana-growing skills would probably remain the same. So the comparative advantage of tomorrow may reward me for writing software, but in order to get there, I need to accept some disadvantage today.)

That said, another question is whether subsidies are the best way to keep your production capacity, and what amount of subsidies is optimal. (Of course, the farmers will always say "more is better", for obvious reasons.) If we discuss real-life agriculture, I would even challenge which types of products we should subsidize: if the goal is to prevent starvation, we probably do not need to protect our meat production -- if the other countries keep giving us cheap meat, let them; and if they suddenly stop doing that (in the unlikely case that all meat-subsidizing countries coordinate to do so in the same year), we may have a year or two of a mostly vegetarian diet, but no one is going to die.

In other words, although some protection of production capacity is strategically important, it doesn't necessarily follow that the farming subsidies, as we know them now, are anywhere near the optimal solution. (Specifically, I think that subsidies of meat production are completely unnecessary -- it is unlikely that all other countries would stop subsidizing meat in the same year, and in the unlikely case that happened, we would survive anyway.)

Comment by viliam on Comparative advantage and when to blow up your island · 2020-09-23T13:25:23.689Z · score: 5 (3 votes) · LW · GW

In general, yes, but there can be other factors that reduce the possibility to interact with many possible partners.

Geographical local monopolies -- there are thousands of islands in the ocean, but most of them are too far from your home. You could replace your nearest trade partner with someone further away, at an extra cost; and if your nearest trade partner pushes you too far, you will do it. But within that interval, the negotiation is important.

Upfront transaction costs -- even if the trade partners are equivalent, but it is costly to start interacting with another one (you have to do a complicated background check, you need to adapt to their specifics), this again creates an extra cost of switching, and an interval within which it is about negotiation.

Both can apply at the same time.

There is also a gray line between "cartel" and "people doing the same thing, acting selfishly, but updating on their competitors' past actions". To make it simple, imagine that the fair price for a ton of bananas is $100. (Fair price = what the market balance would be if anyone could trade with anyone, in a world with zero transaction costs.) But there is an $8 cost for trading with someone who is not your geographically nearest trade partner. In this situation, the banana buyers can individually precommit to buy at e.g. $95, because they know that you will prefer to sell to them for $95 rather than sell to someone else for $100, pay $8 for transit, and only keep $92.

Now imagine the banana buyers have a website where they publicly share their experience. (This is perfectly legal, right?) And there is this highly upvoted article called: "Don't buy bananas for $100, you can get them for $95 using game theory". It becomes common knowledge that the banana sellers suck at negotiation (they don't have an analogous website), and that most banana buyers only pay $95. -- Armed with this knowledge, you can now precommit to only pay $90 for a ton of bananas next year, because now it is known that the best price your neighbor can get from anyone else is $95.

How many iterations can happen depends on the exact shape of diminishing returns. For example, even if I was willing to pay $100 for my first ton of bananas, but using my power of precommitment I already got it from my neighbor for $85, I am probably not willing to pay $100 for the second ton of bananas. Suppose the second ton of bananas is only worth $90 to me. But to obtain it from someone who is not my neighbor, I would have to pay $85 + $8, which is more. So I will not defect against the new equilibrium. -- Here I act almost like a cartel member (my first ton of bananas is worth $100 to me, and in the end I only buy one ton, and yet I precommitted to not pay more than $85), but I am still only following my selfish incentives, and at no point am I sacrificing a potential extra profit in favor of keeping the balance.
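The arithmetic above can be made explicit in a small sketch. The dollar amounts are the illustrative ones from the example; the decision rule (a seller accepts any offer that beats their best alternative minus the transit cost of reaching it) is my reading of the setup, not a general model:

```python
def lowest_accepted_offer(best_alternative, transit_cost):
    """A seller accepts any offer above their best alternative
    price minus the cost of reaching that alternative buyer."""
    return best_alternative - transit_cost

# Year 1: common knowledge says bananas sell for $100 elsewhere,
# and reaching a non-neighbor buyer costs the seller $8 in transit.
floor1 = lowest_accepted_offer(100, 8)  # selling elsewhere nets only $92
offer1 = 95                             # $95 beats $92, so the seller accepts

# Year 2: the $95 deals have become common knowledge, so the
# seller's best alternative drops to $95, and the floor drops too.
floor2 = lowest_accepted_offer(95, 8)   # selling elsewhere now nets $87
offer2 = 90                             # $90 beats $87, accepted again

print(floor1, floor2)  # 92 87
```

Each round of publicly shared precommitments lowers the common-knowledge alternative, which lowers the next round's floor; in reality the slide would stop at the seller's own production cost or when sellers coordinate in return.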

I feel like I am reinventing here the Marxist class conflict, in a more general form, with emphasis on sharing negotiation tactics. The essence is that one side shares their negotiation tricks, which work individually even if no one else is using them (this is what makes it not a cartel), but quickly become a new standard if shared; and the new standard -- and the common knowledge thereof -- becomes a more powerful leverage (this is what makes it cartel-like in effect) in the following iteration of negotiation. The power to say: "Yes, you noticed that I am using this dirty trick against you, but we both know that all my competitors use exactly the same trick, so you cannot punish me by switching to another. And it is perfectly legal, because we coordinated this publicly. Your side as a whole sucks at negotiation, my side successfully turned it into a global leverage, and you as an individual face an uphill battle here."

Comment by viliam on Viliam's Shortform · 2020-09-20T20:05:55.811Z · score: 8 (5 votes) · LW · GW

1) There was this famous marshmallow experiment, where the kids had an option to eat one marshmallow (physically present on the table) right now, or two of them later, if they waited for 15 minutes. The scientists found out that the kids who waited for the two marshmallows were later more successful in life. The standard conclusion was that if you want to live well, you should learn some strategy to delay gratification.

(A less known result is that the optimal strategy to get two marshmallows was to stop thinking about marshmallows at all. Kids who focused on how awesome it would be to get two marshmallows after resisting the temptation, were less successful at actually resisting the temptation compared to the kids who distracted themselves in order to forget about the marshmallows -- the one that was there and the hypothetical two in the future -- completely, e.g. they just closed their eyes and took a nap. Ironically, when someone gives you a lecture about the marshmallow experiment, closing your eyes and taking a nap is almost certainly not what they want you to do.)

After the original experiment, some people challenged the naive interpretation. They pointed out that whether delaying gratification actually improves your life depends on your environment. Specifically, if someone tells you that giving up a marshmallow now will let you have two in the future... how much should you trust their word? Maybe your experience is that after trusting someone and giving up the marshmallow in front of you, you later get... a reputation of being an easy mark. In such a case, grabbing the marshmallow and ignoring the talk is the right move. -- And the correlation the scientists found? Yeah, sure, people who can delay gratification and happen to live in an environment that rewards such behavior will succeed in life more than people who live in an environment that punishes trust and long-term thinking, duh.

Later experiments showed that when the experimenter establishes themselves as an untrustworthy person before the experiment, fewer kids resist taking the marshmallow. (Duh. But the point is that their previous lives outside the experiment have also shaped their expectations about trust.) The lesson is that our adaptation is more complex than was originally thought: the ability to delay gratification depends on the nature of the environment we find ourselves in. For reasons that make sense, from the evolutionary perspective.

2) Readers of Less Wrong often report having problems with procrastination. Also, many provide examples of realizing at a young age, on a deep level, that adults are unreliable and institutions are incompetent.

I wonder if there might be a connection here. Something like: realizing the profound abyss between how our civilization is, and how it could be, is a superstimulus that switches your brain permanently into "we are doomed, eat all your marshmallows now" mode.

Comment by viliam on Against Victimhood · 2020-09-20T15:29:43.340Z · score: 13 (5 votes) · LW · GW

A systematically oppressed group can still be wrong. Being oppressed gives you an experience other people don't have, but doesn't give you epistemic superpowers. You can still derive wrong conclusions, despite having access to special data.

Anecdote time: When I was a kid, I was bullied by someone who did lots of sport. As a result, I developed an unconscious aversion to sport. (Because I didn't want to be like him, and I didn't want to participate in things that reminded me of him.) Obviously, this only further reduced the quality of my life. Years later, I found some great friends who also did lots of sport. Soon, the aversion disappeared. My unconscious decided it was actually okay to be like them.

Maybe I am generalizing my experience too much, but looking at some groups, it seems like they follow the same algorithm (sometimes minus the happy ending, so far). At some moment in history, your group happens to be at the bottom of the social ladder. Others -- the bad guys -- have the money, the education, the institutions, etc. Your group starts associating money, education, and institutions with the bad things that were done to them. The difference is that when this happens on a group level, the belief gets reinforced culturally, because your friends and family all had the same experience.

A few decades or centuries later, your group also gets access to education, money, and institutions. (And I am not necessarily talking about equal access here; just about some access, as opposed to your ancestors who had none.) But now everyone knows that these are things your people traditionally don't have, and whoever aspires to get them is perceived as a traitor, as someone who wants to join the bad guys. You cannot discuss rationally whether getting more education, more money, and more of your people in institutions is actually a good thing for your group, because it increases your individual and collective power. The group as a whole is flinching away from the painful experience in the collective memory, and the individuals who go against the grain get punished.

(An example would be black people policing each other against "acting white", but a similar mechanism applies in situations where one group of white people was historically oppressed by another group of white people, because of different language or religion or whatever.)

But of course, there may also be legitimate reasons to distrust strategies that work for other people. For example, education means acquiring debt in return for higher expected income in the future. If you know that the "higher income" is not going to happen, e.g. because of racism, then education is not as profitable for you as it would be for the majority.

Comment by viliam on Viliam's Shortform · 2020-09-19T21:47:01.365Z · score: 14 (4 votes) · LW · GW

Moving a comment away from the article it was written under, because frankly it is mostly irrelevant, but I put too much work into it to just delete it.

But occasionally I hear: who are you to give life advice, your own life is so perfect! This sounds strange at first. If you think I’ve got life figured out, wouldn’t you want my advice?

How much your life is determined by your actions, and how much by forces beyond your control, is an empirical question. You seem to believe it's mostly your actions. I am not trying to disagree here (I honestly don't know), just saying that people may legitimately have either model, or a mix thereof.

If your model is "your life is mostly determined by your actions", then of course it makes sense to take advice from people who seem to have it best, because those are the ones who probably made the best choices, and can teach you how to make them, too.

If your model is "your life is mostly determined by forces beyond your control", then the people who have it best are simply the lottery winners. They can teach you that you should buy a ticket (which you already know has 99+% probability of not winning), plus a few irrelevant things they did which didn't have any actual impact on winning.

The mixed model "your life is partially determined by your actions, and partially by forces beyond your control" is more tricky. On one hand, it makes sense to focus on the part that you can change, because that's where your effort will actually improve things. On the other hand, it is hard to say whether people who have better outcomes than you, have achieved it by superior strategy or superior luck.

Naively, a combination of superior strategy and superior luck should bring the best outcomes, and you should still learn the superior strategy from the winners, but you should not expect to get the same returns. Like, if someone wins a lottery, and then lives frugally and puts all their savings in index funds, they will end up pretty rich. (Richer than people who won the lottery and then wasted the money.) It makes sense to live frugally and put your savings in index funds, even if you didn't win the lottery. You should expect to end up rich, although not as rich as the person who won the lottery first. So, on one hand, follow the advice of the "winners at life", but on the other hand, don't blame yourself (or others) for not getting the same results; with average luck you should expect some reversion to the mean.

But sometimes the strategy and luck are not independent. The person with superior luck wins the lottery, but the person with superior strategy who optimizes for the expected return would never buy the ticket! Generally, the person with superior luck can win at life because of doing risky actions (and getting lucky) that the person with superior strategy would avoid in favor of doing something more conservative.

So the steelman of the objection in the mixed model would be something like: "Your specific outcome seems to involve a lot of luck, which makes it difficult to predict what would be the outcome of someone using the same strategy with average luck. I would rather learn strategy from successful people who had average luck."

A toy model to illustrate my intuition about the relationship between strategy and luck:

Imagine that there are four switches called A, B, C, D, and you can put each of them into the position "on" or "off". After you are done, switches A, B, C, D in the "on" position give you +1 point with probability 20%, 40%, 60%, 80% respectively, and -1 point otherwise (i.e. with probability 80%, 60%, 40%, 20%). A switch in the "off" position always gives you 0 points. (The points are proportional to utility.)

Also, let's assume that most people in this universe are risk-averse, and only set D to "on" and the remaining three switches to "off".

What happens in this universe?

The entire genre of "let's find the most successful people and analyze their strategy" will insist that the right strategy is to turn all four switches to "on". Indeed, there is no other way to score +4 points.

The self-help genre is right about turning on switch C, but wrong about switches A and B. Neither the conservative people nor the contrarians get the answer right.

The optimal strategy -- setting A and B to "off", C and D to "on" -- provides an expected result of +0.8 points. The traditional D-only strategy provides an expected result of +0.6 points, which is not too different. On the other hand, the optimal strategy makes it impossible to get the best outcome; with the best luck you score +2 points, which is quite different from the +4 points advertised by the self-help genre. This means the optimal strategy will probably fail to impress the conservative people, and the contrarians will just laugh at it.
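This arithmetic is easy to verify: a switch in the "on" position with success probability p contributes 2p - 1 expected points. A minimal sketch that enumerates all sixteen strategies (the variable names are mine, not part of the original model):

```python
from itertools import product

# Toy model: a switch in the "on" position gives +1 point with the listed
# probability, otherwise -1; an "off" switch always gives 0.
p = {"A": 0.2, "B": 0.4, "C": 0.6, "D": 0.8}

def expected_score(on_switches):
    # E[score] = (+1) * p + (-1) * (1 - p) = 2p - 1 per "on" switch
    return sum(2 * p[s] - 1 for s in on_switches)

# Enumerate all 16 strategies (every subset of switches turned "on").
strategies = {}
for bits in product([False, True], repeat=4):
    on = [s for s, b in zip("ABCD", bits) if b]
    strategies["".join(on) or "none"] = expected_score(on)

best = max(strategies, key=strategies.get)
print(best, round(strategies[best], 2))   # CD 0.8
print(round(strategies["D"], 2))          # risk-averse D-only strategy: 0.6
print(round(strategies["ABCD"], 2))       # all-on "self-help" strategy: 0.0
```

Note that the all-on strategy, the only one that can ever reach +4, has an expected value of exactly zero.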

It will probably be quite difficult to distinguish between switches B and C. If most people you know personally set both of them to "off", and the people you know from self-help literature set both of them to "on" and got lucky at both, you have few data points to compare; the difference between 40% and 60% may not be large enough to empirically determine that one of them is a net harm and the other is a net benefit.

(Of course, whatever your beliefs are, it is possible to build a model where acting on them is optimal, so this doesn't prove much. It just illustrates why I believe that it is possible to achieve outcomes better than usual, and also that it is a bad idea to follow the people with extremely good outcomes, even if they are right about some of the things most people are wrong about. I believe that in reality, the impact of your actions is much greater than in this toy model, but the same caveats still apply.)

Comment by viliam on Against Victimhood · 2020-09-19T21:42:24.448Z · score: 10 (7 votes) · LW · GW

Yep, I was just nitpicking about literally two lines from the entire article. Guess they triggered me somehow.

Humbled by your niceness when pointing this out, I moved the comment away. Thank you!

Comment by viliam on otto.barten's Shortform · 2020-09-19T20:03:33.833Z · score: 0 (2 votes) · LW · GW

Technically, tiling the entire universe with paperclips or tiny smiling faces would probably count as modern art...

Comment by viliam on Against Victimhood · 2020-09-18T22:44:53.609Z · score: 20 (10 votes) · LW · GW

EDIT: Moved this comment to my shortform, because it was nitpicking mostly irrelevant to the article. Sorry about that.

Comment by viliam on Against Victimhood · 2020-09-18T21:55:08.231Z · score: 6 (3 votes) · LW · GW
Quite interesting, how all these different worldviews converge on that one :)

Maybe a religion that wants to appeal to people with a modern sense of justice (i.e. those not satisfied with "the ingroup goes to heaven, the outgroup goes to hell, exactly as you would wish, right?") has no better option than to take the just-world hypothesis and dress it up in religious terms.

Comment by viliam on What are examples of simpler universes that have been described in order to explain a concept from our more complex universe? · 2020-09-18T21:38:32.875Z · score: 2 (1 votes) · LW · GW

Relevant comment here:

I think Wolfram's "theory" is complete gibberish. Reading through "some relativistic and gravitational properties of the Wolfram model" I haven't encountered a single claim that was simultaneously novel, correct and non-trivial...
Comment by viliam on What Does "Signalling" Mean? · 2020-09-16T23:13:18.830Z · score: 17 (7 votes) · LW · GW
For example, a bird performing an impressive mating display signals that it is healthy and has good genes.
But we already have a term for signalling desirable properties about yourself: virtue signalling!

I don't understand the objection.

Virtue signalling is a subset of signalling. Specifically, it is signalling of moral virtues.

Therefore, a bird signalling health and good genes is not virtue signalling (but it is signalling in general). Because health and good genes are usually not considered to be moral virtues.

In some of these cases, mere assertion goes a long way. [...] In other cases, mere assertion doesn't work.
I'll charitably assume that he meant both cases to be types of signalling.

I think Scott got this right, but you misunderstood it.

X is a signal of Y if seeing X makes Y more likely. In some cases, mere assertions do that, in some cases, they don't.

For example, saying "I read Less Wrong" is a signal of reading Less Wrong, because people who read Less Wrong are more likely to say that they read Less Wrong. However, saying "I am not a criminal" is not a signal of not being a criminal, because criminals also say it a lot.

It's not about what the words mean, it's about what they correlate with. Sometimes the act of speaking the words correlates with their literal meaning (not lying, or lying rarely). Sometimes the act of speaking the words has almost zero correlation with their literal meaning (lying almost always).
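The distinction can be put in numbers. A minimal Bayesian sketch, where all the probabilities (0.9, 0.01, and so on) are made-up illustrative assumptions, not data:

```python
def posterior(prior, p_say_given_true, p_say_given_false):
    # Bayes' rule: P(Y | X), where X = "the words were said" and Y = the claim.
    # X is a signal of Y exactly when this posterior exceeds the prior.
    evidence = p_say_given_true * prior + p_say_given_false * (1 - prior)
    return p_say_given_true * prior / evidence

# "I read Less Wrong": readers say it often, non-readers almost never.
print(posterior(0.05, 0.9, 0.01))   # roughly 0.83, far above the 0.05 prior
# "I am not a criminal": nearly everyone says it, criminal or not.
print(posterior(0.9, 0.99, 0.99))   # roughly 0.9, i.e. the prior; no information
```

When the words are said at the same rate regardless of the truth, the posterior equals the prior, which is exactly the "almost zero correlation" case.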

.

I agree with your third objection, that Less Wrong uses signalling in the narrower sense (about the agent), because that is how Robin Hanson typically uses it, and most of us were probably introduced to the concept by him.

(I am not sure whether Robin never used signalling in the wider sense, or he did but we just didn't notice.)

Comment by viliam on Low hanging fruits (LWCW 2020) · 2020-09-16T00:05:27.670Z · score: 4 (2 votes) · LW · GW

As an alternative to One Note, I suggest trying Cherrytree. It is open-source; works on Windows, Linux, Mac.

Comment by viliam on Notes on good judgement and how to develop it (80,000 Hours) · 2020-09-15T14:14:30.968Z · score: 2 (1 votes) · LW · GW

No.

Comment by viliam on 4thWayWastrel's Shortform · 2020-09-15T11:42:25.332Z · score: 3 (2 votes) · LW · GW

Given the power of mindkilling, the result could easily be an army of ex-altruistic ex-rationalists in politics. (Which wouldn't necessarily be worse than the current state of politics, it just wouldn't be the expected improvement.)

It's not like I have a better plan, though. I was thinking along the lines of "suppose that a certain fraction of politicians will be responsible, and will seek advice among the experts... look at what algorithm they use to pick their advisors... and position yourself so that they pick you".

But I suspect the algorithm would be something like "choose the most visible people already working in the domain you want to improve". In which case my advice reduces to "the 'life hack' to improve domain X is to spend your life working in domain X and become successful and famous", which sounds like doing things the hard way and being sufficiently lucky. (Maybe that is the optimal answer, dunno.)

The only point of intervention I see here is that we could notice the people who are doing the right thing, and try making them more visible, e.g. by writing articles about how they are doing the right thing. Which might slightly increase their chances of being picked as an advisor, compared to a person who is doing the wrong thing, but is good at climbing the hierarchy, so from the outside seems like an equally qualified expert. In other words, instead of trying to place rationalists into domain X, just find people already in domain X who are relatively more rational than average, and try giving them more light.

Another potentially interesting project would be to create and publish a compilation of "rational policies on everything", and allow politicians to steal the ideas from the book. Let your memes travel farther than you can. The question is whether we could even compile such a book. Because it's not just about technical answers, but also choosing your values. Often the choice is not between a policy that is "good" or "bad", but "better for X and worse for Y" and "better for Y and worse for X". Even the obviously bad choices usually have someone who derives some small benefit from the status quo.

Comment by viliam on [Link] Five Years and One Week of Less Wrong · 2020-09-14T23:02:10.910Z · score: 12 (6 votes) · LW · GW

My experience was that I had already thought about many things that Eliezer described in the Sequences, except that he took the thought a bit further, and then connected it with some other thoughts. Also it felt awesome to have a social proof that I was not the only person in the world thinking about "weird" things.

I assume if one never thought much in that direction, then the Sequences would simply be too much, too weird.

Most of the ideas in the Sequences are available elsewhere, too. You could probably get 90% of the information by reading five carefully selected books. I still appreciate having them neatly collected, and connected in what feels like a coherent whole.

Maybe just as important are the things that are not in the Sequences. There are many books out there that mix good stuff with bad stuff. I like the absence of applause lights, mysterious explanations, arguments by definition, etc. There are many smart books and smart people, who just can't resist also doing something incredibly insane (by our standards). I already had a bad feeling about that, but couldn't articulate it.

Comment by viliam on Against boots theory · 2020-09-14T22:31:30.926Z · score: 6 (3 votes) · LW · GW

According to your classification, I think I almost completely believe in level 1, a little bit in level 2, and not at all in level 3 unless we take an extremely motte interpretation.

Level 1: A person with more money can use all strategies available to a person with less money, plus a few extra strategies. Some of those extra strategies will be better (and the rest can be ignored). You can buy the more expensive boots if that actually saves money in the long term (and you can ignore them if it doesn't).

If you have a place you can lock, fewer of your things will get stolen. If you have electricity and a fridge in your home, you can cook your own meals and store them in the fridge, which is cheaper and healthier than eating fast food. You can save money by buying things in bulk. (If the delivery is cheaper than the savings, you can save money and time by buying things in bulk and having them delivered to your home.) You can save money and time by researching things online, assuming you have an internet connection and a computer. You can save traveling time and money by living in a better location. If you have financial slack, you can self-insure. If you have an option to get X% profit on your investment for a constant amount of work, the more money you can invest, the greater your net profit. (With little money, the X% may be less than the transaction costs even for quite generous values of X.) If you can afford to take a year of vacation from your work, you can learn a new skill or start your own business. If taking care of your finances for less than 8 hours a day generates enough income to cover your expenses, you never need to have a full-time job again. And if the generated income covers all your expenses plus the salary of a person who will take care of your finances, then you never need to have any job, ever.

Exceptions: Sometimes you have extra expenses that come along with your position on the social ladder. For example, a middle-class employee probably needs to buy nicer clothes to keep their job than a working-class employee. A multi-millionaire needs to spend money to protect themselves and their family from kidnapping.

Level 2: I believe that rich people spend less money on some things, but more money on most things. For every $1 saved on buying 10-years boots instead of a series of 1-year boots, there is probably $100 spent in restaurants. (But it feels good to give poor people lectures on buying boots rationally, before enjoying your dinner and wine.)

Level 3: No, a homeless guy definitely cannot become a millionaire by choosing his boots strategically. No matter how much money you save, you will never catch up with someone who saves more than you simply by having 10× greater income and spending less than 90% of it.

The motte version applies when you compare yourself to someone who is almost exactly at your level. Like, by being strategic, you can live on a higher economic level than someone who has 1.5× your income. (Also, if a person is a complete idiot, it is possible to ruin yourself regardless of your initial wealth. So yes, you can also live on a higher level than the few complete idiots who started from much better positions.)

Comment by viliam on Are there non-AI projects focused on defeating Moloch globally? · 2020-09-14T21:27:59.382Z · score: 3 (2 votes) · LW · GW

Skimmed the wiki, watched the first 15 minutes of the video, still have no idea whether there is anything specific. So far it seems to me like a group of people who are trying to improve the world by talking to each other about how important it is to improve the world.

You seem to know something about it, could you please post three specific examples? (I mean, examples other than making a video or a web page about how Game B is an important thing.)

Comment by viliam on Are there non-AI projects focused on defeating Moloch globally? · 2020-09-14T21:09:02.834Z · score: 5 (3 votes) · LW · GW

Maybe, let's generalize this a bit... let's call these types of solutions:

Singleton solutions -- there will be no coordination problems if everything is ruled by one royal dynasty / one political party / one recursively self-improving artificial intelligence.

Typical problems:

Requires absolute power; not sure if we can get there without sacrificing everything to Moloch during the wars between competing royal dynasties / political systems / artificial intelligences.

Does not answer how the singleton makes decisions internally: royal succession problems / infighting in the political party / interaction between individual modules of the AI.

Fragility of outcome; there is a risk of huge disutility if we happen to get an insane king / a political party with an inhumane ideology / an unfriendly artificial intelligence.

Primitivism solutions -- all problems will be simple if we make our lifestyle simple.

Typical problems:

Avoiding Moloch is an instrumental goal; the terminal goal is to promote human well-being. But in primitive societies people starve, get sick, most of their kids die, etc.

Doesn't work in the long term; even if you reduced the entire planet to the stone age, there would be a competition over who gets out of the stone age first.

In a primitive society, some formerly easy coordination problems may become harder to solve, when you don't have internet or phones.

Comment by viliam on Have you tried hiIQpro.com's cognitive training or coaching? · 2020-09-14T20:26:29.289Z · score: 3 (2 votes) · LW · GW
Gee, if I do the training twice, can I get 20 - 40 points?

Careful, kids, this is how you get the intelligence explosion! Especially if the 20 extra points allow you to complete the later trainings faster...

Comment by viliam on Notes on good judgement and how to develop it (80,000 Hours) · 2020-09-14T20:17:11.940Z · score: 4 (2 votes) · LW · GW

For comparison, this lecture was given at a European Mensa meetup in Prague six years ago.

Short version: it "disproves" the theory of relativity by proposing the existence of Global Aether. The theory "avoids quantitative details to keep the difficulty level as low as possible". It also debunks quantum physics, because quantum physics literally accepts magic, defined as "forces from virtual fields with mathematical properties and without material or tangible cause". Et cetera.

The really bad part was when I later tried to explain to Mensa members why the lecture was obviously nonsensical, because we already have a lot of experimental evidence in favor of relativistic effects (basically sharing this link and providing a short summary, such as "the GPS in your smartphone must calculate using relativistic equations, otherwise it would give wrong results"), so any alternative theory would need to explain these effects too, not just ignore them and insist on flat space-time.

The consensus of Mensa members was: "He is an internationally respected author who published a lot of books" (translation: He has a website with a list of a dozen self-published books), plus the usual arguments that "people deserve to hear alternative opinions" and "according to Popper, scientific theories cannot be proved anyway". With the recommendation that if I misunderstood something, I should ask the author, instead of wasting everyone's time.

...so, one of the reasons I don't go to Mensa meetups, despite technically being a (former) member.

I believe there is a huge difference between "Copenhagen vs MWI" and "relativity vs aether" controversies.

Comment by viliam on Notes on good judgement and how to develop it (80,000 Hours) · 2020-09-14T18:28:51.189Z · score: 2 (1 votes) · LW · GW

Thanks, I appreciate you taking time to respond to my objections. Will read the book.

I agree that Mensa selects for "not having better ways to spend your time". I think you probably overestimate the impact of retraining -- there are many people who keep taking the test every year, and keep failing every year. (Perhaps retraining properly already requires some intelligence threshold?) I agree that Mensa isn't a high enough bar -- it's like a slightly above-average university, except that you can meet a dozen people in the local Mensa and hundreds at a local university.

Comment by viliam on 4thWayWastrel's Shortform · 2020-09-14T01:59:51.683Z · score: 3 (2 votes) · LW · GW
Ha, I never would have thought of just messaging a bunch of politicians to sate my curiosity.

It also took me a moment to realize that this option exists. But this is a small country, and there are a few politicians I happen to know personally (especially if we include the municipal level). And the worst case is they would ignore my question, no harm done.

Actually, I once invited a (government-level) politician to a local Less Wrong meetup, and they accepted. This is probably easier than it seems. They are humans, too. Just be a polite person, give an interesting proposal, don't choose the one at the top but rather someone who is like 100th from the top, and don't do it at their most busy moment.

who's on your list and through what process would you bootstrap to organisation?

The case of "becoming a president overnight" is too magical. The less magical option would be becoming an average parliament-level politician, which means having a specific agenda (in my case, that would probably be education), and having time to prepare. I would try to meet people who seem to be domain experts in my country; the more, the better. Just talk to them, try to extract their experience and opinion, and keep contact with those who seem rational and friendly.

Then, I would probably contact local rationalists I trust, simply because I would expect local knowledge to be useful. I would share with them my data, and ask them to help draw conclusions. (Even better if they would also independently collect data. Because more data is better, and also to cover my blind spots.) I would also ask foreign rationalists who seem to have some domain knowledge in education. Ironically, I probably wouldn't include Eliezer, because he doesn't seem to have domain-specific knowledge here. (Maybe I am wrong.)

But I assume that having a good plan would actually be just a beginning. I expect that the deep state would protect the status quo and throw all kinds of obstacles at me. So I would need domain experts at overcoming these obstacles. Such as lawyers, actually quite specialized ones, and I have no idea where I would find them. (I assume many of the obstacles would have the form of someone pointing out that my idea X is incompatible with some existing law Y, providing zero guidance on how to solve this problem, because their preferred outcome is that I give up on X.) Allies within the bureaucracy, who would notify me about things their colleagues are trying to hide from my attention, for example to set me up for an embarrassing failure. And this is something I couldn't bring from outside.

All things considered, even if I had my position perfectly guaranteed for 4 years, I would expect most of my efforts to fail, simply because people who spent decades working in the system (and benefit from the existing situation) would be able to wear me down by passive resistance and sabotage, and it would not be possible to replace them with new loyal people, even if I had them, because the new people couldn't sufficiently quickly gain full understanding of the existing system, and things would start falling apart. I have seen people more experienced than me fail exactly this way.

So at the end, I guess, I would need a rationalist with experience in politics. Do we have any? (And no, I wouldn't call Dominic Cummings, because to me he seems like an expert at making things worse.)

Comment by viliam on Notes on good judgement and how to develop it (80,000 Hours) · 2020-09-14T01:10:15.238Z · score: 5 (3 votes) · LW · GW
one of the most g-loaded things in an IQ test is how large your vocabulary is

Is it? I am quite surprised, because that sounds like one of those culturally dependent things that were removed from IQ tests, because they decreased the measured IQ of minorities for not being sufficiently good with the English vocabulary.

Unless you would count the number of words in everyone's native language, like the English vocabulary of a native English speaker versus the Chinese vocabulary of a native Chinese speaker. Except, there is no 1:1 correspondence between English and Chinese words, so you would get wildly different results between similarly intelligent speakers, depending on how specifically the number of known words was measured. (Are plurals counted as separate words? Irregular verbs? "Good / better / the best"? If you already got points for "sand" and "box", do you get another point for "sandbox"?) Are bilinguals considered twice as intelligent, or do we take the maximum, or the union of their vocabularies?

Comparing speakers of the same language, yes, there is a correlation between IQ and vocabulary. But IQ is a biological thing, while vocabulary is relatively easy to change by education. Yes, it is easier to teach a hundred new words to a smart child compared to a stupid child, but if you just teach a hundred new words to one child and leave the other children alone, that doesn't give the educated child extra IQ points.

Okay, I am now really confused. But I suppose that within a homogeneous monolingual community, with standardized education, comparing only with people of the same sex, using school year as a proxy for age... the correlation between vocabulary size and IQ could be quite strong.

The correlation with IQ and Stanovich's "Rationality Quotient" is also quite high at 0.7

Correlation, sure. I'd say that a certain level of intelligence is necessary for a certain level of rationality. But the original article hinted that these two may come apart at the extremes. You seem to dismiss this possibility without providing an argument. Correlation 0.7 is still compatible with the possibility that some of the high-IQ people are very high on rationality, but many are average or low.

(I am not good at statistics, but I made a toy model where first IQ was generated as a random number between 50 and 150, and then RQ was generated as a random number between 50 and IQ, to simulate the hypothesis that intelligence is a precondition, but not a guarantee of rationality. The correlation in this model was something above 0.6, and yet many characters with very high IQ didn't have very high RQ. Of course this doesn't prove that reality works like this, but it shows that mere correlation does not disprove the possibility.)

And then there is my anecdotal experience with Mensa, where I have also heard similar stories from other people living in different countries. I briefly joined the local Mensa Facebook group, and then quickly left it, because it was a constant source of conspiracy theories, and people claiming to have disproved the theory of relativity. (I suspect that intelligence also correlates with being a contrarian, and being a contrarian correlates with believing in conspiracy theories.) Maybe Mensa specifically selects for a combination of high IQ and low RQ, dunno. Then again, maybe the rationality community specifically selects for high RQ.

Comment by viliam on Does turning on the shower help reduce wildfire smoke in the air? · 2020-09-13T12:25:47.700Z · score: 4 (2 votes) · LW · GW

If a water drop collects smoke particles as it travels through the air, then the water from your shower travels a much smaller distance than raindrops do.

(Unless the fire is in your room, but then obviously you should aim the shower at the fire. And turn off the electric power before you do that.)

Comment by viliam on 4thWayWastrel's Shortform · 2020-09-13T11:23:08.286Z · score: 4 (3 votes) · LW · GW
The obvious immediate step ...

Yes, seems like an obvious step to me, too. Which makes me wonder whether people who actually become politicians (not necessarily presidents) do it.

I can imagine that they do, it's just that neither I nor anyone I know is invited to be a member of the "smart group". Either because there are smarter and more competent people available, or because the politicians' criteria for smartness are incompatible with my bubble. Or maybe the invited people are discreet and don't tell me.

I can also imagine that they don't -- that basically, if you have the instinct "there are people smarter than me and their opinions are valuable", you will never make it into the top positions in politics. (In other words, the Dunning-Kruger effect provides an evolutionary advantage in social interaction.)

Or maybe it's something in the incentives: if you are a politician, your #1 task is infighting and backstabbing, and outsiders are not going to be helpful there, because you need to spend 24 hours a day doing this to remain competitive. Also, if you are not the party leader, your party leader probably makes all the decisions and your opinion is unimportant; and if you are the party leader, you have probably already made so many trades in order to get there that your hands are tied; in either case, the external advice, however good, would be unactionable.

It also seems possible that politicians are so flooded with all kinds of unsolicited advice that asking someone for advice is counter-intuitive.

...any successful politician here that could enlighten me?

EDIT:

I sent a few messages on Facebook to actual politicians; I wonder if any will respond. :D The question was not framed "did you assemble a group of smart people" but rather "is it common among your colleagues to assemble a group of smart people".

I also realized that I missed a simple hypothesis: In the thought experiment, if you suddenly become a politician, of course you would need help, because you didn't have time to think about things, and you don't have contacts. But in real life, you rise in politics gradually, so when you get to the top, you have already discussed things a lot, and you are surrounded by people who have also discussed things a lot, so... you probably don't feel a need to find external advisors, because it seems like all the smartest people are already in your bubble, and you meet them regularly. (My hypothesis is that the people in your bubble are not necessarily the smart ones, but it feels that way.)

EDIT2:

A response (shortened, anonymized):

I agree [with the idea of surrounding yourself with smart people], and if everyone did so, we would be in a different situation. Yes, there are many lobbyists. Doing my work, I am in regular contact with professional organizations and experts. So far I have achieved only marginal improvements, although big reforms are needed. But there is not enough political will, and a lack of people and ideas, so as far as I know, only the "maintenance" work is being done. The administration refuses experts; they break the status quo and cost money. They prefer to hire cheaply.
Comment by viliam on Notes on good judgement and how to develop it (80,000 Hours) · 2020-09-13T10:46:15.325Z · score: 7 (2 votes) · LW · GW
the conception of "Intelligence" as "processing speed" is really flawed, and in-practice intelligence already measures something closer to "good judgement".

What makes you think so? I know people from Mensa who completely lack good judgment.

Intelligence, as measured by IQ tests, seems to be mostly about processing speed and short-term memory capacity (how many complex items you can simultaneously juggle in your head).

Could it be that the rationalist community is a special bubble, where intelligence and rationality are correlated much stronger than in the outside world?

Comment by viliam on In 1 year and 5 years what do you see as "the normal" world. · 2020-09-13T10:43:03.926Z · score: 3 (2 votes) · LW · GW

It is funny to read discussions on Hacker News, where half of the people are hoping to return to the office soon, and the other half are hoping that remote work remains forever.

Some people seem to suffer a lot from remote work. From my perspective, it is a huge improvement in my quality of life (and it would have been even greater if I had a long commute, but at this specific job I don't). And I still have kids at home all day because of COVID-19, so if I could stay at home when this is all over, with the kids in kindergarten during the day, that would be even better.

Unfortunately, I am afraid that managers are overrepresented in the "wants to go back to the office" group, and ultimately, the managers will make the decisions about remote work. I have a little hope that the CEOs will also consider "money saved" and decide that it's cheaper not to have a huge office. (I mean, the ultimate reason for open-plan offices, which most people complain about and which significantly increase sickness at work, is saving a few bucks. So, if working from home saves even more bucks...)

People talk about how working from home is better/worse from the productivity perspective (fewer distractions / fewer spontaneous discussions), but I suspect that for most people this is actually about being introverted or extraverted, and the rest is mostly rationalization. For me, I am strongly introverted, and just being surrounded by people the whole day puts some unnecessary emotional stress on me; my instincts are screaming at me to find some small place to hide. At the beginning of my career, people had actual offices where they could go and close the door; these days, that's not much of an option. Also, when people take a break from work, at the office they congregate around the coffee machine; at home I can exercise or do the dishes (which means I don't have to do that in the evening). I can cook lunch for my entire family, and it's cheaper (and probably healthier) than me alone having lunch at a restaurant near work.

I hope that enough people, having experienced the joys of working from home, will share this preference, so that there will be enough pressure on employers in the future to allow it. But I wouldn't bet my money on it once COVID-19 is over. I already see many people returning to the office voluntarily while they still have a perfectly good excuse to stay at home. Once too many people return voluntarily, it will automatically become mandatory again.

Comment by viliam on Comparative advantage and when to blow up your island · 2020-09-12T14:48:02.586Z · score: 24 (13 votes) · LW · GW

I believe this is a very important topic, so thank you for addressing it! But focusing on comparative (as opposed to absolute) advantage at the beginning seems to me like a distraction from the main problem, which occurs regardless of whether the advantage is absolute or not.

It is a separate (and very important) fact that mutually beneficial trade can occur in situations where one party doesn't have an absolute advantage at anything. (At least, if we ignore the transaction costs.) Without an education in economics, people are often unaware of this, but if you take an interest in economics, this will be one of the first things they teach you.

But another fact is that if we have a situation (or possibility) of mutually beneficial trade -- regardless of how specifically that happened -- and the traded resources are (sufficiently) continuous, then there is actually a whole interval of possible mutually beneficial trades (ZOPA), and it matters a lot at which specific point the trade happens. It matters so much that it actually makes sense to put the entire trade at risk in order to achieve a more profitable point (which is still profitable for your trade partner!). For some reason, this fact is taught much less frequently (at least it seems so to me).

If you understand the concept of comparative advantage, but don't understand the concept of ZOPA (and other people know this), what happens is that you get many trade offers that all give you epsilon extra benefit, and at the end of the day you are confused: "If I am participating in so many mutually beneficial trades, why am I still not rich?" Yeah, the technical answer is that your profit margin is low; but the important question is: why?

Even worse, a low profit margin may seem like a natural consequence of trading in an efficient market (which again is a concept you do learn in Econ 101). So we get a "valley of bad economic literacy" where you believe that your behavior is optimal, and you even have seemingly solid scientific arguments to prove it, but in fact you are leaving a lot of money on the table. That's because you are "educated stupid", and whenever you meet a market imperfection, you allow someone else to pick up the banknote from the street. (For the purposes of this article, the market imperfection is that there are two different islands, instead of a thousand similar ones.)

I wonder what other similarly important concepts I am unaware of...

EDIT:

The way I used to describe this topic is: Imagine that if you work alone, you can make $100 a day. There is another person who can also make $100 a day working alone. But if you work together, you can make $300 a day. How would you split the money?

Now, the second scenario: If you work alone, you can make $100 a day. Another person approaches you and offers to work together, because that would be more productive; at the end of the day the person will give you $120. Would you accept the offer? You don't know how much money the other person can make alone, and how much money they will keep if you work together; they refuse to tell you. Do you actually need this information in order to make a rational decision? Obviously, $120 is more than $100, therefore...
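The arithmetic of the two scenarios can be sketched in a few lines of code (the numbers are the hypothetical ones from the thought experiment, not anything from the original post):

```python
# Hypothetical numbers: each person makes $100/day alone, the joint
# venture makes $300/day, so there is a $100/day surplus to split.
# Any payment to you strictly between $100 and $200 benefits both
# parties -- that interval is the ZOPA, and where inside it the trade
# lands is exactly what the bargaining is about.

def zopa(alone_you, alone_them, together):
    """Return the range of daily payments to you that benefit both parties."""
    surplus = together - alone_you - alone_them
    assert surplus > 0, "no mutually beneficial trade exists"
    # You must get more than you would earn alone; your partner must
    # keep more than they would earn alone.
    return alone_you, together - alone_them

low, high = zopa(alone_you=100, alone_them=100, together=300)
print(low, high)    # the whole interval is "mutually beneficial"

offer = 120
print(offer - low)   # your gain from accepting the offer: $20
print(high - offer)  # the gain you leave on the table: $80
```

This is why "obviously, $120 is more than $100, therefore..." is the trap: the offer is inside the ZOPA, but at the far end of it.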

Comment by viliam on MikkW's Shortform · 2020-09-11T19:40:05.184Z · score: 2 (1 votes) · LW · GW
two similar concepts that get conflated into the single word "ownership"

Sounds like "owner" vs "manager".

So, if I understand it correctly, you are allowed to create a company that is owned by the state but managed by you, and you can redirect your tax money there. (I assume that if you are too busy to run two companies, it would also be okay to put your subordinate in charge of the state-owned company.)

I am not an expert, but it reminds me of how some billionaires set up foundations to avoid paying taxes. If you make the state-owned company do whatever the foundation would do, it could be almost the same thing.

The question is, why would anyone care whether the state-owned company actually generates a profit, if they are not allowed to keep it? This could mean different things for different entrepreneurs...

a) If you have altruistic goals, you could use your own company to generate profit, and the state-owned company to do those altruistic things that don't generate profit. A lot of good things would happen as a result, which is nice, but the part of "generating profit for the public" would not be there.

b) If the previous option sounds good, consider the possibility that the "altruistic goal" done by the state-owned company would be something like converting people to the entrepreneur's religion, or lobbying for political changes you oppose.

c) For people without altruistic or even controversially-altruistic goals, the obvious option is to mismanage the state-owned company and extract as much money as possible. For example, you could make the state-owned company hire your relatives and friends, give them generous salaries, and generate no profit. Or you could make the state-owned company buy overpriced services from your company. If this were illegal, then... you could do the nearest thing that is technically legal. For example, if your goal is to retire early, the state-owned company could simply hire you and then literally do nothing. Or you would pretend to do something, except that nothing substantial would ever happen.

Comment by viliam on In 1 year and 5 years what do you see as "the normal" world. · 2020-09-10T16:47:34.358Z · score: 11 (6 votes) · LW · GW

I think that most things will return to normal, simply because most people will insist. For many people, keeping their habits is more important than survival. Right now, in the middle of the pandemic, there are people who can't resist the desire to travel -- why would anything about their habits change when it is over?

I only expect long-term changes in situations where people learned something new during the pandemic. For example, many parents were forced to set up video conferences for their kids, and many employees were forced to try working from home. This knowledge will not disappear completely. Maybe in the future, more employees will try to work remotely, and more parents will try to communicate with schools online. But even this will be a minority of the population, so instead of "society does X differently" it will be more like "there is a significant minority doing X where previously almost no one did".

Comment by viliam on Updates Thread · 2020-09-09T20:19:34.647Z · score: 8 (4 votes) · LW · GW

Since COVID-19, I have been cooking at home a lot, and I would say that most details don't matter (either the difference is difficult to notice, or the difference is negligible). Even cooking a soup 30 minutes longer (I got distracted and forgot I was cooking) made no big difference.

Exceptions: burning food; adding too much salt or acid.

A possible explanation is that some people are more sensitive to taste, and those may be the ones who add the tiny details to recipes. They may be overrepresented among professional cooks.

Before I got some experience and self-confidence, I was often scared by too many details in the recipes. These days I mostly perceive the recipe as "binary code" and try to see the "source code" behind it. The source code is like "cook A and B together, optionally add C or D", with some implied rules like "always use E with A, unless specifically told otherwise". The amounts officially specified with two significant digits usually don't have to be taken too precisely; plus or minus 20% is often perfectly okay. Sometimes a detail actually matters... you will find out by experimenting; then you can underline that part of the recipe.

I would like to see a Pareto cookbook. ("Potato soup: Peel and cut a few potatoes, cook in water for 10-30 minutes, add 1/4 teaspoon of salt. Expert version: while cooking add some bay leaves and a little fat.") So that one could start with the simple version, and optionally add the less important details later.

Comment by viliam on Escalation Outside the System · 2020-09-09T17:55:02.721Z · score: 7 (4 votes) · LW · GW

Doing a thing that hurts me is stupid, in isolation. But having a precommitment to do X even if it hurts me, can be a powerful tool in negotiation. "Give me a dollar, or I swear I will click this button and kill us both" can be a good strategy to gain a dollar even if you don't want to die, assuming you are sufficiently certain that your opponent fears death, too. ("My opponent doesn't seem to have sufficiently strong precommitments against blackmail, and he knows he has more to lose than I have" is a possible heuristic for when this strategy might work.)

People won't express it this way, either because they are not fully conscious of the game-theoretical mechanism their instincts tell them to use, or because they want to be the good guys in their story. (Actually, not understanding your own motivation is another game-theoretical tool: if you can't understand it, you can't change it, and that makes your precommitments more credible.) From the inside, it's just that when the world feels unfair, the strategy "if you won't make me happy, I will burn down the entire place" feels like the right thing to do. The explanations of how burning things actually improves places are just rationalizations.

Then there are many biases and a lot of hypocrisy on top of that. Because we are evolutionarily optimized to live in smaller groups, people are probably likely to overestimate their chances in a violent conflict. (When hundreds of people are on your side, what could possibly go wrong? In a Dunbar-sized tribe, nothing.) On the other hand, most people will only speak about violence, and expect someone else to initiate it and bear the risk. Etc.

What I don't understand is how leftists can look at the current political climate in the US and think that a violent revolution would work out well for them.

Do they mean it, or do they just bond over the sound of talking violence? (Simulacrum level 1 or 3?)

Assuming they mean it literally (which I don't think is the case for most), I can imagine some possible sources of bias. Maybe the near-mode experience of living in a strong bubble at campus trumps the far-mode knowledge of election results. Or the belief that they represent the majority is so strong it resists empirical falsification. ("We are the people. Those who vote against us are just temporarily confused, but they will join us when they see us fighting for their rights.") Maybe they assume the opponents are less organized on average, or unwilling to fight. (A smaller organized group can defeat a larger disorganized crowd. Also, elections show the direction but not the magnitude of your political faith: "I weakly believe that X is lesser evil than Y" vs "I am willing to sacrifice my life for X".)

Comment by viliam on How long does it takes to read the sequences? · 2020-09-08T18:36:44.063Z · score: 2 (1 votes) · LW · GW

I would definitely recommend reading the book version instead of the website, simply because it is tempting to read the comments below the articles, but the comments are maybe 10× as long as the articles. So in the time it takes to read the entire book, you would get through less than 10% of the web version.

Also, 1600 may seem like a big number, but if you spend a few hours a day online, you probably read more than a hundred pages of text a day. And after the first 10% of the book (which you can read in a day), you will probably already know whether you like it or not.

Comment by viliam on Design thoughts for building a better kind of social space with many webs of trust · 2020-09-07T18:14:03.107Z · score: 3 (2 votes) · LW · GW

I just realized that trust itself already slightly violates anonymity. If you say that person X is trustworthy, and if I trust your prudence at assigning trust, I can conclude that you had a lot of interaction with person X at some moment in your life.

If you gave me a network of anonymous personas, with data on how much they trust each other, plus surveillance data about who met whom, I could probably connect many of those personas to real people.
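A toy sketch of that attack (all names and data here are hypothetical, invented purely for illustration): score each real person by how well their observed meetings mirror a persona's trust links, given whatever persona-to-person links are already known.

```python
# Hypothetical inputs: a trust graph over anonymous personas, and a
# "surveillance" graph over real people recording who met whom.
trust = {                      # persona -> personas they trust
    "p1": {"p2", "p3"},
    "p2": {"p1"},
    "p3": {"p1", "p2"},
}
meetings = {                   # real person -> people they met
    "alice": {"bob", "carol"},
    "bob": {"alice"},
    "carol": {"alice", "bob"},
}

def guess_identity(persona, known_links):
    """Guess which real person a persona is, by counting how many of the
    persona's (already de-anonymized) trusted contacts each real person
    has actually met. known_links maps persona -> real person."""
    scores = {}
    for person, met in meetings.items():
        mapped = {known_links[p] for p in trust[persona] if p in known_links}
        scores[person] = len(mapped & met)
    return max(scores, key=scores.get)

# Suppose p2 and p3 have already been linked to bob and carol; then p1's
# trust links point straight at alice.
print(guess_identity("p1", {"p2": "bob", "p3": "carol"}))  # alice
```

Each newly linked persona expands `known_links`, so the attack bootstraps itself: a few confirmed identities can unravel a large part of the network.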

Maybe a real solution to this wouldn't be to prevent abusive cultures of monitoring and censorship, but to institute measures that accelerate their inevitable trajectory towards open lies, obvious insanity, and self-cannibalization, so that they burn down before getting too large.

A team of people who would infiltrate the toxic monocultures and encourage in-fighting, until the group becomes incapable of attacking non-members because it is consumed by internal conflict? It would make an interesting story, but it probably wouldn't work in real life.

My model of these things is concentric circles. You have an online mob of 10000 people, among them 100 important ones. 30 of them also meet in a separate secret forum. 5 of those also meet in a separate even-more-secret forum. As an outsider you can't get into the inner circle (it probably requires living in the same city, maybe even knowing each other since high school). And whatever internal conflict you try to stir, the members of the inner circle will support each other. Character assassinations that work perfectly against people outside the inner circle (where the standard is "listen and believe") will fail against a person in the inner circle (where a few important people will vouch in their favor, and immediately launch a counter-attack).

Comment by viliam on Donald Hobson's Shortform · 2020-09-07T17:24:55.501Z · score: 2 (1 votes) · LW · GW

If I understand it correctly, A is a number which has predicted properties if it manifests somehow, but no rule for when it manifests. That makes it kinda anti-Popperian -- it could be proved experimentally, but never refuted.

I can't say anything smart about this, other than that this kind of thing should be disbelieved by default, otherwise we would have zillions of such things to consider.

Comment by viliam on Design thoughts for building a better kind of social space with many webs of trust · 2020-09-06T19:33:22.852Z · score: 4 (3 votes) · LW · GW

If an online space becomes successful, we can expect real life to interfere with it. Self-censorship, even compelled speech, on accounts that are publicly connected with your identity... unless you want to lose your job and make enemies among your neighbors.

I suppose the solution is to have one official persona (you call it a "presence"), where the "official Viliam" would toe the party line, praise his boss and all company products, and tweet you-know-what three times a day just to be on the safe side. Then, separate personas... ideally, one for each subculture or group I am a member of: family, friends in general, school, club, rationalist community, etc. Hopefully a good UI would allow me to navigate the horde of personas easily.

But sometimes there is an overlap (e.g. some of my colleagues are also my friends, some of my relatives are also rationalists). Also, different people will cut the social space in different ways; maybe some will not distinguish between their friends and the rationalist community; the least careful ones might even use one persona for everything. The connections would be between the personas: for example my "official" persona would publicly connect to my wife's "official" persona, and my "family" persona would connect to her "family" persona. If a colleague also happens to be my friend, and he only uses one account for everything, perhaps both my "official" and "friend" personas could connect to his only persona -- okay, this already poses some questions: should the UI show to my friend that "official Viliam" and "friend Viliam" are the same person? Should it show the information to my boss, who is connected to my colleague's only persona? (Preferred answer: it would show nothing by default; you can be connected to someone's two personas without knowing it is the same user. My colleague/friend would be allowed to add a private description "this is Viliam" to my unofficial persona, but this description would not be displayed to his contacts.)

Comment by viliam on Why is Bayesianism important for rationality? · 2020-09-05T16:49:06.278Z · score: 2 (1 votes) · LW · GW

Sometimes the subjectivity comes back in the form of choosing the proper reference class.

If I flip a coin, should our calculation include all coins that were ever flipped, or only coins that were flipped by me, or perhaps only the coins that I flipped on the same day of the week...?

Intuitively, sometimes the narrower definitions are better (maybe a specific type of coin produces unusual outcomes), but the more specific you get, the fewer examples you find.
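This trade-off is easy to simulate (the data below is synthetic, generated just to illustrate the point): the narrower the reference class, the fewer examples it contains, so the frequency estimate gets noisier even when the underlying coin is the same.

```python
import random

random.seed(0)

# Synthetic record of 10,000 fair-coin flips, each tagged with the day
# of week (0-6) on which it supposedly happened.
flips = [(random.randrange(7), random.random() < 0.5)
         for _ in range(10_000)]

def estimate(in_class):
    """Estimate P(heads) using only the flips inside a reference class."""
    sample = [heads for day, heads in flips if in_class(day)]
    return len(sample), sum(sample) / len(sample)

# Broad reference class: every flip ever recorded.
n_all, p_all = estimate(lambda day: True)

# Narrow reference class: only flips from one specific weekday.
n_mon, p_mon = estimate(lambda day: day == 0)

print(n_all, round(p_all, 3))  # full sample, estimate close to 0.5
print(n_mon, round(p_mon, 3))  # ~1/7 of the sample, noisier estimate
```

If one weekday's coins really were biased, only the narrow class could detect it; but with a seventh of the data, the narrow estimate also fluctuates more, which is exactly the intuition in the paragraph above.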

Comment by viliam on Why is Bayesianism important for rationality? · 2020-09-05T12:48:13.403Z · score: 4 (2 votes) · LW · GW

There is a difference between "the law applies randomly" and "multiple laws apply, you need to sum their effects".

If you say "if one apple costs 10 cents, then three apples cost 30 cents", the rule is not refuted by saying "but I bought three apples and a cola, and I paid 80 cents". The law of gravity does not stop being universal just because the ball stops falling downwards after I kick it.