Posts

Don't Sell Your Soul 2021-04-06T19:02:55.391Z
Monastery and Throne 2021-04-06T19:00:52.623Z
Above the Narrative 2021-02-25T04:37:21.587Z
Confirmation Bias in Action 2021-01-24T17:38:08.873Z
Review: LessWrong Best of 2018 – Epistemology 2020-12-28T04:32:50.070Z
My Model of the New COVID Strain and US Response 2020-12-27T04:34:23.796Z
What trade should we make if we're all getting the new COVID strain? 2020-12-25T07:09:41.026Z
The Good Life Quantified 2020-12-11T17:52:21.219Z
How I Write 2020-12-02T23:17:28.742Z
SubOnlyStackFans 2020-11-03T02:33:00.091Z
The Treacherous Path to Rationality 2020-10-09T15:34:17.490Z
Against Victimhood 2020-09-18T01:58:31.041Z
On Suddenly Not Being Able to Work 2020-08-25T22:14:45.747Z
Writing with GPT-3 2020-07-24T15:22:46.729Z
Kelly Bet on Everything 2020-07-10T02:48:12.868Z
Fight the Power 2020-06-22T02:19:39.042Z
Do Women Like Assholes? 2020-06-22T02:14:43.503Z
TFW No Incels 2020-05-03T16:56:24.278Z
Sex, Lies, and Canaanites 2020-04-23T16:26:10.458Z
The Origin of Consciousness Reading Companion, Part 1 2020-04-06T22:07:35.190Z
The Great Annealing 2020-03-30T01:08:24.268Z
Tales From the Borderlands 2020-03-25T19:11:48.373Z
Seeing the Smoke 2020-02-28T18:26:58.839Z
The Skewed and the Screwed: When Mating Meets Politics 2020-01-29T15:50:31.681Z
Go F*** Someone 2020-01-15T18:39:33.080Z
100 Ways To Live Better 2019-12-31T20:23:12.039Z
Is Rationalist Self-Improvement Real? 2019-12-09T17:11:03.337Z
Genesis 2019-11-14T16:20:47.508Z
Aella on Rationality and the Void 2019-10-31T21:40:52.042Z
Polyamory is Rational(ist) 2019-10-18T16:48:52.990Z
Interview with Aella, Part I 2019-09-19T14:05:18.523Z
Predictable Identities - Midpoint Review 2019-09-12T14:39:44.348Z
Unstriving 2019-08-19T14:31:56.786Z
Jacob's Twit, errr, Shortform 2019-08-17T23:49:43.993Z
Diana Fleischman and Geoffrey Miller - Audience Q&A 2019-08-10T22:37:53.090Z
Cephaloponderings 2019-08-04T16:45:57.065Z
Interview With Diana Fleischman and Geoffrey Miller 2019-07-16T01:34:26.156Z
PlayStation Odysseys 2019-07-01T17:41:52.499Z
Podcast - Putanumonit on The Switch 2019-06-23T04:09:25.723Z
Get Rich Real Slowly 2019-06-10T17:51:32.654Z
Lonelinesses 2019-05-31T13:55:55.135Z
Thinking Fast and Hard 2019-05-13T19:58:34.089Z
The State of Affairs 2019-05-03T16:18:31.706Z
Buying Value, not Price 2019-04-29T15:51:55.470Z
Interview with Putanumonit 2019-04-24T14:53:00.096Z
Airportpourri 2019-04-24T14:51:24.281Z
Exponential Secretary 2019-03-04T19:47:48.912Z
Cooperation is for Winners 2019-02-15T14:58:08.949Z
Masculine Virtues 2019-01-30T16:03:56.000Z
Curing the World of Men 2019-01-18T20:23:18.006Z

Comments

Comment by Jacob Falkovich (Jacobian) on Are PS5 scalpers actually bad? · 2021-05-18T17:50:56.682Z · LW · GW

PS5 scalpers redistribute consoles away from those willing to burn time to those willing to spend money. Normally this would be a positive — time burned is just lost, whereas the money is just transferred from Sony to the scalpers who wrote the quickest bot. However, you can argue that gaming consoles in particular are more valuable to people with a lot of spare time to burn than to people with day jobs and money!

Disclosure: I'm pretty libertarian and have a full-time job, but because there weren't any good exclusives in the early months I decided to ignore the scalpers. I followed https://twitter.com/PS5StockAlerts and got my console at base price in April, just in time for Returnal. Returnal is excellent and worth getting the PS5 for, even if it costs you a couple of hours or an extra $100.

Comment by Jacob Falkovich (Jacobian) on MIRI location optimization (and related topics) discussion · 2021-05-09T01:32:20.673Z · LW · GW

Empire State of Mind

I want to second Daniel and Zvi's recommendation of New York culture as an advantage for Peekskill. An hour away from NYC is not so different from being in NYC — I'm in a pretty central part of Brooklyn and regularly commute an hour to visit friends uptown or further east in BK and Queens. An hour in traffic sucks, an hour on the train is pleasant. And being in NYC is great. 

A lot of the Rationalist-adjacent friends I made online in 2020 have either moved to NYC in the last couple of months or are thinking about it, as rents have dropped up to 20% in some neighborhoods and everyone is eager to rekindle their social life. New York is also a vastly better dating market for male nerds given a slightly female-majority sex ratio and thousands of the smartest and coolest women on the planet as compared to the male-skewed and smaller Bay Area.  

Peekskill is also 2 hours from Philly and 3 from Boston, which is not too much for a weekend trip. That could make it the Schelling point for East Coast megameetups/conferences/workshops since it's as easy to get to as NYC and a lot cheaper to rent a giant AirBnB in.

Won't Someone Think of the Children

I love living in Brooklyn, but the one thing that could make us move in the next year or two is a community of our tribe whose members are willing to help each other with childcare, from casual babysitting to homeschooling pods. I'm keenly following the news of where Rationalist groups are settling, especially those who plan to have kids (like us) or already have them. A critical mass of Rationalist parents in Peekskill may be enticing enough for us to move there, since we could have the combined benefits of living space, proximity to NYC, and the community support we would love.

Comment by Jacob Falkovich (Jacobian) on Monastery and Throne · 2021-04-09T18:55:23.058Z · LW · GW

I don't think that nudgers are consequentialists who also try to accurately account for public psychology. I think 99% of the time they are doing something for non-consequentialist reasons, and using public psychology as a rationalization. Ezra Klein pretty explicitly cares about advancing various political factions above mere policy outcomes, IIRC on a recent 80,000 Hours podcast Rob was trying to talk about outcomes and Klein ignored him to say that it's bad politics.

Comment by Jacob Falkovich (Jacobian) on Politics is way too meta · 2021-03-17T22:13:30.293Z · LW · GW

I understand, I think we have an honest disagreement here. I'm not saying that the media is cringe in an attempt to make it so, as a meta move. I honestly think that the current prestige media establishment is beyond reform, a pure appendage of power. Its impact can grow weaker or stronger, but it will not acquire honesty as a goal (and in fact, it seems to be giving up even on credibility).

In any case, this disagreement is beyond the scope of your essay. What I learn from it is to be more careful of calling things cringe or whatever in my own speech, and to see this sort of thing as an attack on the social reality plane rather than an honest report of objective reality.

Comment by Jacob Falkovich (Jacobian) on Politics is way too meta · 2021-03-17T21:15:07.571Z · LW · GW

Other people have commented here that journalism is in the business of entertainment, or in the business of generating clicks etc. I think that's wrong. Journalism is in the business of establishing the narrative of social reality. Deciding what's a gaffe and who's winning, who's "controversial" and who's "respected", is not a distraction from what they do. It's the main thing.

So it's weird to frame this as "politics is way too meta". Too meta for whom? Politicians care about being elected, so everything they say is by default simulacrum level 3 and up. Journalists care about controlling the narrative, so everything they say is by default simulacrum level 3 and up. They didn't aim at level 1 and miss; they only brush against level 1 on rare occasions, by accident.

Here are some quotes from our favorite NY Times article, Silicon Valley's Safe Space:

the right to discuss contentious issues

The ideas they exchanged were often controversial

even when those words were untrue or could lead to violence

sometimes spew hateful speech

step outside acceptable topics

turned off by the more rigid and contrarian beliefs

his influential, and controversial, writings

push people toward toxic beliefs

These aren't accidental. Each one of those quoted phrases just means "I think this is bad, and you'd better follow me". They're the entire point of the article — to make it so that it's social reality that Scott is bad.

So I think there are two takeaways here. One is for people like us, EAs discussing charity impact or Rationalists discussing life-optimization hacks. The takeaway for us is to spend less time writing about the meta and more about the object level. And then there's a takeaway about them, journalists and politicians and everyone else who lives entirely in social reality. And the takeaway is to understand that almost nothing they say is about objective reality, and that's unlikely to change.

Comment by Jacob Falkovich (Jacobian) on Above the Narrative · 2021-03-02T02:37:35.101Z · LW · GW

I agree that advertising revenue is not an immediate driving force, something like "justifying the use of power by those in power" is much closer to it and advertising revenue flows downstream from that (because those who are attracted to power read the Times).

I loved the rest of Viliam's comment though, it's very well written and the idea of the eigen-opinion and being constrained by the size of your audience is very interesting.

Comment by Jacob Falkovich (Jacobian) on Jacob's Twit, errr, Shortform · 2021-01-29T07:25:58.234Z · LW · GW

Here's my best model of the current GameStop situation, after nerding out about it for two hours with smart friends. If you're enjoying the story as a class-warfare morality play you can skip this, since I'll mostly be talking finance. I may look really dumb or really insightful in the next few days, but this is a puzzle I wanted to figure out. I'm making this public so posterity can judge my epistemic rationality skillz — I don't have a real financial stake either way.

Summary: The longs are playing the short game, the shorts are playing the long game.

At $300, GameStop is worth about $21B. A month ago it was worth $1B, so there's $20B at stake between the long-holders and short sellers.

Who's long right now? Some combination of WSBers on a mission, FOMOists looking for a quick buck, and institutional money (i.e., other hedge funds). The WSBers don't know fear, only rage and loss aversion. A YOLOer who bought at $200 will never sell at $190, only at $1 or the moon. FOMOists will panic but they're probably a majority and today's move shook them off. The hedgies care more about risk, they may hedge with put options or trust that they'll dump the stock faster than the retail traders if the line breaks.

The interesting question is who's short. Shorts can probably expect to need a margin equal to ~twice the current share price, so anyone who shorted too early or for 50% of their bankroll (like Melvin and Citron) got squeezed out already. But if you shorted at $200 and for 2% of your bankroll you can hold for a long time. The current borrowing fee is 31% APR, or just 0.1% a day. I think most of the shorts are in the latter category, here's why:

Short interest has stayed at 71M shares even as this week saw more than 500M shares change hands. I think this means that new shorts are happy to take the places of older shorts who cash out, they're only constrained by the fact that ~71M are all that's available to borrow. Naked shorts aren't really a thing, forget about that. So everyone short $GME now is short because they want to be, if they wanted to get out they could. In a normal short squeeze the available float is constrained, but this hasn't really happened with $GME.
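As a sanity check on the carrying-cost arithmetic above — the 31% APR borrow fee is the comment's figure; the 2%-of-bankroll sizing is its hypothetical; the rest is just arithmetic:

```python
# Daily carrying cost of a short position at a 31% annualized borrow fee.
apr = 0.31

simple_daily = apr / 365                      # simple daily rate
compound_daily = (1 + apr) ** (1 / 365) - 1   # daily rate if the fee compounds

print(f"simple:   {simple_daily:.4%} per day")
print(f"compound: {compound_daily:.4%} per day")

# Holding a short sized at 2% of bankroll for 90 days costs roughly:
position_fraction = 0.02
cost_90d = position_fraction * simple_daily * 90
print(f"90-day carry on a 2%-of-bankroll short: {cost_90d:.3%} of bankroll")
```

Either way it comes out just under 0.1% a day, matching the comment: a small, patient short can afford to wait for weeks.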

WSBers can hold the line but can't push higher without new money that would either take some of these 71M shares out of borrowing circulation or push the price up so fast that the shorts get margin-called or panic. For the longs to win, they probably need something dramatic to happen soon.

One dramatic thing that could happen is that people who sold the huge number of call options expiring Friday aren't already hedged and will need to buy shares to deliver. It's unclear if that's realistic; most option sellers are market makers who don't stay exposed for long. I don't think there were options sold above the current price of $320, so there's no gamma left to squeeze.

I think $GME getting taken off retail brokerages really hurt the WSBers. It didn't cause panic, but it slowed the momentum they so dearly needed and scared away FOMOists. By the way, I don't think brokers did it to screw with the small people, they're their clients after all. It just became too expensive for brokerages to make the trade because they need to post clearing collateral for two days. They were dumb not to anticipate this, but I don't think they were bribed by Citadel or anything.

For the shorts to win they just need to wait it out and not get over-greedy. Eventually the longs will either get bored or turn on each other — with no squeeze this becomes just a pyramid scheme. If the shorts aren't knocked out tomorrow morning by a huge flood of FOMO retail buys, I think they'll win over the next few weeks.

Comment by Jacob Falkovich (Jacobian) on Is Rationalist Self-Improvement Real? · 2021-01-10T18:56:25.881Z · LW · GW

This is a self-review, looking back at the post after 13 months.

I have made a few edits to the post, including three major changes:
1. Sharpening my definition of what counts as "Rationalist self-improvement" to reduce confusion. This post is about improved epistemics leading to improved life outcomes, which I don't want to conflate with some CFAR techniques that are basically therapy packaged for skeptical nerds.
2. Addressing Scott's "counterargument from market efficiency" that we shouldn't expect to invent easy self-improvement techniques that haven't been tried.
3. Talking about selection bias, which was the major part missing from the original discussion. My 2020 post The Treacherous Path to Rationality is somewhat of a response to this one, concluding that we should expect Rationality to work mostly for those who self-select into it and that we'll see limited returns to trying to teach it more broadly.

The past 13 months also provided more evidence that epistemic Rationality is instrumentally useful. In 2020 I saw a few Rationalist friends fund successful startups and several friends cross the $100k mark for cryptocurrency earnings. And of course, LessWrong led the way on early and accurate analysis of most COVID-related things. One result of this has been increased visibility and legitimacy, and another is that Rationalists have had far fewer COVID cases than any other community I know.

In general, this post is aimed at someone who discovered Rationality recently but is lacking the push to dive deep and start applying it to their actual life decisions. I think the main point still stands: if you're Rationalist enough to think seriously about it, you should do it.

Comment by Jacob Falkovich (Jacobian) on What trade should we make if we're all getting the new COVID strain? · 2021-01-04T17:22:32.987Z · LW · GW

Trade off to a promising start :P
 

Comment by Jacob Falkovich (Jacobian) on Review: LessWrong Best of 2018 – Epistemology · 2021-01-01T03:10:03.656Z · LW · GW

There's a whole lot to respond to here, and it may take the length of Surfing Uncertainty to do so. I'll point instead to one key dimension.

You're discussing PP as a possible model for AI, whereas I posit PP as a model for animal brains. The main difference is that animal brains are evolved and occur inside bodies.

Evolution is the answer to the dark room problem. You come with prebuilt hardware that is adapted to a certain niche, which is equivalent to modeling it. Your legs are a model of the shape of the ground and the size of your evolutionary territory. Your color vision is a model of berries in a bush, and so are the fingers that pick them. Your evolved body is a hyperprior you can't update away. In a sense, you're predicting all the things that are adaptive: being full of good food, in the company of allies and mates, being vigorous and healthy, learning new things. Lying hungry in a dark room creates a persistent error in your highest-order predictive models (the evolved ones) that you can't change.

Your evolved prior supposes that you have a body, and that the way you persist over time is by using that body. You are not a disembodied agent learning things for fun or getting scored on some limited test of prediction or matching. Everything your brain does is oriented towards acting on the world effectively. 

You can see that perception and action rely on the same mechanism in many ways, starting with the simple fact that when you look at something you don't receive a static picture, but rather constantly saccade and shift your eyes, contract and expand your pupil and cornea, move your head around, and also automatically compensate for all of this motion. None of this is relevant to an AI who processes images fed to it "out of the void", and whose main objective function is something other than maintaining homeostasis of a living, moving body.

Zooming out, Friston's core idea is a direct consequence of thermodynamics: for any system (like an organism) to persist in a state of low entropy (e.g. a constant 98°F) in an environment that is higher entropy but contains some exploitable order (e.g. calories aren't uniformly spread through the universe but concentrated in bananas), it must exploit this order. Exploiting it is equivalent to minimizing surprise, since if you're surprised there's some pattern of the world that you failed to make use of (free energy).
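In symbols — this is the standard variational formulation, added here for reference rather than quoted from anywhere above — the "surprise" of an observation o is -ln p(o), and the free energy F that an organism can actually compute (using its approximate belief q(s) over hidden states s) bounds it from above:

```latex
F(q) \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(s) \,\|\, p(s \mid o)\,\right]}_{\ge\, 0} \;-\; \ln p(o) \;\ge\; -\ln p(o)
```

So minimizing free energy minimizes an upper bound on surprise, which is the sense in which "exploiting order" and "minimizing surprise" are the same move.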

Now apply this basic principle to your genes persisting over an evolutionary time scale and your body persisting over a time scale of decades, and this sets the stage for PP applied to animals.

For more, here's a conversation between Clark, Friston, and an information theorist about the Dark Room problem.

Comment by Jacob Falkovich (Jacobian) on Review: LessWrong Best of 2018 – Epistemology · 2020-12-30T17:49:13.930Z · LW · GW

Off the top of my head, here are some new things it adds:


1. You have 3 ways of avoiding prediction error: updating your models, changing your perception, acting on the world. Those are always in play and you often do all three in some combination (see my model of confirmation bias in action).
2. Action is key, and it shapes and is shaped by perception. The map you build of any territory is prioritized and driven by the things you can act on most effectively. You don't just learn "what is out there" but "what can I do with it".
3. You care about prediction over the lifetime scale, so there's an explore/exploit tradeoff between potentially acquiring better models and sticking with the old ones.
4. Prediction goes from the abstract to the detailed. You perceive specifics in a way that aligns with your general model, rarely in contradiction.
5. Updating always goes from the detailed to the abstract. It explains Kuhn's paradigm shifts but for everything — you don't change your general theory and then update the details, you accumulate error in the details and then the general theory switches all at once to slot them into place.
6. In general, your underlying models are a distribution but perception is always unified, whatever your leading model is. So when perception changes it does so abruptly.
7. Attention is driven in a Bayesian way, to the places that are most likely to confirm/disconfirm your leading hypothesis, balancing the accuracy of perceiving the attended detail correctly and the leverage of that detail to your overall picture.
8. Emotions through the lens of PP.
9. Identity through the lens of PP.
10. The above is fractal, applying at all levels from a small subconscious module to a community of people.
 

Comment by Jacob Falkovich (Jacobian) on What trade should we make if we're all getting the new COVID strain? · 2020-12-30T02:51:31.860Z · LW · GW

The new strain has been confirmed in the US and the vaccine rollout is still sluggish and messed up, so the above are in effect. The trades I made so far are buying out-of-the-money calls on VXX (volatility) and puts on USO (oil) and JETS (airlines) all for February-March. I'll hold until the market has a clear, COVID related drop or until these options all expire worthless and I take the cap gains write-off. And I'm HODLing all crypto although that's not particularly related to COVID. I'm not in any way confident that this is wise/useful, but people asked.

Comment by Jacob Falkovich (Jacobian) on My Model of the New COVID Strain and US Response · 2020-12-27T17:07:18.781Z · LW · GW

I don't think it was that easy to get to the saturated end with the old strain. As I remember, the chance of catching COVID from a sick person in your household was only around 20-30%, and at superspreader events it was still just a small minority of total attendees that were infected.

Comment by Jacob Falkovich (Jacobian) on What trade should we make if we're all getting the new COVID strain? · 2020-12-25T20:28:11.939Z · LW · GW

The VXX is basically at multi-year lows right now, so one of the following is true:
1. Markets think that the global economy is very calm and predictable right now.
2. I'm misunderstanding an important link between "volatility = unpredictability of world economics" and "volatility = premium on short-term SP500 options".

Comment by Jacob Falkovich (Jacobian) on What trade should we make if we're all getting the new COVID strain? · 2020-12-25T20:10:10.409Z · LW · GW

Some options and their 1-year charts:
JETS - Airline ETF

XLE - Energy and oil company ETF

AWAY - Travel tech (Expedia, Uber) ETF

Which would you buy put options on, and with what expiration?

Comment by Jacob Falkovich (Jacobian) on How Lesswrong helped me make $25K: A rational pricing strategy · 2020-12-25T03:43:59.382Z · LW · GW

Those are good points. I think competition (real and potential) is always at least worth considering in any question of business, and I was surprised the OP didn't even mention it. But yes, I can imagine situations where you operate with no relevant competition.

But this again would make me think that pricing, and the story you tell a client, is strictly secondary to finding these potential clients in the first place. If they were the sort of people who go out seeking help you'd have competition, so that means you have to find people who don't advertise their need. That seems to be the main thing the author is doing and the main value they're providing: finding people who need recruitment help and don't realize it.

Comment by Jacob Falkovich (Jacobian) on How Lesswrong helped me make $25K: A rational pricing strategy · 2020-12-22T19:03:55.826Z · LW · GW

This pricing makes sense if your only competition is your client just going at it by themselves, in which case you clearly demonstrate that you offer a superior deal. But job seekers have a lot of consultants/agencies/headhunters they can turn to and I'd imagine your price mostly depends on the competition. In the worst case, you not only lose good clients to cheaper competition, but get an adverse selection of clients who would really struggle to find a job in 22 weeks and so your services are cheap/free for them.

Comment by Jacob Falkovich (Jacobian) on The Curse Of The Counterfactual · 2020-12-16T00:57:19.791Z · LW · GW

This statement for example:
> Motivating you to punish things is what that part of your brain does, after all; it’s not like it can go get another job!

I'm coming more from a predictive processing / bootstrap learning / constructed emotion paradigm in which your brain is very flexible about building high-level modules like moral judgment and punishment. The complex "moral brain" that you described is not etched into our hardware and it's not universal, it's learned. This means it can work quite differently or be absent in some people, and in others it can be deconstructed or redirected — "getting another job" as you'd say.

I agree that in practice lamenting the existence of your moral brain is a lot less useful than dissolving self-judgment case-by-case. But I got a sense from your description that you see it as universal and immutable, not as something we learned from parents/peers and can unlearn.

P.S.
Personal bias alert — I would guess that my own moral brain is perhaps in the 5th percentile of judginess and desire to punish transgressors. I recently told a woman about EA and she was outraged about young people taking it on themselves to save lives in Africa when billionaires and corporations exist who aren't helping. It was a clear demonstration of how different people's moral brains are.

Comment by Jacob Falkovich (Jacobian) on The Curse Of The Counterfactual · 2020-12-14T00:13:47.541Z · LW · GW

I've come across a lot of discussion recently about self-coercion, self-judgment, procrastination, shoulds, etc. Having just read it, I think this post is unusually good at offering a general framework applicable to many of these issues (i.e., that of the "moral brain" taking over). It's also peppered with a lot of nice insights, such as why feeling guilty about procrastination is in fact moral licensing that enables procrastination.

While there are many parts of the post that I quibble with (such as the idea of the "moral brain" as an invariant specialized module), this post is a great standalone introduction to and explanation of a framework that I think is useful and important.

Comment by Jacob Falkovich (Jacobian) on Blackmail · 2020-12-13T23:15:05.820Z · LW · GW

> But if evidence of that regrettable night is all over the internet, that is much worse. You then likely have a lot of other regrettable nights. College acceptances are rescinded, jobs lost.

I have a major quibble with this prediction. Namely my model is that the regrettability of nights, and moral character of people, is always graded on a curve, not absolutely.

Colleges still need to admit students. Employers still need employees. In a world where everyone smokes weed in high school but this is known about only 5% of students, it makes sense for jobs and colleges to exclude weed-smokers. But if 80% of people are known to have smoked weed (or had premarital sex, or shoplifted from CVS, or gotten into a fight), then it stops being a big deal. 

An example from the other side would be cheating on your spouse: by some accounts half of us do it, but a lot fewer than half are publicly exposed for it. So today this still carries a huge stigma, but in a world where every cheater was being blackmailed, one of the main effects would be that cheating on a spouse would cease to be seen as an irredeemable sin.

Comment by Jacob Falkovich (Jacobian) on Book Summary: Consciousness and the Brain · 2020-12-13T20:38:55.794Z · LW · GW

The GNW theory has been kicking around for at least two decades, and this book was published in 2014. Given that, it is almost shocking that the idea wasn't written up on LW before, given its centrality to any understanding of rationality. Shocking but perhaps fortunate, since Kaj has given it a thorough and careful treatment that enables the reader both to understand the idea and to evaluate its merits (and almost certainly saves the reader the purchase price of the book).

First, on GNW itself. A lot of the early writing on rationality used the simplified system 1 / system 2 abstraction as the central concept. GNW puts actual meat on this skeleton, describing exactly what unconscious (formerly known as system 1) processes can and can't do, how they learn, and under what conditions consciousness comes into play. Kaj elaborates more on system 2 in another post, but this review offers enough to reframe the old model in GNW-terms — a reframing that I've been convinced is more accurate and meaningful.

As for the post itself, its main strength and weakness is that it's very long. The length is not due to fluff — I've compiled my own summary of this post in Roam that runs more than 1,000 words, with almost every paragraph worthy of inclusion. But perhaps, in particular for the purposes of a book, the post could be more fruitfully broken up into two parts: one to describe the GNW model and its implications, and one to cover the experimental evidence for the model and its reliability. The latter takes up almost half the text of the post by volume, and while it is valuable, the former could perhaps stand alone as a worthwhile article (with a reference to a discussion of the experiments so people can assess whether they buy it).

Comment by Jacob Falkovich (Jacobian) on Is Rationalist Self-Improvement Real? · 2020-12-12T22:36:25.433Z · LW · GW

D'oh. I'm dumb.

Comment by Jacob Falkovich (Jacobian) on Is Rationalist Self-Improvement Real? · 2020-12-11T17:56:52.691Z · LW · GW

EDIT: The Treacherous Path was published in 2020 so never mind.

Thank you (and to alkjash) for the nomination! 

I guess I'm not supposed to nominate things I wrote myself, but this post, if published, should really be read along with The Treacherous Path to Rationality. I hope someone nominates that too.

This post is an open invitation to everyone (including the non-LWers who may read the books) to join us. The obvious question is whether this actually works for everyone, and the latter post makes the case for the opposite mood. I think that in conjunction they offer a much more balanced take on who and what applied rationality is good for.

Comment by Jacob Falkovich (Jacobian) on How I Write · 2020-12-03T00:20:31.479Z · LW · GW

Do you have trouble writing for short periods of time, or do you have enough long chunks of free time that there's no use for small chunks?

If my life was so busy that I couldn't even find 4-5 hourlong chunks throughout the week I probably wouldn't blog at all. I sometimes write in 15-20 minute bits while in the office (remember those?) but almost every single post took a multi-hour chunk to come together.

Comment by Jacob Falkovich (Jacobian) on The Treacherous Path to Rationality · 2020-10-19T20:29:08.309Z · LW · GW

Yes, really smart domain experts were smarter and earlier but, as you said, they mostly kept it to themselves. Indeed, the first rationalists picked up COVID worry from private or unpublicized communication with domain experts, did the math and sanity checks, and started spreading the word. We did well on COVID not by outsmarting domain experts, but by coordinating publicly on what domain experts (especially any with government affiliations) kept private.

Comment by Jacob Falkovich (Jacobian) on The Treacherous Path to Rationality · 2020-10-12T18:51:50.137Z · LW · GW

We didn't get COVID, for starters. I live in NYC, where approximately 25% of the population got sick but no rationalists that I'm aware of did.

Comment by Jacob Falkovich (Jacobian) on The Treacherous Path to Rationality · 2020-10-12T18:43:24.453Z · LW · GW

If I, a rationalist atheist, was in Francis Bacon's shoes I would 100% live my life in such a way that history books would record me as being a "devout Anglican". 

Comment by Jacob Falkovich (Jacobian) on Most Prisoner's Dilemmas are Stag Hunts; Most Stag Hunts are Schelling Problems · 2020-10-01T15:25:46.541Z · LW · GW

The longer (i.e., the more iterations) you spend in the shaded triangles of defection, the more you'll be pulled toward the defect-defect equilibrium as a natural reaction to what the other person is doing and the outcome you're getting. The longer you spend in the middle "wedge of cooperation", the more you'll end up moving up and to the right in Pareto improvements. So we want to make that wedge bigger.

The size of that wedge is determined by the ratio of a player's outcome from C-C to their outcome in D-D. In this case the ratio is 2:1, so the wedge is between the slopes of 2 and 1/2. If C-C only guaranteed 1.1-1.1 to each player while a defection got them at least 1, the wedge would be a tiny sliver. Conversely, if the payoff for C-C was 999-999 almost the entire square would be the wedge. 

But the bigger the wedge, the more difference there is between outcomes on the Pareto frontier, so the 100% C-C outcome is a lot less stable than it would be if any deviation from it immediately led to non-equilibrium points that degenerate to D-D.
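To put rough numbers on this (a sketch, assuming the wedge spans the slopes between r and 1/r, where r is the ratio of a player's C-C payoff to their D-D payoff):

```python
import math

def wedge_angle_degrees(cc_payoff, dd_payoff):
    """Angular width of the cooperation wedge, taking it to span
    the slopes between r and 1/r for r = cc_payoff / dd_payoff."""
    r = cc_payoff / dd_payoff
    return math.degrees(math.atan(r) - math.atan(1 / r))

print(wedge_angle_degrees(2, 1))    # the post's 2:1 ratio: a substantial wedge
print(wedge_angle_degrees(1.1, 1))  # 1.1 vs. 1: a tiny sliver
print(wedge_angle_degrees(999, 1))  # 999 vs. 1: almost the whole quadrant
```

The 2:1 ratio yields a wedge of roughly 37 of the quadrant's 90 degrees, the 1.1:1 ratio only about 5 degrees, and the 999:1 ratio nearly all 90, matching the "tiny sliver" and "almost the entire square" descriptions above.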

Comment by Jacob Falkovich (Jacobian) on Covid-19 6/18: The Virus Goes South · 2020-06-20T22:06:20.583Z · LW · GW

Here's what I wrote about coordinated moving when Raymond was talking about leaving the Bay for a while:

"Coordinated moving seems hard. It seems unlikely to happen. But, I think that uncoordinated moving can end up quite coordinated.

If I'm thinking of leaving Brooklyn, I have 10,000 small towns to choose from. If [Zvi, or Ray, or anyone like that] publicizes which one he goes to after doing research, that town is immediately in my top 10 options I'll actually consider. Not just because I'd want to live near [Zvi/Ray] and I trust his research, but also because I know that hundreds of other people I like would know about that town and consider moving there. So if people just move out without coordinating but tell all their friends about it, I think we'll end up with decent enough agglomerations of friends wherever the pioneers end up going."

On a related note, I'm planning to go on a small road trip around the northeast in July and would love to visit you in Warwick if you're accepting visitors (got tested this week, alas no antibodies, still distancing at home).

Comment by Jacob Falkovich (Jacobian) on Simulacra and Subjectivity · 2020-05-20T14:16:39.467Z · LW · GW

Let me know if this matches — the way I understand it is that level 3 is often about signaling belonging to a group, and level 4 is about shaping how well different belonging signals work.

So:

Level 1: "Believe all women" = If a woman accuses someone of sexual assault, literally believe her.

Level 2: "Believe all women" = I want accusations of sexual assault to be taken more seriously.

Level 3: "Believe all women" = I'm part of the politically progressive tribe that takes sexual assault seriously.

Level 4: "Believe all women" = Taking sexual assault seriously should be a more important signal of political progressivism than other issues.

Level 5: "Believe all women" = But actually take sexual assault seriously even if it becomes opposed to political progressivism because Biden.

Comment by Jacob Falkovich (Jacobian) on Jacob's Twit, errr, Shortform · 2020-04-22T23:08:08.190Z · LW · GW

People ask what the goal of the Rationalist community is. It's to raise the sanity waterline. To flood the cities with sanity. To wash the streets with pure reason. To engulf the land in common sense. And when our foes, gasping for air, scream "this literally can't be happening!" we'll remind them that 0 and 1 are not probabilities.

Comment by Jacob Falkovich (Jacobian) on Premature death paradox · 2020-04-16T22:54:03.492Z · LW · GW

If you die at age 90, you died prematurely relative to what we'd expect a month before you died, but (postmaturely? it should be a word) relative to what we'd expect and bet on 80 years before your death (i.e., at age 10).

Now, you may still think there's a paradox in the following sense: let's say the median lifespan expected at birth is 70. That means that the 50% of people who died before 70 died prematurely relative to all predictions made throughout their lives, while for the remaining 50% some of the predictions were too pessimistic (those made early in their lives) and some too optimistic. Isn't there still a skew towards being surprised that people died early?

The imbalance disappears if we count not people, but people-seconds. I.e., if we predict how long everyone is going to live at every second of their lives, the average prediction will not be either pre- or post-mature. The people who live longer will accumulate more pessimistic early death predictions through the sheer fact that they live more seconds and so more predictions are made about them. A person who lives to 100 may accumulate 95 years of too-pessimistic predictions and only 5 years of too-optimistic ones.
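This balancing can be checked with a small simulation (a sketch with made-up lifespans, predicting once per person-year rather than per second, and using the empirical mean lifespan among survivors as the prediction at each age):

```python
import random

random.seed(0)
# Made-up lifespans, in whole years, for illustration only.
lifespans = [random.randint(1, 100) for _ in range(10_000)]

total_error = 0.0
num_predictions = 0
for age in range(max(lifespans)):
    # Everyone still alive at this age gets a prediction: the mean
    # lifespan of all people who survived past this age.
    alive = [span for span in lifespans if span > age]
    if not alive:
        break
    prediction = sum(alive) / len(alive)
    total_error += sum(span - prediction for span in alive)
    num_predictions += len(alive)

mean_error = total_error / num_predictions
print(mean_error)  # essentially zero: weighted by person-years, no skew
```

The per-age errors cancel by construction (each survivor group's deviations from its own mean sum to zero), so the person-year-weighted average prediction is neither pre- nor post-mature, even though long-lived people accumulate many pessimistic predictions along the way.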

Comment by Jacob Falkovich (Jacobian) on April Coronavirus Open Thread · 2020-04-16T22:25:48.981Z · LW · GW

Hydroxychloroquine update!

A smart friend pointed me to this study that explains that mediocre antivirals only work if administered right after infection. By the onset of symptoms the effect is already much reduced. (The study isn't clear as to what counts as "symptoms" except that they occurred 3 days before hospitalization, so maybe early warning signs like loss of smell don't count). HCQ is, at best, a mediocre antiviral.
https://www.medrxiv.org/content/10.1101/2020.04.04.20047886v1

This model agrees with a new study from China (N=150) that showed zero effect from giving patients HCQ 16-17 days after the onset of the disease. Of note, the study compared Standard of Care (SOC) to SOC+HCQ, and I have no idea what the Chinese SOC is beyond the minimal requirement of intravenous fluids, oxygen, and monitoring that's mentioned in the paper. In particular, there's no info on whether it includes antibiotics like azithromycin, or whether it includes zinc. It's hypothesized that HCQ works partly by easing the entry of zinc into cells, where it slows viral replication, and so the two work well in conjunction.
https://www.medrxiv.org/content/10.1101/2020.04.10.20060558v1

Bottom line: it may still be worth it to take HCQ+zinc if you cough and lose your sense of smell two days after going through an airport, but HCQ may not be of any help to heavily symptomatic people (and it still has nasty side effects).

Real bottom line: now that hydroxychloroquine is a politicized issue, you can't trust anything journalists have to say about it and have to read the studies yourself.

Comment by Jacob Falkovich (Jacobian) on The Great Annealing · 2020-04-01T03:52:34.547Z · LW · GW

As a follow up on the media angle, here's something I posted on my Facebook:

We're going to see a lot of research on hydroxychloroquine and azithromycin (HC&A), among other drugs, coming out in the next few weeks from around the world. HC&A is already the standard of care in several countries, in part because the drugs are cheap and widely available and in part because early results are promising. The combined evidence of these studies may show that other treatments are better as a first choice, or that HC&A is better, or that it depends on the particular characteristics of each patient. It’s always going to be complicated.

What the studies will never be able to do is *prove* that HC&A cures COVID since we already know that nothing works 100% for it. There is too much variance in how patients are selected for each study, how they're treated, how outcomes are measured, and how an individual responds. There's never one big indisputable hammer in small-N drug research, and there are always outlier results for people to cherry-pick one way or another. However, enough Bayesian evidence could mount that taking 600 mg of hydroxychloroquine at home at the first onset of symptoms or a positive test is better than chicken soup or going to an overcrowded hospital, all else being equal [1].

And if that happens, there is little doubt in my mind that mainstream media will fight for weeks against admitting that it is the case. They will hide behind "it's not proven" and "more research is needed" and "but the FDA". Facebook will be along for the denial ride claiming they "fight unofficial misinformation", which is anything that’s not coming from the WHO (which is currently telling people not to wear masks). Many politicians will fight to suppress this information as well, especially if Trump starts gloating over some particularly poor pro-HC&A study and saying that he called it. Trump is an idiot, but reversed stupidity is not intelligence.

So, please don’t fall prey to Gell-Mann amnesia. The same people who bullshitted you about “it’s just the flu” and about closing borders and about masks would 100% keep bullshitting you about drugs. Journalists aren’t smart enough to understand cumulative research evidence, and organizations like WHO and FDA have institutional incentives that will force them to react two months and thousands of corpses too late. You have to learn how to read medical studies yourself, or follow people who can and who aren’t compromised by working in media or politics. The lives of your loved ones are at stake.

[1] I will not disclose here whether I think that’s already the case for two reasons. First, I don’t want Facebook to remove this post for giving unsolicited medical advice, so I’m only giving information consumption advice. Second, I am not the authority you should be listening to. It’s better that we all find different sources to read and share our independent conclusions.

Comment by Jacob Falkovich (Jacobian) on "No evidence" as a Valley of Bad Rationality · 2020-03-30T14:19:11.706Z · LW · GW

I just thought of this in the context of this study on hydroxychloroquine in which 14/15 patients on the drug improved vs 13/15 patients treated with something else. To the average Joe, HCQ curing 14/15 people is an amazing positive result, and it's heartening to know that other antivirals are almost as good. To the galaxy-brained journalist, there's p>0.05 and so "the new study casts doubt on hydroxychloroquine effectiveness... a prime example of why Trump shouldn't be endorsing... actually isn't any more effective."
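For reference, here's why 14/15 vs. 13/15 is nowhere near statistical significance (a sketch using a hand-rolled two-sided Fisher's exact test; the study itself may have used a different procedure):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of every table with the same margins that is
    at most as likely as the observed one."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p(x):  # hypergeometric probability of x successes in row 1
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p(a)
    lo = max(0, col1 - row2)
    hi = min(row1, col1)
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs + 1e-12)

# 14/15 improved on HCQ vs. 13/15 on the comparison treatment:
print(fisher_exact_two_sided(14, 1, 13, 2))  # ~1.0, far above p < 0.05
```

With samples this small, a one-patient difference carries essentially no evidential weight either way, which is exactly why "p > 0.05" here says nothing about whether the drug works.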

Comment by Jacob Falkovich (Jacobian) on Seeing the Smoke · 2020-02-29T06:07:34.367Z · LW · GW

I think the economic impact will also be huge. Businesses are prepared for 2% of their workers being out with the flu on any given day through the winter, but not for 20% to be sick while the other 80% are quarantined as COVID-19 hits their city. And the company that needs the input parts from that first business is not prepared to go without them for a month, and the companies that rely on it are not prepared either, and most industries have slim enough cash reserves and profit margins that a pandemic can knock a lot of good companies out of business for good. This could all mean just slightly more expensive electronics for two years, or it could mean a decade of unemployment and restructuring.

Comment by Jacob Falkovich (Jacobian) on Go F*** Someone · 2020-01-22T16:36:50.372Z · LW · GW

Attractiveness comes in many forms. I'm extroverted and write better than I look, so I do well at dinner parties and OKCupid. You can be attractive in dancing skill, in spiritual practice, in demonstrable expertise, in an artistic pursuit... guitar players get laid even if they're not that good looking.

And yet, everyone's first association when talking about "aim for 100 dates" is Tinder, which works only for the men who are top 20% in the one aspect of attractiveness that's crowded and hard to improve - physical looks. This includes men who self-report as unattractive, like this commenter (and presumably, "Simon").

The minimum threshold of attractiveness on Tinder is incredibly high, much higher than almost any other place to look for dates. It's certainly higher than my own good looks — I only turn Tinder on when I leave the country.

Comment by Jacob Falkovich (Jacobian) on Go F*** Someone · 2020-01-22T16:25:50.202Z · LW · GW

I was thinking of people who write comments without reading the post, which pollutes the conversation. Or people who form broad opinions about a writer or a blog without reading. I deal with those people all day every day on Twitter and in the blog comments.

I didn't mean people deciding what to read based on the title. Of course everyone does that! Someone seeing 'Go F*** Someone' may assume that the post will be somewhat vulgar, and will talk about sex. Both things are true. People not interested in vulgar writing about sex shouldn't read it. If I titled it 'A Consideration of Narcissism as it Affects the Formation of Long Term Bonds' that would actually be more misleading, since people would not expect it to be a vulgar post about sex and will get upset.

Comment by Jacob Falkovich (Jacobian) on Go F*** Someone · 2020-01-20T22:56:22.607Z · LW · GW

I understand your concerns.

I cross-post everything I write on Putanumonit to LW by default, which I understood to be the intention of "personal blogposts". I didn't write this for LW. If anyone on the mod team tells me that this would be better as a link post or off LW entirely, not because it's bad but because it's not aligned with LW's reputation, I'll be happy to comply.

> I could imagine casual readers quickly looking at this and assuming it's related to the PUA community

With that said, my personal opinion is that LW shouldn't cater to people who form opinions on things before reading them and we should discourage them from hanging out here.

Comment by Jacob Falkovich (Jacobian) on Go F*** Someone · 2020-01-16T21:29:28.714Z · LW · GW

> 95%+ of people who drop out of the workforce to raise children are women

Citation needed.

Other than that, you are supporting my general argument by writing from within the very framework that I lay out here. Why is the choice to leave work "destructive"? Why is it OK for a man to depend on a woman for the biological necessities of having a family, but not OK for either partner to depend on the other for the financial necessities?

Accomplished women who drop out to raise families usually don't surrender the spending of money to their husbands (I agree that demanding that they do so is patriarchal and bad). They only surrender the making of the money. The ability to spend money is what lets people build good lives and families, but making money is what contributes to their status*. Post-divorce, it's usually much easier for a woman (particularly an accomplished one) to make money again than it is for a man to have children again.

*At least, their status among some people. I personally care about LW karma more than income :)

Comment by Jacob Falkovich (Jacobian) on Caring less · 2019-12-23T22:45:31.762Z · LW · GW

"Caring less" was in the air. People were noticing the phenomenon. People were trying to explain it. In a comment, I realized that I was in effect telling people to care less about things without realizing what I was doing. All we needed was a concise post to crystallize the concept, and eukaryote obliged.

The post, especially the beginning, gets straight to the point. It asks the question of why we don't hear more persuasion in the form of "care less", offers a realistic example and a memorable graphic, and calls to action. This is the part that was most useful to me - it gave me a clear handle on something that I've been thinking about for a while. I'm a big fan of telling people to care less, and once I realized that this is what I was doing I learned to expect more psychological resistance from people. I'm less direct now when encouraging people to care less, and often phrase it in terms of trade-offs by telling people that caring less about something (usually, national politics and culture wars) will free up energy to care more about things they already endorse as more important (usually, communities and relationships).

The post talks about the guilt and anxiety induced by ubiquitous "care more" messaging, and I think it's taking this too much for granted. An alternative explanation is that people who are not scrupulous utilitarian Effective Altruists are quite good at not feeling guilt and anxiety, which leaves room for "care more" messaging to proliferate. I wish the post made more of a distinction between the narrow world of EA and the broader cultural landscape; I fear that it may be typical-minding somewhat.

Finally, eukaryote throws out some hypotheses that explain the asymmetry. This part seems somewhat rushed and not fully thought out; as a quick brainstorming exercise it might work better as a plain series of bullet points, since the 1-2 paragraph explanations don't really add much. As some commenters pointed out and as I wrote in an essay inspired by this post, eukaryote doesn't quite suggest the "Hansonian" explanation that seems obviously central to me. Namely: "care more about X" is a claim for status on behalf of the speaker, who is usually someone with strong opinions and status tied up in X. This is more natural and more tolerable to people than "care less about Y", which reads as an attack on someone else's status and identity - often the listener's own, since they presumably care about Y.

Instead of theorizing about the cause of the phenomenon, I think that the most useful follow ups to this post would be figuring out ways to better communicate "care less" messages and observing what actually happens if such messages are received. Even if one does not buy the premise that "care less" messaging is relaxing and therapeutic, it is important to have that in one's repertoire. And the first step towards that is having the concept clearly explained in a public way that one can point to, and that is the value of this post.


Comment by Jacob Falkovich (Jacobian) on Expressive Vocabulary · 2019-12-17T15:11:37.414Z · LW · GW

I feel like this post is missing an important piece.

When people say "chemicals" or "technology" they are very often not talking about the term in question, but communicating an emotional fact about themselves: "I am disgusted by foods that feel artificially produced", "I want you not to be distracted by devices during dinner". Coming up with better and more precise terms won't help at all, since the thing being communicated has little to do with the referent of the imprecise term.

You can notice this when the conversation switches from personal experience to a more general and technical discussion. If someone proposes a "ban on technology use in school", everyone will be quick to focus on what is actually in the category.

Comment by Jacob Falkovich (Jacobian) on What determines the balance between intelligence signaling and virtue signaling? · 2019-12-17T14:49:40.184Z · LW · GW

This is a great example. During the Cultural Revolution and similar periods (e.g., Stalinist Russia) you not only wanted to signal virtue above intelligence, you actively wanted to signal *lack* of intelligence as vigorously as you could. The intelligentsia are always suspect.

Comment by Jacob Falkovich (Jacobian) on A LessWrong Crypto Autopsy · 2019-12-10T19:55:58.634Z · LW · GW

I wrote about this post extensively as part of my essay on Rationalist self-improvement. The general idea of this post is excellent: gathering data for a clever natural experiment on whether Rationalists actually win. Unfortunately, the analysis itself is lacking and not particularly data-driven.

The core result is: 15% of SSC readers who were referred by LessWrong made over $1,000 in crypto, 3% made $100,000. These figures demand quantitative analysis: Is 15%/3% a lot or a little compared to matched groups like the Silicon Valley or Libertarian blogospheres? How good a proxy is Scott's selection for people who were on LessWrong when Bitcoin was launching and had the means to take advantage of the opportunity? How much of a consensus on LessWrong was the advice to buy cryptocurrencies? These are all questions that one could find data on (I did a bit of it in my own post), but the essay does no such thing. Scott declares by fiat that 15% earns the community a C grade, with very little justification provided. This conclusion aligns perfectly with what Scott had previously opined about the utility of Rationality for things like making money, which doesn't engender confidence in the objectivity of his evaluation.

The idea behind this essay is very admirable; one of the main things we fail to do enough as a community is test ourselves against real-world outcomes. And the fact that Scott gathered the data himself is laudable as well. But the essay is more a suggestion for a good research post than a finished work of analysis.

Comment by Jacob Falkovich (Jacobian) on The Intelligent Social Web · 2019-12-10T19:30:41.701Z · LW · GW

In my opinion, the biggest shift in the study of rationality since the Sequences were published was a change in focus from "bad math" biases (anchoring, availability, base rate neglect, etc.) to socially-driven biases. And with good reason: while a crash course in Bayes' Law can alleviate many of the issues with intuitive math, group politics are a deep and inextricable part of everything our brains do.

There has been a lot of great writing describing the issue, like Scott’s essays on ingroups and outgroups and Robin Hanson’s theory of signaling. There are excellent posts summarizing the problem of socially-driven bias on a high level, like Kevin Simler’s post on crony beliefs. But The Intelligent Social Web offers something that all of the above don’t: a lens that looks into the very heart of social reality, makes you feel its power on an immediate and intuitive level, and gives you the tools to actually manipulate and change your reaction to it.

Valentine’s structure of treating this as a “fake framework” is invaluable in this context. A high-level rigorous description of social reality doesn’t really empower you to do anything about it. But seeing social interactions as an improv scene, while not literally true, offers actionable insight.

The specific examples in the post hit very close to home for me, like the example of one’s family tugging a person back into their old role. I noticed that I quite often lose my temper around my parents, something that happens basically never around my wife or friends. I realized that much of it is caused by a role conflict with my father about who gets to be the “authority” on living well. I further recognized that my temper is triggered by “should” statements, even innocuous ones like “you should have the Cabernet with this dish” over dinner. Seeing these interactions through the lens of both of us negotiating and claiming our roles allowed me to control how I feel and react rather than being driven by an anger that I don’t understand the source of. An issue that I struggled with for years was mostly resolved after reading this post and thinking about it for a while.

The post’s focus on salient examples (family roles, the convert boyfriend, the white man’s role) also has a downside, in that it’s somewhat difficult to keep track of the main thrust of Valentine’s argument. The entire introductory section also does nothing to help the essay cohere; it makes claims about personal benefits Valentine has acquired by using this framework. These claims are neither substantiated nor explored further in the essay, and they are also unnecessary — the essay is compelling by the force of its insight and not by promising a laundry list of results.

Valentine does not go into detail about the reasons that people “need the scene to work” above all other considerations. This is for two reasons: the essay is long enough as it is, and the underlying structure is more speculative than established. I hope to see more people exploring this underlying structure as a follow up. I recommend Sarah Constantin’s look at abusive relationships through the lens of playing out familiar roles; I have also written an essay fitting Valentine’s idea into a broader framework of how predictive processing shapes how we think about identity and social interaction.

But again: The Intelligent Social Web didn’t just inspire me to write about ideas, it changed how I live my life. Whenever I feel a discordant emotion in a social interaction or have a goal that is thwarted I put on the framework of improv scenes and social roles to understand what is happening. And every time I reread the post after trying out the framework in real life, I glean more from it. If the post was slightly better structured and focused it could reach more readers, but it is already the most impactful thing I read on LessWrong in 2018.

Comment by Jacob Falkovich (Jacobian) on Is Rationalist Self-Improvement Real? · 2019-12-10T18:23:00.848Z · LW · GW

As I said, someone who is 100% in thrall to social reality will probably not be reading this. But once you peek outside the bubble there is still a long way to enlightenment: first learning how signaling, social roles, tribal impulses etc. shape your behavior so you can avoid their worst effects, then learning to shape the rules of social reality to suit your own goals. Our community is very helpful for getting the first part right, it certainly has been for me. And hopefully we can continue fruitfully exploring the second part too.

Comment by Jacob Falkovich (Jacobian) on Is Rationalist Self-Improvement Real? · 2019-12-10T16:10:47.739Z · LW · GW

Somewhat unrelated, but one can think of RSI as being a *meta* self-improvement approach — it's what allows you to pick and choose between many competing theories of self-improvement.

Aside from that, I didn't read the academic literature on TAPs before trying them out. I tried them out and measured how well they work for me, and then decided when and where to use them. Good Rationalist advice is to know when to read meta-analyses and when to run a cheap experiment yourself :)

Comment by Jacob Falkovich (Jacobian) on Is Rationalist Self-Improvement Real? · 2019-12-10T15:53:41.670Z · LW · GW

I have several friends in New York who are a match to my Rationalist friends in age, class, intelligence etc. and who:

  • Pick S&P 500 stocks based on CNBC and blogs because their intuition tells them they've beat the market (but they don't check or track it, just remember the winners).
  • Stay in jobs they hate because they don't have a robust decision process for making such a switch (I used goal factoring, Yoda timer job research, and decision matrices to decide where to work).
  • Go so back asswards about dating that it hurts to watch (because they can't think about it systematically).
  • Retweet Trump with comment.
  • Throw the most boring parties.
  • Spend thousands of dollars on therapists but would never do a half-hour debugging session with a friend because "that would be weird".
  • In general, live mostly within "social reality" where the only question is "is this weird/acceptable" and never "is this true/false".

Now perhaps Rationalist self-improvement can't help them, but if you're reading LessWrong you may be someone who can snap out of social reality long enough for Rationality to change your life significantly.

> if you want to propose some kind of rationalist self-help exercise that I should try

Different strokes for different folks. You can go through alkjash's Hammertime Sequence and pick one, although even there the one that he rates lowest (goal factoring) is the one that was the most influential in my own life. You must be friends with CFAR instructors/mentors who know your personality and pressing issues better than I do and can recommend and teach a useful exercise.


Comment by Jacob Falkovich (Jacobian) on Is Rationalist Self-Improvement Real? · 2019-12-10T05:12:14.034Z · LW · GW

Thank you for the detailed reply. I'm not going to reply point by point because you made a lot of points, but also because I don't disagree with a lot of it. I do want to offer a couple of intuitions that run counter to your pessimism.

While you're right that we shouldn't expect Rationalists to be 10x better at starting companies because of efficient markets, the same is not true of things that contribute to personal happiness. For example: how many people have a strong incentive in helping you build fulfilling romantic relationships? Not the government, not capitalism, not most of your family or friends, often not even your potential partners. Even dating apps make money when you *don't* successfully seduce your soulmate. But Rationality can be a huge help: learning that your emotions are information, learning about biases and intuitions, learning about communication styles, learning to take 5-minute timers to make plans — all of those can 10x your romantic life.

Going back to efficient markets, I get the sense that a lot of things out there are designed by the 1% most intelligent and ruthless people to take advantage of the 95% and their psychological biases. Outrage media, predatory finance, conspicuous brand consumption and other expensive status ladders, etc. Rationality doesn't help me design a better YouTube algorithm or finance scam, but at least it allows me to escape the 95% and keeps me away from outrage and in index funds.

Finally, I do believe that the world is getting weirder faster, and that thousands of years of human tradition are becoming obsolete at a faster pace. We are moving ever further from our "design specs". In this weirding world, I already hit the jackpot with Bitcoin and polyamory, two things that couldn't really exist successfully 100 years ago. Rationality guided me to both. You hit the jackpot with blogging — can you imagine your great-granduncle telling you that you'll become a famous intellectual by writing about cactus people and armchair sociology for free? And we're both still very young.

For any particular achievement, like basketball or making your first million, there are dedicated practices that will get you to your goal faster than Rationality. But for taking advantage of unknown unknowns, the only two things I know that work are Rationality and making friends.

Comment by Jacob Falkovich (Jacobian) on What determines the balance between intelligence signaling and virtue signaling? · 2019-12-09T17:21:27.822Z · LW · GW

Another idea is that intelligence is valued more when a society feels threatened by an outside force that it needs competent people to protect it from.

Building on this, virtue is valued more when a society is threatened from the inside. If people are worried about being betrayed or undermined by those who appear to be part of their tribe, they will look for virtue signals. We see this a lot in the high correlation of virtue signaling with signals of ingroup loyalty, while intelligence signaling often takes the shape of disagreeing with the group.

In general, an outside threat or goal allows people to measure themselves against it. Status is set by the number of enemy scalps one collects, for example. But without an external measuring stick, people will jockey for relative status by showing loyalty and virtue.