Posts

Selfless Dating 2022-01-05T17:44:42.060Z
10 Reasons You’re Lazy About Dating 2022-01-05T17:43:02.867Z
Consensual Hostility 2022-01-05T17:40:48.752Z
Sex Versus 2022-01-05T17:38:12.266Z
Money Stuff 2021-11-01T16:08:02.700Z
The Anxious Philosopher King 2021-11-01T16:05:25.660Z
Easily Top 20% 2021-09-29T18:24:06.281Z
Sex-Positivity is Countercultural 2021-09-29T18:22:35.007Z
Don't Sell Your Soul 2021-04-06T19:02:55.391Z
Monastery and Throne 2021-04-06T19:00:52.623Z
Above the Narrative 2021-02-25T04:37:21.587Z
Confirmation Bias in Action 2021-01-24T17:38:08.873Z
Review: LessWrong Best of 2018 – Epistemology 2020-12-28T04:32:50.070Z
My Model of the New COVID Strain and US Response 2020-12-27T04:34:23.796Z
What trade should we make if we're all getting the new COVID strain? 2020-12-25T07:09:41.026Z
The Good Life Quantified 2020-12-11T17:52:21.219Z
How I Write 2020-12-02T23:17:28.742Z
SubOnlyStackFans 2020-11-03T02:33:00.091Z
The Treacherous Path to Rationality 2020-10-09T15:34:17.490Z
Against Victimhood 2020-09-18T01:58:31.041Z
On Suddenly Not Being Able to Work 2020-08-25T22:14:45.747Z
Writing with GPT-3 2020-07-24T15:22:46.729Z
Kelly Bet on Everything 2020-07-10T02:48:12.868Z
Fight the Power 2020-06-22T02:19:39.042Z
Do Women Like Assholes? 2020-06-22T02:14:43.503Z
TFW No Incels 2020-05-03T16:56:24.278Z
Sex, Lies, and Canaanites 2020-04-23T16:26:10.458Z
The Origin of Consciousness Reading Companion, Part 1 2020-04-06T22:07:35.190Z
The Great Annealing 2020-03-30T01:08:24.268Z
Tales From the Borderlands 2020-03-25T19:11:48.373Z
Seeing the Smoke 2020-02-28T18:26:58.839Z
The Skewed and the Screwed: When Mating Meets Politics 2020-01-29T15:50:31.681Z
Go F*** Someone 2020-01-15T18:39:33.080Z
100 Ways To Live Better 2019-12-31T20:23:12.039Z
Is Rationalist Self-Improvement Real? 2019-12-09T17:11:03.337Z
Genesis 2019-11-14T16:20:47.508Z
Aella on Rationality and the Void 2019-10-31T21:40:52.042Z
Polyamory is Rational(ist) 2019-10-18T16:48:52.990Z
Interview with Aella, Part I 2019-09-19T14:05:18.523Z
Predictable Identities - Midpoint Review 2019-09-12T14:39:44.348Z
Unstriving 2019-08-19T14:31:56.786Z
Jacob's Twit, errr, Shortform 2019-08-17T23:49:43.993Z
Diana Fleischman and Geoffrey Miller - Audience Q&A 2019-08-10T22:37:53.090Z
Cephaloponderings 2019-08-04T16:45:57.065Z
Interview With Diana Fleischman and Geoffrey Miller 2019-07-16T01:34:26.156Z
PlayStation Odysseys 2019-07-01T17:41:52.499Z
Podcast - Putanumonit on The Switch 2019-06-23T04:09:25.723Z
Get Rich Real Slowly 2019-06-10T17:51:32.654Z
Lonelinesses 2019-05-31T13:55:55.135Z
Thinking Fast and Hard 2019-05-13T19:58:34.089Z

Comments

Comment by Jacob Falkovich (Jacobian) on Kelly Bet on Everything · 2022-01-22T07:17:41.741Z · LW · GW

This is a useful clarification. I normally use "edge" to include both the difference between the probabilities of winning and losing and the payout ratio. I think this usage is intuitive: if you're betting at 5:1 odds on rolls of a six-sided die, no one would say they have a 66.7% "edge" from guessing that a particular number will NOT come up, even though that guess wins 5/6 of the time — it's clear that the payout ratio offsets the probability ratio.

Anyway, I don't want to clunk up the explanation so I just added a link to the precise formula on Wikipedia. If this essay gets selected on condition that I clarify the math, I'll make whatever edits are needed.
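The die example can be sketched with the standard Kelly fraction f* = p − q/b (a minimal illustration of the formula linked above, not code from the original comment):

```python
def kelly_fraction(p: float, b: float) -> float:
    """Kelly fraction f* = p - q/b.

    p: probability of winning the bet
    b: net payout per unit staked (e.g. 5 for 5:1 odds)
    """
    q = 1.0 - p
    return p - q / b

# Betting that a specific number will NOT come up on a fair six-sided die:
# you win with probability 5/6, but fair odds pay only 1/5 per unit staked.
no_show = kelly_fraction(5 / 6, 1 / 5)  # 0.0 -- no edge despite an ~83% win rate

# Betting that a specific number WILL come up, paid at a generous 6:1:
show = kelly_fraction(1 / 6, 6)         # positive -- the payout creates the edge
```

The zero in the first case is the point: a high win probability alone is not an "edge" once the payout ratio is accounted for.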

Comment by Jacob Falkovich (Jacobian) on Omicron Post #13: Outlook · 2022-01-11T02:57:34.586Z · LW · GW

I feel like I don't have a good sense of what China is trying to do by locking down millions of people for weeks at a time and how they're modeling this. Some possibilities:

  • They're just looking to keep a lid on things until the Chinese New Year (2/1) and the Olympics (2/4 - 2/20) at which point they'll relax restrictions and just try to flatten the top in each city.
  • They legit think they're going to keep omicron contained forever (or until omicron-targeting vaccines?) and will lock down hard wherever it pops out.
  • Everyone knows they're merely delaying the inevitable, but "zero COVID" is now the official party line and no one at any level of governance can ever admit it, so by the spring they're likely to have both draconian lockdowns and exponential omicron.

Ironically, if the original SARS-COV-2 looked like a bioweapon targeted at the west (which wasn't disciplined enough about lockdowns), omicron really looks like a bioweapon targeted at China (which is too disciplined about even hopeless lockdowns).

Comment by Jacob Falkovich (Jacobian) on Selfless Dating · 2022-01-10T08:40:31.158Z · LW · GW

I was in a few long-term relationships in my early twenties, when I myself wasn't mature/aware enough for selfless dating. Then, after a very explicit, rules-based 4-year relationship ended, I went on about 25 first dates in the space of about a year before meeting my wife. Basically all of those 25 didn't work because of a lack of mutual interest, not because we both tried to make it a long-term thing but failed to hunt stag.

If I were single today, I would date not through OkCupid as I did back in 2014 but through the intellectual communities I'm part of now. And with the sort of women I would like to date in these communities, I would certainly talk about things like selfless dating (and dating philosophy in general) on a first date. Of course, I am unusually blessed in the communities I'm part of (including Rationality).

A lot of my evidence comes from hearing other people's stories, both positive and negative. I've been writing fairly popular posts on dating for half a decade now, and I've had both close friends and anonymous online strangers in the dozens share their dating stories and struggles with me. For people who seem generally in a good place to go in the selfless direction the main pitfalls seem to be insecurity spirals and forgetting to communicate.

The former is when people are unable to give their partner the benefit of the doubt on a transgression, which usually stems from their own insecurity. Then they act more selfishly themselves, which causes the partner to be more selfish in turn, and the whole thing spirals.

The latter is when people who hit a good spot stop talking about their wants and needs. As those change they end up with a stale model of each other. Then they inevitably end up making bad decisions and don't understand why their idyll is deteriorating.

To address your general tone: I am lucky in my dating life, and my post (as I wrote in the OP itself) doesn't by itself constitute enough evidence for an outside-view update that selfless relationships are better. If this speaks to you intuitively, hopefully this post is an inspiration. If it doesn't, hopefully it at least informs you of an alternative. But my goal isn't to prove anything to a rationalist standard, in part because I think this way of thinking is not really helpful in the realm of dating where every person's journey must be unique.

Comment by Jacob Falkovich (Jacobian) on Sex Versus · 2022-01-10T08:22:08.039Z · LW · GW

As a note, I've spoken many times about the importance of having empathy for romanceless men because they're a common punching bag and have written about incel culture specifically. The fact that the absolute worst and most aggravating commenters on my blog identify as incels doesn't make me anti-incel, it just makes me anti those commenters.

Comment by Jacob Falkovich (Jacobian) on Money Stuff · 2021-11-15T05:16:19.713Z · LW · GW

I should've written "capitulated to consumerism" but "capitulate to capital" just sounds really cool if you say it out loud.

Comment by Jacob Falkovich (Jacobian) on Jacob's Twit, errr, Shortform · 2021-11-15T04:53:51.974Z · LW · GW

"Bitcoin" comes from the old Hebrew "Beit Cohen", meaning "house of the priest" or "temple". Jesus cleansed the temple in Jerusalem by driving out the money lenders. The implications of this on Bitcoin vis a vis interchangeable fiat currencies are obvious and need no elaboration.

The full text of John 2 proves this connection beyond any doubt. "He scattered the changer's coins and overthrew their tables" (John 2:15) refers to overthrowing the database tables of the centralized ledger and "scattering" the record of transactions among decentralized nodes on the blockchain.

"Many people saw the signs he was performing and believed in his name. But Jesus would not entrust himself to them." (John 2:23-24) couldn't be any clearer as to the identity of Satoshi Nakamoto.

And finally, "He did not need any testimony about mankind, for he knew what was in each person." (John 2:25) explains that there's no need for the testimony of any trusted counterparty when you can see what's in each person's submitted block of Bitcoin transactions.

And if you think of shorting Bitcoin, remember John 2:19: "Jesus answered them, 'Destroy this temple, and I will raise it again in three days.'"

Comment by Jacob Falkovich (Jacobian) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-30T17:07:57.441Z · LW · GW

The ordering is based on measures of neuro-correlates of the level of consciousness like neural entropy or perturbational complexity, not on how groovy it subjectively feels.

Comment by Jacob Falkovich (Jacobian) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-30T02:05:07.739Z · LW · GW

Copying from my Twitter response to Eliezer

Anil Seth usefully breaks down consciousness into 3 main components: 
1. level of consciousness (anesthesia < deep sleep < awake < psychedelic)
2. contents of consciousness (qualia — external, interoceptive, and mental)
3. consciousness of the self, which can further be broken down into components like feeling ownership of a body, narrative self, and a 1st person perspective. 

He shows how each of these can be quite independent. For example, the selfhood of body-ownership can be fucked with using rubber arms and mirrors, narrative-self breaks with amnesia, 1st person perspective breaks in out-of-body experiences which can be induced in VR, even the core feeling of the reality of self can be meditated away. 

Qualia such as pain are also very contextual: the same physical sensation can be interpreted positively in the gym or a BDSM dungeon, and as acute suffering if it's unexpected and believed to be caused by injury. Being a self, or thinking about yourself, is also just another perception — a product of your brain's generative model of reality — like color or pain are. I believe enlightened monks who say they experience selfless bliss, and I think it's equally likely that chickens experience selfless pain.

Eliezer seems to believe that self-reflection or some other component of selfhood is necessary for the existence of the qualia of pain or suffering. A lot of people believe this simply because they use the word "consciousness" to refer to both (and 40 other things besides). I don't know if Eliezer is making such a basic mistake, but I'm not sure why else he would believe that selfhood is necessary for suffering.

Comment by Jacob Falkovich (Jacobian) on The LessWrong Team is now Lightcone Infrastructure, come work with us! · 2021-10-01T17:34:17.571Z · LW · GW

The "generalist" description is basically my dream job right until 

>The team is in Berkeley, California, and team members must be here full-time.

Just yesterday I was talking to a friend who wants to leave his finance job to work on AI safety and one of his main hesitations is that whichever organization he joins will require him to move to the Bay. It's one thing to leave a job, it's another to leave a city and a community (and a working partner, and a house, and a family...)

This also seems somewhat inefficient in terms of hiring. There are many qualified AI safety researchers and Lightcone-aligned generalists in the Bay, but there are surely even more outside it. So all the Bay-based orgs are competing for the same people, all complaining about being talent-constrained above anything else. At the same time, NYC, Austin, Seattle, London, etc. are full of qualified people with nowhere to apply.

I'm actually not suggesting you should open this particular job to non-Berkeley people. I want to suggest something even more ambitious. NYC and other cities are crying out for a salary-paying organization that will do mission-aligned work and would allow people to change careers into this area without uprooting their entire lives, potentially moving on to other EA organizations later. Given that a big part of Lightcone's mission is community building, having someone start a non-Bay office could be a huge contribution that will benefit the entire EA/Rationality ecosystem by channeling a lot of qualified people into it. 

And if you decide to go that route you'll probably need a generalist who knows people...

Comment by Jacob Falkovich (Jacobian) on Sex-Positivity is Countercultural · 2021-10-01T17:08:20.249Z · LW · GW

I'm not sure what's wrong, it works for me. Maybe change the https to http?
https://quillette.com/2021/05/13/the-sex-negative-society/

Googling "sex negative society quillette" should bring it up in any case.

Comment by Jacob Falkovich (Jacobian) on A review of Steven Pinker's new book on rationality · 2021-09-29T18:19:41.086Z · LW · GW

> rationality is not merely a matter of divorcing yourself from mythology. Of course, doing so is necessary if we want to seek truth...

I think there's a deep error here, one that's also present in the sequences. Namely, the idea that "mythology mindset" is something one should or can just get rid of, a vestige of silly stories told by pre-enlightenment tribes in a mysterious world.

I think the human brain does "mythological thinking" all the time, and it serves an important individual function of infusing the world with value and meaning alongside the social function of binding a tribe together. Thinking that you can excise mythological thinking from your brain only blinds you to it. The paperclip maximizer is a mythos, and the work it does in your mind of giving shape and color to complex ideas about AGI is no different from the work Biblical stories do for religious people. "Let us for the purpose of thought experiment assume that in the land of Uz lived a man whose name was Job and he was righteous and upright..."

The key to rationality is recognizing this type of thinking in yourself and others as distinct from Bayesian thinking. It's the latter that's a rare skill that can be learned by some people in specialized dojos like LessWrong. When you really need to get the right answer to a reality-based question you can keep the mythological thinking from polluting the Bayesian calculation — if you're trained at recognizing it and haven't told yourself "I don't believe in myths".

Comment by Jacob Falkovich (Jacobian) on Are PS5 scalpers actually bad? · 2021-05-18T17:50:56.682Z · LW · GW

PS5 scalpers redistribute consoles away from those willing to burn time to those willing to spend money. Normally this would be a positive — time burned is just lost, whereas the money is just transferred from Sony to the scalpers who wrote the quickest bot. However, you can argue that gaming consoles in particular are more valuable to people with a lot of spare time to burn than to people with day jobs and money!

Disclosure: I'm pretty libertarian and have a full-time job, but because there weren't any good exclusives in the early months I decided to ignore the scalpers. I followed https://twitter.com/PS5StockAlerts and got my console at base price in April, just in time for Returnal. Returnal is excellent and worth getting the PS5 for even if it costs you a couple of hours or an extra $100.

Comment by Jacob Falkovich (Jacobian) on MIRI location optimization (and related topics) discussion · 2021-05-09T01:32:20.673Z · LW · GW

Empire State of Mind

I want to second Daniel and Zvi's recommendation of New York culture as an advantage for Peekskill. An hour away from NYC is not so different from being in NYC — I'm in a pretty central part of Brooklyn and regularly commute an hour to visit friends uptown or further east in BK and Queens. An hour in traffic sucks, an hour on the train is pleasant. And being in NYC is great. 

A lot of the Rationalist-adjacent friends I made online in 2020 have either moved to NYC in the last couple of months or are thinking about it, as rents have dropped up to 20% in some neighborhoods and everyone is eager to rekindle their social life. New York is also a vastly better dating market for male nerds given a slightly female-majority sex ratio and thousands of the smartest and coolest women on the planet as compared to the male-skewed and smaller Bay Area.  

Peekskill is also 2 hours from Philly and 3 from Boston, which is not too much for a weekend trip. That could make it the Schelling point for East Coast megameetups/conferences/workshops since it's as easy to get to as NYC and a lot cheaper to rent a giant AirBnB in.

Won't Someone Think of the Children

I love living in Brooklyn, but the one thing that could make us move in the next year or two is a community of my tribe that are willing to help each other with childcare, from casual babysitting to homeschooling pods. I'm keenly following the news of where Rationalist groups are settling, especially those who plan to (like us) or already have kids. A critical mass of Rationalist parents in Peekskill may be enticing enough for us to move there, since we could have the combined benefits of living space, proximity to NYC, and the community support we would love.

Comment by Jacob Falkovich (Jacobian) on Monastery and Throne · 2021-04-09T18:55:23.058Z · LW · GW

I don't think that nudgers are consequentialists who also try to accurately account for public psychology. I think 99% of the time they are doing something for non-consequentialist reasons and using public psychology as a rationalization. Ezra Klein pretty explicitly cares about advancing various political factions above mere policy outcomes; IIRC, on a recent 80,000 Hours podcast Rob was trying to talk about outcomes and Klein ignored him to say that it's bad politics.

Comment by Jacob Falkovich (Jacobian) on Politics is way too meta · 2021-03-17T22:13:30.293Z · LW · GW

I understand, I think we have an honest disagreement here. I'm not saying that the media is cringe in an attempt to make it so, as a meta move. I honestly think that the current prestige media establishment is beyond reform, a pure appendage of power. Its impact can grow weaker or stronger, but it will not acquire honesty as a goal (and in fact, seems to be giving up even on credibility).

In any case, this disagreement is beyond the scope of your essay. What I learn from it is to be more careful of calling things cringe or whatever in my own speech, and to see this sort of thing as an attack on the social reality plane rather than an honest report of objective reality.

Comment by Jacob Falkovich (Jacobian) on Politics is way too meta · 2021-03-17T21:15:07.571Z · LW · GW

Other people have commented here that journalism is in the business of entertainment, or in the business of generating clicks etc. I think that's wrong. Journalism is in the business of establishing the narrative of social reality. Deciding what's a gaffe and who's winning, who's "controversial" and who's "respected", is not a distraction from what they do. It's the main thing.

So it's weird to frame this as "politics is way too meta". Too meta for whom? Politicians care about being elected, so everything they say is by default simulacrum level 3 and up. Journalists care about controlling the narrative, so everything they say is by default simulacrum level 3 and up. They didn't aim at level 1 and miss, they only brush against level 1 on rare occasion, by accident.

Here are some quotes from our favorite NY Times article, Silicon Valley's Safe Space:

> the right to discuss contentious issues

> The ideas they exchanged were often controversial

> even when those words were untrue or could lead to violence

> sometimes spew hateful speech

> step outside acceptable topics

> turned off by the more rigid and contrarian beliefs

> his influential, and controversial, writings

> push people toward toxic beliefs

These aren't accidental. Each one of the loaded words just means "I think this is bad, and you better follow me". They're the entire point of the article — to make it so that it's social reality to think that Scott is bad.

So I think there are two takeaways here. One is for people like us, EAs discussing charity impact or Rationalists discussing life-optimization hacks. The takeaway for us is to spend less time writing about the meta and more about the object level. And then there's a takeaway about them, journalists and politicians and everyone else who lives entirely in social reality. And the takeaway is to understand that almost nothing they say is about objective reality, and that's unlikely to change.

Comment by Jacob Falkovich (Jacobian) on Above the Narrative · 2021-03-02T02:37:35.101Z · LW · GW

I agree that advertising revenue is not an immediate driving force, something like "justifying the use of power by those in power" is much closer to it and advertising revenue flows downstream from that (because those who are attracted to power read the Times).

I loved the rest of Viliam's comment though, it's very well written and the idea of the eigen-opinion and being constrained by the size of your audience is very interesting.

Comment by Jacob Falkovich (Jacobian) on Jacob's Twit, errr, Shortform · 2021-01-29T07:25:58.234Z · LW · GW

Here's my best model of the current GameStop situation, after nerding out about it for two hours with smart friends. If you're enjoying the story as a class warfare morality play you can skip this, since I'll mostly be talking finance. I may look really dumb or really insightful in the next few days, but this is a puzzle I wanted to figure out. I'm making this public so posterity can judge my epistemic rationality skillz — I don't have a real financial stake either way.

Summary: The longs are playing the short game, the shorts are playing the long game.

At $300, GameStop is worth about $21B. A month ago it was worth $1B, so there's $20B at stake between the long-holders and short sellers.

Who's long right now? Some combination of WSBers on a mission, FOMOists looking for a quick buck, and institutional money (i.e., other hedge funds). The WSBers don't know fear, only rage and loss aversion. A YOLOer who bought at $200 will never sell at $190, only at $1 or the moon. FOMOists will panic but they're probably a majority and today's move shook them off. The hedgies care more about risk, they may hedge with put options or trust that they'll dump the stock faster than the retail traders if the line breaks.

The interesting question is who's short. Shorts can probably expect to need a margin equal to ~twice the current share price, so anyone who shorted too early or for 50% of their bankroll (like Melvin and Citron) got squeezed out already. But if you shorted at $200 and for 2% of your bankroll you can hold for a long time. The current borrowing fee is 31% APR, or just 0.1% a day. I think most of the shorts are in the latter category, here's why:

Short interest has stayed at 71M shares even as this week saw more than 500M shares change hands. I think this means that new shorts are happy to take the places of older shorts who cash out, they're only constrained by the fact that ~71M are all that's available to borrow. Naked shorts aren't really a thing, forget about that. So everyone short $GME now is short because they want to be, if they wanted to get out they could. In a normal short squeeze the available float is constrained, but this hasn't really happened with $GME.

WSBers can hold the line but can't push higher without new money that would take some of these 71M shares out of borrowing circulation or who will push the price up so fast the shorts will get margin-called or panic. For the longs to win, they probably need something dramatic to happen soon.

One dramatic thing that could happen is that people who sold the huge amount of call options expiring Friday aren't already hedged and will need to buy shares to deliver. It's unclear if that's realistic, most option sellers are market makers who don't stay exposed for long. I don't think there were options sold above the current price of $320, so there's no gamma left to squeeze.

I think $GME getting taken off retail brokerages really hurt the WSBers. It didn't cause panic, but it slowed the momentum they so dearly needed and scared away FOMOists. By the way, I don't think brokers did it to screw with the small people, they're their clients after all. It just became too expensive for brokerages to make the trade because they need to post clearing collateral for two days. They were dumb not to anticipate this, but I don't think they were bribed by Citadel or anything.

For the shorts to win they just need to wait it out and not get over-greedy. Eventually the longs will either get bored or turn on each other — with no squeeze this becomes just a pyramid scheme. If the shorts aren't knocked out tomorrow morning by a huge flood of FOMO retail buys, I think they'll win over the next weeks.
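The cost-of-carry arithmetic behind "you can hold for a long time" can be put in numbers (a back-of-the-envelope sketch using the figures from this comment, assuming simple daily accrual of the borrow fee):

```python
# Rough cost of holding a GME short, using the comment's figures:
# a 31% annualized borrow fee on a short sized at 2% of bankroll.
borrow_apr = 0.31
daily_fee = borrow_apr / 365            # ~0.00085 of position value per day

position_frac = 0.02                    # short is 2% of bankroll
days_held = 30
carry_cost = position_frac * daily_fee * days_held
# ~0.0005 of bankroll per month -- cheap enough to wait out the longs.
```

At roughly five basis points of bankroll per month, a small, well-margined short can outlast a rally far longer than an over-sized one like Melvin's.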

Comment by Jacob Falkovich (Jacobian) on Is Rationalist Self-Improvement Real? · 2021-01-10T18:56:25.881Z · LW · GW

This is a self-review, looking back at the post after 13 months.

I have made a few edits to the post, including three major changes:
1. Sharpening my definition of what counts as "Rationalist self-improvement" to reduce confusion. This post is about improved epistemics leading to improved life outcomes, which I don't want to conflate with some CFAR techniques that are basically therapy packaged for skeptical nerds.
2. Addressing Scott's "counterargument from market efficiency" that we shouldn't expect to invent easy self-improvement techniques that haven't been tried.
3. Talking about selection bias, which was the major part missing from the original discussion. My 2020 post The Treacherous Path to Rationality is somewhat of a response to this one, concluding that we should expect Rationality to work mostly for those who self-select into it and that we'll see limited returns to trying to teach it more broadly.

The past 13 months also provided more evidence in favor of epistemic Rationality being ever more instrumentally useful. In 2020 I saw a few Rationalist friends fund successful startups and several friends cross the $100k mark for cryptocurrency earnings. And of course, LessWrong led the way on early and accurate analysis of most COVID-related things. One result of this has been increased visibility and legitimacy, and of course another is that Rationalists have a much lower number of COVID cases than all other communities I know.

In general, this post is aimed at someone who discovered Rationality recently but is lacking the push to dive deep and start applying it to their actual life decisions. I think the main point still stands: if you're Rationalist enough to think seriously about it, you should do it.

Comment by Jacob Falkovich (Jacobian) on What trade should we make if we're all getting the new COVID strain? · 2021-01-04T17:22:32.987Z · LW · GW

Trade off to a promising start :P
 

Comment by Jacob Falkovich (Jacobian) on Review: LessWrong Best of 2018 – Epistemology · 2021-01-01T03:10:03.656Z · LW · GW

There's a whole lot to respond to here, and it may take the length of Surfing Uncertainty to do so. I'll point instead to one key dimension.

You're discussing PP as a possible model for AI, whereas I posit PP as a model for animal brains. The main difference is that animal brains are evolved and occur inside bodies.

Evolution is the answer to the dark room problem. You come with prebuilt hardware that is adapted to a certain niche, which is equivalent to modeling it. Your legs are a model of the shape of the ground and the size of your evolutionary territory. Your color vision is a model of berries in a bush, and of your fingers that pick them. Your evolved body is a hyperprior you can't update away. In a sense, you're predicting all the things that are adaptive: being full of good food, in the company of allies and mates, being vigorous and healthy, learning new things. Lying hungry in a dark room creates a persistent error in your highest-order predictive models (the evolved ones) that you can't change.

Your evolved prior supposes that you have a body, and that the way you persist over time is by using that body. You are not a disembodied agent learning things for fun or getting scored on some limited test of prediction or matching. Everything your brain does is oriented towards acting on the world effectively. 

You can see that perception and action rely on the same mechanism in many ways, starting with the simple fact that when you look at something you don't receive a static picture, but rather constantly saccade and shift your eyes, contract and expand your pupil and cornea, move your head around, and also automatically compensate for all of this motion. None of this is relevant to an AI who processes images fed to it "out of the void", and whose main objective function is something other than maintaining homeostasis of a living, moving body.

Zooming out, Friston's core idea is a direct consequence of thermodynamics: for any system (like an organism) to persist in a state of low entropy (e.g. 98°F) in an environment that is higher entropy but contains some exploitable order (e.g. calories aren't uniformly spread in the universe but concentrated in bananas), it must exploit this order. Exploiting it is equivalent to minimizing surprise, since if you're surprised, there's some pattern of the world that you failed to make use of (free energy).

Now apply this basic principle to your genes persisting over an evolutionary time scale and to your body persisting over a time scale of decades, and this sets the stage for PP applied to animals.

For more, here's a conversation between Clark, Friston, and an information theorist about the Dark Room problem.

Comment by Jacob Falkovich (Jacobian) on Review: LessWrong Best of 2018 – Epistemology · 2020-12-30T17:49:13.930Z · LW · GW

Off the top of my head, here are some new things it adds:


1. You have 3 ways of avoiding prediction error: updating your models, changing your perception, acting on the world. Those are always in play and you often do all three in some combination (see my model of confirmation bias in action).
2. Action is key, and it shapes and is shaped by perception. The map you build of any territory is prioritized and driven by the things you can act on most effectively. You don't just learn "what is out there" but "what can I do with it".
3. You care about prediction over the lifetime scale, so there's an explore/exploit tradeoff between potentially acquiring better models and sticking with the old ones.
4. Prediction goes from the abstract to the detailed. You perceive specifics in a way that aligns with your general model, rarely in contradiction.
5. Updating always goes from the detailed to the abstract. It explains Kuhn's paradigm shifts but for everything — you don't change your general theory and then update the details, you accumulate error in the details and then the general theory switches all at once to slot them into place.
6. In general, your underlying models are a distribution but perception is always unified, whatever your leading model is. So when perception changes it does so abruptly.
7. Attention is driven in a Bayesian way, to the places that are most likely to confirm/disconfirm your leading hypothesis, balancing the accuracy of perceiving the attended detail correctly and the leverage of that detail to your overall picture.
8. Emotions through the lens of PP.
9. Identity through the lens of PP.
10. The above is fractal, applying at all levels from a small subconscious module to a community of people.
 

Comment by Jacob Falkovich (Jacobian) on What trade should we make if we're all getting the new COVID strain? · 2020-12-30T02:51:31.860Z · LW · GW

The new strain has been confirmed in the US and the vaccine rollout is still sluggish and messed up, so the above are in effect. The trades I made so far are buying out-of-the-money calls on VXX (volatility) and puts on USO (oil) and JETS (airlines) all for February-March. I'll hold until the market has a clear, COVID related drop or until these options all expire worthless and I take the cap gains write-off. And I'm HODLing all crypto although that's not particularly related to COVID. I'm not in any way confident that this is wise/useful, but people asked.

Comment by Jacob Falkovich (Jacobian) on My Model of the New COVID Strain and US Response · 2020-12-27T17:07:18.781Z · LW · GW

I don't think it was that easy to get to the saturated end with the old strain. As I remember, the chance of catching COVID from a sick person in your household was only around 20-30%, and at superspreader events it was still just a small minority of total attendees that were infected.

Comment by Jacob Falkovich (Jacobian) on What trade should we make if we're all getting the new COVID strain? · 2020-12-25T20:28:11.939Z · LW · GW

The VXX is basically at multi-year lows right now, so one of the following is true:
1. Markets think that the global economy is very calm and predictable right now.
2. I'm misunderstanding an important link between "volatility = unpredictability of world economics" and "volatility = premium on short-term SP500 options".

Comment by Jacob Falkovich (Jacobian) on What trade should we make if we're all getting the new COVID strain? · 2020-12-25T20:10:10.409Z · LW · GW

Some options and their 1-year charts:
JETS - Airline ETF
XLE - Energy and oil company ETF
AWAY - Travel tech (Expedia, Uber) ETF
Which would you buy put options on, and with what expiration?

Comment by Jacob Falkovich (Jacobian) on How Lesswrong helped me make $25K: A rational pricing strategy · 2020-12-25T03:43:59.382Z · LW · GW

Those are good points. I think competition (real and potential) is always at least worth considering in any question of business, and I was surprised the OP didn't even mention it. But yes, I can imagine situations where you operate with no relevant competition.

But this again would make me think that pricing, and the story you tell a client, is strictly secondary to finding these potential clients in the first place. If they were the sort of people who go out seeking help you'd have competition, so that means you have to find people who don't advertise their need. That seems to be the main thing the author is doing, and the main value they're providing: finding people who need recruitment help and don't realize it.

Comment by Jacob Falkovich (Jacobian) on How Lesswrong helped me make $25K: A rational pricing strategy · 2020-12-22T19:03:55.826Z · LW · GW

This pricing makes sense if your only competition is your client just going at it by themselves, in which case you clearly demonstrate that you offer a superior deal. But job seekers have a lot of consultants/agencies/headhunters they can turn to and I'd imagine your price mostly depends on the competition. In the worst case, you not only lose good clients to cheaper competition, but get an adverse selection of clients who would really struggle to find a job in 22 weeks and so your services are cheap/free for them.

Comment by Jacob Falkovich (Jacobian) on The Curse Of The Counterfactual · 2020-12-16T00:57:19.791Z · LW · GW

This statement for example:
> Motivating you to punish things is what that part of your brain does, after all; it’s not like it can go get another job!

I'm coming more from a predictive processing / bootstrap learning / constructed emotion paradigm in which your brain is very flexible about building high-level modules like moral judgment and punishment. The complex "moral brain" that you described is not etched into our hardware and it's not universal, it's learned. This means it can work quite differently or be absent in some people, and in others it can be deconstructed or redirected — "getting another job" as you'd say.

I agree that in practice lamenting the existence of your moral brain is a lot less useful than dissolving self-judgment case-by-case. But I got a sense from your description that you see it as universal and immutable, not as something we learned from parents/peers and can unlearn.

P.S.
Personal bias alert — I would guess that my own moral brain is perhaps in the 5th percentile of judginess and desire to punish transgressors. I recently told a woman about EA and she was outraged about young people taking it on themselves to save lives in Africa when billionaires and corporations exist who aren't helping. It was a clear demonstration of how different people's moral brains are.

Comment by Jacob Falkovich (Jacobian) on The Curse Of The Counterfactual · 2020-12-14T00:13:47.541Z · LW · GW

I've come across a lot of discussion recently about self-coercion, self-judgment, procrastination, shoulds, etc. Having just read it, I think this post is unusually good at offering a general framework applicable to many of these issues (i.e., that of the "moral brain" taking over). It's also peppered with a lot of nice insights, such as why feeling guilty about procrastination is in fact moral licensing that enables procrastination.

While there are many parts of the post that I quibble with (such as the idea of the "moral brain" as an invariant specialized module), it is a great standalone introduction and explanation of a framework that I think is useful and important.

Comment by Jacob Falkovich (Jacobian) on Blackmail · 2020-12-13T23:15:05.820Z · LW · GW

But if evidence of that regrettable night is all over the internet, that is much worse. You then likely have a lot of other regrettable nights. College acceptances are rescinded, jobs lost.

I have a major quibble with this prediction. Namely my model is that the regrettability of nights, and moral character of people, is always graded on a curve, not absolutely.

Colleges still need to admit students. Employers still need employees. In a world where everyone smokes weed in high school but this is known about only 5% of students, it makes sense for jobs and colleges to exclude weed-smokers. But if 80% of people are known to have smoked weed (or had premarital sex, or shoplifted from CVS, or gotten into a fight), then it stops being a big deal. 

An example from the other side would be cheating on your spouse: by some accounts half of us do it, but a lot fewer than half are publicly exposed for it. So today this still carries a huge stigma, but in a world where every cheater was being blackmailed, one of the main effects would be that cheating on a spouse would cease to be seen as an irredeemable sin.

Comment by Jacob Falkovich (Jacobian) on Book Summary: Consciousness and the Brain · 2020-12-13T20:38:55.794Z · LW · GW

The GNW theory has been kicking about for at least two decades, and this book was published in 2014. Given this, it is almost shocking that the idea wasn't written up on LW before, considering its centrality to any understanding of rationality. Shocking but perhaps fortunate, since Kaj has given it a thorough and careful treatment that enables the reader both to understand the idea and to evaluate its merits (and almost certainly to save the purchase price of the book).

First, on GNW itself. A lot of the early writing on rationality used the simplified system 1 / system 2 abstraction as the central concept. GNW puts actual meat on this skeleton, describing exactly what unconscious (formerly known as system 1) processes can and can't do, how they learn, and under what conditions consciousness comes into play. Kaj elaborates more on system 2 in another post, but this review offers enough to reframe the old model in GNW-terms — a reframing that I've been convinced is more accurate and meaningful.

As for the post itself, its main strength and weakness is that it's very long. The length is not due to fluff: I've compiled my own summary of this post in Roam that runs more than 1,000 words, with almost every paragraph worthy of inclusion. But perhaps, in particular for purposes of a book, the post could more fruitfully be broken up into two parts: one to describe the GNW model and its implications, and one to cover the experimental evidence for the model and its reliability. The latter takes up almost half of the post by volume, and while it is valuable, the former could perhaps stand alone as a worthwhile article (with a reference to a discussion of the experiments so people can assess whether they buy it).

Comment by Jacob Falkovich (Jacobian) on Is Rationalist Self-Improvement Real? · 2020-12-12T22:36:25.433Z · LW · GW

D'oh. I'm dumb.

Comment by Jacob Falkovich (Jacobian) on Is Rationalist Self-Improvement Real? · 2020-12-11T17:56:52.691Z · LW · GW

EDIT: The Treacherous Path was published in 2020 so never mind.

Thank you (and to alkjash) for the nomination! 

I guess I'm not supposed to nominate things I wrote myself, but this post, if published, should really be read along with The Treacherous Path to Rationality. I hope someone nominates that too.

This post is an open invitation to everyone (such as the non-LWers who may read the books) to join us. The obvious question is whether this actually works for everyone, and the latter post makes the case for the opposite mood. I think that in conjunction they offer a much more balanced take on who and what applied rationality is good for.

Comment by Jacob Falkovich (Jacobian) on How I Write · 2020-12-03T00:20:31.479Z · LW · GW

Do you have trouble writing for short periods of time, or do you have enough long chunks of free time that there's no use for small chunks?

If my life was so busy that I couldn't even find 4-5 hourlong chunks throughout the week I probably wouldn't blog at all. I sometimes write in 15-20 minute bits while in the office (remember those?) but almost every single post took a multi-hour chunk to come together.

Comment by Jacob Falkovich (Jacobian) on The Treacherous Path to Rationality · 2020-10-19T20:29:08.309Z · LW · GW

Yes, really smart domain experts were smarter and earlier but, as you said, they mostly kept it to themselves. Indeed, the first rationalists picked up COVID worry from private or unpublicized communication with domain experts, did the math and sanity checks, and started spreading the word. We did well on COVID not by outsmarting domain experts, but by coordinating publicly on what domain experts (especially any with government affiliations) kept private.

Comment by Jacob Falkovich (Jacobian) on The Treacherous Path to Rationality · 2020-10-12T18:51:50.137Z · LW · GW

We didn't get COVID, for starters. I live in NYC, where approximately 25% of the population got sick but no rationalists that I'm aware of did.

Comment by Jacob Falkovich (Jacobian) on The Treacherous Path to Rationality · 2020-10-12T18:43:24.453Z · LW · GW

If I, a rationalist atheist, was in Francis Bacon's shoes I would 100% live my life in such a way that history books would record me as being a "devout Anglican". 

Comment by Jacob Falkovich (Jacobian) on Most Prisoner's Dilemmas are Stag Hunts; Most Stag Hunts are Schelling Problems · 2020-10-01T15:25:46.541Z · LW · GW

The longer (i.e., more iterations) you spend in the shaded triangles of defection, the more you'll be pulled to the defect-defect equilibrium as a natural reaction to what the other person is doing and the outcome you're getting. The longer you spend in the middle "wedge of cooperation", the more you'll end up moving up and to the right in Pareto improvements. So we want to make that wedge bigger.

The size of that wedge is determined by the ratio of a player's outcome from C-C to their outcome in D-D. In this case the ratio is 2:1, so the wedge is between the slopes of 2 and 1/2. If C-C only guaranteed 1.1-1.1 to each player while a defection got them at least 1, the wedge would be a tiny sliver. Conversely, if the payoff for C-C was 999-999 almost the entire square would be the wedge. 

But the bigger the wedge, the more difference there is between outcomes on the Pareto frontier, so the 100% C-C outcome is also less stable than it would be if any deviation from it immediately led to non-equilibrium points that degenerate to D-D.
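A quick sanity check on that geometry, under my own simplifying assumption (not necessarily the original diagram's) that the payoff square is normalized to the unit square with D-D at the origin: the wedge between the lines of slope r and 1/r covers a 1 - 1/r fraction of the square, where r is the ratio of a player's C-C payoff to their D-D payoff. A short sketch, with a Monte Carlo check:

```python
import random

def wedge_fraction(r):
    """Exact fraction of the unit payoff square lying between the lines
    y = r*x and y = x/r, for r >= 1."""
    return 1 - 1 / r

def wedge_fraction_mc(r, n=200_000, seed=0):
    """Monte Carlo check: sample payoff points uniformly in the unit
    square and count those inside the cooperation wedge."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x / r <= y <= r * x:
            hits += 1
    return hits / n

for r in (1.1, 2, 999):
    print(r, wedge_fraction(r), round(wedge_fraction_mc(r), 3))
```

For r = 2 the wedge is exactly half the square; for r = 1.1 it's about 9% (the tiny sliver), and for r = 999 it's about 99.9% (almost the entire square).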

Comment by Jacob Falkovich (Jacobian) on Covid-19 6/18: The Virus Goes South · 2020-06-20T22:06:20.583Z · LW · GW

Here's what I wrote about coordinated moving when Raymond was talking about leaving the Bay for a while:

"Coordinated moving seems hard. It seems unlikely to happen. But, I think that uncoordinated moving can end up quite coordinated.

If I'm thinking of leaving Brooklyn, I have 10,000 small towns to choose from. If [Zvi, or Ray, or anyone like that] publicizes which one he goes to after doing research, that town is immediately in my top 10 options I'll actually consider. Not just because I'd want to live near [Zvi/Ray] and I trust his research, but also because I know that hundreds of other people I like would know about that town and consider moving there. So if people just move out without coordinating but tell all their friends about it, I think we'll end up with decent enough agglomerations of friends wherever the pioneers end up going."

On a related note, I'm planning to go on a small road trip around the northeast in July and would love to visit you in Warwick if you're accepting visitors (got tested this week, alas no antibodies, still distancing at home).

Comment by Jacob Falkovich (Jacobian) on Simulacra and Subjectivity · 2020-05-20T14:16:39.467Z · LW · GW

Let me know if this matches: the way I understand it is that level 3 is often about signaling belonging to a group, and level 4 is about shaping how well different belonging signals work.

So:

Level 1: "Believe all women" = If a woman accuses someone of sexual assault, literally believe her.

Level 2: "Believe all women" = I want accusations of sexual assault to be taken more seriously.

Level 3: "Believe all women" = I'm part of the politically progressive tribe that takes sexual assault seriously.

Level 4: "Believe all women" = Taking sexual assault seriously should be a more important signal of political progressivism than other issues.

Level 5: "Believe all women" = But actually take sexual assault seriously even if it becomes opposed to political progressivism because Biden.

Comment by Jacob Falkovich (Jacobian) on Jacob's Twit, errr, Shortform · 2020-04-22T23:08:08.190Z · LW · GW

People ask what the goal of the Rationalist community is. It's to raise the sanity waterline. To flood the cities with sanity. To wash the streets with pure reason. To engulf the land in common sense. And when our foes, gasping for air, scream "this literally can't be happening!" we'll remind them that 0 and 1 are not probabilities.

Comment by Jacob Falkovich (Jacobian) on Premature death paradox · 2020-04-16T22:54:03.492Z · LW · GW

If you die at age 90, you died prematurely relative to what we'd expect a month before you died, but (postmaturely? it should be a word) relative to what we'd expect and bet on 80 years before your death (i.e., at age 10).

Now, you may still think there's a paradox in the following sense: let's say the median lifespan expected at birth is 70. That means that the 50% of people who died before 70 died prematurely relative to all predictions made throughout their lives, while for the remaining 50% some of the predictions were too pessimistic (those made early in their lives) but some were optimistic. Isn't there still a skew towards being surprised that people died early?

The imbalance disappears if we count not people, but people-seconds. I.e., if we predict how long everyone is going to live at every second of their lives, the average prediction will not be either pre- or post-mature. The people who live longer will accumulate more pessimistic early death predictions through the sheer fact that they live more seconds and so more predictions are made about them. A person who lives to 100 may accumulate 95 years of too-pessimistic predictions and only 5 years of too-optimistic ones.
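The person-seconds argument can be checked with a toy simulation. I'll use exponential lifespans (memoryless, so the conditional median death age at age a is just a plus a constant), which is my simplifying assumption and not a realistic mortality table:

```python
import math
import random

def death_prediction_stats(n_people=50_000, mean_life=70.0, seed=1):
    """Toy model of the premature-death skew. Lifespans are exponential
    (memoryless), so the conditional median death age given survival to
    age a is a + ln(2)*mean_life. At each whole year of a person's life
    we 'predict' death at that conditional median, then check whether
    the actual death came earlier (premature) or later (postmature)."""
    rng = random.Random(seed)
    median_gap = math.log(2) * mean_life  # median remaining life at any age
    all_premature = 0   # people who died before EVERY prediction made about them
    all_postmature = 0  # people who outlived EVERY prediction made about them
    premature_years = 0
    total_years = 0
    for _ in range(n_people):
        t = rng.expovariate(1 / mean_life)  # actual death age
        verdicts = [t < a + median_gap for a in range(int(t) + 1)]
        all_premature += all(verdicts)
        all_postmature += not any(verdicts)
        premature_years += sum(verdicts)
        total_years += len(verdicts)
    return (all_premature / n_people,
            all_postmature / n_people,
            premature_years / total_years)

frac_all_pre, frac_all_post, frac_years_pre = death_prediction_stats()
print(frac_all_pre, frac_all_post, frac_years_pre)
```

In this toy model roughly half the people die before every prediction ever made about them while essentially nobody outlives every prediction, yet weighted by person-years of predictions the premature fraction sits right at 50%.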

Comment by Jacob Falkovich (Jacobian) on April Coronavirus Open Thread · 2020-04-16T22:25:48.981Z · LW · GW

Hydroxychloroquine update!

A smart friend pointed me to this study that explains that mediocre antivirals only work if administered right after infection. By the onset of symptoms the effect is already much reduced. (The study isn't clear as to what counts as "symptoms" except that they occurred 3 days before hospitalization, so maybe early warning signs like loss of smell don't count). HCQ is, at best, a mediocre antiviral.
https://www.medrxiv.org/content/10.1101/2020.04.04.20047886v1

This model agrees with a new study from China (N=150) that showed zero effect when giving patients HCQ 16-17 days after the onset of the disease. Of note, the study compared Standard of Care to SOC+HCQ, and I have no idea what the Chinese SOC is beyond the minimal requirement of intravenous fluids, oxygen, and monitoring that's mentioned in the paper. In particular, there's no info on whether it includes antibiotics like azithromycin, or whether it includes zinc. It's hypothesized that HCQ works partly by easing the entry of zinc into cells, where zinc slows viral replication, so the two may work well in conjunction.
https://www.medrxiv.org/content/10.1101/2020.04.10.20060558v1

Bottom line: it may still be worth it to take HCQ+zinc if you cough and lose your sense of smell two days after going through an airport, but HCQ may not be of any help to heavily symptomatic people (and it still has nasty side effects).

Real bottom line: now that hydroxychloroquine is a politicized issue, you can't trust anything journalists have to say about it and have to read the studies yourself.

Comment by Jacob Falkovich (Jacobian) on The Great Annealing · 2020-04-01T03:52:34.547Z · LW · GW

As a follow up on the media angle, here's something I posted on my Facebook:

We're going to see a lot of research on hydroxychloroquine and azithromycin (HC&A), among other drugs, coming out in the next few weeks from around the world. HC&A is already the standard of care in several countries, in part because the drugs are cheap and widely available and in part because early results are promising. The combined evidence of these studies may show that other treatments are better as a first choice, or that HC&A is better, or that it depends on the particular characteristics of each patient. It’s always going to be complicated.

What the studies will never be able to do is *prove* that HC&A cures COVID since we already know that nothing works 100% for it. There is too much variance in how patients are selected for each study, how they're treated, how outcomes are measured, and how an individual responds. There's never one big indisputable hammer in small-N drug research, and there are always outlier results for people to cherry-pick one way or another. However, enough Bayesian evidence could mount that taking 600 mg of hydroxychloroquine at home at the first onset of symptoms or a positive test is better than chicken soup or going to an overcrowded hospital, all else being equal [1].

And if that happens, there is little doubt in my mind that mainstream media will fight for weeks against admitting that it is the case. They will hide behind "it's not proven" and "more research is needed" and "but the FDA". Facebook will be along for the denial ride claiming they "fight unofficial misinformation", which is anything that’s not coming from the WHO (which is currently telling people not to wear masks). Many politicians will fight to suppress this information as well, especially if Trump starts gloating over some particularly poor pro-HC&A study and saying that he called it. Trump is an idiot, but reversed stupidity is not intelligence.

So, please don’t fall prey to Gell-Mann amnesia. The same people who bullshitted you about “it’s just the flu” and about closing borders and about masks would 100% keep bullshitting you about drugs. Journalists aren’t smart enough to understand cumulative research evidence, and organizations like WHO and FDA have institutional incentives that will force them to react two months and thousands of corpses too late. You have to learn how to read medical studies yourself, or follow people who can and who aren’t compromised by working in media or politics. The lives of your loved ones are at stake.

[1] I will not disclose here whether I think that’s already the case for two reasons. First, I don’t want Facebook to remove this post for giving unsolicited medical advice, so I’m only giving information consumption advice. Second, I am not the authority you should be listening to. It’s better that we all find different sources to read and share our independent conclusions.

Comment by Jacob Falkovich (Jacobian) on "No evidence" as a Valley of Bad Rationality · 2020-03-30T14:19:11.706Z · LW · GW

I just thought of this in the context of this study on hydroxychloroquine in which 14/15 patients on the drug improved vs 13/15 patients treated with something else. To the average Joe, HCQ curing 14/15 people is an amazing positive result, and it's heartening to know that other antivirals are almost as good. To the galaxy-brained journalist, there's p>0.05 and so "the new study casts doubt on hydroxychloroquine effectiveness... a prime example of why Trump shouldn't be endorsing... actually isn't any more effective."
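To make the "p > 0.05" point concrete, here's a stdlib-only sketch of Fisher's exact test on that 14/15 vs. 13/15 table (my own check, not the study's analysis; scipy.stats.fisher_exact should agree). The two-sided p-value comes out at essentially 1.0, meaning the data are almost maximally uninformative about the difference between treatments, which is very different from "the drug doesn't work":

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables with the same margins that are
    no more likely than the observed one."""
    row1, col1 = a + b, a + c
    n = a + b + c + d
    def p_table(k):  # hypergeometric probability of k in the top-left cell
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    lo = max(0, row1 + col1 - n)
    hi = min(row1, col1)
    probs = [p_table(k) for k in range(lo, hi + 1)]
    p_obs = p_table(a)
    return sum(p for p in probs if p <= p_obs + 1e-12)

# 14/15 improved on HCQ vs. 13/15 on the comparison antiviral
p = fisher_exact_two_sided(14, 1, 13, 2)
print(round(p, 3))
```

With margins this small there simply aren't enough possible outcomes to distinguish the two arms, which is the Valley of Bad Rationality: "not statistically significant" gets read as "shown to be ineffective."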

Comment by Jacob Falkovich (Jacobian) on Seeing the Smoke · 2020-02-29T06:07:34.367Z · LW · GW

I think the economic impact will also be huge. Businesses are prepared for 2% of their workers being out with the flu on any given day through the winter, but not for 20% to be sick while the other 80% are quarantined as COVID-19 hits their city. And the company that needs the input parts from that first business is not prepared to not have them for a month, and the companies that rely on them are not prepared, and most industries have slim enough cash reserves and profit margins that a pandemic can knock a lot of good companies out of business for good. This could all mean just slightly more expensive electronics for two years, or it could mean a decade of unemployment and restructuring.

Comment by Jacob Falkovich (Jacobian) on Go F*** Someone · 2020-01-22T16:36:50.372Z · LW · GW

Attractiveness comes in many forms. I'm extroverted and write better than I look, so I do well at dinner parties and OKCupid. You can be attractive in dancing skill, in spiritual practice, in demonstrable expertise, in an artistic pursuit... guitar players get laid even if they're not that good looking.

And yet, everyone's first association when talking about "aim for 100 dates" is Tinder, which works only for the men who are top 20% in the one aspect of attractiveness that's crowded and hard to improve - physical looks. This includes men who self-report as unattractive, like this commenter (and presumably, "Simon").

The minimum threshold of attractiveness on Tinder is incredibly high, much higher than almost any other place to look for dates. It's certainly higher than my own good looks: I only turn Tinder on when I leave the country.

Comment by Jacob Falkovich (Jacobian) on Go F*** Someone · 2020-01-22T16:25:50.202Z · LW · GW

I was thinking of people who write comments without reading the post, which pollutes the conversation. Or people who form broad opinions about a writer or a blog without reading. I deal with those people all day every day on Twitter and in the blog comments.

I didn't mean people deciding what to read based on the title. Of course everyone does that! Someone seeing 'Go F*** Someone' may assume that the post will be somewhat vulgar, and will talk about sex. Both things are true. People not interested in vulgar writing about sex shouldn't read it. If I titled it 'A Consideration of Narcissism as it Affects the Formation of Long Term Bonds' that would actually be more misleading, since people would not expect it to be a vulgar post about sex and will get upset.

Comment by Jacob Falkovich (Jacobian) on Go F*** Someone · 2020-01-20T22:56:22.607Z · LW · GW

I understand your concerns.

I cross-post everything I write on Putanumonit to LW by default, which I understood to be the intention of "personal blogposts". I didn't write this for LW. If anyone on the mod team told me that this would be better as a link post or off LW entirely, not because it's bad but because it's not aligned with LW's reputation, I'll be happy to comply.

> I could imagine casual readers quickly looking at this and assuming it's related to the PUA community

With that said, my personal opinion is that LW shouldn't cater to people who form opinions on things before reading them and we should discourage them from hanging out here.