Running the Stack 2019-09-26T16:03:46.518Z · score: 36 (12 votes)
lionhearted's Shortform 2019-08-31T09:15:46.049Z · score: 8 (1 votes)
Drive-By Low-Effort Criticism 2019-07-31T11:51:37.844Z · score: 38 (29 votes)
On the Regulation of Perception 2019-03-09T16:28:19.887Z · score: 18 (7 votes)
Team Cohesion and Exclusionary Egalitarianism 2018-09-17T04:48:33.894Z · score: 36 (19 votes)
Secondary Stressors and Tactile Ambition 2018-07-13T00:26:23.561Z · score: 17 (9 votes)
Putting Logarithmic-Quality Scales On Time 2018-07-08T15:00:37.568Z · score: 15 (7 votes)
A Short Celebratory / Appreciation Post 2018-05-23T00:02:18.423Z · score: 133 (44 votes)
Some Simple Observations Five Years After Starting Mindfulness Meditation 2018-04-19T22:28:47.338Z · score: 80 (26 votes)
Explicit and Implicit Communication 2018-03-21T08:58:34.415Z · score: 106 (42 votes)
"Just Suffer Until It Passes" 2018-02-12T04:01:13.922Z · score: 147 (52 votes)
Fashionable or Fundamental Thought in Communities 2018-01-19T09:03:48.109Z · score: 37 (14 votes)
Success and Fail Rates of Monthly Policies 2017-12-09T15:24:37.148Z · score: 47 (19 votes)
Doing a big survey on work, stress, and productivity. Feedback / anything you're curious about? 2017-08-29T14:19:37.241Z · score: 1 (1 votes)
Perhaps a better form factor for Meetups vs Main board posts? 2016-01-28T11:50:20.360Z · score: 14 (15 votes)
Crossing the History-Lessons Threshold 2014-10-17T00:17:42.822Z · score: 34 (42 votes)
Flashes of Nondecisionmaking 2014-01-27T14:30:26.937Z · score: 28 (31 votes)
Confidence In Opinions, Intensity In Opinion 2013-09-04T16:56:17.883Z · score: 0 (11 votes)
Reflective Control 2013-09-02T17:45:58.356Z · score: 13 (16 votes)
A Rational Approach to Fashion 2011-10-10T18:53:00.594Z · score: 22 (44 votes)
"Technical implication: My worst enemy is an instance of my self." 2011-09-22T08:46:49.941Z · score: -3 (8 votes)
Malice, Stupidity, or Egalité Irréfléchie? 2011-06-13T20:57:06.178Z · score: 24 (53 votes)
Chemicals and Electricity 2011-05-09T17:55:25.123Z · score: 6 (27 votes)
The Cognitive Costs to Doing Things 2011-05-02T09:13:17.840Z · score: 39 (39 votes)
Convincing Arguments Aren’t Necessarily Correct – They’re Merely Convincing 2011-04-25T12:43:07.217Z · score: 9 (21 votes)
Defecting by Accident - A Flaw Common to Analytical People 2010-12-01T08:25:47.450Z · score: 102 (132 votes)
"Nahh, that wouldn't work" 2010-11-28T21:32:09.936Z · score: 75 (82 votes)
Reference Points 2010-11-17T08:09:04.227Z · score: 32 (33 votes)
Activation Costs 2010-10-25T21:30:58.150Z · score: 29 (36 votes)
The Problem With Trolley Problems 2010-10-23T05:14:07.308Z · score: 11 (64 votes)
Collecting and hoarding crap, useless information 2010-10-10T21:05:51.331Z · score: 18 (29 votes)
Steps to Achievement: The Pitfalls, Costs, Requirements, and Timelines 2010-09-11T22:58:38.145Z · score: 18 (26 votes)
A "Failure to Evaluate Return-on-Time" Fallacy 2010-09-07T19:01:42.066Z · score: 55 (65 votes)


Comment by lionhearted on Follow-Up to Petrov Day, 2019 · 2019-09-28T01:06:36.609Z · score: 10 (13 votes) · LW · GW

You know what, I think LessWrong has collectively been worth more than $1,672 to me — especially after the re-launch. Heck, maybe even Petrov Day alone was. Incredibly insightful and potentially important.

I'd do this privately, but Eliezer wrote that story about how the pro-social people are too quiet and don't announce it. So yeah, I'm in for $1,672. Obviously, I wouldn't have done this if some knucklehead had nuked the site.

Now for the key question —

What kind of numbers do we need to put together to get another Ben Pace-quality dev on the team? (And don't tell us it's priceless; people were willing to sell out your faith in humanity for less than the price of a MacBook Air! ;)

And yeah, mechanics for donating to LW specifically? Can follow up on email but I imagine it'd be good to have in this thread.

Edit: Before anyone suggests I donate to some highly-ranked charity instead: after I'd had some success in business, I spent years in the nonprofit world, always as a 100% volunteer, put an immense number of hours into both understanding the space and getting things done, and was reasonably effective, though not legendarily so or anything.

By my quick back-of-the-envelope math, I imagine any given large country's State Department would have paid $50,000 to $100,000 to have Petrov Day happen successfully in such a public way. Large corporations (I've worked with a few) maybe double that range. It was a really important thing, and while "budget for hiring developers on a site that facilitates discussion of rationality" has far more nebulous and hard-to-pin-down value than some very worthy projects, it's first a threshold-break thing where a little more might produce much more results, and I think this site can be really important.

If I might suggest something, though: perhaps an 80/20 eng-driven growth plan for the site that prioritizes preserving quality and norms would also make sense? We should have 10x the people here. It's very doable. I'm really busy but happy to help if I can. I think a lot of us would be happy to help make it happen if y'all would make it a little easier to know how. Something special is happening here.

Edit2: Okay, my donation is now conditional on banning whoever downvoted this ;) - just kidding. But man, what a strange mix of really great people and total idiots here huh? "I liked this a lot and I'd like to give money." WTF who does this guy think he is. Oh, me? Just someone trying to support the really fucking cool thing that's happening and asking for the logistics of doing so to be posted in case anyone else thinks it's been really cool and great for their life.

Comment by lionhearted on Follow-Up to Petrov Day, 2019 · 2019-09-28T00:54:40.354Z · score: 18 (13 votes) · LW · GW

What an incredible experience.

Felt like I got to understand myself a bit better, got exposed to a variety of arguments I never would have anticipated, forced to clarify my own thoughts and implications, did some math, did some sanity-check math on "what's the value of destroying some of Ben Pace's faith in humanity" (higher than any reasonable dollar amount alone, incidentally — and that's just one variable)... and yeah, this was really cool and legit innovative.

We should make sure the word about this gets out more.

We need more people on LessWrong, and more stuff like this.

People thinking this is just a chat board should think a little bigger. There's some real visionary thinking going on here, and an exceptionally smart and thoughtful community. I'm really grateful I got to see and participate in this. Thanks for all the great work — and for trusting me. Seriously. Y'all are aces.

Comment by lionhearted on Feature Wish List for LessWrong · 2019-09-28T00:26:38.950Z · score: 10 (2 votes) · LW · GW

(1) I want this too and would use it and participate more.

(2) Following logically from that, some sort of "Lists" feature like Twitter might be good, EX:

("Friending" is typically double-confirm, lists would seem much easier and less complex to implement. Perhaps lists, likewise, could be public or private)

Comment by lionhearted on Running the Stack · 2019-09-28T00:20:11.480Z · score: 15 (3 votes) · LW · GW

Thanks. Awesome.

I'm actually not sure what you mean by "running down the stack." Do you mean "when I get distracted I mentally review my whole stack, from most recent item added to most ancient item"?

Well, of course, it's whatever works for you.

For a simple example, let's say I'm (1) putting new sheets on my bed, and then (2) I get an incoming phone call, which results in me simultaneously needing to (3 and 4) send a calendar invite and email while still on the phone.

I'll pick which of the cal invite or email I'm doing first. Let's say I decide I'm sending the cal invite first.

I'll then,

(4) Send cal invite - done - off stack.

(3) Send email - done - off stack.

(2) Check whether anything else needs to be done before ending the call, confirming, etc. If I need to do another activity -> add it to the stack as the new (3). If not, end the call.

And here's where the magic happens. I then,

(1) Go finish making the bed.

I'm not fanatic about it, but I won't get a snack first or do anything significant until that's done.

Or do you mean "when I get distracted, I 'pop' the next item/intention in the stack (the one that was added most recently), and execute that one next (as opposed to some random one)"?

This, yes. Emphasis added.

Less payoff to getting distracted? To being distractible?
Why is that? Because if you get distracted you have to complete the distraction?

Well, I can speculate on theory but I'll just say empirically — it works for me.

But let's speculate with an example.

You're midway through cleaning your kitchen and you remember you needed to send some email.

If you don't really wanna clean your kitchen deep down, you're likely to wind up on email or Twitter or LessWrong instead.

Now that's fine, if I see a second email I want to reply to, I'll snipe that.

But at the end, I have to go finish the kitchen unless things have materially changed.

Knowing there's no payoff in "escaping" is probably part of it. It probably shapes real-time cost/benefit tradeoffs somewhat. It means less cognitive processing time needed to pick next task. It makes one pick tasks slightly more carefully knowing you'll finish them. It leads to single-tasking and focus.

Umm, probably a lot more. I'm not fanatic about it, I'll shift gears if it's relevant but I don't like to do so.
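The discipline described above really is a literal LIFO stack. Here's a minimal Python sketch of it; the task names are just the bed/call/email example from this comment, not any real tool:

```python
# "Running the stack": interruptions get pushed on top of the current task,
# and when each one finishes you resume the most recently interrupted task
# (last-in, first-out) rather than wandering off to something new.

stack = []

def push(task):
    """An interruption arrives: push it on top of whatever's in progress."""
    stack.append(task)

def pop_and_do():
    """Finish the top task, taking it off the stack."""
    return stack.pop()

# The example from the comment, in order of arrival:
push("make the bed")          # (1)
push("phone call")            # (2) incoming call interrupts
push("send email")            # (3) comes up during the call
push("send calendar invite")  # (4) chosen to do first

order = []
while stack:
    order.append(pop_and_do())

# Tasks complete in reverse order of arrival: invite, email,
# call wrap-up, then back to finish making the bed.
print(order)
```

The "magic" the comment points at is the last pop: the stack guarantees you end up back at the bed, so a distraction never becomes an escape hatch.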

Comment by lionhearted on Long-term Donation Bunching? · 2019-09-27T23:57:52.876Z · score: 12 (4 votes) · LW · GW

Do we have any lawyers here at LessWrong?


Would it be possible to legitimately write some sort of standardized financial instrument that functions as a loan with no repayment date, with options for conversion into charitable donation?

Speculations (non-lawyer here) —

(1) Maybe there's something equivalent to a SAFE Note (invented by YCombinator to simplify and standardize startup financing in a way friendly to both parties). It seems like a decent jumping-off point:

(2) On the other hand, there's a variety of mechanisms where you can't just do clever stuff. And there's a variety of arcane rules. You can, I think, donate property that's appreciated in value without paying capital gains first for instance, but maybe there's specific definitions around the timing of cash flows, donations, and deductions?

(3) On the other-other hand, seems like American tax policy in general is very amenable to people supporting worthy charitable causes.

(4) On the other-other-other-hand, you'd have to make sure it's not game-able and doesn't result in strange second-order consequences.

(5) And finally, if it's ambiguous, it seems like the type of thing where it'd be possible to get some sort of preliminary ruling from the relevant authorities. (Presumably the Treasury/IRS, but maybe someone else.)

Seems like a good idea though? If someone donates $10k a year for 5 years, it seems reasonable that they'd be able to write off that $50k at the end of the 5 years.
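For concreteness, here's a back-of-the-envelope sketch of why bunching the $10k/year helps. The standard-deduction figure and tax rate are assumed round numbers for illustration only (real rules involve other itemized deductions, AGI limits, and year-by-year details), so treat this as a toy model, not tax advice:

```python
# Toy model: charitable deductions only matter to the extent total itemized
# deductions exceed the standard deduction. Assumed round numbers:
STD_DEDUCTION = 12_000   # hypothetical standard deduction
RATE = 0.30              # hypothetical flat marginal tax rate
ANNUAL_GIFT = 10_000
YEARS = 5

def tax_savings(deductions_by_year):
    """Value of deductions above the standard deduction, summed over years."""
    return sum(max(d - STD_DEDUCTION, 0) * RATE for d in deductions_by_year)

# Donating every year: $10k never clears the $12k threshold on its own,
# so (absent other itemized deductions) none of it effectively deducts.
yearly = tax_savings([ANNUAL_GIFT] * YEARS)

# Bunching all five years into one $50k donation clears the threshold once:
# (50,000 - 12,000) * 30% = 11,400 in savings.
bunched = tax_savings([ANNUAL_GIFT * YEARS] + [0] * (YEARS - 1))

print(yearly, bunched)
```

Under these assumed numbers, yearly giving captures $0 of deduction value while the bunched $50k captures $11,400, which is the whole motivation for a loan-then-convert instrument.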

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T23:11:59.117Z · score: 12 (3 votes) · LW · GW

You guys are total heroes. Full stop. In the 1841 "On Heroes" sense of the word, which is actually pretty well-defined. (Good book, btw.)

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T23:07:12.804Z · score: 8 (3 votes) · LW · GW

There's rationalists who are in the mafia?


No insightful comment, just, like — this Petrov thread is the gift that keeps on giving.

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T23:04:56.485Z · score: 7 (4 votes) · LW · GW

Well, why stop there?

World GDP is $80.6 trillion.

Why doesn't the United States threaten to nuke everyone if they don't give a very reasonable 20% of their GDP per year to fund X-Risk — or whatever your favorite worthwhile projects are?

Screw it, why don't we set the bar at 1%?

Imagine you're advising the U.S. President (it's Donald Trump right now, incidentally). Who should President Trump threaten with nuking if they don't pay up to fund X-Risk? How much?

Now, let's say 193 countries do it, and $X trillion is coming in and doing massive good.

Only Switzerland and North Korea defect. What do you do? Or rather, what do you advise Donald Trump to do?

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T22:36:12.969Z · score: 18 (7 votes) · LW · GW
LW frontpage going down is also not particularly bad [...] If you wanted to convince me, you could make a case that destroying trust is really bad

Umm, respectfully, I think this is extremely arrogant. Dangerously so.

Anyways, I'm being blunt here, but I think respectful and hopefully useful. Think about this. Reasoning follows —

The instructions if you got launch codes (also in the above post) were as such (emphasis added with underline) —

"Every Petrov Day, we practice not destroying the world. One particular way to do this is to practice the virtue of not taking unilateralist action.

It’s difficult to know who can be trusted, but today I have selected a group of LessWrong users who I think I can rely on in this way. You’ve all been given the opportunity to show yourselves capable and trustworthy.


This Petrov Day, between midnight and midnight PST, if you, {{username}}, enter the launch codes below on LessWrong, the Frontpage will go down for 24 hours.

I hope to see you on the other side of this, with our honor intact."

So, to Ben Pace at least (the developer who put in a tremendous amount of hours and thought into putting this together), it represents...

* "practicing not destroying the world"

* "practicing the virtue of not taking unilateralist action"

* implications around his own uncertainty of who to trust

* de facto for Ben that he can't rely on you personally, by his standards, if you do it

* showing yourself not "capable and trustworthy" by his standards

* having the total group's "honor" "not be intact", under Ben's conception

And you want me to make a case for you on a single variable while ignoring the rather clear and straightforward written instructions for your own simple reductive understanding?

For Ben at least, the button thing was a symbolic exercise analogous to not nuking another country and he specifically asked you not to and said he's trusting you.

So, no, I don't want to "convince you" nor "make a case that destroying trust is really bad." You're literally stating you should set the burden of proof and others should "make a case."

In an earlier comment you wrote,

You can in fact compare whether or not a particular trade is worth it if the situation calls for it, and a one-time situation that has an upside of $1672 for ~no work seems like such a situation.

"No work"? You mean aside from the work that Ben and the team did (a lot) and demonstrating to the world at large that the rationality community can't press a "don't destroy our own website" button to celebrate a Soviet soldier who chose restraint?

I mean, I don't even want to put numbers on it, but if we gotta go to "least common denominator", then $1672 is less than a week's salary of the median developer in San Francisco. You'd be doing a hell of a lot more damage than that to morale and goodwill, I reckon, among the dev team here.

To be frank, I think the second-order and third-order effects on Ben Pace alone of this project going well are worth more than $1672 in "generative goodness" or whatever, and the potential disappointment and loss of faith in people he "thinks but is uncertain he can rely upon and trust" is... I mean, you know that one highly motivated person leading a community can make an immense difference, right?

Just so you can get $1672 for charity ("upside") with "~no work"?

And that's just productivity, ignoring any potential negative affect or psychological distress, and his being forced to reevaluate who he can trust. I mean, to pick a more taboo example, how many really nasty personal insults would you shout at a random software developer for $1672 to charity? That's almost "no work" — it's just you shouting some words, and whatever trivial psychological distress they feel from random insults by a stranger is, I wager, much lower than the distress of having people you "are relying on and trusting" press a "don't nuke the world simulator" button.

Like, if you just read what Ben wrote, you'd realize that risking destroying goodwill and faith in a single motivated innovative person alone should be priced well over $20k. I wouldn't have done it for $100M going to charity. Seriously.

If you think that's insane, stop and think why our numbers are four orders of magnitude apart — our priors must be obviously very different. And based on the comments, I'm taking into account more things than you, so you might be missing something really important.

(I could go on forever about this, but here's one more: what's the difference in your expected number of people discovering and getting into basic rationality, cognitive biases, and statistics with pressing the "failed at 'not destroying the world day' commemoration" vs not? Mine: high. What's the value of more people thinking and acting rationally? Mine: high. So multiply the delta by the value. That's just one more thing. There's a lot you're missing. I don't mean this disrespectfully, but maybe think more instead of "doing you" on a quick timetable?)

(Here's another one you didn't think about: we're celebrating a Soviet engineer. Run this headline in a Russian newspaper: "Americans try to celebrate Stanislav Petrov by not pressing 'nuke their own website' button, arrogant American pushes button because money isn't donated to charity.")

(Here's another one you didn't think about: I'll give anyone 10:1 odds this is cited in a mainstream political science journal within 15 years, which are read by people who both set and advise on policy, and that "group of mostly American and European rationalists couldn't not nuke their own site" absolutely is the type of thing to shape policy discussions ever-so-slightly.)

(Here's another one you didn't think about: some fraction of the people here are active-duty or reserve military in various countries. How does this going one way or another shape their kill/no-kill decisions in ambiguous warzones? Have you ever read any military memoirs about people who had to make those calls quickly, EX overwatch snipers in Mogadishu? No?)

(Not meant to be snarky — Please think more and trust your own intuition less.)

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T05:16:46.002Z · score: 2 (10 votes) · LW · GW

Oh in case you missed the subtext, it's a SciFi joke.

It's funny cuz it's sort of almost plausibly true and gets people thinking about what if their life had higher stakes and their decisions mattered, eh?

Obviously, it's just a silly amusing joke. And it's obviously going to look really counterproductively weird if analyzed or discussed among normal people, since they don't get nerd humor. I recommend against doing that.

Just laugh and maybe learn something.

Don't be stupid and overthink it.

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T05:02:22.358Z · score: -4 (8 votes) · LW · GW

Great comment.

Side note, I occasionally make a joke that I'm sent from another part of the multiverse (colloquially, "the future") to help fix this broken fucked up instance of the universe.

The joke goes — it's not a stupid teleportation thing like Terminator, it's a really expensive two-step process to edit even a tiny bit of information in another universe. So with the right CTC relays you can edit a tiny bit of information, creating some high-variance people in a dense area, and then the only people who get their orders are people who reach a sufficient level of maturity, competence, and duty. Not everyone we give the evolved post-sapien genetics to gets their orders; the overwhelming majority fail, actually.

Now, the reason we at the Agency — in the joke, I'm on the Solar Task Force — are trying to fix this universe is because it affects other parts of the multiverse. There's a lot of stuff, but here's a simple one — the coordinates of Earth are similar in many branches. Setting off tons of nukes and beaming random stuff into space calls attention to Earth's location. I believe a game-theoretic solution to the Fermi Paradox was proposed recently in SciFi and no one was paying attention. I mean, did anyone check that out? Right? Don't let Earth's coordinates get out. Jeez guys. This isn't complicated. C'mon.

Now normally things work correctly, but this particular universe came about because you idiots — I mean, not you, since you weren't alive — but collectively, this idiot branch of humans took a homeless bohemian artist who was a kinda-brave messenger soldier in World War One (already a disaster, but then the error compounds), and they took this loser with a bad attitude and put him in charge of a major industrial power at one of the most leveraged moments in human history. He wasn't even German! He was Austrian! And he took over the Nazi Party as only its 55th member, after he was sent in as a police officer to watch the group. (Look it up on Wikipedia, it's true.) Then he tries a putsch — a coup — and it fails, and the state semi-prosecutes him, making him famous, but then lets him off easily. He turns that fame (infamy, really) into wealth, that into political power, and takes over. Then he does a ton of damage, including invading and destroying the most important city in the world at the time. Right, where are all those physicists and mathematicians from? Starts with a "B"? Used to be a monarchy? Destroyed by the Nazis? And after those people aged out and had completed their work, we went through a stagnation period for quite a while? Right? Isn't that what happened?

What a comedy of fucking errors. So much emotionalism. This branch of the universe is so incredibly fucked, I hate being here, but I'm doing my best. I like you humans, some of you are marvelous and all of you I want to succeed but man I fucking hate it here. Anyway, the first time I made this joke I was worried my CO would be pissed at me since I'm breaking rule#1, but it's actually so bad here that I didn't even get paradox warnings. (A true paradox crashes the universe, which we actually do when things are sufficiently bad and the rot is liable to spread.)

Anyway, this is just a joke. But yes, "desire for infamy" — fucking homo sapien sapiens. Evolve faster, please.

Just kidding.

(If I wanted to continue the joke, I'd say I'm certainly going to get in trouble sooner or later, but this amuses the hell out of me and this is a really high-stress, unpleasant job. Anyway, not joking, now I'll go back to building my peak performance tech company that prompts clear thinking, intentional action, and generally more eustress and joy while eliminating distress. I'll build that into one of the largest companies on Earth while also subtly-but-not-subtly producing useful media with a lot of subtext lessons and building an elite team that does a mix of internal inventing like Bell Labs as well as diffusion PayPal Mafia style, those people going on to also start large important prosocial institutions. After the first few billion, I'll fund better sensors for asteroid defense and bring down the cost of regular testing/monitoring bloodwork and simple "already known best practices" in biochemical regulation. Anyway, I'm just joking around cuz this amuses me and working 90-110 hours per week while in a mostly human body is very tiring. I like this whole button thing btw, this is really good. It gives me a little bit of hope. I guess hope is dangerous too though. Anyway, back to work, I'm going to teach my brilliant junior team that "there is value in writing a clear agenda of what we want to accomplish in a meeting". I'd rather be developing new branches of mathematics — I already developed one for real, it blows people's minds when I show it to them (ask me in person whenever a whiteboard is around), and I'll write it up when I have some spare time — but yeah, "we shouldn't just fuck around for no purpose in meetings" is the current level of the job. So be it. Anyway, this button thing is good, I needed this. Thanks.)

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:39:36.565Z · score: 1 (4 votes) · LW · GW

Upvoted for poetry.

Commenting to underline it for "the call to infamy" — wonderful phrase.

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:38:16.834Z · score: 3 (4 votes) · LW · GW

The more famous version of the Pandora myth comes from another of Hesiod's poems, Works and Days. In this version of the myth, Hesiod expands upon her origin, and moreover widens the scope of the misery she inflicts on humanity. As before, she is created by Hephaestus, but now more gods contribute to her completion: Athena taught her needlework and weaving; Aphrodite "shed grace upon her head and cruel longing and cares that weary the limbs"; Hermes gave her "a shameful mind and deceitful nature"; Hermes also gave her the power of speech, putting in her "lies and crafty words"; Athena then clothed her; next Persuasion and the Charites adorned her with necklaces and other finery; the Horae adorned her with a garland crown. Finally, Hermes gives this woman a name: Pandora – "All-gifted" – "because all the Olympians gave her a gift". (In Greek, Pandora has an active rather than a passive meaning; hence, Pandora properly means "All-giving." The implications of this mistranslation are explored in "All-giving Pandora: mythic inversion?" below.) In this retelling of her story, Pandora's deceitful feminine nature becomes the least of humanity's worries. For she brings with her a jar (which, due to textual corruption in the sixteenth century, came to be called a box) containing "burdensome toil and sickness that brings death to men", diseases and "a myriad other pains". Prometheus had (fearing further reprisals) warned his brother Epimetheus not to accept any gifts from Zeus. But Epimetheus did not listen; he accepted Pandora, who promptly scattered the contents of her jar. As a result, Hesiod tells us, "the earth and sea are full of evils".

What's in the box? What's in the box? Don't open it! Oh, shit...

(Grace, longing and care, and being gifted causes the box to be opened. It's like history just keeps repeating itself or something...)

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:29:26.119Z · score: 4 (3 votes) · LW · GW

Let's, for the hell of it, assume real money got involved. Like, it was $50M or something.

Now — who would you want to be able to vote on whether destruction happens if their values aren't met with that amount of money at stake?

If it's the whole internet, most people will treat it as entertainment or competition as opposed to considering what we actually care about.

But if we're going to limit it only to people that are thoughtful, that invalidates the point of majority vote doesn't it?

Think about it, I'm not going to write out all the implications, but I think your faith in crowdsourced voting mechanisms, for things with a known short-term payoff set against unknown long-term costs that destroy unknown long-term gains, is perhaps misplaced...?

Most people are — factually speaking — not educated on all relevant topics, not fully numerate in statistics and payoff calculations, go with their feelings instead of analysis, and are short-term thinkers...

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:23:09.465Z · score: 7 (3 votes) · LW · GW

Note to self: Does lighthearted dark humor highlighting risk increase or decrease chances of bad things happening?

Initial speculation: it might have an inverted response curve. One or two people making the joke might increase gravity, everyone joking about it might change norms and salience.

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:19:55.521Z · score: 22 (10 votes) · LW · GW

Firm disagree. Second-order and third-order effects go limit->infinity here.

Also btw, I'm running a startup that's now looking at — best case scenario — handling significant amounts of money over multiple years.

It makes me realize that "a lot of money" on the individual level is a terrible heuristic. Seriously, it's hard to get one's mind around it, but a million dollars is decidedly not a lot of money on the global scale.

For further elaboration, this is relevant and incredibly timely:

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:16:57.549Z · score: 7 (5 votes) · LW · GW

I wouldn't do it for $100M.


Because it increases the marginal chance that humanity goes extinct ever-so-slightly.

If you have launch codes, wait until tomorrow to read the last part eh? —

(V zrna, hayrff lbh guvax gur rkcrevzrag snvyvat frpergyl cebzbgrf pnhgvba naq qrfgeblf bcgvzvfz, juvpu zvtug or gehr.)

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:14:41.154Z · score: 13 (4 votes) · LW · GW

Nooooo you're a good person but you're promoting negotiating with terrorists literally boo negative valence emotivism to highlight third-order effects, boo, noooooo................

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:11:59.292Z · score: -5 (3 votes) · LW · GW

Dank EA Memes? What? Really? How do I get in on this?


(I shouldn't joke "I have launch codes" — that's grossly irresponsible for a cheap laugh — but umm, I just meta made the joke.)

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:09:37.036Z · score: 16 (6 votes) · LW · GW

This whole thread is awesome. This is the maybe the best thing that's happened on LessWrong since Eliezer more-or-less went on hiatus.

Huge respect to everyone. This is really great. Hard but great. Actually it's great because it's hard.

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:07:32.115Z · score: 25 (10 votes) · LW · GW

"Rae, this is a friendly reminder from the universe that you can only at best control the first-order effects of systems you create..."

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:05:07.143Z · score: 24 (9 votes) · LW · GW

Or, if we want to go all max-Schelling at the risk of veering almost into Stalinism, tell people they'll get a karma bounty for pressing it but then coordinate with LW, CFAR, MIRI, and various meetups to ban that person for life from everything if they actually do it. 😂

Comment by lionhearted on lionhearted's Shortform · 2019-09-26T16:34:11.564Z · score: 5 (3 votes) · LW · GW

"According to some theorists (e.g. Anderson 2001), information processing speed forms the basis of individual differences in IQ. [...] Inspection time among individuals with autism has been reported to be (i) much better than expected, based upon measured IQ, (ii) equal to that of a typically developing group with mean IQ scores 25 points higher..."

Note 1: Lightly edited to remove acronyms.

Note 2: !!!! Whoa.

Comment by lionhearted on Free Money at PredictIt? · 2019-09-26T16:27:20.448Z · score: 6 (4 votes) · LW · GW

Oh man, this is like your MTG work except it's free money, which is even better.

But — umm, is it possible to ask for a quick 80/20 on the mechanics of this particular prediction market as of September 2019, especially any counterintuitive or worth highlighting points?

Yes of course I can review past posts and Google, but getting little details wrong around counterparty risk or the lockup period of money, or y'know doing something stupid like entering the equivalent of a day-only market order instead of a limit order in a low-volume market...

If it'd be fun for you, could you perhaps do a 5-10 sentence "off the top of your head" take on the mechanics? I wouldn't trust a shallow analysis from myself to not miss any details, and a "not shallow" analysis clocks in somewhere in the 3-20 hours range.

Comment by lionhearted on What is operations? · 2019-09-26T16:12:57.977Z · score: 17 (8 votes) · LW · GW

(1) This post is awesome. I agree. I'll dive in later to apply the guidance. Ops are awesomely powerful and underrated by a lot of abstract thinkers (the "blah, those are just details" effect).

(2) Ready to have your mind totally blown?

Here you go —

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-26T15:23:33.333Z · score: 5 (5 votes) · LW · GW

If you've got launch codes, wait until tomorrow to read this eh? —

Lbhe pbzzrag znxrf zr jnag gb chfu gur ohggba.



V'z abg n zbq be nssvyvngrq va nal jnl, whfg pevgvpvmvat crbcyr qbvat n tbbq guvat jura jr'er nyy abzvanyyl ba gur fnzr grnz qevirf zr penml. Ohg gura ntnva, znlor gung'f gur Xvffvatre evtugrbhfarff dhbgr ntnva.

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-26T14:57:20.879Z · score: 5 (4 votes) · LW · GW

Rot13 comment — if you have launch codes, I recommend you wait until tomorrow to read this, eh?

(1) V'z phevbhf ubj znal crbcyr jvgu ynhapu pbqrf pyvpxrq gur ohggba "gb purpx vg bhg" jvgubhg ragrevat ynhapu pbqrf. V qvqa'g qb fb, npghnyyl, fb V pna bayl cerfhzr lbh'q unir gb ragre pbqrf.

(2) V jbaqre vs gur yvfg bs anzrf jnf znqr choyvp vs crbcyr jbhyq or zber yvxryl be yrff yvxryl gb cerff vg. Anvir nafjre vf yrff yvxryl, ohg vg zvtug unir n fgenatr "lbh pna'g pbageby zr ivn funzr" serrqbz rssrpg sbyybjrq ol xnobbz.

(3) Qrfver sbe yvoregl — be frys-rkcerffvba — ner obgu tbbq guvatf, lrf? Naq LRG, V guvax nzbat n uvtu-yriry pebjq, gung'f zber yvxryl gb pnhfr n ohggba chfu guna abezny obevat gebyy znyvpr. V'z erzvaqrq bs Urael Xvffvatre'f dhbgr, "Gur zbfg shaqnzragny ceboyrz bs cbyvgvpf vf abg gur pbageby bs jvpxrqarff ohg gur yvzvgngvba bs evtugrbhfarff."

Ebg13 urer gb abg zrff hc gur qngn cyhf naq ubcrshyyl abg rssrpg erfhygf gbb zhpu. V guvax pbzzragvat V jnf srryvat "Natrfcnaag" vf jvguva obhaqf naq vf abezny orunivbe — yvxr n "url sbyxf, jubn, guvf vf vagrafr ru?" — ohg pbzzragvat ba fgngvfgvpf naq orunivbe cnggreaf zber yvxryl gb rssrpg bhgpbzr.
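(A decoding aside: the spoiler comments above are ROT13-encoded. For anyone who doesn't want to decode by hand, here's a minimal sketch using Python's standard-library `codecs` module, which ships with a `rot13` text-transform codec:)

```python
import codecs

def rot13(text: str) -> str:
    """Decode (or encode -- ROT13 is its own inverse) a ROT13 string."""
    return codecs.decode(text, "rot13")

print(rot13("Uryyb, Crgebi Qnl!"))  # -> Hello, Petrov Day!
```

Because ROT13 shifts each letter by exactly half the alphabet, applying it twice returns the original text, so the same function both encodes and decodes.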

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-26T14:28:37.246Z · score: 47 (15 votes) · LW · GW

Oh this is wild. This generated a strange emotion.

Anyone here know the word "Angespannt"? One of my team members taught it to me — a German word with no exact English equivalent. We talked about it —

"It's a mix of tense and alert in a way. It's like the feeling you get before you go on stage."

Like, why should I care? I'm obviously not going to press the damn thing. And yet, simply knowing the button is there generates some tension and alertness.

Fascinating. Thank you for doing this.

(Well, sort of thank you, to be more precise...)

Comment by lionhearted on How Specificity Works · 2019-09-17T02:17:21.618Z · score: 13 (3 votes) · LW · GW

Hi Liron,

(1) I love this post and how you're thinking. I don't like many things, and I want to offer you my highest compliments. There are many points of clarity in here that are super super valuable. Thank you.

(2) I've got something that I think might be really really important for you about a flaw in your reasoning. Not like "hey this is an important comment" but like — really really important for your thinking. Can I suggest reading this comment I'm making closely and processing the implication?

Take your point:

"A stage presentation of publicly-available educational material, hand-produced and performed by a professor who works at your educational institution, which you watch by locating yourself in a set building at a set meeting time, and which proceeds in a fixed order and at a fixed rate like broadcast television pre-YouTube."

This is, I suppose, correct enough to work with.

But I strongly suspect you're reasoning from first principles about the current state of things based on a certain set of unspoken premises about what's valuable, and missing orthogonal tracks of valid and correct reasoning; specifically, the historical context of how we got here, which is off the mainstream understanding of the topic.

To break that down,

A. You're reasoning from first principles,

B. Current state of things,

C. Unspoken premises about what's valuable,

D. Missing orthogonal tracks,

E. Which are valid and correct reasoning,

F. Specifically, historical context,

G. No, not that historical context. The historical context that nobody's thinking about, that you only get through very careful thought.

I suspect you'd grant A, B, C are uncontroversial and at least true-ish. "D" you'd probably grant (there's a lot of tracks of reasoning we don't bother with, either because they're irrelevant or unhelpful). "E" is the key statement the thing swings on. "F" is the one you'd be like "no actually I do that", and I'm like no — take a look at "G".

Specifically, look at who went to university and why, and when that changed, and why.

Lectures used to make sense, and indeed, still make sense. If we ever wind up meeting in person, ask me about the story of the friend of mine who went from American public school to a Swiss boarding school when his father moved abroad. You don't even need to remind me of the context, I'll tell the story and it's both funny and insightful.

Pardon me for being vague! There's probably a reason. I certainly ain't going to spell it out. Nietzsche is too hardcore, and I certainly don't stand by what he says or anything, but his "insights ... follies" thing is worth Googling. It's the third part of his sentence that's the key part.

I don't like that the world is this way! It is, however, this way. This is a small thing but might be useful to you. This is probably kinda important — pardon me for being subtle, I just wrote it for you since there's a lot of great thinking here that's been quite valuable for me. This is just kinda "ah, thank you, this is valuable — but you know, you're like 95% correct, you're just missing ________" — what's the blank line there? (Just process it in your head, don't reply or anything, sheesh.)

It's what I'm trying to point out. Thanks for the post. This might be important btw, at least, once I got cleared up on this my thinking improved in lots of obvious and non-obvious ways. Oh, one last thing - do me a favor and don't try to convert my subtext thing here into text publicly? If I wanted to do that, I would've just done that. Even this isn't subtle enough, it's kinda "subtle like a hammer." The more subtle version would be the one sentence "Specifically, look..." — again, we're talking about whether lectures have any value, in what context, etc, as an illustration of a larger point.

Anyway, that's the best I could come up with. The world is a strange place. Awesome post and great reasoning, thanks again.

Comment by lionhearted on How Much is Your Time Worth? · 2019-09-03T01:28:30.690Z · score: 26 (9 votes) · LW · GW

Great post.

By the way, taken to its logical conclusion —

People don't move to new apartments frequently enough.

If your neighbors suck badly and you can't influence them, or if you live in a place that's badly maintained and the building management won't do anything about it, you really should strongly consider moving.

You tell that to somebody, you're likely to get one of the following arguments —

(1) That'd be too expensive (time, money, etc)! Possibly. I didn't say the person should move, or should move immediately. Just said "strongly consider" — aka, run the math and search out options, see if you can be creative, consider a temporary solution like crashing with a friend or staying with your parents or finding some subsidized housing for a short period of time to bank cash and then get a better place. If your apartment is causing major lifestyle disruptions/headaches with any sort of frequency, I'm just saying you should strongly consider moving. I feel really strongly people should do this, because there have been two or three times in my life that I moved too slowly, and I'd have been much better off taking a $1000-$3000 plus dozens-of-hours cost to move apartments, even if it was a huge hassle beyond those factors, because my life got hugely, obviously better after moving. Just, at least, run an analysis of all the costs and research options and weigh it against expected value. I'm not saying you gotta do it, just that you really ought to think about it.

(2) You don't know what it's like to be broke! Ah, the moral argument in favor of not even considering changing a bad situation. This argument is basically, "Don't make me feel bad and don't assert that I can have agency here." This argument is kinda unfortunate, because "hey, dude/dudette, you should really consider moving given how much your living setup sucks and is getting you down" seems pretty reasonable and is usually a pretty friendly argument.

For the record, by the way, that second argument is false in my case — the nickname of one of my first apartments was "The House of Horrors." Windows were partially broken in a ghetto Boston suburb. My bedroom got freezing cold in the winter. Lots of crime in the neighborhood, and regular rowdy behavior from patrons of local boozeries made getting a decent sleep on Friday and Saturday evenings a dice roll most weekends. (A dice roll I usually lost.) Kitchen was full of broken stuff, mold in the refrigerator, ceiling at times leaked water through a lighting fixture which umm, seemed dangerous.

One day I was sleeping in around 10AM and I woke up to hear a chainsaw from inside my own apartment. Like a horror movie — this was when the apartment got its nickname. I found out my landlord had decided to do something about the water-leaking-into-light-fixture problem and got a handyman to chainsaw my ceiling, but didn't think to knock before letting himself in, nor to check my bedroom, just assuming I was out. So I woke up to a man with a chainsaw in my apartment chainsawing my kitchen ceiling. It wasn't perhaps as dramatic as it sounds in text; nevertheless — somewhat unsettling.

So yeah, actually, I know what it's like to be broke as fuck. Nevertheless — while amusing years later, I ought to have at least strongly considered moving sooner. It seems a bit irrational in retrospect to not strongly consider it sooner. Life got a lot better once I did.
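To make the "run the math" suggestion in point (1) concrete, here's a minimal back-of-the-envelope sketch — every number is a hypothetical placeholder, just illustrating the shape of the comparison:

```python
# Back-of-the-envelope: is moving worth it? All figures are hypothetical.
MOVE_COST = 2000.0          # cash cost of moving (deposits, movers, overlap rent)
MOVE_HOURS = 40             # hours spent searching, packing, settling in
HOURLY_VALUE = 25.0         # what you value an hour of your time at
MONTHLY_DISUTILITY = 300.0  # what you'd pay per month to NOT live in the bad place
MONTHS_REMAINING = 18       # how long you'd otherwise keep living there

total_cost = MOVE_COST + MOVE_HOURS * HOURLY_VALUE
total_benefit = MONTHLY_DISUTILITY * MONTHS_REMAINING

print(f"Cost: ${total_cost:,.0f}  Benefit: ${total_benefit:,.0f}")
if total_benefit > total_cost:
    months_to_break_even = total_cost / MONTHLY_DISUTILITY
    print(f"Moving looks worth it; break-even in {months_to_break_even:.1f} months.")
```

The hard part, of course, is putting honest numbers on the monthly disutility — but even a rough guess usually makes the decision obvious in one direction or the other.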

Comment by lionhearted on Peter Thiel/Eric Weinstein Transcript on Growth, Violence, and Stories · 2019-08-31T15:25:22.401Z · score: 6 (3 votes) · LW · GW

I've transcribed a few things. This must have taken, like, a really long time.

Thank you. Both Thiel and Weinstein are terrific thinkers.

You already listed some highlights, but did you have a part that you felt was particularly insightful to you personally?

Comment by lionhearted on lionhearted's Shortform · 2019-08-31T09:15:46.402Z · score: 9 (6 votes) · LW · GW

"Decisionmaking", I think, works better as a single compound word than as two words.

I think it's more like "treehouse" than "tree house." I write it as such, and hope it catches on.

First — Agree/disagree?

Second — Any thoughts on how to go about influencing usage, dictionaries, and autocorrect over time?

Comment by lionhearted on What product are you building? · 2019-07-05T05:58:52.507Z · score: 6 (4 votes) · LW · GW

Fascinating. The word "project" encapsulates a lot of these — from software to a party — while establishing norms without a more tangible/legible "stake in the ground" (say, starting a Running Club) seems more ephemeral, and we don't really have good words for it.

Thought-provoking post here. There's a gap in the vocabulary around this.

Comment by lionhearted on On the Regulation of Perception · 2019-03-10T23:05:17.212Z · score: 14 (4 votes) · LW · GW

Useful post, cheers — it seems someone has trodden this ground before.

Just got the book, it's on incidentally —

Comment by lionhearted on On the Regulation of Perception · 2019-03-10T18:01:17.756Z · score: 2 (1 votes) · LW · GW
What do you mean by “when we eat we regulate perception”?

I think most people think of hunger like a gas gauge on a car — eating because the gauge is on "Empty", to fill it back up.

But, actually, we're eating to change our perception — changing from the "I perceive myself to be hungry" to that not being the case any more.

The problem is that that might not map to actual nutritional needs, desired life/lifestyle, biochemistry, body composition, etc etc.

Comment by lionhearted on Less Competition, More Meritocracy? · 2019-01-22T04:08:15.636Z · score: 8 (3 votes) · LW · GW

"Much ado about nothing" — I think this is the most quotable thing you've ever written.

Appease or wipe out, perverse desperadoes, etc etc.

Anyways — exceptional piece. Feels like classic Zvi deep analysis as applied to high-leverage non-constructed scenarios. Or rather, how to turn a draft into constructed without participants knowing. One marvels at what kind of win rate would be possible if this can be successfully executed...

Comment by lionhearted on 18-month follow-up on my self-concept work · 2018-12-19T13:28:11.364Z · score: 21 (6 votes) · LW · GW

First — congratulations.

Second — an observation and a bit of an abstract question.

Observation: it seems to me that it's often the most introspective, pro-social, and thoughtful people who wrestle with things like shame and a potentially damaging self-concept.

Can you think of why that might be true? Obviously I don't know you super well, but you always came across like a very admirable person to me; i.e., exactly the type of person that would benefit least from rumination and feelings of shame or anxiety that might lead to some sort of paralysis.

It seems to me that the more pro-social, reflective, and thoughtful someone is, the better it would be for society for that person to go confidently through life, no? Yes, of course, everyone gets some stuff wrong, and you don't want to shut down introspection, but... I wonder why this is? Is it that being very thoughtful causes both pro-sociality and rumination/shame/anxiety? Or that going through a round of heavy rumination makes one more pro-social? Or that becoming pro-social leads one to higher standards and more rumination?

Trying to navigate the cause-and-effect a little bit, but it seems like a darn shame to me.

Congrats again, of course — and any thoughts on why the general case occurs?

Comment by lionhearted on Peanut Butter · 2018-12-12T14:47:04.203Z · score: 13 (2 votes) · LW · GW

Well said.

On the Nietzsche front, "Formerly all the world was insane" is certainly remarkable. "Follies and Crimes" and "Galaxies of Joy" are both right up there with it.

Here's Galaxies —

“What? The final aim of science should be to give man as much pleasure and as little displeasure as possible? But what if pleasure and displeasure are so intertwined that whoever wants as much as possible of one must also have as much as possible of the other — that whoever wants to learn to ‘jubilate up to the heavens’ must also be prepared for ‘grief unto death’? And that may well be the way things are! […] Even today you still have the choice: either as little displeasure as possible, in short, lack of pain — and socialists and politicians of all parties fundamentally have no right to promise any more than that — or as much displeasure as possible as the price for the growth of a bounty of refined pleasures and joys that hitherto have seldom been tasted. Should you decide on the former, i.e. if you want to decrease and diminish people’s susceptibility to pain, you also have to decrease and diminish their capacity for joy. With science one can actually promote either of these goals! So far it may still be better known for its power to deprive man of his joys and make him colder, more statue-like, more stoic. But it might yet be found to be the great giver of pain! — And then its counterforce might at the same time be found: its immense capacity for letting new galaxies of joy flare up!”

— Nietzsche, The Gay Science, 1882

Comment by lionhearted on Public Positions and Private Guts · 2018-10-17T03:44:29.380Z · score: 2 (1 votes) · LW · GW

Good call. Replied above to Vaniver on this point.

Comment by lionhearted on Public Positions and Private Guts · 2018-10-17T03:41:41.774Z · score: 5 (2 votes) · LW · GW

Yeah, that's it — IDC, circling, etc are things I'm peripherally aware of but which I haven't tried and which aren't really contextualized; it felt sort of like, "If you know this, here's how they connect; if not, well, you could go find out and come back." I also got the feeling that 'Agenty Duck' was more significant than the short description of it, but I hadn't come across that before and felt like I was probably missing something.

I think the biggest issue, actually, wasn't the specific technical terms I knew I wasn't fully up to speed on, but rather words like "coherence" — I wasn't sure whether a formal definition/exploration I hadn't heard of was being alluded to, or whether it carried just the plain English meaning. So my trust in my own ability to read the piece correctly really started to decrease at the end of the "Public Guts" section — I wasn't sure which words/phrases were technical terms I wasn't up to speed on, and which had just the plain English meaning that I could read with natural assumptions and keep going.

Even then, I still got a lot of it — just wanted to point this out since I liked the piece a lot. Also, it does make sense much of the time to write for a maximally informed audience to push the field forwards; this community and the world at large certainly benefit from both technical pieces that assume context and more "spell it out for you" materials.

Comment by lionhearted on Public Positions and Private Guts · 2018-10-14T19:04:54.550Z · score: 17 (5 votes) · LW · GW

I enjoyed this tremendously.

Small feedback: this post is a mix of fundamentals and good introductions to key concepts, but also seems to assume a very high level of knowledge of the norms and current recent terminology and developments in the rationality community.

I'm probably among the top 5-10% in time spent in the community (I read all the sequences in real time when Eliezer was originally writing them at Overcoming Bias), but I'm certainly not in the top 1-2%, and there were a number of references I didn't get and therefore points I couldn't quite follow. Of course, I could dig up and cross-reference everything to work toward max understanding, but then, I just dropped in for 30 minutes here at LW before I'm about to get on a phone call.

If that's intentional, no problem. Just pointing it out so you can do a quick check on target audience. What I did get seemed really marvelous, at least a half-dozen very interesting ideas here.

Comment by lionhearted on Anti-social Punishment · 2018-09-30T23:21:48.225Z · score: 9 (2 votes) · LW · GW

I'm from Boston originally. Very interesting to note that Boston didn't score the highest in the non-punishment variant — it was high, but lower than Copenhagen — but scored at the top with punishment added.

That squares with my experience of Bostonians — reasonably friendly and pro-social, though not as much so as, say, Scandinavians, but very much willing to get righteous if someone is defecting, probably more so than Scandinavians.

But then, reasonably quick to forgive if someone did bad but gets with the program.

Or maybe I'm flattering my native city. But the results aren't a surprise given my intuition. (I wish I'd made a prediction about how Boston would come out before reading the results, but alas, missed opportunity there.)

Comment by lionhearted on Team Cohesion and Exclusionary Egalitarianism · 2018-09-18T03:38:47.676Z · score: 2 (1 votes) · LW · GW

Nice. Thanks. I'm due to learn Markdown...

Comment by lionhearted on A Dialogue on Rationalist Activism · 2018-09-15T22:38:30.731Z · score: 9 (2 votes) · LW · GW

I came to write that exact comment —

"That was a really fun read."

Nothing more substantive for now. Fun, for sure, though.

Comment by lionhearted on On Robin Hanson’s Board Game · 2018-09-09T02:33:43.233Z · score: 14 (4 votes) · LW · GW

Loved this. The vast majority of analysis on games is shallow, tending to look at the stated rules and explicit mechanics, and ignoring derived and implied rules (the vastly different property value-payoff heatmaps in Monopoly, the "genre awareness" here), ignoring tempo/timing issues, and ignoring win conditions / endgame considerations.

I love when stuff like this gets boiled down elegantly:

>You want to accumulate contracts in suspects you ‘like’ (which mostly means the ones you think are good bets), so you can get ‘control’ of one or more of them. Control means that if they did it, you win.

Ah, cool, that's a win condition.

And then the logical corollaries:

>... Suppose you’re a poor player. ... A basic gambit would be to buy up all the contracts you can of a suspect everyone has dismissed. Even if there are very good reasons they seem impossible and are trading at $5, you can still get enough of them to win if it works out, and you might have no competition since players in better shape have juicier targets. Slim chance beats none. But if even that’s out of the question, you’ll have to rebuild your position. You will need to speculate via day trading. Any other play locks in a loss.

And tempo:

>This is a phenomenon we see in many strategic games. Early in the game, players mostly are concerned about getting good value, as most assets are vaguely fungible and will at some point be useful or necessary. As the game progresses, players develop specialized strategies, and start to do whatever they need to do to get what they need, often at terrible exchange rates, and dump or ignore things they don’t need, also at terrible exchange rates.

I love this stuff. It's so cool. Hats off.

Incidentally, did you ever read "The best Puerto Rico strategy"?

A gem of a broad overview that similarly looks in depth across the whole stack. Puerto Rico is a wonderful game in that it's incredibly simple and satisfying for new players — no overtly destructive actions against fellow players, everyone gets a turn, you're always building up your island no matter what — but has incredibly deep mathematical and behavioral complexity underneath the hood. Rare that games can hit both of those notes.

Definitely recommend that one if you haven't read it. Seriously, huge respect and hats off for this article — being able to intuit game dynamics and strategic and tactical considerations, and to elucidate win conditions and relevant play adjustments when winning or losing, before having played the game extensively, is... damn impressive.

I'm totally hoping you teach a class on game analysis at Stanford or MIT or whatever someday, and put the lectures online. I'd watch 'em. What you do is, frankly, really damn impressive. No idle flattery, but where do you reckon you're at percentile-wise in this skill? Top 0.00001% of analysts/theorists would be 1 in 10 million or so. There are probably not more than 30 better game analysts in the USA than you, no? No flattery, more like a factual statement. If someone gave me an under/over bet of "100 more talented game analysts than Zvi in the USA", I'm leaning super strongly to the under; at an under/over of 1000 people, I take the under without hesitating.

Comment by lionhearted on Secondary Stressors and Tactile Ambition · 2018-07-14T18:43:08.922Z · score: 2 (1 votes) · LW · GW

Very useful concept and phrase. Thanks.

Comment by lionhearted on Secondary Stressors and Tactile Ambition · 2018-07-14T12:33:42.950Z · score: 2 (1 votes) · LW · GW

Interesting way of putting it.

It seems to me ambition differs slightly from motivation — ambition, I think, often includes some medium-intensity negative emotion with it — but, insightful take here.

Comment by lionhearted on Secondary Stressors and Tactile Ambition · 2018-07-14T12:32:15.985Z · score: 12 (3 votes) · LW · GW

This was my favorite disagreeing comment on this thread, and insightful.

Comment by lionhearted on Secondary Stressors and Tactile Ambition · 2018-07-14T12:31:55.244Z · score: 2 (1 votes) · LW · GW

That's one of my favorite essays, incidentally.

That said, I'm not going for poetics or linguistic beauty — I'm looking for an easily-used technical term.

I'm not particularly attached to "secondary stressors" — I just want a precise phrase for the phenomenon. Other people in the thread proposed other ones, EX "worrying about worrying" (which is close but I think again not as precise).

Comment by lionhearted on Secondary Stressors and Tactile Ambition · 2018-07-13T16:23:37.200Z · score: 11 (2 votes) · LW · GW

I respectfully disagree.

You seem to be saying that you prefer general words that encompass many concepts rather than specific and more precise words. EX:

>Did you just come up with a new way to say "motivation"? It's true that some people might get a quick boost from that.

Are motivation and ambition the same thing? I don't think so. It seems to me that ambition typically carries a certain lack alongside it; it most typically occurs with some medium-intensity negative emotion.

It's very possible to say someone is motivated to throw a birthday party for their son or daughter, but you wouldn't usually say they're "ambitious to throw a birthday party" — while ambition in its various forms (abstract or tactile) might be a subset of motivation, maybe, I think there's a useful distinction there.

Of course, the key is having language that works for you — if it doesn't work for you, by all means don't use it.