Posts

"Win First" vs "Chill First" 2020-09-28T06:48:21.511Z
Enforcing Type Distinction 2020-07-31T11:39:20.026Z
Running the Stack 2019-09-26T16:03:46.518Z
lionhearted's Shortform 2019-08-31T09:15:46.049Z
Drive-By Low-Effort Criticism 2019-07-31T11:51:37.844Z
On the Regulation of Perception 2019-03-09T16:28:19.887Z
Team Cohesion and Exclusionary Egalitarianism 2018-09-17T04:48:33.894Z
Secondary Stressors and Tactile Ambition 2018-07-13T00:26:23.561Z
Putting Logarithmic-Quality Scales On Time 2018-07-08T15:00:37.568Z
A Short Celebratory / Appreciation Post 2018-05-23T00:02:18.423Z
Some Simple Observations Five Years After Starting Mindfulness Meditation 2018-04-19T22:28:47.338Z
Explicit and Implicit Communication 2018-03-21T08:58:34.415Z
"Just Suffer Until It Passes" 2018-02-12T04:01:13.922Z
Fashionable or Fundamental Thought in Communities 2018-01-19T09:03:48.109Z
Success and Fail Rates of Monthly Policies 2017-12-09T15:24:37.148Z
Doing a big survey on work, stress, and productivity. Feedback / anything you're curious about? 2017-08-29T14:19:37.241Z
Perhaps a better form factor for Meetups vs Main board posts? 2016-01-28T11:50:20.360Z
Crossing the History-Lessons Threshold 2014-10-17T00:17:42.822Z
Flashes of Nondecisionmaking 2014-01-27T14:30:26.937Z
Confidence In Opinions, Intensity In Opinion 2013-09-04T16:56:17.883Z
Reflective Control 2013-09-02T17:45:58.356Z
A Rational Approach to Fashion 2011-10-10T18:53:00.594Z
"Technical implication: My worst enemy is an instance of my self." 2011-09-22T08:46:49.941Z
Malice, Stupidity, or Egalité Irréfléchie? 2011-06-13T20:57:06.178Z
Chemicals and Electricity 2011-05-09T17:55:25.123Z
The Cognitive Costs to Doing Things 2011-05-02T09:13:17.840Z
Convincing Arguments Aren’t Necessarily Correct – They’re Merely Convincing 2011-04-25T12:43:07.217Z
Defecting by Accident - A Flaw Common to Analytical People 2010-12-01T08:25:47.450Z
"Nahh, that wouldn't work" 2010-11-28T21:32:09.936Z
Reference Points 2010-11-17T08:09:04.227Z
Activation Costs 2010-10-25T21:30:58.150Z
The Problem With Trolley Problems 2010-10-23T05:14:07.308Z
Collecting and hoarding crap, useless information 2010-10-10T21:05:51.331Z
Steps to Achievement: The Pitfalls, Costs, Requirements, and Timelines 2010-09-11T22:58:38.145Z
A "Failure to Evaluate Return-on-Time" Fallacy 2010-09-07T19:01:42.066Z

Comments

Comment by lionhearted on What is the right phrase for "theoretical evidence"? · 2020-11-02T19:50:34.594Z · LW · GW

First, I love this question.

Second, this might seem way out of left field, but I think this might help you answer it —

https://en.wikipedia.org/wiki/B%C3%BCrgerliches_Gesetzbuch#Abstract_system_of_alienation

One of the BGB's [editor: the German Civil Law Code] fundamental components is the doctrine of abstract alienation of property (German: Abstraktionsprinzip), and its corollary, the separation doctrine (Trennungsprinzip). Derived from the works of the pandectist scholar Friedrich Carl von Savigny, the Code draws a sharp distinction between obligationary agreements (BGB, Book 2), which create enforceable obligations, and "real" or alienation agreements (BGB, Book 3), which transfer property rights. In short, the two doctrines state: the owner having an obligation to transfer ownership does not make you the owner, but merely gives you the right to demand the transfer of ownership.

I have an idea of what might be going on here with your question.

It might be the case that there are two fairly tightly bound — yet slightly distinct — components in your conception of "theoretical evidence."

I'm having a hard time finding the precise words, but something around evidence, which behaves more-or-less similarly to how we typically use the phrase, and something around... implication, perhaps... inference, perhaps... something to do with causality or prediction... I'm having a hard time finding the right words here, but something like that.

I think it might be the case that these components are quite tightly bound together, but can be profitably broken up into two related concepts — and thus, being able to separate them BGB-style might be a sort of solution.

Maybe I'm mistaken here — my confidence isn't super high, but when I thought through this question the German Civil Law concept came to mind quickly. 

It's profitable reading, anyways — BGB I think can be informative around abstract thinking, logic, and order-of-operations. Maybe intellectually fruitful towards your question or maybe not, but interesting and recommended either way.

Comment by lionhearted on On Destroying the World · 2020-09-29T01:32:01.434Z · LW · GW

Good points.

I'll review and think more carefully later — out at dinner with a friend now — but my quick thought is that the proper venue, time, and place for expressing discontent with a cooperative community project is probably afterwards, possibly beforehand, and certainly not during... I don't believe in immunity from criticism, obviously, but I am against defection when one doesn't agree with a choice of norms.

That's the quick take, will review more closely later.

Comment by lionhearted on On Destroying the World · 2020-09-28T22:45:26.525Z · LW · GW

Hey - to preface - obviously I'm a great admirer of yours Kaj and I've been grateful to learn a lot from you, particularly in some of the exceptional research papers you've shared with me.

With that said, of course your emotions are your own but in terms of group ethics and standards, I'm very much in disagreement.

The upset feels similar to what I've previously experienced when something that's obviously a purely symbolic gesture is treated as a Big Important Thing That's Actually Making A Difference.

On the one hand, you're totally right. On the other hand, basically the entire world is made up of abstractions along these lines. What can the Supreme Court opinion in Marbury v. Madison be recognized as other than a purely symbolic gesture? Madison wasn't going to deliver the commissions, Justice Marshall (no relation) knew that for sure, and he made a largely symbolic gesture in how he navigated the thing. It had no practical importance for a long time, but it now forms one of the foundations of American jurisprudence, indirectly affecting billions of lives. If you dig into the history, though, at the time it really was largely symbolic.

The world is built out of all sorts of abstract symbolism and intersubjective convention. 

That by itself wouldn't trigger the reaction; the world is full of purely symbolic gestures that are claiming to make a difference, but they mostly haven't upset me in a long time. Some of the communication around Petrov Day has. I think it's because of a sense that this idea is being pushed on people-that-I-care-about as something important despite not actually being in accordance to their values, and that there's social pressure for people to be quiet about it and give in to the social pressure at a cost to their epistemics.

Canonical reply is this one:

https://www.lesswrong.com/s/pvim9PZJ6qHRTMqD3/p/7FzD7pNm9X68Gp5ZC

("Canonical" was intentionally chosen, incidentally.)

I feel like Oliver's comment is basically saying "people should have taken this seriously and people who treat this light-heartedly are in the wrong". It's spoken from a position of authority, and feels like it's shaming people whose main sin is that they aren't particularly persuaded by this ritual actually being significant, as no compelling reason for this ritual actually being significant has ever been presented.

https://www.lesswrong.com/posts/tscc3e5eujrsEeFN4/well-kept-gardens-die-by-pacifism

From Well-Kept Gardens:

In any case the light didn't go on in my head about egalitarian instincts (instincts to prevent leaders from exercising power) killing online communities until just recently.  [...] I have seen rationalist communities die because they trusted their moderators too little.

Honestly, for anything that wasn't clearly egregiously wrong, I'd support the leadership team on here even if my feelings ran in a different direction. Like, leadership is hard. Really really really hard. If there was something I didn't believe in, I'd just quietly opt out. 

Now, I fully understand I'm in the minority on this position — but I'm against 'every interpretation is valid'-type thinking (why would every interpretation be valid as it relates to a group activity where your behavior affects the whole group?).

Likewise, pushing back against "shaming people whose main sin is that they aren't particularly persuaded by this ritual actually being significant" — isn't that actually both good and necessary if we want to be able to coordinate and actually solve problems?

There's a dozen or so Yudkowsky citations about this. Here's another:

https://www.lesswrong.com/posts/KsHmn6iJAEr9bACQW/bayesians-vs-barbarians

Let's say we have two groups of soldiers.  In group 1, the privates are ignorant of tactics and strategy; only the sergeants know anything about tactics and only the officers know anything about strategy.  In group 2, everyone at all levels knows all about tactics and strategy.

Should we expect group 1 to defeat group 2, because group 1 will follow orders, while everyone in group 2 comes up with better ideas than whatever orders they were given?

In this case I have to question how much group 2 really understands about military theory, because it is an elementary proposition that an uncoordinated mob gets slaughtered.

And finally,

Now it may be the case - a more agreeable part of me wants to interject - that this ritual actually is important, and that it should be treated as more than just a game.

But.

If so, I have never seen a particularly strong case being made for it.

I made that case last year extensively:

https://www.lesswrong.com/posts/vvzfFcbmKgEsDBRHh/honoring-petrov-day-on-lesswrong-in-2019?commentId=ZZ87dbYiGDu6uMtF8

I even did, like, math and stuff. The "shut up and multiply" thing.

Long story short — I think shared trust and demonstrated cooperation are super valuable, good leadership is incredibly underappreciated, and whimsical defection is really bad.

Again though — all written respectfully, etc etc, and I know I'm in the minority position here in terms of many subjective personal values, especially harm/care and seriousness/fun.

Finally, my estimate of the potential utility of building out a base of successfully navigated low-stakes cooperative endeavors is undoubtedly multiple orders of magnitude higher than others'. I put the dollar-value of that as, actually, pretty high. Reasonable minds can differ on many of these points, but that's my logic.

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-28T22:16:09.526Z · LW · GW

Ah, I see, I read the original version partially wrong, my mistake. We're in agreement. Regards.

Comment by lionhearted on On Destroying the World · 2020-09-28T22:08:37.820Z · LW · GW

Hmm. Appreciate your reply. I think there's a subtle difference here, let me think about it some.

Hmm.

Okay.

Thrashing it out a bit more, I do think a lot of semi-artificial situations are predictive of future behavior. 

Actually, to use an obviously extreme example that doesn't universally apply, that's more-or-less the theory behind the various Special Forces selection procedures —

https://bootcampmilitaryfitnessinstitute.com/media/tv-documentaries/elite-special-forces-documentaries/

As opposed to someone artificially creating a conflict to see how the other party navigates it — which I'm not at all a fan of — I think exercises in shared trust have both predictive value for future behavior and build good team cohesion when overcome.

I'd be interested to hear various participants' and observers' takes on the actual impact of this event

Me too, but I'd ideally want the data captured semi-anonymously. Most people, especially effective people, won't comment publicly "I think this is despicable and have incremented downwards various confidences in people as a result" whereas the "aww it's ok, no big deal" position is much more easily vocalized.

(Personally, I'm trying to tone down that type of vocalization myself. It's unproductive on an individual level — it makes people dislike you for minimal gain. But I speculate that the absence of that level of dialogue and expression of genuine sentiment potentially leads to evaporative cooling of people who believe in teamwork, mission, mutual trust, etc.)

Reasonable minds can differ on this and related points, of course. And I'm very aware my values diverge a bit from many here, again around stuff like seriousness/camaraderie/cohesion/intensity/harm-vs-care/self-expression/defection/etc. 

Comment by lionhearted on "Win First" vs "Chill First" · 2020-09-28T21:27:43.202Z · LW · GW

Great comment. Insightful phrasing, examples, and takeaways. Thank you.

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-28T21:26:22.289Z · LW · GW

Two thoughts —

(1) Some sort of polling or surveying might be useful. In the Public Goods Game, researchers rigorously check whether participants understand the game and its consequences before including them in datasets. It's quite possible that there are incredibly divergent understandings of Petrov Day among the user population. Some sort of surveying would be useful to understand that, as well as things like people's sentiments towards unilateralist action, trust, etc., no? It'd be self-reported data, but it'd be better than nothing.

(2) I wonder how Petrov Day setup and engagement would change if the site went down for a month as a consequence.

Comment by lionhearted on "Win First" vs "Chill First" · 2020-09-28T21:05:53.935Z · LW · GW

Interesting thought yeah.

My first guess is there's some overlap but it's slightly orthogonal — btw, it might not have come across in the original post, but Butler is a really well-loved teammate who is happy to defer to other guys on his team, set them up for success, etc. He doesn't need to be "the guy" any given night — he just wants his team to win, with a rather extreme fervor about it.

Comment by lionhearted on On Destroying the World · 2020-09-28T20:41:13.552Z · LW · GW

I honestly don't get it - do you have a link to the previous discussion that justified why anyone's taking it all that seriously?

Here was my analysis last year —

https://www.lesswrong.com/posts/vvzfFcbmKgEsDBRHh/honoring-petrov-day-on-lesswrong-in-2019?commentId=ZZ87dbYiGDu6uMtF8

In fairness, my values diverge pretty substantially from a lot of the community here, particularly around "life is serious" vs "life isn't very serious" and the value of abstract bonds/ties/loyalties/camaraderie. 

Comment by lionhearted on On Destroying the World · 2020-09-28T11:03:04.605Z · LW · GW

You're being very kind in far-mode consequentialism here, but come on now.

Making your friend look foolish in front of thousands of people is bad etiquette in most social circles.

Comment by lionhearted on On Destroying the World · 2020-09-28T09:56:42.673Z · LW · GW

Why would there be?

Different social norms, I suppose. 

I'm trying to think if  we ever prank each other or socially engineer each other in my social circle, and the answer is yes but it's always by doing something really cool — like, an ambiguous package shows up but there's a thoughtful gift inside. 

(Not necessarily expensive — a friend found a textbook on Soviet accounting for me, I got him a hardcover copy of Junichi Saga's Memories of Silk and Straw. Getting each other nice tea, coffee, soap, sometimes putting it in a funny box so it doesn't look like what it is. Stuff like that. Sometimes nicer stuff, but it's not about the money.)

Then I'm trying to think how my circle in general would respond to no-permission-given, out-of-scope pranking of someone's real-life community that they're a member of — and yeah, there'd be pretty severe consequences in my social circle if someone did that. If a current friend or acquaintance of mine did what your buddy did, they'd be marked as someone incredibly discourteous and much less trustworthy. It would just get marked as... pointless rude destructive behavior.

And my circle is pretty tech-heavy, btw — we do joke around a lot; it's just that when we do pranks, there's almost always a gift or something uplifting at the end.

I don't mean this to be blunt btw, I just re-read it before posting and it reads more blunt than I meant it to — I was just running through whether this would happen in my social circle, I ran it out mentally, and this is what I came up with.

Obviously, everyone's different. And that's of course one of the reasons it's hard for people to get along. Some sort of meta-lesson, I suppose.

Comment by lionhearted on On Destroying the World · 2020-09-28T07:49:31.637Z · LW · GW

Umm. Grudgingly upvoted. 

(For real though, respect for taking the time to write an after-action report of your thinking.)

I was tricked by one of my friends:

Serious question - will there be any consequences for your friendship, you think?

Comment by lionhearted on "Win First" vs "Chill First" · 2020-09-28T07:18:20.193Z · LW · GW

It'd take a few paragraphs to tell the whole story if you don't already follow basketball, but this —

https://www.cbssports.com/nba/news/76ers-coach-brett-brown-wants-ben-simmons-taking-at-least-one-3-pointer-a-game-moving-forward/

Long story really short, the 76ers have a player who is an incredible athlete but doesn't feel comfortable taking jump shots far away from the basketball hoop.

Thus, defenses can ignore him when he's out on the perimeter.

His coach told him publicly to take one 3-point shot per game. Coach said he doesn't even care if he hits it or not.

The player basically refused to do it. 

It's more detailed than that, but the 80/20 is a young incredible athlete with immense potential on the team refused to follow his coach's (incredibly reasonable) instruction. 

In most sports and at most levels of play in sports, that'd get you benched by the coach. 

But in the NBA, when a coach and star player feud, the coach gets fired around 9 times out of 10. (The other time, the star player gets traded. But the coach usually gets fired first in the NBA.)

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-28T07:04:14.664Z · LW · GW

So, I think it's important that LessWrong admins do not get to unilaterally decide that You Are Now Playing a Game With Your Reputation. 

Dude, we're all always playing games with our reputations. That's, like, what reputation is.

And good for Habryka for saying he feels disappointment at the lack of thoughtfulness and reflection; it's very much not just permitted but almost mandated by the founder of this place —

https://www.lesswrong.com/posts/tscc3e5eujrsEeFN4/well-kept-gardens-die-by-pacifism

https://www.lesswrong.com/posts/RcZCwxFiZzE6X7nsv/what-do-we-mean-by-rationality-1

Here's the relevant citation from Well-Kept Gardens:

I confess, for a while I didn't even understand why communities had such trouble defending themselves—I thought it was pure naivete.  It didn't occur to me that it was an egalitarian instinct to prevent chieftains from getting too much power.

This too:

I have seen rationalist communities die because they trusted their moderators too little.

Let's give Habryka a little more respect, eh? Disappointment is a perfectly valid thing to be experiencing and he's certainly conveying it quite mildly and graciously. Admins here did a hell of a job resurrecting this place from the dead; to express very mild disapproval at a lack of thoughtfulness during a community event is... well, that seems very much on-mission, at least according to Yudkowsky.

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-28T06:53:03.351Z · LW · GW

Y'know, there was a post I thought about writing up, but then I was going to not bother to write it up, but I saw your comment here H and "high level of disappointment reading this response"... and so I wrote it up.

Here you go:

https://www.lesswrong.com/posts/scL68JtnSr3iakuc6/win-first-vs-chill-first

That's an extreme-ish example, but I think the general principle holds to some extent in many places.

Comment by lionhearted on Against Victimhood · 2020-09-20T11:42:34.764Z · LW · GW

Yeah, I have first-pass intuitions but I genuinely don't know.

In an era with both more trustworthy scholarship (replication crisis, etc.) and less polarization, I think this would actually be an amazing topic for a variety of longitudinal studies.

Alas, probably not possible right now.

Comment by lionhearted on Against Victimhood · 2020-09-19T20:34:43.076Z · LW · GW

Respectfully — and I do mean this respectfully — I think you're talking completely past Jacob and missed his point.

Your comment starts:

How much your life is determined by your actions, and how much by forces beyond your control, that is an empirical question. You seem to believe it's mostly your actions.

But Jacob didn't say that.

You're inferring something he didn't say — actually, you're inferring something that he explicitly disclaimed against.

Here's the opening of his piece right after the preface; it's more-or-less his thesis:

What’s bad about victim mentality? Most obviously, inhabiting a narrative where the world has committed a great injustice against which you are helpless against is extremely distressing. Whether the narrative is justified or not, it causes suffering.

(Emphasis added.)

You made some other interesting points, but I don't think he was trying to ascribe macro-causality to internal or external factors. 

He was saying, simply, in 2020-USA he thinks you'll get both (1) better practical outcomes and (2) better wellbeing if you eschew what he calls victim mentality. 

He says it doesn't apply universally (eg, Ancient Sparta). 

And he might be right or he might be mistaken.

But that's broadly what his point was.

For whatever reason, you're inferring something that isn't what he said — something he actually pretty much said he didn't believe — and then you went from there.

Comment by lionhearted on Most Prisoner's Dilemmas are Stag Hunts; Most Stag Hunts are Battle of the Sexes · 2020-09-16T12:50:56.167Z · LW · GW

Going through these now. I started with #3. It's astoundingly interesting. Thank you.

Comment by lionhearted on The Fusion Power Generator Scenario · 2020-08-15T13:10:25.344Z · LW · GW

Hmm. I'm having a hard time writing this clearly, but I wonder if you could get interesting results by:

  • Training on a wide range of notably excellent papers from "narrow-scoped" domains,
  • Training on a wide range of papers that explore "we found this worked in X field, and we're now seeing if it also works in Y field" syntheses,
  • Then giving GPT-N prompts to synthesize narrow-scoped domains in which that hasn't been done yet.

You'd get some nonsense, I imagine, but it would probably at least spit out plausible hypotheses for actual testing, eh?

Comment by lionhearted on Free Money at PredictIt? · 2020-08-13T10:27:18.622Z · LW · GW

By the way, wanted to say this caught my attention and I did this successfully recently on this question —

https://www.predictit.org/markets/detail/5883/Who-will-win-the-2020-Democratic-vice-presidential-nomination

Combined probabilities were over 110%, so I went "No" on all candidates. Even with PredictIt's 10% fee on winning, I was guaranteed to make a tiny bit on any outcome. If a candidate not on the list was chosen, I'd have made more.
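To make the arithmetic concrete, here's a toy sketch of that "No on everything" position — with hypothetical prices, not the actual VP-market prices, and a simplified model that taxes the profit on each winning contract at 10% (PredictIt's real fee accounting has more wrinkles):

```python
# Toy model: a market where exactly one candidate resolves Yes.
# A "No" share on a candidate costs (1 - yes_price) and pays $1 if that
# candidate loses; PredictIt keeps 10% of the profit on winning shares.

def no_arbitrage_payoffs(yes_prices, fee=0.10):
    """Net profit for each possible winner, holding one No share per candidate."""
    cost = sum(1 - p for p in yes_prices)  # total spent buying every No share
    profits = []
    for winner in range(len(yes_prices)):
        gross = 0.0
        for i, p in enumerate(yes_prices):
            if i != winner:
                # Stake (1 - p) returned, plus profit p taxed at the fee rate.
                gross += (1 - p) + p * (1 - fee)
        profits.append(gross - cost)
    return profits

# Hypothetical book: implied Yes probabilities sum to 115%.
prices = [0.40, 0.30, 0.20, 0.15, 0.10]
print(no_arbitrage_payoffs(prices))  # positive for every possible winner
```

With these made-up prices the position profits no matter who wins; if the implied probabilities summed to only slightly over 100%, the 10% fee would eat the edge, which is why the overround has to be comfortably above the fee before this works.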

My market investment came out to ($0.43) — that's negative 43 cents; ie, no capital required to stay in it — on 65 no shares across the major candidates. (I'd have done more, but I don't understand how the PredictIt $850 limit works yet and I didn't want to wind up not being able to take all positions.) 

I need to figure out how the $850 limit works in practice soon — is it 850 shares, $850 at risk, $850 max payout, or.....? Kinda unclear from their documentation, will do some research.

But yeah, it was fun and it works. Thanks for pointing this out.

Comment by lionhearted on You Need More Money · 2020-08-13T06:57:08.182Z · LW · GW

This is an interesting post — you're covering a lot of ground in a wide-ranging fashion. I think it's a virtual certainty that you'll come up with some interesting and very useful points, but a quick word of caution — I think this is an area where "mostly correct" theory can be a little dangerous.

Specifically:

>If you earn 4% per year, then you need the aforementioned $2.25 million for the $90,000 half-happiness income. If you earn 10% per year, you only need $900,000. If you earn 15% per year, you only need $600,000. At 18% you need $500,000; at 24% you need $375,000. And of course, you can acquire that nest egg a lot faster if you're earning a good return on your smaller investments. [...] I'm oversimplifying a bit here. While I do think 24% returns (or more!) are achievable, they would be volatile.

You're half correct here, but you might be making a subtle mistake — specifically, you might be using ensemble probability in a non-ergodic space.

Recommended reading (all of these can be Googled): safe withdrawal rate, expected value, variance, ergodicity, ensemble probability, Kelly criterion.

Specifically, naive expected value (EV) in investing tends to implicitly assume ergodicity; financial returns are non-ergodic; it's very possible to wind up broke with near certainty even with high returns if your amount of capital deployed is too low for the strategy you're operating.
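A toy simulation of that non-ergodicity point (using a made-up multiplicative bet, not any real investment): each round your wealth is multiplied by 1.5 or 0.6 with equal odds, so the per-round expected value is +5%, yet the time-average growth factor is √(1.5 × 0.6) ≈ 0.95 < 1, so almost every individual path goes broke:

```python
import random

random.seed(0)

def simulate(rounds=1000, paths=2000, up=1.5, down=0.6):
    """Final wealth of many independent players of a multiplicative coin-flip bet."""
    finals = []
    for _ in range(paths):
        w = 1.0
        for _ in range(rounds):
            w *= up if random.random() < 0.5 else down
        finals.append(w)
    return finals

finals = simulate()
# Per-round ensemble EV is 0.5*1.5 + 0.5*0.6 = 1.05 — "positive expectation" —
# but nearly every path ends effectively broke.
ensemble_mean = sum(finals) / len(finals)
ruined = sum(f < 0.01 for f in finals) / len(finals)
print(f"sample mean wealth: {ensemble_mean:.3g}, fraction effectively broke: {ruined:.1%}")
```

The ensemble average is carried by astronomically rare lucky paths that a finite sample almost never contains — which is roughly what "EV-positive but you still go broke with near certainty" means in practice.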

Yes, there's valid counter-counterarguments here but you didn't make any of them! The words/phrases safety, margin of safety, bankroll, ergodicity, etc etc didn't show up.

The best counterargument is probably low-capital-required arbitrage such as what Zvi described here; indeed, I followed his line of thinking and personally recently got pure arbitrage on this question — just for the hell of it, on nominal money. It's, like, a hobby thing. [Edit: btw, thanks Zvi.] This is more-or-less only possible because some odd rules they've adopted for regulatory reasons and for UI/UX simplicity that result in some odd behavior.

Anyway, I digress; I like the general area of exploration you're embarking on a lot, but "almost correct" in finance is super dangerous and I wanted to flag one instance of that. Consistent high returns on a small amount of capital don't seem like a good strategy to me; further, if you can get 24%+ a year on any substantial volume, you should probably just stack up some millions for a few years and then rely on passive returns after that, without the intense amount of discipline needed to keep getting those returns (even setting aside ergodicity/bankroll issues).

Lynch's One Up on Wall Street is an excellent take by someone who actually managed to make those type of returns for multiple decades; it's not exactly something you do casually...

(Disclaimer: certainly not an expert, potentially some mistakes here, not comprehensive, etc etc etc.)

Comment by lionhearted on Sunday August 9, 1pm (PDT) — talks by elityre, jacobjacob, Ruby · 2020-08-09T17:52:01.657Z · LW · GW

Hi all,

I'm going to withdraw my talk for today — after doing some prep yesterday with Jacob and clarifying everyone's skill level and background, I put a few hours in and couldn't get to the point where I thought my talk would be great.

The quality level has been so uniformly high, I'd rather just leave more time for people to discuss and socialize than to lower the bar.

Apologies for any inconvenience, gratitude, and godspeed.

Comment by lionhearted on Reveal Culture · 2020-07-27T12:01:02.560Z · LW · GW

Incredibly thought-provoking.

Thank you.

Reading this made me think about my own communication styles.

Hmm.

After some quick reflection: among people I know well, I think I actually oscillate between two styles — on the one hand, something very close to Ray Dalio's Bridgewater norms (think "radical honesty but with more technocracy, ++logos/--pathos").

On the other hand, a near-polar opposite in Ishin-denshin — a word that's so difficult to translate from Japanese that one of the standard "close enough" definitions for it is..... "telepathy."

No joke.

Almost impossible to explain briefly; heck, I'm not sure it could be explained in 7000 words if you hadn't immersed yourself in it at least a substantial amount and studied Japanese history and culture additionally after the immersion.

But it's really cool when it works.

Hmm... I've never really reasoned through how and why I utilize those two styles — which are so very different on the surface — but my quick guess is that they're both really, really efficient when running correctly. 

Downside — while both are easy and comfortable to maintain once built, they're expensive and sometimes perilous to build.

Some good insights in here for further refinement and thinking — grateful for this post, I'll give this a couple hours of thought at my favorite little coffee bar next weekend or something.

Comment by lionhearted on Swiss Political System: More than You ever Wanted to Know (I.) · 2020-07-20T12:32:44.890Z · LW · GW

> Very good post, highly educational, exactly what I love to see on LessWrong.

Likewise — I don't have anything substantial to add except that I'm grateful to the author. Very insightful.

Comment by lionhearted on Roll for Sanity · 2020-07-13T17:31:56.415Z · LW · GW

Interesting metaphor. Enjoyed it.

Comment by lionhearted on How to Find Sources in an Unreliable World · 2020-07-04T18:32:44.216Z · LW · GW

The quality I'm describing isn't quite "readability" — it overlaps, but that's not quite it. 

Feynman has it —

http://www.faculty.umassd.edu/j.wang/feynman.pdf

It's hard to nail down; it'd probably take a very long essay to even try.

And it's not a perfect predictor, alas — just evidence.

But I believe there's a certain way to spot "good reasoning" and "having thoroughly worked out the problem" from one's writing. It's not the smoothness of the words, nor the simplicity.

It's hard to describe, but it seems somewhat consistently recognizable. Yudkowsky has it, incidentally.

Comment by lionhearted on How to Find Sources in an Unreliable World · 2020-07-02T21:02:22.702Z · LW · GW

I like to start by trying to find one author who has excellent thinking and see what they cite — this works for both papers and books with bibliographies, but increasingly other forms of media. 

For instance, Dan Carlin of the (exceptional and highly recommended) Hardcore History podcast cites all the sources he uses when he does a deep investigation of a historical era, which is a good jumping-off point if you want to go deep.

The hard part is finding that first excellent thinker, especially in a domain where you can't yet differentiate quality. But there are some general conventions of how smart thinkers tend to write and reason that you can learn to spot. There's a certain amount of empathy, clarity, and — for lack of a better word — "good aesthetics"; when those are present, the author tends to be smart and trustworthy.

The opposite isn't necessarily the case — there are good thinkers who don't follow those practices and are hard to follow (say, Laozi or Wittgenstein maybe) — but when those factors are present, I tend to weight the thinking well.

Even if you have no technical background at all, this piece by Paul Graham looks credible (emphasis added) —

https://sep.yimg.com/ty/cdn/paulgraham/acl1.txt?t=1593689476&

"What does addn look like in C?  You just can't write it.

You might be wondering, when does one ever want to do things like this?  Programming languages teach you not to want what they cannot provide.  You have to think in a language to write programs in it, and it's hard to want something you can't describe.  When I first started writing programs-- in Basic-- I didn't miss recursion, because I didn't know there was such a thing.  I thought in Basic. I could only conceive of iterative algorithms, so why should I miss recursion?

If you don't miss lexical closures (which is what's being made in the preceding example), take it on faith, for the time being, that Lisp programmers use them all the time.  It would be hard to find a Common Lisp program of any length that did not take advantage of closures.  By page 112 you will be using them yourself."
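(For the curious: the addn Graham mentions — a function that returns a new function which adds n — relies on a lexical closure. Here's a rough sketch in Python rather than Graham's original Lisp:)

```python
def addn(n):
    # The inner function "closes over" n, keeping it alive
    # after addn returns -- this is a lexical closure.
    def adder(x):
        return x + n
    return adder

add2 = addn(2)
print(add2(40))  # 42
```

It's exactly the thing Graham says you "just can't write" in C, since C functions can't capture local variables from an enclosing scope.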

When I spot that level of empathy/clarity/aesthetics, I think, "Ok, this person likely knows what they're talking about."

So, me, I start by looking for someone like Paul Graham or Ray Dalio or Dan Carlin, and then I look at who they cite and reference when I want to go deeper.

Comment by lionhearted on A reply to Agnes Callard · 2020-06-30T16:53:33.767Z · LW · GW

Hi Agnes, I just wanted to say — much respect and regards for logging on to discuss and debate your views.

Regardless of whether we agree (personally, I'm in partial agreement with you) — if more people would create accounts and engage thoughtfully in different spaces after sharing a viewpoint, the world would be a much better place.

Salutations and welcome.

Comment by lionhearted on What's Your Cognitive Algorithm? · 2020-06-19T12:45:41.878Z · LW · GW

I think you'd probably like the work of John Boyd:

https://en.wikipedia.org/wiki/John_Boyd_(military_strategist)

He's really interesting in that he worked on a mix of problems and areas with many different levels of complexity and rigor.

Notably, while he's usually talked about in terms of military strategy, he did some excellent work in physics that's fundamentally sound and still used in civilian and military aviation today:

https://en.wikipedia.org/wiki/Energy%E2%80%93maneuverability_theory

He was a skilled fighter pilot, so he was able both to learn theory and to convert it into tactile performance.

Then, later, he explored challenges in organizational structures, bureaucracy, decision making, corruption, consensus, creativity, inventing, things like that.

There's a good biography on him called "Boyd: The Fighter Pilot Who Changed the Art of War" - and then there's a variety of briefings, papers, and presentations he made floating around online. I went through a phase of studying them all; there's some gems there.

Notably, his "OODA" loop is often incorrectly summarized as a linear process but he defined it like this —

https://taskandpurpose.com/.image/c_fit%2Ccs_srgb%2Cfl_progressive%2Cq_auto:good%2Cw_620/MTcwNjAwNDYzNjEyMTI2ODcx/18989583.jpg

I think the most interesting part of it is under-discussed — the "Implicit Guidance and Control" aspect, where people can get into cycles of Observe/Act/Observe/Act rapidly without needing to intentionally orient themselves or formally make a decision.

Since he comes at it from a different mix of backgrounds with a different mix of ability to do formal mathematics or not, he provides a lot of insights. Some of his takeaways seem spot-on, but more interesting are the ways he can prime thinking on topics like these. I think you and he were probably interested in some similar veins of thought, so it might produce useful insights to dive in a bit.

Comment by lionhearted on Baking is Not a Ritual · 2020-06-03T00:04:30.573Z · LW · GW

Great post. 

I've seen recipes written in the precise ritualistic format many times, but rarely seen discussions on the chemistry patterns/etc — how do people typically learn the finer points?

I imagine there's some cookbooks / tutorials that go into the deeper mechanics — is it that, or learning from a knowledgeable baker that understands the mechanics, or...?

Comment by lionhearted on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-19T12:59:45.693Z · LW · GW

Agreed.

>I have a low prior they will show anything else other than "University is indeed confounded by IQ and/or IQ + income in money earning potential"

Probably also confounded by...

Networks (if you inherited a lot of social connections from your upbringing, university is less useful);

Exposure to certain types of ideas (we take the scientific method and "De Omnibus Dubitandum" for granted, but there are people who first encounter these ideas at university);

And most interestingly, whether particular institutions are good at helping students on rare habit formation (eg, MIT seems almost uniquely exceptional at inculcating "tinker with things quickly once you get an early understanding of them").

Actually, that last point — rare habit formation — might be where the lower Maslow's Hierarchy and higher Maslow's Hierarchy needs could meet each other. Alas, this seems an underexplored area that's arguably going in the wrong direction at many institutions...

Comment by lionhearted on Exercises in Comprehensive Information Gathering · 2020-02-19T12:54:17.883Z · LW · GW

Makes sense. This is probably worth a top level post? —

>People haven't had much time to figure out how to get lots of value out of the internet, and this is one example which I expect will become more popular over time.

Sounds obvious when put like that, but I think — as you implied — a lot of people haven't thought about it yet.

Comment by lionhearted on Exercises in Comprehensive Information Gathering · 2020-02-19T12:53:09.217Z · LW · GW

Ahh, great question.

I think eventually patterns start to emerge — so eventually, you start reading about the federalization of Chinese Law and you think, "ah, this is like German Unification with a few key differences."

While you do find rare outliers — the Ottoman legal system continues to fascinate me ( https://en.wikipedia.org/wiki/Millet_(Ottoman_Empire) ) — you eventually find that there's only a few major ways that legal systems have been formulated at larger modern country scales than earlier local scales.

Science, art, and sport are also ones I've delved into incidentally. And there's also some patterns there.

Comment by lionhearted on Exercises in Comprehensive Information Gathering · 2020-02-16T14:22:57.319Z · LW · GW

Phenomenal post.

I've done similarly. It's actually remarkable how little time it takes to overview the history of breakthroughs in a sub-field, or all the political and military leaders of an obscure country during a particular era, or the history of laws and regulations of a particular field.

Question to muse over —

Given how inexpensive and useful it is to do this, why do so few people do it?

Comment by lionhearted on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-16T13:47:06.534Z · LW · GW

Apprenticeship seems promising to me. It's died out in most of the world, but there's still formal apprenticeship programs in Germany that seem to work pretty well.

Also, it's a surprisingly common position among very successful people I know that young people would benefit from 2 years of national service after high school. It wouldn't have to be military service — it could be environmental conservation, poverty relief, Peace Corps type activities, etc.

We actually have reasonable control groups for this in both countries with mandatory national service and the Mormon Church, the majority of whose members go on a two-year mission. I haven't looked at hard numbers or anything, but my sense is that both countries with national service and Mormons tend to be more successful than similar cohorts that don't undergo such experiences.

Comment by lionhearted on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-16T13:42:26.410Z · LW · GW

Great post.

To one small point:

>After all there’s a surprising lack of studies (aka 0 that I could find, and I dug for them a lot) with titles around the lines of “Economic value of university degree when controlling for IQ, time lost and student debt”.

I'm reminded of Upton Sinclair's quote,

"It is difficult to get a man to understand something when his salary depends upon his not understanding it."

Comment by lionhearted on The Road to Mazedom · 2020-01-21T00:37:28.153Z · LW · GW

Just tracing the edges of hard problems is huge progress to solving them. Respect.

Comment by lionhearted on The Road to Mazedom · 2020-01-19T06:43:47.859Z · LW · GW

Two thoughts.

First, small technical feedback — do you think there's some classification of these factors, however narrow or broad, that could be sub-headlines?

For instance, #24 and #29 seem to be similar things:

#24 As the overall maze level rises, mazes gain a competitive advantage over non-mazes. 

#29 As maze levels rise, mazes take control of more and more of an economy and people’s lives.

As do #27 and #28:

#27: Mazes have reason to and do obscure that they are mazes, and to obscure the nature of mazes and maze behaviors. This allows them to avoid being attacked or shunned by those who retain enough conventional not-reversed values that they would recoil in horror from such behaviors if they understood them, and potentially fight back against mazes or to lower maze levels. The maze embracing individuals also take advantage of those who do not know of the maze nature. It is easy to see why the organizations described in Moral Mazes would prefer people not read the book Moral Mazes. 

#28: Simultaneously with pretending to the outside not to be mazes, those within them will claim if challenged that everybody knows they are mazes and how mazes work.

While it's hard to pin down exactly what the categories would be, it seems that the first cluster is about something like feedback loops and the second cluster is about something like deceit, self-deceit, etc.

The categories could even be very broad like "Inherent Biases", "Incentives and Rewards", "Feedback Loops", etc. Or they could be narrower. But it's difficult to follow a list of 37 propositions, some of which are relatively simple and self-contained while others are syntheses, conclusions, and extrapolations of previous points.

Ok, second thought —

This is all largely written from the point of view of how bad these things are as a participant. I bet it'd be interesting to flip the viewpoint and analysis and explore it from the view of a leader/executive/etc who was trying to forestall these effects.

For instance, your #4 seems important:

#4: Middle management performance is inherently difficult to assess. Maze behaviors systematically compound this problem. They strip away points of differentiation beyond loyalty to the maze and willingness to sacrifice one’s self on its behalf, plus politics. Information and records are destroyed. Belief in the possibility of differentiation in skill level, or of object-level value creation, is destroyed.

Ok, granted middle management performance is inherently difficult to assess.

So uhh, how do we solve that? Thoughts? Pointing out that this is a crummy equilibrium can certainly help inspire people to notice and avoid participating in it, but y'know, we've got institutions and we'll probably have institutions for forever-ish, coordination is hard, etc etc, so do you have thoughts on surmounting the technical problems here? Not the runaway feedback loops — or those, too, sure — but the inherent hard problem of assessing middle management performance?

Comment by lionhearted on In Defense of the Arms Races… that End Arms Races · 2020-01-16T01:23:58.248Z · LW · GW
>So if an arms race is good or not basically depends on if the “good guys” are going to win (and remain good guys).

Quick thought — it's not apples and apples, but it might be worth investigating which fields hegemony works well in, and which fields checks and balances works well in:

https://en.wikipedia.org/wiki/Hegemony

https://en.wikipedia.org/wiki/Separation_of_powers

There's also the question with AGI of what we're more scared of — one country or organization dominating the world, or an early pioneer in AGI doing a lot of damage by accident?

#2 scares me more than #1. You need to create exactly one resource-commandeering positive feedback loop without an off switch to destroy the world, among other things.

Comment by lionhearted on Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think · 2020-01-11T07:15:57.868Z · LW · GW

Lots of great comments already so not sure if this will get seen, but a couple possibly useful points —

Metaphors We Live By by George Lakoff is worth a skim — https://en.wikipedia.org/wiki/Metaphors_We_Live_By

Then I think Wittgenstein's Tractatus is good, but his war diaries are even better http://www.wittgensteinchronology.com/7.html

"[Wittgenstein] sketches two people, A and B, swordfighting, and explains how this sketch might assert ‘A is fencing with B’ by virtue of one stick-figure representing A and the other representing B. In this picture-writing form, the proposition can be true or false, and its sense is independent of its truth or falsehood. LW declares that ‘It must be possible to demonstrate everything essential by considering this case’."

Lakoff illuminates some common metaphors — for example, a positive-valence mood in American English is often "up" and a negative-valence mood in American English is often "down."

If you combine Lakoff and Wittgenstein, using an accepted metaphor from your culture ("How are you?" "I'm flying today") makes the picture you paint for the other person correspond to your mood (they hear the emphasized "flying" and don't imagine you literally flying, but rather in a high-positive valence mood) — then you're in the realm of true.

There's independently some value in investigating your metaphors, but if someone asks me "Hey, how'd that custom building project your neighbor was doing go?" and I answer "Man, it was a fuckin' trainwreck" — you know what I'm saying: not only did the project fail, but it failed in a way that caused damage and hassle and was unaesthetic, even over and beyond what a normal "mere project failure" would be.

The value in metaphors, I think, is that you can get high information density with them. "Fuckin' trainwreck" conveys a lot of information. The only denser formulation might be "disaster" — but that's also a metaphor if it wasn't literally a disaster. Metaphors are sneaky that way; we often don't notice them — but they seem like a valid high-accuracy usage of language if deployed carefully.

(Tangentially: Is "deployed" there a metaphor? Thinking... thinking... yup. Lakoff's book is worth skimming, we use a lot more metaphors than we realize...)

Comment by lionhearted on human psycholinguists: a critical appraisal · 2020-01-11T06:43:06.504Z · LW · GW

Lots of useful ideas here, thanks.

Did you play AI Dungeon yet, by chance?

https://www.aidungeon.io/

Playing it was a bit of a revelation for me. It doesn't have to get much better at all to obsolete the whole lower end of formulaic and derivative entertainment...

Comment by lionhearted on Of arguments and wagers · 2020-01-11T06:25:18.063Z · LW · GW

Multiple fascinating ideas here. Two thoughts:

1. Solo formulation -> open to market mechanism?

Jumping to your point on Recursion — I imagine you could ask participants to (1) specify their premises, (2) specify their evidence for each premise, (3) put confidence numbers on given facts, and (4) put something like a "strength of causality" or "strength of inference" on causal mechanisms, which collectively would output their certainty.

In this case, you wouldn't need two people who want to wager against each other, but rather anyone with a difference in confidence about a given fact or about the (admittedly vague) "strength of causality" for how much a true-but-not-the-only-variable input affects a system.

Something along these lines might let you use the mechanism more as a market than an arbiter.
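As a toy illustration of that output step (my own sketch with made-up numbers, not a mechanism from the post): if each premise confidence and each "strength of inference" is treated as an independent probability, the combined certainty is simply their product:

```python
def combined_certainty(premise_confidences, inference_strengths):
    # Naive model: treat every premise confidence and inference
    # strength as an independent probability and multiply them.
    result = 1.0
    for p in premise_confidences + inference_strengths:
        result *= p
    return result

# Three fairly confident premises and two strong inferences
# still compound into a much lower overall certainty (~0.59).
print(combined_certainty([0.9, 0.95, 0.9], [0.85, 0.9]))
```

Independence is obviously a simplification, but even this crude version makes visible where a participant's certainty is coming from — and which single number a counterparty most disagrees with.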

2. Discount rate?

After that, I imagine most people would want some discount rate to participate in this — I'm trying to figure out what odds I'd accept if I was 99% sure in a proposition to wager against someone... I don't think I'd lay 80:1 odds, even though it's in theory a good bet, just because the sole fact that someone was willing to bet against me at such odds would be evidence I might well be wrong!

The fact that someone would participate in a thoughtful process along these lines and lay real money (or another valuable commodity, like computing power) against me means there's probably a greater than 1 in 50 chance I made an error somewhere.

Of course, if the time for Alice and Bob to prepare arguments was sufficiently low, if the Kelly-Criterion-style resource pool was sufficiently large, and if there was sufficient liquidity to get regression to the mean on reasonable timeframes to reduce variance, then you'd be happy to play with small discounts if you were more right than not and reasonably well-calibrated.
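To make the 80:1 arithmetic concrete (my own back-of-envelope sketch, reading "laying 80:1" as risking 80 units to win 1):

```python
def expected_value(p_win, amount_risked, amount_won):
    # Expected value of a bet: win amount_won with probability
    # p_win, lose amount_risked otherwise.
    return p_win * amount_won - (1 - p_win) * amount_risked

# At 99% confidence, laying 80:1 is positive expected value...
print(expected_value(0.99, 80, 1))
# ...but if the counterparty's willingness to bet drops your
# real confidence to 97%, it's sharply negative.
print(expected_value(0.97, 80, 1))
```

A two-percentage-point miscalibration flips the bet from mildly good to disastrous, which is the intuition behind wanting a discount before wagering at long odds.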

Anyway — this is fascinating, lots of ideas here. Salut.

Comment by lionhearted on The Bat and Ball Problem Revisited · 2020-01-06T03:25:01.660Z · LW · GW

I just wanted to say this was a really fun read. I hadn't considered the multiple ways people could get to the right or wrong answer.

Comment by lionhearted on What is Life in an Immoral Maze? · 2020-01-06T02:14:36.069Z · LW · GW

I think this starts to make more sense if you realize that there's a lot of organizations where a manager can't make an outsized improvement in results but can do a lot of damage; in those places, selection effects are going to give you risk-averse conforming people.

But in places with very objective performance numbers — finance and sales in particular — there's plenty of eccentric managers and leaders.

Same with tech and inventing, though eventually a lot of companies that were risk-seeking and innovative do drift to risk-averse and conforming. It's admirable when organizations fight that off. I don't have very many data points, but the managers I've met from Apple have all seemed noticeably brilliant and preserved their personal eccentricities, though there is a certain "Apple polish" in way of speaking and grooming that seems to be almost de rigueur.

That's probably not a bad standard to be expected to conform to, though, since it's like, pretty cool.

Comment by lionhearted on Propagating Facts into Aesthetics · 2019-12-20T20:20:16.666Z · LW · GW

Okay, one more — Grimes's "We Appreciate Power" is an electro-pop song about artificial intelligence, simulation, and brain uploading among other things:

https://www.youtube.com/watch?v=gYG_4vJ4qNA

A lot of the kids who like it no doubt enjoy it for the rebellious countersignaling aspect, combined with the catchy beat.

But I like it on, I think, a different level than a 15-year-old would. When I was 15, I listened to Rage Against the Machine — I had no idea what the heck RATM was talking about with Ireland and burning crosses or whatever; it was just, like, loud and rebellious and cool.

It's not groundbreaking to say people can appreciate things on different levels, but I wonder how much my intellectual enjoyment of We Appreciate Power backpropagates into liking the beat, vocal range, tempo, etc more.

[Bridge: Grimes & HANA]

And if you long to never die
Baby, plug in, upload your mind
Come on, you're not even alive
If you're not backed up on a drive
And if you long to never die
Baby, plug in, upload your mind
Come on, you're not even alive
If you're not backed up, backed up on a drive

Comment by lionhearted on Propagating Facts into Aesthetics · 2019-12-20T20:12:29.163Z · LW · GW

Relatedly — I used to find motorcycles swerving through traffic dangerous/ugly.

After I learned to ride a motorcycle, it (1) now is more predictable and seems less dangerous and (2) now seems beautiful/reasonable/cool rather than ugly/random/annoying.

Comment by lionhearted on Propagating Facts into Aesthetics · 2019-12-20T20:10:53.504Z · LW · GW

Great post.

Martial valor is another interesting one that people tend to find beautiful or ugly, and rarely if ever neutral.

I wonder if there's some component of simulating yourself either participating in an environment or activity and imagining how you'd feel.

Deserts — though there's counterintuitive things like them being cold at night — probably seem more tractable on how to navigate them than swamps.

I wonder if people see a patriotic rally and implicitly attempt to simulate "what the hell would I be doing if I was there, like, waving a flag around???" — and mentally encode it ugly. Vice-versa being at a spiritual retreat for people who'd enjoy a rally.

There's quite likely some "implicitly mentally trying it on" going on, no?

Comment by lionhearted on Follow-Up to Petrov Day, 2019 · 2019-09-28T01:06:36.609Z · LW · GW

You know what, I think LessWrong has collectively been worth more than $1,672 to me — especially after the re-launch. Heck, maybe even Petrov Day alone was. Incredibly insightful and potentially important.

I'd do this privately, but Eliezer wrote that story about how the pro-social people are too quiet and don't announce it. So yeah, I'm in for $1,672. Obviously, I wouldn't have done this if some knucklehead had nuked the site.

Now for the key question —

What kind of numbers do we need to put together to get another Ben Pace quality dev on the team? (And don't tell us it's priceless, people were willing to sell out your faith in humanity for less than the price of a Macbook Air! ;)

And yeah, mechanics for donating to LW specifically? Can follow up on email but I imagine it'd be good to have in this thread.

Edit: Before anyone suggests I donate to some highly-ranked charity, after I'd had some success in business I was in the nonprofit world for years and always 100% volunteer, have spent an immense amount of hours both understanding the space and getting things done, and was reasonably effective though not legendarily so or anything. By my quick back of the envelope math, I imagine any given large country's State Department would have paid $50,000 to $100,000 to have Petrov Day happen successfully in such a public way. Large corporations — I've worked with a few — maybe double that range. It was a really important thing and while "budget for hiring developers on a site that facilitates discussion of rationality" has far more nebulous and hard-to-pin-down value than some very worthy projects, it's first a threshold-break thing where a little more might produce much more results, and I think this site can be really important. If I might suggest something, though, perhaps an 80/20 eng-driven growth plan for the site that prioritizes preserving quality and norms would also make sense? We should have 10x the people here. It's very doable. I'm really busy but happy to help if I can. I think a lot of us would be happy to help make it happen if y'all would make it a little easier to know how. Something special is happening here.

Edit2: Okay, my donation is now conditional on banning whoever downvoted this ;) - just kidding. But man, what a strange mix of really great people and total idiots here huh? "I liked this a lot and I'd like to give money." WTF who does this guy think he is. Oh, me? Just someone trying to support the really fucking cool thing that's happening and asking for the logistics of doing so to be posted in case anyone else thinks it's been really cool and great for their life.

Comment by lionhearted on Follow-Up to Petrov Day, 2019 · 2019-09-28T00:54:40.354Z · LW · GW

What an incredible experience.

Felt like I got to understand myself a bit better, got exposed to a variety of arguments I never would have anticipated, forced to clarify my own thoughts and implications, did some math, did some sanity-check math on "what's the value of destroying some of Ben Pace's faith in humanity" (higher than any reasonable dollar amount alone, incidentally — and that's just one variable)... and yeah, this was really cool and legit innovative.

We should make sure the word about this gets out more.

We need more people on LessWrong, and more stuff like this.

People thinking this is just a chat board should think a little bigger. There's some real visionary thinking going on here, and an exceptionally smart and thoughtful community. I'm really grateful I got to see and participate in this. Thanks for all the great work — and for trusting me. Seriously. Y'all are aces.

Comment by lionhearted on Feature Wish List for LessWrong · 2019-09-28T00:26:38.950Z · LW · GW

(1) I want this too and would use it and participate more.

(2) Following logically from that, some sort of "Lists" feature like Twitter might be good, EX:

https://twitter.com/zackkanter/lists/aws

("Friending" is typically double-confirm, lists would seem much easier and less complex to implement. Perhaps lists, likewise, could be public or private)