Posts

Enforcing Type Distinction 2020-07-31T11:39:20.026Z · score: 27 (11 votes)
Running the Stack 2019-09-26T16:03:46.518Z · score: 36 (12 votes)
lionhearted's Shortform 2019-08-31T09:15:46.049Z · score: 8 (1 votes)
Drive-By Low-Effort Criticism 2019-07-31T11:51:37.844Z · score: 38 (29 votes)
On the Regulation of Perception 2019-03-09T16:28:19.887Z · score: 18 (7 votes)
Team Cohesion and Exclusionary Egalitarianism 2018-09-17T04:48:33.894Z · score: 36 (19 votes)
Secondary Stressors and Tactile Ambition 2018-07-13T00:26:23.561Z · score: 18 (10 votes)
Putting Logarithmic-Quality Scales On Time 2018-07-08T15:00:37.568Z · score: 15 (7 votes)
A Short Celebratory / Appreciation Post 2018-05-23T00:02:18.423Z · score: 138 (45 votes)
Some Simple Observations Five Years After Starting Mindfulness Meditation 2018-04-19T22:28:47.338Z · score: 80 (26 votes)
Explicit and Implicit Communication 2018-03-21T08:58:34.415Z · score: 111 (43 votes)
"Just Suffer Until It Passes" 2018-02-12T04:01:13.922Z · score: 149 (54 votes)
Fashionable or Fundamental Thought in Communities 2018-01-19T09:03:48.109Z · score: 37 (14 votes)
Success and Fail Rates of Monthly Policies 2017-12-09T15:24:37.148Z · score: 47 (19 votes)
Doing a big survey on work, stress, and productivity. Feedback / anything you're curious about? 2017-08-29T14:19:37.241Z · score: 1 (1 votes)
Perhaps a better form factor for Meetups vs Main board posts? 2016-01-28T11:50:20.360Z · score: 14 (15 votes)
Crossing the History-Lessons Threshold 2014-10-17T00:17:42.822Z · score: 34 (42 votes)
Flashes of Nondecisionmaking 2014-01-27T14:30:26.937Z · score: 28 (31 votes)
Confidence In Opinions, Intensity In Opinion 2013-09-04T16:56:17.883Z · score: 0 (11 votes)
Reflective Control 2013-09-02T17:45:58.356Z · score: 13 (16 votes)
A Rational Approach to Fashion 2011-10-10T18:53:00.594Z · score: 22 (44 votes)
"Technical implication: My worst enemy is an instance of my self." 2011-09-22T08:46:49.941Z · score: -3 (8 votes)
Malice, Stupidity, or Egalité Irréfléchie? 2011-06-13T20:57:06.178Z · score: 24 (53 votes)
Chemicals and Electricity 2011-05-09T17:55:25.123Z · score: 6 (27 votes)
The Cognitive Costs to Doing Things 2011-05-02T09:13:17.840Z · score: 39 (39 votes)
Convincing Arguments Aren’t Necessarily Correct – They’re Merely Convincing 2011-04-25T12:43:07.217Z · score: 9 (21 votes)
Defecting by Accident - A Flaw Common to Analytical People 2010-12-01T08:25:47.450Z · score: 103 (133 votes)
"Nahh, that wouldn't work" 2010-11-28T21:32:09.936Z · score: 76 (83 votes)
Reference Points 2010-11-17T08:09:04.227Z · score: 32 (33 votes)
Activation Costs 2010-10-25T21:30:58.150Z · score: 29 (36 votes)
The Problem With Trolley Problems 2010-10-23T05:14:07.308Z · score: 19 (64 votes)
Collecting and hoarding crap, useless information 2010-10-10T21:05:51.331Z · score: 18 (29 votes)
Steps to Achievement: The Pitfalls, Costs, Requirements, and Timelines 2010-09-11T22:58:38.145Z · score: 18 (26 votes)
A "Failure to Evaluate Return-on-Time" Fallacy 2010-09-07T19:01:42.066Z · score: 57 (67 votes)

Comments

Comment by lionhearted on The Fusion Power Generator Scenario · 2020-08-15T13:10:25.344Z · score: 8 (1 votes) · LW · GW

Hmm. I'm having a hard time writing this clearly, but I wonder if you could get interesting results by:

  • Training on a wide range of notably excellent papers from "narrow-scoped" domains,
  • Training on a wide range of papers that explore "we found this worked in X field, and we're now seeing if it also works in Y field" syntheses,
  • Then giving GPT-N prompts to synthesize narrow-scoped domains in which that hasn't been done yet.

You'd get some nonsense, I imagine, but it would probably at least spit out plausible hypotheses for actual testing, eh?

Comment by lionhearted on Free Money at PredictIt? · 2020-08-13T10:27:18.622Z · score: 9 (2 votes) · LW · GW

By the way, wanted to say this caught my attention and I did this successfully recently on this question —

https://www.predictit.org/markets/detail/5883/Who-will-win-the-2020-Democratic-vice-presidential-nomination

Combined probabilities were over 110%, so I went "No" on all candidates. Even with PredictIt's 10% fee on winning, I was guaranteed to make a tiny bit on any outcome. If a candidate not on the list was chosen, I'd have made more.
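
To sketch the arithmetic, since it's easy to get backwards (hypothetical prices; PredictIt's real fee and withdrawal mechanics are fiddlier than this toy model):

```python
# Hypothetical "No" prices for a 5-candidate market where the implied Yes
# probabilities sum to well over 100%. PredictIt's 10% fee applies to
# profits only; its separate withdrawal fee is ignored here.
no_prices = [0.30, 0.70, 0.85, 0.90, 0.95]

def payout_if_wins(winner, prices, fee=0.10):
    """Total payout if candidate `winner` wins: their No share expires
    worthless, every other No share pays $1 (fee taken from profit)."""
    total = 0.0
    for i, p in enumerate(prices):
        if i != winner:
            total += p + (1.0 - p) * (1.0 - fee)
    return total

cost = sum(no_prices)
worst = min(payout_if_wins(i, no_prices) for i in range(len(no_prices)))
print(f"cost: {cost:.2f}, worst-case payout: {worst:.2f}, "
      f"guaranteed profit: {worst - cost:.2f}")  # positive for every outcome
```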

My market investment came out to ($0.43) — that's negative 43 cents; i.e., no capital required to stay in it — on 65 "No" shares across the major candidates. (I'd have done more, but I don't understand how the PredictIt $850 limit works yet and I didn't want to wind up not being able to take all positions.)

I need to figure out how the $850 limit works in practice soon — is it 850 shares, $850 at risk, $850 max payout, or.....? Kinda unclear from their documentation, will do some research.

But yeah, it was fun and it works. Thanks for pointing this out.

Comment by lionhearted on You Need More Money · 2020-08-13T06:57:08.182Z · score: 23 (7 votes) · LW · GW

This is an interesting post — you're covering a lot of ground in a wide-ranging fashion. I think it's a virtual certainty that you'll come up with some interesting and very useful points, but a quick word of caution — I think this is an area where "mostly correct" theory can be a little dangerous.

Specifically:

>If you earn 4% per year, then you need the aforementioned $2.25 million for the $90,000 half-happiness income. If you earn 10% per year, you only need $900,000. If you earn 15% per year, you only need $600,000. At 18% you need $500,000; at 24% you need $375,000. And of course, you can acquire that nest egg a lot faster if you're earning a good return on your smaller investments. [...] I'm oversimplifying a bit here. While I do think 24% returns (or more!) are achievable, they would be volatile.

You're half correct here, but you might be making a subtle mistake — specifically, you might be using ensemble probability in a non-ergodic space.

Recommended reading (all of these can be Googled): safe withdrawal rate, expected value, variance, ergodicity, ensemble probability, Kelly criterion.

Specifically, naive expected value (EV) in investing tends to implicitly assume ergodicity; financial returns are non-ergodic; it's very possible to wind up broke with near certainty even with high returns if your amount of capital deployed is too low for the strategy you're operating.
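
A toy simulation of the distinction, with invented numbers that model no real strategy:

```python
import random

# A volatile bet with a positive *ensemble-average* return: each year,
# wealth goes +60% or -40% with equal probability. Arithmetic mean
# return: +10%/yr. But the time-average (geometric) growth rate is
# sqrt(1.6 * 0.6) - 1 ≈ -2%/yr, so the typical path shrinks.
random.seed(0)

def run_path(years=30):
    wealth = 1.0
    for _ in range(years):
        wealth *= 1.6 if random.random() < 0.5 else 0.6
    return wealth

paths = sorted(run_path() for _ in range(100_000))
print("ensemble mean:", sum(paths) / len(paths))  # expectation is 1.1**30 ≈ 17x
print("median path:  ", paths[len(paths) // 2])   # ≈ 0.5x — the typical run loses
```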

Yes, there's valid counter-counterarguments here but you didn't make any of them! The words/phrases safety, margin of safety, bankroll, ergodicity, etc etc didn't show up.

The best counterargument is probably low-capital-required arbitrage, such as what Zvi described here; indeed, I followed his line of thinking and personally recently got pure arbitrage on this question — just for the hell of it, on nominal money. It's, like, a hobby thing. [Edit: btw, thanks Zvi.] This is more-or-less only possible because of some odd rules they've adopted for regulatory reasons and for UI/UX simplicity that result in some odd behavior.

Anyway, I digress; I like the general area of exploration you're embarking on a lot, but "almost correct" in finance is super dangerous and I wanted to flag one instance of that. Chasing consistent high returns on a small amount of capital does not seem like a good strategy to me; further, if you can get 24%+ a year on any substantial volume, you should probably just stack up some millions for a few years and then rely on passive returns, without the intense discipline needed to keep getting those returns (even setting aside ergodicity/bankroll issues).

Lynch's One Up on Wall Street is an excellent take by someone who actually managed to make those type of returns for multiple decades; it's not exactly something you do casually...

(Disclaimer: certainly not an expert, potentially some mistakes here, not comprehensive, etc etc etc.)

Comment by lionhearted on Sunday August 9, 1pm (PDT) — talks by elityre, jacobjacob, Ruby · 2020-08-09T17:52:01.657Z · score: 6 (3 votes) · LW · GW

Hi all,

I'm going to withdraw my talk for today — after doing some prep yesterday with Jacob and clarifying everyone's skill level and background, I put a few hours in and couldn't get to the point where I thought my talk would be great.

The quality level has been so uniformly high, I'd rather just leave more time for people to discuss and socialize than to lower the bar.

Apologies for any inconvenience, gratitude, and godspeed.

Comment by lionhearted on Reveal Culture · 2020-07-27T12:01:02.560Z · score: 10 (2 votes) · LW · GW

Incredibly thought-provoking.

Thank you.

Reading this made me think about my own communication styles.

Hmm.

After some quick reflection, among people I know well I think I actually oscillate between two — on the one hand, something very close to Ray Dalio's Bridgewater norms (think "radical honesty but with more technocracy, ++logos/--pathos").

On the other hand, a near-polar opposite in Ishin-denshin — a word that's so difficult to translate from Japanese that one of the standard "close enough" definitions for it is..... "telepathy."

No joke.

Almost impossible to explain briefly; heck, I'm not sure it could be explained in 7000 words if you hadn't immersed yourself in it at least a substantial amount and studied Japanese history and culture additionally after the immersion.

But it's really cool when it works.

Hmm... I've never really reasoned through how and why I utilize those two styles — which are so very different on the surface — but my quick guess is that they're both really, really efficient when running correctly. 

Downside — while both are easy and comfortable to maintain once built, they're expensive and sometimes perilous to build.

Some good insights in here for further refinement and thinking — grateful for this post, I'll give this a couple hours of thought at my favorite little coffee bar next weekend or something.

Comment by lionhearted on Swiss Political System: More than You ever Wanted to Know (I.) · 2020-07-20T12:32:44.890Z · score: 2 (1 votes) · LW · GW

> Very good post, highly educational, exactly what I love to see on LessWrong.

Likewise — I don't have anything substantial to add except that I'm grateful to the author. Very insightful.

Comment by lionhearted on Roll for Sanity · 2020-07-13T17:31:56.415Z · score: 6 (3 votes) · LW · GW

Interesting metaphor. Enjoyed it.

Comment by lionhearted on How to Find Sources in an Unreliable World · 2020-07-04T18:32:44.216Z · score: 4 (2 votes) · LW · GW

The quality I'm describing isn't quite "readability" — it overlaps, but that's not quite it. 

Feynman has it —

http://www.faculty.umassd.edu/j.wang/feynman.pdf

It's hard to nail down; it'd probably take a very long essay to even try.

And it's not a perfect predictor, alas — just evidence.

But I believe there's a certain way to spot "good reasoning" and "having thoroughly worked out the problem" from one's writing. It's not the smoothness of the words, nor the simplicity.

It's hard to describe, but it seems somewhat consistently recognizable. Yudkowsky has it, incidentally.

Comment by lionhearted on How to Find Sources in an Unreliable World · 2020-07-02T21:02:22.702Z · score: 10 (4 votes) · LW · GW

I like to start by trying to find one author who has excellent thinking and see what they cite — this works for both papers and books with bibliographies, but increasingly other forms of media. 

For instance, Dan Carlin of the (exceptional and highly recommended) Hardcore History podcast cites all the sources he uses when he does a deep investigation of a historical era, which is a good jumping-off point if you want to go deep.

The hard part is finding that first excellent thinker, especially in a domain where you can't yet differentiate quality. But there's some general conventions of how smart thinkers tend to write and reason that you can learn to spot. There's a certain amount of empathy, clarity, and — for lack of a better word — "good aesthetics" such that, when they're present, the author tends to be smart and trustworthy.

The opposite isn't necessarily the case — there are good thinkers who don't follow those practices and are hard to follow (say, Laozi or Wittgenstein maybe) — but when those factors are present, I tend to weight the thinking well.

Even if you have no technical background at all, this piece by Paul Graham looks credible (emphasis added) —

https://sep.yimg.com/ty/cdn/paulgraham/acl1.txt?t=1593689476&

"What does addn look like in C?  You just can't write it.

You might be wondering, when does one ever want to do things like this?  Programming languages teach you not to want what they cannot provide.  You have to think in a language to write programs in it, and it's hard to want something you can't describe.  When I first started writing programs-- in Basic-- I didn't miss recursion, because I didn't know there was such a thing.  I thought in Basic. I could only conceive of iterative algorithms, so why should I miss recursion?

If you don't miss lexical closures (which is what's being made in the preceding example), take it on faith, for the time being, that Lisp programmers use them all the time.  It would be hard to find a Common Lisp program of any length that did not take advantage of closures.  By page 112 you will be using them yourself."

When I spot that level of empathy/clarity/aesthetics, I think, "Ok, this person likely knows what they're talking about."

So, me, I start by looking for someone like Paul Graham or Ray Dalio or Dan Carlin, and then I look at who they cite and reference when I want to go deeper.

Comment by lionhearted on A reply to Agnes Callard · 2020-06-30T16:53:33.767Z · score: 40 (10 votes) · LW · GW

Hi Agnes, I just wanted to say — much respect and regards for logging on to discuss and debate your views.

Regardless of whether we agree (personally, I'm in partial agreement with you), if more people would create accounts and engage thoughtfully in different spaces after sharing a viewpoint, the world would be a much better place.

Salutations and welcome.

Comment by lionhearted on What's Your Cognitive Algorithm? · 2020-06-19T12:45:41.878Z · score: 12 (3 votes) · LW · GW

I think you'd probably like the work of John Boyd:

https://en.wikipedia.org/wiki/John_Boyd_(military_strategist)

He's really interesting in that he worked on a mix of problems and areas with many different levels of complexity and rigor.

Notably, while he's usually talked about in terms of military strategy, he did some excellent work in physics that's fundamentally sound and still used in civilian and military aviation today:

https://en.wikipedia.org/wiki/Energy%E2%80%93maneuverability_theory

He was a skilled fighter pilot, so he was able to both learn theory and convert into tactile performance.

Then, later, he explored challenges in organizational structures, bureaucracy, decision making, corruption, consensus, creativity, inventing, things like that.

There's a good biography on him called "Boyd: The Fighter Pilot Who Changed the Art of War" - and then there's a variety of briefings, papers, and presentations he made floating around online. I went through a phase of studying them all; there's some gems there.

Notably, his "OODA" loop is often incorrectly summarized as a linear process but he defined it like this —

https://taskandpurpose.com/.image/c_fit%2Ccs_srgb%2Cfl_progressive%2Cq_auto:good%2Cw_620/MTcwNjAwNDYzNjEyMTI2ODcx/18989583.jpg

I think the most interesting part of it is under-discussed — the "Implicit Guidance and Control" aspect, where people can get into cycles of Observe/Act/Observe/Act rapidly without needing to intentionally orient themselves or formally make a decision.

Since he comes at it from a different mix of backgrounds, with differing levels of formal mathematics across them, he provides a lot of insights. Some of his takeaways seem spot-on, but more interesting are the ways he can prime thinking on topics like these. I think you and he were probably interested in some similar veins of thought, so it might produce useful insights to dive in a bit.

Comment by lionhearted on Baking is Not a Ritual · 2020-06-03T00:04:30.573Z · score: 2 (1 votes) · LW · GW

Great post. 

I've seen recipes written in the precise ritualistic format many times, but rarely seen discussions on the chemistry patterns/etc — how do people typically learn the finer points?

I imagine there's some cookbooks / tutorials that go into the deeper mechanics — is it that, or learning from a knowledgeable baker that understands the mechanics, or...?

Comment by lionhearted on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-19T12:59:45.693Z · score: 8 (1 votes) · LW · GW

Agreed.

>I have a low prior they will show anything else other than "University is indeed confounded by IQ and/or IQ + income in money earning potential"

Probably also confounded by...

Networks (if you inherited a lot of social connections from your upbringing, university is less useful);

Exposure to certain types of ideas (we take the scientific method and "De Omnibus Dubitandum" for granted, but there's people that first encounter these ideas at university);

And most interestingly, whether particular institutions are good at helping students on rare habit formation (eg, MIT seems almost uniquely exceptional at inculcating "tinker with things quickly once you get an early understanding of them").

Actually, that last point — rare habit formation — might be where the lower Maslow's Hierarchy and higher Maslow's Hierarchy needs could meet each other. Alas, this seems an underexplored area that's arguably going in the wrong direction at many institutions...

Comment by lionhearted on Exercises in Comprehensive Information Gathering · 2020-02-19T12:54:17.883Z · score: 8 (1 votes) · LW · GW

Makes sense. This is probably worth a top level post? —

>People haven't had much time to figure out how to get lots of value out of the internet, and this is one example which I expect will become more popular over time.

Sounds obvious when put like that, but I think — as you implied — a lot of people haven't thought about it yet.

Comment by lionhearted on Exercises in Comprehensive Information Gathering · 2020-02-19T12:53:09.217Z · score: 11 (3 votes) · LW · GW

Ahh, great question.

I think patterns start to emerge eventually — so you start reading about the federalization of Chinese law and you think, "ah, this is like German Unification with a few key differences."

While you do find rare outliers — the Ottoman legal system continues to fascinate me ( https://en.wikipedia.org/wiki/Millet_(Ottoman_Empire) ) — you eventually find there's only a few major ways that legal systems have been formulated at the scale of modern countries, as opposed to earlier local scales.

Science, art, and sport are also ones I've delved into incidentally. And there's also some patterns there.

Comment by lionhearted on Exercises in Comprehensive Information Gathering · 2020-02-16T14:22:57.319Z · score: 14 (6 votes) · LW · GW

Phenomenal post.

I've done similarly. It's actually remarkable how little time it takes to overview the history of breakthroughs in a sub-field, or all the political and military leaders of an obscure country during a particular era, or the history of laws and regulations of a particular field.

Question to muse over —

Given how inexpensive and useful it is to do this, why do so few people do it?

Comment by lionhearted on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-16T13:47:06.534Z · score: 10 (2 votes) · LW · GW

Apprenticeship seems promising to me. It's died out in most of the world, but there's still formal apprenticeship programs in Germany that seem to work pretty well.

Also, it's a surprisingly common position among very successful people I know that young people would benefit from 2 years of national service after high school. It wouldn't have to be military service — it could be environmental conservation, poverty relief, Peace Corps type activities, etc.

We actually have reasonable control groups for this, both in countries with mandatory national service and in the Mormon Church, the majority of whose members go on a two-year mission. I haven't looked at hard numbers or anything, but my sense is that both countries with national service and Mormons tend to be more successful than similar cohorts that don't undergo such experiences.

Comment by lionhearted on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-16T13:42:26.410Z · score: 11 (3 votes) · LW · GW

Great post.

To one small point:

>After all there’s a surprising lack of studies (aka 0 that I could find, and I dug for them a lot) with titles around the lines of “Economic value of university degree when controlling for IQ, time lost and student debt”.

I'm reminded of Upton Sinclair's quote,

"It is difficult to get a man to understand something when his salary depends upon his not understanding it."

Comment by lionhearted on The Road to Mazedom · 2020-01-21T00:37:28.153Z · score: 2 (1 votes) · LW · GW

Just tracing the edges of hard problems is huge progress toward solving them. Respect.

Comment by lionhearted on The Road to Mazedom · 2020-01-19T06:43:47.859Z · score: 12 (3 votes) · LW · GW

Two thoughts.

First, small technical feedback — do you think there's some classification of these factors, however narrow or broad, that could be sub-headlines?

For instance, #24 and #29 seem to be similar things:

#24 As the overall maze level rises, mazes gain a competitive advantage over non-mazes. 

#29 As maze levels rise, mazes take control of more and more of an economy and people’s lives.

As do #27 and #28:

#27: Mazes have reason to and do obscure that they are mazes, and to obscure the nature of mazes and maze behaviors. This allows them to avoid being attacked or shunned by those who retain enough conventional not-reversed values that they would recoil in horror from such behaviors if they understood them, and potentially fight back against mazes or to lower maze levels. The maze embracing individuals also take advantage of those who do not know of the maze nature. It is easy to see why the organizations described in Moral Mazes would prefer people not read the book Moral Mazes. 

#28: Simultaneously with pretending to the outside not to be mazes, those within them will claim if challenged that everybody knows they are mazes and how mazes work.

While it's hard to pin down exactly what the categories would be, it seems that the first cluster is about something like feedback loops and the second is about something like deceit, self-deceit, etc.

The categories could even be very broad like "Inherent Biases", "Incentives and Rewards", "Feedback Loops", etc. Or they could be narrower. But it's difficult to follow a list of 37 propositions, some of which are relatively simple and self-contained while others are syntheses, conclusions, and extrapolations of previous points.

Ok, second thought —

This is all largely written from the point of view of how bad these things are as a participant. I bet it'd be interesting to flip the viewpoint and analysis and explore it from the view of a leader/executive/etc who was trying to forestall these effects.

For instance, your #4 seems important:

#4: Middle management performance is inherently difficult to assess. Maze behaviors systematically compound this problem. They strip away points of differentiation beyond loyalty to the maze and willingness to sacrifice one’s self on its behalf, plus politics. Information and records are destroyed. Belief in the possibility of differentiation in skill level, or of object-level value creation, is destroyed.

Ok, granted middle management performance is inherently difficult to assess.

So uhh, how do we solve that? Thoughts? Pointing out that this is a crummy equilibrium can certainly help inspire people to notice and avoid participating in it, but y'know, we've got institutions and we'll probably have institutions for forever-ish, coordination is hard, etc etc, so do you have thoughts on surmounting the technical problems here? Not the runaway feedback loops — or those, too, sure — but the inherent hard problem of assessing middle management performance?

Comment by lionhearted on In Defense of the Arms Races… that End Arms Races · 2020-01-16T01:23:58.248Z · score: 9 (2 votes) · LW · GW
> So if an arms race is good or not basically depends on if the “good guys” are going to win (and remain good guys).

Quick thought — it's not apples to apples, but it might be worth investigating which fields hegemony works well in, and which fields checks and balances work well in:

https://en.wikipedia.org/wiki/Hegemony

https://en.wikipedia.org/wiki/Separation_of_powers

There's also the question with AGI of what we're more scared of — one country or organization dominating the world, or an early pioneer in AGI doing a lot of damage by accident?

#2 scares me more than #1. It only takes one resource-commandeering positive feedback loop without an off switch to destroy the world, among other things.

Comment by lionhearted on Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think · 2020-01-11T07:15:57.868Z · score: 10 (2 votes) · LW · GW

Lots of great comments already so not sure if this will get seen, but a couple possibly useful points —

Metaphors We Live By by George Lakoff is worth a skim — https://en.wikipedia.org/wiki/Metaphors_We_Live_By

Then I think Wittgenstein's Tractatus is good, but his war diaries are even better http://www.wittgensteinchronology.com/7.html

"[Wittgenstein] sketches two people, A and B, swordfighting, and explains how this sketch might assert ‘A is fencing with B’ by virtue of one stick-figure representing A and the other representing B. In this picture-writing form, the proposition can be true or false, and its sense is independent of its truth or falsehood. LW declares that ‘It must be possible to demonstrate everything essential by considering this case’."

Lakoff illuminates some common metaphors — for example, a positive-valence mood in American English is often "up" and a negative-valence mood in American English is often "down."

If you combine Lakoff and Wittgenstein: using an accepted metaphor from your culture ("How are you?" "I'm flying today") makes the picture you paint for the other person correspond to your mood (they hear the emphasized "flying" and don't imagine you literally flying, but rather in a high-positive-valence mood) — and then you're in the realm of the true.

There's independently some value in investigating your metaphors, but if someone asks me "Hey, how'd that custom building project your neighbor was doing go?" and I answer "Man, it was a fuckin' trainwreck" — you know what I'm saying: not only did the project fail, but it failed in a way that caused damage and hassle and was unaesthetic, even over and beyond what a normal "mere project failure" would be.

The value in metaphors, I think, is that you can get high information density with them. "Fuckin' trainwreck" conveys a lot of information. The only denser formulation might be "disaster" — but that's also a metaphor if it wasn't literally a disaster. Metaphors are sneaky in that way; we often don't notice them — but they seem like a valid high-accuracy usage of language if deployed carefully.

(Tangentially: Is "deployed" there a metaphor? Thinking... thinking... yup. Lakoff's book is worth skimming, we use a lot more metaphors than we realize...)

Comment by lionhearted on human psycholinguists: a critical appraisal · 2020-01-11T06:43:06.504Z · score: 11 (4 votes) · LW · GW

Lots of useful ideas here, thanks.

Did you play AI Dungeon yet, by chance?

https://www.aidungeon.io/

Playing it was a bit of a revelation for me. It doesn't have to get much better at all to obsolete the whole lower end of formulaic and derivative entertainment...

Comment by lionhearted on Of arguments and wagers · 2020-01-11T06:25:18.063Z · score: 9 (2 votes) · LW · GW

Multiple fascinating ideas here. Two thoughts:

1. Solo formulation -> open to market mechanism?

Jumping to your point on Recursion — I imagine you could ask participants to (1) specify their premises, (2) specify their evidence for each premise, (3) put confidence numbers on given facts, and (4) put something like a "strength of causality" or "strength of inference" on causal mechanisms, which collectively would output their certainty.

In this case, you wouldn't need two people who want to wager against each other, but rather anyone with a difference in confidence about a given fact, or about the (admittedly vague) "strength of causality" for how much a true-but-not-the-only-variable input affects a system.

Something along these lines might let you use the mechanism more as a market than an arbiter.
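
To make that concrete, here's a toy version of how a single submission might cash out into a tradeable number (entirely hypothetical structure; real argument graphs are much messier than a chain of independent premises):

```python
from dataclasses import dataclass

@dataclass
class Premise:
    claim: str
    confidence: float  # participant's P(premise is true)

def conclusion_certainty(premises, strength_of_inference):
    """Toy rule: certainty = product of premise confidences, scaled by a
    'strength of inference' for the causal step. Assumes independent
    premises, which is a big simplification."""
    c = strength_of_inference
    for p in premises:
        c *= p.confidence
    return c

alice = conclusion_certainty(
    [Premise("X is rising", 0.95), Premise("X drives Y", 0.80)], 0.9)
bob = conclusion_certainty(
    [Premise("X is rising", 0.95), Premise("X drives Y", 0.50)], 0.9)

# Any gap between their outputs is a price at which both sides should be
# willing to trade — no matched head-to-head wager or arbiter required.
print(f"Alice: {alice:.2f}, Bob: {bob:.2f}, tradeable gap: {alice - bob:.2f}")
```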

2. Discount rate?

After that, I imagine most people would want some discount rate to participate in this — I'm trying to figure out what odds I'd accept to wager against someone on a proposition I was 99% sure of... I don't think I'd lay 80:1 odds, even though it's in theory a good bet, just because the sole fact that someone was willing to bet against me at such odds would be evidence I might well be wrong!

The fact that someone would participate in a thoughtful process along these lines and lay real money (or another valuable commodity, like computing power) against me means there's probably a greater than 1-in-50 chance I made an error somewhere.

Of course, if the time for Alice and Bob to prepare arguments was sufficiently low, if the Kelly-criterion-style resource pool was sufficiently large, and if there was sufficient liquidity to get regression to the mean on reasonable timeframes to reduce variance, then you'd be happy to play with small discounts if you were more-right-than-not and reasonably well-calibrated.
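
For concreteness, the rough arithmetic behind that hesitation, with made-up numbers:

```python
# Laying 80:1 on a proposition I'm "99% sure" of: risk $80 to win $1.
p_right = 0.99
ev_naive = p_right * 1 - (1 - p_right) * 80    # = +0.19: a good bet on paper

# But a thoughtful counterparty laying real money at those odds is itself
# evidence of an error somewhere. If that alone bumps my chance of being
# wrong from 1-in-100 to 1-in-50:
p_right = 0.98
ev_updated = p_right * 1 - (1 - p_right) * 80  # = -0.62: now clearly losing

print(ev_naive, ev_updated)
```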

Anyway — this is fascinating, lots of ideas here. Salut.

Comment by lionhearted on The Bat and Ball Problem Revisited · 2020-01-06T03:25:01.660Z · score: 2 (1 votes) · LW · GW

I just wanted to say this was a really fun read. I hadn't considered the multiple ways people could get to the right or wrong answer.

Comment by lionhearted on What is Life in an Immoral Maze? · 2020-01-06T02:14:36.069Z · score: 15 (5 votes) · LW · GW

I think this starts to make more sense if you realize that there's a lot of organizations where a manager can't make an outsized improvement in results but can do a lot of damage; in those places, selection effects are going to give you risk-averse conforming people.

But in places with very objective performance numbers — finance and sales in particular — there's plenty of eccentric managers and leaders.

Same with tech and inventing, though eventually a lot of companies that were risk-seeking and innovative do drift toward risk-averse and conforming. It's admirable when organizations fight that off. I don't have very many data points, but the managers I've met from Apple have all seemed noticeably brilliant and preserved their personal eccentricities, though there is a certain "Apple polish" in way of speaking and grooming that seems to be almost de rigueur.

That's probably not a bad standard to be expected to conform to, though, since it's like, pretty cool.

Comment by lionhearted on Propagating Facts into Aesthetics · 2019-12-20T20:20:16.666Z · score: 11 (3 votes) · LW · GW

Okay, one more — Grimes's "We Appreciate Power" is an electro-pop song about artificial intelligence, simulation, and brain uploading among other things:

https://www.youtube.com/watch?v=gYG_4vJ4qNA

A lot of the kids that like it no doubt enjoy it for the rebellious countersignaling aspect of it, combined with a catchy beat.

But I like it on, I think, a different level than a 15 year old that'd like it. When I was 15, I listened to Rage Against the Machine — I had no idea what the heck RATM was talking about with Ireland and burning crosses or whatever, it was just, like, loud and rebellious and cool.

It's not groundbreaking to say people can appreciate things on different levels, but I wonder how much my intellectual enjoyment of We Appreciate Power backpropagates into liking the beat, vocal range, tempo, etc more.

[Bridge: Grimes & HANA]

And if you long to never die
Baby, plug in, upload your mind
Come on, you're not even alive
If you're not backed up on a drive
And if you long to never die
Baby, plug in, upload your mind
Come on, you're not even alive
If you're not backed up, backed up on a drive

Comment by lionhearted on Propagating Facts into Aesthetics · 2019-12-20T20:12:29.163Z · score: 9 (2 votes) · LW · GW

Relatedly — I used to find motorcycles swerving through traffic dangerous/ugly.

After I learned to ride a motorcycle, it (1) now is more predictable and seems less dangerous and (2) now seems beautiful/reasonable/cool rather than ugly/random/annoying.

Comment by lionhearted on Propagating Facts into Aesthetics · 2019-12-20T20:10:53.504Z · score: 15 (5 votes) · LW · GW

Great post.

Martial valor is another interesting one that people tend to find beautiful or ugly, and rarely if ever neutral.

I wonder if there's some component of simulating yourself participating in an environment or activity and imagining how you'd feel.

Deserts — though there's counterintuitive things like them being cold at night — probably seem more tractable to navigate than swamps.

I wonder if people see a patriotic rally and implicitly attempt to simulate "what the hell would I be doing if I was there, like, waving a flag around???" — and mentally encode it as ugly. Vice versa for a spiritual retreat, for people who'd enjoy a rally.

There's quite likely some "implicitly mentally trying it on" going on, no?

Comment by lionhearted on Follow-Up to Petrov Day, 2019 · 2019-09-28T01:06:36.609Z · score: 10 (13 votes) · LW · GW

You know what, I think LessWrong has collectively been worth more than $1,672 to me — especially after the re-launch. Heck, maybe even Petrov Day alone was. Incredibly insightful and potentially important.

I'd do this privately, but Eliezer wrote that story about how the pro-social people are too quiet and don't announce it. So yeah, I'm in for $1,672. Obviously, I wouldn't have done this if some knucklehead had nuked the site.

Now for the key question —

What kind of numbers do we need to put together to get another Ben Pace quality dev on the team? (And don't tell us it's priceless, people were willing to sell out your faith in humanity for less than the price of a Macbook Air! ;)

And yeah, mechanics for donating to LW specifically? Can follow up on email but I imagine it'd be good to have in this thread.

Edit: Before anyone suggests I donate to some highly-ranked charity — after I'd had some success in business, I was in the nonprofit world for years, always 100% volunteer, spent an immense amount of hours both understanding the space and getting things done, and was reasonably effective, though not legendarily so or anything.

By my quick back-of-the-envelope math, I imagine any given large country's State Department would have paid $50,000 to $100,000 to have Petrov Day happen successfully in such a public way. Large corporations — I've worked with a few — maybe double that range. It was a really important thing, and while "budget for hiring developers on a site that facilitates discussion of rationality" has far more nebulous and hard-to-pin-down value than some very worthy projects, it's first a threshold-break thing where a little more might produce much greater results, and second, I think this site can be really important.

If I might suggest something, though, perhaps an 80/20 eng-driven growth plan for the site that prioritizes preserving quality and norms would also make sense? We should have 10x the people here. It's very doable. I'm really busy but happy to help if I can. I think a lot of us would be happy to help make it happen if y'all would make it a little easier to know how. Something special is happening here.

Edit2: Okay, my donation is now conditional on banning whoever downvoted this ;) - just kidding. But man, what a strange mix of really great people and total idiots here huh? "I liked this a lot and I'd like to give money." WTF who does this guy think he is. Oh, me? Just someone trying to support the really fucking cool thing that's happening and asking for the logistics of doing so to be posted in case anyone else thinks it's been really cool and great for their life.

Comment by lionhearted on Follow-Up to Petrov Day, 2019 · 2019-09-28T00:54:40.354Z · score: 18 (13 votes) · LW · GW

What an incredible experience.

Felt like I got to understand myself a bit better, got exposed to a variety of arguments I never would have anticipated, forced to clarify my own thoughts and implications, did some math, did some sanity-check math on "what's the value of destroying some of Ben Pace's faith in humanity" (higher than any reasonable dollar amount alone, incidentally — and that's just one variable)... and yeah, this was really cool and legit innovative.

We should make sure the word about this gets out more.

We need more people on LessWrong, and more stuff like this.

People thinking this is just a chat board should think a little bigger. There's some real visionary thinking going on here, and an exceptionally smart and thoughtful community. I'm really grateful I got to see and participate in this. Thanks for all the great work — and for trusting me. Seriously. Y'all are aces.

Comment by lionhearted on Feature Wish List for LessWrong · 2019-09-28T00:26:38.950Z · score: 10 (2 votes) · LW · GW

(1) I want this too and would use it and participate more.

(2) Following logically from that, some sort of "Lists" feature like Twitter might be good, EX:

https://twitter.com/zackkanter/lists/aws

("Friending" is typically double-confirm, lists would seem much easier and less complex to implement. Perhaps lists, likewise, could be public or private)

Comment by lionhearted on Running the Stack · 2019-09-28T00:20:11.480Z · score: 15 (3 votes) · LW · GW

Thanks. Awesome.

> I'm actually not sure what you mean by "running down the stack." Do you mean "when I get distracted I mentally review my whole stack, from most recent item added to most ancient item"?

Well, of course, it's whatever works for you.

For a simple example, let's say I'm (1) putting new sheets on my bed, and then (2) I get an incoming phone call, which results in me simultaneously needing to (3 and 4) send a calendar invite and email while still on the phone.

I'll pick which of the cal invite or email I'm doing first. Let's say I decide I'm sending the cal invite first.

I'll then,

(4) Send cal invite - done - off stack.

(3) Send email - done - off stack.

(2) Check whether anything else needs to be done before ending the call, confirming, etc. If I need to do another activity -> add it to the stack as the new (3). If not, end call.

And here's where the magic happens. I then,

(1) Go finish making the bed.

I'm not fanatic about it, but I won't get a snack first or anything significant until that done.
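
In code terms, the bookkeeping above is literally just a LIFO stack. A minimal sketch, purely illustrative:

```python
# "Running the stack": interruptions push, finishing pops,
# and you always resume the most recently interrupted task.
stack = []

def start(task):
    stack.append(task)
    print("now doing:", task)

def finish():
    print("finished:", stack.pop())
    if stack:
        print("resume:", stack[-1])

start("put new sheets on bed")
start("phone call")
start("send email")
start("send cal invite")  # chose to do the invite before the email
finish()  # cal invite done -> back to the email
finish()  # email done -> back to the call
finish()  # call done -> and here's where the magic happens...
finish()  # ...go finish making the bed
```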

> Or do you mean "when I get distracted, I 'pop' the next item/intention in the stack (the one that was added most recently), and execute that one next (as opposed to some random one).

This, yes. Emphasis added.

> Less payoff to getting distracted? To being distractible?
> Why is that? Because if you get distracted you have to complete the distraction?

Well, I can speculate on theory but I'll just say empirically — it works for me.

But let's speculate with an example.

You're midway through cleaning your kitchen and you remember you needed to send some email.

If you don't really wanna clean your kitchen deep down, you're likely to wind up on email or Twitter or LessWrong instead.

Now that's fine, if I see a second email I want to reply to, I'll snipe that.

But at the end, I have to go finish the kitchen unless things have materially changed.

Knowing there's no payoff in "escaping" is probably part of it. It probably shapes real-time cost/benefit tradeoffs somewhat. It means less cognitive processing time is needed to pick the next task. It makes one pick tasks slightly more carefully, knowing you'll finish them. It leads to single-tasking and focus.

Umm, probably a lot more. I'm not fanatic about it, I'll shift gears if it's relevant but I don't like to do so.

Comment by lionhearted on Long-term Donation Bunching? · 2019-09-27T23:57:52.876Z · score: 12 (4 votes) · LW · GW

Do we have any lawyers here at LessWrong?

Idea:

Would it be possible to legitimately write some sort of standardized financial instrument that functions as a loan with no repayment date, with options for conversion into charitable donation?

Speculations (non-lawyer here) —

(1) Maybe there's something equivalent to a SAFE Note (invented by Y Combinator to simplify and standardize startup financing in a way friendly to both parties). It seems like a decent jumping-off point:

https://en.wikipedia.org/wiki/Simple_agreement_for_future_equity_(SAFE)

(2) On the other hand, there's a variety of mechanisms where you can't just do clever stuff. And there's a variety of arcane rules. You can, I think, donate property that's appreciated in value without paying capital gains first for instance, but maybe there's specific definitions around the timing of cash flows, donations, and deductions?

(3) On the other-other hand, seems like American tax policy in general is very amenable to people supporting worthy charitable causes.

(4) On the other-other-other-hand, you'd have to make sure it's not game-able and doesn't result in strange second-order consequences.

(5) And finally, if it's ambiguous, it seems like the type of thing where it'd be possible to get some sort of preliminary ruling from the relevant authorities. (Presumably the Treasury/IRS, but maybe someone else.)

Seems like a good idea though? If someone donates $10k a year for 5 years, it seems reasonable that they'd be able to write off that $50k at the end of the 5 years.
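
A rough illustration of why the bunching matters (hypothetical round numbers; assumes a single filer whose donations are the only itemizable deduction, a ~$12k standard deduction, and a 24% marginal rate, all of which vary by year and situation):

```python
standard_deduction = 12_000  # hypothetical; varies by year and filing status
marginal_rate = 0.24         # hypothetical bracket
annual_gift = 10_000
years = 5

# Donating $10k every year never clears the standard deduction,
# so itemizing never wins and the gifts yield no extra tax benefit.
spread_benefit = years * max(annual_gift - standard_deduction, 0) * marginal_rate

# Donating $50k once (e.g., via the loan-then-convert instrument above):
# one big itemized year, standard deduction in the other four.
bunched_benefit = max(years * annual_gift - standard_deduction, 0) * marginal_rate

print(spread_benefit, bunched_benefit)  # 0.0 vs 9120.0
```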

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T23:11:59.117Z · score: 12 (3 votes) · LW · GW

You guys are total heroes. Full stop. In the 1841 "On Heroes" sense of the word, which is actually pretty well-defined. (Good book, btw.)

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T23:07:12.804Z · score: 8 (3 votes) · LW · GW

There's rationalists who are in the mafia?

Whoa.

No insightful comment, just, like — this Petrov thread is the gift that keeps on giving.

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T23:04:56.485Z · score: 7 (4 votes) · LW · GW

Well, why stop there?

World GDP is $80.6 trillion.

Why doesn't the United States threaten to nuke everyone if they don't give a very reasonable 20% of their GDP per year to fund X-Risk — or whatever your favorite worthwhile projects are?

Screw it, why don't we set the bar at 1%?

Imagine you're advising the U.S. President (it's Donald Trump right now, incidentally). Who should President Trump threaten with nuking if they don't pay up to fund X-Risk? How much?

Now, let's say 193 countries do it, and $X trillion is coming in and doing massive good.

Only Switzerland and North Korea defect. What do you do? Or rather, what do you advise Donald Trump to do?

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T22:36:12.969Z · score: 18 (7 votes) · LW · GW
> LW frontpage going down is also not particularly bad [...] If you wanted to convince me, you could make a case that destroying trust is really bad

Umm, respectfully, I think this is extremely arrogant. Dangerously so.

Anyways, I'm being blunt here, but — I think — respectful and hopefully useful. Think about this. Reasoning follows —

The instructions if you got launch codes (also in the above post) were as follows (emphasis added with underline) —

"Every Petrov Day, we practice not destroying the world. One particular way to do this is to practice the virtue of not taking unilateralist action.

It’s difficult to know who can be trusted, but today I have selected a group of LessWrong users who I think I can rely on in this way. You’ve all been given the opportunity to show yourselves capable and trustworthy.

[...]

This Petrov Day, between midnight and midnight PST, if you, {{username}}, enter the launch codes below on LessWrong, the Frontpage will go down for 24 hours.

I hope to see you on the other side of this, with our honor intact."

So, to Ben Pace at least (the developer who put a tremendous amount of hours and thought into putting this together), it represents...

*"practicing not destroying the world"

*"practicing the virtue of not taking unilateralist action"

*implications around his own uncertainty of who to trust

*de facto for Ben that he can't rely on you personally, by his standards, if you do it

*showing yourself not "capable and trustworthy" by his standards

*having the total group's "honor" "not be intact", under Ben's conception

And you want me to make a case for you on a single variable while ignoring the rather clear and straightforward written instructions for your own simple reductive understanding?

For Ben at least, the button thing was a symbolic exercise analogous to not nuking another country, and he specifically asked you not to press it and said he's trusting you.

So, no, I don't want to "convince you" nor "make a case that destroying trust is really bad." You're literally stating you should set the burden of proof and others should "make a case."

In an earlier comment you wrote,

> You can in fact compare whether or not a particular trade is worth it if the situation calls for it, and a one-time situation that has an upside of $1672 for ~no work seems like such a situation.

"No work"? You mean aside from the work that Ben and the team did (a lot) and demonstrating to the world at large that the rationality community can't press a "don't destroy our own website" button to celebrate a Soviet soldier who chose restraint?

I mean, I don't even want to put numbers on it, but if we gotta go to "least common denominator", then $1672 is less than a week's salary of the median developer in San Francisco. You'd be doing a hell of a lot more damage than that to morale and goodwill, I reckon, among the dev team here.

To be frank, I think the second-order and third-order effects of this project going well on Ben Pace alone are worth more than $1672 in "generative goodness" or whatever, and the potential disappointment and loss of faith in people he "thinks but is uncertain he can rely upon and trust" is... I mean, you know that one highly motivated person leading a community can make an immense difference, right?

Just so you can get $1672 for charity ("upside") with "~no work"?

And that's just productivity, ignoring any potential negative affect or psychological distress, and being forced to reevaluate who he can trust. I mean, to pick a more taboo example, how many really nasty personal insults would you shout at a random software developer for $1672 to charity? That's almost "no work" — it's just you shouting some words, plus whatever trivial psychological distress they feel — and I wager the distress of getting random insults from a stranger is much lower than that of having people you "are relying on and trusting" press a "don't nuke the world" simulator button.

Like, if you just read what Ben wrote, you'd realize that risking destroying goodwill and faith in a single motivated innovative person alone should be priced well over $20k. I wouldn't have done it for $100M going to charity. Seriously.

If you think that's insane, stop and think why our numbers are four orders of magnitude apart — our priors must be obviously very different. And based on the comments, I'm taking into account more things than you, so you might be missing something really important.

(I could go on forever about this, but here's one more: what's the difference in the expected number of people discovering and getting into basic rationality, cognitive biases, and statistics between a failed "not destroying the world day" commemoration and a successful one? Mine: high. What's the value of more people thinking and acting rationally? Mine: high. So multiply the delta by the value. That's just one more thing. There's a lot you're missing. I don't mean this disrespectfully, but maybe think more instead of "doing you" on a quick timetable?)

(Here's another one you didn't think about: we're celebrating a Soviet engineer. Run this headline in a Russian newspaper: "Americans try to celebrate Stanislav Petrov by not pressing 'nuke their own website' button, arrogant American pushes button because money isn't donated to charity.")

(Here's another one you didn't think about: I'll give anyone 10:1 odds this is cited in a mainstream political science journal within 15 years — journals that are read by people who both set and advise on policy — and "group of mostly American and European rationalists couldn't not nuke their own site" absolutely is the type of thing to shape policy discussions ever-so-slightly.)

(Here's another one you didn't think about: some fraction of the people here are active-duty or reserve military in various countries. How does this going one way or another shape their kill/no-kill decisions in ambiguous warzones? Have you ever read any military memoirs by people who had to make those calls quickly, EX overwatch snipers in Mogadishu? No?)

(Not meant to be snarky — Please think more and trust your own intuition less.)

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T05:16:46.002Z · score: 2 (10 votes) · LW · GW

Oh in case you missed the subtext, it's a SciFi joke.

It's funny cuz it's sort of almost plausibly true and gets people thinking about what if their life had higher stakes and their decisions mattered, eh?

Obviously, it's just a silly amusing joke. And it's obviously going to look really counterproductively weird if analyzed or discussed among normal people, since they don't get nerd humor. I recommend against doing that.

Just laugh and maybe learn something.

Don't be stupid and overthink it.

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T05:02:22.358Z · score: -4 (8 votes) · LW · GW

Great comment.

Side note, I occasionally make a joke that I'm sent from another part of the multiverse (colloquially, "the future") to help fix this broken fucked up instance of the universe.

The joke goes — it's not a stupid teleportation thing like Terminator; it's a really expensive two-step process to edit even a tiny bit of information in another universe. So with the right CTC relays you can edit a tiny bit of information, creating some high-variance people in a dense area, and then the only people who get their orders are people who reach a sufficient level of maturity, competence, and duty. Not everyone we give the evolved post-sapien genetics gets their orders; the overwhelming majority fail, actually.

Now, the reason we at the Agency — in the joke, I'm on the Solar Task Force — are trying to fix this universe is because it affects other parts of the multiverse. There's a lot of stuff, but here's a simple one — the coordinates of Earth are similar in many branches. Setting off tons of nukes and beaming random stuff into space calls attention to Earth's location. I believe a game-theoretic solution to the Fermi Paradox was proposed recently in SciFi and no one was paying attention. I mean, did anyone check that out? Right? Don't let Earth's coordinates get out. Jeez guys. This isn't complicated. C'mon.

Now normally things work correctly, but this particular universe came about because you idiots — I mean, not you since you weren't alive — but collectively, this idiot branch of humans took a homeless bohemian artist who was a kinda-brave messenger soldier in World War One (already a disaster, but then the error compounds) and they took this loser with a bad attitude and put him in charge of a major industrial power at one of the most leveraged moments in human history. He wasn't even German! He was Austrian! And he took over the Nazi Party as only its 55th member, after he was sent in as a police officer to watch the group. (Look it up on Wikipedia, it's true.) Then, he tries a putsch — a coup — and it fails, and the state semi-prosecutes him, making him famous, but then lets him off easily. He turns that fame (infamy, really) into wealth, that into political power, and takes over. Then he does a ton of damage, including invading and destroying the most important city in the world at the time. Right, where are all those physicists and mathematicians from? Starts with a "B"? Used to be a monarchy? Destroyed by the Nazis? And after those people aged out and had completed their work, we went through a stagnation period for quite a while? Right? Isn't that what happened?

What a comedy of fucking errors. So much emotionalism. This branch of the universe is so incredibly fucked, I hate being here, but I'm doing my best. I like you humans, some of you are marvelous and all of you I want to succeed but man I fucking hate it here. Anyway, the first time I made this joke I was worried my CO would be pissed at me since I'm breaking rule#1, but it's actually so bad here that I didn't even get paradox warnings. (A true paradox crashes the universe, which we actually do when things are sufficiently bad and the rot is liable to spread.)

Anyway, this is just a joke. But yes, "desire for infamy" — fucking homo sapien sapiens. Evolve faster, please.

Just kidding.

(If I wanted to continue the joke, I'd say I'm certainly going to get in trouble sooner or later, but this amuses the hell out of me and this is a really high-stress, unpleasant job. Anyway, not joking, now I'll go back to building my peak performance tech company that prompts clear thinking, intentional action, and generally more eustress and joy while eliminating distress. I'll build that into one of the largest companies on Earth while also subtly-but-not-subtly producing useful media with a lot of subtext lessons and building an elite team that does a mix of internal inventing like Bell Labs as well as diffusion, PayPal Mafia style, those people going on to also start large important prosocial institutions. After the first few billion, I'll fund better sensors for asteroid defense and bring down the cost of regular testing/monitoring bloodwork and simple "already known best practices" in biochemical regulation. Anyway, I'm just joking around cuz this amuses me and working 90-110 hours per week while in a mostly human body is very tiring. I like this whole button thing btw, this is really good. It gives me a little bit of hope. I guess hope is dangerous too though. Anyway, back to work, I'm going to teach my brilliant junior team that "there is value in writing a clear agenda of what we want to accomplish in a meeting". I'd rather be developing new branches of mathematics — I already developed one for real, it blows people's minds when I show it to them (ask me in person whenever a whiteboard is around), and I'll write it up when I have some spare time — but yeah, "we shouldn't just fuck around for no purpose in meetings" is the current level of the job. So be it. Anyway, this button thing is good, I needed this. Thanks.)

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:39:36.565Z · score: 1 (4 votes) · LW · GW

Upvoted for poetry.

Commenting to underline it for "the call to infamy" — wonderful phrase.

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:38:16.834Z · score: 3 (4 votes) · LW · GW

https://en.wikipedia.org/wiki/Pandora#Works_and_Days

"The more famous version of the Pandora myth comes from another of Hesiod's poems, Works and Days. In this version of the myth, Hesiod expands upon her origin, and moreover widens the scope of the misery she inflicts on humanity. As before, she is created by Hephaestus, but now more gods contribute to her completion: Athena taught her needlework and weaving; Aphrodite "shed grace upon her head and cruel longing and cares that weary the limbs"; Hermes gave her "a shameful mind and deceitful nature"; Hermes also gave her the power of speech, putting in her "lies and crafty words"; Athena then clothed her; next Persuasion and the Charites adorned her with necklaces and other finery; the Horae adorned her with a garland crown. Finally, Hermes gives this woman a name: Pandora – "All-gifted" – "because all the Olympians gave her a gift". (In Greek, Pandora has an active rather than a passive meaning; hence, Pandora properly means "All-giving." The implications of this mistranslation are explored in "All-giving Pandora: mythic inversion?" below.) In this retelling of her story, Pandora's deceitful feminine nature becomes the least of humanity's worries. For she brings with her a jar (which, due to textual corruption in the sixteenth century, came to be called a box) containing "burdensome toil and sickness that brings death to men", diseases and "a myriad other pains". Prometheus had (fearing further reprisals) warned his brother Epimetheus not to accept any gifts from Zeus. But Epimetheus did not listen; he accepted Pandora, who promptly scattered the contents of her jar. As a result, Hesiod tells us, "the earth and sea are full of evils""

What's in the box? What's in the box? Don't open it! Oh, shit...

(Grace, longing and care, and being gifted causes the box to be opened. It's like history just keeps repeating itself or something...)

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:29:26.119Z · score: 4 (3 votes) · LW · GW

Let's, for the hell of it, assume real money got involved. Like, it was $50M or something.

Now — who would you want to be able to vote on whether destruction happens if their values aren't met with that amount of money at stake?

If it's the whole internet, most people will treat it as entertainment or competition as opposed to considering what we actually care about.

But if we're going to limit it only to people that are thoughtful, that invalidates the point of majority vote, doesn't it?

Think about it — I'm not going to write out all the implications, but I think your faith in crowdsourced voting mechanisms, for things with a known short-term payoff set against unknown long-term costs that destroy unknown long-term gains, is perhaps misplaced...?

Most people are — factually speaking — not educated on all relevant topics, not fully numerate in statistics and payoff calculations, go with their feelings instead of analysis, and are short-term thinkers...

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:23:09.465Z · score: 7 (3 votes) · LW · GW

Note to self: Does lighthearted dark humor highlighting risk increase or decrease chances of bad things happening?

Initial speculation: it might have an inverted response curve. One or two people making the joke might increase gravity, everyone joking about it might change norms and salience.

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:19:55.521Z · score: 22 (10 votes) · LW · GW

Firm disagree. Second-order and third-order effects go limit->infinity here.

Also btw, I'm running a startup that's now looking at — best case scenario — handling significant amounts of money over multiple years.

It makes me realize that "a lot of money" on the individual level is a terrible heuristic. Seriously, it's hard to get one's mind around it, but a million dollars is decidedly not a lot of money on the global scale.

For further elaboration, this is relevant and incredibly timely:

https://slatestarcodex.com/2019/09/18/too-much-dark-money-in-almonds/

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:16:57.549Z · score: 7 (5 votes) · LW · GW

I wouldn't do it for $100M.

Seriously.

Because it increases the marginal chance that humanity goes extinct ever-so-slightly.

If you have launch codes, wait until tomorrow to read the last part eh? —

(V zrna, hayrff lbh guvax gur rkcrevzrag snvyvat frpergyl cebzbgrf pnhgvba naq qrfgeblf bcgvzvfz, juvpu zvtug or gehr.)

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:14:41.154Z · score: 13 (4 votes) · LW · GW

Nooooo you're a good person but you're promoting negotiating with terrorists literally boo negative valence emotivism to highlight third-order effects, boo, noooooo................

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:11:59.292Z · score: -5 (3 votes) · LW · GW

Dank EA Memes? What? Really? How do I get in on this?

(Serious.)

(I shouldn't joke "I have launch codes" — that's grossly irresponsible for a cheap laugh — but umm, I just meta made the joke.)

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:09:37.036Z · score: 16 (6 votes) · LW · GW

This whole thread is awesome. This is maybe the best thing that's happened on LessWrong since Eliezer more-or-less went on hiatus.

Huge respect to everyone. This is really great. Hard but great. Actually it's great because it's hard.

Comment by lionhearted on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:07:32.115Z · score: 25 (10 votes) · LW · GW

"Rae, this is a friendly reminder from the universe that you can only at best control the first-order effects of systems you create..."