Posts

Feedback calibration 2021-03-15T14:24:44.244Z
Three more stories about causation 2020-11-03T15:51:58.820Z
cousin_it's Shortform 2019-10-26T17:37:44.390Z
Announcement: AI alignment prize round 4 winners 2019-01-20T14:46:47.912Z
Announcement: AI alignment prize round 3 winners and next round 2018-07-15T07:40:20.507Z
How to formalize predictors 2018-06-28T13:08:11.549Z
UDT can learn anthropic probabilities 2018-06-24T18:04:37.262Z
Using the universal prior for logical uncertainty 2018-06-16T14:11:27.000Z
Understanding is translation 2018-05-28T13:56:11.903Z
Announcement: AI alignment prize round 2 winners and next round 2018-04-16T03:08:20.412Z
Using the universal prior for logical uncertainty (retracted) 2018-02-28T13:07:23.644Z
UDT as a Nash Equilibrium 2018-02-06T14:08:30.211Z
Beware arguments from possibility 2018-02-03T10:21:12.914Z
An experiment 2018-01-31T12:20:25.248Z
Biological humans and the rising tide of AI 2018-01-29T16:04:54.749Z
A simpler way to think about positive test bias 2018-01-22T09:38:03.535Z
How the LW2.0 front page could be better at incentivizing good content 2018-01-21T16:11:17.092Z
Beware of black boxes in AI alignment research 2018-01-18T15:07:08.461Z
Announcement: AI alignment prize winners and next round 2018-01-15T14:33:59.892Z
Announcing the AI Alignment Prize 2017-11-04T11:44:19.000Z
Announcing the AI Alignment Prize 2017-11-03T15:47:00.092Z
Announcing the AI Alignment Prize 2017-11-03T15:45:14.810Z
The Limits of Correctness, by Bryan Cantwell Smith [pdf] 2017-08-25T11:36:38.585Z
Using modal fixed points to formalize logical causality 2017-08-24T14:33:09.000Z
Against lone wolf self-improvement 2017-07-07T15:31:46.908Z
Steelmanning the Chinese Room Argument 2017-07-06T09:37:06.760Z
A cheating approach to the tiling agents problem 2017-06-30T13:56:46.000Z
What useless things did you understand recently? 2017-06-28T19:32:20.513Z
Self-modification as a game theory problem 2017-06-26T20:47:54.080Z
Loebian cooperation in the tiling agents problem 2017-06-26T14:52:54.000Z
Thought experiment: coarse-grained VR utopia 2017-06-14T08:03:20.276Z
Bet or update: fixing the will-to-wager assumption 2017-06-07T15:03:23.923Z
Overpaying for happiness? 2015-01-01T12:22:31.833Z
A proof of Löb's theorem in Haskell 2014-09-19T13:01:41.032Z
Consistent extrapolated beliefs about math? 2014-09-04T11:32:06.282Z
Hal Finney has just died. 2014-08-28T19:39:51.866Z
"Follow your dreams" as a case study in incorrect thinking 2014-08-20T13:18:02.863Z
Three questions about source code uncertainty 2014-07-24T13:18:01.363Z
Single player extensive-form games as a model of UDT 2014-02-25T10:43:12.746Z
True numbers and fake numbers 2014-02-06T12:29:08.136Z
Rationality, competitiveness and akrasia 2013-10-02T13:45:31.589Z
Bayesian probability as an approximate theory of uncertainty? 2013-09-26T09:16:04.448Z
Notes on logical priors from the MIRI workshop 2013-09-15T22:43:35.864Z
An argument against indirect normativity 2013-07-24T18:35:04.130Z
"Epiphany addiction" 2012-08-03T17:52:47.311Z
AI cooperation is already studied in academia as "program equilibrium" 2012-07-30T15:22:32.031Z
Should you try to do good work on LW? 2012-07-05T12:36:41.277Z
Bounded versions of Gödel's and Löb's theorems 2012-06-27T18:28:04.744Z
Loebian cooperation, version 2 2012-05-31T18:41:52.131Z
Should logical probabilities be updateless too? 2012-03-28T10:02:09.575Z

Comments

Comment by cousin_it on On the limits of idealized values · 2021-06-23T09:34:22.373Z · LW · GW

Very nice and clear writing, thank you! This is exactly the kind of stuff I'd love to see more of on LW:

Suppose I can create either this galaxy Joe’s favorite world, or a world of happy puppies frolicking in the grass. The puppies, from my perspective, are a pretty safe bet: I myself can see the appeal.

Though I think some parts could use more work (shorter words and clearer images):

Second (though maybe minor/surmountable): even if your actual attitudes yield determinate verdicts about the authoritative form of idealization, it seems like we’re now giving your procedural/meta evaluative attitudes an unjustified amount of authority relative to your more object-level evaluative attitudes.

But most of the post is good.

R. Scott Bakker made a related point in Crash Space:

The reliability of our heuristic cues utterly depends on the stability of the systems involved. Anyone who has witnessed psychotic episodes has firsthand experience of consequences of finding themselves with no reliable connection to the hidden systems involved. Any time our heuristic systems are miscued, we very quickly find ourselves in ‘crash space,’ a problem solving domain where our tools seem to fit the description, but cannot seem to get the job done.

And now we’re set to begin engineering our brains in earnest. Engineering environments has the effect of transforming the ancestral context of our cognitive capacities, changing the structure of the problems to be solved such that we gradually accumulate local crash spaces, domains where our intuitions have become maladaptive. Everything from irrational fears to the ‘modern malaise’ comes to mind here. Engineering ourselves, on the other hand, has the effect of transforming our relationship to all contexts, in ways large or small, simultaneously. It very well could be the case that something as apparently innocuous as the mass ability to wipe painful memories will precipitate our destruction. Who knows? The only thing we can say in advance is that it will be globally disruptive somehow, as will every other ‘improvement’ that finds its way to market.

Human cognition is about to be tested by an unparalleled age of ‘habitat destruction.’ The more we change ourselves, the more we change the nature of the job, the less reliable our ancestral tools become, the deeper we wade into crash space.

In other words, yeah, I can imagine an alter ego who sees more and thinks better than me. As long as it stays within human evolutionary bounds, I'm even okay with trusting it more than myself. But once it steps outside these bounds, it seems like veering into "crash space" is the expected outcome.

Comment by cousin_it on How can there be a godless moral world ? · 2021-06-21T13:46:28.336Z · LW · GW

"Immoral" interactions between people are mostly interactions that reduce the total pie. So groups that are best at suppressing such interactions within the group (while maybe still allowing harm to outsiders) end up with the biggest total pie - the nicest goods, the best weapons and so on. That's why all Earth is now ruled by governments that reduce murder far below hunter-gatherer level. That doesn't explain all niceness we see, but a big part of it, I think.

Comment by cousin_it on Non-poisonous cake: anthropic updates are normal · 2021-06-18T22:04:28.058Z · LW · GW

Where are you on the spectrum from "SSA and SIA are equally valid ways of reasoning" to "it's more and more likely that in some sense SIA is just true"? I feel like I've been at the latter position for a few years now.

Comment by cousin_it on Reply to Nate Soares on Dolphins · 2021-06-10T15:28:02.862Z · LW · GW

I think the genealogical definition is fine in this case - once you diverge from fish, you're no longer fish, same as birds are no longer dinosaurs. But I would also add that Nate might not have been fully serious, and you tend to get a bit worked up sometimes :-)

Comment by cousin_it on Often, enemies really are innately evil. · 2021-06-07T20:26:14.392Z · LW · GW

So you think that between two theories - "evil comes from people's choices" and "evil comes from circumstances" - the former can't be "leveraged" and we should adopt the latter a priori, regardless of which one is closer to the truth? I think that's jumping the gun a bit. Let's figure out what's true first, then make decisions based on that.

Comment by cousin_it on Often, enemies really are innately evil. · 2021-06-07T15:39:13.004Z · LW · GW

I think that theory is false. In an unconstrained wild west environment, an asshole with a gun will happily bully those he knows don't have guns. And conversely, people have found ways to be good even in very constrained environments. Good and evil are the responsibility of the person doing them, not the environment.

Comment by cousin_it on What to optimize for in life? · 2021-06-06T09:52:05.352Z · LW · GW

One possible answer is "maximize win-win trades with other people", explained a bit more in this comment.

Comment by cousin_it on Alcohol, health, and the ruthless logic of the Asian flush · 2021-06-05T15:02:00.670Z · LW · GW

Wait, you don't know? Disulfiram implants are widely used in Eastern Europe.

Comment by cousin_it on Social behavior curves, equilibria, and radicalism · 2021-06-05T08:30:50.525Z · LW · GW

What a beautiful model! Indeed it seems like a rediscovery of Granovetter's threshold model, but still, great work finding it.

I'm not sure "radical" is the best word for people at the edges of the curve, since figure 19 shows that the more of them you have, the more society is resistant to change. Maybe "independent" instead?

Comment by cousin_it on Rationalists should meet Integral Theory · 2021-06-05T01:51:54.026Z · LW · GW

I agree that something like a math theorem can be independent of its author's life details. But Wilber is a philosopher of life, talking about human development and so on, and the people he holds up as examples again and again turn out to be abusers and frauds. There's just no way his philosophy of life is any good.

Comment by cousin_it on Rationalists should meet Integral Theory · 2021-06-05T00:22:01.608Z · LW · GW

I've read a lot of stuff from EST, Castaneda, Rajneesh and so on. Before my first comment on this post, I downloaded a book by Wilber and read a good chunk of it. It's woo all right.

But attacking woo on substance isn't always the best approach. I don't want to write a treatise on "holons" to which some acolyte will respond with another treatise. As Pelevin wrote, "a dull mind will sink like an iron in an ocean of shit, and a sharp mind will sink like a Damascene blade". It's enough that the idea comes from a self-aggrandizing "guru" who surrounds himself with identical "gurus", each one with a harem, a love for big donations, and a trail of abuse lawsuits. For those who have seen such things before, the first link I gave (showing the founder of the movement promoting a do-nothing quantum trinket) is already plenty.

Comment by cousin_it on Unattributed Good · 2021-06-04T22:22:07.882Z · LW · GW

Good post. I think the principle you attribute to Truman is originally from the Sermon on the Mount ("don't let your left hand know what your right hand is doing").

Comment by cousin_it on Rationalists should meet Integral Theory · 2021-06-04T21:36:13.455Z · LW · GW

Well, he's the founder and leader of the whole thing. Often referred to as the "Einstein of consciousness studies" - a description that comes from the man himself.

He also enthusiastically promoted this guy (Ctrl+F "craniosacral rhythm"), this guy (Ctrl+F "wives"), and this guy (Ctrl+F "blood"). Are these examples of great people we'd gain?

Comment by cousin_it on Alcohol, health, and the ruthless logic of the Asian flush · 2021-06-04T21:04:15.714Z · LW · GW

Just kidding. It’s called disulfiram, and it was approved by the FDA in 1951.

Cute turnaround + mention of FDA = instant feeling of reading Scott.

Comment by cousin_it on What question would you like to collaborate on? · 2021-06-04T20:29:22.749Z · LW · GW

Specifically looking for conceptual shifts that allow you to do something better.

Comment by cousin_it on An Intuitive Guide to Garrabrant Induction · 2021-06-04T17:54:47.528Z · LW · GW

Interesting! Can you write up the WLIC, here or in a separate post?

Comment by cousin_it on What question would you like to collaborate on? · 2021-06-04T17:26:03.723Z · LW · GW

It worked on me. The change was surprisingly fast: in a couple of days I went from "no drawing talent, stick figures only" to one-minute sketches similar to this or this (not mine, but they should give the idea). Getting to this level doesn't require any technique; it's purely a conceptual shift. You learn how to trick your mind into "drawing what you see" instead of "drawing what you think". Betty Edwards describes this shift very clearly and gives a couple of counterintuitive exercises for achieving it. I wouldn't be surprised if some people got it in an hour.

The result isn't "drawing very well" (which takes more and different kinds of work), but I'm pretty confident that I can look at anything and make a pencil drawing that looks roughly similar. It doesn't even matter what! When you "draw what you see", you no longer care if it's a person or tree or car or whatever, it's all just a bunch of shapes in your visual field that you copy to paper.

In singing, there's a similar concept of "singing with breath support", which is also a kind of primitive indivisible feeling that good singers have. But as far as I know, nobody has found a description of it that would reliably work on beginners.

Comment by cousin_it on Rationalists should meet Integral Theory · 2021-06-04T15:19:40.695Z · LW · GW

The post is talking about this guy, who's also the biggest defender of this guy. I don't want LW to have any association with this, so I strongly downvoted the post.

Comment by cousin_it on What question would you like to collaborate on? · 2021-06-04T13:57:56.781Z · LW · GW

"Drawing on the right side of the brain".

Comment by cousin_it on An Intuitive Guide to Garrabrant Induction · 2021-06-04T11:32:06.452Z · LW · GW

I thought Diffractor's result was pretty troubling for the logical induction criterion:

...the limit of a logical inductor, P_inf, is a constant distribution, and by this result, isn't a logical inductor! If you skip to the end and use the final, perfected probabilities of the limit, there's a trader that could rack up unboundedly high value!

But maybe understanding has changed since then? What's the current state?

Comment by cousin_it on What's your visual experience when reading technical material? · 2021-06-03T17:00:29.054Z · LW · GW

Interesting! So how do you think about e.g. a bivariate normal distribution? To me it looks like either a bunch of dots in the shape of a fuzzy circle or ellipse, or a correspondingly shaped hill on a 2D plane, depending on what the problem needs. I can't imagine how to think about it without having such mental images.
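
For anyone curious, here's a minimal sketch of both mental images (assuming numpy and matplotlib are available; the covariance numbers are made up): the fuzzy ellipse of samples, and the density as a hill over the plane.

    import numpy as np
    import matplotlib.pyplot as plt

    mean = [0, 0]
    cov = [[1.0, 0.8], [0.8, 1.0]]  # correlated components -> tilted ellipse

    # View 1: a cloud of samples, shaped like a fuzzy ellipse.
    samples = np.random.multivariate_normal(mean, cov, size=2000)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.scatter(samples[:, 0], samples[:, 1], s=3, alpha=0.3)
    ax1.set_title("samples: fuzzy ellipse")

    # View 2: the density as a hill over the 2D plane, written out by hand.
    xs, ys = np.meshgrid(np.linspace(-3, 3, 100), np.linspace(-3, 3, 100))
    det = cov[0][0] * cov[1][1] - cov[0][1] ** 2
    quad = (cov[1][1] * xs**2 - 2 * cov[0][1] * xs * ys + cov[0][0] * ys**2) / det
    density = np.exp(-quad / 2) / (2 * np.pi * np.sqrt(det))
    ax2.contourf(xs, ys, density, levels=20)
    ax2.set_title("density: hill seen from above")
    plt.show()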

Comment by cousin_it on What question would you like to collaborate on? · 2021-06-03T16:30:50.670Z · LW · GW

For a long time I've been very inspired by the fact that Betty Edwards' book exists. It turns people who have "no drawing talent" into people who can easily draw anything they see, not by strenuous exercise, but by a conceptual shift that can be achieved in a few hours. I'd like to know if such conceptual shifts can be found for math ("math is too abstract"), programming ("I'm not good with computers"), music ("I can't sing") or other areas. It would be nice to study this collaboratively.

Comment by cousin_it on Finite Factored Sets · 2021-05-28T14:09:01.477Z · LW · GW

That's part of the problem, but it seems to me that there's more. A thermodynamical system starts with low entropy and ends at high entropy. That means the initial states we're interested in are usually full of correlated variables (like "all nitrogen is over here and all oxygen is over there"), while final states consist of uncorrelated, or almost uncorrelated, variables (the two gases mixed). So you'll infer a reverse arrow of time.
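
A toy version of what I mean (made-up dynamics, nothing from the post): start with all nitrogen on the left and all oxygen on the right, let particles hop around at random, and watch the species/position correlation decay to nothing.

    import random

    N = 1000
    # Low-entropy initial state: species and side are perfectly correlated.
    # species[i]: 0 = nitrogen, 1 = oxygen; side[i]: 0 = left, 1 = right.
    species = [0] * (N // 2) + [1] * (N // 2)
    side = list(species)  # nitrogen starts on the left, oxygen on the right

    def correlation(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
        vx = sum((x - mx) ** 2 for x in xs) / n
        vy = sum((y - my) ** 2 for y in ys) / n
        return cov / (vx * vy) ** 0.5

    for step in range(5001):
        if step % 1000 == 0:
            print(step, round(correlation(species, side), 3))  # 1.0 -> about 0.0
        for _ in range(10):  # a few random particles drift to a random side
            side[random.randrange(N)] = random.randrange(2)

By the end nothing is correlated with anything, so a method that reads orthogonality as "earlier" will see the mixed state as the natural starting point.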

Comment by cousin_it on The Homunculus Problem · 2021-05-28T08:39:38.546Z · LW · GW

Not sure I see what problem the post is talking about. We perceive A as darker than B, but actually A and B have the same brightness (as measured by a brightness-measuring device). The brain doesn't remove the shadow; it adjusts the perceived brightness of things in shadow.

Comment by cousin_it on What's your visual experience when reading technical material? · 2021-05-28T00:09:49.505Z · LW · GW

Very nice, thank you! I just read through part of it, Cayley diagrams make sense and it's pretty easy to see subgroups. Will read more tomorrow.

Comment by cousin_it on What's your visual experience when reading technical material? · 2021-05-27T21:21:01.307Z · LW · GW

I visualize math all the time. Geometry, linear algebra and analysis are of course very visual. Probability theory and combinatorics too. Some topics, like group theory, I don't know how to make visual, so I don't learn them :-(

Comment by cousin_it on Finite Factored Sets · 2021-05-27T18:07:36.885Z · LW · GW

Wait, can you describe the temporal inference in more detail? Maybe that's where I'm confused. I'm imagining something like this:

  1. Check which variables look uncorrelated

  2. Assume they are orthogonal

  3. From that orthogonality database, prove "before" relationships

Which runs into the problem that if you let a thermodynamical system run for a long time, it becomes a "soup" where nothing is obviously correlated to anything else. Basically the final state would say "hey, I contain a whole lot of orthogonal variables!" and that would stop you from proving any reasonable "before" relationships. What am I missing?
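
To make the worry concrete, here's the pipeline as I imagine it (a hypothetical sketch: a crude sample-based test stands in for step 1, and the X-independent-of-X-XOR-Y criterion from the post stands in for step 3):

    from itertools import product

    def looks_independent(samples, f, g, tol=0.01):
        # Step 1: check whether binary variables f and g look uncorrelated.
        n = len(samples)
        for a, b in product([0, 1], repeat=2):
            pab = sum(1 for s in samples if f(s) == a and g(s) == b) / n
            pa = sum(1 for s in samples if f(s) == a) / n
            pb = sum(1 for s in samples if g(s) == b) / n
            if abs(pab - pa * pb) > tol:
                return False
        return True

    def infer_before(samples, x, y):
        # Steps 2-3: treat "looks independent" as orthogonal, then apply the
        # criterion: x orthogonal to (x XOR y) lets us conclude x is before y.
        return looks_independent(samples, x, lambda s: x(s) ^ y(s))

In the "soup" state, looks_independent fires for almost every pair of variables, so the orthogonality database becomes indiscriminate and the "before" relations it certifies stop tracking anything real.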

Comment by cousin_it on Finite Factored Sets · 2021-05-27T10:00:55.818Z · LW · GW

I think your argument about entropy might have the same problem. Since classical physics is reversible, if we build something like a heat engine in your model, all randomness will already be contained in the initial state. Total "entropy" will stay constant, instead of growing as it's supposed to, and the final state will be just as good a factorization as the initial one. Usually in physics you get time (and I suspect also causality) by pointing to a low probability macrostate and saying "this is the start", but your model doesn't talk about macrostates yet, so I'm not sure how much it can capture time or causality.

That said, I really like how your model talks only about information, without postulating any magical arrows. Maybe it has a natural way to recover macrostates, and from them, time?

Comment by cousin_it on Finite Factored Sets · 2021-05-26T22:40:49.673Z · LW · GW

Thanks for the response! Part of my confusion went away, but some still remains.

In the game of life example, couldn't there be another factorization where a later step is "before" an earlier one? (Because the game is non-reversible and later steps contain less and less information.) And if we replace it with a reversible game, don't we run into the problem that the final state is just as good a factorization as the initial?

Comment by cousin_it on Finite Factored Sets · 2021-05-26T18:37:34.133Z · LW · GW

Not sure we disagree, maybe I'm just confused. In the post you show that if X is orthogonal to X XOR Y, then X is before Y, so you can "infer a temporal relationship" that Pearl can't. I'm trying to understand the meaning of the thing you're inferring - "X is before Y". In my example above, Bob tells Alice a lossy function of his knowledge, and Alice ends up with knowledge that is "before" Bob's. So in this case the "before" relationship doesn't agree with time, causality, or what can be computed from what. But then what conclusions can a scientist make from an inferred "before" relationship?

Comment by cousin_it on Is there a term for 'the mistake of making a decision based on averages when you could cherry picked instead'? · 2021-05-26T13:54:21.206Z · LW · GW

If you consider homeschooling your kids and look at statistics, it is important to realize that they are based on the average of these two groups, so your chances are better.

Haha, I just realized that can be true no matter which group you're in. If you want to give your kids better education, statistics will say homeschooling isn't great at that; if you want to protect your kids from sin, statistics will say homeschooling isn't great at that either; but your chances of achieving your goal, whichever of the two it is, are better than statistics suggest. I wonder where else this kind of quirk happens.
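
With made-up numbers the quirk looks like this (everything here is hypothetical, including the baseline of 6 for conventional schooling):

    # Homeschoolers split into two groups with different goals; each group
    # does well on its own goal and poorly on the other.
    academic_score = {"academics-focused": 8, "religion-focused": 3}
    religious_score = {"academics-focused": 3, "religion-focused": 8}

    # Published statistics average over both groups, so homeschooling looks
    # mediocre on every measure:
    print((8 + 3) / 2)  # 5.5 average academic score, below the 6.0 baseline

    # But you'd join the group matching your own goal, and the group-specific
    # score beats the baseline no matter which goal you picked:
    print(academic_score["academics-focused"])  # 8
    print(religious_score["religion-focused"])  # 8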

Comment by cousin_it on Finite Factored Sets · 2021-05-26T09:14:19.403Z · LW · GW

I feel that interpreting "strictly before" as causality is making me more confused.

For example, here's a scenario with a randomly changed message. Bob peeks at ten regular envelopes and a special envelope that gives him a random boolean. Then Bob tells Alice the contents of either the first three envelopes or the second three, depending on the boolean. Now Alice's knowledge depends on six out of ten regular envelopes and the special one, so it's still "strictly before" Bob's knowledge. And since Alice's knowledge can be computed from Bob's knowledge but not vice versa, in FFS terms that means the "cause" can be (and in fact is) computed from the "effect", but not vice versa. My causal intuition is just blinking at all this.

Here's another scenario. Alice gets three regular envelopes and accurately reports their contents to Bob, and a special envelope that she keeps to herself. Then Bob peeks at seven more envelopes. Now Alice's knowledge isn't "before" Bob's, but if later Alice predictably forgets the contents of her special envelope, her knowledge becomes "before" Bob's. Even though the special envelope had no effect on the information Alice gave to Bob, and didn't affect the causal arrow in any possible world. And if we insist that FFS=causality, then by forgetting the envelope, Alice travels back in time to become the cause of Bob's knowledge in the past. That's pretty exotic.

Comment by cousin_it on We should probably buy ADA? · 2021-05-25T11:10:14.879Z · LW · GW

Helping governments like Ethiopia's build better services for their citizens might be a good business, but I'm very skeptical that it needs any cryptocurrency tech.

Comment by cousin_it on Finite Factored Sets · 2021-05-25T10:40:04.085Z · LW · GW

I think the definition of history is the most natural way to recover something like causal structure in these models.

I'm not sure how much it's about causality. Imagine there's a bunch of envelopes with numbers inside, and one of the following happens:

  1. Alice peeks at three envelopes. Bob peeks at ten, which include Alice's three.

  2. Alice peeks at three envelopes and tells the results to Bob, who then peeks at seven more.

  3. Bob peeks at ten envelopes, then tells Alice the contents of three of them.

Under the FFS definition, Alice's knowledge in each case is "strictly before" Bob's. So it seems less like a causal relationship and more like "depends on fewer basic facts".

Comment by cousin_it on Finite Factored Sets · 2021-05-24T16:50:24.924Z · LW · GW

Scott, look, you misunderstand my question. Look at my first three comments in this thread. Each of them has an example! And to each of them you reply with... abstract reasoning.

I learn things through examples. There are many people like me. Pearl's causality only clicked for me due to the smoking/tar/cancer thing. Your method is interesting; in the post you give one worked example for it. I kicked it from this direction and that, and now I know what makes it tick. What I'm asking you to do is give examples number two and three. As down-to-earth as possible. They can be about inference, or about some larger frame, I don't mind. Can you do that?

Sorry about the tone - I just don't know how to get the point across otherwise. Feel free to divide the emotional content by ten. I don't actually feel exasperated or angry. Just curious about this new thing you've thought up, and my curiosity demands food :-)

Comment by cousin_it on Don't feel bad about not knowing basic things · 2021-05-24T16:41:46.005Z · LW · GW

Not knowing some trivia is fine. But bouncing off something when I try to figure it out, if my peers have no trouble with it, doesn't feel fine to me. It feels terrible. It makes me double my effort, then double it again. And this reaction feels right to me, I wouldn't want to get rid of it.

Comment by cousin_it on Finite Factored Sets · 2021-05-24T16:03:37.718Z · LW · GW

Can you give some more examples to motivate your method? Like the smoking/tar/cancer example for Pearl's causality, or Newcomb's problem and counterfactual mugging for UDT.

Comment by cousin_it on Finite Factored Sets · 2021-05-24T11:57:11.539Z · LW · GW

Well, imagine we have three boolean random variables. In "general position" there are no independence relations between them, so we can't say much. Constrain them so two of the variables are independent; that's a bit less "general", and we still can't say much. Constrain some more so the XOR of all three variables is always 1; that's even less "general", and now we can use your method to figure out that the third variable is downstream of the first two. Constrain some more so that some of the probabilities are 1/2, and the method stops working. What I'd like to understand is the intuition: which real-world cases have the particular "general position" where the method works?
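
Here's that progression as a check one can actually run (a sketch; the joint distributions are made up, and the before function is just the independence criterion from the post):

    from itertools import product

    def joint(p, q):
        # X ~ Bernoulli(p) and Y ~ Bernoulli(q) independent; Z = 1 XOR X XOR Y.
        dist = {}
        for x, y in product([0, 1], repeat=2):
            dist[(x, y, 1 ^ x ^ y)] = (p if x else 1 - p) * (q if y else 1 - q)
        return dist

    def independent(dist, f, g):
        # Exact independence check over a finite joint distribution.
        for a, b in product([0, 1], repeat=2):
            pab = sum(pr for w, pr in dist.items() if f(w) == a and g(w) == b)
            pa = sum(pr for w, pr in dist.items() if f(w) == a)
            pb = sum(pr for w, pr in dist.items() if g(w) == b)
            if abs(pab - pa * pb) > 1e-9:
                return False
        return True

    def before(dist, f, g):
        # Criterion from the post: f independent of (f XOR g) puts f before g.
        return independent(dist, f, lambda w: f(w) ^ g(w))

    X, Y, Z = (lambda w: w[0]), (lambda w: w[1]), (lambda w: w[2])
    d = joint(0.3, 0.6)  # general position: the method works
    print(before(d, X, Z), before(d, Z, X))  # True False: Z is downstream of X
    d = joint(0.5, 0.6)  # a probability hits 1/2...
    print(before(d, X, Z), before(d, Z, X))  # True True: the ordering degenerates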

Comment by cousin_it on Finite Factored Sets · 2021-05-24T01:30:36.954Z · LW · GW

Yeah, that's what I thought: the method works as long as certain "conspiracies" among probabilities don't happen. (1/2 is not the only problem case; it's easy to find others, but you're right that they have measure zero.)

But there's still something I don't understand. In the general position, if X is before Y, it's not always true that X is independent of X XOR Y. For example, if X = "person has a car on Monday" and Y = "person has a car on Tuesday", and it's more likely that a car-less person gets a car than the other way round, the independence doesn't hold. It requires a conspiracy too. What's the intuitive difference between "ok" and "not ok" conspiracies?

Comment by cousin_it on Finite Factored Sets · 2021-05-24T00:43:42.608Z · LW · GW

And if X is independent of X XOR Y, we’re actually going to be able to conclude that X is before Y!

It's interesting to translate that to the language of probabilities. For example, your condition holds for any X,Y (possibly dependent) such that P(X)=P(Y)=1/2, but it doesn't make sense to say that X is before Y in every such pair. For a real world example, take X = "person has above median height" and Y = "person has above median age".
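
A brute-force check of that claim (a sketch; the joint distribution below is arbitrary except for the fair marginals):

    from itertools import product

    def independent(dist, f, g):
        for a, b in product([0, 1], repeat=2):
            pab = sum(pr for w, pr in dist.items() if f(w) == a and g(w) == b)
            pa = sum(pr for w, pr in dist.items() if f(w) == a)
            pb = sum(pr for w, pr in dist.items() if g(w) == b)
            if abs(pab - pa * pb) > 1e-9:
                return False
        return True

    # P(X=1) = P(Y=1) = 1/2, but X and Y are strongly dependent - think
    # above-median height vs above-median age.
    dist = {(1, 1): 0.4, (1, 0): 0.1, (0, 1): 0.1, (0, 0): 0.4}

    X, Y = (lambda w: w[0]), (lambda w: w[1])
    print(independent(dist, X, Y))                      # False: correlated
    print(independent(dist, X, lambda w: X(w) ^ Y(w)))  # True: criterion fires anyway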

Comment by cousin_it on My Journey to the Dark Side · 2021-05-08T19:15:01.985Z · LW · GW

The people who got pushed off the deep end didn't end up contributing much to the puzzles, though. Feynman was right, we should've emphasized the playful feeling of research, not its epic importance.

Comment by cousin_it on My Journey to the Dark Side · 2021-05-08T10:08:04.507Z · LW · GW

Yeah. It makes sense in retrospect that Eliezer's writings full of weighty meaning would attract lots of people with a "meaning-shaped hole". I wish we'd kept to fun puzzles about decision theory, evolution etc.

Comment by cousin_it on My Journey to the Dark Side · 2021-05-06T21:04:24.617Z · LW · GW

I think all these worldview enhancements and drugs are no longer a solution to your original problem (not fitting in) and have evolved into a scary problem in their own right. You should probably start pushing back.

Comment by cousin_it on Helpless Individuals · 2021-04-10T21:26:45.829Z · LW · GW

Schoenberg was a controversial composer from the beginning, well before he finally decided to stop writing key signatures in his scores. Works such as Verklärte Nacht that are now considered audience-pleasers were initially received with a great deal of hostility.

Wow, that piece (the version for strings) is a lot of fun to listen to, like a soundtrack to an imaginary Disney movie. No catchy melody, but the chords evoke so many different moods. His later atonal stuff, like "Variations for Orchestra", is just robotic and unpleasant in comparison. So I agree with others in this discussion: atonality seems like a wrong turn.

Comment by cousin_it on Convict Conditioning Book Review · 2021-04-10T08:47:40.169Z · LW · GW

I like push-ups and pull-ups; they're my main thing now that the office gym is closed. But I kinda miss the deadlift - there seems to be no bodyweight exercise that approximates it.

Comment by cousin_it on Don't Sell Your Soul · 2021-04-06T20:43:20.804Z · LW · GW

I immediately thought of Ross Scott's story about buying souls.

Comment by cousin_it on I'm from a parallel Earth with much higher coordination: AMA · 2021-04-06T09:22:06.813Z · LW · GW

Sounds like a decent society, cribbing ideas from various religions and utopian movements. I guess the main problem is that young people, being naturally rebellious but inexperienced with superstimuli, would be quite vulnerable to the export image of the US - sex, drugs, rock-n-roll, and Coca-Cola. See the 80s USSR, or the story of the mall in Najran.

Comment by cousin_it on Logan Strohl on exercise norms · 2021-03-30T11:33:24.877Z · LW · GW

I think "nerds" need to be taught exercise differently: not by pushing them to compete with others right away, but by lots and lots of patience, repetition, and focus on basics. At some point it will click and they'll be able to compete and have fun. Same as with "non-nerds" and math, really.

Comment by cousin_it on Process Orientation · 2021-03-21T23:58:06.635Z · LW · GW

From my humble experience, I've come to believe that Deming's view was the wisest:

The idea of a merit rating is alluring. The sound of the words captivates the imagination: pay for what you get; get what you pay for; motivate people to do their best, for their own good. The effect is exactly the opposite of what the words promise. Everyone propels himself forward, or tries to, for his own good, on his own life preserver. The organization is the loser.

Comment by cousin_it on Jean Monnet: The Guerilla Bureaucrat · 2021-03-20T22:22:01.901Z · LW · GW

Very informative, thanks! The "tribalism" section does leave some unanswered questions, which only grow when you look at the actual (very complicated) org chart of the EU. Designing an international organization that doesn't fall prey to gridlock, doesn't fall apart, doesn't get pushed aside, doesn't gain too much power and so on seems like a fascinating problem. Do you have any thoughts about this?