Posts

What was the official story for many top physicists congregating in Los Alamos during the Manhattan Project? 2019-07-03T18:05:12.944Z

Comments

Comment by moses on Thoughts on ADHD · 2020-10-09T10:09:18.517Z · LW · GW

I recognize myself very much in the dandelion child description; makes me feel slightly better about not being gifted or a high achiever :)

Comment by moses on Rationality Vienna [Virtual] Meetup, May 2020 · 2020-05-16T11:41:12.363Z · LW · GW

https://meet.jit.si/viennarationality

The password is "schmachtenberger"

Sorry for the very late info; the organizer only just posted it (Viliam had asked a couple of times without a reply)

Comment by moses on What is Success in an Immoral Maze? · 2020-05-15T23:27:52.592Z · LW · GW

I'm not clear on one thing: managers participate in mazes, I presume, because the higher positions pay much better.

But why do corporations pay higher positions much better in the first place? Why do they incentivize the maze like that? Surely they would rather have their managers focus on doing their job than on clawing their way up.

Sure, if the higher positions were paid exactly as much as the lower ones, nobody would want to take them (more responsibility for the same money); but in a maze, on the other hand, they're paid so much better that managers will sacrifice their firstborns to get to them.

Wouldn't the corporation want to strike some kind of middle ground between these two extremes? Where managers mostly focus on their job and are indifferent between staying where they are and taking on more responsibility?

Comment by moses on Solar system colonisation might not be driven by economics · 2020-04-22T20:12:30.000Z · LW · GW

I don't think it has to be value on Earth; economic reasons to go to space can also mean creating value in space.

Comment by moses on Solar system colonisation might not be driven by economics · 2020-04-22T09:05:58.826Z · LW · GW

True, but what I'm arguing against here is the point of the post:

there may not be good economic reasons to go to space; therefore space colonisation would be driven by non-economic reasons

I'm arguing that there are good economic reasons to go to space. (There are also good economic reasons to build things that we're not building here on Earth, but that's tangential to the discussion.)

Comment by moses on Solar system colonisation might not be driven by economics · 2020-04-22T08:08:31.225Z · LW · GW

Bridges might not be the most valuable thing you can build with your resources right now, but that's different from just letting the resources go unused.

Comment by moses on Solar system colonisation might not be driven by economics · 2020-04-22T07:12:32.216Z · LW · GW

Why would we ever want to stop growing our economy and accommodating ever more people? We have always expanded and organized matter into valuable forms, why would we forever settle for the matter and energy available to us on Earth? We can create so much value with even just the matter inside the Moon or Mercury, let alone Jupiter or the Sun. Why would we pass up on it?

Comment by moses on Assessing Kurzweil's 1999 predictions for 2019 · 2020-04-11T08:16:52.061Z · LW · GW

lots of the remote learning stuff does suffer from predicting 2019 instead of 2020

I wouldn't call it a successful prediction anyway—he predicts this to be the normal state of affairs, whereas the current situation is a temporary reaction to extraordinary circumstances

Comment by moses on Assessing Kurzweil's 1999 predictions for 2019 · 2020-04-09T19:03:52.755Z · LW · GW

Wow, I haven't read the book so this was the first time encountering the predictions. They were… surprisingly bad, given it's just 20 years. Even adjusting for the fact that predictions tend to be ridiculous in general, this really exceeded all expectations

Comment by moses on What are some articles that updated your beliefs a lot on an important topic? · 2020-03-12T18:35:14.732Z · LW · GW

Iirc this article on climate change made me update notably in the direction of climate change being serious and something worth paying attention to.

Definitely not the first piece of content on climate change I consumed, but maybe the first that had a significantly over-the-top alarmist tone to shake me up?

Comment by moses on What are you reading? · 2019-12-28T16:51:40.332Z · LW · GW

I'm halfway through, so far it's good, I'm glad I picked it up.

First half is about his general vision of transforming politics/governance from current industrial-era party politics to post-industrial, the main point being about the relationship between government and citizen. Currently there is pervasive individualism: you get a welfare check, but nobody has the job of giving a shit about your mental health, development, emotional wellbeing, needs, etc. He proposes overturning the individualist ethos and having society get involved with the wellbeing of its members.

In the second part, he introduces four lines of developmental stages: cognitive (kinda like Piaget but more stages extending into adulthood), cultural (traditional, modern, post-modern, meta-modern etc. cultural codes), and two more, but I haven't gotten that far.

He foreshadows that the fact that adults exist on different developmental stages will be important for his vision of how exactly governance should work, which is in the next book, Nordic Ideology.

Comment by moses on What are you reading? · 2019-12-24T16:07:00.310Z · LW · GW

The Listening Society by Hanzi Freinacht

Comment by moses on RAISE post-mortem · 2019-11-25T18:08:05.588Z · LW · GW

The number could easily be infinity; I have no problem imagining that most people have zero positive impact for more than half the years of their careers (even the ones that end up having some positive impact overall)

Comment by moses on [anonymous]'s sketchpad · 2019-11-07T12:34:22.210Z · LW · GW

Sounds very close to what Peterson says

Comment by moses on Skill and leverage · 2019-11-05T15:35:26.546Z · LW · GW

The last thing you should do if you come across a hard-for-you unimportant action is stop looking for other things to do.

I think you can go even more general and say, "don't do unimportant things".

Comment by moses on Skill and leverage · 2019-11-05T15:30:32.032Z · LW · GW

Those are all people who don't really consider cleaning their room important (Alice, if she considered it important, could easily hire a cleaning service with her programmer salary).

I'm not talking about people who don't clean up because they're "pouring energy into something else" or because "putting away dishes is boring" or because they have a physical disability. I'm talking about the people from Katja's post:

‘how can make a big difference to the world, when I can’t make a big difference to that pile of dishes in my sock drawer?’

This sounds to me like someone who wants to load the dishwasher, but finds themselves unable to. Like someone who's frustrated with themselves; not someone who's happy with the state of affairs because they have better things to do.

In this case, I would expect this to be a pretty good predictor of not being able to do things that are more difficult (for an able-bodied person) than loading the dishwasher. And while there will not be much of a correlation between difficulty and importance, I would still say that virtually all non-trivial accomplishments in the world will be over the "loading the dishwasher" threshold of difficulty.

Does that make sense?

Comment by moses on Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons. · 2019-11-04T15:58:02.746Z · LW · GW

Sounds good! I think many people would forego the luxury (parking spaces are expensive in cities & for a lot of people the transportation itself is enough), but I can imagine some part of the market being like this

Comment by moses on Skill and leverage · 2019-11-04T15:44:36.475Z · LW · GW

Yes, if you can't do an unimportant thing X, I can't judge whether you'll be able to do important thing Y, because I don't know what the relationship in difficulty is between X and Y.

But if you can't do an easy thing, surely I can judge that you'll have even more of a problem doing a difficult thing.

And there aren't going to be many things in the world easier than cleaning your room or loading the dishwasher, right?

Comment by moses on What LessWrong/Rationality/EA chat-servers exist that newcomers can join? · 2019-10-25T13:55:35.029Z · LW · GW

Nor is the group, unfortunately

Comment by moses on Where to absolutely start? · 2019-10-21T09:47:19.625Z · LW · GW

I agree, the "big, vague things" are the bedrock of epistemology, the lens which helps you be more discerning and critical when you read anything else (e.g. those more "hands on" materials) and get more value out of it.

I think that the Sequences could be rewritten to half the length and still retain the vast majority of the value, but oh well, this is where we are at the moment with core rationality literature.

HPMOR is especially fun if you've read the original HP.

Comment by moses on Rent Needs to Decrease · 2019-10-12T09:14:15.855Z · LW · GW

I mean something like: in equilibrium, all consumer surplus is extracted by rents.

I'm saying "on aggregate" because it often won't hold in individual cases; landlords are not capable of doing perfect price discrimination on an individual basis, only at the level of something like neighborhoods, roughly speaking (people sort themselves into neighborhoods by income, so landlords can price-discriminate based on "how affluent a neighborhood you wanna live in"; people also want as short a commute as possible, so you can price-discriminate based on the distance to the nearest megalopolis).

This is made very complicated by the distinction between land and land improvements, i.e. the bare plot of land itself, on one hand, and the infrastructure and buildings built on top of it, on the other. When I talk about land and rents, I mean the former. The supply of land improvements is somewhat elastic (you can build more floors); the supply of land itself is absolutely inelastic.

I unfortunately don't wanna go into the mechanism by which consumer surplus is actually extracted by rents, because I already spent some time thinking and writing this comment and I originally wanted to do something else with my Saturday.

Viliam hinted at the mechanism: land is a positional good, so, to quote him, "as long as the life at some place is better than at other places, people will keep moving there."

Compare with other positional goods: e.g. all sports clubs' profits will eventually be extracted by players and their agents, unless a league institutes a wage cap, precisely because players are a positional good: you don't care how good your players are, you only care how good they are in comparison to other teams' players.

In the same vein, you don't care where you live, you care how far you are from the center of gravity of where other people live (roughly speaking).

Comment by moses on "Mild Hallucination" Test · 2019-10-11T14:01:32.033Z · LW · GW

All work for me. I've had psychedelics a few times.

Also, while watching out for the snow (very prominent once you know what you're looking for, kinda like tinnitus), I noticed how (if you keep your vision still) everything constantly glides/shifts/jerks around a little bit, like when you're really drunk.

Comment by moses on Rent Needs to Decrease · 2019-10-11T13:36:50.242Z · LW · GW

If you build more units, more people will move into the city. The equilibrium is one in which the population is on aggregate paying through the nose, one way or another. I don't think there's a good solution that doesn't require a change in the economic system.

Comment by moses on Taxing investment income is complicated · 2019-09-23T09:03:11.457Z · LW · GW

we want a broad tax base in general—doubling the size of a tax quadruples its social cost, so it’s better to have lots of small taxes rather than a few big taxes

But why not do taxes that don't have social costs? Taxes on land and land-like assets; taxes that internalize externalities (e.g. carbon taxes); taxes on zero-sum games (e.g. higher education); taxes where the dead-weight loss is intended (e.g. cigarettes).
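
For reference, the "doubling quadruples the cost" claim in the quote is the standard Harberger-triangle approximation; this is my gloss, not something from the linked post. With roughly linear supply and demand, a tax wedge $t$ shrinks the quantity traded by an amount proportional to $t$, and the lost surplus is the triangle between the two curves:

\[
\Delta Q \propto t, \qquad \mathrm{DWL} \approx \tfrac{1}{2}\, t \, \Delta Q \;\propto\; t^{2},
\]

so doubling $t$ roughly quadruples the deadweight loss. The taxes listed above are attractive precisely because that triangle is either absent (a perfectly inelastic base, like land) or the intended effect (cigarettes).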

Comment by moses on Examples of Examples · 2019-09-06T15:30:26.207Z · LW · GW

Reminds me of that scene from Family Guy: https://youtu.be/UjtiAkakogM?t=40

Comment by moses on Peter Thiel/Eric Weinstein Transcript on Growth, Violence, and Stories · 2019-08-31T17:02:17.433Z · LW · GW

Does anyone have any opinions about their view that technology overall has been stagnating since the 70s?

Comment by moses on A Game of Giants [Wait But Why] · 2019-08-30T06:40:27.835Z · LW · GW

From footnote 3:

I wrote this chapter of the series from an intuitive perspective before digging into what the evolutionary scientists say about it.

"I made shit up, then checked the facts later." This made me lol, because that's exactly my impression of Urban's writing.

Comment by moses on Searle’s Chinese Room and the Meaning of Meaning · 2019-08-06T17:43:17.947Z · LW · GW

Searle meant the mechanically performing technician as an analogy for the mechanical, deterministic processes in a computer. You cannot reject Searle by magically introducing computation outside of the symbol lookup table, just as, in a computer, no computation happens outside of the computer's circuits.

Now, the mistake that Searle made was much more trivial and embarrassing. From Wikipedia:

The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese?

There is no empirical difference underlying the conundrum. If Searle were made to explain what he means by "literally understand" and how it differs from "merely simulating", the problem would dissolve.

Comment by moses on [deleted post] 2019-07-29T14:26:36.975Z

  • Omega-3 from algae, 750 mg a day
  • Vitamin D, ~2000 IU a day
  • B12, 2.5 mg per week
  • Melatonin, 0.4 mg as needed for sleep
  • Creatine, 4–5 g a day
  • Planning to get ashwagandha
  • Coffee, but that's more of a drug than a supplement

Comment by moses on Nutrition is Satisficing · 2019-07-17T15:36:10.498Z · LW · GW

Eat plenty of vegetables.

Notice that this is one of the few things the contradictory nutrition theories all happen to agree about.

…Except all the low-carb, keto, and straight-up carnivore diets that are getting increasingly popular :)

Comment by moses on 87,000 Hours or: Thoughts on Home Ownership · 2019-07-06T09:32:33.539Z · LW · GW

Mind if I reeeee real quick?

First let's address the idea that renting is 'throwing money away'. This ignores the opportunity cost of investing extra income and the lump sum of the down payment into a house instead of the stock market.

I have a feeling this explanation is misleading.

Investing in the stock market and investing in the real estate market are two different things, with different risk profiles, etc. To be accurate, this should read: "This ignores the opportunity cost of investing extra income and the lump sum of the down payment into a house instead of the exact same house, but renting it out to someone and collecting the rent."

This points to a better explanation: You always "pay rent". You being alive and taking up space always costs something, because space just costs something. Either you live in someone else's space, in which case the cost for you of taking up that space is the rent you're paying them; or you live in your own space, in which case the cost for you is the rent that you could collect from someone else if you didn't take up the space.

When you effectively "pay rent to yourself" in this manner, economists call it "imputed rent". Treat it like you would treat any other rent in your calculation.
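
Here's a minimal sketch with made-up numbers (none of them from the post) of what treating imputed rent as a real cost buys you: owning and occupying a house nets out to exactly the same position as owning it, renting it out, and renting an identical house for yourself, ignoring taxes and transaction frictions.

```python
# Hypothetical numbers, purely to illustrate the imputed-rent framing.
market_rent = 18_000   # annual rent an identical house would fetch
upkeep = 5_000         # annual maintenance, insurance, property tax

# Option A: buy the house and live in it (no rent changes hands).
cash_flow_a = -upkeep
housing_consumed_a = market_rent          # the imputed rent you "pay yourself"

# Option B: buy the house, rent it out, and rent an identical house yourself.
cash_flow_b = (market_rent - upkeep) - market_rent
housing_consumed_b = market_rent

# Once the rent you pay yourself is counted, the two positions are identical.
assert cash_flow_a == cash_flow_b == -upkeep
```

Either way, occupying the space costs you the market rent you could otherwise collect; the only remaining question is whether the house itself is an asset worth holding, which is the portfolio question below.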

When thinking about whether to invest your money into a house or something else, you typically want to decouple that decision from your living situation and see if it still makes sense. Regardless of where I live—given my net worth, does it make sense for $XXX,000 of my investment portfolio to be tied up in this specific piece of real estate?

Then the standard investing advice applies:

  • you can only systematically make money in a market if you know more than other people who participate in that market know (e.g. maybe you have a friend in the zoning committee);
  • don't put most (or God forbid, all) of your wealth into a very specific asset (like real estate in a specific city);
  • etc.

you are making a long term bet which, if it doesn't pan out, will simultaneously leave you without that high paying job, either forcing you into a long commute or selling the house and moving to a different city.

See, you're conflating land ownership and tenancy here. You can absolutely move wherever you want while owning and renting out a house somewhere else.

These scenarios are still highly reliant on you being sure you want to stay in a particular spot for at least 10 years

Same issue here.

There is of course a reason to own a place: so that you can refurbish it to your liking—or you might even want to build a house to your liking, if, like me, you think there just aren't any existing buildings designed in a sane way. You just can't rent a place from someone else and start tearing down walls and installing gadgets—and even if you were allowed to, you don't wanna invest in improving someone else's place.

(ETA: Just to clarify, even if you're buying/building a place for the reasons mentioned in the previous paragraph, you should still be aware of the tradeoffs; the price you pay for the ability to customize your living space is that most of the value can disappear with a particularly shitty election outcome, war, immigration, the collapse of the financial system in that country—anything you can't get insurance for.)

Comment by moses on What was the official story for many top physicists congregating in Los Alamos during the Manhattan Project? · 2019-07-04T12:41:19.109Z · LW · GW

I read Feynman, but I don't think he said anything about how the US government explained the withdrawal of top physicists from public space.

Maybe it was the case that "the US military is using top physicists to do something" was not a secret; it was only a secret what exactly they were working on.

In that case this would not be repeatable either, because "the US military is using top AI researchers to do something" is not quite the same level of vague :)

Comment by moses on What's up with self-esteem? · 2019-06-25T07:04:06.319Z · LW · GW

Yeah, you got it right. You wanna take as much as possible from others without getting slaughtered, so you keep track of your status. Not much to it.

You get a whole lot of pathological anxiety and suicide these days because the environment has shifted somewhat, Instagram and billionaires and precarious labor and whatnot. I would like to see the numbers for suicide pre-industrial revolution; I wouldn't expect a lot of them.

Comment by moses on Causal Reality vs Social Reality · 2019-06-25T06:39:32.066Z · LW · GW

Would you expect an evolved species to care about death in the abstract? By what mechanism?

Also,

If you primarily inhabit causal reality (like most people on LessWrong)

You're in a group of people where non-conformity happens to be a social good, no need to posit specialness here. We're all running on the same architecture, some people just didn't have their personality traits and happenstance combine in a way that landed them on LW.

Comment by moses on Matt Goldenberg's Short Form Feed · 2019-06-22T14:45:46.619Z · LW · GW

Yeah, I've read that one, and I guess that would let someone who's had the same experience understand what you mean, but not someone who hasn't had the experience.

I feel similarly to when I read Valentine's post on kensho—there is clearly something valuable, but I don't have the slightest idea of what it is. (At least unlike with kensho, in this example it is possible to eventually have an objective account to point to, e.g. video.)

Comment by moses on Matt Goldenberg's Short Form Feed · 2019-06-22T04:15:00.241Z · LW · GW

I'm so curious about this. I presume there isn't, like, a video example of "vibing"? I'd love to see that

Comment by moses on STRUCTURE: A Hazardous Guide to Words · 2019-06-20T20:36:48.187Z · LW · GW

Hah, I was thinking along the same lines as you—I have two pieces of advice that I give to everyone, that would solve most of their problems, and that nobody ever takes: meditate daily, and read the goddamn Sequences. So naturally, I'm gonna write up my own, heavily condensed version of the Sequences, and then my friends will have to read them because it's rude not to read something if your friend wrote it (right?)

My version will sit in my brain for at least a few more years, potentially forever, but I did take a few notes on structure at least. I thought I'd dump them here because they might be of interest to you (?) They might not be very legible, but they're mostly for my own consumption; I'm dumping them as-is just to give an idea of what I found important in the Sequences (and elsewhere).


topics

  • introduction: something to protect
  • theoretical epistemology
    • probability, evidence, betting
      • bayesian inference
        • the importance of priors
      • privileging the hypothesis
    • causality, causal relationship between reality and beliefs
      • ontology, what we mean by "real"
        • "supernatural"
    • how to use language, dissolving questions
      • predictions, expectations
  • human brains/neural nets
    • evolution, how to theorize about evolution (tooby & cosmides)
    • evopsych, the political mind
      • escaping the paradigm is impossible, but there's some slack
    • moral psychology
    • brain, neural nets
      • all the GPT-2 shit
    • predictive processing and world models, memory
    • tinted lenses
    • wanting/liking/goals/values/motivation
    • the Self, identity
    • the press secretary, introspection
    • fake explanations (elan vital)
    • fake beliefs
      • the dragon in the garage
  • mental movements/habits
    • noticing
    • noticing confusion
      • noticing "makes sense", armchair logic (e.g. 80k's reasoning about how to have impact)
    • noticing resistance to ideas (e.g. [redacted personal example])
    • noticing already knowing in advance the conclusion (in a debate, as soon as your opponent opens their mouth, you know that there's going to be something wrong with their argument)
    • noticing the pull to lash out in an argument with a loved one
    • noticing when you do motte and bailey
    • feeling certainty (e.g. private property = good, taxes and coercion = bad)
    • being stuck in a loop (e.g. trying to get out of bed in a semi-dreamy state)
    • ? suffering
    • practical epistemology (perception tinted by e.g. depression, fear, but mainly when you tie your Self to an ideology)
      • everything is a lens, there is no "lens-less" view
      • caching, normalization, prediction overrides perception ?
    • tracking debate propositions ?
    • scout mindset
  • practical topics
    • politics vs policy
      • ethics, consequentialism vs. other stances
      • goals and values vs. methods, goal factoring
      • beware grand unified theories
      • thought experiments, double crux, counterfactual thinking
      • charity, compassion, empathy, we're all in this together, kumbaya
        • "how would I end up with these beliefs/behavior?"
        • addressing the cases when extending charity is already an act of desertion
      • cooperative discussion
    • adversarial optimization: Russian bots, scammers, marketing, manufactured addiction, manufactured outrage
    • social dynamics; purity spirals; meta-norms
    • commitment, identity, choosing which status game you play, leaving yourself/others a line of retreat
      • personality, character, mask, the web
    • philosophy case studies
      • chinese room, free will, teleportation/personal identity
    • footguns: "I am a rationalist, I couldn't possibly be making trivial cognitive mistakes", "all conspiracy theories are bullshit, I am smart", "I am better than others because I am more rational and smarter", "I know NVC, therefore I cannot get into dumb fights", church of Rationality; isolated demands for rigor, the fallacy fallacy
      • remember there are underlying determinants of e.g. open-mindedness etc.
  • [interesting biases]
    • status quo
    • just/non-horrible universe
    • mind projection fallacy (e.g. "the meaning of life")
  • [practical considerations]
    • ? read this book with other people, discuss

Comment by moses on Is the "business cycle" an actual economic principle? · 2019-06-18T18:21:52.412Z · LW · GW

But to the degree that the business cycle is a separate understandable phenomenon, can't investors use that understanding to place bets which make them money while dampening the effect?

The theory can tell you that we're headed for a downturn, but when exactly that will happen is unpredictable, because the system is chaotic and the bubble burst can be triggered by any Schelling point that looks like "uh-oh, time to sell". E.g. Lehman Brothers. E.g. 9/11.

If you don't know whether the recession is coming in a year or three, that's too much ambiguity to make any money off of it.

Comment by moses on Is the "business cycle" an actual economic principle? · 2019-06-18T18:11:49.787Z · LW · GW

As johnswentworth notes, recessions are way worse than what you'd get from a random walk. There is something to be explained.

To his list of theories, I would add Austrian business cycle theory (ABCT), which introduces the notion of the temporal structure of capital and explains e.g. why fluctuations in employment are larger in industries further up the capital pipeline/further away from end consumers (mining, refining) than in industries closer to the consumers (retail, hospitality).

According to ABCT, when the central bank lowers interest rates below the natural rate which clears supply (household savings) against demand (businesses seeking loans for investments), this causes overinvestment and undersaving. More specifically, the lower interest rate guides businesses to invest in capital further up the capital pipeline, with longer time to "maturation" (to being useful to the end consumer), e.g. oil tankers. The gap between savings and investment (which no longer matches) is covered by new money being put in circulation by banks.

As the new money keeps pouring in, inflation picks up and the central bank reacts by raising interest rates again. This makes the long-term investment projects (profitable only at the artificially low rate) unprofitable, which in turn means all the capital already put into them is misallocated and worth much less than before (if you have an unprofitable, half-built oil tanker, it's not easy to convert it into some more useful form of capital, like cellphones).

Thus, real wealth has actually been lost throughout the economy (as opposed to Keynesian theory, where no value is actually lost and everyone is just caught in crowd psychosis, like a murmuration of starlings following each other down the market).

The central bank reacts by monetary easing, dropping the interest rate to zero, thus preventing the healthy clearance of the built-up misallocation, and a new bubble can build on top of the previous one.

(In reality, I think, each recession will have slightly different causes and many theories will be partly right about particular recessions. There is always some element of the Keynesian "I will sell because everybody else is selling" etc.)

Comment by moses on Agents dissolved in coffee · 2019-06-04T12:12:05.855Z · LW · GW

Nice! I find it much more pleasant to read :)

Comment by moses on Agents dissolved in coffee · 2019-06-04T08:55:19.947Z · LW · GW

Please read this in the most loving-kindness way possible: every time I see a LW post starting with a paragraph of hedging and self-deprecation (which is about half of them), I feel like taking the author by their shoulders and shaking them violently until they gain some self-confidence.

Let the reader judge for themselves whether the post is misguided, badly structured, or repetitive. I guarantee nothing bad will happen to you if they come to this conclusion themselves. The worst thing that could happen is that Someone On The Internet will think bad things about you, but let me assure you, this will not be any worse if you leave out the hedging.

Note: people will be more likely to attack your idea (i.e. provide you with valuable feedback, i.e. this is a good thing) if you seem to stand behind the idea (i.e. if you leave out the hedging).

Before someone says something about conveying confidence levels:

  • Saying your post is "rambling" and "repetitive" has nothing to do with confidence levels.
  • Insight porn[1] doesn't need confidence levels.

On the other hand, I have to commend you for not starting your paragraph of hedging with the phrase "Epistemic status".


  1. By "insight porn" I mean a genre of writing, I don't mean this as a derogatory term. ↩︎

Comment by moses on [deleted post] 2019-05-30T07:41:52.048Z

It's about not wasting people's time with half-baked ideas.

If you waste someone's time, that's not your responsibility; they decided to come over to your blog and lay their eyes on your …research notes, let's say.

Write immediately and write continuously, as you learn. This will (1) give you practice, so that your writing is much better by the time you have something really good that would deserve good writing, and (2) you get feedback from your friends (if you gently beat it out of them), which will make your progress on your ideas faster.

If I explain my ideas now, I'm going to be embarrassed by it next year.

What if you just honestly report on what's going on in your head? Not, "folks, I know the Truth, let me lay it out for you," but "these are the results of my research so far, and these are the half-baked intuitions that I get out of that, at the moment, and this is where I'm planning to go next to sharpen or falsify those intuitions." Sounds reasonable and non-embarrassing.

Also, the more often you bump your thoughts against the harsh judgement of other people, the faster they sharpen up and the less embarrassing they will be in the long run. Something like that.

Comment by moses on What is your personal experience with "having a meaningful life"? · 2019-05-23T15:20:41.548Z · LW · GW

It's difficult to understand what people mean when they say "meaning", because they're always so mysterious and vague around the term.

It seems to me that most of the time, when people talk about "meaning", they mean the dopamine hit you get when you move towards something that will elevate your social status, e.g. helping others, or sacrificing yourself for the well-being of the tribe/superorganism, or whatever else can be used as a virtue signal. (So, for example, making money for the sake of making money doesn't feel as meaningful as making money under the veil of "fulfilling a mission" and "having impact" and "making a dent in the universe", exactly in proportion to how much less status/prestige you'd be awarded by your tribe for the former. But, I mean, depends on your tribe; there are tribes where cynical money generation is cool (e.g. crypto traders), you're awarded status for it, and you'll find a corresponding sense of meaning in it.)

This is to distinguish meaning from the dopamine kick you get from moving towards other (notably short-term) goals, like food or sex. I don't think people call that one "meaning".

NB: The "meaning" circuit can be apparently hacked by superstimuli, like every other motivation-related part of the brain, hence videogames feeling "meaningful".

There are other phenomena that people sometimes call "meaning", but I think this is the most common one. (E.g. I usually use the term to mean literally meaning, in the semiotics sense, i.e. how dense the symbol web in your life is, i.e. minimum meaning = suññatā, maximum meaning = schizophrenia.)

Comment by moses on When is rationality useful? · 2019-04-27T15:01:49.838Z · LW · GW

Hm. Yes, rationality gave us such timeless techniques as "think about the problem for at least 5 minutes by the clock", but I'm saying that nothing in the LW canon helps you make sure that what you come up with in those 5 minutes will be useful.

Not to mention, this sounds to me like "trying to solve the problem" rather than "solving the problem" (more precisely, "acting out the role of someone making a dutiful attempt to solve the problem", I'm sure there's a Sequence post about this). I feel like people who want to do X (in the sense of the word "want" where it's an actual desire, no Elephant-in-the-brain bullshit) do X, so they don't have time to set timers to think about how to do X.

What I'm saying here about rationality is that it doesn't help you figure out, on your own, unprompted, whether what you're doing is acting out a role to yourself rather than taking action. (Meditation helps, just in case anyone thought I would ever shut up about meditation.)

But rationality does help you to swallow your pride and listen when someone else points it out to you, prompts you to think about it, which is why I think rationality is very useful.

I don't think you can devise a system for yourself which prompts you in this way, because the prompt must come from someone who sees the additional dimension of the solution space. They must point you to the additional dimension. That might be hard. Like explaining 3D to a 2-dimensional being.

On the other hand, pointing out when you're shooting yourself in the foot (e.g. eating unhealthy, not working out, spending money on bullshit) is easy for other people and rationality gives you the tools to listen and consider. Hence, rationality protects you against shooting yourself in the foot, because the information about health etc. is out there in abundance, most people just don't use their ears.

I might be just repeating myself over and over again, I don't know, anyway, these are the things that splosh around in my head.

Comment by moses on Asymmetric Justice · 2019-04-26T20:31:35.753Z · LW · GW

In what we will call the Good Place system (…) If you take actions with good consequences, you only get those points if your motive was to do good. (…) You lose points for bad actions whether or not you intended to be bad.

See also: the Knobe effect. People also seem to asymmetrically judge whether your action was intentional in the first place.

In a study published in 2003, Knobe presented passers-by in a Manhattan park with the following scenario. The CEO of a company is sitting in his office when his Vice President of R&D comes in and says, ‘We are thinking of starting a new programme. It will help us increase profits, but it will also harm the environment.’ The CEO responds that he doesn’t care about harming the environment and just wants to make as much profit as possible. The programme is carried out, profits are made and the environment is harmed.

Did the CEO intentionally harm the environment? The vast majority of people Knobe quizzed – 82 per cent – said he did. But what if the scenario is changed such that the word ‘harm’ is replaced with ‘help’? In this case the CEO doesn’t care about helping the environment, and still just wants to make a profit – and his actions result in both outcomes. Now faced with the question ‘Did the CEO intentionally help the environment?’, just 23 per cent of Knobe’s participants said ‘yes’ (Knobe, 2003a).

Comment by moses on When is rationality useful? · 2019-04-25T10:22:43.588Z · LW · GW

In other words: Rationality (if used well) protects you against shooting your foot off, and almost everyone does shoot their foot off, so if you ask me, all the Rationalists who walk around with both their feet are winning hard at life, but having both feet doesn't automatically make you Jeff Bezos.

Comment by moses on When is rationality useful? · 2019-04-25T10:14:40.647Z · LW · GW

I think my views are somewhat similar. Let me crosspost a comment I made in a private conversation a while ago:

I think the main reason why people are asking "Why aren't Rationalists winning?" is because Rationality was simply being oversold.

Yeah, seems like it. I was thinking: why would you expect rationality to make you exceptionally high status and high income?[1] And I think rationality was sold as general-purpose optimal decision-making, so once you have that, you can reach any goals which are theoretically reachable from your starting point by some hypothetical optimal decision-maker—and if not, that's only because the Art is not fully mature yet.

Now, in reality, rationality was something like:

  • a collection of mental movements centered around answering difficult/philosophical questions—with the soft implication that you should ingrain them, but not a clear guide on how (aside from CFAR workshops);
  • a mindset of transhumanism and literally-saving-the-world, doing-the-impossible ambition, delivered via powerfully motivational writing;
  • a community of (1) nerds who (2) pathologically overthink absolutely everything.

I definitely would expect rationalists to do better at some things than the reference class of {nerds who pathologically overthink everything}:

I would expect them not to get tripped up if explicitly prompted to consider confusing philosophical topics like meaning or free will, because the mental movement of {difficult philosophical question → activate Rationality™} is pretty easy and straightforward.

Same thing if they encounter e.g. different political opinions or worldviews: I'd expect them to be much better at reconsidering their dogmas if, again, externally prompted. I'd even expect them to do better at evaluating strategies.

But I don't think there's a good reason to expect rationalists to do better unprompted—to have more unprompted imagination, creativity, to generate strategies—or to notice things better: their blind spots, additional dimensions in the solution space.

Rationality also won't help you with inherent traits like conscientiousness, recklessness, tendency for leadership, the biological component of charisma (beyond what reading self-help literature might do for you).

I also wouldn't expect rationalists to be able to dig their way through arbitrarily many layers of Resistance on their own. They might notice that they want to do a thing T and are not doing it, but then instead of doing it, they might start brainstorming ways how to make themselves do T. And then they might notice that they're overthinking things, but instead of doing T, they start thinking about how to stop overthinking and instead start doing. And then they might notice that and pat themselves on the back and everything and think, "hey, that would make a great post on LW", and so they write a post on LW about overthinking things instead of fucking doing the fucking thing already.

Rationality is great for critical thinking, for evaluating whatever inputs you get; so that helps you to productively consider good external ideas, not get tripped by bad ideas, and not waste your time being confused. In the ideal case. (It might even make you receptive to personal feedback in the extreme case. Depending on your personality traits, I guess.)

On the other hand, rationality doesn't help you with exactly those things that might lead to status and wealth: generating new ideas, changing your biological proclivities, noticing massive gaps in your epistemology, or overturning that heavily selected-for tendency to overthink and just stumbling ass-first out into the world and doing things.


  1. "High status and high income" is a definition of "winning" that you get if you read all the LW posts about "why aren't Rationalists winning?", look at what the author defines as "winning", then do an intersection of those. ↩︎

Comment by moses on 1960: The Year The Singularity Was Cancelled · 2019-04-23T15:29:28.173Z · LW · GW

Yes, my confusion was indeed about the underlying model of innovation. Intuitively it seems to me that progress on a particular research problem would be a function of how smart {the smartest person working on the problem} is, but then I guess if you have more smart people, you can attack more research problems at once, so I guess the model does make sense 🤔
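
Concretely, the shape of the model as I understand it (a rough sketch, not something taken from the paper): if ideas are non-rival, each additional person raises everyone's productivity, so the per-capita growth rate itself increases with population, roughly

\[
\frac{dN}{dt} = a N^{2}
\quad\Longrightarrow\quad
N(t) = \frac{1}{a\,(t_s - t)},
\]

which diverges at a finite time $t_s$; that finite-time blow-up is the "singularity" in the post's title. Drop the assumption that more people means proportionally more usable ideas, and the hyperbolic growth disappears.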

Comment by moses on 1960: The Year The Singularity Was Cancelled · 2019-04-23T13:15:37.084Z · LW · GW

I skimmed the paper but I still can't understand how von Foerster comes up with the notion that more people = faster technological growth. (Kurzgesagt use the same assumption in their video on "egoistic altruism", but they don't explain where they got it from either.) Does someone know how that works?

Comment by moses on [deleted post] 2019-04-19T22:00:30.214Z

Hm, I see the widget in Chrome though