AllAmericanBreakfast's Shortform

post by AllAmericanBreakfast · 2020-07-11T19:08:01.705Z · LW · GW · 129 comments

comment by AllAmericanBreakfast · 2020-09-28T00:33:47.003Z · LW(p) · GW(p)

SlateStarCodex, EA, and LW helped me get out of the psychological, spiritual, political nonsense in which I was mired for a decade or more.

I started out feeling a lot smarter. I think it was community validation + the promise of mystical knowledge.

Now I've started to feel dumber. Probably because the lessons have sunk in enough that I catch my own bad ideas and notice just how many of them there are. Worst of all, it's given me ambition to do original research. That's a demanding task, one where you have to accept feeling stupid all the time.

But I still look down that old road and I'm glad I'm not walking down it anymore.

comment by Viliam · 2020-09-28T19:47:43.506Z · LW(p) · GW(p)

I started out feeling a lot smarter. I think it was community validation + the promise of mystical knowledge.

Too smart for your own good. You were supposed to believe it was about rationality. Now we have to ban you and erase your comment before other people can see it. :D

Now I've started to feel dumber. Probably because the lessons have sunk in enough that I catch my own bad ideas and notice just how many of them there are. [...] you have to accept feeling stupid all the time. But I still look down that old road and I'm glad I'm not walking down it anymore.

Yeah, same here.

comment by AllAmericanBreakfast · 2020-07-15T03:30:21.988Z · LW(p) · GW(p)

Things I come to LessWrong for:

  • An outlet and audience for my own writing
  • Acquiring tools of good judgment and efficient learning
  • Practice at charitable, informal intellectual argument
  • Distraction
  • A somewhat less mind-killed politics

Cons: I'm frustrated that I so often play Devil's advocate, or else make up justifications for arguments under the principle of charity. Conversations feel profit-oriented and conflict-avoidant. Overthinking to the point of boredom and exhaustion. My default state toward books and people is bored skepticism and political suspicion. I'm less playful than I used to be.

Pros: My own ability to navigate life has grown. My imagination feels almost telepathic, in that I have ideas nobody I know has ever considered, and discover that there is cutting edge engineering work going on in that field that I can be a part of, or real demand for the project I'm developing. I am more decisive and confident than I used to be. Others see me as a leader.

comment by Viliam · 2020-07-15T19:01:20.185Z · LW(p) · GW(p)

Some people optimize for drama. It is better to put your life in order, which often means getting the boring things done. And then, when you need some drama, you can watch a good movie.

Well, it is not completely a dichotomy. There is also some fun to be found e.g. in serious books. Not the same intensity as when you optimize for drama, but still. It's like when you stop eating refined sugar, and suddenly you notice that the fruit tastes sweet.

comment by AllAmericanBreakfast · 2020-08-09T16:23:49.240Z · LW(p) · GW(p)

Math is training for the mind, but not like you think

Just a hypothesis:

People have long thought that math is training for clear thinking. Just one version of this meme that I scooped out of the water:

“Mathematics is food for the brain,” says math professor Dr. Arthur Benjamin. “It helps you think precisely, decisively, and creatively and helps you look at the world from multiple perspectives . . . . [It’s] a new way to experience beauty—in the form of a surprising pattern or an elegant logical argument.”

But math doesn't obviously seem to be the only way to practice precision, decisiveness, creativity, beauty, or broad perspective-taking. What about logic, programming, rhetoric, poetry, anthropology? This sounds like marketing.

Coming from a humanities background and now studying calculus, I'd argue it differently.

Mathematics shares with a small fraction of other related disciplines and games the quality of unambiguous objectivity. It also has the ~unique quality that you cannot bullshit your way through it. Miss any link in the chain and the whole thing falls apart.

It can therefore serve as a more reliable signal, to self and others, of one's own learning capacity.

Experiencing a subject like that can be training for the mind, because becoming successful at it requires cultivating good habits of study and expectations for coherence.

comment by niplav · 2020-08-09T21:06:36.993Z · LW(p) · GW(p)

Math is interesting in this regard because it is very precise, yet there's no clear-cut way of checking your solution except running it by another person (or becoming good enough at math to tell whether your own proof is bullshit).

Programming, OTOH, gives you clear feedback loops.

comment by AllAmericanBreakfast · 2020-08-10T00:17:43.442Z · LW(p) · GW(p)

In programming, that's true at first. But as projects increase in scope, there's a risk of using an architecture that works when you’re testing, or for your initial feature set, but will become problematic in the long run.

For example, I just read an interesting article on how a project used a document store database (MongoDB), which worked great until their client wanted the software to start building relationships between data that had formerly been “leaves on the tree.” They ultimately had to convert to a traditional relational database.
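
To make the "leaves on the tree" problem concrete, here is a rough, hypothetical sketch in plain Python data (the collection names and fields are invented, not the article's actual schema):

```python
# In a document store, related records tend to live as embedded "leaves"
# inside a parent document:
order_doc = {
    "_id": "order-17",
    "customer": {"name": "Ada", "email": "ada@example.com"},
    "items": [
        {"sku": "A-1", "qty": 2},  # only reachable through this order
        {"sku": "B-9", "qty": 1},
    ],
}

# Once you need relationships *between* leaves across documents
# (e.g. "which customers ever ordered SKU A-1?"), you end up re-modelling
# the same data relationally, with explicit keys to join on:
customers = [{"id": 1, "name": "Ada", "email": "ada@example.com"}]
orders = [{"id": 17, "customer_id": 1}]
order_items = [{"order_id": 17, "sku": "A-1", "qty": 2},
               {"order_id": 17, "sku": "B-9", "qty": 1}]

# A toy "join" over the relational shape:
buyers = {c["name"] for c in customers
          for o in orders if o["customer_id"] == c["id"]
          for it in order_items
          if it["order_id"] == o["id"] and it["sku"] == "A-1"}
print(buyers)  # {'Ada'}
```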

Of course there are parallels in math, as when you try a technique for integrating or parameterizing that seems reasonable but won’t actually work.

comment by G Gordon Worley III (gworley) · 2020-08-10T04:14:03.213Z · LW(p) · GW(p)

Yep. Having worked both as a mathematician and a programmer, I've seen the idea of objectivity and clear feedback loops start to disappear as the complexity amps up and you move away from the learning environment. It's not unusual to discover incorrect proofs out on the fringes of mathematical research that have not yet become part of the canon, nor is it uncommon (in fact, it's very common) to find running production systems where the code works by accident due to some strange unexpected confluence of events.

comment by Viliam · 2020-08-16T18:21:52.701Z · LW(p) · GW(p)
Programming, OTOH, gives you clear feedback loops.

Feedback, yes. Clarity... well, sometimes it's "yes, it works" today, and "actually, it doesn't if the parameter is zero and you called the procedure on the last day of the month" when you put it in production.

comment by MikkW (mikkel-wilson) · 2020-08-10T22:07:04.006Z · LW(p) · GW(p)

Proof verification is meant to minimize this gap between proving and programming.
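
For what it's worth, here is a trivial Lean sketch (my own example, not from the comment above) of what a proof checker buys you: the kernel either accepts the proof term or rejects the file, much as a compiler accepts or rejects a program.

```lean
-- The checker either accepts this proof term or rejects the file;
-- there is no "compelling but wrong" middle ground.
theorem two_plus_two : 2 + 2 = 4 := rfl

-- A false claim simply fails to check:
-- theorem two_plus_five : 2 + 2 = 5 := rfl   -- error: type mismatch
```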

comment by Viliam · 2020-08-16T18:43:32.418Z · LW(p) · GW(p)

The thing I like about math is that it gives the feeling that the answers are in the territory. (Kinda ironic, when you think about what the "territory" of math is.) Like, either you are right or you are wrong, it doesn't matter how many people disagree with you and what status they have. But it also doesn't reward the wrong kind of contrarianism.

Math allows you to make abstractions without losing precision. "A sum of two integers is always an integer." Always; literally. With abstractions like this, you can build long chains, and the whole chain still works. You don't create bullshit accidentally by constructing a theory from approximations that are individually mostly harmless, but don't resemble anything in the real world when chained together.

Whether these are good things, I suppose different people would have different opinions, but it definitely appeals to my aspie aesthetics. More seriously, I think that even though in the real world most abstractions are just approximations, having experience with precise abstractions might make you notice the imperfection of the imprecise ones, so when you formulate a general rule, you also make a note "except for cases such as this or that".

(On the other hand, for the people who only become familiar with math as a literary genre [LW · GW], it might have an opposite effect: they may learn that pronouncing abstractions with absolute certainty is considered high-status.)

comment by elityre · 2020-08-14T15:20:39.055Z · LW(p) · GW(p)
Mathematics shares with a small fraction of other related disciplines and games the quality of unambiguous objectivity. It also has the ~unique quality that you cannot bullshit your way through it. Miss any link in the chain and the whole thing falls apart.

Isn't programming even more like this?

I could get squidgy about whether a proof is "compelling", but when I write a program, it either runs and does what I expect, or it doesn't, with 0 wiggle room.

comment by AllAmericanBreakfast · 2020-08-14T18:31:07.779Z · LW(p) · GW(p)

Sometimes programming is like that, but then I get all anxious that I just haven’t checked everything thoroughly!

My guess is this has more to do with whether you're doing something basic or advanced, in any discipline. It's just that you run into ambiguity a lot sooner in the humanities.

comment by ChristianKl · 2020-08-12T09:49:30.386Z · LW(p) · GW(p)

It helps you to look at the world from multiple perspectives: it gets you into a position to make a claim like that solely based on anecdotal evidence and wishful thinking.

comment by AllAmericanBreakfast · 2020-11-25T21:58:53.555Z · LW(p) · GW(p)

The Rationalist Move Club

Imagine that the Bay Area rationalist community does all want to move, but no individual is sure enough that others want to move to invest energy in making plans. So nobody acts like they want to move, and the move never happens.

Individuals are often willing to take some level of risk and make some sacrifice up-front for a collective goal with big payoffs. But not too much, and not forever. It's hard to gauge true levels of interest based on attendance at a few planning meetings.

Maybe one way to solve this is to ask for escalating credible commitments.

A trusted individual sets up a Rationalist Move Fund. Everybody who's open to the idea of moving puts $500 in a short-term escrow. This makes them part of the Rationalist Move Club.

If the Move Club grows to a certain number of members within a defined period of time (say 20 members by March 2021), then they're invited to planning meetings for a defined period of time, perhaps one year. This is the first checkpoint. If the Move Club has not grown to that size by then, the money is returned and the project is cancelled.

By the end of the pre-defined planning period, there could be one of three majority consensus states, determined by vote (approval vote, obviously!); a rough code sketch of the tally follows the list:

  1. Most people feel there is a solid timetable and location for a move, and want to go forward with that plan as long as half or more of the Move Club members also approve of this option. To cast a vote approving of this choice requires an additional $2,000 deposit per person into the Move Fund, which is returned along with their initial $500 deposit after they've signed a lease or bought a residence in the new city, or in 3 years, whichever is sooner.
  2. Most people want to continue planning for a move, but aren't ready to commit to a plan yet. To cast a vote approving of this choice requires an additional $500 deposit per person into the Move Fund, unless they paid $2,000 to approve of option 1.
  3. Most people want to abandon the move project. Anybody approving only of this option has their money returned to them and exits the Move Club, even if (1) or (2) is the majority vote. If this option is the majority vote, all money in escrow is returned to the Move Club members and the move project is cancelled.
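
Here is a rough sketch of how the final tally could work, under one possible reading of the rules above (Python; the member names and votes are invented, and the deposit amounts are the ones proposed):

```python
# Hypothetical sketch only: one possible reading of the voting rules above.
DEPOSITS = {"join": 500, "commit_to_plan": 2000, "keep_planning": 500}  # escrow schedule

def tally(votes):
    """votes maps each member to the set of options they approve of
    ('commit_to_plan', 'keep_planning', 'abandon'); approval voting."""
    n = len(votes)
    counts = {opt: sum(opt in v for v in votes.values())
              for opt in ("commit_to_plan", "keep_planning", "abandon")}
    # Members approving only of 'abandon' are refunded and exit regardless.
    # Option 1 goes ahead if at least half the club approves it.
    if 2 * counts["commit_to_plan"] >= n:
        return "commit_to_plan"
    if 2 * counts["keep_planning"] >= n:
        return "keep_planning"
    return "abandon"  # all escrow refunded, project cancelled

print(tally({"alice": {"commit_to_plan"},
             "bob": {"commit_to_plan", "keep_planning"},
             "carol": {"abandon"}}))  # -> commit_to_plan
```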

Obviously the timetables and monetary commitments could be modified. Other "commitment checkpoints" could be added in as well. I don't live in the Bay Area, but if those of you who do feel this framework could be helpful, please feel free to steal it.

comment by AllAmericanBreakfast · 2020-08-08T18:52:21.313Z · LW(p) · GW(p)

What gives LessWrong staying power?

On the surface, it looks like this community should dissolve. Why are we attracting bread bakers, programmers, stock market investors, epidemiologists, historians, activists, and parents?

Each of these interests has a community associated with it, so why are people choosing to write about their interests in this forum? And why do we read other people's posts on this forum when we don't have a prior interest in the topic?

Rationality should be the art of general intelligence. It's what makes you better at everything. If practice is the wood and nails, then rationality is the blueprint. 

To determine whether or not we're actually studying rationality, we need to check whether or not it applies to everything. So when I read posts applying the same technique to a wide variety of superficially unrelated subjects, it confirms that the technique is general, and helps me see how to apply it productively.

This points at a hypothesis, which is that general intelligence is a set of defined, generally applicable techniques. They apply across disciplines. And they apply across problems within disciplines. So why aren't they generally known and appreciated? Shouldn't they be the common language that unites all disciplines?

Perhaps it's because they're harder to communicate and appreciate. If I'm an expert baker, I can make another delicious loaf of bread. Or I can reflect on what allows me to make such tasty bread, and speculate on how the same techniques might apply to architecture, painting, or mathematics. Most likely, I'm going to choose to bake bread.

This is fine, until we start working on complex, interdisciplinary projects. Then general intelligence becomes the bottleneck for having enough skill to get the project done. Sounds like the 21st century. We're hitting the limits of what's achievable through sheer persistence in a single specialty, and we're learning to automate such specialties away.

What's left is creativity, which arises from structured decision-making. I've noticed that the longer I practice rationality, the more creative I become. I believe that's because it gives me the resources to turn an intuition into a specified problem, envision a solution, create a sort of Fermi approximation to give it definition, and work out how to develop the practical skills and relationships that will let me bring it into being.

If I'm right, human application of these techniques will require deliberate practice - both synthesizing the general techniques and practicing them individually, until they become natural.

The challenge is that most specific skills lend themselves to that naturally. If I want to become a pianist, I practice music until I'm good. If I want to be a baker, I bake bread. To become an architect, I design buildings.

What exactly do you do to practice the general techniques of rationality? I can imagine a few methods:

  1. Participate in superforecasting tournaments, where Bayesian and gears/policy level thinking are the known foundational techniques.
  2. Learn a new skill, and as you go, notice the problems you encounter along the way. Try to imagine what a general solution to that problem might look like. Then go out and build it.
  3. Pick a specific rationality technique, and try to apply it to every problem you face in your life.
comment by Viliam · 2020-08-15T21:11:38.008Z · LW(p) · GW(p)
What gives LessWrong staying power?

For me, it's the relatively high epistemic standards combined with a relatively wide variety of topics. I can imagine a narrowly specialized website with no bullshit, but I haven't yet seen a website that is not narrowly specialized and does not contain lots of bullshit. Even most smart people usually become quite stupid outside the lab. Less Wrong is a place outside the lab that doesn't feel painfully stupid. (For example, the average intelligence at Hacker News seems quite high, but I still regularly find upvoted comments that make me cry.)

comment by AllAmericanBreakfast · 2020-08-16T01:51:59.847Z · LW(p) · GW(p)

Yeah, Less Wrong seems to be a combination of project and aesthetic. Insofar as it's a project, we're looking for techniques of general intelligence, partly by stress-testing them on a variety of topics. As an aesthetic, it's a unique combination of tone, length, and variety + familiarity of topics that scratches a particular literary itch.

comment by AllAmericanBreakfast · 2020-12-19T21:03:29.390Z · LW(p) · GW(p)

Thoughts on cheap criticism

It's OK for criticism to be imperfect. But the worst sort of criticism has all five of these flaws:

  1. Prickly: A tone that signals a lack of appreciation for the effort that's gone into presenting the original idea, or that shames the presenter for bringing it up.
  2. Opaque: Making assertions or predictions without any attempt at specifying a contradictory gears-level model or evidence base, even on the level of anecdote or fiction.
  3. Nitpicky: Attacking the one part of the argument that seems flawed, without arguing for how the full original argument should be reinterpreted in light of the local disagreement.
  4. Disengaged: Not signaling any commitment to continue the debate to mutual satisfaction, or even to listen to/read and respond to a reply.
  5. Shallow: An obvious lack of engagement with the details of the argument or evidence originally offered.

I am absolutely guilty of having delivered Category 5 criticism, the worst sort of cheap shots.

There is an important tradeoff here. If standards are too high for critical commentary, it can chill debate and leave an impression that either nobody cares, everybody's on board, or the argument's simply correct. Sometimes, an idea can be wrong for non-obvious reasons, and it's important for people to be able to say "this seems wrong for reasons I'm not clear about yet" without feeling like they've done wrong.

On the other hand, cheap criticism is so common because it's cheap. It punishes all discourse equally, which means that the most damage is done to those who've put in the most effort to present their ideas. That is not what we want.

It usually takes more work to punish more heavily. Executing someone for a crime takes more work than jailing them, which takes more work than giving them a ticket. Addressing a grievance with murder is more dangerous than starting a brawl, which is more dangerous than starting an argument, which is more dangerous than giving someone the cold shoulder.

But in debate, cheap criticism has this perverse quality where it does the most to diminish a discussion, while being the easiest thing to contribute.

I think this is a reason to, on the whole, create norms against cheap criticism. If discussion is already at a certain volume, that can partially be accomplished by ignoring cheap criticism entirely.

But for many ideas, cheap criticism is almost all it gets, early on in the discussion. Just one or two cheap criticisms can kill an idea prematurely. So being able to address cheap criticisms effectively, without creating unreasonably high standards for critical commentary, seems important.

comment by Matt Goldenberg (mr-hire) · 2020-12-20T23:03:22.038Z · LW(p) · GW(p)

This seems like a fairly valuable framework. It occurs to me that all 5 of these flaws are present in the "Snark" genre found in places like Gawker and Jezebel.

comment by AllAmericanBreakfast · 2020-12-19T23:26:06.251Z · LW(p) · GW(p)

I am going to experiment with a karma/reply policy matching what I think would be a better incentive structure if broadly implemented. Loosely, it looks like this:

  1. Strong downvote plus a meaningful explanatory comment for infractions worse than cheap criticism; summary deletions for the worst offenders.
  2. Strong downvote for cheap criticism, no matter whether or not I agree with it.
  3. Weak downvote for lazy or distracting comments.
  4. Weak upvote for non-cheap criticism or warm feedback of any kind.
  5. Strong upvote for thoughtful responses, perhaps including an appreciative note.
  6. Strong upvote plus a thoughtful response of my own to comments that advance the discussion.
  7. Strong upvote, a response of my own, and an appreciative note in my original post referring to the comment for comments that changed or broadened my point of view.
comment by Luke Allen (luke-allen) · 2021-01-04T22:05:25.316Z · LW(p) · GW(p)

I'm trying a live experiment: I'm going to see if I can match your erisology one-to-one as antagonists to the Elements of Harmony from My Little Pony:

  1. Prickly: Kindness
  2. Opaque: Honesty
  3. Nitpicky: Generosity
  4. Disengaged: Loyalty
  5. Shallow: Laughter

Interesting! They match up surprisingly well, and you've somehow also matched the order of 3 out of 5 of the corresponding "seeds of discord" from 1 Peter 2:1, CSB: "Therefore, rid yourselves of all malice, all deceit, hypocrisy, envy, and all slander." If my pronouncement of success seems self-serving and opaque, I'll elaborate soon:

  1. Malice: Kindness
  2. Deceit: Honesty
  3. Hypocrisy: Loyalty
  4. Envy: Generosity
  5. Slander: Laughter

And now the reveal. I'm a generalist; I collect disparate lists of qualities (in the sense of "quality vs quantity"), and try to integrate all my knowledge into a comprehensive worldview. My world changed the day I first saw My Little Pony; it changed in a way I never expected, in a way many people claim to have been affected by HPMOR. I believed I'd seen a deep truth, and I've been subtly sharing it wherever I can.

The Elements of Harmony are the character qualities that, when present, result in a spark of something that brings people together. My hypothesis is that they point to a deep-seated human bond-testing instinct. The first time I noticed a match-up was when I heard a sermon on The Five Love Languages, which are presented in an entirely different order:

  1. Words of affirmation: Honesty
  2. Quality time: Laughter
  3. Receiving gifts: Generosity
  4. Acts of service: Loyalty
  5. Physical touch: Kindness

Well! In just doing the basic research to write this reply, it turns out I'm re-inventing the wheel! Someone else has already written a psychometric analysis of the Five Love Languages and found they do indeed match up with another relational maintenance typology.

Thank you for your post; you've helped open my eyes up to existing research I can use in my philosophical pursuits, and sparked thoughts of what "effective altruism" use I can put them to.

comment by AllAmericanBreakfast · 2020-12-23T03:16:33.331Z · LW(p) · GW(p)

Does rationality serve to prevent political backsliding?

It seems as if politics moves far too fast for rational methods to keep up. If so, does that mean rationality is irrelevant to politics?

One function of rationality might be to prevent ethical/political backsliding. For example, let's say that during time A, institution X is considered moral. A political revolution ensues, and during time B, X is deemed a great evil and is banned.

A change of policy makes X permissible during time C, banned again during time D, and absolutely required for all upstanding folk during time E.

Rational deliberation about X seems to play little role in the political legitimacy of X.

However, rational deliberation about X continues in the background. Eventually, a truly convincing argument about the ethics of X emerges. Once it does, it is so compelling that it has a permanent anchoring effect on X.

Although society's policy on X sometimes contradicts the rational argument, the argument's pull is such that these periods of backsliding tend to be shorter and less frequent.

The natural process of developing the rational argument about X also leads to an accretion of arguments that are not only correct, but convincing as well. This continues even after the ethics of X are settled beyond a shadow of a doubt, which further shortens and prevents periods of backsliding.

In this framework, rationality does not "lead" politics. Instead, it channels it. The goal of a rational thinker should not be to achieve an immediate political victory. Instead, it should be to build the channels of rational thought higher and stronger, so that the fierce and unpredictable waters of politics eventually are forced to flow in a more sane and ethical direction.

The reason you'd concern yourself with persuasion in this context is to prevent the fate of Gregor Mendel, whose ideas on inheritance were lost in a library for 40 years. If you come up with a new or better ethical argument about X, make sure that it becomes known enough to survive and spread. Your success is not your ability to immediately bring about the political changes your idea would support. Instead, it's to bring about additional consideration of your idea, so that it can take root, find new expression, influence other ideas, and either become a permanent fixture of our ethics or be discarded in favor of an even stronger argument.

comment by AllAmericanBreakfast · 2020-10-23T16:15:41.315Z · LW(p) · GW(p)

Thinking, Fast and Slow was the catalyst that turned my rumbling dissatisfaction into the pursuit of a more rational approach to life. I wound up here. After a few years, what do I think causes human irrationality? Here's a listicle.

  1. Cognitive biases, whatever these are
  2. Not understanding statistics
  3. Akrasia
  4. Little skill in accessing and processing theory and data
  5. Not speaking science-ese
  6. Lack of interest or passion for rationality
  7. Not seeing rationality as a virtue, or even seeing it as a vice.
  8. A sense of futility, the idea that epistemic rationality is not very useful, while instrumental rationality is often repugnant
  9. A focus on associative thinking
  10. Resentment
  11. Not putting thought into action
  12. Lack of incentives for rational thought and action itself
  13. Mortality
  14. Shame
  15. Lack of time, energy, ability
  16. An accurate awareness that it's impossible to distinguish tribal affiliation and culture from a community
  17. Everyone is already rational, given their context
  18. Everyone thinks they're already rational, and that other people are dumb
  19. It's a good heuristic to assume that other people are dumb
  20. Rationality is disruptive, and even very "progressive" people have a conservative bias to stay the same, conform with their peers, and not question their own worldview
  21. Rationality can misfire if we don't take it far enough
  22. All the writing, math, research, etc. is uncomfortable and not very fun compared to alternatives
  23. Epistemic rationality is directly contradictory to instrumental rationality
  24. Nihilism
  25. Applause lights confuse people about what even is rationality
  26. There are at least 26 factors deflecting people from rationality, and people like a clear, simple answer
  27. No curriculum
  28. It's not taught in school
  29. In an irrational world, epistemic rationality is going to hold you back
  30. Life is bad, and making it better just makes people more comfortable in badness
  31. Very short-term thinking
  32. People take their ideas way too seriously, without taking ideas in general seriously enough
  33. Constant distraction
  34. The paradox of choice
  35. Lack of faith in other people or in the possibility for constructive change
  36. Rationality looks at the whole world, which has more people in it than Dunbar's number
  37. The rationalists are all hiding on obscure blogs online
  38. Rationality is inherently elitist
  39. Rationality leads to convergence on the truth if we trust each other, but it leads to fragmentation of interests since we can't think about everything, which makes us more isolated
  40. Slinging opinions around is how people connect. Rationality is an argument.
  41. "Rationality" is stupid. What's really smart is to get good at harnessing your intuition, your social instincts, to make friends and play politics.
  42. Rationality is paperclipping the world. Every technological advance that makes individuals more comfortable pillages the earth and increases inequality, so they're all bad and we should just embrace the famine and pestilence until mother nature takes us back to the stone age and we can all exist in the circular dreamtime.
  43. You can't rationally commit to rationality without being rational first. We have no baptism ceremony.
  44. We need a baptism ceremony but don't want to be a cult, so we're screwed, which we would also be if we became a cult.
  45. David Brooks is right that EA is bad, we like EA, so we're probably bad too.
  46. We're secretly all spiritual and just faking rational atheism because what we really want to do is convert.
  47. There's too much verbiage already in the world.
  48. The singularity is coming; what's the point?
  49. Our leaders have abandoned us, and the best of us have been cut down like poppies.
  50. Eschewing the dark arts is a self-defeating stance
comment by Dagon · 2020-10-23T19:09:07.597Z · LW(p) · GW(p)

A few other (even less pleasant) options:

51) God is inscrutable and rationality is no better than any other religion.

52) Different biology and experience across humans leads to very different models of action.

53) Everyone lies, all the time.  

comment by AllAmericanBreakfast · 2020-08-01T14:26:30.501Z · LW(p) · GW(p)

Are rationalist ideas always going to be offensive to just about everybody who doesn’t self-select in?

One loved one was quite receptive to Chesterton's Fence the other day. Like, it stopped their rant in its tracks and got them on board with a different way of looking at things immediately.

On the other hand, I routinely feel this weird tension. Like, to explain why I think as I do, I'd need to go through some basic rational concepts. But I expect most people I know would hate it.

I wish we could figure out ways of getting this stuff across that were fun and made it seem agreeable and sensible and non-threatening.

Less negativity - we do sooo much critique. I was originally attracted to LW partly as a place where I didn't feel obligated to participate in the culture war. Now, I do, just on a set of topics that I didn't associate with the CW before LessWrong.

My guess? This is totally possible. But it needs a champion. Somebody willing to dedicate themselves to it. Somebody friendly, funny, empathic, a good performer, neat and practiced. And it needs a space for the educative process - a YouTube channel, a book, etc. And it needs the courage of its convictions. The sign of that? Not taking itself too seriously, being known by the fruits of its labors.

comment by Viliam · 2020-08-02T19:29:30.552Z · LW(p) · GW(p)

Traditionally, things like this are socially achieved by using some form of "good cop, bad cop" strategy. You have someone who explains the concepts clearly and bluntly, regardless of whom it may offend (e.g. Eliezer Yudkowsky), and you have someone who presents the concepts nicely and inoffensively, reaching a wider audience (e.g. Scott Alexander), but ultimately they both use the same framework.

The inoffensiveness of Scott is of course relative, but I would say that people who get offended by him are really not the target audience for rationalist thought. Because, ultimately, saying "2+2=4" means offending people who believe that 2+2=5 and are really sensitive about it; so the only way to be non-offensive is to never say anything specific.

If a movement only has the "bad cops" and no "good cops", it will be perceived as a group of assholes. Which is not necessarily bad if the members are powerful; people want to join the winning side. But without actual power, it will not gain wide acceptance. Most people don't want to go into unnecessary conflicts.

On the other hand, a movement with "good cops" without "bad cops" will get its message diluted. First, the diplomatic believers will dilute their message in order not to offend anyone. Their fans will further dilute the message, because even the once-diluted version is too strong for normies' taste. In the end, the message may gain popular support... kind of... because the version that gains the popular support will actually contain maybe 1% of the original message and mostly 99% of what the normies already believed, peppered with the new keywords.

The more people present rationality using different methods, the better, because each of them will reach a different audience. So I completely approve of the approach you suggest... in addition to the existing ones.

comment by AllAmericanBreakfast · 2020-08-02T23:57:57.601Z · LW(p) · GW(p)

You're right.

I need to try a lot harder to remember that this is just a community full of individuals airing their strongly held personal opinions on a variety of topics.

comment by Viliam · 2020-08-03T12:27:49.602Z · LW(p) · GW(p)

Those opinions often have something in common -- respect for the scientific method, effort to improve one's rationality, concern about artificial intelligence -- and I like to believe it is not just a random idiosyncratic mix (a bunch of random things Eliezer likes), but different manifestations of the same underlying principle (use your intelligence to win, not to defeat yourself). However, not everyone is interested in all of this.

And I would definitely like to see "somebody friendly, funny, empathic, a good performer, neat and practiced" promoting these values in a YouTube channel or in books. But that requires a talent I don't have, so I can only wait until someone else with the necessary skills does it.

This reminded me of the YouTube channel of Julia Galef, but the latest videos there are 3 years old.

comment by Pontor · 2020-11-28T17:54:26.272Z · LW(p) · GW(p)

Her podcast is really good IMHO. She does a singularly good job of challenging guests in a friendly manner, dutifully tracking nuance, steelmanning, etc. It just picked back up after about a yearlong hiatus (presumably due to her book writing).

Unfortunately, I see the relative obscurity of her podcast as some evidence against the prospects of the "skilled & likeable performer" strategy. I assume that potential subscribers are more interested in lower-quality podcasts and YouTubers that indulge in bias rather than confronting it. Dunno what to do about that, but I'm glad she's back to podcasting.

comment by Viliam · 2020-11-29T17:44:06.770Z · LW(p) · GW(p)

It just picked back up after about a yearlong hiatus

That's wonderful news, thank you for telling me!

For those who have clicked on the YouTube link in my previous comment: there is no new content there as of now; go to the Rationally Speaking podcast instead.

comment by TAG · 2020-08-03T13:55:33.748Z · LW(p) · GW(p)

You're both assuming that you have a set of correct ideas coupled with bad PR... but how well are Bayes, Aumann, and MWI (e.g.) actually doing?

comment by seed · 2020-11-28T09:36:40.090Z · LW(p) · GW(p)

Look, I'm neurotypical and I don't find anything Eliezer writes offensive, will you please stop ostracizing us.

comment by Ben Pace (Benito) · 2020-11-28T11:03:52.988Z · LW(p) · GW(p)

Did either of them say neurotypical? I just heard them say normies.

comment by seed · 2020-12-04T05:40:16.266Z · LW(p) · GW(p)

Oh, sorry, I've only heard the word used in that context before, I thought that's what it meant. Turns out it has a broader meaning. 

comment by Pongo · 2020-08-01T22:46:57.624Z · LW(p) · GW(p)

Like, to explain why I think as I do, I'd need to go through some basic rational concepts.

I believe that if the rational concepts are pulling their weight, it should be possible to explain the way the concept is showing up concretely in your thinking, rather than justifying it in the general case first.

As an example, perhaps your friend is protesting your use of anecdotes as data, but you wish to defend it as Bayesian, if not scientific, evidence [LW · GW]. Rather than explaining the difference in general, I think you can say "I think that it's more likely that we hear this many people complaining about an axe murderer downtown if that's in fact what's going on, and that it's appropriate for us to avoid that area today. I agree it's not the only explanation and you should be able to get a more reliable sort of data for building a scientific theory, but I do think the existence of an axe murderer is a likely enough explanation for these stories that we should act on it."

If I'm right that this is generally possible, then I think this is a route around the feeling of being trapped on the other side of an inferential gap (which is how I interpreted the 'weird tension').

comment by AllAmericanBreakfast · 2020-08-02T04:06:13.732Z · LW(p) · GW(p)

I think you're right, when the issue at hand is agreed on by both parties to be purely a "matter of fact."

As soon as social or political implications creep in, that's no longer a guarantee.

But we often pretend that our social/political values are matters of fact. The offense arises when we use rational concepts in a way that gives the lie to that pretense. Finding an indirect and inoffensive way to present the materials and let people deconstruct their own pretenses is what I'm wishing for here. LW has a strong culture surrounding how these general-purpose tools get applied, so I'd like to see a presentation of the "pure theory" that's done in an engaging way not obviously entangled with this blog.

The alternative is to use rationality to try and become savvier social operators. This can be "instrumental rationality" or it can be "dark arts," depending on how we carry it out. I'm all for instrumental rationality, but I suspect that spreading rational thought further will require that other cultural groups appropriate the tools to refine their own viewpoints rather than us going out and doing the convincing ourselves. 

comment by AllAmericanBreakfast · 2020-08-08T19:48:42.333Z · LW(p) · GW(p)

Markets are the worst form of economy except for all those other forms that have been tried from time to time.

comment by Matt Goldenberg (mr-hire) · 2020-08-09T01:37:33.440Z · LW(p) · GW(p)

I used this line when having a conversation at a party with a bunch of people who turned out to be communists, and the room went totally silent except for one dude who was laughing.

comment by AllAmericanBreakfast · 2020-08-09T04:48:03.500Z · LW(p) · GW(p)

It was the silence of sullen agreement.

comment by AllAmericanBreakfast · 2020-07-31T16:16:52.601Z · LW(p) · GW(p)

I'm annoyed that I think so hard about small daily decisions.

Is there a simple and ideally general pattern to not spend 10 minutes doing arithmetic on the cost of making burritos at home vs. buying the equivalent at a restaurant? Or am I actually being smart somehow by spending the time to cost out that sort of thing?

Perhaps:

"Spend no more than 1 minute per $25 spent and 2% of the price to find a better product."

This heuristic cashes out to the following (with a rough code sketch after the list):

  • Over a year of weekly $35 restaurant meals, spend about $35 and an hour and a half finding better restaurants or meals.
  • For $250 of monthly consumer spending, spend a total of $5 and 10 minutes per month finding a better product.
  • For bigger buys of around $500 (about 2x/year), spend $10 and 20 minutes on each purchase.
  • Buying a used car ($15,000), I'd spend $300 and 10 hours. I could use the $300 to hire somebody at $25/hour to test-drive an additional 5-10 cars, a mechanic to inspect it on the lot, or a good negotiator to help me secure a lower price.
  • For work over the next year ($30,000), spend $600 and 20 hours.
  • Getting a Master's degree ($100,000 including opportunity costs), spend 66 hours and $2,000 finding the best school.
  • Choosing from among STEM career options ($100,000 per year), spend about 66 hours and $600 per year exploring career decisions.
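
Here is the rule itself as a minimal Python sketch (the example purchases are the hypothetical figures from the list above):

```python
def research_budget(price_usd):
    """The '1 minute per $25 spent, plus 2% of the price' rule."""
    minutes = price_usd / 25      # 1 minute per $25
    dollars = price_usd * 0.02    # 2% of the price
    return minutes, dollars

for label, price in [("a year of weekly $35 restaurant meals", 52 * 35),
                     ("a used car", 15_000),
                     ("a master's degree", 100_000)]:
    minutes, dollars = research_budget(price)
    print(f"{label}: ~{minutes / 60:.1f} h and ${dollars:,.0f}")
# roughly: 1.2 h / $36, 10 h / $300, 66.7 h / $2,000
```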

Comparing that with my own patterns, that simplifies to:

Spend much less time thinking about daily spending. You're correctly calibrated for ~$500 buys. Spend much more time considering your biggest buys and decisions.

comment by Dagon · 2020-07-31T22:00:48.491Z · LW(p) · GW(p)

For some (including younger-me), the opposite advice was helpful - I'd agonize over "big" decisions, without realizing that the oft-repeated small decisions actually had a much larger impact on my life.

To account for that, I might recommend you notice cache-ability and repetition, and budget on longer timeframes. For monthly spending, there's some portion that's really $120X decade spending (you can optimize once, then continue to buy monthly for the next 10 years), a bunch that's probably $12Y of annual spending, and some that's really $Z that you have to re-consider every month.

Also, avoid the mistake of inflexible permissions. Notice when you're spending much more (or less!) time optimizing a decision than your average; some decisions genuinely benefit from the extra time, while for others the additional time/money doesn't change the marginal outcome much, so you should spend less time on them.

comment by AllAmericanBreakfast · 2020-07-31T23:09:18.399Z · LW(p) · GW(p)

I wonder if your problem as a youth was in agonizing over big decisions, rather than learning a productive way to methodically think them through. I have lots of evidence that I underthink big decisions and overthink small ones. I also tend to be slow yet ultimately impulsive in making big changes, and fast yet hyper-analytical in making small changes.

Daily choices have low switching and sunk costs. Everybody's always comparing, so one brand at a given price point tends to be about as good as another.

But big decisions aren't just big spends. They're typically choices that you're likely stuck with for a long time to come. They serve as "anchors" to your life. There are often major switching and sunk costs involved. So it's really worthwhile anchoring in the right place. Everything else will be influenced or determined by where you're anchored.

The 1 minute/$25 + 2% of purchase price rule takes only a moment's thought. It's a simple but useful rule, and that's why I like it.

There are a few items or services that are relatively inexpensive, but have high switching costs and are used enough or consequential enough to need extra thought. Examples include pets, tutors, toys for children, wedding rings, mattresses, acoustic pianos, couches, safety gear, and textbooks. A heuristic and acronym for these exceptions might be CHEAPS: "Is it a Curriculum? Is it Heavy? Is it Ergonomic? Is it Alive? Is it Precious? Is it Safety-related?"

comment by AllAmericanBreakfast · 2020-11-13T23:06:20.883Z · LW(p) · GW(p)

I want to put forth a concept of "topic literacy."

Topic literacy roughly means that you have both the concepts and the individual facts memorized for a certain subject at a certain skill level. That subject can be small or large. The threshold is that you don't have to refer to a reference text to accurately answer within-subject questions at the skill level specified.

This matters, because when studying a topic, you always have to decide whether you've learned it well enough to progress to new subject matter. This offers a clean "yes/no" answer to that essential question at what I think is a good tradeoff between difficulty and adequacy.

I'm currently taking an o-chem class, and we're studying IR spectra. For this, it's necessary to be able to interpret spectral diagrams in terms of shape, intensity, and wavenumber; to predict signals that a given molecule will produce; and to explain the underlying mechanisms that produce these signals for a given molecule.

Most students will simply be answering the required homework problems, with heavy reference to notes and the textbook. In particular, they'll be referring to a key chart that lists the signals for 16 crucial bond types, to 6 subtypes of conjugated vs. unconjugated bonds, and referring back for reminders on the equations and mechanisms underpinning these patterns.

Memorizing that information only took me about an extra half hour, and dramatically increased my confidence in answering questions. It made it tractable for me to rapidly go through and answer every single study question in the chapter. This was the key step to transitioning from "topic familiar" to "topic literate."

If I had to bin levels of understanding of an academic subject, I might do it like this:

  1. "Topic ignorant." You've never before encountered a formal treatment of the topic.
  2. "Topic familiar." You understand the concepts well enough to use them, but require review of facts and concepts in most cases.
  3. "Topic literate." You have memorized concepts and facts enough to be able to answer most questions that will be posed to you (at the skill level in question) quickly and confidently, without reference to the textbook.

Go for "topic literate" before moving on.

comment by AllAmericanBreakfast · 2020-10-15T23:32:16.557Z · LW(p) · GW(p)

We do things so that we can talk about them later.

I was having a bad day today. Unlikely to have time this weekend for something I'd wanted to do. Crappy teaching in a class I'm taking. Ever-increasing and increasingly complicated responsibilities piling up.

So what did I do? I went out and bought half a cherry pie.

Will that cherry pie make me happy? No. I knew this in advance. Consciously and unconsciously: I had the thought, and no emotion compelled me to do it.

In fact, it seemed like the least-efficacious action: spending some of my limited money, to buy a pie I don't need, to respond to stress that's unrelated to pie consumption and is in fact caused by lack of time (that I'm spending on buying and eating pie).

BUT. What buying the pie allowed me to do was tell a different story. To myself and my girlfriend who I was texting with. Now, today can be about how I got some delicious pie.

And I really do feel better. It's not the pie, nor the walk to the store to buy it. It's the relief of being able to tell my girlfriend that I bought some delicious cherry pie, and that I'd share it with her if she didn't live a three-hour drive away. It's the relief of reflecting on how I dealt with my stress, and seeing a pie-shaped memory at the end of the schoolwork.

If this is a correct model of how this all works, then it suggests a couple things:

  • This can probably be optimized.
  • The way I talk about that optimization process will probably affect how well it works. For example, if I then think "what's the cheapest way to get this effect," that intuitively doesn't feel good. I don't want to be cheap. I need to find the right language, the right story to tell, so that I can explain my "philosophy" to myself and others in a way that gets the response I want.

Is that the dark arts? I don't think so. I think this is one area of life where the message is the medium.

comment by Viliam · 2020-10-16T17:46:54.126Z · LW(p) · GW(p)

So the "stupid solutions to problems of life" are not really about improving the life, but about signaling to yourself that... you still have some things under control? (My life may suck, but I can have a cherry pie whenever I want to!)

This would be even more important if the cherry pie somehow actively made your life worse. For example, if you are trying to lose weight, but at the same time keep eating cherry pie every day in order to improve the story of your day. Or if instead of cherry pie it were cherry liqueur.

The way I talk about that optimization process will probably affect how well it works.

Just guessing, but it would probably help to choose the story in advance. "If I am doing X, my life is great, and nothing else matters" -- and then make X something useful that doesn't take much time. Even better, have multiple alternatives X, Y, Z, such that doing any of them is a "proof" of life being great.

comment by AllAmericanBreakfast · 2020-10-16T18:55:30.761Z · LW(p) · GW(p)

I do chalk a lot of dysfunction up to this story-centric approach to life. I just suspect it’s something we need to learn to work with, rather than against (or to deny/ignore it entirely).

My sense is that storytelling - to yourself or others - is an art. To get the reaction you want - from self or others - takes some aesthetic sensitivity.

My guess is there's some low-hanging fruit here. People often talk about doing things “for the story,” which they resort to when they're trying to justify doing something dumb/wasteful/dangerous/futile. Perversely, it often seems that when people talk in detail about their good decisions, it comes off as arrogant. Pointless, tidy philosophical paradoxes seem to get people's puzzle-solving brains going better than confronting the complexity of the real world.

But maybe we can simply start building habits of expressing gratitude. Finding ways to present good ideas and decisions in ways that are delightful in conversation. Spinning interesting stories out of the best parts of our lives.

comment by AllAmericanBreakfast · 2020-10-29T23:33:54.854Z · LW(p) · GW(p)

A lot of my akrasia is solved by just "monkey see, monkey do." Physically put what I should be doing in front of my eyeballs, and pretty quickly I'll do it. Similarly, any visible distractions, or portals to distraction, will also suck me in.

But there also seems to be a component that's more like burnout. "Monkey see, monkey don't WANNA."

On one level, the cure is to just do something else and let some time pass. But that's not explicit enough for my taste. For one thing, something is happening that recovers my motivation. For another, "letting time pass" is an activity with other effects, which might be energy-recovering but distracting or demotivating in other ways. Letting time pass involves forgetting, value drift, passing up opportunities, and spending one form of slack (time) to gain another (energy). It's costly, not just something I forget to do. So I'd like to understand my energy budget on a more fine-grained level.

Act as if tiredness and demotivation do not exist. Gratitude journaling can transform my attitude all at once, even though nothing changed in my life. Maybe "tiredness and demotivation" is a story I tell myself, not a real state that says "stop working."

One clue is that there must be a difference between "tiredness and demotivation" as a folk theory and as a measurable phenomenon. Clearly, if I stay up for 24 hours straight, I'm going to perform worse on a math test at the end of that time than I would have at the beginning. That's measurable. But if I explain my behaviors right in this moment as "because I'm tired," that's my folk theory explanation.

An approach I could take is to be skeptical of the folk theory of tiredness. Admit that fatigue will affect my performance, but open myself to possibilities like:

  1. I have more capacity for sustained work than I think. Just do it.
  2. A lot of "fatigue" is caused by self-reinforcing cycles of complaining that I'm tired/demotivated.
  3. Extremely regular habits, far beyond what I've ever practiced, would allow me to calibrate myself quite carefully for an optimal sense of wellbeing.
  4. Going with the flow, accepting all the ups and downs, and giving little to no thought to my energetic state - just allowing myself to be driven by reaction and response - is actually the best way to go.
  5. Just swallow the 2020 wellness doctrine hook, line, and sinker. Get 8 hours of sleep. Get daily exercise. Eat a varied diet. Less caffeine, less screens, more conversation, brief breaks throughout the day, sunshine, etc. Prioritize wellness above work. If I get to the end of the day and I haven't achieved all my "wellness goals," that's a more serious problem than if I haven't completed all my work deadlines.
comment by AllAmericanBreakfast · 2021-01-14T20:51:53.152Z · LW(p) · GW(p)

The structure of knowledge is an undirected cyclic graph of concepts. To make it easier to present to the novice, experts convert that graph into a tree structure by removing some edges. Then they convert that tree into natural language. This is called a textbook.

Scholarship is the act of converting the textbook language back into nodes and edges of a tree, and then filling in the missing edges to convert it into the original graph.

The mind cannot hold the entire graph in working memory at once. It's as important to practice navigating between concepts as learning the concepts themselves. The edges are as important to the structure as the nodes. If you have them all down pat, then you can easily get from one concept to another.

It's not always necessary to memorize every bit of knowledge. Part of the graph is knowing which things to memorize, which to look up, and where to refer to if you need to look something up.

Feeling as though you've forgotten is not easily distinguishable from never having learned something. When people consult their notes and realize that they can't easily call to mind the concepts they're referencing, this is partly because they've never practiced connecting the notes to the concepts. There are missing edges on the graph.

comment by AllAmericanBreakfast · 2020-10-22T19:24:21.171Z · LW(p) · GW(p)

Paying your dues

I'm in school at the undergraduate level, taking 3 difficult classes while working part-time.

For this path to be useful at all, I have to be able to tick the boxes: get good grades, get admitted to grad school, etc. For now, my strategy is to optimize to complete these tasks as efficiently as possible (what Zvi calls "playing on easy mode"), in order to preserve as much time and energy as possible for what I really want: living and learning.

Are there dangers in getting really good at paying your dues?
 

1) Maybe it distracts you/diminishes the incentive to get good at avoiding dues.

2) Maybe there are two ways to pay dues (within the rules): one that gives you great profit and another that merely satisfies the requirement.

In general, though, I tend to think that efficient accomplishment is about avoiding or compressing work until you get to the "efficiency frontier" in your field. Good work is about one of two things:

  1. Getting really fast/accurate at X because it's necessary for reason R to do Y.
  2. Getting really fast/accurate at X because it lets you train others to do (or better yet, automate) X.

In my case, X is schoolwork, R is "triangulation of me and graduate-level education," and Y is "get a research job."

X is also schoolwork, R is "practice," and Y is learning. But this is much less clear. It may be that other strategies would be more efficient for learning.

However, since the expected value of my learning is radically diminished if I don't get into grad school, it makes sense to optimize first for acing my schoolwork, and then in the time that remains to optimize for learning. Treating these as two separate activities with two separate goals makes sense.

This isn't "playing on easy mode," so much as purchasing fuzzies (As) and utilons (learning) separately. [LW · GW]

comment by NaiveTortoise (An1lam) · 2020-10-22T23:08:33.722Z · LW(p) · GW(p)

If you haven't seen Half-assing it with everything you've got, I'd definitely recommend it as an alternative perspective on this issue.

comment by AllAmericanBreakfast · 2020-10-23T16:28:54.448Z · LW(p) · GW(p)

I see my post as less about goal-setting ("succeed, with no wasted motion") and more about strategy-implementing ("Check the unavoidable boxes first and quickly, to save as much time as possible for meaningful achievement"). 

comment by Dagon · 2020-10-22T22:55:37.626Z · LW(p) · GW(p)

I suspect "dues" are less relevant in today's world than a few decades ago.  It used to be a (partial) defense against being judged harshly for your success, by showing that you'd earned it without special advantage.  Nowadays, you'll be judged regardless, as the assumption is that "the system" is so rigged that anyone who succeeds had a headstart.

To the extent that the dues do no actual good (unlike literal dues, which the recipient can use to buy things, presumably for the good of the group), skipping them seems very reasonable to me.  The trick, of course, is that it's very hard to distinguish unnecessary hurdles ("dues") from socially-valuable lessons in conformity and behavior ("training").  

Relevant advice when asked if you've paid your dues: https://www.youtube.com/watch?v=PG0YKVafAe8

comment by AllAmericanBreakfast · 2020-09-16T23:47:17.441Z · LW(p) · GW(p)

I've been thinking about honesty over the last 10 years. It can play into at least three dynamics.

One is authority and resistance: the revelation or extraction of information, and the norms, rules, laws, incentives, and moral concepts surrounding it, serve primarily to shape the power dynamic.

The second is practical communication. Honesty is the idea that specific people have a "right to know" certain pieces of information from you, and that you meet this obligation. There is wide latitude for "white lies," exaggeration, storytelling, "noble lies," self-protective omissions, image management, and so on in this conception. It's up to the individual's sense of integrity to figure out what the "right to know" entails in any given context.

The third is honesty as a rigid rule. Honesty is about revealing every thought that crosses your mind, regardless of the effect it has on other people. Dishonesty is considered a person's natural and undesirable state, and the ability to reveal thoughts regardless of external considerations is considered a form of personal strength.

comment by AllAmericanBreakfast · 2020-09-03T03:45:28.995Z · LW(p) · GW(p)

Better rationality should lead you to think less, not more. It should make you better able to

  • Set a question aside
  • Fuss less over your decisions
  • Accept accepted wisdom
  • Be brief

while still having good outcomes. What's your rationality doing to you?

comment by Dagon · 2020-09-03T20:07:49.175Z · LW(p) · GW(p)

I like this line of reasoning, but I'm not sure it's actually true. "better" rationality should lead your thinking to be more effective - better able to take actions that lead to outcomes you prefer. This could express as less thinking, or it could express as MORE thinking, for cases where return-to-thinking is much higher due to your increase in thinking power.

Whether you're thinking less for "still having good outcomes", or thinking the same amount for "having better outcomes" is a topic for introspection and rationality as well.

comment by AllAmericanBreakfast · 2020-09-04T02:02:43.147Z · LW(p) · GW(p)

That's true, of course. My post is really a counter to a few straw-Vulcan tendencies: intelligence signalling, overthinking everything, and being super argumentative all the time. Just wanted to practice what I'm preaching!

comment by AllAmericanBreakfast · 2020-08-10T22:12:53.657Z · LW(p) · GW(p)

How should we weight and relate the training of our mind, body, emotions, and skills?

I think we are like other mammals. Imitation and instinct lead us to cooperate, compete, produce, and take a nap. It's a stochastic process that seems to work OK, both individually and as a species.

We made most of our initial progress in chemistry and biology through very close observation of small-scale patterns. Maybe a similar obsessiveness toward one semi-arbitrarily chosen aspect of our own individual behavior would lead to breakthroughs in self-understanding?

comment by AllAmericanBreakfast · 2020-08-02T03:28:56.893Z · LW(p) · GW(p)

I'm experimenting with a format for applying LW tools to personal social-life problems. The goal is to boil down situations so that similar ones will be easy to diagnose and deal with in the future.

To do that, I want to arrive at an acronym that's memorable, defines an action plan and implies when you'd want to use it. Examples:

OSSEE Activity - "One Short Simple Easy-to-Exit Activity." A way to plan dates and hangouts that aren't exhausting or recipes for confusion.

DAHLIA - "Discuss, Assess, Help/Ask, Leave, Intervene, Accept." An action plan for how to deal with annoying behavior by other people. Discuss with the people you're with, assess the situation, offer to help or ask the annoying person to stop, leave if possible, intervene if not, and accept the situation if the intervention doesn't work out.

I came up with these by doing a brief post-mortem analysis on social problems in my life. I did it like this:

  1. Describe the situation as fairly as possible, both what happened and how it felt to me and others.
  2. Use LW concepts to generalize the situation and form an action plan. For example, OSSEE Activity arose from applying the concept of "diminishing marginal returns" to my outings.
  3. Format the action plan into a mnemonic, such as an acronym.
  4. Experiment with applying the action plan mnemonic in life and see if it leads you to behave differently and proves useful.
comment by AllAmericanBreakfast · 2020-11-27T19:36:11.715Z · LW(p) · GW(p)

A celebrity is someone famous for being famous.

Is a rationalist someone famous for being rational? Someone who’s leveraged their reputation to gain privileged access to opportunity, other people’s money, credit, credence, prestige?

Are there any arenas of life where reputation-building is not a heavy determinant of success?

comment by Ben Pace (Benito) · 2020-11-28T05:22:24.494Z · LW(p) · GW(p)

A physicist is someone who is interested in and studies physics.

A rationalist is someone who is interested in and studies rationality.

comment by Viliam · 2020-11-27T22:21:46.918Z · LW(p) · GW(p)

A rationalist is someone who can talk rationally about rationality, I guess. :P

One difference between rationality and fame is that you need some rationality in order to recognize and appreciate rationality, while fame can be recognized and admired also (especially?) by people who are not famous. Therefore, rationality has a limited audience.

Suppose you have a rationalist who "wins at life". How would a non-rational audience perceive them? Probably as someone "successful", which is a broad category that also includes e.g. lottery winners.

Even people famous for being smart, such as Einstein, are probably perceived as "being right" rather than being good at updating, research, or designing experiments.

A rationalist can admire another rationalist's ability to change their mind. And also "winning at life," to the degree we can control for their circumstances (privilege and luck), so that we can be confident it is not mere "success" we admire, but rather "success disproportionate to resources and luck". This would require either that the rationalist celebrity regularly publishes their thought processes, or that you know them personally. Either way, you need lots of data about how they actually succeeded.

Are there any arenas of life where reputation-building is not a heavy determinant of success?

You could become a millionaire by buying Bitcoin anonymously, so that would be one example.

Depends on what precisely you mean by "success": is it something like "doing/getting X" or rather "being recognized as X"? The latter is inherently social; the former you can often achieve without anyone knowing about it. Sometimes it is easier to achieve things if you don't want to take credit; for example, if you need the cooperation of a powerful person, it can be useful to convince them that X was actually their idea. Or you can have the power but live in the shadows, while other people are in the spotlight, and only they know that they actually take commands from you.

To be more specific, I think you could make a lot of money by learning something like programming, getting a well-paid but not very exceptional job, saving half of your income, and investing it wisely; then you could retire early, find a good partner (or multiple partners), start a family if you want one, and have harmonious relations with people close to you; spend your time interacting with friends and doing your hobbies. This already is a huge success in my opinion; many people would like to have it, but only a few ever get it. Then, depending on what your hobby is, you could write a successful book (it is possible to publish a world-famous book anonymously), or use your money to finance a project that cures malaria or eradicates some form of poverty (the money can come through a corporation with an unknown owner). Hypothetically speaking, you could build a Friendly superhuman AI in your basement.

I am not saying the anonymous way is the optimal one, only that it is possible. Reputation-building could help you make more money, attract more/better partners, find collaborators on your projects, etc.

comment by AllAmericanBreakfast · 2020-11-28T01:37:27.258Z · LW(p) · GW(p)

Certainly it is possible to find success in some areas anonymously. No argument with you there!

I view LW-style rationality as a community of practice, a culture of people aggregating, transmitting, and extending knowledge about how to think rationally. As in "The Secret of Our Success," we don't accomplish this by independently inventing the techniques we need to do our work. We accomplish this primarily by sharing knowledge that already exists.

Another insight from TSOOS is that people use prestige as a guide for who they should imitate. So rationalists tend to respect people with a reputation for rationality.

But what if a reputation for rationality can be cultivated separately from tangible accomplishments?

In fact, prestige is already one step removed from the tangible accomplishments. But how do we know if somebody is prestigious?

Perhaps a reputation can be built not by gaining the respect of others through a track record of tangible accomplishments, but by persuading others that:

a) You are widely respected by other people whom they haven't met, or by anonymous people they cannot identify, making them feel behind the times, out of the loop.

b) That the basis on which people allocate prestige conventionally is flawed, and that they should do it differently in a way that is favorable to you, making them feel conformist or conservative.

c) That other people's track records of tangible accomplishments are in fact worthless, because they fall short of the incredible value of the project that the reputation-builder is "working on," or are suspect in terms of their actual utility. This makes people insecure.

d) That they can participate in the incredible value you are generating, by evangelizing your concept and thereby evangelizing you. Or of course, by just donating money. This makes people feel a sense of meaning and purpose.

I could think of other strategies for building hype. One is to participate in cooperative games, whereby you and others hype each other, creating a culture of exclusivity. If enough people do this, it could perhaps trick our monkey brains into perceiving someone as socially dominant in a much larger sphere than they really are.

Underlying this anxious argument is a conjunction that I want to make explicit, because it could lead to fallacy:

  1. It rests on a hypothesis that prestige has historically been a useful proxy for success...
  2. ... and that imitation of prestigious people has been a good way to become successful...
  3. ... and that we're hardwired to continue using it now...
  4. ... and that prestige can be cheap to cultivate or credit easy to steal in some domains, with rationality being one such domain; or that we can delude ourselves about somebody's prestige more easily in a modern social and technological context...
  5. ... and that we're likely enough to imitate a rationalist-by-reputation rather than a rationalist-in-fact that this is a danger worth speaking about...
  6. ... and perhaps that one such danger is that we pervert our sense of rationality to align with success in reputation-management rather than success in doing concrete good things.

You could argue against this anxiety by arguing against any of these six points, and perhaps others. It has many failure points.

One counterargument is something like this:

People are selfish creatures looking out for their own success. They have a strong incentive not to fall for hype unless it can benefit them. They are also incentivized to look for ideas and people who can actually help them be more successful in their endeavors. If part of the secret of our success is cultural transmission of knowledge, another part is probably the cultural destruction of hype. Perhaps we're wired for skepticism of strangers and slow admission into the circle of people we trust.

Hype is excitement. Excitement is a handy emotion. It grabs your attention fleetingly. Anything you're excited about has only a small probability of being as true and important as it seems at first. But occasionally, what glitters is gold. Likewise, being attracted to a magnetic, apparently prestigious figure is fine, even if the majority of the time they prove to be a bad role model, if we're able to figure that out in time to distance ourselves and try again.

So the Secret of Our Success isn't blind, instinctive imitation of prestigious people and popular ideas. Nor is it rank traditionalism.

Instead, it's cultural and instinctive transmission of knowledge among people with some capacity for individual creativity and skeptical realism.

So as a rationalist, the approach this might suggest is to use popularity, hype, and prestige to decide which books to buy, which blogs to peruse, which arguments to read. But actually read and question these arguments with a critical mind. Ask whether they seem true and useful before you accept them. If you're not sure, find a few people who you think might know better and solicit their opinion.

Gain some sophistication in interpreting why controversy among experts persists, even when they're all considering the same questions and are looking at the same evidence. As you go examining arguments and ideas in building your own career and your own life, be mindful not only of what argument is being made, but of who's making it. If you find them persuasive and helpful, look for other writings. See if you can form a relationship with them, or with others who find them respectable. Look for opportunities to put those ideas to the test. Make things.

I find this counter-argument more persuasive than the idea of being paranoid about people's reputations. In most cases, there are too many checks on reputation for a faulty one to last too long; there are too many reputations with a foundation in fact for the few baseless ones to be common confounders; we seem to have some level of instinctive skepticism that prevents us from giving ourselves over fully to a superficially persuasive argument or to one person's ill-considered dismissal; and even being "taken in" by a bad argument may often lead to a learning process that has long-term value. Perhaps the vivid examples of durable delusions are artifacts of survivorship bias: most people have many dalliances with a large number of bad ideas, but end up having selected enough of the few true and useful ones to end up in a pretty good place in the end.

comment by Viliam · 2020-11-28T16:57:23.320Z · LW(p) · GW(p)

Ah, so you mean within the rationalist (and adjacent) community; how can we make sure that we instinctively copy our most rational members, as opposed to random or even least rational ones.

When I reflect on what I do by default... well, long ago I perceived "works at MIRI/CFAR" as the source of prestige, but recently it became "writes articles I find interesting". Both heuristics have their advantages and disadvantages. The "MIRI/CFAR" heuristic allows me to outsource judgment to people who are smarter than me and have more data about their colleagues; but it ignores people outside the Bay Area and those who already have another job. The "blogging" heuristic allows me to judge the thinking of authors; but it ignores people who are too busy doing something important or don't wish to write publicly.

But what if a reputation for rationality can be cultivated separately from tangible accomplishments?

Here is how to exploit my heuristics:

  • Be charming, and convince people at MIRI/CFAR/GiveWell/etc. to give you some role in their organization; it could be a completely unimportant one. Make your association known.
  • Have good verbal skills, and deep knowledge of some topic. Write a blog about that topic and the rationalist community.

Looking at your list: Option a) if someone doesn't live in the Bay Area, it could be quite simple to add a few rationalist celebrities as friends on Facebook, and then pretend that you have some deeper interaction with them. People usually don't verify this information, so if no one at your local meetup is in regular contact with them, the risk of exposure is low. Your prestige is then limited to the local meetup.

Options b) and c) would probably lead to a big debate. Arguably, "metarationality" is an example of an "actually, all popular rationalists are doing it wrong; this is the true rationality" claim.

Option d) was tried by Intentional Insights, Logic Nation, and I have heard about people who try to extract free work from programmers at LW meetups. Your prestige is limited to the few people you manage to recruit.

The rationalist community has a few people in almost undefeatable positions (MIRI and CFAR, Scott Alexander), who have the power to ruin the reputation of any pretender, if they collectively choose to. Someone trying to get undeserved prestige needs to stay under their radar, or infiltrate them, because trying to replace them with a parallel structure would be too much work.

At this point, for someone trying to get into a position of high prestige, it would be much easier to simply start their own movement, built on different values. However, should the rationalist community become more powerful in the future, this equation may change.

comment by AllAmericanBreakfast · 2020-09-25T19:51:35.811Z · LW(p) · GW(p)

Idea for online dating platform:

Each person chooses a charity and an amount of money that others must donate to swipe right on them. This leads to higher-fidelity match information while also giving you a meaningful topic to kick off the conversation.

comment by AllAmericanBreakfast · 2020-07-16T19:25:32.104Z · LW(p) · GW(p)

Goodhart's Epistemology

If a gears-level understanding becomes the metric of expertise, what will people do?

  • Go out and learn until they have a gears-level understanding?
  • Pretend they have a gears-level understanding by exaggerating their superficial knowledge?
  • Feel humiliated because they can't explain their intuition?
  • Attack the concept of gears-level understanding on a political or philosophical level?

Use the concept of gears-level understanding to debug your own knowledge. Learn for your own sake, and allow your learning to naturally attract the credibility it deserves.

Evaluating expertise in others is a different matter. Probably you want to use a cocktail of heuristics:

  • Can they articulate a gears-level understanding?
  • Do they have the credentials and experience you'd expect someone with deep learning in the subject to have?
  • Can they improvise successfully when a new problem is thrown at them?
  • Do other people in the field seem to respect them?

I'm sure there are more.

comment by AllAmericanBreakfast · 2021-01-22T23:29:50.222Z · LW(p) · GW(p)

Reading and re-reading

The first time you read a textbook on a new subject, you're taking in new knowledge. Re-read the same passage a day later, a week later, or a year later, and it will qualitatively feel different.

You'll recognize the sentences. In some parts, you'll skim, because you know it already. Or because it looks familiar -- are you sure which?

And in that skimming mode, you might zoom into and through a patch that you didn't know so well.

When you're reading a textbook for the first time, in short, there are more inherent safeguards to keep you from wasting time. At the very least, when you're reading a sentence, you're gaining knowledge of what's contained in the textbook. Most likely, you're absorbing a lot of new information, even if you only retain a small fraction of it.

Next time, many of those safeguards are lost. A lot of your time will be wasted.

Unfortunately, it's very convenient to "review" by just re-reading the textbook.

When it comes to what, in particular, they're trying to do when they review (physically, with their bodies and books, or mentally), most people are incoherent and inarticulate. But I propose that we can do much better.

Reviewing is about checking that you know X. To check that you know X, you need two things:

  • Knowing the set of all X that you have to review.
  • A test for whether or not you know X.

Let's say you're reviewing acid-base reactions. Then X might include things like the Henderson-Hasselbalch equation, the definition of Ka and pKa, the difference between the Brønsted-Lowry and Lewis definitions of an acid, and so on.

To be able to list these topics is to "know the set of X." To have a meaningful way of checking your understanding is "a test for whether or not you know X."

The nature of that test is up to you. For example, with the Henderson-Hasselbalch equation, you might intuitively decide that just being able to recite it, define each term, and also define pKa and Ka is good enough.
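For concreteness, here is the recall target such a test would imply; these are the standard textbook definitions, included only to illustrate what "recite it and define each term" covers:

$$K_a = \frac{[\mathrm{H^+}][\mathrm{A^-}]}{[\mathrm{HA}]}, \qquad \mathrm{p}K_a = -\log_{10} K_a, \qquad \mathrm{pH} = \mathrm{p}K_a + \log_{10}\frac{[\mathrm{A^-}]}{[\mathrm{HA}]}$$

Passing your own test would mean writing these from memory and saying what each bracketed concentration refers to (conjugate base, undissociated acid, hydrogen ions).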

The set of things to review and the tests that are relevant to your particular state of learning are, in effect, what goes on a "concept sheet" and a "problem set" or set of flashcards.

So learning becomes creating an updated concept sheet to keep track of the concepts you actually need to review, along with a set of resources for testing your knowledge, either by recalling what those concepts mean or by using them to solve problems.

The textbook is a reference for when you're later mystified by what you wrote down on the concept sheet, but in theory you're only reading it straight through so that you can create a concept sheet in the first place. The concept sheet should have page numbers so that it's easy to refer to specific parts of the textbook.
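A minimal sketch of what such a concept sheet could look like as a self-testing tool, assuming you keep it as a small script of your own; the names (Concept, quiz) and the topics and page numbers are placeholders, not from any particular tool or textbook:

```python
import random

class Concept:
    def __init__(self, name, page, prompt):
        self.name = name      # e.g. "Henderson-Hasselbalch equation"
        self.page = page      # textbook page to re-read if you blank on it
        self.prompt = prompt  # the test: what you must recall, derive, or solve

concept_sheet = [
    Concept("Ka and pKa", 412, "Define Ka and pKa and state how they are related."),
    Concept("Henderson-Hasselbalch", 418, "Recite the equation and define each term."),
    Concept("Acid definitions", 405, "State the difference between Bronsted-Lowry and Lewis acids."),
]

def quiz(sheet):
    """Prompt yourself on each concept in random order; collect misses for targeted re-reading."""
    misses = []
    for concept in random.sample(sheet, k=len(sheet)):
        print("\n" + concept.prompt)
        if input("Did you recall it? (y/n) ").strip().lower() != "y":
            misses.append(concept)
    for concept in misses:
        print(f"Re-read p. {concept.page}: {concept.name}")

if __name__ == "__main__":
    quiz(concept_sheet)
```

The point of the structure is exactly what the paragraph above describes: the sheet lists the set of X to review, each entry carries its own test, and the page number routes you back to the textbook only when you fail the test.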

Eventually, you'll even want to memorize the concept sheet. This allows you to "unfold" the concept tree or graph that the textbook contains, all within your mind. Of course, you don't need to recite word-for-word or remember every example, problem, and piece of description. It doesn't need to be the entire textbook, just the stuff that you think is worthwhile to invest in retaining long-term. This is for you, and it's not meant to be arbitrary.

But I propose that studying should never look like re-reading the textbook. You read the textbook to create a study-sheet that references descriptions and practice problems by page number. Then you practice recalling what the concepts on the study-sheet mean, and also memorizing the study sheet itself.

That might ultimately look like being able to remember all the theorems in a math textbook. Maybe not word for word, but good enough that you can get the important details pretty much correct. Achieve this to a good enough degree, and I believe that the topic will become such a part of you that it'll be easier to learn more in the future, rehearse your knowledge conveniently in your head, and add new knowledge with less physical infrastructure.

comment by AllAmericanBreakfast · 2021-01-07T00:42:21.918Z · LW(p) · GW(p)

I just started using GreaterWrong.com, in anti-kibitzer mode. Highly recommended. I notice how unfortunately I've glommed on to karma and status more than is comfortable. It's a big relief to open the front page and just see... ideas!

comment by Raemon · 2021-01-07T02:12:24.278Z · LW(p) · GW(p)

I just went to try this, and something I noticed immediately was that while the anti-kibbitzer applies itself to the post list and to the post page, it doesn't seem to apply to the post-hover-preview.

comment by AllAmericanBreakfast · 2020-12-31T07:14:10.546Z · LW(p) · GW(p)

There's a pretty simple reason why the stock market didn't tank long-term due to COVID. Even if we get 3 million total deaths due to the pandemic, that's "only" around a 5% increase in total deaths over the year where deaths are at their peak. 80% of those deaths are among people of retirement age. Though their spending is around 34% of all spending, the money of those who die from COVID will flow to others who will also spend it.
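Spelling out the arithmetic behind that 5% figure (this reads the 3 million as a worldwide total, and the roughly 60 million baseline is my own outside estimate of worldwide deaths in a normal year, not a number from the note):

$$\frac{3\ \text{million pandemic deaths}}{\approx 60\ \text{million baseline deaths per year}} \approx 5\%$$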

My explanation for the original stock market crash back in Feb/March is that investors were nervous that we'd impose truly strict lockdown measures, or perhaps that the pandemic would more seriously harm working-age people than it does. That would have had a major effect on the economy.

comment by AllAmericanBreakfast · 2020-12-31T03:51:56.578Z · LW(p) · GW(p)

Striving

At any given time, many doors stand wide open before you. They are slowly closing, but you have plenty of time to walk through them. The paths are winding.

Striving is when you recognize that there are also many shortcuts. Their paths are straighter, but the doors leading to them are almost shut. You have to run to duck through.

And if you do that, you'll see that through the almost-shut doors, there are yet straighter roads even further ahead, but you can only make it through if you make a mad dash. There's no guarantee.

To run is exhilarating at first, but soon it becomes deadening as you realize there is a seemingly endless series of doors. There will never be any end to the striving unless you choose to impose such an end. Always, there is a greater reward that you've given up when you do so. Was all your previous striving for naught, to give up when you almost had the greater prize in hand?

There's no solution. This is just what it feels like to move to the right on a long-tailed curve. It's a fact of life, like the efficient market hypothesis. If you're willing to strive, the long-term rewards come at ever-greater short-term costs, and the short-term costs will continue to mount as long as you're making that next investment.

comment by AllAmericanBreakfast · 2020-12-27T20:26:28.068Z · LW(p) · GW(p)

The direction I'd like to see LW moving in as a community

Criticism has a perverse characteristic:

  1. Fresh ideas are easier to criticize than established ideas, because the language, supporting evidence, and theoretical mechanics have received less attention.
  2. Criticism has more of a chilling effect on new thinkers with fresh ideas than on established thinkers with popular ideas.

Ideas that survive into adulthood will therefore tend to be championed by thinkers who are less receptive to criticism.

Maybe we need some sort of "baby criticism" for new ideas. A "developmentally-appropriate criticism," so to speak.

As a community, that might look something like this:

  1. We presume that each post has a core of a good idea contained within it.
  2. We are effusive in our praise of those posts.
  3. We ask clarifying questions, for examples, and for what sorts of predictions the post makes, as though the OP were an expert already. This process lets them get their thoughts together, flesh out the model, and build on it perhaps in future posts.
  4. We focus on parts of the post that seem correct but under-specified, rather than on the parts that seem wrong. If you're digging for gold, 99% of the earth around you will contain nothing of value. If you focus on "digging for dirt," it's highly unlikely that you'll find gold. But if you pan the stream, looking for which direction to walk in where you find the most flecks of gold, you'll start to zero in on the place with the most value to be found.
  5. We show each other care and attention as people who are helping each other develop as writers and thinkers, rather than treating the things people write as the primary object of our concern.
comment by Viliam · 2021-01-01T19:19:31.650Z · LW(p) · GW(p)

This reminds me of the "babble and prune" concept. We should allow... maybe not literally the "babble" stage, but something in between, when the idea is already half-shaped but not completed.

I think the obvious concern is that all kinds of crackpottery may try to enter through this open door, so what would be the balance mechanism? Should authors specify their level of certainty and be treated accordingly? (Maybe choose one of several predefined levels, from "just thinking aloud" to "nitpicking welcome".) In a perfect world, certainty could be deduced from the tone of the article, but this does not work reliably. Something else...?

comment by ChristianKl · 2020-12-28T22:14:03.431Z · LW(p) · GW(p)

We show each other care and attention as people who are helping each other develop as writers and thinkers, rather than treating the things people write as the primary object of our concern.

While this sounds nice on the abstract level I'm not sure what concrete behavior you are pointing to. Could you link to examples of comments that you think do this well?

comment by AllAmericanBreakfast · 2020-12-28T23:08:44.894Z · LW(p) · GW(p)

I don't want to take the time to do what you've requested. Some hypothetical concrete behaviors, however:

  • Asking questions with a tone that conveys a tentative willingness to play with the author's framework or argument, and an interest in hearing more of the author's thoughts.
  • Compliments, "this made me think of," "my favorite part of your post was"
  • Noting connections between a post and the author's previous writings.
  • Offers to collaborate or edit.
comment by AllAmericanBreakfast · 2020-12-03T21:12:31.462Z · LW(p) · GW(p)

Cost/benefit anxiety is not fear of the unknown

When I consider doing a difficult/time-consuming/expensive but potentially rewarding activity, it often provokes anxiety. Examples include running ten miles, doing an extensive blog post series on regenerative medicine, and going to grad school. Let's call this cost/benefit anxiety.

Other times, the immediate actions I'm considering are equally "costly," but one provokes more fear than the others even though it is not obviously stupid. One example is whether or not to start blogging under my real name. Call it fear of the unknown.

It's natural that the brain uses the same emotional system to deal with both types of decisions.

It seems more reasonable to take cost/benefit anxiety seriously. There, the emotion seems to be a "slow down and consider this thoroughly" signal.

Fear of the unknown is different. This seems the appropriate domain for Isusr's fear heuristic [LW · GW]: do your due diligence to check that the feared option is not obviously stupid, and if not, do it.

Then again, maybe fear of the unknown is just another form of cost/benefit anxiety. Maybe it's saying "you haven't adequately thought about the worst or long-term potential consequences here." Perhaps the right approach is in fact to change your action to have small, manageable confrontations with the anxiety in a safe environment; or to do small tests of the considered action to build confidence.

comment by AllAmericanBreakfast · 2020-12-01T19:47:11.856Z · LW(p) · GW(p)

A machine learning algorithm is advertising courses in machine learning to me. Maybe the AI is already out of the box.

comment by AllAmericanBreakfast · 2020-11-21T00:36:44.455Z · LW(p) · GW(p)

An end run around slow government

The US Recommended Dietary Allowance (RDA) for vitamin D is about 600 IU per day. This was established in 2011, and hasn't been updated since. The Food and Nutrition Board of the Institute of Medicine at the National Academy of Sciences sets US RDAs.

According to a 2017 paper, "The Big Vitamin D Mistake," the right level is actually around 8,000 IU/day, and the erroneously low level is due to a statistical mistake. I haven't yet been able to find out whether there is any transparency about when the RDA will be reconsidered.
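As a sketch of the kind of error being claimed (this is my paraphrase of the argument, worth checking against the papers themselves): the RDA is supposed to be a dose that brings about 97.5% of individuals up to the target serum level, but if the dose-response spread is estimated from study-level averages rather than from individuals, the apparent variation shrinks by roughly the square root of the study size, and the "covers nearly everyone" dose gets badly underestimated. A toy simulation of that effect, with made-up numbers chosen only to show the shape of the mistake:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_individual = 20.0   # assumed person-to-person spread in achieved serum level
n_per_study = 100         # assumed participants per study
mean_response = 70.0      # assumed average serum level at some fixed dose

# Simulate individual responses, and the means of many studies of size n.
individuals = rng.normal(mean_response, sigma_individual, size=100_000)
study_means = rng.normal(mean_response, sigma_individual / np.sqrt(n_per_study), size=10_000)

# The 2.5th percentile of study means sits far closer to the mean than the
# 2.5th percentile of individuals, so a dose that looks adequate "for 97.5%
# of studies" can still leave many individuals short of the target.
print(np.percentile(individuals, 2.5))   # roughly mean - 2*sigma (about 31)
print(np.percentile(study_means, 2.5))   # roughly mean - 2*sigma/sqrt(n) (about 66)
```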

But three years since that paper is a long time to wait. Especially when vitamin D deficiency is linked to COVID mortality. And if we want to be good progressives, we can also note that vitamin D deficiency is linked to race, and may be driving the higher rates of death in black communities due to COVID.

We could call the slowness to update the RDA an example of systemic racism!

What do we do when a regulatory board isn't doing its job? Well, we can disseminate the truth over the internet.

But then you wind up with an asymmetric information problem. Reading the health claims of many people promising "the truth," how do you decide whom to believe?

Probably you have the most sway in tight-knit communities, such as your family, your immediate circle of friends, and online forums like this one.

What if you wanted to pressure the FNB to reconsider the RDA sooner rather than later?

Probably giving them some bad press would be one way to do it. This is a symmetric weapon, but this is a situation where we don't actually have anybody who really thinks that incorrect vitamin D RDA levels are a good thing. Except maybe literal racists who are also extremely informed about health supplements?

In a situation where we're not dealing with a partisan divide, but only an issue of bureaucratic inefficiency, applying pressure tactics seems like a good strategy to me.

How do you start such a pressure campaign? Probably you reach out to leaders of the black community, as well as doctors and dietary researchers, and try to get them interested in this issue. Ask them what's being done, and see if there's some kind of work going on behind the scenes. Are most of them aware of this issue?

Prior to that, it's probably important to establish both your credibility and your communication skills. Bring together the studies showing that the issue is a) real and b) relevant in a format that's polished and easy to digest.

And prior to that, you probably want to gauge the difficulty from somebody with some knowhow, and get their blessing. Blessings are important. In my case, my dad spent his career in public health, and I'm going to start there.

comment by Dagon · 2020-11-21T21:46:04.974Z · LW(p) · GW(p)

So, you can't trust the government.  Why do you trust that study?  I talked to my MD about it, and he didn't actually know any more than I about reasoning, but did know that there is some toxicity at higher levels, and strongly recommended I stay below 2500 IU/day.  I haven't fully followed that, as I still have a large bottle of 5000 IU pills, which I'm now taking every third day (with 2000 IUs on the intervening days).  

The European Food Safety Authority, in 2006 (2002 for vitamin D; see page 167 of https://www.efsa.europa.eu/sites/default/files/efsa_rep/blobserver_assets/ndatolerableuil.pdf, page 180 for the recommendation), found that 50 µg (2000 IU) per day is the safe upper limit.

I'm not convinced it's JUST bureaucratic inefficiency - there may very well be difficulties in finding a balanced "one-size-fits-all" recommendation as well, and the judgement of "supplement a bit lightly is safer than over-supplementing" is well in-scope for these general guidelines.

comment by AllAmericanBreakfast · 2020-11-21T23:25:31.081Z · LW(p) · GW(p)

You raise two issues here. One is about vitamin D, and the other is about trust.

Regarding vitamin D, there is an optimal dose for general population health that lies somewhere in between "toxically deficient" and "toxically high." The range from the high hundreds to around 10,000 appears to be well within that safe zone. The open question is not whether 10,000 IUs is potentially toxic - it clearly is not - but whether, among doses in the safe range, a lower dose can be taken to achieve the same health benefits.

One thing to understand is that in the outdoor lifestyle we evolved for, we'd be getting 80% of our vitamin D from sunlight and 20% through food. In our modern indoor lifestyles, we are starving ourselves for vitamin D.

"Supplement a bit lightly is safer than over-supplementing" is only a meaningful statement if you can define the dose that constitutes "a bit lightly" and the dose that is "over-supplementing." Beyond these points, we'd have "dangerously low" and "dangerously high" levels.

To assume that 600 IU is "a bit lightly" rather than "dangerously low" is a perfect example of begging the question.

On the issue of trust, you could just as easily say "so you don't trust these papers, why do you trust your doctor or the government?"

The key issue at hand is that in the absence of expert consensus, non-experts have to come up with their own way of deciding who to trust.

In my opinion, there are three key reasons to prefer a study of the evidence to the RDA in this particular case:

  1. The RDA hasn't been revisited in almost a decade, even simply to reaffirm it. This is despite ongoing research in an important area of study that may have links to our current global pandemic. That's strong evidence to me that the current guidance is as it is for reasons other than active engagement by policy-makers with the current state of vitamin D research.
  2. The statistical error identified in these papers is easy for me to understand. The fact that it hasn't received an official response, nor a peer-reviewed scientific criticism, further undermines the credibility of the current RDA.
  3. The rationale for the need for 10,000 IU/day vitamin D supplements makes more sense to me than the rationale for being concerned about the potential toxic effects of that level of supplementation.

However, I have started an email conversation with the author of The Big Vitamin D Mistake, and have emailed the authors of the original paper identifying the statistical error it cites, to try and understand the research climate further.

I want to know why it is difficult to achieve a scientific consensus on these questions. Everybody has access to the same evidence, and reasonable people ought to be able to find a consensus view on what it means. Instead, the author of the paper described to me a polarized climate in that field. I am trying to check with other researchers he cites about whether his characterization is accurate.

comment by AllAmericanBreakfast · 2020-09-28T22:44:40.675Z · LW(p) · GW(p)

Explanation for why displeasure would be associated with meaningfulness, even though in fact meaning comes from pleasure [LW · GW]:

Meaningful experiences involve great pleasure. They may also come with small pains. Part of how you quantify your great pleasure is the size of the small pain that it superseded.

Pain does not cause meaning. It is a test for the magnitude of the pleasure. But only pleasure is a causal factor for meaning.

comment by Viliam · 2020-09-29T20:36:50.728Z · LW(p) · GW(p)

In a perfect situation, it would be possible to achieve meaningful experiences without pain, but usually it is not possible. A person who optimizes for short-term pain avoidance, will not reach the meaningful experience. Because optimizing for short-term pain avoidance is natural, we have to remind ourselves to overcome this instinct.

comment by AllAmericanBreakfast · 2020-09-29T23:31:12.558Z · LW(p) · GW(p)

This fits with the idea that meaning comes from pleasure, and that great pleasure can be worth a fair amount of pain to achieve. The pain drains meaning away, but the redeeming factor is that it can serve as a test of the magnitude of pleasure, and generate pleasurable stories in the future.

An important counterargument to my hypothesis is that we may find a privileged “high road” to success and pleasure to be less meaningful. This at first might seem to suggest that we do inherently value pain.

In fact, though, what frustrates people about people born with a silver spoon in their mouths is that society seems set up to ensure their pleasure at another’s expense.

It’s not their success or pleasure we dislike. It’s the barriers and pain that we think it’s contextualized in. If pleasure for one means pain for another, then of course we find the pleasure to be less meaningful.

So this isn’t about short-term pain avoidance. It’s about long-term, overall, wise and systemic pursuit of pleasure.

And that pleasure must be not only in the physical experiences we have, but in the stories we tell about it - the way we interpret life. We should look at it, and see that it is good.

If people are wireheading, and we look at that tendency and it causes us great displeasure, that is indeed an argument against wireheading.

We need to understand that there’s no single bucket where pleasure can accumulate. There is a psychological reward system where pleasure is evaluated according to the sensory input and brain state.

Utilitarian hedonism isn’t just about nerve endings. It’s about how we interpret them. If we have a major aesthetic objection to wireheading, that counts from where we’re standing, no matter how much you ratchet up the presumed pleasure of wireheading.

The same goes recursively for any “hack” that could justify wireheading. For example, say you posited that wireheading would be seen as morally good, if only we could find a catchy moral justification for it.

So we let our finest AI superintelligences get to work producing one. Indeed, it’s so catchy that the entire human population acquiesces to wireheading.

Well, if we take offense to the prospect of letting the AI superintelligence infect us with a catchy pro-wireheading meme, then that’s a major point against doing so.

In general “It pleases or displeases me to find action X moral” is a valid moral argument - indeed, the only one there is.

The way moral change happens is by making moral arguments or having moral experiences that in themselves are pleasing or displeasing.

What’s needed, then, for moral change to happen, is to find a pleasing way to spread an idea that is itself pleasing to adopt - or unpleasant to abandon. To remain, that idea needs to generate pleasure for the subscriber, or to generate displeasure at the prospect of abandoning it in favor of a competing moral scheme.

To believe in some notion of moral truth or progress requires believing that the psychological reward mechanism we have attached to morality corresponds best with moral schemes that accord with moral truth.

An argument for that is that true ideas are easiest to fashion into a coherent, simple argument. And true ideas best allow us to interface with reality to advantage. Being good tends to make you get along with others better than being bad, and that’s a more pleasant way to exist.

Hence, even though strong cases can be constructed for immoral behavior, truth and goodness will tend to win in the arms race for the most pleasing presentation. So we can enjoy the idea that there is moral progress and objective moral truth, even though we make our moral decisions merely by pursuing pleasure and avoiding pain.

comment by Matt Goldenberg (mr-hire) · 2020-09-29T00:10:51.240Z · LW(p) · GW(p)

I looked through that post but didn't see any support for the claim that meaning comes from pleasure.

My own theory is that meaning comes from values, and both pain and pleasure are a way to connect to the things we value, so both are associated with meaning.

comment by AllAmericanBreakfast · 2020-09-29T01:53:04.812Z · LW(p) · GW(p)

I'm a classically trained pianist. Music practice involves at least four kinds of pain:

  • Loneliness
  • Frustration
  • Physical pain
  • Monotony

I perceive none of these to add meaning to music practice. In fact, it was loneliness, frustration, and monotony that caused my music practice to be slowly drained of its meaning and led me ultimately to stop playing, even though I highly valued my achievements as a classical pianist and music teacher. If there'd been an issue with physical pain, that would have been even worse.

I think what pain can do is add flavor to a story. And we use stories as a way to convey meaning. But in that context, the pain is usually illustrating the pleasures of the experience or of the positive achievement. In the context of my piano career, I was never able to use these forms of pain as a contrast to the pleasures of practice and performance. My performance anxiety was too intense, and so it also was not a source of pleasure.

By contrast, I herded sheep on the Navajo reservation for a month in the middle of winter. That experience generated many stories. Most of them revolve around a source of pain, or a mistake. But that pain or mistake serves to highlight an achievement.

That achievement could be the simple fact of making it through that month while providing a useful service to my host. Or moments of success within it: getting the sheep to drink from the hole I cut in the icy lake, busting a tunnel through the drifts with my body so they could get home, finding a mother sheep that had gotten lost when she was giving birth, not getting cannibalized by a Skinwalker.

Those make for good stories, but there is pleasure in telling those stories. I also have many stories from my life that are painful to tell. Telling them makes me feel drained of meaning.

So I believe that storytelling has the ability to create pleasure out of painful or difficult memories. That is why it feels meaningful: it is pleasurable to tell stories. And being a good storyteller can come with many rewards. The net effect of a painful experience can be positive in the long run if it lends itself to a lot of good storytelling.

Where do values enter the picture?

I think it's because "values" is a term for the types of stories that give us pleasure. My community gets pleasure out of the stories about my time on the Navajo reservation. They also feel pleasure in my story about getting chased by a bear. I know which of my friends will feel pleasure in my stories from Burning Man, and who will find them uncomfortable.

So once again, "values" is a gloss for the pleasure we take in certain types of stories. Meaning comes from pleasure; it appears to come from values because values also come from pleasure. Meaning can come from pain only indirectly. Pain can generate stories, which generate pleasure in the telling.

comment by Matt Goldenberg (mr-hire) · 2020-09-29T17:52:19.495Z · LW(p) · GW(p)

"values" is a term for the types of stories that give us pleasure.

It really depends on what you mean by "pleasure".  If pleasure is just "things you want", then almost tautologically meaning comes from pleasure, since you want meaning.

If instead, pleasure is a particular phenomenological feeling similar to feeling happy or content, I think that many of us actually WANT the meaning that comes from living our values, and it also happens to give us pleasure.  I think that there are also people that just WANT the pleasure, and if they could get it while ignoring their values, they would.

I call this the"Heaven/Enlightenment" dichotomy, and I think it's a frequent misunderstanding.

I've seen some people say "all we care about is feeling good, and people who think they care about the outside world are confused." I've also seen people say "All we care about is meeting our values, and people who think it's about feeling good are confused."

Personally, I think that people are more towards one side of the spectrum or the other along different dimensions, and I'm inclined to believe both sides about their own experience.

comment by AllAmericanBreakfast · 2020-09-29T19:30:16.479Z · LW(p) · GW(p)

I think we can consider pleasure, along with altruism, consistency, rationality, fitting the categorical imperative, and so forth as moral goods.

People have different preferences for how they trade off one against the other when they're in conflict. But they of course prefer them not to be in conflict.

What I'm interested is not what weights people assign to these values - I agree with you that they are diverse - but on what causes people to adopt any set of preferences at all.

My hypothesis is that it's pleasure. Or more specifically, whatever moral argument most effectively hijacks an individual person's psychological reward system.

So if you wanted to understand why another person considers some strange action or belief to be moral, you'd need to understand why the belief system that they hold gives them pleasure.

Some predictions from that hypothesis:

  • People who find a complex moral argument unpleasant to think about won't adopt it.
  • People who find a moral community pleasant to be in will adopt its values.
  • A moral argument might be very pleasant to understand, rehearse, and think about, and unpleasant to abandon. It might also be unpleasant in the actions it motivates its subscriber to undertake. It will continue to exist in their mind if the balance of pleasure in belief to displeasure in action is favorable.
  • Deprogramming somebody from a belief system you find abhorrent is best done by giving them alternative sources of "moral pleasure." Examples of this include the ways people have deprogrammed people from cults and the KKK, by including them in their social gatherings, including Jewish religious dinners, and making them feel welcome. Eventually, the pleasure of adopting the moral system of that shared community displaces whatever pleasure they were deriving from their former belief system.
  • Paying somebody in money and status to uphold a given belief system is a great way to keep them doing it, no matter how silly it is.
  • If you want people to do more of a painful but necessary action X, helping them feel compensating forms of moral pleasure is a good way to go about it. Effective Altruism is a great example. By helping people understand how effective donations or direct work can save lives, they give people a feeling of heroism. Its failure mode is making people feel like the demands are impossible, and the displeasure of that disappointment is a primary issue in that community.
  • Another good way to encourage more of a painful but necessary action X is to teach people how to shape it into a good story that they and others will appreciate in the telling. Hence the story-fication of charity.
  • Many people don't give to charity because their community disparages it as "do-gooderism," as futile, as bragging, or as a tasteless display of wealth and privilege. If you want people to give more to charity, you have to give people a way of being able to enjoy talking about their charitable contributions. One solution is to form a community in which that's openly accepted and appreciated. Like EA.
  • Likewise for the rationality community. If you want people to do more good epistemology outside of academia, give them an outlet where that'll be appreciated and an axis from where it can be spread.
comment by Matt Goldenberg (mr-hire) · 2020-09-30T19:10:44.428Z · LW(p) · GW(p)

My hypothesis is that it's pleasure. Or more specifically, whatever moral argument most effectively hijacks an individual person's psychological reward system.

This just kicks the can down the road on you defining pleasure; all of my points still apply.

If instead, pleasure is a particular phenomenological feeling similar to feeling happy or content, I think that many of us actually WANT the meaning that comes from living our values, and it also happens to give us pleasure.

That is, I think it's possible to say that pleasure kicks in around values that we really want, rather than vice versa.

comment by AllAmericanBreakfast · 2020-09-23T17:13:29.408Z · LW(p) · GW(p)

Sci-hub has moved to https://sci-hub.st/

comment by AllAmericanBreakfast · 2020-09-23T16:58:35.350Z · LW(p) · GW(p)

Do you treat “the dark arts” as a set of generally forbidden behaviors, or as problematic only in specific contexts?

As a war of good and evil or as the result of trade-offs between epistemic rationality and other values?

Do you shun deception and manipulation, seek to identify contexts where they’re ok or wrong, or embrace them as a key to succeeding in life?

Do you find the dark arts dull, interesting, or key to understanding the world, regardless of whether or not you employ them?

Asymmetric weapons may be the only source of edge for the truth itself. But should the side of the truth therefore eschew symmetric weapons?

What is the value of the label/metaphor “dark arts/dark side?” Why the normative stance right from the outset? Isn’t the use of this phrase, with all its implications of evil intent or moral turpitude, itself an example of the dark arts? An attempt to halt the workings of other minds, or of our own?

comment by Viliam · 2020-09-24T15:31:15.963Z · LW(p) · GW(p)

There are things like "lying for a good cause", which is a textbook example of what will go horribly wrong, because you almost certainly underestimate the second-order effects. Like the "do not wear face masks, they are useless" expert advice for COVID-19, which was a "clever" dark-arts move aimed at preventing people from buying up necessary medical supplies. A few months later, hundreds of thousands had died (also) thanks to this advice.

(It would probably be useful to compile a list of lying for a good cause gone wrong, just to drive home this point.)

Thinking about the historical record of people promoting the use of dark arts within the rationalist community, consider Intentional Insights [EA · GW]. It turned out the organization was also using the dark arts against the rationalist community itself. (There is a more general lesson here: whenever a fan of dark arts tries to make you see the wisdom of their ways, you should assume that at this very moment they are probably already using the same techniques on you. Why wouldn't they, given their expressed belief that this is the right thing to do?)

The general problem with lying is that people are bad at keeping multiple independent models of the world in their brains. The easiest, instinctive way to convince others about something is to start believing it yourself. Today you decide that X is a strategic lie necessary for achieving goal Y, and tomorrow you realize that actually X is more correct than you originally assumed (this is how self-deception feels from inside). This is in conflict with our goal to understand the world better. Also, how would you strategically lie as a group? Post it openly online: "Hey, we are going to spread the lie X for instrumental reasons, don't tell anyone!" :)

Then there are things like "using techniques-orthogonal-to-truth to promote true things". Here I am quite guilty myself, because long ago I advocated turning the Sequences into a book, reasoning, among other things, that for many people a book is inherently higher-status than a website. Obviously, converting a website to a book doesn't increase its truth value. This comes with smaller risks, such as getting high on your own supply (convincing ourselves that articles in the book are inherently more valuable than those that didn't make it for whatever reason, e.g. being written after the book was published), or wasting too many resources on things that are not our goal.

But at least, in this category, one can openly and correctly describe their beliefs and goals.

Metaphorically, reason is traditionally associated with vision/light (e.g. "enlightenment"), ignorance and deception with blindness/darkness. The "dark side" also references Star Wars, which this nerdy audience is familiar with. So, if the use of the term itself is an example of dark arts (which I suppose it is), at least it is the type where I can openly explain how it works and why we do it, without ruining its effect.

But does it make us update too far against the use of deception? Uhm, I don't know what is the optimal amount of deception. Unlike Kant, I don't believe it's literally zero. I also believe that people err on the side of lying more than is optimal, so a nudge in the opposite direction is on average an improvement, but I don't have a proof for this.

comment by AllAmericanBreakfast · 2020-09-24T16:26:07.587Z · LW(p) · GW(p)

We already had words for lies, exaggerations, incoherence, and advertising. Along with a rich discourse of nuanced critiques and defenses of each one.

The term “dark arts” seems to lump all these together, then uses cherry picked examples of the worst ones to write them all off. It lacks the virtue of precision. We explicitly discourage this way of thinking in other areas. Why do we allow it here?

comment by AllAmericanBreakfast · 2020-09-04T02:50:17.488Z · LW(p) · GW(p)

How to reach simplicity?

You can start with complexity, then simplify. But that's style.

What would it mean to think simple?

I don't know. But maybe...

  • Accept accepted wisdom.
  • Limit your words.
  • Rehearse your core truths, think new thoughts less.
  • Start with inner knowledge. Intuition. Genius. Vision. Only then, check yourself.
  • Argue if you need to, but don't ever debate. Other people can think through any problem you can. Don't let them stand in your way just because they haven't yet.
  • If you know, let others find their own proofs. Move on with the plan.
  • Be slow. Rest. Deliberate. Daydream. But when you find the right project, unleash everything you have. Learn what you need to learn and get the job done right.
comment by AllAmericanBreakfast · 2020-08-12T18:09:44.626Z · LW(p) · GW(p)

Question re: "Why Most Published Research Findings are False":

Let R be the ratio of the number of “true relationships” to “no relationships” among those tested in the field... The pre-study probability of a relationship being true is R/(R + 1).

What is the difference between "the ratio of the number of 'true relationships' to 'no relationships' among those tested in the field" and "the pre-study probability of a relationship being true"?

comment by AllAmericanBreakfast · 2020-08-12T20:17:07.430Z · LW(p) · GW(p)

From Reddit:

You could think of it this way: If R is the ratio of (combinations that total N on two dice) to (combinations that don't total N on two dice), then the chance of (rolling N on two dice) is R/(R+1). For example, there are 2 ways to roll a 3 (1 and 2, and 2 and 1) and 34 ways to not roll a 3. The probability of rolling a 3 is thus (2/34)/(1+2/34)=2/36.
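
As a sanity check, here's a minimal Python sketch (purely illustrative; nothing like it appears in the paper or the Reddit comment) that enumerates the two-dice example and confirms that R/(R+1) reproduces the ordinary probability:

```python
# Check the ratio-to-probability identity R / (R + 1) on the two-dice example.
from itertools import product

target = 3
rolls = list(product(range(1, 7), repeat=2))        # all 36 equally likely outcomes
hits = sum(1 for a, b in rolls if a + b == target)  # 2 ways to roll a 3
misses = len(rolls) - hits                          # 34 ways not to

R = hits / misses                 # ratio of "true" to "false" outcomes: 2/34
print(R / (R + 1))                # pre-study probability from the ratio: 0.0555... (= 2/36)
print(hits / len(rolls))          # ordinary probability: the same number
```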

comment by AllAmericanBreakfast · 2021-01-19T19:24:35.474Z · LW(p) · GW(p)

How much of rationality is specialized?

Cultural transmission of knowledge is the secret of our success.

Children comprise a culture. They transmit knowledge of how to insult and play games, complain and get attention. They transmit knowledge on how to survive and thrive with a child's priorities, in a child's body, in a culture that tries to guarantee that the material needs of children are taken care of.

General national cultures teach people very broad, basic skills. Literacy, the ability to read and discuss the newspaper. How to purchase consumer goods. How to cope with boredom. What to do if your life is in danger. Perhaps how to meet people.

All people are involved in some sort of personal culture. This comprises their understanding of the personalities of their coworkers, friends, and relations; technical or social knowledge of use on the job; awareness of their own preferences and possessions.

A general-rationality culture transmits skills that help us find and enter environments that depend on sound thinking, technology, and productivity, and where the participants are actively trying to improve their own community.

That general-rationality culture may ultimately push people into a much narrower specialized-rationality culture, such as a specific technical career, or a specific set of friendships that are actively self-improving. This becomes our personal culture.

To extend the logic further, there are nested general and specialized rational cultures within a single specialized culture. For example, there are over 90,000 pediatricians in the USA. That career is a specialized rational culture, but it also has a combination of "how to approach general pediatrics in a rational manner" and "specialized rational cultures within pediatrics."

It may turn out that, at whatever level of specialization a person is at, general-purpose rationality is:

  • Overwhelmingly useful at a specific point in human development, and then less and less so as they move further down a specialized path.
  • Constantly necessary, but only as a small fraction of their overall approach to life.
  • Less and less necessary over time as their culture improves its ability to coordinate between specialties and adapt to change.
  • Defined by being self-eliminating. The most potent instrumental and epistemic rationality may be best achieved by moving furthest down a very specialized path. The faster people exchange general knowledge and investments for more specialized forms, the better they achieve their goals, and the more they can say they were being rational in the first place. Rationality is known by the tendency of its adherents to become very specialized, and very comfortable and articulate about why they ended up so specialized. They have a weird job and they know exactly why they're doing it.
comment by AllAmericanBreakfast · 2020-11-11T06:38:41.745Z · LW(p) · GW(p)

What is the #1 change that LW has instilled in me?

Participating in LW has instilled the virtue of goal orientation. All other virtues, including epistemic rationality, flow from that.

Learning how to set goals, investigate them, take action to achieve them, pivot when necessary, and alter your original goals in light of new evidence is a dynamic practice, one that I expect to retain for a long time.

Many memes circulate around this broad theme. But only here have I been able to develop an explicit, robust, ever-expanding framework for making and thinking about choices and actions.

This doesn't mean I'm good at it, although I am much better than I used to be. It simply means that I'm goal-oriented about being goal-oriented. It feels palpably, viscerally different, from moment to moment.

Strangely enough, this goal orientation developed from a host of pre-existing desires. For coherence, precision, charity, logic, satisfaction, security. Practicing those led to goal orientation. Goal orientation is leading to other places.

Now, I recognize that the sense of right thinking comes through in a piece of writing when the author seems to share my goals and to advance them through their work. They are on my team, not necessarily culturally or politically, but on a more universal level, and they are helping us win.

I think that goal orientation is a hard quality to instill, although we are biologically hardwired to have desires, imaginations, intentions, and all the other psychological precursors to a goal.

But a goal. That is something refined and abstracted from the realm of the biological, although still bearing a one-to-one relation to it. I don't know how you'd teach it. I think it comes through practice. From the sense that something can be achieved. Then trying to achieve it and realizing that not only were you right, but you were thinking too small. So many things can be achieved.

And then the passion starts, perhaps. The intoxication of building a mechanism - in any medium - that gives the user some new capability or idea, makes you wonder what you can do next. It makes you want to connect with others in a new way: fellow makers and shapers of the world, fellow agents. It drives home the pressing need for a shared language and virtuous behavior, lest potential be lost or disaster strike.

I don't move through the world as I did before.

comment by AllAmericanBreakfast · 2021-01-07T22:32:19.753Z · LW(p) · GW(p)

I've noticed that when I write posts or questions, much of the text functions as "planning" for what's to come. Often, I'm organizing my thoughts as I write, so that's natural.

But does that "planning" text help organize the post and make it easier to read? Or is it flab that I should cut?

comment by AllAmericanBreakfast · 2020-11-28T05:01:02.618Z · LW(p) · GW(p)

Thinking, Too Fast and Too Slow

I've noticed that there are two important failure modes in studying for my classes.

Too Fast: This is when learning breaks down because I'm trying to read, write, compute, or connect concepts too quickly.

Too Slow: This is when learning fails, or just proceeds too inefficiently, because I'm being too cautious, obsessing over words, trying to remember too many details, etc.

One hypothesis is that there's some speed of activity that's ideal for any given person, depending on the subject matter and their current level of comfort with it.

I seem to have some level of control over the speed and cavalier confidence I bring to answering questions. Do I put down the first response that comes into my head, or rack my brain looking for some sort of tricky exception that might be relevant?

Deciding what that speed should be has always been intuitive. Is there some leverage here to enhance learning by sensitizing myself to the speed at which I ought to be practicing?

comment by AllAmericanBreakfast · 2020-11-11T22:08:14.647Z · LW(p) · GW(p)

Different approaches to learning seem to be called for in fields with varying levels of paradigm consensus. The best approach to learning undergraduate math/CS/physics/chemistry seems different from the best one for learning biology, which again differs from the best approach to studying economics or the humanities*.

High-consensus disciplines have a natural sequential order, and the empirical data is very closely tied to an a priori predictive structure. You develop understanding by doing calculations and making theory-based arguments, along with empirical work/applications and intuition-building.

Medium-consensus disciplines start with a lot of memorization of empirical data, tied together with broad frameworks that let the parts "hang together" in a way that is legible and reliable, but imprecise. Lack of scientific knowledge about empirical data, along with massive complexity of the systems under study, prevent a full consensus accounting.

Low-consensus disciplines involve contrasting perspectives, translating complex arguments into accessible language, and applying broad principles to current dilemmas.

High-consensus disciplines can be very fun to study. Make the argument successfully, and you've "made a discovery."

The low-consensus disciplines are also fun. When you make an argument, you're engaged in an act of persuasion. That's what the humanities are for.

But those medium-consensus disciplines are in kind of an uncomfortable middle that doesn't always satisfy. You wind up memorizing and regurgitating a lot of empirical data and lab work, but persuasive intellectual argument is the exception, rather than the rule.

For someone who's highly motivated by persuasive intellectual argument, what's the right way forward? To try and engage with biology in a way that somehow incorporates more persuasive argument? To develop a passion for memorization? To accept that many more layers of biological knowledge must accumulate before you'll be conversant in it?

*I'm not sure these categories are ideal.

comment by AllAmericanBreakfast · 2020-10-23T20:14:32.765Z · LW(p) · GW(p)

What rationalists are trying to do is something like this:

  1. Describe the paragon of virtue: a society of perfectly rational human beings.
  2. Explain both why people fall short of that ideal, and how they can come closer to it.
  3. Explore the tensions in that account, put that plan into practice on an individual and communal level, and hold a meta-conversation about the best ways to do that.

This looks exactly like virtue ethics.

Now, we have heard that the meek shall inherit the earth. So we eschew the dark arts; embrace the virtues of accuracy, precision, and charity; steelman our opponents' arguments; try to cite our sources; question ourselves first; and resist the temptation to simplify the message for public consumption.

Within those bounds, however, we need to address a few question-clusters.

What keeps our community strong? What weakens it? What is the greatest danger to it in the next year? What is the greatest opportunity? What is present in abundance? What is missing?

How does this community support our individual growth as rationalists? How does it detract from it? How could we be leveraging what our community has to offer? How could we give back?

comment by AllAmericanBreakfast · 2020-08-21T03:13:20.241Z · LW(p) · GW(p)

You can justify all sorts of spiritual ideas by a few arguments:

  1. They're instrumentally useful in producing good feelings between people.
  2. They help you escape the typical mind fallacy.
  3. They're memetically refined, which means they'll fit better with your intuition than, say, trying to guess where the people you know fit on the OCEAN scale.
  • They're provocative and generative of conversation in a way that scientific studies aren't. Partly that's because the language they're wrapped in is more intriguing, and partly it's because everybody's on a level playing field.
  • They're a way to escape the trap of intelligence-signalling, and they lower the barrier for verbalizing creative ideas. If you're able to talk about astrology, it lets people feel like they have permission to babble.
  • They're aesthetically pleasing if you don't take them too seriously.
comment by AllAmericanBreakfast · 2020-08-21T03:31:47.872Z · LW(p) · GW(p)

I would be interested in arguments about why we should eschew them that don't resort to activist ideas of making the world a "better place" by purging the world of irrationality and getting everybody on board with a more scientific framework for understanding social reality or psychology.

I'm more interested in why individual people should anticipate that exploring these spiritual frameworks will make their lives worse, either hedonistically or by some reasonable moral framework. Is there a deontological or utilitarian argument against them?

comment by AllAmericanBreakfast · 2020-07-29T19:22:41.714Z · LW(p) · GW(p)

A checklist for the strength of ideas:

Think "D-SHARP"

  • Is it worth discussing?
  • Is it worth studying?
  • Is it worth using as a heuristic?
  • Is it worth advertising?
  • Is it worth regulating or policing?

Worthwhile research should help the idea move either forward or backward through this sequence.

comment by AllAmericanBreakfast · 2020-07-27T01:24:00.194Z · LW(p) · GW(p)

Why isn’t California investing heavily in desalination? Has anybody thought through the economics? Is this a live idea?

comment by Dagon · 2020-07-27T16:12:03.215Z · LW(p) · GW(p)

There's plenty of research going on, but AFAIK, no particular large-scale push for implementation. I haven't studied the topic, but my impression is that California can mostly get by with current sources and conservation for a few decades yet. Desalinization is expensive, not just in terms of money but in terms of energy - scaling it up before it's absolutely needed is a net environmental harm.

comment by ChristianKl · 2020-07-27T18:06:35.582Z · LW(p) · GW(p)

This article seems to be about the case. The economics seem unclear. The politics seem bad because it means taking on the environmentalists.

comment by AllAmericanBreakfast · 2020-07-22T01:57:22.891Z · LW(p) · GW(p)

My modified Pomodoro has been working for me. I set a timer for 5 minutes and start working. Every 5 minutes, I just reset the timer and continue.

For some reason it gets my brain into "racking up points" mode. How many 5-minute sessions can I do without stopping or getting distracted? Aware as I am of my distractibility, this has been an unquestionably powerful technique for me to expand my attention span.
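
For anyone who wants the mechanical version, here is a bare-bones Python sketch of such a repeating timer (an assumption about the setup; the original comment presumably just used a phone or kitchen timer):

```python
# Modified Pomodoro as described above: the timer restarts every 5 minutes
# and simply counts how many blocks you've "racked up" before stopping.
import time

BLOCK_MINUTES = 5
blocks_completed = 0

try:
    while True:
        time.sleep(BLOCK_MINUTES * 60)   # work until the timer fires
        blocks_completed += 1
        print(f"Block {blocks_completed} done - timer reset, keep going.")
except KeyboardInterrupt:
    print(f"\nStopped after {blocks_completed} five-minute blocks.")
```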

comment by AllAmericanBreakfast · 2020-07-21T20:23:29.056Z · LW(p) · GW(p)

All actions have an exogenous component and an endogenous component. The weights we perceive differ from action to action, context to context.

The endogenous component has causes and consequences that come down to the laws of physics.

The exogenous component has causes and consequences from its social implications. The consequences, interpretation, and even the boundaries of where the action begins and ends are up for grabs.

comment by AllAmericanBreakfast · 2020-07-15T22:27:55.981Z · LW(p) · GW(p)

Failure modes in important relationships

  • Being quick and curt when they want to connect and share positive emotions
  • Meeting negative emotions with blithe positive emotions (i.e. pretending they're not angry, anxious, etc.)
  • Mirroring negative emotions: meeting anxiety with anxiety, anger with anger
  • Being uncompromising, overly "logical"/assertive to get your way in the moment
  • Not trying to express what you want, even to yourself
  • Compromising/giving in, hoping next time will be "your turn"

Practice this:

  • Focusing [LW · GW] to identify your own elusive feelings
  • Empathy to identify and express the other person's needs, feelings, information. Look for a "that's right." You're not rushing to win, nor rushing to receive empathy. The more they reveal, the better it is for you (and for them, because now you can help find a high-value trade rather than a poor compromise).
comment by AllAmericanBreakfast · 2020-07-11T19:47:54.530Z · LW(p) · GW(p)

Good reading habit #1: Turn absolute numbers into proportions and proportions into absolute numbers.

For example, in reading "With almost 1,000 genes discovered to be differentially expressed between low and high passage cells [in mouse insulinoma cells]," look up the number of mouse genes (25,000) and turn it into a percentage so that you can see that 1,000 genes is 4% of the mouse genome.
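
A trivial Python sketch of the same arithmetic (the 25,000 figure is the rough genome size quoted above, not an exact count):

```python
# Turn the absolute count into a proportion of the (approximate) mouse genome.
differentially_expressed = 1_000
mouse_genes_total = 25_000            # rough figure used in the example

proportion = differentially_expressed / mouse_genes_total
print(f"{proportion:.0%} of the mouse genome")   # -> 4%
```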

comment by AllAmericanBreakfast · 2020-07-18T02:01:43.041Z · LW(p) · GW(p)

What is the difference between playing devil's advocate and steelmanning an argument? I'm interested in any and all attempts to draw a useful distinction, even if they're only partial.

Attempts:

  • Devil's advocate comes across as being deliberately disagreeable, while steelmanning comes across as being inclusive.
  • Devil's advocate involves advancing a clearly-defined argument. Steelmanning is about clarifying an idea that gets a negative reaction due to factors like word choice or some other superficial factor.
  • Devil's advocate is a political act and is only relevant in a conversation between two or more people. Steelmanning can be social, but it can also be done entirely in conversation with yourself.
  • Devil's advocate is about winning an argument, and can be done even if you know exactly how the argument goes and know in advance that you'll still disagree with it when you're done making it. Steelmanning is about exploring an idea without preconceptions about where you'll end up.
  • Devil's advocate doesn't necessarily mean advancing the strongest argument, only the one that's most salient, hardest for your conversation partner to argue against, or most complex or interesting. Steelmanning is about searching for an argument that you genuinely find compelling, even if it's as simple as admitting your own lack of expertise and the complexity of the issue.
  • Devil's advocate can be a diversionary or stalling tactic, meant to delay or avoid an unwanted conclusion of a larger argument by focusing in on one of its minor components. Steelmanning is done for its own sake.
  • Devil's advocate comes with a feeling of tension, attention-hogging, and opposition. Steelmanning comes with a feeling of calm, curiosity, and connection.
comment by AllAmericanBreakfast · 2020-07-17T01:58:50.950Z · LW(p) · GW(p)

Empathy is inexpensive and brings surprising benefits. It takes a little bit of practice and intent. Mainly, it involves stating the obvious assumption about the other person's experience and desires. Offer things you think they'd want and that you'd be willing to give. Let them agree or correct you. This creates a good context in which high-value trades can occur, without needing a conscious, overriding, selfish goal to guide you from the start.

comment by Matt Goldenberg (mr-hire) · 2020-07-17T21:13:29.481Z · LW(p) · GW(p)

FWIW, I like to be careful about my terms here.

Empathy is feeling what the other person is feeling.

Understanding is understanding what the other person is feeling.

Active Listening is stating your understanding and letting the other person correct you.

Empathic listening is expressing how you feel what the other person is feeling.

In this case, you stated Empathy, but you're really talking about Active Listening.  I agree it's inexpensive and brings surprising benefits.

comment by Raemon · 2020-07-17T21:33:28.116Z · LW(p) · GW(p)

I think whether it's inexpensive isn't that obvious. I think it's a skill/habit, and it depends a lot on whether you've cultivated the habit, and on your mental architecture.

comment by Matt Goldenberg (mr-hire) · 2020-07-17T21:37:18.528Z · LW(p) · GW(p)

Active listening at a low level is fairly mechanical, and can still accrue quite a few benefits. It's not as dependent on mental architecture as something like empathic listening. It does require some mindfulness to create the habit, but for most people I'd put it on only a slightly higher level of difficulty to acquire than e.g. brushing your teeth.

comment by Raemon · 2020-07-17T21:46:25.390Z · LW(p) · GW(p)

Fair, but I think gaining a new habit like brushing your teeth is actually pretty expensive.

comment by AllAmericanBreakfast · 2020-07-17T22:40:55.679Z · LW(p) · GW(p)

Empathy isn't like brushing your teeth. It's more like berry picking. Evolution built you to do it, you get better with practice, and it gives immediate positive feedback. Nevertheless, due to a variety of factors, it is a sorely neglected practice, even when the bushes are growing in the alley behind your house.

comment by AllAmericanBreakfast · 2020-07-17T22:36:49.165Z · LW(p) · GW(p)

I don't think what I'm calling empathy, either in common parlance or in actual practice, decomposes neatly. For me, these terms comprise a model of intuition that obscures with too much artificial light.

comment by Matt Goldenberg (mr-hire) · 2020-07-17T23:22:35.549Z · LW(p) · GW(p)

In that case, I don't agree that the thing you're claiming has low costs. As Raemon says in another comment, this type of intuition only comes easily to certain people. If you're trying to lump together the many skills I just pointed to, some come easily to some people and some don't.

If however, the thing you're talking about is the skill of checking in to see if you understand another person, then I would refer to that as active listening.

comment by AllAmericanBreakfast · 2020-07-18T01:47:54.007Z · LW(p) · GW(p)

Of course, you're right. This is more a reminder to myself and others who experience empathy as inexpensive.

Though empathy is cheap, there is a small barrier, a trivial inconvenience, a non-zero cost to activating it. I too often neglect it out of sheer laziness or forgetfulness. It's so cheap and makes things so much better that I'd prefer to remember and use it in all conversations, if possible.

comment by AllAmericanBreakfast · 2020-07-15T21:36:37.866Z · LW(p) · GW(p)

Chris Voss thinks empathy is key to successful negotiation.

Is there a line between negotiating and not, or only varying degrees of explicitness?

Should we be openly negotiating more often?

How do you define success, when at least one of his own examples of a “successful negotiation” consists of giving in entirely to the other side?

I think the point is that the relationship comes first, greed second. Negotiation for Voss is exchange of empathy, seeking information, being aware of your leverage. Those factors are operating all the time - that’s the relationship.

The difference between that and normal life? Negotiation is making it explicit.

Are there easy ways to extend more empathy in more situations? Casual texts? First meetings? Chatting with strangers?

comment by AllAmericanBreakfast · 2021-01-04T05:43:46.441Z · LW(p) · GW(p)

Hot take: "sushi-grade" and "sashimi-grade" are marketing terms that mean nothing in terms of food safety. Freezing inactivates pretty much any parasites that might have been in the fish.

I'm going to leave these claims unsourced, because I think you should look it up and judge the credibility of the research for yourself.

comment by Matt Goldenberg (mr-hire) · 2021-01-04T18:51:05.790Z · LW(p) · GW(p)

It's partially about taste, isn't it? Sushi-grade and sashimi-grade fish will theoretically smell less fishy.

comment by AllAmericanBreakfast · 2021-01-04T20:15:17.551Z · LW(p) · GW(p)

Fishy smell in saltwater fish is caused by the breakdown of TMAO to TMA (trimethylamine). You can rinse off TMA on the surface to reduce the smell. Fresher fish should also have less smell.

So if people are saying “sushi grade” when what they mean is “fresh,” then why not just say “fresh?” It’s a marketing term.

comment by Matt Goldenberg (mr-hire) · 2021-01-04T21:05:16.144Z · LW(p) · GW(p)

I always thought sushi grade was just the term for "really really fresh  :)"

comment by AllAmericanBreakfast · 2020-10-01T04:05:35.793Z · LW(p) · GW(p)

FUN GAME:

Guess the R^2 for the trendline on a plot of bioinformatics master's degrees: tuition vs. US News & World Report ranking.

Answer...

.

.

.

.

.

.

.

.

.

.

0.137
