AllAmericanBreakfast's Shortform

post by AllAmericanBreakfast · 2020-07-11T19:08:01.705Z · score: 5 (1 votes) · LW · GW · 86 comments

Comments sorted by top scores.

comment by AllAmericanBreakfast · 2020-09-28T00:33:47.003Z · score: 27 (8 votes) · LW(p) · GW(p)

SlateStarCodex, EA, and LW helped me get out of the psychological, spiritual, political nonsense in which I was mired for a decade or more.

I started out feeling a lot smarter. I think it was community validation + the promise of mystical knowledge.

Now I've started to feel dumber. Probably because the lessons have sunk in enough that I catch my own bad ideas and notice just how many of them there are. Worst of all, it's given me ambition to do original research. That's a demanding task, one where you have to accept feeling stupid all the time.

But I still look down that old road and I'm glad I'm not walking down it anymore.

comment by Viliam · 2020-09-28T19:47:43.506Z · score: 6 (3 votes) · LW(p) · GW(p)

I started out feeling a lot smarter. I think it was community validation + the promise of mystical knowledge.

Too smart for your own good. You were supposed to believe it was about rationality. Now we have to ban you and erase your comment before other people can see it. :D

Now I've started to feel dumber. Probably because the lessons have sunk in enough that I catch my own bad ideas and notice just how many of them there are. [...] you have to accept feeling stupid all the time. But I still look down that old road and I'm glad I'm not walking down it anymore.

Yeah, same here.

comment by AllAmericanBreakfast · 2020-07-15T03:30:21.988Z · score: 25 (7 votes) · LW(p) · GW(p)

Things I come to LessWrong for:

  • An outlet and audience for my own writing
  • Acquiring tools of good judgment and efficient learning
  • Practice at charitable, informal intellectual argument
  • Distraction
  • A somewhat less mind-killed politics

Cons: I'm frustrated that I so often play Devil's advocate, or else make up justifications for arguments under the principle of charity. Conversations feel profit-oriented and conflict-avoidant. Overthinking to the point of boredom and exhaustion. My default state toward books and people is bored skepticism and political suspicion. I'm less playful than I used to be.

Pros: My own ability to navigate life has grown. My imagination feels almost telepathic: I have ideas nobody I know has ever considered, then discover there's cutting-edge engineering work going on in that field that I can be a part of, or real demand for the project I'm developing. I am more decisive and confident than I used to be. Others see me as a leader.

comment by Viliam · 2020-07-15T19:01:20.185Z · score: 5 (3 votes) · LW(p) · GW(p)

Some people optimize for drama. It is better to put your life in order, which often means getting the boring things done. And then, when you need some drama, you can watch a good movie.

Well, it is not completely a dichotomy. There is also some fun to be found e.g. in serious books. Not the same intensity as when you optimize for drama, but still. It's like when you stop eating refined sugar, and suddenly you notice that the fruit tastes sweet.

comment by AllAmericanBreakfast · 2020-08-09T16:23:49.240Z · score: 23 (9 votes) · LW(p) · GW(p)

Math is training for the mind, but not like you think

Just a hypothesis:

People have long thought that math is training for clear thinking. Just one version of this meme that I scooped out of the water:

“Mathematics is food for the brain,” says math professor Dr. Arthur Benjamin. “It helps you think precisely, decisively, and creatively and helps you look at the world from multiple perspectives . . . . [It’s] a new way to experience beauty—in the form of a surprising pattern or an elegant logical argument.”

But math doesn't obviously seem to be the only way to practice precision, decisiveness, creativity, beauty, or broad perspective-taking. What about logic, programming, rhetoric, poetry, anthropology? This sounds like marketing.

As I've studied calculus, coming from a humanities background, I'd argue it differently.

Mathematics shares with a small fraction of other related disciplines and games the quality of unambiguous objectivity. It also has the ~unique quality that you cannot bullshit your way through it. Miss any link in the chain and the whole thing falls apart.

It can therefore serve as a more reliable signal, to self and others, of one's own learning capacity.

Experiencing a subject like that can be training for the mind, because becoming successful at it requires cultivating good habits of study and expectations for coherence.

comment by niplav · 2020-08-09T21:06:36.993Z · score: 6 (4 votes) · LW(p) · GW(p)

Math is interesting in this regard because it is both very precise and there's no clear-cut way of checking your solution except running it by another person (or becoming so good at math that you know whether your proof is bullshit).

Programming, OTOH, gives you clear feedback loops.

comment by AllAmericanBreakfast · 2020-08-10T00:17:43.442Z · score: 4 (2 votes) · LW(p) · GW(p)

In programming, that's true at first. But as projects increase in scope, there's a risk of using an architecture that works when you’re testing, or for your initial feature set, but will become problematic in the long run.

For example, I just read an interesting article on how a project used a document store database (MongoDB), which worked great until their client wanted the software to start building relationships between data that had formerly been “leaves on the tree.” They ultimately had to convert to a traditional relational database.

Of course there are parallels in math, as when you try a technique for integrating or parameterizing that seems reasonable but won’t actually work.

comment by G Gordon Worley III (gworley) · 2020-08-10T04:14:03.213Z · score: 7 (4 votes) · LW(p) · GW(p)

Yep. Having worked both as a mathematician and a programmer, the idea of objectivity and clear feedback loops starts to disappear as the complexity amps up and you move away from the learning environment. It's not unusual to discover incorrect proofs out on the fringes of mathematical research that have not yet become part of the canon, nor is it uncommon (in fact, it's very common) to find running production systems where the code works by accident due to some strange unexpected confluence of events.

comment by Viliam · 2020-08-16T18:21:52.701Z · score: 3 (2 votes) · LW(p) · GW(p)

Programming, OTOH, gives you clear feedback loops.

Feedback, yes. Clarity... well, sometimes it's "yes, it works" today, and "actually, it doesn't if the parameter is zero and you called the procedure on the last day of the month" when you put it in production.
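
Viliam's failure mode is easy to reproduce. A contrived sketch (the helper function is mine, but the shape - passes casual testing, then breaks on a month-end edge case - is realistic):

```python
from datetime import date

def add_month(d: date) -> date:
    """Naive 'add one month': fine in casual testing, wrong at the edges.
    (Also silently wrong for December: the year never increments.)"""
    return d.replace(month=d.month % 12 + 1)

print(add_month(date(2020, 7, 15)))  # 2020-08-15: looks fine

try:
    add_month(date(2020, 8, 31))     # the last day of the month...
except ValueError as e:
    print("in production:", e)       # day is out of range for month
```

The feedback loop is still there, but it only fires when the bad input actually arrives, which may be months after you shipped.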

comment by MikkW (mikkel-wilson) · 2020-08-10T22:07:04.006Z · score: 2 (2 votes) · LW(p) · GW(p)

Proof verification is meant to minimize this gap between proving and programming
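
A minimal illustration of the idea: in a proof assistant such as Lean, a proof is itself a program that either type-checks or doesn't (the theorem name here is mine; `Nat.add_comm` is in Lean 4's core library):

```lean
-- If this file compiles, the proof is machine-checked: no need to
-- run it by another person to know whether it holds up.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```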

comment by Viliam · 2020-08-16T18:43:32.418Z · score: 2 (1 votes) · LW(p) · GW(p)

The thing I like about math is that it gives the feeling that the answers are in the territory. (Kinda ironic, when you think about what the "territory" of math is.) Like, either you are right or you are wrong, it doesn't matter how many people disagree with you and what status they have. But it also doesn't reward the wrong kind of contrarianism.

Math allows you to make abstractions without losing precision. "A sum of two integers is always an integer." Always; literally. Now with abstractions like this, you can build long chains out of them, and it still works. You don't create bullshit accidentally, by constructing a theory from approximations that are mostly harmless individually, but don't resemble anything in the real world when chained together.

Whether these are good things, I suppose different people would have different opinions, but it definitely appeals to my aspie aesthetics. More seriously, I think that even when in the real world most abstractions are just approximations, having experience with precise abstractions might make you notice the imperfection of the imprecise ones, so when you formulate a general rule, you also make a note "except for cases such as this or this".

(On the other hand, for the people who only become familiar with math as a literary genre [LW · GW], it might have an opposite effect: they may learn that pronouncing abstractions with absolute certainty is considered high-status.)

comment by elityre · 2020-08-14T15:20:39.055Z · score: 2 (1 votes) · LW(p) · GW(p)

Mathematics shares with a small fraction of other related disciplines and games the quality of unambiguous objectivity. It also has the ~unique quality that you cannot bullshit your way through it. Miss any link in the chain and the whole thing falls apart.

Isn't programming even more like this?

I could get squidgy about whether a proof is "compelling", but when I write a program, it either runs and does what I expect, or it doesn't, with 0 wiggle room.

comment by AllAmericanBreakfast · 2020-08-14T18:31:07.779Z · score: 2 (1 votes) · LW(p) · GW(p)

Sometimes programming is like that, but then I get all anxious that I just haven’t checked everything thoroughly!

My guess is this has more to do with whether you're doing something basic or advanced, in any discipline. It's just that you run into ambiguity a lot sooner in the humanities.

comment by ChristianKl · 2020-08-12T09:49:30.386Z · score: 2 (1 votes) · LW(p) · GW(p)

It helps you to look at the world from multiple perspectives: it gets you into a position to make a claim like that solely based on anecdotal evidence and wishful thinking.

comment by AllAmericanBreakfast · 2020-08-08T18:52:21.313Z · score: 14 (7 votes) · LW(p) · GW(p)

What gives LessWrong staying power?

On the surface, it looks like this community should dissolve. Why are we attracting bread bakers, programmers, stock market investors, epidemiologists, historians, activists, and parents?

Each of these interests has a community associated with it, so why are people choosing to write about their interests in this forum? And why do we read other people's posts on this forum when we don't have a prior interest in the topic?

Rationality should be the art of general intelligence. It's what makes you better at everything. If practice is the wood and nails, then rationality is the blueprint. 

To determine whether or not we're actually studying rationality, we need to check whether or not it applies to everything. So when I read posts applying the same technique to a wide variety of superficially unrelated subjects, it confirms that the technique is general, and helps me see how to apply it productively.

This points at a hypothesis, which is that general intelligence is a set of defined, generally applicable techniques. They apply across disciplines. And they apply across problems within disciplines. So why aren't they generally known and appreciated? Shouldn't they be the common language that unites all disciplines?

Perhaps it's because they're harder to communicate and appreciate. If I'm an expert baker, I can make another delicious loaf of bread. Or I can reflect on what allows me to make such tasty bread, and speculate on how the same techniques might apply to architecture, painting, or mathematics. Most likely, I'm going to choose to bake bread.

This is fine, until we start working on complex, interdisciplinary projects. Then general intelligence becomes the bottleneck for having enough skill to get the project done. Sounds like the 21st century. We're hitting the limits of what's achievable through sheer persistence in a single specialty, and we're learning to automate them away.

What's left is creativity, which arises from structured decision-making. I've noticed that the longer I practice rationality, the more creative I become. I believe that's because it gives me the resources to turn an intuition into a specified problem, envision a solution, create a sort of Fermi approximation to give it definition, and find guidance on how to develop the practical skills and relationships that will let me bring it into being.

If I'm right, applying these techniques will require deliberate practice - both synthesizing them and practicing them individually, until they become natural.

The challenge is that most specific skills lend themselves to that naturally. If I want to become a pianist, I practice music until I'm good. If I want to be a baker, I bake bread. To become an architect, design buildings.

What exactly do you do to practice the general techniques of rationality? I can imagine a few methods:

  1. Participate in superforecasting tournaments, where Bayesian and gears/policy level thinking are the known foundational techniques.
  2. Learn a new skill, and as you go, notice the problems you encounter along the way. Try to imagine what a general solution to that problem might look like. Then go out and build it.
  3. Pick a specific rationality technique, and try to apply it to every problem you face in your life.

comment by Viliam · 2020-08-15T21:11:38.008Z · score: 8 (4 votes) · LW(p) · GW(p)

What gives LessWrong staying power?

For me, it's the relatively high epistemic standards combined with relative variety of topics. I can imagine a narrowly specialized website with no bullshit, but I haven't yet seen a website that is not narrowly specialized and does not contain lots of bullshit. Even most smart people usually become quite stupid outside the lab. Less Wrong is a place outside the lab that doesn't feel painfully stupid. (For example, the average intelligence at Hacker News seems quite high, but I still regularly find upvoted comments that make me cry.)

comment by AllAmericanBreakfast · 2020-08-16T01:51:59.847Z · score: 6 (3 votes) · LW(p) · GW(p)

Yeah, Less Wrong seems to be a combination of project and aesthetic. Insofar as it's a project, we're looking for techniques of general intelligence, partly by stress-testing them on a variety of topics. As an aesthetic, it's a unique combination of tone, length, and variety + familiarity of topics that scratches a particular literary itch.

comment by AllAmericanBreakfast · 2020-08-08T19:48:42.333Z · score: 9 (5 votes) · LW(p) · GW(p)

Markets are the worst form of economy except for all those other forms that have been tried from time to time.

comment by mr-hire · 2020-08-09T01:37:33.440Z · score: 7 (4 votes) · LW(p) · GW(p)

I used this line when having a conversation at a party with a bunch of people who turned out to be communists, and the room went totally silent except for one dude who was laughing.

comment by AllAmericanBreakfast · 2020-08-09T04:48:03.500Z · score: 4 (2 votes) · LW(p) · GW(p)

It was the silence of sullen agreement.

comment by AllAmericanBreakfast · 2020-08-01T14:26:30.501Z · score: 9 (5 votes) · LW(p) · GW(p)

Are rationalist ideas always going to be offensive to just about everybody who doesn’t self-select in?

One loved one was quite receptive to Chesterton’s Fence the other day. Like, it stopped their rant dead in its tracks and got them on board with a different way of looking at things immediately.

On the other hand, I routinely feel this weird tension. Like to explain why I think as I do, I'd need to go through some basic rational concepts. But I expect most people I know would hate it.

I wish we could figure out ways of getting this stuff across that were fun, and that made it seem agreeable, sensible, and non-threatening.

Less negativity - we do sooo much critique. I was originally attracted to LW partly as a place where I didn’t  feel obligated to participate in the culture war. Now, I do, just on a set of topics that I didn’t associate with the CW before LessWrong.

My guess? This is totally possible. But it needs a champion. Somebody willing to dedicate themselves to it. Somebody friendly, funny, empathic, a good performer, neat and practiced. And it needs a space for the educative process - a YouTube channel, a book, etc. And it needs the courage of its convictions. The sign of that? Not taking itself too seriously, being known by the fruits of its labors.

comment by Viliam · 2020-08-02T19:29:30.552Z · score: 17 (5 votes) · LW(p) · GW(p)

Traditionally, things like this are socially achieved by using some form of "good cop, bad cop" strategy. You have someone who explains the concepts clearly and bluntly, regardless of whom it may offend (e.g. Eliezer Yudkowsky), and you have someone who presents the concepts nicely and inoffensively, reaching a wider audience (e.g. Scott Alexander), but ultimately they both use the same framework.

The inoffensiveness of Scott is of course relative, but I would say that people who get offended by him are really not the target audience for rationalist thought. Because, ultimately, saying "2+2=4" means offending people who believe that 2+2=5 and are really sensitive about it; so the only way to be non-offensive is to never say anything specific.

If a movement only has the "bad cops" and no "good cops", it will be perceived as a group of assholes. Which is not necessarily bad if the members are powerful; people want to join the winning side. But without actual power, it will not gain wide acceptance. Most people don't want to go into unnecessary conflicts.

On the other hand, a movement with "good cops" but no "bad cops" will get its message diluted. First, the diplomatic believers will dilute their message in order not to offend anyone. Their fans will further dilute the message, because even the once-diluted version is too strong for normies' taste. In the end, the message may gain popular support... kind of... because the version that gains the popular support will actually contain maybe 1% of the original message and 99% of what the normies already believed, peppered with the new keywords.

The more people present rationality using different methods, the better, because each of them will reach a different audience. So I completely approve of the approach you suggest... in addition to the existing ones.

comment by AllAmericanBreakfast · 2020-08-02T23:57:57.601Z · score: 5 (2 votes) · LW(p) · GW(p)

You're right.

I need to try a lot harder to remember that this is just a community full of individuals airing their strongly held personal opinions on a variety of topics.

comment by Viliam · 2020-08-03T12:27:49.602Z · score: 3 (2 votes) · LW(p) · GW(p)

Those opinions often have something in common -- respect for the scientific method, effort to improve one's rationality, concern about artificial intelligence -- and I like to believe it is not just a random idiosyncratic mix (a bunch of random things Eliezer likes), but different manifestations of the same underlying principle (use your intelligence to win, not to defeat yourself). However, not everyone is interested in all of this.

And I would definitely like to see "somebody friendly, funny, empathic, a good performer, neat and practiced" promoting these values in a YouTube channel or in books. But that requires a talent I don't have, so I can only wait until someone else with the necessary skills does it.

This reminded me of the YouTube channel of Julia Galef, but the latest videos there are 3 years old.

comment by TAG · 2020-08-03T13:55:33.748Z · score: 1 (1 votes) · LW(p) · GW(p)

You're both assuming that you have a set of correct ideas coupled with bad PR... but how well are (e.g.) Bayes, Aumann, and MWI actually doing?

comment by Pongo · 2020-08-01T22:46:57.624Z · score: 11 (5 votes) · LW(p) · GW(p)

Like to explain why I think as I do, I'd need to go through some basic rational concepts.

I believe that if the rational concepts are pulling their weight, it should be possible to explain the way the concept is showing up concretely in your thinking, rather than justifying it in the general case first.

As an example, perhaps your friend is protesting your use of anecdotes as data, but you wish to defend it as Bayesian, if not scientific, evidence [LW · GW]. Rather than explaining the difference in general, I think you can say "I think that it's more likely that we hear this many people complaining about an axe murderer downtown if that's in fact what's going on, and that it's appropriate for us to avoid that area today. I agree it's not the only explanation and you should be able to get a more reliable sort of data for building a scientific theory, but I do think the existence of an axe murderer is a likely enough explanation for these stories that we should act on it."

If I'm right that this is generally possible, then I think this is a route around the feeling of being trapped on the other side of an inferential gap (which is how I interpreted the 'weird tension')
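
The axe-murderer example maps onto Bayes' rule directly, and a toy calculation makes the "evidence without science" point concrete (the numbers below are invented for illustration, not anything from the thread):

```python
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    joint = p_e_given_h * prior
    return joint / (joint + p_e_given_not_h * (1 - prior))

# H = "there is an axe murderer downtown"; E = "many people are complaining
# about one". The prior is tiny, but the complaints are ~80x more likely
# if H is true, so the posterior jumps by orders of magnitude.
p = posterior(prior=0.001, p_e_given_h=0.8, p_e_given_not_h=0.01)
print(f"{p:.1%}")  # → 7.4%
```

Seven percent is nowhere near scientific certainty, but it's a large enough update from one-in-a-thousand to justify avoiding the area today - which is exactly the Bayesian-but-not-scientific-evidence distinction.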

comment by AllAmericanBreakfast · 2020-08-02T04:06:13.732Z · score: 2 (1 votes) · LW(p) · GW(p)

I think you're right, when the issue at hand is agreed on by both parties to be purely a "matter of fact."

As soon as social or political implications crop up, that's no longer a guarantee.

But we often pretend like our social/political values are matters of fact. The offense arises when we use rational concepts in a way that gives the lie to that pretense. Finding an indirect and inoffensive way to present the materials and let them deconstruct their pretenses is what I'm wishing for here. LW has a strong culture surrounding how these general-purpose tools get applied, so I'd like to see a presentation of the "pure theory" that's done in an engaging way not obviously entangled with this blog.

The alternative is to use rationality to try and become savvier social operators. This can be "instrumental rationality" or it can be "dark arts," depending on how we carry it out. I'm all for instrumental rationality, but I suspect that spreading rational thought further will require that other cultural groups appropriate the tools to refine their own viewpoints rather than us going out and doing the convincing ourselves. 

comment by AllAmericanBreakfast · 2020-07-31T16:16:52.601Z · score: 9 (5 votes) · LW(p) · GW(p)

I'm annoyed that I think so hard about small daily decisions.

Is there a simple and ideally general pattern to not spend 10 minutes doing arithmetic on the cost of making burritos at home vs. buying the equivalent at a restaurant? Or am I actually being smart somehow by spending the time to cost out that sort of thing?

Perhaps:

"Spend no more than 1 minute per $25 spent and 2% of the price to find a better product."

This heuristic cashes out to:

  • Over a year of weekly $35 restaurant meals (~$1,800), spend about $36 and an hour and a quarter finding better restaurants or meals.
  • For $250 of monthly consumer spending, spend a total of $5 and 10 minutes per month finding a better product.
  • For bigger buys of around $500 (about 2x/year), spend $10 and 20 minutes on each purchase.
  • Buying a used car ($15,000) I'd spend $300 and 10 hours. I could use the $300 to hire somebody at $25/hour to test-drive an additional 5-10 cars, a mechanic to inspect it on the lot, a good negotiator to help me secure a lower price.
  • For work over the next year ($30,000), spend $600 and 20 hours.
  • Getting a Master's degree ($100,000 including opportunity costs), spend 66 hours and $2,000 finding the best school.
  • Choosing from among STEM career options ($100,000 per year), spend about 66 hours and $2,000 per year exploring career decisions.

Comparing that with my own patterns, that simplifies to:

Spend much less time thinking about daily spending. You're correctly calibrated for ~$500 buys. Spend much more time considering your biggest buys and decisions.
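
The heuristic above is simple enough to sanity-check in code. A minimal sketch (the function name and formatting are mine; the thresholds are exactly the ones proposed):

```python
def research_budget(price_usd: float) -> tuple[float, float]:
    """Time (minutes) and money (dollars) to spend finding a better option,
    per the '1 minute per $25, plus 2% of the price' heuristic."""
    minutes = price_usd / 25      # 1 minute per $25 spent
    dollars = price_usd * 0.02    # 2% of the price
    return minutes, dollars

# The used-car bullet above: a $15,000 purchase.
minutes, dollars = research_budget(15_000)
print(f"{minutes / 60:.0f} hours and ${dollars:.0f}")  # → 10 hours and $300
```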

comment by Dagon · 2020-07-31T22:00:48.491Z · score: 5 (3 votes) · LW(p) · GW(p)

For some (including younger-me), the opposite advice was helpful - I'd agonize over "big" decisions, without realizing that the oft-repeated small decisions actually had a much larger impact on my life.

To account for that, I might recommend you notice cache-ability and repetition, and budget on longer timeframes. For monthly spending, there's some portion that's really $120X decade spending (you can optimize once, then continue to buy monthly for the next 10 years), a bunch that's probably $12Y of annual spending, and some that's really $Z that you have to re-consider every month.

Also, avoid the mistake of inflexible permissions. Notice when you're spending much more (or less!) time optimizing a decision than your average: lots of decisions actually benefit from the extra time, and for lots of others additional time/money doesn't change the marginal outcome by much, so you should spend less time on them.

comment by AllAmericanBreakfast · 2020-07-31T23:09:18.399Z · score: 3 (2 votes) · LW(p) · GW(p)

I wonder if your problem as a youth was in agonizing over big decisions, rather than learning a productive way to methodically think them through. I have lots of evidence that I underthink big decisions and overthink small ones. I also tend to be slow yet ultimately impulsive in making big changes, and fast yet hyper-analytical in making small changes.

Daily choices have low switching and sunk costs. Everybody's always comparing, so one brand at a given price point tends to be about as good as another.

But big decisions aren't just big spends. They're typically choices that you're likely stuck with for a long time to come. They serve as "anchors" to your life. There are often major switching and sunk costs involved. So it's really worthwhile anchoring in the right place. Everything else will be influenced or determined by where you're anchored.

The 1 minute/$25 + 2% of purchase price rule takes only a moment's thought. It's a simple but useful rule, and that's why I like it.

There are a few items or services that are relatively inexpensive, but have high switching costs and are used enough or consequential enough to need extra thought. Examples include pets, tutors, toys for children, wedding rings, mattresses, acoustic pianos, couches, safety gear, and textbooks. A heuristic and acronym for these exceptions might be CHEAPS: "Is it a Curriculum? Is it Heavy? Is it Ergonomic? Is it Alive? Is it Precious? Is it Safety-related?"

comment by AllAmericanBreakfast · 2020-10-23T16:15:41.315Z · score: 8 (4 votes) · LW(p) · GW(p)

Thinking, Fast and Slow was the catalyst that turned my rumbling dissatisfaction into the pursuit of a more rational approach to life. I wound up here. After a few years, what do I think causes human irrationality? Here's a listicle.

  1. Cognitive biases, whatever these are
  2. Not understanding statistics
  3. Akrasia
  4. Little skill in accessing and processing theory and data
  5. Not speaking science-ese
  6. Lack of interest or passion for rationality
  7. Not seeing rationality as a virtue, or even seeing it as a vice.
  8. A sense of futility, the idea that epistemic rationality is not very useful, while instrumental rationality is often repugnant
  9. A focus on associative thinking
  10. Resentment
  11. Not putting thought into action
  12. Lack of incentives for rational thought and action itself
  13. Mortality
  14. Shame
  15. Lack of time, energy, ability
  16. An accurate awareness that it's impossible to distinguish tribal affiliation and culture from a community
  17. Everyone is already rational, given their context
  18. Everyone thinks they're already rational, and that other people are dumb
  19. It's a good heuristic to assume that other people are dumb
  20. Rationality is disruptive, and even very "progressive" people have a conservative bias to stay the same, conform with their peers, and not question their own worldview
  21. Rationality can misfire if we don't take it far enough
  22. All the writing, math, research, etc. is uncomfortable and not very fun compared to alternatives
  23. Epistemic rationality is directly contradictory to instrumental rationality
  24. Nihilism
  25. Applause lights confuse people about what even is rationality
  26. There are at least 26 factors deflecting people from rationality, and people like a clear, simple answer
  27. No curriculum
  28. It's not taught in school
  29. In an irrational world, epistemic rationality is going to hold you back
  30. Life is bad, and making it better just makes people more comfortable in badness
  31. Very short-term thinking
  32. People take their ideas way too seriously, without taking ideas in general seriously enough
  33. Constant distraction
  34. The paradox of choice
  35. Lack of faith in other people or in the possibility for constructive change
  36. Rationality looks at the whole world, which has more people in it than Dunbar's number
  37. The rationalists are all hiding on obscure blogs online
  38. Rationality is inherently elitist
  39. Rationality leads to convergence on the truth if we trust each other, but it leads to fragmentation of interests since we can't think about everything, which makes us more isolated
  40. Slinging opinions around is how people connect. Rationality is an argument.
  41. "Rationality" is stupid. What's really smart is to get good at harnessing your intuition, your social instincts, to make friends and play politics.
  42. Rationality is paperclipping the world. Every technological advance that makes individuals more comfortable pillages the earth and increases inequality, so they're all bad and we should just embrace the famine and pestilence until mother nature takes us back to the stone age and we can all exist in the circular dreamtime.
  43. You can't rationally commit to rationality without being rational first. We have no baptism ceremony.
  44. We need a baptism ceremony but don't want to be a cult, so we're screwed, which we would also be if we became a cult.
  45. David Brooks is right that EA is bad, we like EA, so we're probably bad too.
  46. We're secretly all spiritual and just faking rational atheism because what we really want to do is convert.
  47. There's too much verbiage already in the world.
  48. The singularity is coming; what's the point?
  49. Our leaders have abandoned us, and the best of us have been cut down like poppies.
  50. Eschewing the dark arts is a self-defeating stance

comment by Dagon · 2020-10-23T19:09:07.597Z · score: 8 (2 votes) · LW(p) · GW(p)

A few other (even less pleasant) options:

51) God is inscrutable and rationality is no better than any other religion.

52) Different biology and experience across humans leads to very different models of action.

53) Everyone lies, all the time.  

comment by AllAmericanBreakfast · 2020-10-15T23:32:16.557Z · score: 8 (4 votes) · LW(p) · GW(p)

We do things so that we can talk about it later.

I was having a bad day today. Unlikely to have time this weekend for something I'd wanted to do. Crappy teaching in a class I'm taking. Ever increasing and complicating responsibilities piling up.

So what did I do? I went out and bought half a cherry pie.

Will that cherry pie make me happy? No. I knew this in advance. Consciously and unconsciously: I had the thought, and no emotion compelled me to do it.

In fact, it seemed like the least-efficacious action: spending some of my limited money, to buy a pie I don't need, to respond to stress that's unrelated to pie consumption and is in fact caused by lack of time (that I'm spending on buying and eating pie).

BUT. What buying the pie allowed me to do was tell a different story. To myself and my girlfriend who I was texting with. Now, today can be about how I got some delicious pie.

And I really do feel better. It's not the pie, nor the walk to the store to buy it. It's the relief of being able to tell my girlfriend that I bought some delicious cherry pie, and that I'd share it with her if she didn't live a three-hour drive away. It's the relief of reflecting on how I dealt with my stress, and seeing a pie-shaped memory at the end of the schoolwork.

If this is a correct model of how this all works, then it suggests a couple things:

  • This can probably be optimized.
  • The way I talk about that optimization process will probably affect how well it works. For example, if I then think "what's the cheapest way to get this effect," that intuitively doesn't feel good. I don't want to be cheap. I need to find the right language, the right story to tell, so that I can explain my "philosophy" to myself and others in a way that gets the response I want.

Is that the darks arts? I don't think so. I think this is one area of life where the message is the medium.

comment by Viliam · 2020-10-16T17:46:54.126Z · score: 4 (2 votes) · LW(p) · GW(p)

So the "stupid solutions to problems of life" are not really about improving the life, but about signaling to yourself that... you still have some things under control? (My life may suck, but I can have a cherry pie whenever I want to!)

This would be even more important if the cherry pie somehow actively made your life worse. For example, if you are trying to lose weight, but at the same time keep eating cherry pie every day in order to improve the story of your day. Or if, instead of cherry pie, it were cherry liqueur.

The way I talk about that optimization process will probably affect how well it works.

Just guessing, but it would probably help to choose the story in advance. "If I am doing X, my life is great, and nothing else matters" -- and then make X something useful that doesn't take much time. Even better, have multiple alternatives X, Y, Z, such that doing any of them is a "proof" of life being great.

comment by AllAmericanBreakfast · 2020-10-16T18:55:30.761Z · score: 2 (1 votes) · LW(p) · GW(p)

I do chalk a lot of dysfunction up to this story-centric approach to life. I just suspect it’s something we need to learn to work with, rather than against (or to deny/ignore it entirely).

My sense is that storytelling - to yourself or others - is an art. To get the reaction you want - from self or others - takes some aesthetic sensitivity.

My guess is there’s some low hanging fruit here. People often talk about doing things “for the story,” which they resort to when they're trying to justify doing something dumb/wasteful/dangerous/futile. Perversely, it often seems that when people talk in detail about their good decisions, it comes off as arrogant. Pointless, tidy philosophical paradoxes seem to get people's puzzle-solving brains going better than confronting the complexity of the real world.

But maybe we can simply start building habits of expressing gratitude. Finding ways to present good ideas and decisions in ways that are delightful in conversation. Spinning interesting stories out of the best parts of our lives.

comment by AllAmericanBreakfast · 2020-10-22T19:24:21.171Z · score: 6 (3 votes) · LW(p) · GW(p)

Paying your dues

I'm in school at the undergraduate level, taking 3 difficult classes while working part-time.

For this path to be useful at all, I have to be able to tick the boxes: get good grades, get admitted to grad school, etc. For now, my strategy is to optimize to complete these tasks as efficiently as possible (what Zvi calls "playing on easy mode"), in order to preserve as much time and energy for what I really want: living and learning.

Are there dangers in getting really good at paying your dues?
 

1) Maybe it distracts you/diminishes the incentive to get good at avoiding dues.

2) Maybe there are two ways to pay dues (within the rules): one that gives you great profit and another that merely satisfies the requirement.

In general, though, I tend to think that efficient accomplishment is about avoiding or compressing work until you get to the "efficiency frontier" in your field. Good work is about one of two things:

  1. Getting really fast/accurate at X because it's necessary for reason R to do Y.
  2. Getting really fast/accurate at X because it lets you train others to do (or better yet, automate) X.

In my case, X is schoolwork, R is "triangulation of me and graduate-level education," and Y is "get a research job."

X is also schoolwork, R is "practice," and Y is learning. But this is much less clear. It may be that other strategies would be more efficient for learning.

However, since the expected value of my learning is radically diminished if I don't get into grad school, it makes sense to optimize first for acing my schoolwork, and then in the time that remains to optimize for learning. Treating these as two separate activities with two separate goals makes sense.

This isn't "playing on easy mode," so much as purchasing fuzzies (As) and utilons (learning) separately. [LW · GW]

comment by NaiveTortoise (An1lam) · 2020-10-22T23:08:33.722Z · score: 7 (4 votes) · LW(p) · GW(p)

If you haven't seen Half-assing it with everything you've got, I'd definitely recommend it as an alternative perspective on this issue.

comment by AllAmericanBreakfast · 2020-10-23T16:28:54.448Z · score: 3 (2 votes) · LW(p) · GW(p)

I see my post as less about goal-setting ("succeed, with no wasted motion") and more about strategy-implementing ("Check the unavoidable boxes first and quickly, to save as much time as possible for meaningful achievement"). 

comment by Dagon · 2020-10-22T22:55:37.626Z · score: 4 (2 votes) · LW(p) · GW(p)

I suspect "dues" are less relevant in today's world than a few decades ago.  It used to be a (partial) defense against being judged harshly for your success, by showing that you'd earned it without special advantage.  Nowadays, you'll be judged regardless, as the assumption is that "the system" is so rigged that anyone who succeeds had a head start.

To the extent that the dues do no actual good (unlike literal dues, which the recipient can use to buy things, presumably for the good of the group), skipping them seems very reasonable to me.  The trick, of course, is that it's very hard to distinguish unnecessary hurdles ("dues") from socially-valuable lessons in conformity and behavior ("training").  

Relevant advice when asked if you've paid your dues: https://www.youtube.com/watch?v=PG0YKVafAe8

comment by AllAmericanBreakfast · 2020-09-16T23:47:17.441Z · score: 6 (3 votes) · LW(p) · GW(p)

I've been thinking about honesty over the last 10 years. It can play into at least three dynamics.

One is authority and resistance. The revelation or extraction of information, and the norms, rules, laws, and incentives surrounding this, including moral concepts, are for the primary purpose of shaping the power dynamic.

The second is practical communication. Honesty is the idea that specific people have a "right to know" certain pieces of information from you, and that you meet this obligation. There is wide latitude for "white lies," exaggeration, storytelling, "noble lies," self-protective omissions, image management, and so on in this conception. It's up to the individual's sense of integrity to figure out what the "right to know" entails in any given context.

The third is honesty as a rigid rule. Honesty is about revealing every thought that crosses your mind, regardless of the effect it has on other people. Dishonesty is considered a person's natural and undesirable state, and the ability to reveal thoughts regardless of external considerations is considered a form of personal strength.

comment by AllAmericanBreakfast · 2020-09-03T03:45:28.995Z · score: 6 (5 votes) · LW(p) · GW(p)

Better rationality should lead you to think less, not more. It should make you better able to

  • Set a question aside
  • Fuss less over your decisions
  • Accept accepted wisdom
  • Be brief

while still having good outcomes. What's your rationality doing to you?

comment by Dagon · 2020-09-03T20:07:49.175Z · score: 5 (3 votes) · LW(p) · GW(p)

I like this line of reasoning, but I'm not sure it's actually true. "better" rationality should lead your thinking to be more effective - better able to take actions that lead to outcomes you prefer. This could express as less thinking, or it could express as MORE thinking, for cases where return-to-thinking is much higher due to your increase in thinking power.

Whether you're thinking less for "still having good outcomes", or thinking the same amount for "having better outcomes" is a topic for introspection and rationality as well.

comment by AllAmericanBreakfast · 2020-09-04T02:02:43.147Z · score: 3 (2 votes) · LW(p) · GW(p)

That's true, of course. My post is really a counter to a few straw-Vulcan tendencies: intelligence signalling, overthinking everything, and being super argumentative all the time. Just wanted to practice what I'm preaching!

comment by AllAmericanBreakfast · 2020-08-10T22:12:53.657Z · score: 6 (4 votes) · LW(p) · GW(p)

How should we weight and relate the training of our mind, body, emotions, and skills?

I think we are like other mammals. Imitation and instinct lead us to cooperate, compete, produce, and take a nap. It's a stochastic process that seems to work OK, both individually and as a species.

We made most of our initial progress in chemistry and biology through very close observation of small-scale patterns. Maybe a similar obsessiveness toward one semi-arbitrarily chosen aspect of our own individual behavior would lead to breakthroughs in self-understanding?

comment by AllAmericanBreakfast · 2020-08-02T03:28:56.893Z · score: 6 (3 votes) · LW(p) · GW(p)

I'm experimenting with a format for applying LW tools to personal social-life problems. The goal is to boil down situations so that similar ones will be easy to diagnose and deal with in the future.

To do that, I want to arrive at an acronym that's memorable, defines an action plan and implies when you'd want to use it. Examples:

OSSEE Activity - "One Short Simple Easy-to-Exit Activity." A way to plan dates and hangouts that aren't exhausting or recipes for confusion.

DAHLIA - "Discuss, Assess, Help/Ask, Leave, Intervene, Accept." An action plan for how to deal with annoying behavior by other people. Discuss with the people you're with, assess the situation, offer to help or ask the annoying person to stop, leave if possible, intervene if not, and accept the situation if the intervention doesn't work out.

I came up with these by doing a brief post-mortem analysis on social problems in my life. I did it like this:

  1. Describe the situation as fairly as possible, both what happened and how it felt to me and others.
  2. Use LW concepts to generalize the situation and form an action plan. For example, OSSEE Activity arose from applying the concept of "diminishing marginal returns" to my outings.
  3. Format the action plan into a mnemonic, such as an acronym.
  4. Experiment with applying the action plan mnemonic in life and see if it leads you to behave differently and proves useful.
comment by AllAmericanBreakfast · 2020-09-25T19:51:35.811Z · score: 5 (3 votes) · LW(p) · GW(p)

Idea for online dating platform:

Each person chooses a charity and an amount of money that others must donate to swipe right on them. This leads to higher-fidelity match information while also giving you a meaningful topic to kick the conversation off.

comment by AllAmericanBreakfast · 2020-07-16T19:25:32.104Z · score: 5 (3 votes) · LW(p) · GW(p)

Goodhart's Epistemology

If a gears-level understanding becomes the metric of expertise, what will people do?

  • Go out and learn until they have a gears-level understanding?
  • Pretend they have a gears-level understanding by exaggerating their superficial knowledge?
  • Feel humiliated because they can't explain their intuition?
  • Attack the concept of gears-level understanding on a political or philosophical level?

Use the concept of gears-level understanding to debug your own knowledge. Learn for your own sake, and allow your learning to naturally attract the credibility it deserves.

Evaluating expertise in others is a different matter. Probably you want to use a cocktail of heuristics:

  • Can they articulate a gears-level understanding?
  • Do they have the credentials and experience you'd expect someone with deep learning in the subject to have?
  • Can they improvise successfully when a new problem is thrown at them?
  • Do other people in the field seem to respect them?

I'm sure there are more.

comment by AllAmericanBreakfast · 2020-09-28T22:44:40.675Z · score: 4 (2 votes) · LW(p) · GW(p)

Explanation for why displeasure would be associated with meaningfulness, even though in fact meaning comes from pleasure [LW · GW]:

Meaningful experiences involve great pleasure. They also may come with small pains. Part of how you quantify your great pleasure is the size of the small pain that it superseded.

Pain does not cause meaning. It is a test for the magnitude of the pleasure. But only pleasure is a causal factor for meaning.

comment by Viliam · 2020-09-29T20:36:50.728Z · score: 4 (2 votes) · LW(p) · GW(p)

In a perfect situation, it would be possible to achieve meaningful experiences without pain, but usually it is not possible. A person who optimizes for short-term pain avoidance will not reach the meaningful experience. Because optimizing for short-term pain avoidance is natural, we have to remind ourselves to overcome this instinct.

comment by AllAmericanBreakfast · 2020-09-29T23:31:12.558Z · score: 2 (1 votes) · LW(p) · GW(p)

This fits with the idea that meaning comes from pleasure, and that great pleasure can be worth a fair amount of pain to achieve. The pain drains meaning away, but the redeeming factor is that it can serve as a test of the magnitude of pleasure, and generate pleasurable stories in the future.

An important counter argument to my hypothesis is how we may find a privileged “high road” to success and pleasure to be less meaningful. This at first might seem to suggest that we do inherently value pain.

In fact, though, what frustrates people about people born with a silver spoon in their mouths is that society seems set up to ensure their pleasure at another’s expense.

It’s not their success or pleasure we dislike. It’s the barriers and pain that we think it’s contextualized in. If pleasure for one means pain for another, then of course we find the pleasure to be less meaningful.

So this isn’t about short-term pain avoidance. It’s about long-term, overall, wise and systemic pursuit of pleasure.

And that pleasure must be not only in the physical experiences we have, but in the stories we tell about it - the way we interpret life. We should look at it, and see that it is good.

If people are wireheading, and we look at that tendency and it causes us great displeasure, that is indeed an argument against wireheading.

We need to understand that there’s no single bucket where pleasure can accumulate. There is a psychological reward system where pleasure is evaluated according to the sensory input and brain state.

Utilitarian hedonism isn’t just about nerve endings. It’s about how we interpret them. If we have a major aesthetic objection to wireheading, that counts from where we’re standing, no matter how much you ratchet up the presumed pleasure of wireheading.

The same goes recursively for any “hack” that could justify wireheading. For example, say you posited that wireheading would be seen as morally good, if only we could find a catchy moral justification for it.

So we let our finest AI superintelligences get to work producing one. Indeed, it’s so catchy that the entire human population acquiesces to wireheading.

Well, if we take offense to the prospect of letting the AI superintelligence infect us with a catchy pro-wireheading meme, then that’s a major point against doing so.

In general “It pleases or displeases me to find action X moral” is a valid moral argument - indeed, the only one there is.

The way moral change happens is by making moral arguments or having moral experiences that in themselves are pleasing or displeasing.

What’s needed, then, for moral change to happen, is to find a pleasing way to spread an idea that is itself pleasing to adopt - or unpleasant to abandon. To remain, that idea needs to generate pleasure for the subscriber, or to generate displeasure at the prospect of abandoning it in favor of a competing moral scheme.

To believe in some notion of moral truth or progress requires believing that the psychological reward mechanism we have attached to morality corresponds best with moral schemes that accord with moral truth.

An argument for that is that true ideas are easiest to fashion into a coherent, simple argument. And true ideas best allow us to interface with reality to advantage. Being good tends to make you get along with others better than being bad, and that’s a more pleasant way to exist.

Hence, even though strong cases can be constructed for immoral behavior, truth and goodness will tend to win in the arms race for the most pleasing presentation. So we can enjoy the idea that there is moral progress and objective moral truth, even though we make our moral decisions merely by pursuing pleasure and avoiding pain.

comment by mr-hire · 2020-09-29T00:10:51.240Z · score: 2 (1 votes) · LW(p) · GW(p)

I looked through that post but didn't see any support for the claim that meaning comes from pleasure.

My own theory is that meaning comes from values, and both pain and pleasure are a way to connect to the things we value, so both are associated with meaning.

comment by AllAmericanBreakfast · 2020-09-29T01:53:04.812Z · score: 5 (3 votes) · LW(p) · GW(p)

I'm a classically trained pianist. Music practice involves at least four kinds of pain:

  • Loneliness
  • Frustration
  • Physical pain
  • Monotony

I perceive none of these to add meaning to music practice. In fact, it was loneliness, frustration, and monotony that caused my music practice to be slowly drained of its meaning and led me ultimately to stop playing, even though I highly valued my achievements as a classical pianist and music teacher. If there'd been an issue with physical pain, that would have been even worse.

I think what pain can do is add flavor to a story. And we use stories as a way to convey meaning. But in that context, the pain is usually illustrating the pleasures of the experience or of the positive achievement. In the context of my piano career, I was never able to use these forms of pain as a contrast to the pleasures of practice and performance. My performance anxiety was too intense, and so it also was not a source of pleasure.

By contrast, I herded sheep on the Navajo reservation for a month in the middle of winter. That experience generated many stories. Most of them revolve around a source of pain, or a mistake. But that pain or mistake serves to highlight an achievement.

That achievement could be the simple fact of making it through that month while providing a useful service to my host. Or moments of success within it: getting the sheep to drink from the hole I cut in the icy lake, busting a tunnel through the drifts with my body so they could get home, finding a mother sheep that had gotten lost when she was giving birth, not getting cannibalized by a Skinwalker.

Those make for good stories, but there is pleasure in telling those stories. I also have many stories from my life that are painful to tell. Telling them makes me feel drained of meaning.

So I believe that storytelling has the ability to create pleasure out of painful or difficult memories. That is why it feels meaningful: it is pleasurable to tell stories. And being a good storyteller can come with many rewards. The net effect of a painful experience can be positive in the long run if it lends itself to a lot of good storytelling.

Where do values enter the picture?

I think it's because "values" is a term for the types of stories that give us pleasure. My community gets pleasure out of the stories about my time on the Navajo reservation. They also feel pleasure in my story about getting chased by a bear. I know which of my friends will feel pleasure in my stories from Burning Man, and who will find them uncomfortable.

So once again, "values" is a gloss for the pleasure we take in certain types of stories. Meaning comes from pleasure; it appears to come from values because values also come from pleasure. Meaning can come from pain only indirectly. Pain can generate stories, which generate pleasure in the telling.

comment by mr-hire · 2020-09-29T17:52:19.495Z · score: 2 (1 votes) · LW(p) · GW(p)

"values" is a term for the types of stories that give us pleasure.

It really depends on what you mean by "pleasure".  If pleasure is just "things you want", then almost tautologically meaning comes from pleasure, since you want meaning.

If instead, pleasure is a particular phenomenological feeling similar to feeling happy or content, I think that many of us actually WANT the meaning that comes from living our values, and it also happens to give us pleasure.  I think that there are also people that just WANT the pleasure, and if they could get it while ignoring their values, they would.

I call this the "Heaven/Enlightenment" dichotomy, and I think it's a frequent misunderstanding.

I've seen some people say "all we care about is feeling good, and people who think they care about the outside world are confused." I've also seen people say "All we care about is meeting our values, and people who think it's about feeling good are confused."

Personally, I think that people are more towards one side of the spectrum or the other along different dimensions, and I'm inclined to believe both sides about their own experience.

comment by AllAmericanBreakfast · 2020-09-29T19:30:16.479Z · score: 2 (1 votes) · LW(p) · GW(p)

I think we can consider pleasure, along with altruism, consistency, rationality, fitting the categorical imperative, and so forth as moral goods.

People have different preferences for how they trade off one against the other when they're in conflict. But they of course prefer them not to be in conflict.

What I'm interested is not what weights people assign to these values - I agree with you that they are diverse - but on what causes people to adopt any set of preferences at all.

My hypothesis is that it's pleasure. Or more specifically, whatever moral argument most effectively hijacks an individual person's psychological reward system.

So if you wanted to understand why another person considers some strange action or belief to be moral, you'd need to understand why the belief system that they hold gives them pleasure.

Some predictions from that hypothesis:

  • People who find a complex moral argument unpleasant to think about won't adopt it.
  • People who find a moral community pleasant to be in will adopt its values.
  • A moral argument might be very pleasant to understand, rehearse, and think about, and unpleasant to abandon. It might also be unpleasant in the actions it motivates its subscriber to undertake. It will continue to exist in their mind if the balance of pleasure in belief to displeasure in action is favorable.
  • Deprogramming somebody from a belief system you find abhorrent is best done by giving them alternative sources of "moral pleasure." Examples of this include the ways people have deprogrammed people from cults and the KKK, by including them in their social gatherings, including Jewish religious dinners, and making them feel welcome. Eventually, the pleasure of adopting the moral system of that shared community displaces whatever pleasure they were deriving from their former belief system.
  • Paying somebody in money and status to uphold a given belief system is a great way to keep them doing it, no matter how silly it is.
  • If you want people to do more of a painful but necessary action X, helping them feel compensating forms of moral pleasure is a good way to go about it. Effective Altruism is a great example. By helping people understand how effective donations or direct work can save lives, it gives people a feeling of heroism. Its failure mode is making people feel like the demands are impossible, and the displeasure of that disappointment is a primary issue in that community.
  • Another good way to encourage more of a painful but necessary action X is to teach people how to shape it into a good story that they and others will appreciate in the telling. Hence the story-fication of charity.
  • Many people don't give to charity because their community disparages it as "do-gooderism," as futile, as bragging, or as a tasteless display of wealth and privilege. If you want people to give more to charity, you have to give people a way of being able to enjoy talking about their charitable contributions. One solution is to form a community in which that's openly accepted and appreciated. Like EA.
  • Likewise for the rationality community. If you want people to do more good epistemology outside of academia, give them an outlet where that'll be appreciated and an axis from where it can be spread.
comment by mr-hire · 2020-09-30T19:10:44.428Z · score: 2 (1 votes) · LW(p) · GW(p)

My hypothesis is that it's pleasure. Or more specifically, whatever moral argument most effectively hijacks an individual person's psychological reward system.

This just kicks the can down the road on you defining pleasure; all of my points still apply.

If instead, pleasure is a particular phenomenological feeling similar to feeling happy or content, I think that many of us actually WANT the meaning that comes from living our values, and it also happens to give us pleasure.

That is, I think it's possible to say that pleasure kicks in around values that we really want, rather than vice versa.

comment by AllAmericanBreakfast · 2020-09-23T17:13:29.408Z · score: 4 (2 votes) · LW(p) · GW(p)

Sci-hub has moved to https://sci-hub.st/

comment by AllAmericanBreakfast · 2020-09-23T16:58:35.350Z · score: 4 (2 votes) · LW(p) · GW(p)

Do you treat “the dark arts” as a set of generally forbidden behaviors, or as problematic only in specific contexts?

As a war of good and evil or as the result of trade-offs between epistemic rationality and other values?

Do you shun deception and manipulation, seek to identify contexts where they’re ok or wrong, or embrace them as a key to succeeding in life?

Do you find the dark arts dull, interesting, or key to understanding the world, regardless of whether or not you employ them?

Asymmetric weapons may be the only source of edge for the truth itself. But should the side of the truth therefore eschew symmetric weapons?

What is the value of the label/metaphor “dark arts/dark side?” Why the normative stance right from the outset? Isn’t the use of this phrase, with all its implications of evil intent or moral turpitude, itself an example of the dark arts? An attempt to halt the workings of other minds, or of our own?

comment by Viliam · 2020-09-24T15:31:15.963Z · score: 6 (3 votes) · LW(p) · GW(p)

There are things like "lying for a good cause", which is a textbook example of what will go horribly wrong because you almost certainly underestimate the second-order effects. Like the "do not wear face masks, they are useless" expert advice for COVID-19, which was a "clever" dark-arts move aimed to prevent people from buying up necessary medical supplies. A few months later, hundreds of thousands have died (also) thanks to this advice.

(It would probably be useful to compile a list of lying for a good cause gone wrong, just to drive home this point.)

Thinking about historical record of people promoting the use of dark arts within rationalist community, consider Intentional Insights [EA · GW]. Turned out, the organization was also using the dark arts against the rationalist community itself. (There is a more general lesson here: whenever a fan of dark arts tries to make you see the wisdom of their ways, you should assume that at this very moment they are probably already using the same techniques on you. Why wouldn't they, given their expressed belief that this is the right thing to do?)

The general problem with lying is that people are bad at keeping multiple independent models of the world in their brains. The easiest, instinctive way to convince others about something is to start believing it yourself. Today you decide that X is a strategic lie necessary for achieving goal Y, and tomorrow you realize that actually X is more correct than you originally assumed (this is how self-deception feels from inside). This is in conflict with our goal to understand the world better. Also, how would you strategically lie as a group? Post it openly online: "Hey, we are going to spread the lie X for instrumental reasons, don't tell anyone!" :)

Then there are things like "using techniques-orthogonal-to-truth to promote true things". Here I am quite guilty myself, because I have long ago advocated turning the Sequences into a book, reasoning, among other things, that for many people, a book is inherently higher-status than a website. Obviously, converting a website to a book doesn't increase its truth value. This comes with smaller risks, such as getting high on your own supply (convincing ourselves that articles in the book are inherently more valuable than those that didn't make it for whatever reason, e.g. being written after the book was published), or wasting too many resources on things that are not our goal.

But at least, in this category, one can openly and correctly describe their beliefs and goals.

Metaphorically, reason is traditionally associated with vision/light (e.g. "enlightenment"), ignorance and deception with blindness/darkness. The "dark side" also references Star Wars, which this nerdy audience is familiar with. So, if the use of the term itself is an example of dark arts (which I suppose it is), at least it is the type where I can openly explain how it works and why we do it, without ruining its effect.

But does it make us update too far against the use of deception? Uhm, I don't know what is the optimal amount of deception. Unlike Kant, I don't believe it's literally zero. I also believe that people err on the side of lying more than is optimal, so a nudge in the opposite direction is on average an improvement, but I don't have a proof for this.

comment by AllAmericanBreakfast · 2020-09-24T16:26:07.587Z · score: 2 (1 votes) · LW(p) · GW(p)

We already had words for lies, exaggerations, incoherence, and advertising. Along with a rich discourse of nuanced critiques and defenses of each one.

The term “dark arts” seems to lump all these together, then uses cherry picked examples of the worst ones to write them all off. It lacks the virtue of precision. We explicitly discourage this way of thinking in other areas. Why do we allow it here?

comment by AllAmericanBreakfast · 2020-09-04T02:50:17.488Z · score: 4 (4 votes) · LW(p) · GW(p)

How to reach simplicity?

You can start with complexity, then simplify. But that's style.

What would it mean to think simple?

I don't know. But maybe...

  • Accept accepted wisdom.
  • Limit your words.
  • Rehearse your core truths, think new thoughts less.
  • Start with inner knowledge. Intuition. Genius. Vision. Only then, check yourself.
  • Argue if you need to, but don't ever debate. Other people can think through any problem you can. Don't let them stand in your way just because they haven't yet.
  • If you know, let others find their own proofs. Move on with the plan.
  • Be slow. Rest. Deliberate. Daydream. But when you find the right project, unleash everything you have. Learn what you need to learn and get the job done right.
comment by AllAmericanBreakfast · 2020-08-12T18:09:44.626Z · score: 4 (2 votes) · LW(p) · GW(p)

Question re: "Why Most Published Research Findings are False":

Let R be the ratio of the number of “true relationships” to “no relationships” among those tested in the field... The pre-study probability of a relationship being true is R/(R + 1).

What is the difference between "the ratio of the number of 'true relationships' to 'no relationships' among those tested in the field" and "the pre-study probability of a relationship being true"?

comment by AllAmericanBreakfast · 2020-08-12T20:17:07.430Z · score: 2 (1 votes) · LW(p) · GW(p)

From Reddit:

You could think of it this way: If R is the ratio of (combinations that total N on two dice) to (combinations that don't total N on two dice), then the chance of (rolling N on two dice) is R/(R+1). For example, there are 2 ways to roll a 3 (1 and 2, and 2 and 1) and 34 ways to not roll a 3. The probability of rolling a 3 is thus (2/34)/(1+2/34)=2/36.
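The dice arithmetic can be checked mechanically. Here's a small sketch (mine, not from the original comment) that enumerates all 36 ordered outcomes and confirms that the odds ratio R, run through R/(R+1), recovers the ordinary probability:

```python
from fractions import Fraction
from itertools import product

def pre_study_probability(target):
    """Enumerate ordered rolls of two dice, form the ratio R of
    combinations totalling `target` to combinations that don't, and
    return R/(R+1), which should equal the direct probability."""
    rolls = list(product(range(1, 7), repeat=2))        # 36 ordered outcomes
    hits = sum(1 for a, b in rolls if a + b == target)  # e.g. 2 ways to roll a 3
    misses = len(rolls) - hits                          # e.g. 34 ways not to
    R = Fraction(hits, misses)                          # the "pre-study odds"
    return R / (R + 1)                                  # Ioannidis's R/(R+1)

print(pre_study_probability(3))  # Fraction(1, 18), i.e. 2/36
```

The general point: R is an odds ratio (true : not-true), and R/(R+1) is the standard conversion from odds to probability.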

comment by AllAmericanBreakfast · 2020-10-23T20:14:32.765Z · score: 2 (1 votes) · LW(p) · GW(p)

What rationalists are trying to do is something like this:

  1. Describe the paragon of virtue: a society of perfectly rational human beings.
  2. Explain both why people fall short of that ideal, and how they can come closer to it.
  3. Explore the tensions in that account, put that plan into practice on an individual and communal level, and hold a meta-conversation about the best ways to do that.

This looks exactly like virtue ethics.

Now, we have heard that the meek shall inherit the earth. So we eschew the dark arts; embrace the virtues of accuracy, precision, and charity; steelman our opponents' arguments; try to cite our sources; question ourselves first; and resist the temptation to simplify the message for public consumption.

Within those bounds, however, we need to address a few question-clusters.

What keeps our community strong? What weakens it? What is the greatest danger to it in the next year? What is the greatest opportunity? What is present in abundance? What is missing?

How does this community support our individual growth as rationalists? How does it detract from it? How could we be leveraging what our community has to offer? How could we give back?

comment by AllAmericanBreakfast · 2020-08-21T03:13:20.241Z · score: 2 (1 votes) · LW(p) · GW(p)

You can justify all sorts of spiritual ideas by a few arguments:

  1. They're instrumentally useful in producing good feelings between people.
  2. They help you escape the typical mind fallacy.
  3. They're memetically refined, which means they'll fit better with your intuition than, say, trying to guess where the people you know fit on the OCEAN scale.
  4. They're provocative and generative of conversation in a way that scientific studies aren't. Partly that's because the language they're wrapped in is more intriguing, and partly because everybody's on a level playing field.
  5. It's a way to escape the trap of intelligence-signalling and lowers the barrier for verbalizing creative ideas. If you're able to talk about astrology, it lets people feel like they have permission to babble.
  6. They're aesthetically pleasing if you don't take them too seriously.
comment by AllAmericanBreakfast · 2020-08-21T03:31:47.872Z · score: 2 (1 votes) · LW(p) · GW(p)

I would be interested in arguments about why we should eschew them that don't resort to activist ideas of making the world a "better place" by purging the world of irrationality and getting everybody on board with a more scientific framework for understanding social reality or psychology.

I'm more interested in why individual people should anticipate that exploring these spiritual frameworks will make their lives worse, either hedonistically or by some reasonable moral framework. Is there a deontological or utilitarian argument against them?

comment by AllAmericanBreakfast · 2020-07-29T19:22:41.714Z · score: 2 (1 votes) · LW(p) · GW(p)

A checklist for the strength of ideas:

Think "D-SHARP"

  • Is it worth discussing?
  • Is it worth studying?
  • Is it worth using as a heuristic?
  • Is it worth advertising?
  • Is it worth regulating or policing?

Worthwhile research should help the idea move either forward or backward through this sequence.

comment by AllAmericanBreakfast · 2020-07-27T01:24:00.194Z · score: 2 (1 votes) · LW(p) · GW(p)

Why isn’t California investing heavily in desalination? Has anybody thought through the economics? Is this a live idea?

comment by Dagon · 2020-07-27T16:12:03.215Z · score: 4 (2 votes) · LW(p) · GW(p)

There's plenty of research going on, but AFAIK, no particular large-scale push for implementation. I haven't studied the topic, but my impression is that this is mostly something they can get by with current sources and conservation for a few decades yet. Desalinization is expensive, not just in terms of money, but in terms of energy - scaling it up before absolutely needed is a net environmental harm.

comment by ChristianKl · 2020-07-27T18:06:35.582Z · score: 2 (1 votes) · LW(p) · GW(p)

This article seems to cover the case. The economics seem unclear. The politics seem bad because it means taking on the environmentalists.

comment by AllAmericanBreakfast · 2020-07-22T01:57:22.891Z · score: 2 (1 votes) · LW(p) · GW(p)

My modified Pomodoro has been working for me. I set a timer for 5 minutes and start working. Every 5 minutes, I just reset the timer and continue.

For some reason it gets my brain into "racking up points" mode. How many 5-minute sessions can I do without stopping or getting distracted? Aware as I am of my distractibility, this has been an unquestionably powerful technique for me to expand my attention span.

comment by AllAmericanBreakfast · 2020-07-21T20:23:29.056Z · score: 2 (1 votes) · LW(p) · GW(p)

All actions have an exogenous component and an endogenous component. The weights we perceive differ from action to action, context to context.

The endogenous component has causes and consequences that come down to the laws of physics.

The exogenous component has causes and consequences from its social implications. The consequences, interpretation, and even the boundaries of where the action begins and ends are up for grabs.

comment by AllAmericanBreakfast · 2020-07-15T22:27:55.981Z · score: 2 (2 votes) · LW(p) · GW(p)

Failure modes in important relationships

  • Being quick and curt when they want to connect and share positive emotions
  • Meeting negative emotions with blithe positive emotions (ie. pretending like they're not angry, anxious, etc)
  • Mirroring negative emotions: meeting anxiety with anxiety, anger with anger
  • Being uncompromising, overly "logical"/assertive to get your way in the moment
  • Not trying to express what you want, even to yourself
  • Compromising/giving in, hoping next time will be "your turn"

Practice this:

  • Focusing [LW · GW] to identify your own elusive feelings
  • Empathy to identify and express the other person's needs, feelings, information. Look for a "that's right." You're not rushing to win, nor rushing to receive empathy. The more they reveal, the better it is for you (and for them, because now you can help find a high-value trade rather than a poor compromise).
comment by AllAmericanBreakfast · 2020-07-11T19:47:54.530Z · score: 2 (2 votes) · LW(p) · GW(p)

Good reading habit #1: Turn absolute numbers into proportions and proportions into absolute numbers.

For example, in reading "With almost 1,000 genes discovered to be differentially expressed between low and high passage cells [in mouse insulinoma cells]," look up the number of mouse genes (25,000) and turn it into a percentage so that you can see that 1,000 genes is 4% of the mouse genome.
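The arithmetic is trivial, but making both directions explicit helps the habit stick. A quick illustrative sketch (the constant is the approximate gene count cited above):

```python
MOUSE_GENES = 25_000  # approximate mouse gene count used in the example

def to_percentage(count, total):
    """Turn an absolute number into a proportion of the whole."""
    return 100 * count / total

def to_count(percentage, total):
    """Turn a proportion back into an absolute number."""
    return percentage / 100 * total

print(to_percentage(1_000, MOUSE_GENES))  # 4.0 (percent of the genome)
```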

comment by AllAmericanBreakfast · 2020-07-18T02:01:43.041Z · score: 1 (3 votes) · LW(p) · GW(p)

What is the difference between playing devil's advocate and steelmanning an argument? I'm interested in any and all attempts to draw a useful distinction, even if they're only partial.

Attempts:

  • Devil's advocate comes across as being deliberately disagreeable, while steelmanning comes across as being inclusive.
  • Devil's advocate involves advancing a clearly-defined argument. Steelmanning is about clarifying an idea that gets a negative reaction due to factors like word choice or some other superficial factor.
  • Devil's advocate is a political act and is only relevant in a conversation between two or more people. Steelmanning can be social, but it can also be done entirely in conversation with yourself.
  • Devil's advocate is about winning an argument, and can be done even if you know exactly how the argument goes and know in advance that you'll still disagree with it when you're done making it. Steelmanning is about exploring an idea without preconceptions about where you'll end up.
  • Devil's advocate doesn't necessarily mean advancing the strongest argument, only the one that's most salient, hardest for your conversation partner to argue against, or most complex or interesting. Steelmanning is about searching for an argument that you genuinely find compelling, even if it's as simple as admitting your own lack of expertise and the complexity of the issue.
  • Devil's advocate can be a diversionary or stalling tactic, meant to delay or avoid an unwanted conclusion of a larger argument by focusing in on one of its minor components. Steelmanning is done for its own sake.
  • Devil's advocate comes with a feeling of tension, attention-hogging, and opposition. Steelmanning comes with a feeling of calm, curiosity, and connection.
comment by AllAmericanBreakfast · 2020-07-17T01:58:50.950Z · score: 1 (1 votes) · LW(p) · GW(p)

Empathy is inexpensive and brings surprising benefits. It takes a little bit of practice and intent. Mainly, it involves stating the obvious assumption about the other person's experience and desires. Offer things you think they'd want and that you'd be willing to give. Let them agree or correct you. This creates a good context in which high-value trades can occur, without needing a conscious, overriding, selfish goal to guide you from the start.

comment by mr-hire · 2020-07-17T21:13:29.481Z · score: 2 (1 votes) · LW(p) · GW(p)

FWIW, I like to be careful about my terms here.

Empathy is feeling what the other person is feeling.

Understanding is understanding what the other person is feeling.

Active Listening is stating your understanding and letting the other person correct you.

Empathic listening is expressing how you feel what the other person is feeling.

In this case, you stated Empathy, but you're really talking about Active Listening.  I agree it's inexpensive and brings surprising benefits.

comment by Raemon · 2020-07-17T21:33:28.116Z · score: 2 (1 votes) · LW(p) · GW(p)

I think whether it's inexpensive isn't that obvious. I think it's a skill/habit, and it depends a lot on whether you've cultivated the habit, and on your mental architecture.

comment by mr-hire · 2020-07-17T21:37:18.528Z · score: 2 (1 votes) · LW(p) · GW(p)

Active listening at a low level is fairly mechanical, and can still accrue quite a few benefits. It's not as dependent on mental architecture as something like empathic listening. It does require some mindfulness to create the habit, but for most people I'd put it on only a slightly higher level of difficulty to acquire than e.g. brushing your teeth.

comment by Raemon · 2020-07-17T21:46:25.390Z · score: 2 (1 votes) · LW(p) · GW(p)

Fair, but I think gaining a new habit like brushing your teeth is actually pretty expensive.

comment by AllAmericanBreakfast · 2020-07-17T22:40:55.679Z · score: 1 (1 votes) · LW(p) · GW(p)

Empathy isn't like brushing your teeth. It's more like berry picking. Evolution built you to do it, you get better with practice, and it gives immediate positive feedback. Nevertheless, due to a variety of factors, it is a sorely neglected practice, even when the bushes are growing in the alley behind your house.

comment by AllAmericanBreakfast · 2020-07-17T22:36:49.165Z · score: 1 (1 votes) · LW(p) · GW(p)

I don't think what I'm calling empathy, either in common parlance or in actual practice, decomposes neatly. For me, these terms comprise a model of intuition that obscures with too much artificial light.

comment by mr-hire · 2020-07-17T23:22:35.549Z · score: 2 (1 votes) · LW(p) · GW(p)

In that case, I don't agree that the thing you're claiming has low costs. As Raemon says in another comment this type of intuition only comes easily to certain people.  If you're trying to lump together the many skills I just pointed to, some are easy for others and some harder.

If however, the thing you're talking about is the skill of checking in to see if you understand another person, then I would refer to that as active listening.

comment by AllAmericanBreakfast · 2020-07-18T01:47:54.007Z · score: 1 (1 votes) · LW(p) · GW(p)

Of course, you're right. This is more a reminder to myself and others who experience empathy as inexpensive.

Though empathy is cheap, there is a small barrier, a trivial inconvenience, a non-zero cost to activating it. I too often neglect it out of sheer laziness or forgetfulness. It's so cheap and makes things so much better that I'd prefer to remember and use it in all conversations, if possible.

comment by AllAmericanBreakfast · 2020-07-15T21:36:37.866Z · score: 1 (1 votes) · LW(p) · GW(p)

Chris Voss thinks empathy is key to successful negotiation.

Is there a line between negotiating and not, or only varying degrees of explicitness?

Should we be openly negotiating more often?

How do you define success, when at least one of his own examples of a “successful negotiation” is entirely giving over to the other side?

I think the point is that the relationship comes first, greed second. Negotiation for Voss is exchange of empathy, seeking information, being aware of your leverage. Those factors are operating all the time - that’s the relationship.

The difference between that and normal life? Negotiation is making it explicit.

Are there easy ways to extend more empathy in more situations? Casual texts? First meetings? Chatting with strangers?

comment by AllAmericanBreakfast · 2020-10-01T04:05:35.793Z · score: 2 (1 votes) · LW(p) · GW(p)

FUN GAME:

Guess the R^2 for the trendline on a plot of bioinformatics master's degrees: tuition vs. US news & world report ranking. 

Answer...

.

.

.

.

.

.

.

.

.

.

0.137
