The Comprehension Curve 2021-02-22T22:51:49.820Z
Could first doses first be better for the vaccine hesitant? 2021-02-22T16:24:55.847Z
Principles of Agile Rationalism 2021-02-21T01:47:06.090Z
How Should We Respond to Cade Metz? 2021-02-13T16:11:45.355Z
Networking and Expert Consulting As Rationality Bottlenecks 2021-02-05T05:41:07.723Z
Self-Criticism Can Be Wrong And Harmful 2021-02-01T01:07:33.737Z
Build Your Number Sense 2021-01-27T21:01:27.803Z
Define Your Learning Goal: Competence Or Broad Knowledge 2021-01-25T20:47:50.993Z
The Multi-Tower Study Strategy 2021-01-21T08:42:20.807Z
For Better Commenting, Avoid PONDS 2021-01-20T01:11:36.860Z
The Rube Goldberg Machine of Morality 2021-01-06T20:24:36.844Z
Vaccination with the EMH 2020-12-29T03:05:22.466Z
The Good Try Rule 2020-12-27T02:38:11.168Z
We desperately need to talk more about human challenge trials. 2020-12-19T10:00:42.109Z
Toward A Culture of Persuasion 2020-12-07T02:42:42.705Z
Forecasting is a responsibility 2020-12-05T00:40:48.251Z
Observe, babble, and prune 2020-11-06T21:45:13.077Z
Should students be allowed to give good teachers a bonus? 2020-11-02T00:19:54.578Z
The Trouble With Babbles 2020-10-22T21:31:23.260Z
What's the difference between GAI and a government? 2020-10-21T23:04:01.319Z
How to not be an alarmist 2020-09-30T21:35:59.285Z
Visualizing the textbook for fun and profit 2020-09-24T19:37:47.932Z
Let the AI teach you how to flirt 2020-09-17T19:04:05.632Z
A case study in simulacra levels and the Four Children of the Seder 2020-09-14T22:31:39.484Z
Rationality and playfulness 2020-09-12T05:14:29.624Z
Choose simplicity and structure. 2020-09-10T21:45:13.770Z
Resist epistemic (and emotional) learned helplessness! 2020-09-10T02:58:24.681Z
Loneliness and the paradox of choice 2020-09-09T16:30:20.374Z
Loneliness 2020-09-08T08:14:56.557Z
Should we write more about social life? 2020-08-19T20:07:17.055Z
Sentence by Sentence: "Why Most Published Research Findings are False" 2020-08-13T06:51:02.126Z
Tearing down the Chesterton's Fence principle 2020-08-02T04:56:43.339Z
Change the world a little bit 2020-07-26T22:35:05.546Z
Inefficient doesn't mean indifferent, but it might mean wimpy. 2020-07-20T18:27:48.332Z
Praise of some popular LW articles 2020-07-20T00:32:35.849Z
Criticism of some popular LW articles 2020-07-19T01:16:50.230Z
Telling more rational stories 2020-07-17T17:47:31.831Z
AllAmericanBreakfast's Shortform 2020-07-11T19:08:01.705Z
Was a PhD necessary to solve outstanding math problems? 2020-07-10T18:43:17.342Z
Was a terminal degree ~necessary for inventing Boyle's desiderata? 2020-07-10T04:47:15.902Z
Survival in the immoral maze of college 2020-07-08T21:27:27.214Z
An agile approach to pre-research 2020-06-25T18:29:47.645Z
The point of a memory palace 2020-06-20T01:00:41.975Z
Using a memory palace to memorize a textbook. 2020-06-19T02:09:18.172Z
Bathing Machines and the Lindy Effect 2020-06-17T21:44:46.931Z
Two Kinds of Mistake Theorists 2020-06-11T14:49:47.186Z
Visual Babble and Prune 2020-06-04T18:49:30.044Z
Trust-Building: The New Rationality Project 2020-05-28T22:53:36.876Z
My stumble on COVID-19 2020-04-18T04:32:30.987Z
How superforecasting could be manipulated 2020-04-17T06:47:51.289Z


Comment by allamericanbreakfast on Coincidences are Improbable · 2021-02-25T03:10:05.509Z · LW · GW

Thanks for the push, I think the scansion is better optimized now.

Comment by allamericanbreakfast on Coincidences are Improbable · 2021-02-24T20:12:05.561Z · LW · GW

I vote we abandon correlation does not imply causation in favor of connection does not imply direction. Or even better:

connection alone, direction unknown

But I'd like it best if we had a positive version.

Correlation does imply some sort of causal link.

To guess at its direction, simple models help you think.

Controlled experiments, if they are well beyond the brink

Of .05 significance will make your unknowns shrink.


Replications prove there's something new under the sun.

Did one cause the other? Did the other cause the one?

Are they both controlled by something already begun?

Or was it their coincidence that caused it to be done?



Comment by allamericanbreakfast on The Comprehension Curve · 2021-02-23T18:36:31.709Z · LW · GW

If we assume that the accuracy gained by researching a given question grows logarithmically with the time invested, then it would make sense to read broadly on unimportant questions and deeply on crucial ones.
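
Under that logarithmic assumption, a quick Lagrange-multiplier sketch suggests allocating time in direct proportion to each question's importance. This is only a toy illustration; the questions and importance weights below are made up:

```python
# Maximize sum(v_i * log(t_i)) subject to sum(t_i) = T.
# The Lagrange condition v_i / t_i = constant gives t_i = T * v_i / sum(v).
values = {"trivia": 1, "career choice": 20, "crucial research question": 79}
T = 100.0  # total hours available
allocation = {q: T * v / sum(values.values()) for q, v in values.items()}
```

So the crucial question gets 79 of the 100 hours and the trivia gets 1, which matches the broad-vs-deep intuition: shallow effort spread over many unimportant questions, deep effort on the few crucial ones.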

Signaling also seems relevant here. It might be advantageous to be widely informed, or to be seen as the kind of person who only speaks on their domain of expertise.

There could also be times when you just need to be conversant in the subject enough to know who to delegate the deeper research to.

So in general, I would expect the value of broad vs. deep research to be highly contextual.

But I wonder if the same habits that may lead people to anchor on an inappropriate reading speed also lead them to anchor on a sometimes inappropriate reading depth. It's plausible to me that people who tend to read broadly by habit could reap significant gains by practicing deep reading on even an arbitrary subject, and vice versa.

It would be interesting if there were an equivalent to the DSM, but for reading habits. Could we imagine a test or a set of diagnostic criteria that could classify people both according to their level of reading proficiency, and also according to their habitual level of depth/breadth? So for example, a low-skill but deep reader might be a religious fundamentalist who has their text of choice practically memorized, yet who has very little familiarity with the nuances of interpretation. By contrast, we can imagine low-skill broad readers, who read all kinds of novels and newspapers but remember very little of it. And high-skill broad or deep readers, of course.

I think this is related to one of my perennial topics of interest, which is the path toward a specialization. A science student in undergrad or earlier reads broadly about science. At some point, if they continue on a scientific path, they eventually focus on a much narrower area, and their whole reading program focuses on acquiring knowledge that they perceive as directly useful to a specific research project.

As I've graduated into this phase, I've found that the deep, related, specialized, purposeful reading is vastly more satisfying than the broad, shallow, disconnected reading that came before. It makes me suspect that one reason people get turned off of science early is that they never get the experience of "cocooning" in a specialty in which all the articles you read are riffing off each other, interrelated, and building toward a goal. It's the closest thing that I've found to programming, which also entails building an interrelated construct to make predictions and do useful work.

I'm also interested in whether and how "broad reading" can be done with an equivalent sense of purpose. There's an article on Applied Divinity Studies, Beware the Casual Polymath, which I think can be characterized as a criticism of superficially high-skilled, but in fact low-skilled broad readers. It's pointing out that just because you're reading all kinds of Smart People Stuff doesn't mean that you're actually learning effectively.

I would imagine that a high-skilled broad reader would be somebody whose role involves lots of delegation and decision-making. The fictional example that comes to mind is the Bartlet senior staff in The West Wing, who have to understand a huge number of issues of national significance, but only just enough to know who to delegate to or which positions are at least not-insane. For them, actually making a decision, even a merely reasonable rather than perfectly optimized one, matters much more than getting the exact right answer. So I would describe them as a depiction of high-skilled, broad readers.

Comment by allamericanbreakfast on The Comprehension Curve · 2021-02-23T05:04:27.211Z · LW · GW

I take it as a big compliment that you wrote such a long and thoughtful reply to my post! Thank you!

The distinction you draw between broad vs. deep readers is the reason I didn't operationalize comprehension. "Comprehension" is just the type of understanding you want to extract from the text on your particular read-through, defined as you please. Maybe let's think of them in terms of abstract units called Comprehendalons, analogous to a Utilon.

You could define a Deep-Comprehendalon as knowing "what's wrong with the abstract, and what they think the real conclusions are," in which case a very slow reading speed is ideal. An ideal reading speed for Deep-Comprehendalons might be 10 wpm, or even slower, and it might take you several hours to acquire just one.

A Shallow-Comprehendalon might be picking up a single atomic fact. An ideal reading speed for Shallow-Comprehendalons might be 500 wpm, or even faster, and you might be able to pick up a huge number of them in a short period of time.

One thing I infer from this framework is that Shallow-Comprehendalons don't add up to Deep-Comprehendalons. They are not fungible, not the same type of good. Optimizing for one may mean sacrificing the other.

However, that seems debatable. Even if it's true, the Comprehension Curve would still hold. You'd just have a different ideal reading speed and maximum comprehension rate for each type.
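
That last point can be turned into a toy model: each Comprehendalon type gets its own curve, with its own ideal speed and its own peak acquisition rate. The curve shape and every number here are invented purely for illustration:

```python
import math

def comprehension_rate(wpm, ideal_wpm, peak):
    """Comprehendalons per minute: a bump that peaks at ideal_wpm and
    falls off at speeds much faster or slower (purely illustrative shape)."""
    return peak * math.exp(-math.log(wpm / ideal_wpm) ** 2)

speeds = [5, 10, 50, 500, 1000]
# Deep-Comprehendalons: ideal speed ~10 wpm, acquired very slowly.
deep = {w: comprehension_rate(w, ideal_wpm=10, peak=0.005) for w in speeds}
# Shallow-Comprehendalons: ideal speed ~500 wpm, acquired in bulk.
shallow = {w: comprehension_rate(w, ideal_wpm=500, peak=2.0) for w in speeds}
```

The two curves peak at 10 wpm and 500 wpm respectively, so optimizing your speed for one type of comprehension puts you far down the curve for the other.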

Interestingly, for me, fully engaging my auditory cortex has, I believe, really helped me to move closer to my maximum comprehension rate. I'll describe this in a future post. One of my motivations for writing this is that I think that speed reading advocates are doing something fundamentally good -- experimenting -- but doing it in a screwy way, where they invent a whole theory of why their method is the best, without exploring contrasting hypotheses. And then when sober scientists get to studying it, they approach the field by testing the claims of speed readers, rather than by reflecting a priori on what approach to learning ought to enhance comprehension. The latter is what I've tried to advance toward here.

I appreciate you bringing up the point that much of speed reading advice revolves not around eye technique, but around mental technique - the bypassing of the auditory cortex. That's true, and I just entirely left that out for no good reason. I'm going to edit the OP to include a reference to it, along with a credit to you for reminding me of it.

One of my future posts will discuss what I've noticed in regards to skimming. I'll sum it up for you, as practice and since you brought it up.

Let's consider the following sentence from a biochemistry textbook:

"Oxidative reactions of the citric acid cycle generate reduced electron carriers that are then reoxidized to drive the synthesis of ATP."

For someone like myself who's familiar with biochemistry, many of the individual words and phrases refer to concepts that I already understand well. But there are particular keywords within the sentence that "focalize" a new concept, built out of the others. Let me break down my experience reading it:

  1. "Oxidative reactions" - the word oxidative is key, and "contains" the concept of a reaction within it.
  2. "of the citric acid cycle" - citric is key, and automatically refers to "citric acid cycle"
  3. "generate reduced electron carriers" - reduced is key, and "generate" is implied by the connection between "citric [acid cycle]" and "reduced"
  4. "that are then reoxidized" - reoxidized is key
  5. "to drive the synthesis of ATP." - ATP is key, because I already know that the end result is to synthesize, rather than consume ATP.

So as I read this sentence, the words "oxidative," "citric," "reduced," and "ATP" get lodged in my working memory, repeated in my auditory cortex in a sort of earworm-like jingle. While they repeat, my eyes scan the rest of the words to observe how they link up. So I read with a two-layered awareness: the working memory jingle-words that isolate and relate the key concepts, and the non-auditory connecting words that give them meaning and relate them together. This is just the approach to reading that I'm playing with right now, and I make no claim that it's useful or ideal for myself or anybody else.

But it's an interesting riff on skimming and speed reading. Instead of divorcing myself from my auditory cortex, I use it for keywords only, while relying on non-auditory reading for connecting words, which I can largely skim through.

The only way to develop ideas like this is to experiment openly, with a goal not of reading quickly, but of being experimental with your approach and trying to intuitively feel your way toward a method that is satisfying and feels like you've comprehended the material well. I find that this approach makes it far easier to pay nuanced attention to the material, read for long stretches of time without fatigue, and relate concepts.

Comment by allamericanbreakfast on Oliver Sipple · 2021-02-20T19:43:36.955Z · LW · GW

I think the closer framing is something like: if you're the 100,000th person to deliberately run your car over a person's body, are you liable for vehicular manslaughter?

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2021-02-20T10:16:10.972Z · LW · GW

Business idea: Celebrity witness protection.

There are probably lots of wealthy celebrities who’d like to lose their fame and resume a normal life. Imagine a service akin to witness protection that helped them disappear and start a new life.

I imagine this would lead to journalists and extortionists trying to track them down, so maybe it’s not tractable in the end.

Comment by allamericanbreakfast on Oliver Sipple · 2021-02-20T10:12:32.295Z · LW · GW

Right, but that’s why it’s interesting.

From a utilitarian perspective, is Oliver’s outing morally redeemed by using him as an example in journalistic ethics classes? Or would it be, if it helped reduce the incidence of future privacy invasions?

If so, then Harvey Milk is a hero in this story. He not only made Oliver into a gay hero, probably saving more than one life in the long run by advancing the cause of gay rights, but he also gave us a great example of the consequences of privacy invasion that we can use in ethics classes. A two-fer!

That doesn’t feel right.

Maybe the way to make sense of it is this:

Although we can find redeeming value in continuing to talk about the story of Oliver Sipple, there’s a certain tone we must take. It needs to be somewhat ashamed, noting the paradox, condemning the privacy invasion even as we seem to perpetuate it, insisting that we try to treat Sipple as an end in himself even in death. In this way, the ethics lesson maintains its force.

Consequentialism, then, dictates that we use a “deontological” framing.

And that’s what I think is interesting: deontology not as an ethical system but as a storytelling technique that’s necessary for consequentialism to work with the human psyche.

Comment by allamericanbreakfast on Oliver Sipple · 2021-02-20T02:15:12.899Z · LW · GW

It’s an interesting case from a deontological perspective. If it wasn’t ok to invade his privacy, but he’s dead and his story’s fully publicized, is it now ok to keep on talking about it? Is there a meaningful sense in which we can fail to treat the dead as an end in themselves?

Comment by allamericanbreakfast on “PR” is corrosive; “reputation” is not. · 2021-02-17T02:35:00.477Z · LW · GW

That's a good thought.

I certainly think that some people attack principle P with conscious intent to erode it, based on valuing V, an alternative principle W, or trying to get X from you. Standing up for P in the face of such anti-P partisans can only be done by rejecting their anti-P stance.

However, people will also attack principle P for a variety of other reasons.

  • P is the foundation of principle Q, which they support. But anti-P propaganda has severed this link in their mind. Appealing for P on the basis of their value for Q might be more effective than a straightforward defense of P.
  • They actually support P, but they're surrounded by punitive anti-P partisans. You have to appeal to them by building trust that you're not an anti-P partisan.
  • They support P intellectually, but feel no urgency about defending it. You don't need to defend P to them, but to appeal to them by showing that P is under attack.
Comment by allamericanbreakfast on MikkW's Shortform · 2021-02-16T22:34:33.935Z · LW · GW

If you have bluetooth earbuds, you would just look to most other people like you're having a conversation with somebody on the phone. I don't know if that would alleviate the awkwardness, but I thought it was worth mentioning. I have forgotten that other people can't tell when I'm talking to myself when I have earbuds in.

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2021-02-16T22:30:42.528Z · LW · GW

A Nonexistent Free Lunch

1. More Wrong

On an individual PredictIt market, sometimes you can find a set of "no" contracts whose price (1 share of each) adds up to less than the guaranteed gross take.

Toy example:

  • Will A get elected? No = $0.30
  • Will B get elected? No = $0.70
  • Will C get elected? No = $0.90
  • Minimum guaranteed pre-fee winnings = $2.00
  • Total price of 1 share of each No contract = $1.90
  • Minimum guaranteed pre-fee profits = $0.10

There's always a risk of black swans. PredictIt could get hacked. You might execute the trade improperly. Unexpected personal expenses might force you to sell your shares and exit the market prematurely.

But excluding black swans, I thought that as long as three conditions held, you could make free money on markets like these. The three conditions were:

  1. You take PredictIt's profit fee (10%) into account
  2. You can find enough such "free money" opportunities that your profits compensate for PredictIt's withdrawal fee (5% of the total withdrawal)
  3. You take into account the opportunity cost of investing in the stock market (average of 10% per year)

In the toy example above, I calculated that you'd lose $0.10 x 10% = $0.01 to PredictIt's profit fee if you bought 1 of each "No" contract. Your position after the profit fee would therefore be worth $1.99, a net profit of $0.09 on a $1.90 stake.

In other situations, with more prices, your profit margins might be thinner. The withdrawal fee might turn a gain into a loss, unless you were able to rack up many such profitable trades before withdrawing your money from the PredictIt platform.

I built some software to grab prices off PredictIt and crunch the numbers, and lo and behold, I found an opportunity that seemed to offer 13% returns on this strategy, beating the stock market. Out of all the markets on PredictIt, only this one offered non-negative gains, which I took as evidence that my model was accurate. I wouldn't expect to find many such opportunities, after all.

2. Less Wrong

Fortunately, I didn't act on it. I rewrote the price-calculating software several times, slept on it, and worked on it some more the next day.

Then it clicked.

PredictIt wasn't going to take my losses on the failed "No" contract (the one that resolved to "yes") into account in offsetting the profits from the successful "No" contracts.

In the toy model above, I calculated that PredictIt would take a 10% cut of the $0.10 net profit across all my "No" contracts.

In reality, PredictIt would take a 10% cut of the profits on both successful trades. In the worst case scenario, "Will C get elected" would be the contract that resolved to "yes," meaning I would earn $0.70 from the "A" contract and $0.30 from the "B" contract, for a total "profit" of $1.00.

PredictIt would take a 10% cut, or $0.10, rather than the $0.01 I'd originally calculated. This would leave me with $2.00 from two successful contracts, minus $0.10 from the fees, leaving me with $1.90, and zero net profit or loss. No matter how many times I repeated this bet, I would be left with the same amount I put in, and when I withdrew it, I would take a 5% loss.
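
The two fee models can be compared directly on the toy example. This is a minimal sketch using the prices above and the 10% profit fee; the variable names are mine:

```python
no_prices = [0.30, 0.70, 0.90]             # 1 "No" share in each of A, B, C

cost = sum(no_prices)                       # $1.90 to enter the position
gross = len(no_prices) - 1.0                # $2.00: one "yes", every other "No" pays $1

# Naive model: 10% fee on the position's *net* profit.
naive_net = gross - 0.10 * (gross - cost)   # $2.00 - $0.01 = $1.99

# Actual model: 10% fee on each winning contract's profit, with no
# offset for the losing contract. Worst case: the smallest-profit "No"
# (C, priced at $0.90) is the one that loses.
profits = [1.00 - p for p in no_prices]     # [0.70, 0.30, 0.10]
worst_fee = 0.10 * (sum(profits) - min(profits))  # 10% of $1.00 = $0.10
actual_net = gross - worst_fee              # $1.90: exactly the stake back
```

The naive model shows a $0.09 profit; the actual model returns exactly the $1.90 stake, before the 5% withdrawal fee even applies.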

When I ran my new model with real numbers from PredictIt, I discovered that every single market would leave me not with zero profit, but with about a 1-2% loss, even before the 5% withdrawal fee was taken into account.

If there is a free lunch, this isn't it. Fortunately, I only wasted a few hours and no money.

There genuinely were moments along the way where I was considering plunking down several thousand dollars to test this strategy out. If I hadn't realized the truth, would I have gotten a wake-up call when the first round of contracts was called in and I came out $50 in the red, rather than $50 in the black? And then exited PredictIt feeling embarrassed and having lost perhaps $300, after withdrawal fees?

3. Meta Insights

What provoked me to almost make a mistake?

I started looking into it in the first place because I'd heard that PredictIt's fee structure and other limitations created inefficiencies, and that you could sometimes find arbitrage opportunities on it. So there was an "alarm bell" for an easy reward. Maybe knowing about PredictIt and being able to program well enough to evaluate for these opportunities would be the advantage that let me harvest that reward.

There were two questions at hand for this strategy. One was "how, exactly, does this strategy work given PredictIt's fee structure?"

The other was "are there actually enough markets on PredictIt to make this strategy profitable in the long run?"

The first question seemed simpler, so I focused on the second question at first. Plus, I like to code, and it's fun to see the numbers crank out of the machine.

But those questions were lumped together into a vague sort of "is this a good idea?"-type question. It took all the work of analysis to help me distinguish them.

How did I catch my error?

Lots of "how am I screwing this up?" checks. I wrote and rewrote the software, refactoring code, improving variable names, and so on. I did calculations by hand. I wrote things out in essay format. Once I had my first, wrong, model in place, I walked through a trade by hand using it, which is what showed me how it would fail. I decided to sleep on it, and had intended to actually spend several weeks or months investigating PredictIt to try and understand how often these "opportunities" arose before pulling the trigger on the strategy.

Does this error align with other similar errors I've made in the past?

It most reminds me of how I went about choosing graduate schools. I sunk many, many hours into creating an enormous spreadsheet with tuition and cost of living expenses, US News and World report rankings, and a linear regression graph. I constantly updated and tweaked it until I felt very confident that it was self-consistent.

When I actually contacted the professors whose labs I was interested in working in, in the midst of filling out grad school applications, they told me to reconsider the type of grad school programs (to switch from bioinformatics to bio- or mechanical engineering). So all that modeling and research lost much of its value.

The general issue here is that an intuitive notion, a vision, has to be decomposed into specific data, model, and problem. There's something satisfying about building a model, dumping data into it, and watching it crank out a result.

The model assumes authority prematurely. It's easy to conflate the fit between the model's design and execution with the fit between the model and the problem. And this arises because to understand the model, you have to design it, and to design it, you have to execute it.

I've seen others make similar errors. I saw a talk by a scientist who'd spent 15 years inventing a "new type of computer" that produced these sorts of cloud-like rippling images. He didn't have a model of how those images would translate into any sort of useful calculation. He asked the audience if they had any ideas. That's... the wrong way to build a computer.

AI safety research seems like an attempt to deal with exactly this problem. What we want is a model (AI) that fits the problem at hand (making the world a better place by a well-specified set of human values, whatever that means). Right now, we're dumping a lot of effort into execution, without having a great sense for whether or not the AI model is going to fit the values problem.

How can you avoid similar problems?

One way to test whether your model fits the problem is to execute the model, make a prediction for the results you'll get, and see if it works. In my case, this would have looked like building the first, wrong version of the model, calculating the money I expected to make, seeing that I made less, and then re-investigating the model. The problem is that this is costly, and sometimes you only get one shot.

Another way is to simulate both the model and the problem, which is what saved me in this case. By making up a toy example that I could compute by hand, I was able to spot my error.

It also helps to talk things through with experienced experts who have an incentive to help you succeed. In my grad school application, it was talking things out with scientists doing the kind of research I'm interested in. In the case of the scientist with the nonfunctional experimental computer, perhaps he could have saved 15 years of pointless effort by talking his ideas over with Steven Hsu and engaging more with the quantum computing literature.

A fourth way is to develop heuristics for promising and unpromising problems to work on. Beating the market is unpromising. Building a career in synthetic biology is promising. The issue here is that such heuristics are themselves models. How do you know that they're a good fit to the problem at hand?

In the end, you are to some extent forced ultimately into open-ended experimentation. Hopefully, you at least learn something from the failures, and enjoy the process.

The best thing to do is make experimentation fast, cheap, and easy. Do it well in advance, and build up some cash, so that you have the slack for it. Focusing on a narrow range of problems and tools means you can afford to define each of them better, so that it'll be easier to test each tool you've mastered against each new problem, and each new tool against your well-understood problems.

The most important takeaway, then, is to pick tools and problems that you'll be happy to fail at many times. Each failure is an investment in better understanding the problem or tool. Make sure not to have so few tools/problems that you have time on your hands, or so many that you can't master them.

You can then imagine taking an inventory of your tools and problems. Any time you feel inspired to take a whack at learning a new skill or a new problem, you can ask if it's an optimal addition to your skillset. And you can perhaps (I'm really not sure about this) ask if there are new tool-problem pairs you haven't tried, just through oversight.

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2021-02-14T23:28:07.208Z · LW · GW

Just a notepad/stub as I review writings on filtered evidence:

One possible solution to the problem of the motivated arguer is to incentivize in favor of all arguments being motivated. Eliezer covered this in "What Evidence Filtered Evidence?" So a rationalist response to the problem of filtered evidence might be to set up a similar structure and protect it against tampering.

What would a rationalist do if they suspected a motivated arguer was calling a decision to their attention and trying to persuade them of option A? It might be to become a motivated arguer in the opposite direction, for option B. This matches what we see in psych studies. And this might not be a partisan reaction in favor of option B, but rather a rejection of a flawed decision-making process in which the motivated arguers are acting as lawyer, jury, and judge all at once. Understood in this framework, doubling-down when confronted with evidence to the contrary is a delaying tactic, a demand for fairness, not a cognitive bias.

Eliezer suggests that it's only valid to believe in evolution if you've spent 5 minutes listening to creationists as well. But that only works if you're trying to play the role of dispassionate judge. If instead you're playing the role of motivated arguer, the question of forming true beliefs about the issue at hand is beside the point.

Setting up a fair trial, with a judge and jury whose authority and disinterest is acknowledged, is an incredibly fraught issue even in actual criminal trials, where we've had the most experience at it.

But if the whole world is going to get roped into being a motivated arguer for one side or the other, because everybody believes that their pet issue isn't getting a fair trial, then there's nobody left to be the judge or the jury.

What makes a judge a judge? What makes a jury a jury? In fact, these decompose into a set of several roles:

  • Trier of law
  • Trier of fact
  • Passer of sentence
  • Keeper of order

In interpreting arguments, to be a rationalist perhaps means to choose the role of judge or jury, rather than the role of lawyer.

In the "trier of law" role, the rationalist would ask whether the procedures for a fair trial are being followed. As "trier of fact," the rationalist would be determining whether the evidence is valid and what it means. As "passer of sentence," the rationalist would be making decisions based on it. As "keeper of order," they are ensuring this process runs smoothly.

I think the piece we're often missing is "passer of sentence." It doesn't feel like it means much if the only decision that will be influenced by your rationalism is your psychological state. Betting, or at least pre-registration with your reputation potentially at stake, seems to serve this role in the absence of any other consequential decision. Some people like to think, to write, but not to do very much with their thoughts or writings. I think a rationalist needs to strive to do something with their rationalism, or at least have somebody else do something, to be playing their part correctly.

Actually, there's more to a trial than that:

  • What constitutes a crime?
  • What triggers an investigation?
  • What triggers an arrest and trial?
  • What sorts of policing should society enact? How should we allocate resources? How should we train? What should policy be for responding to problems?

As parallels for a rationalist, we'd have:

"How, in the abstract, do we define evidence and connect it with a hypothesis?"

"When do we start considering whether or not to enter into a formal 'make a decision' about something? I.e. to place a bet on our beliefs?"

"What should lead us to move from considering a bet, to committing to one?"

"What practical strategies should we have for what sorts of data streams to take in, how to coordinate around processing them, how to signal our roles to others, build trust with the community, and so on? How do we coordinate this whole 'rationality' institution?"

And underneath it all, a sense for what "normalcy" looks like, the absence of crime.

I kind of like this idea of a mapping -> normalcy -> patrol -> report -> investigation -> trial -> verdict -> carrying out sentence analogy for rationalism. Spelled out, it would be more like:

Mapping:
  • getting a sense of the things that matter in the world. for the police, it's "where's the towns? the people? where are crime rates high and low? what do we especially need to protect?"
  • this seems to be the stage where an idealization matters most. for example, if you decide that "future lives matter" then you base your mapping off the assumption that the institutions of society ought to protect future lives, even if it turns out to be that they normally don't.
  • the police aren't supposed to be activists or politicians. in the same way, i think it makes sense for rationalists to split their function between trying to bring normalcy in line with their idealized mapping, and improving their sense of what is normal. here we have the epistemics/instrumentality divide again. the police/politics analogy doesn't make perfect sense here, except that the policing are trying to bring actual observational behavior in line with theoretically optimal epistemic practices.

Normalcy might look like:

  • the efficient market hypothesis
  • the world being full of noise, false and superficial chatter, deviations from the ideal, cynicism and stupidity
  • understanding how major institutions are supposed to function, and how they actually function
  • base rates for everything
  • parasitism, predation


  • once a sense of normalcy on X is established, you look for deviations from it - not just the difference between how they're supposed to function vs. actually function, but how they normally actually function vs. deviations from that
  • perhaps "expected" is better than "normal" in many situations? "normal" assumes a static situation, while "expected" can fit both static and predictably changing situations.


  • conveying your observations of a deviation from normalcy to someone who cares (maybe yourself)


  • gathering evidence for whether or not your previous model of normalcy can encompass the deviation, or whether it needs to be updated/altered/no longer holds


  • creating some system for getting strong arguments for and against the previous model
  • a dispassionate judge/jury


  • Some way of making a decision on which side wins, or declaring a mistrial

Carrying out the sentence:

  • Placing a bet or taking some other costly action, determined at least to some extent in advance, the way that sentencing guidelines do.

It's tricky though.

If you want a big useful picture of the world, you can't afford to investigate every institution from the ground up. If you want to be an effective operator, you need to join a paradigm and help advance it, not try to build an entire worldview from scratch with no help. If you want to invent a better battery, you don't personally re-invent physics first.

So maybe the police metaphor doesn't work so well. In fact, we need to start with a goal. Then we work backwards to decide what kinds of models we need to understand in order to determine what actions to take in order to achieve that goal.

So we have a split.

Goal setting = ought

Epistemics = is

The way we "narrow down" epistemics is by limiting our research to fit our goals. We shouldn't just jump straight to epistemics. We need a very clear sense of what our goals are, why, a strong reason. Then the epistemics follow.

I've had some marvelously clarifying experiences with deliberately setting goals. What makes a good goal?

  • Any goal has a state (success, working, failure), a justification (consequences, costs, why you?), and strategy/tactics for achieving it. Goals sometimes interlink, or can have sharp arbitrary constraints (i.e. given that I want to work in biomedical research, what's the best way I can work on existential risk?).
  • You gather evidence that the state, justification, and strategy/tactics are reasonable: the state is clear, the justification is sound, and the strategy/tactics are in fact leading towards the success state. Try to do this with good epistemic hygiene.

Doing things with no fundamental goal in mind I think leads to, well, never having had any purpose at all. What if my goal were to live in such a way that all my behaviors were goal-oriented?

Comment by allamericanbreakfast on “PR” is corrosive; “reputation” is not. · 2021-02-14T07:04:15.740Z · LW · GW

PR is about managing how an antagonist could distort your words and actions to portray you in a negative light.

By contrast, for the concept of “honor” to mean anything, you have to be imagining that there’s a community of people who care about honor and approach that question with integrity. It assumes a level of charity and sophistication in the people you’re appealing to.

Comment by allamericanbreakfast on How Should We Respond to Cade Metz? · 2021-02-14T02:45:00.370Z · LW · GW

I updated my OP with the link, thanks for sharing it!

Comment by allamericanbreakfast on How Should We Respond to Cade Metz? · 2021-02-13T21:07:09.687Z · LW · GW


Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2021-02-12T04:49:00.441Z · LW · GW

Status and Being a "Rationalist"

The reticence many LWers feel about the term "rationalist" stems from a paradox: it feels like a status-grab and low-status at the same time.

It's a status grab because LW can feel like an exclusive club. Plenty of people say they feel like they can hardly understand the writings here, and that they'd feel intimidated to comment, let alone post. Since I think most of us who participate in this community wish that everybody would be more into being rational and that it wasn't an exclusive club, this feels unfortunate.

It's low status because it sounds cultish. Being part of the blogosphere is a weird hobby by many people's standards, like being into model railroad construction. It has "alt-something" vibes. And identifying with a weird identity, especially a serious one, can provoke anxiety. Heck, identifying with a mainstream identity can provoke anxiety. Just being anything except whatever's thoroughly conventional among your in-group risks being low-status, because if it wasn't, the high-status people would be doing it.

The nice thing about "rationalist" is that it's one word and everybody kinda knows what it means. "Less Wronger" is also short but you have to explain it. "Aspiring rationalist" is cute but too long. Tough nut to crack.

Comment by allamericanbreakfast on Covid 2/11: As Expected · 2021-02-12T02:37:17.860Z · LW · GW

One of the weird things about the "Ministry of Truth" problem is how, somehow, liberals have a perception that other liberals will think that "Facebook should ban 'disinformation'/'pseudoscience'" is the responsible position. How does that happen? I don't think we were ever explicitly told by somebody. Nobody made a cogent, persuasive argument in favor of that position.

If I had to try and trace it back, I think it went a little something like this:

  1. People talk a lot about how much stupid stressful crap is on Facebook.
  2. Mark Zuckerberg/Facebook gets a crappy public image, not for producing the crap, but for profiting off the crap.
  3. People blame MZ/Facebook for a lot of social ills: pseudoscience, Trumpism, depression/anxiety/teen suicide, privacy concerns, hate speech, etc. They're in everybody's bad books. Yet because there are so many competing notions for what FB ought to do to correct the problem, they don't have any obvious fix.
  4. Facebook doesn't want to be in the bad books. They strategize about how to project the image of a responsible, upstanding company. For a while, their position is "we'd be irresponsible to police 'disinformation,'" which is based on the fear that people would in fact call them irresponsible if they did. Asymmetric justice applies: Facebook is safer doing nothing (or less things) than doing something (or more things).
  5. COVID-19 changed the game, giving a much more unified concept for what "responsible social media behavior" would look like. Somebody else, the freakin' CDC, makes the policy for what constitutes improper speech. Facebook just enforces it. It's a time of crisis, when public health concerns take priority over privacy/free speech/democratic concerns. Facebook can look like a noble guardian of public health. The few remaining "free speech" advocates and uncredentialed contrarian science enthusiasts can not only shove it, but get framed (by a gigantic corporation!) as the new villains in the story.

It'll be important for people who are worried about being cast inappropriately as villains to look at how people who might have been deemed as such manage to alter their perception. How does Zeynep Tufekci manage to get in and stay in the good books? How do we raise controversial but important issues with our families and friends when they're so much less fluent in the issues than we are that we risk looking unbelievably overconfident just for talking about what we know?

I think these things are possible, and I think it'll help if we focus more of our attention on tractable solutions to them than on overemphasizing the current climate. After all, politics isn't the mind killer. Fear is the mind killer. How do we let some of this fearful stuff pass over us and through us, and yet remain?

Comment by allamericanbreakfast on Unwitting cult leaders · 2021-02-11T15:21:29.127Z · LW · GW

There are enormous numbers of people in the world who are leaders, but whose relationship with their followers doesn’t strike anybody as cult-like.

My intuitive model is that cult-like behavior emerges only in specific contexts. The grocery store manager probably will not turn the clerks into cultists.

Who has a shot at it? If I had to guess, it’s maybe people who deal in emotions, identities and ideas as their primary trade, and whose personal work doesn’t cash out in a concrete, worldly endeavor.

A CEO deals in ideas, but if they have to sell a product at the end of the day, I think they’re unlikely to become cult leaders. At least not the central case.

I speculate that this isn’t just because they don’t have time on their hands. I think it’s also because their image is wrapped up in something too concrete. And they’ll have to optimize for goals other than “like and let like.”

Is becoming a cult a result of Goodhart’s law? People use “do I like them” as a measure of “should I give them my attention?” And wind up with a cult?

Comment by allamericanbreakfast on Book review: The Geography of Thought · 2021-02-10T16:34:06.817Z · LW · GW

Right, I think it's just hard to interpret the results of this test. 

Comment by allamericanbreakfast on Book review: The Geography of Thought · 2021-02-10T03:59:00.942Z · LW · GW

My girlfriend, who's into homesteading, thought the cow goes with the grass without knowing the context of the question.

China is 45% rural, while the US is 14% rural. Maybe the fact that the US appears to be much more urbanized leads more of its people to lean on abstract groupings? What would the results be if restricted to rural vs. rural or urban vs. urban samples of the population of each country? Even then, I would have to assume the average Chinese college student tends to have more connections with rural lifestyles than the average American college student.

Comment by allamericanbreakfast on Book review: The Geography of Thought · 2021-02-10T03:52:51.643Z · LW · GW

Part of why Chinese people want contracts to change in response to unexpected contexts is they live in a laissez-faire economic system compared to the Free World.

Given China's history of communism, I found this confusing. However, I know very little about China and you seem to know more. Can you elaborate?

Comment by allamericanbreakfast on How do you optimize productivity with respect to your menstrual cycle? · 2021-02-09T04:05:43.022Z · LW · GW

No helpful comments but just saying I'm 100% in favor of open discussion of bodily functions, particularly including menstruation. Go you for broaching it.

Comment by allamericanbreakfast on Quadratic, not logarithmic · 2021-02-08T18:33:06.420Z · LW · GW

Does a simple quadratic model really work for modeling disease spread? Other factors that seem critical:

  • How closely connected each new person you see is to the others in your prior network of contacts
  • The degree to which people trade off safety precautions against additional risk
  • The number of people you see in the window during which you could be infectious
  • The fact that the kind of person who's seeing lots of other people might also be the kind of person to eschew other safety precautions
  • Whether or not you see these contacts every day, or whether you see a sequence of new contacts without seeing the previous people in the sequence

Before I used anything like this to try and model the effect of changing the number of contacts, I'd really want to see some more robust simulation.

Some aspects of this model seem plausible on its face. For example, a recluse who starts to see one person only puts himself at risk. If he starts seeing two people, though, he's now creating a bridge between them, putting them both at elevated risk.
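For intuition on where a quadratic model comes from at all: with n regular contacts, your chance of catching the virus grows roughly linearly in n, and if you do catch it, the number of people you can pass it on to also grows roughly linearly in n. Here's a toy sketch with hypothetical per-contact probabilities p and q, ignoring every one of the network effects listed above:

```python
# Toy quadratic risk model: with n contacts, the chance of catching the
# virus grows ~linearly in n, and so does onward spread if you catch it.
# p and q are hypothetical per-contact probabilities, not real estimates.
def expected_onward_infections(n, p=0.01, q=0.30):
    p_catch = 1 - (1 - p) ** n      # ~ n*p for small p
    return p_catch * n * q          # ~ n^2 * p * q

for n in [1, 2, 4, 8]:
    print(n, round(expected_onward_infections(n), 4))
```

Doubling n roughly quadruples the output, which is the quadratic intuition; the factors listed above are exactly what this sketch leaves out.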

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2021-02-06T18:22:34.023Z · LW · GW

Hard numbers

I'm managing a project to install signage for a college campus's botanical collection.

Our contractor, who installed the sign posts in the ground, did a poor job. A lot of them pulled right out of the ground.

Nobody could agree on how many posts were installed: the groundskeeper, the contractor, and two core team members each had their own numbers from "rough counts," "lists," "estimates," and "what they'd heard."

The best decision I've made on this project was to do a precise inventory of exactly which sign posts are installed correctly, completed, or done badly/missing. In our meeting with the VP yesterday, this inventory helped cut through the BS and create a consensus around where the project stands. Instead of arguing over whose estimate is correct, I can redirect conversation to "we need to get this inventory done so that we'll really know."

Of course, there are many cases where even the hardest numbers we can get are contestable.

But maybe we should be relieved that the project of "getting hard data" (i.e. science) is able to create some consensus some of the time. It is a social coordination mechanism. Strategically, the "hardness" of a number is its ability to convince the people you want to convince, and drive the work in the direction you think it should go.

And the way you make a number "hard" can be social rather than technical. For example, on this project, the fact that I did the first part of the inventory in the company of a horticulturist rather than on my own probably was useful in creating consensus, as was the fact that the VP and grounds manager have my back and are now aware of the inventory. Nobody actually inspected my figures. Just the fact that I was seen as basically trustworthy and was able to articulate the data and how to interpret it was enough.

It should surprise nobody that the level of interest I got in these numbers far exceeds the level of interest in my Fermi-style model of the expected value of taking RadVac. I am plugged into a social network with a particular interest in the signage. It's real money and real effort, to create a real product.

It doesn't matter how much knowledge I personally have about EV modeling. If I don't plug myself into a social network capable of driving consensus about whether or not to take RadVac, my estimate does not matter in the slightest except to me. In this case, individual knowledge is not the bottleneck, but rather social coordination.

Comment by allamericanbreakfast on Networking and Expert Consulting As Rationality Bottlenecks · 2021-02-06T06:43:25.901Z · LW · GW

I think that the case of Aubrey de Grey, the leader of SENS, is a good case study.

He seems to think that his high-level anti-aging research strategy is novel and tractable. All he needs is the funding to hire enough researchers and equipment to implement it, and the knowledge will flow.

To develop that strategy in the first place, he needed to be plugged into a network of other scientists studying various aspects of aging, both to gain knowledge and credibility.

His SENS foundation and book, Ending Aging, are both aimed in part at broadcasting his message and expanding his network. He's not trying to increase his knowledge (of aging or of making money) in order to put together the cure or the cash for himself. Instead, he's trying to expand his network, to convince government or private funders to support his vision.

I think that to make progress on evaluating these hypotheses (effectiveness is bottlenecked by network vs. by knowledge), we need to figure out how to distinguish them clearly.

For example, parts of the psychological research community seem bottlenecked by their collective lack of knowledge about statistics. But if they committed to collaborate more closely with some statisticians, that would probably help. Does that represent a "knowledge bottleneck" or a "network bottleneck?"

Likewise, I currently have only vague guesses about the specific skills/knowledge that would make me an effective tissue engineer. That'll become much more clear once I'm working in a lab next year. So is my problem that I'm bottlenecked by lack of knowledge about the specific needs of the lab I'll be working in, or is it that I'm not plugged into the social network at that lab?

I think that the distinction might rest more in written vs. unwritten knowledge.

Aubrey de Grey, the psychologists, and myself, are all bottlenecked by our lack of unwritten knowledge (how to meet an anti-aging billionaire, how to initiate a collaboration with a statistician, which lab I'll end up working in and what their needs are). Unwritten knowledge tends to be stored in social networks.

It's just the fact that people often think of "knowledge" as book knowledge that creates this confusion. So perhaps I should restate this hypothesis:

It may also be that above a certain level of rationality, access to unwritten knowledge, not one's ability to learn and practice publicly-available skills and knowledge, becomes the bottleneck.

Comment by allamericanbreakfast on Making Vaccine · 2021-02-06T02:43:23.917Z · LW · GW

I talked this out with a consultant friend who got his BS in biology. Here's what we came up with.

A conceptual solution would have the following variables, labeled for clarity.

Cost of vaccine = C

  • C = (Cost of manufacturing RadVac) ÷ (Doses you'll administer) + (Dollar value to represent cost of unconventionality of the project)

Probably that vaccine provides value = P

  • P = (Chance that RadVac works at all) x (Effectiveness if it does work) x (Chance you'll catch COVID before getting vaccinated) x (Chance you'll execute it correctly)

Value that could be provided per person = V

  • V = [ (Dollar value of your life) x (Chance you'll die if you catch COVID) + (Dollar value of avoiding a day on a ventilator) x (Chance of serious case of COVID) x (About 14 days on a ventilator) + (Dollar value of avoiding a day of fatigue/anosmia) x (Chance of long-term fatigue/anosmia) x (Expected length of long-term fatigue) + (Expected number of days out of work) x (Cost of lost work) + (Expected out-of-pocket cost of medical care if you caught COVID)]
  • P' = (Chance you'll transmit it to a particular other person if you catch it) x (1 - Chance they'd have caught it anyway)
  • V' = Calculation of V but for another specific person in your life who'd be at risk of COVID if you caught it

If C < P[V + ΣP'V'], it would be worth taking RadVac.

Potential sources for some of these estimates:

  • (Chance that RadVac works at all) = (Number of vaccines major pharma companies put in clinical trials) ÷ (Number of vaccines they send to preclinical trials) x 33.4%
  • (Effectiveness if it does work) = (Average effectiveness of mRNA vaccines that have been released so far)
  • (Chance you'll catch COVID before getting vaccinated), (Chance you'll transmit it to a particular other person if you catch it), (Chance they'd have caught it anyway) = Calculated by adding up your own and other people's activities using the microCOVID risk calculator.
  • (Dollar value of a life) = (Dollar value placed on a citizen's life by their federal government)
  • (Chance you'll die if you catch COVID) = Hospitalization and death rates by age
  • (Chance of lingering effects of COVID) = 52.3%

However, you'd first want to consider if there are other interventions that are even more cost-effective for the same risk factor. For example, if you're still shopping at the grocery store, consider having your groceries delivered for the next six months.
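As a minimal sketch, the decision rule C < P[V + ΣP'V'] can be written out in code. Every number below is a hypothetical placeholder for illustration, not an estimate I'd endorse; I'm also interpreting P' as (chance you transmit it) x (chance they wouldn't have caught it anyway):

```python
# Sketch of the cost-benefit framework above. All numbers are hypothetical
# placeholders; substitute your own estimates before drawing conclusions.

def value_if_infected(p_death, life_value, p_severe, vent_day_value,
                      p_lingering, fatigue_day_value, fatigue_days,
                      workdays_lost, workday_cost, medical_cost):
    """V: dollar-valued harm a COVID case would inflict on one person."""
    return (life_value * p_death
            + vent_day_value * p_severe * 14          # ~14 days on a ventilator
            + fatigue_day_value * p_lingering * fatigue_days
            + workdays_lost * workday_cost
            + medical_cost)

# P: probability the vaccine actually provides value =
# works * effectiveness * exposure risk * correct execution (placeholders).
p_vaccine_helps = 0.334 * 0.9 * 0.05 * 0.9

# V for yourself (placeholder inputs).
V_self = value_if_infected(p_death=0.004, life_value=10_000_000,
                           p_severe=0.01, vent_day_value=5_000,
                           p_lingering=0.1, fatigue_day_value=100, fatigue_days=90,
                           workdays_lost=10, workday_cost=200, medical_cost=2_000)

# One other household member: P' = chance you infect them and they
# wouldn't have caught it anyway (placeholder inputs).
p_prime = 0.3 * (1 - 0.05)
V_prime = V_self  # assume a similar risk profile, for illustration

benefit = p_vaccine_helps * (V_self + p_prime * V_prime)
cost = 1_000 / 2 + 500  # materials split over 2 doses, plus unconventionality cost

print(f"benefit ~ ${benefit:,.0f}, cost ~ ${cost:,.0f}, worth it: {benefit > cost}")
```

The point of writing it out this way is that every input becomes an explicit, criticizable assumption rather than an intuition.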

Comment by allamericanbreakfast on Making Vaccine · 2021-02-05T20:42:02.026Z · LW · GW

I agree with the point of your comment, that vaccines brought to clinical trials is a suboptimal reference class. However, I think that this is a locally invalid argument:

Would you conclude that, because some lines of code can navigate a rocket to the moon, that your code is pretty likely to navigate a rocket to Mars?

A computational model plus grounding in theory, if done right, should increase our confidence in the efficacy of a sequence of peptides taken from the virus above the efficacy we'd assume for a random sequence of peptides.

How much? Can't say.

On the other hand, as others have pointed out here, we are comparing a new and perhaps much more effective means of designing a vaccine to the methods that were used from 2000-2015, which may be less effective. Hence, perhaps the reference class is suboptimal in the opposite direction as well.

I have no way to know how to weigh these competing factors. So I think the best thing to do is to start with the basic formula I concocted above, then modify it based on our intuitions about these other factors.

Alternatively, you could very justifiably stick with the rule "I don't take untested medications." Although as someone else pointed out, if you have that rule then perhaps you should also make sure to not use any drugs? I don't have the answer, but wanted to try and provide some clarity for people who are considering breaking the "take no untested medications" rule.

Comment by allamericanbreakfast on Making Vaccine · 2021-02-04T16:46:13.943Z · LW · GW

Well, a couple of researchers estimated that drug mules made a median of $1313 back in 2014, so I'd need to smuggle a lot of cocaine to earn that much. Seems like it would take a while...

Comment by allamericanbreakfast on Making Vaccine · 2021-02-04T04:36:10.941Z · LW · GW

Vaccines that are brought to clinical trials have a 33.4% approval rate, which seems like a reasonable estimate of the chances that this vaccine works if executed correctly. Note that this is from trials conducted from 2000-2015.

I probably have a roughly 5% chance of catching COVID before I'm vaccinated. Given my age, COVID would put me at a 0.2% risk of death. Let's double that to account for suffering and the risk of long-term disability.

If I value my life at $10,000,000, then an intervention that gives me a 33.4% chance of avoiding a 5% chance of a 0.4% chance of death is worth $668. So it seems like I'd want to be vaccinating at least one other person in order for this to be worthwhile.

I welcome any further thoughts on this expected value calculation. In particular, I think it's possible that I'm dramatically underestimating the risk and potential severity of long-term symptoms. It doesn't take much additional risk to make this project worthwhile for a single person.
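For concreteness, that chain of probabilities multiplies out in a few lines, using exactly the estimates above:

```python
# Back-of-envelope expected value of the vaccine, using the estimates above.
p_works = 0.334        # approval rate for vaccines reaching clinical trials, 2000-2015
p_catch = 0.05         # rough chance of catching COVID before conventional vaccination
p_death = 0.004        # 0.2% death risk, doubled for suffering and long-term disability
value_of_life = 10_000_000  # dollars

expected_value = p_works * p_catch * p_death * value_of_life
print(f"${expected_value:.0f}")  # → $668
```

Each factor here is a point estimate, so the output inherits all of their uncertainty; the long-term-symptoms term is the one most likely to move it.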

Comment by allamericanbreakfast on Making Vaccine · 2021-02-04T03:18:29.062Z · LW · GW

I double-cruxed this article because my "voice of caution" objected to it.

I eventually realized the issue was that part of my decision-making process when I do something weird, potentially risky, or expensive, is to consult with friends and family. Yet I feel that the feedback I would get from them would be so thoughtless, negative, frustrating, and potentially damaging, that it's not worthwhile. And I don't want to ignore this "consult someone first" rule, because it seems like a generally good rule that loses its force if ignored.

However, I do know some specific people who might be good to talk it over with. They're warm, open-minded, very smart, scientifically literate, unconventional, have my best interests in mind, trustworthy, and willing to discuss this kind of stuff at length. My next move is probably not to read the paper, but rather to discuss it with them.

Comment by allamericanbreakfast on The Multi-Tower Study Strategy · 2021-02-01T17:43:52.524Z · LW · GW

That’s a harder question, and I have no confident answers yet. I don’t think you’d ideally want to measure chunks by word count. It’s something about how many linked concepts you have to understand to get it. Or maybe how easy it is to synthesize them. Or maybe on how well-developed your memory skills are?

Clearly though, there is a chunk size that’s too small (imagine if you just had to read one sentence of your textbook per day) and a size that’s too big (read an entire textbook in a day).

Comment by allamericanbreakfast on Define Your Learning Goal: Competence Or Broad Knowledge · 2021-01-31T22:49:34.486Z · LW · GW

To me, being competent at sight-reading comes from immersive practice. But sight-reading itself helps you build broad knowledge, because it gives you the ability to sample lots and lots of pieces of music - not just the sounds, but the physicality.

And uniting mind and body in sight reading is an excellent complement to just listening to other people's recordings as you explore the world of classical music. For example, let's say you were capable of stumbling through a Beethoven sonata at low speeds via sight-reading. You might be able to play (badly) 1-2 pages of the sonata in the time it would take you to listen to it.

One way to really get familiar with a wide swath of the classical music literature in, say, an hour a day, might be to listen to 2 sonatas (each ~15 minutes), and sight-read two other sonatas for ~15 minutes. Try to space out your listenings and your sight-readings of the same piece. For example, listen to them in order from first to last, but sight read them in reverse order so that most of the listenings/playthroughs are spaced out. This capitalizes on the spacing effect.

In this way, you would be able to expose yourself to all 32 Beethoven sonatas in a couple weeks. I bet you'd have a much better memory of how they go if you did the sight reading + listening combo, rather than doubling the amount of listening and doing no sight reading.

Comment by allamericanbreakfast on Define Your Learning Goal: Competence Or Broad Knowledge · 2021-01-31T22:41:00.657Z · LW · GW

One challenge in understanding your own memory slips is determining their cause.

For piano performance, is it insufficient practice? Leaning too hard on muscle memory? Not playing on enough different pianos/environments? Playing in front of an audience? Or maybe you just had a bad day that day for some completely unrelated reason, like some other stressor prior to the recital?

There are lots of things you can do to avoid memory slips, and the more the better. But I think it's also good for people to be skeptical/open-minded about inferring cause and effect. Better just to do the virtuous actions and assume they'll all work together to give a benefit.

Comment by allamericanbreakfast on The GameStop Situation: Simplified · 2021-01-31T03:16:31.596Z · LW · GW

This is a great anecdote to show why it's not a good idea to try and time the market.

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2021-01-31T03:14:37.542Z · LW · GW

Yeah, just a tag like that would be ideal as far as I'm concerned. You could also allow people to filter those in or out of their feed.

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2021-01-31T02:51:13.677Z · LW · GW

Aspects of learning that are important but I haven't formally synthesized yet:

  • Visual/spatial approaches to memorization
  • Calibrating reading speeds/looking up definitions/thinking up examples: filtering and organizing to distinguish medium, "future details," and the "learning edge"
  • Mental practice/review and stabilizing an inner monologue/thoughts
  • Organization and disambiguation of review questions/procedures
  • Establishing procedures and stabilizing them so you can know if they're working
  • When to carefully tailor your approach to a particular learning challenge, and when to just behave mechanically (i.e. carefully selecting problems addressing points of uncertainty vs. "just do the next five problems every 3 days")
  • Planning, anticipating, prioritizing, reading directions adequately, for successful execution of complex coordinated tasks in the moment.
  • Picking and choosing what material to focus on and what not to study


Cognitivist vs behaviorist, symbolic vs visual/spatial, exposure vs review, planning vs doing, judgment vs mechanism.

Comment by allamericanbreakfast on Define Your Learning Goal: Competence Or Broad Knowledge · 2021-01-31T02:38:32.491Z · LW · GW

Another thing that might have helped would have been to practice the same piece on many different pianos, ideally with strangers in the room. When you play the same piece on the same piano, you're not only immersing yourself in a specific piece, but in a specific instrument/environment/social context. When you keep the piece the same but change the context, the memory risks falling apart.

Comment by allamericanbreakfast on For those who advocate Anki · 2021-01-31T02:36:03.797Z · LW · GW

I wrote a post about this recently. Spaced repetition/flashcards are for building broad knowledge. What you want is competence, which comes from immersion. Both are useful forms of learning, but they are distinct and built differently.

I think your plan of finding ways to incorporate Korean immersion into your life is exactly the right way to go.

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2021-01-31T02:31:48.486Z · LW · GW

Cognitive vs. behaviorist approaches to the study of learning

I. Cognitivist approaches

To study how people study on an internal, mental level, you could do a careful examination of what they report doing with their minds as they scan a sentence of a text that they're trying to learn from.

For example, what does your mind do if you read the following sentence, with the intent to understand and remember the information it contains?

"The cerebral cortex is the site where the highest level of neural processing takes place, including language, memory and cognitive function."

For me, I do the following (which takes deliberate effort and goes far beyond what I would do/experience doing if I were to just be reading "naturally"):

  • Isolate and echo "cerebral cortex" and "neural processing." By this, I mean I'll look at and inner-monologue the phrase by itself, waiting a few seconds at least to let it "sink in."
  • Visualize the phrase "highest level of neural processing," by picturing a set of tiers with sort of electrical sparks/wires in them, with the word "cerebral cortex" across the top and a sort of image of grey matter resting on it like a brain on a shelf.
  • Isolate and echo "language," and visualize a mouth on that "top shelf" image.
  • Isolate and echo "memory," and visualize a thought-bubble cloud on the "top shelf" image.
  • Isolate and echo "cognitive function," and visualize a white board with some sort of diagram on it on the "top shelf" image.
  • Try to paraphrase the whole sentence from memory with my eyes closed.

Going beyond this, I might do more things to try and make the contents of the sentence sink in deeper, taken alone. I might give examples by thinking of things I couldn't do without my cerebral cortex: speak, write, and understand people talking to me; remember my experiences, or make decisions. I'd be a mute, stone-like void, a vegetable; or perhaps demented.

II. Behaviorist approaches

A behaviorist approach misses a lot of the important stuff, but also offers some more tractable solutions. A behaviorist might study things like how long a skilled learner's eyes rest on any given sentence in a textbook, or rates of information transmission in spoken language. A behaviorist recommendation might be something like:

"All languages transmit information at a fairly consistent rate. But different fields compress information, reference prior knowledge, and use intuitive human cognitive abilities like visual/spatial memory, to different degrees.

Because of this, different scholarly fields may be best read at differing rates.

Furthermore, each reading might need to be approached differently. Some text that was unfamiliar before is now not only familiar but unnecessary. Other text that was overly detailed is now of primary relevance. The text may no longer function to teach new concepts, but rather to remind you of how concepts you understand fit together, or to fill gaps in a model that you already have in place.

Because of that, different sections of the text, and different readings, in different textbooks, need to be read at different speeds to match your ideal rate of information intake. Too fast or too slow, and you will not learn as quickly as you could.

But because your mind, and the particular text you're reading, are so idiosyncratic, it's not tractable to give you much guidance about this.

Instead, simply try slowing down your reading speed in difficult sections. Try pausing for several seconds after every sentence, or even after certain phrases."

There's very little reference to what people should do "inside their minds" here. The most is a casual reference to "difficult sections," which implies that the reader has to use their inner reaction to the text to gauge whether or not the text is difficult and worth slowing down on.

III. Conclusion

This line between cognitivist and behaviorist approaches to the science of learning seems valuable for studying one's own process of learning how to learn. Of course, there is an interface between them, just as there's an interface between chemistry and biology.

But defining this distinction allows you to limit the level of detail you're trying to capture, which can be valuable for modeling. As I continue to explore learning how to learn, I'll try to do it with this distinction in mind.

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2021-01-31T02:04:43.780Z · LW · GW

I use LessWrong as a place not just to post rambly thoughts and finished essays, but something in between.

The in-between parts are draft essays that I want feedback on, and want to get out while the ideas are still hot. Partly it's so that I can have a record of my thoughts that I can build off of and update in the future. Partly it's that the act of getting my words together in a way I can communicate to others is an important part of shaping my own views.

I wish there were a way to tag frontpage posts with something like "Draft - seeking feedback" vs. "Final draft." The editor is so much better for frontpage posts that I often find it a hassle to put a draft in shortform. Plus, shortform feels like a "throwaway" space, which is what it's intended for, but posting there doesn't do justice to the amount of work I've put into something.

Comment by allamericanbreakfast on What are some real life Inadequate Equilibria? · 2021-01-30T07:54:41.276Z · LW · GW

One is blame games. Many problems don’t have a clear cause. Pointing the finger creates stress and defensiveness that’s counterproductive to solving the problem and emotionally harmful, and it doesn’t help identify how to avoid similar problems in the future. It would be better not to start at all. But because each person anticipates being on the receiving end of a lot of unfair blame, none of them can afford to opt out.

Comment by AllAmericanBreakfast on [deleted post] 2021-01-29T03:54:35.343Z

We could require that both the agent and the predictor are machines that always halt.

However, to think about Newcomb's problem entails "casting yourself" as both the agent and the predictor, with a theoretically unlimited amount of time to consider strategies for the agent to defeat the predictor, as well as for the predictor to defeat the agent.

That's just shifting the goalposts. Now you are predicting the behavior of both the agent and predictor. If you could create an agent capable of defeating the predictor, you'd have to adjust the predictor. If you could create a predictor capable of defeating the agent, you'd have to resume proposing strategies for the agent.

You are now the machine trying to simulate your own future behavior with how you modify the agent and predictor. And there is no requirement that you take a finite amount of time, or use a finite amount of computing power, when considering the problem. For example, the problem does not say "come up with the best strategy for the agent and predictor you can within X minutes."

Hence, we have two equally uninteresting cases:

  • The agent/thinker are limited in the time or computational resources available to them, while the predictor is unlimited.
  • The agent/thinker and predictor are both unlimited in time and computational resources, and both must be continuously and forever modified to try to defeat each other's strategies. They are leapfrogging up the oracle hierarchy, forever. Newcomb's problem invites you to try to compute where they'll end up, and the answer is undecidable, a loop.
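The regress in that second case can be sketched in a few lines. This is a toy model with hypothetical names, not anyone's formal decision theory: an agent that tries to out-guess its predictor by simulating it, while the predictor simulates the agent, never bottoms out; any finite budget (here `MAX_DEPTH`) is exhausted before a fixed point is reached.

```python
MAX_DEPTH = 100  # stand-in for any finite budget; the regress never settles on its own

def agent(depth=0):
    """Toy agent: do the opposite of whatever the predictor is simulated to predict."""
    if depth > MAX_DEPTH:
        raise RecursionError("agent's simulation of the predictor never bottomed out")
    return not predictor(depth + 1)

def predictor(depth=0):
    """Toy predictor: predict the agent by simulating it."""
    if depth > MAX_DEPTH:
        raise RecursionError("predictor's simulation of the agent never bottomed out")
    return agent(depth + 1)

try:
    predictor()
    outcome = "settled on a fixed point"
except RecursionError:
    outcome = "mutual simulation diverges"
print(outcome)
```

Requiring both machines to halt just moves the non-termination up one level, to whoever is designing them against each other.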

Comment by allamericanbreakfast on Extracting Money from Causal Decision Theorists · 2021-01-28T23:46:57.203Z · LW · GW

I’ve picked up my game theory entirely informally. But in real world terms, perhaps we’re imagining a situation where a randomization approach isn’t feasible for some other reason than a random number generator being unavailable.

This connects slightly with the debate over whether or not to administer untested COVID vaccine en masse. To pick randomly “feels scary” compared to picking “for a reason,” but to pick “for a reason” when there isn’t an actual evidence basis yet undermines the authority of regulators, so regulators don’t pick anything until they have a “good reason” to do so. Their political calculus, in short, makes them unable to use a randomization scheme.

So in terms of real world applicability, the constraint on a non-randomizing strategy seems potentially relevant, although the other aspects of this puzzle don’t map onto COVID vaccine selection specifically.

Comment by allamericanbreakfast on Build Your Number Sense · 2021-01-27T22:21:03.878Z · LW · GW

Thanks, I was kind of hasty in throwing the visual examples together :D

Comment by allamericanbreakfast on What is up with spirituality? · 2021-01-27T06:30:49.865Z · LW · GW

Maybe it's that we lack language to articulate what are in fact perfectly ordinary bioelectrochemical experiences, due to millennia of religious dominance? And so, when we need to describe these inner states, we resort to the language of spirituality?

From this perspective, "what's up with spirituality" is another way of saying "what's up with having feelings, especially important and meaningful and hard-to-explain feelings" and "why are people into exploring their feelings?" Which is totally a valid question.

But automatically couching it in terms of "what's up with spirituality" seems, to me, to be a symptom of the history of language and politics. Are you more interested in the feelings, the framework, or the culturally-specific ways in which people connect the two?

Comment by allamericanbreakfast on Leaky Delegation: You are not a Commodity · 2021-01-26T16:27:55.965Z · LW · GW

Great post, and it can get even trickier. On one of my team projects, I had formerly modeled “insource” vs “outsource” as meaning “someone from our college does it” vs “we hire a contractor to do it.”

After telling my teammate to tell the facilities director to hire a contractor to have his team install some signs, I discovered just how wrong that is. This entailed four levels of outsourcing, not just one. And every layer of outsourcing makes the project riskier.

The sign installers do a bad job because neither we nor they are around. Their boss doesn’t check the work because he already got paid. The facilities director is slow and opaque because he did a bad job and doesn’t want it known. My teammate complains and points the finger because he’s more worried about his job with the college than getting the project done.

This also reminds me of the story where one hitman hires another hitman to do an assassination. That hitman hires another who hires another who turns them in to the police.

So it's not just a matter of "insourcing" vs. "outsourcing" "a task." Every task has many parts, and when you outsource, there can be many layers.

Comment by allamericanbreakfast on Lessons I've Learned from Self-Teaching · 2021-01-25T19:58:46.471Z · LW · GW

There's definitely a tradeoff between breadth and depth/speed. On a real-world project, you can attain great speed and/or great depth on the narrow set of techniques/concepts that are directly relevant to the work you do every day.

It's very expensive to maintain that kind of fluency. I've never done any programming in a language that uses prefix rather than infix operators, yet I know the difference from just a couple of minutes total studying the concept at a few widely-spaced points in time. The concept would not pose a challenge were I to pick up a functional programming language, although it would take time to build a habit of using prefix notation.
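For readers unfamiliar with the distinction mentioned above, here is a minimal illustration in Python. The prefix evaluator is a hypothetical toy that mimics Lisp-style notation using nested tuples:

```python
from operator import add, mul

# Infix notation: operators sit between their operands.
infix_result = 1 + 2 * 3

# Prefix (Lisp-style) notation: the operator comes first, as in (+ 1 (* 2 3)).
# A nested (op, arg, arg, ...) tuple stands in for the Lisp expression.
def eval_prefix(expr):
    """Evaluate a nested prefix tuple; bare numbers evaluate to themselves."""
    if isinstance(expr, tuple):
        op, *args = expr
        return op(*(eval_prefix(a) for a in args))
    return expr

prefix_result = eval_prefix((add, 1, (mul, 2, 3)))
print(infix_result, prefix_result)  # both 7
```

The two notations express the same computation; only the surface syntax differs, which is why the concept is cheap to retain with spaced review.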

When you overinvest in study-through-practice in a concept you've already mastered, you will forget a lot of the more broad knowledge. Anki gives you a way to retain it, or to get it back when you're ready for it. If you see yourself as having no need for a broad knowledge-base -- if you see your career as being a happy code monkey banging out programs that are within the wheelhouse you've already established, then you're probably good.

But if you have a different vision for your career, it's possible that broad knowledge will be really helpful. And spaced repetition/Anki gives you a set of tools to build and maintain that broad knowledge-base.

I'm really glad you posted this objection to Anki, because I think it's probably common. It's also a fair point: sometimes, we don't care about building a broad knowledge base. We're just trying to become fluent in the narrow set of skills that let us execute a technical project. Clarifying that distinction is very valuable from the perspective of budgeting your study time wisely.

Comment by allamericanbreakfast on Exercise: Taboo "Should" · 2021-01-24T20:57:43.874Z · LW · GW

Sorry, I know that this runs the risk of being an exchange of essays, making it hard to respond to each point.

In Toxoplasma of Rage, the part just prior to the reference to the war on terror goes like this:

Toxoplasma is a neat little parasite that is implicated in a couple of human diseases including schizophrenia. Its life cycle goes like this: it starts in a cat. The cat poops it out. The poop and the toxoplasma get in the water supply, where they are consumed by some other animal, often a rat. The toxoplasma morphs into a rat-compatible form and starts reproducing. Once it has strength in numbers, it hijacks the rat’s brain, convincing the rat to hang out conspicuously in areas where cats can eat it. After a cat eats the rat, the toxoplasma morphs back into its cat compatible form and reproduces some more. Finally, it gets pooped back out by the cat, completing the cycle.

[Lion King image] It’s the ciiiiiircle of life!

What would it mean for a meme to have a life cycle as complicated as toxoplasma?

Consider the war on terror.

Now, maybe Scott's description of Toxoplasma doesn't evoke the same visceral disgust reaction you might have if you were scooping out the litter box of a cat that you knew was infected with toxoplasma.

But it seems clear that Scott's conscious intent here was to evoke that feeling. The point is not to use toxoplasma as intellectual scaffolding to explain the cause-and-effect model of violence-begets-violence. 

Instead, it was to link the violence-begets-violence model with disgusting imagery. Read that article, and when somebody talks about the War on Terror, if your previous association was with the proud image of soldiers going off to a noble battle with a determined enemy, now it's with cat shit.

Likewise, read enough "problem with media" articles that selectively reference the silliest media pieces -- classic cherry picking on a conceptual level -- then slowly but surely, when you think of "media" you think of the worst stuff out there, rather than the average or the best. Is Matt Yglesias looking for the silliest takes that Scott Alexander ever wrote and excoriating them? No, because Scott's on his team in the blogosphere.

Now, you could certainly interpret that as Yglesias administering a status slap-down to some journalist(s), but if his goal was a status slap-down there are far more effective ways to do that. He could just write an actual hit-piece.

I disagree with this. In fact, his methods are an extremely effective way of administering a status slap-down. If he wrote an actual hit-piece, that's what his readership would interpret it as: a hit-piece. But when he writes this article, his readership interprets it as a virtuous piece advocating good epistemic standards. It's a Trojan Horse.

There are non-status/ethics/politics reasons to think about things which have status/ethics/politics implications.

This is true. But what I'm saying is that advocacy of epistemic accuracy as opposed to virtue signaling is in many contexts primarily motivated by status/ethics/politics implications.

And that is fine. There is nothing inherently wrong with an argument about the nature of reality also having status/ethics/politics implications. It's even fine to pretend, for vague "this makes us think better" reasons, that for the sake of a discussion those status/ethics/politics implications are irrelevant.

But those implications are always present. They've just been temporarily pushed to the side for a moment. Often, they come swooping back in, right after the "epistemic discussion" is concluded. That's the nature of a policy debate.

Comment by allamericanbreakfast on Exercise: Taboo "Should" · 2021-01-24T19:03:51.425Z · LW · GW

Making factory farmed eggs available to customers isn't objectively ethical. An appeal to choice only works if the choice decreased the amount of suffering that there is in the world.

Since that appears not to be the case, you need to find another angle. Maybe factory farmed eggs decrease the amount of suffering in the world because egg eaters can eat more eggs for less money? Even if that were the case, that's only a small amount of suffering removed from the world, outweighed by the suffering of factory farmed chickens. Moreover, egg eaters can still explore other foods. So you'd need to show the data.

Basing morality off choice in and of itself isn't viable. Just because something causes a decision state does not magically make that thing moral or immoral. You also run into problems when basing morality off of choice like the libertarians who want to destroy the government because regulations technically cause more harm today than ever before. Basing morality off of choice is a moralistic fallacy and frankly dualistic.

Taking choice into account does matter, though, since choice affects the amount of suffering there is in the world. For example, you can choose to avoid causing agony. But that doesn't make the choice good in and of itself, only instrumentally good, since it has caused a decrease in the amount of suffering.

Comment by allamericanbreakfast on Exercise: Taboo "Should" · 2021-01-24T18:46:56.136Z · LW · GW

My gut reaction to this post is that it's importantly wrong. This is just my babbled response, and I don't have time to engage in a back-and-forth. Hope you find it valuable though!

My issue is with the idea that any of your examples have successfully tabooed "should."

In fact, what's happening here is that the idea that we "should taboo 'should'" is being used to advocate for a different policy conclusion.

Let's use Toxoplasma Memes as an example. Just for starters, framing Jihad vs. War on Terror as "toxoplasma" works by choosing a concept handle that evokes a disgust reaction to effect an ethical reframing of the issue. Both Jihad/War on Terror theorists and "Toxoplasma" theorists have causal models that are inseparable from their ethical models. To deny that is not to accomplish a cleaner separation between epistemology and ethics; it's to disguise reality to give cover to one's own ethical/epistemic combination. You can do it, sure; it's the oldest trick in the book. But if I were to say you shouldn't, "because it disguises the truth," I'd just be a hypocrite.

Likewise, the fact that "tabooing 'should'" makes the Copenhagen interpretation of ethics seem silly also illustrates how "tabooing 'should'" is a moral as much as an epistemic move. The point is to make an idea and its advocates look silly. It's to manipulate their social status and tickle people's jimmies until they agree with you. It might not work, but that's the intent.

Yglesias simply misrepresented the claim made by at least the snippet of the PS5 review that you cite.

The review said:

a lot of people simply won’t be able to buy a PlayStation 5, regardless of supply. Or if they can, concerns over increasing austerity in the United States and the growing threat of widespread political violence supersede any enthusiasm

Yglesias said:

the sales outlook for a new video game console system is very good.

"Regardless of supply" is a colloquialism. I think a more charitable reading of this statement, which obviously isn't meant primarily as a projection of console sales, would be that there will be people who want a PS5 who can't afford it, and that we have bigger issues in the world than being excited about the PS5. This is obviously true.

Yglesias's rhetoric isn't just meant to refocus the discussion on the sales outlook for the PS5. It's to smack down the status of the author of the PS5 review and those in the same camp, as thoughtless nincompoops who don't understand reality and therefore aren't qualified to be moral authorities either.

Now, that's all fine, because it's really all there is to do once you're getting into the realm of policy. If your goals and values aren't axiomatic, but if instead you're debating some conjunction of epistemics, values, and goals, as we usually are, then it might be a great rhetorical move to pretend like you're just having a "facts and logic and predictive abilities" debate, but that's rarely true.

Like, if Yglesias really was interested in that, then why would he ever, ever pick out such an obviously stupid piece of writing to address in the first place? He has the greatest thinkers and writers available to him to engage with! Why pick out the dumbest thing anybody ever wrote about the PS5?

Well, you know why already. There is a contest over facts/status/virtue/goals going on known as the "Culture Wars," and he's participating in it while pretending like he's not.

And maybe this is a culture war that needs to be fought. Maybe we really are ruled by the dumbest things anybody ever wrote on Medium and social media, and it's time to change that. And maybe there's value in pretending like you're performing a purely technical analysis when considering economics or the war on terror or how to address homelessness. I'm not a subjectivist. I do think that, although it may be impossible to prove what the truth is in some perfectly self-satisfying fashion, there's such a thing as being "more wrong" and "less wrong," and that it's virtuous to strive for the latter.

But I think that here, in our community of practice where we strive for the latter, we should strive to be skeptical about claims to objectivity. It's not that it's impossible. It's that it's a great rhetorical move to advance a subjective position, and how would you tell the difference?