Posts

Observe, babble, and prune 2020-11-06T21:45:13.077Z
Should students be allowed to give good teachers a bonus? 2020-11-02T00:19:54.578Z
The Trouble With Babbles 2020-10-22T21:31:23.260Z
What's the difference between GAI and a government? 2020-10-21T23:04:01.319Z
How to not be an alarmist 2020-09-30T21:35:59.285Z
Visualizing the textbook for fun and profit 2020-09-24T19:37:47.932Z
Let the AI teach you how to flirt 2020-09-17T19:04:05.632Z
A case study in simulacra levels and the Four Children of the Seder 2020-09-14T22:31:39.484Z
Rationality and playfulness 2020-09-12T05:14:29.624Z
Choose simplicity and structure. 2020-09-10T21:45:13.770Z
Resist epistemic (and emotional) learned helplessness! 2020-09-10T02:58:24.681Z
Loneliness and the paradox of choice 2020-09-09T16:30:20.374Z
Loneliness 2020-09-08T08:14:56.557Z
Should we write more about social life? 2020-08-19T20:07:17.055Z
Sentence by Sentence: "Why Most Published Research Findings are False" 2020-08-13T06:51:02.126Z
Tearing down the Chesterton's Fence principle 2020-08-02T04:56:43.339Z
Change the world a little bit 2020-07-26T22:35:05.546Z
Inefficient doesn't mean indifferent, but it might mean wimpy. 2020-07-20T18:27:48.332Z
Praise of some popular LW articles 2020-07-20T00:32:35.849Z
Criticism of some popular LW articles 2020-07-19T01:16:50.230Z
Telling more rational stories 2020-07-17T17:47:31.831Z
AllAmericanBreakfast's Shortform 2020-07-11T19:08:01.705Z
Was a PhD necessary to solve outstanding math problems? 2020-07-10T18:43:17.342Z
Was a terminal degree ~necessary for inventing Boyle's desiderata? 2020-07-10T04:47:15.902Z
Survival in the immoral maze of college 2020-07-08T21:27:27.214Z
An agile approach to pre-research 2020-06-25T18:29:47.645Z
The point of a memory palace 2020-06-20T01:00:41.975Z
Using a memory palace to memorize a textbook. 2020-06-19T02:09:18.172Z
Bathing Machines and the Lindy Effect 2020-06-17T21:44:46.931Z
Two Kinds of Mistake Theorists 2020-06-11T14:49:47.186Z
Visual Babble and Prune 2020-06-04T18:49:30.044Z
Trust-Building: The New Rationality Project 2020-05-28T22:53:36.876Z
My stumble on COVID-19 2020-04-18T04:32:30.987Z
How superforecasting could be manipulated 2020-04-17T06:47:51.289Z
Alarm bell for the next pandemic, V.2 2020-04-15T06:47:59.415Z
Curiosity: A Greedy Feeling 2020-04-11T04:38:09.544Z
Would 2014-2016 Ebola ring the alarm bell? 2020-04-08T02:01:47.031Z
Would 2009 H1N1 (Swine Flu) ring the alarm bell? 2020-04-07T07:16:11.367Z
An alarm bell for the next pandemic 2020-04-06T01:35:03.283Z
Has LessWrong been a good early alarm bell for the pandemic? 2020-04-03T09:44:39.205Z
Forecasting an 80% chance of an effective anti COVID-19 drug (probably Remdesivir) 2020-03-15T19:21:31.187Z
Raw Post: Talking With My Brother 2019-07-13T02:57:42.142Z
AI Alignment "Scaffolding" Project Ideas (Request for Advice) 2019-07-11T04:39:11.401Z
The I Ching Series (2/10): How should I prioritize my career-building projects? 2019-07-09T22:55:05.848Z
The Results of My First LessWrong-inspired I Ching Divination 2019-07-08T21:26:36.133Z
Here Be Epistemic Dragons 2019-07-04T22:31:44.061Z
Archive of all LW essay contests 2019-05-30T06:40:02.587Z
Seeking suggestions for EA cash-prize contest 2019-05-29T20:44:35.311Z

Comments

Comment by allamericanbreakfast on What is “protein folding”? A brief explanation · 2020-12-01T18:00:24.235Z · LW · GW

This blog post is from CASP13 in 2018, not this year's contest, but it still does a good job of digging into some of the nitty-gritty issues in evaluating the advance this represents and the questions about its utility for practical research.

Comment by allamericanbreakfast on What is “protein folding”? A brief explanation · 2020-12-01T17:56:13.130Z · LW · GW

I think this blog post does a careful job of evaluating the advance this represents:

https://moalquraishi.wordpress.com/2018/12/09/alphafold-casp13-what-just-happened/

Comment by allamericanbreakfast on [Linkpost] AlphaFold: a solution to a 50-year-old grand challenge in biology · 2020-11-30T18:16:30.588Z · LW · GW

I don’t know that there are many tech advances that count as an unqualified “good” from an X-risk perspective. In this case, any advance in bioengineering might make it easier to create bioweapons, for example. Any advance in AI creates more demand for AI...

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2020-11-28T05:01:02.618Z · LW · GW

Thinking, Too Fast and Too Slow

I've noticed that there are two important failure modes in studying for my classes.

Too Fast: This is when learning breaks down because I'm trying to read, write, compute, or connect concepts too quickly.

Too Slow: This is when learning fails, or just proceeds too inefficiently, because I'm being too cautious, obsessing over words, trying to remember too many details, etc.

One hypothesis is that there's some speed of activity that's ideal for any given person, depending on the subject matter and their current level of comfort with it.

I seem to have some level of control over the speed and cavalier confidence I bring to answering questions. Do I put down the first response that comes into my head, or wrack my brains looking for some sort of tricky exception that might be relevant?

Deciding what that speed should be has always been intuitive. Is there some leverage here to enhance learning by sensitizing myself to the speed at which I ought to be practicing?

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2020-11-28T01:37:27.258Z · LW · GW

Certainly it is possible to find success in some areas anonymously. No argument with you there!

I view LW-style rationality as a community of practice, a culture of people aggregating, transmitting, and extending knowledge about how to think rationally. As in "The Secret of Our Success," we don't accomplish this by independently inventing the techniques we need to do our work. We accomplish this primarily by sharing knowledge that already exists.

Another insight from TSOOS is that people use prestige as a guide for who they should imitate. So rationalists tend to respect people with a reputation for rationality.

But what if a reputation for rationality can be cultivated separately from tangible accomplishments?

In fact, prestige is already one step removed from the tangible accomplishments. But how do we know if somebody is prestigious?

Perhaps a reputation can be built not by gaining the respect of others through a track record of tangible accomplishments, but by persuading others that:

a) You are widely respected by other people whom they haven't met, or by anonymous people they cannot identify, making them feel behind the times, out of the loop.

b) The basis on which people allocate prestige conventionally is flawed, and they should do it differently in a way that is favorable to you, making them feel conformist or conservative.

c) Other people's track records of tangible accomplishments are in fact worthless, because they pale beside the incredible value of the project that the reputation-builder is "working on," or are suspect in terms of their actual utility. This makes people insecure.

d) They can participate in the incredible value you are generating, by evangelizing your concept and thereby evangelizing you. Or of course, just by donating money. This makes people feel a sense of meaning and purpose.

I could think of other strategies for building hype. One is to participate in cooperative games, whereby you and others hype each other, creating a culture of exclusivity. If enough people do this, it could perhaps trick our monkey brains into perceiving someone as socially dominant in a much larger sphere than they really are.

Underlying this anxious argument is a conjunction that I want to make explicit, because it could lead to a fallacy:

  1. It rests on a hypothesis that prestige has historically been a useful proxy for success...
  2. ... and that imitation of prestigious people has been a good way to become successful...
  3. ... and that we're hardwired to continue using it now...
  4. ... and that prestige can be cheap to cultivate or credit easy to steal in some domains, with rationality being one such domain; or that we can delude ourselves about somebody's prestige more easily in a modern social and technological context...
  5. ... and that we're likely enough to imitate a rationalist-by-reputation rather than a rationalist-in-fact that this is a danger worth speaking about...
  6. ... and perhaps that one such danger is that we pervert our sense of rationality to align with success in reputation-management rather than success in doing concrete good things.

You could argue against this anxiety by arguing against any of these six points, and perhaps others. It has many failure points.

One counterargument is something like this:

People are selfish creatures looking out for their own success. They have a strong incentive not to fall for hype unless it can benefit them. They are also incentivized to look for ideas and people who can actually help them be more successful in their endeavors. If part of the secret of our success is cultural transmission of knowledge, another part is probably the cultural destruction of hype. Perhaps we're wired for skepticism of strangers and slow admission into the circle of people we trust.

Hype is excitement. Excitement is a handy emotion. It grabs your attention fleetingly. Anything you're excited about has only a small probability of being as true and important as it seems at first. But occasionally, what glitters is gold. Likewise, being attracted to a magnetic, apparently prestigious figure is fine, even if the majority of the time they prove to be a bad role model, if we're able to figure that out in time to distance ourselves and try again.

So the Secret of Our Success isn't blind, instinctive imitation of prestigious people and popular ideas. Nor is it rank traditionalism.

Instead, it's cultural and instinctive transmission of knowledge among people with some capacity for individual creativity and skeptical realism.

So as a rationalist, the approach this might suggest is to use popularity, hype, and prestige to decide which books to buy, which blogs to peruse, which arguments to read. But actually read and question these arguments with a critical mind. Ask whether they seem true and useful before you accept them. If you're not sure, find a few people who you think might know better and solicit their opinion.

Gain some sophistication in interpreting why controversy among experts persists, even when they're all considering the same questions and are looking at the same evidence. As you go examining arguments and ideas in building your own career and your own life, be mindful not only of what argument is being made, but of who's making it. If you find them persuasive and helpful, look for other writings. See if you can form a relationship with them, or with others who find them respectable. Look for opportunities to put those ideas to the test. Make things.

I find this counter-argument more persuasive than the idea of being paranoid about people's reputations. In most cases, there are too many checks on reputation for a faulty one to last too long; there are too many reputations with a foundation in fact for the few baseless ones to be common confounders; we seem to have some level of instinctive skepticism that prevents us from giving ourselves over fully to a superficially persuasive argument or to one person's ill-considered dismissal; and even being "taken in" by a bad argument may often lead to a learning process that has long-term value. Perhaps the vivid examples of durable delusions are artifacts of survivorship bias: most people have many dalliances with a large number of bad ideas, but select enough of the few true and useful ones to land in a pretty good place in the end.

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2020-11-27T19:36:11.715Z · LW · GW

A celebrity is someone famous for being famous.

Is a rationalist someone famous for being rational? Someone who’s leveraged their reputation to gain privileged access to opportunity, other people’s money, credit, credence, prestige?

Are there any arenas of life where reputation-building is not a heavy determinant of success?

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2020-11-25T21:58:53.555Z · LW · GW

The Rationalist Move Club

Imagine that the Bay Area rationalist community did all want to move, but that no individual was sure enough of the others' interest to invest energy in making plans. Nobody acts like they want to move, and the move never happens.

Individuals are often willing to take some level of risk and make some sacrifice up-front for a collective goal with big payoffs. But not too much, and not forever. It's hard to gauge true levels of interest based off attendance at a few planning meetings.

Maybe one way to solve this is to ask for escalating credible commitments.

A trusted individual sets up a Rationalist Move Fund. Everybody who's open to the idea of moving puts $500 in a short-term escrow. This makes them part of the Rationalist Move Club.

If the Move Club grows to a certain number of members within a defined period of time (say, 20 members by March 2021), then they're invited to planning meetings for a defined period of time, perhaps one year. This is the first checkpoint. If the Move Club has not grown to that size by then, the money is returned and the project is cancelled.

By the end of the pre-defined planning period, there could be one of three majority consensus states, determined by vote (approval vote, obviously!):

  1. Most people feel there is a solid timetable and location for a move, and want to go forward with that plan as long as half or more of the Move Club members also approve of this option. To cast a vote approving of this choice requires an additional $2,000 deposit per person into the Move Fund, which is returned along with their initial $500 deposit after they've signed a lease or bought a residence in the new city, or in 3 years, whichever is sooner.
  2. Most people want to continue planning for a move, but aren't ready to commit to a plan yet. To cast a vote approving of this choice requires an additional $500 deposit per person into the Move Fund, unless they paid $2,000 to approve of option 1.
  3. Most people want to abandon the move project. Anybody approving only of this option has their money returned to them and exits the Move Club, even if (1) or (2) is the majority vote. If this option is the majority vote, all money in escrow is returned to the Move Club members and the move project is cancelled.

Obviously, the timetables and monetary commitments could be modified. Other "commitment checkpoints" could be added in as well. I don't live in the Bay Area, but if those of you who do feel this framework could be helpful, please feel free to steal it.
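
As a sketch of how the escrow-and-checkpoint mechanics might work, here's a minimal Python model. All names, amounts, and thresholds are illustrative assumptions, and the majority and quorum rules are simplified to a plain plurality; this is a sketch of the mechanism, not a spec.

```python
# A minimal sketch of the escrow-and-checkpoint mechanism described above.
# All amounts and thresholds are illustrative assumptions, and the
# majority/quorum rules are simplified to a plain plurality.

INITIAL_DEPOSIT = 500    # join the Move Club
PLAN_DEPOSIT = 2_000     # approve a concrete move plan (option 1)
CONTINUE_DEPOSIT = 500   # approve continued planning (option 2)
MIN_MEMBERS = 20         # first-checkpoint threshold


class MoveClub:
    def __init__(self):
        self.escrow = {}  # member -> dollars held in escrow

    def join(self, member):
        self.escrow[member] = INITIAL_DEPOSIT

    def first_checkpoint(self):
        """Cancel and refund everyone if too few members joined in time."""
        if len(self.escrow) < MIN_MEMBERS:
            refunds, self.escrow = self.escrow, {}
            return "cancelled", refunds
        return "planning", {}

    def final_vote(self, approvals):
        """approvals: member -> subset of {'plan', 'continue', 'abandon'}.

        Approval voting: members may approve any number of options, but
        approving 'plan' or 'continue' requires a further deposit.
        """
        tallies = {"plan": 0, "continue": 0, "abandon": 0}
        refunds = {}
        for member, opts in approvals.items():
            for opt in opts:
                tallies[opt] += 1
            if "plan" in opts:
                self.escrow[member] += PLAN_DEPOSIT
            elif "continue" in opts:
                self.escrow[member] += CONTINUE_DEPOSIT
            if opts == {"abandon"}:
                refunds[member] = self.escrow.pop(member)  # out, whatever wins
        winner = max(tallies, key=tallies.get)
        if winner == "abandon":
            refunds.update(self.escrow)
            self.escrow = {}
            return "cancelled", refunds
        return winner, refunds
```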

Comment by allamericanbreakfast on Why is there a "clogged drainpipe" effect in idea generation? · 2020-11-23T07:44:31.034Z · LW · GW

The mind's a hack. So maybe it's just harnessing the same mechanism you use to suppress socially undesirable thoughts?

Rather than saying "forget about it," it's saying "ABSOLUTELY DO NOT SAY THIS THING."

Saying it shows the mind that there are no bad consequences, and allows it to stop focusing on avoidance.

Comment by allamericanbreakfast on Epistemic Progress · 2020-11-23T06:04:53.200Z · LW · GW

I’m less interested in comparing groups of forecasters with each other based on Brier scores than in getting a referendum on forecasting generally.

The forecasting industry has a collective interest in maintaining its reputation for predictive accuracy on general questions. I want to know whether forecasters are in fact accurate on general questions, or whether some of their apparent success rests on choosing the questions they address with some cunning.

Comment by allamericanbreakfast on It’s not economically inefficient for a UBI to reduce recipient’s employment · 2020-11-23T05:28:22.476Z · LW · GW

UBI could enhance production for some people, if it enables them to invest more in job skills or other forms of capital. The argument for every social program - the military and police, vaccination, education, infrastructure, scientific R&D, and so on - is that they produce more value than they cost.

This also applies to forms of welfare. For example, the ER visits avoided by housing the homeless may save the taxpayer more money than the housing costs to provide.

The essential argument about UBI is not whether greater leisure time is worth the cost.

It's whether we can get more leisure time and more production at a net savings to the taxpayer with UBI.

For example, I am currently in school preparing for a degree in bioinformatics, but I am also working part-time in my old job as a piano teacher. Society could allow me to pump more STEM knowledge into my head if I didn't have to work 20 hours a week providing an after-school activity for bored rich children. It could also reduce the risk that I'll burn out before I make good on my investment.

Whether this sort of dynamic outweighs the productive loss from people choosing to live off UBI and not work at all is an empirical question.
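
As a sketch of what that empirical question involves, here is a toy version of the taxpayer accounting. Every number below is an invented assumption; the point is only to show which quantities would need to be measured.

```python
# A back-of-envelope sketch of the "net savings to the taxpayer" framing.
# Every number below is an invented assumption, not an empirical estimate;
# the point is to show which quantities the empirical question turns on.

ubi_transfer = 12_000         # annual UBI payment per person (assumed)
admin_savings = 3_000         # welfare bureaucracy and programs it replaces (assumed)
upskilling_tax_gain = 4_000   # annualized extra taxes from skill investment (assumed)
nonworker_tax_loss = 2_500    # annualized taxes lost to people who stop working (assumed)

net_cost = ubi_transfer - admin_savings - upskilling_tax_gain + nonworker_tax_loss
print(f"Net annual cost per person: ${net_cost:,}")
# Positive means the program costs the taxpayer on net; negative means it saves money.
```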

Comment by allamericanbreakfast on Epistemic Progress · 2020-11-22T03:08:42.764Z · LW · GW

I have an idea along these lines: adversarial question-asking.

I have a big concern about various forms of forecasting calibration.

Each forecasting team establishes its reputation by showing that its predictions, in aggregate, are well-calibrated and accurate on average.

However, questions are typically posed by a questioner who's part of the forecasting team. This creates an opportunity for them to ask a lot of softball questions that are easy for an informed forecaster to answer correctly, or at least to calibrate their confidence on.

By advertising their overall level of calibration and average accuracy, they can "dilute away" inaccuracies on hard problems that other people really care about. They gain a reputation for accuracy, yet somehow don't seem so accurate when we pose a truly high-stakes question to them.
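
To make the dilution concrete, here is a toy Brier-score calculation. The question mix and probabilities are invented for illustration.

```python
# A toy demonstration of how "softball" questions can dilute away poor
# accuracy on the hard questions people care about. The question mix and
# probabilities are invented. Brier score: mean squared error between
# forecast probabilities and binary outcomes; lower is better.

def brier(pairs):
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

easy = [(0.95, 1)] * 90  # 90 softballs: confident and right
hard = [(0.80, 0)] * 10  # 10 hard questions: confident and wrong

print(f"Easy questions only: {brier(easy):.3f}")         # ~0.003 (excellent)
print(f"Hard questions only: {brier(hard):.3f}")         # 0.640 (terrible)
print(f"Advertised average:  {brier(easy + hard):.3f}")  # ~0.066 (looks respectable)
```

The advertised average looks respectable even though the team was badly wrong on every hard question.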

This problem could be at least partly solved by having an external, adversarial question-asker. Even better would be some sort of mechanical system for generating the questions that forecasters must answer.

For example, imagine that you had a way to extract every objectively answerable question posed by the New York Times in 2021.

Currently, their headline article is "Duty or Party? For Republicans, a Test of Whether to Enable Trump"

Though it does not state this in so many words, one of the primary questions it raises is whether the Michigan board that certifies vote results will certify Biden's victory ahead of the Electoral College vote on Dec. 14.

Imagine that one team's job was to extract such questions from a newspaper. Then they randomly selected a certain number of them each day, and posed them to a team of forecasters.

In this way, the work of superforecasters would be chained to the concerns of the public, rather than spent on questions that may or may not be "hackable."

To me, this is a critically important and, to my knowledge, totally unexplored question that I would very much like to see treated.

Comment by allamericanbreakfast on Epistemic Progress · 2020-11-22T00:45:21.598Z · LW · GW

My impression of where this would lead is something like this:

While enormous amounts of work have been done globally to develop and employ epistemic aids, relatively little study has been done to explore which epistemic interventions are most useful for specific problems.

We can envision an analog to the medical system. Instead of diagnosing physical sickness, it diagnoses epistemic illness and prescribes solutions on the basis of evidence.

We can also envision two wings of this hypothetical system. One is the "public epistemic health" wing, which studies mass interventions. Another is patient-centered epistemic medicine, which focuses on the problems of individual people or teams.

"Effective epistemics" is the attempt to move toward mechanistic theories of epistemology that are equivalent in explanatory power to the germ theory of disease. Whether such mechanistic theories can be found remains to be seen. But there was also a time during which medical research was forced to proceed without a germ theory of disease. We'd never have gotten medicine to the point where it is today if early scientists had said "we don't know what causes disease, so what's the point in studying it?"

So if we have a reasonable expectation that formal study would uncover mechanisms with equivalent explanatory power, pursuing it would be a good use of resources, considering the extreme importance of correct decision-making for every problem humanity confronts.

Is this a good way to look at what you're trying to do?

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2020-11-21T23:25:31.081Z · LW · GW

You raise two issues here. One is about vitamin D, and the other is about trust.

Regarding vitamin D, there is an optimal dose for general population health that lies somewhere in between "toxically deficient" and "toxically high." The range from the high hundreds to around 10,000 appears to be well within that safe zone. The open question is not whether 10,000 IUs is potentially toxic - it clearly is not - but whether, among doses in the safe range, a lower dose can be taken to achieve the same health benefits.

One thing to understand is that in the outdoor lifestyle we evolved for, we'd be getting 80% of our vitamin D from sunlight and 20% through food. In our modern indoor lifestyles, we are starving ourselves for vitamin D.

"Supplement a bit lightly is safer than over-supplementing" is only a meaningful statement if you can define the dose that constitutes "a bit lightly" and the dose that is "over-supplementing." Beyond these points, we'd have "dangerously low" and "dangerously high" levels.

To assume that 600 IU is "a bit lightly" rather than "dangerously low" is a perfect example of begging the question.

On the issue of trust, you could just as easily say "so you don't trust these papers, why do you trust your doctor or the government?"

The key issue at hand is that in the absence of expert consensus, non-experts have to come up with their own way of deciding who to trust.

In my opinion, there are three key reasons to prefer a study of the evidence to the RDA in this particular case:

  1. The RDA hasn't been revisited in almost a decade, even simply to reaffirm it. This is despite ongoing research in an important area of study that may have links to our current global pandemic. That's strong evidence to me that the current guidance is as it is for reasons other than active engagement by policy-makers with the current state of vitamin D research.
  2. The statistical error identified in these papers is easy for me to understand. The fact that it hasn't received an official response, nor a peer-reviewed scientific criticism, further undermines the credibility of the current RDA.
  3. The rationale for the need for 10,000 IU/day vitamin D supplements makes more sense to me than the rationale for being concerned about the potential toxic effects of that level of supplementation.

However, I have started an email conversation with the author of The Big Vitamin D Mistake, and have emailed the authors of the original paper identifying the statistical error it cites, to try and understand the research climate further.

I want to know why it is difficult to achieve a scientific consensus on these questions. Everybody has access to the same evidence, and reasonable people ought to be able to find a consensus view on what it means. Instead, the author of the paper described to me a polarized climate in that field. I am trying to check with other researchers he cites about whether his characterization is accurate.

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2020-11-21T00:36:44.455Z · LW · GW

An end run around slow government

The US recommended dietary allowance (RDA) of vitamin D is about 600 IU per day. This was established in 2011, and hasn't been updated since. The Food and Nutrition Board of the Institute of Medicine at the National Academy of Sciences sets US RDAs.

According to a 2017 paper, "The Big Vitamin D Mistake," the right level is actually around 8,000 IU/day, and the erroneously low level is due to a statistical mistake. I haven't been able to find out yet whether there is any transparency about when the RDA will be reconsidered.
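
My rough understanding of the claimed statistical mistake is that it derives a dose meant to cover 97.5% of the population from the spread of study averages rather than from the much wider spread of individuals. A small simulation with invented numbers illustrates why that matters; none of the figures below come from the papers themselves.

```python
# A sketch of the claimed statistical error: deriving a "97.5% of the
# population" dose from the spread of STUDY MEANS rather than the much
# wider spread of INDIVIDUALS. All distributions and numbers here are
# invented to illustrate the statistical point, not taken from the papers.

import random

random.seed(0)
N_STUDIES, N_PER_STUDY = 30, 100
TARGET = 50.0  # serum 25(OH)D target, nmol/L (assumed)

# Invented model: each individual's serum response at some fixed dose.
people = [random.gauss(70, 20) for _ in range(N_STUDIES * N_PER_STUDY)]

# Study means are far less spread out than individuals (SD shrinks by 1/sqrt(n)).
study_means = [
    sum(people[i * N_PER_STUDY:(i + 1) * N_PER_STUDY]) / N_PER_STUDY
    for i in range(N_STUDIES)
]

frac_means = sum(m >= TARGET for m in study_means) / len(study_means)
frac_people = sum(x >= TARGET for x in people) / len(people)

print(f"Study means above target: {frac_means:.0%}")   # ~100%: dose looks adequate
print(f"Individuals above target: {frac_people:.0%}")  # ~84%: many people fall short
```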

But three years since that paper is a long time to wait. Especially when vitamin D deficiency is linked to COVID mortality. And if we want to be good progressives, we can also note that vitamin D deficiency is linked to race, and may be driving the higher rates of death in black communities due to COVID.

We could call the slowness to update the RDA an example of systemic racism!

What do we do when a regulatory board isn't doing its job? Well, we can disseminate the truth over the internet.

But then you wind up with an asymmetric information problem. Reading the health claims of many people promising "the truth," how do you decide whom to believe?

Probably you have the most sway in tight-knit communities, such as your family, your immediate circle of friends, and online forums like this one.

What if you wanted to pressure the FNB to reconsider the RDA sooner rather than later?

Probably giving them some bad press would be one way to do it. This is a symmetric weapon, but in this situation we don't actually have anybody who really thinks that an incorrect vitamin D RDA is a good thing. Except maybe literal racists who are also extremely well informed about health supplements?

In a situation where we're not dealing with a partisan divide, but only an issue of bureaucratic inefficiency, applying pressure tactics seems like a good strategy to me.

How do you start such a pressure campaign? Probably you reach out to leaders of the black community, as well as doctors and dietary researchers, and try to get them interested in this issue. Ask them what's being done, and see if there's some kind of work going on behind the scenes. Are most of them aware of this issue?

Prior to that, it's probably important to establish both your credibility and your communication skills. Bring together the studies showing that the issue is a) real and b) relevant in a format that's polished and easy to digest.

And prior to that, you probably want to gauge the difficulty from somebody with some knowhow, and get their blessing. Blessings are important. In my case, my dad spent his career in public health, and I'm going to start there.

Comment by allamericanbreakfast on Covid 11/19: Don’t Do Stupid Things · 2020-11-20T23:14:23.421Z · LW · GW

Thanks for that information. I'll pass it along.

Comment by allamericanbreakfast on Covid 11/19: Don’t Do Stupid Things · 2020-11-20T05:25:26.295Z · LW · GW

This post motivated me to order Vitamin D supplements, and write a thoughtful email to my family advocating that they do the same. Note: Mayo Clinic advocates around 600 IU/day for young adults and 800 IU/day for adults over 80. Too high a dose can apparently counteract the benefits. Most Vitamin D supplements on Amazon are in the 5,000-10,000 IU range. The ones I ordered are 400 IU/day.

Comment by allamericanbreakfast on Should we postpone AGI until we reach safety? · 2020-11-20T03:49:36.218Z · LW · GW

The key point that I think you’re missing here is that evaluating whether such a policy “should” be implemented necessarily depends on how it would be implemented.

We could in theory try to kill all AI researchers (or just go whole hog and try to kill all software engineers, better safe than sorry /s). But then of course we need to think about the side effects of such a program, ya know, like them running and hiding in other countries and dedicating their lives to fighting back against the countries that are hunting them. Or whatever.

That’s just one example, and I use it because it might be the only tractable way to stop this form of tech progress: literally wiping out the knowledge base.

I do not endorse this idea, by the way.

I’m just trying to show that your reaction to “should we” depends hugely on “how.”

Comment by allamericanbreakfast on Should we postpone AGI until we reach safety? · 2020-11-20T03:45:14.283Z · LW · GW

This is a much more thorough and eloquent statement echoing what I was articulating in my comment above. I fully endorse it.

Comment by allamericanbreakfast on Should we postpone AGI until we reach safety? · 2020-11-20T03:39:25.489Z · LW · GW

So we’d have all major military nations agreeing to a ban on artificial intelligence research, while all of them simultaneously acknowledge that AI research is key to their military edge? And then trusting each other not to carry out such research in secret? While policing anybody who crosses some undefinable line about what constitutes banned AI research?

That sounds both intractable and like a policing nightmare to me - one that would have no end in sight. If poorly executed, it could be both repressive and ineffective.

So I would like to know what a plan to permanently and effectively repress a whole wing of scientific inquiry on a global scale would look like.

The most tractable way seems like it would be to treat it like an illegal biological weapons program. That might be a model.

The difference is that people generally interested in the study of bacteria and viruses still have many other outlets. Also, bioweapons haven’t been a crucial element in any nation’s arsenal. They don’t have a positive purpose.

None of this applies to AI. So I see it as having some important differences from a bioweapons program.

Would we be willing to launch such an intrusive program of global policing, with all the attendant risks of permanent infringement of human rights, and risk setting up a system that both fails to achieve its intended purpose and sucks to live under?

Would such a system actually reduce the chance of unsafe GAI long term? Or, as you’ve pointed out, would it risk creating a climate of urgency, secrecy, and distrust among nations and among scientists?

I’d welcome work to investigate such plans, but it doesn’t seem on its face to be an obviously great solution.

Comment by allamericanbreakfast on Should we postpone AGI until we reach safety? · 2020-11-19T10:05:34.695Z · LW · GW

Presuming that this is a serious topic, then we need to understand what the world would look like if we could put the brakes on technology. Right now, we can’t. What would it look like if we as a civilization were really trying hard to stop a certain branch of research? Would we like that state of affairs?

Comment by allamericanbreakfast on Should we postpone AGI until we reach safety? · 2020-11-18T18:32:06.022Z · LW · GW

I think there are two more general questions that need to be answered first.

  1. How would we find out for sure whether there are any tractable methods to put the brakes on a particular arm of technological progress?
  2. What would be the tradeoffs of such a potent investigation into our civilizational capacity?

We clearly do have the capacity to do (1) to some extent:

  • Religious activists have managed to slow progress in stem cell research, though the advent of iPSCs has created a way to bypass this issue to some extent.
  • The anti-nuclear movement has probably helped slow down progress in nuclear power research, though ironically they don't seem to have slowed down research on nuclear bombs (I could be wrong here).
  • Some people argue that the current structure of using cost-benefit analysis to allocate research funding does more harm than good, and thus it could be considered a decelerating force. But I'm not sure that's true, and even if it is, I'm not sure that applies to a field like AI with so many commercial purposes.

But these are clearly not the durable, carefully calibrated brakes we're talking about.

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2020-11-13T23:06:20.883Z · LW · GW

I want to put forth a concept of "topic literacy."

Topic literacy roughly means that you have both the concepts and the individual facts memorized for a certain subject at a certain skill level. That subject can be small or large. The threshold is that you don't have to refer to a reference text to accurately answer within-subject questions at the skill level specified.

This matters, because when studying a topic, you always have to decide whether you've learned it well enough to progress to new subject matter. This offers a clean "yes/no" answer to that essential question at what I think is a good tradeoff between difficulty and adequacy.

I'm currently taking an o-chem class, and we're studying IR spectra. For this, it's necessary to be able to interpret spectral diagrams in terms of shape, intensity, and wavenumber; to predict signals that a given molecule will produce; and to explain the underlying mechanisms that produce these signals for a given molecule.

Most students will simply be answering the required homework problems, with heavy reference to notes and the textbook. In particular, they'll be referring to a key chart that lists the signals for 16 crucial bond types and 6 subtypes of conjugated vs. unconjugated bonds, and referring back for reminders on the equations and mechanisms underpinning these patterns.

Memorizing that information only took me about an extra half hour, and dramatically increased my confidence in answering questions. It made it tractable for me to rapidly go through and answer every single study question in the chapter. This was the key step to transitioning from "topic familiar" to "topic literate."

If I had to bin levels of understanding of an academic subject, I might do it like this:

  1. "Topic ignorant." You've never before encountered a formal treatment of the topic.
  2. "Topic familiar." You understand the concepts well enough to use them, but require review of facts and concepts in most cases.
  3. "Topic literate." You have memorized concepts and facts enough to be able to answer most questions that will be posed to you (at the skill level in question) quickly and confidently, without reference to the textbook.

Go for "topic literate" before moving on.

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2020-11-11T22:08:14.647Z · LW · GW

Different approaches to learning seem to be called for in fields with varying levels of paradigm consensus. The best approach to learning undergraduate math/CS/physics/chemistry seems different from the best one to take for learning biology, which again differs from the best approach to studying economics and the humanities.*

High-consensus disciplines have a natural sequential order, and the empirical data is very closely tied to an a priori predictive structure. You develop understanding by doing calculations and making theory-based arguments, along with empirical work/applications and intuition-building.

Medium-consensus disciplines start with a lot of memorization of empirical data, tied together with broad frameworks that let the parts "hang together" in a way that is legible and reliable, but imprecise. Lack of scientific knowledge about the empirical data, along with the massive complexity of the systems under study, prevents a full consensus accounting.

Low-consensus disciplines involve contrasting perspectives, translating complex arguments into accessible language, and applying broad principles to current dilemmas.

High-consensus disciplines can be very fun to study. Make the argument successfully, and you've "made a discovery."

The low-consensus disciplines are also fun. When you make an argument, you're engaged in an act of persuasion. That's what the humanities are for.

But those medium-consensus disciplines are in kind of an uncomfortable middle that doesn't always satisfy. You wind up memorizing and regurgitating a lot of empirical data and lab work, but persuasive intellectual argument is the exception, rather than the rule.

For someone who's highly motivated by persuasive intellectual argument, what's the right way forward? To try and engage with biology in a way that somehow incorporates more persuasive argument? To develop a passion for memorization? To accept that many more layers of biological knowledge must accumulate before you'll be conversant in it?

*I'm not sure these categories are ideal.

Comment by allamericanbreakfast on Khanivore's Shortform · 2020-11-11T18:23:34.518Z · LW · GW

There are a few clearly true statements that get compressed in your original statement, which can then be used to suggest wrong statements. This is an equivocation.

One question is “does our experience of evil or suffering inform our moral judgments?”

Another is “have people fought against evil and won?”

A third is “can a lesser wrong be acceptable in pursuit of a greater good?”

A fourth is something like “will our total human experience of evil and suffering prove so useful in aligning superintelligent AI that it is a net positive?”

A fifth is “do even the most extreme, straightforward examples of evil and suffering have any side benefits, despite being heavily net negative?”

Again, it’s fine to babble, but the culture here is that you need to clarify what question you’re asking. Then argue it persuasively to the best of your ability. Or try arguing the opposite, just to get clarity. Precision is a virtue.

I can’t spend any more time replying to introduce you to the site’s culture. But I think you might benefit from a philosophy class.

Comment by allamericanbreakfast on Khanivore's Shortform · 2020-11-11T06:49:11.547Z · LW · GW

Hi Khanivore. You are new, and you are welcome to post anything here.

One thing you'll find is that people will engage with and appreciate/criticize your posts. I don't want to alienate you by criticizing too harshly or too soon, because I think this community has a lot to offer for everyone, and I'm interested to see what you'll make of it. However, you may find this site more interesting if you get some engagement, so I'm offering that.

Your post is a good example of cached thoughts.

One thing that seems evil is the starvation, neglect, abuse, torture, murder, and conscription of children into armies. One thing that seems good is the self-actualization of children. I am a teacher of children, and I have tried for ten years to help them self-actualize.

I don't think that evil perpetrated on children is necessary for the good of children.

One thing you can do is argue with me and make a genuine attempt to persuade. Another is to steelman the counterargument. What's the strongest argument you can make that evil isn't necessary for good to exist? Can you convince yourself of the contrary, just for the sake of your own edification?

There are other things you can do, but those are a couple suggestions that you might find valuable.

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2020-11-11T06:38:41.745Z · LW · GW

What is the #1 change that LW has instilled in me?

Participating in LW has instilled the virtue of goal orientation. All other virtues, including epistemic rationality, flow from that.

Learning how to set goals, investigate them, take action to achieve them, pivot when necessary, and alter your original goals in light of new evidence is a dynamic practice, one that I expect to retain for a long time.

Many memes circulate around this broad theme. But only here have I been able to develop an explicit, robust, ever-expanding framework for making and thinking about choices and actions.

This doesn't mean I'm good at it, although I am much better than I used to be. It simply means that I'm goal-oriented about being goal-oriented. It feels palpably, viscerally different from moment to moment.

Strangely enough, this goal orientation developed from a host of pre-existing desires. For coherence, precision, charity, logic, satisfaction, security. Practicing those led to goal orientation. Goal orientation is leading to other places.

Now, I recognize that the sense of right thinking comes through in a piece of writing when the author seems to share my goals and to advance them through their work. They are on my team, not necessarily culturally or politically, but on a more universal level, and they are helping us win.

I think that goal orientation is a hard quality to instill, although we are biologically hardwired to have desires, imaginations, intentions, and all the other psychological precursors to a goal.

But a goal. That is something refined and abstracted from the realm of the biological, although still bearing a 1-1 relation to it. I don't know how you'd teach it. I think it comes through practice. From the sense that something can be achieved. Then trying to achieve it and realizing that not only were you right, but you were thinking too small. SO many things can be achieved.

And then the passion starts, perhaps. The intoxication of building a mechanism - in any medium - that gives the user some new capability or idea, makes you wonder what you can do next. It makes you want to connect with others in a new way: fellow makers and shapers of the world, fellow agents. It drives home the pressing need for a shared language and virtuous behavior, lest potential be lost or disaster strike.

I don't move through the world as I did before.

Comment by allamericanbreakfast on When Money Is Abundant, Knowledge Is The Real Wealth · 2020-11-11T06:20:54.585Z · LW · GW

I think the greater concern is that it's hard to measure. And yes, you could imagine that owning shares against, say, the efficacy of a vaccine being above a certain level could be read as an incentive to sabotage the effort to develop it.

Comment by allamericanbreakfast on When Money Is Abundant, Knowledge Is The Real Wealth · 2020-11-10T23:30:41.488Z · LW · GW

Welcome to LW, by the way :)

You’re doing something (a good thing) that we call Babble: freely coming up with ideas that all circle around a central question, without worrying too much about whether they’re silly, unimportant, obvious, or any of the other reasons we hold stuff back.

I’d suggest going further. Feel free to use this comment thread (or make a shortform) to throw out ideas about “why philanthropy might benefit from more (or less) cost/benefit analysis”.

We often suggest trying to come up with 50 ideas all in one go. Have at it!

Comment by allamericanbreakfast on When Money Is Abundant, Knowledge Is The Real Wealth · 2020-11-10T19:43:02.739Z · LW · GW

Well, if you’re a subscriber to mainstream EA, the idea is that neither traditionalism nor dart-throwing is best. We need a rigorous cost-benefit analysis.

If one believes that, yet also that less cost-benefit analysis is needed (or tractable) in science, that needs an explanation.

Again, I think that this post is getting at something important, but the definitions here aren’t precise enough to make it easy to apply to real issues. Like, can a billionaire use his money to buy a cost/benefit analysis of an investment of interest? Definitely.

But how can he evaluate it? Does he have to do it himself? Does he focus on creating an incentive structure for the people producing it? If so, what about Goodhart’s Law - how will he evaluate the incentive structure?

It’s “who will watch the watchmen” all the way down, but that’s a pretty defeatist perspective. My guess is that institutions do best when they adopt a variety of metrics and evaluative methods to make decisions, possibly including some randomization just to keep things spicy.

Comment by allamericanbreakfast on When Money Is Abundant, Knowledge Is The Real Wealth · 2020-11-06T22:35:24.717Z · LW · GW

The extreme form of that idea is: "If we could evaluate the quality of scientists, then we could fully computerize research. Since we cannot fully computerize research, we therefore have no ability to evaluate the quality of scientists."

The most valuable thing to do would be to observe what's going on right now, and the possibilities we haven't tried (or have abandoned). Insofar as we have credence in the "we know nothing" hypothesis, we should blindly dump money on random scientists. Our credence should never be zero, so this implies that some nonzero amount of random money-dumping is optimal.

Comment by allamericanbreakfast on Where do (did?) stable, cooperative institutions come from? · 2020-11-05T20:45:51.952Z · LW · GW

I think there are a few puzzles here about good institutional culture (IC) and institutions' tendency to fade or thrive.

  1. If an institution has a good IC, does it always thrive?
  2. Can an institution thrive even without a good IC?
  3. How can we distinguish carefully between a good IC and the condition of fading/thriving? For example, if a good IC brings a good reputation, but we say that the NY Times is fading because its reputation is declining, then saying that lacking a good IC caused the NY Times to fade is tautological.
  4. A good IC may neither be necessary nor sufficient for an institution to thrive. But is the creation of a good IC necessary (if not sufficient) to create an institution?
  5. If a good IC is of overriding importance to the existence, success, or failure of institutions, then what are the necessary and sufficient conditions to create a good IC from scratch? To turn a bad IC into a good one?

My dad has a story from when he took over his first nonprofit clinic as a CEO. It was in severe financial distress when he started. Two of the doctors, he discovered, liked to play a game. Rather than taking on patients (they were salaried), they would stand in one of the rooms and try to toss pennies out the window and get them to land on the window ledge of the building across the alley. One of the things my dad did to turn it around was to tell them that they'd be fired if they didn't change their ways.

In this case, I think the order of events was something like:

The clinic was fading. Decision-makers responsible for it brought in a new leader to investigate the IC. He discovered that there was a disconnect in the self-reinforcing cycle: two of the producers were neither being rewarded for working nor punished for not working. By threatening to punish them for not working, the leader took a step towards restarting a good IC.

This suggests a new, important facet to your question. How does the IC of an institution get repaired and maintained when it breaks down? If an institution fails entirely, when and how do its parts get recycled into healthier institutions? When a fading institution breaks down, is this generally bad (because it's gone from bad to nonexistent) or good (because now its parts can be reincorporated into institutions that are thriving)?

It also suggests to me that a hierarchy of responsible leadership is important. Having supervisors who can add, remove, or replace workers, or leaders, at various levels of the hierarchy has two good effects. It is both incentivizing as a reward or punishment, and it allows a more detailed investigation of the inner workings of a particular institution.

This loosely suggests that one failure mode for an institution is when the highest leadership is corrupt. In a democracy, the highest leader is the citizenry who elect officials. If the citizens are corrupt, then the democracy will suffer. In a small business, the highest leader is the owner. If the owner is corrupt, then the business will suffer. In a publicly-traded company, it is the shareholders and the government (and, by proxy, the citizens) who are the highest leaders; if one or both is corrupt, then the corporation will suffer.

Comment by allamericanbreakfast on When Money Is Abundant, Knowledge Is The Real Wealth · 2020-11-05T00:53:22.348Z · LW · GW

I'm starting to read Braden. The thing is that if Braden's analysis is true, then either:

  1. We can filter for the right people; we're just doing it wrong. We need to empower a few senior scientists who no longer have a dog in the fight to select who they think should be endowed with money for unconstrained research. Money can buy knowledge if you do it right.
  2. We truly can't filter for the right people. Either rich people need to do research, researchers need to get rich, or we need to just randomly dump money on researchers and hope that a few of them turn out to be the next Einstein.

I think there's a fairly rigorous, step-by-step, logical way to ground this whole argument we're having, but I think it's suffering from a lack of precision somehow...

Comment by allamericanbreakfast on When Money Is Abundant, Knowledge Is The Real Wealth · 2020-11-04T16:49:30.805Z · LW · GW

When it comes to funding science, there’s quite a bit of scrutiny that goes into determining which projects to fund. Labs coordinate to solve problems. Researchers do set their sights on goals. Governments organize field-wide roadmaps to chart the next ten years of research.

I would love to see John, or anyone with an interest in the subject, do an explainer on all the ways science organizes and coordinates to solve problems.

In line with John’s argument here, we should develop a robust gears-level understanding of scientific funding and organization before assuming that more power or more money can’t help.

Comment by allamericanbreakfast on When Money Is Abundant, Knowledge Is The Real Wealth · 2020-11-04T09:51:39.010Z · LW · GW

If you were the President or as rich as Jeff Bezos, you could use your power or money to just throw a lot more darts at the dartboard. There are plenty of research labs using old equipment, promising projects that don't get funding, post-docs who move into industry because they're discouraged about landing that tenure-track position, schools that can't attract competent STEM teachers partly because there's just so little money in it.

And of course, you can build institutions like OpenPhil to help reduce uncertainty about how to spend that money.

Using money or power to fix those problems is do-able. You don't have to know everything. You can be a dart, or, if you're lucky and hard-working, you can be a dart-thrower.

Comment by allamericanbreakfast on Should students be allowed to give good teachers a bonus? · 2020-11-03T19:05:25.371Z · LW · GW

Simpler, but then a "tip" could appear to be a bribe. This way, students aren't credibly able to promise a payment in exchange for a grade.

Comment by allamericanbreakfast on Should students be allowed to give good teachers a bonus? · 2020-11-03T17:11:27.012Z · LW · GW

We have very little immediate choice here. At the colleges I’ve attended, any given course is usually taught by one teacher at a time. You’d have to switch schools or maybe take the class a different term.

Comment by allamericanbreakfast on Should students be allowed to give good teachers a bonus? · 2020-11-03T17:07:34.576Z · LW · GW

I’m sure it’s different from place to place, but at my school, teachers who’ve taught for a while only get evaluated once every 5 years or something like that.

Student evals are directly interpreted and acted on by the teacher’s colleagues. So I’d be more concerned about the pressure student evaluations create for a teacher to be liked by the other teachers in their department. And of course, if the teacher is tenured, then that softens the blow of a bad evaluation.

Student evals are only one explanation for grade inflation. What if teachers just enjoy being liked by their students, and seek to avoid the hassle of student complaints? Speaking as a long-time teacher, I can tell you that those factors are overwhelmingly important in determining my experience from day to day.

Comment by allamericanbreakfast on Should students be allowed to give good teachers a bonus? · 2020-11-03T03:28:49.482Z · LW · GW

I should have said "partially determined." I'm sure we can agree that perceived teaching quality is an important factor in why students choose one school over another.

Comment by allamericanbreakfast on Should students be allowed to give good teachers a bonus? · 2020-11-02T22:38:53.030Z · LW · GW

This seems compelling, but I'm not sure it's right. If the Student-Allocated Bonus (SAB) actually does cause a significant improvement in teaching quality for a negligible cost, then students should be attracted to schools that implement it. That school can increase its tuition. As improved teaching is causing the increased demand, teachers can be expected to have improved negotiating power over their wages in the future.

If this is true, then why don't teachers step up their game now? My theory is that they're suffering from a free-rider problem. My premises are:

  1. Teacher salaries are primarily determined by overall student demand for the school. Student demand for the school is determined by the median teaching quality at the school.
  2. High performance is costly to the teacher - it takes more time, energy, or talent than mediocre teaching.

With these premises in mind, we might expect that some individual teachers would tend to slack off.

Introducing an immediate incentive to up their game changes the incentive structure. Teachers now have a tangible reason not to slack off. So presuming that a Student-Allocated Bonus actually incentivizes teachers to teach well, I think it would have the effect of increasing their salaries in the long run.

This doesn't work as well if students are rewarding teachers for being soft graders and teaching easy classes. I think this is the stronger critique of the Student-Allocated Bonus: it may not incentivize the desired behavior from teachers.
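
Here is a toy payoff model of that free-rider story. Every number in it (the salary formula, effort cost, and quality levels) is an invented assumption, meant only to show why slacking can be individually optimal when salary tracks the school-wide median.

```python
# A toy free-rider model: salary tracks the MEDIAN teaching quality at the
# school, while effort is privately costly. All numbers are invented.

import statistics

EFFORT_COST = 150  # dollars of time/energy per quality point (assumed)

def salary(qualities):
    return 40_000 + 200 * statistics.median(qualities)

def payoff(my_quality, colleagues):
    return salary([my_quality] + colleagues) - EFFORT_COST * my_quality

colleagues = [70] * 10  # ten colleagues all teaching at quality 70

for q in (90, 70, 40):
    print(f"my quality {q}: payoff ${payoff(q, colleagues):,.0f}")
# One teacher barely moves the median, so dropping from quality 90 to 40
# saves the full effort cost while salary stays the same: slacking wins.
```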

Comment by allamericanbreakfast on Should students be allowed to give good teachers a bonus? · 2020-11-02T21:29:26.528Z · LW · GW

"Where does the money come from?"

As I mentioned, I envisioned a very small fee - perhaps $5/class. It's hard to imagine students thinking twice about this. I wanted to aim for around 5% of the income a typical community college teacher might make per class.

Given that this is an additional bonus rather than a threat of disciplinary action, it seems hard for teachers to argue that it would be disruptive. What's disruptive about getting $150 or so that you otherwise wouldn't have gotten?
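
For what it's worth, here's the arithmetic behind that figure, with an assumed class size (the class size and per-class pay are my assumptions, not stated above):

```python
# Where the ~$150 figure comes from, under an assumed class size.
students_per_class = 30        # assumed; not stated in the original post
fee_per_student = 5            # dollars per student per class
typical_pay_per_class = 3_000  # assumed community-college pay per class

bonus_pool = students_per_class * fee_per_student
print(f"Bonus per class: ${bonus_pool}")                                     # $150
print(f"Share of per-class pay: {bonus_pool / typical_pay_per_class:.0%}")   # 5%
```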

"Future employers are the primary consumers of both the credentials and the knowledge/abilities imparted to the students. Taxpayers and students (in rather differing ratio based on type of college) are the main purchasers."

That's a nifty distinction, but I'd word it differently.

Students are the producers of trained employees. They consume educational services and capital (lab space and equipment, for example) in order to transform themselves into a professional. That transformation is the production of a professional out of the "raw material" of a student. The employer then consumes their professional services.

A close analogy is that a steel refinery (i.e. the student) is a primary consumer of ore (i.e. education), even though the steel itself still needs to be manufactured into a useful product (i.e. workplace productivity). Unless you'd consider people who buy cars the "primary consumers" of iron ore, I don't think it makes sense to consider businesses the "primary consumers" of educational services.

However, I think the issue you're getting at is important. For an efficient economy, you'd want producers to have a responsibility to create a product that consumers want to buy. We'd want consumers to have a loud, clear voice in guiding producers to make a useful product. Although we do have decent incentives for students - you can't get rid of student loan debt by declaring bankruptcy, for example - they still seem to make a lot of bad decisions. We have a habit of handing 18-year-olds a "small loan" of tens or hundreds of thousands of dollars, and a lot of the time they pour it into a product that nobody wants to buy.

"This would be an interesting idea if combined with income-share agreements"

These do exist. My vague memory from hearing about them on a podcast a couple years ago is that they're a bad choice if you're able to attract loans to fund your education, for reasons that I don't understand. It seems like a great idea to me in theory.

In practice, it seems like you have to do a lot of enforcement, and it might be more complicated for the student to figure out as well. There's a lot of capital sloshing around, so I wouldn't expect a strong student who wants to pursue a lucrative career to have much trouble funding their education.

I also like your idea of students being able to allocate money to a department, rather than to a specific teacher.

Comment by allamericanbreakfast on Should students be allowed to give good teachers a bonus? · 2020-11-02T19:24:57.627Z · LW · GW

That's along the lines of James_Miller's proposal of giving $1,000 to graduating seniors to allocate as merit pay.

Comment by allamericanbreakfast on Should students be allowed to give good teachers a bonus? · 2020-11-02T19:23:55.115Z · LW · GW

I think there's a critical distinction to draw here. There are three possibilities that are being conflated in your comment.

  1. Student evaluations correlate more strongly with test performance, and only weakly with deep understanding. ("Student evaluation of teacher performance is correlated to how well they perform on the immediate test more than it is correlated to how well they developed a deep understanding of the material.")
  2. Student evaluations correlate with test performance, and are not correlated with deep understanding. ("student evaluations reward professors who increase achievement in the contemporaneous course being taught, not those who increase deep learning.")
  3. Student evaluations correlate with test performance, and are inversely correlated with deep understanding - i.e. higher student evaluations coincide with shallower understanding. ("In this way, student evaluation of a teacher can be inversely correlated to the quality of the education they receive.")

Only if (3) is true does the correlation suggest that student evaluations may be harmful to learning.

If (2) is true, student evaluations seem less attractive, but maybe restructuring them would be an experiment worth trying.

If (1) is true, then there's a stronger case for keeping student evaluations and seeing whether we can get a better correlation with learning out of them.
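To make the distinction concrete, here's a toy simulation of the three scenarios. It's purely illustrative - the effect sizes and noise levels are arbitrary assumptions, not estimates from the evaluation literature:

```python
# Toy simulation: evaluations track contemporaneous test scores, while
# deep understanding feeds into test scores with a weight that is
# positive (1), zero (2), or negative (3). All parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

def scenario(deep_weight):
    deep = rng.normal(size=n)                        # deep understanding
    test = deep_weight * deep + rng.normal(size=n)   # contemporaneous test score
    evals = test + rng.normal(size=n)                # evaluations track test scores
    return np.corrcoef(evals, test)[0, 1], np.corrcoef(evals, deep)[0, 1]

for label, w in [("(1) weak positive", 0.3), ("(2) zero", 0.0), ("(3) negative", -0.5)]:
    r_test, r_deep = scenario(w)
    print(f"{label}: corr(evals, test) = {r_test:.2f}, corr(evals, deep) = {r_deep:.2f}")
```

In all three runs, corr(evals, test) stays strongly positive, so observing that correlation alone can't tell you which case you're in; you'd need a separate measure of deep understanding.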

Comment by allamericanbreakfast on Should students be allowed to give good teachers a bonus? · 2020-11-02T19:16:29.567Z · LW · GW

Did you get any feedback on your article after it was published? Have you seen others writing or adopting similar proposals?

Comment by allamericanbreakfast on Should students be allowed to give good teachers a bonus? · 2020-11-02T06:34:45.303Z · LW · GW

Excellent objections, commenters. It sounds like the overall reaction is "this incentivizes grade inflation, likeability, packed classes, and easiness - not quality teaching." I like waveman's point that graduate schools, employers, governments, and others have skin in the game as well.

And of course, the changes I'd like to see aren't going to be achieved by handing out petty cash here and there.

In my local community college, the main things I find lacking are well-designed curriculums and good teachers. It's just too spotty. Some teachers are angels, and some are just stinkers. Some curriculums are delightful, and others have gigantic flaws that turn a class into 12 weeks of misery.

As a student, becoming intimately familiar with each and every flaw, it's easy to get the sense that "if this is how bad it is, this teacher/department must just not care about the experience of their students. They're getting paid the same whether or not they offer a quality class. Students aren't going to just quit if they get hit with a nasty teacher offering a disorganized class - they've already paid, they'd get a W or an F on their transcript, and even if not, where would they go?"

But I need to remember that even though this is kind of true, and even though this is a plausible explanation for why there are flaws in a class, there are other explanations that are more sympathetic:

  1. Some of the teachers may not be the best in the world, but they're still offering an education with high altruistic and earning potential attached to it at an amazingly low price relative to the long-term benefit. It's still a pretty good deal, even with all the stress and frustration.
  2. The teachers might have the same frustrations with the school bureaucracy, the curriculum vendors, shitty, dishonest, manipulative students, and their own demanding and stressful lives. They might actually be working hard to offer the best class they can - and it might just be a hard problem to solve.
  3. The systemic problems are extremely difficult and don't have any tidy solutions. Goodhart's Law isn't just a cute explanation for individual problems. It's the institutional equivalent of gravity. It's what drags everything down, slows us down, makes things collapse. Oh, we can try to evaluate and incentivize. We can test, we can measure, we can complain. But the value of it all is dubious.

So rather than anguishing over the shortcomings, the right response is to appreciate that we're able to have even so much. For a mere $12,000 or so, I've been able to go from a music teacher scraping by to a top student eagerly sought by bioinformatics grad programs that will offer the chance to more than triple my annual salary and do work that I'm very passionate about. That's the American story right there. It's not supposed to be easy or smooth. It's just possible with grit, hard work, and intelligence.

What if I shift from seeing schools as a "staircase to knowledge" to seeing them as "a climbing wall of knowledge?"

A staircase is something that takes effort, but every effort has been made to engineer the experience to be as stable, organized, and safe as possible.

A climbing wall still has safety features - you belay - but the point is to give you a better climbing experience. Climbing is fundamentally a demanding, self-structured solo activity. School's version of the safety features is a combination of deadlines and grades as a commitment device, plus some social supports. But if I take organic chemistry, learning it through a class is just the same as learning it solo, except that the experience has been structured the way routes get established on a climbing wall.

Does having that perspective on school change the way I interpret its "shortcomings?"

Definitely.

If schools do as good a job as students could have done as autodidacts, or better, then they've succeeded. I doubt I could force as much knowledge of chemistry, linear algebra, and molecular biology into my head as I'm acquiring this quarter if I were studying on my own. It would probably be less stressful without the constant pressure, but it would be slower at the very least. If school can ram more knowledge into your head in 12 weeks than you could have shoved in there on your own, then it's doing its job.

If schools actually hold students back relative to what they could have learned on their own, then they've failed. I have taken one or two classes from teachers who truly seemed to add so much stress, with so many negative idiosyncrasies, that I think there's a good chance I'd have been better off with pure self-study. But even so, it's very hard to know, since I wouldn't have had the commitment device of having to earn a grade. And that's just my experience - maybe others would have done better in the class than on their own.

In the future, I will try to shift to the "climbing wall" perspective. School is here to offer some support and structure. But it's also here, primarily here, to breathe down my neck and MAKE me learn. It's here for the same reason the drill instructor is at boot camp. It's not to make the 100 pushups easy. It's to make damned sure that I do them and that I can't give myself an easy pass, push it off till next hour, next day, next week, next year. I'm paying $600 per quarter for a drill instructor.

Supplying the curiosity, the organization, the learning, the success - that's entirely up to me. School's a commitment device first and foremost. It's a place where I can rent educational capital (access to labs, tutoring, grading). It's also a place where I can purchase the opportunity to form relationships that I can leverage into further opportunities, through letters of recommendation, career advice, introductions, and so on.

The least important thing school has to offer is "teaching" or "education" or "learning."

Only I can offer that to myself.

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2020-10-29T23:33:54.854Z · LW · GW

A lot of my akrasia is solved by just "monkey see, monkey do." Physically put what I should be doing in front of my eyeballs, and pretty quickly I'll do it. Similarly, any visible distractions, or portals to distraction, will also suck me in.

But there also seems to be a component that's more like burnout. "Monkey see, monkey don't WANNA."

On one level, the cure is to just do something else and let some time pass. But that's not explicit enough for my taste. For one thing, something is happening that recovers my motivation. For another, "letting time pass" is an activity with other effects, which might be energy-recovering but distracting or demotivating in other ways. Letting time pass involves forgetting, value drift, passing up opportunities, and spending one form of slack (time) to gain another (energy). It's costly, not just something I forget to do. So I'd like to understand my energy budget on a more fine-grained level.

Act as if tiredness and demotivation do not exist. Gratitude journaling can transform my attitude all at once, even though nothing has changed in my life. Maybe "tiredness and demotivation" is a story I tell myself, not a real state that says "stop working."

One clue is that there must be a difference between "tiredness and demotivation" as a folk theory and as a measurable phenomenon. Clearly, if I stay up for 24 hours straight, I'm going to perform worse on a math test at the end of that time than I would have at the beginning. That's measurable. But if I explain my behaviors right in this moment as "because I'm tired," that's my folk-theory explanation.

An approach I could take is to be skeptical of the folk theory of tiredness. Admit that fatigue will affect my performance, but open myself to possibilities like:

  1. I have more capacity for sustained work than I think. Just do it.
  2. A lot of "fatigue" is caused by self-reinforcing cycles of complaining that I'm tired/demotivated.
  3. Extremely regular habits, far beyond what I've ever practiced, would allow me to calibrate myself quite carefully for an optimal sense of wellbeing.
  4. Going with the flow, accepting all the ups and downs, and giving little to no thought to my energetic state - just allowing myself to be driven by reaction and response - is actually the best way to go.
  5. Just swallow the 2020 wellness doctrine hook, line, and sinker. Get 8 hours of sleep. Get daily exercise. Eat a varied diet. Less caffeine, fewer screens, more conversation, brief breaks throughout the day, sunshine, etc. Prioritize wellness above work. If I get to the end of the day and I haven't achieved all my "wellness goals," that's a more serious problem than if I haven't completed all my work deadlines.

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2020-10-23T20:14:32.765Z · LW · GW

What rationalists are trying to do is something like this:

  1. Describe the paragon of virtue: a society of perfectly rational human beings.
  2. Explain both why people fall short of that ideal, and how they can come closer to it.
  3. Explore the tensions in that account, put that plan into practice on an individual and communal level, and hold a meta-conversation about the best ways to do that.

This looks exactly like virtue ethics.

Now, we have heard that the meek shall inherit the earth. So we eschew the dark arts; embrace the virtues of accuracy, precision, and charity; steelman our opponents' arguments; try to cite our sources; question ourselves first; and resist the temptation to simplify the message for public consumption.

Within those bounds, however, we need to address a few question-clusters.

What keeps our community strong? What weakens it? What is the greatest danger to it in the next year? What is the greatest opportunity? What is present in abundance? What is missing?

How does this community support our individual growth as rationalists? How does it detract from it? How could we be leveraging what our community has to offer? How could we give back?

Comment by allamericanbreakfast on The bads of ads · 2020-10-23T19:26:12.457Z · LW · GW

Life is full of pressures, it’s true. Not just from deliberate advertising, but from social comparisons.

Unfortunately, it seems to me that it's ultimately our responsibility to engage or not. If a website demands you turn off your ad blocker to read, you can always click away.

If you literally feel like ads are driving you insane, equivalent to being beaten with a belt every time you glance at a billboard, or compromising your very ability to make meaningful choices...

Well, that is probably a stronger visceral reaction than most people have to ads.

I for one often choose to get free content in exchange for trying to ignore ads. I feel this deal is worthwhile to me sometimes, other times not. I pay for a subscription to Spotify so that my music listening can go uninterrupted by ads. I don’t pay for a subscription to news sites that I can read if I turn off my ad blocker.

Comment by allamericanbreakfast on The bads of ads · 2020-10-23T18:08:58.827Z · LW · GW

This metaphor doesn't work, unfortunately. You have a huge amount of control over whether you see advertising. Don't go on the internet or watch TV. Turn your eyes away from billboards.

Advertising is an annoyance, closer to noise than to theft. Like noise, it is extremely irritating and may have very bad effects on us. Perhaps that's an argument for regulating it somewhat more tightly: laws against advertising in public places, which is what most of your pictures focus on, seem reasonable to me.

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2020-10-23T16:28:54.448Z · LW · GW

I see my post as less about goal-setting ("succeed, with no wasted motion") and more about strategy-implementing ("Check the unavoidable boxes first and quickly, to save as much time as possible for meaningful achievement"). 

Comment by allamericanbreakfast on AllAmericanBreakfast's Shortform · 2020-10-23T16:15:41.315Z · LW · GW

Thinking, Fast and Slow was the catalyst that turned my rumbling dissatisfaction into the pursuit of a more rational approach to life. I wound up here. After a few years, what do I think causes human irrationality? Here's a listicle.

  1. Cognitive biases, whatever these are
  2. Not understanding statistics
  3. Akrasia
  4. Little skill in accessing and processing theory and data
  5. Not speaking science-ese
  6. Lack of interest or passion for rationality
  7. Not seeing rationality as a virtue, or even seeing it as a vice.
  8. A sense of futility, the idea that epistemic rationality is not very useful, while instrumental rationality is often repugnant
  9. A focus on associative thinking
  10. Resentment
  11. Not putting thought into action
  12. Lack of incentives for rational thought and action itself
  13. Mortality
  14. Shame
  15. Lack of time, energy, ability
  16. An accurate awareness that it's impossible to distinguish tribal affiliation and culture from a community
  17. Everyone is already rational, given their context
  18. Everyone thinks they're already rational, and that other people are dumb
  19. It's a good heuristic to assume that other people are dumb
  20. Rationality is disruptive, and even very "progressive" people have a conservative bias to stay the same, conform with their peers, and not question their own worldview
  21. Rationality can misfire if we don't take it far enough
  22. All the writing, math, research, etc. is uncomfortable and not very fun compared to alternatives
  23. Epistemic rationality is directly contradictory to instrumental rationality
  24. Nihilism
  25. Applause lights confuse people about what even is rationality
  26. There are at least 26 factors deflecting people from rationality, and people like a clear, simple answer
  27. No curriculum
  28. It's not taught in school
  29. In an irrational world, epistemic rationality is going to hold you back
  30. Life is bad, and making it better just makes people more comfortable in badness
  31. Very short-term thinking
  32. People take their ideas way too seriously, without taking ideas in general seriously enough
  33. Constant distraction
  34. The paradox of choice
  35. Lack of faith in other people or in the possibility for constructive change
  36. Rationality looks at the whole world, which has more people in it than Dunbar's number
  37. The rationalists are all hiding on obscure blogs online
  38. Rationality is inherently elitist
  39. Rationality leads to convergence on the truth if we trust each other, but it leads to fragmentation of interests since we can't think about everything, which makes us more isolated
  40. Slinging opinions around is how people connect. Rationality is an argument.
  41. "Rationality" is stupid. What's really smart is to get good at harnessing your intuition, your social instincts, to make friends and play politics.
  42. Rationality is paperclipping the world. Every technological advance that makes individuals more comfortable pillages the earth and increases inequality, so they're all bad and we should just embrace the famine and pestilence until mother nature takes us back to the stone age and we can all exist in the circular dreamtime.
  43. You can't rationally commit to rationality without being rational first. We have no baptism ceremony.
  44. We need a baptism ceremony but don't want to be a cult, so we're screwed, which we would also be if we became a cult.
  45. David Brooks is right that EA is bad, we like EA, so we're probably bad too.
  46. We're secretly all spiritual and just faking rational atheism because what we really want to do is convert.
  47. There's too much verbiage already in the world.
  48. The singularity is coming; what's the point?
  49. Our leaders have abandoned us, and the best of us have been cut down like poppies.
  50. Eschewing the dark arts is a self-defeating stance