Placebo effect report: chiropractic adjustment 2021-04-15T02:50:11.893Z
Nash Equilibria in Litigation Settlement Negotiation in the U.S.A. 2021-03-27T23:26:56.871Z
Internal Double Crux: Video Games 2021-03-14T11:45:24.431Z
supposedlyfun's Shortform 2020-09-26T20:56:30.074Z


Comment by supposedlyfun on What are good resources for gears models of joint health? · 2021-06-21T22:11:36.334Z · LW · GW

I'm going to wait for your thoughts on what would falsify your theory, because if it's a real effort, I'll be more inclined to put in the work you are requesting.

"Irrelevant" was the wrong word re images; sorry to have sent you down a rabbit hole--I should have said, "not obviously necessary to the point being made and/or unaccompanied by some explanation of why I should learn about what's in the image". I'd look at an image, read your text on either side of it, and have no idea why you were including it. 

> If you are genuinely willing to give some thought to Base-Line theory then spend some time thinking about the anatomy, finding the muscles on your body and breathing with your Base-Line.

> Lie on the floor and take a few deep breaths. Touch your pubic symphysis, navel and xiphoid process in turn. Imagine the line (the linea alba) that joins them extending as you breathe. Close your eyes and focus on the sensory information your body is providing. Give that a go. More than once.

Why? How many times, for how long? What evidence do you expect this practice to give me in support of your theory? If I don't feel anything, will you count that as evidence against your theory, or will you explain it away as somehow supporting your theory, the way Freud would claim a patient was in denial if they denied having a desire his theory predicted they would have?

Comment by supposedlyfun on What are good resources for gears models of joint health? · 2021-06-19T23:58:43.014Z · LW · GW

> I am p>99.999 confident that what I propose is right. I'd like that rigorously tested. Break me, crush me. Release me from the frustration of knowing (with every fibre in my body) that I'm right ; )


If you're that confident in your position=pain theory, why would you need DAMN-IT? Why would your assessment of a patient do anything other than figure out which of your Big 5 muscles are involved in the pain? If the answer is, "Strengthen the glutes and your pain will stop," then how is any pain ever properly characterized as degenerative?

Alternatively, if your theory is actually "position=pain/Big 5 unless some other pathology is involved," then doesn't your theory only say, "I'm 99.999 percent confident that pain properly diagnosed as idiopathic by someone who doesn't subscribe to my theory is explained by my theory"?

At what point are you describing an invisible dragon?

Here's the thing. You say you came to LW to get your theory disproven. Fine. But you are so confident in it that you expect to be wrong about one in one hundred thousand beliefs that you hold with that level of confidence. Beliefs I hold to that level of confidence include 9 * 7 = 63, because it's possible I am misremembering my multiplication tables. 

Now. Imagine trying to convince me that 9 * 7 = something else, just you and me in an empty room with no calculators. 

This is why your entire sequence went by with minimal engagement and mild upvoting. The amount of work involved in "breaking you" is tremendous, especially over the Internet, especially when your model takes eight disorganized posts and has many irrelevant images in it, and you seemingly haven't absorbed some basic lessons of The Sequences (TM). If I'm going to spend a bunch of time engaging with your theory and finding cruxes, I want to know in advance that you'll play by the rules of good reasoning.

I'm not unwilling, but can you first provide three substantive answers to the following question:

What evidence would falsify your theory?

Comment by supposedlyfun on The Apprentice Thread · 2021-06-18T23:08:22.117Z · LW · GW

[APPRENTICE] My first is due in November. I've had a very hard time finding evidence-based parenting resources on the Internet that aren't for extremely bad situations like poverty or abuse. I feel a burning need to be able to roughly model this kid's subjective experience on a rolling basis because I suspect that's what will make me the most emotionally effective AND let me impart the most rationality-adjacent thought habits. But the books I've come across have been either 1) "it's all Piaget!" which seems somewhat outdated or 2) "Piaget is a good framework but outdated, and I've read some studies, but I'm terrible at synthesis!".

Even just a reading list would be super great. Or a list of 10 heuristics for making parenting decisions. I feel like I need some kind of systematic approach.

I saw a site you cited in one of your parenting posts, which I'd come across in my searches and which looked promising, but I couldn't get enough clues from the site itself to figure out whether it was a good foundation.

Comment by supposedlyfun on Assume long serving politicians are rationally maximizing their careers · 2021-06-18T22:37:41.114Z · LW · GW

My not-a-Democrat grandmother had this exact experience when meeting him. They spoke for a few minutes, and she felt like he thought she was the most interesting person in the room. It left a permanent impression.

Comment by supposedlyfun on Which rationalists faced significant side-effects from COVID-19 vaccination? · 2021-06-16T23:31:04.672Z · LW · GW

Six family members and I are vaccinated (Pfizer or Moderna) and did not have significant side effects by your definition.

Comment by supposedlyfun on supposedlyfun's Shortform · 2021-06-16T12:18:07.470Z · LW · GW

This is a Humble Bundle with a bunch of AI-related publications by Morgan & Claypool. $18 for 15 books. I'm a layperson re the material, but I'm pretty confident it's worth $18 just to have all of these papers collected in one place and formatted nicely. N.B. increasing my payment from $18 to $25 would have raised the amount donated to the charity from $0.90 to $1.25--I guess the balance of the extra $7 goes directly to Humble.

Comment by supposedlyfun on Tracey Davis and the Prisoner of Azkaban - Part 3 · 2021-06-11T07:45:29.617Z · LW · GW

Which classic amp sound does her Sonorus model? Is it like a Line6 head but it can read the player's mind? Or is there a Vox AC30 in a pocket dimension? What's the mic setup if there are multiple amp speakers? Who handles the mixing? I have so many questions!

Comment by supposedlyfun on [Prediction] What war between the USA and China would look like in 2050 · 2021-05-28T21:54:47.241Z · LW · GW

Re trade vs conquest - If smart people are in charge of a smart populace, I agree. But China's South China Sea colonialism + attitude toward Taiwan suggest that they aren't viewing things solely in those terms. They act like a people who find terminal value in throwing their weight around and in taking Taiwan, or at least in reducing the influence of the U.S.-Japan alliance in the area by doing those things.

Re your example of Bretton Woods--in an analogous situation, the U.S./world order would be ready to give China great trade terms, but China would not even perceive such terms to be possible--wouldn't that give China an incentive to conquer instead of trade, as the Axis powers did? I am probably misinterpreting your point here. (Does China want more access to U.S./world markets than it already has?)

Comment by supposedlyfun on [Prediction] What war between the USA and China would look like in 2050 · 2021-05-27T12:02:07.330Z · LW · GW

This all seems pretty sensible.

> The United States and China aren't expansionary powers

How long do you think it would take for China to go from its current level of expansionism to a level that would make war with the US plausibly worthwhile?  Could it happen in a generation, and what might precipitate it?  I'm thinking about Weimar Germany to Nazi Germany, or (the reverse) Imperial Japan to Solid-State Electronics Japan.

The Uighur ethnic cleansing is Han (versus "Chinese" more generally, since the Uighurs are citizens of PRC) expansionism, right?  Might that become more widespread and aggressive? 

(Contra, there's not much worth owning in southeast Asia or the Stan countries, and Russia would oppose outside influence in former Soviet states, based on past and current behavior.)  

What about taking over the Korean peninsula? Wouldn't be the first time. If China controlled DPRK's territory, which I assume they could at will, they could much more easily get troops into ROK than the U.S. could, especially if your view on missile-based ocean-area denial is correct. The 30,000 U.S. troops in ROK would have no realistic hope of reinforcement so long as neither side had air or sea superiority.  Does POTUS order them to fight to the last soldier, hoping that 30,000 dead or captured would motivate the country to fight back, or negotiate a peaceful retreat and withdrawal from ROK?  I guess it depends on who's POTUS.  

I bet the modern PRC could stop another Operation Chromite literally dead in the water. If nothing else, spotting an incoming sea assault is so much easier than it was in 1950.

These same issues would apply if China attacked Japan.

Comment by supposedlyfun on Bayeswatch 5: Hivemind · 2021-05-23T04:12:50.770Z · LW · GW

Do I detect an homage to Ann Leckie?

Comment by supposedlyfun on supposedlyfun's Shortform · 2021-05-22T22:41:43.581Z · LW · GW

I eventually got tired of not knowing where the karma increments were coming from, so I changed it to cache once a week. I just got my first weekly cache, and the information I got from seeing what was voted on outweighed the encouragement of any Internet Points Neurosis I may have.

Comment by supposedlyfun on Love on Cartesian Planes · 2021-05-18T21:53:40.565Z · LW · GW

This is good world-modeling.

Comment by supposedlyfun on Your Dog is Even Smarter Than You Think · 2021-05-02T03:27:04.563Z · LW · GW

I also mentioned Clever Hans, and you made a good point in response. Rather than sound like I am motte-and-baileying you, I will say that I was using "Clever Hans" with irresponsible imprecision, as a stand-in for more issues than were present in the actual Clever Hans case.

I've updated in the direction of "I'll eventually need to reconsider my relationship with my dog" but still expect a lot of these research threads to come apart through a combination of 

  • Subconscious cues from trainers - true Clever Hans effects (dogs are super clued in to us thanks to selection pressure, in ways we don't naturally detect)
  • Experiment design that has obvious holes in it (at first)
  • Experiment design that has subtle holes in it (once the easy problems are dealt with)
  • Alternative explanations, of experimentally established hole-free results, from professional scientists (once the field becomes large enough to attract widespread academic attention). Like, yes, you unambiguously showed experimental result x, which you attributed to p, which would indeed explain x, but q is an equally plausible explanation that your experiment does not rule out.

This is based on a model of lay science that tends to show these patterns, because lay science tends to be a "labor of love" that makes it harder to detect one's own biases.

Specifically on the volunteer-based projects, I expect additional issues with:

  • Selection effects in the experimentees (only unusually smart/perceptive/responsive/whatever dogs will make it past the first round of training; the others will have owners who get bored from lack of results and quit)
  • Selection effects in the experimenters (only certain types of people will even be aware of this research, and only exceptionally talented dog trainers will stick with the program, because training even intelligent dogs, let alone dumber ones, takes so much f-ing patience)

There may be lines of research that conclusively establish some surprising things about dog intelligence, and I look forward to any such surprisal.  But I'm going to wait until the dust settles more--and until there are more published papers because I have to work a lot harder to understand technical information conveyed by video--before engaging with the research.

Comment by supposedlyfun on Your Dog is Even Smarter Than You Think · 2021-05-01T13:56:16.323Z · LW · GW

I have a dog and was aware of these people. My lack of reaction was due to a default assumption that this will turn out to be Clever Hansian once science brings its customary rigor to bear.

If not, I wonder if I will conclude that it's unethical not to teach my dog how to communicate.

Comment by supposedlyfun on Placebo effect report: chiropractic adjustment · 2021-04-17T00:11:26.037Z · LW · GW

> The placebo effect is an effect.

Yes, I guess I'm just wrestling with how it pings both instrumental and epistemic rationality.

Comment by supposedlyfun on Placebo effect report: chiropractic adjustment · 2021-04-17T00:09:59.218Z · LW · GW

I don't know what "self-heal" means in your comment. Does that include conditions that go away on their own (episode of acute back pain, say)? In which case, wouldn't it make more sense to call those temporary conditions, rather than conditions which require the intervention of some self-healing mechanism?

The only thing the open-label placebo effect tends to prove, to me, is that the placebo effect is operating at a mind-level much deeper than our rationality efforts can hope to reach. 

Comment by supposedlyfun on Placebo effect report: chiropractic adjustment · 2021-04-17T00:03:08.957Z · LW · GW

I took an LW break for a few days and read the abstract of that Cochrane review. I'm going to go paragraph by paragraph in responding, which sometimes looks aggressive on the Internet but is just me crux-hunting.

> From the Bayesian perspective you have a model of the world according to which different treatments have different likelihoods of having effects. Then you pay attention to reality, and if reality doesn't behave the way your model predicts, your model has to be updated. That's the core of what epistemic rationality is about: being ready to update when your beliefs don't pay rent.


> If you want to go for the maximum of epistemic rationality, write down your credence for the effects of a given treatment and then check afterwards how good your predictions have been. That's the way to get a world model that's aligned with empirical reality.

Agreed. I would do this now in advance of another treatment I suspected was woo.
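For anyone who wants to actually keep score on such predictions, the bookkeeping is simple enough to sketch. The function name and log format below are mine, purely illustrative, assuming you record each prediction as a (credence, outcome) pair:

```python
# Hypothetical calibration log: each entry is (credence that the
# treatment would work, actual outcome as 1/0). Names and example
# entries are illustrative, not from the thread.

def brier_score(records):
    """Mean squared gap between stated credence and what happened.
    0.0 is perfect foresight; always saying 50% scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in records) / len(records)

log = [
    (0.9, 1),  # confident physical therapy would help; it did
    (0.7, 0),  # thought the adjustment would help; it didn't
    (0.2, 0),  # expected the supplement to do nothing; it didn't
    (0.6, 1),  # mildly expected massage to help; it did
]
print(round(brier_score(log), 3))  # → 0.175
```

A low score after a few dozen entries is decent evidence your model is tracking empirical reality; a score near 0.25 means your credences are doing no better than coin flips.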

> While doing this it's worth keeping in mind what you care about. One alternative-medicine treatment, for example, is colon cleaning. People who do colon cleaning usually observe that after taking the colon-cleaning substance their shit has a particular surprising form. If you were previously skeptical that the treatment did anything, you shouldn't take the fact that your shit now has a surprising form you didn't expect as evidence that the treatment provides the medical benefits it's claimed to.

"Laughing deliriously" is not a result I would have expected from getting my leg tugged on exactly once, but I understand you (above) to be claiming that the unexpected result is evidence that it was not merely a placebo. I'm not a physiologist, but I can't even begin to think of a reason for leg-tug->delirious-laugh other than "placebo."

> There are a bunch of alternative medical interventions that follow the pattern of providing surprising effects which then convince people that the intervention is great while not providing the hoped-for benefits.

This is my understanding as well, and it acts as a global "less likely" coefficient any time I hear any claim made by the alternative medicine community.

> When thinking about the issue of chiropractic interventions there's also the question of whether the treated "pelvic misalignment" is the root cause.

The DO made no claims about what caused the pelvic misalignment (although she speculated that it was because I drive a manual transmission!), only that pelvic misalignment was the cause of the pain. 

> In the scenario where the "pelvic misalignment" is due to one leg being shorter than the other, it's plausible that the "pelvic misalignment" is going to happen repeatedly in the future after it gets fixed.

Yes, and grossly/radiographically visible pelvic misalignment is a thing that happens due to legs of different lengths, but you are being too charitable. This DO did not say my legs were different lengths or that the misalignment was grossly visible, and in fact, she claimed that many such misalignments were invisible to x-ray.

> Then if you would, for example, go every month to get your "pelvic misalignment" fixed by the chiropractor, that's likely better than painkillers, but it's still not a perfect intervention.

In which case I think a "doctor" or practitioner of any stripe has an obligation to dig deeper for an actual root cause. Who stops at, "Very gently tugging on this guy's leg once a month provides some relief for his back pain that was bad enough that he went to the ER"? I'm not prepared to excuse that level of incuriousness; it causes me to down-update my trust in everything the practitioner says.

> When it comes to working with the body there are a few strains of experts who develop their expertise through trained perception and who mostly work outside of academia, given that trained expert perception is subjective in nature. I don't think it's necessary that those experts be able to translate what they are doing into concepts that break down along lines that can be objectively observed (like x-rays) instead of subjectively accessed.

I completely disagree. I would expect what you're calling subjective trained expert perception to be constantly subject to all of the following cognitive biases (reading the Wikipedia list) and others I haven't thought of:

  • Anchoring bias (you learn bullshit in your bullshit school about what causes unilateral back pain, and that's your frame for all unilateral back pain now)
  • Availability bias (these other biases cause you to remember confirming data, which then...causes you to remember confirming data)
  • Confirmation bias (you remember everyone you helped and forget the people you didn't help)
  • Backfire effect (when someone says you didn't help, you find a way to use that as evidence that your underlying theory is true)
  • Hindsight bias (when someone has a good result, you believe it was predictable at the time you treated them, which makes you look great in your own mind)
  • Illusory truth effect/availability cascade (everyone in your professional community says unilateral back pain is caused by pelvic misalignment)
  • Sunk-cost thinking (you spent a lot of money and professional time learning to tug on people's legs, so you tend to ignore evidence that your entire field is woo, or decide that "Western medicine" is the real villain)
  • Pareidolia (you perceive important patterns in random noise--cf. the study where practitioners couldn't agree on where a supposed trigger point was)
  • Salience bias (you remember the patient who says you cured their back pain, not the patient you never see again who posts on the Internet dragging your entire field)

Summing over all these biases, I have basically no faith at all in subjective trained expert perception of people performing chiropractic/bodywork. Crux: Whether there are reliable studies tending to show that such practitioners have any ability to diagnose/recognize illness conditions better than chance, or that their diagnosis-specific chiropractic manipulations do better than the replacement-level intervention we would expect from "Western medicine" treating the same symptom. No partial credit for "You have a C4-C5 subluxation that won't show up on a CT scan; I prescribe [the same physical therapy you'd get from an MD]."

> Just because a good musician might not be able to give you an objective model of his expertise doesn't mean that he doesn't have expertise.

Bodywork-expertise claims to have both objective effects on humans and objective models behind those effects. I don't think musicians make similarly specific claims (beyond very general things like "these three notes sound weird because the third one is out of key" or "songs in a minor key tend to feel sad relative to songs in the major key"). Musicians and bodyworkers may both claim "expertise through trained perception," but a musician claiming that is making dramatically weaker factual claims at a dramatically weaker epistemic standard than the bodyworker. 

> In general it makes sense to go first for the treatments that you think are most likely to succeed and then, if they don't work, to move down the list (depending on how desperate you are) to treatments that you believe have a lower chance of success. In practice it makes sense to also factor in what success in a given treatment means, the possible risks of the treatment, and the costs.


Comment by supposedlyfun on Placebo effect report: chiropractic adjustment · 2021-04-15T11:55:06.521Z · LW · GW

This is a sensible response, probably the ideally correct one, and I appreciate it.

My counter question is: as a limited agent, at what point, if ever, am I justified in writing off (that is, assuming it was a placebo) a treatment with no plausible mechanism of action? I've done mainstream treatments for this spasm as well, without the zany effect, and without the equivalent reduction in magnitude.

Comment by supposedlyfun on What weird beliefs do you have? · 2021-04-15T03:14:56.343Z · LW · GW

Jeffrey Epstein didn't commit suicide. Two cameras malfunctioned, the normal procedures weren't followed, and it's silly to think he didn't have compromising information on important people. And it was an incredibly high-profile prisoner. 

"Attorney General William Barr described Epstein's death as 'a perfect storm of screw-ups'." Yet several guards were indicted on charges of conspiracy and record falsification.

This belief is so obvious to me that I felt like I was being gaslighted by news outlets and even academics who later called the belief a conspiracy theory in the same class as QAnon and UFOs, including a guest on a FiveThirtyEight podcast about conspiracy theories (I'm a huge FiveThirtyEight fan; they laid the groundwork for me to appreciate this community, which in turn mostly increased my appreciation for FiveThirtyEight).

A majority of Americans seem to agree with me, although who knows why, so maybe it's not a "weird" belief except when compared against the mass media/"elite" narrative.

You could de-convince me with statistics about how often those and similar cameras malfunctioned and how often guards disregarded normal procedures with other prisoners, low profile and high profile. 

Comment by supposedlyfun on People Will Listen · 2021-04-12T01:17:35.193Z · LW · GW

This post addresses none of the valid criticisms in comments on your recent posts, especially challenges to the accuracy of your assessment of counterparty risk. These were all different ways of saying, "Your posts do not contain enough information to allow me to determine the likelihood that you are a genius versus a crank."

You can't just keep invoking Scott Alexander's Bitcoin regrets. Those opportunities are gone.

This post, in which you instead conclude that "haters gonna hate," does not advance your cause. I am just irritated that you keep posting without seeming to absorb the community's epistemic values.

Comment by supposedlyfun on Homeostatic Bruce · 2021-04-09T07:24:45.418Z · LW · GW

I like the model you're developing here as an intuitively plausible explanation of akrasia.  However, I think the comparison of BUD/S to, say, ritual scarification or bullet-ant gloves isn't strong enough to support your theory.

> Like we might hope, it endows its survivors (because some die as a direct result of it) with focus, decisiveness, and basically all the conscientiousness they need to seriously “kick ass” — that is, underperform their cognitive potential much less than most do.

This point about BUD/S isn't obviously correct to me. Something like 75% of candidates drop out without completing the course. This is strong evidence to me that BUD/S primarily selects for whatever it's designed to optimize for (whether intentionally or unintentionally) rather than endowing those traits.

At least as of 1981, a major part of the weeding-out was occurring fairly early in the course:

> Thirty-five percent of the attrites dropped during the indoctrination period; 27 percent, during the first 2 weeks of training; 15 percent, during Hell Week; and 23 percent, during the remainder of the training period.

(page v).  An average 20% of the attrites who passed the screen quit during Hell Week (page v), three weeks into the actual course, and as high as 36% did in two of the classes studied (page 18). If BUD/S cultivated traits rather than selecting for them, I would expect the dropouts to be more evenly distributed.  You could observe that many of the attrites during the indoctrination period failed the physical screening test, but then we have to determine how well conscientiousness correlates with passing the screen...

Admittedly, those statistics don't differentiate between medical attrition and voluntary attrition, which were each about 40% of total attrition.

This study by the Navy doesn't seem to support your claim that BUD/S makes people conscientious to the extent you suggest.  SEALs seem to be somewhat, but not hugely, above civilian average on the conscientiousness scale (page 10). Somewhat contra my arguments, this observational study admits that it could not rule out the possibility that BUD/S increases conscientiousness (page 11).

This study by RAND indicates (page 11) that the Air Force Research Laboratory concluded that higher-than-average conscientiousness was predictive of success in the Combat Controller course (a component of the Air Force's special operations side).  Combat Controllers work alongside other branches' special forces people, so presumably they need some of the same special sauce in order to succeed.  CCT school is much shorter than BUD/S, I admit, but it's some evidence that conscientiousness is a cause, not an effect, of success in Special Forces.

I think the most favorable claim you could make based on BUD/S is "To the extent that high conscientiousness is required for a BUD/S candidate's success in the course and as a SEAL, only 25% of the candidates either have the requisite conscientiousness at the start of the course or develop it during the course before the course selects against their then-current level of conscientiousness."


[edit: changed "SEALS" to "BUD/S" in the first graf]

Comment by supposedlyfun on A new acausal trading platform: RobinShould · 2021-04-02T01:29:17.072Z · LW · GW

I asked Omega to model your model's model of what trades my brain would have executed if I had created an account.  It seems I owe you $2,440.22.  

Please send me your bank account number, routing number, ATM PIN, signature exemplar, and Social Security card so I can send you the funds.  For your records, my username would have been ButtsLOL42069.

Comment by supposedlyfun on romeostevensit's Shortform · 2021-04-02T01:25:50.626Z · LW · GW

To me, the difficulty seems to lie not in defining sophistry but in detecting effective sophistry, because frequently you can't just skim a text to see if it's sophistic.  Effective sophists are good at sophistry.  You have to steelpersonishly recreate the sophist's argument in terms clear enough to pin down the wiggle room, then check for internal consistency and source validity.  In other words, you have to make the argument from scratch at the level of an undergraduate philosophy student.  It's time-consuming.  And sometimes you have to do it for arguments that have memetically evolved to activate all your brain's favorite biases and sneak by in a cloud of fuzzies.  

"The surprise at the sentence level..." reminds me of critiques of Malcolm Gladwell's writing.

Comment by supposedlyfun on A new acausal trading platform: RobinShould · 2021-04-02T01:10:40.405Z · LW · GW

> In fact, our prediction technology is now so advanced that mesa-optimizers appear regularly in our prediction software.

Haven't choked on my coffee in a while. Thanks.

Although--I assume this means your software predicted that I would choke on my coffee, which implies either that choking on my coffee is a net positive in utilons or that your mesa-optimizers are misaligned.  Have you tried praying to them?  It sometimes worked on the misaligned AGI described in the Old Testament.

Comment by supposedlyfun on supposedlyfun's Shortform · 2021-03-24T02:25:32.221Z · LW · GW

This is great. Thank you. I'm fascinated by the fact that this problem was studied as far back as the 1960s.

Comment by supposedlyfun on Thirty-three randomly selected bioethics papers · 2021-03-24T01:29:55.040Z · LW · GW

This is a much better response than my comment deserved! I feel embarrassed about the disparity (not your fault; don't stop leaving great comments just because the OP half-assed it).  I think I'm still trying to find the right balance between "volume of comments and posts ~ liveliness of overall discussion" and "bulletproofness of comments and posts ~ quality of overall discussion".

Comment by supposedlyfun on Thirty-three randomly selected bioethics papers · 2021-03-23T21:43:35.015Z · LW · GW

I'm struck by how different people have such different responses to the list. Kaj felt like zir faith in humanity was restored; I rolled my eyes really hard and may have sighed or groaned. This caused me to moderately update in the direction of "I was being uncharitable".

Comment by supposedlyfun on Thirty-three randomly selected bioethics papers · 2021-03-23T21:39:13.754Z · LW · GW

Yep, and I disagree with the "sin" framing in that answer but scratched out my line about nudges in the comment above because it was half-baked. I've added the claim to my list of planned posts.

Comment by supposedlyfun on supposedlyfun's Shortform · 2021-03-23T21:17:21.116Z · LW · GW

Just by feel. At this stage, I'm just spitballing and reporting subjective sensation. The sensation went down but not away after a few hours.

Comment by supposedlyfun on sophia_xu's Shortform · 2021-03-23T03:17:14.683Z · LW · GW

I'm slowly working on a frontpage post generally arguing that Critical Theory has value. My hypothesis is that some of the community will be allergic to it because it's associated with "SJW" issues and bad academia but that more of the community will thoughtfully engage if I present it in aspiring-rationalist terms, especially foundational ones like falsifiability and making beliefs pay rent.

I'd be willing to help workshop entries in your sequence along those lines. And if you want to steal my particular idea for part of your sequence, go for it--the work is going slowly.

Comment by supposedlyfun on supposedlyfun's Shortform · 2021-03-23T03:07:07.477Z · LW · GW

Remote Desktop is bad for your brain?  

I live abroad but work for a US company and connect to my computer, located inside the company's office, through a VPN shell and then Windows' Remote Desktop function. I have a two-monitor setup at my local desk and use them both for RDP, the left one in horizontal orientation (email, Excel, billing software) and the right one vertical (for reading PDFs, drafting emails in Outlook, drafting documents in Word).

My work computer shut itself off after hours in the US, so I had to get a Word document emailed to me so I could keep drafting it on my local computer. I feel like getting rid of the (admittedly mild) RDP lag between [keypress] and [character appears on screen] is making me 30% smarter. Like the delay was making me worse at thinking. It's palpable. So it's either real, or some kind of placebo effect associated with me being persnickety, or both. Anyone seen any data on this?

Comment by supposedlyfun on Thirty-three randomly selected bioethics papers · 2021-03-23T02:54:49.189Z · LW · GW

Provisional: a few weeks ago, the LW magic recommendation feed showed me this EY article about bioethics being stupid, from a while back.  I was somewhat persuaded by his point.  Your random sample seems to strongly confirm his model/hypothesis:

A doctor treating a patient should not try to be academically original, to come up with a brilliant new theory of bioethics. As I’ve written before, ethics is not supposed to be counterintuitive, and yet academic ethicists are biased to be just exactly counterintuitive enough that people won’t say, “Hey, I could have thought of that.” The purpose of ethics is to shape a well-lived life, not to be impressively complicated. Professional ethicists, to get paid, must transform ethics into something difficult enough to require professional ethicists.

Many of these papers seem to be doing exactly that, or to just be arguing with some other bioethicist who is doing that.  The pullquotes you put in the body of your post certainly make it look like a diseased discipline.  These people are just writing op-eds at each other, as far as I can tell.

I only got about halfway through your post because it all seemed aggressively uninteresting to anyone who is not an academic bioethicist. And I like reading philosophy of religion!

Finally, I think it's totally nuts that people assert that nudges are, or even can be, per se unethical. But I think maybe I need to do a full-on post about that after reading up on the issue. Upon reflection, I'm withdrawing this claim because a) it's not germane to the main post or my comment and b) I'm always complaining in my head about people posting contentious claims on LW without any support, so I shouldn't do that, either.

[edited to add a link to the Diseased Discipline article.]

Comment by supposedlyfun on Internal Double Crux: Video Games · 2021-03-21T00:23:20.514Z · LW · GW

I agree with what you're saying. I think it was some combination of not defining my scale with enough precision and overestimating gaming's status as a cause rather than a symptom.

Comment by supposedlyfun on Babble Challenge: 50 Ways to Overcome Impostor Syndrome · 2021-03-20T23:30:41.971Z · LW · GW

I put on my robe and Babble Challenge hat.

  1. Do ten push-ups.

  2. "Hey Dr. Adviser, I'm experiencing impostor syndrome. Are you familiar with the idea, and do you sometimes get it, too?"

  3. "Hey Candidate Smith in another PhD program, I'm experiencing impostor syndrome. Are you familiar with the idea, and do you sometimes get it, too?"

  4. Post the kernel of your thesis on LW frontpage. People will think it's neat.

  5. Google "PhD impostor syndrome" and click on the first link you see from a person who's famous to you.

  6. Use your research skills and free access to zillions of journals to spend one hour looking at the best research on the syndrome; see what worked for people in studies.

  7. Jog two miles.

  8. Leave the house.

  9. Call the campus mental health center and make an appointment.

  10. See what your health insurance covers, and who accepts it, and make an appointment with a psychologist who's a "Dr."--and therefore has a PhD and probably went through it, too!

  11. Work at a coffee shop.

  12. Keep a CBT-style journal of all the impostory thoughts you have, maybe one page per thought-type, and just make a note of the thought, the date you had it, and what thought you wish you'd had instead.

  13. Get with a CBT professional (even a "lowly" LCSW can help with this) and do the work every day for six months.

  14. Get a smaller article published in a trade journal.

  15. Go to a conference...once that's possible.

  16. Get a hobby that you can use to partially define yourself, so even if you feel like an impostor PhD person, you can feel like a real...up-and-coming powerlifter, or something. (Can you tell I think exercise and fitness are major components of all mental health angles?)

  17. Write a book of poetry about your impostor syndrome.

  18. Consider, "Would I let this impostor-syndrome-voice jerk in my head live rent-free in my apartment? So why in my head?"

  19. Impostor syndrome subreddit.

  20. Join a union/org/etc of PhD candidates and talk to them about it.

  21. Spend time turning your thesis into concrete subsections and just hammer away at one until you finish. Voila, you accomplished something concrete. What impostor could do that?

  22. Develop an oracle AGI and ask it what to do. Please solve AI safety first.

  23. Call your parents and ask them about their impostor syndrome.

  24. Talk to GPT-3 about it.

  25. Talk to ELIZA the chatbot about it.

  26. Do this babble challenge.

  27. Set a five-minute timer to write down "all the reasons why my impostor syndrome voice is wrong about me".

  28. Consult with a BDSM professional to see if they have any helpful, uh, methods. Seems like a lot of their clients are high-flying professional people.

  29. Develop actual AGI so that everyone you work with is equally impostory, relative to the AGI, in your area of study. Please solve AI safety first.

  30. Take advantage of placebo effects--I bet a few magic spells to banish impostor syndrome would have some effect even if you explicitly did not believe in magic. Burn sage, hit yourself with a tulip, whatever.

  31. Have a conversation with your impostor syndrome and write down what it tells you. Take an outside view of these thoughts, or ask your friend for one.

  32. Re-read up on cognitive distortions and biases and see how they might be playing into your syndrome. I'm 80% confident that if you can recognize that some of what you're experiencing is impostor syndrome, then at least three distortions/biases will resonate with you.

  33. Ten minutes in the sauna.

  34. Take a month off from alcohol; observe effects.

  35. Offer free tutoring to a struggling but bright undergrad. See, you understand your subject so well, you can explain it to a struggling but bright undergrad.

  36. Five-minute timer: the ways in which your thesis will contribute to advancing your field.

  37. Spend 30 minutes in front of the sun or a sun lamp.

  38. Stay up for 36 hours--does it change? maybe it's depression-mediated? Cf. recent Astral Codex Ten called "Sleep Is the Mate of Death".

  39. Lexapro 10mg

  40. Interrogate your impostor syndrome like it's an al Qaeda operative in the 24 hours after 9/11, before we all started thinking hard about the ethics of torture. Imagine waterboarding it. Dehumanize it. It's trying to make you fail. How many seconds of waterboarding could it tolerate before it gives up and admits that you're doing fine?

  41. Tweet something at Elon Musk. Maybe he'll respond in some zany way that will make you chuckle but also be slightly concerned.

  42. Adopt a dog/cat/two rats.

  43. Use outside-view thinking to come up with a daily schedule broken down into Pomodoros. Pretend you are your own manager and structure it like you would for a subordinate. Maybe the lack of structure in PhD life is bothering you.

  44. Outside-view an update on the following claim: "I got this far despite being a fraud."

  45. Start tracking your work time in .25-hour increments. Maybe you're underestimating how much you get done.

  46. Start a literature review club with your colleagues. Volunteer to lead the first session.

  47. Post on a subreddit about your topic. Get a possibly net-unhealthy benefit from seeing Internet points roll in.

  48. Ask your advisor for a quarterly progress review.

  49. Marijuana edibles.

  50. Supposedly 30 minutes playing a musical instrument has outsized benefits on wellbeing.

Comment by supposedlyfun on Internal Double Crux: Video Games · 2021-03-20T23:20:29.758Z · LW · GW

Endweek update: I completed the week in compliance with the two-hour limit. I have not perceived a one-point-in-ten improvement in my wellbeing as a result, but a mild improvement nevertheless. My work productivity did not increase, although work was also slow on the supply side. Presumably I did something relatively more productive for that third hour, but I don't track my nonwork time to the tenth of an hour like I do my work time, so I don't know for sure.

I didn't read more Bermúdez, but I did start the Khan Academy course on Linear Algebra after getting stumped by M. Kosoy and M. Filan's discussion of closed...cone...somethings on the most recent AXRP, which I nevertheless recommend, the way one might recommend trying out the bucking bronco ride at the neighborhood honky-tonk.

"Read Bermúdez" was the clear winner here, although ze was overestimating the drag that too-much-gaming was having on my mood.

Overall, I think the exercise of treating this like a double crux between two subroutines in my brain was helpful. Just the process of making them write down their views was about 85% of the benefit. The effort involved in self-imposing the two-hour limit was also more than worthwhile in terms of wellbeing benefits. This is probably a combination of "too much video gaming is depressing" and "it feels good to exert agency over your own life". Maybe a 70/30 split?

Comment by supposedlyfun on Why sigmoids are so hard to predict · 2021-03-19T07:58:41.738Z · LW · GW

The graphs here really helped me to understand the points you were making.

Comment by supposedlyfun on Internal Double Crux: Video Games · 2021-03-18T04:59:21.437Z · LW · GW

Midweek update: SF has successfully limited himself to less than two hours per day. This has not been accompanied by increased work productivity or more Bermúdez-reading. One thing at a time, maybe?

Comment by supposedlyfun on Dark Matters · 2021-03-15T02:43:13.953Z · LW · GW

This is fabulous content; thank you. I knew virtually nothing about the evidence for dark matter and now feel like I know a fair amount.  

One hangup: I'm missing one or more inferential steps in interpreting the picture of the Bullet Cluster. I understand that the pink is gas and purple is mass, but I am not getting how that relates to dark matter.  Since the rest of the piece is very explicitly "if A then B because x, A is true because y, therefore B" in spelling out the thought-steps, could you really break it down for the poets in the back? Maybe after "Explain this with modified gravity."

I'd be interested in an explainer about the theories of what dark matter might be, if there are major schools of thought. (If it's just people's pet theories, no need to bother.)

Comment by supposedlyfun on Apollo Creed problems and Clubber Lang problems -- framing, and how to address the latter? · 2021-03-14T23:05:19.906Z · LW · GW

This sounds like an Ugh Field (habryka post). See also the Aversion topic.

Comment by supposedlyfun on AXRP Episode 5 - Infra-Bayesianism with Vanessa Kosoy · 2021-03-12T01:10:41.103Z · LW · GW

Thank you. This helps. The explanations I found online required a background in linear algebra, which is well beyond my high school calculus.

Comment by supposedlyfun on AXRP Episode 5 - Infra-Bayesianism with Vanessa Kosoy · 2021-03-11T07:04:05.370Z · LW · GW


I was doing great following along for the first 20 minutes or so.  I've paused at 28:19 to learn what a convex probability set is and about closed convex cones.  I should've paused about five minutes ago but was hoping "convex probability set" merely sounded intimidating and would be easily explained (the same way you and your guests typically put difficult concepts in understandable terms).

I have never in my life gotten even close to such concepts, to my knowledge, but I was a humanities major.

I'm not complaining! Just feeding back.  This is the first time I've had to pause AXRP to go get educated.
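
[Edited to add: having since looked them up, the definitions themselves turn out to be short. As I understand them (corrections welcome):]

```latex
% A set S in R^n is convex if it contains the whole segment between any two of its points:
S \text{ is convex} \iff \forall\, x, y \in S,\ \forall\, t \in [0,1]:\quad t x + (1-t) y \in S

% A convex cone is additionally closed under nonnegative scaling -- equivalently,
% it contains every nonnegative combination of its points:
C \text{ is a convex cone} \iff \forall\, x, y \in C,\ \forall\, \alpha, \beta \ge 0:\quad \alpha x + \beta y \in C
```

("Closed" in "closed convex cone" is the topological sense: the cone contains its limit points.)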

Comment by supposedlyfun on supposedlyfun's Shortform · 2021-03-10T22:02:35.843Z · LW · GW

This makes sense re old posts.  Thanks for pointing to a valid use.

Inside my brain, I feel especially susceptible to anything that acts like Internet Points, and that little star was triggering the itch.  Without the star there, I click less often on my username to see how many Internet Points I got.  (I was also clicking on the star even when I knew there was no new information there!)  Removing the star removed some of the emotional immediacy.

Comment by supposedlyfun on supposedlyfun's Shortform · 2021-03-10T09:38:33.801Z · LW · GW

In case anyone sees this: I turned off my Vote Notifications, and it has increased my enjoyment of the site by at least 10%. You should, too.

Comment by supposedlyfun on What I'd change about different philosophy fields · 2021-03-09T02:29:40.807Z · LW · GW

For your readers' benefit, maybe just say what you mean, or actively look for a crux, instead of fencing/sparring? I'm having a very hard time figuring out what you are thinking.

Comment by supposedlyfun on A Semitechnical Introductory Dialogue on Solomonoff Induction · 2021-03-04T23:39:02.701Z · LW · GW

The language of physics is differential equations, and it turns out that this is something difficult to beat into some human brains

You rang?

I'm not sure why, but I find these dialogues easier to learn from than an article expressing the same ideas in the same order, even in an explicitly Q&A format.

Comment by supposedlyfun on Economic Class · 2021-03-04T22:31:56.919Z · LW · GW

businesses might be afraid of getting sued anyway

This is the correct explanation of why nobody does this.  

Here is what I hope is a gears-y rundown from a lawyer.  I would appreciate feedback as I've been pondering a post or sequence in this vein (probably a different topic).

If a company client came to me and said, "I want to hire based on IQ; to hell with Duke Power," I would say, 

"We can make this happen in a legally defensible way.  It will slightly increase the likelihood that a plaintiff-side employment lawyer will take the case of a person who doesn't get hired, because of Duke Power and because IQ tests are so unusual in the hiring context.  That, in turn, increases the likelihood that we end up in front of a jury, which might as well be an RNG for our purposes. [Plaintiffs without lawyers almost never make it to a jury, and they are terrible when they do.]  You're raising your annual risk of a big jury verdict by maybe half a percent.

"First, I need you to pretend I never went to college, then explain to me (1) why IQ is a good hiring measure for this job.  Then, (2) why some other measure isn't better than IQ (a college degree in your field, for example; remember, I am an average American, I like to think that college degrees mean something).  Then, (3) how you arrived at the IQ cutoff you plan to use for this job.  

"Next, we need to hire a psychometrician who will write a report endorsing your views on items 1 through 3.  The report will also state that whatever discrimination is baked into IQ test results based on {race, gender, religion, ethnicity, national origin, age, disability status*, military service} won't have a disparate impact on people in those categories. This needs to be a real psychometrician, maybe somebody who works for schools or is a tenured professor, not somebody we found on Expert Witness Search dot com. The report will go in my file and yours.  It should be updated every year.  If you can't get this report, I advise you not to include IQ in your hiring rubric.

"Now.  Are you sure about your answer to number 2?

I would be looking for an explanation like, "We're basically DARPA, but with harder problems, for [say] rocketry.  Three-quarters of the postdocs we hire with Ph.D.s in actual rocketry wash out because our rocketry is so complex.  They stare at the differential equations for five minutes, then vomit.  But we did Career Day with a Mensa club and ended up hiring several middle schoolers, who tripled our effectiveness.  I cry myself to sleep every night thinking about how even a Ph.D. in rocketry is not a good enough cutoff for our hiring decisions.  I have tried everything else and still can't get good people.  I dream of rockets, and all I want is to--to rocket them.  There's just something about rocketeering that requires a high IQ."

Is this more than "demonstrably a reasonable measure"? Maybe. But it's good to have some safety margin built in, with employment law as with boxing an Oracle AI.

Comment by supposedlyfun on Texas Freeze Retrospective: meetup notes · 2021-03-03T22:27:59.189Z · LW · GW

This was extremely valuable content for me. This kind of object-level data doesn't come through in normal news reports. These people went longer without power, and got much colder, than the other specific stories I'd heard.

Comment by supposedlyfun on A whirlwind tour of Ethereum finance · 2021-03-02T22:53:41.643Z · LW · GW

One of the things I appreciate about this community is posts like this, where a smart person gets the itch to learn all about something interesting, then writes it up in a way that effectively conveys their synthesis.

Comment by supposedlyfun on Economic Class · 2021-03-02T08:00:04.762Z · LW · GW

Hiring directly based on IQ is illegal.

This is not correct, in the U.S. states where I've practiced employment law.  Are you talking about a particular state?

If this is your interpretation of the US Supreme Court's decision in Griggs v. Duke Power Company, I would disagree.  

Nothing in the Act precludes the use of testing or measuring procedures; obviously they are useful. What Congress has forbidden is giving these devices and mechanisms controlling force unless they are demonstrably a reasonable measure of job performance.

401 U.S. 424, 436 (1971).  What had happened was, Duke Power enjoyed its explicitly racially segregated work force and wanted to keep it, but the newish Title VII of the Civil Rights Act of 1964 said that was illegal, so Duke imposed an intelligence test on certain jobs it wanted to keep white but which didn't really require high intelligence-as-tested.  This had the effect of disproportionately excluding Black applicants, who apparently didn't do as well on the test for reasons that are not germane to the holding.  

Held: If a facially neutral job requirement has the effect of discriminating based on a protected category, the employer has to show that the requirement is "a reasonable measure of job performance"--with a strong undertone of "How stupid did you think we were?" from Chief Justice Burger, writing for a unanimous court.

Comment by supposedlyfun on Different kinds of language proficiency · 2021-03-01T01:37:23.892Z · LW · GW

English natively, then learned Spanish in school, all throughout primary, secondary, and university.  I often dreamed in Spanish and could communicate fluently.

The emotional valence you're talking about is something I never experienced, probably because I never trained it.  All of my Spanish was academic, even the literature classes I was taking in college.  I had very few organic Spanish conversations with Spanish speakers, and I never needed (or had the opportunity) to use Spanish to build a friendship or court anyone.  

The few times I was purely socializing with Spanish speakers for (say) an hour or more, the Spanish part of my brain kicked into overdrive, and I was even more fluent than normal--like the English part of my brain was just gone, and I was meta-aware of that feeling, but meta-aware in Spanish.  It was similar to a flow state.  The chasm was definitely gone in those situations.  Or at least it was for me--maybe my Spanish was just bad enough for the other person that I was in an uncanny valley (uncanny chasm??).