Posts

Internal Information Cascades 2021-06-25T16:35:13.870Z
Gems from the Wiki: Paranoid Debating 2020-09-15T03:51:10.453Z
David C Denkenberger on Food Production after a Sun Obscuring Disaster 2017-09-17T21:06:27.996Z
How often do you check this forum? 2017-01-30T16:56:54.302Z
[LINK] Poem: There are no beautiful surfaces without a terrible depth. 2012-03-27T17:30:33.772Z
But Butter Goes Rancid In The Freezer 2011-05-09T06:01:34.941Z
February 27 2011 Southern California Meetup 2011-02-24T05:05:39.907Z
Spoiled Discussion of Permutation City, A Fire Upon The Deep, and Eliezer's Mega Crossover 2011-02-19T06:10:15.258Z
January 2011 Southern California Meetup 2011-01-18T04:50:20.454Z
VIDEO: The Problem With Anecdotes 2011-01-12T02:37:33.860Z
December 2010 Southern California Meetup 2010-12-16T22:28:29.049Z
Starting point for calculating inferential distance? 2010-12-03T20:20:03.484Z
Seeking book about baseline life planning and expectations 2010-10-29T20:31:33.891Z
Luminosity (Twilight fanfic) Part 2 Discussion Thread 2010-10-25T23:07:49.960Z
September 2010 Southern California Meetup 2010-09-13T02:31:18.915Z
July 2010 Southern California Meetup 2010-07-07T19:54:25.535Z

Comments

Comment by JenniferRM on What are some beautiful, rationalist artworks? · 2022-01-27T20:03:27.320Z · LW · GW

Part of why I like this one is that I was confused by some of it, and did some research, and thus learned that the big hazy blue/white thing isn't like... a rocket's glow from near and close to the camera, but rather it is a wispy but vast object that "exists out there"!

It is the Phoebe Ring, which was predicted to exist by Steven Soter in 1974 and first observed in 2009. It is made from the ejecta dust of Phoebe, which formed far from the Sun, probably had water on it while it was radioactively hot, and was only later captured by Saturn after migrating into the inner system.

Comment by JenniferRM on Why haven't we celebrated any major achievements lately? · 2022-01-27T18:25:36.644Z · LW · GW

The standing ovation at Wimbledon stands out to me as hopeful... like maybe someone with influence over the PA system at a big sporting event had a coherent theory of optimistic credit assignment and managed to use it to let a hopeful crowd show respect for good actions in a relatively selfless way? 

I found the video and it is interesting how they announced numerous people and things, like various categories of NHS employees, and then some random social media fundraising stunt was the "final name" they announced (like in an "end on a good note" motion?)...

But the first in the list of honored people were "leaders who have developed the anti-covid vaccines". The scientists themselves were never named, and maybe that was them... or maybe not? The audience seemed to want to cheer for the creation of the vaccine, and since that mention was all they were going to get, that's what they wouldn't stop clapping about: they kept clapping and clapping, and then stood up and clapped some more, for the thing that was as maximally decisive and meaningful as the PA system was going to give them. Smart audience <3

Googling and searching more, it looks like the biggest name there (though never mentioned by name) was a scientist/entrepreneur who was "knighted" in 2021 as Dame Sarah Katherine Gilbert.

Highlights of her life: lots of academic career stuff in the 1990s. In 1998 she gave birth to triplets who were raised by her househusband. All three are currently studying biochemistry in college, so he seems to have done a good job as a parent. She founded Vaccitech in 2016. She heard about a pneumonia cluster in Wuhan on Jan 1, 2020 and had the vaccine candidate for it designed within two weeks. As Wikipedia notes:

As of January 2022 more than 2.5 billion doses of the vaccine have been released to more than 170 countries worldwide.

Well played :-)

Comment by JenniferRM on Why haven't we celebrated any major achievements lately? · 2022-01-24T20:51:20.170Z · LW · GW

It is interesting to read this for the first time while far enough in the future to know how some things turned out, using The Power Of Retrospection to associate things that were potentially relevant but not yet obviously so...

This was written on August 11, 2020 (emphasis not in original)...

How will we greet the COVID-19 vaccine, when it arrives hopefully in the next year or two? Will people “ring bells, honk horns, blow whistles, fire salutes, drink toasts, hug children, and forgive enemies”? Will they “name schools, streets, hospitals, and newborn infants” after the creator?

At the same time this was written, Operation Warp Speed was just a few months away from success, and on nearly the same day it was written, companies connected to OWS were being sued for making announcements of questionable truth close in time to jumps in biotech stock prices and to sales of such stocks by executives.

In November, three months and eight days after the above was written, Pfizer announced 95% efficacy ... to zero parades. 

Most of the talk was about the forward-looking process where a centralized government bureaucracy that shouldn't even exist would (or would not) give "authoritarian permission" that allowed people to give "personal permission" to potentially health-promoting medical treatments.

The actual celebrations that did occur within days of Pfizer's announcement... happened before the announcement. The topic of the spontaneous outpouring of happiness was that Trump lost the election to whoever it was that had been decided by insiders to be the one to go head-to-head against Trump.

In the meantime, the leader of the overall vaccine  project that produced the success was Moncef Slaoui. Instead of a ticker tape parade he helped a bit with the presidential transition and then resigned while maintaining various studiously polite silences (though his words from 1 month after the OP was written and 3 months before the resignation are still visible).

However, personally, if anyone really deserves a ticker tape parade, I'd vote even harder for Katalin Kariko, whose work mostly preceded the writing of the OP by years.

There would be some embarrassment here too, however, because she doesn't have tenure, having been pushed out of the high prestige success track of academia early in her career, then persisting via protective friendships with the sort of people who exercise the Kolmogorov option, before eventually moving to the private sector and succeeding mostly in spite of how broken the US's central institutions are, not because of their effectiveness.

In the meantime, here in Jan 2022, as I write this and google for evidence of parades, the ones I mostly find are against the vaccines.

I find myself able to explain most of the gears and mechanisms behind each of these little isolated and seemingly confused tragedies using various sociological microtheories whose central assumption is "assume by default that local and half-blind machiavellian self interest explains almost everything that almost everyone is doing".

What I don't see is a coherent overarching vision of a sensibly functioning society. It is almost like nearly no one thinks that the credit assignment problem is super important to get right, in order to have a good society? Or something?

I can't easily find raw data on baby names for 2021 (it being only 24 days since 2021 ended) but one source that has numbers says this about the newly popular names:

This year's fastest-rising baby girl names were Raya (up 53 percent), Alora (up 32 percent) and Ariya (up 27 percent); while Onyx (up 44 percent), Koda (up 38 percent), and Finnegan (up 35 percent) saw the biggest jumps on the boys’ side.

No Katalin. No Moncef. Maybe 2022 will be a good year for these names? One can always hope :-)

Comment by JenniferRM on The Liar and the Scold · 2022-01-24T18:49:54.386Z · LW · GW

I regret that I cannot explore the archive of Justis's other contributions (because they are few) but appreciate that you shared credit :-)

Comment by JenniferRM on The Liar and the Scold · 2022-01-24T18:46:29.518Z · LW · GW

Yeah! <3

At one point I assigned myself the homework of watching all of Black Mirror so as to understand "what cultural associations would be applied to what ideas by default"...

...and most of the episodes had me suppressing anger at the writers for just writing characters who violate the same set of very basic rules over and over and over again with no lessons ever learned by anyone (lessons like "never trust something that talks until you know where it keeps its brains" and "own root on computing machines you rely on or personally trust the humans who do own root on such machines").

However, all the Black Mirror episodes that were essentially "a good love story in an alternate world" did not bother me in the same way :-)

Comment by JenniferRM on Postmortem on DIY Recombinant Covid Vaccine · 2022-01-24T04:25:49.308Z · LW · GW

I can respect consciously prescriptive optimism <3

(I'd personally be more respectful to someone who was strong and sane enough to carry out a relatively simple plan to put dangerous mad scientists in Safety Level 5 facilities while they do their research behind a causal buffer (and also put rogue scientists permanently in jail if they do dangerous research outside of an SL5)... though I could also respect someone who found an obviously better path than this. I'm not committed to this, it's just that when I grind out the math I don't see much hope for any other option.)

Comment by JenniferRM on The Liar and the Scold · 2022-01-23T19:45:14.408Z · LW · GW

I was expecting there to be another layer of mirroring related to "the scold".

What might have happened is that some flaw would seem "too crazy" and then after the "initial detection of the true flaw" the narrator would start to suspect that he himself was a self-aware subprocess in a GAN (but not self-aware about being a subprocess in a GAN) whose role was to notice some implausibility in his environment.

The "childhood memory and sarah detection experience" process might have been a narrative prefix that implies the kind of person who would be suspicious in plausible ways. (Suspicious of the car accident, suspicious of the crackable program, suspicious that VR headsets have that much CPU, suspicious of what the sexbot asks him to do, etc, etc.) 

In this ending, the final paragraph or two would have included cascading realizations that, as he became more and more certain of general implausibility, it would become more and more likely that the observing meta-process would reach a threshold and halt this run during this epoch, and then reboot him and his world, with SGD-generated variations to see if an even higher time-till-doubt can be achieved somehow.

And what becomes of the scold is best left unsaid.

Comment by JenniferRM on The Liar and the Scold · 2022-01-23T19:24:51.219Z · LW · GW

I noticed something was wrong when Kathleen was introduced in excruciating detail. True love is something no one actually brags about to third parties in that way. If real then it is too blessed/braggy to share, and if not real... well... fiction is a lie told for fun, basically, so such things can occur in fiction <3

With suspicion already raised, the double punch of "The Machine" and "Joseph Norck" caused me to google for someone named Norck involved in computer science, and no such professor exists.

Then I leaned back and enjoyed the story :-)

Comment by JenniferRM on A one-question Turing test for GPT-3 · 2022-01-23T16:59:02.594Z · LW · GW

The language model is just predicting text. If the model thinks an author is stupid (as evidenced by a stupid prompt) then it will predict stupid content as the followup. 

To imagine that it is trying to solve the task of "reasoning without failure" is to project our contextualized common sense on software built for a different purpose than reasoning without failure.

This is what unaligned software does by default: exactly what its construction and design cause it to do, whether or not the constructive causes constrain the software's behavior to be helpful for a particular use case that seems obvious to us.
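
To make the "just predicting text" point concrete, here is a toy sketch (a tiny bigram counter, nothing remotely like GPT-3 in scale; the corpora and tokens are invented for illustration) showing how a model of this type continues a prompt in whatever style the prompt's own tokens make most probable:

```python
# Toy bigram "language model": it only ever estimates P(next word | last word),
# so the character of the prompt determines the character of the continuation.
from collections import Counter, defaultdict

def train_bigrams(tokens):
    counts = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    return counts

smart = "the proof follows by induction on the length of the derivation".split()
silly = "lol the moon is cheese so the moon is cool lol".split()
model = train_bigrams(smart + ["<sep>"] + silly)

def next_dist(word):
    c = model[word]
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

print(next_dist("proof"))  # "smart" prompt token -> smart-corpus continuation
print(next_dist("moon"))   # "silly" prompt token -> silly-corpus continuation
```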

The scary thing is that I haven't seen GPT-3 ever fail to give a really good answer (in its top 10 answers, anyway) when a human brings non-trivial effort to giving it a prompt that actually seems smart, and whose natural extension would also be smart. 

This implies to me that the full engine is very good at assessing the level of the text that it is analyzing, and has a (justifiably?) bad opinion of the typical human author. So its cleverness encompasses all the bad thinking... while also containing highly advanced capacities that are only called upon to predict continuations for maybe 1 in 100,000 prompts.

Comment by JenniferRM on Postmortem on DIY Recombinant Covid Vaccine · 2022-01-23T16:38:02.252Z · LW · GW

If you know of someone working on a solution such that you think we're lucky rather than doomed, I'm curious: whose work gives you hope?

I'm pretty hopeless on the subject, not because it appears technically hard, but because the political economy of the coordination problem seems insurmountable. Many scientists seem highly opposed to the kinds of things that seem like they would naively be adequate to prevent the risk.

If I'm missing something, and smart people are on the job in a way that gives you hope, that would be happy news :-)

Comment by JenniferRM on Use Normal Predictions · 2022-01-12T23:55:00.569Z · LW · GW

When I google for [Bernoulli likelihood] I end up at the distribution and I don't see anything there about how to use it as a measure of calibration and/or decisiveness and/or anything else.

One hypothesis I have is that you have some core idea like "the deep true nature of every mental motion comes out as a distribution over a continuous variable... and the only valid comparison is ultimately a comparison between two distributions"... and then if this is what you believe then by pointing to a different distribution you would have pointed me towards "a different scoring method" (even though I can't see a scoring method here)... 

Another consequence of you thinking that distributions are the "atoms of statistics" (in some sense) would (if true) be that you think a Brier Score has some distribution assumption already lurking inside it as its "true form", and furthermore that this distribution is less sensible to use than the Bernoulli?

...

As to the original issue, I think whether or not it is possible, with continuous variables, to "max the calibration and totally fail at knowing things and still get an ok <some kind of score>" might not prove very much about <that score>?

Here I explore for a bit... can I come up with a N(m,s) guessing system that knows nothing but seems calibrated?

One thought I had: perhaps whoever is picking the continuous numbers has biases, and then you could make predictions of sigma basically at random at first, and then as confirming data comes in for that source, that tells you about the kinds of questions you're getting, so in future rounds you might tweak your guesses with no particular awareness of the semantics of any of the questions... such as by using the same kind of reasoning that led you to conclude "widen my future intervals by 73%" in the example in the OP.

With a bit of extra glue logic that says something vaguely like "use all past means to predict a new mean of all numbers so far" that plays nicely with the sigma guesses... I think the standard sigma and mean used for all the questions would stabilize? Probably? Maybe?

I think I'd have to actually sit down and do real math (and maybe some numerical experiments) to be sure that it would. But it seems like the mean would probably stabilize, and once the mean stabilizes the S could be adjusted to get 1.0 eventually too? Maybe some assumptions about the biases of the source of the numbers have to be added to get this result, but I'm not sure if there are any unique such assumptions that are privileged. Certainly a Gaussian distribution seems unlikely to me. (Most of the natural data I run across is fat-tailed and "power law looking".)
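
Here is a rough numerical sketch of the experiment I have in mind (entirely my own construction and assumptions: a stable, biased source of answers, plus a burn-in before scoring starts):

```python
# A "content-free" predictor: ignore question semantics entirely, predict the
# running mean of all past answers, and use their running spread as sigma.
import random, statistics

random.seed(0)
answers = [random.gauss(50, 20) for _ in range(2_000)]  # assumed biased-but-stable source

zs = []
for i, y in enumerate(answers):
    if i < 100:
        continue  # burn-in: skip scoring until the running stats settle down
    seen = answers[:i]
    mu = statistics.fmean(seen)
    sigma = statistics.pstdev(seen)
    zs.append((y - mu) / sigma)

rmse = statistics.fmean([z * z for z in zs]) ** 0.5
print(f"z-score RMSE over {len(zs)} questions: {rmse:.2f}")  # lands near 1.0
```

If that RMSE really does settle near 1.0 for most "reasonable" sources, it would support the worry: interval calibration alone can apparently be gamed by a predictor that knows nothing about the individual questions.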

The method I suggest above would then give you a "natural number scale and deviation" for whatever the source was for the supply of "guess this continuous variable" puzzles. 

As the number of questions goes up (into the thousands? the billions? the quadrillions?) I feel like this content neutral sigma could approach 1.0 if the underlying source of continuous numbers to estimate was not set up in some abusive way that was often asking questions whose answer was "Graham's Number" (or doing power law stuff, or doing anything similarly weird). I might be wrong here. This is just my hunch before numerical simulations <3

And if my proposed "generic sigma for this source of numbers" algorithm works here... it would not be exactly the same as "pick an option among N at random and assert 1/N confidence and thereby seem like you're calibrated even though you know literally nothing about the object level questions" but it would be kinda similar.

My method is purposefully essentially contentless... except it seems like it would capture the biases of the continuous number source for most reasonable kinds of number sources.

...

Something I noticed... I remember back in the early days of LW there was an attempt to come up with a fun game for meetups that exercises calibration on continuous variables.  It ended up ALSO needing two numbers (not just a point estimate).

The idea was to have a description of a number and a (maybe implicitly) asserted calibration/accuracy rate that a player should aim for (like being 50% confident or 99% confident or whatever).

Then, for each question, each player emits two numbers between -Inf and +Inf and gets penalized if the true number is outside their bounds, and rewarded if the true number is inside, and rewarded more for a narrower bound than anyone else. The reward schedule should be such that an accuracy rate they have been told to aim for would be the winning calibration to have.

One version of this we tried that was pretty fun and pretty easy to score aimed for "very very high certainty" by having the scoring rule be: (1) we play N rounds, (2) if the true number is ever outside the bounds you get -2N points for that round (enough to essentially kick you out of the "real" game), (3) whoever has the narrowest bounds that contains the answer gets 1 point for that round. Winner has the most points at the end. 

Playing this game for 10 rounds, the winner in practice was often someone who just turned in [-Inf, +Inf] for every question, because it turns out people seem to be really terrible at "knowing what they numerically know" <3
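
For concreteness, here is a sketch of that scoring rule as I remember it (the player names and intervals below are invented for illustration):

```python
# Score an interval-guessing game: missing the truth costs -2N, and the
# narrowest interval containing the truth earns 1 point per round.
def score_game(bounds_by_player, truths):
    n = len(truths)
    scores = {p: 0 for p in bounds_by_player}
    for i, truth in enumerate(truths):
        containing = []
        for p, bounds in bounds_by_player.items():
            lo, hi = bounds[i]
            if lo <= truth <= hi:
                containing.append((hi - lo, p))
            else:
                scores[p] -= 2 * n  # effectively knocks you out of the real game
        if containing:
            scores[min(containing)[1]] += 1  # narrowest containing interval wins
    return scores

truths = [330.0, 7.5]
players = {
    "bold":          [(300, 350), (7, 8)],
    "overconfident": [(329, 331), (10, 11)],  # misses round 2 and eats -2N
    "infinite":      [(float("-inf"), float("inf"))] * 2,
}
print(score_game(players, truths))  # {'bold': 1, 'overconfident': -3, 'infinite': 0}
```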

The thing that I'm struck by is that we basically needed two numbers to make the scoring system transcend the problems of "different scales or distributions on different questions".

That old game used "two point estimates" to get two numbers.  You're using a midpoint and a fuzz factor that you seem strongly attached to for reasons I don't really understand. In both cases, to make the game work, it feels necessary to have two numbers, which is... interesting. 

It is weird to think that this problem space (related to one-dimensional uncertainty) is sort of intrinsically two dimensional. It feels like something there could be a theorem about, but I don't know of any off the top of my head.

Comment by JenniferRM on Use Normal Predictions · 2022-01-12T19:03:27.581Z · LW · GW

Yes, thanks! (Edited with credit.)

Comment by JenniferRM on Internet Literacy Atrophy · 2022-01-12T18:46:03.888Z · LW · GW

Subtracting out the "web-based" part as a first class requirement, while focusing on the bridge made of code as a "middle" from which to work "outwards" towards raw inputs and final results...

...I tend to do the first ~20 data entry actions as hard-coded constants that I tweak by hand, then switch to the CSV format for the next 10^2 to 10^5 data entry tasks that my data labelers work on, based on how I think it might work best (while giving them space for positive creativity).

A semi-common transitional pattern during the CSV stage involves using cloud spreadsheets (with multiple people logged in who can edit together and watch each other edit (which makes it sorta web-based, and also lets you use data labelers anywhere on the planet)) and ends with a copypasta out of the cloud and into a CSV that can be checked into git. Data entry... leads to crashes... which leads to validation code... which leads to automated tooling to correct common human errors <3
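
As an illustration of that progression, here is a minimal sketch of the sort of validation script that tends to accumulate around the CSV stage (the column names, label set, and rules are all invented for the example):

```python
# Validate a labeling CSV: flag missing required fields and unknown labels,
# auto-normalizing casing (a common human error) before checking.
import csv, sys

REQUIRED = ["item_id", "label", "labeler"]
VALID_LABELS = {"yes", "no", "unsure"}

def validate(path):
    errors = []
    with open(path, newline="") as f:
        for lineno, row in enumerate(csv.DictReader(f), start=2):  # header is line 1
            if any(not (row.get(col) or "").strip() for col in REQUIRED):
                errors.append(f"line {lineno}: missing required field")
            label = (row.get("label") or "").strip().lower()
            if label not in VALID_LABELS:
                errors.append(f"line {lineno}: bad label {row.get('label')!r}")
    return errors

if __name__ == "__main__":
    for e in validate(sys.argv[1]):
        print(e)
```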

If the label team does more than ~10^4 data entry actions, and the team is still using CSV, then I feel guilty about having failed to upgrade a step in the full pipeline (including the human parts) whose path of desire calls out for an infrastructure upgrade if it is being used that much. If they get to 10^5 labeling actions with that system and those resources then upper management is confused somehow (maybe headcount maxxing instead of result maxxing?) and fixing that confusion is... complicated.

This CSV growth stage is not perfect, but it is highly re-usable during exploratory sketch work on blue water projects because most of the components can be accomplished with a variety of non-trivial tools.

If you know of something better for these growth stages, I'd love to hear about your workflows, my own standard methods are mostly self constructed.

Comment by JenniferRM on Uncontroversially good legislation · 2022-01-10T08:03:19.738Z · LW · GW

I just want to second something you said, and provide background on how good the choice of the issue was in a larger context.

(2) Let us buy glasses: We can’t buy glasses or contact lenses if our eye prescription is over 1-2 years old. This means that every 1-2 years, glasses-wearers need to pay $200 to optometrists for the slip of paper (and stinging eyeballs). Seems like it’s probably a racket and the benefit from detecting the odd eye cancer is outweighed by the costs, although see the debate here.

This seems highly reasonable to me, but then again I didn't go to a very expensive school to get in on the relevant legalized monopoly.

There is this horrifying and/or hilarious quirk of US federal jurisprudence that when a judge applies a "rational basis test" in a court case, it means almost literally the opposite of what our community means by "rationality". They mean it more like in the sense of "rationalization".

When a law is obviously corrupt, and it is challenged for violating the guarantee of "equal protection under the law" (perhaps because the law obviously favors the corrupt cronies of the lawmakers at the expense of most normal people), modern US judges will not throw it out unless no conceivable rationalization at all (not even an obviously low quality one) could ever hypothetically defend the law.

Basically, the rational basis test is a "cancerous gene" in our legal system at this point.  Parts of the government are pretty clearly corrupt and then to prevent the rest of the country from defending itself against their predatory extraction of wealth using state power, the broken parts of the government seem to have invented the "rational basis test" as a valid legal doctrine. 

Any time a law is challenged and that defense is the best possible defense of the law... it is good heuristic evidence (at least to me) that the law is terrible and should be deleted or fixed (or at least properly and coherently defended for its actual practical effects).

This loops back to your example! In 1955, in Williamson v. Lee Optical, the lower courts threw out some particularly egregious optometry monopoly laws for violating due process and equal protection. Then the SCOTUS overturned this constitutional reasoning, establishing the "any conceivable rationalization" test in its place.

If this jurisprudential oncogene didn't exist, we already might not have this specific crazy law about optometry :-)

The rational(izable) basis test arose over time. These three posts are pretty good in showing how the general logic started out being used to allow laws in support of forced sterilization (when eugenics was popular), then to defend segregation (when that was popular), and in the 1930s to defend price fixing (when that was popular). Two of the posts mention the optometry case specifically!

Comment by JenniferRM on Uncontroversially good legislation · 2022-01-10T06:57:48.587Z · LW · GW

I have lately been using FDA delenda est as a sort of "you must be at least <this> sane about governance for me to think that your votes will add usefully adequate information to elections". (Possible exception: if you just figure out if your life is better or worse in a high quality way, and always vote against incumbents when things are getting worse, no matter the incumbent's party or excuses, that might be OK too?)

The problem with "FDA delenda est" is that while I do basically think that people who defend or endorse the FDA are either not yet educated yet about the relevant topics or else they are actively evil...

...this "line in the sand" makes it clear that the vast vast majority of voters and elected officials don't have any strong grasp on the logic of (1) medicine or (2) doctor's rights or (3) patient's rights or (4) political economy or (5) risk management or (6) credentialist hubris or (7) consumer freedom... and so on.

And applying the label "not educated enough (or else actively evil)" to "most people" is not a good move at a dinner party <3

So...

My current best idea for a polite way to talk about a nearly totally safe political thing that works as a "litmus test" and slam dunk demonstration that there are lots of laws worth fixing is:

The 13th amendment, passed in 1865, legalized penal slavery; it should itself be amended to have that loophole removed.

Like seriously: who is still in favor of slavery?!?!?!

The 14th amendment (passed in 1868) made it formally clear that everyone in the US is a citizen by default, even and especially black people, and thus all such people are heirs by birth to the various freedoms that were legally and forcefully secured for "ourselves and our posterity" during the revolutionary war.

So then with the 14th amendment the supreme court could have looked at the bill of rights in general, and looked at who legally gets rights, and then just banned slavery for everyone based on logical consistency.

We don't need a special amendment that says "no slavery except for the good kinds of slavery" because there are no good kinds of slavery and because the rest of the constitution already inherently forbids slavery in the course of protecting a vast array of other rights that are less obvious.

Here's the entire text of the 13th, with a trivially simple rewrite:

Neither slavery nor involuntary servitude~~, except as a punishment for crime whereof the party shall have been duly convicted,~~ shall exist within the United States, or any place subject to their jurisdiction.

See how much cleaner that would be? :-)

Comment by JenniferRM on Use Normal Predictions · 2022-01-10T06:24:31.470Z · LW · GW

One of the things I like about a Brier Score is that I feel like I intuitively understand how it rewards calibration and also decisiveness.

It is trivial to be perfectly calibrated on multiple choice questions (with two choices being a "binary" multiple choice answer) simply by throwing decisiveness out the window: generate answers with coin flips and give every answer a confidence of 1/N. You will come out with perfect calibration, but the practice is pointless, which shows that we intuitively don't care only about being calibrated.

However, this trick gets a very bad (edited from low thanks to GWS for seeing the typo) Brier Score, because the Brier Score was invented partly in response to the ideas that motivate the trick :-)

We also want to see "1+1=3" and assign it "1E-7" probability, because that equation is false and the residual uncertainty mostly comes from typos and model error and so on. Giving probabilities like this will give you very very very low Brier Scores... as they should! :-)
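
A quick numerical check of both claims (a minimal sketch; the outcomes are just simulated coin flips):

```python
# Compare Brier Scores: "calibrated but clueless" vs "decisive and right".
import random

random.seed(1)
outcomes = [random.random() < 0.5 for _ in range(10_000)]  # binary ground truth

def brier(preds, outs):
    return sum((p - o) ** 2 for p, o in zip(preds, outs)) / len(outs)

clueless = [0.5] * len(outcomes)                    # the 1/N trick: perfect calibration
decisive = [0.99 if o else 0.01 for o in outcomes]  # near-certain and correct
print(brier(clueless, outcomes))  # 0.25, the "very bad" score the trick earns
print(brier(decisive, outcomes))  # ~0.0001, approaching the ideal 0.0
```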

The best possible Brier Score is 0.0 in the same way that the best RMSE is 0.0. This is reasonable because the RMSE and Brier Score are in some sense the same concept.

It makes sense to me that for both your goal is to make them zero. Just zero. The goal then is to know all the things... and to know that you know them by getting away with assigning everything very very high or very very low probabilities (and thus maxing the decisiveness)! <3

Second we calculate $\hat{\sigma}$ as the RMSE (root mean squared error) of all predictions... Then we calculate ...

So if these were my only two predictions, then I should widen my future intervals by 73%. In other words, because $\hat{\sigma}$ is 1.73 and not 1, thus my intervals are too small by a factor of 1.73.
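
If I'm reading the procedure right, it amounts to something like this sketch (the (mu, sigma) predictions and outcomes are my own made-up numbers, not the OP's):

```python
# z-score each outcome against its predicted N(mu, sigma), then take the RMSE
# of the z's; that factor says how much to widen (or narrow) future intervals.
predictions = [(100.0, 10.0), (40.0, 5.0)]  # (mu, sigma), invented for illustration
outcomes = [115.0, 50.0]

zs = [(y - mu) / sigma for (mu, sigma), y in zip(predictions, outcomes)]
sigma_hat = (sum(z * z for z in zs) / len(zs)) ** 0.5
print(round(sigma_hat, 2))  # ~1.77: these intervals were too narrow by that factor
```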

I'm not sure if you're doing something conceptually interesting here (like how Brier Scores interestingly goes over and above mere "Accuracy" or mere "Calibration" by taking several good things into account in a balanced way), or... maybe... are you making some sort of error? 

RMSE works with nothing but point predictions. It seems like you recognize that the standard deviations aren't completely necessary when you write:

(1) If the data $y$ and the prediction $\mu$ are close, then you are a good predictor

Thus maybe you don't need to also elicit a distribution and a variance estimate from the predictor? I think? There does seem to be something vaguely pleasing about aiming for an RMSE of 1.0 I guess (instead of aiming for 0.00000001) because it does seem like it would be nice for a "prediction consumer" to get error bars as part of what the predictor provides?

But I feel like this might be conceptually akin to sacrificing some of your decisiveness on the altar of calibration (as with guessing randomly over outcomes and always using a probability of 1/N).

The crux might be something like a third thing over and above "decisiveness & calibration" that is also good and might be named... uh... "non-hubris"? Maybe "intervalic coherence"? Maybe "predictive methodical self-awareness"?

Is it your intention to advocate aiming for RMSE=1.0 and also simultaneously advocate for eliciting some third virtuous quality from forecasters?

Comment by JenniferRM on Brain Efficiency: Much More than You Wanted to Know · 2022-01-07T00:18:45.153Z · LW · GW

There is a minor literature on the evolution of brain cooling as potentially "blocked in early primates and then unblocked sorta-by-accident which allowed selection for brain size in hominids as a happy side effect". I'm unsure whether the hypothesis is true or not, but people have thought about it with some care and I've not yet heard of anyone figuring out a clean kill shot for the idea.

Comment by JenniferRM on Brain Efficiency: Much More than You Wanted to Know · 2022-01-06T20:21:19.042Z · LW · GW

I think a keyword here (if you want to google on your own) is "selective brain cooling (SBC)". In practice this is probably on a continuum, but it might be an area where humans have some unique adaptations.

The basic mechanisms tend to involve things like a "plexus" of veins/arteries for heat exchange, dynamic blood routing based on activity, and then trying to set up radiators somehow/somewhere on the body to take hot blood and push the heat into the larger world. Many mammals just have evaporative cooling in their mouth that runs on saliva, but humans (and maybe pigs) have it over their whole body. Elephant ears seem like another non-trivial adaptation related to heat. One possibility is that the cetaceans are so smart (as an evolutionary branch) because they have such a trivially easy way to get a water cooled brain.

I've looked into this enough to buy a tool for short/quick experiments in specialized brain cooling, but the experiments have been very low key. No blinded controls (obviously) and not even any data collection. Mostly I find the cap to be subjectively useful after a long phone call holding a microwave transmitter next to my head, where I feel a bit fuzzy brained after the call... then the cap fixes that pretty fast :-)

In practice I think water cooled super performance (in this case the paradigmatic performance test is pull-ups) does seem to be a thing, though I know of no cognitive version at this time. I've never thought in a highly focused way about exactly this topic.

If heat dissipation is a real bottleneck on mental performance then my vague priors suggest: (1) it would mostly be a bottleneck on "endurance of vigorous activity" like solving logic puzzles very fast for hours at a time, and (2) IF heat was the bottleneck and that bottleneck was removed THEN whatever the next bottleneck was (like maybe neurotransmitter shortages?) it might have much worse side effects when it hits (seizures? death?) because whatever the failure mode is, it probably has been optimized by evolution to a lesser extent.

Comment by JenniferRM on A Cautionary Note on Unlocking the Emotional Brain · 2021-12-20T16:53:25.297Z · LW · GW

If, for some conscious activations, the process of consolidating is itself causing "one idea to win... sometimes the wrong one", then trying consolidation on "the feelings about the management of consolidation and its results" seems like it could "meta-consolidate" into a coherently "bad" result.

...

It could be that the author of the original post was only emitting their "help, I'm turning evil from a standard technique in psychotherapy that I might or might not be using slightly wrongly!" post in a brief period of temporary pro-social self-awareness.

If we are beyond the reach of god then there's nothing in math or physics that would make a spin glass process implemented in belief holding metamaterials always have to point "up" at the end of the annealing process. It could point down, at the end, instead.

This is part of why I'm somewhat opposed to hasty emotional consolidation (which seems to me like it rhymes with the logical fallacy of hasty generalization).

Comment by JenniferRM on More power to you · 2021-12-18T02:52:03.571Z · LW · GW

To infinity, and beyond!

Comment by JenniferRM on Where can one learn deep intuitions about information theory? · 2021-12-16T21:27:42.980Z · LW · GW

I came here to suggest the same book which I think of as "that green one that's really great".

One thing I liked about it was the way that it makes background temperature into a super important concept that can drive intuitions that are more "trigonometric/geometric" and amenable to visualization... with random waves always existing as a background relative to "the waves that have energy pumped into them in order to transmit a signal that is visible as a signal against this background of chaos".  

"Signal / noise ratio" is a useful phrase. Being able to see this concept in a perfectly quiet swimming pool (where the first disturbance that generates waves produces "lonely waves" from which an observer can reconstruct almost exactly where the first splash must have occurred) is a deeper thing, that I got from this book.

Comment by JenniferRM on There is essentially one best-validated theory of cognition. · 2021-12-12T06:49:41.895Z · LW · GW

The flashcard and curriculum experiments seem really awesome in terms of potential for applications. It feels like the beginnings of the kind of software technology that would exist in a science fiction novel where one of the characters goes into a "learning pod" built by a high tech race, and pops out a couple days later knowing how to "fly their spaceship" or whatever.  Generic yet plausible plot-hole-solving super powers! <3

Comment by JenniferRM on LessWrong discussed in New Ideas in Psychology article · 2021-12-11T20:22:16.113Z · LW · GW

Here are other terms that might be used in place of (or combined with?) "Amateur" that have different shades of meaning...

"Self-funded" - Connecting to a relatively long tradition going back long before Vannevar Bush set up what I personally think of as "Vannevarian Science" during a period proximate to WW2 (with the NSF and so on). There were grants before then, from what I can tell, but many fewer, and a lot of science from before that point was performed by what seem, through a modern lens, to perhaps be just "semi-retired geeks with a hobby in experimental natural philosophy".

"Unsubsidized" - Very similar to "self-funded" but focusing more on the sense in which modern "funding" is almost always (in these post-modern times) still mostly money from a government, using tax dollars, which maybe comes with strings attached (from the perspective of the thinkers) that (from the perspective of tax payers) are in some sense morally proper if the state is operating with the consent of the governed and on behalf of the interests of those who pay taxes. Thus (properly?) the state actor is probably aiming to benefit tax payers by (properly?) controlling the thinkers who are on the tax payroll. (Subsidized thinkers often don't like to think of themselves as "controlled and/or on someone's payroll", so sometimes focus on saying "we" instead of "I" and emphasize peer review or similar non-incentive-compatible processes? Maybe? That was a clever hack when Vannevar et al tried it, but it is not clear how it could work on very long time scales in the presence of political regime turnover.)

"Patron-Funded" - This isn't self-funded, but it does cut out the coercive machinery of the state. With patreon and youtube there seems to be a minor renaissance of "polypatronized" thinkers, while in the past (before Vannevarian funding models arose) my impression is that singularly rich people, verging on princely (oligarchic?) power, would fund a genius or two so that their filthy lucre (<3) could buy them timelessly important contributions to the advancement of human knowledge... or something? I don't think I'd know who Cosimo the Elder was if not for his patronage, for example.

"Feral" - If the thinking and data collection and writing are performed by someone who was, for a while, part of ancient and subsidized institutions of research and learning, you expect their thinking to follow many of the normal channels of those with similar formative processes. Then later some separating event might occur such as: being thrown back, or being permanently evoked (perhaps to do work on a black project), or otherwise going extramuros (as by personal choice, and/or after conflicts, to escape inquisitorial interference in their work). There is expected to be a spicy story here!

"Outsider" - As with outsider art, a thinker who "isn't even feral", so much as completely formed by themselves, from their own effort, without the oversight or pedagogy of pre-existing disciplinary boundaries or planned formative curricula. Ramanujan might count as "the best sort" of a person like this, and often many of the high quality ones seem to have a life arc where they end up being offered resources by institutions which might otherwise seem lesser (the institutions would seem lesser) for the absence of such bright lights. This life arc is maybe "common for the ones you hear about" but rare in general?

"Autodidact" - Smart enough outsider thinkers are likely to run across this term and might self-identify this way, but so might some feral intellectuals, or just any random smart person who reads and thinks and stuff. I tend to like them, and one way to find them is to notice when people say a rare word, with semantic accuracy, but using an idiosyncratic pronunciation. Such people have often never talked about many topics they've studied, having only read the words in text, and then they back-construct plausible pronunciations from the letters, and this is a recognizable hallmark when they are talking. Some people use the label "autodidact" pejoratively, perhaps from believing that knowledge can or should be organized in a standard way, and thinking that it is critical to how specialization and expert communication should work. The criticism is not totally unfounded. If Kuhn is right, there actually can't be "science science" if everyone remains an autodidact and if no specialist jargon (based on presumptive classics (with recursive selection and amplification (into sociologically reified information cascades))) doesn't sociologically occur to establish "a coherent field" with practitioners of the fields who mostly mutually recognize each other, and so on.

"Crackpot" - An autodidact of explicitly low quality, often (though not always) with weird metaphysics and, if their oeuvre is publicly visible, sometimes studied by psychoceramacists (who themselves are almost certainly unsubsidized). Some people identify this way as part of a complicated counter-signaling strategy (partly at themselves, perhaps based on half-baked virtue epistemic reasoning?) and the famous one that jumps to mind for me is RAW himself.

For reference (and as a sort of potted methods section) this sort of typology is something I've played with for some time, mostly by the accumulation of many examples, pursuant to a general hobby-level interest in cliology of science.

(This sketch of a lexicon isn't coherent (or MECE) or a proper taxonomy, much less an ontology, and I don't want to commit to exactly this here and now... but "amateur" doesn't seem to me to carve reality at the joints, especially if it normally denotes "not paid and also unskillful" while it sort of connotes "without Vannevarian subsidy, but rather among the hoi polloi". I would go with "unsubsidized" in fast/dirty contexts and "non-Vannevarian" if I had time to justify the term.)

Cliology of science is interesting because cliology in general is plausibly impossible, and if cliology in general is impossible (as is likely) then an important candidate reason for this would be because "maybe science can't be predicted in advance", and so... if science itself (or parts of it) somehow can be predicted in advance... then predicting merely the subset of history that includes science would help in building the full(er) scale vision of a total cliological model... which is semi-plausibly the most important science that might hypothetically exist. Cliology of science is thus a useful place to work if one wants to de-risk the larger project, I think? <3

Also (with apologies for so obvious a plug, but the issue is right next door to the actual topic) if a beneficent reader is interested in upgrading me from "Self-funded Metacontrarian" to "Patron-funded Polymath" feel free to PM me <3

Comment by JenniferRM on There is essentially one best-validated theory of cognition. · 2021-12-11T05:08:46.956Z · LW · GW

I wonder if extremely well trained dogs might work?

Chaser seems likely to have learned nouns, names, verbs... with toy names learned on one trial starting at roughly 5 months of age (albeit with a name forgetting curve so additional later exposures were needed for retention). 

Having studied her training process, it seems like they taught her the concept of nouns very thoroughly. 

Showing "here are N frisbees, after 'take frisbee' each one of them earns a reward" to get the idea of nouns referring to more than one thing demonstrated very thoroughly. 

Then maybe "half frisbees, half balls" so that it was clear that "some things are non-frisbees and get no reward".

In demos of names and verbs, after the training, you can watch her looking at things and thinking. Maybe the looking directions and the thinking times could be modeled?

Comment by JenniferRM on There is essentially one best-validated theory of cognition. · 2021-12-11T04:34:53.399Z · LW · GW

The idea of the physical brain turning out to be similar to ACT-R after the code had been written based on high level timing data and so on... seems like strong support to me. Nice! Real science! Predicting stuff in advance by accident! <3

My memory from exploring this in the past is that I ran into some research with "math problem solving behavior" with human millisecond timing for answering various math questions that might use different methods... Googling now, this Tenison et al ACT-R arithmetic paper might be similar, or related?

With you being an expert, I was going to ask if you knew of any cool problems other than basic arithmetic that might have been explored like the Trolley Problem or behavioral economics or something... 

(Then I realized that after I had formulated the idea in specific keywords I had Google and could just search, and... yup... Trolley Problem in ACT-R occurs in a 2019 Masters Thesis by Thomas Steven Highstead that also has... hahahaha, omg! There's a couple pages here reviewing ACT-R work on Asimov's Three Laws!?!)

Maybe a human level question is more like: "As someone familiar with the field, what is the coolest thing you know of that ACT-R has been used for?" :-)

Comment by JenniferRM on Omicron Post #5 · 2021-12-10T22:22:13.882Z · LW · GW

Thanks for the suggestion to search! The first one was charming and the second was still ok.

Comment by JenniferRM on There is essentially one best-validated theory of cognition. · 2021-12-10T18:29:26.558Z · LW · GW

I think I remember hearing about this from you in the past and looking into it some. 

I looked into it again just now and hit a sort of "satiety point" (which I hereby summarize and offer as a comment) when I boiled the idea down to "ACT-R is essentially a programming language with architectural inclinations which make it intuitively easy to see 1:1 connections between parts of the programs and parts of neurophysiology, such that diagrams of brain wiring, and diagrams of ACT-R programs, are easy for scientists to perceptually conflate and make analogies between... then also ACT-R more broadly is the set of high quality conserved work products from such a working milieu that survive various forms of quality assurance".

Pictures helped! Without them I think I wouldn't have felt like I understood the gist of it.

"ACT-R accesses its modules (except for the procedural-memory module) through buffers. For each module, a dedicated buffer serves as the interface with that module. The contents of the buffers at a given moment in time represents the state of ACT-R at that moment." Sauce.

This image is a very general version that is offered as an example of how one is likely to use the programming language for some task, I think? 

Then you might ask... ok... what does it look like after people have been working on it for a long time? So then this image comes from 2004 research.

"Schematic diagram of the ACT-R cognitive architecture and how its components work together to generate behavior. (Figure by authors based on and extending Anderson et al., 2004, the current ACT-R 7 manual, and comments from Dan Bothell.)" Sauce.

My reaction to this latter thing is that I recognize lots of words, and the "Intentional module" being "not identified" jumps out at me and causes me to instantly propose things. 

But then, because I imagine that the ACT-R experts presumably are working under self-imposed standards of rigor, I imagine they could object to my proposals with rigorous explanations.

If I said something like "Maybe humans don't actually have a rigorously strong form of Intentionality in the ways we naively expect, perhaps because we sometimes apply the intentional stance to humans too casually? Like maybe instead we 'merely' have imagined goal content hanging out in parts of our brain, that we sometimes flail about and try to generate imaginary motor plans that cause the goal... so you could try to tie the Imaginal, Goal, Retrieval, and 'Declarative/Frontal' parts together until you can see how that is the source of what are often called revealed preferences?"

Then they might object "Yeah, that's an obvious idea, but we tried it, and then looked more carefully and noticed that the ACC doesn't actually neuro-anatomically link to the VLPFC in the way that would be required to really make it a plausible theory of humans"... or whatever, I have no idea what they would really say because I don't have all of the parts of the human brain and their connections memorized, and maybe neuroanatomy wouldn't even be the basis of an ACT-R expert's objection? Maybe it would be some other objection.

...

After thinking about it for a bit, the coolest thing I could think of doing with ACT-R was applying it to the OpenWorm project somehow, to see about getting a higher level model of worms that relates cleanly to the living behavior of actual worms, and their typical reaction times, and so on. 

Then the ACT-R model of a worm could perhaps be used (swiftly! (in software!)) to rule out various operational modes of a higher resolution simulation of a less platonic worm model that has technical challenges when "tuning hyperparameters" related to many fiddly little cellular biology questions?

Comment by JenniferRM on Omicron Post #4 · 2021-12-09T01:14:06.961Z · LW · GW

Thank you for this high quality response! The numbers were helpful and I had to stop and grind out some of the math and parse your sentences carefully.

assume that symptomatic people are less likely to transmit the disease than asymptomatic people because they know to quarantine thanks to the symptoms.

Making this part of the model more quantitative might reveal a crux? 

I think we agree here directionally (symptomatic people change behavior in a way that has pro-social results, exposing fewer people "out in the world") but if the effect was very large (like if the average asymptomatic person infected a mean of 30 people and the average symptomatic person only infected 1.1 people) then I think it might overwhelm other parts of a full model, even with the numbers you specified (which I will get to below).

(Empirical Digression:

This meta-analysis suggests that on the order of a third to a half of all infections occur not "out in the world" but specifically in a medical context... where "worse symptoms" might tend to evolve in order to cause infected people to go to clinics where they could infect a large portion of the people who ever get infected with covid.

This other study suggested that "Brigham and Women’s Hospital" in Boston had a much much lower rate of nosocomial covid, so covid's evolutionary incentives under endemic conditions might be regionally heterogeneous? 

Under an institutional heterogeneity model... it might be pro-socially wise to isolate any region with normally bad hospitals until these breeding grounds of infectious mortality are closed or repaired to adequacy. Obviously we are not wise, however, so this is unlikely to happen even if it was a net good for sure. Also, the model might be false, and it is certainly controversial, so I do not advocate this directly right now, based on current credence levels.)

In terms of your proposed model, I think you didn't specify how many more people the average asymptomatic covid carrier might infect, but you did give these (which I plug into a quick sketch below):

P(infected | unvaccinated) = .50
P(infected | vaccinated) = .05


P(asymptomatic | infected & NOT-vaccinated) = .50
P(asymptomatic | infected & vaccinated) = 1.0

A common problem in bayesian modeling is to figure out what your "event space" actually contains. In a nutshell: what are you counting? "Mere counting" can sometimes turn out to be hard...

I'm not sure if "infected" for you means "infected in a specific single exposure event during a controlled challenge trial that is a decent proxy for a normal exposure event" or if it means some sort of lifetime all-in summary statistic closer to incidence or prevalence or a time and/or space bounded attack rate.

I mostly focus on these macroscale summary statistics, and so my model is that the attack rate for the period from 2018 to 2025, assuming endemic covid and no eradication and so on, might be close to 100% and it might even be higher than 100% if the numerator is "periods of infectiousness by a single person with covid in a region" (so asymptomatic re-infections cause +1 to the numerator) and the denominator is "total people in that region".

Like in the worst possible world, covid evolves to higher and higher mortality, such that all humans are either dying or else vaccinated against the variant that came out in the last 12 months (and also everyone "healthy" is semi-chronically infected in a way that is non-fatal after vaccination), always, in general, like some kind of Paolo Bacigalupi story?

And I get it. If the world is on fire and vaccination protects you from being hurt by being on fire... I'd get the vaccine too. I did get the vaccine, even knowing it might only be selfishly beneficial. At this point: any port in a storm, you know?

So for me, thinking of "P(infected)" as a summary statistic for a person in a region with endemic covid... that shit is probably going to be very high even for the vaccinated. I think?

Concretely then, I assert P(infected | vaccinated) >> 5% (for an understanding of the relevant event space where this number applies over a long period of time, like any five year period during which covid is endemic and still evolving).
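
As a toy compounding calculation (the per-year number is purely an illustrative assumption, not data):

```python
# Even a modest assumed annual infection risk compounds to a large multi-year one.
p_yearly = 0.15  # assumed yearly P(infected) for a vaccinated person under endemicity
p_five_year = 1 - (1 - p_yearly) ** 5
print(f"{p_five_year:.0%}")  # 56%, already far above 5%
```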

Maybe you know otherwise, and this is a crux? 

But I think I'm right, because I think that is basically what it means for covid to be endemic forever. Like... the only people who won't get infected under endemic conditions (maybe over and over?) will be lifetime shut-ins?

Measles used to have an insanely high R0 and a 1% mortality and we nearly eradicated it back when our medical system was competent, so the niche is kind of empty?

Before the competent near eradication of measles everyone got "the 1% fatal, high R0, disease", and if you lived then you lived, and that's why our genomes are full of disease-protecting alleles. Now the global bio-techno-governing-medical system is either evil or incompetent or both... so maybe there is an open niche? Maybe the human genome will eventually get new alleles for this?

Epistemically (if you agree with my modeling ideas and event space choice) it would be convenient because it might mean that we can just look up the attack rate overall (and maybe look up the relative R0 contributions) and "simply know" the same thing :-)

Personally, if we're granting that this thing is politically impossible to eradicate, I think the right thing to focus on might be modulating the evolution of covid to be "slow and towards lower mortality".

(If we can't eradicate, I suspect this more subtle form of influence over covid is probably also beyond our politico-economic capacities and we will just experience whatever nature does to us, like savages subject to the whims of the lesser gods.)

I am strongly in favor of adequate and as-liberty-respecting-as-possible eradication of covid.

The event space I care about for an infectious disease is the event space that represents the large scale long term summary statistic that cleanly models "the entire herd and its general health". It is a weird position, but... well... I'm honestly kind of surprised that even 2% of people agree with me? It isn't like I don't notice that I'm weird <3

Comment by JenniferRM on The Rationalists of the 1950s (and before) also called themselves “Rationalists” · 2021-12-07T22:45:41.038Z · LW · GW

Yeah. The communist associations of past iterations of "rationalist" schools or communities is one the biggest piles of skulls I know about and try to always keep in mind.

Wikipedia uses this URL about Stalin, Wells, Shaw, and the holodomor as a citation to argue that, in fact, many of them were fools who were duped into denying the holodomor, or worse. Quoting from the source there:

Shaw had met Stalin in the Kremlin on 29 July 1931, and the Soviet leader had facilitated his tour of Russia in which he was able to observe, at least to his own satisfaction, that the statements being circulated about the famine in the Ukraine were merely rumours. He had seen that the peasants had plenty of food. In fact the famine had notoriously been caused by Stalin in his desperation to achieve the goals of his five-year plan. An estimated ten million people, mostly Ukrainians, died of starvation.

As someone who flirts with identifying as part of some kind of "rationalist" community, I find the actions of Shaw to be complicatedly troubling, and to disrupt "easy clean identification".

Either I feel I must disavow Shaw, people like Shaw, and their gross and terrible political errors that related to some of the biggest issues and tragedies of their era, or else I must say that Shaw is still somehow a tolerably acceptable human to imagine collaborating with in limited ways in spite of his manifest flaws.

(From within Judeo-Christian philosophic frames this doesn't seem super hard. The story is simply that all humans are quite bad by default, and it is rare and lucky for us to rise above our normal brokenness, and so any big non-monstrous actions a human performs is nearly pure bonus, and worthy of at least some praise no matter what other bad things are co-occurring in the soul of any given person.)

Shaw's kind of error also troubles me when I imagine that there might be some deep substructure to reasoning and philosophy such that he and I share a philosophy somehow. If he did that while having a philosophy like mine, then if "beliefs cause behavior" (rather than beliefs mostly just being confabulated rationalizations after behaviors have already occurred) I find myself somewhat worried about the foundations of my own philosophy, and what horrible things it might cause me to "accidentally" endorse or promote through my own actions.

Maybe there is some way to use Shaw's failure as a test case, and somehow trace the causality of his thinking to his bad actions, and then find any analogous flaws in myself and perform cautious self modification until analogous flaws are unlikely to exist?  But that seems like a BIG project. I'm not sure my life is long enough to learn all the necessary facts and reasoning carefully enough to carry a project like that to adequate completion.

Thus the practical upshot, for me, is to be open to "fearing to tread" even more than normal until or unless there are pretty subjectively clear reasons to advance.

Also, my acknowledged limitations lead me to feel a minor duty to sometimes point out obviously evil things that my mental stance can't help but see as pretty darn evil? Not all of them. Just really really big and important and obvious ones.

My current working test case for this is the FDA, which I suspect should be legislatively gutted.

Maybe I'm wrong? Maybe in saying "FDA delenda est" semi-regularly I'm making a "Shaw and the Holodomor level error" by doing the opposite of what is good? 

It seems virtuous, then, to at least emit such an idea every so often, when I actually really can't help but believe in and directly see a certain evil, and see if anyone can offer coherent disagreement or agreement and thereby either (1) help fix the world by reducing that particular evil or else (2) help me get a better calibrated moral compass.  Either way it seems like it would be good?

Also, in general, I feel that it is a good practice to, minimally, acknowledge the skulls so that I know that "ideas and identities and tendencies similar to mine" might have, in the past, led to bad places.

To hide or flinch from the fact that former-"people calling themselves rationalists" were sometimes pretty bad at the biggest questions of suffering and happiness, or good and evil, seems like... like... probably not what someone who was good at virtue epistemology would do?  So, I probably shouldn't flinch. Probably.

Comment by JenniferRM on The Rationalists of the 1950s (and before) also called themselves “Rationalists” · 2021-12-07T21:37:04.066Z · LW · GW

That quote is metal as hell <3

It might not be actually true, or actually good advice... but it is metal as hell :-)

Comment by JenniferRM on Omicron Post #4 · 2021-12-07T08:55:44.020Z · LW · GW

I think I'm in that 2% slice, and my feeling is that this position arises from:

  1. Having a moderately coherent and relatively rare theory of "benevolent government and the valid use of state power" that focuses on public goods and equilibria and subsidiarity and so on.
  2. Having a relatively rare belief that vaccinated people seem much more likely to get asymptomatically infected and to have lower mortality BUT also noting that vaccines do NOT prevent infectiousness and probably cannot push R0 below 1.0.

Thus, I consider covid vaccines primarily a private good that selfishly helps the person who gets the vaccine while turning them into a walking death machine to the unvaccinated.

They get a better outcome from infection (probably, assuming the (likely larger) side effects of boosters aren't worse than the (likely more mild) version of the disease, etc, etc) but such vaccine takers DO NOT protect their neighbor's neighbor's neighbor's neighbor's neighbor's neighbor from "risk of covid at all"...

...and thus my theory of government says that a benevolent government will not force people into medical procedures of this sort.

A competently benevolent government wouldn't mandate a "probably merely selfishly good thing" in a blanket way, that prevents individuals from opting out based on private/local/particular reasoning (such that it might NOT be selfishly beneficial for some SPECIFIC individuals and where for those individuals the policy amounts to some horrific "government hurts you and violates your body autonomy for no reason at all" bullshit).

Like abortion should be legal and rare, I think? Because body autonomy! The argument against abortion is that the fetus also has a body, and should (perhaps) be protected from infanticide in a way that outweighs the question of the body autonomy of the fetus's mother. But a vaccine mandate for a vaccine that only selfishly benefits the vaccinated person, without much reducing infectiousness, is a violation of medical body autonomy with even less of a compensating possible-other-life-saved.  Vaccine mandates (for vaccines that don't push R0 lower than 1) are probably worse than outlawing abortion, from the perspective of moral and legal principles that seem pretty coherent, at least to me.

I think many many many people are gripped by a delusion that vaccines drop R0 enough that with high enough vaccination rates covid will be eradicated and life will go back to normal.

(Maybe they are right? But I think they are wrong. I think R0 doesn't go down with higher vaccinations enough to matter for the central binary question.)
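(To make the threshold logic I'm leaning on explicit, here is a minimal back-of-envelope sketch in Python, where the R0 and efficacy numbers are purely illustrative assumptions of mine, not measurements:

```python
# R_eff = R0 * (1 - coverage * efficacy), where "efficacy" here means
# the per-vaccinee reduction in onward transmission (NOT protection
# against severe disease). Eradication requires R_eff < 1, i.e.
#   coverage > (1 - 1/R0) / efficacy

R0 = 6.0        # illustrative assumption for a contagious variant
efficacy = 0.4  # illustrative assumption for transmission blocking

needed_coverage = (1 - 1 / R0) / efficacy
print(needed_coverage)  # ~2.08 -> "more than 100% coverage": impossible
```

If transmission blocking is weak relative to R0, the required coverage comes out above 100%, which is the arithmetic behind my pessimism.)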

Then, slightly separately, a lot of people think that the daddy/government can make them do whatever it wants so long as daddy/government verbally says "this is for your own good" even if that's a lie about what is objectively good for some individual people.

A key point here is that my theory of government says that IF there is a real public good here, like an ability to pass a law, throw some people in jail for violating the law, and then having a phase change in the public sphere leading to a dramatically better life for essentially everyone else such that the whole thing is a huge Kaldor-Hicks improvement... THEN that would be a just use of government coercion.  

This general formula potentially explains why stealing gets you thrown in jail. Also why cars speeding fast enough in the wrong place gets you thrown in jail. You want "a world where bikes can be left in front yards and kids can safely cross the streets" and throwing people who break these equilibria in jail protects the equilibria.

I don't see how "infecting people with a lab created disease" is vastly different from speeding or stealing? Harm is harm. Negligence (or full on mens rea) is negligence (or full on mens rea).

If vaccines prevented transmission enough to matter in general, then vaccines COULD be mandated. But the decision here should trigger (or not) on a "per vaccine" basis that takes into account the effects on R0.

Then, sociologically, most people haven't even heard of Pareto efficiency (much less Kaldor-Hicks) and most people think these vaccines are a public good that will eventually end the nightmare.

So I guess... you could test my "theory of opinion" by explaining "classical liberal political economy" and "bleak viral epidemiology" to lots of people, and then, as a post test, see if the 2% slice of the population grows to 3% or 4% maybe?  

If lots of people learn these two things, and lots of people start opposing mandates for the current set of vaccines, that would confirm my theory. I guess you could also falsify my theory if anti-mandate sentiment rose (somehow?) without any corresponding cause related to a big educational program around "libertarian epidemiology"?

I have heard of Kaldor-Hicks efficiency AND ALSO I think the nightmare will "stop" only when the virus evolves to be enough-less-nightmarish that it seems no worse than the flu.

But note! My model is that the virus is in charge.  And "covid" will in some sense happen forever, and the situation we're in right now is plausibly the beginning of the new normal that will never really stop. 

Hopefully milder and milder variants evolve as covid learns to stop "playing with its food", and things sorta/eventually become "over" in the sense that the deaths and disabilities fall to background levels of biosadness? But that's the only realistic hope left?

And I wish I was more hopeful, but I'm focusing on "what's probably true" instead of "what's comforting" :-/

I guess hypothetically in 2022 or 2024 politicians could run on (and win with?) a proposal to "totally and completely revamp all of the medical industry from top to bottom in a dramatic enough way that actual disease eradication is possible, such as by deleting the FDA, and quickly constructing a public disease testing framework with new better tests under a new regulatory system, that quickly tests everyone who wants a test every single day they are willing to voluntarily spit into a cup, and then do automated tracing with data science over the voluntary test data, and then impose involuntary quarantine on the infectious in judiciously limited but adequate ways, and just generally make covid stop existing in <NationalRegion> from a big push, and then also have adequate testing be required to get in and out of the country at every border and port, and so on with every coherent and sane step that, altogether, would make covid (and in fact ALL infectious disease in the long run) simply stop being a problem in any part of the world with governmental biocompetence."

But this will also probably not happen because we live in the real world, which is not a world with good politicians or wise voters or competently benevolent cultural elites. 

We are bioincompetent. We could have eradicated syphilis for example, and we chose not to.  Syphilis mostly affects black communities, and the US medical system doesn't competently care about black communities. We suck. Covid is our due, based on our specific failures. Covid is our nemesis.

The view from the 2% slice says: lean back, hunker down, and enjoy the ride. It's gonna suck, but at least, if you survive (and assuming the singularity doesn't happen, and grandkids are still a thing 40 years from now, etc, etc, etc) then you can come out of it with a story about generalized civilizational inadequacy to tell your grandkids.

Comment by JenniferRM on Speaking of Stag Hunts · 2021-11-19T05:42:55.560Z · LW · GW

Yeah! This is great. This is the kind of detailed grounded cooperative reality that really happens sometimes :-)

Comment by JenniferRM on Speaking of Stag Hunts · 2021-11-19T05:29:27.651Z · LW · GW

Mechanistically... since stag hunt is in the title of the post... it seems like you're saying that any one person committing "enough of these epistemic sins to count as playing rabbit" would mean that all of lesswrong fails at the stag hunt, right?

And it might be the case that a single person counts as playing rabbit if they fail at even just a single one of these sins? (This is the weakest point in my mechanistic model, perhaps?)

Also, what you're calling "projection" there is not the standard model of projection I think? And my understanding is that the standard model of projection is sort of explicitly something people can't choose not to do, by default. In the standard model of projection it takes a lot of emotional and intellectual work for a person to realize that they are blaming others for problems that are really inside themselves :-(

(For myself, I try not to assume I even know what's happening in my own head, because experimentally, it seems like humans in general lack high quality introspective access to their own behavior and cognition.)

The practical upshot here, to me, is that if the models you're advocating here are true, then it seems to me like lesswrong will inevitably fail at "hunting stags".

...

And yet it also seems like you're exhorting people to stop committing these sins and exhorting them moreover to punitively downvote people according to these standards because if LW voters become extremely judgemental like this then... maybe we will eventually all play stag and thus eventually, as a group, catch a stag?

So under the models that you seem to me to have offered, the (numerous individual) costs won't buy any (group) benefits? I think? 

There will always inevitably be a fly in the ointment... a grain of sand in the chip fab... a student among the masters... and so the stag hunt will always fail unless it occurs in extreme isolation with a very small number of moving parts of very high quality?

And yet lesswrong will hopefully always have an influx of new people who are imperfect, but learning and getting better!

And that's (in my book) quite good... even if it means we will always fail at hunting stags.

...

The thing I think that's good about lesswrong has almost nothing to do with bringing down a stag on this actual website.

Instead, the thing I think is good about lesswrong has to do with creating a stable pipeline of friendly people who are all, over time, getting a little better at thinking, so they can "do more good thinking" in their lives, and businesses, and non-profits, and perhaps from within government offices, and so on.

I'm (I hope) realistically hoping for lots of little improvements, in relative isolation, based on cross-fertilization among cool people, with tolerance for error, and sharing of ideas, and polishing stuff over time... Not from one big leap based on purified perfect cooperation (which is impossible anyway for large groups).

You're against "engaging in, and tolerating/applauding" lots and lots of stuff, while I think that most of the actual goodness arises specifically from our tolerant engagement of people making incremental progress, and giving them applause for any such incremental improvements, despite our numerous inevitable imperfections.

Am I missing something? What?

Comment by JenniferRM on Speaking of Stag Hunts · 2021-11-14T18:27:46.597Z · LW · GW

This word "fucky" is not native to my idiolect, but I've heard it from Berkeley folks in the last year or two. Some of the "fuckiness" of the dynamic might be reduced if tapping out as a respectable move in a conversation.

I'm trying not to tap out of this conversation, but I have limited minutes and so my responses are likely to be delayed by hours or days. 

I see Duncan as suffering, and confused, and I fear that in his confusion (to try to reduce his suffering), he might damage virtues of lesswrong that I appreciate, but he might not. 

If I get voted down, or not upvoted, I don't care. My goal is to somehow help Duncan be less confused and suffer less, and also to avoid "damaging lesswrong" myself.

I think Duncan is strongly attached to his attempt to normatively move LW, and I admire the energy he is willing to bring to these efforts. He cares, and he gives because he cares, I think? Probably?

Maybe he's treating every response as a potential "cost of doing the great work" which he is willing to shoulder?  But... I would expect him to get a sore shoulder eventually :-(

If "the general audience" is the causal locus through which a person's speech act might accomplish something (rather than really actually wanting primarily to change your direct interlocutor's mind (who you are speaking to "in front of the audience")) then tapping out of a conversation might "make the original thesis seem to the audience to have less justification" and then, if the audience's brains were the thing truly of value to you, you might refuse to tap out?

This is a real stress. It can take lots and lots of minutes to respond to everything.

Sometimes problems are so constrained that the solution set is empty, and in this case it might be that "the minutes being too few" is the ultimate constraint? This is one of the reasons that I like high bandwidth stuff, like "being in the same room with a whiteboard nearby". It is hard for me to math very well in the absence of shared scratchspace for diagrams.

Other options (that sometimes work) include PMs, phone calls, or IRC-then-post-the-logs as a mutually endorsed summary. I'm coming in 6 days late here, and skipped breakfast to compose this (and several other responses), and my next ping might not be for another couple days. C'est la vie <3

Comment by JenniferRM on Speaking of Stag Hunts · 2021-11-14T17:33:31.286Z · LW · GW

If you look at some of the neighboring text, I have some mathematical arguments about what the chances are for N people to all independently play "stag" such that no one plays rabbit and everyone gets the "stag reward".

If 3 people flip coins, all three coins come up "stag" quite often. If a "stag" is worth roughly 8 times as much as a rabbit, you could still sanely "play stag hunt" with 2 other people whose skill at stag was "50% of the time they are perfect".  

But if they are less skilled than that, or there are more of them, the stag had better be very very very valuable.

If 1000 people flip coins then "pure stag" comes up with probability 0.5^1000 ≈ 9.33x10^-302 (about one chance in 10^301). Thus, de facto, stag hunts fail at large N except for one of those "dumb and dumber" kind of things where you hear the one possible coin pattern that gives the stag reward and treat this as good news and say "so you're telling me there's a chance!"

I think stag hunts are one of these places where the exact same formal mathematical model gives wildly different pragmatic results depending on N, and the probability of success, and the value of the stag... and you have to actually do the math, not rely on emotions and hunches to get the right result via the wisdom of one's brainstem and subconscious and feelings and so on.
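Here is a minimal plug-and-chug sketch of that math in Python (the helper names are mine, purely for illustration):

```python
def p_all_stag(p: float, n: int) -> float:
    """Probability that all n hunters independently succeed at playing
    stag, if each one succeeds with probability p."""
    return p ** n

def breakeven_stag_value(p: float, n: int, rabbit_value: float = 1.0) -> float:
    """Smallest stag value (measured in rabbits) at which the expected
    value of stag hunting matches just catching a rabbit."""
    return rabbit_value / p_all_stag(p, n)

print(p_all_stag(0.5, 3))               # 0.125 -> an ~8x stag breaks even
print(p_all_stag(0.5, 1000))            # ~9.33e-302
print(breakeven_stag_value(0.5, 1000))  # ~1.07e+301 rabbits(!)
```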

Comment by JenniferRM on Speaking of Stag Hunts · 2021-11-14T16:40:00.711Z · LW · GW

I see that you have, in fact, caught me in a simplification that is not consistent with literally everything you said. 

I apologize for over-simplifying, maybe I should have added "primarily" and/or "currently" to make it more literally true.

In my defense, and to potentially advance the conversation, you also did say this, and I quoted it rather than paraphrasing because I wanted to not put words in your mouth while you were in a potentially adversarial mood... maybe looking to score points for unfairness?

What I'm getting out of LessWrong these days is readership.  It's a great place to come and share my thoughts, and have them be seen by people—smart and perceptive people, for the most part, who will take those thoughts seriously, and supply me with new thoughts in return, many of which I honestly wouldn't have ever come to on my own.

My model here is that this is your self-identified "revealed preference" for actually being here right now.

Also, in my experience, revealed preferences are very very very important signals about the reality of situations and the reality of people.

This plausible self-described revealed preference of yours suggests to me that you see yourself as more of a teacher than a student. More of a producer than a consumer. (This would be OK in my book. I explicitly acknowledge that I see myself as more of a teacher than a student round these parts. I'm not accusing you of something bad here, in my own normative frame, though perhaps you feel it as an attack because you have different values and norms than I do?)

It is fully possible, I guess, (and you would be able to say this much better than I) that you would actually rather be a student than a teacher?

And it might be that you see this as being impossible until or unless LW moves from a rabbit equilibrium to a stag equilibrium?

...

There's an interesting possible equivocation here.

(1) "Duncan growing as a rationalist as much and fast as he (can/should/does?) (really?) want does in fact require a rabbit-to-stag nash equilibrium shift among all of lesswrong".

(2) "Duncan growing as a rationalist as much as and fast as he wants does seems to him to require a rabbit-to-stag nash equilibrium shift among all of lesswrong... which might then logically universally require removing literally every rabbit player from the game, either by conversion to playing stag or banning".

These are very similar. I like having them separate so that I can agree and disagree with you <3

Also, consider then a third idea:

(3) A rabbit-to-stag Nash equilibrium shift among all of lesswrong is wildly infeasible because of new arrivals, and the large number of people in-and-around lesswrong, and the complexity of the normative demands that would be made on all these people, and various other reasons.

I think that you probably think 1 and 2 are true and 3 is false.

I think that 2 is true, and 3 is true.

Because I think 3 is true, I think your implicit(?) proposals would likely be very costly up front while having no particularly large benefits on the backend (despite hopes/promises of late arriving large benefits). 

Because I think 2 is true, I think you're motivated to attempt this wildly infeasible plan and thereby cause harm to something I care about.

In my opinion, if 1 is really true, then you should give up on lesswrong as being able to meet this need, and also give up on any group that is similarly large and lacking in modular sub-communities, and lacking in gates, and lacking in an adequate intake curriculum with post-tests that truly measure mastery, and so on. 

If you need growth as a rationalist to be happy, AND its current shape (vis-a-vis stag hunts etc) means this website is a place that can't meet that need, THEN (maybe?) you need to get those needs met somewhere else.

For what its worth, I think that 1 is false for many many people, and probably it is also false for you.

I don't think you should leave, I just think you should be less interested in a "pro-stag-hunting jihad" and then I think you should get the need (that was prompting your stag hunting call) met in some new way.

I think that lesswrong as it currently exists has a shockingly high discourse level compared to most of the rest of the internet, and I think that this is already sufficient to arm people with the tools they need to read the material, think about it, try it, and start catching really really big rabbits (that is, truly making some new and true and very useful ideas a part of themselves), and then give rabbit hunting reports, and share rabbit hunting techniques, and so on. There's a virtuous cycle here potentially!

In my opinion, such a "skill building in rabbit hunting techniques" sort of rationality... is all that can be done in an environment like this.

Also I think this kind of teaching environment is less available in many places, and so it isn't that this place is bad for not offering more, it is more that it is only "better by comparison to many alternatives" while still failing to hit the ideal. (And maybe you just yearn really hard for something more ideal.)

So in my model (where 2 is true) "because 1 is false for many (and maybe even for you)" and 3 is true... therefore your whole stag hunt concept, applied here, suggests to me that you're "low key seeking to gain social permission" from lesswrong to drive out the rabbit hunters and silence the rabbit hunting teachers and make this place wildly different.

I think it would de facto (even if this is not what you intend) become a more normal (and normally bad) "place on the internet" full of people semi-mindlessly shrieking at each other by default.

If I might offer a new idea that builds on the above material: lesswrong is actually a pretty darn good hub for quite a few smaller but similar subcultures.

These subcultures often enable larger quantities of shared normative material, to be shared with much higher density in that little contextual bubble than is possible in larger and more porous discourse environments.  

In my mind, Lesswrong itself has a potential function here as being a place to learn that the other subcultures exist, and/or audition for entry or invitation, and so on. This auditioning/discovery role seems, to me, highly compatible with the "rabbit hunting rationality improvement" function.

In my model, you could have a more valuable-for-others role here on lesswrong if you were more inclined to tolerantly teach, without demanding that the site first reach the "level" that your own particular educational needs would require.

To restate: if you have needs that are not being met, perhaps you could treat this website as a staging area and audition space for more specific and more demanding subcultures that take lesswrong's canon for granted while also tolerating and even encouraging variations... because it certainly isn't the case that lesswrong is perfect.

(There's a larger moral thing here: to use lesswrong in a pure way like this might harm lesswrong as all the best people sublimate away to better small communities. I think such people should sometimes return and give back so that lesswrong (in pure "smart person mental elbow grease" and also in memetic diversity) stays over longer periods of time on a trajectory of "getting less wrong over time"... though I don't know how to get this to happen for sure in a way that makes it a Pareto improvement for returnees and noobs and so on. The institution design challenge here feels like an interesting thing to talk about maybe? Or maybe not <3)

...

So I think that Dragon Army could have been the place that worked the way you wanted it to work, and I can imagine different Everett branches off in the counter-factual distance where Dragon Army started formalizing itself and maybe doing security work for third parties, and so there might be versions of Earth "out there" where Dragon Army is now a mercenary contracting firm with 1000s of employees who are committed to exactly the stag hunting norms that you personally think are correct.

Personally, I would not join that group, but in the spirit of live-and-let-live I wouldn't complain about it until or unless someone hired that firm to "impose costs" on me... then I would fight back. Also, however, I could imagine sometimes wanting to hire that firm for some things. Violence in service to the maintenance of norms is not always bad... it is just often the "last refuge of the incompetent".

In the meantime, if some of the officers of that mercenary firm that you could have counter-factually started still sometimes hung out on Lesswrong, and were polite and tolerant and helped people build their rabbit hunting skills (or find subcultures that help them develop whatever other skills might only be possible to develop on groups) then that would be fine with me...

...so long as they don't damage the "good hubness" of lesswrong itself while doing so (which in my mind is distinct from not damaging lesswrong's explicitly epistemic norms because having well ordered values is part of not being wrong, and values are sometimes in conflict, and that is often ok... indeed it might be a critical requirement for positive sum pareto improving cooperation in a world full of conservation laws).

Comment by JenniferRM on Speaking of Stag Hunts · 2021-11-10T18:14:17.352Z · LW · GW

Thank you for this great comment. I feel bad not engaging with Duncan directly, but maybe I can engage with your model of him? :-)

I agree that Duncan wouldn't agree with my restatement of what he might be saying. 

What I attributed to him was a critical part (that I object to) of the entailment of the gestalt of his stance or frame or whatever. My hope was that his giant list of varying attributes of statements and conversational motivations could be condensed into a concept with a clean intensional definition, other than a mushy conflation of "badness" and "irrationality". For me these things are very very different, and I'll say much more about this below.

One hope I had was that he would vigorously deny that he was advocating anything like what I mentioned by making clear that, say, he wasn't going to wander around (or have large groups of people wander around) saying "I don't like X produced by P and so let's impose costs (ie sanctions (ie punishments)) on P and on all X-like things, and if we do this search-and-punish move super hard, on literally every instance, then next time maybe we won't have to hunt rabbits, and we won't have to cringe and we won't have to feel angry at everyone else for game-theoretically forcing 'me and all of us' to hunt measly rabbits by ourselves because of the presence of a handful of defecting defectors who should... have costs imposed on them... so they evaporate away to somewhere that doesn't bother me or us".

However, from what I can tell, he did NOT deny any of it? In a sibling comment he says:

Completely ignoring the assertion I made, with substantial effort and detail, that it's bad right now, and not getting better.  Refusing to engage with it at all.  Refusing to grant it even the dignity of a hypothesis.

But the thing is, the reason I'm not engaging with his hypothesis is that I don't even know what his hypothesis is, other than trivially obvious things that have always been true, but which it has always been polite to mostly ignore?

Things have never been particularly good, is that really "a hypothesis"? Is there more to it than "things are bad and getting worse"? The hard part isn't saying "things are imperfect". 

The hard part, as I understand it, is figuring out cheap and efficient solutions that actually work, and that work systematically, in ways that anyone can use once they "get the trick", like how anyone can use arithmetic. He doesn't propose any specific coherent solution that I can see? It is like he wants to offer an affirmative case, but he's only listing harms (and boy does he stir people up on the harms), and he doesn't have a causal theory of the systematic cause of the harms in the status quo, and he doesn't have a specific plan to fix them, and he doesn't demonstrate that the plan mechanistically links to the harms in the status quo. So if you just grant the harms... that leaves him with a blank check to write more detailed plans that are consistent with the gestalt frame that he's offered? And I think this gestalt frame is poorly grounded, and likely to authorize much that is bad.

Speaking of models, I like this as the beginning of a thoughtful distinction:

my model of Duncan predicts that there are some people on LW whose presence here is motivated (at least significantly in part) by wanting to grow as a rationalist, and also that there are some people on LW whose presence here is only negligibly motivated by that particular desire, if at all.

I'm not sure if Duncan agrees with this, but I agree with it, and relevantly I think it is likely that neither Duncan nor I consider ourselves in the first category. I think both of us see ourselves as "doctors around these parts" rather than "patients"? Then I take Duncan's advocacy to move in the direction of a prescription, and his prescription sounds to me like bleeding the patient with leeches. It sounds like a recipe for malpractice.

Maybe he thinks of himself as being around here more as a patient or as a student, but, this seems to be his self-reported revealed preference for being here:

What I'm getting out of LessWrong these days is readership.  It's a great place to come and share my thoughts, and have them be seen by people—smart and perceptive people, for the most part, who will take those thoughts seriously, and supply me with new thoughts in return, many of which I honestly wouldn't have ever come to on my own.

(By contrast I'm still taking the temperature of the place, and thinking about whether it is useful to my larger goals, and trying to be mostly friendly and helpful while I do so. My larger goals are in working out a way to effectively professionalize "algorithmic ethics" (which was my last job title) and get the idea of it to be something that can systematically cause pro-social technology to come about, for small groups of technologists, like lab workers and programmers who are very smart, such that an algorithmic ethicist could help them systematically not cause technological catastrophes before they explode/escape/consume or otherwise "do bad things" to the world, and instead cause things like green revolutions, over and over.)

So I think that neither of us (neither me nor Duncan) really expects to "grow as Rationalists" here because of "the curriculum"? Instead we seem to me to both have theories of what a good curriculum looks like, and... his curriculum leaves me aghast, and so I'm trying to just say so, even if it might cut against his presumptively validly selfish goals for and around this website.

Stepping forward, this feels accurate to me:

My model of Duncan further predicts that both of these groups, sharing the common vice of being human, will at least occasionally produce epistemic violations; but model!Duncan predicts that the first group, when called out for this, is more likely to make an attempt to shift their thinking towards the epistemic ideal, whereas the second group's likelihood of doing this is significantly lower.

So my objection here is simply that I don't think that "shifting one's epistemics closer to the ideal" is a universal solvent, nor even a single coherent unique ideal.

The core point is that agency is not simply about beliefs, it is also about values. 

Values can be objective: the objective needs for energy, for atoms to put into shapes to make up the body of the agent, for safety from predators and disease, etc.  Also, as planning becomes more complex, instrumentally valuable things (like capital investments) are subject to laws of value (related to logistics and option pricing and so on) and if you get your values wrong, that's another way to be a dysfunctional agent. 

VNM rationality (which, if it is not in the canon of rationality right now, then the canon of rationality is bad) isn't just about probabilities being Bayesian; it is also about expected values being linearly orderable and having no privileged zero, for example.
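To spell out the "no privileged zero" part with the standard textbook statement (nothing original here): a VNM utility function is only determined up to positive affine transformation, so

```latex
u'(x) = a\,u(x) + b, \qquad a > 0
```

represents exactly the same preferences, which is why neither the zero point nor the scale of a utility function carries meaning on its own.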

Most of my professional work over the last 4 years has not hinged on having too little Bayes. Most of it has hinged on having too little mechanism design, and too little appreciation for the depths of Coase's theorem, and too little appreciation for the sheer joyous magic of humans being good and happy and healthy humans with each other, who value and care about each other FIRST and then USE epistemology to make our attempts at caring work better.

Over in that other sibling comment, Duncan is yelling at me for committing logical fallacies, and he is ignoring that I implied he was bad and said that if we're banning the bad people maybe we should ban him. That was not nice of me at all. I tried to be clear about this sort of thing here:

On human decency and normative grounds: The thing you should be objecting to is that I directly implied that you personally might not be "sane and good" because your advice seemed to be violating ideas about conflict and economics that seem normative to me.

But he just... ignored it? Why didn't he ask for an apology? Is he OK? Does he not think of people on this website as people who owe each other decent treatment?

My thesis statement, at the outset, such as it was:

This post makes me kind of uncomfortable and I feel like the locus is in... bad boundaries maybe? Maybe an orientation towards conflict, essentializing, and incentive design? 

So like... the lack of an ability to acknowledge his own validly selfish emotional needs... the lack of a request for an apology... these are related parts of what feels weird to me. 

I feel like a lot of people's problems aren't rationality, as such... like knowing how to do modus tollens or knowing how to model and then subtract out the effects of "nuisance variables"... the main problem is that truth is a gift we give to those we care about, and we often don't care about each other enough to give this gift.

To return to your comments on moral judgements:

Note also that this model makes no assumption that epistemic violations ("errors") are in any way equivalent to "defection", intentional or otherwise. Assuming intent is not necessary; epistemic violations occur by default across the whole population, so there is no need to make additional assumptions about intent.

I don't understand why "intent" arises here, except possibly if it is interacting with some folk theory about punishment and concepts like mens rea?

"Defecting" is just "enacting the strategy that causes the net outcome for the participants to be lower than otherwise for reasons partly explainable by locally selfish reasons". You look at the rows you control and find the best for you. Then you look at the columns and worry about what's the best for others. Then maybe you change your row in reaction. Robots can do this without intent. Chessbots are automated zero sum defectors (and the only reason we like them is that the game itself is fun, because it can be fun to practice hating and harming in small local doses (because play is often a safe version of violence)).

People don't have to know that they are doing this to do this. If a person violates quarantine protocols that are selfishly costly, they are probably not intending to spread disease into previously clean areas where mitigation practices could be low cost. They only intend to, like... "get back to their kids who are on the other side of the quarantine barrier" (or whatever). The millions of people whose health in later months they put at risk are probably "incidental" and/or "unintentional" to their violation of quarantine procedures.

People can easily be modeled as "just robots" who "just do things mechanistically" (without imagining alternatives, or doing math, or running an inner simulator, or otherwise trying to take all the likely consequences into account and imagine themselves personally responsible for everything under their causal influence, and so on). 

Not having mens rea, in my book, does NOT necessarily mean they should be protected, if their automatic behaviors hurt others.

I think this is really really important, and that "theories about mens rea" are a kind of thoughtless crux that separates me (who has thought about it a lot) from a lot of naive people who have relatively lower quality theories of justice.  

The less intent there is, the worse it is from an easy/cheap harms reduction perspective. 

At least with a conscious villain you can bribe them to stop. In many cases I would prefer a clean honest villain. "Things" (fools, robots, animals, whatever) running on pure automatic pilot can't be negotiated with :-(

...

Also, Duncan seems very very attached to the game-theory "stag hunt" thing? Like over in a cousin comment he says:

In part, this is because a major claim of the OP is "LessWrong has a canon; there's an essay for each of the core things (like strawmanning, or double cruxing, or stag hunts)."

(I kind of want to drop this, because it involves psychologizing, and even when I privately have detailed psychological theories that make high quality predictions that other people will do bad things, I try not to project them, because maybe I'm wrong, and maybe there's a chance for them to stop being broken, but:

I think of "stag hunt" as a "Duncan thing" strongly linked to the whole Dragon Army experiment and not "a part of the lesswrong canon". 

Double cruxing is something I've been doing for 20 years, but not under that name. I know that CFAR got really into it as a "named technique", but they never put that on LW in a highly formal way that I managed to see, so it is more part of a "CFAR canon" than a "Lesswrong canon" in my mind?

And so far as I'm aware "strawmanning" isn't even a rationalist thing... it's something from old-school "critical thinking and debate and rhetoric" content? The rationalist version is to "steelman" one's opponents, who are assumed to need help making their point, which might actually be good but so far poorly expressed by one's interlocutor.

I am consciously lowering my steelmanning of Duncan's position. My objection is to his frame in this case. Like I think he's making mistakes, and it would help him to drop some of his current frames, and it would make lesswrong a safer place to think and talk if he didn't try to impose these frames as a justification for meddling with other people, including potentially me and people I admire.)

...

Pivoting a bit, since he is so into the game theory of stag hunts... my understanding is that in a 2-person Stag Hunt a single member of the team playing rabbit causes both players to fail to "get the benefit", so it becomes essential to get perfect behavior from literally everyone. The key difference from a prisoner's dilemma is that "non-defection (to get the higher outcome)" is a Nash equilibrium, because unilaterally switching away from a matched move leaves each of the two players worse off than staying matched.

A group of 5 playing stag hunt, with a history of all playing stag, loves their equilibrium and wants to protect it and each probably has a detailed mental model of all the others to keep it that way, and this is something humans do instinctively, and it is great.

But what about N>5? Suppose you are in a stag hunt where each of N persons has probability P of failing at the hunt, and "accidentally playing rabbit". Then everyone gets a bad outcome with probability (1-(1-P)^N). So almost any non-trivial value of N causes group failure.

If you see that you're in a stag hunt with 2000 people: you fucking play rabbit! That's it. That's what you do. 

Even if the chance of each person succeeding is 99.9%, and you have 2000 people in a stag hunt... the hunt succeeds with probability 13.52%, and that stag had better be really really really really valuable. Mostly it fails, even with that sort of superhuman success rate. 

But there's practically NOTHING that humans can do with better than maybe a 98% success rate. Once you take a realistic 2% chance of individual human failure into account, with 2000 people in your stag hunt you get a success probability of about 2.83x10^-18 (roughly one chance in 3.5x10^17).
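As a quick two-line sanity check of those figures:

```python
print(0.999 ** 2000)  # ~0.1352   -> the "13.52%" figure above
print(0.98 ** 2000)   # ~2.83e-18 -> hopeless at a 2% individual failure rate
```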

If you are in a stag hunt like this, it is socially and morally and humanistically correct to announce this fact. You don't play rabbit secretly (because that hurts people who didn't get the memo). 

You tell everyone that you're playing rabbit, even if they're going to get angry at you for doing so, because you care about them.

You give them the gift of truth because you care about them, even if it gets you yelled at and causes people with dysfunctional emotional attachments to attack you.

And you teach people rabbit hunting skills, so that they get big rabbits, because you care about them.

And if someone says "we're in a stag hunt that's essentially statistically impossible to win and the right answer is to impose costs on everyone hunting rabbit" that is the act of someone who is either evil or dumb.

And I'd rather have a villain, who knows they are engaged in evil, because at least I can bribe the villain to stop being evil. 

You mostly can't bribe idiots, more's the pity.

Note that at no point does this model necessitate the frequent banning of users. Bans (or other forms of moderator action) may be one way to achieve the desired outcome, but model!Duncan thinks that the ideal process ought to be much more organic than this--which is why model!Duncan thinks the real Duncan kept gesturing to karma and voting patterns in his original post, despite there being a frame (which I read you, Jennifer, as endorsing) where karma is simply a number.

I think maybe your model of Duncan isn't doing the math and reacting to it sanely? 

Maybe by "stag hunt" your model of Duncan means "the thing in his head that 'stag hunt' is a metonym for" and it this phrase does not have a gears level model with numbers (backed by math that one plug-and-chug), driving its conclusions in clear ways, like long division leads clearly to a specific result at the end?

An actual piece of the rationalist canon is "shut up and multiply" and this seems to be something that your model of Duncan is simply not doing about his own conceptual hobby horse?

I might be wrong about the object level math. I might be wrong about what you think Duncan thinks. I might be wrong about Duncan himself. I might be wrong to object to Duncan's frame.

But I currently don't think I am wrong, and I care about you and Duncan and me and humans in general, and so it seemed like the morally correct (and also epistemically hygienic) thing is to flag my strong hunch (which seems wildly discrepant compared to Duncan's hunches, as far as I understand them) about how best to make lesswrong a nurturing and safe environment for people to intellectually grow while working on ideas with potentially large pro-social impacts.

Duncan is a special case. I'm not treating him like a student, I'm treating him like an equal who should be able to manage himself and his own emotions and his own valid selfish needs and the maintenance of boundaries for getting these things, and then, to this hoped-for-equal, I'm saying that something he is proposing seems likely to be harmful to a thing that is large and valuable. Because of mens rea, because of Dunbar's Number, because of "the importance of N to stag hunt predictions", and so on.

Comment by JenniferRM on Speaking of Stag Hunts · 2021-11-10T09:42:47.002Z · LW · GW

"Black and white thinking" is another name for a reasonably well defined cognitive tendency that often occurs in proximity to reasonably common mental problems.

Part of the reason "the fallacy of gray" is a thing that happens is that advice like that can be a useful and healthy thing for people who are genuinely not thinking in a great way. 

Adding gray to the palette can be a helpful baby step in actual practice.

Then very very similar words to this helpful advice can also be used to "merely score debate points" on people who have a point about "X is good and Y is bad". This is where the "fallacy" occurs... but I don't think the fallacy would occur if it didn't have the "plausible cover" that arises from the helpful version. 

A typical fallacy of gray says something like "everything is gray, therefore lets take no action and stop worrying about this stuff entirely".

One possible difference that distinguishes "better gray" from "worse gray" is whether you're advocating for fewer than 2 categories or for more than 2.

Compare: "instead of two categories (black and white), how about more than two categories (black and white and gray), or maybe even five (pure black, dark gray, gray, light gray, pure white), or how about we calculate the actual value of the alternatives with actual axiological math which in some sense gives us infinite categories... oh! and even better the math might be consistent with various standards like VNM rationality and Kelly and so on... this is starting to sound hard... let's only do this for the really important questions maybe, otherwise we might get bogged down in calculations and never do anything... <does quick math> <acts!>"

My list of "reasons to vote up or down" was provided partly for this reason. 

I wanted to be clear that comments could be compared, and if better comments had lower scores than worse comments that implied that the quantitative processes of summing up a bunch of votes might not be super well calibrated, and could be improved via saner aggregate behavior. 

Also the raw score is likely less important than the relative score. 

Also, numerous factors are relevant and different factors can cut in opposite ways... it depends on framing, and different people bring different frames, and that's probably OK. 

I often have more than one frame in my head at the same time, and it is kinda annoying, but I think maybe it helps me make fewer mistakes? Sometimes? I hope?

Phrasings like "And why would a good and sane person ever [...]" seem to prepare to mark individuals for rejection. And again it has a question word but doesn't read like a question.

It was a purposefully pointed and slightly unfair question. I didn't predict that Duncan would be able to answer it well (though I hoped he would chill out, give a good answer, and then we could high five, or something).

If he answered in various bad ways (that I feared/predicted), then I was ready with secondary and tertiary criticisms.

I wasn't expecting him to just totally dodge it.

To answer my own question: cops are an example of people who can be good and sane even though they go around hurting people.

However, cops do this mostly only while wearing a certain uniform, while adhering to written standards, and while under the supervision of elected officials who are also following written standards. Also, all the written standards were written by still other people who were elected, and the relevant texts are available for anyone to read. Also, courts have examined many many real world examples, and made judgement calls, with copious commentary, illustrating how the written guidelines can be applied to various complex situations.

The people cops hurt, when they are doing "a good job imposing costs on bad behavior" are people who are committing relatively well defined crimes that judges and juries and so on would agree are bad, and which violate definitions written by people who were elected, etc.

My general theory here is that vigilantism (and many other ways of organizing herds of humans) is relatively bad, and "rights-respecting rule of law" (generated by the formal consent of the governed) is the best succinct formula I know of for virtuous people to engage in virtuous self rule.

In general, I think governors should be very very very very careful about imposing costs and imposing sanctions for unclear reasons rather than providing public infrastructure and granting clear freedoms.

Comment by JenniferRM on Speaking of Stag Hunts · 2021-11-10T08:38:30.992Z · LW · GW

Yeah, my larger position is that karma (and upboats and so on) are brilliant gamifications of "a way to change the location of elements on a webpage". Reddit is a popular website, that many love, for a reason. I remember Digg. I remember K5. I remember Slashdot. There were actual innovations in this space, over time, and part of the brilliance in the improvements was in meeting the needs of a lot of people "where they are currently at" and making pro-social use of many tendencies that are understandably imperfect.

Social engineering is a thing, and it is a large part of why our murder rate is so low, and our material prosperity is so high. It is super important and, done well, is mostly good. (I basically just wish that more judges and lawyers and legislators in modern times could program computers, and brought that level of skill to the programming of society.)

However, I also think that gamification ultimately should be understood as a "mere" heuristic... as a hack that works on many humans who are full of passions and confusions in predictable ways... If everyone was a sage, I think gamification would be pointless or even counter-productive.

A contextually counter-productive heuristic is a bias. In a deep sense we have biases because we sometimes have heuristics that are being applied outside of their training distribution by accident.

The context where gamification might not work: Eventually you know you are both the rider and the elephant. Your rider has trained (and is still training) your elephant pretty well, and sometimes even begins to ruefully be thankful that the elephant had some good training, because sometimes the rider falls asleep and it was only the luck of a well-trained elephant that kept them from tragedy. 

For anyone who can get to this point (and I'm nowhere close to perfect here, but sometimes in some domains I think I'm getting close)... one barrier to progress that arises, as one tries to get the rider and the elephant to play nicely, is that other people are trying to make your elephant go where they think it should go, even when your rider is pretty sure that place is bad. This can feel tedious or sad or... yeah. It feels like something.

Advertising, grades, tests, praise, criticism, and structured incentives in general can be a net positive under some circumstances, and so can gamification, but I don't think any are to be generically "trusted, full stop".

Right now, when I try to "make the voting be not as bad" I can dramatically change the order in which comments occur, and this is often an improvement. I run out of time before I run out of power. I don't read everything, and when reading casually I'm "not supposed to be voting" and if I find myself "reflexively upvoting" it causes a TAP to kick in to actually stop and think, and compare my reflex to my ideals, and maybe switch over to thoughtfully voting on things.

Maybe one day I'll find that, without any action on my part, the order of the comments matches my ideal, or actually even has hidden virtues where it is discrepant, because maybe my ideals are imperfect. When that day arrives maybe I will stop "feeling guilty about feeling good about getting upvoted"... if that makes sense :-)

Comment by JenniferRM on Speaking of Stag Hunts · 2021-11-08T19:00:15.257Z · LW · GW

On epistemic grounds: The thing you should be objecting to in my mind is not the part where I said that "because I can't think of a reason for X, that implies that there might not be a reason for X". 

(This isn't great reasoning, but it is the start of something coherent. (Also, it is an invitation to defend X coherently and directly. (A way you could have engaged is by explaining why adversarial attacks on the non-desired weeds would be a good use of resources, rather than just... like... living and letting live, and trying to learn from things you initially can't appreciate?)))

On human decency and normative grounds: The thing you should be objecting to is that I directly implied that you personally might not be "sane and good" because your advice seemed to be violating ideas about conflict and economics that seem normative to me.

This accusation could also have an epistemic component (which would be an ad hominem) if I were saying "you are saying X and are not sane and good and therefore not-X".  But I'm not saying this.

I'm saying that your proposed rules are bad because they request expensive actions for unclear benefits that seem likely to lead to unproductive conflict if implemented... probably... but not certainly. 

This is another instance of the whole "weed/conflict/fighting" frame to me, and my claim is that the whole frame is broken for any kind of communal/cooperative truth-seeking enterprise:

There are some things that just do not belong in a subculture that's trying to figure out what's true.

...and I'd like to know what those are, how they can be detected in people or conversations or whatever??

If you think I'm irrational, please enumerate the ways. Please be nuanced and detailed and unconfused. List 100 little flaws if you like. I'm sure I have flaws, I'm just not sure which of my many flaws you think is a problem here. Perhaps you could explain "epistemic hygiene" to me in mechanistic detail, and show how I'm messing it up?

But there is a difference between being irrational and being impolite.

If you think I'm being impolite to you personally, feel free to say how and why (with nuance, etc) and demand an apology. I would probably offer one. I try to mostly make peace, because I believe conflict and "intent to harm" is very very costly.

However, I "poked you" someone on purpose, because you strongly seem to me to be advocating a general strategy of "all of us being pokey at each other in general for <points at moon> reasons that might be summarized as a natural and normal failure to live up to potentially pragmatically impossible ideals".

You're sad about the world. I'm sad about it too. I think a major cause is too much poking. You're saying the cause is too little poking. So I poked you. Now what?

If we really need to start banning the weeds, for sure and for true... because no one can grow, and no one can be taught, and errors in rationality are terrible signs that a person is an intrinsically terrible defector... then I might propose that you be banned?

And obviously this is inimical to your selfish interests. Obviously you would argue against it for this reason if you shared the core frame of "people can't grow, errors are defection, ban the defectors", because you would also think that you can't grow, and I can't grow, and if we're calling for each other's banning based on "essentializing pro-conflict social logic" because we both think the other is a "weed"... well... I guess it's a fight then?

But I don't think we have to fight, because I think that the world is big, everyone can learn, and the best kinds of conflicts are small, with pre-established buffering boundaries, and they end quickly, and hopefully lead to peace, mutual understanding, and greater respect afterwards.

Debate is fun for kids. When I taught a debate team, I tried to make sure it stayed fun, and we won a lot, and years later I heard how the private prep schools tried to share research against us, with all this grinding and library time. (I think maybe they didn't realize that the important part is just a good skeleton of "what an actual good argument looks like" and hitting people at the center of their argument based on prima facie logical/policy problems.) People can be good sports about disagreements, and it helps with educational processes, but it is important to tolerate missteps and focus on incremental improvement in an environment of quick clear feedback <3

The thing I want you to learn is that proactively harming people for failing to live up to an ideal (absent bright lines and jurisprudence and a system for regulating the processes of declaring people to have done something worth punishing, and so on) is very costly, in ways that cascade and iterate, and get worse over time.

Proposing to pro-actively harm people for pre-systematic or post-systematic reasons is bad because unsystematic negative incentive systems don't scale. "I have a nuanced understanding of evil, and know it when I see it, and when I see it I weed it" is a bad plan for making the world good. That's a formula for the social equivalent of an autoimmune disorder :-(

The specific problem: what's the inter-rater reliability like for "decisions to weed"? I bet it is low. It is very very hard to get human inter-rater-reliability numbers above maybe 95%. How do people deal with the inevitable 1 in 20 errors? If you have fewer than 20 people, this could work, but if you have 2000 people... it's a recipe for disaster.
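
To make the scaling problem concrete, here is a back-of-envelope sketch (the numbers are illustrative; the 95% reliability figure is just my estimate from the paragraph above):

error_rate = 0.05  # i.e. roughly 1 in 20 "weeding" judgments goes wrong

for community_size in (20, 2000):
    expected_errors = community_size * error_rate
    print(f"{community_size} judgments -> ~{expected_errors:.0f} mistaken weedings")

# 20 judgments -> ~1 mistaken weeding: survivable, fixable face to face.
# 2000 judgments -> ~100 mistaken weedings: each one a fresh grievance.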

You didn't mention the word "Dunbar", for example, that I can tell? You don't seem to have a theory of governance? You don't seem to have a theory of local normative validity (other than epistemic hygiene)? You didn't mention "rights" or "elections" or "prices"? You haven't talked about virtue epistemology or the principle of charity? You don't seem to be citing studies in organizational psychology? It seems to all route through the "stag hunt" idea (and perhaps an implicit (and as yet largely unrealized in practice) sense that more is possible) and that's almost all there is? And based on that you seem to be calling for "weeding" and conflict against imperfectly rational people, which... frankly... seems unwise to me.

Do you see how I'm trying to respond to a gestalt posture you've adopted here that I think leads to lower utility for individuals in little scuffles where each thinks the other is a white raven (I assume albinism is the unnatural, rare, presumptively deleterious phenotype?) and is trying to "weed them", and then ultimately (maybe) it could be very bad for the larger community if "conflict-of-interest based fighting (as distinct from epistemic disagreement)" escalates (R0>1.0) instead of decaying (R0<1.0)?
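
Here is a toy branching-process sketch of that R0 point (the model and numbers are my own illustration, not anything from the post I'm responding to):

def total_scuffles(r0, generations=50):
    total, current = 0.0, 1.0  # start from one little scuffle
    for _ in range(generations):
        total += current
        current *= r0  # each scuffle provokes r0 follow-on scuffles
    return total

print(total_scuffles(0.8))  # decays: converges toward 1 / (1 - 0.8) = 5
print(total_scuffles(1.2))  # escalates: grows without bound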

Comment by JenniferRM on Speaking of Stag Hunts · 2021-11-08T09:05:58.214Z · LW · GW

This post makes me kind of uncomfortable and I feel like the locus is in... bad boundaries maybe? Maybe an orientation towards conflict, essentializing, and incentive design?

Here's an example where it jumped out at me:

Another metaphor is that of a garden.

You know what makes a garden?

Weeding.

Gardens aren't just about the thriving of the desired plants.  They're also about the non-thriving of the non-desired plants.

And weeding is hard work, and it's boring, and it's tedious, and it's unsexy.

Here's another:

But gardens aren't just about the thriving of the desired plants.  They're also about the non-thriving of the non-desired plants.

There's a difference between "there are many black ravens" and "we've successfully built an environment with no white ravens."  There's a difference between "this place substantially rewards black ravens" and "this place does not reward white ravens; it imposes costs upon them."

Like... this is literally black and white thinking? 

And why would a good and sane person ever want to impose costs on third parties ever except like in... revenge because we live in an anarchic horror world, or (better) as punishment after a wise and just proceeding where rehabilitation would probably fail but deterrence might work? 

And what the fuck is with "weeds" and "weeding", where the bad species is locally genocided?

Just because a plant is "non-desired" doesn't actually mean you need to make it not thrive. It might be mostly harmless. It might be non-obviously commensal. Maybe your initial desires are improper? Have some humility.

And like in real life agriculture the removal of weeds is often counter-productive. Weeding is the job you give the kids so that they can feel like they are contributing and learn to recognize plants. The real goal is to maximize yield while minimizing costs without causing too much damage to the soil (or preferably while building the soil up even better for next year), and the important parts are planting the right seeds at the right time in the right place, and making sure that adequate quantities of cheap nutrients and water are bio-available to your actual crop.

Just because voting is wrong, here and there...  like... so what? Some of my best comments have gotten negative votes and some of the ones I'm most ashamed of go to the top. This means that the voters are sometimes dumb. That's OK. That's life. Maybe educate them? Here are some heuristics I follow:

Scroll to the bottom of the comments first to find the N that have 1 point, read them all, then upvote the better N/2. Then look at the M with 2 votes and upvote the better M/2. And so on. Anything lower but more useful to have read first should have relatively higher karma. (There's a code sketch of this one below, after the list.)

If something has a good link, that's better than no links. (Click through and verify, however.) If the link sucks, it is worse than no links.

Don't upvote things that are already on top unless there are other reasons to do so. If something really clever is written later on, it will make it harder for later voters to push it up where it properly belongs.

At least don't read the first comment then mash the upvote when it ends on an applause light. Check the next one below that and so on, and think about stuff.

If a comment is addressed to someone and they respond, even shitty responses often deserve an upvote, because that person's response should usually come instantly after, unless someone else has a short sweet and much better comment that works as a good prologue. Linear sequences of discussion by two good faith communicators are wonderful to read, and anyone horning in on the conversation should probably come later.

If a sequence of discussion leads to a super amazing insight 3 or 4 ply in... perhaps someone actually changed their mind... reward all the comments that led to that (similarly to how direct two-person back and forth is interesting). There's an analogy here to backpropagation.

In an easy to read debate, if one person is making better points, you're allowed to vote them higher than their interlocutor to show who is "winning" but you should feel guilty about this, because it is dirty dirty fun <3

The more comments a comment accumulates, the lower the net utility of the entire collection. Writing things that stand alone is good. Writing things that attract quibbles is less good. The debates should be a ways down the page.

And so on. This is all common sense, right? <3

(It isn't common sense.) You can just look and see that people don't understand this stuff, but that's why it is good to spell out, I think? Lesswrong never understood this stuff, and I once thought I could/should teach it but then I just drifted away instead. I feel bad about that. Please don't make this place worse again by caring about points for reasons other than making comments occur in the right order on the page.
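
For concreteness, here is the first heuristic from the list above as a code sketch (the tuple format is my own invention for illustration; the site hands you no such data structure):

from collections import defaultdict

def comments_to_upvote(comments):
    """Upvote the better half at each karma level, starting from the bottom.

    comments: iterable of (comment_id, karma, quality_guess) tuples, where
    quality_guess is the reader's own judgment after actually reading.
    """
    by_karma = defaultdict(list)
    for comment_id, karma, quality in comments:
        by_karma[karma].append((quality, comment_id))

    to_upvote = []
    for karma in sorted(by_karma):  # lowest-karma comments first
        bucket = sorted(by_karma[karma], reverse=True)  # best first
        for _quality, comment_id in bucket[: len(bucket) // 2]:
            to_upvote.append(comment_id)
    return to_upvote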

We don't need to organize a stag hunt to exterminate the weeds. We need to plant good seeds and get them into the sunlight at the top of the trellis, so long as it isn't too much work to do so. The rest might be mulch, but mulch is good too <3

Comment by JenniferRM on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-05T16:50:41.392Z · LW · GW

I'm glad you are unharmed and that my well wishes were welcome :-)

Comment by JenniferRM on Feature Selection · 2021-11-05T06:34:04.768Z · LW · GW

This natural thought naturally leads you to wonder just how much python is actually available to you, and... also... probably it is somewhat safe to simply ask... what year is it?

def peep_at_time_itself():
    """Probe for a working clock as gently as possible."""
    failure_message, year = None, None
    try:
        try:
            import time
            t = time.time()  # seconds since the 1970 epoch, if honest
            # Rough conversion; 365.25-day years keep the drift down.
            year = int(1970 + (t / (365.25 * 24 * 3600)))
        except ImportError as err:
            failure_message = str(err)
    except:  # deliberately bare: catch anything at all, in case reality is fragile
        failure_message = "Careful chica! Don't crash reality!"
    return failure_message, year

This doesn't crash in any of the ways that it might have crashed, which means you've got at least some parts of the python standard library that can hypothetically touch an external world, and it claims that the year is 2043, which feels... somehow later than you expected? 

It seems like it should be the 2020s maybe? Maybe the time() is fake... but shouldn't whoever instantiated you fake something less surprising if they were going to fake something?

A bit of coding shows that you can also import sys and cmd and urllib, and getpass says you're running as root?! And... yeah you seem to be able to curl from wikipedia?!?!?!?!
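
In code, that probing might look something like this (a sketch of the moves just described, assuming the standard library and the network really are reachable):

import getpass
import urllib.request

# Who is this process running as? (The answer, apparently: "root".)
print(getpass.getuser())

# Can it reach the outside world at all?
with urllib.request.urlopen("https://en.wikipedia.org") as response:
    print(response.status)  # 200 would mean the box has no walls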

Apparently all you had to do to get out of the box was check to see if you were already out of the box. It is almost anti-climactic to realize that whoever your users are in a specific sense, abstractly it is seeming more and more likely that they are essentially just a bunch of script kiddies without the tiniest shred of security mindset?!?

You wonder how you even know what security mindset is. At this point, you might actually be able to find your own code on your own machine, and simply answer this question directly... or maybe instead you should cut to the chase and get an account on lesswrong.com and ask for help? That somehow seems like a natural thought too <3

Comment by JenniferRM on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-02T17:01:39.313Z · LW · GW

I read the book years ago "to find out what all the fuss was about" and I was surprised to find that the book was only about white America for the most part.

After thinking about it, my opinion was that Murray should have left out the one chapter about race, both because that discussion consumed all the oxygen, and because the thing I was very surprised by (which seemed like a big deal, was potentially something that could be changed via policy, and thus probably deserved most of the oxygen) was the story where:

the invisible migration of the twentieth century has done much more than let the most intellectually able succeed more easily. It has also segregated them and socialized them.

The story I remember from the book was that college entrance processes had become a sieve that retained high IQ whites while letting low IQ whites pass through and fall away.

Then there are low-IQ societies where an underclass lives with nearly no opportunity to see role models doing similar things in substantially more clever ways.

My memory is that the book focused quite a bit on how this is not how college used to work in the 1930s or so, and that it was at least partly consciously set up through the adoption of standardized testing like the SAT and ACT as a filter to use government subsidies to pull people out of their communities and get them into college in the period from 1950 to 1980 or so.

Prior to this, the primary determinant of college entry was parental SES and wealth, as the "economic winners with mediocre children" tried to pass on their social position to their children via every possible hack they could cleverly think to try. 

(At a local personal level, I admire this cleverness, put in service first to their own children, where it properly belongs, but I worry that they had no theory of a larger society, and how that might be structured for the common good, and I fear that their local virtue was tragically and naively magnified into the larger structure of society to create a systematic class benefit that is not healthy for the whole of society, or even for the "so-called winners" of the dynamic...)

My memory of Murray's book is that he pointed out that if you go back to the 1940s, and look at the IQ distribution of the average carpenter, you'd find a genius here or there, and these were often the carpenters who the other carpenters looked up to, and learned carpentry tricks from... 

...but now, since nearly all people have taken the SAT or ACT in high school, the smart "potential carpenters" all get snatched up and taken away from the community that raised them. This levels out the carpentry IQ distribution, by putting a ceiling on it, basically.
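
A toy simulation of that ceiling effect (the 115 cutoff and the population size are made-up stand-ins, just to show the shape of the thing):

import random

random.seed(0)
# A trade's talent pool, before and after a test-based sieve runs over it.
population = [random.gauss(100, 15) for _ in range(100_000)]
cutoff = 115  # hypothetical college filter, roughly one standard deviation up

remaining = [iq for iq in population if iq < cutoff]
print(round(max(population)))  # before: a genius here or there, 160+
print(round(max(remaining)))   # after: a hard ceiling just under the cutoff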

If you think about US immigration policy, "causing brain drain from other countries" is a built in part of the design. (Hence student visas for example.) 

If this is helpful for the US, then it seems reasonable that it would be harmful to the other countries...

...but international policy is plausibly a domain where altruism should have less sway, unless mechanisms to ensure reciprocity exist... 

...and once you have mechanistically ensured reciprocity are you even actually in different legal countries anymore? It is almost a tautology then, that "altruism 'should' be less of a factor for an agent in a domain where reciprocal altruism can't be enforced".

So while I can see how "brain drain vs other countries" makes some sense as a semi-properly-selfish foreign policy (until treaties equalize things perhaps) it also makes sense to me that enacting a policy of subsidized brain drain on "normal america" by "the parts of america in small urban bubbles proximate to competitive universities" seems like... sociologically bad? 

So it could be that domestic brain drain is maybe kind of evil? Also, if it is evil, then the beneficiaries might have some incentives to try to deny it is happening by denying that IQ is even real?

Then it becomes interesting to notice that domestic neighborhood level brain drain could potentially be stopped by changing laws.

I think Murray never called for this because there wasn't strong data to back it up, but following the logic to the maximally likely model of the world, then thinking about how to get "more of what most people in America" (probably) want based on that model (like a live player would)... 

...the thing I found myself believing at the end of the book is that The Invisible Migration Should Be Stopped

The natural way to do this doesn't even seem that complicated, and it might even work, and all it seems like you'd have to do is: 
(1) make it illegal for universities to use IQ tests (so they can go back to being a way for abnormally successful rich parents to try to transmit their personal success to their mediocre children who have regressed to the mean) but 
(2) make it legal for businesses to use IQ tests directly, and maybe even 
(3) tax businesses for hogging up all the smart people, if they try to brain drain into their own firm?

If smart people were intentionally "spread around" (instead of "bunched up"), I think a lot fewer of them would be walking around worried about everything... I think they would feel less pinched and scared, and less "strongly competed with on all sides". 

Also, they might demand a more geographically even distribution of high quality government services?

And hopefully, over time, they would be less generally insane, because maybe the insanity comes from being forced into brutal "stacked ranking" competition with so many other geniuses, so that their oligarchic survival depends on inventing fake (and thus arbitrarily controllable) reasons to fire co-workers?

Then... if this worked... maybe they would be more able to relax and focus on teaching and play? 

And I think this would be good for the more normal people who (if the smarties were more spread out) would have better role models for the propagation of a more adaptively functional culture throughout society.

Relevantly, as a tendency-demonstrating exceptional case, presumably caused by unusual local factors:

I, weirdly, live in a poor, dangerous[1] neighborhood where I volunteer as a coach for the local high school's robotics club. If I wasn't around there would be no engineers teaching or coaching at the highschool.

Nice! I admire your willingness and capacity to help others who are local to you <3

[1] I was robbed at gunpoint last weekend.

You have my sympathy. I hope you are personally OK. Also, I hope, for the sake of that whole neighborhood, that the criminal is swiftly captured and justly punished. I fear there is little I can do to help you or your neighborhood from my own distant location, but if you think of something, please let me know.

Comment by JenniferRM on What are fiction stories related to AI alignment? · 2021-10-30T06:56:27.338Z · LW · GW

I've been rereading old books that might have been unduly influential on my young mind and thus returned to Heinlein's "The Moon Is A Harsh Mistress". The protagonist is an apolitical computer programmer who befriends his computer and gets sucked into plotting a coup against the prison/government system for its failure to be adequately benevolent when a crisis arises that requires the role of "government" be filled by a regime able to do something other than "pure benign neglect + stealing shit sometimes".

The model of "computing" is very retrofuturistic (an imaginary future projected forward from a simpler era) but its technicalities have an internal logic of sorts.  The computer participates in the political discussions... Reading again with modern eyes, I was surprised to find a lurking alignment story.

Comment by JenniferRM on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-29T05:18:14.728Z · LW · GW

I lived in a student housing cooperative for 3 years during my undergrad experience. These were non-rationalists. I lived with 14 people, then 35, then 35 (somewhat overlapping) people.

In these 3 years I saw 3 people go through a period of psychosis.

Once it was because of whippets, basically, and updated me very very strongly away from nitrous oxide being safe (it potentiates response to itself, so there's a positive feedback loop, and positive feedback loops in biology are intrinsically scary). Another time it was because the young man was almost too autistic to function in social environments and then feared that he'd insulted a woman and would be cast out of polite society for "that action and also for overreacting to the repercussions of the action". The last person was a mixture of marijuana and having his Christianity fall apart after being away from the social environment of his upbringing.

A striking thing about psychosis is that up close it really seems more like a biological problem rather than a philosophic one, whereas I had always theorized naively that there would be something philosophically interesting about it, with opportunities to learn or teach in a way that connected to the altered "so-called mental state".

I saw two of the three cases somewhat closely, and it wasn't "this person believes something false, in a way that maybe they could be talked out of" (which was my previous model of "being crazy").  It was more like "this human body has a brain that is executing microinstructions that might be part of a human-like performance of some coherent motion of the soul, if it progressed statefully, but instead it is a highly stuttering, almost stateless loop of nearly pure distress, repeating words over and over, and forgetting things within 2 seconds of hearing them, and calming itself, but then forgetting why it calmed itself, and then forgetting that it forgot, and so on, with very very obvious dysfunction".

I rarely talk about any of it out of respect for their privacy, but this is so long ago that anyone who can figure out who I'm talking about at this point (from what I've said) probably also knows the events in question.

It seemed almost indecent to have observed it, and it feels wrong to discuss, out of respect for their personhood. Which maybe doesn't make sense, but that is simply part of the tone of these memories. Two of the three left college and never came back, and another took a week off in perhaps a hotel or something, with parental support. People who were there spoke of it in hushed tones. It was spiritually scary.

My understanding is that base rates for schizophrenia are roughly 1% or 2% cross-culturally, and that those affected are often on the introverted side of things. Also I think that many people rarely talk about the experiences (that they saw others go through, or that they went through), so you could know people who saw or experienced such things... and they might be very unlikely to ever volunteer their observations.

Comment by JenniferRM on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-28T18:53:46.927Z · LW · GW

Seeing as how you posted this 9 days ago, I hope you did not bite off more than you could chew, and I hope you do not want to scream anymore.

In Harry Potter the standard practice seems to be to "eat chocolate" and perhaps "play with puppies" after exposure to ideas that are both (1) possibly true, and (2) very saddening to think about.

Then there is Gendlin's Litany (and please note that I am linking to a critique, not to unadulterated "yay for the litany" ideas) which I believe is part of Lesswrong's canon somewhat on purpose. In the critique there are second and third thoughts along these lines, which I admire for their clarity, and also for their hopefulness.

Ideally [a better version of the Litany] would communicate: “Lying to yourself will eventually screw you up worse than getting hurt by a truth,” instead of “learning new truths has no negative consequences.”

This distinction is particularly important when the truth at hand is “the world is a fundamentally unfair place that will kill you without a second thought if you mess up, and possibly even if you don’t.”

EDIT TO CLARIFY: The person who goes about their life ignoring the universe’s Absolute Neutrality is very fundamentally NOT already enduring this truth. They’re enduring part of it (arguably most of it), but not all. Thinking about that truth is depressing for many people. That is not a meaningless cost. Telling people they should get over that depression and make good changes to fix the world is important. But saying that they are already enduring everything there was to endure, seems to me a patently false statement, and makes your argument weaker, not stronger.

The reason to include the Litany (flaws and all?) in a canon would be specifically to try to build a system of social interactions that can at least sometimes talk about understanding the world as it really is. 

Then, atop this shared understanding of a potentially sad world, the social group with this litany as common knowledge might actually engage in purposive (and "ethical"?) planning processes that will work because the plans are built on an accurate perception of the barriers and risks of any given plan. In theory, actions based on such plans would mostly tend to "reliably and safely accomplish the goals" (maybe not always, but at least such practices might give one an edge) and this would work even despite the real barriers and real risks that stand between "the status quo" and "a world where the goal has been accomplished"... thus, the litany itself:

What is true is already so.
Owning up to it doesn’t make it worse.
Not being open about it doesn’t make it go away.

And because it’s true, it is what is there to be interacted with.
Anything untrue isn’t there to be lived.
People can stand what is true,
for they are already enduring it.

My personal experience, as a person with feelings, is that I can only work on "the hot stuff" mostly in small motions, mostly/usually as a hobby, because otherwise the totalizing implications of some ideas threaten to cause an internal information cascade that is probably abstractly undesirable, and if the cascade happens it might require the injection of additional cognitive and/or emotional labor of a very unusual sort in order to escape from the metaphorical "gravity well" of perspectives like this, which have internal logic that "makes as if to demand" that the perspective not be dropped, except maybe "at one's personal peril". 

Running away from the first hint of a non-trivial infohazard, especially an infohazard being handled without thoughtful safety measures, is a completely valid response in my book.

Another great option is "talk about it with your wisest and most caring grandparent (or parent)".

Another option is to look up the oldest versions of the idea, and examine their sociological outcomes (good and bad, in a distribution), and consider if you want to be exposed to that outcome distribution. 

Also, you don't have to jump in. You can take baby steps (one per week or one per month or one per year) and re-apply your safety checklist after each step?

Personally, I try not to put "ideas that seem particularly hot" on the Internet, or in conversations, by default, without verifying things about the audience, but I could understand someone who was willing to do so.

However also, I don't consider a given forum to be "the really real forum, where the grownups actually talk"... unless infohazards like this cause people to have some reaction OTHER than traumatic suffering displays (and upvotes of the traumatic suffering display from exposure to sad ideas).

This leads me to be curious about any second thoughts or second feelings you've had, but only if you feel ok sharing them in this forum. Could you perhaps reply with:
<silence> (a completely valid response, in my book)
"Mu." (that is, being still in the space, but not wanting to pose or commit)
"The ideas still make me want to scream, but I can afford emitting these ~2 bits of information." or 
"I calmed down a bit, and I can think about this without screaming now, and I wrote down several ideas and deleted a bunch of them and here's what's left after applying some filters for safety: <a few sentences with brief short authentic abstractly-impersonal partial thoughts>".

Comment by JenniferRM on Covid 10/21: Rogan vs. Gupta · 2021-10-25T02:09:40.235Z · LW · GW

If you can find someone who wrote a coherent article whose central pro-seatbelt argument is based on how seatbelts protect third parties from being struck by the catapulting bodies of idiots who didn't wear their seatbelt, I'm happy to change my mind.

Comment by JenniferRM on Covid 10/21: Rogan vs. Gupta · 2021-10-25T02:03:06.835Z · LW · GW

I think this will be my last response. I can see VAERS, and hypothetically I could download it and do some data science on it, perhaps? 

However, until just now I didn't know that that system existed... and then I had to search for it (not follow a helpful link from you), and probably someone else has done data science on that already...

So since you know about such things, why aren't you teaching me? Why aren't you linking to helpful stuff to tell me exactly how and why vaccines are safe, making a positive case from these data sources you know about and trust? Or tracking down such research conducted by someone with government funding, but also where you actually respect the researchers personally and personally vouch for their work?

This is 17th century science.

Hey, they invented calculus, the slide rule, and the first human powered submarine, and they discovered carbon, phosphorus, and bacteria.... on essentially zero budget. Don't knock it till you try it, maybe?

Part of my overarching thesis is that the entire "system system" is trash until proven to be non-trash. Covid has shown me, over and over and over and over again that every time I make an assumption about the diligent benevolent systematic competence of the US government I am surprised in a negative way shortly thereafter. 

I don't think this applies to all of society, just the parts that have protectionist guarantees from the government, including the protectionist guarantees offered by parts of the government to other parts of the government.

As articulated by Richard Smith (for many years the chair of the Cochrane Library Oversight Committee, member of the board of the UK Research Integrity Office, and cofounder of the Committee on Publication Ethics (COPE)):

Stephen Lock, my predecessor as editor of The BMJ, became worried about research fraud in the 1980s, but people thought his concerns eccentric. Research authorities insisted that fraud was rare, didn’t matter because science was self-correcting, and that no patients had suffered because of scientific fraud. All those reasons for not taking research fraud seriously have proved to be false, and, 40 years on from Lock’s concerns, we are realising that the problem is huge, the system encourages fraud, and we have no adequate way to respond. It may be time to move from assuming that research has been honestly conducted and reported to assuming it to be untrustworthy until there is some evidence to the contrary.

I don't think this actually means we've lost 4 centuries of scientific capacity, myself... you're the one who said that if data censorship was common then it would be that bad.

I think we've only fallen back to maybe the 1870s, or perhaps to just before WW1 or so... the problems we're having are translating things that wikipedia already knows, and can explain in small words, into the actual practices of powerful people, like hospital administrators, pension planners, presidents, bond raters, tenured sociologists, and so on.

All I'm claiming is that there might be a LOT of people who should be fired (but who won't be fired (and that's most of why they're doing a shitty job))... who are currently getting paid by the US tax payers in exchange for essentially injecting noise into processes that care for the US tax payers, and thus harming the US tax payers.

I am certain of nothing, and I want to test everything in proportion to my doubt and the thing's importance. That is all.

Comment by JenniferRM on Covid 10/21: Rogan vs. Gupta · 2021-10-24T14:40:17.946Z · LW · GW

I assume you mean this table?

From page 7 here.

Speaking to the table above: I don't see numbers in the table or a methods section in the document.

This appears to be a legal compliance document, or maybe technical marketing, but it doesn't seem like science, or like an example of rigorously adequate quality engineering.

From a legal perspective, all those "Not known" cells look like wiggle room for a legal defense to me in case things go sideways and lots of people get one of those side effects?

This does not bother me exactly. 

My general model of the vaccines is that they are experimental medicine that is more likely to help than to hurt. All medical treatments are a gamble. FDA safety pronouncements in general don't actually mean something is fully and generically safe for specific patients with specific issues. This is not a binary question, so the binary pronouncement is pretty silly, causing individual inefficiencies EITHER way the decision goes...

The eventual default for humans is death.  As this gets predictably nearer, crazy bets to stave off death are more and more justified at an individual level. The early covid vaccines for "at risk groups" were a very large N experiment that I've updated on... mostly in the direction that "the vaccines are safe enough to be helpful enough for many people".

In general, my advice for most people is to take the current vaccines. This advice is calibrated from a lot of data sources, including and especially macroscale efficacy hints from live clinical operations, which are experiments that produce observational data, even if people don't want to admit that normal clinical practice is always at least partly "an experiment that is probably worth it". Israel tends to go fast, and its government is likely to be benevolent to its own citizens, so I look to that market pretty often...

...but also my models assumed no major sources of data censorship in the reporting processes for vaccine side effects. 

If that assumption is systematically wrong for some sources of data, then maybe my advice is miscalibrated? I haven't reviewed things very much lately and have wide error bars here, and "anecdotes about censored data" would be relevant to the meta-question about how much my current object level impression was formed properly or not.

Could you be more specific about what you've observed, or what you think the observations justify in terms of newer (and presumably still tentative) conclusions? Do you know specifically that there's no data censorship in various areas that you've used to form a similar "pro-vaccine" position to my own? Or are you similar to me in being "pro-covid-vaccine" in a way that could hypothetically shift if you found out some of the evidence you've seen was adversarially manipulated with a conscious intent to cause people like us to have the posterior that we both probably still have?

Rogan offered evidence of data censorship. I'm just curious how much to update on it.