Gems from the Wiki: Paranoid Debating 2020-09-15T03:51:10.453Z
David C Denkenberger on Food Production after a Sun Obscuring Disaster 2017-09-17T21:06:27.996Z
How often do you check this forum? 2017-01-30T16:56:54.302Z
[LINK] Poem: There are no beautiful surfaces without a terrible depth. 2012-03-27T17:30:33.772Z
But Butter Goes Rancid In The Freezer 2011-05-09T06:01:34.941Z
February 27 2011 Southern California Meetup 2011-02-24T05:05:39.907Z
Spoiled Discussion of Permutation City, A Fire Upon The Deep, and Eliezer's Mega Crossover 2011-02-19T06:10:15.258Z
January 2011 Southern California Meetup 2011-01-18T04:50:20.454Z
VIDEO: The Problem With Anecdotes 2011-01-12T02:37:33.860Z
December 2010 Southern California Meetup 2010-12-16T22:28:29.049Z
Starting point for calculating inferential distance? 2010-12-03T20:20:03.484Z
Seeking book about baseline life planning and expectations 2010-10-29T20:31:33.891Z
Luminosity (Twilight fanfic) Part 2 Discussion Thread 2010-10-25T23:07:49.960Z
September 2010 Southern California Meetup 2010-09-13T02:31:18.915Z
July 2010 Southern California Meetup 2010-07-07T19:54:25.535Z


Comment by JenniferRM on [REPOST] The Demiurge’s Older Brother · 2021-03-28T23:09:02.927Z · LW · GW

I went hunting for this story, so I could share it with someone, and now that I've found it I'm slightly surprised that it has so few upvotes and so few comments. It's a great story <3

Comment by JenniferRM on Making Vaccine · 2021-02-04T17:01:24.299Z · LW · GW

I think what you have done here is re-invented the actual helpful version of a practice whose authoritarian bureaucratic cargo-culted version is called "anonymous peer review".

It is easy (and maybe dangerously wrong) to come to the straightforward conclusion that peer review in general is simply evil bullshit... until one finds the place from which a benevolent truth-oriented human (like oneself) finds a reason to consult with an actual "epistemic peer" as a prudent and socially-embedded response to one's own uncertainty about things one cares about.

Comment by JenniferRM on Making Vaccine · 2021-02-04T02:46:13.353Z · LW · GW

You have my admiration, and my hope that you are calculating the risks accurately!

I have not read the RaDVaC paper so I don't have a good object level model of safety and risks. From a distance it looks like heroism, because from a distance it looks like taking a risk in a way that could provide a role model for many if it works safely! It reminds me a bit of Seth Roberts, who was part of the extended tribe, who did awesome stuff over and over again (seemingly safely) but who also may have eventually guessed wrong about safety.

I guess I just want to say: "This is so freaking awesome, and PLEASE be very careful, and also please keep going if the risks seem worth the benefits."

If you get a positive antibody result, have you thought about a personal challenge trial?

The big benefits to be gained from vaccination seem to me to be behavioral: going out, doing life similarly to the Before Times... which is similar to a partial/random/natural sort of "challenge trial".

I wonder if 1daysooner can or would be interested in keeping track of people who have tried the RaDVaC option, to build up knowledge (based on accidental exposures or intentional challenges) of some sort.

Comment by JenniferRM on What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers · 2020-09-12T07:22:42.900Z · LW · GW

Having read the original article, I was surprised at how long it was (compared to the brief excerpts), and how scathing it was, and how funny it was <3

> Criticizing bad science from an abstract, 10000-foot view is pleasant: you hear about some stuff that doesn't replicate, some methodologies that seem a bit silly. "They should improve their methods", "p-hacking is bad", "we must change the incentives", you declare Zeuslike from your throne in the clouds, and then go on with your day.
>
> But actually diving into the sea of trash that is social science gives you a more tangible perspective, a more visceral revulsion, and perhaps even a sense of Lovecraftian awe at the sheer magnitude of it all: a vast landfill—a great agglomeration of garbage extending as far as the eye can see, effluvious waves crashing and throwing up a foul foam of p=0.049 papers. As you walk up to the diving platform, the deformed attendant hands you a pair of flippers. Noticing your reticence, he gives a subtle nod as if to say: "come on then, jump in".

Comment by JenniferRM on Social Capital Paradoxes · 2020-09-12T06:41:44.747Z · LW · GW

It is a free country. No apology necessary <3

Also, maybe I'm at fault for NOT publishing and perishing, but rather (it could be argued) lurking and then enacting some kind of morally dubious "gotcha" maneuver?

In any case, it has generally been my ambition to be cited more in the manner of Socrates than Plato ;-)

Comment by JenniferRM on Social Capital Paradoxes · 2020-09-11T01:43:02.566Z · LW · GW

The viruses themselves are the prototypical genes that move horizontally. The bacteria use their control mostly to resist these "new genes", but sometimes they can't keep out the horizontal genes of the virus, and then the bacteria spends non-trivial energy generating viral particles that can invade other bacteria, and so on.

The bacterial genes that jump from one vertical lineage to another (and that make bacterial phylogenetic tree building a bit wonky) are sometimes carried by viruses. Incidental bacterial genes get packaged in viral capsids by accident, then those viral particles get into a bacterium and somehow fail to exploit the host optimally, and the host has many descendants anyway. You are right that this is somewhat random/accidental. Neither viruses nor bacteria seem to generally "intend" it in coherent ways.

Sometimes bacteria have a second little genome called a plasmid, and these often contain genes for the construction of a tube that injects neighboring bacteria with the little secondary genome (but not the main genome). These "conjugative plasmids" are engaged in non-random horizontal gene transmission. Conjugative plasmids tend to be more aligned than viral prophages (that lurk in the host genome for several generations) which are more aligned than pure lytic viruses.

The more horizontal the transmission mode, the worse it is for the bacteria's genetic interests, in very simple and concrete ways related to the normal operation of the genes following their normal "selfish gene" lifecycles.

Comment by JenniferRM on Social Capital Paradoxes · 2020-09-11T01:29:00.051Z · LW · GW

1. Why do so many good things have horizontal transmission structures?

Memetic horizontal transmission that is mediated by human normative judgement routes around this filter... in some manner. Maybe such memes are slightly hacking your perceptions of goodness? Also, maybe these filters improve things some.

Far be it from me to claim that modern horizontally transmitted cultural ideas are bad. I would never...

However... between 1800 and 1950 it would have seemed to little children that smoking was terrible, but then if their peers smoked, smoking started to seem like a way to minimize the disgust, and shortly it began to seem pretty great, and this became the widely shared common wisdom among adults in a society with very high smoking rates. With smoking there was careful centralized analysis, with data collection, and peer review, and careful reasoning about causal models. Eventually we figured out: nope. I can tell you a story about how my mom stopped smoking when I was a kid, and then my brother and I copied her by not starting.

I would argue that "good careful reasoning" is the exception that proves the rule in some sense, because lots of so-called Official Science(!) is pretty shit (parts of tongues that taste different things? wtf? is it all just gossip? when did academia give up on "nullius in verba"?) and the good stuff tends to be invented by a TINY group of people and spreads via *baroquely* cautious transmission patterns.

2. The conclusion seems severe and counterintuitive...

In memetics, this is what trusted priests or scholars are an attempted patch on, I think? I'm sorry. I don't know any good news here.

Biologically, viruses prey on bacteria. Both are made of nucleic acid but some nucleic acid content is aligned with the protein inside the membrane... and some isn't. The "better" viruses are prophages (integrating with the genome and conferring useful phenotypes)... but often they go lytic eventually... and then the infected bacteria's daughter's daughter's daughter's daughters have a regret-worthy outcome.

If people have lots of unprotected sex, a venereal disease eventually finds the niche created by that aggregate behavior. If people fly around in airplanes while sneezing on each other, an aerosolized disease eventually finds that niche. If people drink from a river downstream of where other people poop in the river (especially if some of the drinkers then travel back upstream), cholera happens. When you feed cows to cows, prions grow exponentially and eventually there's mad cow disease. If elementary school teachers who have never been outside of the school system teach school children who become teachers who teach children who become teachers... you will end up with the curricular equivalent of bovine spongiform encephalopathy.

How long until twitter collapses? Has twitter died already? I'm sorry. The circle of life is best when it circles very VERY widely. Gotta turn it to mulch. Then have fungus eat it. Then let the fungus dry out in direct sun for two years. Then use it CAREFULLY. It makes me sad, but I think it is true. Do not recycle "vital" things!

3. What about The Moral Economy by Samuel Bowles?

I have not read the book you cite. I want to defy the data. I would suggest that high social capital causes prosperity and enables trusted third party mediation, and then, because people socially trust the third party mediators, it enables quick interactions based on shared traditions (that affirm trust and that often rely on deeper "trust rails" that go back decades or often centuries (often literally to shared ancestors)). This could cause correlations in single temporal snapshots of data. Massaging such snapshots in modern academic writing, people have an incentive to tell happy lies in public like "prosperity causes social capital". The traditional theories here (and the long term economic demography) suggest to me that great wealth is generally squandered by the fourth generation, so the data collection I'd like to see would span 6 generations over various cultural cross-sections, or else it would span maybe like 10 generations (to hopefully see two full cycles)? I would love to be wrong about this, but my priors are strong enough that I want to see very very rigorous data collection methods as part of the presentation of why my priors here should be weakened. Maybe writing a rigorous book review of the contents of The Moral Economy would be virtuous!

If there was a key countervailing idea here, for me it is "acceleration itself". Progress. The increase in the number of humans, and per capita energy use, and humane culture-making activities. Old functional things are being copied and the "oomph" has not burned out... yet! :-)

0. Where did this theory come from and is it horizontal or vertical itself?

I'm going to assume you asked this, and answer it! I invented the theory, basically.

The germinating idea is: Dawkins is just wrong. He used to go around constantly dunking on TRADITIONAL religion about how it was a virus, and he was just... wrong. Many many many generations of shared co-evolution often tames parasites by aligning them deeply with more "metabolic" vertical replicators. Mitochondria are tamed bacterial parasites. The V(D)J combinatorial immune system is a tamed viral parasite. Endosymbiosis is a thing, but it works in a certain way.

Tiny fast evolving things (like cults) are sources of novelty, and larger slower things (like 1000 year old civilizations with old co-evolved religions) must tame them, or be devoured. Novel horizontal culture is often pretty bad. I have extended this theory in various conversational domains going back maybe 15 years to before the launch of Overcoming Bias but it always seemed gauche (and inconsistent with the theory itself) to bring it up ONLINE in a community deeply built around "the rejection of the supernatural mumbo-jumbo of one's parents".

I have talked about the importance of vertically transmitted ideas with my parents (who are themselves second generation atraditionalists), and they roll their eyes, but are happy enough to tolerate my antics when I "larp" "filial piety". In the meantime, filial piety occurs in many religions. The Abrahamic injunction is obvious. If Confucianism has ONE PUNCH, that punch is arguably "filial piety". I have purposefully not talked about horizontal meme transmission where Google can see, but if that goal is going to fail at this particular historical juncture, during a horizontally transmitted global plague, then I guess I'm ok with it? Naturally it would be better if my children could teach the theory "as taught by their mother" (me), but they do not exist (yet?), and so they can't.

(I would not strongly object if you deleted this post before it can be seen by Google and generally become less of a vertical meme and more of a horizontal meme... Evangelism just seems mildly evil to me, but I'm not evangelical about evangelism being bad... because that would kinda defeat the point? My interest here is mostly... credit assignment I guess? I'm a HUGE fan of thinking about The Credit Assignment Problem. If I have done wrongly, or well, then it seems generally proper that I be credited as having done wrongly or well. Similarly for you. Similarly for all choice-making beings.)

Comment by JenniferRM on Covid-19 6/18: The Virus Goes South · 2020-06-20T01:53:37.998Z · LW · GW

Air conditioning! As near as I can tell, "indoor air conditioning" is the key mechanistic story for "covid in June".

You can skip the rest if you like, but for details and speculation... This result is actually kind of happy/surprising to me!

When the right was protesting against the covid shutdown I saw a lot of morbid covid speculation about how bad it would be on the left. Then the left was protesting against the police, and some on the right were complaining about how bad it could make covid... But I haven't been able to find any big structural/demographic signals related to any of these protests. We seem to have gotten lucky with the protesting: it didn't increase the plague much! I was worried about it, and so this feels like a relief to me.

More happy news in the structural department is a salon in Missouri that functioned as a natural experiment. Two hairdressers tested positive. One was cutting hair while symptomatic and may have given it to the other somehow during a work week in the same room. Both wore masks. All customers wore masks. 140 customers were tracked by computer. 46 of them consented to testing. All came back negative! Maybe there's structural censorship of key data (lying officials, or broken medical tests) but if not then ZERO positive customers is a non-trivial signal about mask efficacy! :-)
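The intuition that "zero positives out of 46" is a non-trivial signal can be checked with a back-of-envelope binomial calculation (the 10% attack rate below is my own illustrative assumption, not a number from the study):

```python
# If masks did nothing and each exposed customer had (say) a 10% chance
# of getting infected, how surprising would 0 positives out of 46 be?
assumed_attack_rate = 0.10        # illustrative assumption, not measured
n_tested = 46
p_zero_positives = (1 - assumed_attack_rate) ** n_tested
print(round(p_zero_positives, 4))  # ≈ 0.0079, i.e. under a 1% chance
```

So unless the true maskless attack rate were far below 10%, seeing zero infections is quite unlikely by chance alone.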

The one person who seems to have gotten infected by the hairdresser was maybe the other hairdresser in the shop. This loops back around, in my mind, to "sharing a building" and also calls attention to "air conditioning" as a key mechanistic driver...

As in the OP, "Houston and Phoenix and Miami" are hotspots now, and I think the common denominator is that in June those are all pretty hot places where the heat drives people indoors to get some AC (which tends to be recirculated air).

Spending a lot of time breathing recirculated indoor air looks like the boogey man to me at this point.

Comment by JenniferRM on Credibility of the CDC on SARS-CoV-2 · 2020-03-09T00:47:10.620Z · LW · GW

> I don't recommend the site to friends or family because I know posts like this always pop up and I don't want to expose people to this...

This is just basically correct! Good job! :-)

Arguably, most thoughts that most humans have are either original or good but not both. People seriously attempting to have good, original, pragmatically relevant thoughts about nearly any topic normally just shoot themselves in the foot. This has been discussed ad nauseum.

This place is not good for cognitive children, and indeed it MIGHT not be good for ANYONE! It could be that "speech to persuade" is simply a cultural and biological adaptation of the brain which primarily exists to allow people to trick other people into giving them more resources, and the rest is just a spandrel at best.

It is admirable that you have restrained yourself from spreading links to this website to people you care about and you should continue this practice in the future. One experiment per family is probably more than enough.


HOWEVER, also, you should not try to regulate speech here so that it is safe for dumb people without the ability to calculate probabilities, detect irony, doubt things they read, or otherwise tolerate cognitive "ickiness" that may adhere to various ideas not normally explored or taught.

There is a possibility that original thinking is valuable, and it is possible that developing the capacity for such thinking through the consideration of complex topics is also valuable. This site presupposes the value of such cognitive experimentation, and then follows that impulse to whatever conclusions it leads to.

Regulating speech here to a level so low as to be "safe for anyone to be exposed to" would basically defeat the point of the site.

Comment by JenniferRM on Credibility of the CDC on SARS-CoV-2 · 2020-03-09T00:21:32.785Z · LW · GW

The word "cuarenta", in Spanish, means 40.

In English, if the word "quarantine" is applied to an infection-avoiding isolation period of either more or less than 40 days, that's arguably an abuse of linguistic tradition that reveals whoever says it to be in need of remedial education.

Maybe? *I* probably need remedial education, too! Very prestigious linguists have asserted here or there that linguistics is a descriptivist science, and so, from their very prestigious perspective, any use of language is as good as any other use of language...

Still, it does give one pause.

How many people in public health read or write Latin anymore? Maybe there are some things that people used to take so MUCH for granted that no one thought to spell them out? Like "40 day periods should last 40 days" is basically a tautology. Should THAT go into a medical book and become testable knowledge for doctors?

It would be scary for medical inferences based in the obvious literal meaning of words to be valid, so they are probably not valid. I'm sure everything is fine.

Comment by JenniferRM on The LessWrong 2018 Review · 2019-12-08T08:01:09.255Z · LW · GW

I hunted your comment down here and upvoted it strongly.

I basically only write comments, and when I write "comments for the ages" that I feel proud of, I consider it a good sign if they (1) get many upvotes (especially votes that arrive after lots of competing sibling comments already exist) and (2) do not get any responses (except "Wow! Good! Thanks!" kind of stuff).

Looking at "first level comments" to worthwhile OPs according to a measure like this might provide some interesting and reasonably brief postscripts.

Applying the same basic measure to posts themselves, if an OP gets a large number of direct replies that are highly upvoted that OP may not be dense with relatively useful and/or flawless content. (Though there are probably exceptions that could be detected by thoughtful curating... for example, if the OP is a request for ideas then a lot of highly voted comments are kinda the point.)
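If one wanted to operationalize the heuristic above, a toy scoring function might look like this (all field names, weights, and the "thanks" filter are hypothetical inventions of mine, not anything LessWrong actually exposes):

```python
# Toy version of the "comment for the ages" heuristic described above:
# high karma plus few substantive replies suggests a comment that
# settled its subtopic rather than opening new disputes.

def standalone_score(comment):
    """Score a first-level comment; higher = more 'postscript-worthy'."""
    karma = comment["karma"]
    # Replies beyond simple thanks count against the comment, since they
    # hint that it left something unresolved.
    substantive_replies = sum(
        1 for r in comment["replies"] if not r.get("is_thanks", False)
    )
    return karma / (1 + substantive_replies)

comments = [
    {"karma": 40, "replies": []},                        # no pushback
    {"karma": 40, "replies": [{"is_thanks": True}, {}]}, # one real reply
]
scores = [standalone_score(c) for c in comments]
print(scores)  # the reply-free comment scores higher: [40.0, 20.0]
```

The same function applied to posts would invert the interpretation, per the caveat above: a post drawing many highly upvoted direct replies may just be a successful request for ideas.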

Comment by JenniferRM on The unexpected difficulty of comparing AlphaStar to humans · 2019-12-05T20:45:10.086Z · LW · GW

I think the abstract question of how to cognitively manage a "large action space" and "fog of war" is central here.

In some sense StarCraft could be seen as turn based, with each turn lasting for 1 microsecond, but this framing makes the action space of a beginning-to-end game *enormous*. Maybe not so enormous that a bigger data center couldn't fix it? In some sense, brute force can eventually solve ANY problem tractable to a known "vaguely O(N*log(N))" algorithm.
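To make "enormous" concrete, here is a rough calculation under made-up assumptions (a 10-minute game, and an implausibly tiny 2 legal actions per micro-turn):

```python
import math

# StarCraft reframed as a turn-based game with 1-microsecond turns.
game_seconds = 10 * 60                      # assume a 10-minute game
micro_turns = game_seconds * 1_000_000      # 600 million turns
actions_per_turn = 2                        # absurdly generous lower bound

# The number of distinct action sequences is actions_per_turn ** micro_turns;
# even the base-10 logarithm of that count is astronomical.
log10_sequences = micro_turns * math.log10(actions_per_turn)
print(f"{log10_sequences:.3e}")  # ≈ 1.806e+08 digits just to write the count down
```

So even under these cartoonishly conservative assumptions, the trajectory count has on the order of 180 million digits, which is why the microsecond-turn framing is not directly attackable by brute force.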

BUT facing "a limit that forces meta-cognition" is a key idea for "the reason to apply AI to an RTS next, as opposed to a turn based game."

If DeepMind solves it with "merely a bigger data center" then there is a sense in which maybe DeepMind has not yet found the kinds of algorithms that deal with "nebulosity" as an explicit part of the action space (and which are expected by numerous people (including me) to be widely useful in many domains).

(Tangent: The Portia spider is relevant here because it seems that its whole schtick is that it scans with its (limited, but far seeing) eyes, builds up a model of the world via an accumulation of glances, re-uses (limited) neurons to slowly imagine a route through that space, and then follows the route to sneak up on other (similarly limited, but less "meta-cognitive"?) spiders which are its prey.)

No matter how fast something can think or react, SOME game could hypothetically be invented that forces a finitely speedy mind to need action space compression and (maybe) even compression of compression choices. Also, the physical world itself appears to contain huge computational depths.

In some sense then, the "idea of an AI getting good *at an RTS*" is an attempt (which might have failed or might be poorly motivated) to point at issues related to cognitive compression and meta-cognition. There is an implied research strategy aimed at learning to use a pragmatically finite mind to productively work on a pragmatically infinite challenge.

The hunch is that maybe object level compression choices should always have the capacity to suggest not just a move IN THE GAME of doing certain things, but also a move IN THE MIND to re-parse the action space, compress it differently, and hope to bring a different (and more appropriate) set of "reflexes" to bear.

The idea of a game with "fog of war" helps support this research vision. Some actions are pointless for the game, but essential to ensuring the game is "being understood correctly" and game designers adding fog of war to a video game could be seen as an attempt to represent this possibly universally inevitable cognitive limitation in a concretely-ludic symbolic form.

If an AI is trained by programmers "to learn to play an RTS" but that AI doesn't seem to be learning lessons about meta-cognition or clock/calendar management, then it feels a little bit like the AI is not learning what we hoped it was supposed to learn from "an RTS".

This is why these points made by maximkazhenkov in a neighboring comment are central:

> The agents on [the public game] ladder don't scout much and can't react accordingly. They don't tech switch midgame and some of them get utterly confused in ways a human wouldn't.

I think this is conceptually linked (through the idea of having strategic access to the compression strategy currently employed) to this thing you said:

> [You] can have a conversation with a starcraft player while he's playing. It will be clear the player is not paying you his full attention at particularly demanding moments, however... I considered using system 1 and 2 analogies, but because of certain reservations I have with the dichotomy... [that said] there is some deep strategical thinking being done at the instinctual level. This intelligence is just as real as system 2 intelligence and should not be dismissed as being merely reflexes.

In the story about metacognition, verbal powers seem to come up over and over.

I think a lot of people who think hard about this understand that "mere reflexes" are not mere (especially when deeply linked to a reasoning engine that has theories about reflexes).

Also, I think that human meta-cognitive processes might reveal themselves to some degree in the apparent fact that a verbal summary can be generated by a human *in parallel without disrupting the "reflexes" very much*... then sometimes there is a pause in the verbalization while a player concentrates on <something>, and then the verbalization resumes (possibly with a summary of the 'strategic meaning' of the actions that just occurred).

Arguably, to close the loop and make the system more like the general intelligence of a human, part of what should be happening is that any reasoning engine bolted onto the (constrained) reflex engine should be able to be queried by ML programmers to get advice about what kinds of "practice" or "training" needs to be attempted next.

The idea is that by *constraining* the "reflex engine" (to be INadequate for directly mastering the game) we might be forced to develop a reasoning engine for understanding the reflex engine and squeezing the most performance out of it in the face of constraints on what is known and how much time there is to correlate and integrate what is known.

A decent "reflexive reasoning engine" (i.e. a reasoning engine focused on reflex engines) might be able to nudge the reflex engine (every 1-30 seconds or so?) to do things that allow the reflex engine to scout brand new maps or change tech trees or do whatever else "seems meta-cognitively important".

A good reasoning engine might be able to DESIGN new maps that would stress test a specific reflex repertoire that it thinks it is currently bad at.

A *great* reasoning engine might be able to predict in the first 30 seconds of a game that it is facing a "stronger player" (with a more relevant reflex engine for this game) such that it will probably lose the game for lack of "the right pre-computed way of thinking about the game".

A really FANTASTIC reflexive reasoning engine might even be able to notice a weaker opponent and then play a "teaching game" that shows that opponent a technique (a locally coherent part of the action space that is only sometimes relevant) that the opponent doesn't understand yet, in a way that might cause the opponent's own reflexive reasoning engine to understand its own weakness and be correctly motivated to practice a way to fix that weakness.

(Tangent: to return to the Portia spider tangent above: it preyed on other spiders with similar spider limits. One of the fears here is that all this metacognition, when it occurs in nature, is often deployed in service to competition, either with other members of the same species or else to catch prey. Giving these powers to software entities that ALREADY have better thinking hardware than humans in many ways... well... it certainly gives ME pause. Interesting to think about... but scary to imagine being deployed in the midst of WW3.)

It sounds, Mathias, like you understand a lot of the centrality and depth of "trained reflexes" intuitively from familiarity with both StarCraft and ML, and part of what I'm doing here is probably just restating large areas of agreement in a new way. Hopefully I am also pointing to other things that are relevant and unknown to some readers :-)

> If what we really care about is proving that it can do long term thinking and planning in a game with a large action space and imperfect information, why choose starcraft? Why not select something like Frozen Synapse where the only way to win is to fundamentally understand these concepts?

Personally, I did not know that Frozen Synapse existed before I read your comment here. I suspect a lot of people didn't... and also I suspect that part of using StarCraft was simply for its PR value as a beloved RTS classic with a thriving pro scene and deep emotional engagement by many people.

I'm going to go explore Frozen Synapse now. Thank you for calling my attention to it!

Comment by JenniferRM on The Power to Demolish Bad Arguments · 2019-09-03T02:44:58.049Z · LW · GW
"...go ahead and tell me your causal model and I'll probably cook up an obvious example to satisfy myself in the first minute of your explanation."

I think maybe we agree... verbosely... with different emphasis? :-)

At least I think we could communicate reasonably well. I feel like the danger, if any, would arise from playing example ping pong and having the serious disagreements arise from how we "cook (instantiate?)" examples into models, and "uncook (generalize?)" models into examples.

When people just say what their model "actually is", I really like it.

When people only point to instances I feel like the instances often under-determine the hypothetical underlying idea and leave me still confused as to how to generate novel instances for myself that they would assent to as predictions consistent with the idea that they "meant to mean" with the instances.

Maybe: intensive theories > extensive theories?

Comment by JenniferRM on The Power to Demolish Bad Arguments · 2019-09-03T01:01:07.927Z · LW · GW

> I appreciate your high-quality comment.

I likewise appreciate your prompt and generous response :-)

I think I see how you imagine a hypothetical example of "no net health from insurance" might work as a filter that "passes" Hanson's claim.

In this case, I don't think your example works super well and might almost cause more problems than not?

Differences of detail in different people's examples might SUBTRACT from attention to key facts relevant to a larger claim because people might propose different examples that hint at different larger causal models.

Like, if I was going to give the strongest possible hypothetical example to illustrate the basic idea of "no net health from insurance" I'd offer something like:

EXAMPLE: Alice has some minor symptoms of something that would clear up by itself and because she has health insurance she visits a doctor. ("Doctor visits" is one of the few things that health insurance strongly and reliably causes in many people.) While there she gets a nosocomial infection that is antibiotic resistant, lowering her life expectancy. This is more common than many people think. Done.

This example is quite different from your example. In your example medical treatment is good, and the key difference is basically just "pre-pay" vs "post-pay".

(Also, neither of our examples covers the issue where many innovative medical treatments often lower mortality due to the disease they aim at while, somehow (accidentally?) RAISING all cause mortality...)

In my mind, the substantive big picture claim rests ultimately on the sum of many positive and negative factors, each of which arguably deserves "an example of its own". (Things that raise my confidence quite a lot is often hearing the person's own best argument AGAINST their own conclusion, and then hearing an adequate argument against that critique. I trust the winning mind quite a bit more when someone is of two minds.)

No example is going to JUSTIFIABLY convince me, and the LACK of an example for one or all of the important factors wouldn't prevent me from being justifiably convinced by other methods that don't route through "specific examples".

ALSO: For that matter, I DO NOT ACTUALLY KNOW if Robin Hanson is actually right about medical insurance's net results, in the past or now. I vaguely suspect that he is right, but I'm not strongly confident. Real answers might require studies that haven't been performed? In the meantime I have insurance because "what if I get sick?!" and because "don't be a weirdo".


I think my key crux here has something to do with the rhetorical standards and conversational norms that "should" apply to various conversations between different kinds of people.

I assumed that having examples "ready-to-hand" (or offered early in a written argument) was something that you would actually be strongly in favor of (and below I'll offer a steelman in defense of), but then you said:

I wouldn't insist that he has an example "ready to hand during debate"; it's okay if he says "if you want an example, here's where we can pull one up".

So for me it would ALSO BE OK to say "If you want an example, I'm sorry. I can't think of one right now. As a rule, I don't think in terms of fictional stories. I put effort into thinking in terms of causal models and measurables and authors with axes to grind and bridging theories and studies that rule out causal models and what observations I'd expect from differently weighted ensembles of the models not yet ruled out... Maybe I can explain more of my current working causal model and tell you some authors that care about it, and you can look up their studies and try to find one from which you can invent stories if that helps you?"

If someone said that TO ME I would experience it as a sort of a rhetorical "fuck you"... but WHAT a fuck you! {/me kisses her fingers} Then I would pump them for author recommendations!

My personal goal is often just to find out how the OTHER person feels they do their best thinking, run that process under emulation if I can, and then try to ask good questions from inside their frames. If they have lots of examples there's a certain virtue to that... but I can think of other good signs of systematically productive thought.


If I was going to run "example based discussion" under emulation to try to help you understand my position, I would offer the example of John Hattie's "Visible Learning".

It is literally a meta-meta-analysis of education.

It spends the first two chapters just setting up the methodology and responding preemptively to quibbles that will predictably come when motivated thinkers (like classroom teachers that the theory says are teaching suboptimally) try to hear what Hattie has to say.

Chapter 3 finally lays out an abstract architecture of principles for good teaching, by talking about six relevant factors and connecting them all (very very abstractly and loosely) to: tight OODA loops (though not under that name) and Popperian epistemology (explicitly).

I'll fully grant that it can take me an hour to read 5 pages of this book, and I'm stopping a lot and trying to imagine what Hattie might be saying at each step. The key point for me is that he's not filling the book with examples, but with abstract empirically authoritative statistical claims about a complex and multi-faceted domain. It doesn't feel like bullshit, it feels like extremely condensed wisdom.

Because of academic citation norms, in some sense his claims ultimately ground out in studies that are arguably "nothing BUT examples"? He's trying to condense >800 meta-analyses that cover >50k actual studies that cover >1M observed children.

I could imagine you arguing that this proves how useful examples are, because his book is based on over a million examples, but he hasn't talked about an example ONCE so far. He talks about methods and subjectively observed tendencies in meta-analyses mostly, trying to prepare the reader with a schema in which later results can land.

Plausibly, anyone could follow Hattie's citations back to an interesting meta-analysis, look at its references, track back to a likely study, look in their methods section, and find their questionnaires, track back to the methods paper validating the questionnaire, then look in the supplementary materials to get specific questionnaire items... Then someone could create an imaginary kid in their head who answered that questionnaire some way (like in the study) and then imagine them getting the outcome (like in the study) and use that scenario as "the example"?

I'm not doing that as I read the book. I trust that I could do the above, "because scholarship" but I'm not doing it. When I ask myself why, it seems like it is because it would make reading the (valuable seeming) book EVEN SLOWER?


I keep looping back in my mind to the idea that a lot of this strongly depends on which people are talking and what kinds of communication norms are even relevant, and I'm trying to find a place where I think I strongly agree with "looking for examples"...

It makes sense to me that, if I were in the role of an angel investor, and someone wanted $200k from me, and offered 10% of their 2-month-old garage/hobby project, then asking for examples of various of their business claims would be a good way to move forward.

They might not be good at causal modeling, or good at stats, or good at scholarship, or super verbal, but if they have a "native faculty" for building stuff, and budgeting, and building things that are actually useful to actual people... then probably the KEY capacities would be detectable as a head full of examples to various key questions that could be strongly dispositive.

Like... a head full of enough good examples could be sufficient for a basically neurotypical person to build a valuable company, especially if (1) they were examples that addressed key tactical/strategic questions, and (2) no intervening bad examples were ALSO in their head?

(Like if they had terrible examples of startup governance running around in their heads, these might eventually interfere with important parts of being a functional founder down the road. Detecting the inability to give bad examples seems naively hard to me...)

As an investor, I'd be VERY interested in "pre-loaded ready-to-hand theories" that seem likely to actually work. Examples are kinda like "pre-loaded ready-to-hand theories"? Possession of these theories in this form would be a good sign in terms of the founder's readiness to execute very fast, which is a virtue in startups.

A LACK of ready-to-hand examples would suggest that even a good and feasible idea whose premises were "merely scientifically true" might not happen very fast if an angel funded it and the founder had to instantly start executing on it full time.

I would not be offended if you want to tap out. I feel like we haven't found a crux yet. I think examples and specificity are interesting and useful and important, but I merely have intuitions about why, roughly like "duh, of course you need data to train a model", not any high church formal theory with a fancy name that I can link to in wikipedia :-P

Comment by JenniferRM on The Power to Demolish Bad Arguments · 2019-09-02T18:09:35.915Z · LW · GW

I have a strong appreciation for the general point that "specificity is sometimes really great", but I'm wondering if this point might miss the forest for the trees with some large portion of its actual audience?

If you buy that in some sense all debates are bravery debates then the audience can matter a lot, and perhaps this point addresses central tendencies in "global english internet discourse" while failing to address central tendencies on LW?

There is a sense in which nearly all highly general statements are technically false, because they admit of at least some counter examples.

However, any such statement might still be useful in a structured argument of very high quality, perhaps as an illustration of a troubling central tendency, or as a "lemma" in a multi-part probabilistic argument.

It might even be the case that the MEDIAN EXAMPLE of a real tendency is highly imperfect without that "demolishing" the point.

Suppose, for example, that someone has focused a lot on higher-level structural truths whose evidential basis was, say, a thorough exploration of many meta-analyses about a given subject.

"Mel the meta-meta-analyst" might be communicating summary claims that are important and generally true that "Sophia the specificity demander" might rhetorically "win against" in a way that does not structurally correspond to the central tendencies of the actual world.

Mel might know things about medical practice without ever having treated a patient or even talked to a single doctor or nurse. Mel might understand something about how classrooms work without being a teacher or ever having visited a classroom. Mel might know things about the behavior of congressional representatives without ever working as a congressional staffer. If forced to confabulate an exemplar patient, or exemplar classroom, or an exemplar political representative, the details might be easy to challenge even while the claim about central tendencies remains correct.

Naively, I would think that for Mel to be justified in his claims (even WITHOUT having exemplars ready-to-hand during debate) Mel might need to be moderately scrupulous in his collection of meta-analytic data, and know enough about statistics to include and exclude studies or meta-analyses in appropriately weighted ways. Perhaps he would also need to be good at assessing the character of authors and scientists to be able to predict which ones are outright faking their data, or using incredibly sloppy data collection?

The core point here is that Sophia might not be led to the truth SIMPLY by demanding specificity without regard to the nature of the claims of her interlocutor.

If Sophia thinks this tactic gives her "the POWER to DEMOLISH arguments" in full generality, that might not actually be true, and it might even lower the quality of her beliefs over time, especially if she mostly converses with smart people (worth learning from, in their area(s) of expertise) rather than idiots (nearly all of whose claims might perhaps be worth demolishing on average).

It is totally possible that some people are just confused and wrong (as, indeed, many people seem to be, on many topics... which is OK because ignorance is the default and there is more information in the world now than any human can integrate within a lifetime of study). In that case, demanding specificity to demolish confused and wrong arguments might genuinely and helpfully debug many low quality abstract claims.

However, I think there's a lot to be said for first asking someone about the positive rigorous basis of any new claim, to see if the person who brought it up can articulate a constructive epistemic strategy.

If they have a constructive epistemic strategy that doesn't rely on personal knowledge of specific details, that would be reasonable, because I think such things ARE possible.

A culturally local example might be Hanson's general claim that medical insurance coverage does not appear to "cause health" on average. No single vivid patient generates this result. Vivid stories do exist here, but they don't adequately justify the broader claim. Rather, the substantiation arises from tallying many outcomes in a variety of circumstances and empirically noticing relations between circumstances and tallies.

If I was asked to offer a single specific positive example of "general arguments being worthwhile" I might nominate Visible Learning by John Hattie as a fascinating and extremely abstract synthesis of >1M students participating in >50k studies of K-12 learning. In this case a core claim of the book is that mindless teaching happens sometimes, nearly all mindful attempts to improve things work a bit, and very rarely a large number of things "go right" and unusually large effect sizes can be observed. I've never seen one of these ideal classrooms I think, but the arguments that they have a collection of general characteristics seem solid so far.

Maybe I'll change my mind by the end? I'm still in progress on this particular book, which makes it sort of "top of mind" for me, but the lack of specifics in the book present a readability challenge rather than an epistemic challenge ;-P

The book Made to Stick, by contrast, uses Stories that are Simple, Surprising, Emotional, Concrete, and Credible to argue that the best way to convince people of something is to tell them Stories that are Simple, Surprising, Emotional, Concrete, and Credible.

As near as I can tell, Made to Stick describes how to convince people of things whether or not the thing is true, which means that if these techniques work (and can in fact cause many false ideas to spread through speech communities with low epistemic hygiene, which the book arguably did not really "establish") then a useful epistemic heuristic might be to give a small evidential PENALTY to all claims illustrated merely via vivid example.

I guess one thing I would like to say here at the end is that I mean this comment in a positive spirit. I upvoted this article and the previous one, and if the rest of the sequence has similar quality I will upvote those as well.

I'm generally IN FAVOR of writing imperfect things and then unpacking and discussing them. This is a better than median post in my opinion, and deserved discussion, rather than deserving to be ignored :-)

Comment by JenniferRM on Unconscious Economics · 2019-02-27T22:30:02.331Z · LW · GW

David Friedman is awesome. I came to the comments to give a different Friedman explanation for one generator of economic rationality from a different Friedman book than "strangepoop" did :-)

In "Law's Order" (which sort of explores how laws that ignore incentives or produce bad incentives tend to be predictably suboptimal) Friedman points out that much of how people decide what to do is based on finding someone who seems to be "winning" at something and copying them.

(This take is sort of friendly to your "selectionist #3" option but explored in more detail, and applied in more contexts than to simply explain "bad things".)

Friedman doesn't use the term "mimesis", but this is an extremely long-lived academic keyword with many people who have embellished and refined related theories. For example, Peter Thiel has a mild obsession with Rene Girard who was obsessed with a specific theory of mimesis and how it causes human communities to work in predictable ways. If you want the extremely pragmatic layman's version of the basic mimetic theory, it is simply "monkey see, monkey do" :-P

If you adopt mimesis as THE core process which causes human rationality (which it might well not be, but it is interesting to think of a generator of pragmatically correct beliefs in isolation, to see what its weaknesses are and then look for those weaknesses as signatures of the generator in action), it predicts that no new things in the human behavioral range become seriously optimized in a widespread way until AFTER at least one (maybe many) rounds of behavioral mimetic selection on less optimized random human behavioral exploration, where an audience can watch who succeeds and who fails and copy the winners over and over.

The very strong form of this theory (that it is the ONLY thing) is quite bleak and probably false in general, however some locally applied "strong mimesis" theories might be accurate descriptions of how SOME humans select from among various options in SOME parts of real life where optimized behavior is seen but hard to mechanistically explain in other ways.

Friedman pretty much needed to bring up a form of "economic rationality" in his book because a common debating point in modern times is that incentives have nothing to do with, for example, criminal law, because criminals are mostly not very book smart, and often haven't even looked up (much less remembered) the number of years of punishment that any given crime might carry, and so "can't be affected by such numbers".

(Note the contrast to LW's standard inspirational theorizing about a theoretically derived life plan... around here actively encouraging people to look up numbers before making major life decisions is common.)

Friedman's larger point is that, for example, if burglary is profitable (perhaps punished by a $50 fine, even when the burglar has already sold their loot for $1500), then a child who has an uncle who has figured out this weird/rare trick and makes a living burgling homes will see an uncle who is rich and has a nice life and gives lavish presents at Christmas and donates a lot to the church and is friends with the pastor... That kid will be likely to mimic that uncle without looking up any laws or anything.

Over a long period of time (assuming no change to the laws) the same dynamic in the minds of many children could lead to perhaps 5% of the economy becoming semi-respected burglars, though it would be easy to imagine that another 30% of the private economy would end up focused on mitigating the harms caused by burglary to burglary victims?

(Friedman does not apply the mimesis model to financial crimes, or risky banking practices. However that's definitely something this theory of behavioral causation leads me to think about. Also, advertising seems to me like it might be a situation where harming random strangers in a specific way counts as technically legal, where the perpetration and harm mitigation of the act have both become huge parts of our economy.)

This theory probably under-determines the precise punishments that should be applied for a given crime, but as a heuristic it probably helps constrain punishment sizes to avoid punishments that are hilariously too small. It suggests that any punishment is too small which allows there to exist a "viable life strategy" that includes committing a crime over and over and then treating the punishment as a mere cost of business.
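Using the hypothetical numbers from the burglary example above ($1500 of loot fenced, a $50 fine), the "cost of business" test is just arithmetic:

```python
# The "cost of business" test, using the hypothetical numbers from
# the burglary example ($1500 of loot, a $50 fine):
def expected_profit(loot, fine, p_caught):
    return loot - p_caught * fine

# Even a burglar caught *every single time* still comes out ahead,
# so this punishment is hilariously too small:
profit_if_always_caught = expected_profit(1500, 50, 1.0)

# To make the strategy non-viable, the fine has to exceed
# loot / p_caught; at, say, a (made-up) 20% catch rate that's $7500:
break_even_fine = 1500 / 0.2
```

The 20% catch rate is an illustrative assumption, not a claim from the book; the point is just that the deterrent fine scales with the inverse of the catch probability.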

If you sent burglars to prison for "life without parole" on first offenses, mimesis theory predicts that it would put an end to burglary within a generation or four, but the costs of such a policy might well be higher than the benefits.

(Also, as Friedman himself pointed out over and over in various ways, incentives matter! If, hypothetically, burglary and murder are BOTH punished with "life without parole on first offense" AND murdering someone makes you less likely to be caught as a burglar, then the murder+burglary combination might be mimetically generated, since the pair of crimes could be mimetically viable even when burglary alone is not... If someone was trying to use data science to tune all the punishments to suppress anti-social mimesis, they should really be tuning ALL the punishments and keeping careful and accurate track of the social costs of every anti-social act as part of the larger model.)

In reality, it does seem to me that mimesis is a BIG source of valid and useful rationality for getting along in life, especially for humans who never enter Piaget's "Stage 4" and start applying formal operational reasoning to some things. It works "good enough" a lot of the time that I could imagine it being a core part of any organism's epistemic repertoire?

Indeed, entire cultures seem to exist where the bulk of humans lack formal operational reasoning. For example, anthropologists who study such things often find that traditional farmers (which was basically ALL farmers, prior to the enlightenment) with very clever farming practices don't actually know how or why their farming practices work. They just "do what everyone has always done", and it basically works...

One keyword that offers another path here is one Piaget himself coined: "genetic epistemology". This wasn't meant in the sense of DNA, but rather in the sense of "generative", like "where and how is knowledge generated". I think stage 4 reasoning might be one real kind of generator (see: science and technology), but I think it is not anything like the most common generator, neither among humans nor among other animals.

Comment by JenniferRM on Transhumanists Don't Need Special Dispositions · 2018-12-09T06:14:03.592Z · LW · GW

I can see two senses for what you might be saying...

I agree with one of them (see the end of my response), but I suspect you intend the other:

First, it seems clear to me that the value of a philosophy early on is a speculative thing, highly abstract, oriented towards the future, and latent in the literal expected value of the actions and results the philosophy suggests and envisions.

However, eventually, the actual results of actual people whose hands were moved by brains that contain the philosophy can be valued directly.

Basically, the value of the results of a plan or philosophy screens off the early expected value of the plan or philosophy... not entirely (because it might have been "the right play, given the visible cards" with the deal revealing low probability outcomes). However, bad results provide at least some Bayesian evidence of bad ideas without bringing more of a model into play.

So when you say that "the actual values of transhumanism" might be distinguished from less abstract "things done in the name of transhumanism" that feels to me like it could be a sort of category error related to expected value? If the abstraction doesn't address and prevent highly plausible failure modes of someone who might attempt to implement the abstract ideas, then the abstraction was bad.

(Worth pointing out: The LW/OB subculture has plenty to say here, though mostly by Hanson, who has been pointing out for over a decade that much of medicine is actively harmful and exists as a costly signal of fitness as an alliance partner aimed at non-perspicacious third parties through ostensible proofs of "caring" that have low actual utility with respect to desirable health outcomes. Like... it is arguably PART OF OUR CULTURE that "standard non-efficacious bullshit medicine" isn't "real transhumanism". However, that part of our culture maybe deserves to be pushed forward a bit more right now?)

A second argument that seems like it could be unpacked from your statement, that I would agree with, is that well formulated abstractions might contain within them a lot of valuable latent potential, and in the press of action it could be useful to refer back to these abstractions as a sort of True North that might otherwise fall from the mind and leave one's hands doing confused things.

When the fog of war descends, and a given plan seemed good before the fog descended, and no new evidence has arisen to the contrary, and the fog itself was expected, then sticking to the plan (however abstract or philosophical it may be) has much to commend it :-)

If this latter thing is all you meant, then... cool? :-)

Comment by JenniferRM on Transhumanists Don't Need Special Dispositions · 2018-12-08T20:28:47.114Z · LW · GW

Has someone been making bad criticisms of transhumanism lately?

In 2007, when this was first published, I think I understood which bravery debate this essay might apply to (/me throws some side-eye in the direction of Leon Kass et al), but in 2018 this sort of feels like something that (at least for a LW audience I would think?) has to be read backwards to really understand its valuable place in a larger global discourse.

If I'm trying to connect this to something in the news literally in the last week, it occurs to me to think about He Jiankui's recent attempt to use CRISPR technology to give HIV-immunity to two girls in China, which I think is very laudable in the abstract but also highly questionable as actually implemented based on current (murky and confused) reporting.

Basically, December of 2018 seems like a bad time to "go abstract" in favor of transhumanism, when the implementation details of transhumanism are finally being seriously discussed, and the real and specific challenges of getting the technical and ethical details right are the central issue.

Comment by JenniferRM on Is Clickbait Destroying Our General Intelligence? · 2018-12-03T08:53:18.640Z · LW · GW

One thing to keep in mind is sampling biases in social media, which are HUGE.

Even if we just had pure date ordered posts from people we followed, in a heterogeneous social network with long tailed popularity distributions the "median user" sees "the average person they follow" having more friends than them.
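That sampling bias is just the classic "friendship paradox", which shows up even on a made-up star-shaped network:

```python
# Classic "friendship paradox" on a made-up star-shaped network:
# one popular hub, four ordinary users.
friends = {
    'hub': ['a', 'b', 'c', 'd'],
    'a': ['hub'], 'b': ['hub'], 'c': ['hub'], 'd': ['hub'],
}

def degree(user):
    return len(friends[user])

# Average friend count across the users themselves:
avg_degree = sum(degree(u) for u in friends) / len(friends)

# Average friend count of the people each user sees in their feed:
seen = [degree(v) for u in friends for v in friends[u]]
avg_seen = sum(seen) / len(seen)

# avg_seen (2.5) exceeds avg_degree (1.6): the typical user's
# friends are more popular than the typical user.
```

The long-tailed popularity distribution of real social networks makes this gap much larger than in this five-node toy.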

Also, posting behavior tends to also have a long tail, so sloppy prolific writers are more visible than slow careful writers. (Arguably Asimov himself was an example here: he was *insanely* prolific. Multiple books a year for a long time, plus stories, plus correspondence.)

Then, to make the social media sampling challenges worse, the algorithms surface content to mere users that is optimized for "engagement", and what could be more engaging than the opportunity to tell someone they are "wrong on the Internet"? Unless someone is using social media very *very* mindfully (like trying to diagonalize what the recommendation engines think of them) they are going to get whatever causes them to react.

I don't know what is really happening to the actual "average mind" right now, but I don't think many other people know either. If anyone has strong claims here, it makes me very curious about their methodology.

The newsfeed team at Facebook probably has the data to figure a lot of this out, but there is very little incentive for them to be very critical or tell the truth to the public. However, in my experience, the internal cultures of tech companies are often not that far below/behind the LW zeitgeist and I think engineering teams sometimes even go looking for things like "quality metrics" that they can try to boost (counting uses of the word "therefore" or the equivalent idea that uses semantic embedding spaces instead) as a salve for their consciences.

More deeply, like on historical timescales, I think that repeated low level exposure to lying liars improves people's bullshit detectors.

By modern standards, people who first started listening to radio were *insanely gullible* in response to the sound of authoritative voices, both in the US and in Germany. Similarly for TV a few decades later. The very first ads on the Internet (primitive though they were) had incredibly high conversion rates... For a given "efficacy" of any kind of propaganda, more of the same tends to have less effect over time.

I fully expect this current media milieu to be considered charmingly simple, with gullible audiences and hamhanded influence campaigns, relative to the manipulative tactics that will be invented in future decades, because this stuff will stop working :-)

Comment by JenniferRM on In Logical Time, All Games are Iterated Games · 2018-10-10T16:18:09.254Z · LW · GW
(You might think meta-iteration involves making the other player forget what it learned in iterated play so far, so that you can re-start the learning process, but that doesn't make much sense if you retain your own knowledge; and if you don't, you can't be learning!)

If I was doing meta-iteration my thought would be to maybe turn the iterated game into a one-shot game of "taking the next step from a position of relative empirical ignorance and thereby determining the entire future".

So perhaps make up all the plausible naive hunches that I or my opponent might naively believe (update rules, prior probabilities, etc), then explore the combinatorial explosion of imaginary versions of us playing the iterated game starting from these hunches. Then adopt the hunch(es) that maximizes some criteria and play the first real move that that hunch suggests.

This would be like adopting tit-for-tat in iterated PD *because that seems to win tournaments*.

After adopting this plan your in-game behavior is sort of simplistic (just sticking to the initial hunch that tit-for-tat would work) even though many bits of information about the opponent are actually arriving during the game.
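The "enumerate hunches, tournament them in imagination, commit to the winner's opener" idea can be sketched as a toy program (the strategies and payoffs here are just the standard iterated Prisoner's Dilemma setup, chosen for illustration):

```python
# Toy "meta-iteration" for the iterated Prisoner's Dilemma: enumerate
# candidate hunches (strategies), run the whole iterated game among
# them in imagination, then commit to the winning hunch's first move.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def all_c(my_hist, their_hist): return 'C'
def all_d(my_hist, their_hist): return 'D'
def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else 'C'
def grim(my_hist, their_hist):
    return 'D' if 'D' in their_hist else 'C'

def play(s1, s2, rounds=50):
    h1, h2, total = [], [], 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        total += PAYOFF[(m1, m2)][0]
        h1.append(m1); h2.append(m2)
    return total  # score for s1 only

def meta_iterate(strategies):
    # The "imaginary tournament": every hunch plays every hunch.
    totals = {name: sum(play(s, opp) for opp in strategies.values())
              for name, s in strategies.items()}
    winner = max(totals, key=totals.get)
    # Commit: the one real move made so far is the winner's opener.
    return winner, strategies[winner]([], [])

winner, first_move = meta_iterate({
    'all_c': all_c, 'all_d': all_d,
    'tit_for_tat': tit_for_tat, 'grim': grim})
```

Note that all the bits of information arriving during the real game only get used insofar as the committed-to strategy (e.g. tit-for-tat's memory of the last move) already planned to use them, which is exactly the simplistic in-game behavior described above.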

If I try to find analogies in the real world here it calls to mind martial arts practice with finite training time. You go watch a big diverse MMA tournament first. Then you notice that grapplers often win. Meta-iteration has finished and then your zeroth move is to decide to train as a grappler during the limited time before you fight for the first time ever. Then in the actual game you don't worry too much about the many "steps" in the game where decision theory might hypothetically inject itself. Instead, you just let your newly trained grappling reflexes operate "as trained".

Note that I don't think this is even close to optimal! (I think "Bruce Lee" beats this strategy pretty easily?) However, if you squint you could argue that this rough model of meta-iteration is what humans mostly do for games of very high importance. Arguably, this is because humans have neurons that are slow to rewire for biological reasons rather than epistemic ones...

However, when offered the challenge that "meta-iteration can't be made to make sense", this is what pops into my head :-)

When I try to think of a more explicitly computational model of meta-iteration-compatible gaming my attention is drawn to Core War. If you consider the "players of Core War" to be the human programmers, their virtue is high quality programming and they only make one move: the program they submit. If you consider the "players of Core War" to be the programs themselves their virtues are harder to articulate but speed of operation is definitely among them.

Comment by JenniferRM on Weird question: could we see distant aliens? · 2018-04-23T00:12:24.115Z · LW · GW

Paul, I love what you're doing here, have been thinking about this a long time. I look forward to seeing an answer and would like to write a clarifying essay full of non answers :-)

By "get our attention" I mean: be interesting enough that we would already have noticed it and devoted some telescope time to looking in more detail at that part of the sky. (Once they have our attention it seems significantly cheaper to send a message.)

This suggests that we can list various anomalies that might have been thought to be extraterrestrials and already received attention, and then exclude them for various reasons.

1. For example, Tabby's Star recently had me wondering/hoping/worrying for a good year or two.

It is only 1,280 light years from Earth and I think it is plausible that we wouldn't even be able to see similar stars on the far side of our own galaxy, which is a mere ~100k light years in diameter... so it can't count for this exercise, because seeing it from other galaxies would be quite a trick.

HOWEVER, despite being an F type star (which shouldn't be variable), it varies in very irregular ways, and it was interesting enough to raise $100k on Kickstarter for telescope time, and to deserve its own feed. I think people are pretty sure it is natural at this point, with a probable case of "indigestion" from the star colliding with a metallic planet in the last 10k years or so.

However, the fact that it got our attention means someone might do that to one planet/star combo like clockwork, every 1000 years in a regularly spaced line of stars.

It could work as a local "we exist" signal whose clocklike timing would count as the signature of intentional planning and sort of function like an invitation to show up at the logical NEXT star in the timed "indigestion collision" sequence to watch the collision and parley with whoever else showed up...

However, I don't think these events would be bright enough for the weird question?

(This does raise the question as to what counts as a "message" and what the bitrate of said message is allowed to be? Is a valid message just "this was intentionally created", or "this was intentionally sent", or "here is a place that will be interesting at a future time" or something even more than that? Also, what if the evidence of intentionality comes from a coincidence of timing spread across spans of time that requires detailed astronomical records for longer than humans seem to be able to maintain political or cultural or linguistic institutions?)

2. In 1967, pulsars caused people to be very excited for a short period of time, thinking that such regularity must be intentional. But then it was worked out that pulsars were just spinning charged neutron star remnants leftover from supernovas. Still, they are pretty great natural clocks ;-)

This might make them a great "medium" in which to encode intentionality, but it means you have to modulate or sculpt them somehow so that when alien astronomers get interested they can see a deviation from what's natural.

Another problem is that they are highly directional, with most of the energy going out of their wobbling north and south poles (which when they wobble across your telescope is one of the pulses), so they don't signal very widely.

Another problem is that they aren't actually very bright. We see them in the Milky Way, and in our galactic neighbor the Large Magellanic Cloud, but finding an unusually bright pulsar 2 million light years away in Andromeda was newsworthy. In 2003 McLaughlin and Cordes tried to find very bright pulsars further afield and maaaaybe got a hit in M33 (aka "The Triangulum Galaxy") which is only 3M light years away. But seeing these things from 8000M light years away is highly questionable.

Binary pulsars are more rare and more likely to get scientific attention.

The first binary pulsar, discovered in 1974, won the 1993 Nobel in physics for Taylor and Hulse. By 2005 there were 113 discovered. They are interesting because they modulate the "clock" dynamics inherent to singleton pulsars.

Binary pulsars tick faster when coming towards you and tick slower when moving away, so the orbital parameters of the system can be characterized precisely just from the timing of the ticks. These orbital parameters measurably change on the timescale of a human life, slowing down in a way that can be naturally interpreted as indirect proof that gravity waves exist and are pulling energy out of such massive systems :-)
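The tick-rate modulation is just the Doppler effect applied to a clock. A minimal sketch (the pulse period is roughly that of the Hulse-Taylor pulsar, but the orbital velocity is an illustrative made-up number, not a measured value):

```python
# Sketch: how a binary pulsar's orbital motion modulates its observed
# tick rate via the (non-relativistic) Doppler effect.

C = 299_792_458.0  # speed of light, m/s

def observed_period(intrinsic_period_s, radial_velocity_ms):
    """Ticks arrive faster when the pulsar approaches (negative radial
    velocity) and slower when it recedes (positive radial velocity)."""
    return intrinsic_period_s * (1.0 + radial_velocity_ms / C)

P = 0.059            # ~59 ms pulse period, roughly Hulse-Taylor
v_orbit = 300_000.0  # ~300 km/s orbital speed (illustrative only)

approaching = observed_period(P, -v_orbit)
receding = observed_period(P, +v_orbit)
print(approaching < P < receding)  # True
```

Fitting the full cycle of this modulation over one orbit is what pins down the orbital parameters from timing alone.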

If you wanted to catch someone's attention you might construct or find a three star system that included a pulsar aimed the way you wanted to send a message, and then mess with the orbital parameters intentionally.

Non-hierarchical three star systems are chaotic by default, and well understood chaotic systems can be controlled with surprisingly little energy, which might make something like this attractive.

A probable hierarchical trinary-with-a-pulsar (and so not necessarily chaotic) that includes a sun-like star was surveyed in 2006. The third star is not totally confirmed, and even if it exists the arrangement here is more like a binary system, where one member of the binary has a large planet/star/thing orbiting it closely (hence "hierarchical" and hence probably not chaotic).

There is another pulsar trinary that might be chaotic found in 2014. These things tend not to last however, because "chaos".

Those are the only two I know of. I'm pretty sure the trinaries are being examined "because physics" but I've heard no peeps about unusual patterns of timing from them. But still, no matter how many neighbors pulsars have, they are fundamentally too dim and too directional to count as part of an answer to the weird question here I think...

3. The 234 stars that might be called "Borra's Hundreds" can probably also be discounted directly: at best, if these are signaling extraterrestrials, they are just using puny pulsed lasers with roughly our own planet's industrial energy output, in more or less the visible spectrum (blockable by dust), which probably doesn't count because it obviously can't be seen from somewhere far away like the Sloan Great Wall.

The idea, initially articulated by Ermanno Borra in 2010 as I minimally understand it, is that a laser could shoot out light of nearly any frequency (the frequency given by the wavelength of the individual photons), but if we or aliens pulsed the quantity of photons sent out fast enough, the pulsing would be visible to the standard spectrographic surveys whose intentional goal is to figure out the atomic constituents of stars from the wavelengths (and hence the frequencies) of the specific photons they emit. Those methods aren't looking for very fast pulses of more and then fewer photons, but they could nonetheless see them by "accident".
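A toy sketch of the kind of signal recovery being described: inject a small fast periodic modulation on top of a smooth "continuum" and pull it back out with a Fourier transform. Every number here is made up for illustration; this is not the actual survey pipeline or Borra's analysis.

```python
# Toy sketch of recovering a fast periodic modulation hidden in an
# otherwise smooth spectrum. All parameters are illustrative.
import numpy as np

n = 4096
x = np.arange(n)                              # spectral bin index
smooth = 1.0 + 0.1 * np.sin(x / 500.0)        # slowly varying continuum
signal = 0.01 * np.sin(2 * np.pi * x / 16.0)  # fast periodic modulation
spectrum = smooth + signal

# Power spectrum of the mean-subtracted data; the fast modulation with
# period 16 bins shows up as a sharp peak at frequency bin n/16 = 256.
power = np.abs(np.fft.rfft(spectrum - spectrum.mean())) ** 2
peak_bin = int(np.argmax(power[10:])) + 10  # skip the slow continuum terms
print(peak_bin)  # 256
```

The point is just that a periodic modulation pops out of data collected for a completely different purpose, which is why archival spectra were worth a second look.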

In 2012, Borra tried to explain it again and spelled out more of the connections to SETI, basically saying that formal SETI was doing one thing, but spectrographic star surveys were better funded and you could do SETI there too just by processing the exact same data through another filter to make the possible injected signals pop out.

Aliens seeking to be discovered would know anyone smart would do spectrographic surveys of the stars, so that would be an obvious place to try to put a signal.

Then in 2016 Borra published again, now with Trottier as a coauthor, saying that he'd gone ahead and looked at archival spectral data, and found 234 stars that seemed to be sending out "peculiar periodic spectral modulations" of the sort that he predicted... unless the recorded version of the data had frequency artifacts in it?

As summarized by Snopes (normally a good source) the claim is disregarded, but all the criticisms are status attacks rather than any kind of object-level analysis of the math, the physics, or the collected data.

The BEST argument against Borra is one I've almost never seen leveled, which is that the data processing method involved complex math, and had error bars, and they analyzed 2.5 million stars and only found 234 results. This makes me instantly wonder: data mining artifact?

But in that case you'd expect someone to make this argument seriously and explain in detail how the math went wrong somewhere? I don't get it.

Maybe people think that lasers that blink with a terahertz frequency are impossible because of "laser physics" or something? But no one seems to have raised this objection. And it seems to me like it might be possible to do this just from having a normal continuous laser and then spin something very very fast that periodically blocks the light coming out of the laser? I'm not a laser engineer, I don't know, it just seems weird to me that I've seen no speculation one way or another.

I've tried googling the coordinates of the stars Borra found and none of them have wikipedia pages, Google sends all the searches for the stellar coordinates back to Borra's own paper. I don't know how many light years away any of them are.

There's no kickstarter. The normal SETI people at UC Berkeley eventually, in October of 2016, agreed to look at a few of Borra's stars but you could see their heart wasn't in it. There's been no word since then.

However, despite humans being boring and uninterested in important things, what about a generalization of this method! :-)

(EDIT NOTE: In the first draft I had text here where I imagined Niven's fictional Ringworld made out of an impossible super material and then suggested modifications to create a "flicker ring" that could spin around a star and make the star appear to blink at spectral frequencies from certain perspectives. My optical reasoning was ludicrously wrong in the first draft, built around how things would be seen from very close rather than very far. Even with the hypothetical magic substance "scrith" a flicker ring big enough and fast enough to look right at a vast distance would be impossible. The material would have to be many orders of magnitude more magical than scrith to work in this capacity.)

4. Hoag's Object is pretty fascinating and fascinatingly pretty.

Sometimes I wonder if the only reason we don't believe in aliens yet is some kind of social signaling equilibrium similar to plate tectonics.

In 1915 Wegener was like "Duh, the continents obviously line up like a jigsaw puzzle" and people were like "No way!" and then 50 years later they were like "Oh, yeah, I guess so, funny how this is obvious to kids now but wasn't obvious to fancy scientists in 1890..."

If there are "Hoagians" shepherding all the stars in their galaxy into a pretty ring as a collective art project (or maybe just to prevent expensive damaging collisions?), that would be pretty epic.

In terms of the weird question however, the problem is that Hoag's Object is only roughly 600M light years away, and its relative nearness is part of why we easily see it. Picking it out uniquely from 8000M light years away would be a totally other thing. Also, its ring is only visible if you see it from near the poles rather than the edges, which is another reason it isn't a very good universal signal.

5. Black hole collisions have never been attributed to aliens, to my knowledge. However, they are obviously big and awesome and get a lot of news. If you could survey moderately sized black holes in your galaxy and nudge them around in a controlled way you might have a partial solution? A timed series of collisions would be hard to attribute to anything but aliens, I think. Imagine:

Chirp! (then wait 16.30 days)

Chirp! (2.32 days) Chirp! (then wait another 16.30 days)

Chirp! (2.32 days) Chirp! (2.32 days) Chirp!

You going to tell me that's not an intentional "here I am!" signal? You can't! :-P
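The check could even be done mechanically. A toy sketch using the made-up intervals above: given the chirp arrival times, recover the gap schedule and see whether it matches a clocklike design.

```python
# Toy check: do the gaps between hypothetical gravitational-wave
# "chirps" match the designed 1, 2, 3 schedule described above?
DAY = 86400.0  # seconds per day

# Hypothetical arrival times implementing the pattern.
chirps = [0.0]
for gap_days in (16.30, 2.32, 16.30, 2.32, 2.32):
    chirps.append(chirps[-1] + gap_days * DAY)

gaps_days = [round((b - a) / DAY, 2) for a, b in zip(chirps, chirps[1:])]
print(gaps_days)  # [16.3, 2.32, 16.3, 2.32, 2.32]
```

Real detections would of course come with timing uncertainty, so a real test would be a tolerance check rather than exact equality, but the regularity itself is the signature.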

From a long term signaling perspective (like to break through the Fermi Paradox by visibly declaring once and for all "intelligence existed!" before the Great Filter gets you) the problem here would be that this would be a one time signal that only communicates to a small shell of stars a precise distance away.

Many such events could have occurred before humans could hear them, and many might exist after we go extinct, with us none the wiser :-/

6. Gamma Ray Bursts are more usually associated with death than life. Basically they are so bright that they would probably cause mass extinctions in their home galaxies.

However, if you could figure out a way to cause them (not that hard? just crash neutron stars into each other in head on collisions?) and somehow survive a series of six-ish closely timed blasts then it could work like black holes, but way more obvious. No theory of relativity is even required to know to build a gravity wave detector! Black holes are still probably better in terms of style points, because their collisions don't seem to cause mass extinctions :-P


Anyway, my point is that all of these are things that have already come to mainstream scientific human attention and caused lots of exploratory interest and analysis.

ALSO, all of them have been more or less dismissed by mainstream astronomers as being conclusive evidence of extraterrestrial civilizations.

ALSO, I don't instantly see super obvious ways to twist any of these things around to function as a clean cut answer to the weird question where a short-lived Kardashev Type III species with our physics and material science (but better and more manufacturing capacity) could set something up, have it persist after the Great Filter gets them, and signal to everyone forever.

Comment by JenniferRM on April Fools: Announcing: Karma 2.0 · 2018-04-01T15:24:54.477Z · LW · GW

I'm sure this day will be remembered in history as the day that LessWrong became great again!

Comment by JenniferRM on LessWrong Diaspora Jargon Survey · 2018-03-27T03:48:50.291Z · LW · GW

Your experimental results might be indicative of something other than problems merely within LW...

I decided to test the hypothesis that LessWrongers practice weak scholarship in regards to jargon. In particular, that for many important terms the true source of knowledge has not been transmitted to community members. [bold added]

The problem here is that a better reference group than "LessWrongers" might be "scientists"?

Or perhaps the group of "scholars" (understood as all the scientists, plus all the people "not doing real science" per whatever weird definition someone has for calling something "science"), or perhaps even the still larger category of "humans"?

There is a generalized problem with scholarship related cognition in the widespread failure of humans to remember the source of the contents of their minds. Photographs of events you weren't even alive for become vague visual memories. Hearsay becomes eyewitness report. Fishy stories from people you know you shouldn't trust become stories you don't remember the source of... and then become things you weakly believe... basically: in general, by default, human minds are terrible at retaining auditable fact profiles.

But suppose that we don't expect that much of generic humans, and only hold scientists to high intellectual standards?

Still a no go!

As per Stigler's Law Of Eponymy there are almost no laws which were actually named after their (carefully searched for) originators! The general pattern is similar to art: "Good scientists borrow, great scientists steal."

In practice, the thing that will be remembered by large groups of people is good popularization, especially when a well received version keeps things simple and vivid and doesn't even bother to mention the original source.

If LW can fix this, it will be doing something over and above what science itself has accomplished in terms of scholarly integrity. (Whether this will actually help with technological advances is perhaps a separate question?)


For an example here, I know about "ugh fields" because I invented that term and know the details of its early linguistic history.

1. The coining in this case preceded the existence of the overcomingbias blog by a few years... it was coined in conversations in the 2001-2003 era in and around College of Creative Studies (CCS) seminars at UC Santa Barbara (UCSB) between me and friends, some of whom later propagated the term into this community.

My use of the term was aimed at describing the subjective experience of catastrophic procrastination along with some causal speculation. It seemed that mild anxiety over a looming deadline could cause mild diversion into a nominally anxiety-ameliorating behavior like video games... which made the deadline situation worse... and thereby turned into a positive feedback loop of "ugh". These ugh fields would feel as if they have an external source whose apparent locus is "the deadline", with the amount of ugh increasing exponentially as the deadline gets closer and closer.

(I failed a class or two back then more or less because of this dynamic until I restructured my soul into a somewhat more platonically moderate pattern using Allan Bloom's translation of The Republic as my inspiration. Basically: consciously locally optimized hedonism has potentially unrecoverable failure modes and should be used with caution, if at all. Make lists! Perhaps amortize hedonism over times equal to or greater than your personal budgeting cycle? Or maybe better yet try to slowly junk hedonism in favor of duty and virtue? Anyway. This is a WIP for me still...)

2. Two of my friends from UCSB (Anna and Steve) were part of the conversations about me failing classes at UCSB and working out a causal model thereof, and in roughly 2008 brought the term to "Benton House" (which was the first "rationalist house" wherein lived participants in "the visiting fellows program" of the old version of MIRI which was then called "the Singularity Institute for Artificial Intelligence (SIAI)").

3. The term then propagated through the chalk board culture of SIAI (and possibly into diaspora rationalist houses?) and eventually the concept turned into a LW post. The new site link for this post doesn't work at the moment that I write this, but still remembers the 2010 article when I said of "ugh fields":

It is a head trip to see a pet term for a quirk of behavior reflected back at me on the internet as an official name for a phenomenon.

4. And the term keeps rolling around. It basically has a life of its own now, accreting hypothetical mechanisms and stories and interpretations as it goes.

It would not surprise me if some academic (2 or 10 or 50 years from now) turns it into a law and the law gets named after them, in fulfillment of Stigler's Law :-P


The core thing I'm trying to communicate is that humans in general can only think sporadically, and with great effort, and misremember almost everything, and especially misremember sources/credit/trust issues. The world has too many details, and neurons are too expensive. External media is required.

Lesswrongers falling prey to attribution failures is to be expected by default, because Lesswrong is full of humans. The surprising thing would be generally high performance in this domain.

My working understanding is that many of the original english language enlightenment folks were mindful of the problem and worked to deal with it by mostly distrusting words and instead constantly returning to detailed empirical observations (or written accounts thereof), over and over, at every event where it was hoped that true knowledge of the world might be "verbally" transmitted.

Comment by JenniferRM on Appropriateness of Discussing Rationalist Discourse of a Political Nature on LW? · 2018-03-26T13:01:59.958Z · LW · GW

London, New York, and nine full time employees in the NYT media orbit... updated!

Comment by JenniferRM on Shadow · 2018-03-20T19:26:41.226Z · LW · GW

I see below that you're aiming for something like "fear in political situations". This calls to mind, for me, things like the triangle hypothesis, the Richardson arms race model, and less rigorously but clearly in the same ambit also things like confidence building measures.

These are tough topics and I can see how it might feel right to just "publish something" rather than sit on one's hands. I have the same issue myself (minus the courage to just go for it anyway) which leads me mostly to comment rather than top post. My sympathy... you have it!

Comment by JenniferRM on Appropriateness of Discussing Rationalist Discourse of a Political Nature on LW? · 2018-03-16T11:20:20.807Z · LW · GW

Uh... I can try to unroll the context and thinking I guess..

I think in my head I initially associated the name with childhood memories of a vaguely Investigative TV News Program that was apparently founded in 1986.

Also, it appears to be the name of an entire genre of magazines that includes things like the New Statesman, which makes it a bit tricky to google for details about the thing itself, rather than the category of the same name.

It seemed plausible to me, given the general collapse of the journalism industry, that the old 1990's brand still existed, had moved to the Internet, mutated extensively, and was now reduced to taking potshots at people like Scott in order to drum up eyeballs?

(Plausibly the website could be co-branded with a TV version still eking out some sort of half life among the cable TV channels with 3 or 4 digit numbers, that could trace its existence back to 1986?)

None of what seemed plausible to me is actually true.

The old thing named Current Affairs apparently died in 1996, and was briefly revived in 2005 and then died again. The new thing started in 2015, and has nothing to do with the old thing.

Since I was surprised by the recency of the founding of the new incarnation of "something named Current Affairs" it seemed to me that other people might be confused too, so I linked to the supporting evidence.

Also, when Scott speaks indirectly of the callout, he makes a "request not to be cited in major national newspapers". But the name here is so maddeningly generic that I have difficulty even Googling my way to reliable circulation numbers.

Is it actually major? Do they even have a paper print format? I'm still not sure, and don't really care. Maybe Scott was fooled into thinking they matter too at first?

Basically, my model at this point, given the paucity of hard data, is that this new Current Affairs could easily be nothing like a "major national newspaper" but rather it could just be like two or three yahoos in a basement struggling to be professional journalists in an age when professional journalism is dying, and finding that they have to start trolling virtuously geeky bloggers to stir up drama and attract eyeballs to their website to make ends meet.

The circulation numbers and actual ambient reputation potentially matter, because if they are very low then who cares if some troll hasn't read Scott's old essay very carefully, but if many high quality eyeballs were reading the inaccurate summary and criticism, then the besmirching insinuations could hurt Scott.

In the meantime, maybe this will be the beginning of a beautiful friendship. When strangers get into fights in real life, it isn't totally uncommon for them, years later, to end up great friends who know each other's true measure :-)

Comment by JenniferRM on Appropriateness of Discussing Rationalist Discourse of a Political Nature on LW? · 2018-03-15T09:27:23.887Z · LW · GW

I appreciate that you're asking at a very "high level of meta" about a controversial topic.

Also, I appreciate that you helped me to know that something had even happened. I read Scott's original article back when it was fresh, but the Robinson piece wasn't on my radar until I searched for Scott's rebuttal on the basis of the question and found a link back to it.

I'm still not sure if I understand all the ins and outs here, but I will say that this is a complex topic which I personally avoid writing about because in many ways I'm sort of a coward...

However Scott reads to me as grappling with complicated ideas, in public, against his own interests, in a basically admirable way, while Robinson reads to me as having had to push some content out on a deadline (with a larger goal of trying to get his readers to buy the topmost book in the image at the end of his article).

I sympathize with Scott having been dissed in a magazine whose name suggests falsely that it has a long history, and thus having been put in a position to either (1) defend himself and give the upstart that is insulting him the attention which was probably the point of the attack or (2) not defend himself.

I think Scott's move of not putting his rebuttal on his own main page, but just putting it where it can be searched for (so it comes up as a defense if people search for the topic specifically, but doesn't move a lot of eyeballs) and running the URL through was quite smart. He appears to understand how he's being trolled and is responding in a way that navigates it pretty well :-)

Comment by JenniferRM on Shadow · 2018-03-15T08:00:06.691Z · LW · GW

Cybernetic polytheism is hard to do right, because you have to have a strong sense of cybernetics first. You need to understand and explore the center and the edges of a large scale optimization dynamic, explore the empirical details it entails, and generally get a scientific understanding of it... then, for lulz, you might name it and personify it.

"Evolution" is a good example. This process is instantiated in biology. It operates over heritable patterns of deoxyribonucleic acid whose transcription into protein by living cells constructs new cells and agglomerations of cells in the shape of bacteria and macroscale organisms... each with basically the same DNA as before, but with minor variations. There is math here: punnett squares, fixation, etc.

Now we could just leave it at that. The science is good enough.

But not everyone has time for the biology, or has the patience to learn the math. Also, the existence of biological structures has been attributed by non-biologists to gods with narrative character that doesn't really map that well to the biological principles.

Thus there is a strong temptation to perform a narrative correction and offer "better theology" to translate the science into something with more cogent emotional resonances.

Like... species were not created by a benevolent watch maker who loves us. That's crazy.

Actually, if biological nature (or biological nature's author) has any moral character, that character is at least half evil. This entity thinks nothing of parasitism or infanticide, except to promote them if these processes produce more copies of DNA and censor them if they produce fewer copies of DNA.

It tries countless redundant experiments (the same mutation over and over again) that lead to both misery and death, but even calling these experiments is generous... there is almost no intentional pursuit of knowledge (although HSP genes are pretty cool, and sort of related), no institutional review boards to ensure the experiments are ethical, no grant proposals arguing in favor of the experiments in terms of the value of the knowledge they might produce.

Evolution, construed as a god, is a god we should fear and probably a god we should fight.

We can probably do better than it does, and if we don't do better it will have its terrible way with us. Those who worship this god without major elements of caution and hostility are scary cultists... they are sort of selling their great great grandchildren into slavery to something that won't reward them, and can't possibly feel gratitude. A narrative from old school horror or science fiction that matches the right general tone is Azathoth.

But you can't just make up the name Azathoth and say that it is a god and coin a bunch of other weird names, and make up some symbolic tools for dealing with them, and mix it together willy-nilly, and not mention biology or evolution at all.

You have to start with the science and end with the science.

Comment by JenniferRM on On Building Theories of History · 2018-03-11T18:39:44.255Z · LW · GW

Back in 2004-2005 (in a time I look back fondly on, because I was an OK kid) I was basically a naive techno-optimist about computers and software and AI, but I got seriously worried about Peak Oil.

All the muggles had a "policy level" understanding that the consumer energy economy (and everything in general) would be basically fine, but everyone I could find with a "gears level" understanding of fossil fuel economics was predicting some kind of doom. The futures markets basically said "in 2005, 2009, and 2019 OPEC will politically control the price of oil, and it will be ~$39 per barrel", but that didn't make any object level sense when you dug into the details.

I went kind of crazy, trying to reconcile these things, and read a lot of object level quantitative anthropology trying to figure out whether I was crazy or everyone else was.

What ended up happening is that the economic/technological solution arrived late (but more or less "before serious collapse", like failures of supply chains or the dissolution of traditional constitutions) and also Obama was elected in the midst of a relatively mild "financial collapse" that included oil prices spiking to over $120 per barrel (plus food riots in poor countries).

Since Obama was tribally blue (and the obvious corrective policies were tribally red) and elected with a mandate to solve "the Great Recession" he could get energy extraction reform in a way a red politician could never get away with.

Blue establishment activists objecting to backroom deals like this would be disloyal (only "outsider" ideological leftists, like those involved in the Dakota/Bakken/Standing Rock protests could pragmatically object), and red establishment activist networks were happy to unshackle the frackers and toss a regulatory bone to shale oil. By 2010 things were much less scary, and by 2013 the trajectory of US oil production had totally and dramatically deviated from the predictions inherent to the Hubbert's Peak model of historical oil production.

I consider the 2004-2013 period to have been very personally educational from a "theory of history" perspective :-)

My pet name for the hypothetical field (coined by Michael Flynn in the late 1980's) is "cliology" (named after Clio the Muse of History), and one of many barriers to creating a sociologically viable community of cliology researchers (I'm tempted to call it the "Fundamental Hypothesis of Cliology" as a joke?) is that most major insights in this field are inherently useful for guiding investment and are thus hoarded within the investing class as "one-off trade secrets".

The memetic incentives for serious public knowledge production in this domain would be extremely tricky to set up, and are unlikely to happen except via "great man" or "great circle" interventions. The Fundamental Hypothesis of Cliology suggests that Elon Musk could maybe do it, or a new "thing like the Vienna Circle" might be able to do it, but that's more or less what it would take. Also, even after the initial "boost" from this effort, public research would stall and/or devolve the moment any critical subset of people died, or got day jobs, or got head hunted by a hedge fund, or whatever. The memetic incentive patterns would probably continue to hold for each incremental addition to the field, more or less forever?

So in 2250 (assuming technology keeps advancing and yet there are still autonomous mortal human-shaped minds with their hands on the reins of history) they might very well think that the causality of our period of history was quite retrospectively straightforward... but they will be treating insights that help uniquely predict 2280 (or whatever their window of prediction is) as trade secrets.

Comment by JenniferRM on Circling · 2018-02-22T20:32:20.999Z · LW · GW

I really like this comment!

I think I see you calling explicit attention to your model of cognition, and how your own volitional mental moves interact with seemingly non-volitional mental observations you become aware of.

Then you're integrating this micro-experimental data into an explanatory framework that implicitly acknowledges the possibility that your own model of yourself might be wrong, and even if it is right other people might work differently or have different observations.

I think that to get any sort of genuine, reproducible, safe, inter-subjectively validated meditative science that knows general laws of subjective psychology, it will involve conversations in this mode :-)

Etymologically, "meditation" comes from the latin meditari, "to study".

To make a "science word" we switch to ancient Greek, where "meletan" means "to study or meditate". The three original "Boeotian muses" were memory (Mnemosyne, who is often considered the mother of them all), song (Aoede), and meditation (Melete)... so if a science existed here it might be called "meletology"?

A few times I've playfully used the term "meletonaut" to describe someone whose approach to the field is more exploratory than scholarly or experimental.

If I hear you correctly, in your cognitive explorations, you find that you can page through memories while watching yourself for symptoms of high "adrenaline" (by which I mean often actual adrenaline, but also the general constellation of "arousal" including heart rate and sweaty skin and probably cortisol and so on).

And then maybe when you think of yourself as "aware of your feelings" that phrase could be unpacked to say that you have a basically accurate metacognitive awareness of which memories or images cause adrenaline spikes, without the active metacognitive awareness itself causing an adrenaline spike.

So if someone accuses you of "causing feelings" you can defend yourself by saying the goal is actually to help people non-emotionally know what "causes them to have emotions" without actually "experiencing the feelings directly" except as a means of gathering emotional data.

I think I understand the basis of such defense, and the validity of the defense in terms of the real value of using this technique for some people.

My personal pet name for specifically this exploratory technique (which can be performed alone and appears to occur in numerous sociological and religious contexts) is "engram dousing".

The same basic process happens in the neuro-linguistic programming (NLP) community as one step of a process they might call something like "memory reconsolidation".

It also happens in Scientology, where instead of self reported adrenaline symptoms they use an "e-meter" (to measure sweaty palms electronically) and instead of a two person birthday circle they formalize the process quite a bit and call it an "audit". In scientology it is pretty clear they noticed how great this is as an introductory step in acquiring blackmail material and gaining the unjustified trust of marks (prior to headfucking them) and optimized it for that purpose.

Which is not to say that circling is as bad as scientology!

Also, apostate scientologists regularly report that "the tech" of scientology (which is scientology's jargon term for all their early well scripted psychological manipulations of new members) does in fact work and gives life benefits.

With dynamite, construction workers could suddenly build tunnels through mountains remarkably fast so that trains and roads could go places that would otherwise have been economically impossible. Dynamite used towards good ends, with decent safety engineering and skill, is great!

But if someone wants to turn a garbage can upside down, strap a chair to it, and have me sit in the chair while they put a smallish, roughly measured quantity of dynamite under it... even if the last person in the chair survived and thought it was a wild ride and wants to do it again... uh... yeah... I would love to watch from a safe distance, but I think I'd pass on sitting in the chair.

And more generally, as an aspiring meletologist and hobbyist in the sociology of religion, all I'm trying to say is that engram dowsing (along with some other mental techniques) is like "cognitive nuclear technology", and circling might not be literally playing with refined uranium, but "the circling community in general" appears to have some cognitive uranium ore, and they've independently refined it a bit, and they're doing tricks with it.

That's all more or less great :-)

But it sounds like they are not being particularly careful, and many of them might not realize their magic rocks are powered by more than normal levels of uranium decay, and if they have even heard of Louis Slotin then they don't think he has anything to do with their toy (uranium) pellets.

Comment by JenniferRM on Circling · 2018-02-19T12:49:47.483Z · LW · GW
Ideally, everyone would have the opportunity to explore vulnerability carefully, step by step, with a skilled therapist or something to turn to if things ever got dicey.

I think this is an essential line, and a core problem. For more than half a century the social capital of the average person in the US has been falling and falling and falling. A therapist is sort of just a person you pay to pretend to be a genuine friend, without you having to reciprocate friendship back at them. That it is considered reasonable or ideal (as the first thought) to go to a paid professional to get basic F2F friend services is historically weird.

Maybe it is the best we can do, but... like... I don't think it used to be this way, and that suggests that it could be like it was in the past if we knew what was causing it.

Comment by JenniferRM on Circling · 2018-02-19T12:24:49.779Z · LW · GW

I'm pretty sure these people don't think that what they are doing "borrows from" hypnosis or trance or suggestibility hacking or mesmerism or whatever words you want to use for it.

Their emotions are high, caused by skillful intentional actions, and involve a general dynamic of "playing along" with numerous secondary "critical cognitive faculties" seemingly disengaged. Their focus is on their own feelings, and how their feelings feel, and so on. It isn't that they don't notice what's directly happening to (and inside) them, it is that they notice very little else.

Maybe that's great. Being in religions seems empirically to be somewhat positive for people?

Maybe the preacher there has studied hypnosis and optimized things for trance states... but I don't think that would have been required for him to be interacting with more or less the same basic mechanisms in people's cognitive machinery.

Those mechanisms are not particularly exotic or hard to mess with, but they cut directly to "goal-content integrity" and so caution is appropriate.

Comment by JenniferRM on Circling · 2018-02-18T08:52:34.575Z · LW · GW

The details remind me a lot of hypnosis, with thoughts about thoughts, instead of just thinking things directly.

Breathe. Body attention. Meta. Listen to the voice. Respond and receive. Be open to the update. Body attention. Meta. Listen to the voice. Everyone trancing themselves and everyone else in a fuzzy haze...

Or how about, actually, NO!

How about instead we try to ramp up our critical faculties and talk about models and evidence?

I do not trust casual hypnosis because hypnosis can become "not casual" very fast.

Hypnosis is a power tool and basically it is one of those "things I won't work with" unless it is wartime and my side is losing and it seems highly relevant to victory. And it probably wouldn't be my side I'd be hypnotizing, it would be the bad guys.

"We broke the rules, Harry," she said in a hoarse voice. "We broke the rules."

"I..." Harry swallowed. "I still don't see how, I've been thinking but -"

"I asked if the Transfiguration was safe and you answered me!"

There was a pause...

"Right..." Harry said slowly. "That's probably one of those things they don't even bother telling you not to do because it's too obvious. Don't test brilliant new ideas for Transfiguration by yourselves in an unused classroom without consulting any professors."

Except there are no decent professors in this subject. (There were crazy CIA mind control experiments, but instead of publishing their results, the records were mostly purged in 1973.)

Comment by JenniferRM on Missives from China · 2018-02-17T18:09:06.492Z · LW · GW

I've thought a lot about iterated chicken, especially in the presence of agent variations.

I suspect the local long term iteration between a rememberable (sub-Dunbar?) number of agents leads to pecking orders, and widespread iteration in crowds of "similarly different" agents leads to something like "class systems".

For example, in the US, I think every human knows to get out of the way of things that look like buses, because that class of vehicles expects to be able to throw its weight around. Relatedly, the only time a Google car has ever been in a fender bender where it could be read as "at fault" using local human norms was when it was nosing out into traffic and assumed a bus would either yield or swing wide because of the car's positional priority.
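That pecking-order claim can be sketched as a toy simulation. Everything here (the agent names, the "remember who yielded to whom" rule, the round count) is my own illustration, not a claim about real traffic:

```python
import random

def play_chicken(agents, rounds=2000, seed=0):
    """Iterated chicken where agents remember who yielded to whom."""
    rng = random.Random(seed)
    yields_to = set()               # (loser, winner) pairs from past games
    wins = {a: 0 for a in agents}
    for _ in range(rounds):
        a, b = rng.sample(agents, 2)
        if (a, b) in yields_to:     # a already learned to yield to b
            winner = b
        elif (b, a) in yields_to:   # b already learned to yield to a
            winner = a
        else:                       # first meeting: a coin flip settles it
            winner = rng.choice([a, b])
            loser = b if winner == a else a
            yields_to.add((loser, winner))
        wins[winner] += 1
    return wins

wins = play_chicken(["bus", "car", "bike", "pedestrian"])
```

With a fixed seed the whole hierarchy gets settled by the first few coin flips and is never revisited, which is the pecking-order intuition: history, not per-encounter bargaining, decides who swerves.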

What have you noticed about Chinese traffic patterns? :-)

Comment by JenniferRM on Eternal, and Hearthstone Economy versus Magic Economy · 2018-02-11T02:45:31.153Z · LW · GW

If I understand correctly, the cognitive process/bias/heuristic/whatever of "sacredness" is relevant here.

Neither nails nor dollars are sacred so you're free to trade dollars for nails.

A kidney is sacred, so you can't trade that for dollars, but you can trade it for another kidney (although such trades still feel a bit weird).

Sacred things are often poorly managed in practice, and sacredness is easy to make fun of, but a decent defense of sacredness might be that it is one of the few widely installed psychological mechanisms in real life for managing the downsides of having markets in things. Thus, properly deployed sacredness might let you have "trade" in one area without ending up with "totalizing trade"?

In the smaller and hopefully lower stakes world of video games, I think the suggestion would be to have card classes with different trading characteristics.

The lowest class of very non-sacred things could be swapped with extremely low transaction costs within the class and also be tradeable directly for money.

Higher sacredness things would have a separate market, perhaps with transaction costs like needing a purchaseable delivery mechanism or imposing delays so that objects go into limbo after the trade is finalized while "being delivered". The most sacred things would be "inalienable" so they can't be traded or given away or perhaps not even be destroyed.
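A minimal sketch of the tiered-tradability idea, with the tier names and rules being my own illustration rather than anything from a real game:

```python
from enum import Enum

class Tier(Enum):
    COMMON = 1       # freely tradeable, for money or for other cards
    SACRED = 2       # barter only: card-for-card, perhaps with delivery delay
    INALIENABLE = 3  # bound to the account; cannot be traded or given away

def can_trade(card_tier, for_money):
    """Returns whether a card of this tier may be traded at all."""
    if card_tier is Tier.INALIENABLE:
        return False
    if card_tier is Tier.SACRED:
        return not for_money  # kidney-for-kidney style swaps only
    return True

assert can_trade(Tier.COMMON, for_money=True)
assert not can_trade(Tier.SACRED, for_money=True)
assert can_trade(Tier.SACRED, for_money=False)
assert not can_trade(Tier.INALIENABLE, for_money=False)
```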

Exactly where sacredness should be deployed in order to maximize fun seems like a deep and relatively unstudied problem.

One place in real life where the inalienability of something has large and substantive differences from jurisdiction to jurisdiction is the question of the rights of artistic creators to their artwork. In some jurisdictions, an artist cannot legally sell their right to veto the use of their artwork if deployed in artistically compromising ways (like use in advertising or political campaigns) after mere copyrights have been sold.

In the US artistic moral rights are not treated as very sacred, and the lack of sacredness in art production is probably part of the US's cultural dominance a la Hollywood, but it has arguably also had large effects in the lives of artists, visibly so with people like Bill Watterson and Prince.

Comment by JenniferRM on Arbital postmortem · 2018-01-30T21:01:00.017Z · LW · GW

Thank you for the writeup! I've long had a distant impression of Arbital as being some kind of "mindmapping prediction social thing" and now that I've heard the explanation of its iterating vision I think maybe my model of it might be "Alexei and Eliezer's Memex or Xanadu".

This updates me a bit in the direction that something like Arbital will exist in the future and be a big deal, and it will probably make more progress by extreme attention to (1) the microeconomics of users and their existing preferences and their desire to have property they seem to control and (2) compromising on the overall "economic architecture" of the system such that it does not actually bring about the full utopian societal transformation it initially promised.

Comment by JenniferRM on The First Fundamental · 2018-01-20T22:44:14.989Z · LW · GW

I mean... in his defense... Paul Dirac was pretty dumb. He was probably just doing his best ;-)

Comment by JenniferRM on Sufficient · 2018-01-19T20:21:04.218Z · LW · GW

So if one person is seriously working on perpetual motion, acknowledging that they are probably not going to succeed, but arguing that if we don't find an exception to the 2nd law somehow then we're all doomed... then in that case, a Sufficient person has to help because "social agreement"?

Comment by JenniferRM on The First Fundamental · 2018-01-19T06:58:20.395Z · LW · GW

"I do not see how a man can work on the frontiers of physics and write poetry at the same time. They are in opposition. In science you want to say something that nobody knew before, in words which everyone can understand. In poetry you are bound to say something that everybody knows already in words that nobody can understand."

-Paul Dirac (to Oppenheimer, regarding Oppie's reported dabbling in poetry)

Comment by JenniferRM on The Solitaire Principle: Game Theory for One · 2018-01-17T23:46:54.605Z · LW · GW

K1 wants to write a novel because she calculated a novel to be the best thing to be working on given many environmental factors as input to a reflectively stable and emotionally integrated theory of axiology.

The novel is completed if at least 300 future Ks agree.

However, K1 mostly ignores "other people" in favor of thinking of herself as something like a local/momentary snapshot of a Turing machine's read/write head in operation...

She has obvious inputs and an obvious place for outputs, plus some memory and awareness of the larger program, and an ability and interest in fixing the program she is executing when definite errors are detected... and just trusting the system otherwise.

K1 writes 1/300th of a novel.

Since K1's value estimates were very reasonable, the estimates are replicated by many future K's and 753 days later a novel is finished.

It took more than 300 days, but during the 753 days many other similarly valuable things were also done. The whole time, K has been more or less safely interruptible, and it would have been pretty weird if K had ignored surprising issues that were more important than the novel when those things actually came up.

If the novel was somehow never finished that would have been OK. It probably would mean it was an omniscient-perspective-error to have worked on it, but that's OK because humans aren't omniscient.

Lesson: stop worrying about other people (who are often mostly crazy anyway) and instead pay attention to efficiently and reliably knowing what is actually good.
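The 300-days-of-work-in-753-calendar-days arithmetic can be sketched as a tiny simulation; the ~40% figure is my own back-of-envelope inference from the numbers in the story, not anything stated explicitly:

```python
import random

def days_to_finish(p_best, parts_needed=300, seed=1):
    """Each day a fresh K re-derives what to do; with probability p_best the
    novel wins the comparison and 1/300th of it gets written."""
    rng = random.Random(seed)
    day = done = 0
    while done < parts_needed:
        day += 1
        if rng.random() < p_best:
            done += 1
    return day

# If the novel is the best available project on ~40% of days, then finishing
# in roughly 753 calendar days is about what you'd expect: 300 / 0.4 = 750.
print(days_to_finish(0.4))
```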

Comment by JenniferRM on Sufficient · 2018-01-17T23:05:26.601Z · LW · GW

I really like the poetry and potential rigor of this... but I'm wondering how the philosophy deals with the problem of entropy?

Some resources are just plain finite and can't be renewed.

For example, there is "only so much sun to go around for so long". A current iconic image of self sufficiency is the solar panel, but eventually the sun will run out and we'll either need to find a new and younger star or give up the game.

Long before we run out of real estate for solar panels we will probably need to radically up our mining of rare earth metals, maybe reaching out to the asteroids for such metals.

And so on with a process of discovering and then applying creative problem solving to a series of natural limits... Essentially, a lot of what counts as "Sufficient" probably depends on technological feasibility and the arbitrary choice of the time window we choose to consider.

The longer the timescale, the more clear it is that either we defeat entropy itself, or we can't be "Sufficient".

If there's an acceptance that it's OK to "punt" on some kinds of sufficiency because we can't ultimately beat entropy, then the question of when and how to make the call to stop caring about some scale of analysis arises. Is there a finite amount of fresh water? A finite amount of phosphorus? A finite amount of neodymium? A finite amount of rich fools who will buy overpriced junk?

With sufficient energy we could make fresh water, phosphorus, neodymium, and rich fools to buy overpriced junk, but (probably) no amount of energy will let us make energy.

Basically, given that "being alive" is inherently extractive and doomed to eventual entropic collapse, where does a person being Sufficient draw their line in the sand with regard to resource sufficiency?

Comment by JenniferRM on Demon Threads · 2018-01-17T02:51:29.855Z · LW · GW

Thank you both for the feedback. I've taken the liberty of adding underlining in a second pass edit.

Comment by JenniferRM on Boiling the Crab: Slow Changes (beneath Sensory Threshold) add up · 2018-01-17T02:00:10.309Z · LW · GW

Is the "crab boiling" metaphor substantially different from the traditional "frog boiling" metaphor?

I've heard the frog version over and over since I was a child, and I've also heard that it is not experimentally verified.

Like... frogs do, in fact, try to escape objectively hot water when there are low barriers to exit. A good biology keyword for research on the clade-spanning mechanism(s) involved here is the "critical thermal maximum". There is a whole family of proteins for "responding to stress by paying more attention to folding or re-folding proteins" all the way down at the bacterial level, and the whole family is named for the first kind of stress response discovered: the stress response to heat.

Your post initially made me wonder if real crabs (whose recent evolution may have lacked really big temperature swings because of oceanic temperature buffering somehow?) might live up to the metaphor's implications better than real frogs (that are fresh water ectotherms whose entire life sorta revolves around leveraging their environment to control their internal state, with temperature being near the top of the list), but casual googling suggests that (warning: disturbing video) crabs also flee hot pans.

An uncharitable reading is that crabs are a better metaphor simply because they "seem more convincing", since there has been less time for the crab version to be debunked?

Frog experts perennially get questions about this, because the meme refuses to die, and in their responses they sometimes note that the typical spreaders of the frog meme are individuals like business consultants, political activists, and religious preachers. When I squint and put on my cynic hat, this reads to me basically as "people who specialize in personally benefiting from tricking entire groups of people into doing things that often don't make a lot of sense".

Despite the fundamental dishonesty, if the frog metaphor was accepted by the audience, it could be a rhetorically solid part of a larger process of achieving group compliance for nearly arbitrary changes.

Basically, the frog metaphor encourages people to distrust their own ability to think objectively about how the world works now, or how it has worked in the past, and in the face of this uncertainty it offers the idea that a large but unmeasurable and essentially invisible harm can be avoided by doing... something... anything? It depends on the situation.

If there was a genuine large imminent loss (like dying from hyperthermia) then many dramatic changes might be justified to attempt to avoid this outcome. Run! Jump! Pull levers at random! Thus, a boiling frog metaphor, deployed with no "kicker" attached, is a slightly confusing thing...

One naturally wonders when the other shoe will drop and the speaker will reveal their claimed harm and propose a more specific plan...

...basically I'm wondering where you're going with this ;-)

Comment by JenniferRM on Demon Threads · 2018-01-10T12:58:56.079Z · LW · GW

In my experience the evolution of demon threads is moderately dependent on the mechanics of commenting, and (to extend the demonic metaphor) "exorcism comments" work differently depending on the mechanical position of new comments.

No matter how commenting works, a comment that "fixes" the bulk of the demon aspects of the larger conversation needs to have clean and coherent insight into whatever the issue is. You shouldn't worry too much about writing such a post unless you are moderately confident that you could pass an ideological Turing test for all the major positions being espoused.

The thing that changes with different commenting systems is how much you can fix it and what the "shape" of the resulting conversation looks like if you "succeed".

With "unthreaded, most recent comment at the top" there is no hope.

No matter how excellent your writing, the content will drop lower in the queue and eventually be forgotten. This kind of commenting system is basically an anti-pattern used by manipulative propagandists.

Closely related: the last time I held my nose and visited Facebook it appeared to only show fresh/recent comments for any given item in the feed, and you had to choose to click to get the JavaScript to load older comments above the recent comments that start out visible. Ouch! (At this point I consider Facebook to basically just be a propaganda honeypot.)

With "unthreaded, most recent at the bottom" (as with oldschool phpBB systems and the original OvercomingBias setup) a single perfect comment is incapable of totally changing the meaning of the seed. This helps the OP maintain a position of some structural authority...

What you can do, however, is wait for 5-30 posts (partly this depends on pagination - if pagination kicks in within less than 40 posts then wait until page two to attempt an exorcism), and then post a comment that offers a structural correction that praises previous comments, but points out something everyone seems to be missing, that really honestly matters to everyone, and that cuts to the very essence of the issue and deflates it.

This won't totally kill the thread, but it should dramatically change the tone to something more productive, and the tonal state transition will persist for many followups, hopefully leading to the drying up of conversation.

The danger here is that it doesn't really work in very large communities. Readers might be tempted to read the first three comments, then jump to the last page of comments to get the last three comments, then wade in themselves without reading the middle. If there are hundreds of pages of comments your attempted exorcism at the bottom of page 2 simply can't do the job.

With reddit style commenting (as with modern LW and HN) you have the most hope.

The depth of threading is strongly related to the amount of "punch/counterpunch dynamic" that is happening. A given "seed" will have many "child posts" and each of the child posts will sprawl quite deeply. Deep sprawl is only potentially a serious problem in the highest voted first level response. For subsequent comments it isn't actually a problem (at least I don't think?) because the only people who read that far down are the ones who actually enjoy a rhetorical ruckus.

A perfect exorcism in this sort of threading system arrives late enough for the default assumptions to become clear, and then responds to the original seed in a basically flawless way, being fair-minded to both sides (often by going meta somehow) and then managing to get upvotes so that it is the first thing people see when they start reading the seed and "check the comments". After reading the "exorcising response" all the lower (and earlier written) comments should hopefully seem less critically in need of response because they look like quibbling compared to a proper response.

The exorcising comment needs to hit the central issue directly and with some novelty so that it really functions as signal rather than noise. For example, use a scientific phrase that no one has so far used that reveals a deep literature.

It needs to avoid subtopics that could raise quibbling responses. Any "rough edges" that allow room for someone to respond will lead to even more surface area for quibbling attacks, and tertiary responses will tend to be even lower quality and more inflammatory, and the fire will get larger rather than smaller. Thus, an exorcism must be close to flawless.

It helps to have a bit of a "moral tone" so that good people would feel guilty disturbing the purity of the signal. However too much moral tone can raise a "who the fuck do you think you are?!" sort of criticism, so go light with it. Also it helps a lot to "end on a high note", so that "knee jerk voters" will finish reading it and click "UP" almost without thinking :-)

You might note that I used the "end on a high note" pattern in this very comment, because I re-ordered my discussion of commenting systems to discuss the one most amenable to being fixed last, which happens to be the one LW uses, because we are awesome. Putting good stuff last and explicitly flattering the entire community is sort of part of the formula ;-)

(EDIT: Added underlines at the suggestion of mr-hire and Raemon below.)

Comment by JenniferRM on The Right to be Wrong · 2018-01-02T20:04:18.004Z · LW · GW

Cool link! I had not heard of her before but I see the echoes. To summarize some of the resonances I think I see...

I noticed that the Sutra about her is the Heart Sutra, and it arose as part of the Mahayana correction to the early ascetic "small raft" Buddhism, and was claimed to have been the secret teachings of Buddha that couldn't be taught in the initial version of Buddhism because the people were not ready...

It is claimed to have been technically there at the beginning, but not in an obvious way.

The secret teachings were mythologically kept by the king of the snakes in his underwater kingdom for a full turn of history, until a reincarnation of Buddha arrived named Nagarjuna, where "Naga" means snake and "Arjuna" means something like "bright shining silver" and is the name of the central hero of the Bhagavad Gita. Thus Nagarjuna, the teacher of the lesson, had a name that basically meant "Illuminated Snake Hero".

The ideas were mythologically acquired by: going underwater, making friends with the snake king, then studying the snake king's secrets (that he got from Buddha).

These lessons, which Prajnaparamita is the embodiment of, are given the concept handle of "shunyata" ("emptiness") and basically seem to be a denial of local naive realism? That is to say: there are no permanent things whose meaning and reality are independent of context. So if you take this seriously and ask "But what's the context?" over and over for anything and everything, recursively, then perhaps eventually you always get to Prajnaparamita as the contextual "Mother of All".

Epistemically speaking, chasing Prajnaparamita is valuable, because you learn the context of your current naively local truth. However you'll never get to her and go past her, because she represents the edge of knowledge... she is always "the farther away context of which you are currently ignorant". As you learn, she always retreats into the background, representing the new edge of knowledge.

Prajnaparamita's name literally means "perfect wisdom", and while she is technically unattainable, it is useful to try to approach her :-)

If you look at the emotional differences in the symbolic choice of Tiamat vs Prajnaparamita, then Tiamat pushes all the ideas into a single fundamentally bad kind of watery chaos that must be destroyed in a violent way for goodness and masculine knowledge to triumph. On the other hand Prajnaparamita has all the emotionally negative aspects sublimated into the process of pursuing her (into the watery domain of the snake king), and is seen as fundamentally good in herself.

Both kinds of symbolism are "mixed", but one valorizes the heroic killing and re-use of "scary female mysteries" while the other justifies "painful exploration" as worthwhile pursuit of the ultimate ineffable female context.

Calling out some of these echoes, I think I see different arrangements of many of the same concepts. Also, the arrangement of the concepts in the "Space Mom" framing seems closer to Prajnaparamita than Tiamat.

Comment by JenniferRM on In the presence of disinformation, collective epistemology requires local modeling · 2017-12-21T08:41:11.532Z · LW · GW

I really like your promotion of fact checking :-)

Also, I'd like to especially thank you for offering the frame where every human group is potentially struggling to coordinate on collective punishment decisions from within a fog of war.

I had never explicitly noticed that people won't want their pursuit of justice to seem like unjustified aggression to "allies from a different bubble of fog", and for this reason might want to avoid certain updates in their public actions.

Like, I even had the concept of altruistic punishment and I had the concept of a fog of war, but somehow they never occurred in my brain at the same time before this. Thank you!

If I was going to add a point of advice, it would be to think about being part of two or three "epistemic affinity groups". The affinity group model suggests these groups should be composed of maybe 3 to 15 people each and they should be built around a history of previous prolonged social contact. When the fog of war hits, reach out to at least one of your affinity groups!

Comment by JenniferRM on Why Bayesians should two-box in a one-shot · 2017-12-19T06:34:49.739Z · LW · GW

So, at one point in my misspent youth I played with the idea of building an experimental Omega and looked into the subject in some detail.

In Martin Gardner's writeup on this back in 1973, reprinted in The Night Is Large, the essay explained that the core idea still works if Omega can just predict with 90% accuracy.

Your choice of ONE box pays nothing if you're predicted (incorrectly) to two box, and pays $1M if predicted correctly at 90%, for a total EV of $900,000 (0.1 × 0 + 0.9 × 1,000,000).

Your choice of TWO box pays $1k if you're predicted (correctly) to two box, and pays $1,001,000 if you're predicted to only one box, for a total EV of $101k (0.9 × 1,000 + 0.1 × 1,001,000 = 900 + 100,100).

So the expected profit from one boxing in a normal game, with Omega accuracy of 90% would be $799k.

Also, by adjusting the game's payouts we could hypothetically make any amount of genuine human predictability (even just a reliable 51% accuracy) be enough to motivate one boxing.
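The arithmetic above can be checked directly. This is just the comment's own numbers in code, with the payouts exposed as parameters so the 51%-accuracy variant can be tested too (the $100,000 figure in the last line is my own illustrative choice):

```python
def one_box_ev(accuracy, big=1_000_000):
    # One-boxing pays $big only when the predictor correctly foresaw it.
    return accuracy * big

def two_box_ev(accuracy, big=1_000_000, small=1_000):
    # Two-boxing always pays $small, plus $big when mispredicted as one-boxing.
    return accuracy * small + (1 - accuracy) * (small + big)

print(round(one_box_ev(0.9)))                    # 900000
print(round(two_box_ev(0.9)))                    # 101000
print(round(one_box_ev(0.9) - two_box_ev(0.9)))  # 799000

# One-boxing wins whenever big * (2 * accuracy - 1) > small, so even a 51%
# predictor motivates one-boxing once big > small / 0.02 = 50 * small:
print(one_box_ev(0.51, big=100_000) > two_box_ev(0.51, big=100_000))  # True
```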

The super simplistic conceptual question here is the distinction between two kinds of sincerity. One kind of sincerity is assessed at the time of the promise. The other kind of sincerity is assessed retrospectively by seeing whether the promise was upheld.

Then the standard version of the game tries to put a wedge between these concepts by supposing that maybe an initially sincere promise might be violated by the intervention of something like "free will", and it tries to make this seem slightly more magical (more of a far mode question?) by imagining that the promise was never even uttered, but rather the promise was stolen from the person by the magical mind reading "Omega" entity before the promise was ever even imagined by the person as being possible to make.

One thing that seems clear to me is that if one boxing is profitable but not certain then you might wish you could have done something in the past that would make it clear that you'll one box, so that you land in the part of Omega's calculations where the prediction is easy, rather than being one of the edge cases where Omega really has to work for its Brier score.

On the other hand, the setup is also (probably purposefully) quite fishy. The promise that "you made" is originally implicit, and depending on your understanding of the game maybe extremely abstract. Omega doesn't just tell you what it predicted. If you get one box and get nothing and complain then Omega will probably try to twist it around and blame you for its failed prediction. If it all works then you seem to be getting free money, and why is anyone handing out free money?

The whole thing just "feels like the setup for a scam". Like you one box, get a million, then in your glow of positive trust you give some money to their charitable cause. Then it turns out the charitable cause was fake. Then it turns out the million dollars was counterfeit but your donation was real. Sucker!

And yet... you know, parents actually are pretty good at knowing when their kids are telling the truth or lying. And parents really do give their kids a free lunch. And it isn't really a scam, it is just normal life as a mortal human being.

But also in the end, for someone to look their parents in the eyes and promise to be home before 10PM and really mean it for reals at the time of the promise, and then be given the car keys, and then come home at 1AM... that also happens. And wouldn't it be great to just blame that on "free will" and "the 10% of the time that Omega's predictions fail"?

Looping this back around to the larger AGI question, it seems like what we're basically hoping for is to learn how to become a flawless Omega (or at least build some software that can do this job) at least for the restricted case of an AGI that we can give the car keys without fear that after it has the car keys it will play the "free will" card and grind us all up into fuel paste after promising not to.

Comment by JenniferRM on Melting Gold, and Organizational Capacity · 2017-12-11T21:16:52.585Z · LW · GW
There's a saying for communities: if you're not gaining members, you're losing members.

This heuristic is totally worth turning into a snowclone and applying almost everywhere. If your net worth is not going up, it is probably going down. If your house isn't being remodeled, it is probably falling into disrepair. If your health isn't getting better, it is probably getting worse. Etc.

The general form of the underlying claim is that the derivative with respect to time for any measurable characteristic is almost never zero; it is usually either positive or negative, and without attention, the direction is usually not the one that humans typically prefer.

Comment by JenniferRM on Melting Gold, and Organizational Capacity · 2017-12-11T21:02:05.593Z · LW · GW

Just to chime in with support, I read "The E-Myth Revisited: Why Most Small Businesses Don't Work and What to Do About It". It is not obviously epistemically sound because it is offered more like "lore" than "science". However it stayed with me, and seems to have changed how I approach organizational development, and I think I endorse the changed perspective.

One of the major concept handles that may have been coined in the book (or borrowed into the book, thereby spreading it much further and faster?) is the distinction between "working in your business versus working on your business". A lot of people seem to only "work in", not "work on", and the book makes the claim that this lack of going meta on the business often leads to burnout and business failure.

One thing to keep in mind is that since all debates are bravery debates and this specific community is often great at meta, it is also possible to make the opposite error... you can spend too much time working "on" an organization, and not enough "in" the organization, and the failures there look different. One of my heuristics for noticing if there is "too much organizational meta" is if the bathrooms aren't clean.