Posts

David C Denkenberger on Food Production after a Sun Obscuring Disaster 2017-09-17T21:06:27.996Z · score: 9 (9 votes)
How often do you check this forum? 2017-01-30T16:56:54.302Z · score: 11 (12 votes)
[LINK] Poem: There are no beautiful surfaces without a terrible depth. 2012-03-27T17:30:33.772Z · score: 15 (18 votes)
But Butter Goes Rancid In The Freezer 2011-05-09T06:01:34.941Z · score: 25 (28 votes)
February 27 2011 Southern California Meetup 2011-02-24T05:05:39.907Z · score: 7 (8 votes)
Spoiled Discussion of Permutation City, A Fire Upon The Deep, and Eliezer's Mega Crossover 2011-02-19T06:10:15.258Z · score: 7 (8 votes)
January 2011 Southern California Meetup 2011-01-18T04:50:20.454Z · score: 8 (9 votes)
VIDEO: The Problem With Anecdotes 2011-01-12T02:37:33.860Z · score: 5 (6 votes)
December 2010 Southern California Meetup 2010-12-16T22:28:29.049Z · score: 10 (11 votes)
Starting point for calculating inferential distance? 2010-12-03T20:20:03.484Z · score: 15 (18 votes)
Seeking book about baseline life planning and expectations 2010-10-29T20:31:33.891Z · score: 5 (6 votes)
Luminosity (Twilight fanfic) Part 2 Discussion Thread 2010-10-25T23:07:49.960Z · score: 6 (9 votes)
September 2010 Southern California Meetup 2010-09-13T02:31:18.915Z · score: 10 (11 votes)
July 2010 Southern California Meetup 2010-07-07T19:54:25.535Z · score: 8 (9 votes)

Comments

Comment by jenniferrm on The Power to Demolish Bad Arguments · 2019-09-03T02:44:58.049Z · score: 4 (2 votes) · LW · GW
"...go ahead and tell me your causal model and I'll probably cook up an obvious example to satisfy myself in the first minute of your explanation."

I think maybe we agree... verbosely... with different emphasis? :-)

At least I think we could communicate reasonably well. I feel like the danger, if any, would arise from playing example ping pong and having the serious disagreements arise from how we "cook (instantiate?)" examples into models, and "uncook (generalize?)" models into examples.

When people just say what their model "actually is", I really like it.

When people only point to instances I feel like the instances often under-determine the hypothetical underlying idea and leave me still confused as to how to generate novel instances for myself that they would assent to as predictions consistent with the idea that they "meant to mean" with the instances.

Maybe: intensive theories > extensive theories?

Comment by jenniferrm on The Power to Demolish Bad Arguments · 2019-09-03T01:01:07.927Z · score: 9 (5 votes) · LW · GW
"I appreciate your high-quality comment."

I likewise appreciate your prompt and generous response :-)

I think I see how you imagine a hypothetical example of "no net health from insurance" might work as a filter that "passes" Hanson's claim.

In this case, I don't think your example works super well, and it might almost cause more problems than not?

Differences of detail in different people's examples might SUBTRACT from attention to key facts relevant to a larger claim because people might propose different examples that hint at different larger causal models.

Like, if I was going to give the strongest possible hypothetical example to illustrate the basic idea of "no net health from insurance" I'd offer something like:

EXAMPLE: Alice has some minor symptoms of something that would clear up by itself and because she has health insurance she visits a doctor. ("Doctor visits" is one of the few things that health insurance strongly and reliably causes in many people.) While there she gets a nosocomial infection that is antibiotic resistant, lowering her life expectancy. This is more common than many people think. Done.

This example is quite different from your example. In your example medical treatment is good, and the key difference is basically just "pre-pay" vs "post-pay".

(Also, neither of our examples covers the issue where many innovative medical treatments often lower mortality due to the disease they aim at while, somehow (accidentally?) RAISING all cause mortality...)

In my mind, the substantive big picture claim rests ultimately on the sum of many positive and negative factors, each of which arguably deserves "an example of its own". (Something that raises my confidence quite a lot is hearing the person's own best argument AGAINST their own conclusion, and then hearing an adequate argument against that critique. I trust the winning mind quite a bit more when someone is of two minds.)

No example is going to JUSTIFIABLY convince me, and the LACK of an example for one or all of the important factors wouldn't prevent me from being justifiably convinced by other methods that don't route through "specific examples".

ALSO: For that matter, I DO NOT ACTUALLY KNOW if Robin Hanson is actually right about medical insurance's net results, in the past or now. I vaguely suspect that he is right, but I'm not strongly confident. Real answers might require studies that haven't been performed? In the meantime I have insurance because "what if I get sick?!" and because "don't be a weirdo".

---

I think my key crux here has something to do with the rhetorical standards and conversational norms that "should" apply to various conversations between different kinds of people.

I assumed that having examples "ready-to-hand" (or offered early in a written argument) was something that you would actually be strongly in favor of (and below I'll offer a steelman in defense of), but then you said:

"I wouldn't insist that he has an example 'ready to hand during debate'; it's okay if he says 'if you want an example, here's where we can pull one up'."

So for me it would ALSO BE OK to say "If you want an example I'm sorry. I can't think of one right now. As a rule, I don't think in terms of fictional stories. I put effort into thinking in terms of causal models and measurables and authors with axes to grind and bridging theories and studies that rule out causal models and what observations I'd expect from differently weighed ensembles of the models not yet ruled out... Maybe I can explain more of my current working causal model and tell you some authors that care about it, and you can look up their studies and try to find one from which you can invent stories if that helps you?"

If someone said that TO ME I would experience it as a sort of a rhetorical "fuck you"... but WHAT a fuck you! {/me kisses her fingers} Then I would pump them for author recommendations!

My personal goal is often just to find out how the OTHER person feels they do their best thinking, run that process under emulation if I can, and then try to ask good questions from inside their frames. If they have lots of examples there's a certain virtue to that... but I can think of other good signs of systematically productive thought.

---

If I was going to run "example based discussion" under emulation to try to help you understand my position, I would offer the example of John Hattie's "Visible Learning".

It is literally a meta-meta-analysis of education.

It spends the first two chapters just setting up the methodology and responding preemptively to quibbles that will predictably come when motivated thinkers (like classroom teachers that the theory says are teaching suboptimally) try to hear what Hattie has to say.

Chapter 3 finally lays out an abstract architecture of principles for good teaching, by talking about six relevant factors and connecting them all (very very abstractly and loosely) to: tight OODA loops (though not under that name) and Popperian epistemology (explicitly).

I'll fully grant that it can take me an hour to read 5 pages of this book, and I'm stopping a lot and trying to imagine what Hattie might be saying at each step. The key point for me is that he's not filling the book with examples, but with abstract empirically authoritative statistical claims about a complex and multi-faceted domain. It doesn't feel like bullshit, it feels like extremely condensed wisdom.

Because of academic citation norms, in some sense his claims ultimately ground out in studies that are arguably "nothing BUT examples"? He's trying to condense >800 meta-analyses that cover >50k actual studies that cover >1M observed children.

I could imagine you arguing that this proves how useful examples are, because his book is based on over a million examples, but he hasn't talked about an example ONCE so far. He talks about methods and subjectively observed tendencies in meta-analyses mostly, trying to prepare the reader with a schema in which later results can land.

Plausibly, anyone could follow Hattie's citations back to an interesting meta-analysis, look at its references, track back to a likely study, look in their methods section, and find their questionnaires, track back to the methods paper validating the questionnaire, then look in the supplementary materials to get specific questionnaire items... Then someone could create an imaginary kid in their head who answered that questionnaire some way (like in the study) and then imagine them getting the outcome (like in the study) and use that scenario as "the example"?

I'm not doing that as I read the book. I trust that I could do the above, "because scholarship" but I'm not doing it. When I ask myself why, it seems like it is because it would make reading the (valuable seeming) book EVEN SLOWER?

---

I keep looping back in my mind to the idea that a lot of this strongly depends on which people are talking and what kinds of communication norms are even relevant, and I'm trying to find a place where I think I strongly agree with "looking for examples"...

It makes sense to me that, if I were in the role of an angel investor, and someone wanted $200k from me, and offered 10% of their 2-month-old garage/hobby project, then asking for examples of various of their business claims would be a good way to move forward.

They might not be good at causal modeling, or good at stats, or good at scholarship, or super verbal, but if they have a "native faculty" for building stuff, and budgeting, and building things that are actually useful to actual people... then probably the KEY capacities would be detectable as a head full of examples to various key questions that could be strongly dispositive.

Like... a head full of enough good examples could be sufficient for a basically neurotypical person to build a valuable company, especially if (1) they were examples that addressed key tactical/strategic questions, and (2) no intervening bad examples were ALSO in their head?

(Like if they had terrible examples of startup governance running around in their heads, these might eventually interfere with important parts of being a functional founder down the road. Detecting the inability to give bad examples seems naively hard to me...)

As an investor, I'd be VERY interested in "pre-loaded ready-to-hand theories" that seem likely to actually work. Examples are kinda like "pre-loaded ready-to-hand theories"? Possession of these theories in this form would be a good sign in terms of the founder's readiness to execute very fast, which is a virtue in startups.

A LACK of ready-to-hand examples would suggest that even a good and feasible idea whose premises were "merely scientifically true" might not happen very fast if an angel funded it and the founder had to instantly start executing on it full time.

I would not be offended if you want to tap out. I feel like we haven't found a crux yet. I think examples and specificity are interesting and useful and important, but I merely have intuitions about why, roughly like "duh, of course you need data to train a model", not any high church formal theory with a fancy name that I can link to in wikipedia :-P

Comment by jenniferrm on The Power to Demolish Bad Arguments · 2019-09-02T18:09:35.915Z · score: 34 (18 votes) · LW · GW

I have a strong appreciation for the general point that "specificity is sometimes really great", but I'm wondering if this point might miss the forest for the trees with some large portion of its actual audience?

If you buy that in some sense all debates are bravery debates then the audience can matter a lot, and perhaps this point addresses central tendencies in "global English internet discourse" while failing to address central tendencies on LW?

There is a sense in which nearly all highly general statements are technically false, because they admit of at least some counterexamples.

However, any such statement might still be useful in a structured argument of very high quality, perhaps as an illustration of a troubling central tendency, or as a "lemma" in a multi-part probabilistic argument.

It might even be the case that the MEDIAN EXAMPLE of a real tendency is highly imperfect without that "demolishing" the point.

Suppose for example that someone has focused a lot on higher level structural truths whose evidential basis was, say, a thorough exploration of many meta-analyses about a given subject.

"Mel the meta-meta-analyst" might be communicating summary claims that are important and generally true that "Sophia the specificity demander" might rhetorically "win against" in a way that does not structurally correspond to the central tendencies of the actual world.

Mel might know things about medical practice without ever having treated a patient or even talked to a single doctor or nurse. Mel might understand something about how classrooms work without being a teacher or ever having visited a classroom. Mel might know things about the behavior of congressional representatives without ever working as a congressional staffer. If forced to confabulate an exemplar patient, or exemplar classroom, or an exemplar political representative, the details might be easy to challenge even when the claim about the central tendencies is correct.

Naively, I would think that for Mel to be justified in his claims (even WITHOUT having exemplars ready-to-hand during debate) Mel might need to be moderately scrupulous in his collection of meta-analytic data, and know enough about statistics to include and exclude studies or meta-analyses in appropriately weighed ways. Perhaps he would also need to be good at assessing the character of authors and scientists to be able to predict which ones are outright faking their data, or using incredibly sloppy data collection?

The core point here is that Sophia might not be led to the truth SIMPLY by demanding specificity without regard to the nature of the claims of her interlocutor.

If Sophia thinks this tactic gives her "the POWER to DEMOLISH arguments" in full generality, that might not actually be true, and it might even lower the quality of her beliefs over time, especially if she mostly converses with smart people (worth learning from, in their area(s) of expertise) rather than idiots (nearly all of whose claims might perhaps be worth demolishing on average).

It is totally possible that some people are just confused and wrong (as, indeed, many people seem to be, on many topics... which is OK because ignorance is the default and there is more information in the world now than any human can integrate within a lifetime of study). In that case, demanding specificity to demolish confused and wrong arguments might genuinely and helpfully debug many low quality abstract claims.

However, I think there's a lot to be said for first asking someone about the positive rigorous basis of any new claim, to see if the person who brought it up can articulate a constructive epistemic strategy.

If they have a constructive epistemic strategy that doesn't rely on personal knowledge of specific details, that would be reasonable, because I think such things ARE possible.

A culturally local example might be Hanson's general claim that medical insurance coverage does not appear to "cause health" on average. No single vivid patient generates this result. Vivid stories do exist here, but they don't adequately justify the broader claim. Rather, the substantiation arises from tallying many outcomes in a variety of circumstances and empirically noticing relations between circumstances and tallies.

If I was asked to offer a single specific positive example of "general arguments being worthwhile" I might nominate Visible Learning by John Hattie as a fascinating and extremely abstract synthesis of >1M students participating in >50k studies of K-12 learning. In this case a core claim of the book is that mindless teaching happens sometimes, nearly all mindful attempts to improve things work a bit, and very rarely a large number of things "go right" and unusually large effect sizes can be observed. I've never seen one of these ideal classrooms I think, but the arguments that they have a collection of general characteristics seem solid so far.

Maybe I'll change my mind by the end? I'm still in progress on this particular book, which makes it sort of "top of mind" for me, but the lack of specifics in the book present a readability challenge rather than an epistemic challenge ;-P

The book Made to Stick, by contrast, uses Stories that are Simple, Surprising, Emotional, Concrete, and Credible to argue that the best way to convince people of something is to tell them Stories that are Simple, Surprising, Emotional, Concrete, and Credible.

As near as I can tell, Made to Stick describes how to convince people of things whether or not the thing is true, which means that if these techniques work (and can in fact cause many false ideas to spread through speech communities with low epistemic hygiene, which the book arguably did not really "establish") then a useful epistemic heuristic might be to give a small evidential PENALTY to all claims illustrated merely via vivid example.

I guess one thing I would like to say here at the end is that I mean this comment in a positive spirit. I upvoted this article and the previous one, and if the rest of the sequence has similar quality I will upvote those as well.

I'm generally IN FAVOR of writing imperfect things and then unpacking and discussing them. This is a better than median post in my opinion, and deserved discussion, rather than deserving to be ignored :-)

Comment by jenniferrm on Unconscious Economics · 2019-02-27T22:30:02.331Z · score: 36 (17 votes) · LW · GW

David Friedman is awesome. I came to the comments to give a different Friedman explanation for one generator of economic rationality from a different Friedman book than "strangepoop" did :-)

In "Law's Order" (which sort of explores how laws that ignore incentives or produce bad incentives tend to be predictably suboptimal) Friedman points out that much of how people decide what to do is based on people finding someone who seems to be "winning" at something and copy them.

(This take is sort of friendly to your "selectionist #3" option but explored in more detail, and applied in more contexts than to simply explain "bad things".)

Friedman doesn't use the term "mimesis", but this is an extremely long-lived academic keyword with many people who have embellished and refined related theories. For example, Peter Thiel has a mild obsession with Rene Girard who was obsessed with a specific theory of mimesis and how it causes human communities to work in predictable ways. If you want the extremely pragmatic layman's version of the basic mimetic theory, it is simply "monkey see, monkey do" :-P

If you adopt mimesis as THE core process which causes human rationality (which it might well not be, but it is interesting to think of a generator of pragmatically correct beliefs in isolation, to see what its weaknesses are and then look for those weaknesses as signatures of the generator in action), it predicts that no new things in the human behavioral range become seriously optimized in a widespread way until AFTER at least one (maybe many) rounds of behavioral mimetic selection on less optimized random human behavioral exploration, where an audience can watch who succeeds and who fails and copy the winners over and over.

The very strong form of this theory (that it is the ONLY thing) is quite bleak and probably false in general, however some locally applied "strong mimesis" theories might be accurate descriptions of how SOME humans select from among various options in SOME parts of real life where optimized behavior is seen but hard to mechanistically explain in other ways.

Friedman pretty much needed to bring up a form of "economic rationality" in his book because a common debating point in modern times is that incentives have nothing to do with, for example, criminal law, because criminals are mostly not very book smart, and often haven't even looked up (much less remembered) the number of years of punishment that any given crime might carry, and so "can't be affected by such numbers".

(Note the contrast to LW's standard inspirational theorizing about a theoretically derived life plan... around here actively encouraging people to look up numbers before making major life decisions is common.)

Friedman's larger point is that, for example, if burglary is profitable (perhaps punished by a $50 fine, even when the burglar has already sold their loot for $1500), then a child who has an uncle who has figured out this weird/rare trick and makes a living burgling homes will see an uncle who is rich and has a nice life and gives lavish presents at Christmas and donates a lot to the church and is friends with the pastor... That kid will be likely to mimic that uncle without looking up any laws or anything.

Over a long period of time (assuming no change to the laws) the same dynamic in the minds of many children could lead to perhaps 5% of the economy becoming semi-respected burglars, though it would be easy to imagine that another 30% of the private economy would end up focused on mitigating the harms caused by burglary to burglary victims?

(Friedman does not apply the mimesis model to financial crimes, or risky banking practices. However that's definitely something this theory of behavioral causation leads me to think about. Also, advertising seems to me like it might be a situation where harming random strangers in a specific way counts as technically legal, where the perpetration and harm mitigation of the act have both become huge parts of our economy.)

This theory probably under-determines the precise punishments that should be applied for a given crime, but as a heuristic it probably helps constrain punishment sizes to avoid punishments that are hilariously too small. It suggests that any punishment is too small which allows there to exist a "viable life strategy" that includes committing a crime over and over and then treating the punishment as a mere cost of business.
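To make the "mere cost of business" test concrete, here is a minimal back-of-envelope sketch. The $1500/$50 figures come from the hypothetical above; the catch probability and the larger penalty are my own invented assumptions, not Friedman's numbers.

```python
# "Viable life strategy" test: a punishment is too small if the expected
# value of committing the crime repeatedly stays positive.
def expected_value_per_crime(loot, p_caught, penalty):
    """Net expected payoff of one crime, with the penalty priced in dollars."""
    return loot - p_caught * penalty

# The hypothetical above: $1500 of loot, a $50 fine even when caught.
print(expected_value_per_crime(loot=1500, p_caught=0.5, penalty=50))    # 1475.0

# Burglary-as-a-career only stops paying once the expected penalty
# exceeds the expected loot:
print(expected_value_per_crime(loot=1500, p_caught=0.5, penalty=4000))  # -500.0
```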

If you sent burglars to prison for "life without parole" on first offenses, mimesis theory predicts that it would put an end to burglary within a generation or four, but the costs of such a policy might well be higher than the benefits.

(Also, as Friedman himself pointed out over and over in various ways, incentives matter! If, hypothetically, burglary and murder are BOTH punished with "life without parole on first offense" AND murdering someone makes you less likely to be caught as a burglar, then murder/burglary is the combination that might be mimetically generated: a pair of crimes that is mimetically viable together even when one of them alone is not... If someone was trying to use data science to tune all the punishments to suppress anti-social mimesis, they should really be tuning ALL the punishments and keeping careful and accurate track of the social costs of every anti-social act as part of the larger model.)

In reality, it does seem to me that mimesis is a BIG source of valid and useful rationality for getting along in life, especially for humans who never enter Piaget's "Stage 4" and start applying formal operational reasoning to some things. It works "good enough" a lot of the time that I could imagine it being a core part of any organism's epistemic repertoire?

Indeed, entire cultures seem to exist where the bulk of humans lack formal operational reasoning. For example, anthropologists who study such things often find that traditional farmers (which was basically ALL farmers, prior to the enlightenment) with very clever farming practices don't actually know how or why their farming practices work. They just "do what everyone has always done", and it basically works...

One keyword that offers another path here is one Piaget himself coined: "genetic epistemology". This wasn't meant in the sense of DNA, but rather in the sense of "generative", like "where and how is knowledge generated". I think stage 4 reasoning might be one real kind of generator (see: science and technology), but I think it is not anything like the most common generator, neither among humans nor among other animals.

Comment by jenniferrm on Transhumanists Don't Need Special Dispositions · 2018-12-09T06:14:03.592Z · score: 9 (6 votes) · LW · GW

I can see two senses for what you might be saying...

I agree with one of them (see the end of my response), but I suspect you intend the other:

First, it seems clear to me that the value of a philosophy early on is a speculative thing, highly abstract, oriented towards the future, and latent in the literal expected value of the actions and results the philosophy suggests and envisions.

However, eventually, the actual results of actual people whose hands were moved by brains that contain the philosophy can be valued directly.

Basically, the value of the results of a plan or philosophy screens off the early expected value of the plan or philosophy... not entirely (because it might have been "the right play, given the visible cards" with the deal revealing low probability outcomes). However, bad results provide at least some Bayesian evidence of bad ideas without bringing more of a model into play.

So when you say that "the actual values of transhumanism" might be distinguished from less abstract "things done in the name of transhumanism" that feels to me like it could be a sort of category error related to expected value? If the abstraction doesn't address and prevent highly plausible failure modes of someone who might attempt to implement the abstract ideas, then the abstraction was bad.

(Worth pointing out: The LW/OB subculture has plenty to say here, though mostly by Hanson, who has been pointing out for over a decade that much of medicine is actively harmful and exists as a costly signal of fitness as an alliance partner aimed at non-perspicacious third parties through ostensible proofs of "caring" that have low actual utility with respect to desirable health outcomes. Like... it is arguably PART OF OUR CULTURE that "standard non-efficacious bullshit medicine" isn't "real transhumanism". However, that part of our culture maybe deserves to be pushed forward a bit more right now?)

A second argument that seems like it could be unpacked from your statement, that I would agree with, is that well formulated abstractions might contain within them a lot of valuable latent potential, and in the press of action it could be useful to refer back to these abstractions as a sort of True North that might otherwise fall from the mind and leave one's hands doing confused things.

When the fog of war descends, and a given plan seemed good before the fog descended, and no new evidence has arisen to the contrary, and the fog itself was expected, then sticking to the plan (however abstract or philosophical it may be) has much to commend it :-)

If this latter thing is all you meant, then... cool? :-)

Comment by jenniferrm on Transhumanists Don't Need Special Dispositions · 2018-12-08T20:28:47.114Z · score: 7 (4 votes) · LW · GW

Has someone been making bad criticisms of transhumanism lately?

In 2007, when this was first published, I think I understood which bravery debate this essay might apply to (/me throws some side-eye in the direction of Leon Kass et al), but in 2018 this sort of feels like something that (at least for a LW audience I would think?) has to be read backwards to really understand its valuable place in a larger global discourse.

If I'm trying to connect this to something in the news literally in the last week, it occurs to me to think about He Jiankui's recent attempt to use CRISPR technology to give HIV-immunity to two girls in China, which I think is very laudable in the abstract but also highly questionable as actually implemented based on current (murky and confused) reporting.

Basically, December of 2018 seems like a bad time to "go abstract" in favor of transhumanism, when the implementation details of transhumanism are finally being seriously discussed, and the real and specific challenges of getting the technical and ethical details right are the central issue.

Comment by jenniferrm on Is Clickbait Destroying Our General Intelligence? · 2018-12-03T08:53:18.640Z · score: 13 (4 votes) · LW · GW

One thing to keep in mind is sampling biases in social media, which are HUGE.

Even if we just had pure date ordered posts from people we followed, in a heterogeneous social network with long tailed popularity distributions the "median user" sees "the average person they follow" having more friends than them.
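If it helps to see that claim in motion, here is a toy simulation of a Zipf-ish follow graph (every parameter here is invented for illustration):

```python
# Sketch of the "friendship paradox" on a long-tailed follow graph: most
# users follow accounts that are, on average, more popular than they are.
import random
random.seed(0)

n = 2000
weights = [1.0 / (i + 1) for i in range(n)]  # Zipf-ish popularity

follows = {u: set() for u in range(n)}
for u in range(n):
    for v in random.choices(range(n), weights=weights, k=20):
        if v != u:
            follows[u].add(v)

followers = {u: 0 for u in range(n)}
for u in range(n):
    for v in follows[u]:
        followers[v] += 1

# For each user: do the accounts they follow average more followers than they have?
outnumbered = sum(
    1 for u in range(n)
    if follows[u]
    and sum(followers[v] for v in follows[u]) / len(follows[u]) > followers[u]
)
print(f"{outnumbered / n:.0%} of users look less popular than their feed")
```

On long-tailed graphs like this, nearly every user ends up outnumbered by their own feed, with no algorithmic curation required at all.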

Also, posting behavior tends to also have a long tail, so sloppy prolific writers are more visible than slow careful writers. (Arguably Asimov himself was an example here: he was *insanely* prolific. Multiple books a year for a long time, plus stories, plus correspondence.)

Then, to make the social media sampling challenges worse, the algorithms surface content to mere users that is optimized for "engagement", and what could be more engaging than the opportunity to tell someone they are "wrong on the Internet"? Unless someone is using social media very *very* mindfully (like trying to diagonalize what the recommendation engines think of them) they are going to see what causes them to react.

I don't know what is really happening to the actual "average mind" right now, but I don't think many other people know either. If anyone has strong claims here, it makes me very curious about their methodology.

The newsfeed team at Facebook probably has the data to figure a lot of this out, but there is very little incentive for them to be very critical or tell the truth to the public. However, in my experience, the internal cultures of tech companies are often not that far below/behind the LW zeitgeist and I think engineering teams sometimes even go looking for things like "quality metrics" that they can try to boost (counting uses of the word "therefore" or the equivalent idea that uses semantic embedding spaces instead) as a salve for their consciences.

More deeply, like on historical timescales, I think that repeated low level exposure to lying liars improves people's bullshit detectors.

By modern standards, people who first started listening to radio were *insanely gullible* in response to the sound of authoritative voices, both in the US and in Germany. Similarly for TV a few decades later. The very first ads on the Internet (primitive though they were) had incredibly high conversion rates... For a given "efficacy" of any kind of propaganda, more of the same tends to have less effect over time.

I fully expect this current media milieu to be considered charmingly simple, with gullible audiences and hamhanded influence campaigns, relative to the manipulative tactics that will be invented in future decades, because this stuff will stop working :-)

Comment by jenniferrm on In Logical Time, All Games are Iterated Games · 2018-10-10T16:18:09.254Z · score: 5 (3 votes) · LW · GW
(You might think meta-iteration involves making the other player forget what it learned in iterated play so far, so that you can re-start the learning process, but that doesn't make much sense if you retain your own knowledge; and if you don't, you can't be learning!)

If I was doing meta-iteration my thought would be to maybe turn the iterated game into a one-shot game of "taking the next step from a position of relative empirical ignorance and thereby determining the entire future".

So perhaps make up all the plausible naive hunches that I or my opponent might naively believe (update rules, prior probabilities, etc), then explore the combinatorial explosion of imaginary versions of us playing the iterated game starting from these hunches. Then adopt the hunch(es) that maximizes some criteria and play the first real move that that hunch suggests.

This would be like adopting tit-for-tat in iterated PD *because that seems to win tournaments*.

After adopting this plan your in-game behavior is sort of simplistic (just sticking to the initial hunch that tit-for-tat would work) even though many bits of information about the opponent are actually arriving during the game.
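Here's a minimal sketch of that "pick your hunch by tournament, then play it blind" move, with a deliberately tiny invented pool of hunches:

```python
# Meta-iteration as a one-shot game: round-robin the candidate hunches
# (strategies) against each other in iterated Prisoner's Dilemma, then
# commit to whichever hunch scored best overall.

PAYOFFS = {  # (my_move, their_move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else "C"

def always_defect(my_hist, their_hist):
    return "D"

def always_cooperate(my_hist, their_hist):
    return "C"

def play_iterated(strat_a, strat_b, rounds=100):
    """Play the iterated game; return strat_a's total score."""
    hist_a, hist_b, score = [], [], 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score += PAYOFFS[(move_a, move_b)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score

def meta_iterate(hunches):
    """The one-shot move: simulate all pairings, adopt the top scorer."""
    totals = {h: sum(play_iterated(h, other) for other in hunches) for h in hunches}
    return max(totals, key=totals.get)

best = meta_iterate([tit_for_tat, always_defect, always_cooperate])
print(best.__name__)
```

Amusingly, with only these three hunches in the imagined pool, always_defect narrowly outscores tit_for_tat; tit-for-tat only "wins tournaments" when the pool is rich in conditional cooperators. Which is sort of the point: the single meta-iterated choice does all the work, and it is only as good as the ensemble of opponents you imagined before the first real move.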

If I try to find analogies in the real world here it calls to mind martial arts practice with finite training time. You go watch a big diverse MMA tournament first. Then you notice that grapplers often win. Meta-iteration has finished and then your zeroth move is to decide to train as a grappler during the limited time before you fight for the first time ever. Then in the actual game you don't worry too much about the many "steps" in the game where decision theory might hypothetically inject itself. Instead, you just let your newly trained grappling reflexes operate "as trained".

Note that I don't think this is even close to optimal! (I think "Bruce Lee" beats this strategy pretty easily?) However, if you squint you could argue that this rough model of meta-iteration is what humans mostly do for games of very high importance. Arguably, this is because humans have neurons that are slow to rewire for biological reasons rather than epistemic reasons...

However, when offered the challenge that "meta-iteration can't be made to make sense", this is what pops into my head :-)

When I try to think of a more explicitly computational model of meta-iteration-compatible gaming my attention is drawn to Core War. If you consider the "players of Core War" to be the human programmers, their virtue is high quality programming and they only make one move: the program they submit. If you consider the "players of Core War" to be the programs themselves their virtues are harder to articulate but speed of operation is definitely among them.

Comment by jenniferrm on Weird question: could we see distant aliens? · 2018-04-23T00:12:24.115Z · score: 18 (5 votes) · LW · GW

Paul, I love what you're doing here, have been thinking about this a long time. I look forward to seeing an answer and would like to write a clarifying essay full of non answers :-)

By "get our attention" I mean: be interesting enough that we would already have noticed it and devoted some telescope time to looking in more detail at that part of the sky. (Once they have our attention it seems significantly cheaper to send a message.)

This suggests that we can list various anomalies that might have been thought to be extraterrestrials and already received attention, and then exclude them for various reasons.

1. For example, Tabby's Star recently had me wondering/hoping/worrying for a good year or two.

It is only 1,280 light years from Earth and I think it is plausible that we wouldn't even be able to see similar stars on the far side of our own galaxy, which is a mere ~100k light years in diameter... it can't count for this exercise because seeing it from other galaxies would be quite a trick.

HOWEVER, despite being an F type star (a type that shouldn't be variable) that varies in very irregular ways, it was interesting enough to raise $100k on Kickstarter for telescope time, and to deserve its own feed. I think people are pretty sure it is natural at this point, with a probable case of "indigestion" from the star colliding with a metallic planet in the last 10k years or so.

However, the fact that it got our attention means someone might do that to one planet/star combo like clockwork, every 1000 years in a regularly spaced line of stars.

It could work as a local "we exist" signal whose clocklike timing would count as the signature of intentional planning and sort of function like an invitation to show up at the logical NEXT star in the timed "indigestion collision" sequence to watch the collision and parley with whoever else showed up...

However, I don't think these events would be bright enough for the weird question?

(This does raise the question as to what counts as a "message" and what the bitrate of said message is allowed to be? Is a valid message just "this was intentionally created", or "this was intentionally sent", or "here is a place that will be interesting at a future time" or something even more than that? Also, what if the evidence of intentionality comes from a coincidence of timing spread across spans of time that requires detailed astronomical records for longer than humans seem to be able to maintain political or cultural or linguistic institutions?)

2. In 1967 pulsars caused people to be very excited for a short period of time, thinking that such regularity must be intentional. However it was then worked out that pulsars were just spinning charged neutron star remnants left over from supernovas. Still, they are pretty great natural clocks ;-)

This might make them a great "medium" in which to encode intentionality, but it means you have to modulate or sculpt them somehow so that when alien astronomers get interested they can see a deviation from what's natural.

Another problem is that they are highly directional, with most of the energy going out of their wobbling north and south poles (which when they wobble across your telescope is one of the pulses), so they don't signal very widely.

Another problem is that they aren't actually very bright. We see them in the Milky Way, and in our galactic neighbor the Large Magellanic Cloud, but finding an unusually bright pulsar 2 million light years away in Andromeda was newsworthy. In 2003 McLaughlin and Cordes tried to find very bright pulsars further afield and maaaaybe got a hit in M33 (aka "The Triangulum Galaxy") which is only 3M light years away. But seeing these things from 8000M light years away is highly questionable.

Binary pulsars are more rare and more likely to get scientific attention.

The first binary pulsar, discovered in 1974, won the 1993 Nobel in physics for Taylor and Hulse. By 2005 there were 113 discovered. They are interesting because they modulate the "clock" dynamics inherent to singleton pulsars.

Binary pulsars tick faster when coming towards you and tick slower when moving away, so the orbital parameters of the system can be characterized precisely just from the timing of the ticks. These orbital parameters measurably change on the timescale of human lives, slowing down in a way that can be naturally interpreted as indirect proof that gravity waves exist and are pulling energy out of such massive systems :-)
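A rough sketch of why the timing alone suffices (first order, non-relativistic, and my notation rather than anything from the papers):

$$P_{\text{obs}}(t) \approx P_0 \left( 1 + \frac{v_r(t)}{c} \right)$$

where $P_0$ is the pulsar's rest-frame spin period and $v_r(t)$ is its radial velocity along the line of sight. Since $v_r(t)$ traces the Keplerian orbit, fitting the periodic wobble in $P_{\text{obs}}$ recovers the orbital elements.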

If you wanted to catch someone's attention you might construct or find a three star system that included a pulsar aimed the way you wanted to send a message, and then mess with the orbital parameters intentionally.

Non-hierarchical three star systems are chaotic by default, and well understood chaotic systems can be controlled with surprisingly little energy, which might make something like this attractive.

A probable hierarchical trinary-with-a-pulsar (and so not necessarily chaotic) that includes a sun-like star was surveyed in 2006. The third star is not totally confirmed, and even if it exists the arrangement here is more like a binary system, where one of the binaries has a large planet/star/thing orbiting it alone (hence "hierarchical" and hence probably not chaotic).

There is another pulsar trinary that might be chaotic found in 2014. These things tend not to last however, because "chaos".

Those are the only two I know of. I'm pretty sure the trinaries are being examined "because physics" but I've heard no peeps about unusual patterns of timing from them. But still, no matter how many neighbors pulsars have, they are fundamentally too dim and too directional to count as part of an answer to the weird question here I think...

3. The 234 stars that might be called "Borra's Hundreds" can probably also be discounted directly because at best, if these are signaling extraterrestrials, then they are just using puny pulsed lasers with roughly our own planet's industrial energy outputs, in more or less the visible spectrum (blockable by dust), which probably doesn't count because it obviously can't be seen from somewhere far away like the Sloan Great Wall.

The idea, initially articulated by Ermanno Borra in 2010 as I minimally understand it, is that a laser could shoot out light of nearly any frequency (frequency as given by the wavelength of individual photons), but if we or aliens could pulse the quantity of photons sent out fast enough, this would be visible to typical methods for measuring the "frequency of light from a star" in standard spectrographic surveys whose intentional goal is to figure out the atomic constituents of those stars from the wavelengths (and hence the frequencies) of the specific photons they emit. The methods aren't looking for very fast pulses of more and then less photons, but they could nonetheless see them by "accident".
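Here's a toy illustration of the general trick, with my own made-up numbers and none of Borra's actual pipeline: a small periodic ripple buried in a noisy spectrum pops out cleanly when you Fourier-transform the spectrum itself.

```python
# Hide a faint periodic modulation in a fake noisy spectrum, then recover it
# by taking the Fourier transform of the spectrum (not of the light itself).
import numpy as np

rng = np.random.default_rng(0)
wavelength = np.linspace(400, 700, 4096)             # nm, fake optical band
continuum = 1.0 + 0.05 * rng.standard_normal(4096)   # noisy stellar continuum

ripple_period_nm = 2.5                               # the injected "signal"
spectrum = continuum + 0.02 * np.sin(2 * np.pi * wavelength / ripple_period_nm)

# A periodic ripple in wavelength shows up as a sharp peak in this transform.
power = np.abs(np.fft.rfft(spectrum - spectrum.mean())) ** 2
freqs = np.fft.rfftfreq(len(wavelength), d=wavelength[1] - wavelength[0])

peak = freqs[np.argmax(power[1:]) + 1]               # skip the DC bin
print(f"recovered ripple period: {1 / peak:.2f} nm")  # ~2.50 nm
```

The ripple amplitude here is well below the per-bin noise, but coherence across thousands of spectral bins makes it trivially detectable, which is roughly why archival survey data could carry such signals unnoticed.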

In 2012, Borra tried to explain it again and spelled out more of the connections to SETI, basically saying that formal SETI was doing one thing, but spectrographic star surveys were better funded and you could do SETI there too just by processing the exact same data through another filter to make the possible injected signals pop out.

Aliens seeking to be discovered would know anyone smart would do spectrographic surveys of the stars, so that would be an obvious place to try to put a signal.

Then in 2016 Borra published again, now with Trottier as a coauthor, saying that he'd gone ahead and looked at archival spectral data, and found 234 stars that seemed to be sending out "peculiar periodic spectral modulations" of the sort that he predicted... unless the recorded version of the data had frequency artifacts in it?

As summarized by Snopes (normally a good source) the claim is disregarded but all the criticisms are status attacks rather than attending to any kind of object level analysis of the math, the physics, or the collected data.

The BEST argument against Borra is one I've almost never seen leveled, which is that the data processing method involved complex math, and had error bars, and they analyzed 2.5 million stars and only found 234 results. This makes me instantly wonder: data mining artifact?
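The quick arithmetic behind that worry (pure counting, no astronomy) goes something like this:

```python
# If a noise-only threshold lets flukes through at some small rate, how
# strict would it have to be for 234 hits in 2.5 million stars to be chance?
n_stars = 2_500_000
n_hits = 234
print(f"implied fluke rate: {n_hits / n_stars:.1e}")  # ~9.4e-05
```

A one-in-ten-thousand fluke rate is only about a 3.7-sigma one-tailed cut, so unless the actual analysis threshold was much stricter than that, 234 hits out of 2.5 million is at least numerically consistent with selection on noise.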

But in that case you'd expect someone to make this argument seriously and explain in detail how the math went wrong somewhere? I don't get it.

Maybe people think that lasers that blink with a terahertz frequency are impossible because of "laser physics" or something? But no one seems to have raised this objection. And it seems to me like it might be possible to do this just by having a normal continuous laser and then spinning something very very fast that periodically blocks the light coming out of the laser? I'm not a laser engineer, I don't know, it just seems weird to me that I've seen no speculation one way or another.

I've tried googling the coordinates of the stars Borra found and none of them have wikipedia pages, Google sends all the searches for the stellar coordinates back to Borra's own paper. I don't know how many light years away any of them are.

There's no kickstarter. The normal SETI people at UC Berkeley eventually, in October of 2016, agreed to look at a few of Borra's stars but you could see their heart wasn't in it. There's been no word since then.

However, despite humans being boring and uninterested in important things, what about a generalization of this method! :-)

(EDIT NOTE: In the first draft I had text here where I imagined Niven's fictional Ringworld made out of an impossible super material and then suggested modifications to create a "flicker ring" that could spin around a star and make the star appear to blink at spectral frequencies from certain perspectives. My optical reasoning was ludicrously wrong in the first draft, built around how things would be seen from very close rather than very far. Even with the hypothetical magic substance "scrith" a flicker ring big enough and fast enough to look right at a vast distance would be impossible. The material would have to be many orders of magnitude more magical than scrith to work in this capacity.)

4. Hoag's Object is pretty fascinating and fascinatingly pretty.

Sometimes I wonder if the only reason we don't believe in aliens yet is some kind of social signaling equilibrium similar to plate tectonics.

In 1915 Wegener was like "Duh, the continents obviously line up like a jigsaw puzzle" and people were like "No way!" and then 50 years later they were like "Oh, yeah, I guess so, funny how this is obvious to kids now but wasn't obvious to fancy scientists in 1890..."

If there are "Hoagians" shepherding all the stars in their galaxy into a pretty ring as a collective art project (or maybe just to prevent expensive damaging collisions?), that would be pretty epic.

In terms of the weird question however, the problem is that Hoag's Object is only 9M light years away (vs Andromeda's 2M), and that's part of why we easily see it. Picking it out uniquely from 8000M light years away would be a totally different thing. Also, it is only visible if you see it from the poles rather than the edges, which is another reason it isn't a very good universal signal.

5. Black hole collisions have never been attributed to aliens, to my knowledge. However, they are obviously big and awesome and get a lot of news. If you could survey moderately sized black holes in your galaxy and nudge them around in a controlled way you might have a partial solution? Timed collisions would be hard to deny were the work of aliens, I think. Imagine:

Chirp! (then wait 16.30 days)

Chirp! (2.32 days) Chirp! (then wait another 16.30 days)

Chirp! (2.32 days) Chirp! (2.32 days) Chirp!

You going to tell me that's not an intentional "here I am!" signal? You can't! :-P

From a long term signaling perspective (like to break through the Fermi Paradox by visibly declaring once and for all "intelligence existed!" before the Great Filter gets you) the problem here would be that this would be a one time signal that only communicates to a small shell of stars a precise distance away.

Many such events could have occurred before humans could hear them, and many might exist after we go extinct, with us none the wiser :-/

6. Gamma Ray Bursts are more usually associated with death than life. Basically they are so bright that they would probably cause mass extinctions in their home galaxies.

However, if you could figure out a way to cause them (not that hard? just crash neutron stars into each other in head on collisions?) and somehow survive a series of six-ish closely timed blasts then it could work like black holes, but way more obvious. No theory of relativity is even required to know to build a gravity wave detector! Black holes are still probably better in terms of style points, because their collisions don't seem to cause mass extinctions :-P

---

Anyway, my point is that all of these are things that have already come to mainstream scientific human attention and caused lots of exploratory interest and analysis.

ALSO, all of them have been more or less dismissed by mainstream astronomers as being conclusive evidence of extraterrestrial civilizations.

ALSO, I don't instantly see super obvious ways to twist any of these things around to function as a clean cut answer to the weird question where a short-lived Kardashev Type III species with our physics and material science (but better and more manufacturing capacity) could set something up, have it persist after the Great Filter gets them, and signal to everyone forever.

Comment by jenniferrm on April Fools: Announcing: Karma 2.0 · 2018-04-01T15:24:54.477Z · score: 45 (11 votes) · LW · GW

I'm sure this day will be remembered in history as the day that LessWrong became great again!

Comment by jenniferrm on LessWrong Diaspora Jargon Survey · 2018-03-27T03:48:50.291Z · score: 45 (10 votes) · LW · GW

Your experimental results might be indicative of something other than problems merely within LW...

"I decided to test the hypothesis that LessWrongers practice weak scholarship in regards to jargon. In particular, that for many important terms the true source of knowledge has not been transmitted to community members." [bold added]

The problem here is that a better reference group than "LessWrongers" might be "scientists"?

Or perhaps the group of "scholars" (understood as all the scientists, plus all the people "not doing real science" per whatever weird definition someone has for calling something "science"), or perhaps even the still larger category of "humans"?

There is a generalized problem with scholarship-related cognition in the widespread failure of humans to remember the source of the contents of their minds. Photographs of events you weren't even alive for become vague visual memories. Hearsay becomes eyewitness report. Fishy stories from people you know you shouldn't trust become stories you don't remember the source of... and then become things you weakly believe... basically: in general, by default, human minds are terrible at retaining auditable fact profiles.

But suppose that we don't expect that much of generic humans, and only hold scientists to high intellectual standards?

Still a no go!

As per Stigler's Law Of Eponymy there are almost no laws which were actually named after their (carefully searched for) originators! The general pattern is similar to art: "Good scientists borrow, great scientists steal."

In practice, the thing that will be remembered by large groups of people is good popularization, especially when a well received version keeps things simple and vivid and doesn't even bother to mention the original source.

If LW can fix this, it will be doing something over and above what science itself has accomplished in terms of scholarly integrity. (Whether this will actually help with technological advances is perhaps a separate question?)

----

For an example here, I know about "ugh fields" because I invented that term and know the details of its early linguistic history.

1. The coining in this case preceded the existence of the overcomingbias blog by a few years... it was coined in conversations in the 2001-2003 era in and around College of Creative Studies (CCS) seminars at UC Santa Barbara (UCSB) between me and friends, some of whom later propagated the term into this community.

My use of the term was aimed at describing the subjective experience of catastrophic procrastination along with some causal speculation. It seemed that mild anxiety over a looming deadline could cause mild diversion into a nominally anxiety-ameliorating behavior like video games... which made the deadline situation worse... and thereby turned into a positive feedback loop of "ugh". These ugh fields would feel as if they have an external source whose apparent locus is "the deadline", with the amount of ugh increasing exponentially as the deadline gets closer and closer.

(I failed a class or two back then more or less because of this dynamic until I restructured my soul into a somewhat more platonically moderate pattern using Allan Bloom's translation of The Republic as my inspiration. Basically: consciously locally optimized hedonism has potentially unrecoverable failure modes and should be used with caution, if at all. Make lists! Perhaps amortize hedonism over times equal to or greater than your personal budgeting cycle? Or maybe better yet try to slowly junk hedonism in favor of duty and virtue? Anyway. This is a WIP for me still...)

2. Two of my friends from UCSB (Anna and Steve) were part of the conversations about me failing classes at UCSB and working out a causal model thereof, and in roughly 2008 brought the term to "Benton House" (which was the first "rationalist house" wherein lived participants in "the visiting fellows program" of the old version of MIRI which was then called "the Singularity Institute for Artificial Intelligence (SIAI)").

3. The term then propagated through the chalk board culture of SIAI (and possibly into diaspora rationalist houses?) and eventually the concept turned into a LW post. The new site link for this post doesn't work at the moment that I write this, but archive.org still remembers the 2010 article when I said of "ugh fields":

"It is a head trip to see a pet term for a quirk of behavior reflected back at me on the internet as an official name for a phenomenon."

4. And the term keeps rolling around. It basically has a life of its own now, accreting hypothetical mechanisms and stories and interpretations as it goes.

It would not surprise me if some academic (2 or 10 or 50 years from now) turns it into a law and the law gets named after them, in fulfillment of Stigler's Law :-P

----

The core thing I'm trying to communicate is that humans in general can only think sporadically, and with great effort, and misremember almost everything, and especially misremember sources/credit/trust issues. The world has too many details, and neurons are too expensive. External media is required.

LessWrongers falling prey to attribution failures is to be expected by default, because LessWrong is full of humans. The surprising thing would be generally high performance in this domain.

My working understanding is that many of the original English-language Enlightenment folks were mindful of the problem and worked to deal with it by mostly distrusting words and instead constantly returning to detailed empirical observations (or written accounts thereof), over and over, at every event where it was hoped that true knowledge of the world might be "verbally" transmitted.

Comment by jenniferrm on Appropriateness of Discussing Rationalist Discourse of a Political Nature on LW? · 2018-03-26T13:01:59.958Z · score: 8 (2 votes) · LW · GW

London, New York, and nine full time employees in the NYT media orbit... updated!

Comment by jenniferrm on Shadow · 2018-03-20T19:26:41.226Z · score: 4 (1 votes) · LW · GW

I see below that you're aiming for something like "fear in political situations". This calls to mind, for me, things like the triangle hypothesis, the Richardson arms race model, and less rigorously but clearly in the same ambit also things like confidence building measures.

These are tough topics and I can see how it might feel right to just "publish something" rather than sit on one's hands. I have the same issue myself (minus the courage to just go for it anyway) which leads me mostly to comment rather than top post. My sympathy... you have it!

Comment by jenniferrm on Appropriateness of Discussing Rationalist Discourse of a Political Nature on LW? · 2018-03-16T11:20:20.807Z · score: 12 (3 votes) · LW · GW

Uh... I can try to unroll the context and thinking, I guess...

I think in my head I initially associated the name with childhood memories of a vaguely Investigative TV News Program that was apparently founded in 1986.

Also, it appears to be the name of an entire genre of magazines that includes things like New Statesman, which makes it a bit tricky to google for details about the thing itself, rather than the category of the same name.

It seemed plausible to me, given the general collapse of the journalism industry, that the old 1990's brand still existed, had moved to the Internet, mutated extensively, and was now reduced to taking potshots at people like Scott in order to drum up eyeballs?

(Plausibly the website could be co-branded with a TV version still eking out some sort of half life among the cable TV channels with 3 or 4 digit numbers, that could trace its existence back to 1986?)

None of what seemed plausible to me is actually true.

The old thing named Current Affairs apparently died in 1996, and was briefly revived in 2005 and then died again. The new thing started in 2015, and has nothing to do with the old thing.

Since I was surprised by the recency of the founding of the new incarnation of "something named Current Affairs" it seemed to me that other people might be confused too, so I linked to the supporting evidence.

Also, when Scott speaks indirectly of the callout, he makes a "request not to be cited in major national newspapers". But the name here is so maddeningly generic that I have difficulty even Googling my way to reliable circulation numbers.

Is it actually major? Do they even have a paper print format? I'm still not sure, and don't really care. Maybe Scott was fooled into thinking they matter too at first?

Basically, my model at this point, given the paucity of hard data, is that this new Current Affairs could easily be nothing like a "major national newspaper" but rather it could just be like two or three yahoos in a basement struggling to be professional journalists in an age when professional journalism is dying, and finding that they have to start trolling virtuously geeky bloggers to stir up drama and attract eyeballs to their website to make ends meet.

The circulation numbers and actual ambient reputation potentially matter, because if they are very low then who cares if some troll hasn't read Scott's old essay very carefully, but if many high quality eyeballs were reading the inaccurate summary and criticism, then the besmirching insinuations could hurt Scott.

In the meantime, maybe this will be the beginning of a beautiful friendship. When strangers get into fights in real life, it isn't totally uncommon for them, years later, to end up great friends who know each other's true measure :-)

Comment by jenniferrm on Appropriateness of Discussing Rationalist Discourse of a Political Nature on LW? · 2018-03-15T09:27:23.887Z · score: 16 (4 votes) · LW · GW

I appreciate that you're asking at a very "high level of meta" about a controversial topic.

Also, I appreciate that you helped me to know that something had even happened. I read Scott's original article back when it was fresh, but the Robinson piece wasn't on my radar until I searched for Scott's rebuttal on the basis of the question and found a link back to it.

I'm still not sure if I understand all the ins and outs here, but I will say that this is a complex topic which I personally avoid writing about because in many ways I'm sort of a coward...

However Scott reads to me as grappling with complicated ideas, in public, against his own interests, in a basically admirable way, while Robinson reads to me as having had to push some content out on a deadline (with a larger goal of trying to get his readers to buy the topmost book in the image at the end of his article).

I sympathize with Scott having been dissed in a magazine whose name suggests falsely that it has a long history, and thus having been put in a position to either (1) defend himself and give the upstart that is insulting him the attention which was probably the point of the attack or (2) not defend himself.

I think Scott's move of not putting his rebuttal on his own main page, but just putting it where it can be searched for (so it comes up as a defense if people search for the topic specifically, but doesn't move a lot of eyeballs) and running the URL through donotlink.it was quite smart. He appears to understand how he's being trolled and is responding in a way that navigates it pretty well :-)

Comment by jenniferrm on Shadow · 2018-03-15T08:00:06.691Z · score: 13 (3 votes) · LW · GW

Cybernetic polytheism is hard to do right, because you have to have a strong sense of cybernetics first. You need to understand and explore the center and the edges of a large scale optimization dynamic, explore the empirical details it entails, and generally get a scientific understanding of it... then, for lulz, you might name it and personify it.

"Evolution" is a good example. This process is instantiated in biology. It operates over heritable patterns of deoxyribonucleic acid whose transcription into protein by living cells constructs new cells and agglomerations of cells in the shape of bacteria and macroscale organisms... each with basically the same DNA as before, but with minor variations. There is math here: punnett squares, fixation, etc.

Now we could just leave it at that. The science is good enough.

But not everyone has time for the biology, or has the patience to learn the math. Also, the existence of biological structures has been attributed by non-biologists to gods with narrative character that doesn't really map that well to the biological principles.

Thus there is a strong temptation to perform a narrative correction and offer "better theology" to translate the science into something with more cogent emotional resonances.

Like... species were not created by a benevolent watch maker who loves us. That's crazy.

Actually, if biological nature (or biological nature's author) has any moral character, that character is at least half evil. This entity thinks nothing of parasitism or infanticide, except to promote them if these processes produce more copies of DNA and censor them if they produce fewer copies of DNA.

It tries countless redundant experiments (the same mutation over and over again) that lead to both misery and death, but even calling these experiments is generous... there is almost no intentional pursuit of knowledge (although HSP genes are pretty cool, and sort of related), no institutional review boards to ensure the experiments are ethical, no grant proposals arguing in favor of the experiments in terms of the value of the knowledge they might produce.

Evolution, construed as a god, is a god we should fear and probably a god we should fight.

We can probably do better than it does, and if we don't do better it will have its terrible way with us. Those who worship this god without major elements of caution and hostility are scary cultists... they are sort of selling their great-great-grandchildren into slavery to something that won't reward them, and can't possibly feel gratitude. A narrative from old school horror or science fiction, that matches the right general tone, is Azathoth.

But you can't just make up the name Azathoth and say that it is a god and coin a bunch of other weird names, and make up some symbolic tools for dealing with them, and mix it together willy-nilly, and not mention biology or evolution at all.

You have to start with the science and end with the science.

Comment by jenniferrm on On Building Theories of History · 2018-03-11T18:39:44.255Z · score: 30 (9 votes) · LW · GW

Back in 2004-2005 (in a time I look back fondly on, because I was an OK kid) I was basically a naive techno-optimist about computers and software and AI, but I got seriously worried about Peak Oil.

All the muggles had a "policy level" understanding that the consumer energy economy (and everything in general) would be basically fine, but everyone I could find whose "gears level" understanding of fossil fuel economics was predicting some kind of doom. The futures markets basically said "in 2005, 2009, and 2019 OPEC will politically control the price of oil, and it will be ~$39 per barrel" but that didn't make any object level sense when you dug into the details.

I went kind of crazy, trying to reconcile these things, and read a lot of object level quantitative anthropology trying to figure out whether I was crazy or everyone else was.

What ended up happening is that the economic/technological solution arrived late (but more or less "before serious collapse", like failures of supply chains or the dissolution of traditional constitutions) and also Obama was elected in the midst of a relatively mild "financial collapse" that included oil prices spiking to over $120 per barrel (plus food riots in poor countries).

Since Obama was tribally blue (and the obvious corrective policies were tribally red) and elected with a mandate to solve "the Great Recession" he could get energy extraction reform in a way a red politician could never get away with.

Blue establishment activists objecting to backroom deals like this would be disloyal (only "outsider" ideological leftists, like those involved in the Dakota/Bakken/Standing Rock protests could pragmatically object), and red establishment activist networks were happy to unshackle the frackers and toss a regulatory bone to shale oil. By 2010 things were much less scary, and by 2013 the trajectory of US oil production had totally and dramatically deviated from the predictions inherent to the Hubbert's Peak model of historical oil production.
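
(For readers who haven't seen it, the Hubbert model treats cumulative extraction as a logistic curve, so annual production is the logistic's bell-shaped derivative. A minimal sketch in Python, with illustrative parameters rather than fitted ones:)

# Hubbert curve: production is the derivative of a logistic whose asymptote
# is the ultimately recoverable resource (URR). Units here are arbitrary.
import math

def hubbert_production(t, urr=200.0, k=0.05, t_peak=1970):
    """Production rate at year t, peaking at t_peak with steepness k."""
    x = math.exp(-k * (t - t_peak))
    return urr * k * x / (1.0 + x) ** 2

for year in (1950, 1970, 2013):
    print(year, round(hubbert_production(year), 2))

The model's signature is a single symmetric peak, which is exactly the shape that post-2010 US production broke away from.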

I consider the 2004-2013 period to have been very personally educational from a "theory of history" perspective :-)

My pet name for the hypothetical field (coined by Michael Flynn in the late 1980's) is "cliology" (named after Clio the Muse of History), and one of many barriers to creating a sociologically viable community of cliology researchers (I'm tempted to call it the "Fundamental Hypothesis of Cliology" as a joke?) is that most major insights in this field are inherently useful for guiding investment and are thus hoarded within the investing class as "one-off trade secrets".

The memetic incentives for serious public knowledge production in this domain would be extremely tricky to set up, and are unlikely to happen except via "great man" or "great circle" interventions. The Fundamental Hypothesis of Cliology suggests that Elon Musk could maybe do it, or a new "thing like the Vienna Circle" might be able to do it, but that's more or less what it would take. Also, even after the initial "boost" from this effort, public research would stall and/or devolve the moment any critical subset of people died, or got day jobs, or got head hunted by a hedge fund, or whatever. The memetic incentive patterns would probably continue to hold for each incremental addition to the field, more or less forever?

So in 2250 (assuming technology keeps advancing and yet there are still autonomous mortal human-shaped minds with their hands on the reins of history) they might very well think that the causality of our period of history was quite retrospectively straightforward... but they will be treating insights that help uniquely predict 2280 (or whatever their window of prediction is) as trade secrets.

Comment by jenniferrm on Circling · 2018-02-22T20:32:20.999Z · score: 21 (6 votes) · LW · GW

I really like this comment!

I think I see you calling explicit attention to your model of cognition, and how your own volitional mental moves interact with seemingly non-volitional mental observations you become aware of.

Then you're integrating this micro-experimental data into an explanatory framework that implicitly acknowledges the possibility that your own model of yourself might be wrong, and even if it is right other people might work differently or have different observations.

I think that to get any sort of genuine, reproducible, safe, inter-subjectively validated meditative science that knows general laws of subjective psychology, it will involve conversations in this mode :-)

Etymologically, "meditation" comes from the Latin meditari, "to study".

To make a "science word" we switch to ancient greek, where "meletan" means "to study or meditate". The three original "Boetian muses" were memory (Mnemosyne, who often is considered the mother of them all), song (Aoede), and meditation (Melete)... so if a science existed here it might be called "meletology"?

A few times I've playfully used the term "meletonaut" to describe someone whose approach to the field is more exploratory than scholarly or experimental.

If I hear you correctly, in your cognitive explorations, you find that you can page through memories while watching yourself for symptoms of high "adrenaline" (by which I mean often actual adrenaline, but also the general constellation of "arousal" including heart rate and sweaty skin and probably cortisol and so on).

And then maybe when you think of yourself as "aware of your feelings" that phrase could be unpacked to say that you have a basically accurate metacognitive awareness of which memories or images cause adrenaline spikes, without the active metacognitive awareness itself causing an adrenaline spike.

So if someone accuses you of "causing feelings" you can defend yourself by saying the goal is actually to help people non-emotionally know what "causes them to have emotions" without actually "experiencing the feelings directly" except as a means of gathering emotional data.

I think I understand the basis of such a defense, and the validity of the defense in terms of the real value of using this technique for some people.

My personal pet name for specifically this exploratory technique (which can be performed alone and appears to occur in numerous sociological and religious contexts) is "engram dowsing".

The same basic process happens in the neuro-linguistic programming (NLP) community as one step of a process they might call something like "memory reconsolidation".

It also happens in Scientology, where instead of self-reported adrenaline symptoms they use an "e-meter" (to measure sweaty palms electronically) and instead of a two person birthday circle they formalize the process quite a bit and call it an "audit". In Scientology it is pretty clear they noticed how great this is as an introductory step in acquiring blackmail material and gaining the unjustified trust of marks (prior to headfucking them) and optimized it for that purpose.

Which is not to say that circling is as bad as scientology!

Also, apostate Scientologists regularly report that "the tech" of Scientology (which is Scientology's jargon term for all their early well scripted psychological manipulations of new members) does in fact work and gives life benefits.

With dynamite, construction workers could suddenly build tunnels through mountains remarkably fast so that trains and roads could go places that would otherwise have been economically impossible. Dynamite used towards good ends, with decent safety engineering and skill, is great!

But if someone wants to turn a garbage can upside down, strap a chair to it, and have me sit in the chair while they put a smallish, roughly measured quantity of dynamite under it... even if the last person in the chair survived and thought it was a wild ride and wants to do it again... uh... yeah... I would love to watch from a safe distance, but I think I'd pass on sitting in the chair.

And more generally, as an aspiring meletologist and hobbyist in the sociology of religion, all I'm trying to say is that engram dowsing (along with some other mental techniques) is like "cognitive nuclear technology", and circling might not be literally playing with refined uranium, but "the circling community in general" appears to have some cognitive uranium ore, and they've independently refined it a bit, and they're doing tricks with it.

That's all more or less great :-)

But it sounds like they are not being particularly careful, and many of them might not realize their magic rocks are powered by more than normal levels of uranium decay, and if they have even heard of Louis Slotin then they don't think he has anything to do with their toy (uranium) pellets.

Comment by jenniferrm on Circling · 2018-02-19T12:49:47.483Z · score: 32 (9 votes) · LW · GW
"Ideally, everyone would have the opportunity to explore vulnerability carefully, step by step, with a skilled therapist or something to turn to if things ever got dicey."

I think this is an essential line, and a core problem. For more than a half century the social capital of the average person in the US has been falling and falling and falling. A therapist is sort of just a person you pay to pretend to be a genuine friend, without you having to reciprocate friendship back at them. That it is considered reasonable or ideal (as the first thought) to go to a paid professional to get basic F2F friend services is historically weird.

Maybe it is the best we can do, but... like... it didn't use to be this way, I don't think, and that suggests that it could be like it was in the past if we knew what was causing it.

Comment by jenniferrm on Circling · 2018-02-19T12:24:49.779Z · score: 29 (10 votes) · LW · GW

I'm pretty sure these people don't think that what they are doing "borrows from" hypnosis or trance or suggestibility hacking or mesmerism or whatever words you want to use for it.

Their emotions are high, caused by skillful intentional actions, and involve a general dynamic of "playing along" with numerous secondary "critical cognitive faculties" seemingly disengaged. Their focus is on their own feelings, and how their feelings feel, and so on. It isn't that they don't notice what's directly happening to (and inside) them, it is that they notice very little else.

Maybe that's great. Being in religions seems empirically to be somewhat positive for people?

Maybe the preacher there has studied hypnosis and optimized things for trance states... but I don't think that would have been required for him to be interacting with more or less the same basic mechanisms in people's cognitive machinery.

Those mechanisms are not particularly exotic or hard to mess with, but they cut directly to "goal-content integrity" and so caution is appropriate.

Comment by jenniferrm on Circling · 2018-02-18T08:52:34.575Z · score: 25 (17 votes) · LW · GW

The details remind me a lot of hypnosis, with thoughts about thoughts, instead of just thinking things directly.

Breath. Body attention. Meta. Listen to the voice. Respond and receive. Be open to the update. Body attention. Meta. Listen to the voice. Everyone trancing themselves and everyone else in a fuzzy haze...

Or how about, actually, NO!

How about instead we try to ramp up our critical faculties and talk about models and evidence?

I do not trust casual hypnosis because hypnosis can become "not casual" very fast.

Hypnosis is a power tool and basically it is one of those "things I won't work with" unless it is wartime and my side is losing and it seems highly relevant to victory. And it probably wouldn't be my side I'd be hypnotizing, it would be the bad guys.

"We broke the rules, Harry," she said in a hoarse voice. "We broke the rules."

"I..." Harry swallowed. "I still don't see how, I've been thinking but -"

"I asked if the Transfiguration was safe and you answered me! "

There was a pause...

"Right..." Harry said slowly. "That's probably one of those things they don't even bother telling you not to do because it's too obvious. Don't test brilliant new ideas for Transfiguration by yourselves in an unused classroom without consulting any professors."

Except there are no decent professors in this subject. (There were crazy CIA mind control experiments, but instead of publishing their results, the records were mostly purged in 1973.)

Comment by jenniferrm on Missives from China · 2018-02-17T18:09:06.492Z · score: 9 (2 votes) · LW · GW

I've thought a lot about iterated chicken, especially in the presence of agent variations.

I suspect the local long term iteration between a rememberable (sub-Dunbar?) number of agents leads to pecking orders, and widespread iteration in crowds of "similarly different" agents leads to something like "class systems".

For example, in the US, I think every human knows to get out of the way of things that look like buses, because that class of vehicles expects to be able to throw its weight around. Relatedly, the only time a Google car has ever been in a fender bender where it could be read as "at fault" using local human norms was when it was nosing out into traffic and assumed a bus would either yield or swing wide because of the car's positional priority.
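
(To make "iterated chicken" concrete, here is a minimal one-shot payoff matrix in Python; the numbers are illustrative.)

# One round of chicken between a row player and a column player.
# (row_move, col_move) -> (row_payoff, col_payoff)
PAYOFFS = {
    ("dare", "dare"):   (-10, -10),  # collision: the worst outcome for both
    ("dare", "yield"):  (1, -1),
    ("yield", "dare"):  (-1, 1),
    ("yield", "yield"): (0, 0),
}

def best_response(opponent_move):
    """The row player's best reply to a fixed opponent move."""
    return max(("dare", "yield"), key=lambda m: PAYOFFS[(m, opponent_move)][0])

print(best_response("dare"))   # "yield": swerve when the bus noses in
print(best_response("yield"))  # "dare": throw your weight around if they always swerve

The two pure equilibria are asymmetric (one darer, one yielder), so agents who can recognize individuals or classes can settle into stable conventions about who yields, which is the pecking order / class system intuition above.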

What have you noticed about Chinese traffic patterns? :-)

Comment by jenniferrm on Eternal, and Hearthstone Economy versus Magic Economy · 2018-02-11T02:45:31.153Z · score: 28 (7 votes) · LW · GW

If I understand correctly, the cognitive process/bias/heuristic/whatever of "sacredness" is relevant here.

Neither nails nor dollars are sacred so you're free to trade dollars for nails.

A kidney is sacred, so you can't trade that for dollars, but you can trade it for another kidney (although such trades still feel a bit weird).

Sacred things are often poorly managed in practice, and sacredness is easy to make fun of, but a decent defense of sacredness might be that it is one of the few widely installed psychological mechanisms in real life for managing the downsides of having markets in things. Thus, properly deployed sacredness might let you have "trade" in one area without ending up with "totalizing trade"?

In the smaller and hopefully lower stakes world of video games, I think the suggestion would be to have card classes with different trading characteristics.

The lowest class of very non-sacred things could be swapped with extremely low transaction costs within the class and also be tradeable directly for money.

Higher sacredness things would have a separate market, perhaps with transaction costs like needing a purchaseable delivery mechanism or imposing delays so that objects go into limbo after the trade is finalized while "being delivered". The most sacred things would be "inalienable" so they can't be traded or given away or perhaps not even be destroyed.
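
(A minimal sketch of that tiered design in Python; the class names and rules are illustrative inventions, not drawn from any actual game.)

# Three tiers of tradability, from fully liquid to fully bound.
from enum import Enum

class Sacredness(Enum):
    COMMON = 1       # freely tradeable, even directly for money
    SACRED = 2       # tradeable only with friction: a fee plus delivery limbo
    INALIENABLE = 3  # bound to the owner; no trades at all

def validate_trade(card_class, for_money, paid_delivery_fee):
    if card_class is Sacredness.INALIENABLE:
        return False
    if card_class is Sacredness.SACRED:
        # no cash-out, and the trade only proceeds once the fee is paid
        return (not for_money) and paid_delivery_fee
    return True  # COMMON: anything goes

assert validate_trade(Sacredness.COMMON, for_money=True, paid_delivery_fee=False)
assert not validate_trade(Sacredness.SACRED, for_money=True, paid_delivery_fee=True)
assert not validate_trade(Sacredness.INALIENABLE, for_money=False, paid_delivery_fee=True)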

Exactly where sacredness should be deployed in order to maximize fun seems like a deep and relatively unstudied problem.

One place in real life where the inalienability of something has large and substantive differences from jurisdiction to jurisdiction is the question of the rights of artistic creators to their artwork. In some jurisdictions, an artist cannot legally sell their right to veto the use of their artwork if deployed in artistically compromising ways (like use in advertising or political campaigns) after mere copyrights have been sold.

In the US artistic moral rights are not treated as very sacred, and the lack of sacredness in art production is probably part of the US's cultural dominance a la Hollywood, but it has arguably also had large effects in the lives of artists, visibly so with people like Bill Watterson and Prince.

Comment by jenniferrm on Arbital postmortem · 2018-01-30T21:01:00.017Z · score: 25 (7 votes) · LW · GW

Thank you for the writeup! I've long had a distant impression of Arbital as being some kind of "mindmapping prediction social thing" and now that I've heard the explanation of its iterating vision I think maybe my model of it might be "Alexei and Eliezer's Memex or Xanadu".

This updates me a bit in the direction that something like Arbital will exist in the future and be a big deal, and it will probably make more progress by extreme attention to (1) the microeconomics of users and their existing preferences and their desire to have property they seem to control and (2) compromising on the overall "economic architecture" of the system such that it does not actually bring about the full utopian societal transformation it initially promised.

Comment by jenniferrm on The First Fundamental · 2018-01-20T22:52:32.586Z · score: 4 (1 votes) · LW · GW

Likewise!

Comment by jenniferrm on The First Fundamental · 2018-01-20T22:44:14.989Z · score: 4 (1 votes) · LW · GW

I mean... in his defense... Paul Dirac was pretty dumb. He was probably just doing his best ;-)

Comment by jenniferrm on Sufficient · 2018-01-19T20:21:04.218Z · score: 8 (2 votes) · LW · GW

So suppose one person is seriously working on perpetual motion. They acknowledge that they are probably not going to succeed, but they argue that if we don't find an exception to the 2nd law somehow then we're all doomed... In that case, does a Sufficient person have to help because of "social agreement"?

Comment by jenniferrm on The First Fundamental · 2018-01-19T06:58:20.395Z · score: 15 (5 votes) · LW · GW

"I do not see how a man can work on the frontiers of physics and write poetry at the same time. They are in opposition. In science you want to say something that nobody knew before, in words which everyone can understand. In poetry you are bound to say something that everybody knows already in words that nobody can understand."

-Paul Dirac (to Oppenheimer, regarding Oppie's reported dabbling in poetry)

Comment by jenniferrm on The Solitaire Principle: Game Theory for One · 2018-01-17T23:46:54.605Z · score: 6 (2 votes) · LW · GW

K1 wants to write a novel because she calculated a novel to be the best thing to be working on given many environmental factors as input to a reflectively stable and emotionally integrated theory of axiology.

The novel is completed if at least 300 future Ks agree.

However, K1 mostly ignores "other people" in favor of thinking of herself as something like a local/momentary snapshot of a Turing machine's read/write head in operation....

She has obvious inputs and an obvious place for outputs, plus some memory and awareness of the larger program, and an ability and interest in fixing the program she is executing when definite errors are detected... and just trusting the system otherwise.

K1 writes 1/300th of a novel.

Since K1's value estimates were very reasonable, the estimates are replicated by many future K's and 753 days later a novel is finished.

It took more than 300 days, but during the 753 days many other plausibly valuable things were also done. The whole time, K has been more or less safely interruptible, and it would have been pretty weird if K had ignored surprising issues that were more important than the novel when those things actually came up.

If the novel was somehow never finished that would have been OK. It probably would mean it was an omniscient-perspective error to have worked on it, but that's OK because humans aren't omniscient.

Lesson: stop worrying about other people (who are often mostly crazy anyway) and instead pay attention to efficiently and reliably knowing what is actually good.

Comment by jenniferrm on Sufficient · 2018-01-17T23:05:26.601Z · score: 7 (2 votes) · LW · GW

I really like the poetry and potential rigor of this... but I'm wondering how the philosophy deals with the problem of entropy?

Some resources are just plain finite and can't be renewed.

For example, there is "only so much sun to go around for so long". A current iconic image of self sufficiency is the solar panel, but eventually the sun will run out and we'll either need to find a new and younger star or give up the game.

Long before we run out of real estate for solar panels we will probably need to radically up our mining of rare earth metals, maybe reaching out to the asteroids for such metals.

And so on with a process of discovering and then applying creative problem solving to a series of natural limits... Essentially, a lot of what counts as "Sufficient" probably depends on technological feasibility and the arbitrary choice of the time window we choose to consider.

The longer the timescale, the more clear it is that either we defeat entropy itself, or we can't be "Sufficient".

If there's an acceptance that it's OK to "punt" on some kinds of sufficiency because we can't ultimately beat entropy, then the question of when and how to make the call to stop caring about some scale of analysis arises. Is there a finite amount of fresh water? A finite amount of phosphorus? A finite amount of neodymium? A finite amount of rich fools who will buy overpriced junk?

With sufficient energy we could make fresh water, phosphorus, neodymium, and rich fools to buy overpriced junk, but (probably) no amount of energy will let us make energy.

Basically, given that "being alive" is inherently extractive and doomed to eventual entropic collapse, where does a person being Sufficient draw their line in the sand with regard to resource sufficiency?

Comment by jenniferrm on Demon Threads · 2018-01-17T02:51:29.855Z · score: 4 (1 votes) · LW · GW

Thank you both for the feedback. I've taken the liberty of adding underlining in a second pass edit.

Comment by jenniferrm on Boiling the Crab: Slow Changes (beneath Sensory Threshold) add up · 2018-01-17T02:00:10.309Z · score: 23 (7 votes) · LW · GW

Is the "crab boiling" metaphor substantially different from the traditional "frog boiling" metaphor?

I've heard the frog version over and over since I was a child, and I've also heard that it is not experimentally verified.

Like... frogs do, in fact, try to escape objectively hot water when there are low barriers to exit. A good biology keyword for research on the clade-spanning mechanism(s) involved here is the "critical thermal maximum". There is a whole family of proteins for "responding to stress by paying more attention to folding or re-folding proteins" all the way down at the bacterial level, and the whole family is named for the first kind of stress response discovered: the stress response to heat.

Your post initially made me wonder if real crabs (whose recent evolution may have lacked really big temperature swings because of oceanic temperature buffering somehow?) might live up to the metaphor's implications better than real frogs (that are fresh water ectotherms whose entire life sorta revolves around leveraging their environment to control their internal state, with temperature being near the top of the list), but casual googling suggests that (warning: disturbing video) crabs also flee hot pans.

An uncharitable reading is that crabs are a better metaphor simply because they "seem more convincing" because there has been less time for the crab version to have been debunked?

Frog experts perennially get questions about this, because the meme refuses to die, and in their responses they sometimes note that the typical spreaders of the frog meme are individuals like business consultants, political activists, and religious preachers. When I squint and put on my cynic hat, this reads to me basically as "people who specialize in personally benefiting from tricking entire groups of people into doing things that often don't make a lot of sense".

Despite the fundamental dishonesty, if the frog metaphor was accepted by the audience, it could be a rhetorically solid part of a larger process of achieving group compliance for nearly arbitrary changes.

Basically, the frog metaphor encourages people to distrust their own ability to think objectively about how the world works now, or how it has worked in the past, and in the face of this uncertainty it offers the idea that a large but unmeasurable and essentially invisible harm can be avoided by doing... something... anything? It depends on the situation.

If there was a genuine large imminent loss (like dying from hyperthermia) then many dramatic changes might be justified to attempt to avoid this outcome. Run! Jump! Pull levers at random! Thus, a boiling frog metaphor, deployed with no "kicker" attached, is a slightly confusing thing...

One naturally wonders when the other shoe will drop and the speaker will reveal their claimed harm and propose a more specific plan...

...basically I'm wondering where you're going with this ;-)

Comment by jenniferrm on Demon Threads · 2018-01-10T12:58:56.079Z · score: 19 (6 votes) · LW · GW

In my experience the evolution of demon threads is moderately dependent on the mechanics of commenting, and (to extend the demonic metaphor) "exorcism comments" work differently depending on the mechanical position of new comments.

No matter how commenting works, a comment that "fixes" the bulk of the demon aspects of the larger conversation needs to have clean and coherent insight into whatever the issue is. You shouldn't worry too much about writing such a post unless you are moderately confident that you could pass an ideological Turing test for all the major positions being espoused.

The thing that changes with different commenting systems is how much you can fix it and what the "shape" of the resulting conversation looks like if you "succeed".

With "unthreaded, most recent comment at the top" there is no hope.

No matter how excellent your writing, the content will drop lower in the queue and eventually be forgotten. This kind of commenting system is basically an anti-pattern used by manipulative propagandists.

Closely related: the last time I held my nose and visited Facebook it appeared to only show fresh/recent comments for any given item in the feed, and you had to choose to click to get the javascript to load older comments above the recent comments that start out visible. Ouch! (At this point I consider Facebook to basically just be a propaganda honeypot.)

With "unthreaded, most recent at the bottom" (as with oldschool phpBB systems and the original OvercomingBias setup) a single perfect comment is incapable of totally changing the meaning of the seed. This helps the OP maintain a position of some structural authority...

What you can do, however, is wait for 5-30 posts (partly this depends on pagination - if pagination kicks in within less than 40 posts then wait until page two to attempt an exorcism), and then post a comment that offers a structural correction that praises previous comments, but points out something everyone seems to be missing, that really honestly matters to everyone, and that cuts to the very essence of the issue and deflates it.

This won't totally kill the thread, but it should dramatically change the tone to something more productive, and the tonal state transition will persist for many followups, hopefully leading to the drying up of conversation.

The danger here is that it doesn't really work in very large communities. Readers might be tempted to read the first three comments, then jump to the last page of comments to get the last three comments, then wade in themselves without reading the middle. If there are hundreds of pages of comments your attempted exorcism at the bottom of page 2 simply can't do the job.

With reddit style commenting (as with modern LW and HN) you have the most hope.

The depth of threading is strongly related to the amount of "punch/counterpunch dynamic" that is happening. A given "seed" will have many "child posts" and each of the child posts will sprawl quite deeply. Deep sprawl is only potentially a serious problem in the highest voted first level response. For subsequent comments it isn't actually a problem (at least I don't think?) because the only people who read that far down are the ones who actually enjoy a rhetorical ruckus.

A perfect exorcism in this sort of threading system arrives late enough for the default assumptions to become clear, and then responds to the original seed in a basically flawless way, being fair-minded to both sides (often by going meta somehow) and then managing to get upvotes so that it is the first thing people see when they start reading the seed and "check the comments". After reading the "exorcising response" all the lower (and earlier written) comments should hopefully seem less critically in need of response because it looks like quibbling compared to a proper response.

The exorcising comment needs to hit the central issue directly and with some novelty so that it really functions as signal rather than noise. For example, use a scientific phrase that no one has so far used that reveals a deep literature.

It needs to avoid subtopics that could raise quibbling responses. Any "rough edges" that allow room for someone to respond will lead to even more surface area for quibbling attacks, and tertiary responses will tend to be even lower quality and more inflammatory, and the fire will get larger rather than smaller. Thus, an exorcism must be close to flawless.

It helps to have a bit of a "moral tone" so that good people would feel guilty disturbing the purity of the signal. However too much moral tone can raise a "who the fuck do you think you are?!" sort of criticism, so go light with it. Also it helps a lot to "end on a high note", so that "knee jerk voters" will finish reading it and click "UP" almost without thinking :-)

You might note that I used the "end on a high note" pattern in this very comment, because I re-ordered my discussion of commenting systems to discuss the one most amenable to being fixed last, which happens to be the one LW uses, because we are awesome. Putting good stuff last and explicitly flattering the entire community is sort of part of the formula ;-)

(EDIT: Added underlines at the suggestion of mr-hire and Raemon below.)

Comment by jenniferrm on The Right to be Wrong · 2018-01-02T20:04:18.004Z · score: 8 (2 votes) · LW · GW

Cool link! I had not heard of her before but I see the echoes. To summarize some of the resonances I think I see...

I noticed that the Sutra about her is the Heart Sutra, and it arose as part of the Mahayana correction to the early ascetic "small raft" Buddhism, and was claimed to have been the secret teachings of Buddha that couldn't be taught in the initial version of Buddhism because the people were not ready...

It is claimed to have been technically there at the beginning, but not in an obvious way.

The secret teachings were mythologically kept by the king of the snakes in his underwater kingdom for a full turn of history, until a reincarnation of Buddha arrived named Nagarjuna, where "Naga" means snake and "Arjuna" means something like "bright shining silver" and is the name of the central hero of the Bhagavad Gita. Thus Nagarjuna, the teacher of the lesson, had a name that basically meant "Illuminated Snake Hero".

The ideas were mythologically acquired by: going underwater, making friends with the snake king, then studying the snake king's secrets (that he got from Buddha).

These lessons, that Prajnaparamita is the embodiment of, are given the concept handle of "shunyata" ("emptiness") and basically seems to be a denial of local naive realism? That is to say: there are no permanent things whose meaning and reality are independent of context. So if you take this seriously and ask "But what's the context?" over and over for anything and everything, recursively, then perhaps eventually you always get to Prajnaparamita as the contextual "Mother of All".

Epistemically speaking, chasing Prajnaparamita is valuable, because you learn the context of your current naively local truth. However you'll never get to her and go past her, because she represents the edge of knowledge... she is always "the farther away context of which you are currently ignorant". As you learn, she always retreats into the background, representing the new edge of knowledge.

Prajnaparamita's name literally means "perfect wisdom", and while she is technically unattainable, it is useful to try to approach her :-)

If you look at the emotional differences in the symbolic choice of Tiamat vs Prajnaparamita, then Tiamat pushes all the ideas into a single fundamentally bad kind of watery chaos that must be destroyed in a violent way for goodness and masculine knowledge to triumph. On the other hand Prajnaparamita has all the emotionally negative aspects sublimated into the process of pursuing her (into the watery domain of the snake king), and is seen as fundamentally good in herself.

Both kinds of symbolism are "mixed", but one valorizes the heroic killing and re-use of "scary female mysteries" while the other justifies "painful exploration" as worthwhile pursuit of the ultimate ineffable female context.

Calling out some of these echoes, I think I see different arrangements of many of the same concepts. Also, the arrangement of the concepts in the "Space Mom" framing seems closer to Prajnaparamita than Tiamat.

Comment by jenniferrm on In the presence of disinformation, collective epistemology requires local modeling · 2017-12-21T08:41:11.532Z · score: 13 (4 votes) · LW · GW

I really like your promotion of fact checking :-)

Also, I'd like to especially thank you for offering the frame where every human group is potentially struggling to coordinate on collective punishment decisions from within a fog of war.

I had never explicitly noticed that people won't want their pursuit of justice to seem like unjustified aggression to "allies from a different bubble of fog", and for this reason might want to avoid certain updates in their public actions.

Like, I even had the concept of altruistic punishment and I had the concept of a fog of war, but somehow they never occurred in my brain at the same time before this. Thank you!

If I was going to add a point of advice, it would be to think about being part of two or three "epistemic affinity groups". The affinity group model suggests these groups should be composed of maybe 3 to 15 people each and they should be built around a history of previous prolonged social contact. When the fog of war hits, reach out to at least one of your affinity groups!

Comment by jenniferrm on Why Bayesians should two-box in a one-shot · 2017-12-19T06:34:49.739Z · score: 0 (0 votes) · LW · GW

So, at one point in my misspent youth I played with the idea of building an experimental Omega and looked into the subject in some detail.

In Martin Gardner's writeup on this back in 1973, reprinted in The Night Is Large, the essay explained that the core idea still works if Omega can just predict with 90% accuracy.

Your choice of ONE box pays nothing if you're predicted (incorrectly) to two box, and pays $1M if predicted correctly at 90%, for a total EV of $900,000 (= (0.1)(0) + (0.9)(1,000,000)).

Your choice of TWO box pays $1k if you're predicted (correctly) to two box, and pays $1,001,000 if you're predicted to only one box, for a total EV of $101k (= (0.9)(1,000) + (0.1)(1,001,000) = 900 + 100,100).

So the expected profit from one boxing in a normal game, with Omega accuracy of 90% would be $799k.

Also, by adjusting the game's payouts we could hypothetically make any amount of genuine human predictability (even just a reliable 51% accuracy) be enough to motivate one boxing.
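
(The arithmetic generalizes cleanly; a minimal sketch in Python, using the standard $1M / $1k payouts:)

# Expected values against a predictor of accuracy p.
def one_box_ev(p, big=1_000_000):
    # predicted correctly (prob p): $1M; predicted to two-box (prob 1-p): nothing
    return p * big

def two_box_ev(p, big=1_000_000, small=1_000):
    # predicted correctly (prob p): $1k only; mispredicted (prob 1-p): both boxes
    return p * small + (1 - p) * (big + small)

p = 0.9
print(round(one_box_ev(p)))                  # 900000
print(round(two_box_ev(p)))                  # 101000
print(round(one_box_ev(p) - two_box_ev(p)))  # 799000

# One boxing beats two boxing whenever p*big > p*small + (1-p)*(big + small),
# which rearranges to p > (big + small) / (2 * big).
print((1_000_000 + 1_000) / (2 * 1_000_000))  # 0.5005, so a reliable 51% suffices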

The super simplistic conceptual question here is the distinction between two kinds of sincerity. One kind of sincerity is assessed at the time of the promise. The other kind of sincerity is assessed retrospectively by seeing whether the promise was upheld.

Then the standard version of the game tries to put a wedge between these concepts by supposing that maybe an initially sincere promise might be violated by the intervention of something like "free will", and it tries to make this seem slightly more magical (more of a far mode question?) by imagining that the promise was never even uttered, but rather the promise was stolen from the person by the magical mind reading "Omega" entity before the promise was ever even imagined by the person as being possible to make.

One thing that seems clear to me is that if one boxing is profitable but not certain then you might wish you could have done something in the past that would make it clear that you'll one box, so that you land in the part of Omega's calculations where the prediction is easy, rather than being one of the edge cases where Omega really has to work for its Brier score.

On the other hand, the setup is also (probably purposefully) quite fishy. The promise that "you made" is originally implicit, and depending on your understanding of the game maybe extremely abstract. Omega doesn't just tell you what it predicted. If you one box and get nothing and complain then Omega will probably try to twist it around and blame you for its failed prediction. If it all works then you seem to be getting free money, and why is anyone handing out free money?

The whole thing just "feels like the setup for a scam". Like you one box, get a million, then in your glow of positive trust you give some money to their charitable cause. Then it turns out the charitable cause was fake. Then it turns out the million dollars was counterfeit but your donation was real. Sucker!

And yet... you know, parents actually are pretty good at knowing when their kids are telling the truth or lying. And parents really do give their kids a free lunch. And it isn't really a scam, it is just normal life as a mortal human being.

But also in the end, for someone to look their parents in the eyes and promise to be home before 10PM and really mean it for reals at the time of the promise, and then be given the car keys, and then come home at 1AM... that also happens. And wouldn't it be great to just blame that on "free will" and "the 10% of the time that Omega's predictions fail"?

Looping this back around to the larger AGI question, it seems like what we're basically hoping for is to learn how to become a flawless Omega (or at least build some software that can do this job) at least for the restricted case of an AGI that we can give the car keys without fear that after it has the car keys it will play the "free will" card and grind us all up into fuel paste after promising not to.

Comment by jenniferrm on Melting Gold, and Organizational Capacity · 2017-12-11T21:16:52.585Z · score: 75 (28 votes) · LW · GW
"There's a saying for communities: if you're not gaining members, you're losing members."

This heuristic is totally worth turning into a snowclone and applying almost everywhere. If your net worth is not going up, it is probably going down. If your house isn't being remodeled, it is probably falling into disrepair. If your health isn't getting better, it is probably getting worse. Etc.

The general form of the underlying claim is that the derivative with respect to time for any measurable characteristic is almost never zero; it is usually either positive or negative, and without attention, the direction is usually not the one that humans typically prefer.

Comment by jenniferrm on Melting Gold, and Organizational Capacity · 2017-12-11T21:02:05.593Z · score: 27 (10 votes) · LW · GW

Just to chime in with support, I read "The E-Myth Revisited: Why Most Small Businesses Don't Work and What to Do About It". It is not obviously epistemically sound because it is offered more like "lore" than "science". However it stayed with me, and seems to have changed how I approach organizational development, and I think I endorse the changed perspective.

One of the major concept handles that may have been coined in the book (or borrowed into the book, thereby spreading it much further and faster?) is the distinction between "working in your business versus working on your business". A lot of people seem to only "work in", not "work on", and the book makes the claim that this lack of going meta on the business often leads to burnout and business failure.

One thing to keep in mind is that since all debates are bravery debates and this specific community is often great at meta, it is also possible to make the opposite error... you can spend too much time working "on" an organization, and not enough "in" the organization, and the failures there look different. One of my heuristics for noticing if there is "too much organizational meta" is if the bathrooms aren't clean.

Comment by jenniferrm on LDL 7: I wish I had a map · 2017-12-02T03:00:27.936Z · score: 4 (2 votes) · LW · GW

You got to the end of the essay and went "down" into the details instead of "up" in to the larger problem. Going up would be productive I think, because this is an issue that sort of comes up with every single field of human knowledge that exists, especially the long tail of specializations and options for graduate studies.

When you were an undergraduate and spent a lot of time thinking about the structure of mathematical knowledge, you were building a sort of map of all the maps that exist inside the books and minds of the community of mathematicians, with cues based on presumed structure in math itself, that everyone was studying in common.

Your "metamap of math" that you had in your head is not something shared by everyone, and I do not think that it would be easy for you to share (though maybe I'm wrong about your assessment of its sharability).

When I saw the title of your post, I thought to myself "Yes! I want a map of all of human knowledge too!" and I was hoping that I'd get pointers towards a generalization of your undergrad work in mathematics, except written down, and shareable, and about "all of human knowledge". But then I got to the end of your essay and, as I said, it went "down" instead of "up"... :-/

Anyway, for deep learning, I think a lot of the metamap comes from just running code to get a practical feel for it, because unlike math the field is more empirically based (where people try lots of stuff, hide their mistakes, and then polish up the best thing they found to present as if they understand exactly how and why it worked).

For myself, I watch for papers to float by, and look for code to download and try out, and personally I get a lot from google-stalking smart people via blogs.

The best thing I know in this vein as a starting point is a post by Ilya Sutskever (guest writing on Yisong Yue's blog), with an overview of practical issues in deep learning to keep in mind when trying to make models train and do well that seem not to click when you think you have enough data and enough GPU and a decent architecture, yet they are still not working.

Comment by jenniferrm on The Right to be Wrong · 2017-11-30T21:40:00.824Z · score: 15 (4 votes) · LW · GW

I'm kind of a sucker for polytheistic playfulness, but even so I love your literary grounding and evocative definition of Space Mom, as an initially scary but ultimately calming guide through her chaotic domain.

If I want to reach back to earlier versions, there's also Eris (disliked by the Greeks but beloved of Discordians), and arguably even better than Eris is the Babylonian goddess Tiamat!

Tiamat is the "the chaos of primordial creation", who is female but with strong reptilian themes as well. In Sumerian mythology Tiamat is killed by Marduk, the king of the gods whose crown is made out of a ring of eyes that vigilantly look in every direction, and whose power is to speak the truth.

After killing her, Marduk used her body to make the entire world. If you contrast "Tiamat vs Space Mom" preferring the Space Mom iconography would argue that you need to make friends with chaos. You need to work on desensitizing yourself to fear so you can freely look into the shadows, not clench up and get ready to fight.

Modern visual adaptations of Tiamat lean on outer space motifs but her original context for the Babylonians was salt water, and the ocean, and especially the estuary where salt water and fresh water chaotically mix on a tidal cycle of roughly 12.4 hours that is driven by the moon.

In MtG iconography, Tiamat is Blue/Black and Marduk might be White/Black, with the separation along the White/Black axis representing simplistic dualisms... the emotional triumph of understanding over fear, moral triumph of good over evil, the familial triumph of males over females, the astronomical importance of the sun over the moon, and the political triumph of law over crime.

Isbell's snake detection hypothesis argues that primate visual acuity evolved over the last ~100M years, with detection of literal predatory snakes as the biggest driver of the pixel count in our eyeballs, thereby installing "snake monsters" as a symbol that our brains reliably imagine and pareidolically react to, because false negatives on the snake monster detection task are very expensive. Some of the "symbolically loose" psychology people (who don't write off Jung as "not even wrong") have picked up on this and used it to justify connecting snake iconography to mythology and therapeutic stories.

Tiamat is THE dragon in Babylonian mythology. In Chinese mythology, dragons are often good, but fickle. In Western mythology dragons used to be mostly bad, but in more recent cinema they become the beloved black pets that let social misfits fly above the ocean, or they talk like Sean Connery as they help con men gain a conscience and topple unjust kings.

Space Mom (seen as a friendlier version of Tiamat) is a more balanced take on blue far mode. She is never "here and now". She is always "out around the edges", either requiring a journey to reach, or else part of ancient history and possibly the deep future. She represents the dramatic schism between future eutopias and dystopias. And back when you acquired your mother tongue, in a time you forgot, back when you weren't even you (as far as continuity of memory goes) back when your only words were <crying> and <not crying>, she was with you and helped you to learn... and if you calm your fear of the dark she still can.

Comment by jenniferrm on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-28T23:30:08.168Z · score: 3 (3 votes) · LW · GW

My read on the state of public academic philosophy is that there are many specific and potentially-but-not-obviously-related issues that come up in the general topic of "foundations of inference". There are many angles of attack, and many researchers over the years. Many of them are no longer based out of official academic "philosophy departments" anymore and this is not necessarily a tragedy ;-)

The general issue is "why does 'thinking' seem to work at all ever?" This can be expressed in terms of logic, or probabilistic reasoning, or sorting, or compression, or computability, or theorem decidability, or P vs NP, or oracles of various kinds, or the possibility of language acquisition, and/or why (or why not) running basic plug-and-chug statistical procedures during data processing seems to (maybe) work in the "social sciences".

Arguably, these all share a conceptual unity, and might eventually be formally unified by a single overarching theory that they are all specialized versions of.

From existing work we know that lossless compression algorithms have actual uses in real life, and it certainly seems as though mathematicians make real progress over time, up to and including Chaitin himself!

However when people try to build up "first principles explanations" of how "good thinking" works at all, they often derive generalized impossibility results when they scope over naive formulations of "all possible theories" or "all possible inputs".

So in most cases we almost certainly experience a "lucky fit" of some kind between various clearly productive thinking approaches and various practical restrictions on the kinds of input these approaches typically face.
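
(One tiny concrete demonstration of that fit, in Python: a counting argument says no lossless compressor can shrink all inputs, yet the inputs we actually face are usually far from random.)

# Structured input compresses; raw noise does not.
import os
import zlib

structured = b"the cat sat on the mat. " * 400
noise = os.urandom(len(structured))

print(len(structured), len(zlib.compress(structured)))  # shrinks dramatically
print(len(noise), len(zlib.compress(noise)))            # roughly incompressible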

Generative adversarial techniques in machine learning, and MIRI's own Garrabrant Inductor are probably relevant here because they start to spell out formal models where a reasoning process of some measurable strength is pitted against inputs produced by a process that is somewhat hostile but clearly weaker.

Hume functions in my mind as a sort of memetic LUCA for this vast field of research, which is fundamentally motivated by the core idea that thinking correctly about raw noise is formally impossible, and yet we seem to be pretty decent at some kinds of thinking, and so there must be some kind of fit between various methods of thinking and the things that these thinking techniques seem to work on.

Also thanks! The Neyman-Pearson lemma has come up for me in practical professional situations before, but I'd never pushed deeper into recognizing Jerzy Neyman as yet another player in this game :-)

Comment by jenniferrm on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-28T22:39:11.899Z · score: 7 (7 votes) · LW · GW

Fundamentally, the thing I offer you is respect, the more effective pursuit of truth, and a chance to help our species not go extinct, all of which I imagine you want (or think you want) because out of all the places on the Internet you are here.

If I'm wrong and you do NOT want respect, truth, and a slightly increased chance of long term survival, please let me know!

One of my real puzzles here is that I find it hard to impute a coherent, effective, transparent, and egosyntonic set of goals to you here and now.

Personally, I'd be selfishly just as happy if, instead of writing all new material, you just stopped posting and commenting here, and stopped sending "public letters" to MIRI (an organization I've donated to because I think they have limited resources and are doing good work).

I don't dislike books in general. I don't dislike commercialism in general. I dislike your drama, and your shallow citation filled posts showing up in this particular venue.

Basically I think you are sort of polluting this space with low quality communication acts, and that is probably my central beef with you here and now. There's lots of ways to fix this... you writing better stuff... you writing less stuff that is full of abstractions that ground themselves only in links to your own vanity website or specific (probably low value) books... you just leaving... etc...

If you want to then you can rewrite all new material that is actually relevant and good, to accomplish your own goals more effectively, but I probably won't read it if it is not in one of the few streams of push media I allow into my reading queue (like this website).

At this point it seems your primary claim (about having a useful research angle involving problems of induction) is off the table. I think in a conversation about that I would be teaching and you'd be learning, and I don't have much more time to teach you things about induction over and beyond the keywords and links to reputable third parties that I've already provided in this interaction, in an act of good faith.

More abstractly, I altruistically hope for you to feel a sense of realization at the fact that your behavior strongly overlaps with that of a spammer (or perhaps a narcissist or perhaps any of several less savory types of people) rather than an honest interlocutor.

After realizing this, you could stop linking to your personal website, and you could stop being beset on all sides by troubling criticisms, and you could begin to write about object level concerns and thereby start having better conversations here.

If you can learn how to have a good dialogue rather than behaving like a confused link farm spammer over and over again (apparently "a million times" so far) that might be good for you?

(If I learned that I was acting in a manner that caused people to confuse me with an anti-social link farm spammer, I'd want people to let me know. Hearing people honestly attribute this motive to me would cause me worry about my ego structure, and its possible defects, and I think I'd be grateful for people's honest corrective input here if it wasn't explained in an insulting tone.)

You could start to learn things and maybe teach things, in a friendly and mutually rewarding search for answers to various personally urgent questions. Not as part of some crazy status thing nor as a desperate hunt for customers for a "philosophic consulting" business...

If you become less confused over time, then a few months or years from now (assuming that neither DeepMind nor OpenAI have a world destroying industrial accident in the meantime) you could pitch in on the pro-social world saving stuff.

Presumably the world is a place that you live, and presumably you believe you can make a positive contribution to the general project of making sure everyone in the world is NOT eventually ground up as fuel paste for robots? (Otherwise why even be here?)

And if you don't want to buy awesomely cheap altruism points, and you don't want friends, and you don't want the respect of me or anyone here, and you don't think we have anything to teach you, and you don't want to actually help us learn anything in ways that are consistent with our relatively optimized research workflows, then go away!

If that's the real situation, then by going away you'll get more of what you want and so will we :-)

If all you want is (for example) eyeballs for your website, then go buy some. They're pretty cheap. Often less than a dollar!

Have you considered the possibility that your efforts are better spent buying eyeballs rather than using low-grade philosophical trolling to trick people into following links to your vanity website?

Presumably you can look at the logs of your web pages. That data is available to you. How many new unique viewers have you gotten since you started seriously trolling here, and how many hours have you spent on this outreach effort? Is this really a good use of your hours?

What do you actually want, and why, and how do you imagine that spamming LW with drama and links to your vanity website will get you what you want?

Comment by jenniferrm on Humans can be assigned any values whatsoever... · 2017-11-28T07:20:05.971Z · score: 1 (1 votes) · LW · GW

I'll try to organize the basic thought more cleanly, and will comment here again with a link to the better version when it is ready :-)

Comment by jenniferrm on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-28T06:57:28.912Z · score: 8 (8 votes) · LW · GW

I think there are two big facts here.

ONE: You're posting over and over again with lots of links to your websites, which are places where you offer consulting services, and so it kinda seems like you're maybe just a weirdly inefficient spammer for bespoke nerd consulting.

This makes almost everything you post here seem like it might all just be an excuse for you to make dramatic noise in the hopes of the noise leading somehow to getting eyeballs on your website, and then, I don't even know... consulting gigs or something?

This interpretation would seem less salient if you were trying to add value here in some sort of pro-social way, but you don't seem to be doing that so... so basically everything you write here I take with a giant grain of salt.

My hope is that you are just missing some basic insight, and once you learn why you seem to be half-malicious you will stop defecting in the communication game and become valuable :-)

TWO: From what you write here at an object level, you don't even seem to have a clear and succinct understanding of any of the things that have been called a "problem of induction" over the years, which is your major beef, from what I can see.

You've mentioned Popper... but not Hume, or Nelson Goodman? You've never mentioned "grue" or "bleen" that I've seen, so I'm assuming it is the Humean critique of induction that you're trying to gesture towards rather than the much more interesting arguments of Goodman...

But from a software engineering perspective Hume's argument against induction is about as much barrier to me being able to think clearly or build smart software as Zeno's paradox is a barrier to me being able to walk around on my feet or fix a bicycle.

Also, it looks like you haven't mentioned David Wolpert and his work in the area of no free lunch theorems. Nor have you brought up any of the machine vision results or word vector results that are plausibly relevant to these issues. My hypothesis here is that you just don't know about these things.

(Also, notice that I'm giving links to sites that are not my own? This is part of how the LW community can see that I'm not a self-promoting spammer.)

Basically, I don't really care about reading the original writings of Karl Popper right now. I think he was cool, but the only use I would expect to get from him right now would be to read him backwards in order to more deeply appreciate how dumb people used to be back when his content was perhaps a useful antidote to widespread misunderstandings of how to think clearly.

Let me spell this out very simply to address rather directly your question of communication pragmatics...

"It sounds like you want me to rewrite material from DD and KP's books? Why would me rewriting the same things get a different outcome than the existing literature?"

The key difference is that Karl Popper is not spamming this forum. His texts are somewhere else, not bothering us at all. Maybe they are relevant. My personal assessment is currently that they have relatively little import to active and urgent research issues.

If you displayed the ability to summarize thinkers that maybe not everyone has read, and explain that thinker's relevance to the community's topics of interests, that would be pro-social and helpful.

The longer the second fact (where you seem neither to know what you're talking about nor to care about the valuable time of your readers) remains true, the more glaring the first fact (that you seem to be an inefficient shit-stirring spammer) becomes.

Please, surprise me! Please say something useful that does not involve a link to the sites you seem to be trying to push traffic towards.

"you try to give me standard advice that i've heard a million times before"

I really hope this was hyperbole on your part. Otherwise it seems I should set my base rates for this conversation being worth anything to 1 in a million, and then adjust from there...

Comment by jenniferrm on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-25T20:48:46.347Z · score: 4 (4 votes) · LW · GW

I hunted around your website until I found an actual summary of Popper's thinking in straightforward language.

Until I found that, I had not seen you provide clear text like this, and I wanted to exhort you to write an entire sequence in language with that flavor: clean and clear and lacking in citation. The sequence should be about what "induction" is, why you think other people believed something about it (even if perhaps not by that old-fashioned name), and why you think those beliefs are connected to reliably predictable failures to achieve their goals via cognitively mediated processes.

I feel like maaaybe you are writing a lot about things you have pointers to, but not things that you have held in your hands, used skillfully, and made truly a part of you? Or maybe you are much much smarter and better read than me, so all your jargon makes sense to you and I'm just too ignorant to parse it.

My hope is that you can dereference your pointers and bring all the ideas and arguments into a single document, and clean it up and write it so that someone who had never heard of Popper would think you are really smart for having had all these ideas yourself.

Then you could push one small chapter from this document at a time out into the world (thereby tricking people into reading something piece by piece that they might have skipped if they saw how big it was going to be up front) and then after 10 chapters like this it will turn out that you're a genius and everyone else was wrong and by teaching people to think good you'll have saved the world.

I like people who try to save the world, because it makes me marginally less hopeless, and less in need of palliative cynicism :-)

Comment by JenniferRM on [deleted post] 2017-11-24T23:05:22.458Z

I'm still super curious about the process you used to distinguish color patterns in the game that are "mere mechanics" from color patterns in the game that are "useful for a larger motivational ontology".

Could you maybe talk about how you sift the ontology using the idea that "White and blue have more than their fair share of flying creatures while green tends to have few flying creatures but many counters to flying" as an example?

Is there such a thing as a "way to be wrong" here? If so, how do you notice that you are more or less wrong in this domain?

Is flying just a game mechanic or is it a symbol (perhaps for "openness to new experience") or is there some orthogonal issue to pay attention to that is much more important?

Comment by jenniferrm on Unjustified ideas comment thread · 2017-11-24T22:30:46.836Z · score: 3 (2 votes) · LW · GW

I think the correct place to look for new content on LerW (like the place to actually bookmark?) might be the daily post list, so as to see the posts that the magical scoring system thinks are not worth showing on the main page. However, I only saw this post (and could make this comment) after discovering that link, so there is a decent chance that a lot of test users have not discovered the link yet, and won't be helped by this comment unless the magical sorting algorithm changes its mind somehow.

Comment by jenniferrm on Humans can be assigned any values whatsoever... · 2017-11-23T19:51:49.339Z · score: 0 (0 votes) · LW · GW

Initially I wrote a response spelling out in excruciating detail an example of a decent chess bot playing the final moves in a game of Preference Chess, ending with "How does this not reveal an extremely clear example of trivial preference inference, what am I missing?"

Then I developed the theory that what I'm missing is that you're not talking about "how preference inference works" but more like "what are extremely minimalist preconditions for preference inference to get started".

And given where this conversation is happening, I'm guessing that one of the things you can't take for granted is that the agent is at all competent, because sort of the whole point here is to get this to work for a super intelligence looking at a relatively incompetent human.

So even if a Preference Chess Bot has a board situation where it is one move away from winning, losing, or taking another piece that it might prefer to take... no matter what move the bot actually performs, you could argue it was just a mistake, because it couldn't even understand the extremely short-run, tournament-level consequences of whatever Preference Chess move it made.

So I guess I would argue that even if any specific level of stable state intellectual competence or power can't be assumed, you might be able to get away with a weaker assumption of "online learning"?

It will always be tentative, but I think it buys you something similar to full rationality that is more likely to be usefully true of humans. Fundamentally you could use "an online learning assumption" to infer "regret of poorly chosen options" from repetitions of the same situation over and over, where either similar or different behaviors are observed later in time.

To make the agent have some of the right resonances... imagine a person at a table who is very short and wearing a diaper.

The person's stomach noisily grumbles (which doesn't count as evidence-of-preference at first).

They see in front of them a cupcake and a cricket (their looking at both is somewhat important, because it means they could know that a choice is even possible, allowing us to increment the choice event counter here).

They put the cricket in their mouth (which doesn't count as evidence-of-preference at first).

They cry (which doesn't count as evidence-of-preference at first).

However, we repeat this process over and over and notice that by the 50th repetition they are reliably putting the cupcake in their mouth and smiling afterwards. So we use the relatively weak "online learning assumption" to say that something about the cupcake choice itself (or the cupcake's second order consequences that the person may think semi-reliably happen) is more preferred than the cricket.

Also, the earlier crying and later smiling begin to take on significance, either as side channel signals of preference or perhaps as the actual thing being pursued as a second order consequence, because they reliably come right after the action whose rate changes over time from rare to common.

The development of theories about side channel information could make things go faster as time goes on. It might even become the dominant mode of inference, up to the point where it starts to become strategic, as with lying about one's goals in competitive negotiation contexts becoming salient once the watcher and actor are very deep into the process...

However, I think your concern is to find some way to make the first few foundational inferences in a clear and principled way that does not assume mutual understanding between the watcher and the actor, and does not assume perfect rationality on the part of the actor.

So an online learning assumption does seem to enable a tentative process, that focuses on tiny little recurring situations, and the understanding of each of these little situations as a place where preferences can operate causing changes in rates of performance.
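Here is a rough Python sketch of the kind of tally such a watcher might keep (my own toy for this comment, with made-up names like infer_preferences; a real watcher would need something statistically far more careful). The only assumption it leans on is the online learning one: choice rates in a recurring situation may drift as the actor learns, and late-stage drift toward an option counts as tentative evidence of preference:

```python
from collections import defaultdict

def infer_preferences(observations, min_repeats=20, drift_threshold=0.25):
    """observations: time-ordered list of (situation, chosen_option) pairs.
    For each situation seen often enough, report the option whose choice
    rate rose from the early repetitions to the late ones."""
    by_situation = defaultdict(list)
    for situation, choice in observations:
        by_situation[situation].append(choice)

    inferred = {}
    for situation, choices in by_situation.items():
        if len(choices) < min_repeats:
            continue  # too few repetitions to see a learning trend
        half = len(choices) // 2
        early, late = choices[:half], choices[half:]
        for option in set(choices):
            early_rate = early.count(option) / len(early)
            late_rate = late.count(option) / len(late)
            if late_rate - early_rate > drift_threshold:  # arbitrary cutoff
                inferred[situation] = option
    return inferred

# The cupcake/cricket story: early picks are scattered, late picks converge.
log = [("snack_table", "cricket")] * 10 + [("snack_table", "cupcake")] * 40
print(infer_preferences(log))  # -> {'snack_table': 'cupcake'}
```

Note that the inference stays tentative by construction: the cricket never gets labeled "dispreferred", it just fails to show the upward drift, and a flat or falling rate is treated as noise rather than as evidence of anything.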

If a deeply wise agent is the watcher, I could imagine them attempting to infer local choice tendencies in specific situations and envisioning how "all the apparently preferred microchoices" might eventually chain together into some macro scale behavioral pattern. The watcher might want to leap to a conclusion that the entire chain is preferred for some reason.

It isn't clear that the inference to the preference for the full chain of actions would be justified, precisely because of the assumption of the lack of full rationality.

The watcher would want to see the full chain start to occur in real life, and to become more common over time when chain initiation opportunities presented themselves.

Even then, the watcher might even double-check by somehow adding signposts to the actor's environment, perhaps showing the actor pictures of the 2nd, 4th, 8th, and 16th local action/result pairs that it thinks are part of a behavioral chain. The worry is that the actor might not be aware of how predictable they are, and might not actually prefer all that can be predicted from their pattern of behavior...

(Doing the signposting right would require a very sophisticated watcher/actor relationship, where the watcher had already worked out a way to communicate with the actor, and observed the actor learning that the watcher's signals often functioned as a kind of environmental oracle for how the future could go, with trust in the oracle and so on. These preconditions would all need to be built up over time before post-signpost action rate increases could be taken as a sign that the actor preferred performing the full chain that had been signposted. And still things could be messed up if "hostile oracles" were in the environment such that the actor's trust in the "real oracle" is justifiably tentative.)

One especially valuable kind of thing the watcher might do is to search the action space for situations where a cycle of behavior is possible, with a side effect each time through the loop, and to put this loop and the loop's side effect into the agent's local awareness, to see if maybe "that's the point" (like a loop that causes the accumulation of money, and after such signposting the agent does more of the thing) or maybe "that's a tragedy" (like a loop that causes the loss of money, that might be a dutch booking in progress, and after signposting the agent does less of the thing).
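A toy version of that loop-hunting search, again just my own sketch with hypothetical names: scan a time-ordered log of (state, money) snapshots for states the actor keeps returning to, and report the average change in the side resource per lap, so that reliably money-losing cycles (dutch-book candidates) stand out as things worth signposting:

```python
def loop_side_effects(log):
    """log: time-ordered list of (state, money) snapshots.
    Returns {state: average money change per revisit} for every state
    the actor returned to at least once."""
    last_money = {}
    per_lap = {}
    for state, money in log:
        if state in last_money:
            per_lap.setdefault(state, []).append(money - last_money[state])
        last_money[state] = money
    return {s: sum(d) / len(d) for s, d in per_lap.items()}

# An actor that cycles home -> market -> home, losing $2 per lap:
trace = [("home", 100), ("market", 99), ("home", 98),
         ("market", 97), ("home", 96)]
print(loop_side_effects(trace))  # -> {'home': -2.0, 'market': -2.0}
```

A reliably positive average for a loop suggests "that's the point" (show the actor the loop and see if the rate goes up); a reliably negative one is the tragedy case, a candidate dutch booking to signpost in the hope that the rate goes down.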

Is this closer to what you're aiming for? :-)

Comment by JenniferRM on [deleted post] 2017-11-22T10:04:04.096Z

I'm a fan of pretty much any vocabulary with decent construct validity and a large installed base of pre-existing users, so I'm positively disposed towards this :-)

However one thing that makes this system a bit hard for me is my object-level familiarity with the original card game that this vocabulary is extracted from.

Like... in MtG itself I often wanted to make blue decks because I value knowledge and cleverness and blah blah blah, but counterspells and library manipulation can usually only be part of a well rounded deck. It is rare in my experience to find a pure blue deck that wins reliably. I've seen a really strong deck built around merfolk... but this is the exception that proves the rule, because the inspiration was "How do I make a white weenie deck without actually using white cards?"

And like... what about flying creatures? Blue and white are the classic colors for this, and green's spider webs and hurricanes are mostly anti-flying. Is "flying" a symbol for something or not?

Red tends to be easy to play. You just hit hard and fast and then fireball at the end and hope it works. Is that a humorous commentary on people who are ruled by their passions, or... ?

And what about colorless artifacts?

And black's primary resonances are death and rot. It has cards like Royal Assassin, Terror, Gravebind, Blight, and Pestilence. This doesn't make me think of Ayn Rand; it makes me think of Sid Vicious, suicidal goths, crack houses, and Aum Shinrikyo. So like... uh... what?

I guess my core question is about the degree to which you think this archetypal vocabulary system is separate and different and more useful than the card game from which it was abstracted, and how you tell which parts of the card game's stereotypes are "just game mechanics" or "unproductive imagery" and which parts of the card game's stereotypes "encode useful psychological abstractions"?

Comment by jenniferrm on Hero Licensing · 2017-11-22T07:46:25.223Z · score: 21 (11 votes) · LW · GW

I think you're right in general.

However in specific unusual situations silence might not work... like if you're talking to potential investors (or philanthropists) and they ask "How come you think you're good enough to do this [thing that you want us to partially fund]?"

If I understand correctly, Eliezer decided at a young age to work on a public good whose value would be difficult (or evil) to reserve only to those who helped pay to bring it about, and which was unintelligible to voters, congress critters, the vast majority of philanthropists, and even to most of the high prestige experts in the relevant technical fields.

Having tracked much of his online oeuvre for approaching two decades, I say that arguably his biggest life accomplishment has been the construction of an entire subcultural ecosystem wherein the thing he aspired to spend his life on (i.e. building Friendly AGI) is basically validated as worth donating to.

There is still the question of whether the existence of such a culture is necessary or sufficient to actually be safe from "unaligned AGI" or "grey goo" or various other scary things (because at some point the rubber will meet the road) but the existence of the culture is probably a positive factor on net.

The existence of this culture has caused a lot of cultural echoes within the broader English-speaking world, and the plurality of this global outcome, traced back through causal dominos that have been falling since 1999 or so, can probably be laid at Eliezer's feet, though he may not want to claim it all. Gleick should write a book about him, because he is pretty clearly the Drexler of transhuman AI. (Admittedly, Vernor Vinge, Nick Bostrom, Anna Salamon, Seth Baum, and Peter Thiel would probably deserve chapters in the book.)

Thus Eliezer's entire public life has sorta been one giant pitch to the small minority of philanthropists who will "get it" and the halo of people who are close to his ideal target audience in edit distance or in the social graph. I think a key reason it happened this way is that, economically speaking, for people working on "public credence goods that non-geniuses disbelieve" it kinda has to be funded this way. For such people, validation is not only digestible, validation is pretty much the only thing they can hope to eat.