Dark Forest Theories

post by Raemon · 2023-05-12T20:21:49.052Z · LW · GW · 52 comments

There's a concept I first heard in relation to the Fermi Paradox, which I've ended up using a lot in other contexts.

Why do we see no aliens out there? A possible (though not necessarily correct) answer is that the aliens might not want to reveal themselves for fear of being destroyed by larger, older, hostile civilizations. There might be friendly civilizations worth reaching out to, but the upside of finding friendlies is smaller than the downside of risking getting destroyed.

Even old, powerful civilizations aren't sure that they're the oldest and most powerful civilization, and the eldest civilizations could be orders of magnitude more powerful still.

So, maybe everyone made an individually rational-seeming decision to hide.

A quote from the original sci-fi story I saw describing this:

“The universe is a dark forest. Every civilization is an armed hunter stalking through the trees like a ghost, gently pushing aside branches that block the path and trying to tread without sound. Even breathing is done with care. The hunter has to be careful, because everywhere in the forest are stealthy hunters like him. If he finds another life—another hunter, angel, or a demon, a delicate infant to tottering old man, a fairy or demigod—there’s only one thing he can do: open fire and eliminate them.”

(I consider this a spoiler for the story it's from, so please don't bring that up in the comments unless you use spoiler tags[1])

However this applies (or doesn't) to aliens, I've found it useful to have the "Dark Forest" frame in a number of contexts where people are looking at a situation, see something missing, and are confused. "Why is nobody doing X?", or "Why does X not exist?". The answer may be that it does exist, but is hidden from you (often on purpose).

I once talked to someone new to my local community saying "nobody is building good group houses that really help people thrive. I'm going to build one and invite people to it." I said "oh, people are totally building good group houses that help people thrive... but they are private institutions designed to help the people living there. The way they function is by creating a closed boundary where people can build high trust relationships."

A couple other example areas where Dark Forest Theorizing is relevant:

In some cases, a thing might not literally be hidden from you on purpose, but its absence is still evidence of something systematically important. For example, "Why aren't the AI people making a giant mass movement to raise public awareness of AI?" Because many versions of a mass movement might turn out to be net-negative (e.g. causing political polarization which makes it harder to get the necessary bipartisan support to pass the relevant laws), and instead the people involved are focused on more narrow lobbying efforts.

(This is not to say that you can't make good public meetups, or successfully talk to AI companies and get them to change their ways, or that there aren't good ways to do mass public outreach on AI that are being neglected. But there may be systemic difficulties you're missing.)

I do realize the cosmic existential threat aspect of the original metaphor is pretty overkill for some of these. The part of the metaphor that feels most resonant to me here is "you're in a dark place and there are things you'd maybe expect to see, but don't, and the reason you don't see X is specifically because X doesn't want you to find it."

I may have more to say on individual Dark Forests in the future. For now, I just want to present this as a model to keep in your back pocket and see where it's useful.

  1. ^

    Spoiler-tagged lines begin with ">!"

52 comments

Comments sorted by top scores.

comment by Julian Bradshaw · 2023-05-13T22:34:08.473Z · LW(p) · GW(p)

As applied to aliens, I think the Dark Forest frame is almost certainly wrong. Perhaps it's useful in other contexts, and I know you repeatedly disclaimed its accuracy in the alien context, but for the benefit of other readers I want to explain why it's unlikely.

Basically, there are two reasons:

  1. The only technological civilization we know—humanity—hasn't tried at all to hide.
  2. There is no stealth in space.

To expand on the first, consider that humanity has consistently spammed out radio waves and sent out probes with the express hope aliens might find them. Now, these are unlikely to actually give away Earth's location at any distance (the probes are not moving that fast and are hard to find, the radio waves will fade to background noise quickly), but the important thing is that hiding is not on the agenda. Eventually, we are very likely to do something that really is very visible, such as starting up a Dyson Swarm. Consider that ancient humans were arguably often in analogous situations to a Dark Forest, and that the dominant strategy was not indefinite hiding. Hiding is something of an unnatural act for a civilization that has already conquered its planet.

To expand on the second, the cost to send self-replicating probes to every star system in your galaxy to search for life is trivial for even a K-2 civilization, and doable within a few million years, and their origin could be masked if you were paranoid. Building enormous telescopes capable of spotting biosignatures, or even technosignatures, is also possible. (And even if there were some technology that allowed you to hide, you'd have to invent that technology before you're spotted, and given galactic timescales, other civilizations ought to have scoped out the entire galaxy long before you even evolved.)
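
A back-of-the-envelope check on the "few million years" claim (probe speed, galaxy size, and star count below are rough assumed figures, not from the comment):

```python
import math

# Assumed figures for a rough sanity check:
probe_speed_c = 0.1            # probe cruise speed as a fraction of light speed
galaxy_diameter_ly = 1.0e5     # Milky Way diameter in light-years
num_star_systems = 2.0e11      # rough number of star systems in the galaxy

# Time to cross the galaxy once at the assumed speed.
crossing_time_yr = galaxy_diameter_ly / probe_speed_c    # = 1,000,000 years

# If each probe builds copies at every system it visits, coverage grows
# geometrically; this is the number of doubling generations needed.
doublings_needed = math.log2(num_star_systems)           # ~37.5 generations

print(f"single crossing: {crossing_time_yr:.0e} years")
print(f"doublings to cover every system: {doublings_needed:.0f}")
```

Total time is dominated by travel rather than replication, so even with generous delays at each stop the program fits comfortably within a few million years, consistent with the comment's claim.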

For what it's worth, I think the two most likely Fermi Question answers are:

  1. We've fundamentally misunderstood the nature of the universe (e.g. the simulation hypothesis).
  2. We're the only intelligent civilization in at least the Milky Way.
Replies from: RomanS, Dumbledore's Army
comment by RomanS · 2023-05-14T12:22:38.508Z · LW(p) · GW(p)

There is no stealth in space.

Doesn't sound very convincing to me. Sufficiently advanced tech could allow things like:

  • build an underground civilization 50 kilometers below the surface of a rocky planet
  • settle in the emptiness between galaxies, too far away from anyone to bother looking for you
  • run your civilization of ems on extremely-low-energy computers somewhere in the Oort Cloud
  • hide deep in a gas giant or even in a star
  • run your digital mind on a carefully manipulated natural process (e.g. modify a bunch of growing salt crystals or stellar magnetic processes into doing useful computations)
  • go nanoscale, with entire civilizations running on swarms of nanoparticles somewhere in a small molecular cloud in intergalactic space

In some of these scenarios, you could look right into a busy alien city using every possible sensor, but not recognize it as such, while standing one meter away from it. 

As for why bother with stealth, one can view it as a question of costs and benefits:

  • if you don't hide, there is some risk that your entire civilization will be killed off. Makes sense to invest at least some resources to reduce the risk.
  • if you hide, there is some cost of doing the hiding, which could be negligible (depending on your tech and philosophy). E.g. if your civ is already running on a swarm of nanoparticles for practical reasons, the cost of hiding is zero.
Replies from: Julian Bradshaw
comment by Julian Bradshaw · 2023-05-14T18:44:14.296Z · LW(p) · GW(p)

"Sufficiently advanced" tech could also plausibly identify all those hidden civilizations. For example, an underground civilization would produce unusual seismic activity, and taking up some inner portion of a gas giant or star would alter their outward behavior. Ultimately, civilizations use mass-energy in unnatural ways, and I don't see a fundamental physical principle that could protect that from all possible sensing. 

More importantly, I don't think your suggestions address my point that hostile civilizations would get you before you even evolve.

But, let's grant that you're the first civilization to evolve in your galaxy, or at least among the first, before someone starts sending out probes to prevent any new civilizations from arising and threatening them. And let's grant that they will never find you. That is a victory, in that you survive. But the costs are astronomical: you only get to use the mass-energy of a single planet, or star, or Oort Cloud, while someone else gets the entire galaxy.

To put it another way: mass-energy is required for your civilization to exist and fulfill its preferences, so far as we understand the universe. If you redirect any substantial amount of mass-energy away from its natural uses (stars, planets, asteroids), that's going to be proportionally detectable. So, you can only hide by strangling your own civilization in its crib. Not everyone is going to do that; I seriously doubt that humanity (or any artificial descendant) will, for one.

(This comes back to my link about "no stealth in space" - the phrase is most commonly invoked when referring to starships. If your starship is at CMB temperatures and never moves, then yeah, it'd be hard to detect. But also you couldn't live in it, and it couldn't go anywhere! You want your starship—your civilization—to actually do something, and doing work (in a physics sense) is detectable.)

comment by Dumbledore's Army · 2024-12-19T17:02:56.748Z · LW(p) · GW(p)

Hard disagree to point 1. The fact that humanity hasn't tried to hide is not counter-evidence to the Dark Forest theory. If the Dark Forest is correct, the prediction is that all non-hiding civilisations will be destroyed. We don't see anyone else out there, not because every civilisation decided to hide, but because only hiders survived.

To be clear: the prediction of the Dark Forest theory is that if humanity keeps being louder and noisier, we will at some point come to the attention of an elder civilisation and be destroyed. I don't know what probability to put on this theory being correct. I doubt it ranks higher than AI in terms of existential risk.

I do know that 'we haven't been destroyed yet, barely 100 years after inventing radio' is only evidence that there are no ultra-hostile civilisations within 50 light-years which also have the capability to detect even the very weakest radio signals from an antique Marconi. It is not evidence that we won't be destroyed in future when signals reach more distant civs and/or we make more noticeable signals.

Replies from: Julian Bradshaw
comment by Julian Bradshaw · 2024-12-22T05:42:36.511Z · LW(p) · GW(p)

The problem with Dark Forest theory is that, in the absence of FTL detection/communication, it requires a very high density and absurdly high proportion of hiding civilizations. Without that, expansionary civilizations dominate. The only known civilization, us, is expansionary for reasons that don't seem path-dependent, so it seems unlikely that the preconditions for Dark Forest theory exist.

To explain:

Hiders have limited space and mass-energy to work with. An expansionary civilization, once in its technological phase, can spread to thousands of star systems in mere thousands of years and become unstoppable by hiders. So, hiders need to kill expansionists before that happens. But if they're going to hide in their home system, they can't detect anything faster than lightspeed allows! So you need murderous hiding civs within a thousand light years or so of every single habitable planet in the galaxy, all of which need to have evolved before any expansionary civs in the area. This is improbable unless basically every civ is a murderous hider. The fact that the only known civ is not a murderous hider, for generalizable reasons, is thus evidence against the Dark Forest theory.

 

Potential objections:

  • Hider civs would send out stealth probes everywhere.

Probes are still limited by lightspeed; an expansionary civ would become overwhelmingly strong before the probes reported back.

  • Hider civs would send out killer probes everywhere.

If the probes succeed in killing everything in the galaxy before those civilizations reach the stars, you didn't need to hide in the first place. (Also, note that hiding is a failed strategy for everyone else in this scenario: you can't do anything about a killer probe when you're the equivalent of the Han dynasty. Or the equivalent of a dinosaur.) If the probes fail, the civ they failed against will have no reason to hide, having been already discovered, and so will expand and dominate.

  • Hider civs would become so advanced that they could hide indefinitely from expansionary civs, possibly by retreating to another dimension.

Conceivable, but I'd rather be the expansionary civ here?

  • Hider civs would become so advanced that they could kill any later expansionary civ that controlled thousands of star systems.

I think this is the strongest objection. If, for example, a hider civ could send out a few ships that can travel at a higher percentage of lightspeed than anything the expansionary civ can do, and those ships can detonate stars or something, and catching up to this tech would take millions of years, then just a few ships could track down and obliterate the expansionary civ within thousands/tens of thousands of years and win.

The problem is that the "hider civ evolved substantially earlier" part has to be true everywhere in the galaxy, or else somewhere an expansionary civilization wins and then snowballs with its resource advantages - this comes back to the "very high density and absurdly high proportion of hiding civilizations" requirement. The hiding civs have to always be the oldest whenever they meet an expansionary civ, and older to a degree that the expansionary civ's likely advantage of several orders of magnitude more resources and population doesn't counteract the age difference.

comment by Raemon · 2023-05-12T23:58:36.776Z · LW(p) · GW(p)

Note this is related to "selection effects". In the case of "where are the good group houses" - group houses that are functioning well tend not to have much turnover, and so don't advertise vacancies. Group houses that announce vacancies frequently are more likely to have some kind of drama going on, or to just not be deeply good at satisfying people's life goals.

Replies from: Viliam
comment by Viliam · 2023-05-13T21:20:37.619Z · LW(p) · GW(p)

That reminds me of "You Are Not Hiring the Top 1% [LW · GW]".

Generally speaking, people or organizations that suck are over-represented on the market, because they keep trying and keep getting rejected; the good ones are quickly taken off the market.

For example, if you apply to a publicly posted job position, consider the fact that the best companies often do not need to advertise, because their happy employees gladly tell their friends. On the other hand, companies that suck, and that people keep quitting, keep advertising their job positions for years.

And from the company perspective, the best employees are rarely unemployed, but the ones that get rejected keep going and applying to other companies. Do not be surprised if 9 out of 10 candidates for the software developer position cannot solve the fizz-buzz test.

Rejected authors keep sending their manuscripts to every editor they find.

Best partners get married young; people who can't keep a relationship are always looking for someone new.

As an entrepreneur, you are most likely to get business proposals that many before you have already rejected, often for a good reason.

A lost child who actively asks a random adult person for help is less likely to meet a pedophile than a child who just stands helplessly and waits until some stranger starts a conversation with him or her.

Generally, being approached by someone is statistically worse than approaching a random person.

We should probably fly to other planets, not send signals and wait for someone else to fly here.

Replies from: M. Y. Zuo, AllAmericanBreakfast, lahwran
comment by M. Y. Zuo · 2023-05-14T14:31:21.808Z · LW(p) · GW(p)

Best partners get married young; people who can't keep a relationship are always looking for someone new.

This is why I've always been puzzled that Tinder, and other dating apps, became so popular.

Isn't using it a clear signal that no one in the user's friend/acquaintance groups desires to date them?

And that if they are on it for more than a few days, that they are less desirable partners?

There seems to be a negative feedback loop with scale instead of a positive one.

Replies from: Viliam
comment by Viliam · 2023-05-15T21:08:56.371Z · LW(p) · GW(p)

Isn't using it a clear signal that no one in the user's friend/acquaintance groups desires to date them?

I guess, if no one in my social circle wants to date me, what do I lose by announcing this fact to people I wouldn't have met otherwise anyway? And given that they also use the dating app, they are unlikely to reject me just because I use the same dating app... that would defeat the entire purpose.

(Technically, if you are polyamorous, you are only signalling that there are not enough people in your social circle willing to date you, for whatever is your personal definition of "enough".)

And that if they are on it for more than a few days, that they are less desirable partners?

It would make sense to reset your account regularly.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-05-15T22:37:08.004Z · LW(p) · GW(p)

It would make sense to reset your account regularly.

This got a laugh out of me. That's certainly one way to go about it.

comment by DirectedEvolution (AllAmericanBreakfast) · 2023-05-14T01:19:46.229Z · LW(p) · GW(p)

I don't think this is an adequate account of the selection effects we find in the job market. Consider:

  • We don't expect people to disappear from the job market just because they're the best. They disappear from the market when they've found a match, so that they and their counterpart become confident in the relationship's durability and shift from explore to exploit, investing in the relationship for the long term. The pool of candidates comprises both those who outgrew their previous position or relationship, and those who got fired or dumped. Insofar as the market suffers from information asymmetries or different rates of performance growth between partners, we should expect lots of participation in the market by high-quality people seeking to move up from a previous mismatch with a low-quality partner.
  • Low-quality employees get discouraged and exit the job market in their field, while low-quality businesses go bankrupt. The people who aren't dating include both those who are in relationships (which only means they're well-matched and better than no partner at all) and those who are too stressed or unhealthy or discouraged to even try to date. Participating in the market is a costly signal that you consider yourself hireable/dateable or are successful and ready to grow.
  • Searching widely for the next job may be a sign of vigor and open-mindedness - the people putting out the most applications are those most determined to succeed.

One factor that is discouraging to consider is how switching costs and the cost of participation in the market intersect.

  • If it's cheap to look for a partner, costly to break up, and there's a lot of information asymmetry, then there'll always be a set of terrible partners who are always on the hunt (because it's cheap), who have a real chance of finding a better match (because of information asymmetry), and who can expect to keep the good match around for a while (because of high switching costs). The US military is an example. It has a massive advertising budget and a huge manpower shortage, so it's always on the hunt for recruits. There's a big difference between its heroic and self-actualizing self-portrayal and the dismal and dehumanizing experience many soldiers report. And once you're in, you can't just leave. The existence of these entities in any market with these properties is a discouraging sign when considering candidate jobs/partners at random. If you find that participation in the market is cheap or that sharing negative information about a candidate is discouraged (e.g. a powerful politician who could retaliate against anyone accusing them of wrongdoing), then learning this information should make you downgrade your expectations of the candidate's quality.

That being said, it may be that seeking a job via responding to job applications online is a sign of a lower-tier candidate, all else equal. Whether a writer submits to editors independently or via an agent may say a lot about the writer's quality, and whether a first date comes from an app, a recommendation from a friend, or flirtation at a party might say a lot about the potential romantic partner.

Replies from: Viliam
comment by Viliam · 2023-05-15T21:52:40.472Z · LW(p) · GW(p)

Yeah, it does not work absolutely. As you say, sometimes the incompetent people and companies disappear from the market; and sometimes for various reasons the competent ones are looking for something new.

Yet, I would say that in my personal experience there seems to be a correlation: the bad jobs I had were often those where I responded to a printed job announcement (even if I responded to multiple postings and chose the job that seemed relatively best among them), and the good jobs I had were often those where people I knew actively approached me saying "hey, I have a great job, and we are currently looking for someone just like you". (Or in one case, it was me calling my friends and asking: "hey, where are you working currently? is it an okay place? are you hiring?".) From the opposite perspective, I have interviewed a few job candidates whose CVs seemed impressive, but their actual skills were somewhere around the Hello-World level. So it seems to me that responding to job announcements is indeed a lemon market for both sides.

comment by the gears to ascension (lahwran) · 2023-05-13T21:48:59.760Z · LW(p) · GW(p)

(we should greatly improve the ratio of good things, also.)

comment by trevor (TrevorWiesinger) · 2023-05-13T10:44:47.608Z · LW(p) · GW(p)

There might be a dark forest phenomenon with anti-totalitarianism activism. 

I've seen a lot of people in EA saying things like "nobody is trying to prevent totalitarianism right now, therefore it's neglected", which leads them towards some pretty bonkers beliefs like "preventing totalitarianism is a great high-value way to contribute to making AGI end up aligned", because they see defending democracy as an endless crusade of being on the right side of history; and in reality, they have no clue that visibly precommitting to a potentially-losing side is something that competent people often avoid, or that the landscape of totalitarianism-prevention might already be pretty saturated by underground networks of rich and powerful people, who have already spent decades being really intense about quietly fighting totalitarianism, and aren't advertising the details of their underground network of allies and alliances.

In which case, getting EA or AI safety entangled in that web would actually mean a high probability of having everything you care about hijacked, appropriated, used as cannon fodder, or some other way of being steered off a cliff.

The part of the metaphor that feels most resonant to me here is "you're in a dark place and there's things you'd maybe expect to see, but don't, and the reason you don't see X is specifically because X doesn't want you to find it."

I think that instead of ending with "and the reason you don't see X is specifically because X doesn't want you to find it", it makes more sense to end with something more like "and the reason you don't see X is specifically because X doesn't want *many unknown randos with a wide variety of capabilities and tendencies* to find it". 

Maybe the other people in the dark forest want to meet people like you! But there are all sorts of other people out there too, such as opportunistic lawyers and criminals and people much smarter or wealthier or more aggressive than them.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-05-14T14:21:32.154Z · LW(p) · GW(p)

... or that the landscape of totalitarianism-prevention might already be pretty saturated by underground networks of rich and powerful people, who have already spent decades being really intense about quietly fighting totalitarianism, and aren't advertising the details of their underground network of allies and alliances. 

I agree this is likely enough, and that the opposite is commonly presupposed in a lot of writing: that there are no individuals or groups several standard deviations above the writer in competence who actually coordinate, in a way unknown to the writer, that would obviate the point they're trying to convey.

It raises the interesting question of why exactly this happens since popular culture is full of stories of genius scientists, engineers, politicians, military leaders, entrepreneurs, artists, etc... 

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2023-05-14T15:12:58.535Z · LW(p) · GW(p)

popular culture is full of stories of genius scientists, engineers, politicians, military leaders, entrepreneurs, artists, etc

I think it's possible that there are all sorts of reasons why these people could have vanished from public view. For example, maybe most of society's smartest people have all become socially-awkward corporate executives who already raced to the bottom and currently spend all of their brainpower outmaneuvering each other at office politics, or pumping out dozens of unique galaxy-brained resumes per day. Or maybe most of them have become software engineers who are smart enough to make $200k/y working from home one hour a day while appearing to work eight hours, and spend the rest of their time and energy hooked on major social media platforms (I've encountered several people in the Bay Area who do this).

It's difficult to theorize about invisible geniuses (or the absence of invisible geniuses) because it's unfalsifiable [? · GW]. But it's definitely possible that they could either secretly be in a glamorous place, like an underground network steering and coordinating multiple intelligence agencies (in which case all potential opposition might be doomed by default), or an unglamorous place, like socially awkward corporate executives struggling at office politics (office politics aren't a good fit for them due to social awkwardness), but they keep at it because they're still too smart to fail at the office politics and were never given a fair chance to find out about superintelligence [LW · GW] or human intelligence amplification [LW · GW].

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-05-14T16:13:18.541Z · LW(p) · GW(p)

... or an unglamorous place, like socially awkward corporate executives struggling at office politics (office politics aren't a good fit for them due to social awkwardness), but they keep at it because they're still too smart to fail at the office politics ...

That is a very interesting point. I never even considered before the possibility of someone just smart enough to keep afloat in office politics but not smart enough to transcend it. But when spelled out like that, it seems obvious that there must be a sizable cohort of middle managers and executives that fall into this category.

It does seem doubly tragic if they didn't even want to do it in the first place or have unsuitable personalities that effectively act as a glass ceiling, regardless of how much effort they put in.

comment by Elizabeth (pktechgirl) · 2024-12-19T02:06:17.589Z · LW(p) · GW(p)

I think this is a useful concept that I use several times a year. I don't use the term Dark Forest, so I'm not sure how much of that can be attributed to this post, but this post is the only relevant thing in the review so we'll go with that.

I also appreciate how easy to read and concise this post is. It gives me a vision of how my own writing could be shorter without losing impact.

comment by kave · 2024-10-28T19:15:26.236Z · LW(p) · GW(p)

For me, Dark Forest Theory reads strongly as "everyone is hiding, (because) everyone is hunting", rather than just "everyone is hiding".

comment by Shayne O'Neill (shayne-o-neill) · 2023-05-15T00:47:16.950Z · LW(p) · GW(p)

The "Dark Forest" idea originally actually appeared in an earlier novel "The Killing Star", by Charles Pellegrino and George Zebrowski, sometime in the 90s. (I'm not implying [mod-edit]the author you cite[/mod-edit] ripped it off, I have no claims to make on that, rather he was beaten to the punch) and I think the Killing Star's version of the the idea (Pellegrino uses the metaphor "Central park after dark")  is slightly stronger. 

The Killing Star's method of annihilation is the relativistic kill vehicle. Essentially, if you can accelerate a rock to relativistic speed (say 1/3 the speed of light), you have a planet buster, and such a weapon is almost unstoppable even if by sheer luck you do see the damn thing coming. It's low tech, lethal, and well within the tech capabilities of any species advanced enough to leave their solar system.

The most humbling feature of the relativistic bomb is that even if you happen to see it coming, its exact motion and position can never be determined; and given a technology even a hundred orders of magnitude above our own, you cannot hope to intercept one of these weapons. It often happens, in these discussions, that an expression from the old west arises: "God made some men bigger and stronger than others, but Mr. Colt made all men equal." Variations on Mr. Colt's weapon are still popular today, even in a society that possesses hydrogen bombs. Similarly, no matter how advanced civilizations grow, the relativistic bomb is not likely to go away...
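
As a rough check on the "planet buster" claim above, here is the kinetic energy of a modest rock at 1/3 c (the rock's mass and the Chicxulub figure are assumed, illustrative values, not from the book or the comment):

```python
# Illustrative figures only:
c = 3.0e8        # speed of light, m/s
m = 1.0e9        # mass of the rock in kg (~90 m across at typical rock density, assumed)
v = c / 3.0      # the "1/3 the speed of light" from the comment

gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
kinetic_energy_j = (gamma - 1.0) * m * c ** 2        # relativistic kinetic energy

chicxulub_j = 4.0e23                                 # rough dinosaur-killer impact energy
print(f"KE ~ {kinetic_energy_j:.1e} J (~{kinetic_energy_j / chicxulub_j:.0f}x Chicxulub)")
```

Under these assumptions, even a million-tonne rock at that speed carries on the order of ten times the energy of the dinosaur-killer impact, which is part of what makes the scenario so hard to defend against.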


So Pellegrino argues that as a matter of simple game theory, because diplomacy is nigh on impossible thanks to light speed delay, the most rational response to discovering another alien civilization in space is "Do unto the other fellow as he would do unto you, and do it first." Since you don't know the other civilization's temperament, you can only assume it has a survival instinct, and therefore would kill you to preserve itself at even the slightest possibility you would kill it, because you would do precisely the same.

Thus such an act of interstellar omnicide is not an act of malice or aggression, but simply self preservation. And, of course, if you don't wish to engage in such cosmic violence, the alternative as a species is to remain very silent.

I find the whole concept absolutely terrifying. Particularly in light of the fact that exoplanets DO in fact seem to be everywhere.

Of course the real reason for the Fermi Paradox might be something else: Earth's uniqueness (I have my doubts on this one), humanity's local uniqueness (i.e. advanced civilizations might be rare enough that we are well outside the travel distances of other advanced species; much more likely), and, perhaps most likely, radio communication just being an early part of the tech tree for advanced civilizations that we eventually stop using.

We have, alas, precisely one example of an advanced civilization to judge by: us. That's a sample size that's rather hard to reason about.

Replies from: Raemon
comment by Raemon · 2023-05-15T00:54:43.195Z · LW(p) · GW(p)

(note, I prefer the source I was quoting from be kept spoilered. I'm not sure whether it matters for the thing you're referencing)

Replies from: shayne-o-neill
comment by Shayne O'Neill (shayne-o-neill) · 2023-05-15T01:24:50.588Z · LW(p) · GW(p)

Yeah, it happens largely in the first few chapters; it's not really a spoiler. It's the event the book was famous for.

comment by DirectedEvolution (AllAmericanBreakfast) · 2023-05-12T22:02:01.493Z · LW(p) · GW(p)

Hi Raemon! I found the comparison between the Dark Forest explanation for the Fermi Paradox and the more prosaic examples of group houses and meetups thought-provoking. Do you see the comparison as more of a loose analogy, or are they both examples of a single phenomenon? Or is the dissolution of human communities done to avoid them turning into a Dark Forest, or perhaps as a result of the first signs that they might be turning into one?

My own take is that group houses and meetups sometimes have a flavor of Dark Forest to them, when there's one or more predatory people whose uncomfortable attentions everybody is trying to avoid. I have often seen this happen with men competing for the romantic attention of young women in these settings. The women aren't necessarily trying to hide in the shadows, and the men aren't typically trying to destroy a potential rival so much as to win the woman's attention, but the women do seem to have to figure out ways to avoid unwanted attention from these men. But that only seems superficially related to the Dark Forest explanation.

In my experience, group houses and meetups mostly break up naturally for pretty prosaic reasons: organizers get tired of running them, people build friendship and relationship networks that become more appealing than even a good meetup, people move away, the rent goes up, people's priorities change, a funder pulls out. The stable ones persist not so much because they're avoiding the attention of aggressive rivals as because they feel their time is best spent on each other, rather than in cultivating new relationships with outsiders.

However, I haven't spent time around the SF rationalist community - mainly around Seattle alternative/arts communities back in my 20s. Maybe the dynamics are different?

Replies from: pktechgirl, Raemon
comment by Elizabeth (pktechgirl) · 2023-05-13T21:11:13.961Z · LW(p) · GW(p)

My own take is that group houses and meetups sometimes have a flavor of Dark Forest to them, when there's one or more predatory people whose uncomfortable attentions everybody is trying to avoid. I have often seen this happen with men competing for the romantic attention of young women in these settings. The women aren't necessarily trying to hide in the shadows, and the men aren't typically trying to destroy a potential rival so much as to win the woman's attention, but the women do seem to have to figure out ways to avoid unwanted attention from these men.

 

When people talk about stuff like this they often use sexual harassment or other really odious behavior as the thing being avoided, but I think that's missing the hard part for the exact reason that it's more comfortable to use as an example. Behavior that has a consensus that it is objectionable is relatively easy to act on. It's not necessarily fun, and there are still issues with people staying right under the line of violating a rule or behavior that is only objectionable as a pattern, but no one is afraid to say "I don't want sex pests at our meetup."

The relevant difficulty lies in the behavior where there either isn't a consensus, or there kind of is but no one feels good announcing it. Things like appearance, obviously non-predatory social awkwardness, missing background knowledge and not being shy about asking for explanations, political opinions you can't make into a moral issue but are super tired of hearing about...

This might be a pointless tangent or it might be a key part of the thing. Many of the reasons people want a Dark Forest are themselves hiding in a dark forest. 

Replies from: Dagon, Viliam, AllAmericanBreakfast
comment by Dagon · 2023-05-15T13:55:10.448Z · LW(p) · GW(p)

I think that's an important insight (though I'm not sure how universal it is). The Dark Forest effect may be happening at multiple levels. Not only can't you find the people you expect in the forest, you can't even find the reasons they might be hidden, or work on better search or attraction methods to find them. Or even hypothesize well enough to find evidence for hiding vs nonexistence.

comment by Viliam · 2023-05-13T22:33:09.713Z · LW(p) · GW(p)

It seems like a scale with "obviously bad" on one end, "annoying but not too bad" somewhere in the middle, and "okay but slightly worse than current group average" on the other end.

Sadly, even the last one has the potential to destroy your group in the long term, if you keep adding people who are slightly below the average... thus lowering the average, and unknowingly lowering the bar for the next person slightly below the new average... and your group has less and less of the thing that the original members joined or created it for.

Kicking out the obviously bad person at least feels righteous. Kicking out the annoying one can make you feel bad. Kicking out the slightly-below-average person definitely makes you feel like an asshole.

(It is even more complicated in situations where the group members improve at something by practicing. In that case, the newbie is very likely currently worse than the group average, but you have to estimate their potential for improvement.)

Sometimes it can even make sense to reject new people who are actually less problematic than the worst current member, but still more problematic than the average. Just because your group can handle one annoying member doesn't mean it will be able to handle two. (Also, who knows what kind of interaction may happen between the two annoying members.)
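
A toy simulation of the drift described above (the quality scale and all numbers are purely illustrative assumptions):

```python
import random

random.seed(0)
group = [10.0] * 10                  # founding members, all at quality 10 on an arbitrary scale

for _ in range(100):                 # admit 100 newcomers, one at a time
    bar = sum(group) / len(group)    # the current group average sets the bar
    newcomer = bar - random.uniform(0.0, 0.5)   # each admit is "slightly below the average"
    group.append(newcomer)

print(f"average after 100 admissions: {sum(group) / len(group):.2f}")
```

The average only ratchets downward, even though each individual admission looked like a near-miss rather than an obvious mistake.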

Replies from: pktechgirl, Raemon, AllAmericanBreakfast
comment by Elizabeth (pktechgirl) · 2023-05-14T00:47:07.093Z · LW(p) · GW(p)

Yeah, I think one reason groups go dark-forest is that being findable tends to come with demands that you make your rules legible and unidimensionally consistent (see: LessWrong attempting to moderate [LW · GW] two users with very strong positives and very strong negatives), and that cuts you off from certain good states.

comment by Raemon · 2023-05-13T23:56:29.547Z · LW(p) · GW(p)

See also: The Tale of Alice Almost: Strategies for Dealing With Pretty Good People [LW · GW]

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-05-14T04:05:09.686Z · LW(p) · GW(p)

I think the post is describing a real problem (how to promote higher standards in a group that already has very high standards relative to the general population). I would like to see a different version framed around positive reinforcement. Constructive criticism is great, but it’s something we always need to improve, even the best of us.

People are capable of correctly interpreting the context of praise and taking away the right message. If Alice is a below-average fighter pilot, and her trainer praises her publicly for an above-average (for Alice) flight, her peers can correctly interpret that the praise is to recognize Alice’s personal growth, not to suggest that Alice is the ideal model of a fighter pilot. What inspires individual and collective improvement and striving is an empirical psychological question, and AFAIK a background of positive reinforcement along with specific constructive criticism is generally considered the way to go.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-05-14T14:36:33.972Z · LW(p) · GW(p)

People are capable of correctly interpreting the context of praise and taking away the right message.

The rate of success of this is not anywhere near 100%. So for group dynamics, where members have a finite level of patience, this really doesn't prevent Viliam's point: every new member being ever so slightly below the level of the previous ones leads to evaporative cooling of the original members after a few dozen iterations.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-05-14T16:36:24.952Z · LW(p) · GW(p)

The fundamental premise of trying to have a group at all is that you don’t exclusively care about group average quality. Otherwise, the easiest way to maximize that would be to kick out everybody except the best member.

So given that we care about group size as well as quality, kicking out or pushing away low performers is already looking bad. The natural place to start is by applying positive reinforcement for participating in the group, and only applying negative pressures, like holding up somebody as a bad example, when we’re really confident this is a huge win for overall group performance.

Edit:

The original version of my comment ended with:

"Humiliating slightly below group average performers seems frankly idiotic to me. Like, I’m not trying to accuse anybody here of being an idiot, I am just trying to express how intensely I disagree with the idea that this is a good way to build or improve the quality of groups. It’s like the leadership equivalent of bloodletting or something."

This was motivated by a misreading of the post Raemon linked and suggested an incorrect reading of what MY Zuo was saying. While I absolutely believe my statement here is true, it's not actually relevant to the conversation and is probably best ignored.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-05-14T19:00:18.396Z · LW(p) · GW(p)

What are you talking about? 

I'm referring to Viliam's point that a common scenario is that the original members leave once the group average quality declines below a threshold.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-05-14T19:14:21.830Z · LW(p) · GW(p)

I'm responding to Raemon's link to the Tale of Alice Almost, which is what I thought you were referring to as well. If you haven't read it already, it emphasizes the idea that holding up members of a group who are slightly below the group average as negative examples can somehow motivate an improvement in the group. Your response made me think you were advocating doing this in order to ice out low-performing members. If that's wrong, then sorry for making false assumptions - my comment can mainly be treated as a response to the Tale of Alice Almost.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-05-14T20:00:59.422Z · LW(p) · GW(p)

Is there some part of my original comment that you do not understand?

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-05-14T21:06:47.212Z · LW(p) · GW(p)

Your comment is a response to my rejection of the claims in Alice Almost that a good way to improve group quality is to publicly humiliate below average performers.

Specifically, you say that praising the improvement of the lower-performing members doesn't obviate Viliam's proposal to stop evaporative cooling by kicking out or criticizing low performers.

So I read you and Viliam as rejecting the idea that a combination of nurture and constructive criticism is the most important way to promote high group performance, and as holding that, instead, kicking out or publicly making an example of low performers is the better way.

If that’s not what you’re saying then let me know what specifically you are advocating - I think that one of the hard parts of this thread is the implicit references to previous comments and linked posts, without any direct quotes. That’s my fault in part, because I’m writing a lot of these comments on my phone, which makes quoting difficult.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-05-14T21:57:45.507Z · LW(p) · GW(p)

 I'm really unsure how you read that into my comment.

I'll spell it out step by step and let's see where the confusion is:

Only one sentence out of many was quoted.

On LW, this usually indicates the replier wants to address a specific portion, for some reason or another.

If I wanted to address all your claims I probably would have quoted the whole comment or left it unquoted, following the usual practice on LW.

Your one sentence was:

People are capable of correctly interpreting the context of praise and taking away the right message.

My view is:

People are capable of correct interpretation some fraction of the time.

Some fraction of that fraction will result in them 'taking away the right message'.

These ratios are unknown but cumulatively will be well under 100% in any real life scenario I can think of.

Therefore, Viliam's point follows.

And so on.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-05-14T23:35:59.167Z · LW(p) · GW(p)

Back on my laptop, so I can quote conveniently. First, I went back and read the Tale of Alice Almost more carefully, and found I had misinterpreted it. So I will go back and edit my original comment that you were responding to.

Second, Viliam's point is that "ok but slightly worse than current group average" behavior has "potential to destroy your group" if you "keep adding people who are slightly below the average... thus lowering the average," lowering the bar indefinitely.

Viliam is referencing a mathematical truth that may or may not be empirically relevant in real group behavior. For an example of a situation where it would not apply, consider a university that effectively educates its students. Every year, it onboards a new group of freshmen who are below-university-average in terms of scholastic ability, and graduates its highest performers.

Of course, we know why the quality of students at the university doesn't necessarily degrade: the population of high schoolers it recruits from each year may have relatively static quality, and the university and students both try to improve the scholastic performance of the incoming class with each year so that the quality of the senior class remains static, or even improves, over time.

In my view, a combination of private constructive criticism and public praise works very well to motivate and inform students when they are learning. Furthermore, an environment that promotes learning and psychological wellbeing is attractive to most people, and I expect that it provides benefits in terms of selecting for high-performing recruits. I had mistakenly read sarahconstantin's post as advocating for public humiliation of slightly-below-average performers in order to ice them out or motivate people to work harder, which is not what she was calling for. This is why I wrote my original comment in response to Raemon.

You seem to be pointing out that if we praise people (in the context of my original comment, praise slightly-below-average performers for personal improvement), then some people will incorrectly interpret us as praising these slightly-below-average people as being "good enough."

I think there is a way to steelman your claim - perhaps if a sensei systematically praises the personal-best performance of a below-group-average student, then other students will interpret the sensei as having low standards, and start bringing less-committed and less-capable friends to intro sessions, resulting in a gradual degradation of the overall quality of the students in the dojo.

But I think this is an empirical claim, not a mathematical truth. I think that an environment where participants receive praise for personal-best performance results in accelerated improvement. At first, this merely counteracts any negative side effects with recruitment. Over time, it actually flips the dynamic. The high-praise environment attains higher average performance due to accelerated improvement, and this makes it more appealing to even higher-performing recruits both because high-praise is more appealing than low-praise and because they can work with higher-skill peers. Eventually, it becomes too costly to onboard more people, and so people have to compete to get in. This may allow the group to enforce higher standards for admission, so another beneficial selection force kicks in.

This model predicts that high-praise environments tend to have higher quality than low-praise environments, and that shifting to a high-praise style will result in improved group performance over time.

You seem to think that Villiam's point "follows" from the fact that not everybody will correctly understand that praising personal-best performance doesn't mean holding that person's work up as exemplary. I don't know how strongly you mean "follows," but I hope this essay will clarify the overall view I'm trying to get across here.

Replies from: Raemon
comment by Raemon · 2023-05-15T04:47:34.991Z · LW(p) · GW(p)

I'd been a bit confused at your earlier reaction to the post; this makes more sense to me.

comment by DirectedEvolution (AllAmericanBreakfast) · 2023-05-14T03:27:01.572Z · LW(p) · GW(p)

I’m not sure that “group average” is always the metric we want to improve. My intuition is that we want to think of most groups as markets, and supply and demand for various types of interaction with particular people varies from day to day. Adding more people to the market, even if they’re below average, can easily create surplus to the benefit of all and be desirable.

Obviously even in real markets it’s not always beneficial to have more entrants, I think mainly because of coordination costs as the market grows. So in my model, adding extra members to the group is typically good as long as they can pay for their own coordination costs in terms of the value they provide to the group.

comment by DirectedEvolution (AllAmericanBreakfast) · 2023-05-14T03:13:43.552Z · LW(p) · GW(p)

Yeah, I think this is an important explanation for why (in my preferred image), we’d find the faeries hiding under the leaves in the faerie forest.

To avoid behavior that’s costly to police, or shortcomings that are hard to identify, and also to attract virtues that are hard to define, we rely in part on private, reputation- and relationship-based networks.

These types of ambiguous bad behaviors are what I had in mind when I wrote “predatory,” but of course they are not necessarily so easy to define as such. They might just be uncomfortable, or sort of “icky sticky,” or even just tedious, and not only in a sexual way. The grandstanding blowhard or tireless complainer or self-righteous moralizer also fits the bill. Maybe “scrubs” is a better word?

comment by Raemon · 2023-05-12T22:53:38.501Z · LW(p) · GW(p)

I think the meetups example is fairly closely related to the alien example (i.e. people are often actively deciding not to advertise publicly, to make it hard for random people to find the private meetup, so they don't need to have an awkward conversation about rejecting them; from the inside, this just feels like "having friends that you invite to small private get-togethers"). That is,

people build friendship and relationship networks that become more appealing than even a good meetup

is an example of the phenomenon I'm talking about. You don't advertise your small friend group events because they're for your friends, not strangers. But that doesn't change the fact that from the perspective of the stranger, it's a dark-matter event they can probably only infer.

I don't think this is a universal trait among meetups (I think it's more common in Berkeley than other cities for rationalist meetups at least, for reasons outlined in The Relationship Between the Village and the Mission [LW · GW]). And it's not about defending the group from rival meetups, but from random individuals.

My impression is this is still true-in-some-fashion in other communities - like, I think in dance communities, there are often public dance classes and private parties, which tend to feature people who are more hardcore dancers. (I think the dance community is more explicitly oriented around classes such that the public events are well supported, but there's still a phenomenon of there being more exclusive things you only find out about if someone invites you.)

This becomes particularly noteworthy only if the public meetups in an area are noticeably low in quality (because the community is at a stage in its lifecycle when the organizers with the most passion/drive have gotten tired of running things for newcomers, and the people who step up to replace them don't have quite the same vision, or the thing becomes a bit stagnant because no one's putting in as much work).

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-05-13T01:20:49.753Z · LW(p) · GW(p)

That makes some sense to me. The most salient feature of the Dark Forest scenario, to me, is the one in which we're in a bad prisoner's dilemma with something like the following payoff matrix (sketched numerically after the list):

  • Cooperate/cooperate gets some finite positive utility for both players
  • Defect/cooperate means ruin for the cooperator with equal or greater utility for the defector
  • Defect/defect ruins one player with 50% probability
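
A minimal numerical sketch of why that matrix feels so grim (the payoff values u and R are my own illustrative assumptions, not anything from the thread):

```python
# Illustrative payoffs: u = gain from mutual cooperation, R = cost of being ruined.
u, R = 1.0, 100.0

def expected_payoff(me: str, opponent: str) -> float:
    """Expected utility for `me` given both moves, per the matrix above."""
    if me == "C" and opponent == "C":
        return u                      # cooperate/cooperate: finite positive utility
    if me == "C" and opponent == "D":
        return -R                     # the cooperator is ruined
    if me == "D" and opponent == "C":
        return u                      # the defector does at least as well as cooperating
    return 0.5 * -R                   # defect/defect: ruined with 50% probability

for opp in ("C", "D"):
    print(f"vs {opp}: cooperate={expected_payoff('C', opp)}, defect={expected_payoff('D', opp)}")
```

Against either opponent move, defecting does at least as well as cooperating, and strictly better if the opponent defects, so "shoot first" weakly dominates under these assumed numbers.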

Of course, real-world decisions to participate in private events don't resemble this payoff matrix, which is why, for me, the Dark Forest scenario feels somehow too dramatic, or like it's too paranoid an account of the feeling of anxiety or boredom that comes with trying to connect with people in a new social setting full of strangers.

Or maybe I'd take that further and say that newcomers at parties often seem to operate as if they were in a Dark Forest scenario ("careful not to embarrass yourself, people will laugh at you if you say something dumb, probably everybody's mean here, or else annoying and foolish which is why they're at this public gathering, your feelings of anxiety and alienation are warranted!"). And it's much better if you realize that's not in fact the case. People there are like you - wanting to connect, pretty interesting, kind of anxious, just waiting for someone else to make the first move. There are all kinds of perfectly normal reasons people are choosing to hang out at a public gathering place rather than with their close friends, and if public gatherings seem "terrible," it's usually because of the intrinsic awkwardness of strangers trying to break the ice. They'd almost all be interesting and fun if they got to know each other and felt more comfortable.

But I do see the connection with the desire to avoid unpleasant strangers and the need to infer the existence of all these private get-togethers and communities.

Replies from: Raemon
comment by Raemon · 2023-05-13T03:01:27.386Z · LW(p) · GW(p)

Yeah, I agree that the full narrative force of the metaphor is pretty extreme here. I first thought of it in this context sort of as a joke, but then found that the phrasing "dark forest" kinda makes sense in a "things are generally dark and you don't know who's out there" sort of way, without the galactic omnicidal premise. I also agree that your game theory summary is a reasonable formalism of the original situation, and yeah, that's not what I meant.

The aspect of the metaphor I found most helpful was just "you don't see X out there, and that is because the people who make X have an interest in you not seeing X, not because it's not happening."

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-05-13T06:53:57.335Z · LW(p) · GW(p)

I do think that visualizing the social world as a bright network of lively, private social connections with these relatively bland public outlets is a useful and probably neglected exercise. And the idea that a certain inaccessibility or privacy is key for their survival is important too. I visualize it more as a sort of faerie forest. To many, it seems like there's nothing there. In fact there's a whole faerie realm of private society, but you need to seek out or luck into access, and it's not always easy to navigate and connections don't always lead you where you expect.

comment by Shmi (shminux) · 2023-05-12T21:42:20.533Z · LW(p) · GW(p)

(I assume you meant your quote unspoilered? Since it is clearly visible.)

In general, this is a very good heuristic, I agree. If you think there is a low-hanging fruit everyone is passing on, it is good to check or inquire, usually privately and quietly, whether anyone else noticed it, before going for it. Sometimes saying out loud that the king has no clothes is equivalent to shouting in the Dark Forest. Once in a while, though, there is indeed low-hanging fruit. Telling the two situations apart is the tricky part.

Replies from: Raemon
comment by Raemon · 2023-05-12T21:52:43.957Z · LW(p) · GW(p)

Yeah I think in isolation the quote is not a spoiler.

Replies from: Raemon
comment by Raemon · 2023-05-12T22:07:50.329Z · LW(p) · GW(p)

To be clear I think it's generally actually fine to ask out loud "hey why isn't this happening?" (rather than quietly/privately). At least, this seems fine for all the examples in the OP.

The thing I'm cautioning against is forming the strong belief "this isn't happening, people must be crazy/blind to what needs-to-be-done" and then boldly charging ahead.

Replies from: shminux, MondSemmel
comment by Shmi (shminux) · 2023-05-12T23:34:36.997Z · LW(p) · GW(p)

I definitely agree with that, and there is a clear pattern of this happening on LW among the newbie AI Doomers.

comment by MondSemmel · 2023-05-15T15:56:34.787Z · LW(p) · GW(p)

Ah. I had been wondering what the actionable implications of this model were supposed to be. After all, it does not seem useful for truth-seeking to adopt a strategy of assuming that in every blank spot in your territory, there's actually an invisible dragon. With this comment, things make more sense to me.

That said, if shouting is fine, then the Dark Forest analogy seems misleading, and another metaphor (like DirectedEvolution's Faerie Forest) might be more apt.

comment by Review Bot · 2024-02-28T21:52:48.794Z · LW(p) · GW(p)

The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.

Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?

comment by Martín Soto (martinsq) · 2023-07-06T13:56:12.007Z · LW(p) · GW(p)

The following is a spoiler about the sci-fi story mentioned, so I've tagged it spoiler:

 I think you should have taken more care handling the spoiler, since you mention the presence of a spoiler only after the quote, and even though you don't mention the story's name explicitly it's obvious which story it is, even if you haven't read it and only know about its existence. So I'd encourage a re-ordering, or more important changes.

comment by Matthew_Opitz · 2023-05-12T20:41:24.228Z · LW(p) · GW(p)

This is a good post and puts into words the reasons for some vague worries I had about an idea of trying to start an "AI Risk Club" at my local college, which I talk about here [LW · GW].  Perhaps that method of public outreach on this issue would just end up generating more heat than light and would attract the wrong kind of attention at the current moment.  It still sounds too outlandishly sci-fi for most people.  It is probably better, for the time being, to just explore AI risk issues with any students who happen to be interested in it in private after class or via e-mail or Zoom.