Posts

Forecasting Newsletter: July 2021 2021-08-01T17:00:07.550Z
Forecasting Newsletter: June 2021 2021-07-01T21:35:26.537Z
Forecasting Newsletter: May 2021 2021-06-01T15:51:26.463Z
Forecasting Newsletter: April 2021 2021-05-01T16:07:22.689Z
Forecasting Newsletter: March 2021 2021-04-01T17:12:09.499Z
Introducing Metaforecast: A Forecast Aggregator and Search Tool 2021-03-07T19:03:35.920Z
Forecasting Newsletter: February 2021 2021-03-01T21:51:27.758Z
Forecasting Prize Results 2021-02-19T19:07:09.420Z
Forecasting Newsletter: January 2021 2021-02-01T23:07:39.131Z
2020: Forecasting in Review. 2021-01-10T16:06:32.082Z
Forecasting Newsletter: December 2020 2021-01-01T16:07:39.015Z
Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems) 2020-12-03T22:00:26.889Z
Forecasting Newsletter: November 2020 2020-12-01T17:00:58.898Z
Announcing the Forecasting Innovation Prize 2020-11-15T21:12:39.009Z
Incentive Problems With Current Forecasting Competitions. 2020-11-09T16:20:06.394Z
Forecasting Newsletter: October 2020. 2020-11-01T13:09:50.542Z
Adjusting probabilities for the passage of time, using Squiggle 2020-10-23T18:55:30.860Z
A prior for technological discontinuities 2020-10-13T16:51:32.572Z
NunoSempere's Shortform 2020-10-13T16:40:05.972Z
AI race considerations in a report by the U.S. House Committee on Armed Services 2020-10-04T12:11:36.129Z
Forecasting Newsletter: September 2020. 2020-10-01T11:00:54.354Z
Forecasting Newsletter: August 2020. 2020-09-01T11:38:45.564Z
Forecasting Newsletter: July 2020. 2020-08-01T17:08:15.401Z
Forecasting Newsletter. June 2020. 2020-07-01T09:46:04.555Z
Forecasting Newsletter: May 2020. 2020-05-31T12:35:58.063Z
Forecasting Newsletter: April 2020 2020-04-30T16:41:35.849Z
What are the relative speeds of AI capabilities and AI safety? 2020-04-24T18:21:58.528Z
Some examples of technology timelines 2020-03-27T18:13:19.834Z
[Part 1] Amplifying generalist research via forecasting – Models of impact and challenges 2019-12-19T15:50:33.412Z
[Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration 2019-12-19T15:49:45.901Z
What do you do when you find out you have inconsistent probabilities? 2018-12-31T18:13:51.455Z
The hunt of the Iuventa 2018-03-10T20:12:13.342Z

Comments

Comment by NunoSempere (Radamantis) on Uncertainty can Defuse Logical Explosions · 2021-07-31T22:02:53.048Z · LW · GW

Can you give the probabilities that the agent assigns to B1 through D4 in the "sandboxed" counterfactual?

Comment by NunoSempere (Radamantis) on Uncertainty can Defuse Logical Explosions · 2021-07-30T21:18:11.742Z · LW · GW

Should B2 be "$10 > $5 (probability 0.9999)"? If so, you find yourself in the situation where you have 0.99+ for two contradictory hypotheses, and it's not clear to me what the step "ignore the proportion of probability mass assigned to worlds where 1 and 2 are both true" actually looks like.

Comment by NunoSempere (Radamantis) on Working With Monsters · 2021-07-22T09:02:15.073Z · LW · GW

Nice meta-comment. But it doesn't really work; green was very well chosen so that any person with a modicum of brains and heart immediately detects it as both wrong and morally repugnant, to such an extent that I found it broke my suspension of disbelief that half of the future society would believe in green.

Comment by NunoSempere (Radamantis) on Chess and cheap ways to check day to day variance in cognition · 2021-07-07T11:14:06.074Z · LW · GW

I've also observed something similar, at the decent-but-not-great club player level.

Comment by NunoSempere (Radamantis) on Chess and cheap ways to check day to day variance in cognition · 2021-07-07T11:13:16.319Z · LW · GW

This is easier to do by playing twenty 1-minute games.

Comment by NunoSempere (Radamantis) on A (somewhat beta) site for embedding betting odds in your writing · 2021-07-05T11:06:24.123Z · LW · GW

Also, if you do make bets public by default (or, even better, make it the default option to give both an over/under bet), I'd love to scrape the website and add the implied probabilities to metaforecast.org.

Comment by NunoSempere (Radamantis) on Forecasting Newsletter: June 2021 · 2021-07-03T01:44:33.409Z · LW · GW

Thanks, added your suggestions.

***

Sure, might be ok early on. But you could require the question maker to provide a probability (and, at least, I always predict on the questions I create), or reward forecasting early directly.

***

Did they make testable, or empirical, claims?

Actually yes, I have a list of 10 predictions here which I extracted from his blog, but I've been procrastinating on evaluating them.

***

Cheers.

Comment by NunoSempere (Radamantis) on A (somewhat beta) site for embedding betting odds in your writing · 2021-07-03T01:31:59.373Z · LW · GW

would you want to browse all predictions, even ones by people you've never heard of?

Yes, all predictions. 

how do you know the randos you're betting against won't just run off with your money when you lose, and refuse to pay up when you win? Maybe you just trust the-sort-of-person-who-uses-this-site to be honorable?

I'd probably by default trust anyone with a LW karma of > [some threshold], or someone with a Twitter account who is willing to confirm their identity, or in general someone who has written something I find insightful. If I'm feeling particularly paranoid, I might contact them outside your platform before making a bet, but I imagine that in most cases beyond the first few, I probably wouldn't bother. I'd also expect to find out rather rapidly if people don't pay out. Also, from past experience using similar setups (handshake bets on the Polymarket Discord), people do care about the reputation of their anonymous aliases.

Comment by NunoSempere (Radamantis) on A (somewhat beta) site for embedding betting odds in your writing · 2021-07-02T11:52:38.412Z · LW · GW

Neat idea. I would like other people's predictions to be public by default, so that I can browse them and bet against the ones that I think are wrong (margin-call them, as it were). Sadly this isn't possible with the current setup, because bet URLs are randomized.

Comment by NunoSempere (Radamantis) on Forecasting Newsletter: May 2021 · 2021-06-03T15:18:09.293Z · LW · GW

No, this hasn't been solved. But I imagine that mixing logical quantifiers and probability statements would be less messy if one, e.g., knows the causal graph of the events to which the statements refer. This is something that the original post didn't mention, but which I thought was interesting.

Comment by NunoSempere (Radamantis) on Is Ray Kurzweil's prediction accuracy still being tracked? · 2021-05-31T16:01:47.082Z · LW · GW

See Assessing Kurzweil predictions about 2019: the results.

Comment by NunoSempere (Radamantis) on Low-stakes alignment · 2021-04-30T08:44:03.095Z · LW · GW

To the extent that SGD can’t find the optimum, it hurts the performance of both the aligned model and the unaligned model. In some sense what we really want is a regret bound compared to the “best learnable model,” where the argument for a regret bound is heuristic but seems valid.

I'm not sure this goes through. In particular, it could be that the architecture which you would otherwise deploy (presumably human brains, or some other kind of automated system) would do better than the "best learnable model" for some other (architecture + training data + etc.) combination. Perhaps in some sense what you want is a regret bound relative to the aligned model which you would use if you don't deploy your AI model, not a regret bound between your AI model and the best learnable model in its (architecture + training data + etc.) space.

That said, I'm really not familiar with regret bounds, and it could be that this is a non-concern.

Comment by NunoSempere (Radamantis) on Does an app/group for personal forecasting exist? · 2021-04-28T13:56:36.708Z · LW · GW

See also: What are the best tools for recording predictions? 

Comment by NunoSempere (Radamantis) on Scott Alexander 2021 Predictions: Buy/Sell/Hold · 2021-04-27T21:33:47.871Z · LW · GW

I've added these predictions to foretold, in case people want to forecast on them in one place: https://www.foretold.io/c/6eebf79b-4b6f-487b-a6a5-748d82524637

Comment by NunoSempere (Radamantis) on What will GPT-4 be incapable of? · 2021-04-06T22:08:00.564Z · LW · GW

On this same note, matrix multiplication or inversion.

Comment by NunoSempere (Radamantis) on Learning Russian Roulette · 2021-04-02T22:26:23.191Z · LW · GW

I also have the sense that this problem is interesting.

Comment by NunoSempere (Radamantis) on Learning Russian Roulette · 2021-04-02T22:13:57.611Z · LW · GW

I disagree; this might have real world implications. For example, the recent OpenPhil report on Semi-informative Priors for AI timelines updates on the passage of time, but if we model creating AGI as playing Russian roulette*, perhaps one shouldn't update on the passage of time. 

* I.e., AGI in the 2000s might have led to an existential catastrophe due to underdeveloped safety theory.

Comment by NunoSempere (Radamantis) on Learning Russian Roulette · 2021-04-02T22:05:56.386Z · LW · GW

You would never play the first few times

This isn't really a problem if the rewards start out high and gradually diminish. 

I.e., suppose that you value your life at $L (i.e., you're willing to die if the heirs of your choice get $L), and you assign a probability of 10^-15 to H1 = "I am immune to losing at Russian roulette", something like 10^-4 to H2 = "I intuitively twist the gun each time to avoid the bullet", and a probability of something like 10^-3 to H3 = "they gave me an empty gun this time". Then you are offered the chance to play rounds of Russian roulette at $L/round, enough of them to update to arbitrary levels.

Now, if you play enough times, H3 becomes the dominant hypothesis with, say, 90% probability, so you'd accept a payout of, say, $L/2. Similarly, if you know that H3 isn't the case, you'd still assign very high probability to something like H2 after enough rounds, so you'd still accept a bounty of $L/2.

Now, suppose that all the alternative hypotheses H2, H3, ... are false, and your only other alternative hypothesis is H1 (magical intervention). Now the original dilemma has been restored. What should one do?
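
Here is a minimal sketch of the update I have in mind. It assumes a standard six-chamber revolver with one bullet (hence the 5/6 per-round survival chance under the default "fair roulette" hypothesis); the priors are just the rough numbers from above:

let hypotheses = [
  { name: "H0: fair roulette",       prior: 1 - (1e-15 + 1e-4 + 1e-3), pSurvive: 5 / 6 },
  { name: "H1: immune to losing",    prior: 1e-15,                     pSurvive: 1 },
  { name: "H2: intuitive gun twist", prior: 1e-4,                      pSurvive: 1 },
  { name: "H3: empty gun",           prior: 1e-3,                      pSurvive: 1 },
]

// posterior over the hypotheses after surviving n rounds
function posteriorsAfterSurviving(n) {
  let weights = hypotheses.map(h => h.prior * Math.pow(h.pSurvive, n))
  let total = weights.reduce((a, b) => a + b, 0)
  return hypotheses.map((h, i) => ({ name: h.name, posterior: weights[i] / total }))
}

console.log(posteriorsAfterSurviving(100))
// H3 ("empty gun") ends up at roughly 90%; H1 only becomes relevant once H2 and
// H3 are somehow ruled out, which is the case considered just above.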

Comment by NunoSempere (Radamantis) on Forecasting Newsletter: March 2021 · 2021-04-02T08:24:19.482Z · LW · GW

No, the first part is a typo, thanks.

I'm not sure I understand what "this" refers to in that sentence.

Comment by NunoSempere (Radamantis) on Improvement for pundit prediction comparisons · 2021-03-28T20:11:29.186Z · LW · GW

This feels solvable with a sufficiently large monetary prize.

Comment by NunoSempere (Radamantis) on What are some beautiful, rationalist artworks? · 2021-03-27T12:15:25.034Z · LW · GW

Dr. Manhattan has just been convinced by Veidt that he has been causing cancer in the people he cares about. He also finds himself caring less and less about the world (though he had previously accelerated technological progress by e.g., creating lithium batteries for electric cars), and leaves the Earth for Mars to build cool clocks and find some peace and quiet. 

Comment by NunoSempere (Radamantis) on [AN #143]: How to make embedded agents that reason probabilistically about their environments · 2021-03-24T18:01:41.960Z · LW · GW

This post seems pretty broken on Firefox, and doesn't look too great on Chrome either.

Comment by NunoSempere (Radamantis) on Introducing Metaforecast: A Forecast Aggregator and Search Tool · 2021-03-09T19:50:51.626Z · LW · GW

Mmh. OTOH, they lose the ability to incorporate new information. Do you have a sense of which factor dominates?

Comment by NunoSempere (Radamantis) on Introducing Metaforecast: A Forecast Aggregator and Search Tool · 2021-03-09T10:04:59.856Z · LW · GW

This is changed now. As a bonus, it also resolves a previous bug.

Comment by NunoSempere (Radamantis) on Introducing Metaforecast: A Forecast Aggregator and Search Tool · 2021-03-08T16:04:44.266Z · LW · GW

Thanks! It should just include open questions, not those which have closed and are yet to resolve. But this is easy to change.

Comment by NunoSempere (Radamantis) on Survey on cortical uniformity - an expert amplification exercise · 2021-02-24T14:41:59.381Z · LW · GW

EDIT: rephrased the estimations so they match the probability one would enter in the Elicit questions 

Oof, that means I get to change my predictions. 

Comment by NunoSempere (Radamantis) on Survey on cortical uniformity - an expert amplification exercise · 2021-02-23T22:39:11.863Z · LW · GW

I made three quick predictions, though I'm not very sure of them. Someone should do the Bayesian calculation with a reasonable prior to determine how likely it is that more than half of experts would answer some way, given the answers of the 6 experts who did answer.
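
For what it's worth, here is a sketch of the kind of calculation I mean; the function, the uniform prior, and the grid approximation are my own arbitrary choices:

// probability that more than half of all experts would agree, given that
// k out of n surveyed experts agreed, under a uniform prior on the
// underlying fraction p (simple grid approximation of the posterior)
function probMajorityAgrees(k, n = 6, gridSize = 10000) {
  let numerator = 0
  let denominator = 0
  for (let i = 0; i <= gridSize; i++) {
    let p = i / gridSize
    let likelihood = Math.pow(p, k) * Math.pow(1 - p, n - k) // binomial kernel
    denominator += likelihood
    if (p > 0.5) numerator += likelihood
  }
  return numerator / denominator
}

console.log(probMajorityAgrees(5)) // 5 of 6 experts agree -> ~0.94
console.log(probMajorityAgrees(3)) // an even 3-3 split -> ~0.5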

For some of these questions, I'd expect experts to care more about the specific details than I would. E.g., maybe for “The entire cortical network could be modeled as the repetition of a few relatively simple neural structures, arranged in a similar pattern even across different cortical areas” someone who spends a lot of time researching the minutiae of cortical regions is more likely to consider the sentence false.

Comment by NunoSempere (Radamantis) on Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems) · 2021-02-19T20:56:59.933Z · LW · GW

Comment by NunoSempere (Radamantis) on Mathematical Models of Progress? · 2021-02-16T14:39:52.475Z · LW · GW

See Artificial Intelligence and Economic Growth, by Chad Jones et al., for a particular model, and Economic growth under transformative AI for a comprehensive review.

Comment by NunoSempere (Radamantis) on Incentive Problems With Current Forecasting Competitions. · 2021-02-16T11:37:56.595Z · LW · GW

Cheers, thanks! These are great.

Comment by NunoSempere (Radamantis) on Forecasting Newsletter: January 2021 · 2021-02-08T10:31:08.519Z · LW · GW

Thanks!

Comment by NunoSempere (Radamantis) on Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems) · 2021-01-28T11:19:37.771Z · LW · GW

More examples in this paper: From self-prediction to self-defeat: behavioral forecasting, self-fulfilling prophecies, and the effect of competitive expectations

Comment by NunoSempere (Radamantis) on Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems) · 2021-01-11T11:00:02.421Z · LW · GW

Two other examples:

  • Youtube's recommender system changes the habits of Youtube video producers (e.g., using keywords at the beginning of the titles, and at the beginning of the video now that Youtube can parse speech)
  • Andrew Yang apparently received death threats over a prediction market on the number of tweets.

Comment by NunoSempere (Radamantis) on GraphQL tutorial for LessWrong and Effective Altruism Forum · 2021-01-06T11:14:47.555Z · LW · GW

I've come back to this occasionally, thanks. Here are two more snippets:

To get one post 

{
  post(input: {
    selector: {
      _id: "Here goes the id"
    }
  }) {
    result {
      _id
      title
      slug
      pageUrl
      postedAt
      baseScore
      voteCount
      commentCount
      meta
      question
      url
      user {
        username
        slug
        karma
        maxPostCount
        commentCount
      }
    }
  }
}

or, as a JavaScript/node function:

let graphQLendpoint = 'https://forum.effectivealtruism.org/graphql' // or https://www.lesswrong.com/graphql. Note that this is not the same as the GraphiQL visual interface talked about in the post. 

async function fetchPost(id){
  // note the async; this assumes a global fetch (browsers, or Node 18+).
  // On older Node, you can get one from the "node-fetch" package.
  let response  = await fetch(graphQLendpoint, ({
    method: 'POST',
    headers: ({ 'Content-Type': 'application/json' }),
    body: JSON.stringify(({ query: `
       {
        post(
            input: {  
            selector: {
                _id: "${id}"
            }      
            }) 
        {
            result {
            _id
            title
            slug
            pageUrl
            postedAt
            baseScore
            voteCount
            commentCount
            meta
            question
            url
            user {
                username
                slug
                karma
                maxPostCount
                commentCount
            }
            }
        }
}`
})),
  }))
  .then(res => res.json())
  .then(res => res.data.post? res.data.post.result : undefined)  
  return response
}
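
A quick usage example; the id here is just the placeholder from above, to be replaced by a real post _id:

fetchPost("Here goes the id").then(post => {
  if (post) console.log(post.title, post.pageUrl)
})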

To get a user

{
  user(input: {
    selector: {
      slug: "heregoestheslug"
    }
  }){
    result{
      username
      pageUrl
      karma
      maxPostCount
      commentCount
    }
  }
  
}

Or, as a JavaScript function

let graphQLendpoint = 'https://forum.effectivealtruism.org/graphql' // or https://www.lesswrong.com/graphql. Note that this is not the same as the GraphiQL visual interface talked about in the post. 

async function fetchAuthor(slug){
  // note the async; this assumes a global fetch (browsers, or Node 18+).
  // On older Node, you can get one from the "node-fetch" package.
  let response  = await fetch(graphQLendpoint, ({
    method: 'POST',
    headers: ({ 'Content-Type': 'application/json' }),
    body: JSON.stringify(({ query: `
       {
  user(input: {
    selector: {
      slug: "${slug}"
    }
  }){
    result{
      username
      pageUrl
      karma
      maxPostCount
      commentCount
    }
  }
  
}`
})),
  }))
  .then(res => res.json())
  .then(res => res.data.user? res.data.user.result : undefined)  
  return response
}
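
Usage would look analogous; the slug here is just the placeholder from above:

fetchAuthor("heregoestheslug").then(user => {
  if (user) console.log(user.username, user.karma)
})
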
Comment by NunoSempere (Radamantis) on Anti-Aging: State of the Art · 2021-01-03T19:30:42.117Z · LW · GW

Thoughtful answer, thanks

Comment by NunoSempere (Radamantis) on Anti-Aging: State of the Art · 2021-01-03T13:27:52.385Z · LW · GW

The evidence is promising that in the next 5-10 years, we will start seeing robust evidence that aging can be therapeutically slowed or reversed in humans

Are you willing to bet on this? If so, how much?

Comment by NunoSempere (Radamantis) on Interactive exploration of LessWrong and other large collections of documents · 2020-12-31T09:05:04.895Z · LW · GW

Yes, I'd be interested, many thanks!

Comment by NunoSempere (Radamantis) on Range and Forecasting Accuracy · 2020-12-26T18:06:43.522Z · LW · GW

Cool. Once you rewrite that, and if you do so before the end of the year, I'd encourage you to resubmit it to this contest.

In particular, the reason I'm excited about this kind of work is that it allows us to have at least some information about how accurate long-term predictions can be. Some previous work on this has been done, e.g., rating Kurzweil's predictions from the 90s, but overall we have very little information about this kind of thing. And yet we are interested in seeing how good we can be at making predictions n years out, and potentially making decisions based on that. 

Comment by NunoSempere (Radamantis) on Interactive exploration of LessWrong and other large collections of documents · 2020-12-26T16:12:37.136Z · LW · GW

So here is something I'm interested in: I have a list of cause area candidates proposed in the EA Forum (available here) as a Google Sheet. Could I use a set-up similar to yours to find similar posts?

Also, you should definitely post this to the EA Forum as well. 

Comment by NunoSempere (Radamantis) on Probability theory implies Occam's razor · 2020-12-26T15:57:45.281Z · LW · GW

Waveman says:

I am not sure you actually justified your claim, that OR follows from the laws of probability with no empirical input. 

I wanted to say the same thing. 

The OP uses the example of age, but I like the example of shade of eye color better. If h is height and s is shade of eye color, then 

weight = alpha * h + beta * s

If beta is anything other than 0, your estimate will, in expectation, be worse. This feels correct, and it seems like it should be demonstrable, but I haven't really tried. 
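
Here is a quick simulation of what I mean; alpha, the noise scale, and the variable ranges are arbitrary choices of mine, and s is drawn independently of weight:

// if shade of eye color s is unrelated to weight, any nonzero beta makes
// the expected squared error of the estimate worse
function meanSquaredError(beta, samples = 100000) {
  let alpha = 0.5
  let total = 0
  for (let i = 0; i < samples; i++) {
    let h = 150 + 30 * Math.random()   // height, in cm
    let s = Math.random()              // shade of eye color, irrelevant to weight
    let noise = 10 * (Math.random() - 0.5)
    let weight = alpha * h + noise     // the true model ignores s
    let estimate = alpha * h + beta * s
    total += (weight - estimate) ** 2
  }
  return total / samples
}

console.log(meanSquaredError(0)) // just the noise variance, ~8.3
console.log(meanSquaredError(5)) // strictly worse in expectation, ~16.7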

Comment by NunoSempere (Radamantis) on Recommendation for a good international event betting site like predictit.org · 2020-12-26T15:47:36.356Z · LW · GW

I'd point you towards Polymarket (polymarket.com). It trades in USDC (a cryptocurrency pegged to the US dollar), which you can acquire at various exchanges, like Coinbase or the crypto.com app. 

Comment by NunoSempere (Radamantis) on Luna Lovegood and the Chamber of Secrets - Part 8 · 2020-12-22T22:27:30.792Z · LW · GW

How did Luna come to represent Ravenclaw at the dueling tournament? Did she sleepwalk even during Lockhart's class, and somehow win the spot by casting spells while sleepwalking?

She was sleepwalking, and thus was able to shrug off a Somnium from her opponent, and win. Possibly repeatedly.

Comment by NunoSempere (Radamantis) on What are the best precedents for industries failing to invest in valuable AI research? · 2020-12-15T21:28:16.848Z · LW · GW

I have some data on this off the top of my head, from having read the history of 50 mostly random technologies (database.csv in the post):

  • People not believing that heavier-than-air flight was a thing, and Zeppelins eventually becoming obsolete
  • Various camera film producing firms, notably Kodak, failing to realize that digital was going to be a thing
  • (Nazi Germany not realizing that the nuclear bomb was going to be a thing)
  • London not investing in better sanitation until the Great Stink; this applies to pretty much every major city.
  • People not investing in condoms for various reasons
  • People not coming up with the bicycle as an idea
  • Navies repeatedly not taking the idea of submarines seriously
  • Philip LeBon failing to raise interest in his "thermolamp"

So that's 8/50 off the top of my head (9/50 including Blockbuster, mentioned by another commenter).

I also have some examples of technology timelines here and some technology anecdotes from my sample of 50 technologies here, which might serve as inspiration. 

Comment by NunoSempere (Radamantis) on The Parable of Predict-O-Matic · 2020-12-13T17:34:51.004Z · LW · GW

The short story presents some intuitions which would be harder to get from a more theoretical standpoint. And these intuitions then catalyzed further discussion, like The Dualist Predict-O-Matic ($100 prize), or my own Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems).

Personally, I think a downside of the post is that "Predict-O-Matic problems" isn't that great a category. I prefer "inner and outer alignment problems for predictive systems," which is neater. On the other hand, if I mention the Parable of the Predict-O-Matic, people can quickly understand what I'm talking about. 

But the post provides a useful starting point. In particular, to me it suggests looking at prediction systems as a toy model for the alignment problem, which is something I've personally had fun looking into, and which strikes me as promising.  

Lastly, I feel that the title is missing a "the."

Comment by NunoSempere (Radamantis) on Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems) · 2020-12-12T18:45:55.388Z · LW · GW

Thanks. I keep missing this one, because Good Judgment Open, the platform used to select forecasters, rewards both Brier score and relative Brier score.

Comment by NunoSempere (Radamantis) on Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems) · 2020-12-12T18:44:47.170Z · LW · GW

You can see this effect with election predictions: there are plenty of smallish predictors which predicted the result of the current election closely (though it's easy to speculate that this is just a selection effect).
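
As a toy illustration of the selection effect, consider 1,000 predictors guessing a vote share completely at random; the best few still end up looking impressively accurate in hindsight. All the numbers here are made up:

let trueVoteShare = 0.51
let errors = Array.from({ length: 1000 }, () =>
  Math.abs(trueVoteShare - (0.40 + 0.20 * Math.random())) // random guesses in [0.40, 0.60]
)
errors.sort((a, b) => a - b)
// the "best" predictors are within a few hundredths of a percentage point,
// purely by luck
console.log(errors.slice(0, 5))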

Comment by NunoSempere (Radamantis) on Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems) · 2020-12-12T18:43:18.398Z · LW · GW

Thanks to both; this is a great example. I might add it to the main text.

Comment by NunoSempere (Radamantis) on Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems) · 2020-12-12T18:42:42.578Z · LW · GW

Another example, from @albrgr:

"This is kind of crazy: https://nber.org/digest-202012/corporate-reporting-era-artificial-intelligence Companies have learned to use (or exclude) certain words to make their corporate filings be interpreted more positively by financial ML algorithms."

Then quoting from the article:

The researchers find that companies expecting higher levels of machine readership prepare their disclosures in ways that are more readable by this audience. "Machine readability" is measured in terms of how easily the information can be processed and parsed, with a one standard deviation increase in expected machine downloads corresponding to a 0.24 standard deviation increase in machine readability. For example, a table in a disclosure document might receive a low readability score because its formatting makes it difficult for a machine to recognize it as a table. A table in a disclosure document would receive a high readability score if it made effective use of tagging so that a machine could easily identify and analyze the content.

Companies also go beyond machine readability and manage the sentiment and tone of their disclosures to induce algorithmic readers to draw favorable conclusions about the content. For example, companies avoid words that are listed as negative in the directions given to algorithms. The researchers show this by contrasting the occurrence of positive and negative words from the Harvard Psychosocial Dictionary — which has long been used by human readers — with those from an alternative, finance-specific dictionary that was published in 2011 and is now used extensively to train machine readers. After 2011, companies expecting high machine readership significantly reduced their use of words labelled as negatives in the finance-specific dictionary, relative to words that might be close synonyms in the Harvard dictionary but were not included in the finance publication. A one standard deviation increase in the share of machine downloads for a company is associated with a 0.1 percentage point drop in negative-sentiment words based on the finance-specific dictionary, as a percentage of total word count.

Comment by NunoSempere (Radamantis) on Parable of the Dammed · 2020-12-11T09:47:26.511Z · LW · GW

So, for some realism which the original story didn't call for (it's a "parable trying to make a point", not a "detailed historical account of territorial feuds in 15th century Albania"), we can look at how this works out in practice. To do this, we look to The Kanun of Lekë Dukagjini, which describes the sort of laws used to deal with this kind of thing in 15th century Albania. My details might be iffy here, but I did read the book and remember some parts.

In practice, there are several points of intervention, if I'm remembering correctly:

  • After the first murder, the extended family of the murdered goes after the murderer, to the extent that he can't safely go out of his home. If he is killed, the feud ends on the part of the murdered's family.
  • At any point, one of the families can ask a more powerful figure to mediate; in some regions this can be a cleric. The resolution might involve substantial amounts of money to be paid, which, crucially, are set beforehand by law, in excruciating detail depending on the conditions.
  • The land wouldn't in fact be the most valuable resource here; it would be the working power of adult men, who can't get out because they would be killed in revenge. This cripples both families economically, so they do have an incentive to cooperate.

So, in practice

a clever couple from one of the families hatched an idea

I get the impression that this ends with the clever couple getting killed in the middle of the night by one of the more violent and impulsive cousins of the second family, and maybe the second family paying some reparations if they're caught. Probably less than, you know, if they'd killed a normal couple. That, or the dam gets destroyed. Or actually, the husband from the clever couple would have to ask the Patriarch of the family for permission, who would veto the idea because he wants to make the truce work, and is hesitant to lose more of his sons to a new feud. Also, with or without the discount factor rural people in Albania have, doing this kind of thing wouldn't be worth it. Or actually, the clever couple learnt in childhood that this kind of thing wasn't worth it, and got some lashes in the process. 

Violence escalates, and the feud breaks out anew - but peace is even harder to come by, now, since the river has been permanently destroyed as a Schelling point.

The Schelling point wasn't the river; the Schelling point was someone more powerful than you telling you not to start trouble. This is harder to game. Also, you don't have "the government", you have "the more powerful village cacique" or the priest, which works because you don't want to go to hell when you die. 

You do see a thing in rural Spain with territory boundaries being marked by stones, and those stones being moved, which kind of works if one side doesn't spend time in the land.

Comment by NunoSempere (Radamantis) on Forecasting Newsletter: November 2020 · 2020-12-10T15:31:33.634Z · LW · GW

Makes sense