Posts

Why you should minimax in two-player zero-sum games 2020-05-17T20:48:03.770Z · score: 17 (6 votes)
Book report: Theory of Games and Economic Behavior (von Neumann & Morgenstern) 2020-05-11T09:47:00.773Z · score: 37 (13 votes)
Conflict vs. mistake in non-zero-sum games 2020-04-05T22:22:41.374Z · score: 143 (62 votes)
Beliefs at different timescales 2018-11-04T20:10:59.223Z · score: 34 (10 votes)
Counterfactuals and reflective oracles 2018-09-05T08:54:06.303Z · score: 8 (4 votes)
Counterfactuals, thick and thin 2018-07-31T15:43:59.187Z · score: 29 (12 votes)
An environment for studying counterfactuals 2018-07-11T00:14:49.756Z · score: 15 (5 votes)
Logical counterfactuals and differential privacy 2018-02-04T00:17:43.000Z · score: 1 (1 votes)
Oracle machines for automated philosophy 2015-02-17T15:10:04.000Z · score: 1 (1 votes)
Meetup : Berkeley: Beta-testing at CFAR 2014-03-19T05:32:26.521Z · score: 2 (3 votes)
Meetup : Berkeley: Implementation Intentions 2014-02-27T07:06:29.784Z · score: 1 (2 votes)
Meetup : Berkeley: Ask vs. Guess (vs. Tell) Culture 2014-02-19T20:16:30.017Z · score: 0 (1 votes)
Meetup : Berkeley: The Twelve Virtues 2014-02-12T19:56:53.045Z · score: 0 (1 votes)
Meetup : Berkeley: Talk on communication 2014-01-24T03:57:50.244Z · score: 1 (2 votes)
Meetup : Berkeley: Weekly goals 2014-01-22T18:16:38.107Z · score: 1 (2 votes)
Meetup : Berkeley meetup: 5-minute exercises 2014-01-15T21:02:26.223Z · score: 1 (2 votes)
Meetup : Meetup at CFAR, Wednesday: Nutritionally complete bread 2014-01-07T10:25:33.016Z · score: 1 (2 votes)
Meetup : Berkeley: Hypothetical Apostasy 2013-06-12T17:53:40.651Z · score: 3 (4 votes)
Meetup : Berkeley: Board games 2013-06-04T16:21:17.574Z · score: 2 (3 votes)
Meetup : Berkeley: The Motivation Hacker by Nick Winter 2013-05-28T06:02:07.554Z · score: 1 (2 votes)
Meetup : Berkeley: To-do lists and other systems 2013-05-22T01:09:51.917Z · score: 3 (4 votes)
Meetup : Berkeley: Munchkinism 2013-05-14T04:25:21.643Z · score: 2 (3 votes)
Meetup : Berkeley: Information theory and the art of conversation 2013-05-05T22:35:00.823Z · score: 1 (2 votes)
Meetup : Berkeley: Dungeons & Discourse 2013-03-03T06:13:05.399Z · score: 3 (4 votes)
Meetup : Berkeley: Board games 2013-01-29T03:09:23.841Z · score: 3 (4 votes)
Meetup : Berkeley: CFAR focus group 2013-01-23T02:06:35.830Z · score: 3 (4 votes)
A fungibility theorem 2013-01-12T09:27:25.637Z · score: 21 (26 votes)
Proof of fungibility theorem 2013-01-12T09:26:09.484Z · score: 3 (8 votes)
Meetup : Berkeley meetup: Board games! 2013-01-08T20:40:42.392Z · score: 1 (2 votes)
Meetup : Berkeley: How Robot Cars Are Near 2012-12-17T19:46:33.980Z · score: 1 (2 votes)
Meetup : Berkeley: Boardgames 2012-12-05T18:28:09.814Z · score: 1 (2 votes)
Meetup : Berkeley meetup: Hermeneutics! 2012-11-26T05:40:29.186Z · score: 3 (4 votes)
Meetup : Berkeley meetup: Deliberate performance 2012-11-13T23:58:50.742Z · score: 1 (2 votes)
Meetup : Berkeley meetup: Success stories 2012-10-23T22:10:43.964Z · score: 0 (1 votes)
Meetup : Different location for Berkeley meetup 2012-10-17T17:19:56.746Z · score: 1 (2 votes)
[Link] "Fewer than X% of Americans know Y" 2012-10-10T16:59:38.114Z · score: 36 (38 votes)
Meetup : Different location: Berkeley meetup 2012-10-03T08:26:09.910Z · score: 1 (2 votes)
Meetup : Pre-Singularity Summit Overcoming Bias / Less Wrong Meetup Party 2012-09-24T14:46:05.475Z · score: 5 (6 votes)
Meetup : Vienna meetup 2012-09-22T13:14:23.668Z · score: 6 (7 votes)
Meetup report: How harmful is cannabis, and will you change your habits? 2012-09-09T04:50:10.943Z · score: 11 (12 votes)
Meetup : Berkeley meetup: Cannabis, Decision-Making, And A Chance To Change Your Mind 2012-08-29T03:50:23.867Z · score: 4 (5 votes)
Meetup : Berkeley meetup: Operant conditioning game 2012-08-21T15:07:36.431Z · score: 3 (4 votes)
Meetup : Berkeley meetup: Discussion about startups 2012-08-14T17:09:10.149Z · score: 1 (2 votes)
Meetup : Berkeley meetup: Board game night 2012-08-01T06:40:27.322Z · score: 1 (2 votes)
Meetup : Berkeley meetup: Rationalist group therapy 2012-07-25T05:50:53.138Z · score: 4 (5 votes)
Meetup : Berkeley meetup: Argument mapping software 2012-07-18T19:50:27.973Z · score: 3 (4 votes)
Meetup : Berkeley meta-meetup 2012-07-06T08:02:11.372Z · score: 1 (2 votes)
Meetup : Berkeley meetup 2012-06-24T04:36:23.833Z · score: 0 (1 votes)
Meetup : Small Berkeley meetup at Zendo 2012-06-20T08:49:46.065Z · score: 0 (1 votes)
Meetup : Big Berkeley meetup 2012-06-13T01:36:01.863Z · score: 0 (1 votes)

Comments

Comment by nisan on Singularity Mindset · 2020-09-04T19:51:12.298Z · score: 5 (3 votes) · LW · GW

2 years later, do you have an answer to this?

Comment by nisan on Risk is not empirically correlated with return · 2020-08-24T07:42:45.810Z · score: 4 (2 votes) · LW · GW

Hm, I think all I meant was:

"If you have two assets with the same per-share price, and asset A's value per share has a higher variance than asset B's value per share, then asset A's per-share value must have a higher expectation than asset B's per-share value."

I guess I was using "cost" to mean "price" and "return" to mean "discounted value or earnings or profit".

Comment by nisan on Maybe Lying Can't Exist?! · 2020-08-24T04:55:53.880Z · score: 3 (2 votes) · LW · GW

(I haven't read any of the literature on deception you cite, so this is my uninformed opinion.)

I don't think there's any propositional content at all in these sender-receiver games. As far as the predator is concerned, the signal means "I want to eat you" and the prey wants to be eaten.

If the environment were somewhat richer, the agents would model each other as agents, and they'd have a shared understanding of the meaning of the signals, and then I'd think we'd have a better shot of understanding deception.

Comment by nisan on What Would I Do? Self-prediction in Simple Algorithms · 2020-07-20T14:28:18.350Z · score: 2 (1 votes) · LW · GW

Ah, are you excited about Algorithm 6 because the recurrence relation feels iterative rather than topological?

Comment by nisan on Self-sacrifice is a scarce resource · 2020-07-20T02:15:08.449Z · score: 12 (8 votes) · LW · GW

Like, if you’re in a crashing airplane with Eliezer Yudkowsky and Scott Alexander (or substitute your morally important figures of choice) and there are only two parachutes, then sure, there’s probably a good argument to be made for letting them have the parachutes.

This reminds me of something that happened when I joined the Bay Area rationalist community. A number of us were hanging out and decided to pile in a car to go somewhere, I don't remember where. Unfortunately there were more people than seatbelts. The group decided that one of us, who was widely recognized as an Important High-Impact Person, would definitely get a seatbelt; I ended up without a seatbelt.

I now regret going on that car ride. Not because of the danger; it was a short drive and traffic was light. But the self-signaling was unhealthy. I should have stayed behind, to demonstrate to myself that my safety is important. I needed to tell myself "the world will lose something precious if I die, and I have a duty to protect myself, just as these people are protecting the Important High-Impact Person".

Everyone involved in this story has grown a lot since then (me included!) and I don't have any hard feelings. I bring it up because offhand comments or jokes about sacrificing one's life for an Important High-Impact Person sound a bit off to me; they possibly reveal an unhealthy attitude towards self-sacrifice.

(If someone actually does find themselves in a situation where they must give their life to save another, I won't judge their choice.)

Comment by nisan on Classifying games like the Prisoner's Dilemma · 2020-07-13T04:20:44.205Z · score: 11 (3 votes) · LW · GW

Von Neumann and Morgenstern also classify the two-player games, but they get only two games, up to equivalence. The reason is that they assume the players get to negotiate beforehand. The only properties that matter for this are:

  • The maximin value $v_i$, which represents each player $i$'s best alternative to negotiated agreement (BATNA).

  • The maximum total utility $v = \max(u_1 + u_2)$.

There are two cases:

  1. The inessential case, $v_1 + v_2 = v$. This includes the Abundant Commons. No player has any incentive to negotiate, because the BATNA is Pareto-optimal.

  2. The essential case, $v_1 + v_2 < v$. This includes all other games in the OP.

It might seem strange that VNM consider, say, Cake Eating to be equivalent to Prisoner's Dilemma. But in the VNM framework, Player 1 can threaten not to eat cake in order to extract a side payment from Player 2, and this is equivalent to threatening to defect.
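
To make the two cases concrete, here is a minimal sketch (my own illustration, not VNM's construction) that classifies a bimatrix game. I use pure-strategy maximin for brevity, whereas VNM define the $v_i$ via mixed strategies:

```python
# Classify a two-player game as essential or inessential, given payoff
# matrices A (player 1) and B (player 2) indexed as [row][column].
# Caveat: pure-strategy maximin only; VNM use mixed-strategy maximin.

def classify(A, B):
    n_rows, n_cols = len(A), len(A[0])
    # Player 1 picks the row; their maximin value v1.
    v1 = max(min(A[i][j] for j in range(n_cols)) for i in range(n_rows))
    # Player 2 picks the column; their maximin value v2.
    v2 = max(min(B[i][j] for i in range(n_rows)) for j in range(n_cols))
    # Maximum total utility v over all pure strategy profiles.
    v = max(A[i][j] + B[i][j] for i in range(n_rows) for j in range(n_cols))
    return "inessential" if v1 + v2 == v else "essential"

# Prisoner's Dilemma (rows/columns: cooperate, defect): essential,
# since v1 + v2 = 1 + 1 < 4 = v, so there is something to negotiate over.
A = [[2, 0], [3, 1]]
B = [[2, 3], [0, 1]]
print(classify(A, B))  # essential
```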

Comment by nisan on Your Prioritization is Underspecified · 2020-07-11T21:18:50.248Z · score: 6 (3 votes) · LW · GW

There is likely much more here than just 'cognition is expensive'

In particular, prioritization involves negotiation between self-parts with different beliefs/desires, which is a tricky kind of cognition. A suboptimal outcome of negotiation might look like the Delay strategy.

Comment by nisan on Learning the prior · 2020-07-06T03:54:00.815Z · score: 6 (3 votes) · LW · GW

In this case humans are doing the job of transferring from $D$ to $D^*$, and the training algorithm just has to generalize from a representative sample of $D^*$ to the test set.

Comment by nisan on Book report: Theory of Games and Economic Behavior (von Neumann & Morgenstern) · 2020-05-22T16:42:45.987Z · score: 2 (1 votes) · LW · GW

Thanks for the references! I now know that I'm interested specifically in cooperative game theory, and I see that Shoham & Leyton-Brown has a chapter on "coalitional game theory", so I'll take a look.

Comment by nisan on Conflict vs. mistake in non-zero-sum games · 2020-05-21T15:56:47.448Z · score: 4 (2 votes) · LW · GW

If you have two strategy pairs $(s_1, s_2)$ and $(t_1, t_2)$, you can form a convex combination of them like this: flip a weighted coin; play strategy $(s_1, s_2)$ on heads and strategy $(t_1, t_2)$ on tails. This scheme requires both players to see the same coin flip.
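
A minimal sketch of the construction (function and variable names are mine):

```python
import random

def convex_combination(pair_s, pair_t, p):
    """Play strategy pair pair_s with probability p and pair_t otherwise.
    Both players condition on the SAME flip, so only the two intended
    strategy pairs can occur."""
    shared_flip = random.random() < p  # the shared random signal
    return pair_s if shared_flip else pair_t

# Example: a 50/50 mix of (Cooperate, Cooperate) and (Defect, Defect).
print(convex_combination(("C", "C"), ("D", "D"), 0.5))
```

Without the shared flip, the players can only mix independently, which would also produce the mismatched combinations; that is why the Pareto frontier need not be convex when no shared signal is available.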

Comment by nisan on Why you should minimax in two-player zero-sum games · 2020-05-17T20:50:54.673Z · score: 2 (1 votes) · LW · GW

A proof of the lemma:

Comment by nisan on Multi-agent safety · 2020-05-16T16:51:12.202Z · score: 2 (1 votes) · LW · GW

Ah, ok. When you said "obedience" I imagined too little agency — an agent that wouldn't stop to ask clarifying questions. But I think we're on the same page regarding the flavor of the objective.

Comment by nisan on Multi-agent safety · 2020-05-16T10:20:01.322Z · score: 4 (2 votes) · LW · GW

Might not intent alignment (doing what a human wants it to do, being helpful) be a better target than obedience (doing what a human told it to do)?

Comment by nisan on Stop saying wrong things · 2020-05-03T07:39:37.822Z · score: 7 (3 votes) · LW · GW

Also Dan Luu's essay "95%-ile isn't that good", where he claims that even 95th-percentile Overwatch players routinely make silly mistakes, suggesting that you can reach that level simply by avoiding such mistakes.

Comment by nisan on Conflict vs. mistake in non-zero-sum games · 2020-04-22T05:45:47.848Z · score: 8 (5 votes) · LW · GW

Oh, this is quite interesting! Have you thought about how to make it work with mixed strategies?

I also found your paper about the Kripke semantics of PTE. I'll want to give this one a careful read.

You might be interested in: Robust Cooperation in the Prisoner's Dilemma (Barasz et al. 2014), which kind of extends Tennenholtz's program equilibrium.

Comment by nisan on Jan Bloch's Impossible War · 2020-04-09T15:21:51.013Z · score: 4 (2 votes) · LW · GW

Ah, thank you! I have now read the post, and I didn't find it hazardous either.

Comment by nisan on Jan Bloch's Impossible War · 2020-04-08T07:07:55.932Z · score: 2 (1 votes) · LW · GW

More info on the content or severity of the neuropsychological and evocation infohazards would be welcome. (The WWI warning is helpful; I didn't see that the first time.)

Examples of specific evocation hazards:

  • Images of gore
  • Graphic descriptions of violence
  • Flashing lights / epilepsy trigger

Examples of specific neuropsychological hazards:

  • Glowing descriptions of bad role models
  • Suicide baiting

I know which of these hazards I'm especially susceptible to and which I'm not.

I appreciate that Hivewired thought to put these warnings in. But I'm kind of astounded that enough readers plowed through the warnings and read the post (with the expectation that they would be harmed thereby?) to cause it to be promoted.

Comment by nisan on Conflict vs. mistake in non-zero-sum games · 2020-04-08T06:41:33.531Z · score: 4 (2 votes) · LW · GW

Oh I see, the Pareto frontier doesn't have to be convex because there isn't a shared random signal that the players can use to coordinate. Thanks!

Comment by nisan on Jan Bloch's Impossible War · 2020-04-07T22:44:01.228Z · score: 2 (1 votes) · LW · GW

Can you give more informative content warnings so that your readers can make an informed decision about whether to read the post?

Comment by nisan on Conflict vs. mistake in non-zero-sum games · 2020-04-07T04:00:38.957Z · score: 6 (3 votes) · LW · GW

Is it a selfish utility-maximizer? Can its definition of utility change under any circumstances? Does it care about absolute or relative gains, or does it have some rule for trading off absolute against relative gains?

The agent just wants to maximize their expected payoff in the game. They don't care about the other agents' payoffs.

Do the agents in the negotiation have perfect information about the external situation?

The agents know the action spaces and payoff matrix. There may be sources of randomness they can use to implement mixed strategies, and they can't predict these.

Do they know each others' decision logic?

This is the part I don't know how to define. They should have some accurate counterfactual beliefs about what the other agent will do, but they shouldn't be logically omniscient.

Comment by nisan on Conflict vs. mistake in non-zero-sum games · 2020-04-07T03:49:36.130Z · score: 3 (2 votes) · LW · GW

They switch to negotiating for allocation. But yeah, it's weird because there's no basis for negotiation once both parties have committed to playing on the Pareto frontier.

I feel like in practice, negotiation consists of provisional commitments, with the understanding that both parties will retreat to their BATNA if negotiations break down.

Maybe one can model negotiation as a continuous process that approaches the Pareto frontier, with the allocation changing along the way.

Comment by nisan on Conflict vs. mistake in non-zero-sum games · 2020-04-05T22:22:49.360Z · score: 14 (7 votes) · LW · GW

A political example: In March 2020, San Francisco voters approved Proposition E, which tied the amount of new office space that can be built to the amount of new affordable housing that gets built.

This was appealing to voters on Team Affordable Housing who wanted to incentivize Team Office Space to help them build affordable housing.

("Team Affordable Housing" and "Team Office Space" aren't accurate descriptions of the relevant political factions, but they're close enough for this example.)

Team Office Space was able to use the simple mistake-theory argument that fewer limits on building stuff would allow us to have more stuff, which is good.

Team Affordable Housing knew it could build a little affordable housing on its own, but believed it could get more by locking in a favorable allocation early on with the Proposition.

Comment by nisan on Can crimes be discussed literally? · 2020-03-22T23:37:52.887Z · score: 20 (12 votes) · LW · GW

Even setting aside the norm-enforcing functions of language, "the American hospital system is built on lies" is a pretty vague and unhelpful way of pointing to a problem. It could mean any number of things.

But I do think you have a good model of how people in fact respond to different language.

Comment by nisan on Vanessa Kosoy's Shortform · 2019-11-07T22:28:27.972Z · score: 6 (3 votes) · LW · GW

My takeaway from this is that if we're doing policy selection in an environment that contains predictors, instead of applying the counterfactual belief that the predictor is always right, we can assume that we get rewarded if the predictor is wrong, and then take maximin.

How would you handle Agent Simulates Predictor? Is that what TRL is for?
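
A toy sketch of that policy-selection rule in a Newcomb-like setting (my own illustration, not Vanessa's formalism; the payoffs and names are made up):

```python
INF = float("inf")

# Payoffs indexed as payoff[action][prediction] for a Newcomb-like game.
payoff = {
    "one-box": {"one-box": 1_000_000, "two-box": 0},
    "two-box": {"one-box": 1_001_000, "two-box": 1_000},
}

def maximin_value(action):
    """Worst case over predictions, counting 'predictor wrong' as a reward."""
    values = []
    for prediction in ("one-box", "two-box"):
        if prediction != action:
            values.append(INF)  # predictor is wrong: treated as rewarding
        else:
            values.append(payoff[action][prediction])
    return min(values)

best = max(payoff, key=maximin_value)
print(best, maximin_value(best))  # one-box 1000000
```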

Comment by nisan on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-08T21:22:20.968Z · score: 3 (4 votes) · LW · GW

It sounds like you want a word for "Alice is wrong, and that's terrible". In that case, you can say "Alice is fucking wrong", or similar.

Comment by nisan on Why it took so long to do the Fermi calculation right? · 2019-01-10T17:25:06.596Z · score: 3 (2 votes) · LW · GW

Good point. In that case the Drake equation must be modified to include panspermia probabilities and the variance in time-to-civilization among our sister lineages. I'm curious what kind of Bayesian update we get on those...

Comment by nisan on An environment for studying counterfactuals · 2018-12-19T20:23:45.130Z · score: 2 (1 votes) · LW · GW

The observation can provide all sorts of information about the universe, including whether exploration occurs. The exact set of possible observations depends on the decision problem.

The observation and the universe can have any relationship, but the most interesting case is when one can infer the universe from the observation with certainty.

Comment by nisan on Beliefs at different timescales · 2018-11-10T21:51:53.184Z · score: 2 (1 votes) · LW · GW

Thanks, I made this change to the post.

Comment by nisan on Beliefs at different timescales · 2018-11-10T21:49:28.138Z · score: 4 (2 votes) · LW · GW

Yeah, I think the fact that Elo only models the macrostate makes this an imperfect analogy. I think a better analogy would involve a hybrid model, which assigns a probability to a chess game based on whether each move is plausible (using a policy network), and whether the higher-rated player won.

I don't think the distinction between near-exact and non-exact models is essential here. I bet we could introduce extra entropy into the short-term gas model and the rollout would still be superior to the Boltzmann distribution for predicting the microstate.

Comment by nisan on Beliefs at different timescales · 2018-11-05T16:33:37.268Z · score: 10 (3 votes) · LW · GW

Sure: If we can predict the next move in the chess game, then we can predict the move after that, and the one after that, and so on. By iterating, we can predict the whole game. If we have a probability for each next move, we multiply them to get the probability of the game.

Conversely, if we have a probability for an entire game, then we can get a probability for just the next move by adding up all the probabilities of all games that can follow from that move.
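
A small sketch of both directions with a toy move model (everything here is illustrative):

```python
from itertools import product

MOVES = ["a", "b"]   # toy stand-ins for chess moves
GAME_LENGTH = 3

def p_next_move(history, move):
    """Toy per-move model: 'a' is twice as likely as 'b'."""
    return 2 / 3 if move == "a" else 1 / 3

def p_game(game):
    """Whole-game probability: multiply the per-move probabilities."""
    p = 1.0
    for i, move in enumerate(game):
        p *= p_next_move(game[:i], move)
    return p

def p_first_move(move):
    """Next-move probability: add up the probabilities of all games
    that begin with that move."""
    return sum(p_game((move,) + rest)
               for rest in product(MOVES, repeat=GAME_LENGTH - 1))

print(p_game(("a", "a", "b")))  # (2/3) * (2/3) * (1/3)
print(p_first_move("a"))        # recovers 2/3
```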

Comment by nisan on Beliefs at different timescales · 2018-11-05T16:21:42.662Z · score: 2 (1 votes) · LW · GW

Thanks, I didn't know that about the partition function.

In the post I was thinking about a situation where we know the microstate to some precision, so the simulation is accurate. I realize this isn't realistic.

Comment by nisan on Beliefs at different timescales · 2018-11-05T00:48:59.639Z · score: 2 (1 votes) · LW · GW

The sum isn't over $n$, though; it's over all possible tuples of length $n$. Any ideas for how to make that clearer?

Comment by nisan on EDT solves 5 and 10 with conditional oracles · 2018-10-01T17:07:49.116Z · score: 4 (2 votes) · LW · GW

I'm having trouble following this step of the proof of Theorem 4: "Obviously, the first conditional probability is 1". Since the COD isn't necessarily reflective, couldn't the conditional be anything?

Comment by nisan on History of the Development of Logical Induction · 2018-08-29T06:40:14.962Z · score: 6 (4 votes) · LW · GW

The linchpin discovery is probably February 2016.

Comment by nisan on Counterfactuals, thick and thin · 2018-08-02T22:29:32.624Z · score: 4 (2 votes) · LW · GW

Ok. I think that's the way I should have written it, then.

Comment by nisan on Counterfactuals, thick and thin · 2018-08-01T06:03:21.138Z · score: 5 (3 votes) · LW · GW

The definition involving the permutation is a generalization of the example earlier in the post: one permutation is the identity, and the other swaps heads and tails. In general, if you observe $x$ and $y$, then the counterfactual statement is that if you had observed $\sigma(x)$, then you would have also observed $\sigma(y)$.
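
A tiny illustration of this definition, with an outcome space and names of my own choosing:

```python
def sigma(outcome):
    """The permutation that swaps heads and tails (and fixes everything else)."""
    swaps = {"heads": "tails", "tails": "heads"}
    return swaps.get(outcome, outcome)

def counterfactual(x, y):
    """If we observed x and y, then had we observed sigma(x),
    we would have observed sigma(y)."""
    return sigma(x), sigma(y)

# Example: we observed that the coin came up heads and that we guessed heads.
print(counterfactual("heads", "heads"))  # ('tails', 'tails'), i.e.
# "if the coin had come up tails, we would have guessed tails."
```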

I just learned about probability kernels thanks to user Diffractor. I might be using them wrong.

Comment by nisan on Counterfactuals, thick and thin · 2018-08-01T05:46:46.037Z · score: 3 (2 votes) · LW · GW

Oh, interesting. Would your interpretation be different if the guess occurred well after the coinflip (but before we get to see the coinflip)?

Comment by nisan on Counterfactuals, thick and thin · 2018-08-01T01:57:40.074Z · score: 2 (1 votes) · LW · GW

That sounds about right to me. I think people have taken stabs at looking for causality-like structure in logic, but they haven't found anything useful.

Comment by nisan on On the Role of Counterfactuals in Learning · 2018-07-11T04:22:52.524Z · score: 4 (2 votes) · LW · GW

What predictions can we get out of this model? If humans use counterfactual reasoning to initialize MCMC, does that imply that humans' implicit world models don't match their explicit counterfactual reasoning?

Comment by nisan on An environment for studying counterfactuals · 2018-07-11T02:30:23.306Z · score: 2 (1 votes) · LW · GW

I agree exploration is a hack. I think exploration vs. other sources of non-dogmatism is orthogonal to the question of counterfactuals, so I'm happy to rely on exploration for now.

Comment by nisan on Mechanistic Transparency for Machine Learning · 2018-07-11T02:03:59.815Z · score: 7 (4 votes) · LW · GW

"Programmatically Interpretable Reinforcement Learning" (Verma et al.) seems related. It would be great to see modular, understandable glosses of neural networks.

Comment by nisan on Why it took so long to do the Fermi calculation right? · 2018-07-06T21:53:29.096Z · score: 16 (7 votes) · LW · GW

I'd like to rescue/clarify Mitchell's summary. The paper's resolution of the Fermi paradox boils down to "(1) Some factors in the Drake equation are highly uncertain, and we don't see any aliens, so (2) one or more of those factors must be small after all".

(1) is enough to weaken the argument for aliens, to the point where there's no paradox anymore. (2) is basically Section 5 from the paper ("Updating the factors").

The point you raised, that the expected number of aliens can be high even while the probability of no aliens is substantial, is an explanation of why people were confused.

I'm making this comment because if I'm right it means that we only need to look for people (like me?) who were saying all along "there is no Fermi paradox because abiogenesis is cosmically rare", and figure out why no one listened to them.
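
For illustration, here is a quick Monte Carlo in the spirit of point (1), with parameter ranges I made up rather than the paper's: a high expected number of civilizations is compatible with a substantial probability that we're alone.

```python
import random

def sample_N():
    """One draw of the Drake equation with log-uniform uncertainty on
    the biological factors (ranges are illustrative, not the paper's)."""
    R_star, fp, ne = 10.0, 0.5, 1.0            # astrophysical factors
    fl = 10 ** random.uniform(-30, 0)          # abiogenesis: huge uncertainty
    fi = 10 ** random.uniform(-3, 0)           # intelligence
    fc, L = 0.1, 1e4                           # communication, longevity
    return R_star * fp * ne * fl * fi * fc * L

samples = [sample_N() for _ in range(100_000)]
print("E[N]    =", sum(samples) / len(samples))                   # order 10
print("P(N<1)  =", sum(n < 1 for n in samples) / len(samples))    # large too
```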

Comment by nisan on The Math Learning Experiment · 2018-03-24T22:48:29.109Z · score: 17 (4 votes) · LW · GW

I heard a similar story about when Paul Sally visited a grade school classroom. He asked the students what they were learning, and they said "Adding fractions. It's really hard, you have to find the greatest common denominator...." Sally said "Forget about that, just multiply the numerator of each fraction by the denominator of the other and add them, and that's your numerator." The students loved this, and called it the Sally method.
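
For reference, the rule Sally described, with the new denominator being the product of the two denominators, plus a worked example:

\[
\frac{a}{b} + \frac{c}{d} = \frac{ad + bc}{bd},
\qquad \text{e.g.} \qquad
\frac{1}{3} + \frac{1}{4} = \frac{1 \cdot 4 + 3 \cdot 1}{3 \cdot 4} = \frac{7}{12}.
\]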

Comment by nisan on The Math Learning Experiment · 2018-03-24T17:25:34.492Z · score: 7 (2 votes) · LW · GW

Cool, do you remember what the 5-minute explanation was?

Comment by nisan on Hufflepuff Cynicism · 2018-02-13T16:33:39.343Z · score: 15 (4 votes) · LW · GW

I'd love to hear your thoughts on A Fable Of Politics And Science. Would you say that Barron's attitude is better than Ferris's, at least sometimes?

Comment by nisan on Hero Licensing · 2018-02-02T02:40:15.454Z · score: 2 (1 votes) · LW · GW

I like the resemblance to this scene from The Fall Of Doc Future.

Comment by nisan on Logical counterfactuals and differential privacy · 2018-01-24T05:38:55.000Z · score: 2 (1 votes) · LW · GW

This doesn't quite work. The theorem and examples only work if you maximize the unconditional mutual information, not the conditional mutual information. And the choice of the conditioning variable is doing a lot of work; it's not enough to make it "sufficiently rich".
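
To illustrate how far the two quantities can come apart (a toy joint distribution of my own, with made-up variable names): if $X = Y \oplus Z$ with $Y, Z$ independent fair bits, then $I(X;Y) = 0$ while $I(X;Y \mid Z) = 1$.

```python
from itertools import product
from math import log2

# Joint distribution over (x, y, z): y, z independent fair bits, x = y XOR z.
pxyz = {(y ^ z, y, z): 0.25 for y, z in product((0, 1), repeat=2)}

def H(indices):
    """Entropy of the marginal over the given coordinate indices."""
    marginal = {}
    for outcome, p in pxyz.items():
        key = tuple(outcome[i] for i in indices)
        marginal[key] = marginal.get(key, 0.0) + p
    return -sum(p * log2(p) for p in marginal.values() if p > 0)

# I(X;Y) = H(X) + H(Y) - H(X,Y)
print(H([0]) + H([1]) - H([0, 1]))                      # 0.0
# I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(X,Y,Z) - H(Z)
print(H([0, 2]) + H([1, 2]) - H([0, 1, 2]) - H([2]))    # 1.0
```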

Comment by nisan on Robustness as a Path to AI Alignment · 2017-10-13T15:56:49.694Z · score: 5 (1 votes) · LW · GW

Why is the scenario you describe the "real" argument for transitivity, rather than the sequential scenario? Or are you pointing to a class of scenarios that includes the sequential one?