D&D.Sci III Evaluation and Ruleset

post by abstractapplic · 2021-03-08T23:01:36.833Z · LW · GW · 4 comments

Contents

  Ruleset
    Mage Fights
    Modifiers and Anomalies
    History
  Reflections
  Scheduling

This is a followup to the D&D.Sci post [LW · GW] I made last week; if you haven’t already read it, you should do so now before spoiling yourself.

Here is the web interactive I built to let you test your solution; below is a complete explanation of the rules used to generate the dataset. You’ll probably want to test your answer before reading any further.


Ruleset

Mage Fights

Mages have differing levels of Power. When they fight, each of them rolls two ten-sided dice and adds the result to their Power. The mage with the higher total wins; if it’s a tie, the defender wins.

The raw Power of each mage at the time of your summoning is given below:

| Mage | Power |
|---|---|
| Vitamancer A | 19 |
| Vitamancer B | 22 |
| Geomancer A | 21 |
| Geomancer B | 12 |
| Cryomancer | 23 |
| Pyromancer A | 20 |
| Pyromancer B | 21 |
| Necromancer A | 22 |
| Necromancer B | 15 |
| Necromancer C | 13 |
| Electromancer | 20 |
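Since each fight is just 2d10 + Power against 2d10 + Power with ties going to the defender, win probabilities can be computed exactly by enumerating all dice outcomes. Here's a minimal sketch of that calculation (the function name and the unmodified-Power inputs are my own; in the actual dataset, Power is further adjusted by the modifiers and anomalies described below):

```python
from itertools import product

def attacker_win_probability(attacker_power, defender_power):
    """Exact probability the attacker wins, per the stated rule:
    each side rolls 2d10 and adds Power; higher total wins, defender wins ties."""
    wins = total = 0
    for a1, a2, d1, d2 in product(range(1, 11), repeat=4):
        total += 1
        if attacker_power + a1 + a2 > defender_power + d1 + d2:
            wins += 1
    return wins / total

# Example: Cryomancer (Power 23) attacking Pyromancer A (Power 20),
# ignoring any situational modifiers.
print(attacker_win_probability(23, 20))
```

Note that because the defender wins ties, two mages of equal Power are not a 50/50 proposition: the attacker wins strictly less than half the time.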

Modifiers and Anomalies

Power is modified by the circumstances in which a fight occurs. The main modifiers are:

In addition, all the mages have something unique going on with them:

History

Geomancer B believes (mostly correctly) that time is not a relevant factor, and appends data to his records in the order he receives it, without noting what happened when.

The simulated timeline that created the dataset is as follows:

Reflections

The ‘making use of row order’ trick for this challenge was artificial even by the standards of such things; I sincerely congratulate everyone who considered and discarded the possibility for their pragmatism. In the real world, if you notice a formatting pattern that seems relevant to your task, you should usually ask the person who provided the dataset about it, rather than naively trying to model on it. That said, being able to detect (and prove) when a dataset has ‘extra’ information can be useful.

I added a lot of small, weird, marginally-relevant-to-the-task detail to this dataset, partly for the sake of realism, but also because I was pleasantly surprised by how comprehensively LW managed to pick apart my last two challenges and I wanted to establish an upper bound on what players would detect.

Scheduling

I’ve discovered I dislike creating D&D.Sci games to a strict deadline, so all I can promise is that the next one will be sometime in April.

I’m undecided about how long these challenges should last; feedback on this point would be greatly welcomed, as would feedback on any other point.

4 comments

Comments sorted by top scores.

comment by Measure · 2021-03-09T00:43:00.174Z · LW(p) · GW(p)

I'd prefer a week before the evaluation.

comment by Yonge · 2021-03-09T21:48:31.924Z · LW(p) · GW(p)

Thank you for organising this.

I think a week is a good length for them to last. 3 days felt a little rushed.

comment by Measure · 2021-03-09T01:19:09.776Z · LW(p) · GW(p)

I like the flavorful responses to different outcomes in the evaluator — especially when you manage to have only the summoner lose.

comment by Randomized, Controlled (BossSleepy) · 2021-03-08T23:59:37.223Z · LW(p) · GW(p)

I really appreciated this challenge, as I'd like to get more into data analysis. Thank you for the effort.