The Gray Zone: Convention Games

Let's consider three different contexts for playing D&D:

  1. Home campaigns. Here you'll be playing with the same players & characters over an extended period of time. Characters will almost certainly be generated individually to player taste; they will advance and explore the world over time. Old-school “sandbox” style play basically requires this context.

  2. Convention games. This is a one-shot adventure, possibly limited to a 4-hour time slot or something similar. Characters may be pre-generated or custom-made (consider RPGA point-buy rules or the old DMG Appendix P, which I still use). The characters won't advance in any mechanical way.

  3. Tournament play. This is also a one-shot adventure, but in a competitive context. There will be multiple (possibly very many) playgroups run through the same scenario, with an eye towards scoring the best and picking a “champion”. Characters are almost certainly pre-generated (so as to give a level playing field to the competition).

Notice that I distinguish here between “convention games” in general and “tournament play” in particular (even though they have many coarse similarities, and tournaments are generally run within a convention gathering). Convention & tournament games are similar in that they both feature short one-off adventures, and they avoid any usage of the character-advancement rules. But they differ in that one is competitive and the other is not. Simple convention games, perhaps, have more of an incentive to let the players “win” (sometimes they are run as product-release promotions or trials, and have good reason to want the players to leave the table feeling like they “had fun” with the experience and the product).

Tournament games, meanwhile, have an excellent reason to be tough meat-grinders where the majority of the players "lose" (by acting as a strict filter, they make it easier to identify the one "champion" of the event who made the most progress; whereas if many people uniformly "win", it will be difficult to make that distinction). Compare an interesting quote from recent cyberwar games at West Point: the attacks designed by the NSA were made "a little too hard for the strongest undergraduate team to deal with, so that we could distinguish the strongest teams from the weaker ones." And this also explains why the earliest published D&D adventures all had a "killer DM" feel to them: they were originally developed for competitive tournament situations.

Okay, so getting closer to my point -- Having considered the different kinds of play contexts I've seen for D&D, two of them have seemed the most compelling, and one is rather more frail for me. We might ask the question, "Why are we playing; what do we gain at the end?" Two of these situations have a meta-reward, outside the game itself, that makes the experience deeper and more compelling. In case (1) Home campaigns, the meta-reward is largely character advancement; levelling up, accessing new powers and magic items. There's also exploration of a larger campaign world over time, but let's face it -- The #1 revolutionary, addictive development that D&D brought us was the idea of persistent, advancing characters over many game sessions, and this is almost solely accessible in terms of a home campaign. In case (3) Tournament play, the meta-reward is the competition with other teams playing in parallel to yours, and seeing one team at the end awarded with honor and a trophy (or somesuch). Personally, I love playing in a tournament, and love the heads-down, high-proficiency play that I see in that context.

So that leaves case (2) Convention games, and frankly, I can't figure out what the meta-game "point" is to them anymore. When I run one, I'm left a little bit bewildered at the end about what the payoff is. It seems very awkward if there's a TPK at the end, and it seems almost equally awkward if time simply runs out after a certain number of rooms are successfully looted.

One suggestion is that there needs to be a specific "quest" in a convention game -- The players are given an explicit (or obvious) assignment at the start, and if they can succeed in the time allotted, they are declared to have "won". A few problems here: (1) It's difficult to estimate in advance a perfect set of encounters that lead to a "win" at exactly the 4-hour mark. (2) The setup manages to frustrate the classic D&D architecture of open-ended exploration, multiple paths, resource management, wandering monsters, treasure and XP rewards, etc. (3) There's still no meta-game reward from this in-game "victory".

Now, I have a good friend Paul who recently ran an exceptional convention game a few weeks back. Philosophically, we tend to disagree about many of the high-level "whys and wherefores" of D&D, but I think we almost always agree about whether a given game we just experienced was good or not (sort of an "I know it when I see it" experience). In the past we simultaneously co-DM'd a campaign, and at least once our differing styles stomped ugly all over each other (Ettin-style?). He may run a better convention game than I do; the one he ran the other weekend was one of the most fun D&D sessions I've had in a long time -- hilarious characters, great encounters, well-paced, filthy humor (which I like), great ending. I was mulling over my troubles with convention games on the ride over, and lo, my friend snaps off one of the best such games in my memory.

Anyway, Paul wrote up his notes on that adventure on his blog over here. The thing I was surprised and a bit unsettled by was that the quest, locations, and NPCs were all being invented and moved around backstage on the fly, which is how our investigations managed to lead us to saving the girl at almost exactly the 4-hour mark. Made for a great, nigh-perfect gaming session -- and it's not something I think I'd ever be able to bring myself to do, as it goes against every grain I've been trained in as a game designer, thinking more in terms of published tournament-style adventures that we'd prefer to keep fixed, replicable, and fair if multiple groups are run through the same adventure over time.

So, what to do? Should I just give up on running one-off convention games (granted that they frustrate all the meta-game rewards that are the hallmark of D&D), and leave them to better narrative DMs? Is there any way to interface the classic rewards of D&D in an isolated, one-shot experience? Troubling questions, since at this point in my life the only opportunities I have for play are the infrequent one-off convention games: the "gray zone" in the middle, if you will.


What is the Best Combat Algorithm?

Throughout the history of D&D and RPGs (and more generally, any action/wargame), there have been a host of different algorithms for determining success in combat and other feats of skill and luck. For example: to-hit tables, THACO, comparison against an increasing AC score, etc.

Within some very small tolerance for error (say, a +/-1 difference), all of these systems are mathematically equivalent (i.e., they result in "hits" on the same rolls of the d20 die). But which is the best algorithm? That is, treating the tabletop gamer's brain as a kind of natural "computer", which is easiest, fastest, most efficient, and least error-prone? Is it one of the aforementioned algorithms, or something different?

First, let's establish the different components of the basic D&D "to hit" (or anything else) roll. They include: (1) a d20 die roll, (2) a basic attack proficiency, by class or hit dice, (3) the armor of the defender, (4) miscellaneous modifiers (positive bonuses being good for the attacker), and (5) the "baseline" chance to succeed at hitting, irrespective of other modifiers #2-4.

Let's look at one example, say, the THACO mechanic from 1E-2E. In the form of an inequality, the basic algorithm is:
THACO ALGORITHM: d20 + mods ≥ THACO* - AC

* The THACO was itself determined (pre-game time) from tables in the core books. But in essence, for fighter and monster-types, this incorporated the "baseline" success chance (Normal Men need to roll ~20 vs. AC 0) and a +1 bonus per fighter/monster level. In other words, THACO = (~20 - level). Let's substitute and see all 5 terms plainly:

THACO ALGORITHM: d20 + mods ≥ 20 - level - AC
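As a quick numeric sanity check of that substitution (the particular level, AC, and modifier values here are my own illustration, not from any rulebook):

```python
# Illustrative numbers: a 5th-level fighter with +1 in misc. mods,
# attacking a defender with descending AC 3.
level, ac, mods = 5, 3, 1
thaco = 20 - level  # THACO = 15, as would be read off the table pre-game

for d20 in range(1, 21):
    # Original THACO form vs. the fully substituted 5-term form:
    # both must agree on every possible d20 roll.
    assert (d20 + mods >= thaco - ac) == (d20 + mods >= 20 - level - ac)
```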

Now, if we proceed to search for other, variant algorithms, we can apply the basic algebraic "rebalancing" operations to make any of these terms appear on either side of the inequality that we wish. For example, we could add a "level" term to both sides (canceling it on the right and appearing as an addition to the left). Or, we could subtract the "mods" from both sides (thereby appearing as a subtraction on the right).

In fact, since there are 5 terms, and each can appear on either of the 2 sides of the inequality that we wish, there are 2^5 = 32 different formats for this inequality (by the fundamental principle of counting) that we could consider. Here are just a few of those 32 possible variations:

TABLE ALGORITHM: d20 + mods ≥ (20 - level - AC)
[Encapsulated in table]

THACO ALGORITHM: d20 + mods ≥ (20 - level) - AC
[Encapsulated in THACO]

d20 SYSTEM ALGORITHM: d20 + mods + level ≥ (20 - AC)
[Defined as New AC]

"SUBTRACT ALL" ALGORITHM: d20 ≥ 20 - level - AC - mods

"GO NEGATIVE" ALGORITHM: 0 ≥ 20 - level - AC - mods - d20
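
Incidentally, all of these rearrangements really are interchangeable, which is easy to confirm by brute force. Here's a quick sketch (the function and variable names are mine, purely for illustration):

```python
import itertools

# One function per rearrangement of the same underlying inequality.
def table_alg(d20, mods, level, ac):    return d20 + mods >= 20 - level - ac
def thaco_alg(d20, mods, level, ac):    return d20 + mods >= (20 - level) - ac
def d20_alg(d20, mods, level, ac):      return d20 + mods + level >= 20 - ac
def subtract_all(d20, mods, level, ac): return d20 >= 20 - level - ac - mods
def go_negative(d20, mods, level, ac):  return 0 >= 20 - level - ac - mods - d20

# Exhaustively confirm agreement across every roll, modifier, level, and AC
# in a reasonable range of play.
variants = (table_alg, thaco_alg, d20_alg, subtract_all, go_negative)
for args in itertools.product(range(1, 21), range(-5, 6),
                              range(0, 16), range(-3, 10)):
    assert len({f(*args) for f in variants}) == 1
```

Since these are all the same inequality with terms shuffled across the comparison sign, the check can't fail; the interesting question is which arrangement the human at the table processes fastest.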


Now, obviously, those last few were for humorous illustrations only, and I assume not many people would want to use those systems. But what criteria can we use to choose the "best" possible system? Let's consider the following as guiding principles (and we'll back each of them up with results from experiments in cognitive psychology as we proceed):

(1) Additions are easier than subtractions. Although mathematically equivalent (and implemented with fundamentally the same operation in digital computing systems), most people find subtraction significantly harder than addition. For example, addition is commutative (the order is irrelevant) whereas subtraction is not. See the paper by MacIntyre, University of Edinburgh, 2004, p. 2: "Addition tasks are clearly completed in a much more confident manner than the subtraction items, with over 80% of the study group with at most one error on the items. Subtraction items appear to have presented a much bigger challenge to the pupils, with over 50% having 3 or more of those questions wrong." See also Kamii et al. 2001.

(2) Round numbers are easier to compare than odd numbers. In other words, when comparing which of two numbers is larger (the final, required step in any "to hit" algorithm) it will be easier if the second number is "20" than, say "27". This follows from the psychological finding that it's faster to compare single digits that are farther apart; see Sousa, How the Brain Learns Mathematics, p. 21: "When two digits were far apart in values, such as 2 and 9, the adults responded quickly, and almost without error. But when the digits were closer in value, such as 5 and 6, the response time increased significantly, and the error rate rose dramatically..." In our case, setting the second digit to zero would maximize the opportunity for a large (and thus easy-to-discern) difference between the numbers.

(3) Small numbers are easier to compare than large numbers. This has also been borne out by a host of psychological experiments over the last several decades. Again from Sousa, p. 22: "The speed with which we compare two numbers depends not just on the distance between them but on their size as well. It takes far longer to decide that 9 is larger than 8 than to decide that 2 is larger than 1. For numbers of equal distance apart, larger numbers are more difficult to compare than smaller ones." Again, this is true for human computers only, not digital ones (ironically, the digital processor "compare" operation is really just an application of the same "subtract" circuitry).

Okay, so let's think about applying these principles to find the cognitively-justified best tabletop resolution algorithm. Applying principle #3 means that we'd generally prefer dealing with smaller numbers rather than larger. Before considering anything else, it's clear that it will be hardest for people to mentally operate in a d% percentile system, easier in a d20-scaled system, and easier still on a d6-scaled system. We should pick the easiest of these that gives the fidelity necessary to our simulation, and the d20-scale does seem like a nice medium.

We can also apply principles #2 and #3 to discard a key change brought about to D&D in the 3rd Edition: Ascending AC numbers. While it has its proponents (and is of course mathematically equivalent to all the other 32 permutations of the core mechanic inequality), it forces us at the end of our algorithm to run a comparison against a relatively large, and frequently odd, number, such as AC 15, or AC 27. By using instead descending ACs, they will always be a single digit (and therefore easier to manipulate according to finding #3), and we'll also see below that we can arrange a rule such that the final comparison is always run against a fixed, round number (and therefore preferred according to principle #2 as well).

Applying principle #1 indicates that we'd prefer to have all of our operations be additions, and do away with any subtractions (as in the THACO system). Returning to our 32 different options for presenting the basic resolution inequality, this is easily accomplished: simply add back all the terms on the right-hand side of the inequality, and all those terms become simple additions on the left. Having done this, we'll see that we're left with a nice round number to compare that addition to (fortuitously complying with principle #2, as mentioned above). We'll call this the "Target 20" algorithm:

TARGET 20 ALGORITHM: d20 + level + AC + mods ≥ 20

Now, you may have guessed where I was going with this if you'd read previous blog entries of mine supporting the idea. While never presented this way in TSR/WOTC core rulebooks, I'm quite confident that this is the most mentally efficient representation of the core d20-based resolution mechanic: Add d20, your fighter level, your opponent's single-digit descending AC, and miscellaneous bonuses; a number equal to or greater than 20 then indicates a "hit". It satisfies all of our 3 psychologically-verified guiding principles: (1) additions are easier than subtractions, (2) round numbers easier to compare than odd ones, and (3) small numbers easier to manipulate than large ones (particularly in the form of single-digit, descending ACs).
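
A minimal sketch of the rule in code (the specific fighter level, AC, and bonus below are my own example numbers):

```python
def target20_hit(d20_roll, level, descending_ac, mods=0):
    """Target 20: a hit when d20 + level + AC + mods totals 20 or more."""
    return d20_roll + level + descending_ac + mods >= 20

# Example: a 4th-level fighter with a +1 bonus attacking descending AC 5
# needs d20 + 4 + 5 + 1 >= 20, i.e. a natural roll of 10 or better.
assert target20_hit(10, 4, 5, 1) is True
assert target20_hit(9, 4, 5, 1) is False
```

Note that every operation is an addition, and the final comparison is always against the fixed round number 20, exactly as the three principles prescribe.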

Like a lot of things in our hobby, the Original D&D rule was pretty close to optimal, but not quite perfect in this sense. If I won the lottery it might be interesting to definitively prove which method is best by running a series of psychological experiments; but since the result just follows from already-proven principles, I'd also want to set up a betting pool and recoup some of my money from the WOTC chief designers of the last several years.