Target 20 System Accuracy

We've discussed the Target 20 mechanic a few times (such as in the "Best Combat Algorithm" discussion, a top-10 post for this blog, per sidebar), in which we replace all the standard D&D combat tables with a mechanic of roll d20 + level + modifiers, and check for a total of 20 or more. This simple, time- and space-saving rule has always been the core mechanic in the OED House Rules; I first jotted it down in my AD&D DMG some time in the '90s; and it frequently winds up bleeding into other people's games after they play with me a few times. I recently created a new explanatory page at the OEDGames website.
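The core rule can be sketched in a few lines of code (a minimal illustration of the mechanic as described, not taken from the OED House Rules text; the function name and parameters are my own framing, with descending AC folded in as an attack modifier):

```python
def target20_hits(d20_roll, level, modifiers, target_ac=0):
    """Target 20 check: d20 + level + modifiers (plus descending AC,
    for attack rolls) succeeds on a total of 20 or more."""
    return d20_roll + level + modifiers + target_ac >= 20

# A 1st-level fighter with no modifiers attacking AC 9 (unarmored)
# hits on a natural 10 or better: 10 + 1 + 0 + 9 = 20.
```

Note the single fixed target: whatever the roll type, there are no table lookups, just one addition and a comparison against 20.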

A question that the more discriminating gamers are right to ask is: How accurate is it, compared to the classic D&D rules (of any edition)? There's definitely a smoothing-out effect, such that it's not 100% identical to the combat tables in those early editions. In some sense, that's the point: to get away from those slow and clunky table lookups (though see the comments on Gygax & Lakofka further below). Let's compute a precise answer to that question.

For this purpose, we'll use the statistic called Root-Mean-Square Deviation (RMSD). This is the standard way of measuring the difference between a model and real-world observations (or, in our case, between two combat models) when you have a bunch of data points to consider. Basically, we find the difference between each pair of data points, square those differences so they're all positive (which also extra-punishes severe misses in the model), take the average of those squares, and then take the square root to get back in scale with the original data. The basic concept is almost exactly the same as for standard deviation, and you see this RMSD tactic played out over and over again in various descriptive statistics. In our present case, you can effectively interpret the result as just "average pips off from the book table results". Let's look at a summary of Target 20 vs. results in several early editions of D&D:
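As a concrete sketch of the computation just described (the sample numbers below are invented for illustration, not taken from the actual combat tables or the spreadsheet):

```python
import math

def rmsd(model, observed):
    """Root-mean-square deviation: square each pairwise difference,
    average the squares, then square-root back to the original scale."""
    diffs_sq = [(m - o) ** 2 for m, o in zip(model, observed)]
    return math.sqrt(sum(diffs_sq) / len(diffs_sq))

# E.g., comparing hypothetical target numbers at four levels:
# rmsd([10, 9, 8, 7], [10, 10, 8, 6]) -> about 0.71, i.e. under
# one pip off from the "book" values on average.
```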

Target 20 average pips difference from early D&D systems

The executive summary, for all of these editions, is this: For attacks and saves, Target 20 tends to be 1 or 2 pips off the book table results. If you want to see the full individual level-by-level data set, you can download an ODS spreadsheet here. Each analysis has been done for levels 1 to 14 (which is where most of the book tables tend to top out). Note that the OD&D and B/X rules used the exact same combat and save tables for the standard classes (so I put those editions next to each other); AD&D represented more of an evolution of the system. I've also included the Dragon #80 article by Gygax/Lakofka which presented expanded tables such that adjustments always occur in 1-pip (5%) increments everywhere -- perhaps the best expression of the underlying desired system, unencumbered by page-space limits in early D&D (and the one that Target 20 matches most closely). 

Now, if you dig into the details, you'll see that the "1 or 2 pip" variation is level-dependent, as follows:

Attack rolls in Target 20 are, at 1st level, exactly equal to the OD&D-B/X combat tables; at middle levels they are very close; and they deviate more at the higher levels, with Target 20 being more generous by a few points. The Target 20 fighter attacks are even closer to AD&D, because the AD&D table was explicitly set up to give a regular 5% boost for every added fighter level (see "Special Note" at bottom of DMG p. 74); as seen again in the Dragon #80 article. That is, we're exactly one pip off from the AD&D fighter hits in every case.

Saving throws vs. spells (for brevity, the only one shown here; the others are similar) likewise tend to vary from the book results by about 2 points on average. In this case, looking at the details, you'll see that Target 20 is a bit harsher at 1st level; closely matches the book at middle levels; and is again more generous at the highest levels.

Thief skills, looking at the grey-highlighted part of the chart, appear to be more at odds with the book rules; however, this is only the base rule, and in practice we apply the thief's Dexterity modifier to their rolls, presumably a 1- or 2-point shift closer to the book. Given that, the Target 20 results are again exactly the same as the OD&D-B/X rules at 1st level, and then track a little bit below the book rule at higher levels. (Note that in this particular case we used a normalized root-mean-square deviation (NRMSD) statistic, because the scale changed from percentile 1-100 to a die roll of 1-20.) My overall interpretation: At the most commonly played levels, Target 20 is maximally accurate to the book; at the more exotic higher levels, it deviates a bit.

It further bears noting that in practice, when a PC attacks a monster, we (as DM) keep the AC secret -- so players just report the d20 + attack bonus modifiers (a single addition), and the DM mentally adds the monster AC. (Of course, if the player-announced total is itself 20 or more, then they've almost surely scored a hit.) On the other hand, in the case of mass monster attacks or saving throws, the DM may perform a mental reverse-subtraction to come up with the raw die-roll needed, and then roll a fistful of d20's in the open, so that players can immediately confirm whether they've been hit. But that reverse-subtraction (as is customary with the THAC0 technique) is never something we ask the players to do.
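That DM-side reverse-subtraction amounts to pre-computing a minimum natural die roll (a sketch of the procedure as described; the function name and the example numbers are mine):

```python
def raw_roll_needed(level, modifiers, target_ac):
    """DM-side reverse subtraction under Target 20: pre-compute the
    minimum natural d20 result that hits, so a fistful of dice can
    be rolled in the open and read off directly."""
    return 20 - level - modifiers - target_ac

# E.g., a 3 HD monster with no other modifiers attacking a PC in
# chain (AC 5) needs a natural 12 or better: 20 - 3 - 0 - 5 = 12.
```

Crucially, this subtraction happens once per batch of rolls on the DM's side; the players only ever add.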

Final thought: If you like the Target 20 method (and we hope you do!), the image at the top here is actually the official Target 20 Compatibility Mark, and you should feel free to use it in a new gaming product of your authorship (also, a link to the explanatory web page could be helpful; questions via the email there are welcome). Above all, we hope it's a simplification that helps make your own game more fast-paced and furious. Fight on!


  1. I've adopted this for my G+ homebrews since I saw it in the "Searchers of the Unknown" collection. Which 2 fonts are used in the logo?

    1. So cool, thanks for sharing that! Apparently in the logo, the "Target" is Haettenschweiler Regular, the "2" is Jasmine UPC, and the "0" is a handmade circle (I farmed out the creation). Feel free to copy-paste the image above if you're looking to recreate it. :-)

  2. I've done a fair job of conditioning my players to accept the roll-under method for all task-resolution rolls in my game (e.g. for an attack roll, the chance to hit is defender's AC + attacker's to-hit bonus in 20). I find this more intuitive, because all rolls made to determine "whether something happens" (attack rolls, saving throws, skill checks) can be expressed as "chance X in die-size Y", and the player automatically knows the odds before rolling.

    Also, when you convert everything to roll-under, you start to notice some similarities: the chance for a 1st level character of any class (ignoring ability mods) to hit a foe in chainmail, the average chance for a 1st level character or 1 hit die monster to make a saving throw, and a great many "skill" or "searching" or "listening" type checks all sit at about 7 in 20 (or quite close to 2 in 6). Enough so that you can take handy shortcuts, like rolling saving throws for large groups of low-level monsters on d6s, counting results of 1 or 2 as success. Handy, when a caster drops an area-effect attack on a mob of skeletons or goblins!

  3. Can I ask what your thoughts are on the Roll Under Blackjack style system?

    Basically you aim to roll under your score (attribute, AV, ST, whatever), but as high as possible. For opposed rolls, both parties roll; if one passes and the other fails, the passer wins. Otherwise, the highest roll wins. Extra difficulty is achieved by raising the minimum roll required; bonuses can be granted to the score before rolling.

    Example: a fighter has AV (attack value) 12, and attacks something wearing chain (AC 4). They pass if they roll 5-12.

    It's rather handy, because the player can roll and instantly see if it was good enough and announce that they passed an attack check with a 4; that is better than the 2 needed to hit leather armour so the GM confirms a hit (without acknowledging the type of armour worn). Key benefit: it's all straight comparisons, no addition or subtraction required.

    It does run into issues when scores get very high, but it seems to work nicely for OD&D and B/X score ranges (let AV = Attack Bonus+10, and AC = ascending AC-10, but otherwise it's the same).
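[The roll-under blackjack check described in this comment can be sketched as follows; this is my own illustration of the commenter's description, with the minimum roll for an attack taken as defender AC + 1 per the AV 12 vs. AC 4 example above.]

```python
def blackjack_check(roll, score, minimum=1):
    """Roll-under 'blackjack': succeed if minimum <= roll <= score,
    with higher passing rolls counting as better results."""
    return minimum <= roll <= score

# The fighter example: AV 12 vs. chain (AC 4) passes on rolls of
# 5 through 12, i.e. minimum = defender AC + 1 = 5.
```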

    1. I can't say as I've ever thought about that. At first blush, the multiple conversions, and two comparisons per roll, seem fiddly.

  4. I'm currently running Whitehack (a heavily modified S&W: White Box), so all the conversions are already done, and for published modules it's no worse than converting between ascending and descending AC. The multiple (usually one, sometimes two) comparisons are actually pretty easy in practice and quick to do.

    I was quite impressed at how intuitive it felt when I tried it.

  5. I've been using Target 20 in AD&D (not entirely perfect but close enough) and when running Astonishing Swordsmen and Sorcerers of Hyperborea. In that game fighting ability is a separate # that perfectly fills in the Target algorithm (a base 9 AC system doesn't hurt either).

    1. Awesome. That's good to know!

    2. I was hoping someone used this in AS&SH before me. Good to know it works well.