SALO now uses a new method called MARKOV to convert ability estimates into estimates of value over replacement. The acronym MARKOV stands for “MARKOV Approximation for Reasonable Konstruction of Overall Value”.

MARKOV works by computing the full distribution of game outcomes expected with each player in the lineup, under standardized conditions, from a representation of the game as a discrete-time Markov chain whose states are home-team leads (and periods). The model, together with some simplifying, standardizing assumptions, provides the transition probability between any two states in any half-second of play. The probability of each state at the end of the game is found by repeated multiplication of the transition matrix.
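The chain-propagation step can be sketched as follows. This is a minimal illustration, not SALO's actual code: the per-tick goal probabilities, the lead truncation, and the omission of period structure are all simplifying assumptions made here for brevity.

```python
import numpy as np

MAX_LEAD = 10                    # truncate home-team leads to [-10, +10]
N_STATES = 2 * MAX_LEAD + 1      # state i corresponds to lead (i - MAX_LEAD)
TICKS = 60 * 60 * 2              # half-second steps in a 60-minute game

# Illustrative per-tick goal probabilities; in MARKOV these would come
# from SALO's shot model under the standardized conditions.
p_home_goal, p_away_goal = 0.0004, 0.0004

# Transition matrix: each tick, the lead rises, falls, or stays put.
P = np.zeros((N_STATES, N_STATES))
for i in range(N_STATES):
    if i + 1 < N_STATES:
        P[i, i + 1] = p_home_goal    # home goal: lead increases
    if i - 1 >= 0:
        P[i, i - 1] = p_away_goal    # away goal: lead decreases
    P[i, i] = 1.0 - P[i].sum()       # no goal this tick

start = np.zeros(N_STATES)
start[MAX_LEAD] = 1.0                # game starts tied

# Repeated multiplication of the transition matrix gives the
# distribution over leads at the final horn.
end = start @ np.linalg.matrix_power(P, TICKS)
p_home_win = end[MAX_LEAD + 1:].sum()
```

With symmetric goal probabilities, as here, the end-of-game distribution is symmetric about a tie; plugging in player-specific probabilities shifts it.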

Simplifying assumptions include:

The last assumption is required because, although SALO accounts for how many skaters each team has on ice in estimating shot probability, it does not yet include a model of why there may be more or fewer skaters on ice at any time.

Players’ expected win totals over 82 games under the above conditions are calculated; the same is done for a synthetic replacement player, drawn from the prior for player ability given zero games played; and the difference is presented as the player’s value over replacement.
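The final step reduces to a simple difference of expected win totals. A hypothetical sketch, where the two win probabilities are assumed to come from the chain computation above:

```python
def vorp(p_win_with_player, p_win_with_replacement, games=82):
    """Value over replacement: difference in expected wins over a season
    between a lineup with the player and one with a replacement-level
    player, both evaluated under the same standardized conditions."""
    return games * (p_win_with_player - p_win_with_replacement)

# e.g. a player whose presence lifts per-game win probability
# from .500 to .520 over 82 games:
example = vorp(0.52, 0.50)
```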

To generate error estimates for VORP, the procedure is repeated for each of the Monte Carlo draws taken in fitting the model.

The procedure is accelerated via the observation that value estimates from MARKOV are virtually linear in the SALO ability estimates they are calculated from. In practice, for each Monte Carlo draw, the full MARKOV procedure is used only to find wins above average for the most extreme player; a linear approximation is then constructed through that point and the point of zero wins above average for an average player.

A longer write-up of MARKOV may be expected eventually.