The Elo method is named after Arpad Elo, a physics professor and strong chess player. In 1970, FIDE, the World Chess Federation, adopted the Elo rating system.

The method works as follows. Suppose that players \(i\) and \(j\) play a match. Let \(s_{i,j}\) be the actual score of \(i\) in the match against \(j\). We have that:

\[ s_{i,j} = \begin{cases} 1 & \text{if } i \text{ wins} \\ 1/2 & \text{if the match is drawn} \\ 0 & \text{if } i \text{ loses} \end{cases} \]

Notice that the actual score \(s_{j,i}\) of \(j\) in the match against \(i\) is \(1 - s_{i,j}\). Let \(\mu_{i,j}\) be the expected score of \(i\) in the match against \(j\). We have that:

\[ \mu_{i,j} = \frac{1}{1 + 10^{-(r_i - r_j) / \zeta}} = \frac{10^{r_i / \zeta}}{10^{r_i / \zeta} + 10^{r_j / \zeta}} \]

where \(r_i\) and \(r_j\) are the ratings of \(i\) and \(j\) before the match and \(\zeta\) is a constant (FIDE uses \(\zeta = 400\)). Notice that the expected score \(\mu_{j,i}\) of \(j\) in the match against \(i\) is \(1 - \mu_{i,j}\).
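To get a feel for the expected-score formula, here is a small R helper (the function name is illustrative; \(\zeta = 400\) matches the value used in the code below):

```r
# Expected score of a player rated ri against a player rated rj
expected_score <- function(ri, rj, zeta = 400) {
  1 / (1 + 10^(-(ri - rj) / zeta))
}

expected_score(400, 0)  # a 400-point favourite expects 10/11, about 0.909
expected_score(0, 0)    # equally rated players expect 0.5 each
# the two expected scores of a match always sum to 1
expected_score(400, 0) + expected_score(0, 400)
```

Note how a rating gap of one \(\zeta\) (here 400 points) makes the favourite a 10-to-1 expected winner.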

We assume that initially all player ratings are equal to 0. When players \(i\) and \(j\) play a match, their ratings \(r_i\) and \(r_j\) are updated with the following rule:

\[ \begin{array}{lll} r_{i} & \leftarrow & r_i + \kappa (s_{i,j} - \mu_{i,j}) \\ r_j & \leftarrow & r_j + \kappa (s_{j,i} - \mu_{j,i}) \end{array} \]

where \(\kappa\) is a constant.
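As a concrete check of the update rule, here is a small R sketch of a single-match update (the function name is illustrative; the defaults \(\zeta = 400\) and \(\kappa = 25\) match the code later in this document):

```r
# One Elo update for a single match between i and j
# (sij is the actual score of i; zeta and kappa as in the text)
elo_update <- function(ri, rj, sij, zeta = 400, kappa = 25) {
  mu <- 1 / (1 + 10^(-(ri - rj) / zeta))
  c(i = ri + kappa * (sij - mu),
    j = rj + kappa * ((1 - sij) - (1 - mu)))
}

# two equally rated players, i wins: i gains 12.5 points, j loses 12.5
elo_update(0, 0, 1)
```

Since \(s_{j,i} - \mu_{j,i} = -(s_{i,j} - \mu_{i,j})\), the two rating changes are always equal and opposite.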

The Elo thesis is:

If a player performs as expected, they gain nothing. If they perform better than expected, they are rewarded, while if they perform worse than expected, they are penalized.

According to the movie The Social Network by David Fincher, Elo's method formed the basis for rating people on Zuckerberg's Web site Facemash, the predecessor of Facebook.

Dataset

Challenges

  1. An interesting property of Elo’s ratings is that the sum of all player ratings is always 0. Formally show this property. (Hint: use the fact that \(s_{i,j} + s_{j,i}=1\) and \(\mu_{i,j} + \mu_{j,i} =1\))
  2. Write an optimized version of the Elo rating function below (also exploiting the above property)
  3. Compare the execution time of the original and optimized version on this chess dataset
library(readr)
games = read_csv("training_data.csv")
##  Elo
# INPUT
# games: a *dataframe* with matches
# z: logistic parameter (the constant zeta above)
# k: update factor (the constant kappa above)
# OUTPUT
# r: rating vector
elo_slow = function(games, z = 400, k = 25) {
  
  # players are identified by positive integer IDs;
  # n, the largest ID, gives the length of the rating vectors
  players = unique(c(games$White, games$Black))
  n = max(players)

  # number of games
  m = nrow(games)
  
  # old rating vector
  rold = rep(0, n)
  
  # new rating vector
  rnew = rep(0, n)
  
  for (i in 1:m) {

    # White player
    # compute update
    score = games[[i, "Score"]]
    spread = rold[games[[i, "White"]]] - rold[games[[i, "Black"]]]
    mu = 1 / (1 + 10^(-spread / z))
    update = k * (score - mu)
    # update rating
    rnew[games[[i,"White"]]] = rold[games[[i,"White"]]] + update
    
    # Black player
    # compute update
    score = 1 - games[[i, "Score"]]
    spread = rold[games[[i, "Black"]]] - rold[games[[i, "White"]]]
    mu = 1 / (1 + 10^(-spread / z))
    update = k * (score - mu)
    # update rating
    rnew[games[[i,"Black"]]] = rold[games[[i,"Black"]]] + update
    
    # update old ratings
    rold[games[[i,"White"]]] = rnew[games[[i,"White"]]]
    rold[games[[i,"Black"]]] = rnew[games[[i,"Black"]]]
  }
  return(rnew)
}
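For reference, one possible optimized variant (a sketch, not necessarily the intended solution to the challenge): it extracts the data-frame columns into plain vectors once, keeps a single rating vector, and, since the two updates of a game are equal and opposite (the zero-sum property of challenge 1), computes \(\mu\) only once per game.

```r
elo_fast = function(games, z = 400, k = 25) {
  # plain vectors: indexing them is much cheaper than indexing a data frame
  white = games$White
  black = games$Black
  score = games$Score
  # single rating vector, initially all 0
  r = rep(0, max(c(white, black)))
  for (i in seq_along(score)) {
    # expected score of White, computed once per game
    mu = 1 / (1 + 10^(-(r[white[i]] - r[black[i]]) / z))
    update = k * (score[i] - mu)
    r[white[i]] = r[white[i]] + update
    # Black's update is the negative of White's (zero-sum property)
    r[black[i]] = r[black[i]] - update
  }
  r
}
```

The two versions can then be timed on the chess dataset, for instance with system.time(elo_slow(games)) versus system.time(elo_fast(games)).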