An alternative similarity measure is the Pearson correlation coefficient: the similarity \(\sigma_{i,j}\) of nodes \(i\) and \(j\) is the correlation coefficient between the vectors \(A_i\) and \(A_j\)
\[
\sigma_{i,j} = \frac{\operatorname{cov}(A_i, A_j)}{\operatorname{sd}(A_i) \, \operatorname{sd}(A_j)} = \frac{\sum_k (A_{i,k} - \langle A_i \rangle) (A_{j,k} - \langle A_j \rangle)}
{\sqrt{\sum_k (A_{i,k} - \langle A_i \rangle)^{2}} \sqrt{\sum_k (A_{j,k} - \langle A_j \rangle)^{2}}}
\]
The measure runs from \(-1\) (maximum negative correlation, that is, maximum dissimilarity) to \(1\) (maximum positive correlation, that is, maximum similarity). Notice that values close to \(0\) indicate no correlation, hence neither similarity nor dissimilarity.
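As a sketch of how this similarity can be computed in practice, the snippet below uses `numpy.corrcoef` on the rows of a small, hypothetical 0-1 adjacency matrix; the matrix and the node indices are illustrative assumptions, not data from the text.

```python
import numpy as np

# A small hypothetical undirected graph, given as a 0-1 adjacency matrix.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 0, 1],
    [0, 0, 1, 1, 0],
])

def pearson_similarity(A, i, j):
    """Similarity sigma_{i,j}: Pearson correlation of rows A_i and A_j."""
    return np.corrcoef(A[i], A[j])[0, 1]

# Correlation between the neighborhood vectors of nodes 0 and 1.
sigma = pearson_similarity(A, 0, 1)
```

Here `sigma` is negative: nodes 0 and 1 share only one neighbor, fewer than expected by chance given their degrees, so they come out as mildly dissimilar.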
Again, since the vectors involved are 0-1 vectors, it is not difficult to see that the numerator of the correlation coefficient, that is, the covariance between vectors \(A_i\) and \(A_j\), is:
\[
\operatorname{cov}(A_i, A_j) = n_{i,j} - \frac{k_i k_j}{n}
\]
Notice that \(k_i k_j/n\) is the expected number of common neighbors between \(i\) and \(j\) if they chose their neighbors at random: the probability that a random neighbor of \(i\) is also a neighbor of \(j\) is \(k_j / n\), hence the expected number of common neighbors between \(i\) and \(j\) is \(k_i k_j/n\).
Hence a positive covariance, that is, a similarity between \(i\) and \(j\), holds when \(i\) and \(j\) share more neighbors than we would expect by chance, while a negative covariance, that is, a dissimilarity between \(i\) and \(j\), occurs when \(i\) and \(j\) have fewer neighbors in common than we would expect by chance.
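The identity above is easy to check numerically. The sketch below, using a small hypothetical 0-1 adjacency matrix chosen for illustration, compares the centered dot product (the numerator of \(\sigma_{i,j}\)) against \(n_{i,j} - k_i k_j / n\):

```python
import numpy as np

# A small hypothetical undirected graph, given as a 0-1 adjacency matrix.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 0, 1],
    [0, 0, 1, 1, 0],
])
n = A.shape[0]
k = A.sum(axis=1)  # node degrees k_i

i, j = 0, 1
n_ij = int(A[i] @ A[j])  # number of common neighbors of i and j

# Left side: the numerator of sigma_{i,j}, i.e. the centered dot product.
lhs = float(np.dot(A[i] - A[i].mean(), A[j] - A[j].mean()))
# Right side: common neighbors minus the chance expectation k_i k_j / n.
rhs = n_ij - k[i] * k[j] / n
```

For this pair, `lhs` and `rhs` agree and are negative: the two nodes share one neighbor while \(k_0 k_1 / n = 6/5\) would be expected by chance, so the covariance signals dissimilarity.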