Power

In some circumstances centrality, the quality of being connected to central nodes, has limited utility in predicting the locus of power in networks. Consider exchange networks, in which the relationship between nodes involves the transfer of valued items (e.g., information, time, money, energy). A set of exchange relations is positive if exchange in one relation promotes exchange in others, and negative if exchange in one relation inhibits exchange in others. In negative exchange networks, power comes from being connected to those who have few options; being connected to those who have many options reduces one's power.

Think, for instance, of a social network in which time is the exchanged value. Imagine that every actor has a limited amount of time to listen to others and that each actor divides this time among its neighbors. Clearly, the exchange of time in one relation precludes the exchange of the same time in other relations. Which actors receive the most attention? Those connected to many neighbors with few options, since such neighbors devote almost their full attention to them. On the other hand, actors connected to few neighbors with many options receive little consideration, since their neighbors are mostly busy with others.

The power thesis is as follows:

A node is powerful if it is connected with powerless nodes.
Power $x_{i}$ of node $i$ is given by: $$x_i = \sum_k \frac{a_{k,i}}{x_k}$$ that is: $$x = x^{\div} A$$ where $x^{\div}$ denotes the vector whose entries are the reciprocals of the entries of $x$. The above states two properties of power:
  1. the larger the degree of a node, the larger its power; and
  2. the smaller the power of neighbors of a node, the larger its power.
The first property seems reasonable: the more ties an actor has, the more powerful the actor is. The second property is what characterizes power: for an equal number of ties, actors that are linked to powerless others are powerful; on the other hand, actors that are tied to powerful others are powerless.
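As a small numerical check, consider a "bowtie" graph: two triangles sharing a central node (a hypothetical example, chosen because its power vector can be worked out by hand). The central node, which has four powerless neighbors, gets power $2\sqrt{3} \approx 3.46$, while each peripheral node gets $2/\sqrt{3} \approx 1.15$. A sketch in Python with NumPy (the code later in this chapter is in R):

```python
import numpy as np

# Adjacency matrix of a "bowtie": two triangles sharing node 2
# (a hypothetical example graph).
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 1, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

# Candidate power vector: peripheral nodes 2/sqrt(3), central node 2*sqrt(3)
x = np.array([2/np.sqrt(3)]*2 + [2*np.sqrt(3)] + [2/np.sqrt(3)]*2)

# Verify the power equation x = x^div A, i.e. x_i = sum_k a_{k,i} / x_k
print(np.allclose(x, (1/x) @ A))  # True
```

The check confirms both properties at once: the central node is powerful because it has many ties and all of them lead to powerless nodes.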

Power can be computed with the following iterative process. Let $x_0 = e$, where $e$ is the vector of all 1's. For $k \geq 1$: $$ x_k = x_{k-1}^{\div} A $$ For $k \geq 0$, let $r_k = x_{2k}$ and $c_k = x_{2k+1}$.

  1. If $r_k$ converges, then it converges to a vector $r$ such that $\alpha r = r^{\div} A$, with $\alpha > 0$. It follows that $\sqrt{\alpha} r = (\sqrt{\alpha} r)^{\div} A$, hence $\sqrt{\alpha} r$ is a solution of the power equation $x = x^{\div} A$.
  2. Similarly, if $c_k$ converges, then it converges to a vector $c$ such that $\beta c = c^{\div} A$, with $\beta > 0$, and $\sqrt{\beta} c$ is again a solution of the power equation $x = x^{\div} A$.
It holds that solutions $r$ and $c$ are proportional: $r = \gamma c$, for some $\gamma > 0$. For instance:
  1. the 1st iteration is $x_1 = e A$, whose entry for a given node $i$ is the degree of $i$;
  2. the 2nd iteration is $x_2 = (e A)^{\div} A$, whose entry for a given node $i$ is the sum of the reciprocals of the degrees of the neighbors of $i$.
In general, the power of node $i$ is positively influenced by the degrees of nodes that are reachable from $i$ with an even number of edges and negatively influenced by the degrees of nodes that are reachable from $i$ with an odd number of edges. Nodes that are reachable from $i$ both with a path of even length and with a path of odd length play a dual role with respect to $i$. Notice, however, that this influence decays with the length of the path: it is stronger when the path is shorter.
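The whole iterative scheme is short to sketch in code. The following Python/NumPy illustration runs it on a hypothetical "bowtie" graph (two triangles sharing node 2), which is regularizable, so the method converges:

```python
import numpy as np

# Adjacency matrix of a bowtie: two triangles sharing node 2
# (a hypothetical example graph).
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 1, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

n = A.shape[0]
x0 = np.zeros(n)   # x_{2k}
x1 = np.ones(n)    # x_{2k+1}
x2 = np.ones(n)    # x_{2k+2}
# iterate x_k = x_{k-1}^div A, comparing iterates two steps apart
while np.abs(x2 - x0).sum() > 1e-10:
    x0, x1 = x1, x2
    x2 = (1/x2) @ A
# now alpha x2 = x2^div A; rescale so that x solves x = x^div A
alpha = ((1/x2) @ A)[0] / x2[0]
x = np.sqrt(alpha) * x2

print(np.round(x, 4))  # approx [1.1547 1.1547 3.4641 1.1547 1.1547]
```

The first iterate is the degree vector, as noted above, and the limit (after rescaling by $\sqrt{\alpha}$) satisfies the power equation: the central node gets power $2\sqrt{3}$, the peripheral nodes $2/\sqrt{3}$.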

By the Sinkhorn-Knopp theorem (on symmetric matrices), power exists, and the corresponding iterative method converges, if and only if the graph $G_A$ corresponding to matrix $A$ is regularizable. The regularizability problem is equivalent to the following linear programming feasibility problem: $$ \begin{array}{l} B w = re \\ w \geq 1 \end{array} $$ where $B$ is the incidence matrix of $G_A$, that is, an $n \times m$ matrix such that $b_{i,l} = 1$ if node $i$ belongs to edge $l$ and $b_{i,l} = 0$ otherwise, with $n$ and $m$ the number of nodes and edges of the graph, $w$ is a vector of length $m$ of edge weight variables, and $r$ is a variable for the regularization degree.

If matrix $A$ is not regularizable, it can be fully perturbed as follows: $$A^{F}_{\epsilon} = A + \epsilon E$$ where $E$ is the matrix of all 1's. A less intrusive diagonal perturbation is sufficient: $$A^{D}_{\epsilon} = A + \epsilon I$$ where $I$ is the identity matrix. This corresponds to adding a loop with weight $\epsilon$ to each node of the graph.
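The feasibility problem can be handed to any LP solver. A sketch in Python using `scipy.optimize.linprog` (the bowtie and star graphs below are hypothetical examples: the first is regularizable, the second is not):

```python
import numpy as np
from scipy.optimize import linprog

def regularizable(edges, n):
    """Check regularizability by solving the LP feasibility
    problem B w = r e, w >= 1 (a sketch using SciPy)."""
    m = len(edges)
    # incidence matrix B: b[i, l] = 1 if node i belongs to edge l
    B = np.zeros((n, m))
    for l, (i, j) in enumerate(edges):
        B[i, l] = B[j, l] = 1
    # variables (w_1, ..., w_m, r); equality constraints B w - r e = 0
    A_eq = np.hstack([B, -np.ones((n, 1))])
    b_eq = np.zeros(n)
    bounds = [(1, None)] * m + [(1, None)]   # w >= 1, r >= 1
    res = linprog(np.zeros(m + 1), A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return bool(res.success)

# bowtie (two triangles sharing node 2): regularizable
bowtie = [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]
print(regularizable(bowtie, 5))   # True
# star with three leaves: not regularizable
star = [(0, 1), (0, 2), (0, 3)]
print(regularizable(star, 4))     # False
```

For the star, the constraints force each leaf's single edge weight to equal $r$ while the center's weighted degree is $3r$, which is impossible with $r \geq 1$; this is exactly why the iterative method fails on stars unless the matrix is perturbed.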

Code

The following user-defined function regularify checks whether a graph is regularizable and, if so, returns the regularization solution. It uses the lpSolve and lpSolveAPI packages for solving linear, integer and mixed integer programs. In order to find a positive solution $w > 0$, the program solves the linear problem $\hat{B} (\hat{w}, r) = -d$, with $\hat{w} \geq 0$, where $\hat{B}$ is the incidence matrix $B$ with one additional column $-e$, $d$ is the vector of node degrees, and $(\hat{w}, r)$ is a vector with weight variables $\hat{w}$ and an additional variable $r$ for the regularization degree. Setting $w = \hat{w} + 1$, the problem corresponds to $B w = r e$ with the constraint $w \geq 1$.

library(igraph)
library(lpSolve)
library(lpSolveAPI)

regularify = function(g) {
  n = vcount(g)
  m = ecount(g)
  E = get.edges(g, E(g))
  B = matrix(0, nrow = n, ncol = m)
  # build incidence matrix
  for (i in 1:m) {
    B[E[i,1], i] = 1
    B[E[i,2], i] = 1
  }
  # objective function
  obj = rep(0, m + 1)
  # constraint matrix
  con = cbind(B, rep(-1, n))
  # direction of constraints
  dir = rep("=", n)
  # right hand side terms
  rhs = -degree(g)
  # solve the LP problem
  sol = lp("max", obj, con, dir, rhs)
  # get solution
  if (sol$status == 0) {
    s = sol$solution
    # weights
    w = s[1:m] + 1
    # weighted degree
    d = s[m+1]
  }
  # return the solution
  if (sol$status == 0) {
    return(list(weights = w, degree = d))
  } else {
    return(NULL)
  }
}

The following user-defined function power computes power with the iterative method described above, given a regularizable adjacency matrix A and a precision parameter t:

# Compute power x = (1/x) A
# INPUT
# A = graph adjacency matrix
# t = precision
# OUTPUT
# A list with:
# vector = power vector
# iter = number of iterations
power = function(A, t) {
  n = dim(A)[1]
  # x_2k
  x0 = rep(0, n)
  # x_2k+1
  x1 = rep(1, n)
  # x_2k+2
  x2 = rep(1, n)
  diff = 1
  eps = 1/10^t
  iter = 0
  while (diff > eps) {
    x0 = x1
    x1 = x2
    x2 = (1/x2) %*% A
    diff = sum(abs(x2 - x0))
    iter = iter + 1
  }
  # it holds now: alpha x2 = (1/x2) A
  alpha = ((1/x2) %*% A[,1]) / x2[1]
  # hence sqrt(alpha) * x2 = (1/(sqrt(alpha) * x2)) A
  x2 = sqrt(alpha) %*% x2
  return(list(vector = as.vector(x2), iter = iter))
}