Author:
Maksim Tsvetovat | Open Health Network | United States
In network analysis, the notion of time has always been the elephant in the room. In some of the earliest writings on SNA (Granovetter, Krackhardt, Simon, etc.), the idea that an edge of a network is actually a temporal phenomenon was addressed in various ways, and then brushed away as an inconvenience.
Krackhardt used a logarithmic frequency scale to encode the strength of ties. Granovetter talked about emotional energy expenditures in a unit of time as a proxy for tie closeness. Dunbar talked about innate cognitive limitations on the number of ties one can keep current, which is simply our ability to cope with incoming and outgoing data over time.
And what did we always have at the end? A binary edge. A real-valued edge, maybe, or a Bayesian edge, at best.
Then, we scrambled to re-assemble a temporal narrative from these heavily aliased slices -- hence the eternal debate of homophily vs. diffusion, questions of how and whether ties decay, the incredible complexity of SIENA models, as well as strange agent-based concoctions that I myself have perpetrated.
In this paper, I'd like to come back to the original notion of communication as a directed micro-behaviour happening in continuous, not discrete, time -- and, as such, to social networks as agglomerations of these continuous, fluid behaviours.
When we do that, disjoint findings can start falling into place. Centrality metrics acquire a temporal nature and allow us to differentiate roles not just by how many people you talk to, but by when you talk to them. We'll show temporal degree, betweenness, and eigenvector centralities. In fact, we believe that treating centrality as a temporal phenomenon resolves some of the critical interpretation issues. The diffusion vs. homophily debate resolves itself peacefully, and the two processes can live happily ever after. The "mysterious" Dunbar number becomes a straightforward arithmetic derivation. Mapping network ties to micro-behaviours becomes straightforward, without unwarranted determinism or the "crutch" of Bayesian inference.
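To make the idea of a time-sensitive centrality concrete, here is a minimal sketch of what a temporal degree centrality could look like: instead of counting edges in a static graph, we count the distinct alters a node has contacted within a sliding time window over a stream of communication events. The event format, the `window` parameter, and the function itself are my own illustrative assumptions, not the paper's definitions.

```python
from collections import defaultdict

def temporal_degree(events, t, window):
    """Temporal degree centrality at time t: the number of distinct
    alters each node has contacted in the window (t - window, t].
    `events` is a list of (sender, receiver, timestamp) micro-behaviours."""
    alters = defaultdict(set)
    for sender, receiver, ts in events:
        if t - window < ts <= t:
            alters[sender].add(receiver)
    return {node: len(contacted) for node, contacted in alters.items()}

# Hypothetical communication stream: who contacted whom, and when.
events = [
    ("a", "b", 1), ("a", "c", 2), ("b", "c", 5),
    ("a", "b", 8), ("c", "a", 9),
]

# The same actor's centrality differs depending on when we ask:
print(temporal_degree(events, t=5, window=5))   # early window
print(temporal_degree(events, t=10, window=5))  # late window
```

Sliding the window across the stream yields a centrality trajectory per node rather than a single number, which is the distinction between "how many people you talk to" and "when" made above.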
Why have we not done it this way before? We did not have the means to collect empirical data of this sort (now we do, thanks to cell phones and Twitter and Facebook), nor did we have the computing capacity to analyze such amounts of data and theorize about it.
In the spirit of bringing new and untested theories to INSNA conferences for your input and criticism (as I have reliably done for 10 years), I urge you to poke this idea full of holes.