![](https://reticular.hypotheses.org/files/2020/03/20200320_145902-scaled.jpg)
A story within a story: this is the only way I can explain the problem of visualizing complex networks.
The bigger story investigates an important question: what happens when we try to know something that we cannot know? The mere existence of this question is already capable of causing havoc. It assumes that we cannot know everything, that the horizon of Science is unattainable. Blasphemous to some, self-evident to others, the idea is, in any case, old. Leibniz’ dream has been put to death, and the killing caused riot in Maths Kingdom. Did puny humans learn the lesson? I’ll jump to a more interesting question: when we try to know the unknowable, what happens instead? Does the cake of knowledge refuse to enter the mouth of our cognition? Do we choke on it, incapable of chewing? Do we just eat a chunk and declare that it was enough? Do we even see this infinitely big cake? I suspect that in many situations, the feeling that we understand something has more to do with the familiarity we have with it, and the confidence that it will not cause unreasonable trouble.
I believe that we are delusional about the unknowable, and it makes us take the wrong courses of action. This issue is not purely epistemic: it is social, cultural, political, and psychological. It is a question of culture, practice, and materiality. I would like to put this hypothesis to a test, but such a thing is like climbing a mountain. I must take smaller steps. This is where complexity, as a topic, gets interesting. Complexity is a sophisticated concept in many ways, but it is also a simple notion about the failure of our powers to know. What do we do in front of an epistemic wall?
The smaller story is about visualizing complex networks. Visualizing non-complex networks is not a big deal. It is about following the connections. The semiotics of that affair are generic: respecting symmetries, limiting clutter by having fewer lines cross each other, preventing overlap between elements… Think of it as laying out a diagram nicely. But complex networks are something else. Those are too big to digest at once. There are too many connections to be able to follow one individually.
Visualizing a complex network generally relies on a placement algorithm, capable of arranging the nodes so that their positions tell us something about the topology. This way we can analyze the structure indirectly, by looking at where the nodes are. For instance, if they are packed in a certain area, it means that we have a cluster – in the topological sense.
For non-obvious reasons that would take time to formulate entirely, we do not really know what we see in networks. The most common family of layout algorithms, namely “force-directed placement algorithms” like Force Atlas 2, produces a kind of representation that is poorly understood. Computer scientists who create and evaluate these algorithms do not explain what they do, only how they work. People who use these algorithms do not have to understand them either. Using these algorithms is a self-legitimized cultural practice; at this point it is accepted well enough that it has become a tacit norm. Do not get me wrong: these algorithms produce highly usable results; we just do not know why. These excellent operational qualities explain why they remain in use, despite the constant critique that these algorithms are problematic. Unfortunately, the critics do not seem to understand these algorithms either.
I assume here that these layouts capture something of the topology, but we do not know exactly why. Some people believe that force-driven layouts do not tell us more than, say, a good clustering algorithm. I believe that they are wrong, but I have no evidence, and neither do they. Some people assume that what these algorithms capture has been studied in depth; but no. For instance the rationale provided by the gold standard, the LinLog algorithm, is flawed – I wrote about that here. I take this matter to be an out-of-fashion, but still open, question.
As there are two stories, my interest is double. On a practical level, I am on a quest to determine what we see in networks. This would help people make sense of their network maps, at least. But on a more general level, I am interested in the narratives that all sorts of people come up with to rationalize their networks. The computer scientists who publish algo papers. The engineers who implement these algorithms into code packages. The different fields that use these algorithms in practice. And even popular culture. From a techno-anthropological standpoint, it allows accounting for the black-boxing of our knowledge apparatus. Complex network visualization is a good case to study our cognitive, psychological, and social reaction to the unknowable.
One of the difficulties I face in my inquiries is simply the lack of source material. I have developed algorithms myself, so I have a sense of the issues faced in the process. But this process leaves no traces, as those often conflict with the narrative of the final paper, which tries to black-box as much as possible. Developing an algorithm is complicated enough that spending additional effort on taking readable notes is problematic. I did not document my own process either. I regret it, but I have learned the lesson.
I developed a new metric
This week I developed a metric for reading networks – Yay! It started as an attempt to find a quantitative answer to a simple question. It required some visualization. This gave me an idea that I decided to test. It provided good results, so I decided to evaluate it more systematically. It turned out it worked very well, so I black-boxed it as a metric to evaluate the quality of a layout in practice. I call it connected-closeness.
The only tool I used for this process was Observable, a Javascript-based notebook platform. Think of it as Jupyter notebooks, but online – hosted on their website and executed in your browser. This allowed me to intertwine text, visualization, computations and interactions.
I started out writing for a broad public, but then I realized that I was on my way to developing an actual metric, so I switched to a more technical writing style, focusing on documenting my process. At the end of the day, these notebooks contain a lot of information that is only useful to someone who wants to study the process in depth. But I will summarize it here.
My process consists of 8 notebooks (so far) and you can find the whole series there. It starts with small things and ends up with a self-contained, reusable implementation. You could read it all, in order, to get the complete story of the process.
The whole process is one of the two stories I can tell. The other story is the black-boxed one, the kind of narrative expected in a computer science paper: this is a new metric, it is better than all existing alternatives, here is evidence of that, now give me magic academic coins. If you are still reading this, you are interested in the process; I will expose it as briefly as I can.
What do force-driven layouts accomplish?
We know how these algorithms work: all nodes repel each other, and connected nodes also attract each other. The position of each node depends on the positions of all the others. The algorithm applies the forces step by step, all nodes moving at each step. At some point it converges to a state of (approximate) equilibrium. The final state depends on the initial positions, which are set at random: this is why the algorithm is called “non-deterministic”. From one run to another, the final state can be better or worse.
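To make this concrete, here is a minimal sketch of one such step. It does not use Force Atlas 2’s actual force expressions (each algorithm has its own); it only illustrates the general principle: repulsion between all pairs of nodes, attraction along edges, applied over and over from random initial positions.

```js
// Sketch of one force-directed step (illustrative constants, not Force Atlas 2's formulas).
// nodes: [{x, y}], edges: [[sourceIndex, targetIndex]]
function forceDirectedStep(nodes, edges, repulsion = 1000, attraction = 0.01, speed = 0.1) {
  const dx = nodes.map(() => 0);
  const dy = nodes.map(() => 0);
  // All pairs of nodes repel each other
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      const vx = nodes[j].x - nodes[i].x;
      const vy = nodes[j].y - nodes[i].y;
      const d2 = vx * vx + vy * vy || 1e-9;
      const f = repulsion / d2;
      dx[i] -= f * vx; dy[i] -= f * vy;
      dx[j] += f * vx; dy[j] += f * vy;
    }
  }
  // Connected nodes also attract each other
  for (const [s, t] of edges) {
    const vx = nodes[t].x - nodes[s].x;
    const vy = nodes[t].y - nodes[s].y;
    dx[s] += attraction * vx; dy[s] += attraction * vy;
    dx[t] -= attraction * vx; dy[t] -= attraction * vy;
  }
  // Move all nodes at once; repeating this reaches an approximate equilibrium
  nodes.forEach((n, i) => { n.x += speed * dx[i]; n.y += speed * dy[i]; });
}
```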
So these layouts are not like a statistical projection. We cannot tell what the position of a node means with a straightforward statement. But what can we know about the result?
The functioning of the algorithm tells us that it tries to put connected nodes closer. From there, it seems reasonable to conclude that connected nodes are indeed closer. But is it true? How much closer? For all networks, or only some?
Let us take an example: a network spatialized by Force Atlas 2.
![](https://reticular.hypotheses.org/files/2020/03/image.png)
Look at the long edges: those are connected pairs that are not close. So the layout did not succeed completely. Why it did not is the unanswered question that shows the limits of our understanding. But at least we can describe the situation.
If we just account for the distances between nodes, we can see that connected nodes are indeed closer. In that sense, the algorithm was effective. Note: the length unit is arbitrary.
| | Count | Mean length |
|---|---:|---:|
| All pairs of nodes | 63,190 | 179 |
| Connected pairs (edges) | 982 | 42 |
| Disconnected pairs | 62,208 | 181 |
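For reference, here is roughly how such a table can be computed from the node positions. This is a sketch with my own naming, assuming each node carries x and y coordinates:

```js
// Sketch: mean distances for connected vs. disconnected pairs.
// nodes: [{x, y}], edgeSet: Set of "i|j" keys with i < j
function meanDistances(nodes, edgeSet) {
  const connected = [], disconnected = [];
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      const d = Math.hypot(nodes[i].x - nodes[j].x, nodes[i].y - nodes[j].y);
      (edgeSet.has(i + "|" + j) ? connected : disconnected).push(d);
    }
  }
  const mean = a => a.reduce((s, v) => s + v, 0) / a.length;
  return {
    allPairs: mean(connected.concat(disconnected)),
    connectedPairs: mean(connected),
    disconnectedPairs: mean(disconnected)
  };
}
```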
So there is at least one statement that is true here: connected pairs are closer than disconnected pairs. If we count how the node pairs are distributed over different distances, we can evaluate other relevant statements. Here is a list.
- Connected nodes are closer on average: TRUE ✔
- All connected nodes are very close: FALSE ✘
- Most connected nodes are very close: TRUE ✔
- Most very close nodes are connected: FALSE ✘
- Nodes that are far away are disconnected: (mostly) TRUE ✔
- Disconnected nodes are split apart: FALSE ✘
Check this notebook for more details and interactions (e.g. re-run the layout to check how randomization affects those figures).
The problem with these statements is twofold. Firstly, they are not quantitative – but this is easy to fix. Secondly, they are not very informative. The simplest statement here would be that “nodes as close as X are connected”, but unfortunately it is false (regardless of X). The best we have here is that most connected nodes are very close. We can quantify that. But can we make it more informative?
Challenge accepted
My process was exactly how you imagine: I started with one case, and then scaled up in generality. I visualized what happens for one network and one layout in the second notebook of the series. Then I tried a different network and two layouts in the third notebook. At this point I realized that there was a problem. Let me explain.
The low hanging fruit at this point is to quantify the statement “most connected nodes are very close“. We must quantify the “most” and the “very close”, and the former depends on the latter. So I set up an interactive device to count and visualize the edges closer than a given distance (try it).
![](https://reticular.hypotheses.org/files/2020/03/image-1.png)
When you tinker with it, you realize that the distance captures a lot of edges very quickly – as expected from the statement that “most connected nodes are very close“. But to give this a meaning, we need a point of comparison.
Here I plot different measures as a function of the distance D. In black, the proportion of edges shorter than D. In blue, the proportion of node pairs shorter than D. As you can see, the latter rises much more slowly, and is a natural point of comparison. This is a good opportunity.
![](https://reticular.hypotheses.org/files/2020/03/image-4.png)
The proportion of node pairs is basically the same thing as the proportion of edges if they were distributed randomly. I call this the “expected proportion of edges shorter than D”, where “expected” means “in a similar but randomized situation”. We can then compare the actual proportion of edges to the expected proportion. This gives us the green curve, the proportion of edges shorter than D above expectations. It is equal to the difference between the black and the blue curve (in light green).
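In code, the three curves boil down to very little. A sketch, with my own naming (D is the distance threshold):

```js
// Sketch: proportion of edges shorter than D (black curve),
// proportion of node pairs shorter than D (blue curve, the "expected" proportion),
// and their difference (green curve).
// edgeLengths: distances of connected pairs; pairLengths: distances of all node pairs.
function curvesAt(D, edgeLengths, pairLengths) {
  const edgeProportion = edgeLengths.filter(d => d <= D).length / edgeLengths.length;
  const expectedProportion = pairLengths.filter(d => d <= D).length / pairLengths.length;
  return {
    edgeProportion,                                        // black curve
    expectedProportion,                                    // blue curve
    aboveExpectation: edgeProportion - expectedProportion  // green curve
  };
}
```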
The green curve is obviously null at both ends, so it has to reach a peak somewhere in between. The peak point is very interesting:
- The higher the green curve, the more “unexpected” edges are captured by the layout, and the more dramatic the statement we can make.
- It provides the precise distance where the layout is the most efficient, which is precious practical information for reading the map.
Identifying this point allows forging a quantitative and informative statement such as: “X% of edges are unexpectedly shorter than D“, where X is as high as possible. This is what I aim at, and it looks promising, but there is an issue.
In my third notebook I try a random layout, to have a point of comparison. And then this happens:
![](https://reticular.hypotheses.org/files/2020/03/image-5.png)
Naturally, for any distance, the number of connected pairs is as random as expected. The blue curve sticks to the black curve; the green is flat; there is no significant bump. There is no special distance. What if my test case was a favorable case, but the metric fails with other networks, other layouts?
From one case to many
Notebooks are great for that, and I had a particularly great time with Observable: you can quite easily navigate the ladder of abstraction. In my fourth notebook I conducted a systematic benchmark of the metrics using 14 different network generators and 7 different layout algorithms. And for each network+layout pair, I generated 100 different cases, for a total of almost 10,000 cases. I visualized the behavior of different metrics across this table, generating plots such as this one:
![](https://reticular.hypotheses.org/files/2020/03/image-6.png)
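Schematically, the benchmark amounts to a triple loop. In this sketch the generator and layout lists, and the measure function, are placeholders standing in for the notebook’s actual ones:

```js
// Sketch of the benchmark: every network generator × every layout × 100 runs.
// `generators` and `layouts` are arrays of functions (placeholders for the 14
// generators and 7 layouts); `measure` stands for the per-case computation above.
function runBenchmark(generators, layouts, measure, runs = 100) {
  const results = [];
  for (const generate of generators) {
    for (const layout of layouts) {
      for (let run = 0; run < runs; run++) {
        const network = generate(); // networks may be random: each run differs
        layout(network);            // layouts are non-deterministic too
        results.push({ generator: generate.name, layout: layout.name, run, curve: measure(network) });
      }
    }
  }
  return results;
}
```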
Just visually, I realized that the metric was very consistent in all cases except for random layouts and, sometimes, random networks.
![](https://reticular.hypotheses.org/files/2020/03/image-22.png)
Look at the little circles: they stack up most of the time even though the network is generated randomly, and the layout is non-deterministic. This is not obvious, so let me illustrate that. You can also try by yourself.
![](https://reticular.hypotheses.org/files/2020/03/image-7.png)
Here you see four different random networks, where edges have been generated with a probability of 5%, spatialized with Force Atlas 2. The curve profiles are nevertheless very similar, the optimal distance is basically at the same point (115, 120, 120 and 100) and the maximal proportion of unexpected edges is consistent (60%, 55%, 55% and 50%). Even though these networks are random and the layout non-deterministic, the statement we can formulate about the different cases is almost the same.
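For reference, a random network of this kind can be generated and spatialized in a few lines. This sketch assumes the graphology and graphology-layout-forceatlas2 packages, and is not necessarily the notebook’s exact setup:

```js
// Sketch: an Erdős–Rényi random network (edge probability 5%),
// then spatialized with Force Atlas 2.
import Graph from "graphology";
import forceAtlas2 from "graphology-layout-forceatlas2";

function randomNetwork(n = 100, p = 0.05) {
  const g = new Graph({ type: "undirected" });
  for (let i = 0; i < n; i++) {
    // Random initial positions: the layout is non-deterministic by design
    g.addNode(String(i), { x: Math.random(), y: Math.random() });
  }
  for (let i = 0; i < n; i++) {
    for (let j = i + 1; j < n; j++) {
      if (Math.random() < p) g.addEdge(String(i), String(j));
    }
  }
  forceAtlas2.assign(g, { iterations: 200 });
  return g;
}
```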
All of that is nice, but I still have the “flat curve” problem, and even worse, as the benchmark unfortunately reveals a different but connected problem.
Fixing unexpected issues
Certain curves have a plateau on top. Check this out:
![](https://reticular.hypotheses.org/files/2020/03/image-9.png)
This curve is for two cliques with just one bridge between them. The plateau corresponds to this long bridge.
![](https://reticular.hypotheses.org/files/2020/03/image-10.png)
The problem here is that even though the max of the curve is pretty clear, there are multiple valid distances. At first I just picked the shortest one, as it carries more information. But this does not work in practice: micro-bumps on the curve impose a strict answer. With no tie-breaker, the micro-bumps decide the winner, yet there is a whole range of other distances that are almost as valid and still very different. The peak is not a peak, and its location is not a point.
I fixed this issue by using a tolerance parameter, epsilon, which is very small. It works this way: I pick the smallest distance whose value is within epsilon of the maximum of the curve. That apparently works.
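In code, the rule is short; this sketch follows the same logic as the find_Delta_max function in the final implementation below:

```js
// Sketch: pick the smallest distance whose C is within epsilon of the maximum.
// curve: [{Delta, C}] sorted by Delta; epsilon: a small tolerance, e.g. 0.03
function pickDeltaMax(curve, epsilon) {
  const cMax = Math.max(...curve.map(d => d.C));
  return curve.find(d => d.C >= (1 - epsilon) * cMax).Delta;
}
```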
The tolerance does not solve the problem of the flat-to-zero curve, though. In that case, picking a distance would be like declaring a winner in a race where no one has left the starting line. This is not a maths problem. It is a design problem.
Designing an algorithm
In my fifth notebook I redesigned the algorithm. This step was probably the most important to document.
So far I had been tinkering with equations and data, calling things the way they made sense to me at the moment. This is not my first tool, and I have come to realize that if you do not have a (re-)design step, your tool gets crippled by an upside-down logic. So far I had started from the internal constraints of measuring a layout, and progressed toward something operational. The actual user will follow the same path in reverse: she will see the metric first, and come to understand the underlying constraints by applying it.
In this post I have refrained from naming the different quantities: this is because they have different names before and after the redesign. Here is an example. The “green curve” plays the role of a quality metric, so I named it Q. This makes sense for an engineer, but not for a user, as Q does not say anything about what it represents. As this quantity is pivotal to understanding the whole construction, I finally gave it a literal name, “connected-closeness“, noted “C” accordingly. My fifth notebook details my rationale for these decisions.
The redesign is about the naming of the different quantities, and how to communicate them the best way possible. It is also about the mathematical formalism, featured below.
![](https://reticular.hypotheses.org/files/2020/03/image-11.png)
![](https://reticular.hypotheses.org/files/2020/03/image-12.png)
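In plain notation, the core of the formalism reads as follows (with E%(Δ) the proportion of edges shorter than Δ, and p%(Δ) the proportion of node pairs shorter than Δ, as in the implementation below):

```latex
C(\Delta) = E_{\%}(\Delta) - p_{\%}(\Delta),
\qquad
C_{\max} = \max_{\Delta} C(\Delta),
\qquad
\Delta_{\max} = \min \left\{ \Delta \;:\; C(\Delta) \ge (1 - \varepsilon)\, C_{\max} \right\}
```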
The design is also about the graphical appearance of the metric. It matters a lot, as the special distance, now called Δmax, has to be drawn onto the network map. Check the result below, featuring the classic C. elegans dataset.
![](https://reticular.hypotheses.org/files/2020/03/image-13.png)
Finally, the design is about political decisions. I know, it sounds overly dramatic; but there is a point to make. I decided to refuse to declare a Δmax when the “top of the curve” Cmax is below 10%, as in the “flat-to-zero” curve. This is not a mathematical decision, as in practice there is always a Δmax, but a design decision. Indeed, if I allowed a Δmax in a situation where it is blatantly meaningless, I would support misinterpretations. This algorithm will not be that docile: it refuses to communicate a value when that value is meaningless. Check this out:
![](https://reticular.hypotheses.org/files/2020/03/image-15.png)
One last word on design; it is, famously, iterative. After I redesigned the algorithm, I had to redo my fourth notebook entirely in order to have statistics that feature the right elements of language and formalism – including the code. This became my sixth notebook.
Statistical justification
My seventh notebook focused on highlighting interesting facts about connected-closeness. It consists of a statistical analysis of the data I had generated. It explores how the behavior of the metric relates to intuition and the knowledge we have on networks.
For instance, it captures very well the fact that force-directed layouts succeed better when there is a community structure. The chart below features the maximum connected-closeness for different settings of a simple stochastic block model.
![](https://reticular.hypotheses.org/files/2020/03/image-16.png)
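The block model itself is simple to reproduce. Here is a plain sketch, with parameter names of my own (the chart’s actual settings may differ):

```js
// Sketch: a simple stochastic block model — nodes split into equal-sized blocks,
// with probability pIn for edges inside a block and pOut for edges across blocks.
function stochasticBlockModel(n = 100, blocks = 4, pIn = 0.2, pOut = 0.01) {
  const block = i => i % blocks;
  const edges = [];
  for (let i = 0; i < n; i++) {
    for (let j = i + 1; j < n; j++) {
      const p = block(i) === block(j) ? pIn : pOut;
      if (Math.random() < p) edges.push([i, j]);
    }
  }
  return edges;
}
```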
It also captures that these algorithms are better on sparse networks than dense networks. The chart below features random networks with different settings.
![](https://reticular.hypotheses.org/files/2020/03/image-18.png)
It also confirms that “bad” layouts are worse. Here “bad” means either using the wrong settings, or just the random layout.
![](https://reticular.hypotheses.org/files/2020/03/image-20.png)
![](https://reticular.hypotheses.org/files/2020/03/image-21.png)
Optimization
This was not the end of the journey, as my implementation during these tests was naive. I did not bother to make it efficient, and unfortunately it required a number of computations proportional to the square of the number of nodes, because I was looking at all node pairs, which prevents the algorithm from finishing on large networks. My eighth notebook exposes the algorithmic techniques I considered to improve performance.
I finally settled on a self-contained Javascript implementation of the algorithm, requiring fewer computations (proportional to the number of edges). Find it below, for your curiosity.
```js
computeConnectedCloseness = function(g, settings){
// Default settings
settings = settings || {}
settings.epsilon = settings.epsilon || 0.03; // 3%
settings.grid_size = settings.grid_size || 10; // This is an optimization thing, it's not the graphical grid
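// Note: this cell assumes a graphology-style Graph `g` (order, size, nodes(),
// edges(), source(), target(), getNodeAttributes()) and that `d3` is in scope
// for d3.max/d3.min, as it is in the Observable notebooks.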
const pairs_of_nodes_sampled = sample_pairs_of_nodes();
const connected_pairs = g.edges().map(eid => {
const n1 = g.getNodeAttributes(g.source(eid));
const n2 = g.getNodeAttributes(g.target(eid));
const d = Math.sqrt(Math.pow(n1.x-n2.x, 2)+Math.pow(n1.y-n2.y, 2));
return d;
})
// Grid search for C_max
let range = [0, Math.max(d3.max(pairs_of_nodes_sampled), d3.max(connected_pairs))];
let C_max = 0;
let distances_index = {};
let Delta, old_C_max, C, i, target_index, indicators_over_Delta;
do {
for(i=0; i<=settings.grid_size; i++){
Delta = range[0] + (range[1]-range[0]) * i / settings.grid_size;
if (distances_index[Delta] === undefined) {
distances_index[Delta] = computeIndicators(Delta, g, pairs_of_nodes_sampled, connected_pairs);
}
}
old_C_max = C_max;
C_max = 0;
indicators_over_Delta = Object.values(distances_index);
indicators_over_Delta.forEach((indicators, i) => {
C = indicators.C;
if (C > C_max) {
C_max = C;
target_index = i;
}
});
range = [
indicators_over_Delta[Math.max(0, target_index-1)].Delta,
indicators_over_Delta[Math.min(indicators_over_Delta.length-1, target_index+1)].Delta
]
} while ( (C_max-old_C_max)/C_max >= settings.epsilon/10 )
const Delta_max = find_Delta_max(indicators_over_Delta, settings.epsilon);
const indicators_of_Delta_max = computeIndicators(Delta_max, g, pairs_of_nodes_sampled, connected_pairs);
// Resistance to misinterpretation
if (indicators_of_Delta_max.C < 0.1) {
return {
Delta_max: undefined,
E_percent_of_Delta_max: undefined,
p_percent_of_Delta_max: undefined,
P_edge_of_Delta_max: undefined,
C_max: indicators_of_Delta_max.C
}
} else {
return {
Delta_max,
E_percent_of_Delta_max: indicators_of_Delta_max.E_percent,
p_percent_of_Delta_max: indicators_of_Delta_max.p_percent,
P_edge_of_Delta_max: indicators_of_Delta_max.P_edge,
C_max: indicators_of_Delta_max.C
}
}
// Internal methods
// Compute indicators given a distance Delta
function computeIndicators(Delta, g, pairs_of_nodes_sampled, connected_pairs) {
const connected_pairs_below_Delta = connected_pairs.filter(d => d<=Delta);
const pairs_below_Delta = pairs_of_nodes_sampled.filter(d => d<=Delta);
// Count of edges shorter than Delta
// note: actual count
const E = connected_pairs_below_Delta.length;
// Proportion of edges shorter than Delta
// note: actual count
const E_percent = E / connected_pairs.length;
// Count of node pairs closer than Delta
// note: sampling-dependent
const p = pairs_below_Delta.length;
// Proportion of node pairs closer than Delta
// note: sampling-dependent, but it cancels out
const p_percent = p / pairs_of_nodes_sampled.length;
// Connected closeness
const C = E_percent - p_percent;
// Probability that, considering two nodes closer than Delta, they are connected
// note: p is sampling-dependent, so we have to normalize it here.
const possible_edges_per_pair = g.undirected ? 1 : 2;
const P_edge = E / (possible_edges_per_pair * p * (g.order * (g.order-1)) / pairs_of_nodes_sampled.length);
return {
Delta,
E_percent,
p_percent,
P_edge, // Note: P_edge is complementary information, not strictly necessary
C
};
}
function sample_pairs_of_nodes(){
if (g.order<2) return [];
let samples = [];
let node1, node2, n1, n2, d, c;
const samples_count = g.size; // We want as many samples as edges
if (samples_count<1) return [];
for (let i=0; i<samples_count; i++) {
node1 = g.nodes()[Math.floor(Math.random()*g.order)]
do {
node2 = g.nodes()[Math.floor(Math.random()*g.order)]
} while (node1 == node2)
n1 = g.getNodeAttributes(node1);
n2 = g.getNodeAttributes(node2);
d = Math.sqrt(Math.pow(n1.x-n2.x, 2)+Math.pow(n1.y-n2.y, 2));
samples.push(d);
}
return samples;
}
function find_Delta_max(indicators_over_Delta, epsilon) {
const C_max = d3.max(indicators_over_Delta, d => d.C);
const Delta_max = d3.min(
indicators_over_Delta.filter(d => (
d.C >= (1-epsilon) * C_max
)
),
d => d.Delta
);
return Delta_max;
}
}
```
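For the curious, here is how it might be called on a small graphology graph. The node keys and coordinates are made up, and d3 (e.g. d3-array) must be in scope where the function is defined, as it is on Observable:

```js
// Hypothetical usage sketch. Assumes computeConnectedCloseness (defined above)
// and d3 are in scope, as they are in the Observable notebook.
import Graph from "graphology";

const g = new Graph({ type: "undirected" });
g.addNode("a", { x: 0, y: 0 });
g.addNode("b", { x: 10, y: 5 });
g.addNode("c", { x: 12, y: 4 });
g.addEdge("a", "b");
g.addEdge("b", "c");

const result = computeConnectedCloseness(g, { epsilon: 0.03 });
if (result.Delta_max !== undefined) {
  console.log(
    `${Math.round(100 * result.E_percent_of_Delta_max)}% of edges are shorter than Δmax = ${result.Delta_max}`
  );
} else {
  // C_max below 10%: the algorithm refuses to report a Delta_max
  console.log(`No meaningful Δmax (C_max = ${result.C_max.toFixed(2)})`);
}
```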