Annotating a large network map on AI in science

15 min read.

This is a show and tell, a photo report, and open notes about this work.

In short, we harvested ~1.5M academic abstracts about AI, machine learning and algorithms from Scopus, extracted expressions from them, and built a co-occurrence network. We rendered this network as a big map, and we are annotating it.

Johan Søltoft harvested the data, Snorre Ralund led the computations, and Anders Munk, Matilde Ficozzi and I did the qualitative work. This is part of the ADD project.

This is how we work. We have the map on a big table with a big screen, and we navigate between three tools: the original network in Gephi, where we can filter in various ways to look more closely at the topology; a spreadsheet with the series of clusters to annotate (obtained by clique percolation); and an ElasticSearch engine where we have stored all the abstracts and can read them for a given cluster. We draw on the map with various markers.

Where we annotate the map. Done!
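For readers who want a concrete sense of the clique percolation step, here is a minimal sketch of how such overlapping clusters can be extracted with networkx. The file name and the value of k are illustrative assumptions, not our actual parameters.

```python
# Minimal sketch: extracting overlapping clusters by k-clique percolation
# with networkx. The file name and k are illustrative assumptions.
import networkx as nx
from networkx.algorithms.community import k_clique_communities

# Load the expression co-occurrence network (hypothetical file name).
G = nx.read_gexf("cooccurrence_network.gexf")

# k-clique percolation: a community is a union of adjacent k-cliques.
# Communities may overlap, which suits expressions shared across topics.
communities = list(k_clique_communities(G, k=4))

# One row per community, ready to paste into an annotation spreadsheet.
for i, community in enumerate(communities):
    print(i, len(community), ", ".join(sorted(community)[:5]))
```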

The network is mostly made of clusters, but not only. The non-cluster parts are the hardest to work with, but also the most interesting. They are the reason why we do this qualitative work (annotating is a form of coding). Our very last piece was a bridge, for instance.

The last bridge to annotate. Finally! It was long. Weeks of full-time work. Matilde did the biggest part: summarizing the underlying literature of most of the clusters.

The base map consisted of 7.5K nodes and 85K edges. I had rendered it in a neutral way but with a bit of hillshading, so that we could read the clusters quickly while being able to annotate on top. Some of the node labels (expressions such as “telescope” or “orthopedics”) were displayed to help. We added about 250 landmarks (in red), 50 bridges (in orange), a dozen areas (in blue) and a few borders (in green).

Overview of the annotated labels

It is nice that with a printed map, you can just get closer to read the labels. You zoom in and out with your own body. This is what it looks like from up close. And this is just a tiny, tiny part.

Zoom on a tiny part of the map: about the brain, and AI/algorithms, like everything else in this network.

A big part of our time has been dedicated to finding the labels of the landmarks. We invented the method while doing it, and it deserves a separate discussion, so I will not develop it here. In short, we read a sample of the corresponding abstracts and ask ourselves how algorithms and AI are involved. Each landmark has a longer description in our spreadsheet, but on the map we only have room for a quite short label. So the landmarks are not just about the words: we look into the articles themselves to understand how AI/algorithms are involved.
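To give an idea of what reading “a sample of the corresponding abstracts” involves, here is a minimal sketch of how one could pull abstracts matching a cluster’s expressions from ElasticSearch with the Python client (8.x style). The index name, field names, cluster terms and matching rule are illustrative assumptions, not our actual setup.

```python
# Minimal sketch: sampling abstracts for a cluster from ElasticSearch.
# Index name, field names, terms and matching rule are assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

cluster_terms = ["telescope", "exoplanet", "light curve"]  # hypothetical cluster

response = es.search(
    index="abstracts",
    query={
        "bool": {
            # Require at least two of the cluster's expressions in the abstract.
            "should": [{"match_phrase": {"abstract": t}} for t in cluster_terms],
            "minimum_should_match": 2,
        }
    },
    size=20,  # a small sample is enough for a first qualitative reading
)

for hit in response["hits"]["hits"]:
    source = hit["_source"]
    print(source.get("title"), "--", source.get("abstract", "")[:120])
```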

My focus in this post is the manual annotation, and why we do it that way. Indeed, there is a simpler way to bring context to the network: we could apply community detection and code the clusters obtained. Below is our network colored by the clusters from Infomap (a clustering algorithm) in Gephi. We could document what each color is. In pink, on the right, we find health and medicine, for instance. That is a valid option. But we have a good reason to put in the extra effort.

The same network in Gephi. Colors: clusters obtained from Infomap.
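For the curious, here is roughly what that simpler alternative amounts to in code. We ran Infomap inside Gephi; the sketch below uses networkx’s Louvain implementation as a stand-in, and the file names are hypothetical.

```python
# Not our actual pipeline (we ran Infomap inside Gephi). This stand-in uses
# networkx's Louvain implementation to show what "coloring by clusters"
# amounts to. File names are hypothetical.
import networkx as nx

G = nx.read_gexf("cooccurrence_network.gexf")

# Each node ends up in exactly one community.
communities = nx.community.louvain_communities(G, seed=42)

# Write the community id back as a node attribute, usable as a color in Gephi.
for community_id, nodes in enumerate(communities):
    for node in nodes:
        G.nodes[node]["community"] = community_id

nx.write_gexf(G, "network_with_communities.gexf")
```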

Our ultimate goal is twofold. We want to account for how AI, algorithms and machine learning are involved in science, and we want to make it accessible and understandable to a wide audience. This audience consists of academics and non-academics alike, for instance journalists. We want this network to be part of an atlas readable by people not trained in network analysis. The output is not just a paper. We aim for something visually explicit. Therefore, we optimize our method for visual affordances. We want to get the most out of the visual medium.

The visual medium is at its best when a place (an area, a locality) in the image tells something clear about the represented thing (the network). This is why most of our annotations consist of landmarks: a red cross that implicitly applies to the space around it.

We use landmarks instead of hulls or other delimiting shapes because the clusters do not always have clear boundaries. See the cluster about cryptography below: it has distinct subclusters that we detected with an algorithm and annotated separately, but there is also a real blending between these topics in the literature, which is why they are merged together (topologically as well as visually). Using landmarks allows us to account for this continuity. The ambiguity is an empirical reality that we need to account for, and we think it is more accurate not to add arbitrary boundaries when there are none in reality. In this case, we can leverage the visual medium in a productive way.

Cluster about cryptography.

However, the visual medium does not have much affinity with nonlocal structures, and that is a problem when it comes to accounting for this network. Nonlocality is not an intuitive notion and is hard to explain, because it works differently from our everyday experience of the world, but let me take a quick shot at it. It is like the fourth dimension, but worse, because even high-dimensional spaces are Euclidean, and nonlocal structures are not.

The simplest nonlocal structure is a bridge between two clusters. We call a cluster “local” because being close in the map means being connected in the topology (and vice versa). It does not have to be an absolute rule: a good-enough correspondence between the topological structure of the network and the 2-dimensional space of the map suffices. That is why we can read it. We can infer the structure from the map space: what is drawn, and where it is drawn. From a visual cluster, we can infer a topological cluster.

But bridges are topological clusters that cannot be represented as visual clusters, because they are nonlocal. They connect two local structures (topological clusters) that are only local separately, not together. The bridge is at the same time in different “localities” (because the clusters are distinct) and in the same “locality” (because the bridge is connected). The notion of “local” does not help us understand such a structure. And unfortunately, visualizing in a 2D space requires locality, because either you draw things in the same place, or you do not. You have to commit to one or the other.

A simple example of a bridge, with planted partitions. The bridge is somehow in two distinct places at the same time; it is nonlocal in that sense.
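The setup of this figure can be reproduced as a toy example: two planted-partition clusters plus a small chain of bridge nodes connected to both. The sketch below uses networkx; all parameters are illustrative.

```python
# Toy reconstruction of the figure's setup: two planted-partition clusters
# and a few "bridge" nodes connected to both. All parameters are illustrative.
import random
import networkx as nx

random.seed(1)

# Two dense clusters of 30 nodes each, with no edges between them.
G = nx.planted_partition_graph(l=2, k=30, p_in=0.3, p_out=0.0, seed=1)
cluster_a, cluster_b = set(range(0, 30)), set(range(30, 60))

# Add five bridge nodes forming a small connected chain,
# each linked to a few nodes in both clusters.
nx.add_path(G, range(60, 65))
for b in range(60, 65):
    for target in random.sample(sorted(cluster_a), 3) + random.sample(sorted(cluster_b), 3):
        G.add_edge(b, target)

# The bridge touches both clusters at once; any 2D layout must place it
# somewhere, so it gets stretched between the two localities.
bridge = set(range(60, 65))
print("bridge edges into A:", sum(1 for u, v in G.edges(bridge) if v in cluster_a))
print("bridge edges into B:", sum(1 for u, v in G.edges(bridge) if v in cluster_b))
```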

Bridges are the reason why it is useful to display the edges in addition to the nodes. The long edges show nonlocal structures: connected nodes that have been split apart by the layout algorithm to preserve locality where possible. Do you see what I mean? If not, that is not really a problem. The takeaway is that reducing the topology to a set of clusters only accounts for part of the structure, because of the bridges and other nonlocal structures. At the same time, we can compensate for this loss of information with a number of tricks, for instance drawing the edges and emphasizing the bridges. The more general question is then: how do we identify the situations where clusters are not enough to capture the structure, and how do we annotate to account for them?
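If you want to see the “long edges” intuition in code, here is a sketch that measures edge lengths after a layout and flags the longest ones for inspection. networkx’s spring layout stands in for the layout computed in Gephi, the file name is hypothetical, and the 1% threshold is arbitrary.

```python
# Sketch of the "long edges" intuition: after a layout, edges whose endpoints
# ended up far apart hint at nonlocal structure.
import math
import networkx as nx

G = nx.read_gexf("cooccurrence_network.gexf")  # hypothetical file name
pos = nx.spring_layout(G, seed=42)             # stand-in for the Gephi layout

def edge_length(u, v):
    (x1, y1), (x2, y2) = pos[u], pos[v]
    return math.hypot(x1 - x2, y1 - y2)

lengths = {(u, v): edge_length(u, v) for u, v in G.edges()}
threshold = sorted(lengths.values())[int(0.99 * len(lengths))]  # top 1% longest

long_edges = [e for e, d in lengths.items() if d >= threshold]
print(f"{len(long_edges)} unusually long edges worth inspecting")
```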

We annotate manually because we want to look the edge cases straight in the eye. We face the problems of representing this network as a map, and we look for solutions. So for the rest of this post, I will showcase the weird parts of the map, explain what makes them problematic, and show how we dealt with them.

Simple bridges

Let me start with a simple bridge. The bridge in the image below connects the social science cluster (it has a lot of landmarks such as “text classification” and “optical character recognition”) with a small cluster about audio technologies, with two landmarks: “music genre recognition” and “application to cochlear implants”. We labelled the bridge itself “language”. We had to look into Gephi to understand it. It turns out that when machine learning is used to model language, it is sometimes about the written language and the spoken language (audio) at the same time, which is why we find a number of papers mixing vocabularies that otherwise tend to appear in distinct papers, such as “sentences” and “speaker”.

Focus on the “language” bridge.

The bridge is a real structure: it is a cluster from a topological standpoint. But we cannot represent it as a locality in the map without breaking the fact that the two other clusters are distinct localities. And by the way, for feasibility reasons, we decided to commit to the layout at this stage, so we would not change the node positions anyway. So far, we have decided to represent the bridge as a thick highlight with a label. We picked orange because it is close to the red of the landmarks: a bridge is, in fact, much the same thing as a landmark, except that we cannot represent it as a place.

Artifacts

Some clusters were artifacts of our method: the algorithm we used picked up on expressions that were noise in the abstracts. They are visible in the map, so we keep them, but we annotate them accordingly. I just wanted to mention that.

Those clusters are artifacts of our method.

Artifact bridges

We ruled out a number of bridges as artifacts. The pattern is very general, so let me give a typical example. Below, the cluster about signal processing (left) and the cluster about electricity (right) seem connected through the expression “modulation”. We can verify this by selecting it in Gephi.

The two clusters are connected through a single expression, “modulation”.

But as we can also check in Gephi, the two clusters have exactly zero direct co-occurrences. Basically, no paper uses typical words of the signal processing field together with typical words of the electricity field. There is no cluster overlapping the two that could constitute a bridge.

The left cluster is highlighted, and we see that its neighbors are not in the right cluster: there are no real bridges.

In short, the explanation is that the word “modulation” is used in both fields, but possibly in different ways. The word has multiple uses, and possibly multiple meanings. This phenomenon happens all over the place in this network. That is just how language works. We labelled those “artifact bridges”.

The artifact bridges are visible in the map, as the edges are salient. So not every set of converging edges is necessarily a “real” bridge. We always double-check, and our criterion is that the two clusters must have direct links aside from the bridging node. There are very few edge cases.
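Here is what that double-check amounts to in code form (we do it in Gephi, but the logic is the same). The toy graph and node names are illustrative.

```python
# Count the direct co-occurrence edges between two clusters once the
# suspected bridging node is set aside. Toy graph and names are illustrative.
import networkx as nx

def direct_links(G, cluster_a, cluster_b, bridging_nodes=()):
    """Edges running directly between the two clusters, ignoring the
    suspected bridging node(s)."""
    a = set(cluster_a) - set(bridging_nodes)
    b = set(cluster_b) - set(bridging_nodes)
    return [(u, v) for u, v in G.edges()
            if (u in a and v in b) or (u in b and v in a)]

# Toy example: two small clusters joined only through "modulation".
G = nx.Graph()
G.add_edges_from([
    ("filter", "spectrum"), ("spectrum", "modulation"),   # signal processing side
    ("modulation", "voltage"), ("voltage", "inverter"),   # electricity side
])
signal_processing = ["filter", "spectrum"]
electricity = ["voltage", "inverter"]

print(direct_links(G, signal_processing, electricity, ["modulation"]))
# -> [] : the apparent bridge is an artifact of a polysemic expression
```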

Bridges with nodes in the middle

To be sure, bridges are always made of nodes, because even edges are made of nodes. But those nodes are not always in the clusters at the two ends of the bridge. Sometimes there are nodes in the middle, often in addition to the direct edges from one cluster to the other. When that happens, we try to have the bridge go over those nodes. Here is an example.

We included nodes in the path of the bridge “metamaterials applications”.

Another example below. In this case, we tried to fix the bridge by adding some blue (the background color, more or less) so that it keeps looking like a path. Keeping bridges path-shaped can be a challenge, as we will see.

Another bridge where we included nodes. We decided this after the fact, and we had to paint out part of it with a blue pen.

Does it sound like a good idea to you? I ask this because it comes with problems. If we decide that bridges must include the nodes they pass over, then we now have to avoid the nodes that do not belong to the bridge. The bridge is a topological cluster, as you know, so we can determine which nodes belong to it and which do not.

We decided to go for this strategy nevertheless, and try to avoid passing over nodes that are not in the bridge when we can. It strongly shaped how we annotate bridges, but we think it is still the better option.

Bent bridges

It happens quite often that an entire cluster is on the direct path of a bridge. When that happens, and insofar as we have enough space to do so, we bend the bridge to avoid confusion about where it starts and ends. The area below is very dense in bridges, and we had to bend many of them. I find it quite appealing, as an unexpected bonus.

This area is rich in bent bridges (health and medical science).

Keeping bridges in check

We sometimes had to make explicit that a bridge does not own certain nodes. Finding a path for a bridge can sometimes be tricky, and we had to compromise. We used the blue pen in the examples below, for instance to clarify some bridge intersections. The first bridges are easy to draw, but the last ones are complicated, because the space gets crowded in places.

Breadcrumbs

Some of the bridges we found had so many nodes in the middle that we informally called them “breadcrumbs”. The pattern is very visible in the map, even without the edges displayed.

This bridge, “renewable energy storage”, consists of this whole trail of nodes. We called this pattern “breadcrumbs” because you can easily follow the bridge from it.

This pattern forced me to realize that bridges always consist of nodes, while I was tempted to think of them as edges. Below is another example of a breadcrumb we first annotated as a bridge, before finding out it was a topological cluster identified by our clique percolation algorithm, which prompted us to also annotate it as a landmark.

Breadcrumbs may be small clusters. In this case, we used a landmark.

Bridges into nothingness

It sometimes happens that one of our actual bridges ends on an artifact bridge. You can think of it this way: a small cluster is stretched between another cluster and a single node that connects to a distant cluster because it is polysemic. The small cluster looks like breadcrumbs because it is so stretched, so we cannot place a landmark, but we can annotate it as a bridge. Yet there is seemingly nothing at the end of the bridge, because it ends, in fact, in an artifact bridge. I struggle to explain! Anyway, you will find two examples below.

Some bridges look like they lead to nowhere. There are two in this map. This is because they end on artifact bridges.

Thick bridges / stretched clusters

In some cases there are so many nodes in the middle of the bridge that it may look almost like a cluster. When this happens, we have no choice but to make the bridge as thick as necessary. Sometimes it can be very thick, and weird, as below.

A thick bridge. Aqueous solutions analysis and operations. It is less dense than a cluster.

You may think of this bridge as a stretched cluster. We certainly do so, as we do for every bridge. Bridges are clusters in the topological sense. We find them by clique percolation, for instance (in addition to seeing them in the map). What makes them a bridge and not a cluster is the fact that they are so stretched that there is no single place to put a landmark, either because they spread over a large area or because they are not dense enough. Below is another example where the bridge is very thick and slightly less dense than a cluster, but not far from it. I did not have enough space to write my label!

This bridge is almost as dense as a cluster. It’s “hemodynamics” but I did not have much room for the label.
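One could quantify this intuition with a simple heuristic, comparing a node set’s spatial spread in the layout to its internal edge density: compact and dense reads as a landmark, stretched or sparse reads as a bridge. This is an illustrative sketch, not part of our annotation workflow.

```python
# Illustrative heuristic (not part of our workflow): quantify how "stretched"
# a topological cluster is in the layout, and how dense it is internally.
import math
import networkx as nx

def spatial_spread(nodes, pos):
    """Average pairwise distance between the nodes' layout positions."""
    nodes = list(nodes)
    dists = [math.dist(pos[u], pos[v])
             for i, u in enumerate(nodes) for v in nodes[i + 1:]]
    return sum(dists) / len(dists)

def internal_density(G, nodes):
    """Edge density of the subgraph induced by the node set."""
    return nx.density(G.subgraph(nodes))

# Usage (hypothetical): compare a candidate's spread and density to those of
# groups already annotated as landmarks, and decide which annotation fits.
```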

We already know that we will rework our annotations to account for this continuity between visual clusters and bridges. There is a whole spectrum of in-between cases, which suggests that having one flexible way to annotate them might be better than two distinct ones (the red landmark and the orange highlighting).

Branching bridges

As bridges are made of nodes, there is no guarantee that they behave well and follow a path. Sometimes the topological cluster branches out. We have a bunch of two-legged bridges, for instance.

A two-legged bridge

But it can get much more complicated. We followed the topology as far as we could, and tried to represent such branching bridges as faithfully as possible. A pretty intricate example below. There is a lot going on… That is just how it is!

An area with multiple branching bridges.

An even more bizarre example below. Honestly, this one was so hard to grasp that we were reaching the limits of our ability to annotate. We just did what we could, and it’s not great.

A very intricate area that we could barely annotate.

Impossible to annotate

I will end this post with this. There is an area in this map that we failed to annotate. It does not look very threatening, but in short, it is so nonlocal that we could not do anything with it. Here it is.

An area impossible to annotate

Let me explain. There are a bunch of nodes floating in the middle of this picture. They are many things at once. First of all, they are not connected together, so in that sense they are not a cluster, even though they occupy the same space. In fact, they belong to three different, very sparse quasi-bridges, each of which connects a different pair of clusters, and sometimes no cluster at all. One of those clusters is almost an artifact and consists of heavy metals; the corresponding papers do not have much in common besides the fact that a list of heavy metals appears in the abstract. Another cluster is about pollutants and water solutions, but not heavy metals. The floating nodes do connect to those around them, and they fit the loose theme of that area (chemistry and materials), but not in a single, identifiable way. Here is what it looks like in Gephi.

The problematic area in Gephi

Next steps

Our next steps will be to redo the annotations in a cleaner way, possibly improving our visual language, and to have experts of the different fields represented comment on the map. One of those experts might have the key to better understand parts of the map.


