
What is Web Cartography? by Franck Ghitalla


With Dominique Boullier and OpenEdition Press, we edited and published Franck Ghitalla’s posthumous book, Qu’est-ce que la Cartographie du Web?. The book is in French, and Franck Ghitalla is mostly known in France, but it is worth introducing his work to a larger, English-speaking audience (and if you speak French, even better! I have more resources for you at the end). The book is available in open access online, which allows you to Google-translate it in place. Check for instance my foreword, or browse the chapters from the table of contents.

Portrait of Franck Ghitalla
Franck Ghitalla, the “champion of networks”, as Le Monde introduced him in 2012.

In short, Franck was a linguist who used to teach at a French engineering school (UTC). He tragically died in December 2018, while teaching his famous course, the one I had followed as a student in 2005, the course that started our collaboration and friendship. From a French perspective, his originality was to have imported the concepts of network science early on, for instance scale-freeness, from Barabási’s bestseller Linked. That worked very well because his UTC students were eager to discover things and build stuff. The visions he inspired were immediately implemented as prototypes of web crawlers, digital libraries, network visualization tools, and more. That dynamic gave birth to Gephi. His students also founded startups like Linkfluence or Linkurious. Franck was renowned as an innovator. He was a very singular figure in French academia, which caused him some trouble. He recounts it in the book. For instance, he was accused of Americanizing young French minds, which I find hilarious because from a US standpoint, Franck looks so irredeemably French. That is what I would like to briefly explain here, because it is important in a European context, where other researchers may find Franck’s perspective unique and interesting, and relate to it.

Qu’est-ce que la Cartographie du Web? is available as a paperback.

Franck’s starting point was knowledge. His interest in networks was driven by the idea that information had a geography. He saw networks (graphs) as a knowledge technique, in the spirit of Vannevar Bush, and the web as an empirical field to investigate, like the pioneers of network science (Albert-László Barabási and Réka Albert; Lada Adamic and Natalie Glance…). He was drawn to the web because it was datafied knowledge. This unique form of writing allowed him to compute visualizations revealing the geography of information. But it is worth noting that modeling was never his goal. Contrary to the network science movement in the USA, largely founded by statistical physicists, Franck did not trust the machine over the person, and was not willing to blindly delegate analysis to algorithms. He wanted to see for himself, to explore and make discoveries. Franck was interested in information visualization for its hermeneutic potential; he did not want to use algorithms as a way to mechanize the work of interpretation, but to enrich it. He understood information cartography as a craft, a qualitative art enabled by quantitative computations (but not limited to them), and an enlightened form of reading (and writing). For Franck, information cartography was not an automated process of reduction, but an instrumented process of revelation. It was a hermeneutic practice.

Original illustration of the Memex from the Life reprint of “As We May Think” by Vannevar Bush. It inspired hypertext.

His endeavor to seek interpretive opportunities in datafied knowledge was built upon the idea that complex phenomena had something to show, something more than universal laws, something specific to each empirical case. In that, he joined a forgotten idea of Gabriel Tarde, also promoted by Bruno Latour: the idea that complex phenomena like society, culture, and knowledge consist of nothing more than what composes them, like individual interactions. The idea is rooted in the philosophy of Leibniz, who wanted to show that thinking about the world does not require the idea of God. Leibniz conceptualized the monad as a way to give meaning to things without resorting to God’s will, the soul, Plato’s ideal forms, or any other avatars of divinity. Leibniz conceptualized a purely material world, and Tarde reused the concept of the monad for the same purpose, to state that even the most complex collective behaviors depend on nothing more than local interactions. Unfortunately, Tarde’s rival Émile Durkheim famously won the battle of ideas, and established sociology as a quantitative science. For Durkheim, on the contrary, complex collective phenomena are sui generis entities; they exist on their own, independently. They are something more than what composes them; they exist as something else. In this perspective, social entities like nations exist on a different level than individuals. They have an essence, something reminiscent of a soul. Durkheim supported discarding individual information by using statistics (i.e., reductionism) because it gets you closer to the essence of collective phenomena. In a Tardian perspective, that essence is an unnecessary assumption. So if we are to be radical empiricists and minimize our assumptions, then we must get rid of the essence (or hidden truth, or universal law) and find another role and justification for reductive methods. You can read the long and better version of this argument in “The Whole is Always Smaller than its Parts” (Latour et al., 2012).

For Tarde, society is entirely contained in individuals; it is entirely material. Yet, and that is the crucial point, this is something that we can only see with the right tools. And, as Tarde admitted, such tools did not exist in his time. He knew that his argument was quite speculative, which also explains why Durkheim seemed to be right. This is where the web and big data, in the mid-2000s, were relevant to people like Franck Ghitalla and Bruno Latour: they believed that those could be the first real-world realizations of Tardian tools, allowing us to track collective phenomena down to their smallest components. This explains why modeling, and statistical reductionism in general, was not on Franck Ghitalla’s mind. He was confident that he would see collective phenomena in qualitative data sets, if they were large enough, and with the right tools. He trusted the topology of the web to offer an image of the collective interests of humans, an overview of knowledge. A distorted image, certainly, but still truer to our minds than how libraries and encyclopedias were organized. He saw the web and other data sources as fields for genealogical investigations à la Foucault, and he did not presume what he would find. He had no grand theory; he only bothered with finding good empirical cases and interpreting them. He was a radical empiricist, and a practitioner. He did not conduct quantitative experiments to test hypotheses formulated by theorists, in contrast to the literature of network science (which he nevertheless loved). The only theoretical elements he proposed consisted of down-to-earth advice based on accumulated observations.

I remade a number of images for the book, and I made English versions. I feature them here (licensed CC-BY, feel free to reuse) with a short version of the argument behind them. This will give you a non-representative idea of the book’s content. Check my foreword for a more representative overview.

The web as layers

The web has been described as scale-free, although this is now controversial; I have written on that topic, and I will keep things simple here. The fact is that a few web pages concentrate most of the hyperlinks, while most pages have almost none. And this is also true at the level of websites. If we were to sort pages from the most linked to the least linked and plot the number of links, like in the figure below, we would see a power-law distribution, or at least a heavy-tailed distribution, which is basically the same for our purposes. This type of distribution is sometimes called 80/20, because 20% of the pages would have 80% of the links; the numbers may vary, but the important point is that the distribution is extremely skewed.

Let us call the top of the curve the “hubs” (the most connected pages) and the rest the “long tail” or “heavy tail”. This distribution has a curious property: the tail is itself a power-law distribution (figure below, bottom). That means that if you remove the head, the tail is still composed of its own head and its own tail. This is, in short, why it is called scale-free: it looks the same at different scales (if you consider that zooming is focusing on the tail).

The power law distribution
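
To make this concrete, here is a minimal sketch (my illustration, not from the book): it samples a heavy-tailed count of inbound links for a set of hypothetical pages, then checks the 80/20 skew and the self-similarity of the tail. The Pareto exponent and the sample size are arbitrary.

```python
# Illustrative sketch: heavy-tailed link counts and the "tail of the tail".
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical in-link counts for 100,000 pages, drawn from a Pareto-like
# distribution (a stand-in for a real crawl).
links = np.floor(1 + rng.pareto(a=1.2, size=100_000)).astype(int)

links_sorted = np.sort(links)[::-1]                 # most linked first
top20_share = links_sorted[: len(links) // 5].sum() / links.sum()
print(f"Top 20% of pages hold {top20_share:.0%} of the links")

# Remove the head: the remaining tail is itself extremely skewed.
tail = links_sorted[len(links) // 5 :]
tail_top20_share = tail[: len(tail) // 5].sum() / tail.sum()
print(f"Within the tail alone, its top 20% hold {tail_top20_share:.0%}")
```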

Franck had a clever idea that we refined over the years. We think of the web as a series of layers, and these layers correspond to the distribution of links. From top to bottom, we have: the high layer, with the most connected websites like Google and Wikipedia; the intermediate layer, with the moderately linked websites; and the deep layer, with mostly disconnected resources. In this model, the specificity of the intermediate layer is to contain aggregates of web documents. In short, communities; I will return to that. The deep layer also deserves an explanation. It is called deep because it is so poorly linked that it is hard to reach by following hyperlinks. It contains most of the information of the web, but each piece of information is accessed rarely. It notably consists of specialized databases and storage systems used as part of the web infrastructure, with a logic of resource provision, not of curated hyperlinks. For instance, Wikimedia Commons. In other words, these are not resources that people link to intentionally. By contrast, the intermediate layer contains resources that we naturally see as documents, such as blog posts, articles, or tweets. We share them via hyperlinks, and that creates communities.

The three layers of the web
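
As a rough sketch of this layered reading (my illustration, with arbitrary thresholds that are not from the book), one can rank websites by how often they are linked and cut the distribution into three bands. A random scale-free graph stands in for a real crawl of websites.

```python
# Illustrative sketch: split a hyperlink graph into three layers by in-degree.
from collections import Counter

import networkx as nx
import numpy as np

G = nx.scale_free_graph(2000, seed=1)        # stand-in directed "web" of 2,000 sites

in_degrees = np.array([d for _, d in G.in_degree()])
high_cut = np.quantile(in_degrees, 0.99)     # arbitrary: top 1% = high layer
deep_cut = np.quantile(in_degrees, 0.50)     # arbitrary: bottom half = deep layer

layers = {}
for node, d in G.in_degree():
    if d >= high_cut:
        layers[node] = "high"                # global hubs (the Googles and Wikipedias)
    elif d > deep_cut:
        layers[node] = "intermediate"        # where aggregates / communities live
    else:
        layers[node] = "deep"                # poorly linked resources

print(Counter(layers.values()))
```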

In terms of link distribution, this model requires thinking of the power law not in two parts (the head and the tail) but in three: the hubs, the aggregation area, and the tail (figure below). The middle part is the intermediate layer: not as connected as the global hubs, but connected enough that a geography can emerge.

It is worth mentioning that, as the curve suggests, the demarcations are not clear-cut, and neither are the layers. In reality, the layers form a continuum, but for simplicity it is useful to think of them as different subspaces with different properties.

The intermediate layer is the middle part of the power law

The direction of the hyperlinks is crucially important. Each layer has different properties, both in terms of how it is linked to itself and to the other layers. The high layer is massively cited by every other layer (figure below, left). Most links converge to the top, so the web behaves like an ocean in which the internet user tends to bubble up to the surface. Importantly, the deep layer points directly to the high layer, so there is no need to travel through the middle to reach the surface. There are always shortcuts to the top.

The deep layer is the opposite: it is only accessible from the intermediate layer (figure below, center). It is as difficult to go deeper as it is easy to reach the top. The crawler or internet user has to travel through the intermediate layer, from sublayer to sublayer, to reach the depths.

The intermediate layer is basically a bridge between the surface and the depths (figure below, right). These links are asymmetrical, as they tend to be bottom-up, but the layer is nevertheless a passage point. More importantly, it consists of aggregates that are also linked to each other.

Each layer is a space with different topological properties.
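
Continuing the sketch from above (it reuses the hypothetical G and layers variables), counting hyperlinks from layer to layer makes this asymmetry measurable; on a real crawl, one would expect the upward flows, toward the high layer, to dominate.

```python
# Illustrative sketch: count hyperlinks between layers to expose the asymmetry.
from collections import Counter

flows = Counter()
for source, target in G.edges():
    flows[(layers[source], layers[target])] += 1

for (src, dst), count in sorted(flows.items(), key=lambda kv: -kv[1]):
    print(f"{src:>12} -> {dst:<12} {count:6d} links")
```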

The intermediate layer is composed of aggregates. Each aggregate is composed of a core and a periphery. The core is composed of hubs and authorities. The hubs cite a lot of the aggregate’s resources (they are portals), while the authorities are cited a lot within the aggregate. The core is highly connected, while the periphery is less connected.

In fact, each aggregate replicates the whole structure of the web. Intuitively, the core is closer to the top, and the periphery to the bottom. There might also be sub-aggregates, on multiple levels. This structure is quite intuitive if we think of it as communities. We may see gamers as a community with its own hubs and authorities, but it certainly has sub-communities like FPS or RPG players, and then possibly sub-sub-communities by sub-genre or game. And there might be bridges or overlaps between different communities (aggregates are not clear-cut either). The important point is that the distribution of hyperlinks is heterogeneous and creates a number of self-organized localities.

Each aggregate is not just a denser subspace, it is also thematically centered. This is why aggregates are communities: their resources share a common interest or practice. They are not purely topological. They are subspaces where content and structure correspond to each other.

The intermediate layer is the only one where we can empirically observe a geography of information, because the deep layer is not connected enough, and the high layer is too connected. In other words, the high layer is everywhere, while the deep layer is nowhere. Only the intermediate layer offers a “somewhere” that we can study.

Structure of an aggregate (a community)
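
The hubs-and-authorities vocabulary echoes Kleinberg’s HITS algorithm. The book uses it more loosely, but HITS is a convenient way to compute the core of a given aggregate, so here is a toy sketch on a made-up community (the site names are hypothetical).

```python
# Illustrative sketch: hubs and authorities of a toy aggregate, via HITS.
import networkx as nx

aggregate = nx.DiGraph()
aggregate.add_edges_from([
    ("portal", "reference"), ("portal", "blog1"), ("portal", "blog2"),
    ("blog1", "reference"), ("blog2", "reference"), ("blog3", "reference"),
    ("blog3", "portal"),
])

hubs, authorities = nx.hits(aggregate)
print("Top hub:      ", max(hubs, key=hubs.get))                 # the portal, which cites a lot
print("Top authority:", max(authorities, key=authorities.get))   # the reference, which is cited a lot
```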

Different aggregate structures are possible. It depends on the core (or center, in the figure below), and it depends on the less connected websites too. The more hyperconnected the center, the stronger the aggregation. But a community may also exist as links between the less connected actors, which we call here a lattice: community activity does not have to go through relation brokers (the center). It might be hierarchical or, on the contrary, flat. But to be considered an aggregate, i.e. a local subspace, it needs either a lattice plus a weak core, or a strong core.

Different types of aggregates and non-aggregates
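
As a purely illustrative heuristic (mine, not a method from the book), this distinction can be operationalized by comparing the density of the center with the density of the links among the periphery. The thresholds below are arbitrary.

```python
# Illustrative heuristic: strong core, lattice + weak core, or not an aggregate.
import networkx as nx

def aggregate_type(G: nx.Graph, center: set, core_cut=0.5, lattice_cut=0.05):
    """Classify a candidate aggregate; the density thresholds are arbitrary examples."""
    periphery = set(G) - center
    core_density = nx.density(G.subgraph(center))
    lattice_density = nx.density(G.subgraph(periphery))
    if core_density >= core_cut:
        return "aggregate (strong core)"
    if lattice_density >= lattice_cut and core_density > 0:
        return "aggregate (lattice + weak core)"
    return "not an aggregate"

# Example: a triangle of central sites plus a small, loosely knit periphery.
G = nx.Graph([("a", "b"), ("b", "c"), ("a", "c"), ("a", "x"), ("b", "y"), ("x", "y")])
print(aggregate_type(G, center={"a", "b", "c"}))   # -> aggregate (strong core)
```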

The layers also correspond to different practices of information diffusion. The high layer corresponds to a broadcast model, where actors, typically mainstream media, push information to less connected actors (the public). On the contrary, the aggregates correspond to a viral model where the information spreads in the community between actors of an equivalent level of connectivity and visibility (horizontally).

Different information diffusion models in different layers

These models are not mutually exclusive. As shown in the figure below (c), a hybrid scenario is also possible, where the information circulates virally at first, then gets broadcast by a highly visible actor, then resumes a viral circulation in the intermediate layer.

Three scenarios of information diffusion
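
Here is a toy simulation of the hybrid scenario (c), my sketch rather than anything from the book: a message starts spreading virally from a poorly connected node, and if it reaches the best-connected node, that hub broadcasts it to all of its neighbors before viral circulation resumes. The graph model and all parameters are arbitrary.

```python
# Toy sketch: viral spread, with an optional broadcast by the main hub.
import random

import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(1000, 2, seed=0)   # stand-in heavy-tailed network
hub = max(G, key=G.degree)                      # the best-connected node

def viral_spread(G, seeds, p=0.25, steps=4):
    """Neighbor-to-neighbor contagion: each informed node passes the message
    to each of its neighbors with probability p, for a fixed number of steps."""
    informed = set(seeds)
    for _ in range(steps):
        informed |= {n for u in informed for n in G[u] if random.random() < p}
    return informed

start = min(G, key=G.degree)                    # an ordinary, poorly connected node
reached = viral_spread(G, {start})
print("viral circulation only:", len(reached), "nodes reached")

# Hybrid scenario: if the hub was reached, it broadcasts to all its neighbors,
# then the message resumes its viral circulation from this larger audience.
if hub in reached:
    reached = viral_spread(G, reached | set(G[hub]))
print("after a possible hub broadcast:", len(reached), "nodes reached")
```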

This hybrid scenario is in fact a pretty good example of fake news laundering. A lot of fake news circulates in low-visibility layers, as partisan bullshit or humour. It is pretty difficult for these contents to disseminate widely from these subspaces. High-visibility actors may however launder these contents and promote them to spaces where they could not otherwise circulate. This laundering has been relentlessly documented and is critical in providing reach to fake news. The fake news may be debunked and resume a viral circulation, but the laundering will have spread it to localities of the intermediate layer that it could not have reached through viral circulation alone.

I have no more original images to share, so I will leave it at that. More details in Franck’s book. I hope I have piqued your interest, and if so, have a good read.

Links



