Each algorithm has its own mythology. Most algorithms are complicated pieces of machinery requiring a lot of expert work to be properly understood or implemented. In that sense, they are opaque, and it is hard (i.e. costly) to debunk false ideas about them. Myths about algorithms tend to stick.
Academic papers about algorithms tell nice stories. A publication cannot just present an algorithm: certain criteria are required (novelty, efficiency…) and a rationale must frame them. In other words, a story. But it does not have to be true. Its role is to justify and explain why the algorithm meets the criteria. In this rationale, the logical conclusion predicts the features of the algorithm, and indeed, the algorithm has them. But that does not mean that the reason given by the rationale is the right one.
In those narratives, the authors always seem to have a good understanding of the reasons why the algorithm is better (performance, quality…). That, I must admit, is a complete joke. Although theoretical work can lead to the discovery of a new algorithm, which can come with an explanation, algorithms can also be discovered through heuristic, trial-and-error iteration. In that case, there is no guarantee that the authors understand why their algorithm is better. Fortunately, as long as they come up with a reasonable narrative, they can get away with it.
As weird as it sounds, you can perfectly understand how an algorithm works and yet have no idea why it performs better. I even believe it is generally the case. But academic practices do not incentivize honesty on that matter. So authors narrate their algorithm in a way that makes a justification appear. That justification is rarely discussed, especially if the algorithm works. And it gives birth to a myth.
I decided to write on this matter when I read the short piece titled “Everything you know about word2vec is wrong”. The author, assuming here the role of a software engineer, tried to reimplement the famous word2vec algorithm. Following the instructions from the original paper and other major sources (Wikipedia…), they could not get the impressive results that the algorithm is known for. By checking the code of reference implementations, they discovered that those are “drastically different” from the paper. The piece was discussed on Hacker News, and according to several comments, such a discrepancy between paper and implementation is not unusual.
“One infamous example of this is SSAPRE – to this day, people have a lot of trouble understanding the paper (and it has significant errors that make the algorithm incorrect as written). The concept sure, but the exact algorithm – less so. Reading the source code … it is just wildly different than the paper (and often requires a lot of thought for people to convince themselves it is correct).”
DannyBee, most upvoted comment
The word2vec paper narrates that “subsampling of frequent words during training results in a significant speedup … , and improves accuracy of the representations of less frequent words” and also that “a simplified variant of Noise Contrastive Estimation … results in faster training and better vector representations for frequent words.” It comes up with a theory about why it performs better. But it turns out that a seemingly minor detail, barely mentioned in the paper, has a major impact on the result. At the very least, the paper’s narrative does not seem so good at explaining why the algorithm performs well.
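For what it is worth, the subsampling scheme is one of the few parts that the paper does state explicitly: each occurrence of a word is randomly discarded with a probability that grows with the word’s frequency. Here is a minimal Python sketch of the formula as written in the paper; the names are mine, and the released C code reportedly uses a slightly different expression, which is precisely the kind of detail at stake.

```python
import random

def subsample(tokens, t=1e-5):
    """Subsampling of frequent words as described in the word2vec paper:
    each occurrence of a word w is discarded with probability 1 - sqrt(t / f(w)),
    where f(w) is the relative frequency of w in the corpus and t is a
    threshold (the paper suggests values around 1e-5)."""
    total = len(tokens)
    counts = {}
    for w in tokens:
        counts[w] = counts.get(w, 0) + 1
    kept = []
    for w in tokens:
        f = counts[w] / total
        keep_probability = min(1.0, (t / f) ** 0.5)  # rare words are always kept
        if random.random() < keep_probability:
            kept.append(w)
    return kept
```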
This story reminded me of my own process of writing and publishing the Force Atlas 2 paper, on a force-directed network layout algorithm. Our motivation was absolutely not the algorithm’s performance. It proposed novel ideas, solved engineering issues in integrating different existing techniques, and had a specific design. It was used by many Gephi users, and we thought a peer-reviewed reference on the algorithm would be of help to the research community. We wanted to provide a reference explanation of how it worked, a ground for interpreting the resulting network maps. But the peer reviewers asked for a benchmark. Fortunately, the algorithm was also faster than existing alternatives, and we were able to come up with a nice story about performance. We were published. But let us be honest: that is not why people cite the paper. It is because they use it. And that is because it produces good results. But why does it work so well?
The best-quality force-driven algorithm is arguably the LinLog algorithm, proposed by Andreas Noack. He provides a convincing narrative in his 2007 paper, but the clearest explanation relies on a picture from his 2009 paper. In short, all algorithms of that kind use two parameters: an exponent for the attraction force (a) and one for the repulsion force (r). These are non-negative integers, and attraction must be stronger than repulsion (a > r). Noack showed that the result is better when a and r are smaller. This leaves one optimal combination: a equals 1 (linear attraction) and r equals 0 (logarithmic repulsion), hence the optimal algorithm, LinLog. The picture below shows where other algorithms place themselves, and we see that the best spot is in the top-left corner.
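To make this family of models concrete, here is a minimal Python sketch of the (a, r) energy as I understand it from Noack’s papers. The names and the exact normalization are mine; the convention that the exponent 0 stands for the logarithm is his.

```python
import math
from itertools import combinations

def dist(p, q):
    """Euclidean distance between two 2D positions."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def ar_energy(positions, edges, a=1, r=0):
    """Energy of a layout under the (a, r) model: attraction along edges
    with exponent a, repulsion between all pairs of nodes with exponent r.
    The exponent 0 is taken to mean the logarithm, so LinLog is the case
    a=1 (linear attraction) and r=0 (logarithmic repulsion)."""
    def term(d, exponent):
        return math.log(d) if exponent == 0 else d ** exponent / exponent
    attraction = sum(term(dist(positions[u], positions[v]), a) for u, v in edges)
    repulsion = sum(term(dist(positions[u], positions[v]), r)
                    for u, v in combinations(positions, 2))
    return attraction - repulsion
```

A layout algorithm of this family then searches for node positions that minimize this energy; its members differ only by the pair (a, r).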

Do not get me wrong: LinLog is indeed the best-performing algorithm, in the sense that it is the one that represents clusters the most clearly. That is the entire point of his 2009 paper, “Modularity clustering is force-directed layout”. But the narrative, as convincing as it is, is wrong.
The explanation is in plain sight in the 2007 paper. The LinLog model comes in two flavors: node repulsion and edge repulsion. Both match the nice narrative, but only the edge version provides the nice results. In the pictures from his paper, you can judge for yourself: the node version looks more like the output of the venerable Fruchterman-Reingold algorithm than like that of the edge version. Noack himself chose to highlight this visually.
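To make the difference concrete: in the node-repulsion variant, every pair of nodes repels with the same weight, while in the edge-repulsion variant the repulsion between two nodes is weighted by the product of their degrees, so that hubs repel more strongly than leaves. Here is a sketch of the edge-repulsion LinLog energy, reusing the helpers from the previous snippet; again, this is my reading of the 2007 paper, not a quotation.

```python
def edge_linlog_energy(positions, edges, degrees):
    """Edge-repulsion LinLog (sketch): linear attraction along edges, and
    logarithmic repulsion between every pair of nodes, weighted by
    deg(u) * deg(v). The node-repulsion variant is the same energy without
    the degree weights, i.e. ar_energy(positions, edges, a=1, r=0)."""
    attraction = sum(dist(positions[u], positions[v]) for u, v in edges)
    repulsion = sum(degrees[u] * degrees[v] * math.log(dist(positions[u], positions[v]))
                    for u, v in combinations(positions, 2))
    return attraction - repulsion
```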


Once again, the decisive parameter is absent from the algorithm’s narrative. The algorithm is as good as advertised, the narrative predicts it, but the narrative is nevertheless false.
A last example: similar concerns arise in the field of machine learning. For a long time, Bayesian strategies were the most successful. They were considered mathematically elegant, requiring fewer assumptions than the alternatives. In the spirit of Occam’s razor, this elegance crystallized into a rationale for the superiority of the method. Certainly, if your approach requires many meta-parameters, it must mean that you did not model your problem properly, and thus cannot reach an optimal solution. But that narrative could only hold until deep learning and its inherent messiness dethroned the Bayesian approach. The unreasonable effectiveness of mathematical elegance turned out to be a myth. Was it wrong? That, I do not know. But it is certainly not dead. Myths have a thick skin.