In our recent paper entitled "Extrapolating Paths with Graph Neural Networks", Jean-Baptiste and I delved into the problem of path inference: given a path prefix, i.e., a partially observed sequence of nodes in a graph, we aimed to predict which nodes lie in the missing suffix. I have already written a short blog post covering some of the main algorithmic ideas (see also our GitHub code repository).
In this post, I would like to discuss our experience with training our graph neural network to mimic how humans link concepts in the Wikispeedia game.
Human players are asked to find a path from a source article to a target article by following a sequence of hyperlinks:
- Each player is presented with two Wikipedia articles.
- Starting from the first article, the player aims to reach the second by following links in the articles encountered.
- The faster the solution is found (in terms of time/number of clicks), the better!
The intriguing aspect of Wikispeedia is that it provides a glimpse into how people link abstract ideas. When Bob West and Jure Leskovec performed a large-scale study of more than 300k game traces, they discovered that human way-finding is not optimal in any familiar mathematical sense. Since players have to rely on their prior knowledge to decide the next article to click on, most paths differ qualitatively from shortest paths on the Wikipedia network. Most players prefer to navigate to hubs (general-purpose articles with many links) in the early phase of the game and progressively specialise their search afterwards. People also predominantly link concepts by clicking on articles from the same category (as I did in the example above) or by thinking in terms of geography.
Could a machine link concepts the way humans do?
It is tempting to believe that human decisions are too complex and idiosyncratic to be reproduced by an algorithm. After all, our thinking as humans is influenced by a myriad of factors, including our experiences, training, and genetics. Human decision making also often contains an emotional factor, and emotions are something that a machine undoubtedly lacks.
On the other hand, it might also be that actions become less random if we focus on large populations rather than individuals. This was the idea that inspired Isaac Asimov to write the science fiction book series "Foundation". The premise of the series is that, in the waning days of a future Galactic Empire, a mathematician spends his life developing a theory of mathematical sociology. Using statistical laws of mass action, this theory can predict the future of large populations. Asimov used the analogy of a gas:
> An observer has great difficulty in predicting the motion of a single molecule in a gas, but with the kinetic theory can predict the mass action of the gas to a high level of accuracy.

Psychohistory, a fictional science in Isaac Asimov's Foundation universe.
Applying Asimov's premise to the Wikispeedia game: while it might be very hard to guess the path of a specific player, it might be easier to predict what the average player will do.
To get a better understanding of the predictability of player decisions in Wikispeedia, we trained a path-finding neural network called GRETEL to mimic humans based on a set of example game traces. For each trace prefix (the first 4 articles in the path sequence), we asked GRETEL to predict a likely suffix path. We then employed stochastic gradient descent to maximize the likelihood that the predicted path was the one actually followed by the human player.
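The training objective above decomposes naturally: the likelihood of a suffix is the product of the probabilities of each individual click, so the loss is a sum of per-click negative log-probabilities. A minimal sketch of that decomposition is below; the edge probabilities here are made-up numbers for illustration (in the actual model they would come from GRETEL's output), and `path_nll` is a hypothetical helper, not a function from our codebase.

```python
import math

# Hypothetical per-edge probabilities a model might assign at each article
# (each article's outgoing probabilities sum to 1). Purely illustrative.
edge_prob = {
    ("Latvia", "Russia"): 0.6,
    ("Latvia", "Estonia"): 0.4,
    ("Russia", "China"): 0.7,
    ("Russia", "Ukraine"): 0.3,
}

def path_nll(prefix_end, suffix):
    """Negative log-likelihood of a suffix: the loss is a sum of
    per-click terms, one for each link followed."""
    nll, current = 0.0, prefix_end
    for nxt in suffix:
        nll -= math.log(edge_prob[(current, nxt)])
        current = nxt
    return nll

loss = path_nll("Latvia", ["Russia", "China"])  # -log(0.6) - log(0.7)
print(round(loss, 3))
```

Minimising this quantity with stochastic gradient descent pushes probability mass towards the links that human players actually followed.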
The results were intriguing. Despite the difficulty of this task (there are on average about 50 choices a human player can make at every step), the neural network was able to make some unexpectedly apt predictions.
Let us look at the results more carefully:
Poor baselines. As expected, choosing the path at random (Uniform NB) does not work at all, with the probability of predicting the correct target dropping to 0.1% even for suffixes of length 2 (since the prefix length is 4, this corresponds to a total path of 6 articles). Incorporating information about the text of the articles helped very little (we used a simple predictor based on FastText pre-trained word embeddings of the Wikipedia article text). Though better NLP-based classifiers probably exist, our experiment suggests that making the classifier aware of the intrinsic meaning of articles does not suffice.
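To make the embedding baseline concrete, here is a rough sketch of the idea: at each step, follow the link whose article embedding is most similar to the target's. The vectors below are toy stand-ins, not actual FastText embeddings, and `next_article` is an illustrative helper rather than our implementation.

```python
import math

# Toy stand-ins for article embeddings (real ones would come from FastText).
emb = {
    "Latvia":  [1.0, 0.1, 0.0],
    "Russia":  [0.9, 0.3, 0.1],
    "Nepal":   [0.2, 0.9, 0.3],
    "Everest": [0.1, 1.0, 0.4],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def next_article(candidates, target):
    """Pick the linked article whose embedding is most similar to the target's."""
    return max(candidates, key=lambda c: cosine(emb[c], emb[target]))

print(next_article(["Russia", "Nepal"], target="Everest"))  # Nepal
```

A predictor like this only "understands" what articles are about; it knows nothing about which links players actually click, which is plausibly why text alone fell short.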
Better classifiers. Methods that exploited the Wikipedia hyperlink structure fared better. A simple strategy we found to be effective was to always follow the link that players used most frequently in the training set. This worked reasonably well for one click, but suffered for longer paths. The path extrapolation neural network, on the other hand, could make better long-horizon predictions by utilising all available information (text, hyperlink click frequency, and player path prefix). For instance, for a horizon of three clicks GRETEL was 8 times more accurate than the next best baseline.
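The frequent-link baseline is simple enough to sketch in a few lines. This is an illustrative reconstruction under my own naming (`fit_link_frequencies`, `extrapolate_greedy` are hypothetical helpers), with toy traces in place of the real Wikispeedia data:

```python
from collections import Counter, defaultdict

def fit_link_frequencies(training_paths):
    """Count how often each hyperlink (a -> b) was clicked in training games."""
    counts = defaultdict(Counter)
    for path in training_paths:
        for a, b in zip(path, path[1:]):
            counts[a][b] += 1
    return counts

def extrapolate_greedy(prefix, counts, horizon):
    """From the last prefix article, repeatedly follow the most-clicked link."""
    suffix, current = [], prefix[-1]
    for _ in range(horizon):
        if not counts[current]:
            break  # dead end: no outgoing click observed in training
        current = counts[current].most_common(1)[0][0]
        suffix.append(current)
    return suffix

# Toy traces: most players went Latvia -> Russia -> China.
paths = [
    ["Latvia", "Russia", "China"],
    ["Latvia", "Russia", "China"],
    ["Latvia", "Estonia"],
]
counts = fit_link_frequencies(paths)
print(extrapolate_greedy(["Latvia"], counts, horizon=2))  # ['Russia', 'China']
```

Its weakness is visible in the sketch: each step is chosen independently of the target article, so errors compound over longer horizons, which matches the behaviour we observed.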
Delving deeper into GRETEL’s predictions
To get a better intuition, we can look at some of the neural network's predictions selected from the test set. To keep the figures readable, we display only the one-hop neighbors of the nodes in the true path; high-resolution versions can be downloaded from here.
Case 1. Latvia to Edmund Hillary. In the first test case, the player linked Latvia and climber Edmund Hillary by navigating to Russia, China, Nepal and then selecting Himalayas (yellow path). This makes sense: Edmund Hillary and Tenzing Norgay were the first climbers confirmed to have reached the summit of Mount Everest on 29 May 1953.
The three most likely suffixes according to GRETEL are shown in red, purple and blue. Though the neural network could not guess the human path exactly, its predictions are remarkably relevant: Tenzing Norgay was part of the expedition and Mount Everest is the mountain they climbed. As for Yeti… could it be that the climbers encountered one on their trip?
Case 2. Lunar Eclipse to Nuclear Power. Here, GRETEL had the true suffix amongst its top 2 likeliest paths. The suffix [Chemical element, Periodic table] was given probability 0.13 whereas [Nuclear fission, Nuclear power] and [Nuclear fission, Nuclear weapon] were given a joint probability of 0.17.
Case 3. John Adams to Train. Let's look at another instance where the human path was recovered exactly.
Interestingly, in this case the algorithm believed that there were two likely concepts the human might have been interested in: trains and the geography of Canada. This agrees with Bob West's observation that humans exhibit an inductive bias towards categorical and geographic connections.
So what to make of these results?
Thankfully, machines cannot yet predict with certainty how individual players link concepts. However, Asimov’s hypothesis might hold some sway. In many situations, the neural network was able to correctly guess the implicit biases used by players in order to navigate through a web of concepts. This perhaps suggests that, given sufficient data and computing, the decisions of the average player are not as unpredictable as one would hope. Make of this what you will.
I would like to thank the Swiss National Science Foundation for supporting this work.