Abstract

The ability to discriminate between large and small quantities---number discrimination---is a core aspect of basic numerical competence in both humans and animals. In this work, we examine the extent to which state-of-the-art neural networks designed for vision exhibit this basic ability. Motivated by studies of animal and infant numerical cognition, we use the numerical bisection procedure to test number discrimination in three families of neural architectures. We find that models with vision-specific inductive biases are more successful at discriminating numbers than those with weaker or no such biases. Interestingly, the model with both hierarchical and locality biases best matches the empirical data. We also observe that even the strongest model fails to exhibit the expected number discrimination behavior when the test situation differs from the training one. In some cases, a model has learned a correct and ordered clustering of numbers, yet cannot apply this knowledge in new situations.