Matrix factorization is an important and challenging mathematical problem encountered in dictionary learning, recommendation systems, and machine learning at large. The study of its Bayes-optimal limits, namely the fundamental bounds imposed by information theory, still faces obstacles that are hard to overcome. In this talk, I will abandon Bayes-optimality in favor of an alternative procedure called “decimation”. Decimation is shown to map matrix factorization onto a sequence of neural-network models of associative memory, of which the Hopfield model is a celebrated example. Each of these networks turns out to depend on the order parameters of the previous ones, which are in turn linked to their retrieval performance. Although sub-optimal in general, decimation has the benefit of a fully analyzable performance. Finally, I will exhibit an “oracle” algorithm based on the ground-state search of a neural network, whose performance matches the theoretical prediction.
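To make the ground-state-search idea concrete, here is a minimal, hypothetical sketch in Python (NumPy) of a zero-temperature search in a Hopfield-type network whose couplings are read off from a noisy rank-one data matrix. The spiked-model scaling, the snr parameter, and the simple asynchronous dynamics are illustrative assumptions for this sketch, not the construction of the paper.

    import numpy as np

    # Hypothetical toy setting: observe Y = (snr/N) * xi xi^T + Gaussian noise,
    # then look for a low-energy state of a Hopfield-type network with
    # couplings J built from Y (assumed setup, for illustration only).
    rng = np.random.default_rng(0)
    N = 200
    snr = 5.0                                      # illustrative signal strength
    xi = rng.choice([-1.0, 1.0], size=N)           # hidden binary pattern
    noise = rng.normal(size=(N, N)) / np.sqrt(N)   # noise at a comparable scale
    Y = snr * np.outer(xi, xi) / N + noise         # observed matrix
    J = (Y + Y.T) / 2.0                            # symmetrized couplings
    np.fill_diagonal(J, 0.0)

    # Zero-temperature asynchronous dynamics: each spin aligns with its
    # local field, which never increases the energy E = -sigma.J.sigma / 2,
    # so the search halts in a local energy minimum.
    sigma = rng.choice([-1.0, 1.0], size=N)        # random initial configuration
    for _ in range(50):                            # sweeps over all spins
        for i in rng.permutation(N):
            h = J[i] @ sigma                       # local field on spin i
            sigma[i] = 1.0 if h >= 0 else -1.0

    # Overlap with the hidden pattern measures retrieval quality;
    # values near 1 indicate successful retrieval.
    overlap = abs(sigma @ xi) / N
    print(f"overlap with hidden pattern: {overlap:.3f}")

In this toy version, a single pattern is retrieved from a single spike; the decimation procedure of the talk would instead estimate patterns one at a time, each step defining a new network whose couplings depend on the previously retrieved order parameters.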
Based on: Camilli, Francesco, and Marc Mézard. "Matrix factorization with neural networks", Physical Review E (2023, to appear), arXiv:2212.02105.