Low-rank approximation methods for matrices are based on projecting the columns (or rows) onto suitable low-dimensional subspaces. Subspaces spanned by individual columns, or by random linear combinations of all columns, are common alternatives to the optimal subspaces computed via the SVD, especially for large or implicitly given matrices. Well-known results on volume sampling and on the randomized SVD show that such approaches indeed achieve quasi-optimal approximation errors in expectation. In this talk, we discuss generalizations of these results to the low-rank approximation of Hilbert-Schmidt operators between infinite-dimensional Hilbert spaces. In the first part, we consider the approximation of vector-valued L2 functions in subspaces spanned by point samples and show the existence of quasi-optimal sample points based on a continuous version of volume sampling. In the second part, we discuss infinite-dimensional extensions of the randomized SVD, as recently proposed by Boullé and Townsend, for which we present an alternative approach. This also includes a novel extension of the Nyström approximation to self-adjoint positive semi-definite trace-class operators.
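For orientation, here is a minimal sketch of the finite-dimensional prototypes that the talk generalizes; the notation below is standard and not taken from the abstract. Volume sampling selects a set $S$ of $k$ columns of $A \in \mathbb{R}^{m \times n}$ with probability proportional to $\det(A_S^\top A_S)$; projecting onto their span then satisfies, in expectation,
\[
  \mathbb{E}\,\| A - \Pi_S A \|_F^2 \;\le\; (k+1) \min_{\operatorname{rank}(B) \le k} \| A - B \|_F^2 .
\]
The randomized SVD instead draws a Gaussian random matrix $\Omega \in \mathbb{R}^{n \times \ell}$ and uses
\[
  Q = \operatorname{orth}(A\Omega), \qquad A \approx Q Q^\top A,
\]
while, for self-adjoint positive semi-definite $A$, the Nyström approximation reads
\[
  A \;\approx\; (A\Omega)\,(\Omega^\top A \Omega)^{\dagger}\,(A\Omega)^\top .
\]
In the infinite-dimensional setting considered in the talk, $\Omega$ is, roughly speaking, replaced by a quasimatrix whose columns are random functions drawn from a Gaussian process.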
Based on joint work with D. Kressner, T. Ni, and D. Persson.