Sparse features for PCA-like linear regression
Christos Boutsidis, Petros Drineas, et al.
NeurIPS 2011
The CUR decomposition of an m × n matrix A finds an m × c matrix C consisting of a subset of c < n columns of A, together with an r × n matrix R consisting of a subset of r < m rows of A, as well as a c × r low-rank matrix U such that the matrix CUR approximates the matrix A, that is, ∥A − CUR∥²_F ≤ (1 + ε)∥A − A_k∥²_F, where ∥·∥_F denotes the Frobenius norm and A_k is the best rank-k approximation of A, computable via the SVD. We present input-sparsity-time and deterministic algorithms for constructing such a CUR decomposition where c = O(k/ε), r = O(k/ε), and rank(U) = k. Up to constant factors, our algorithms are simultaneously optimal in the values c, r, and rank(U).
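To illustrate the objects in the abstract, here is a minimal CUR sketch in NumPy. It selects columns and rows by top rank-k leverage scores and sets U = C⁺AR⁺; this is a simple leverage-score heuristic for illustration only, not the input-sparsity-time deterministic algorithm of the paper.

```python
import numpy as np

def cur_decomposition(A, k, c, r):
    """Illustrative CUR sketch (leverage-score heuristic, not the
    paper's algorithm): pick c columns and r rows of A with the
    largest rank-k leverage scores, then set U = pinv(C) @ A @ pinv(R)."""
    # Truncated SVD yields the leverage scores of the best rank-k subspaces.
    U_full, _, Vt_full = np.linalg.svd(A, full_matrices=False)
    col_scores = np.sum(Vt_full[:k, :] ** 2, axis=0)  # column leverage scores
    row_scores = np.sum(U_full[:, :k] ** 2, axis=1)   # row leverage scores
    cols = np.argsort(col_scores)[::-1][:c]           # indices of top-c columns
    rows = np.argsort(row_scores)[::-1][:r]           # indices of top-r rows
    C = A[:, cols]                                    # m x c column subset
    R = A[rows, :]                                    # r x n row subset
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)     # c x r coupling matrix
    return C, U, R

# Usage: a matrix of exact rank k is recovered (near) exactly,
# since C spans col(A) and R spans row(A).
rng = np.random.default_rng(0)
k = 3
A = rng.standard_normal((50, k)) @ rng.standard_normal((k, 40))
C, U, R = cur_decomposition(A, k=k, c=2 * k, r=2 * k)
err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
```

For exactly rank-k input, any spanning column subset makes CC⁺A = A and AR⁺R = A, so CUR = A up to floating-point error; for general A, the quality of the (1 + ε) guarantee depends on how the columns and rows are chosen.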
T.S. Jayram, David P. Woodruff
ACM Transactions on Algorithms
Vladimir Braverman, Stephen R. Chestnut, et al.
SIGMOD/PODS 2017
Jelani Nelson, David P. Woodruff
SIGMOD/PODS 2010