Pseudo-Bayesian Robust PCA: Algorithms and Analysis

Category
International Journal
Journal/Conference
arXiv Preprint
Author
Tae-Hyun Oh, David Wipf, Yasuyuki Matsushita, In So Kweon
Year
2016
Award
Empty
tags
arXiv
2015
Published
μƒμ„±μΌμž
1/29/2021, 7:07:00 AM
[Abstract]
Commonly used in computer vision and other applications, robust PCA represents an algorithmic attempt to reduce the sensitivity of classical PCA to outliers. The basic idea is to learn a decomposition of some data matrix of interest into low-rank and sparse components, the latter representing unwanted outliers. Although the resulting optimization problem is typically NP-hard, convex relaxations provide a computationally expedient alternative with theoretical support. However, in practical regimes, performance guarantees break down, and a variety of non-convex alternatives, including Bayesian-inspired models, have been proposed to boost estimation quality. Unfortunately though, without additional a priori knowledge, none of these methods can significantly expand the critical operational range such that exact principal subspace recovery is possible. Into this mix we propose a novel pseudo-Bayesian algorithm that explicitly compensates for design weaknesses in many existing non-convex approaches, leading to state-of-the-art performance with a sound analytical foundation. Surprisingly, our algorithm can even outperform convex matrix completion despite the fact that the latter is provided with perfect knowledge of which entries are not corrupted.
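
For context, the decomposition the abstract describes is usually posed, in its convex-relaxed form, as Principal Component Pursuit: minimize ||L||_* + Ξ»||S||_1 subject to L + S = M. The sketch below solves that relaxation with a standard ADMM / inexact ALM loop. It illustrates the convex baseline the paper argues against, not the pseudo-Bayesian algorithm proposed in the paper, and the function names and default parameters (lam, mu, tolerance) are illustrative choices rather than anything taken from the paper.

```python
import numpy as np

def shrink(X, tau):
    """Soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca_pcp(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Decompose M into low-rank L plus sparse S by solving
    min ||L||_* + lam * ||S||_1  s.t.  L + S = M  (ADMM / inexact ALM)."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))        # standard PCP weighting
    if mu is None:
        mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                      # dual variable
    norm_M = np.linalg.norm(M, 'fro')
    for _ in range(max_iter):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        residual = M - L - S
        Y = Y + mu * residual
        if np.linalg.norm(residual, 'fro') <= tol * norm_M:
            break
    return L, S

# Toy usage: a rank-2 matrix corrupted by sparse, large-magnitude outliers.
rng = np.random.default_rng(0)
L_true = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 100))
S_true = 10 * rng.standard_normal((100, 100)) * (rng.random((100, 100)) < 0.05)
L_hat, S_hat = rpca_pcp(L_true + S_true)
print(np.linalg.norm(L_hat - L_true, 'fro') / np.linalg.norm(L_true, 'fro'))
```

The "critical operational range" mentioned in the abstract refers to how much rank and outlier density this kind of convex recovery can tolerate before its guarantees fail; the paper's pseudo-Bayesian method targets exactly that regime.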