Two-dimensional singular-value decomposition

Method of decomposing a set of matrices via low-rank approximation

In linear algebra, two-dimensional singular-value decomposition (2DSVD) computes the low-rank approximation of a set of matrices, such as 2D images or weather maps, in a manner almost identical to SVD (singular-value decomposition), which computes the low-rank approximation of a single matrix (or of a set of 1D vectors).

SVD

Let the matrix $X=[\mathbf{x}_{1},\ldots ,\mathbf{x}_{n}]$ contain the set of 1D vectors, which have been centered. In PCA/SVD, we construct the covariance matrix $F$ and the Gram matrix $G$

$$F = XX^{\mathsf{T}}, \qquad G = X^{\mathsf{T}}X,$$

and compute their eigenvectors $U=[\mathbf{u}_{1},\ldots ,\mathbf{u}_{n}]$ and $V=[\mathbf{v}_{1},\ldots ,\mathbf{v}_{n}]$. Since $VV^{\mathsf{T}}=I$ and $UU^{\mathsf{T}}=I$, we have

$$X = UU^{\mathsf{T}}XVV^{\mathsf{T}} = U\left(U^{\mathsf{T}}XV\right)V^{\mathsf{T}} = U\Sigma V^{\mathsf{T}}.$$

If we retain only the $K$ principal eigenvectors in $U$ and $V$, this gives a rank-$K$ approximation of $X$.
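
As a minimal NumPy sketch of this construction (the matrix sizes, the rank $K$, and the variable names are illustrative assumptions, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 5))          # columns are the vectors x_1, ..., x_n
X = X - X.mean(axis=1, keepdims=True)    # center: sum_i x_i = 0

F = X @ X.T                              # covariance matrix F = X X^T
G = X.T @ X                              # Gram matrix       G = X^T X
_, U = np.linalg.eigh(F)                 # eigh returns ascending eigenvalues,
_, V = np.linalg.eigh(G)                 # so reverse columns to get the top ones
U, V = U[:, ::-1], V[:, ::-1]

K = 2
U_K, V_K = U[:, :K], V[:, :K]
X_K = U_K @ (U_K.T @ X @ V_K) @ V_K.T    # rank-K approximation U (U^T X V) V^T

# Agrees with truncating the SVD of X directly:
Us, s, Vt = np.linalg.svd(X, full_matrices=False)
print(np.allclose(X_K, Us[:, :K] @ np.diag(s[:K]) @ Vt[:K, :]))   # True
```

The eigenvector route through $F$ and $G$ recovers the same rank-$K$ reconstruction as truncating the SVD of $X$ directly (assuming distinct singular values).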

2DSVD

Here we deal with a set of 2D matrices $(X_{1},\ldots ,X_{n})$. Suppose they are centered, $\sum_{i}X_{i}=0$. We construct the row–row and column–column covariance matrices

$$F=\sum_{i}X_{i}X_{i}^{\mathsf{T}} \qquad \text{and} \qquad G=\sum_{i}X_{i}^{\mathsf{T}}X_{i}$$

in exactly the same manner as in SVD, and compute their eigenvectors $U$ and $V$. We approximate $X_{i}$ as

$$X_{i} = UU^{\mathsf{T}}X_{i}VV^{\mathsf{T}} = U\left(U^{\mathsf{T}}X_{i}V\right)V^{\mathsf{T}} = UM_{i}V^{\mathsf{T}}$$

exactly as in SVD. Retaining only the $K$ principal eigenvectors in $U$ and $V$ gives a near-optimal low-rank approximation of $(X_{1},\ldots ,X_{n})$ with respect to the objective function

$$J=\sum_{i=1}^{n}\left\|X_{i}-LM_{i}R^{\mathsf{T}}\right\|^{2},$$

minimized over matrices $L$ and $R$ with orthonormal columns; 2DSVD takes $L=U$ and $R=V$.
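
A minimal NumPy sketch of the 2DSVD construction follows (the function name `two_dsvd` and the parameters `k1`, `k2` for the number of row and column eigenvectors retained are illustrative assumptions, not from the source):

```python
import numpy as np

def two_dsvd(Xs, k1, k2):
    """Near-optimal rank-(k1, k2) 2DSVD approximation of a set of matrices.

    Xs has shape (n, r, c): n matrices of size r x c, assumed centered.
    Returns U (r x k1), V (c x k2), and Ms (n x k1 x k2) with M_i = U^T X_i V.
    """
    Xs = np.asarray(Xs, dtype=float)
    F = np.einsum('irc,ikc->rk', Xs, Xs)   # row-row covariance     sum_i X_i X_i^T
    G = np.einsum('irc,irk->ck', Xs, Xs)   # column-column cov.     sum_i X_i^T X_i
    _, U = np.linalg.eigh(F)               # eigh returns ascending eigenvalues,
    _, V = np.linalg.eigh(G)               # so reverse columns to take the top ones
    U = U[:, ::-1][:, :k1]
    V = V[:, ::-1][:, :k2]
    Ms = np.einsum('rk,irc,cl->ikl', U, Xs, V)   # M_i = U^T X_i V
    return U, V, Ms

rng = np.random.default_rng(0)
Xs = rng.standard_normal((20, 16, 12))
Xs = Xs - Xs.mean(axis=0)                        # center: sum_i X_i = 0
U, V, Ms = two_dsvd(Xs, k1=4, k2=3)
Xhat = np.einsum('rk,ikl,cl->irc', U, Ms, V)     # X_i ~ U M_i V^T
J = np.sum((Xs - Xhat) ** 2)                     # value of the objective
```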

Error bounds similar to the Eckart–Young theorem also exist.

2DSVD is mostly used in image compression and representation.
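
As an illustrative storage count (the sizes are assumptions for the example, not figures from the source): storing $n$ images of size $r\times c$ directly costs $nrc$ numbers, while the 2DSVD representation $(U, V, M_{1},\ldots ,M_{n})$ with $k$ row and $k$ column eigenvectors costs $rk + ck + nk^{2}$, a large saving when $k \ll r, c$. For $n=1000$ images of size $100\times 100$ and $k=10$, this is roughly $10^{5}$ numbers versus $10^{7}$.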

References

  • Chris Ding and Jieping Ye. "Two-dimensional Singular Value Decomposition (2DSVD) for 2D Maps and Images". Proc. SIAM Int'l Conf. Data Mining (SDM'05), pp. 32–43, April 2005. http://ranger.uta.edu/~chqding/papers/2dsvdSDM05.pdf
  • Jieping Ye. "Generalized Low Rank Approximations of Matrices". Machine Learning, Vol. 61, pp. 167–191, 2005.