Transfer matrix

In applied mathematics, the transfer matrix is a formulation of the two-scale equation, which characterizes refinable functions, in terms of a block-Toeplitz matrix. Refinable functions play an important role in wavelet theory and finite element theory.

For the mask $h$, which is a vector with component indexes from $a$ to $b$, the transfer matrix of $h$, here denoted by $T_h$, is defined as

$$(T_h)_{j,k} = h_{2j-k}.$$

More verbosely

$$T_h = \begin{pmatrix}
h_a & & & & & \\
h_{a+2} & h_{a+1} & h_a & & & \\
h_{a+4} & h_{a+3} & h_{a+2} & h_{a+1} & h_a & \\
\ddots & \ddots & \ddots & \ddots & \ddots & \ddots \\
 & h_b & h_{b-1} & h_{b-2} & h_{b-3} & h_{b-4} \\
 & & & h_b & h_{b-1} & h_{b-2} \\
 & & & & & h_b
\end{pmatrix}.$$

The effect of $T_h$ can be expressed in terms of the downsampling operator $\downarrow$:

$$T_h \cdot x = (h * x) \downarrow 2.$$
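As a concrete illustration, the definition and the downsampling identity can be sketched in a few lines of Python. This is a minimal sketch; the helper names `transfer_matrix` and `convolve` are our own, not from any library:

```python
def convolve(f, g):
    """Full convolution of two coefficient lists (masks)."""
    out = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

def transfer_matrix(h, a=0):
    """Transfer matrix (T_h)_{j,k} = h_{2j-k} for j, k = a, ..., b,
    where h is the list of coefficients h_a, ..., h_b."""
    b = a + len(h) - 1
    coeff = lambda i: h[i - a] if a <= i <= b else 0.0
    return [[coeff(2 * j - k) for k in range(a, b + 1)]
            for j in range(a, b + 1)]

# Example mask h_0, ..., h_3 (indexes a = 0 to b = 3).
h = [0.25, 0.75, 0.75, 0.25]
T = transfer_matrix(h)

# Check the identity T_h . x = (h * x) downsampled by 2:
x = [1.0, 2.0, -1.0, 0.5]
Tx = [sum(T[r][c] * x[c] for c in range(len(x))) for r in range(len(x))]
downsampled = convolve(h, x)[::2]   # the components (h*x)_{2j}, j = a, ..., b
assert all(abs(u - v) < 1e-12 for u, v in zip(Tx, downsampled))
```

The matrix rows here reproduce the banded pattern of the display above: row $j$ contains the mask shifted by two positions relative to row $j-1$.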

Properties

  • $T_h \cdot x = T_x \cdot h$.
  • If you drop the first and the last column and move the odd-indexed columns to the left and the even-indexed columns to the right, then you obtain a transposed Sylvester matrix.
  • The determinant of a transfer matrix is essentially a resultant.

    More precisely:

    Let $h_{\mathrm{e}}$ be the even-indexed coefficients of $h$ (that is, $(h_{\mathrm{e}})_k = h_{2k}$) and let $h_{\mathrm{o}}$ be the odd-indexed coefficients of $h$ (that is, $(h_{\mathrm{o}})_k = h_{2k+1}$).

    Then $\det T_h = (-1)^{\lfloor (b-a+1)/4 \rfloor} \cdot h_a \cdot h_b \cdot \mathrm{res}(h_{\mathrm{e}}, h_{\mathrm{o}})$, where $\mathrm{res}$ denotes the resultant.

    This connection allows for fast computation using the Euclidean algorithm.
  • For the trace of the transfer matrix of convolved masks, it holds that
    $$\mathrm{tr}\, T_{g*h} = \mathrm{tr}\, T_g \cdot \mathrm{tr}\, T_h.$$
  • For the determinant of the transfer matrix of convolved masks, it holds that

    $$\det T_{g*h} = \det T_g \cdot \det T_h \cdot \mathrm{res}(g_-, h),$$

    where $g_-$ denotes the mask with alternating signs, i.e. $(g_-)_k = (-1)^k \cdot g_k$.
  • If $T_h \cdot x = 0$, then $T_{g*h} \cdot (g_- * x) = 0$.
    This makes the determinant property above concrete. From the determinant property one knows that $T_{g*h}$ is singular whenever $T_h$ is singular; this property additionally shows how vectors from the null space of $T_h$ can be converted into null-space vectors of $T_{g*h}$.
  • If $x$ is an eigenvector of $T_h$ with respect to the eigenvalue $\lambda$, i.e.

    $$T_h \cdot x = \lambda \cdot x,$$

    then $x * (1,-1)$ is an eigenvector of $T_{h*(1,1)}$ with respect to the same eigenvalue, i.e.

    $$T_{h*(1,1)} \cdot (x*(1,-1)) = \lambda \cdot (x*(1,-1)).$$
  • Let $\lambda_a, \dots, \lambda_b$ be the eigenvalues of $T_h$, which implies $\lambda_a + \dots + \lambda_b = \mathrm{tr}\, T_h$ and, more generally, $\lambda_a^n + \dots + \lambda_b^n = \mathrm{tr}(T_h^n)$. This sum is useful for estimating the spectral radius of $T_h$. There is an alternative way of computing the sum of eigenvalue powers, which is faster for small $n$.

    Let $C_k h$ be the periodization of $h$ with respect to the period $2^k - 1$. That is, $C_k h$ is a circular filter, which means that the component indexes are residue classes modulo $2^k - 1$. Then, with the upsampling operator $\uparrow$, it holds that

    $$\mathrm{tr}(T_h^n) = \left( C_k h * (C_k h \uparrow 2) * (C_k h \uparrow 2^2) * \cdots * (C_k h \uparrow 2^{n-1}) \right)_{[0]_{2^n - 1}}.$$

    Actually, not $n-2$ convolutions are necessary, but only about $2 \cdot \log_2 n$ of them, when applying the strategy of efficient computation of powers; the approach can be sped up further using the fast Fourier transform.
  • From the previous statement we can derive an estimate of the spectral radius $\varrho(T_h)$. It holds that

    $$\varrho(T_h) \geq \frac{a}{\sqrt{\#h}} \geq \frac{1}{\sqrt{3 \cdot \#h}},$$

    where $\#h$ is the size of the filter, and, if all eigenvalues are real, it is also true that

    $$\varrho(T_h) \leq a,$$

    where $a = \Vert C_2 h \Vert_2$.
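Several of the properties above can be checked numerically on small concrete masks. The following pure-Python sketch (all helper names are our own) verifies the trace property, the eigenvector property, and the null-space conversion. For the eigenvector property we check the linear identity it follows from, namely $T_{h*(1,1)} \cdot (x*(1,-1)) = (T_h \cdot x)*(1,-1)$ for arbitrary $x$ (our reformulation, obtained by writing both sides as even-indexed components of $h*x*(1,0,-1)$):

```python
def convolve(f, g):
    """Full convolution of two coefficient lists (masks)."""
    out = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

def transfer_matrix(h, a=0):
    """Transfer matrix (T_h)_{j,k} = h_{2j-k} for j, k = a, ..., b."""
    b = a + len(h) - 1
    coeff = lambda i: h[i - a] if a <= i <= b else 0.0
    return [[coeff(2 * j - k) for k in range(a, b + 1)]
            for j in range(a, b + 1)]

def matvec(T, x):
    return [sum(t * xi for t, xi in zip(row, x)) for row in T]

def trace(T):
    return sum(T[i][i] for i in range(len(T)))

g = [1.0, 2.0]
h = [1.0, 3.0, 3.0, 1.0]

# Trace property: tr T_{g*h} = tr T_g . tr T_h
# (the trace of T_h is simply the sum of the mask coefficients).
assert trace(transfer_matrix(convolve(g, h))) == \
       trace(transfer_matrix(g)) * trace(transfer_matrix(h))

# Eigenvector property via the identity
# T_{h*(1,1)} . (x*(1,-1)) = (T_h . x) * (1,-1):
x = [1.0, 2.0, -1.0, 0.5]
lhs = matvec(transfer_matrix(convolve(h, [1.0, 1.0])), convolve(x, [1.0, -1.0]))
rhs = convolve(matvec(transfer_matrix(h), x), [1.0, -1.0])
assert all(abs(u - v) < 1e-12 for u, v in zip(lhs, rhs))

# Null-space property: T_h . x0 = 0 implies T_{g*h} . (g_- * x0) = 0.
h2 = [1.0, 2.0, 1.0, 2.0]    # singular: even and odd parts are proportional
x0 = [0.0, 1.0, -2.0, 0.0]   # a null vector of T_{h2}
assert all(abs(v) < 1e-12 for v in matvec(transfer_matrix(h2), x0))
g_minus = [1.0, -1.0]        # (g_-)_k = (-1)^k g_k for g = (1, 1)
assert all(abs(v) < 1e-12 for v in
           matvec(transfer_matrix(convolve([1.0, 1.0], h2)), convolve(g_minus, x0)))
```

All three checks rely on the same underlying fact used throughout this article: $(T_h \cdot x)_j = (h*x)_{2j}$, so properties of transfer matrices reduce to properties of convolutions.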

References

  • Strang, Gilbert (1996). "Eigenvalues of $(\downarrow 2)H$ and convergence of the cascade algorithm". IEEE Transactions on Signal Processing. 44: 233–238. doi:10.1109/78.485920.
  • Thielemann, Henning (2006). Optimally matched wavelets (PhD thesis). (contains proofs of the above properties)