Parity-check matrix

In coding theory, a parity-check matrix of a linear block code C is a matrix which describes the linear relations that the components of a codeword must satisfy. It can be used to decide whether a particular vector is a codeword and is also used in decoding algorithms.

Definition

Formally, a parity check matrix H of a linear code C is a generator matrix of the dual code, C⊥. This means that a codeword c is in C if and only if the matrix-vector product Hc⊤ = 0 (some authors[1] would write this in an equivalent form, cH⊤ = 0).

The rows of a parity check matrix are the coefficients of the parity check equations.[2] That is, they show how linear combinations of certain digits (components) of each codeword equal zero. For example, the parity check matrix

H = \left[ \begin{array}{cccc} 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 \end{array} \right],

compactly represents the parity check equations,

\begin{aligned} c_3 + c_4 &= 0 \\ c_1 + c_2 &= 0 \end{aligned},

that must be satisfied for the vector (c_1, c_2, c_3, c_4) to be a codeword of C.
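For illustration (assuming the code is binary, as the example suggests), the vector (1, 1, 0, 0) satisfies both equations and is therefore a codeword, while (1, 0, 0, 0) is not:

H \left[ \begin{array}{c} 1 \\ 1 \\ 0 \\ 0 \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right], \qquad H \left[ \begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array} \right] = \left[ \begin{array}{c} 0 \\ 1 \end{array} \right] \neq \left[ \begin{array}{c} 0 \\ 0 \end{array} \right].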

From the definition of the parity-check matrix it follows directly that the minimum distance of the code is the smallest number d such that every set of d − 1 columns of a parity-check matrix H is linearly independent, while some set of d columns of H is linearly dependent.
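Applied to the example above (again assuming the binary field), every individual column of H is nonzero and therefore linearly independent on its own, but the first two columns already sum to zero,

\left[ \begin{array}{c} 0 \\ 1 \end{array} \right] + \left[ \begin{array}{c} 0 \\ 1 \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right],

so that code has minimum distance d = 2.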

Creating a parity check matrix

The parity check matrix for a given code can be derived from its generator matrix (and vice versa).[3] If the generator matrix for an [n,k]-code is in standard form

G = \begin{bmatrix} I_k \mid P \end{bmatrix},

then the parity check matrix is given by

H = \begin{bmatrix} -P^{\top} \mid I_{n-k} \end{bmatrix},

because

G H^{\top} = I_k (-P) + P I_{n-k} = -P + P = 0.

Negation is performed in the finite field F_q. Note that if the characteristic of the underlying field is 2 (i.e., 1 + 1 = 0 in that field), as in binary codes, then −P = P, so the negation is unnecessary.

For example, if a binary code has the generator matrix

G = \left[ \begin{array}{cc|ccc} 1 & 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 1 & 0 \end{array} \right],

then its parity check matrix is

H = \left[ \begin{array}{cc|ccc} 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 1 \end{array} \right].

It can be verified that G is a k × n matrix, while H is an (n − k) × n matrix.
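The construction above can be sketched in a few lines of code. The following is a minimal illustration for binary codes; the helper name parity_check_from_generator and the use of NumPy are choices of this sketch, not taken from the cited references.

    import numpy as np

    def parity_check_from_generator(G: np.ndarray) -> np.ndarray:
        """Build H = [-P^T | I_{n-k}] from a standard-form generator G = [I_k | P].

        Works over GF(2), where -P = P, so no explicit negation is needed.
        """
        k, n = G.shape
        P = G[:, k:]                       # the k x (n-k) block to the right of I_k
        H = np.hstack([P.T, np.eye(n - k, dtype=int)]) % 2
        return H

    # The [5,2] binary example from the text.
    G = np.array([[1, 0, 1, 0, 1],
                  [0, 1, 1, 1, 0]])
    H = parity_check_from_generator(G)
    print(H)
    # Every row of G (and hence every codeword) has zero syndrome: G H^T = 0 (mod 2).
    assert np.all((G @ H.T) % 2 == 0)

Running this prints the same 3 × 5 matrix H as in the example above and confirms G H⊤ = 0 over GF(2).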

Syndromes

For any (row) vector x of the ambient vector space, s = Hx⊤ is called the syndrome of x. The vector x is a codeword if and only if s = 0. The calculation of syndromes is the basis for the syndrome decoding algorithm.[4]
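As a brief sketch (using the [5,2] binary example above, and the standard observation that a single bit error produces a syndrome equal to the corresponding column of H; the code below is an illustration, not a construction from the cited references):

    import numpy as np

    # Parity check matrix of the [5,2] binary example code from the text.
    H = np.array([[1, 1, 1, 0, 0],
                  [0, 1, 0, 1, 0],
                  [1, 0, 0, 0, 1]])

    def syndrome(H: np.ndarray, x: np.ndarray) -> np.ndarray:
        """Syndrome s = H x^T over GF(2); s == 0 exactly when x is a codeword."""
        return (H @ x) % 2

    c = np.array([1, 0, 1, 0, 1])        # a codeword (first row of the generator matrix)
    x = c.copy()
    x[2] ^= 1                            # flip the third bit to simulate a channel error

    s = syndrome(H, x)
    print(s)                             # [1 0 0], which is the third column of H
    # Since the syndrome matches column 3 of H, a syndrome decoder would flip bit 3 back.
    assert np.array_equal(s, H[:, 2])

Because all five columns of this H are distinct and nonzero, every single-bit error yields a distinct nonzero syndrome, which is what makes this kind of lookup possible.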

See also

  • Hamming code

Notes

  1. ^ for instance, Roman 1992, p. 200
  2. ^ Roman 1992, p. 201
  3. ^ Pless 1998, p. 9
  4. ^ Pless 1998, p. 20

References

  • Hill, Raymond (1986), A First Course in Coding Theory, Oxford Applied Mathematics and Computing Science Series, Oxford University Press, p. 69, ISBN 0-19-853803-0
  • Pless, Vera (1998), Introduction to the Theory of Error-Correcting Codes (3rd ed.), Wiley Interscience, ISBN 0-471-19047-0
  • Roman, Steven (1992), Coding and Information Theory, GTM, vol. 134, Springer-Verlag, ISBN 0-387-97812-7
  • van Lint, J. H. (1992), Introduction to Coding Theory, GTM, vol. 86 (2nd ed.), Springer-Verlag, p. 34, ISBN 3-540-54894-7