Blocking Matrices

Proof that the Blocking Matrices do Indeed Block Signals from Specified Directions.

In some signal processing tasks we wish to generate beam patterns with nulls in specified directions (or, equivalently, filters with narrow notches at specified frequencies). Typically these are used to null out stationary interference sources, directions to multipath images of a desired source, and so on.

We assume that we have M sensors, which may be array elements, subarrays or formed beams, and that we wish to null signals from directions \theta_1, \dots, \theta_k, where k < M. The response to a source in direction \theta_i is {\bf{b}}(\theta_i), a column vector of complex signals with one element per sensor. In a narrowband system we may assume these elements are simply complex numbers representing each sensor's amplitude and phase response. We let \widehat{\bf{b}}(\theta_i) denote the normalised versions of these vectors.
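
As a concrete illustration (not part of the original argument), here is a minimal NumPy sketch of such response vectors. The array geometry is not specified above, so the uniform linear array with half-wavelength spacing, the element count M = 8, and the helper names steering_vector and normalise are illustrative assumptions only.

import numpy as np

M = 8                    # number of sensors (assumed for illustration)
spacing = 0.5            # element spacing in wavelengths (assumed)

def steering_vector(theta):
    """Narrowband response b(theta): one complex number per sensor."""
    m = np.arange(M)
    return np.exp(2j * np.pi * spacing * m * np.sin(theta))

def normalise(b):
    """Unit-norm version b_hat of a response vector."""
    return b / np.linalg.norm(b)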

It is obvious (when pointed out anyway) that the matrix:

{\bf{A}}={\bf{I}}_{M\times M} - {\bf{B}} ({\bf{B}}^H{\bf{B}})^{-1} {\bf{B}}^H

is a blocking matrix for directions \theta_1, \dots, \theta_k, where {\bf{B}} is the matrix with the \widehat{\bf{b}}(\theta_i) as its columns. The space of possible sensor outputs constitutes a (complex) vector space of dimension M; the subspace \text{span}[{\bf{b}}(\theta_i), i=1,\dots ,k] is the null space of {\bf{A}}, and {\bf{A}} behaves like the identity transformation on the orthogonal complement of this null space.
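
Continuing the illustrative sketch above, {\bf{B}} and the blocking matrix {\bf{A}} might be formed like this; the null directions here are arbitrary, chosen only for the example.

thetas = np.deg2rad([-20.0, 15.0, 40.0])   # k = 3 directions to null (arbitrary)
B = np.column_stack([normalise(steering_vector(t)) for t in thetas])

# A = I - B (B^H B)^{-1} B^H, the projector onto the orthogonal
# complement of the space spanned by the columns of B.
A = np.eye(M) - B @ np.linalg.inv(B.conj().T @ B) @ B.conj().T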

To see that this is a blocking matrix for the \widehat{\bf{b}}(\theta_i) we need only consider:

{\bf{AB}} = \left[{\bf{I}}_{M\times M} - {\bf{B}} ({\bf{B}}^H{\bf{B}})^{-1} {\bf{B}}^H\right] {\bf{B}}

so:

{\bf{AB}} = {\bf{B}} - {\bf{B}} ({\bf{B}}^H{\bf{B}})^{-1} {\bf{B}}^H {\bf{B}} = {\bf{B}} - {\bf{B}} = {\bf{0}}

Now the columns of {\bf{AB}} are the vectors {\bf{A\widehat{b}}}(\theta_i), and so these are zero vectors as expected.
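
Continuing the same sketch, this blocking property can be checked numerically: every entry of {\bf{AB}} should be zero to machine precision.

# The columns of A B are the vectors A b_hat(theta_i); all should vanish.
print(np.max(np.abs(A @ B)))      # ~1e-16, i.e. zero to machine precision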

To see that {\bf{A}} behaves like the identity on the orthogonal complement, consider {\bf{x}} in the orthogonal complement of the space spanned by the columns of {\bf{B}}. Then {\bf{B}}^H{\bf{x}}={\bf{0}}, since each component of this product is the inner product of a column of {\bf{B}} with {\bf{x}}, and so is zero. Hence:

{\bf{Ax}} = \left[{\bf{I}}_{M\times M} - {\bf{B}} ({\bf{B}}^H{\bf{B}})^{-1} {\bf{B}}^H\right] {\bf{x}}

={\bf{x}}-{\bf{B}} ({\bf{B}}^H{\bf{B}})^{-1} {\bf{B}}^H {\bf{x}}

={\bf{x}}
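
Again continuing the sketch, the identity behaviour can be checked by projecting a random vector onto the orthogonal complement of the columns of {\bf{B}} and comparing {\bf{Ax}} with {\bf{x}}.

rng = np.random.default_rng(0)
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)
# Project x onto the orthogonal complement of span(B), then apply A.
x = x - B @ np.linalg.solve(B.conj().T @ B, B.conj().T @ x)
print(np.max(np.abs(A @ x - x)))  # ~1e-16, so A x = x as claimed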

(I hope that is all clear; WordPress LaTeX seems to have started deleting all the \ characters from the LaTeX strings again, so there may still be some omissions from where I have tried to restore them.)

Written by CaptainBlack

March 25, 2009 at 12:23

Posted in DSP, Maths and Stuff
