LPC question

Subject: LPC question
Posted by:  Jack (NOSP…@THANK.YOU)
Date: Mon, 15 May 2006

Hi,

I have read some literature about linear prediction.

However, I haven't been able to find any literature
that explains in mathematical terms exactly why
minimizing the variance of the residual leads
to an estimate of the all-pole filter coefficients.

The known output of an unknown 10th-order all-pole filter is:

x[k] = u[k] - sum_{q=1}^{10} a[q] x[k-q]

where the unknown input u[k] is white noise with
variance 1 and the unknown constants a[q] are the
coefficients of the all-pole filter.
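
To make this concrete, here is a rough Python (NumPy/SciPy) sketch
of the setup; the pole positions (and hence the a[q]) are invented
purely to have some stable, arbitrary filter to play with:

import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)

# Five complex-conjugate pole pairs inside the unit circle give a
# stable 10th-order monic polynomial 1 + a[1]z^-1 + ... + a[10]z^-10.
poles = 0.95 * np.exp(1j * np.pi * np.array([0.1, 0.25, 0.4, 0.6, 0.8]))
a_poly = np.poly(np.concatenate([poles, poles.conj()])).real  # a_poly[0] == 1

# Unit-variance white noise u[k] driving the "unknown" all-pole filter.
u = rng.standard_normal(100_000)

# x[k] = u[k] - sum_{q=1}^{10} a[q] x[k-q]
x = lfilter([1.0], a_poly, u)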

Now...if I send x[k] through a FIR filter I get:

e[k] = x[k] + sum_{q=1}^{10} b[q] x[k-q]

where the constants b[q] are the coefficients
of the FIR filter.
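
Continuing the same sketch, this FIR (prediction-error) filter is just
a convolution with the taps [1, b[1], ..., b[10]]; prediction_error is
my own helper name:

def prediction_error(x, b):
    # e[k] = x[k] + sum_{q=1}^{10} b[q] x[k-q]
    taps = np.concatenate(([1.0], np.asarray(b)))
    return lfilter(taps, [1.0], x)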

I can see that choosing b[q]=a[q] leads to

e[k]=u[k]

but I don't understand why minimizing the
variance of e[k] guarantees that the
resulting estimates of b[q] are close
to a[q].
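
Numerically it certainly seems to work. In the sketch above e[k] is
linear in the b[q], so minimizing the sample variance of e[k] is an
ordinary least-squares problem, and its solution appears to land right
on top of the a[q] that generated x:

# Sanity check: b[q] = a[q] gives back u[k] (exactly here, because
# both the all-pole filter and the FIR filter start from rest).
print(np.allclose(prediction_error(x, a_poly[1:]), u))   # True

# Least squares: predict -x[k] from x[k-1], ..., x[k-10], which is the
# same as minimizing sum_k e[k]^2 over the b[q].
P, N = 10, len(x)
X = np.column_stack([x[P - q : N - q] for q in range(1, P + 1)])
b_hat, *_ = np.linalg.lstsq(X, -x[P:], rcond=None)

print(np.round(b_hat, 3))
print(np.round(a_poly[1:], 3))   # b_hat matches these closely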

How do I prove that mathematically?

I haven't been able to find such a proof
with Google.

Maybe some of you guys could help me?

Thanks :o)
