Matched filter

      In signal processing, the output of the matched filter is given by correlating a known delayed signal, or template, with an unknown signal to detect the presence of the template in the unknown signal. This is equivalent to convolving the unknown signal with a conjugated time-reversed version of the template. The matched filter is the optimal linear filter for maximizing the signal-to-noise ratio (SNR) in the presence of additive stochastic noise.
      Matched filters are commonly used in radar, in which a known signal is sent out, and the reflected signal is examined for common elements of the out-going signal. Pulse compression is an example of matched filtering. It is so called because the impulse response is matched to input pulse signals. Two-dimensional matched filters are commonly used in image processing, e.g., to improve the SNR of X-ray observations. Additional applications of note are in seismology and gravitational-wave astronomy.
      Matched filtering is a demodulation technique that uses LTI (linear time-invariant) filters to maximize the SNR.
      It was originally also known as a North filter.
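      As a minimal sketch of the correlation view described above (hypothetical NumPy code; the template, offset, and noise level are invented for illustration), a known template buried in noise can be located by correlating the signal with the template:

```python
import numpy as np

rng = np.random.default_rng(0)

# A known template, and a noisy signal containing it at offset 40.
template = rng.standard_normal(16)
signal = rng.normal(0.0, 0.1, 128)
signal[40:56] += template

# For white noise the matched filter is the conjugated, time-reversed
# template; np.correlate applies exactly that (it conjugates its
# second argument for complex inputs).
output = np.correlate(signal, template, mode="valid")

# The peak of the matched-filter output marks the template's location.
detected_offset = int(np.argmax(np.abs(output)))
print(detected_offset)  # 40
```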


      Derivation




      = Derivation via matrix algebra =
      The following section derives the matched filter for a discrete-time system. The derivation for a continuous-time system is similar, with summations replaced with integrals.
      The matched filter is the linear filter, $h$, that maximizes the output signal-to-noise ratio:

      $$y[n] = \sum_{k=-\infty}^{\infty} h[n-k]\,x[k],$$


      where $x[k]$ is the input as a function of the independent variable $k$, and $y[n]$ is the filtered output. Though we most often express filters as the impulse response of convolution systems, as above (see LTI system theory), it is easiest to think of the matched filter in the context of the inner product, which we will see shortly.
      We can derive the linear filter that maximizes output signal-to-noise ratio by invoking a geometric argument. The intuition behind the matched filter relies on correlating the received signal (a vector) with a filter (another vector) that is parallel with the signal, maximizing the inner product. This enhances the signal. When we consider the additive stochastic noise, we have the additional challenge of minimizing the output due to noise by choosing a filter that is orthogonal to the noise.
      Let us formally define the problem. We seek a filter, $h$, such that we maximize the output signal-to-noise ratio, where the output is the inner product of the filter and the observed signal $x$.
      Our observed signal consists of the desirable signal $s$ and additive noise $v$:

      $$x = s + v.$$


      Let us define the auto-correlation matrix of the noise, reminding ourselves that this matrix has Hermitian symmetry, a property that will become useful in the derivation:

      $$R_v = E\{v v^{\mathrm{H}}\},$$


      where $v^{\mathrm{H}}$ denotes the conjugate transpose of $v$, and $E$ denotes expectation (note that in case the noise $v$ has zero mean, its auto-correlation matrix $R_v$ is equal to its covariance matrix).
      Let us call our output, $y$, the inner product of our filter and the observed signal such that

      $$y = \sum_{k=-\infty}^{\infty} h^{*}[k]\,x[k] = h^{\mathrm{H}}x = h^{\mathrm{H}}s + h^{\mathrm{H}}v = y_s + y_v.$$


      We now define the signal-to-noise ratio, which is our objective function, to be the ratio of the power of the output due to the desired signal to the power of the output due to the noise:

      $$\mathrm{SNR} = \frac{|y_s|^2}{E\{|y_v|^2\}}.$$


      We rewrite the above:

      $$\mathrm{SNR} = \frac{|h^{\mathrm{H}}s|^2}{E\{|h^{\mathrm{H}}v|^2\}}.$$


      We wish to maximize this quantity by choosing $h$. Expanding the denominator of our objective function, we have

      $$E\{|h^{\mathrm{H}}v|^2\} = E\{(h^{\mathrm{H}}v)(h^{\mathrm{H}}v)^{\mathrm{H}}\} = h^{\mathrm{H}}\,E\{v v^{\mathrm{H}}\}\,h = h^{\mathrm{H}} R_v h.$$


      Now, our $\mathrm{SNR}$ becomes

      $$\mathrm{SNR} = \frac{|h^{\mathrm{H}}s|^2}{h^{\mathrm{H}} R_v h}.$$


      We will rewrite this expression with some matrix manipulation. The reason for this seemingly counterproductive measure will become evident shortly. Exploiting the Hermitian symmetry of the auto-correlation matrix $R_v$, we can write

      $$\mathrm{SNR} = \frac{\left|(R_v^{1/2}h)^{\mathrm{H}}(R_v^{-1/2}s)\right|^2}{(R_v^{1/2}h)^{\mathrm{H}}(R_v^{1/2}h)}.$$


      We would like to find an upper bound on this expression. To do so, we first recognize a form of the Cauchy–Schwarz inequality:

      $$|a^{\mathrm{H}}b|^2 \leq (a^{\mathrm{H}}a)(b^{\mathrm{H}}b),$$


      which is to say that the square of the inner product of two vectors can only be as large as the product of the individual inner products of the vectors. This concept returns to the intuition behind the matched filter: this upper bound is achieved when the two vectors $a$ and $b$ are parallel. We resume our derivation by expressing the upper bound on our $\mathrm{SNR}$ in light of the geometric inequality above:





      $$\mathrm{SNR} = \frac{\left|(R_v^{1/2}h)^{\mathrm{H}}(R_v^{-1/2}s)\right|^2}{(R_v^{1/2}h)^{\mathrm{H}}(R_v^{1/2}h)} \leq \frac{\left[(R_v^{1/2}h)^{\mathrm{H}}(R_v^{1/2}h)\right]\left[(R_v^{-1/2}s)^{\mathrm{H}}(R_v^{-1/2}s)\right]}{(R_v^{1/2}h)^{\mathrm{H}}(R_v^{1/2}h)}.$$


      Our valiant matrix manipulation has now paid off. We see that the expression for our upper bound can be greatly simplified:

      $$\mathrm{SNR} = \frac{\left|(R_v^{1/2}h)^{\mathrm{H}}(R_v^{-1/2}s)\right|^2}{(R_v^{1/2}h)^{\mathrm{H}}(R_v^{1/2}h)} \leq s^{\mathrm{H}} R_v^{-1} s.$$


      We can achieve this upper bound if we choose

      $$R_v^{1/2} h = \alpha R_v^{-1/2} s,$$


      where $\alpha$ is an arbitrary real number. To verify this, we plug into our expression for the output $\mathrm{SNR}$:





      $$\mathrm{SNR} = \frac{\left|(R_v^{1/2}h)^{\mathrm{H}}(R_v^{-1/2}s)\right|^2}{(R_v^{1/2}h)^{\mathrm{H}}(R_v^{1/2}h)} = \frac{\alpha^2 \left|(R_v^{-1/2}s)^{\mathrm{H}}(R_v^{-1/2}s)\right|^2}{\alpha^2 (R_v^{-1/2}s)^{\mathrm{H}}(R_v^{-1/2}s)} = \frac{|s^{\mathrm{H}} R_v^{-1} s|^2}{s^{\mathrm{H}} R_v^{-1} s} = s^{\mathrm{H}} R_v^{-1} s.$$


      Thus, our optimal matched filter is

      $$h = \alpha R_v^{-1} s.$$


      We often choose to normalize the expected value of the power of the filter output due to the noise to unity. That is, we constrain

      $$E\{|y_v|^2\} = 1.$$


      This constraint implies a value of $\alpha$, for which we can solve:

      $$E\{|y_v|^2\} = \alpha^2 s^{\mathrm{H}} R_v^{-1} s = 1,$$


      yielding

      $$\alpha = \frac{1}{\sqrt{s^{\mathrm{H}} R_v^{-1} s}},$$


      giving us our normalized filter,

      $$h = \frac{1}{\sqrt{s^{\mathrm{H}} R_v^{-1} s}}\, R_v^{-1} s.$$


      If we care to write the impulse response $h$ of the filter for the convolution system, it is simply the complex conjugate time reversal of the input $s$.
      Though we have derived the matched filter in discrete time, we can extend the concept to continuous-time systems if we replace $R_v$ with the continuous-time autocorrelation function of the noise, assuming a continuous signal $s(t)$, continuous noise $v(t)$, and a continuous filter $h(t)$.
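      As a numerical sanity check of this derivation (a hypothetical NumPy sketch; the signal and noise covariance are invented), the filter $h = R_v^{-1}s$ attains the bound $s^{\mathrm{H}} R_v^{-1} s$, while arbitrary filters do not exceed it:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
s = rng.standard_normal(n)              # known (real) signal

# A random symmetric positive-definite noise auto-correlation matrix.
A = rng.standard_normal((n, n))
R_v = A @ A.T + n * np.eye(n)

def snr(h):
    """Output SNR |h^H s|^2 / (h^H R_v h) for a real filter h."""
    return (h @ s) ** 2 / (h @ R_v @ h)

bound = s @ np.linalg.solve(R_v, s)     # s^H R_v^{-1} s
h_opt = np.linalg.solve(R_v, s)         # h = R_v^{-1} s (alpha = 1)

print(np.isclose(snr(h_opt), bound))            # True
print(all(snr(rng.standard_normal(n)) <= bound + 1e-9
          for _ in range(100)))                 # True
```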


      = Derivation via Lagrangian =
      Alternatively, we may obtain the matched filter by solving our maximization problem with a Lagrangian. Again, the matched filter endeavors to maximize the output signal-to-noise ratio ($\mathrm{SNR}$) of a filtered deterministic signal in stochastic additive noise. The observed sequence, again, is

      $$x = s + v,$$

      with the noise auto-correlation matrix

      $$R_v = E\{v v^{\mathrm{H}}\}.$$


      The signal-to-noise ratio is

      $$\mathrm{SNR} = \frac{|y_s|^2}{E\{|y_v|^2\}},$$

      where $y_s = h^{\mathrm{H}} s$ and $y_v = h^{\mathrm{H}} v$.
      Evaluating the expression in the numerator, we have

      $$|y_s|^2 = y_s^{\mathrm{H}} y_s = h^{\mathrm{H}} s s^{\mathrm{H}} h,$$


      and in the denominator,

      $$E\{|y_v|^2\} = E\{y_v^{\mathrm{H}} y_v\} = E\{h^{\mathrm{H}} v v^{\mathrm{H}} h\} = h^{\mathrm{H}} R_v h.$$


      The signal-to-noise ratio becomes

      $$\mathrm{SNR} = \frac{h^{\mathrm{H}} s s^{\mathrm{H}} h}{h^{\mathrm{H}} R_v h}.$$


      If we now constrain the denominator to be 1, the problem of maximizing $\mathrm{SNR}$ is reduced to maximizing the numerator. We can then formulate the problem using a Lagrange multiplier:

      $$h^{\mathrm{H}} R_v h = 1$$

      $$\mathcal{L} = h^{\mathrm{H}} s s^{\mathrm{H}} h + \lambda\,(1 - h^{\mathrm{H}} R_v h)$$

      $$\nabla_{h^{*}} \mathcal{L} = s s^{\mathrm{H}} h - \lambda R_v h = 0$$

      $$(s s^{\mathrm{H}})\,h = \lambda R_v h,$$


      which we recognize as a generalized eigenvalue problem

      $$h^{\mathrm{H}} (s s^{\mathrm{H}})\,h = \lambda\, h^{\mathrm{H}} R_v h.$$


      Since $s s^{\mathrm{H}}$ is of unit rank, it has only one nonzero eigenvalue. It can be shown that this eigenvalue equals

      $$\lambda_{\max} = s^{\mathrm{H}} R_v^{-1} s,$$


      yielding the following optimal matched filter

      $$h = \frac{1}{\sqrt{s^{\mathrm{H}} R_v^{-1} s}}\, R_v^{-1} s.$$


      This is the same result found in the previous subsection.
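      The rank-one eigenvalue identity used above can be checked numerically (a hypothetical NumPy sketch; the signal and covariance are invented): the only nonzero eigenvalue of $R_v^{-1} s s^{\mathrm{H}}$ equals $s^{\mathrm{H}} R_v^{-1} s$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
s = rng.standard_normal(n)

A = rng.standard_normal((n, n))
R_v = A @ A.T + n * np.eye(n)           # positive-definite noise covariance

# The generalized problem (s s^H) h = lambda R_v h is equivalent to the
# ordinary eigenproblem of M = R_v^{-1} (s s^H), which has rank one.
M = np.linalg.solve(R_v, np.outer(s, s))
eigvals = np.linalg.eigvals(M)

lam_max = float(np.max(eigvals.real))
print(np.isclose(lam_max, s @ np.linalg.solve(R_v, s)))  # True
print(int(np.sum(np.abs(eigvals) > 1e-8)))               # 1
```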


      Interpretation as a least-squares estimator




      = Derivation =
      Matched filtering can also be interpreted as a least-squares estimator for the optimal location and scaling of a given model or template. Once again, let the observed sequence be defined as

      $$x_k = s_k + v_k,$$


      where $v_k$ is uncorrelated zero-mean noise. The signal $s_k$ is assumed to be a scaled and shifted version of a known model sequence $f_k$:

      $$s_k = \mu_0 \cdot f_{k-j_0}.$$


      We want to find optimal estimates $j^{*}$ and $\mu^{*}$ for the unknown shift $j_0$ and scaling $\mu_0$ by minimizing the least-squares residual between the observed sequence $x_k$ and a "probing sequence" $h_{j-k}$:

      $$j^{*}, \mu^{*} = \arg\min_{j,\mu} \sum_k \left( x_k - \mu \cdot h_{j-k} \right)^2.$$


      The appropriate $h_{j-k}$ will later turn out to be the matched filter, but is as yet unspecified. Expanding $x_k$ and the square within the sum yields

      $$j^{*}, \mu^{*} = \arg\min_{j,\mu} \left[ \sum_k (s_k + v_k)^2 + \mu^2 \sum_k h_{j-k}^2 - 2\mu \sum_k s_k h_{j-k} - 2\mu \sum_k v_k h_{j-k} \right].$$


      The first term in brackets is a constant (since the observed signal is given) and has no influence on the optimal solution. The last term has constant expected value because the noise is uncorrelated and has zero mean. We can therefore drop both terms from the optimization. After reversing the sign, we obtain the equivalent optimization problem






      $$j^{*}, \mu^{*} = \arg\max_{j,\mu} \left[ 2\mu \sum_k s_k h_{j-k} - \mu^2 \sum_k h_{j-k}^2 \right].$$


      Setting the derivative w.r.t. $\mu$ to zero gives an analytic solution for $\mu^{*}$:

      $$\mu^{*} = \frac{\sum_k s_k h_{j-k}}{\sum_k h_{j-k}^2}.$$


      Inserting this into our objective function yields a reduced maximization problem for just $j^{*}$:

      $$j^{*} = \arg\max_j \frac{\left( \sum_k s_k h_{j-k} \right)^2}{\sum_k h_{j-k}^2}.$$


      The numerator can be upper-bounded by means of the Cauchy–Schwarz inequality:

      $$\frac{\left( \sum_k s_k h_{j-k} \right)^2}{\sum_k h_{j-k}^2} \leq \frac{\sum_k s_k^2 \cdot \sum_k h_{j-k}^2}{\sum_k h_{j-k}^2} = \sum_k s_k^2 = \text{constant}.$$


      The optimization problem assumes its maximum when equality holds in this expression. According to the properties of the Cauchy–Schwarz inequality, this is only possible when

      $$h_{j-k} = \nu \cdot s_k = \kappa \cdot f_{k-j_0},$$


      for arbitrary non-zero constants $\nu$ or $\kappa$, and the optimal solution is obtained at $j^{*} = j_0$, as desired. Thus, our "probing sequence" $h_{j-k}$ must be proportional to the signal model $f_{k-j_0}$, and the convenient choice $\kappa = 1$ yields the matched filter

      $$h_k = f_{-k}.$$


      Note that the filter is the mirrored signal model. This ensures that the operation to be applied in order to find the optimum, $\sum_k x_k h_{j-k}$, is indeed the convolution between the observed sequence $x_k$ and the matched filter $h_k$. The filtered sequence assumes its maximum at the position where the observed sequence $x_k$ best matches (in a least-squares sense) the signal model $f_k$.
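      The shift-and-scale estimation above can be sketched as follows (hypothetical NumPy code; the model sequence, shift $j_0$, and scaling $\mu_0$ are invented): correlating with the mirrored template recovers the shift, and the closed-form expression recovers the scaling.

```python
import numpy as np

rng = np.random.default_rng(3)

# Model sequence f_k, observed as x_k = mu_0 * f_{k - j_0} + noise.
f = rng.standard_normal(20)
j0, mu0 = 35, 2.5
x = rng.normal(0.0, 0.1, 100)
x[j0:j0 + 20] += mu0 * f

# With h_k = f_{-k}, the sum over k of x_k h_{j-k} is the
# cross-correlation of x with f.
corr = np.correlate(x, f, mode="valid")    # corr[j] = sum_k x[j+k] f[k]
energy = np.sum(f ** 2)

j_hat = int(np.argmax(corr ** 2 / energy))
mu_hat = corr[j_hat] / energy

print(j_hat)                               # 35
print(abs(mu_hat - mu0) < 0.1)             # True
```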


      = Implications =
      The matched filter may be derived in a variety of ways, but as a special case of a least-squares procedure it may also be interpreted as a maximum likelihood method in the context of a (coloured) Gaussian noise model and the associated Whittle likelihood.
      If the transmitted signal possessed no unknown parameters (like time-of-arrival, amplitude,...), then the matched filter would, according to the Neyman–Pearson lemma, minimize the error probability. However, since the exact signal generally is determined by unknown parameters that effectively are estimated (or fitted) in the filtering process, the matched filter constitutes a generalized maximum likelihood (test-) statistic. The filtered time series may then be interpreted as (proportional to) the profile likelihood, the maximized conditional likelihood as a function of the ("arrival") time parameter.
      This implies in particular that the error probability (in the sense of Neyman and Pearson, i.e., concerning maximization of the detection probability for a given false-alarm probability) is not necessarily optimal.
      What is commonly referred to as the signal-to-noise ratio (SNR), which is supposed to be maximized by a matched filter, in this context corresponds to $\sqrt{2\log(\mathcal{L})}$, where $\mathcal{L}$ is the (conditionally) maximized likelihood ratio.
      The construction of the matched filter is based on a known noise spectrum. In practice, however, the noise spectrum is usually estimated from data and hence only known up to a limited precision. For the case of an uncertain spectrum, the matched filter may be generalized to a more robust iterative procedure with favourable properties also in non-Gaussian noise.


      Frequency-domain interpretation


      When viewed in the frequency domain, it is evident that the matched filter applies the greatest weighting to spectral components exhibiting the greatest signal-to-noise ratio (i.e., large weight where noise is relatively low, and vice versa). In general this requires a non-flat frequency response, but the associated "distortion" is no cause for concern in situations such as radar and digital communications, where the original waveform is known and the objective is the detection of this signal against the background noise. On the technical side, the matched filter is a weighted least-squares method based on the (heteroscedastic) frequency-domain data (where the "weights" are determined via the noise spectrum, see also previous section), or equivalently, a least-squares method applied to the whitened data.
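      A discrete sketch of this weighting (hypothetical NumPy code; the pulse shape and noise spectrum are invented): the filter's frequency response is the conjugated signal spectrum divided by the noise power spectrum, so spectral components with weak noise receive the greatest relative weight.

```python
import numpy as np

n = 64
t = np.arange(n)
s = np.exp(-0.5 * ((t - 32) / 4.0) ** 2)   # known pulse shape

# Assumed known noise power spectrum: noise concentrated at low frequencies.
freqs = np.fft.fftfreq(n)
noise_psd = 1.0 + 10.0 / (1.0 + (freqs / 0.05) ** 2)

# Frequency-domain matched filter: conjugate of the signal spectrum,
# down-weighted wherever the noise spectrum is strong.
S = np.fft.fft(s)
H = np.conj(S) / noise_psd

# Relative weighting |H|/|S| = 1/noise_psd: largest where noise is weakest.
rel = np.abs(H) / np.maximum(np.abs(S), 1e-300)
print(int(np.argmax(rel)) == int(np.argmin(noise_psd)))  # True
```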


      Examples




      = Radar and sonar =
      Matched filters are often used in signal detection. As an example, suppose that we wish to judge the distance of an object by reflecting a signal off it. We may choose to transmit a pure-tone sinusoid at 1 Hz. We assume that our received signal is an attenuated and phase-shifted form of the transmitted signal with added noise.
      To judge the distance of the object, we correlate the received signal with a matched filter, which, in the case of white (uncorrelated) noise, is another pure-tone 1-Hz sinusoid. When the output of the matched filter system exceeds a certain threshold, we conclude with high probability that the received signal has been reflected off the object. Using the speed of propagation and the time at which we first observe the reflected signal, we can estimate the distance of the object. If we change the shape of the pulse in a specially designed way, the signal-to-noise ratio and the distance resolution can even be improved after matched filtering: this is a technique known as pulse compression.
      Additionally, matched filters can be used in parameter estimation problems (see estimation theory). To return to our previous example, we may desire to estimate the speed of the object, in addition to its position. To exploit the Doppler effect, we would like to estimate the frequency of the received signal. To do so, we may correlate the received signal with several matched filters of sinusoids at varying frequencies. The matched filter with the highest output will reveal, with high probability, the frequency of the reflected signal and help us determine the radial velocity of the object, i.e. the relative speed either directly towards or away from the observer. This method is, in fact, a simple version of the discrete Fourier transform (DFT). The DFT takes an $N$-valued complex input and correlates it with $N$ matched filters, corresponding to complex exponentials at $N$ different frequencies, to yield $N$ complex-valued numbers corresponding to the relative amplitudes and phases of the sinusoidal components (see Moving target indication).
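      The frequency-estimation idea can be sketched as follows (hypothetical NumPy code; the tone frequency and noise level are invented): correlate the received tone with complex exponentials at candidate frequencies and pick the strongest response, which is exactly what the DFT bins compute.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 256
t = np.arange(n)
true_bin = 17                                # frequency of the received tone
received = np.cos(2 * np.pi * true_bin * t / n) + rng.normal(0.0, 0.5, n)

# Bank of matched filters: one complex exponential per candidate frequency.
bins = np.arange(1, n // 2)
outputs = [abs(np.sum(received * np.exp(-2j * np.pi * k * t / n)))
           for k in bins]

best = int(bins[int(np.argmax(outputs))])
print(best)  # 17
```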


      = Digital communications =
      The matched filter is also used in communications. In the context of a communication system that sends binary messages from the transmitter to the receiver across a noisy channel, a matched filter can be used to detect the transmitted pulses in the noisy received signal.

      Imagine we want to send the sequence "0101100100" coded in polar non-return-to-zero (NRZ) through a certain channel.
      Mathematically, a sequence in NRZ code can be described as a sequence of unit pulses or shifted rect functions, each pulse being weighted by +1 if the bit is "1" and by -1 if the bit is "0". Formally, the scaling factor for the $k^{\mathrm{th}}$ bit is






      $$a_k = \begin{cases} +1, & \text{if bit } k \text{ is } 1, \\ -1, & \text{if bit } k \text{ is } 0. \end{cases}$$


      We can represent our message, M(t), as the sum of shifted unit pulses:

      M(t) = \sum_{k=-\infty}^{\infty} a_k \, \Pi\left( \frac{t - kT}{T} \right),

      where T is the time length of one bit and \Pi(x) is the rectangular function.
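      The message construction above can be sketched in a few lines. This is a minimal illustration, assuming a discrete-time representation in which each bit period T is held for a fixed number of samples (8 here, an arbitrary choice for the example):

```python
def nrz_encode(bits, samples_per_bit=8):
    """Build a sampled M(t): hold a_k = +1 for bit "1" and a_k = -1
    for bit "0" over one bit period T (a shifted rect pulse)."""
    signal = []
    for b in bits:
        a_k = 1.0 if b == "1" else -1.0
        signal.extend([a_k] * samples_per_bit)
    return signal

m = nrz_encode("0101100100")
print(len(m))      # 80 samples: 10 bits x 8 samples per bit
print(m[0], m[8])  # -1.0 1.0  (bit "0", then bit "1")
```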
      Thus, the signal to be sent by the transmitter is the waveform M(t) defined above.

      If we model our noisy channel as an AWGN channel, white Gaussian noise is added to the signal. At the receiver end, for a signal-to-noise ratio of 3 dB, the received waveform is visibly corrupted by noise.

      A first glance will not reveal the original transmitted sequence: the noise power is high relative to the power of the desired signal (i.e., the signal-to-noise ratio is low). If the receiver were to sample this signal at the correct moments, the resulting binary message could be incorrect.
      To increase our signal-to-noise ratio, we pass the received signal through a matched filter. In this case, the filter should be matched to an NRZ pulse (equivalent to a "1" coded in NRZ code). Precisely, the impulse response of the ideal matched filter, assuming white (uncorrelated) noise, should be a time-reversed, complex-conjugated, scaled version of the signal that we are seeking. We choose

      h(t) = \Pi\left( \frac{t}{T} \right).

      In this case, due to symmetry, the time-reversed complex conjugate of h(t) is in fact h(t) itself, allowing us to call h(t) the impulse response of our matched filter convolution system.
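      The convolution with the rect impulse response can be written out directly. The sketch below assumes a sampled signal with 4 samples per bit period (an illustrative choice); convolving polar NRZ pulses with the sampled rect filter produces triangular ramps that peak, at the end of each bit period, at plus or minus the pulse energy:

```python
def convolve(x, h):
    """Direct discrete convolution: y[n] = sum_k h[n-k] x[k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += h[n - k] * x[k]
    return y

SPB = 4                            # samples per bit period T
h = [1.0] * SPB                    # sampled rect filter Pi(t/T)
x = [1.0] * SPB + [-1.0] * SPB     # bits "1", "0" in polar NRZ
y = convolve(x, h)
# The output peaks at the end of each bit period:
print(y[SPB - 1], y[2 * SPB - 1])  # 4.0 -4.0
```

      Sampling the filter output at these peak instants, rather than sampling the raw waveform, is what buys the SNR gain.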
      After convolving with the correct matched filter, the resulting signal, M_filtered(t), is

      M_{\mathrm{filtered}}(t) = (M * h)(t),

      where * denotes convolution.

      The filtered signal can now be safely sampled by the receiver at the correct sampling instants and compared to an appropriate threshold, resulting in a correct interpretation of the binary message.
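      The whole receive chain, from AWGN channel to matched filtering, sampling, and thresholding, can be sketched as follows. This is a minimal illustration under assumed parameters (8 samples per bit, a fixed random seed, and noise milder than the article's 3 dB example so that the short run decodes reliably); for the rect pulse, the matched-filter output at each sampling instant reduces to a sum over the bit period:

```python
import random

SPB = 8  # samples per bit period T

def nrz_encode(bits):
    """Polar NRZ: +1 for "1", -1 for "0", held for SPB samples."""
    return [(1.0 if b == "1" else -1.0) for b in bits for _ in range(SPB)]

def matched_filter_decode(received):
    """Matched filtering with the rect pulse, evaluated only at the
    sampling instants: sum each bit period, then threshold at zero."""
    decoded = []
    for k in range(len(received) // SPB):
        y = sum(received[k * SPB:(k + 1) * SPB])
        decoded.append("1" if y > 0 else "0")
    return "".join(decoded)

random.seed(42)                                  # reproducible noise
message = "0101100100"
tx = nrz_encode(message)
rx = [s + random.gauss(0.0, 0.35) for s in tx]   # AWGN channel
print(matched_filter_decode(rx) == message)      # recovers the message
```

      Summing SPB samples grows the signal term linearly but the noise standard deviation only as the square root of SPB, which is the SNR improvement the matched filter provides.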


      = Gravitational-wave astronomy =
      Matched filters play a central role in gravitational-wave astronomy. The first observation of gravitational waves was based on large-scale filtering of each detector's output for signals resembling the expected shape, followed by subsequent screening for coincident and coherent triggers between both instruments. False-alarm rates, and with them the statistical significance of the detection, were then assessed using resampling methods. Inference on the astrophysical source parameters was completed using Bayesian methods based on parameterized theoretical models for the signal waveform and (again) on the Whittle likelihood.


      = Seismology =
      Matched filters find use in seismology to detect similar earthquake or other seismic signals, often using multicomponent and/or multichannel empirically determined templates. Matched filtering applications in seismology include the generation of large event catalogues to study earthquake seismicity and volcanic activity, and in the global detection of nuclear explosions.


      = Biology =
      Animals living in relatively static environments have relatively fixed features of the environment to perceive. This allows the evolution of filters that match the expected signal with the highest signal-to-noise ratio, the matched filter. Perceiving the world "through such a 'matched filter' severely limits the amount of information the brain can pick up from the outside world, but it frees the brain from the need to perform more intricate computations to extract the information finally needed for fulfilling a particular task."


      See also


      Periodogram
      Filtered backprojection (Radon transform)
      Digital filter
      Statistical signal processing
      Whittle likelihood
      Profile likelihood
      Detection theory
      Multiple comparisons problem
      Channel capacity
      Noisy-channel coding theorem
      Spectral density estimation
      Least mean squares (LMS) filter
      Wiener filter
      MUltiple SIgnal Classification (MUSIC), a popular parametric superresolution method
      SAMV



      Further reading


      Turin, G. L. (1960). "An introduction to matched filters". IRE Transactions on Information Theory. 6 (3): 311–329. doi:10.1109/TIT.1960.1057571. S2CID 5128742.
      Wainstein, L. A.; Zubakov, V. D. (1962). Extraction of signals from noise. Englewood Cliffs, NJ: Prentice-Hall.
      Melvin, W. L. (2004). "A STAP overview". IEEE Aerospace and Electronic Systems Magazine. 19 (1): 19–35. doi:10.1109/MAES.2004.1263229. S2CID 31133715.
      Röver, C. (2011). "Student-t based filter for robust signal detection". Physical Review D. 84 (12): 122004. arXiv:1109.0442. Bibcode:2011PhRvD..84l2004R. doi:10.1103/PhysRevD.84.122004.
      Fish, A.; Gurevich, S.; Hadani, R.; Sayeed, A.; Schwartz, O. (December 2011). "Computing the matched filter in linear time". arXiv:1112.4883 [cs.IT].
