
Indefinite sum

In discrete calculus the indefinite sum operator (also known as the antidifference operator), denoted by $\textstyle\sum_{x}$ or $\Delta^{-1}$, is the linear operator inverse of the forward difference operator $\Delta$. It relates to the forward difference operator as the indefinite integral relates to the derivative. Thus

$$\Delta \sum_{x} f(x) = f(x)\,.$$


More explicitly, if $\textstyle\sum_{x} f(x) = F(x)$, then

$$F(x+1) - F(x) = f(x)\,.$$


If F(x) is a solution of this functional equation for a given f(x), then so is F(x)+C(x) for any periodic function C(x) with period 1. Therefore, every indefinite sum actually represents a family of functions. However, by Carlson's theorem, the solution equal to its Newton series expansion is unique up to an additive constant C. This unique solution can be represented by the formal power series form of the antidifference operator:




$$\Delta^{-1} = \frac{1}{e^{D} - 1},$$

where $D$ is the derivative operator.


Fundamental theorem of discrete calculus

Indefinite sums can be used to calculate definite sums with the formula:

$$\sum_{k=a}^{b} f(k) = \Delta^{-1} f(b+1) - \Delta^{-1} f(a)$$
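As a quick numerical illustration (a minimal sketch, not from the source — the function names `f` and `F` and the choice f(k) = k are ours), the theorem can be checked in a few lines using the antidifference F(x) = x(x−1)/2 of f(x) = x:

```python
# Illustrative check: f(k) = k has antidifference F(x) = x*(x-1)/2,
# since F(x+1) - F(x) = x. The fundamental theorem then gives the
# definite sum from a to b as F(b+1) - F(a).

def f(k):
    return k

def F(x):
    # An antidifference of f: F(x+1) - F(x) = f(x)
    return x * (x - 1) // 2

a, b = 3, 10
direct = sum(f(k) for k in range(a, b + 1))   # 3 + 4 + ... + 10
via_antidifference = F(b + 1) - F(a)
print(direct, via_antidifference)  # both are 52
```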



Definitions

= Laplace summation formula =

The Laplace summation formula allows the indefinite sum to be written as the indefinite integral plus correction terms obtained from iterating the difference operator, although it was originally developed for the reverse process of writing an integral as an indefinite sum plus correction terms. As usual with indefinite sums and indefinite integrals, it is valid up to an arbitrary choice of the constant of integration. Using operator algebra avoids cluttering the formula with repeated copies of the function to be operated on:







$$\sum_{x} = \int {} + \frac{1}{2} - \frac{1}{12}\Delta + \frac{1}{24}\Delta^{2} - \frac{19}{720}\Delta^{3} + \frac{3}{160}\Delta^{4} - \cdots$$


In this formula, for instance, the term $\tfrac{1}{2}$ represents an operator that divides the given function by two. The coefficients $+\tfrac{1}{2}$, $-\tfrac{1}{12}$, etc., appearing in this formula are the Gregory coefficients, also called Laplace numbers. The coefficient in the term $\Delta^{n-1}$ is

$$\frac{\mathcal{C}_{n}}{n!} = \int_{0}^{1} \binom{x}{n}\,dx$$

where the numerator $\mathcal{C}_{n}$ of the left-hand side is called a Cauchy number of the first kind, although this name sometimes applies to the Gregory coefficients themselves.


= Newton's formula =

$$\sum_{x} f(x) = \sum_{k=1}^{\infty} \binom{x}{k} \Delta^{k-1}[f](0) + C = \sum_{k=1}^{\infty} \frac{\Delta^{k-1}[f](0)}{k!} (x)_{k} + C$$

where

$$(x)_{k} = \frac{\Gamma(x+1)}{\Gamma(x-k+1)}$$

is the falling factorial.
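For a polynomial f the Newton series terminates, so the formula can be checked directly. The sketch below (our own illustration; the names are arbitrary) builds the forward differences Δ^{k-1}[f](0) for f(x) = x² and verifies the defining relation Δ ∑_x f = f:

```python
from math import comb

def f(x):
    return x * x

# Forward differences Delta^{k-1}[f](0), k = 1..N, via a difference table
N = 5
row = [f(i) for i in range(N + 1)]
diffs = []
for _ in range(N):
    diffs.append(row[0])  # Delta^{k-1}[f](0)
    row = [row[i + 1] - row[i] for i in range(len(row) - 1)]

def indefinite_sum(x):
    # Newton's formula with the constant C omitted
    return sum(comb(x, k) * diffs[k - 1] for k in range(1, N + 1))

# The forward difference of the indefinite sum recovers f
for x in range(1, 8):
    assert indefinite_sum(x + 1) - indefinite_sum(x) == f(x)
print(indefinite_sum(5))  # 0 + 1 + 4 + 9 + 16 = 30
```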


= Faulhaber's formula =

$$\sum_{x} f(x) = \sum_{n=1}^{\infty} \frac{f^{(n-1)}(0)}{n!} B_{n}(x) + C\,,$$

where $B_{n}(x)$ denotes a Bernoulli polynomial, provided that the right-hand side of the equation converges.


= Mueller's formula =

If

$$\lim_{x \to +\infty} f(x) = 0,$$

then

$$\sum_{x} f(x) = \sum_{n=0}^{\infty} \left( f(n) - f(n+x) \right) + C.$$
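Since the formula only requires f to vanish at +∞, it can be spot-checked numerically. The sketch below (our illustration, with an arbitrary truncation N of the series) uses f(x) = 2^{−x} and verifies the defining relation Δ ∑_x f = f:

```python
def f(x):
    return 2.0 ** (-x)   # f(x) -> 0 as x -> +infinity

def indefinite_sum(x, N=200):
    # Mueller's formula, truncated at N terms (constant C omitted)
    return sum(f(n) - f(n + x) for n in range(N))

# The series telescopes, so the forward difference recovers f:
for x in [0.5, 1.0, 2.3]:
    delta = indefinite_sum(x + 1) - indefinite_sum(x)
    assert abs(delta - f(x)) < 1e-9
```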



= Euler–Maclaurin formula =

$$\sum_{x} f(x) = \int_{0}^{x} f(t)\,dt - \frac{1}{2} f(x) + \sum_{k=1}^{\infty} \frac{B_{2k}}{(2k)!} f^{(2k-1)}(x) + C$$



Choice of the constant term

Often the constant C in the indefinite sum is fixed by the following condition. Let

$$F(x) = \sum_{x} f(x) + C$$

Then the constant C is fixed by the condition

$$\int_{0}^{1} F(x)\,dx = 0$$


or

$$\int_{1}^{2} F(x)\,dx = 0$$

Alternatively, Ramanujan's sum can be used:

$$\sum_{x \geq 1}^{\Re} f(x) = -f(0) - F(0)$$

or at 1

$$\sum_{x \geq 1}^{\Re} f(x) = -F(1)$$

respectively.


Summation by parts

Indefinite summation by parts:

$$\sum_{x} f(x)\Delta g(x) = f(x)g(x) - \sum_{x} \left( g(x) + \Delta g(x) \right) \Delta f(x)$$

$$\sum_{x} f(x)\Delta g(x) + \sum_{x} g(x)\Delta f(x) = f(x)g(x) - \sum_{x} \Delta f(x)\,\Delta g(x)$$


Definite summation by parts:

$$\sum_{i=a}^{b} f(i)\Delta g(i) = f(b+1)g(b+1) - f(a)g(a) - \sum_{i=a}^{b} g(i+1)\Delta f(i)$$
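The definite rule is easy to verify numerically; the sketch below (our illustrative choice of f(i) = i and g(i) = 2^i) checks that both sides agree:

```python
def f(i): return i
def g(i): return 2 ** i

def df(i): return f(i + 1) - f(i)   # forward difference of f
def dg(i): return g(i + 1) - g(i)   # forward difference of g

a, b = 1, 6
lhs = sum(f(i) * dg(i) for i in range(a, b + 1))
rhs = (f(b + 1) * g(b + 1) - f(a) * g(a)
       - sum(g(i + 1) * df(i) for i in range(a, b + 1)))
assert lhs == rhs
print(lhs)  # 642
```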



Period rules

If $T$ is a period of function $f(x)$, then

$$\sum_{x} f(Tx) = x f(Tx) + C$$

If $T$ is an antiperiod of function $f(x)$, that is, $f(x+T) = -f(x)$, then

$$\sum_{x} f(Tx) = -\frac{1}{2} f(Tx) + C$$
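Both rules follow from how f(Tx) behaves under a unit shift of x; the first can be spot-checked numerically (our illustration, with f(x) = sin 2πx and period T = 1):

```python
import math

def f(x):
    return math.sin(2 * math.pi * x)   # period T = 1

def F(x):
    return x * f(1 * x)                # x * f(Tx) with T = 1, C = 0

# The forward difference of F should recover f(Tx):
for x in [0.25, 1.7, 3.1]:
    assert abs((F(x + 1) - F(x)) - f(x)) < 1e-9
```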



Alternative usage

Some authors use the phrase "indefinite sum" to describe a sum in which the numerical value of the upper limit is not given:

$$\sum_{k=1}^{n} f(k).$$

In this case a closed-form expression F(k) for the sum is a solution of

$$F(x+1) - F(x) = f(x+1)$$

which is called the telescoping equation. It is the inverse of the backward difference operator $\nabla$. It is related to the forward antidifference operator via the fundamental theorem of discrete calculus described earlier.


List of indefinite sums

This is a list of indefinite sums of various functions. Not every function has an indefinite sum that can be expressed in terms of elementary functions.

= Antidifferences of rational functions =

$$\sum_{x} a = ax + C$$


From this, $a$ can be factored out, leaving 1, with the alternative form $x^{0}$. From that, we have

$$\sum_{x} x^{0} = x$$

For the sum below, remember that $x = x^{1}$:

$$\sum_{x} x = \frac{x(x-1)}{2} + C$$


For positive integer exponents Faulhaber's formula can be used. For negative integer exponents,

$$\sum_{x} \frac{1}{x^{a}} = \frac{(-1)^{a+1}\,\psi^{(a-1)}(x)}{(a-1)!} + C,\quad a \in \mathbb{Z}^{+},$$

where $\psi^{(n)}(x)$ is the polygamma function.
More generally,

$$\sum_{x} x^{a} = \begin{cases} -\zeta(-a, x+1) + C_{1}, & \text{if } a \neq -1 \\ \psi(x+1) + C_{2}, & \text{if } a = -1 \end{cases}$$


where $\zeta(s,a)$ is the Hurwitz zeta function and $\psi(z)$ is the digamma function. $C_{1}$ and $C_{2}$ are constants which would normally be set to $\zeta(-a)$ (where $\zeta(s)$ is the Riemann zeta function) and the Euler–Mascheroni constant, respectively. Replacing the variable $a$ with $-a$ gives the generalized harmonic number. For the relation between the Hurwitz zeta and polygamma functions, refer to the balanced polygamma function and Hurwitz zeta function § Special cases and generalizations.
From this, using

$$\frac{\partial}{\partial a}\zeta(s,a) = -s\,\zeta(s+1,a),$$

another form can be obtained:

$$\sum_{x} x^{a} = \int_{0}^{x} -a\,\zeta(1-a, u+1)\,du + C,\quad \text{if } a \neq -1$$








$$\sum_{x} B_{a}(x) = (x-1)B_{a}(x) - \frac{a}{a+1} B_{a+1}(x) + C$$

where $B_{a}(x)$ is a Bernoulli polynomial.



= Antidifferences of exponential functions =

$$\sum_{x} a^{x} = \frac{a^{x}}{a-1} + C$$

Particularly,

$$\sum_{x} 2^{x} = 2^{x} + C$$
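Both identities amount to the geometric difference Δa^x = (a−1)a^x, which the following sketch (our illustration, with an arbitrary a = 3) verifies:

```python
a = 3.0

def F(x):
    return a ** x / (a - 1)   # claimed antidifference of a**x

for x in [0.0, 1.5, 4.0]:
    assert abs((F(x + 1) - F(x)) - a ** x) < 1e-9

# For a = 2 the antidifference of 2**x is 2**x itself:
assert 2 ** (4 + 1) - 2 ** 4 == 2 ** 4
```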



= Antidifferences of logarithmic functions =

$$\sum_{x} \log_{b} x = \log_{b}(x!) + C$$

$$\sum_{x} \log_{b} ax = \log_{b}(x!\,a^{x}) + C$$



= Antidifferences of hyperbolic functions =

$$\sum_{x} \sinh ax = \frac{1}{2}\operatorname{csch}\left(\frac{a}{2}\right)\cosh\left(\frac{a}{2} - ax\right) + C$$

$$\sum_{x} \cosh ax = \frac{1}{2}\operatorname{csch}\left(\frac{a}{2}\right)\sinh\left(ax - \frac{a}{2}\right) + C$$

$$\sum_{x} \tanh ax = \frac{1}{a}\psi_{e^{a}}\left(x - \frac{i\pi}{2a}\right) + \frac{1}{a}\psi_{e^{a}}\left(x + \frac{i\pi}{2a}\right) - x + C$$

where $\psi_{q}(x)$ is the q-digamma function.


= Antidifferences of trigonometric functions =

$$\sum_{x} \sin ax = -\frac{1}{2}\csc\left(\frac{a}{2}\right)\cos\left(\frac{a}{2} - ax\right) + C\,,\quad a \neq 2n\pi$$

$$\sum_{x} \cos ax = \frac{1}{2}\csc\left(\frac{a}{2}\right)\sin\left(ax - \frac{a}{2}\right) + C\,,\quad a \neq 2n\pi$$
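These follow from the product-to-sum identities; the sine case can be spot-checked numerically (our illustration, with an arbitrary a = 1.3):

```python
import math

a = 1.3   # any a that is not a multiple of 2*pi

def F(x):
    # Claimed antidifference of sin(a*x), with C = 0
    return -0.5 / math.sin(a / 2) * math.cos(a / 2 - a * x)

# The forward difference of F should recover sin(a*x):
for x in [0.0, 0.7, 2.0]:
    assert abs((F(x + 1) - F(x)) - math.sin(a * x)) < 1e-9
```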








$$\sum_{x} \sin^{2} ax = \frac{x}{2} + \frac{1}{4}\csc(a)\sin(a - 2ax) + C\,,\quad a \neq n\pi$$

$$\sum_{x} \cos^{2} ax = \frac{x}{2} - \frac{1}{4}\csc(a)\sin(a - 2ax) + C\,,\quad a \neq n\pi$$








$$\sum_{x} \tan ax = ix - \frac{1}{a}\psi_{e^{2ia}}\left(x - \frac{\pi}{2a}\right) + C\,,\quad a \neq \frac{n\pi}{2}$$

where $\psi_{q}(x)$ is the q-digamma function.







$$\sum_{x} \tan x = ix - \psi_{e^{2i}}\left(x + \frac{\pi}{2}\right) + C = -\sum_{k=1}^{\infty}\left(\psi\left(k\pi - \frac{\pi}{2} + 1 - x\right) + \psi\left(k\pi - \frac{\pi}{2} + x\right) - \psi\left(k\pi - \frac{\pi}{2} + 1\right) - \psi\left(k\pi - \frac{\pi}{2}\right)\right) + C$$








$$\sum_{x} \cot ax = -ix - \frac{i\,\psi_{e^{2ia}}(x)}{a} + C\,,\quad a \neq \frac{n\pi}{2}$$








$$\sum_{x} \operatorname{sinc} x = \operatorname{sinc}(x-1)\left(\frac{1}{2} + (x-1)\left(\ln(2) + \frac{\psi\left(\frac{x-1}{2}\right) + \psi\left(\frac{1-x}{2}\right)}{2} - \frac{\psi(x-1) + \psi(1-x)}{2}\right)\right) + C$$

where $\operatorname{sinc}(x)$ is the normalized sinc function.


= Antidifferences of inverse hyperbolic functions =

$$\sum_{x} \operatorname{artanh} ax = \frac{1}{2}\ln\left(\frac{\Gamma\left(x + \frac{1}{a}\right)}{\Gamma\left(x - \frac{1}{a}\right)}\right) + C$$



= Antidifferences of inverse trigonometric functions =

$$\sum_{x} \arctan ax = \frac{i}{2}\ln\left(\frac{\Gamma\left(x + \frac{i}{a}\right)}{\Gamma\left(x - \frac{i}{a}\right)}\right) + C$$



= Antidifferences of special functions =

$$\sum_{x} \psi(x) = (x-1)\psi(x) - x + C$$

$$\sum_{x} \Gamma(x) = (-1)^{x+1}\,\Gamma(x)\,\frac{\Gamma(1-x,-1)}{e} + C$$

where $\Gamma(s,x)$ is the incomplete gamma function.







$$\sum_{x} (x)_{a} = \frac{(x)_{a+1}}{a+1} + C$$

where $(x)_{a}$ is the falling factorial.
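This is the discrete analogue of the power rule ∫x^a dx = x^{a+1}/(a+1); the sketch below (our illustration) checks it via the identity Δ(x)_{a+1} = (a+1)(x)_a:

```python
def falling(x, a):
    # Falling factorial (x)_a = x (x-1) ... (x-a+1)
    out = 1
    for i in range(a):
        out *= x - i
    return out

a = 3

def F(x):
    # (x)_{a+1} is a product of a+1 consecutive integers,
    # so it is exactly divisible by a+1 for integer x
    return falling(x, a + 1) // (a + 1)

for x in range(0, 8):
    assert F(x + 1) - F(x) == falling(x, a)
```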







$$\sum_{x} \operatorname{sexp}_{a}(x) = \ln_{a}\frac{(\operatorname{sexp}_{a}(x))'}{(\ln a)^{x}} + C$$

(see super-exponential function)


      See also


      Indefinite product
      Time scale calculus
      List of derivatives and integrals in alternative calculi





      Further reading


Walter G. Kelley, Allan C. Peterson. Difference Equations: An Introduction with Applications. Academic Press, 2001. ISBN 0-12-403330-X
Markus Müller. How to Add a Non-Integer Number of Terms, and How to Produce Unusual Infinite Summations
Markus Mueller, Dierk Schleicher. Fractional Sums and Euler-like Identities
S. P. Polyakov. Indefinite summation of rational functions with additional minimization of the summable part. Programmirovanie, 2008, Vol. 34, No. 2.
Francis B. Hildebrand. Finite-Difference Equations and Simulations. Prentice-Hall, 1968
