# Special integrals: n-dimensional Gaussian

[latexpage]

Regular readers of this blog will know that I love my integrals. For those that share the same enjoyment, I recommend this channel that often covers integrals from the MIT integration bee. For example, see this case of writing an integral in terms of itself. Very cool.

In this post, I want to think about the n-dimensional Gaussian. It is a common integral in QFT. Here's how we compute it when it is of the form,

\[ z = \int_{\mathbb{R}^n} d^n x\, e^{-\frac{1}{2} x^{T} A x} \]

Two comments: $A$ is a real symmetric matrix, $A = A^T \in \mathbb{R}^{n \times n} \Leftrightarrow A_{ij} = A_{ji}$, and $x$ is a column vector,

\[ x = \begin{pmatrix}
x_1 \\
x_2 \\
\vdots \\
x_n
\end{pmatrix} \]

Since $A$ is real and symmetric, we make use of a result from the spectral theorem. In particular, consider the following proposition: for

\[ A = A^T \in \mathbb{R}^{n \times n}, \]

$A$ has real eigenvalues $\lambda_i \in \mathbb{R}$. It can also be diagonalised into a matrix $D = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$ by an orthogonal matrix $O$, such that

\[ OAO^T = D = \begin{pmatrix}
\lambda_1 & 0 & \dots & 0 \\
0 & \lambda_2 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & \lambda_n
\end{pmatrix} \Leftrightarrow A = O^T D O \]
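As an aside, this proposition is easy to check numerically. A minimal sketch in Python, using numpy's `eigh` routine for real symmetric matrices (the matrix below is an arbitrary example of my own, not anything special):

```python
import numpy as np

# An arbitrary real symmetric matrix (example values only).
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# eigh is specialised for symmetric matrices: it returns real
# eigenvalues and an orthonormal set of eigenvectors (as columns).
eigvals, eigvecs = np.linalg.eigh(A)

O = eigvecs.T                 # rows are eigenvectors, so O A O^T = D
D = np.diag(eigvals)

assert np.allclose(O @ A @ O.T, D)          # O A O^T = D
assert np.allclose(O.T @ O, np.eye(2))      # O is orthogonal
assert np.allclose(A, O.T @ D @ O)          # A = O^T D O
```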

Here, it should be highlighted that $O$ is an $n \times n$ orthogonal matrix, so $O^T O = I$ and $\det(O) = \pm 1$. From these considerations,

\[ z = \int_{\mathbb{R}^n} d^n x\, e^{-\frac{1}{2} x^{T} A x} \]

\[ = \int_{\mathbb{R}^n} d^n x\, e^{-\frac{1}{2} x^{T} O^T D O x} \]

\[ = \int_{\mathbb{R}^n} d^n x\, e^{-\frac{1}{2} (Ox)^{T} D (Ox)} \]

At this point, we perform variable substitution. Notice,

\[ y := Ox \]

\[ d^{n} y = \det \left( \frac{dy}{dx} \right) d^{n} x \]

\[ \implies d^{n} x = \frac{1}{\det (O)}\, d^{n} y \]

\[ \therefore z = \int_{\mathbb{R}^n} d^{n} y\, \frac{1}{\det (O)}\, e^{-\frac{1}{2} y^{T} D y} \]

\[ = \int_{\mathbb{R}^n} d^{n} y\, \frac{1}{\det (O)}\, e^{-\frac{1}{2} \sum_{i=1}^{n} \lambda_i y_{i}^2} \]

\[ = \frac{1}{\det (O)} \prod_{i=1}^{n} \int_{-\infty}^{\infty} dy_{i}\, e^{-\frac{1}{2} \lambda_i y_{i}^2} \]

From the 1-dimensional case, we know that $\int_{-\infty}^{\infty} dy_i\, e^{-\frac{1}{2} \lambda_i y_{i}^2} = \sqrt{\frac{2 \pi}{\lambda_i}}$ (for $\lambda_i > 0$). So,

\[ z = \frac{1}{\det(O)} \prod_{i=1}^{n} \sqrt{\frac{2 \pi}{\lambda_i}} \]
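The 1-dimensional result quoted above can itself be checked numerically; a quick sketch in Python (the value of $\lambda$ is an arbitrary positive example):

```python
import numpy as np

lam = 3.0                              # arbitrary positive eigenvalue
y = np.linspace(-10.0, 10.0, 100001)   # integrand is negligible beyond this range

# Riemann-sum approximation of the 1-D Gaussian integral.
numeric = np.exp(-0.5 * lam * y**2).sum() * (y[1] - y[0])
closed = np.sqrt(2 * np.pi / lam)

assert abs(numeric - closed) < 1e-6
```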

Now, recall that:

\[ D = \begin{pmatrix}
\lambda_1 & 0 & \dots & 0 \\
0 & \lambda_2 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & \lambda_n
\end{pmatrix} \]

From this, we can simplify our last result

\[ z = \frac{1}{\det (O)} \frac{(2 \pi)^{\frac{n}{2}}}{\sqrt{\det (D)}} \]

\[ = \frac{(2 \pi)^{\frac{n}{2}}}{\sqrt{\det (O^T D O)}} \]
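Since $\det(O^T D O) = \det(A)$, this gives the closed form $z = (2\pi)^{n/2}/\sqrt{\det A}$, which we can sanity-check numerically. A minimal sketch in Python (the 2x2 matrix is an arbitrary positive-definite example):

```python
import numpy as np

# Arbitrary 2x2 symmetric positive-definite example.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
n = A.shape[0]

closed_form = (2 * np.pi) ** (n / 2) / np.sqrt(np.linalg.det(A))

# Brute-force the 2-D integral on a grid; the integrand decays
# rapidly, so a finite box suffices.
u = np.linspace(-8.0, 8.0, 801)
du = u[1] - u[0]
X, Y = np.meshgrid(u, u)
pts = np.stack([X, Y], axis=-1)
quad_form = np.einsum('...i,ij,...j', pts, A, pts)   # x^T A x at each grid point
numeric = np.exp(-0.5 * quad_form).sum() * du * du

assert abs(closed_form - numeric) < 1e-3
```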

\[ \therefore z = \frac{(2 \pi)^{\frac{n}{2}}}{\sqrt{\det (A)}} \]

# Simply beautiful: Finding the covariant basis from seemingly nothing

[latexpage]

Seeing how I wrote a new post yesterday in the area of tensor analysis, I was reminded of the beautiful result below. It likely would have made it onto this blog at some point, so I thought I would quickly write it all out.

Here we go.

***

Imagine we are presented with some arbitrary curve in Euclidean space. This curve has no defining coordinate system, so we can simply picture it as a bare curve. Now, as should be emphasised more than once, without enforcing a coordinate system we're going to want to parameterise this curve. Similar, perhaps, to how we might build a picture of a relativistic string from the ground up, we're going to issue some generalised coordinates and piece together what we can glean about this curve. So let's use arc length, $s$, as our parameter. Let's also define some general coordinates, say $(Z^1(s), Z^2(s))$. But remember, these general coordinates can be Cartesian, polar or whatever. This is good. We are making progress.

With the stage now set, here's the intriguing question I want you to think about (you have already seen this in a textbook): can you, given the unit tangent to this arbitrary curve, find the components of this unit tangent with respect to the covariant basis?

First, one might be inclined to say, “without defining a coordinate system I can’t even think about or imagine deriving the coordinates of the tangent basis!” Putting aside what we know of the power of tensor analysis, one can certainly sympathise with such a response. But what we have is, indeed, the power of tensor analysis and so we can proceed in a very elegant way. The main objective here is that, given this unit tangent, we want to find some algebraic expression for it of the general form

$\vec{T} = T^1 \vec{e}_1 + T^2 \vec{e}_2$

Let me also say this: the desire here is to express $T^1$ and $T^2$ in terms of our general coordinates $(Z^1(s), Z^2(s))$.

So, how do we go about this? Recall, firstly, that by standard definition the unit tangent is $\frac{d\vec{R}}{ds}$. Think of $\vec{R}$ as a function of $s$. As such, it follows that we can write $\vec{R}(s) = \vec{R}(Z^1(s), Z^2(s))$.

But our unit tangent can also be written as $\vec{T}(s)$, noting that $\vec{T}(s) = \frac{d \vec{R}(s)}{ds}$.

This leads us directly to ask, "what is $\frac{d \vec{R}(s)}{ds}$?" Well, by the chain rule, we can compute it as follows

\[
\vec{T}(s) = \frac{d \vec{R}(s)}{ds} = \frac{\partial \vec{R}}{\partial Z^1}\frac{d Z^1}{ds} + \frac{\partial \vec{R}}{\partial Z^2}\frac{d Z^2}{ds}
\]

Ask yourself, what is $\frac{\partial \vec{R}}{\partial Z^1}$? It is the covariant basis vector $\vec{e}_1$!

And this deserves celebration, as it has appeared in our investigation of this arbitrary curve without being forced!

Thus, and I will write it out again in full as the conclusion,

\[
\vec{T}(s) = \frac{d\vec{R}(s)}{ds} = \frac{\partial \vec{R}}{\partial Z^1}\frac{d Z^1}{ds} + \frac{\partial \vec{R}}{\partial Z^2}\frac{d Z^2}{ds} = \frac{d Z^1}{ds} \vec{e}_1 + \frac{d Z^2}{ds} \vec{e}_2
\]

Where $T^1 = \frac{d Z^1}{ds}$ and $T^2 = \frac{d Z^2}{ds}$.
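As a concrete check on this result, consider a circle of radius $R_0$ parameterised by arc length in polar coordinates, so $Z^1 = r = R_0$ and $Z^2 = \theta = s/R_0$ (this particular curve is my own illustrative example, not part of the derivation above). A sketch assuming sympy:

```python
import sympy as sp

s, R0 = sp.symbols('s R0', positive=True)

# Example curve: circle of radius R0, arc-length parameterised,
# in polar coordinates Z^1 = r, Z^2 = theta.
Z1, Z2 = R0, s / R0            # r = R0 (constant), theta = s/R0

# Position vector R(Z^1, Z^2) in polar coordinates.
r, th = sp.symbols('r th', positive=True)
R = sp.Matrix([r * sp.cos(th), r * sp.sin(th)])

# Covariant basis vectors e_i = dR/dZ^i, evaluated on the curve.
e1 = R.diff(r).subs({r: Z1, th: Z2})
e2 = R.diff(th).subs({r: Z1, th: Z2})

# Components T^i = dZ^i/ds.
T1 = sp.diff(Z1, s)            # = 0
T2 = sp.diff(Z2, s)            # = 1/R0

# Unit tangent computed directly as dR/ds along the curve.
T = R.subs({r: Z1, th: Z2}).diff(s)

# T matches T^1 e_1 + T^2 e_2.
assert sp.simplify(T - (T1 * e1 + T2 * e2)) == sp.zeros(2, 1)
```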

There we have it, a beautiful result.

# Understanding metric elements for cylindrical coordinates

[latexpage]

The metric tensor is ubiquitous once one arrives at a certain level in one's physics career. When it comes to cylindrical coordinates, there is a useful way to remember its deeper meaning through a rather simple derivation, or at least through the use and construction of a series of definitions. (I say 'simple' in that this is something one can do 'on the fly', should they be required to remind themselves of the properties of the metric tensor.) For a more general treatment of what follows, see also this document on special tensors.

***

To start, if we had some general curvilinear coordinate system, we could begin by writing

$d\textbf{R} = g_1\, du^1 + g_2\, du^2 + \dots = g_i\, du^i$

From this we can also immediately invoke the principle that $ds^2 = d\textbf{R} \cdot d\textbf{R}$. I won't explain this for the sake of space, but any textbook will cover why $ds^2$ is equal to the dot product of our displacement vector with itself.

But from above we can also see that $d\textbf{R} \cdot d\textbf{R}$ is the same as, in this example using two curvilinear coordinates, $g_i\, du^i \cdot g_j\, du^j$. It follows,

$ds^2 = g_i\, du^i \cdot g_j\, du^j = g_i \cdot g_j\, du^i du^j$

As the dot product of our tangent vectors, $g_i \cdot g_j$, is equal by standard relation to the metric tensor, we arrive at the following

$ds^2 = g_i \cdot g_j\, du^i du^j = g_{ij}\, du^i du^j$

This identity, if I may describe it as such, is important to remember. But how can we issue further meaning to this result?

***

Let’s use cylindrical coordinates.

[Image]

Now, for pedagogical purposes, I've labelled the basis vectors $e_\rho$, $e_\phi$, and $e_z$. But, to generalise things, note that moving forward I have set $e_\rho = e_1$, $e_\phi = e_2$, and $e_z = e_3$.

The first thing that one should notice is that our position vector, $\textbf{R}$, can be written as

$\textbf{R} = \rho \cos \phi\, \hat{i} + \rho \sin \phi\, \hat{j} + z \hat{k}$

If this is not immediately clear, I suggest taking a few minutes to revise cylindrical coordinates (a generalisation of 2D polar coords.).

Now, here is a key step. Having written our equation for $\textbf{R}$, we want to focus in on $e_1$, $e_2$, and $e_3$ and how we might represent them with respect to $\textbf{R}$. To do this, we're going to take partials.

$e_1 = \frac{\partial \textbf{R}}{\partial \rho} = \cos \phi\, \hat{i} + \sin \phi\, \hat{j}$

$e_2 = \frac{\partial \textbf{R}}{\partial \phi} = -\rho \sin \phi\, \hat{i} + \rho \cos \phi\, \hat{j}$

$e_3 = \frac{\partial \textbf{R}}{\partial z} = \hat{k}$

With expressions for $e_1$, $e_2$, and $e_3$ established, we can return to our previous definition of the metric tensor and look to derive a definition of our metric tensor for cylindrical coordinates.

Notice, if $e_i \cdot e_j = g_{ij}$, in matrix form we have

\[ \begin{pmatrix}
e_1 \cdot e_1 & e_1 \cdot e_2 & e_1 \cdot e_3 \\
e_2 \cdot e_1 & e_2 \cdot e_2 & e_2 \cdot e_3 \\
e_3 \cdot e_1 & e_3 \cdot e_2 & e_3 \cdot e_3
\end{pmatrix}
=
\begin{pmatrix}
g_{11} & g_{12} & g_{13} \\
g_{21} & g_{22} & g_{23} \\
g_{31} & g_{32} & g_{33}
\end{pmatrix} \]

What is this saying? Well, let's look. We know, again, that $g_{ij} = e_i \cdot e_j$. So if we take the dot product, as defined, we should then arrive at our metric tensor for our cylindrical system. And, indeed, this is exactly the result. For $g_{11}$, for instance, we're just taking the dot product of $e_1$ with itself.

\[ g_{11} = (\cos \phi\, \hat{i} + \sin \phi\, \hat{j}) \cdot (\cos \phi\, \hat{i} + \sin \phi\, \hat{j}) = \cos^2 \phi + \sin^2 \phi = 1 \]

We keep doing this for the entire matrix. But to save time, I will only work out the remaining diagonals. The off-diagonal components vanish; you can take the dot product for each to see that this is indeed true. Thus,

\[ g_{22} = (-\rho \sin \phi\, \hat{i} + \rho \cos \phi\, \hat{j}) \cdot (-\rho \sin \phi\, \hat{i} + \rho \cos \phi\, \hat{j}) = \rho^2 \sin^2 \phi + \rho^2 \cos^2 \phi = \rho^2 \]

\[ g_{33} = \hat{k} \cdot \hat{k} = 1 \]
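As a quick check, the same matrix of dot products can be generated symbolically; a sketch assuming sympy is available:

```python
import sympy as sp

rho, phi, z = sp.symbols('rho phi z', positive=True)

# Position vector in cylindrical coordinates.
R = sp.Matrix([rho * sp.cos(phi), rho * sp.sin(phi), z])

# Covariant basis vectors e_i = dR/du^i for u = (rho, phi, z).
basis = [R.diff(u) for u in (rho, phi, z)]

# Metric tensor g_ij = e_i . e_j.
g = sp.Matrix(3, 3, lambda i, j: sp.simplify(basis[i].dot(basis[j])))

assert g == sp.diag(1, rho**2, 1)
```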

Therefore, we arrive at our desired metric

\[ \begin{pmatrix}
1 & 0 & 0 \\
0 & \rho^2 & 0 \\
0 & 0 & 1
\end{pmatrix} \]

# Leibniz Rule: Differentiating under the integral sign (special case)

[latexpage]

I recently wrote a short piece on the general case of the Leibniz Rule. In that article I alluded to how there is also a special case of the rule, in which the limits of integration are constants as opposed to differentiable functions.

In short, the special case of the Leibniz Rule is a much simpler operation. The key idea remains that one must perform partial differentiation within the integral, but the rest of the general case is cut out. This means all one has to do is perform any remaining basic integration techniques and then clean up the evaluation with some algebra.

In general, the special case of the Leibniz Rule states

if

\[
I(x) = \int_{a}^{b} f(t,x)\, \mathrm{d}t
\]

then
\[
I'(x) = \int_{a}^{b} \frac{\partial f}{\partial x}\, \mathrm{d}t
\]

Notice how this compares with the general case.

***

Here’s an example to stimulate curiosity in learning.

\[
F(x) = \int_{1}^{2} \frac{\cos(tx)}{t}\, \mathrm{d}t
\]

\[
F^{\prime}(x) = \frac{d}{dx} \int_{1}^{2} \frac{\cos(tx)}{t}\, \mathrm{d}t
\]

Now, by the special case of the Leibniz Rule:
\[
= \int_{1}^{2} \frac{\partial}{\partial x} \left[ \frac{\cos(tx)}{t} \right] \mathrm{d}t
\]
At this point, take the partial derivative. Working this out,
\[
= \int_{1}^{2} \frac{-t \sin(tx)}{t}\, \mathrm{d}t
\]
Cancel $t$ in the numerator and denominator.
\[
= \int_{1}^{2} -\sin(tx)\, \mathrm{d}t
\]
Note, we're integrating with respect to $t$; therefore
\[
= \left[ \frac{\cos(tx)}{x} \right]_{t=1}^{t=2}
\]
\[
= \frac{\cos(2x) - \cos(x)}{x}
\]
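The example can also be verified symbolically; a sketch assuming sympy, which expresses $F(x)$ in terms of the cosine integral and differentiates it directly:

```python
import sympy as sp

t, x = sp.symbols('t x', positive=True)

# F(x) = integral of cos(tx)/t from t = 1 to 2; sympy expresses
# this in terms of the cosine integral Ci.
F = sp.integrate(sp.cos(t * x) / t, (t, 1, 2))

# The answer obtained above via the special case of the Leibniz Rule.
leibniz = (sp.cos(2 * x) - sp.cos(x)) / x

assert sp.simplify(sp.diff(F, x) - leibniz) == 0
```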

As you can see, the special case is a simpler operation. The key thing to remember, again, is to insert the partial in the integral and then take the partial derivative of the integrand. From that point, it is usually just a matter of cleaning up the algebra.

# Leibniz Rule: Differentiating under the integral (general case)

[latexpage]

Evaluating integrals has over time become one of my favourite activities. Whether it is by parts or by substitution or by partial fractions or reduction – there are many cool integrals to be found in the world of applied maths. Two especially cool methods for solving definite integrals are the Leibniz Rule (or what some call Feynman integration) and the Bracket Method. In this post I will focus on the former, particularly its general case.

Admittedly, when I first learned Leibniz’s Rule I was enthralled. It opened up a new world of integrals, which, otherwise, would seem impossible to evaluate. Consider evaluating, for example, such integrals as those listed below using basic methods:

\[
\int_{-\infty}^{\infty} \frac{\sin(x)}{x}\, \mathrm{d}x
\]

or

\[
\int_{1}^{2} \frac{\cos(tx)}{t}\, \mathrm{d}t
\]

or something like

\[
\int_{-\infty}^{\infty} \frac{\cos(x)}{x^2 + 1}\, \mathrm{d}x
\]

This is where the Leibniz Rule comes into play.

There are two cases of the Leibniz Rule, a general and a special case. The difference can be simplified: in the latter, the limits of integration are constants, while in the former they are differentiable functions. In the general case, the rule states that if

\[
I(x) = \int_{u(x)}^{v(x)} f(t,x)\, \mathrm{d}t
\]

then

\[
I^{\prime}(x) = \int_{u(x)}^{v(x)} \frac{\partial f}{\partial x}(t,x)\, \mathrm{d}t + f(v(x),x)\, v'(x) - f(u(x), x)\, u'(x)
\]

So this is the rule, in its general case, but what to make of it?

One of the best summations of the Leibniz Rule that I have read states that "for constant limits of integration the order of integration and differentiation are reversible" (Riley, Hobson & Bence, 2016, pp.189-179). In other words, we can use the Leibniz Rule to interchange the integral sign and the differential. Another way to word this is that by using the Leibniz Rule, we're integrating by way of differentiation under the integral sign.

The key idea to remember is to insert the differential into the integral, where it becomes a partial derivative (as seen on the third line below). From there, you want to work out the partial with the hope that the integral simplifies. Finally, it is primarily a matter of using basic integration techniques and algebra to complete the evaluation.

Here’s an example:

\[
F(x) = \int_{\sqrt{x}}^{x} \frac{\sin(tx)}{t}\, \mathrm{d}t
\]

\[ F^{\prime}(x) = \frac{d}{dx} \int_{\sqrt{x}}^{x} \frac{\sin(tx)}{t}\, \mathrm{d}t \]

\[ F^{\prime}(x) = \int_{\sqrt{x}}^{x} \frac{\partial}{\partial x} \left[ \frac{\sin(tx)}{t} \right] \mathrm{d}t + \frac{\sin(x \cdot x)}{x}(1) - \frac{\sin(\sqrt{x} \cdot x)}{\sqrt{x}} \left( \frac{1}{2\sqrt{x}} \right) \]

Notice on the third line that for the part $f(v(x),x)v'(x) - f(u(x), x)u'(x)$, we're simply replacing $t$ with the corresponding limit of integration, then multiplying by the derivative of that limit.

\[ \implies F^{\prime}(x) = \int_{\sqrt{x}}^{x} \frac{t \cos(tx)}{t}\, \mathrm{d}t + \frac{\sin(x^2)}{x} - \frac{\sin(x^{\frac{3}{2}})}{2x} \]

Note, we still have to integrate $\int_{\sqrt{x}}^{x} \frac{t\cos(tx)}{t}\, \mathrm{d}t$ with respect to $t$. For this, use substitution. You then end up with what follows:

\[ = \left[ \frac{\sin(tx)}{x} \right]_{t=\sqrt{x}}^{t=x} + \frac{\sin(x^2)}{x} - \frac{\sin(x^{\frac{3}{2}})}{2x} \]

\[ = \left[ \frac{\sin(x \cdot x)}{x} - \frac{\sin(\sqrt{x} \cdot x)}{x} \right] + \frac{\sin(x^2)}{x} - \frac{\sin(x^{\frac{3}{2}})}{2x} \]

\[ = \frac{2\sin(x^2)}{x} - \frac{3\sin(x^{\frac{3}{2}})}{2x} \]
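As with the special case, the result can be checked symbolically; a sketch assuming sympy, applying the general rule term by term:

```python
import sympy as sp

t, x = sp.symbols('t x', positive=True)

f = sp.sin(t * x) / t
u, v = sp.sqrt(x), x   # lower and upper limits of integration

# General Leibniz rule: integral of the partial plus the two boundary terms.
Fprime = (sp.integrate(sp.diff(f, x), (t, u, v))
          + f.subs(t, v) * sp.diff(v, x)
          - f.subs(t, u) * sp.diff(u, x))

# The closed form obtained above.
target = 2 * sp.sin(x**2) / x - 3 * sp.sin(x**sp.Rational(3, 2)) / (2 * x)

assert sp.simplify(Fprime - target) == 0
```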

Why is the Leibniz Rule such a powerful and useful tool? It allows us to solve otherwise challenging integrals, integrals that we regularly come across that cannot be evaluated in terms of simple functions (of finite sum). One of the things I like most about this technique is that it is really quite simple for what it does. Once you get the hang of it, and it becomes as basic as integrating by parts or something similar, it is a very useful tool to have in one's toolbox.

Leibniz was of course one of the fathers of calculus, along with Newton. But this integration technique was made famous by Feynman, who offers a lovely story in Surely You're Joking, Mr. Feynman! about how he first came across the method and how he later used it on a regular basis (contributing to his reputation for doing integrals!).

References

Riley, K.F., Hobson, M.P., & Bence, S.J. (2016). Mathematical Methods for Physics and Engineering. Cambridge, UK: Cambridge University Press, pp. 189-179.