
Jensen Polynomials, the Riemann-zeta Function, and SYK

A new paper by Michael Griffin, Ken Ono, Larry Rolen, and Don Zagier appears to have made some intriguing progress when it comes to the Riemann Hypothesis (RH). The paper is titled ‘Jensen polynomials for the Riemann zeta function and other sequences’. The preprint originally appeared on arXiv [arXiv:1902.07321 [math.NT]] in February 2019, and it was one of a long list of papers that I wanted to read over the summer. With the final version now in the Proceedings of the National Academy of Sciences (PNAS), I would like to discuss a bit about the authors’ work and one way in which it relates to my own research.

First, the regular reader will recall that in a past post on string mass and the Riemann-zeta function, we discussed the RH very briefly, including the late Sir Michael Atiyah’s claim to have solved it, and finally the separate idea of a stringy proof. The status of Atiyah’s claim still seems unclear, though I mentioned previously that it doesn’t look like it will hold. The idea of a stringy proof also remains a distant dream. But we may at least recall from this earlier post some basic properties of the RH.

What is very interesting about the Griffin et al paper is that it returns to a rather old approach to the RH, based on George Pólya’s research in 1927, and it also builds on the work of Johan Jensen. The connection is as follows. Pólya, a Hungarian mathematician, proved that the RH is equivalent to the hyperbolicity of the Jensen polynomials for the Riemann-zeta function \zeta(s) at its point of symmetry. For the inquisitive reader, as an entry I recommend this 1990 article in the Journal of Mathematical Analysis and Applications by George Csordas, Richard S. Varga, and István Vincze titled, ‘Jensen polynomials with applications to the Riemann zeta-function’.

Pólya’s work is generally very interesting, something I have been familiarising myself with in relation to the Sachdev-Ye-Kitaev model (more on this later) and quantum gravity. When it comes to the RH, his approach was left mostly abandoned for decades. But Griffin et al formulate what is basically a new general framework, leveraging Pólya’s insights, and in the process they prove a few new theorems and establish new criteria pertaining to the RH.

1. Hyperbolicity of Polynomials

I won’t discuss their paper at length, instead focusing on a particular section of the work. But as a short entry to their study, Griffin et al pick up from the work of Pólya, summarising his result about how the RH is equivalent to the hyperbolicity of all Jensen polynomials associated with a particular sequence of Taylor coefficients,

\displaystyle (-1 + 4z^{2}) \Lambda(\frac{1}{2} + z) = \sum_{n=0}^{\infty} \frac{\gamma (n)}{n!} \cdot z^{2n} \ \ (1)

where {\Lambda(s) = \pi^{-s/2} \Gamma (s/2)\zeta(s) = \Lambda (1 - s)}, as stated in the paper. Now, if I am not mistaken, the sequence of Taylor coefficients belongs to what is called the Laguerre-Pólya class, in which case if there is some function {f(x)} that belongs to this class, the function satisfies the Laguerre inequalities.

Additionally, the Jensen polynomial can be seen in (1). Written generally, a Jensen polynomial is of the form {g_{n}(t) := \sum_{k = 0}^{n} {n \choose k} \gamma_{k}t^{k}}, where the {\gamma_{k}} are positive and satisfy the Turán inequalities {\gamma_{k}^{2} - \gamma_{k - 1} \gamma_{k + 1} \geq 0}.

Now, given that a polynomial with real coefficients is hyperbolic if all of its zeros are real, we read in Griffin et al that the Jensen polynomial of degree {d} and shift {n} of the arbitrary sequence of real numbers {\{ \alpha (0), \alpha (1), ... \}} is the following polynomial,

\displaystyle J_{\alpha}^{d,n} (X) := \sum_{j = 0}^{d} {d \choose j} \alpha (n + j)X^{j} \ \ (2)

where {n} and {d} are non-negative integers. Now, recall that we have our previous Taylor coefficients {\gamma}. From the above result, the statement is that the RH is equivalent to the hyperbolicity of the polynomials {J_{\gamma}^{d,n}(X)} for all non-negative integers {d} and {n}. What is very curious, and what I would like to look into a bit more, is how this condition behaves under differentiation. In any case, as the authors point out, one can prove the RH by showing hyperbolicity of {J_{\gamma}^{d,n} (X)}; but proving the RH is of course notoriously difficult!
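As a toy illustration of these definitions (my own sketch, nothing from the paper), here is a short Python snippet that builds {J_{\alpha}^{d,n}(X)} for a user-supplied sequence {\alpha} and checks hyperbolicity by testing, numerically, whether all of its roots are real. The sequence at the bottom is a placeholder chosen purely for demonstration.

```python
import numpy as np
from math import comb, factorial

def jensen_poly_coeffs(alpha, d, n):
    """Coefficients of J_alpha^{d,n}(X) = sum_{j=0}^{d} C(d,j) * alpha(n+j) * X^j,
    listed in increasing powers of X."""
    return [comb(d, j) * alpha(n + j) for j in range(d + 1)]

def is_hyperbolic(coeffs, tol=1e-8):
    """A polynomial with real coefficients is hyperbolic if all of its zeros are real;
    here we naively inspect the imaginary parts of the numerically computed roots."""
    roots = np.roots(list(reversed(coeffs)))  # np.roots expects highest power first
    return bool(np.all(np.abs(roots.imag) < tol))

# Placeholder sequence for illustration only -- NOT the Taylor coefficients gamma(n)
# of the completed zeta function, which are far harder to compute.
alpha = lambda k: 1.0 / factorial(k)

for d in range(2, 7):
    print(d, is_hyperbolic(jensen_poly_coeffs(alpha, d, n=0)))
```

Needless to say, checking a handful of cases numerically is a world away from proving hyperbolicity for all {d} and {n}.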

Alternatively, another path may be chosen. My understanding is that Griffin-Ono-Rolen-Zagier use shifts in {n} for small {d} because, from what I understand about hyperbolic polynomials, one wants to limit the hyperbolicity in the {d} direction. Then the idea, if I am not mistaken, is to study the asymptotic behaviour of {\gamma(n)}.

That is the general entry point, from which the authors go on to consider a number of theorems. I won’t go through all of them; one can just as well read the paper and the proofs. What I want to do is focus particularly on Theorem 3.

2. Theorem 3

Aside from the more general considerations and potential breakthroughs with respect to the RH, one of my interests in the Griffin-Ono-Rolen-Zagier paper has to do with my ongoing studies concerning Gaussian Unitary Ensembles (GUE) and Random Matrix Theory (RMT) in the context of the Sachdev-Ye-Kitaev (SYK) model (plus similar models) and quantum gravity. Moreover, RMT has become an interest in relation to chaos and complexity, not least because in SYK and similar models we consider the late-time behaviour of quantum black holes in relation to theories of quantum chaos and random matrices.

But for now, one thing that is quite fascinating about Jensen polynomials for the Riemann-zeta function is the proof in Griffin et al of the GUE random matrix model prediction in derivative aspect for the zeros of the Jensen polynomials. That is, roughly, the zeros of the Jensen polynomials attached to the symmetric (completed) zeta function behave, in the appropriate limit, in the way that GUE predicts. To quote at length,

‘To make this precise, recall that Dyson, Montgomery, and Odlyzko [9, 10, 11] conjecture that the nontrivial zeros of the Riemann zeta function are distributed like the eigenvalues of random Hermitian matrices. These eigenvalues satisfy Wigner’s Semicircular Law, as do the roots of the Hermite polynomials {H_{d}(X)}, when suitably normalized, as {d \rightarrow +\infty} (see Chapter 3 of [12]). The roots of {J_{\gamma}^{d,0} (X)}, as {d \rightarrow +\infty}, approximate the zeros of {\Lambda (\frac{1}{2} + z)} (see [1] or Lemma 2.2 of [13]), and so GUE predicts that these roots also obey the Semicircular Law. Since the derivatives of {\Lambda (\frac{1}{2} + z)} are also predicted to satisfy GUE, it is natural to consider the limiting behavior of {J_{\gamma}^{d,n}(X)} as {n \rightarrow +\infty}. The work here proves that these derivative aspect limits are the Hermite polynomials {H_{d}(X)}, which, as mentioned above, satisfy GUE in degree aspect.’
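Since the theorem leans on the fact that the roots of the Hermite polynomials {H_{d}(X)}, suitably normalised, obey the semicircle law as {d \rightarrow +\infty}, here is a small numerical sketch (again, my own illustration, not anything from the paper) comparing the rescaled roots of {H_{d}(X)} with the semicircle density {\frac{2}{\pi}\sqrt{1 - x^{2}}}.

```python
import numpy as np

d = 200  # degree of the (physicists') Hermite polynomial

# Represent H_d as a Hermite series with a single non-zero coefficient and take its roots.
coeffs = np.zeros(d + 1)
coeffs[-1] = 1.0
roots = np.polynomial.hermite.hermroots(coeffs)

# Rescaled by sqrt(2d), the roots asymptotically fill out [-1, 1].
x = roots / np.sqrt(2 * d)

# Compare the empirical histogram with the semicircle density 2/pi * sqrt(1 - x^2).
bins = np.linspace(-1, 1, 21)
hist, edges = np.histogram(x, bins=bins, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
semicircle = (2 / np.pi) * np.sqrt(np.clip(1 - centers**2, 0.0, None))

for c, h, s in zip(centers, hist, semicircle):
    print(f"x = {c:+.2f}   empirical = {h:.3f}   semicircle = {s:.3f}")
```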

I think Theorem 3 raises some very interesting, albeit searching questions. I also think it possibly raises or inspires (even if naively) some course of thought about the connection between insights being made in SYK and SYK-like models, RMT more generally, and even studies of the zeros of the Riemann-zeta function in relation to quantum black holes. In my own mind, I also immediately think of the Hilbert-Pólya hypothesis and the Jensen polynomials in this context, as well as ideas pertaining to the eigenvalues of Hamiltonians in different random matrix models of quantum chaos. There is a connection and certainly also an interesting analogy here. To what degree? It is not entirely clear from my current vantage point, and there are some differences that need to be considered in all of these areas. But it may not be naive to ask, in relation to some developing inclinations in SYK and other tensor models, how GUE random matrices and local Riemann zeros are or may be connected.

Perhaps I should save such considerations for a separate article.


Special integrals: M-dimensional Gaussian


Regular readers of this blog will know that I love my integrals. For those that share the same enjoyment, I recommend this channel that often covers integrals from the MIT integration bee. For example, see this case of writing an integral in terms of itself. Very cool.

In this post, I want to think about the multi-dimensional Gaussian, a common integral in QFT. Here’s how we compute it when it takes the form,

\[ z = \int_{\mathbb{R}^n} d^n x \, e^{-\frac{1}{2} x^{T}Ax} \]

Two comments: $A$ is a real symmetric matrix, $A = A^T \in \mathbb{R}^{n \times n} \Leftrightarrow A_{ij} = A_{ji}$, and $x$ is a column vector,

\[ x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \]

Since $A$ is real and symmetric, we can make use of a result from the spectral theorem. In particular, consider the following proposition:

\[ A = A^T \in \mathbb{R}^{n \times n} \]

This implies $A$ has eigenvalues $\lambda_i \in \mathbb{R}$. It can also be diagonalised into a matrix $D = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$ by an orthogonal matrix $O$, such that

\[ OAO^T = D = \begin{pmatrix} \lambda_1 & 0 & \dots & 0 \\ 0 & \lambda_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_n \end{pmatrix} \Leftrightarrow A = O^T DO \]
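As a quick aside, here is a small numpy sketch (my own check, with an arbitrary randomly generated matrix) verifying this decomposition in the convention above. Note that numpy’s `eigh` returns the eigenvector matrix in the transposed convention, so we set $O = V^T$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random real symmetric matrix A = A^T.
n = 4
M = rng.normal(size=(n, n))
A = 0.5 * (M + M.T)

# np.linalg.eigh returns eigenvalues w and a matrix V with A = V diag(w) V^T.
w, V = np.linalg.eigh(A)
D = np.diag(w)

# In the convention of this post, O A O^T = D with O = V^T, hence A = O^T D O.
O = V.T
print(np.allclose(O @ A @ O.T, D))       # True
print(np.allclose(A, O.T @ D @ O))       # True
print(np.allclose(O @ O.T, np.eye(n)))   # O is orthogonal
```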

Here, it should be highlighted that $O$ is an $n \times n$ orthogonal matrix. From these considerations,

\[ \implies z = \int_{\mathbb{R}^n} d^n x \, e^{-\frac{1}{2} x^{T} Ax} \]

\[ = \int_{\mathbb{R}^n} d^n x \, e^{-\frac{1}{2} x^{T} O^T DOx} \]

\[ = \int_{\mathbb{R}^n} d^n x \, e^{-\frac{1}{2} (Ox)^{T} D(Ox)} \]

At this point, we perform variable substitution. Notice,

\[ y := Ox \]

\[ d^{n} y = \det \left( \frac{dy}{dx} \right) d^{n} x \]

\[ \implies d^{n} x = \frac{1}{\det (O)} d^{n} y \]

\[ \therefore z = \int_{\mathbb{R}^n} d^{n} y \, \frac{1}{\det (O)} e^{-\frac{1}{2} y^{T} Dy} \]

\[ = \int_{\mathbb{R}^n} d^{n} y \, \frac{1}{\det (O)} e^{-\frac{1}{2} \sum_{i=1}^{n} \lambda_i y_{i}^2} \]

\[ = \frac{1}{\det (O)} \prod_{i=1}^{n} \int_{- \infty}^{\infty} dy_{i} \, e^{-\frac{1}{2} \lambda_i y_{i}^2 } \]

From the 1-dimensional case, we know that $\int_{-\infty}^{\infty} dy_{i} \, e^{-\frac{1}{2} \lambda_i y_{i}^2} = \sqrt{\frac{2 \pi}{\lambda_i}}$. So,

\[ z = \frac{1}{\det(O)} \prod_{i=1}^{n} \sqrt{\frac{2 \pi}{\lambda_i}} \]
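As a quick sanity check of the 1-dimensional result used in this step (my own check; the value of $\lambda$ below is arbitrary):

```python
import numpy as np
from scipy.integrate import quad

lam = 2.7  # arbitrary positive lambda
value, _ = quad(lambda y: np.exp(-0.5 * lam * y**2), -np.inf, np.inf)
print(value, np.sqrt(2 * np.pi / lam))  # the two numbers agree
```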

Now, recall that:

\[ D = \begin{pmatrix} \lambda_1 & 0 & \dots & 0 \\ 0 & \lambda_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_n \end{pmatrix} \]

From this, we can simplify our last result

\[ z = \frac{1}{\det (O)} \frac{(2 \pi)^{\frac{n}{2}}}{\sqrt{\det (D)}} \]

\[ = \frac{(2 \pi)^{\frac{n}{2}}}{\sqrt{\det (O^T DO)}} \]

\[ \therefore z = \frac{(2 \pi)^{\frac{n}{2}}}{\sqrt{\det (A)}} \]
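Finally, here is a short scipy sketch (my own check) comparing this closed-form result with a direct numerical integration for an arbitrary $2 \times 2$ positive-definite example; the matrix below is chosen purely for illustration.

```python
import numpy as np
from scipy.integrate import dblquad

# An arbitrary 2x2 real symmetric, positive-definite matrix.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])

def integrand(y, x):
    v = np.array([x, y])
    return np.exp(-0.5 * v @ A @ v)

# The Gaussian decays quickly, so finite limits are sufficient here.
L = 10.0
numeric, _ = dblquad(integrand, -L, L, lambda x: -L, lambda x: L)
closed_form = (2 * np.pi) ** (2 / 2) / np.sqrt(np.linalg.det(A))

print(numeric, closed_form)  # the two values agree to high accuracy
```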


Simply beautiful: Finding the covariant basis from seemingly nothing


Seeing how I wrote a new post yesterday in the area of tensor analysis, I was reminded of the beautiful result below. It likely would have made it onto this blog at some point, so I thought I would quickly write it all out.

Here we go.

***

Imagine we are presented with some arbitrary curve in Euclidean space. This curve comes with no defining coordinate system, so simply picture it as a bare curve in the plane.

Now, as should be emphasised more than once, without enforcing a coordinate system we’re going to want to parameterise this curve. Similar, perhaps, to how we might build a picture of a relativistic string from the ground up, we’re going to want to issue some generalised coordinates and piece together what we can glean about this curve. So let’s use arc length, $s$, as our parameter. Let’s also define some general coordinates, say $\gamma : (Z^1(s), Z^2(s))$. But remember, these general coordinates can be Cartesian, polar or whatever.

This is good. We are making progress.

With the stage now set, here’s the intriguing question I want you to think about (you have already seen this in a textbook): can you, given the unit tangent to this arbitrary curve, find the components of this unit tangent with respect to the covariant basis?

First, one might be inclined to say, “without defining a coordinate system I can’t even think about or imagine deriving the coordinates of the tangent basis!” Putting aside what we know of the power of tensor analysis, one can certainly sympathise with such a response. But what we have is, indeed, the power of tensor analysis and so we can proceed in a very elegant way.

The main objective here is that, given this unit tangent, we want to find some algebraic expression for it of the general form

$\vec{T} = T^1 \vec{e}_1 + T^2 \vec{e}_2$

Let me also say this: the desire here is to express $T^1$ and $T^2$ in terms of our general coordinates $(Z^1(s), Z^2(s))$.

So, how do we go about this? Recall, firstly, that as a standard the unit tangent is defined as the derivative of the position vector with respect to arc length, $\frac{d\vec{R}}{ds}$.

Think of $\vec{R}$ as a function of $s$. As such, it follows that we can write: $\vec{R}(s) = \vec{R}(Z^1(s),Z^2(s))$.

But our unit tangent can also be written as $\vec{T}(s)$, noting that $\vec{T}(s) = \frac{d \vec{R}(s)}{ds}$.

This leads us directly to ask, “what is $\frac{d \vec{R}(s)}{ds}$?” Well, we can compute it as follows

\[ \vec{T}(s) = \frac{d \vec{R}(s)}{ds} \implies \vec{T}(s) = \frac{\partial \vec{R}}{\partial Z^1}\frac{d Z^1}{ds} + \frac{\partial \vec{R}}{\partial Z^2}\frac{d Z^2}{ds} \]

Ask yourself, what is $\frac{\partial \vec{R}}{\partial Z^1}$? It is the covariant basis vector $\vec{e}_1$ (and likewise $\frac{\partial \vec{R}}{\partial Z^2} = \vec{e}_2$)!

And this deserves celebration, as it has appeared in our investigation of this arbitrary curve without any forcing!

Thus, and I will write it out again in full as the conclusion,

\[ \vec{T}(s) = \frac{d\vec{R}(s)}{ds} \implies \vec{T}(s) = \frac{\partial \vec{R}}{\partial Z^1}\frac{d Z^1}{ds} + \frac{\partial \vec{R}}{\partial Z^2}\frac{d Z^2}{ds} = \frac{d Z^1}{ds} \vec{e}_1 + \frac{d Z^2}{ds} \vec{e}_2 \]

where $T^1 = \frac{d Z^1}{ds}$ and $T^2 = \frac{d Z^2}{ds}$.

There we have it, a beautiful result.
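For readers who like to see things concretely, here is a small sympy sketch (my own illustration, using polar coordinates as the general coordinates $Z^1 = r$, $Z^2 = \theta$) verifying that the components of $\frac{d\vec{R}}{ds}$ with respect to the covariant basis are exactly $\frac{dZ^1}{ds}$ and $\frac{dZ^2}{ds}$.

```python
import sympy as sp

s, Z1, Z2 = sp.symbols('s Z1 Z2')

# Position vector written through general (here: polar) coordinates Z^1 = r, Z^2 = theta.
R = sp.Matrix([Z1 * sp.cos(Z2), Z1 * sp.sin(Z2)])

# Covariant basis vectors: e_i = dR/dZ^i.
e1 = R.diff(Z1)
e2 = R.diff(Z2)

# Let the coordinates depend on arc length s and build R(s).
r = sp.Function('r')(s)
theta = sp.Function('theta')(s)
subs = {Z1: r, Z2: theta}
R_s = R.subs(subs)

# Tangent vector dR/ds versus the claimed decomposition (dZ^1/ds) e_1 + (dZ^2/ds) e_2.
T = R_s.diff(s)
decomposition = r.diff(s) * e1.subs(subs) + theta.diff(s) * e2.subs(subs)

print(sp.simplify(T - decomposition))  # Matrix([[0], [0]])
```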


Understanding metric elements for cylindrical coordinates


The metric tensor is ubiquitous once one arrives at a certain level in one’s physics career. When it comes to cylindrical coordinates, there is a useful way to remember its deeper meaning through a rather simple derivation – or at least through the use and construction of a series of definitions. (I say ‘simple’ in that this is something one can do ‘on the fly’, should one be required to remind oneself of the properties of the metric tensor.) For a more general treatment of what follows, see also this document on special tensors.

***

To start, if we had some general curvilinear coordinate system, we could begin by writing

$d\textbf{R} = g_1 du^1 + g_2 du^2 + \dots = g_i du^i$

From this we can also immediately invoke the principle that $ds^2 = d\textbf{R} \cdot d\textbf{R}$. I won’t explain this for the sake of space, but any textbook will cover why $ds^2$ is equal to the dot product of our displacement vector with itself.

But from above we can also see that $d\textbf{R} \cdot d\textbf{R}$ is the same as, in this example using two curvilinear coordinates, $g_i du^i \cdot g_j du^j$. It follows,

$ds^2 = g_i du^i \cdot g_j du^j = g_i \cdot g_j \, du^i du^j$

As the dot product of our tangent vectors, $g_i$ and $g_j$, is by standard relation equal to the metric tensor, we arrive at the following

$ds^2 = g_i \cdot g_j \, du^i du^j = g_{ij} du^i du^j$

This identity, if I may describe it as such, is important to remember. But how can we issue further meaning to this result?

***

Let’s use cylindrical coordinates.

[Image]

Now, for pedagogical purposes, I’ve labelled the basis vectors $e_{\rho}$, $e_{\phi}$, and $e_z$. But, to generalise things moving forward, note that I have set $e_{\rho} = e_1$, $e_{\phi} = e_2$, and $e_z = e_3$.

The first thing that one should notice is that our displacement vector, $\textbf{R}$, can be written as

$\textbf{R} = \rho \cos \phi \, \hat{i} + \rho \sin \phi \, \hat{j} + z \hat{k}$

If this is not immediately clear, I suggest taking a few minutes to revise cylindrical coordinates (a generalisation of 2D polar coords.).

Now, here is a key step. Having written our equation for $\textbf{R}$, we want to focus in on $e_1$, $e_2$, and $e_3$ and how we might represent them with respect to $\textbf{R}$. To do this, we’re going to take partials.

$e_1 = \frac{\partial \textbf{R}}{\partial \rho} = \cos \phi \, \hat{i} + \sin \phi \, \hat{j}$

$e_2 = \frac{\partial \textbf{R}}{\partial \phi} = -\rho \sin \phi \, \hat{i} + \rho \cos \phi \, \hat{j}$

$e_3 = \frac{\partial \textbf{R}}{\partial z} = \hat{k}$

With expressions for $e_1$, $e_2$, and $e_3$ established, we can return to our previous definition of the metric tensor and derive its explicit form for cylindrical coordinates.

Notice, if $e_i \cdot e_j = g_{ij}$, in matrix form we have

\[ \begin{pmatrix} e_1 \cdot e_1 & e_1 \cdot e_2 & e_1 \cdot e_3 \\ e_2 \cdot e_1 & e_2 \cdot e_2 & e_2 \cdot e_3 \\ e_3 \cdot e_1 & e_3 \cdot e_2 & e_3 \cdot e_3 \end{pmatrix} = \begin{pmatrix} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{pmatrix} \]

What is this saying? Well, let’s look. We know, again, that $g_{ij} = e_i \cdot e_j$. So if we take the dot product, as defined, we should then arrive at our metric tensor for our cylindrical system. And, indeed, this is exactly the result. For $g_{11}$, for instance, we’re just taking the dot product of $e_1$ with itself.

\[ g_{11} = (\cos \phi \, \hat{i} + \sin \phi \, \hat{j}) \cdot (\cos \phi \, \hat{i} + \sin \phi \, \hat{j}) = \cos^2 \phi + \sin^2 \phi = 1 \]

We keep doing this for the entire matrix. But to save time, I will only work out the remaining diagonals. The non-diagonal components cancel to zero. You can do the dot product for each to see that this is indeed true. Thus,

\[ g_{22} = (-\rho \sin \phi \, \hat{i} + \rho \cos \phi \, \hat{j}) \cdot (-\rho \sin \phi \, \hat{i} + \rho \cos \phi \, \hat{j}) = \rho^2 \sin^2 \phi + \rho^2 \cos^2 \phi = \rho^2 \]

\[ g_{33} = \hat{k} \cdot \hat{k} = 1 \]

Therefore, we arrive at our desired metric

\[ \begin{pmatrix} 1 & 0 & 0 \\ 0 & \rho^2 & 0 \\ 0 & 0 & 1 \end{pmatrix} \]
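To close the loop, here is a short sympy sketch (my own check) that reproduces this metric directly from $\textbf{R}$ by computing the basis vectors as partial derivatives and taking their dot products.

```python
import sympy as sp

rho, phi, z = sp.symbols('rho phi z', real=True)

# Position vector in Cartesian components, written in cylindrical coordinates.
R = sp.Matrix([rho * sp.cos(phi), rho * sp.sin(phi), z])

# Covariant basis vectors e_i = dR/du^i for u = (rho, phi, z).
coords = (rho, phi, z)
e = [R.diff(u) for u in coords]

# Metric components g_ij = e_i . e_j.
g = sp.Matrix(3, 3, lambda i, j: sp.simplify(e[i].dot(e[j])))
print(g)  # Matrix([[1, 0, 0], [0, rho**2, 0], [0, 0, 1]])
```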


Differentiate sin(sin(sin x))


In passing I came across this expression the other day,

\[ \left[ \sin(\sin(\sin x)) \right]^{\prime} \]

To put it in words, we have a composite function: the sine of the sine of the sine of $x$.

I like this expression quite a lot, and for a few reasons. One reason has to do with its cascading quality, especially after we differentiate it (we’ll look at this in a moment). To the eye it may seem much more complicated than it is; really, we could nest more sines into the expression and it wouldn’t change much when we go to differentiate it.

Another reason I like this expression is because I think it serves a nice lesson. When one first looks at it, with a mind that in our case it requires differentiation, the expression might first seem daunting. It may even evoke a sense of fear. But its derivative is actually very simple, so simple that it sort of reminds me of a coffin problem (a seemingly difficult problem with a relatively simple solution).

If I were to teach a calculus course at university at some point in the future, I would present my students with this expression on the first day for this very reason. The intention would not be a sadistic one, but to show that often in mathematics we are presented with difficult looking expressions or equations or problems – the moral being that we ought not to fear. I often find that in mathematics, and certainly also frontier mathematical physics, one can’t be ruled by fear of the daunting or of the unknown. You have to venture forward. The key is to take a deep breath and think it through step by step, experiment and just freely explore the maths.

With that said, let’s now take the derivative of the above expression.  The important thing is to first identify that we need to use the chain rule, then work from left to right step by step.

Notice when we take the derivative, the outermost sin becomes cos of everything inside the parenthesis. The middle sin becomes cos of the innermost sin(x). And, finally, the innermost sin(x) differentiates to cos(x).

\[ F(x) = \sin(\sin(\sin x)) \]

\[ \frac{dF}{dx} = \cos(\sin(\sin x)) \cdot \cos(\sin x) \cdot \cos x \]

\[ = \cos(\sin(\sin x)) \cos(\sin x) \cos x \]
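A quick sympy check of the result, for anyone who wants to verify it mechanically:

```python
import sympy as sp

x = sp.symbols('x')
F = sp.sin(sp.sin(sp.sin(x)))

derivative = sp.diff(F, x)
expected = sp.cos(sp.sin(sp.sin(x))) * sp.cos(sp.sin(x)) * sp.cos(x)

print(sp.simplify(derivative - expected))  # 0
```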


Leibniz Rule: Differentiating under the integral sign (special case)

I recently wrote a short piece on the general case of the Leibniz Rule. In that article I alluded to how there is also a special case of the rule, in which the limits of integration are constants as opposed to differentiable functions.

In short, the special case of the Leibniz Rule is a much simpler operation. The key idea remains that one must perform partial differentiation within the integral, but the rest of the general case is cut out. This means all one has to do is perform any remaining basic integration techniques and then clean up the evaluation with some algebra.

In general, the special case of the Leibniz Rule states

if

\[ I(x) = \int_{a}^{b} f(t,x) \, \mathrm{d}t \]

then

\[ I'(x) = \int_{a}^{b} \frac{\partial f}{\partial x}(t,x) \, \mathrm{d}t \]

Notice how this compares with the general case.

***

Here’s an example to stimulate curiosity in learning.

\[ F(x) = \int_{1}^{2} \frac{\cos(tx)}{t} \, \mathrm{d}t \]

\[ F^{\prime}(x) = \frac{d}{dx} \int_{1}^{2} \frac{\cos(tx)}{t} \, \mathrm{d}t \]

Now, by the special case of the Leibniz Rule:

\[ = \int_{1}^{2} \frac{\partial}{\partial x} \left[ \frac{\cos(tx)}{t} \right] \mathrm{d}t \]
At this point, take the partial derivative. Working this out,
\[ = \int_{1}^{2} \frac{-t\sin(tx)}{t} \, \mathrm{d}t \]
Cancel $t$ in the numerator and denominator.
\[ = \int_{1}^{2} -\sin(tx) \, \mathrm{d}t \]
Note, we’re integrating with respect to $t$, therefore
\[ = \left[ \frac{\cos(tx)}{x} \right]_{t=1}^{t=2} \]
\[ = \frac{\cos(2x) - \cos(x)}{x} \]

As you can see, the special case is a simpler operation. The key thing to remember, again, is to insert the partial in the integral and then take the partial derivative of the integrand. From that point, it is usually just a matter of cleaning up the algebra.
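For anyone who wants a numerical sanity check of this worked example (my own addition, using scipy), compare a central-difference derivative of $F(x)$ with the closed form $\frac{\cos(2x) - \cos(x)}{x}$:

```python
import numpy as np
from scipy.integrate import quad

def F(x):
    value, _ = quad(lambda t: np.cos(t * x) / t, 1, 2)
    return value

def F_prime_closed(x):
    return (np.cos(2 * x) - np.cos(x)) / x

x0 = 1.3   # arbitrary test point
h = 1e-5   # step for the central difference
numeric = (F(x0 + h) - F(x0 - h)) / (2 * h)

print(numeric, F_prime_closed(x0))  # the two values agree
```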


Leibniz Rule: Differentiating under the integral (general case)


Evaluating integrals has over time become one of my favourite activities. Whether it is by parts or by substitution or by partial fractions or reduction – there are many cool integrals to be found in the world of applied maths. Two especially cool methods for solving definite integrals are the Leibniz Rule (or what some call Feynman integration) and the Bracket Method. In this post I will focus on the former, particularly its general case.

Admittedly, when I first learned Leibniz’s Rule I was enthralled. It opened up a new world of integrals, which, otherwise, would seem impossible to evaluate. Consider evaluating, for example, such integrals as those listed below using basic methods:

\[ \int_{-\infty}^{\infty} \frac{\sin(x)}{x} \, \mathrm{d}x \]

or

\[ \int_{1}^{2} \frac{\cos(tx)}{t} \, \mathrm{d}t \]

or something like

\[ \int_{-\infty}^{\infty} \frac{\cos(x)}{x^2 + 1} \, \mathrm{d}x \]

This is where the Leibniz Rule comes into play.

There are two cases of the Leibniz Rule, a general and a special case. The difference can be simply stated: in the latter, the limits of integration are constants, whereas in the former the limits of integration are differentiable functions. In the general case (derivation), the rule states that if

\[ I(x) = \int_{u(x)}^{v(x)} f(t,x) \, \mathrm{d}t \]

then

\[ I^{\prime}(x) = \int_{u(x)}^{v(x)} \frac{\partial f}{\partial x}(t,x) \, \mathrm{d}t + f(v(x),x) v'(x) - f(u(x), x) u'(x) \]

So this is the rule, in its general case, but what to make of it?

One of the best summations of the Leibniz Rule that I have read states: “for constant limits of integration the order of integration and differentiation are reversible” (Riley, Hobson & Bence, 2016, pp.189-179). In other words, we can use the Leibniz Rule to interchange the integral sign and the differential. Another way to word this is that by using the Leibniz Rule, we’re integrating by way of differentiation under the integral sign.

The key idea to remember is to insert the differential into the integral, which then makes it a partial differential (as seen on line 3 below). From there, you want to work out the partial with the hope that the integral simplifies. Finally, it is primarily a matter of using basic integration techniques and algebra to complete the evaluation.

Here’s an example:

\[ F(x) = \int_{\sqrt{x}}^{x} \frac{\sin(tx)}{t} \, \mathrm{d}t \]

\[ F^{\prime}(x) = \frac{d}{dx} \int_{\sqrt{x}}^{x} \frac{\sin(tx)}{t} \, \mathrm{d}t \]

\[ F^{\prime}(x) = \int_{\sqrt{x}}^{x} \frac{\partial}{\partial x} \left[ \frac{\sin(tx)}{t} \right] \mathrm{d}t + \frac{\sin(x \cdot x)}{x}(1) - \frac{\sin(\sqrt{x} \cdot x)}{\sqrt{x}} \left( \frac{1}{2\sqrt{x}} \right) \]

Notice on line 3 that for the part $f(v(x),x)v'(x) - f(u(x), x)u'(x)$, we’re simply evaluating the integrand at $t = v(x)$ and $t = u(x)$, then multiplying each term by the derivative of the corresponding limit.

\[ \implies F^{\prime}(x) = \int_{\sqrt{x}}^{x} \frac{t\cos(tx)}{t} \, \mathrm{d}t + \frac{\sin(x^2)}{x} - \frac{\sin(x^{\frac{3}{2}})}{2x} \]

Note, we still have to integrate $\int_{\sqrt{x}}^{x} \frac{t\cos(tx)}{t} \, \mathrm{d}t$ with respect to $t$. For this, use substitution. You then end up with what follows:

\[ = \left[ \frac{\sin(tx)}{x} \right]_{t=\sqrt{x}}^{t=x} + \frac{\sin(x^2)}{x} - \frac{\sin(x^{\frac{3}{2}})}{2x} \]

\[ = \left[ \frac{\sin(x \cdot x)}{x} - \frac{\sin(\sqrt{x} \cdot x)}{x} \right] + \frac{\sin(x^2)}{x} - \frac{\sin(x^{\frac{3}{2}})}{2x} \]

\[ = \frac{2\sin(x^2)}{x} - \frac{3\sin(x^{\frac{3}{2}})}{2x} \]
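As with the special case, a numerical check of this example is straightforward (my own addition): differentiate $F(x)$ by central differences and compare with the closed form just derived.

```python
import numpy as np
from scipy.integrate import quad

def F(x):
    value, _ = quad(lambda t: np.sin(t * x) / t, np.sqrt(x), x)
    return value

def F_prime_closed(x):
    return 2 * np.sin(x**2) / x - 3 * np.sin(x**1.5) / (2 * x)

x0 = 2.0   # arbitrary test point
h = 1e-5   # step for the central difference
numeric = (F(x0 + h) - F(x0 - h)) / (2 * h)

print(numeric, F_prime_closed(x0))  # the two values agree
```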

Why is Leibniz’s Rule such a powerful and useful tool? It allows us to solve otherwise challenging integrals – integrals that we regularly come across which cannot be evaluated in terms of simple functions (as a finite sum) – and yet one of the things I like most about this technique is that it is really quite simple for what it does. Once you get the hang of it, and it becomes as basic as integrating by parts or something similar, it is a very useful tool to have in one’s toolbox.

Leibniz was of course one of the fathers of calculus, along with Newton. But this integration technique was made famous by Feynman, who offers a lovely story in Surely You’re Joking, Mr. Feynman! about how he first came across the method and how he later used it on a regular basis (contributing to his reputation for doing integrals!).

References

Riley, K.F., Hobson, M.P., & Bence, S.J. (2016), “Mathematical Methods for Physics and Engineering”, Cambridge, UK: Cambridge University Press, pp.189-179.
